J37.3 Selected Methods from Explainable AI to Improve Understanding of Neural Network Reasoning for Environmental Science Applications

Wednesday, 15 January 2020: 9:00 AM
260 (Boston Convention and Exhibition Center)
Imme Ebert-Uphoff, CIRA–Colorado State Univ., Fort Collins, CO; and K. Hilburn, B. A. Toms, and E. A. Barnes

Machine learning methods are emerging in many environmental science applications, but they are often treated as a black box, i.e., used without understanding their detailed reasoning. This work discusses several tools that can shed light on the reasoning of artificial neural networks (ANNs), one of the most common machine learning methods. This effort extends recent work by McGovern et al. (2019) on the interpretation of ML methods for environmental science applications. We provide an overview of selected methods developed within the field of explainable AI (XAI) for the interpretation of ANN models, with emphasis on layer-wise relevance propagation (LRP), which we believe has great potential but has not yet been used in this field. These methods can be useful (a) to investigate whether an ANN model uses a proper model representation, rather than exploiting artifacts; (b) to aid in targeted optimization and debugging of an ANN model; and (c) to potentially learn new science from an ANN model, e.g., by discovering new relevant properties of the input data. The focus of this presentation is to explain the methods that provide the most useful information, including their pros and cons, and to briefly illustrate them for atmospheric science applications. A proposed affiliated presentation in the AI and Climate session dives into the details of what can be learned about weather and climate patterns using LRP techniques such as Deep Taylor decomposition.
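As background for the abstract above, the core idea of LRP can be sketched in a few lines: a prediction score is redistributed backward through the network, layer by layer, so that each input receives a share of the "relevance" for that prediction. The sketch below uses the epsilon stabilization rule on a tiny fully connected ReLU network; the weights, the toy input, and the helper name `lrp_epsilon` are illustrative assumptions, not material from the presentation.

```python
import numpy as np

def lrp_epsilon(a, w, relevance_out, eps=1e-6):
    """Redistribute relevance from a layer's outputs to its inputs.

    a: input activations, shape (n_in,)
    w: weight matrix, shape (n_in, n_out)
    relevance_out: relevance assigned to the outputs, shape (n_out,)
    """
    z = a @ w                                  # pre-activations, shape (n_out,)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilize against division by ~0
    s = relevance_out / z                      # per-output "message"
    return a * (w @ s)                         # share of relevance per input

# Toy two-layer network: 3 inputs -> 4 hidden (ReLU) -> 2 output scores
rng = np.random.default_rng(0)
w1 = rng.normal(size=(3, 4))
w2 = rng.normal(size=(4, 2))

x = np.array([1.0, 0.5, -0.2])   # illustrative input, e.g. predictor values
h = np.maximum(0.0, x @ w1)      # hidden ReLU activations
y = h @ w2                       # output scores

# Seed relevance with the predicted class score, then propagate backward
r_out = np.zeros_like(y)
r_out[np.argmax(y)] = y.max()
r_hidden = lrp_epsilon(h, w2, r_out)
r_input = lrp_epsilon(x, w1, r_hidden)

print("input relevance:", np.round(r_input, 3))
# The epsilon rule approximately conserves total relevance layer to layer,
# so r_input.sum() stays close to the seeded output score y.max().
print("sum vs. output score:", r_input.sum(), y.max())
```

The conservation property noted in the final comment is what makes LRP heat maps interpretable as a decomposition of the prediction over the input fields, e.g., over gridded atmospheric predictors.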