Wednesday, 9 January 2019: 11:45 AM
North 124B (Phoenix Convention Center - West and North Buildings)
The number of machine-learning (ML) applications within the meteorological community has surged over the last several years. This surge includes the development and application of numerous ML techniques that improve forecasting and physical models while reducing computational complexity and time. Given the vast trove of available satellite-based weather imagery and the gridded structure of many meteorological datasets, deep-learning (DL) methods for producing predictions and diagnostics across numerous subdomains are seeing increased adoption. Full adoption, however, will require that forecasters and decision makers can understand why a model produces a given output from its input, especially when that output has implications for human well-being. Because of their complex architectures, DL models can be especially difficult to interpret and are often treated as black boxes. This work examines contemporary methods for assessing the interpretability of a convolutional neural network (CNN) trained to predict tropical cyclone (TC) intensity from available satellite weather data, primarily infrared (IR) imagery. CNNs excel at distilling image data into the feature abstractions most important for developing functional associations between input images and the prediction target. The goal of this work is not necessarily to produce the most accurate TC intensity model, but to assess whether such a DL architecture can learn physically relevant abstractions for the problem at hand. We will describe and apply interpretability methods to the TC intensity CNN to assess the importance of physical concepts to final predictions, and we will assess the traceability of predictions across the learned network.
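The gradient-based saliency idea underlying many CNN interpretability methods can be illustrated with a minimal sketch: perturb each input pixel and measure the change in the model's scalar intensity prediction. Everything below is hypothetical and for illustration only — `toy_cnn`, its fixed random filter, and the 8×8 "IR image" are stand-ins, not the trained TC intensity model described in this abstract.

```python
import numpy as np

# Illustrative sketch only: a tiny convolution -> ReLU -> pooling "CNN" in
# NumPy, with a saliency map estimated by central finite differences.
# The names and shapes here are assumptions, not the actual trained model.

rng = np.random.default_rng(0)
KERNEL = rng.standard_normal((3, 3))   # fixed random filter (stand-in for learned weights)
W_OUT = rng.standard_normal()          # scalar output weight

def toy_cnn(img):
    """Valid 3x3 convolution -> ReLU -> global average pool -> scalar 'intensity'."""
    h, w = img.shape
    conv = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            conv[i, j] = np.sum(img[i:i+3, j:j+3] * KERNEL)
    act = np.maximum(conv, 0.0)        # ReLU nonlinearity
    return W_OUT * act.mean()          # scalar prediction

def saliency_map(img, eps=1e-4):
    """d(prediction)/d(pixel) for every pixel, via central finite differences."""
    sal = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            up, dn = img.copy(), img.copy()
            up[i, j] += eps
            dn[i, j] -= eps
            sal[i, j] = (toy_cnn(up) - toy_cnn(dn)) / (2 * eps)
    return sal

img = rng.standard_normal((8, 8))      # stand-in for an IR image patch
sal = saliency_map(img)
print(sal.shape)                       # one sensitivity value per input pixel
```

In practice such gradients are obtained by backpropagation rather than finite differences; pixels with large-magnitude saliency are those the network's prediction depends on most, which is one way to check whether the model attends to physically meaningful TC structure.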