Wednesday, 15 January 2020: 8:45 AM
260 (Boston Convention and Exhibition Center)
Deep learning is a subset of machine learning that can learn directly from spatiotemporal grids, without pre-processing into features such as moments, percentiles, or principal components. In addition to making better predictions, deep-learning models are often easier to interpret, because their behavior can be visualized in the same gridded space. As machine learning becomes more prevalent in decision-making, the ability to interpret and explain a model’s decisions is becoming crucial. This talk focuses on the interpretation of convolutional neural networks (CNNs) used to predict next-hour tornado occurrence. The predictors are a storm-centered radar image from either the Multi-year Reanalysis of Remotely Sensed Storms (MYRORSS; Ortega et al., 2012) or GridRad (Homeyer and Bowman, 2017), as well as a proximity sounding; one model is trained with MYRORSS data and the other with GridRad data.
We will present results from at least three interpretation methods, with the aim of understanding the storm structures and processes learned by the CNNs. We will compare results between the MYRORSS- and GridRad-trained models to determine whether the relationships learned from the two datasets differ. The first method is saliency maps (Simonyan et al., 2014), which quantify the derivative of tornado probability with respect to each predictor (each radar variable at each grid cell and each sounding variable at each height). The second is gradient-weighted class-activation maps (Grad-CAM; Selvaraju et al., 2017), which quantify the evidence for next-hour tornado occurrence at each spatial location (each radar grid cell and each sounding height). The third is backwards optimization (“feature visualization”; Olah et al., 2017), which lets the model create synthetic storms that minimize or maximize tornado probability. Finally, if enough people have participated, we will present results from an ongoing experiment comparing how humans and machines interpret storms for tornado prediction.
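The gradient-based ideas behind two of these methods can be sketched in a few lines. The toy example below is not the authors’ code: it substitutes a logistic-regression model for the CNN so that the gradients can be written out analytically, and all names (`tornado_prob`, `weights`, the 8×8 grid) are illustrative assumptions. A saliency map is the gradient of the predicted probability with respect to each input grid cell; backwards optimization runs gradient ascent on the input itself, with the model frozen, to synthesize a storm that maximizes that probability.

```python
import numpy as np

# Hypothetical stand-in for a trained model: logistic regression on a
# flattened 8x8 "radar image". (The talk's actual models are CNNs; a
# linear model keeps the gradient math explicit.)
rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))   # "learned" weights (random here)
storm = rng.normal(size=(8, 8))     # storm-centered input grid

def tornado_prob(x):
    """Sigmoid of the weighted sum -- a toy next-hour tornado probability."""
    return 1.0 / (1.0 + np.exp(-np.sum(weights * x)))

# Saliency map: derivative of tornado probability w.r.t. each grid cell.
# For this sigmoid model, d p / d x = p * (1 - p) * weights.
p = tornado_prob(storm)
saliency = p * (1.0 - p) * weights

# Backwards optimization: gradient ascent on the *input* (model frozen)
# to create a synthetic storm that maximizes tornado probability.
synthetic = storm.copy()
for _ in range(200):
    p = tornado_prob(synthetic)
    synthetic += 0.5 * p * (1.0 - p) * weights  # step along the gradient

print(tornado_prob(storm), tornado_prob(synthetic))
```

For a real CNN the same gradients are obtained by automatic differentiation (e.g., backpropagation to the input layer) rather than a closed form; Grad-CAM is omitted here because it additionally requires the convolutional feature maps of a trained network.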
This work is part of a broader effort to understand what machines can learn about the weather. In the future, we envision a tighter coupling between data science and physical science, in which data science is used to improve both prediction and understanding.
Homeyer, C., and K. Bowman, 2017: "Algorithm Description Document for Version 3.1 of the Three-Dimensional Gridded NEXRAD WSR-88D Radar (GridRad) Dataset." Tech. rep., University of Oklahoma. URL http://gridrad.org/pdf/GridRad-v3.1-Algorithm-Description.pdf.
Olah, C., A. Mordvintsev, and L. Schubert, 2017: "Feature visualization." Distill, doi:10.23915/distill.00007, URL https://distill.pub/2017/feature-visualization.
Ortega, K., T. Smith, J. Zhang, C. Langston, Y. Qi, S. Stevens, and J. Tate, 2012: "The Multi-year Reanalysis of Remotely Sensed Storms (MYRORSS) project." Conference on Severe Local Storms, Nashville, Tennessee, American Meteorological Society.
Selvaraju, R., M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, 2017: "Grad-CAM: Visual explanations from deep networks via gradient-based localization." International Conference on Computer Vision, Venice, Italy, IEEE.
Simonyan, K., A. Vedaldi, and A. Zisserman, 2014: "Deep inside convolutional networks: Visualizing image classification models and saliency maps." arXiv e-prints, arXiv:1312.6034.