representations of the input data at various levels of abstraction, has exploded in
popularity in just a few years. Some notable successes are galaxy classification
(Dieleman et al., 2015); achieving state-of-the-art performance in Go (Silver et al.,
2016, 2017) and Atari games (Such et al., 2018); creating novel human faces (Karras
et al., 2018) and animations thereof (Suwajanakorn et al., 2017); detecting objects
in gridded weather data (Liu et al., 2016; Chilson et al., 2018); and hail prediction
(Gagne, 2018). We use deep learning to predict tornadoes and damaging straight-line
wind (SLW) on a CONUS-wide grid, at lead times up to one hour, in a framework
transferable to real-time operations.
Our predictors are multi-radar images and soundings from numerical weather
prediction (NWP) models; our labels, or “verification data,” are local storm re-
ports (tornadoes and SLW) and ground-based wind observations (SLW only). The
two sources of radar data are the Multi-year Reanalysis of Remotely Sensed Storms
(MYRORSS; Ortega et al., 2012) and GridRad (Homeyer and Bowman, 2017). The
two sources of NWP data are the Rapid Update Cycle (RUC), for times before 0000
UTC 1 May 2012, and Rapid Refresh (RAP), for 0000 UTC 1 May 2012 and later.
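The cutoff between the two NWP sources can be stated precisely in code. The following is a minimal sketch, with a hypothetical function name, of selecting the sounding source by valid time:

from datetime import datetime, timezone

# First time for which RAP, rather than RUC, soundings are used.
RAP_START_TIME = datetime(2012, 5, 1, 0, 0, tzinfo=timezone.utc)


def nwp_model_for_time(valid_time):
    """Returns the NWP sounding source ('RUC' or 'RAP') for one valid time."""
    return 'RAP' if valid_time >= RAP_START_TIME else 'RUC'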
Data are subjected to four types of pre-processing. First, storm cells are outlined
and tracked through time, using an extension of the method introduced in Homeyer
et al. (2017). Second, each tornado report and wind gust ≥ 50 kt is linked to
the nearest storm cell S, provided that S passes within 10 km of the report. Third, for each storm object
(one “storm object” is one storm cell at one time step), a 3-D storm-centered radar
image is created, with the grid rotated so that storm motion is always in the +x-
direction. This image includes whichever of reflectivity, spectrum width, divergence,
vorticity, azimuthal shear, ZDR, KDP, and ρhv are available. Finally, the RAP or
RUC sounding is interpolated in space and time to the center of the storm object.
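To illustrate the second pre-processing step, here is a minimal sketch of the report-to-storm linkage, assuming that storm centroids and report locations have been projected to x-y coordinates in metres. It handles a single time step, whereas the full method compares each report against storm positions through time; all names are hypothetical.

import numpy
from scipy.spatial import cKDTree

MAX_LINK_DISTANCE_METRES = 10000.


def link_reports_to_storms(storm_xy_matrix, report_xy_matrix):
    """Links each report to the nearest storm cell within 10 km.

    :param storm_xy_matrix: S-by-2 numpy array of storm-centroid coordinates.
    :param report_xy_matrix: R-by-2 numpy array of report coordinates.
    :return: length-R numpy array of storm indices (-1 means unlinked).
    """
    distances_metres, storm_indices = cKDTree(storm_xy_matrix).query(
        report_xy_matrix, k=1)

    # Reports more than 10 km from every storm cell remain unlinked.
    storm_indices[distances_metres > MAX_LINK_DISTANCE_METRES] = -1
    return storm_indices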
Storm-centered radar images and soundings are used to train a convolutional
neural network (CNN). The CNN runs 1-D convolutional filters over each sounding,
2-D filters over each 2-D radar image, and 3-D filters over each 3-D image. Throughout
training, the weights in these filters are tuned to minimize the loss function, which is based on
the discrepancy between model forecasts (probabilities) and observed labels (0 or 1,
indicating whether or not the storm is responsible for the phenomenon of interest
within the next hour).
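A minimal Keras sketch of such a multi-branch CNN follows. The 2-D branch is omitted for brevity, and every input shape and layer size is an illustrative assumption, not the authors' architecture.

from tensorflow import keras
from tensorflow.keras import layers

# Sounding branch: e.g., 49 vertical levels x 5 variables.
sounding_input = keras.Input(shape=(49, 5), name='sounding')
x1 = layers.Conv1D(32, 3, activation='relu')(sounding_input)
x1 = layers.MaxPooling1D(2)(x1)
x1 = layers.Flatten()(x1)

# Radar branch: e.g., 32 x 32 horizontal points x 12 heights x 4 variables.
radar_input = keras.Input(shape=(32, 32, 12, 4), name='radar_3d')
x2 = layers.Conv3D(16, 3, activation='relu')(radar_input)
x2 = layers.MaxPooling3D(2)(x2)
x2 = layers.Flatten()(x2)

# Merge branches and predict the probability of tornado or damaging SLW
# within the next hour.
merged = layers.concatenate([x1, x2])
merged = layers.Dense(64, activation='relu')(merged)
probability = layers.Dense(1, activation='sigmoid')(merged)

model = keras.Model([sounding_input, radar_input], probability)

# Binary cross-entropy penalizes the discrepancy between forecast
# probabilities and observed 0/1 labels.
model.compile(optimizer='adam', loss='binary_crossentropy')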
One advantage of deep learning, in addition to better forecast quality, is that
it learns directly from pixels in spatiotemporal images, rather than from hand-crafted
features. This saves time and prevents the user from imposing their preconceived
notions on the model (i.e., computing only the features that they think are important).
This also facilitates model interpretation, as model behavior can be visualized in the
space of the input images. Some of the most popular deep-learning-based interpre-
tation methods are activation maximization (Erhan et al., 2009), which synthesizes
input data that maximally activates the network in a certain way (e.g., generates a
100% tornado or SLW probability); visualization of feature maps, which are filtered,
typically lower-resolution, versions of the input data produced by each convolutional
filter; and saliency mapping (Simonyan et al., 2014), which quantifies the influence
of each variable/pixel on the
model forecast. We apply all three methods to both the wind and tornado models
and share the resulting insights.
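As a concrete example of the third method, a saliency map can be computed as the gradient of the forecast probability with respect to each input value. A minimal sketch for a two-input Keras model like the one above (an illustration, not the authors' implementation):

import tensorflow as tf


def compute_saliency_maps(model, sounding_matrix, radar_matrix):
    """Returns d(probability)/d(input) for each predictor variable/pixel."""
    sounding_tensor = tf.convert_to_tensor(sounding_matrix, dtype=tf.float32)
    radar_tensor = tf.convert_to_tensor(radar_matrix, dtype=tf.float32)

    with tf.GradientTape() as tape:
        tape.watch([sounding_tensor, radar_tensor])
        probabilities = model([sounding_tensor, radar_tensor])

    # Large absolute gradients mark the inputs with the most influence
    # on the tornado or SLW probability.
    return tape.gradient(probabilities, [sounding_tensor, radar_tensor])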
Our models could be deployed in real time, using RAP data for soundings and
Multi-Radar Multi-Sensor (MRMS; Smith et al., 2016) data for radar images. We
will present forecast evaluation for both tornado and wind models and compare with
appropriate baselines.
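As a sketch of what such an evaluation might look like, the following compares model probabilities against a climatological baseline using two common verification metrics; the metric choices here are illustrative, not a statement of what we will present:

import numpy
from sklearn.metrics import brier_score_loss, roc_auc_score


def evaluate_forecasts(observed_labels, forecast_probabilities):
    """Evaluates probabilistic forecasts against observed 0/1 labels."""
    # Baseline: forecast the climatological event frequency everywhere.
    climatology = numpy.mean(observed_labels)
    baseline_probabilities = numpy.full(len(observed_labels), climatology)

    return {
        'brier_score': brier_score_loss(
            observed_labels, forecast_probabilities),
        'brier_score_baseline': brier_score_loss(
            observed_labels, baseline_probabilities),
        'auc': roc_auc_score(observed_labels, forecast_probabilities),
    }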
References
Chilson, C., K. Avery, A. McGovern, E. Bridge, D. Sheldon, and J. Kelly, 2018: "Automated detection of bird roosts using NEXRAD radar data and convolutional neural networks". Remote Sensing in Ecology and Conservation, in press.
Dieleman, S., K. Willett, and J. Dambre, 2015: "Rotation-invariant convolutional neural networks for galaxy morphology prediction". Monthly Notices of the Royal Astronomical Society, 450 (2), 1441–1459.
Erhan, D., Y. Bengio, A. Courville, and P. Vincent, 2009: "Visualizing higher-layer features of a deep network". Tech. rep., Université de Montréal.
Gagne, D., 2018: "Hail forecasting with interpretable deep learning". Conference on Weather Analysis and Forecasting, Denver, Colorado, American Meteorological Society.
Homeyer, C. and K. Bowman, 2017: "Algorithm Description Document for Version 3.1 of the Three-dimensional Gridded NEXRAD WSR-88D Radar (GridRad) Dataset". gridrad.org/pdf/GridRad-v3.1-Algorithm-Description.pdf.
Homeyer, C., J. McAuliffe, and K. Bedka, 2017: "On the development of above-anvil cirrus plumes in extratropical convection". Journal of the Atmospheric Sciences, 74 (5), 1617–1633.
Karras, T., T. Aila, S. Laine, and J. Lehtinen, 2018: "Progressive growing of GANs for improved quality, stability, and variation". arXiv:1710.10196v3.
Liu, Y., et al., 2016: "Application of deep convolutional neural networks for detecting extreme weather in climate datasets". arXiv:1605.01156.
Ortega, K., T. Smith, J. Zhang, C. Langston, Y. Qi, S. Stevens, and J. Tate, 2012: "The multi-year reanalysis of remotely sensed storms (MYRORSS) project". Conference on Severe Local Storms, Nashville, Tennessee, American Meteorological Society.
Silver, D., et al., 2016: "Mastering the game of Go with deep neural networks and tree search". Nature, 529 (7587), 484–489.
Silver, D., et al., 2017: "Mastering the game of Go without human knowledge". Nature, 550 (7676), 354–359.
Simonyan, K., A. Vedaldi, and A. Zisserman, 2014: "Deep inside convolutional networks: Visualizing image classification models and saliency maps". arXiv:1312.6034v2.
Smith, T., et al., 2016: "Multi-Radar Multi-Sensor (MRMS) severe weather and aviation products: Initial operating capabilities". Bulletin of the American Meteorological Society, 97 (9), 1617–1630.
Such, F., V. Madhavan, E. Conti, J. Lehman, K. Stanley, and J. Clune, 2018: "Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning". arXiv:1712.06567v3.
Suwajanakorn, S., S. Seitz, and I. Kemelmacher-Shlizerman, 2017: "Synthesizing Obama: Learning lip sync from audio". ACM Transactions on Graphics, 36 (4).