Our predictors are composite (multi-radar) radar images and NWP-generated soundings; our labels (verification data) are tornado reports from the Storm Events archive. The two sources of radar images are the Multi-year Reanalysis of Remotely Sensed Storms (MYRORSS) (Ortega et al., 2012), which has 0.01° grid spacing (0.005° for azimuthal shear) and 5-minute time steps, covering 1998-2011; and GridRad (Homeyer and Bowman, 2017), which has 0.02° grid spacing and 5-minute time steps, covering selected days from 2011-present. The two sources of NWP soundings are the Rapid Update Cycle (RUC), for initialization times before 0000 UTC 1 May 2012, and the Rapid Refresh (RAP), for initialization times at or after 0000 UTC 1 May 2012.
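For illustration only, the rule for choosing the sounding source can be expressed as a small function. This is a sketch, not part of our processing code; the function name and cutover constant below are hypothetical.

from datetime import datetime, timezone

# Cutover from RUC to RAP, as described above (hypothetical constant name).
RAP_CUTOVER = datetime(2012, 5, 1, 0, 0, tzinfo=timezone.utc)

def sounding_source(init_time):
    """Return the NWP model whose sounding is used for this initialization time."""
    return "RUC" if init_time < RAP_CUTOVER else "RAP"

print(sounding_source(datetime(2011, 4, 27, 18, 0, tzinfo=timezone.utc)))  # "RUC"
print(sounding_source(datetime(2013, 5, 20, 21, 0, tzinfo=timezone.utc)))  # "RAP"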
Data are pre-processed (before machine learning) in four ways. First, storm cells are outlined and tracked through time, using an extension of the method introduced in Homeyer et al. (2017). Second, each tornado report is linked to the nearest storm cell, as long as that cell passes within 10 km of the report. Third, for each storm object (one “storm object” is one storm cell at one time step), a storm-centered reflectivity image (with dimensions of 0.32° × 0.32° at heights of 1, 2, 3, ..., 12 km above sea level) is extracted. If MYRORSS data are available at the time, storm-centered images of low-level and mid-level azimuthal shear (with dimensions of 0.32° × 0.32°) are also extracted. Finally, the NWP sounding is interpolated in space and time to the center of the storm object. Interpolation is done via the nearest-neighbor method, so that the entire sounding is taken from one model grid point at one model time, which preserves physical consistency among the sounding variables.
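As an illustration of the second step, the sketch below links one tornado report to the nearest storm cell and discards the link if that cell is more than 10 km away. It assumes storm-cell positions are supplied as latitude/longitude arrays at a single time, which simplifies the track-based check described above; all names are hypothetical.

import numpy as np

EARTH_RADIUS_KM = 6371.0
MAX_LINK_DISTANCE_KM = 10.0  # linkage threshold described in the text

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) between one point and arrays of points."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def link_report_to_storm(report_lat, report_lon, storm_lats, storm_lons):
    """Return the index of the nearest storm cell, or None if it is > 10 km away."""
    distances = haversine_km(report_lat, report_lon,
                             np.asarray(storm_lats), np.asarray(storm_lons))
    nearest = int(np.argmin(distances))
    return nearest if distances[nearest] <= MAX_LINK_DISTANCE_KM else None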
We use NWP soundings, azimuthal-shear images (if available), and reflectivity images to train a convolutional neural network (CNN), the most common type of deep-learning model. The output is the probability that the storm will be tornadic at any point within the next hour. The main advantage of CNNs is that they can learn from spatiotemporal images without the need to precalculate features (e.g., CAPE, bulk shear, maximum reflectivity inside the storm). This saves time and prevents the user from imposing their preconceived notions on the model (e.g., precalculating only the features that they think are important, rather than letting the network determine the important features). Because soundings are 1-D (defined over a column), azimuthal-shear images are 2-D (defined over a horizontal plane), and reflectivity images are 3-D (defined over a volume), our CNN performs 1-D, 2-D, and 3-D convolution. To our knowledge, this approach is completely novel. The features detected by the 1-D, 2-D, and 3-D convolutions are then combined, using trainable weights, to yield the output predictions.
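A minimal architectural sketch of such a multi-branch network, written with the Keras functional API, is shown below. The layer sizes, numbers of filters, and input dimensions are illustrative assumptions, not our exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, Model

# Illustrative input shapes (assumptions, not the study's exact dimensions):
# sounding: 37 vertical levels x 5 variables; azimuthal shear: 64 x 64 pixels x 2 fields
# (low- and mid-level); reflectivity: 32 x 32 horizontal pixels x 12 heights x 1 field.
sounding_in = layers.Input(shape=(37, 5), name="sounding")
shear_in = layers.Input(shape=(64, 64, 2), name="azimuthal_shear")
refl_in = layers.Input(shape=(32, 32, 12, 1), name="reflectivity")

# One branch per data type: 1-D, 2-D, and 3-D convolution, respectively.
s = layers.Conv1D(16, 3, activation="relu")(sounding_in)
s = layers.GlobalMaxPooling1D()(s)

a = layers.Conv2D(16, 3, activation="relu")(shear_in)
a = layers.GlobalMaxPooling2D()(a)

r = layers.Conv3D(16, 3, activation="relu")(refl_in)
r = layers.GlobalMaxPooling3D()(r)

# Features from the three branches are concatenated and combined by trainable
# dense-layer weights to produce the next-hour tornado probability.
merged = layers.Concatenate()([s, a, r])
merged = layers.Dense(32, activation="relu")(merged)
prob = layers.Dense(1, activation="sigmoid", name="tornado_probability")(merged)

model = Model(inputs=[sounding_in, shear_in, refl_in], outputs=prob)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()

Each branch reduces its input to a feature vector, and the concatenation followed by dense layers supplies the trainable weights that combine evidence from the three data types.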
Our CNN could be deployed in real time, using RAP data for soundings and MRMS (Multi-radar Multi-sensor; Smith et al., 2016) data, which are similar to MYRORSS, for radar images. We will present verification results for the CNN and compare them with a non-machine-learning baseline.
References
Dieleman, S., K. Willett, and J. Dambre, 2015: Rotation-invariant convolutional neural networks for galaxy morphology prediction. Monthly Notices of the Royal Astronomical Society, 450 (2), 1441–1459.
Gagne, D., 2018: Hail forecasting with interpretable deep learning. Conference on Weather Analysis and Forecasting, Denver, Colorado, American Meteorological Society.
Homeyer, C. and K. Bowman, 2017: Algorithm description document for version 3.1 of the three-dimensional gridded NEXRAD WSR-88D radar (GridRad) dataset. http://gridrad.org/pdf/GridRad-v3.1-Algorithm-Description.pdf.
Homeyer, C., J. McAuliffe, and K. Bedka, 2017: On the development of above-anvil cirrus plumes in extratropical convection. Journal of the Atmospheric Sciences, 74 (5), 1617–1633.
Karras, T., T. Aila, S. Laine, and J. Lehtinen, 2018: Progressive growing of GANs for improved quality, stability, and variation. arXiv e-prints, 1710.10196v3.
Kunkel, K., J. Biard, and E. Racah, 2018: Automated detection of fronts using a deep learning algorithm. Conference on Artificial and Computational Intelligence and its Applications to the Environmental Sciences, Austin, Texas, American Meteorological Society.
Liu, Y., et al., 2016: Application of deep convolutional neural networks for detecting extreme weather in climate datasets. arXiv e-prints, 1605.01156.
Mahesh, A., T. O’Brien, M. Prabhat, and W. Collins, 2018: Assessing uncertainty in deep learning techniques that identify atmospheric rivers in climate simulations. Conference on Artificial and Computational Intelligence and its Applications to the Environmental Sciences, Austin, Texas, American Meteorological Society.
Ortega, K., T. Smith, J. Zhang, C. Langston, Y. Qi, S. Stevens, and J. Tate, 2012: The multi-year reanalysis of remotely sensed storms (MYRORSS) project. Conference on Severe Local Storms, Nashville, Tennessee, American Meteorological Society.
Silver, D., et al., 2017: Mastering the game of Go without human knowledge. Nature, 550 (7676), 354–359.
Smith, T., et al., 2016: Multi-radar Multi-sensor (MRMS) severe weather and aviation products: Initial operating capabilities. Bulletin of the American Meteorological Society, 97 (9), 1617–1630.
Suwajanakorn, S., S. Seitz, and I. Kemelmacher-Shlizerman, 2017: Synthesizing Obama: Learning lip sync from audio. ACM Transactions on Graphics, 36 (4).