Deep Learning for Real-Time Storm-Based Tornado Prediction

Thursday, 25 October 2018
Stowe & Atrium rooms (Stoweflake Mountain Resort)
Ryan A. Lagerquist, CIMMS, Norman, OK; and C. R. Homeyer, A. McGovern, C. K. Potvin, T. Sandmael, and T. M. Smith

Machine learning, defined as a process in which computers learn autonomously from data, has been used in meteorology for decades. Like other types of statistical modeling (e.g., parameterizations and model-output statistics [MOS]), machine learning is often used to supplement guidance from numerical weather prediction (NWP). Deep learning, a subset of machine learning in which the model builds representations of the input data at several levels of abstraction, has exploded in popularity since about 2013. Notable successes include galaxy classification (Dieleman et al., 2015); beating the best human players at Go (Silver et al., 2017), a feat that many experts believed was still a decade away; creating novel but realistic human faces (Karras et al., 2018) and animations of human faces (Suwajanakorn et al., 2017); detecting tropical cyclones, atmospheric rivers, and fronts in meteorological grids (Liu et al., 2016; Mahesh et al., 2018; Kunkel et al., 2018); and hail prediction (Gagne, 2018). We use deep learning to predict whether a storm will be tornadic at any point within the next hour, in a framework suitable for real-time operations.

Our predictors are composite (multi-radar) radar images and NWP-generated soundings; our labels (verification data) are tornado reports from the Storm Events archive. The two sources of radar images are the Multi-year Reanalysis of Remotely Sensed Storms (MYRORSS; Ortega et al., 2012), which has 0.01° grid spacing (0.005° for azimuthal shear) and 5-minute time steps, covering 1998-2011; and GridRad (Homeyer and Bowman, 2017), which has 0.02° grid spacing and 5-minute time steps, covering selected days from 2011-present. The two sources of NWP soundings are the Rapid Update Cycle (RUC), for initialization times before 0000 UTC 1 May 2012, and the Rapid Refresh (RAP), for initialization times at or after 0000 UTC 1 May 2012.

Data are pre-processed (before machine learning) in four ways. First, storm cells are outlined and tracked through time, using an extension of the method introduced in Homeyer et al. (2017). Second, each tornado report is linked to the nearest storm cell, as long as that storm cell passes within 10 km of the tornado. Third, for each storm object (one "storm object" is one storm cell at one time step), a storm-centered reflectivity image (with dimensions of 0.32° × 0.32° at heights of 1, 2, 3, …, 12 km above sea level) is extracted. If MYRORSS data are available at the time, storm-centered images of low-level and mid-level azimuthal shear (with dimensions of 0.32° × 0.32°) are also extracted. Finally, the NWP sounding is interpolated in space and time to the center of the storm object. Interpolation is done via the nearest-neighbor method, so that the entire sounding is taken from one model grid point at one model time, which preserves physical consistency among the sounding variables.
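To make the patch-extraction step concrete, below is a minimal sketch in Python. It is not code from the paper: the function name, argument names, and the assumption that one height level of the radar grid is stored as a 2-D numpy array (latitude × longitude) with known corner coordinates are all ours, and edge handling for storms near the grid boundary is omitted.

```python
import numpy

def extract_storm_centered_image(
        reflectivity_grid, min_latitude_deg, min_longitude_deg,
        grid_spacing_deg, storm_latitude_deg, storm_longitude_deg,
        half_width_deg=0.16):
    """Extracts a storm-centered patch from one height level of a radar grid.

    reflectivity_grid: 2-D numpy array (latitude x longitude); hypothetical layout.
    half_width_deg: patch half-width (0.16 deg yields a 0.32 x 0.32 deg patch).
    """
    # Find the grid point nearest to the storm-object center.
    center_row = int(numpy.round(
        (storm_latitude_deg - min_latitude_deg) / grid_spacing_deg))
    center_column = int(numpy.round(
        (storm_longitude_deg - min_longitude_deg) / grid_spacing_deg))

    half_num_rows = int(numpy.round(half_width_deg / grid_spacing_deg))

    # NOTE: clipping for storms near the grid boundary is omitted here.
    return reflectivity_grid[
        (center_row - half_num_rows):(center_row + half_num_rows),
        (center_column - half_num_rows):(center_column + half_num_rows)]
```

With MYRORSS reflectivity (0.01° spacing), this yields a 32 × 32 patch per height level; stacking the 12 levels gives the storm-centered volume described above.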

We use the NWP soundings, azimuthal-shear images (if available), and reflectivity images to train a convolutional neural network (CNN), which is the most common type of deep-learning model. The output is the probability that the storm will be tornadic at any point within the next hour. The main advantage of CNNs is that they can learn from spatiotemporal images, without the need to precalculate features (e.g., CAPE, bulk shear, maximum reflectivity inside the storm). This saves time and prevents the user from imposing their preconceived notions on the model (e.g., precalculating only the features that they think are important, rather than letting the network determine which features are important). Because soundings are 1-D (defined over a column), azimuthal-shear images are 2-D (defined over a horizontal plane), and reflectivity images are 3-D (defined over a volume), our CNN performs 1-D, 2-D, and 3-D convolution. To our knowledge, this approach is completely novel. The features detected by the 1-D, 2-D, and 3-D convolutions are then combined, using trainable weights, to yield the output predictions.
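Below is a minimal sketch of such a multi-branch architecture, assuming Keras. The input shapes are hypothetical illustrations (a 37-level sounding with 5 variables, a 32 × 32 azimuthal-shear image with low- and mid-level channels, and a 32 × 32 × 12 reflectivity volume); the actual network in the paper may differ in depth, layer sizes, and regularization.

```python
from tensorflow.keras import layers, models

# One input per data type (shapes are assumptions, not from the paper).
sounding_input = layers.Input(shape=(37, 5))              # height x variable
shear_input = layers.Input(shape=(32, 32, 2))             # y x x x channel
reflectivity_input = layers.Input(shape=(32, 32, 12, 1))  # y x x x z x channel

# 1-D convolution over the sounding column.
s = layers.Conv1D(16, 3, activation='relu')(sounding_input)
s = layers.Flatten()(s)

# 2-D convolution over the azimuthal-shear images.
a = layers.Conv2D(16, 3, activation='relu')(shear_input)
a = layers.MaxPooling2D(2)(a)
a = layers.Flatten()(a)

# 3-D convolution over the reflectivity volume.
r = layers.Conv3D(16, 3, activation='relu')(reflectivity_input)
r = layers.MaxPooling3D(2)(r)
r = layers.Flatten()(r)

# Combine features from the three branches with trainable (dense) weights,
# yielding one tornado probability per storm object.
merged = layers.concatenate([s, a, r])
merged = layers.Dense(64, activation='relu')(merged)
probability = layers.Dense(1, activation='sigmoid')(merged)

model = models.Model(
    inputs=[sounding_input, shear_input, reflectivity_input],
    outputs=probability)
model.compile(optimizer='adam', loss='binary_crossentropy')
```

The dense layers after the concatenation are what "combines the features using trainable weights": they learn how much each branch's features should contribute to the final probability.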

Our CNN could be deployed in real time, using RAP data for soundings and MRMS (Multi-radar Multi-sensor; Smith et al., 2016) data, which are similar to MYRORSS, for radar images. We will present verification results for the CNN and compare them with a non-machine-learning baseline.

References

Dieleman, S., K. Willett, and J. Dambre, 2015: Rotation-invariant convolutional neural networks for galaxy morphology prediction. Monthly Notices of the Royal Astronomical Society, 450 (2), 1441–1459.

Gagne, D., 2018: Hail forecasting with interpretable deep learning. Conference on Weather Analysis and Forecasting, Denver, Colorado, American Meteorological Society.

Homeyer, C. and K. Bowman, 2017: Algorithm description document for version 3.1 of the three-dimensional gridded NEXRAD WSR-88D radar (GridRad) dataset. http://gridrad.org/pdf/GridRad-v3.1-Algorithm-Description.pdf.

Homeyer, C., J. McAuliffe, and K. Bedka, 2017: On the development of above-anvil cirrus plumes in extratropical convection. Journal of the Atmospheric Sciences, 74 (5), 1617–1633.

Karras, T., T. Aila, S. Laine, and J. Lehtinen, 2018: Progressive growing of GANs for improved quality, stability, and variation. arXiv e-prints, arXiv:1710.10196v3.

Kunkel, K., J. Biard, and E. Racah, 2018: Automated detection of fronts using a deep learning algorithm. Conference on Artificial and Computational Intelligence and its Applications to the Environmental Sciences, Austin, Texas, American Meteorological Society.

Liu, Y., et al., 2016: Application of deep convolutional neural networks for detecting extreme weather in climate datasets. arXiv e-prints, arXiv:1605.01156.

Mahesh, A., T. O’Brien, M. Prabhat, and W. Collins, 2018: Assessing uncertainty in deep learning techniques that identify atmospheric rivers in climate simulations. Conference on Artificial and Computational Intelligence and its Applications to the Environmental Sciences, Austin, Texas, American Meteorological Society.

Ortega, K., T. Smith, J. Zhang, C. Langston, Y. Qi, S. Stevens, and J. Tate, 2012: The multi-year reanalysis of remotely sensed storms (MYRORSS) project. Conference on Severe Local Storms, Nashville, Tennessee, American Meteorological Society.

Silver, D., et al., 2017: Mastering the game of Go without human knowledge. Nature, 550 (7676), 354–359.

Smith, T., et al., 2016: Multi-radar Multi-sensor (MRMS) severe weather and aviation products: Initial operating capabilities. Bulletin of the American Meteorological Society, 97 (9), 1617–1630.

Suwajanakorn, S., S. Seitz, and I. Kemelmacher-Shlizerman, 2017: Synthesizing Obama: Learning lip sync from audio. ACM Transactions on Graphics, 36 (4).
