Tuesday, 14 January 2020: 11:15 AM
156A (Boston Convention and Exhibition Center)
Storm mode (e.g., supercell, linear) is an important consideration when judging the potential for severe weather hazards. For example, modes such as quasi-linear convective systems (QLCSs) are primarily associated with severe wind and low-end (< EF2) tornado reports, whereas discrete supercells are more likely to produce significant (EF2+) tornadoes and severe hail reports. Additionally, QLCS tornadoes are typically associated with lower probabilities of detection, owing to their transient nature and complex mesoscale environments. Motivated by these operational challenges, recent work has begun to examine the application of new image classification algorithms (i.e., convolutional neural networks) for identifying storm mode and, thus, the associated relative risk of a particular severe weather phenomenon. One issue with this approach is the large number of labeled examples required to achieve useful classifications, particularly when trying to distinguish among three or more storm modes. This work uses a U.S. national composite reflectivity dataset to extract “radar scenes” centered on over 500,000 severe weather (hail, wind, and tornado) reports from 1996 to 2017. The spatial attributes of these samples vary greatly with region, time of day, time of year, environment, and severe thunderstorm hazard. We show that these scenes can be used in a semi-supervised machine learning framework to generate realistic samples based on a smaller subset of labeled data. This approach could reduce the time required to build a manually labeled dataset of storm mode examples.
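The abstract does not name the specific semi-supervised framework, but one common realization of “generating realistic samples from a smaller labeled subset” is a semi-supervised GAN, in which the discriminator’s CNN predicts K storm-mode classes plus an extra “fake” class so that unlabeled radar scenes still contribute a real-vs-fake training signal. The sketch below is a minimal, hypothetical illustration of that idea, not the authors’ implementation: the 64x64 single-channel scene size, the three mode classes (supercell, QLCS, other), and all layer choices are assumptions made for the example.

```python
# Minimal semi-supervised GAN sketch for storm-mode classification.
# Assumptions (not from the abstract): 64x64 single-channel reflectivity
# scenes scaled to [-1, 1], and K = 3 hypothetical storm-mode classes.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 3           # assumed number of storm-mode classes
LATENT = 100    # generator noise dimension

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),  # -> (B, 1, 64, 64)
        )
    def forward(self, z):
        return self.net(z.view(-1, LATENT, 1, 1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
        )
        self.head = nn.Linear(128 * 8 * 8, K + 1)  # K modes + 1 "fake" logit
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(labeled_x, labels, unlabeled_x):
    """One update: K-way supervised loss on the small labeled subset,
    plus real-vs-fake losses on unlabeled and generated scenes."""
    batch = unlabeled_x.size(0)
    fake = G(torch.randn(batch, LATENT))

    # --- Discriminator ---
    opt_d.zero_grad()
    sup = F.cross_entropy(D(labeled_x), labels)
    # Unlabeled scenes are real: minimize -log(1 - p_fake), computed
    # stably from the logits via log-sum-exp over the K real classes.
    logits_u = D(unlabeled_x)
    real_loss = -(torch.logsumexp(logits_u[:, :K], 1)
                  - torch.logsumexp(logits_u, 1)).mean()
    # Generated scenes are fake: label them with the extra class K.
    fake_loss = F.cross_entropy(D(fake.detach()),
                                torch.full((batch,), K, dtype=torch.long))
    d_loss = sup + real_loss + fake_loss
    d_loss.backward()
    opt_d.step()

    # --- Generator: push fakes toward the K real classes ---
    opt_g.zero_grad()
    logits_f = D(fake)
    g_loss = -(torch.logsumexp(logits_f[:, :K], 1)
               - torch.logsumexp(logits_f, 1)).mean()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    # Smoke test with random tensors standing in for real radar scenes.
    xl = torch.randn(8, 1, 64, 64)       # small labeled batch
    yl = torch.randint(0, K, (8,))       # storm-mode labels
    xu = torch.randn(32, 1, 64, 64)      # larger unlabeled batch
    print(train_step(xl, yl, xu))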