S4 Verification of Hail Forecasts Produced by Machine Learning Algorithms

Sunday, 6 January 2019
Hall 4 (Phoenix Convention Center - West and North Buildings)
Sarah McCorkle, Indiana University, Bloomington, IN; and N. Snook and A. McGovern

Abstract

Hail causes billions of dollars' worth of damage every year. The ability to forecast significant hail events, even just a day in advance, can greatly mitigate severe hail risk. Machine-learning (ML) algorithms have already shown skill in producing reliable hail forecasts, as they can identify areas where hail will be a threat. Using output from the High-Resolution Ensemble Forecast version 2 (HREFv2) model, new forecasts were produced during the Hazardous Weather Testbed (HWT) Spring 2018 experiment, focusing on the period from 30 April to 1 June. Verification is necessary to identify weaknesses in these algorithms so that improvements can be made. Verifying these forecasts with reliability diagrams revealed an over-forecasting bias. Isotonic regression was used to correct this tendency to over-predict severe hail. The raw HREFv2 data were calibrated both to the SPC practically perfect forecasts and to the observations. When calibrated to the observations, the corrected HREFv2 produced more reliable forecasts. It was concluded that post-calibrated HREFv2 forecasts should be used moving forward, with the goal of implementing these algorithms in operational forecasting procedures.
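The calibration step described above can be sketched in a few lines. This is not the authors' code; it is a minimal illustration, using scikit-learn's `IsotonicRegression` and synthetic data standing in for HREFv2 forecast probabilities and hail observations, of how isotonic regression corrects an over-forecasting bias of the kind a reliability diagram would reveal.

```python
# Illustrative sketch (not the authors' implementation): correcting
# over-forecast probabilities with isotonic regression.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(42)

# Synthetic "raw" forecast probabilities that over-forecast: the true
# event frequency is only 60% of the stated probability.
raw_prob = rng.uniform(0.0, 1.0, 5000)
observed = (rng.uniform(0.0, 1.0, 5000) < 0.6 * raw_prob).astype(int)

# Fit a monotonic mapping from raw probabilities to observed frequencies,
# analogous to calibrating HREFv2 output against observations.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
calibrated = iso.fit_transform(raw_prob, observed)

# Reliability check: after calibration, the mean forecast probability in
# each bin should track the observed event frequency.
bins = np.linspace(0.0, 1.0, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (calibrated >= lo) & (calibrated < hi)
    if mask.sum() > 50:
        print(f"bin [{lo:.1f},{hi:.1f}): "
              f"mean forecast={calibrated[mask].mean():.2f}, "
              f"obs freq={observed[mask].mean():.2f}")
```

Because isotonic regression replaces each monotone block with the mean of its observations, the calibrated probabilities match the overall observed frequency, removing the bias that the raw forecasts exhibit.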
