Using 2011–2014 SSEO verification metrics to assess uncertainty in severe weather forecasts
Monday, 3 November 2014
Capitol Ballroom AB (Madison Concourse Hotel)
Model proxy storm reports were created from the Storm Scale Ensemble of Opportunity (SSEO) for 2011–2014 and verified against observed storm reports. The data were stratified into spring (April–June), summer (July–September), and winter (October–March) groups. Analysis of verification metrics (POD, FOH, bias, FSS) shows that the high-resolution ARW and the time-lagged ARW have lower POD values, higher FOH values, and a low bias, causing them to under-forecast events. Conversely, the CONUS NMM, the high-resolution NMM, and the time-lagged NMM all have higher POD values, lower FOH values, and a larger bias, causing them to over-forecast events. The ranked distribution of FSS indicates that the NSSL WRF and the NMMB were the most skillful models.
After the forecasts were verified, we attempted to quantify uncertainty. Forecasts were most skillful during spring, owing to the stronger one-to-one correlation between maximum observed probabilities and maximum forecast probabilities. U-shaped, asymmetric rank histograms of maximum probability were found in spring, summer, and winter. We will present various ways of codifying uncertainty at the conference.
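A rank histogram of the kind described can be tallied by counting, for each case, how many ensemble members fall below the verifying observation; a U shape indicates the observation frequently falls outside the ensemble envelope (a minimal sketch; the member and observation values in the usage below are hypothetical):

```python
from collections import Counter

def rank_histogram(ensembles, observations):
    """Tally the rank of each observation within its ensemble.

    ensembles    -- list of cases, each a list of member values
    observations -- one verifying value per case
    Returns counts for ranks 0..n_members (n_members + 1 bins).
    """
    counts = Counter()
    for members, obs in zip(ensembles, observations):
        rank = sum(1 for m in members if m < obs)
        counts[rank] += 1
    n_members = len(ensembles[0])
    return [counts.get(r, 0) for r in range(n_members + 1)]

# Hypothetical usage: 3-member ensembles of maximum probability.
hist = rank_histogram([[0.1, 0.5, 0.9], [0.2, 0.3, 0.4]], [0.7, 0.05])
```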