14.2 Verification of a Sensitivity-Based Ensemble Subsetting Approach Augmented with Machine-Learning-Based Predictions of Severe Hazards

Thursday, 20 July 2023: 11:30 AM
Madison Ballroom B (Monona Terrace)
Austin A. Coleman, CIRES / WPC, Boulder, CO; Texas Tech Univ., Lubbock, TX; and B. C. Ancell

Ensemble sensitivity analysis (ESA) offers a computationally inexpensive way to diagnose sources of uncertainty in high-impact forecast features by relating a localized forecast phenomenon of interest (the response function) back to earlier forecast conditions. These information-rich diagnostic fields provide ample opportunity both for understanding predictability from a basic science standpoint and for data mining ensemble systems to better inform forecaster decision-making from an operational standpoint.
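As a rough illustration of the calculation behind ESA, the sensitivity field is commonly estimated as the linear regression slope of the response function onto an earlier forecast field across ensemble members. The sketch below assumes that standard formulation; the variable names and field choices are illustrative and not the authors' exact implementation.

```python
import numpy as np

def esa_sensitivity(state, response):
    """Estimate ensemble sensitivity dR/dx at every grid point as the regression
    slope of the response function R onto an earlier forecast field x across members.

    state    : (n_members, n_points) earlier forecast field (e.g., 500-hPa height)
    response : (n_members,) response function per member (e.g., area-max updraft helicity)
    """
    x = state - state.mean(axis=0)          # member perturbations about the ensemble mean
    r = response - response.mean()          # response perturbations
    cov = x.T @ r / (len(r) - 1)            # covariance of each grid point with R
    var = state.var(axis=0, ddof=1)         # ensemble variance at each grid point
    return np.divide(cov, var, out=np.full_like(cov, np.nan), where=var > 0)
```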

From a practical perspective, ESA fields provide valuable context for high-impact forecast events but can contain too much detail to be used readily in fast-paced operational environments. In direct response to this forecaster feedback, an ESA-based ensemble subsetting approach was developed. Sensitivity-based ensemble subsetting is a post-processing technique that identifies the ensemble members best equipped to predict a chosen high-impact forecast feature later in the forecast period, without the need for additional data assimilation cycles. The flexibility, speed, and customizability of the tool make it well suited for forecasters interrogating uncertain ensemble guidance.
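A minimal sketch of how such a subsetting step might look, assuming the common formulation in which members are ranked by how closely their early forecast matches a verifying analysis within the most sensitive regions; the sensitivity weighting, percentile threshold, and subset size below are illustrative assumptions, not the operational configuration.

```python
import numpy as np

def sensitivity_subset(state, analysis, sensitivity, n_keep=10, pct=90):
    """Keep the n_keep members whose early-forecast field best matches the
    verifying analysis where the ESA sensitivity magnitude is largest.

    state       : (n_members, n_points) early forecast field per member
    analysis    : (n_points,) verifying analysis of the same field
    sensitivity : (n_points,) ESA sensitivity field (e.g., from esa_sensitivity)
    """
    sens = np.abs(sensitivity)
    mask = sens >= np.nanpercentile(sens, pct)                 # most sensitive grid points
    err = np.nansum(sens[mask] * (state[:, mask] - analysis[mask]) ** 2, axis=1)
    return np.argsort(err)[:n_keep]                            # indices of the best-fitting members
```

The downstream forecast of the response function from these retained members then serves as the subset guidance compared against the full ensemble.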

Subjective assessment from four years of real-time experiments in the NOAA Hazardous Weather Testbed Spring Forecasting Experiment (HWT SFE) yielded promising results, with forecasters expressing enthusiasm about the usefulness of subset guidance. Furthermore, subsets outperformed their full-ensemble counterparts substantially and consistently in objective idealized experiments. However, translating this underlying utility into a practical framework with objective verification statistics has proven challenging. Semi-idealized experiments revealed that a primary obstacle to subset success is the verification of updraft helicity response functions against local storm reports, two fundamentally different quantities. Mitigating this disconnect has involved developing machine learning algorithms that explicitly predict local storm reports and comparing the resulting subset forecasts of severe hazards to their full-ensemble counterparts. These results and their implications are discussed.
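To make the machine-learning step concrete, an algorithm of the kind described here could resemble the following sketch, which fits a gradient-boosted classifier to placeholder storm-environment predictors and local-storm-report labels; the model class, predictors, and data are assumptions for illustration only, not the algorithms actually developed.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Hypothetical training set: one row per grid box and forecast case, with
# ensemble-derived predictors (e.g., max updraft helicity, CAPE, shear);
# labels flag whether a local storm report (LSR) occurred nearby.
X = rng.normal(size=(5000, 8))              # placeholder predictors
y = (rng.random(5000) < 0.1).astype(int)    # placeholder LSR occurrence labels

model = GradientBoostingClassifier().fit(X, y)
p_severe = model.predict_proba(X)[:, 1]     # severe-report probabilities: the quantity
                                            # verified against LSRs for subset vs. full ensemble
```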
