From a practical perspective, ESA fields provide valuable context for high-impact forecast events but can contain too much detail to be used effectively in fast-paced operational environments. In direct response to this forecaster sentiment, an ESA-based ensemble subsetting approach was developed. Sensitivity-based ensemble subsetting is a post-processing technique that identifies the ensemble members best equipped to predict a chosen high-impact forecast feature later in the forecast period, without the need for additional data assimilation cycles. The flexibility, speed, and customizability of the tool make it well suited for forecasters interrogating uncertain ensemble guidance.
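
To make the procedure concrete, the following minimal sketch (Python with NumPy, using synthetic placeholder fields rather than any data from the experiments described here) illustrates the general idea under common assumptions: ensemble sensitivity is estimated by regressing a scalar response function onto an earlier-forecast field, the most sensitive grid points are retained, and the members closest to a verifying analysis in those sensitive regions, weighted by sensitivity magnitude, form the subset. All variable names and thresholds (precursor_field, uh_response, analysis_field, the 90th-percentile cutoff, n_subset) are illustrative assumptions, not values from the study.

# Hedged sketch of sensitivity-based ensemble subsetting with synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_members, n_points = 40, 500            # ensemble size, flattened grid points

# Stand-ins: an early-forecast precursor field per member, a scalar response per
# member (e.g., domain-maximum updraft helicity), and a verifying analysis of the
# precursor field valid at the early forecast time.
precursor_field = rng.normal(size=(n_members, n_points))
uh_response = precursor_field[:, :50].mean(axis=1) + rng.normal(scale=0.3, size=n_members)
analysis_field = rng.normal(size=n_points)

# 1) Ensemble sensitivity: regression of the response onto the precursor field,
#    dJ/dx ~ cov(J, x) / var(x), evaluated independently at each grid point.
x_anom = precursor_field - precursor_field.mean(axis=0)
j_anom = uh_response - uh_response.mean()
sensitivity = (j_anom @ x_anom) / (n_members - 1) / x_anom.var(axis=0, ddof=1)

# 2) Keep only the most sensitive grid points (here, top 10% by |sensitivity|
#    scaled by ensemble spread), where the response is most strongly controlled.
impact = np.abs(sensitivity) * x_anom.std(axis=0, ddof=1)
sensitive_pts = impact >= np.percentile(impact, 90)

# 3) Sensitivity-weighted squared error of each member against the analysis
#    over the sensitive points only.
err = precursor_field[:, sensitive_pts] - analysis_field[sensitive_pts]
weighted_err = (np.abs(sensitivity[sensitive_pts]) * err**2).sum(axis=1)

# 4) Retain the members that best match the analysis in the sensitive regions.
n_subset = 10
subset_members = np.argsort(weighted_err)[:n_subset]
print("Subset members:", subset_members)
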
Subjective assessment from four years of real-time experiments in the NOAA HWT SFE yielded promising results, with forecasters expressing enthusiasm about the usefulness of subset guidance. Furthermore, subsets outperformed their full-ensemble counterparts substantially and consistently in objective idealized experiments. However, translating this underlying utility into a practical framework with objective verification statistics has proven challenging. Semi-idealized experiments revealed that a primary obstacle to subset success is the verification of updraft helicity response functions against local storm reports, two fundamentally different quantities. Mitigating this disconnect has involved developing machine learning algorithms that explicitly predict local storm reports and comparing the resulting subset forecasts of severe hazards to their full-ensemble counterparts. These results and their implications are discussed.
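
As an illustration of the kind of objective subset-versus-full-ensemble comparison described above, the sketch below scores hazard exceedance probabilities from a subset and from the full ensemble against a binary storm-report grid using the Brier score. All arrays, thresholds, and member counts are synthetic placeholders, not values from the experiments summarized here; the probability field could equally come from an ML-derived report prediction rather than a raw updraft helicity exceedance.

# Hedged sketch of a subset vs. full-ensemble verification comparison.
import numpy as np

rng = np.random.default_rng(1)
n_members, ny, nx = 40, 60, 80
uh_forecast = rng.gamma(shape=2.0, scale=20.0, size=(n_members, ny, nx))  # stand-in UH swaths
report_grid = (rng.random((ny, nx)) < 0.05).astype(float)                 # stand-in report mask
subset_members = np.arange(10)                                            # from the subsetting step

def exceedance_prob(members, threshold=75.0):
    """Fraction of members exceeding a hazard threshold at each grid point."""
    return (members > threshold).mean(axis=0)

def brier_score(prob, obs):
    """Mean squared difference between forecast probability and binary observation."""
    return float(np.mean((prob - obs) ** 2))

full_prob = exceedance_prob(uh_forecast)
subset_prob = exceedance_prob(uh_forecast[subset_members])
print("Full-ensemble Brier score:", brier_score(full_prob, report_grid))
print("Subset Brier score:       ", brier_score(subset_prob, report_grid))
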

