332 Ensemble Precipitation Forecasting with Adaptive Parameterization Selection

Monday, 7 January 2019
Hall 4 (Phoenix Convention Center - West and North Buildings)
Tyler J. Wixtrom, Texas Tech Univ., Lubbock, TX; and B. C. Ancell

Many operational ensemble prediction systems currently use a single set of parameterization schemes for all model forecasts. However, previous studies have shown that variability resulting from changes in model physics is important to ensemble forecasts, suggesting that certain parameterization choices may be more skillful for a given situation. This raises the question of whether a training dataset of previous model forecasts could be used to select parameterizations for an upcoming forecast, improving the skill of that forecast within the ensemble. Similar work has demonstrated that incorporating such a dataset allows ensemble statistics to be derived from deterministic models with performance comparable to a traditional ensemble approach. This study seeks to determine whether predicting optimized ensemble parameterizations from previous verification will yield an adaptive ensemble with improved forecast skill, relative to a static-parameterization ensemble, for both climatological and extreme precipitation events.

The technique of Hamill et al. (2006) will be used to select analogues to ensemble-mean precipitation forecasts. These analogues will then be used to predict the optimized set of parameterization schemes for the given upcoming flow pattern, and a new forecast will be generated using this optimized set. This new forecast will be compared to a control with static physics to determine whether an improvement in skill is observed. Initial testing and proof of concept have been conducted with the Big Weather Web ensemble. Comparisons of analogue predictors and threshold sensitivity will be investigated on a 12-month training dataset at convection-allowing resolution, which then serves as the basis for an additional three months of adaptive forecasts. Forecasts are verified with multiple statistical methods, both grid-based and object-based, to quantify any change in forecast skill relative to the static-parameterization control.
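The workflow above can be sketched in two steps: rank past cases by their similarity to the current ensemble-mean precipitation forecast, then pick the parameterization set that verified best over those analog cases. The following is a minimal illustration of that idea, not the authors' implementation; the RMS-difference analog ranking is a simplification of the Hamill et al. (2006) technique, and all function names and array shapes are assumptions for the sketch.

```python
import numpy as np

def select_analogues(current_fcst, training_fcsts, n_analogues=5):
    """Rank archived forecasts by RMS difference to the current
    ensemble-mean precipitation field and return the indices of the
    n_analogues closest matches (simplified analog selection)."""
    # training_fcsts: (n_cases, ny, nx); current_fcst: (ny, nx)
    rms = np.sqrt(np.mean((training_fcsts - current_fcst) ** 2, axis=(1, 2)))
    return np.argsort(rms)[:n_analogues]

def choose_scheme_set(analogue_idx, verification_scores):
    """Pick the parameterization set with the best (lowest) mean
    verification score, e.g. RMSE, over the analog cases."""
    # verification_scores: (n_cases, n_scheme_sets)
    mean_scores = verification_scores[analogue_idx].mean(axis=0)
    return int(np.argmin(mean_scores))
```

In practice the analog search and the verification scores would come from the 12-month training dataset described above, with one score per candidate physics configuration per case; the chosen index maps back to a concrete set of parameterization schemes for the adaptive forecast.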
