Tuesday, 5 June 2018
Aspen Ballroom (Grand Hyatt Denver)
Development of an Adaptive Ensemble Technique
Tyler Wixtrom and Brian Ancell
Many operational ensemble prediction systems currently use a single set of parameterization schemes for all model forecasts. However, previous studies have shown that variability resulting from changes in model physics is important to ensemble forecasts, suggesting that certain parameterization choices may be more skillful for a given situation. The question arises of whether a training dataset composed of previous model forecasts could be used to select parameterizations for an upcoming forecast, improving the skill of that forecast within the ensemble. Similar work has demonstrated that incorporating such a dataset allows ensemble statistics to be derived from deterministic models with performance comparable to that of a traditional ensemble approach. This study seeks to determine whether predicting optimized ensemble parameterizations from previous verification will yield an adaptive ensemble with improved forecast skill compared to a static-parameterization ensemble.
The ensemble analogue technique (Delle Monache et al. 2013) will be used to select analogues to the feature of interest. These analogues will then be used to predict the optimized set of parameterization schemes, and a new forecast will be generated using that set. This new forecast will be compared against a control with static physics to determine whether an improvement in skill is observed. Initial testing will be conducted with the Big Weather Web ensemble, seeking to predict the ensemble member with the best verification based on analogues drawn from the entire dataset. A 12-month training dataset will then be compiled with the Weather Research and Forecasting (WRF) model, followed by three months of adaptive forecasts. For both the training dataset and the adaptive forecasts, the model domain will consist of a 12-km grid covering the continental United States with a 4-km nested domain over the West Texas region. These forecasts will be verified by object-based methods using the National Center for Atmospheric Research Model Evaluation Tools (MET), quantifying any change in forecast skill relative to a static-parameterization control across multiple verification metrics. Additional testing will be conducted to determine sensitivity to changes in verification method, grid spacing, and forecast variable.
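As a minimal illustrative sketch, not the study's actual implementation, the analogue selection and member-prediction steps might be expressed in Python as follows. The array shapes, the per-variable weights, and the helper names (analog_distances, predict_best_member) are assumptions introduced here for illustration only:

    import numpy as np

    def analog_distances(current, archive, weights, t, half_window=1):
        """Analogue metric in the spirit of Delle Monache et al. (2013):
        for each archived forecast, a weighted sum over variables of the
        RMS difference from the current forecast within a short time
        window centered on lead-time index t.

        current : (n_vars, n_times)           forecast being matched
        archive : (n_cases, n_vars, n_times)  training set of past forecasts
        weights : (n_vars,)                   per-variable weights (assumed)
        """
        win = slice(t - half_window, t + half_window + 1)
        sigma = archive.std(axis=(0, 2))               # per-variable spread of the archive
        diff = archive[:, :, win] - current[:, win]    # (n_cases, n_vars, window)
        rms = np.sqrt((diff ** 2).sum(axis=2))         # (n_cases, n_vars)
        return (weights / sigma * rms).sum(axis=1)     # one distance per past case

    def predict_best_member(current, archive, member_scores, weights, t, n_analogs=10):
        """Predict the physics configuration to run next: the ensemble
        member that verified best, on average, over the n_analogs
        closest past cases.

        member_scores : (n_cases, n_members) past verification (lower = better)
        """
        nearest = np.argsort(analog_distances(current, archive, weights, t))[:n_analogs]
        return member_scores[nearest].mean(axis=0).argmin()

In the study itself, the selected configuration would then drive a new forecast for comparison against the static-physics control.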