Tuesday, 9 January 2018: 9:00 AM
Room 19AB (ACC) (Austin, Texas)
Probabilistic forecasts from a multi-model ensemble (MME) are calibrated and consolidated for subseasonal to seasonal timescales, and the skill in forecasting extremes is assessed. Calibration yields a forecast probability density function, improves the reliability of the forecast probabilities, and weights individual models according to their skill. Models with little skill regress toward near-zero anomalies or may be removed entirely from the anomaly forecasts. An extreme forecast is identified when the forecast probability of exceeding roughly one standard deviation above or below normal exceeds the climatological probability of approximately 15%. The calibration technique is compared with forecasts whose probabilities are estimated by counting the ensemble members that exceed the extreme threshold. Forecasts are verified using the Heidke and Brier skill scores and assessed for reliability of probabilities, identifying the regions and seasons in which the MME has skill in forecasting extremes. While individual ensemble models often have negative skill when forecasting extremes, the combined MME is found to have greater skill.
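The counting method and the Brier skill score mentioned above can be sketched as follows. This is an illustrative example only, not the authors' implementation; the function names, the toy 20-member ensemble, and the one-standard-deviation threshold (whose climatological exceedance probability for a Gaussian anomaly is about 15%) are assumptions for demonstration.

```python
import numpy as np

def extreme_probability(ensemble, climo_mean, climo_std):
    """Counting method: fraction of ensemble members exceeding
    one standard deviation above normal (the extreme threshold)."""
    threshold = climo_mean + climo_std
    return float(np.mean(np.asarray(ensemble) > threshold))

def brier_skill_score(forecast_probs, outcomes, climo_prob=0.15):
    """Brier skill score relative to a climatological reference
    forecast that always issues the base-rate probability (~15%
    for a one-sigma threshold). BSS > 0 means the forecast beats
    climatology; BSS = 1 is a perfect probabilistic forecast."""
    forecast_probs = np.asarray(forecast_probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)  # 1 if extreme occurred
    bs = np.mean((forecast_probs - outcomes) ** 2)
    bs_clim = np.mean((climo_prob - outcomes) ** 2)
    return float(1.0 - bs / bs_clim)

# Toy example: a warm-shifted 20-member ensemble of standardized anomalies.
rng = np.random.default_rng(0)
members = rng.normal(loc=0.8, scale=1.0, size=20)
p_extreme = extreme_probability(members, climo_mean=0.0, climo_std=1.0)
# An extreme forecast is issued when the forecast probability
# exceeds the climatological probability of ~15%.
issues_extreme_forecast = p_extreme > 0.15
```

A calibrated forecast would replace the raw member count with a probability drawn from the fitted, skill-weighted forecast density; the verification step with `brier_skill_score` is the same in either case.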