4A.8 Making Use of Operational Model Forecast Skill in a Real-Time Setting

Monday, 16 April 2012: 5:45 PM
Champions AB (Sawgrass Marriott)
Peter S. Dailey, AIR-Worldwide, Boston, MA; and I. Dima

As operational forecast ensembles become widely available and are evaluated across a broad range of modeling applications, they provide a foundation for probabilistic modeling of storm track and intensity in real time. The National Hurricane Center (NHC) guidance for track and intensity, conveyed through the 'cone of uncertainty' and the 'maximum wind speed probability' table, reflects a composite view of the operational ensemble combined with forecast error metrics and forecaster experience. While the NHC guidance is critical for emergency management and other applications, the underlying ensemble forecast can add value to real-time risk assessment by providing coherent, spatially consistent scenarios. Further, by evaluating the skill of each ensemble model under various spatial and temporal conditions, one can develop a skill-weighted forecast that improves accuracy and better quantifies forecast uncertainty.

Skill metrics have been developed for all operational model forecasts of storm track and maximum wind intensity over the period 2004 through 2011. For each model and each lead time, a normalized skill score is assigned. Historical events are then hindcast by weighting models according to their relative skill using an inverse mean squared error (MSE) weighting scheme, and the unadjusted ensemble mean is compared with the skill-weighted ensemble mean to evaluate forecast improvement. Such skill metrics can also improve stochastic simulation techniques, which depend largely on precise quantification of forecast uncertainty.

Skill weighting schemes present several challenges. For example, as a model's formulation, resolution, and parameterization schemes change, so does its skill. To account for such changes over time, one can test for trends in forecast skill and, where statistically significant trends exist, weight those models based on more recent skill metrics. When there is no justification for a trend (e.g., the model has not fundamentally changed), one can reject the trend and weight the model on its full record. These and other methods for handling temporal and spatial dependencies in model skill will be discussed; brief sketches of the weighting and trend-testing steps follow.
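The abstract does not give the weighting formula explicitly, so the following is a minimal sketch of one common reading of an inverse-MSE scheme: each model's weight at a given lead time is proportional to the reciprocal of its historical mean squared error, and the weights are normalized before forming the skill-weighted ensemble mean. All function names, array shapes, and the synthetic error data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def inverse_mse_weights(errors):
    """Weights proportional to 1/MSE for each model at one lead time.

    errors: array of shape (n_models, n_cases) holding historical forecast
    errors (e.g., track error in km). Illustrative, not the paper's code.
    """
    mse = np.mean(errors ** 2, axis=1)   # per-model mean squared error
    w = 1.0 / mse                        # inverse-MSE weighting
    return w / w.sum()                   # normalize so weights sum to 1

def skill_weighted_mean(forecasts, weights):
    """Combine per-model forecasts (n_models,) with per-model weights."""
    return float(np.dot(weights, forecasts))

# Example: compare unadjusted vs. skill-weighted ensemble mean (synthetic data)
rng = np.random.default_rng(0)
hist_err = rng.normal(0.0, [50.0, 80.0, 120.0], size=(40, 3)).T  # km, 3 models
w = inverse_mse_weights(hist_err)
fcst = np.array([25.1, 25.6, 24.3])      # e.g., forecast latitude (deg) per model
print("equal-weight mean:  ", fcst.mean())
print("skill-weighted mean:", skill_weighted_mean(fcst, w))
```

With this scheme, a model whose historical MSE is half that of another receives twice the weight, so the skill-weighted mean is pulled toward the historically more accurate members.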
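The trend-handling step could likewise be realized in several ways; one plausible sketch, assuming a simple linear-regression significance test on seasonal skill scores, is shown below. The significance threshold, the recent-window length, and all identifiers are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def skill_for_weighting(annual_skill, years, alpha=0.05, recent=3):
    """Choose the skill estimate used to weight a single model.

    annual_skill: one normalized skill score per season (illustrative input).
    If the linear trend across seasons is statistically significant (the
    model has likely changed), use only the most recent `recent` seasons;
    otherwise reject the trend and pool the full record.
    """
    res = stats.linregress(years, annual_skill)
    if res.pvalue < alpha:                          # significant skill trend
        return float(np.mean(annual_skill[-recent:]))
    return float(np.mean(annual_skill))             # no trend: use full record

years = np.arange(2004, 2012)
improving = np.array([0.40, 0.44, 0.47, 0.52, 0.55, 0.58, 0.62, 0.66])
flat = np.array([0.50, 0.48, 0.52, 0.49, 0.51, 0.50, 0.52, 0.49])
print(skill_for_weighting(improving, years))  # recent seasons dominate
print(skill_for_weighting(flat, years))       # full 2004-2011 record used
```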
