1172 Measuring the Quality of Updating High Resolution Time-Lagged Ensemble Probability Forecasts Using Spatial Verification Techniques

Wednesday, 25 January 2017
4E (Washington State Convention Center)
Tressa L. Fowler, NCAR, Boulder, CO; and T. Jensen, R. Bullock, and J. E. Halley Gotway
Manuscript (702.9 kB)

Numerical models are producing updated forecasts more frequently as computing resources increase. Some updates are plagued by large changes from one issue time to the next, causing users to lose confidence in the forecasts. For forecasts that are revised, especially those with frequent updates, the magnitude and randomness of the revision series are important aspects of forecast quality. Similar problems exist in economics and other fields, where many metrics are already in place for simple updating time series. Unfortunately, though everyone knows forecast jumpiness when they see it, it is rarely measured objectively in weather forecasting. Users may examine dprog/dt and even calculate the total difference in the forecast from one update to the next. However, this measure suffers from the same double-penalty issue as traditional verification measures, namely that a small displacement may be counted as a large change at multiple locations. In this presentation, assessments of forecast revision magnitude and randomness are applied to attributes of forecast objects identified with spatial verification techniques, thus incorporating both temporal and spatial information into the assessment. Examples of revision assessments for probability forecasts from a high-resolution time-lagged ensemble that is updated hourly are discussed.
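The abstract does not specify the exact revision metrics; the following is a minimal sketch of one plausible approach, assuming an object attribute (here, the area of a matched forecast object, with hypothetical values) has already been extracted from each successive forecast update valid at the same time, for example with an object-based spatial verification method such as MODE. It summarizes revision magnitude with the mean absolute revision and revision randomness with the lag-1 autocorrelation of the revision series.

```python
import numpy as np

def revision_series(values):
    """Differences between successive forecasts valid at the same time.

    `values` holds one object attribute (e.g., object area) taken from
    each successive forecast update, ordered oldest to newest.
    """
    values = np.asarray(values, dtype=float)
    return np.diff(values)

def revision_summary(values):
    """Summarize the magnitude and randomness of a revision series.

    Magnitude: mean absolute revision.
    Randomness: lag-1 autocorrelation of the revisions; values near zero
    suggest noise-like revisions, strongly positive values suggest a
    systematic drift, and strongly negative values suggest flip-flopping.
    """
    rev = revision_series(values)
    magnitude = np.mean(np.abs(rev))
    if len(rev) > 1 and np.std(rev) > 0:
        lag1 = np.corrcoef(rev[:-1], rev[1:])[0, 1]
    else:
        lag1 = np.nan
    return {"mean_abs_revision": magnitude, "lag1_autocorrelation": lag1}

# Hypothetical example: area (in grid squares) of one matched forecast
# object in six hourly updates of a time-lagged ensemble probability
# forecast, all valid at the same time.
object_area = [410.0, 395.0, 460.0, 455.0, 380.0, 400.0]
print(revision_summary(object_area))
```

Working on object attributes rather than gridpoint differences is what avoids the double penalty described above: a small displacement changes the object's centroid location slightly instead of registering large errors at two sets of grid points.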