Wednesday, 9 January 2013: 8:45 AM
Room 10B (Austin Convention Center)
Edward Tollerud, NOAA/FSL, Boulder, CO; and T. L. Jensen, P. Oldenburg, T. L. Fowler, S. Stoytchev, and B. G. Brown
Model forecast verification plays a crucial role in operational forecasting, both in real time during immediate forecasting scenarios and in retrospective (e.g., seasonal) studies of model performance. For the former, uncertainty in scoring that arises solely from verification system methodology (e.g., choice of scores, domain, aggregation techniques, or observations) is a critical factor in identifying the best current forecasts; for the latter, if correct choices are to be made among numerous candidate new or revised models, then comparison differences due to factors other than model performance must be known and weighed. As demand for higher-resolution regional forecasts increases, many of these factors become particularly difficult to assess.
During several winter exercises in the California Sierras and Coast Ranges, QPF verification results have been produced in real time for the HMT and subsequently assembled for retrospective studies, covering several operational models and an ESRL-based research regional ensemble modeling system. We use these results to address several verification questions of operational relevance related to the issues above, including the choice of verification data streams, appropriate scoring algorithms for regional QPF during heavy rainfall in extreme terrain, and the selection and interpretation of spatial and temporal aggregation methods for retrospective and real-time applications. In particular, we explore the usefulness of traditional and object-based ensemble verification methods and display products developed for the Model Evaluation Tools (MET) verification package.
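To make the scoring discussion concrete, a minimal sketch follows of two traditional categorical QPF scores, the Critical Success Index (CSI) and the Equitable Threat Score (ETS), computed from a contingency table of threshold exceedances. This is an illustrative implementation only, not code from the MET package; the example accumulation values and the 10 mm threshold are invented for demonstration.

```python
# Illustrative sketch (not MET itself): traditional categorical QPF scores
# from paired forecast/observed precipitation amounts at a chosen threshold.

def contingency_counts(forecasts, observations, threshold):
    """Count hits, false alarms, misses, and correct negatives."""
    hits = false_alarms = misses = correct_negs = 0
    for f, o in zip(forecasts, observations):
        fc, ob = f >= threshold, o >= threshold
        if fc and ob:
            hits += 1
        elif fc and not ob:
            false_alarms += 1
        elif not fc and ob:
            misses += 1
        else:
            correct_negs += 1
    return hits, false_alarms, misses, correct_negs

def csi(hits, false_alarms, misses):
    """Critical Success Index: hits over all forecast or observed events."""
    return hits / (hits + false_alarms + misses)

def ets(hits, false_alarms, misses, correct_negs):
    """Equitable Threat Score: CSI adjusted for hits expected by chance."""
    total = hits + false_alarms + misses + correct_negs
    hits_random = (hits + false_alarms) * (hits + misses) / total
    return (hits - hits_random) / (hits + false_alarms + misses - hits_random)

# Hypothetical 6-h accumulations (mm), verified at a 10 mm threshold
fcst = [12.0, 3.0, 25.0, 0.0, 11.0, 8.0]
obs  = [15.0, 0.5, 30.0, 0.0,  2.0, 12.0]
h, fa, m, cn = contingency_counts(fcst, obs, threshold=10.0)
print(f"CSI = {csi(h, fa, m):.2f}, ETS = {ets(h, fa, m, cn):.2f}")
```

In practice the aggregation questions raised above enter exactly here: whether the four contingency counts are pooled across stations and forecast cycles before scoring, or scores are computed per case and then averaged, can change the ranking of competing models.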