Improving Quantitative Precipitation Forecasts Through Objective Evaluations during NOAA Testbed Activities

Monday, 24 January 2011: 5:00 PM
612 (Washington State Convention Center)
Tara L. Jensen, NCAR/RAL, Boulder, CO; and E. I. Tollerud, S. J. Weiss, F. E. Barthold, D. R. Novak, H. Yuan, J. H. Gotway, E. Sukovich, P. Oldenburg, W. L. Clark, A. J. Clark, F. Kong, M. Xue, M. Harrold, T. L. Fowler, and B. G. Brown

Verification of precipitation forecasts plays a crucial role in improving the accuracy of precipitation totals used as input to hydrologic models. The Model Evaluation Tools (MET) software was designed by the Developmental Testbed Center (DTC) to provide the numerical weather prediction community with quality software incorporating the latest advances in forecast verification, including methods for verifying quantitative precipitation forecasts (QPF).
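Traditional QPF verification of the kind MET performs starts from a 2x2 contingency table of threshold exceedances. The sketch below illustrates the idea with a few common categorical scores; the function name and interface are hypothetical, not MET's actual API, and MET computes many more statistics than shown here.

```python
import numpy as np

def qpf_categorical_stats(forecast, observed, threshold):
    """Contingency-table scores for precipitation exceeding a threshold.

    Simplified illustration of traditional categorical QPF verification
    (frequency bias, critical success index, Gilbert skill score).
    """
    f = np.asarray(forecast) >= threshold   # forecast "yes" events
    o = np.asarray(observed) >= threshold   # observed "yes" events

    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    correct_negatives = np.sum(~f & ~o)
    total = hits + misses + false_alarms + correct_negatives

    bias = (hits + false_alarms) / (hits + misses)      # frequency bias
    csi = hits / (hits + misses + false_alarms)         # critical success index
    # Gilbert skill score (equitable threat score): CSI adjusted for
    # the number of hits expected by chance.
    hits_random = (hits + misses) * (hits + false_alarms) / total
    gss = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return {"bias": bias, "csi": csi, "gss": gss}
```

For example, comparing gridded 6-h accumulations against an analysis at a 0.5-inch threshold yields a frequency bias above 1 when the model over-forecasts the areal coverage of heavy rain.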

Collaboration between the DTC and NOAA through testbeds has led to a demonstration of MET capability within the operational forecast community. The Hydrometeorology Testbed (HMT) has conducted an experiment examining cool-season precipitation over California, Oregon, and Washington for more than six years. In 2010, the DTC collaborated with HMT to evaluate a nine-member ensemble generated by the NOAA Earth System Research Laboratory (ESRL). Results were provided to the NOAA River Forecast Centers and NWS Weather Forecast Offices as observations became available. The DTC has also collaborated for three years with the Hazardous Weather Testbed (HWT). A cornerstone of the HWT is its Spring Experiment, in which forecasters and researchers explore the use of the latest modeling technology to predict severe weather. The 2010 NOAA HWT Spring Experiment incorporated the prediction of precipitation exceeding 0.5 and 1 inch in 6 hours into the forecast exercise. These probabilistic predictions were generated using output from four convection-allowing (3-4 km grid spacing) deterministic models as well as a twenty-six-member ensemble provided by the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma.
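A probabilistic exceedance forecast of the kind evaluated in the Spring Experiment can be derived from an ensemble as the fraction of members exceeding the threshold at each grid point. The sketch below shows this ensemble relative frequency calculation; it is an illustrative stand-in, not the CAPS or ESRL processing code.

```python
import numpy as np

def ensemble_relative_frequency(members, threshold):
    """Fraction of ensemble members exceeding `threshold` at each grid point.

    `members` has shape (n_members, ny, nx); the result is a probability
    field in [0, 1] suitable for probabilistic verification.
    """
    members = np.asarray(members, dtype=float)
    exceed = members >= threshold          # boolean, per member and point
    return exceed.mean(axis=0)             # average over the member axis
```

Applied to 6-h accumulations from a twenty-six-member ensemble with a 0.5-inch threshold, this yields the probability-of-exceedance field that forecasters compared against observed exceedance areas.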

Examples demonstrating MET's traditional and object-based verification capabilities from these two projects will be presented. Objective evaluation of both deterministic and probabilistic fields (i.e., ensemble mean and ensemble relative frequency), along with ideas for how this tool may be used in other aspects of hydrometeorology, will be discussed.
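Object-based verification begins by resolving a precipitation field into coherent objects rather than comparing it point by point. A bare-bones version of the object identification step is sketched below using simple connected-component labeling; MET's object-based method (MODE) additionally applies convolution smoothing and fuzzy-logic matching of forecast and observed objects, which are omitted here.

```python
from collections import deque
import numpy as np

def identify_objects(field, threshold):
    """Label contiguous regions of `field` exceeding `threshold`.

    Uses 4-connectivity and breadth-first flood fill; returns an integer
    label grid (0 = below threshold) and the number of objects found.
    Illustrative only -- not MODE's actual algorithm.
    """
    mask = np.asarray(field) >= threshold
    labels = np.zeros(mask.shape, dtype=int)
    ny, nx = mask.shape
    count = 0
    for i in range(ny):
        for j in range(nx):
            if mask[i, j] and labels[i, j] == 0:
                count += 1                      # start a new object
                labels[i, j] = count
                queue = deque([(i, j)])
                while queue:                    # flood-fill its extent
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if (0 <= yy < ny and 0 <= xx < nx
                                and mask[yy, xx] and labels[yy, xx] == 0):
                            labels[yy, xx] = count
                            queue.append((yy, xx))
    return labels, count
```

Once objects are identified in both the forecast and observed fields, attributes such as area, centroid displacement, and orientation can be compared to diagnose errors that gridpoint scores obscure.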