Wednesday, 25 January 2012: 10:45 AM
Multiple-Metric Assessment Strategies for QPF Verification: The Impact of Aggregation and Analysis Choices on Comparative Modeling Results
Room 238 (New Orleans Convention Center)
For several years, variations of WRF-model-based ensemble precipitation forecasts have been produced for the Hydrometeorology Testbed (HMT) winter exercise, which focuses on extreme precipitation along the Pacific West Coast and in the Sierra Nevada. During the last two exercises, real-time verification statistics for quantitative precipitation forecasts (QPF) from these ensembles have been accumulated to assess several models, including the WRF ensemble, along with the impact of modeling options (model physics, domain, and resolution) and verification choices (analysis dataset and metrics). We present an initial attempt to assemble these statistics into a credible assessment and to present them cogently and concisely. Of particular interest are methods that facilitate credible comparisons between models, effective metrics that diagnose characteristics of ensemble-member performance, and confidence tests that estimate the overall impact of verification-dataset choices and episode aggregation. Both standard pairs-based verification techniques and spatial methods using the Developmental Testbed Center (DTC) verification package (MET, the Model Evaluation Tools) have been applied to address these topics. In particular, we present standard results condensed onto a ‘Performance Diagram’ of the type developed by Paul Roebber and Clive Wilson, spatial-attribute plots that directly compare the validity of the spatial QPF characteristics of the different modeling options, and rank histograms that display persistent comparative performance patterns among the WRF ensemble members.
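For readers unfamiliar with the two displays, the quantities behind them are simple functions of the verification counts. The sketch below is illustrative only, not the authors' HMT code (MET computes these statistics directly); it assumes 2x2 contingency-table counts for the performance diagram and an n-case, m-member ensemble for the rank histogram.

```python
import numpy as np

def performance_diagram_point(hits, misses, false_alarms):
    """Coordinates and scores for a Roebber-style performance diagram,
    computed from 2x2 contingency-table counts."""
    pod = hits / (hits + misses)                    # probability of detection (y-axis)
    sr = hits / (hits + false_alarms)               # success ratio = 1 - FAR (x-axis)
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias (diagonal lines)
    csi = hits / (hits + misses + false_alarms)     # critical success index (curved contours)
    return pod, sr, bias, csi

def rank_histogram(obs, ens):
    """Count where each observation ranks among its sorted ensemble members;
    a flat histogram suggests a well-calibrated ensemble (ties ignored here).

    obs: (n,) observed values; ens: (n, m) ensemble forecasts."""
    ranks = np.sum(ens < obs[:, None], axis=1)      # rank of each obs among m members: 0..m
    return np.bincount(ranks, minlength=ens.shape[1] + 1)
```

A single model or threshold contributes one (sr, pod) point to the diagram; plotting all modeling options on the same axes, with bias and CSI isolines overlaid, is what allows the condensed side-by-side comparison described above.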