The sensitivity of QPF verification to the choice of verification data: A study from the DWFE
Edward Tollerud, NOAA/ESRL, Boulder, CO; and L. R. Bernardet, A. Loughe, and L. S. Wharton
Although verification fields for quantitative precipitation forecasts (QPF) are commonly but implicitly assumed to be “perfect”, in reality the error characteristics and other aspects of these fields are themselves sources of uncertainty in the verification scores based on them. Beyond simple measurement errors, sources of this uncertainty include density variations across the observation domain (in the case of gage measurements); sampling differences (for example, radar beam height and beam blockage); and representativeness differences between observation types and model resolution (e.g., accumulated gage “point” observations and snapshot satellite pixel “averages” vs. model grid box estimates). The magnitude of these uncertainties must be kept in mind when scores are used to compare model performance.

The model fields and verification data produced and archived during the two-month Developmental Testbed Center Winter Forecast Experiment (DWFE) represent an excellent opportunity to examine this impact for wintertime precipitation regimes over the U.S. Datasets archived during the course of the experiment include the Stage II and Stage IV grid analyses of (primarily) radar estimates, individual hourly gages from the Hydrometeorological Automated Data System (HADS), the highly accurate daily estimates provided by River Forecast Centers (RFCs), and other high-density mesonetworks of precipitation observations. Automated quality control procedures for the hourly gage sites allow a further segregation of gage observations into rigorously screened and lightly screened sets (a “quantity vs. quality” comparison).

As a first analysis step, we present time series of CONUS QPF verification scores for the 5-km WRF model runs as computed from several precipitation observation datasets. From these time series, we identify periods when the inter-dataset score variability is particularly large and examine the sources of these differences.
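The abstract does not specify which verification scores are computed; a common choice for threshold-based QPF verification is a categorical score such as the equitable threat score (ETS). The sketch below (function names and the toy data are our own, not from the study) illustrates the central point that the same forecast can earn different scores depending on which observation dataset verifies it:

```python
import numpy as np

def contingency(fcst, obs, thresh):
    """2x2 contingency counts for a precipitation threshold."""
    f = np.asarray(fcst) >= thresh
    o = np.asarray(obs) >= thresh
    hits = int(np.sum(f & o))
    false_alarms = int(np.sum(f & ~o))
    misses = int(np.sum(~f & o))
    correct_negatives = int(np.sum(~f & ~o))
    return hits, false_alarms, misses, correct_negatives

def ets(fcst, obs, thresh):
    """Equitable threat score: hits adjusted for those expected by chance."""
    a, b, c, d = contingency(fcst, obs, thresh)
    n = a + b + c + d
    a_ref = (a + b) * (a + c) / n          # chance hits
    denom = a + b + c - a_ref
    return (a - a_ref) / denom if denom else float("nan")

# Toy example: one forecast verified against two "observation" datasets
fcst = [0.0, 5.0, 10.0, 0.0]
obs_gage = [0.0, 6.0, 9.0, 0.0]   # hypothetical gage analysis
obs_radar = [0.0, 0.0, 9.0, 4.0]  # hypothetical radar estimate
print(ets(fcst, obs_gage, 1.0))   # perfect agreement at this threshold
print(ets(fcst, obs_radar, 1.0))  # same forecast, different verifying data
```

Comparing such scores side by side across verifying datasets, as a time series, is one way to expose the verification-data sensitivity the abstract describes.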
Ordinal statistics are applied to estimate the overall variability between the sets of scores.
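The abstract leaves the choice of ordinal statistic unspecified; one plausible candidate for comparing two datasets' score series is the Spearman rank correlation. The tie-free sketch below (our own illustration, not the study's method) ranks each series and applies the standard formula:

```python
def ranks(x):
    """Rank values from 1..n (assumes no ties, for simplicity)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order):
        r[i] = rank + 1.0
    return r

def spearman(x, y):
    """Spearman rank correlation (no-tie formula)."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

A value near 1 would indicate that the score series from two observation datasets rank the forecast days similarly, while low or negative values would flag periods where the choice of verification data matters most.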
Session 3, Coupled Hydrological Modeling
Wednesday, 17 January 2007, 8:30 AM-11:15 AM, 213A