Improving the accuracy of quantitative precipitation forecasting (QPF) is an important task for the meteorological community. Well-known inherent limitations of the U.S. rain gauge network, together with the limited ability of remote sensors to accurately describe an observed precipitation field, make it difficult to verify numerical weather prediction (NWP) precipitation forecasts. Nevertheless, modelers and others within the scientific community who are interested in the skill of precipitation forecasts create analyses of observed precipitation by interpolating observations, and surrogates for observations (e.g., precipitation derived from radar or satellite measurements), to a grid that can be directly matched to corresponding NWP forecasts. How the analysis is performed for both the observations and forecasts (for example, the choice of grid mesh size) plays a critical and often overlooked role in the verification result. Moreover, the measure of skill commonly employed in verification schemes is often a contingency table-based threat score (e.g., the equitable threat score), which is sometimes incorrectly assumed to be a fair measure of skill.
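The contingency-table skill measures mentioned above can be sketched as follows. This is a minimal illustration, not code from the study: given counts of hits, misses, false alarms, and correct negatives at a chosen precipitation threshold, the threat score (critical success index) and the equitable threat score (ETS) are computed with the standard formulas; the ETS subtracts the number of hits expected by chance, which is why the two scores can rank forecasts differently.

```python
def threat_score(hits, misses, false_alarms):
    """Threat score (critical success index) from 2x2 contingency-table counts."""
    return hits / (hits + misses + false_alarms)

def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """Equitable threat score: like the threat score, but with the number of
    hits expected by random chance removed from numerator and denominator."""
    total = hits + misses + false_alarms + correct_negatives
    # Hits expected by chance, given observed and forecast event frequencies.
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# Illustrative counts (hypothetical, for a single precipitation threshold):
ts = threat_score(50, 25, 25)                          # 50 / 100 = 0.5
ets = equitable_threat_score(50, 25, 25, 900)          # < 0.5, chance removed
```

Because the chance correction depends on the event frequency, which in turn depends on the analysis and grid mesh size used, the ETS for the same forecast can vary with the verification setup alone.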
This study compares results of verification performed on Rapid Update Cycle (RUC) and Eta model precipitation forecasts using four different analyses computed on different grid mesh sizes. The 24-h precipitation analyses (valid at 1200 UTC) that will be compared are: (1) the National Centers for Environmental Prediction (NCEP) 24-h analysis based on the National Weather Service's (NWS) River Forecast Center (RFC) daily gauge network, (2) the Climate Prediction Center's (CPC) 24-h analysis, which is similar to the NCEP analysis but employs different quality control and analysis techniques, (3) a 24-h analysis derived from the NCEP hourly stage IV multisensor analysis, and (4) a stage IV hourly gauge-only analysis. The interpretation of the skill of a forecast will be shown to be a function of the choice of analysis, skill score, and grid resolution.