Most verification of quantitative precipitation forecasts (QPFs) is done by comparing forecast and observed rainfall occurrence and intensity for a collection of points matched in space and/or time. Typical examples are time series of rain forecasts and observations at selected stations, or regional- or national-scale comparisons of gridded forecasts and rain analyses. An alternative approach evaluates forecast rain events, where a rain event is defined as a contiguous area of accumulated rain, such as in a daily rainfall analysis or a gridded model QPF. Verification of rain events involves determining the position error of the forecast, the differences between the forecast and observed mean and maximum rain rates and rain volume, and the correlation between the position-corrected forecast and the observations.
The position error can be determined by minimizing the squared error of the forecast as it is translated over the observations and measuring the vector displacement of the shifted forecast. If the forecast and observed rain fields are completely enclosed within the verification domain, the total error can be decomposed into contributions from displacement, intensity, and pattern errors. This methodology is useful both for diagnosing errors in case studies and for characterizing systematic errors over many events.
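The shift-to-minimize-squared-error step above can be sketched as a brute-force search over integer grid translations. This is a minimal illustration assuming gridded numpy arrays; the function name `best_shift`, the search radius, and the wrap-around behaviour of `np.roll` are illustrative choices, not the operational algorithm.

```python
import numpy as np

def best_shift(forecast, observed, max_shift=5):
    """Translate the forecast over the observations and return the
    (dy, dx) grid displacement that minimizes the mean squared error,
    together with that minimum MSE.

    Shifts are limited to +/- max_shift grid points. np.roll wraps at
    the domain edges, which is tolerable when both rain fields are
    well inside the verification domain (as the decomposition assumes).
    """
    best = (0, 0)
    best_mse = np.mean((forecast - observed) ** 2)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(forecast, (dy, dx), axis=(0, 1))
            mse = np.mean((shifted - observed) ** 2)
            if mse < best_mse:
                best_mse = mse
                best = (dy, dx)
    return best, best_mse
```

The returned displacement vector is the position error; applying it to the forecast gives the position-corrected field used for the intensity and pattern comparisons.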
From a public safety perspective, the two most important elements of a rain event forecast are its location and maximum rain rate. Using the verification results for many events and specifying some appropriate criteria for good forecasts, a 3x2 categorical contingency table can be constructed as follows:
| Displacement of forecast rain pattern | Forecast max rain rate: Too Light | Approx. Correct | Too Heavy |
|---|---|---|---|
| Close | Underestimates | Hits | Overestimates |
| Far | Missed Events | Missed Locations | False Alarms |
The hit rate measures the success of the rain event forecasts, while the other categories describe the nature of the errors.
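The classification into the six categories, and the hit rate computed from it, can be sketched as follows. The thresholds `close_km` and `ratio_tol`, and the function names, are hypothetical stand-ins for whatever "appropriate criteria for good forecasts" are chosen in practice.

```python
def classify_event(displacement_km, fcst_max, obs_max,
                   close_km=100.0, ratio_tol=0.5):
    """Place one rain-event forecast into the 3x2 contingency table.

    close_km and ratio_tol are illustrative criteria: a forecast is
    'close' if its displacement is within close_km, and its maximum
    rain rate is 'approx. correct' if the forecast/observed ratio
    lies within 1 +/- ratio_tol.
    """
    ratio = fcst_max / obs_max
    if ratio < 1.0 - ratio_tol:
        column = "too light"
    elif ratio > 1.0 + ratio_tol:
        column = "too heavy"
    else:
        column = "approx. correct"

    if displacement_km <= close_km:
        return {"too light": "underestimate",
                "approx. correct": "hit",
                "too heavy": "overestimate"}[column]
    return {"too light": "missed event",
            "approx. correct": "missed location",
            "too heavy": "false alarm"}[column]

def hit_rate(categories):
    """Fraction of events classified as hits."""
    return sum(c == "hit" for c in categories) / len(categories)
```

Accumulating `classify_event` results over many events fills the table, and `hit_rate` summarizes forecast success while the remaining categories describe the nature of the errors.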