An accurate diagnosis of rainfall forecast errors requires a validation scheme that reliably measures the performance of the forecast system. However, no standard technique has been developed to validate rainfall forecasts from tropical cyclones. Conventional measures of precipitation forecast skill, such as skill scores, are difficult to interpret in the context of tropical cyclones because the location and magnitude of the rain depend strongly on the forecast track of the storm, and because rain gauge data and model output differ in their spatial and temporal sampling. Therefore, a key task in improving rainfall forecasts is to develop validation schemes for tropical cyclone rainfall that provide a baseline measure of forecast skill independent of track error and sampling issues. This presentation introduces a new technique that addresses these issues. The validation scheme will be demonstrated by comparing rainfall forecasts of Hurricane Isabel (2003) from the operational GFDL, GFS, and Eta models and the benchmark Rainfall CLIPER product against observed rain fields provided by the National Precipitation Validation Unit (NPVU) dataset. The validation scheme presented here will enable quantitative comparisons of rainfall from all tropical cyclones covered by the NPVU database (back to 1997).
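As background on the conventional measures referenced above, one widely used categorical precipitation skill measure is the equitable threat score (ETS), which scores rain/no-rain hits relative to chance at a chosen accumulation threshold. The sketch below is purely illustrative of such a measure; the function name and the 25 mm threshold are assumptions for the example and are not drawn from the abstract, which does not specify its validation formula.

```python
import numpy as np

def equitable_threat_score(forecast, observed, threshold=25.0):
    """Categorical skill for rainfall accumulations exceeding `threshold`.

    ETS = (hits - hits_random) / (hits + misses + false_alarms - hits_random),
    where hits_random is the number of hits expected by chance. ETS = 1 is a
    perfect forecast; ETS <= 0 indicates no skill over random placement.
    """
    f = np.asarray(forecast) >= threshold   # forecast exceeds threshold
    o = np.asarray(observed) >= threshold   # observation exceeds threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    n = f.size
    hits_random = (hits + misses) * (hits + false_alarms) / n
    denom = hits + misses + false_alarms - hits_random
    return float((hits - hits_random) / denom) if denom != 0 else float("nan")
```

Because the binary events are defined on fixed grid points, a small displacement of a correctly shaped rain swath (i.e., a track error) converts hits into simultaneous misses and false alarms, which is exactly the interpretation problem the abstract describes.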