92nd American Meteorological Society Annual Meeting (January 22-26, 2012)

Tuesday, 24 January 2012: 11:00 AM
Verification and Calibration of Probabilistic Precipitation Forecasts Derived From Neighborhood and Object Based Methods for a Multi-Model Convection-Allowing Ensemble
Room 238 (New Orleans Convention Center)
Aaron Johnson, University of Oklahoma, Norman, OK; and X. Wang
Manuscript (603.7 kB)

Probabilistic forecasts derived from the storm-scale ensemble produced at the Center for Analysis and Prediction of Storms for the 2009 NOAA Hazardous Weather Testbed Spring Experiment are verified and calibrated with various methods.

Different users may be interested in different aspects of the ensemble forecasts. Therefore, probabilistic forecasts are derived with both neighborhood and object-based methods. The neighborhood method forecasts the probability of exceeding a precipitation accumulation threshold at each grid point. The object-based method forecasts the probability that an object, representing a particular storm, will occur. The Brier skill score relative to the observed sample frequency shows skill that depends on forecast lead time, length of the accumulation period, and accumulation threshold. Neighborhood forecasts have zero or negative skill for some lead time/threshold combinations, and object-based forecasts have negative skill at all lead times.
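The two quantities above can be sketched in a few lines of NumPy. This is a minimal illustration only: the abstract does not specify the neighborhood shape or size, so the square neighborhood of half-width `radius` and the pooling over ensemble members are assumptions, not the authors' exact implementation.

```python
import numpy as np

def neighborhood_probability(ens_precip, threshold, radius):
    """Probability of exceeding `threshold`, estimated as the fraction of
    (member, neighborhood grid point) pairs that exceed it.
    `ens_precip` has shape (n_members, ny, nx).
    NOTE: square neighborhood of half-width `radius` is an assumption."""
    exceed = (ens_precip >= threshold).astype(float)
    _, ny, nx = exceed.shape
    prob = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - radius), min(ny, j + radius + 1)
            i0, i1 = max(0, i - radius), min(nx, i + radius + 1)
            prob[j, i] = exceed[:, j0:j1, i0:i1].mean()
    return prob

def brier_skill_score(forecast_prob, observed):
    """Brier skill score relative to a climatological reference given by
    the observed sample frequency, as described in the abstract."""
    bs = np.mean((forecast_prob - observed) ** 2)
    clim = observed.mean()
    bs_ref = np.mean((clim - observed) ** 2)
    return 1.0 - bs / bs_ref
```

A BSS of 1 indicates a perfect probabilistic forecast, 0 matches the observed-frequency reference, and negative values (as found here for the uncalibrated object-based forecasts) mean the forecast is less skillful than that reference.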

Various calibration methods, including reliability-based methods, logistic regression, and member-by-member bias removal, were used to calibrate both types of probabilistic forecasts. Both types of forecasts improve after calibration, attaining positive skill relative to the reference forecast. The choice of calibration method has a relatively small impact on skill.
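As one example of the calibration step, logistic-regression calibration can be sketched as fitting p_cal = sigmoid(a + b * p_raw) against the observed binary outcomes on a training sample. This is a generic stand-in, not the authors' method: the choice of predictor (the raw ensemble probability alone), the gradient-descent fit, and the training setup are all assumptions for illustration.

```python
import numpy as np

def fit_logistic_calibration(raw_prob, observed, lr=0.5, n_iter=5000):
    """Fit p_cal = sigmoid(a + b * raw_prob) by gradient descent on the
    negative log-likelihood. `raw_prob` are uncalibrated forecast
    probabilities, `observed` the matching binary outcomes (0/1).
    NOTE: single-predictor setup is an assumption for illustration."""
    a, b = 0.0, 1.0
    for _ in range(n_iter):
        z = a + b * raw_prob
        p = 1.0 / (1.0 + np.exp(-z))
        grad = p - observed  # derivative of neg. log-likelihood w.r.t. z
        a -= lr * grad.mean()
        b -= lr * (grad * raw_prob).mean()
    return a, b

def apply_logistic_calibration(raw_prob, a, b):
    """Map raw probabilities to calibrated probabilities."""
    return 1.0 / (1.0 + np.exp(-(a + b * raw_prob)))
```

A useful property of this fit is that, at the optimum, the mean calibrated probability matches the observed frequency of the training sample, which is exactly the reliability that calibration is meant to restore.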

The same verification and calibration methods were applied to two single-model sub-ensembles: one containing only ARW members and one containing only NMM members. Their uncalibrated and calibrated skill is then compared with that of a multi-model sub-ensemble mixing ARW and NMM members. Before calibration, the single-model sub-ensembles have significantly different skill, and the multi-model sub-ensemble falls between them for both types of probabilistic forecasts. After calibration, the differences in skill among the sub-ensembles are reduced. The multi-model sub-ensemble is more skillful than either single-model sub-ensemble beyond the 24-27 hour lead time for all thresholds and forecasting methods.
