92nd American Meteorological Society Annual Meeting (January 22-26, 2012)

Thursday, 26 January 2012: 1:45 PM
Determining the Value of Additional Models in Multi-Model Ensemble Prediction
Room 238 (New Orleans Convention Center)
Dan C. Collins, NOAA Climate Prediction Center, Camp Springs, MD; and D. A. Unger

Probabilistic precipitation forecasts are derived from the NCEP Climate Forecast System version 2 (CFSv2) 45-day reforecasts, which cover the period 1999 to 2011, using a calibration methodology developed at the NOAA Climate Prediction Center for the North American Ensemble Forecast System (NAEFS) multi-model ensemble. In an initial step, seasonally and lead-dependent probability distributions derived from the CFSv2 reforecast data are calibrated against observational analysis data on a similar grid by altering the forecast probability density function (PDF) to better match observations. Next, the bias-corrected model precipitation amounts are used in an ensemble regression (Unger et al., 2009) to determine the error variance of the best ensemble member forecast. Once an ensemble model PDF has been generated, model skill is assessed by calculating the probability assigned to the observed value: the best-member error variance determines a window of integration of the ensemble PDF over which the probability of the forecast given the observation is calculated. This likelihood is averaged over time, and, using Bayes' theorem, an estimate of the probability of an observation given the forecast can be generated.

An ensemble of model forecasts can be constructed from model simulations at multiple initialization times, where ensemble members with longer lead times would be expected to have lower skill. Using this framework, the increase in skill from additional models, measured by the probability of the observation given the forecasts, is compared to the skill of the prior ensemble forecast in a kind of hierarchical procedure. A multi-model ensemble system is built by first using observed climatology as the prior and then adding each model individually. If a model does not improve skill, it is discarded; if no model improves on climatology, climatology is used as the forecast.
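The windowed-integration skill measure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the ensemble PDF is a mixture of Gaussian kernels centred on the bias-corrected members, each with the best-member error standard deviation, and the `half_width_factor` parameter controlling the integration window is a hypothetical tuning choice not specified in the abstract.

```python
import math

def gauss_cdf(x, mu, sigma):
    """Cumulative distribution function of a Gaussian with mean mu, std sigma."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def window_likelihood(members, obs, best_member_sigma, half_width_factor=0.5):
    """Probability the ensemble PDF assigns to a window around the observed
    value. The PDF is a mixture of Gaussian kernels centred on the members,
    each with the best-member error standard deviation; the window half-width
    is taken proportional to that same standard deviation (an assumption made
    here for illustration)."""
    half = half_width_factor * best_member_sigma
    lo, hi = obs - half, obs + half
    # Average the window probability over the equally weighted kernels.
    return sum(gauss_cdf(hi, m, best_member_sigma) -
               gauss_cdf(lo, m, best_member_sigma)
               for m in members) / len(members)
```

Averaging `window_likelihood` over many forecast/observation pairs gives the time-mean likelihood that feeds the Bayesian comparison in the next step.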
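The hierarchical screening procedure might be sketched as a greedy loop: start from climatology as the prior, pool each candidate model's members into the ensemble, and keep the model only if the time-averaged likelihood of the observations improves. The Gaussian-kernel likelihood, the pooling-by-concatenation choice, and all function names below are illustrative assumptions, not the abstract's exact method.

```python
import math

def kernel_likelihood(members, obs, sigma):
    """Likelihood of the observation under a Gaussian-kernel ensemble PDF."""
    return sum(math.exp(-0.5 * ((obs - m) / sigma) ** 2) /
               (sigma * math.sqrt(2.0 * math.pi)) for m in members) / len(members)

def avg_likelihood(ensembles, obs_series, sigma):
    """Time-averaged likelihood of the observations given pooled members."""
    return sum(kernel_likelihood(mem, obs, sigma)
               for mem, obs in zip(ensembles, obs_series)) / len(obs_series)

def build_multimodel(models, clim_members, obs_series, sigma=1.0):
    """Greedy hierarchical screening: start from climatology as the prior
    forecast, then add each model in turn, keeping it only if the pooled
    ensemble's time-averaged likelihood of the observations improves.
    `models` maps a model name to a per-time list of member forecasts
    (a hypothetical interface chosen for this sketch)."""
    selected = []
    pooled = [list(clim_members[t]) for t in range(len(obs_series))]
    best = avg_likelihood(pooled, obs_series, sigma)
    for name, member_series in models.items():
        trial = [pooled[t] + list(member_series[t])
                 for t in range(len(obs_series))]
        score = avg_likelihood(trial, obs_series, sigma)
        if score > best:           # keep the model only if skill improves
            selected.append(name)
            pooled, best = trial, score
    return selected, best
```

If no candidate raises the score, `selected` stays empty and the climatological prior remains the forecast, mirroring the fallback described above.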
Consolidation of multiple lagged ensembles, as well as other ensemble modeling systems such as the NCEP Global Ensemble Forecast System, may also be considered.