Tuesday, 25 January 2011: 2:45 PM
613/614 (Washington State Convention Center)
Michael J. Erickson, SUNY/Stony Brook University, Stony Brook, NY; and B. A. Colle, J. Pollina, and J. J. Charney
Given the inherent biases and underdispersion in many ensemble systems, post-processing is an important step. Post-processing via Bayesian Model Averaging (BMA) has been shown to improve reliability and reduce underdispersion while quantifying uncertainty in the model forecast, but it has not been extensively tested for a multi-model ensemble over the Northeast U.S. BMA requires both an effective a priori bias correction scheme and a representative calibration period of similar days in order to perform well. Previous bias correction and BMA studies calibrate their statistics using the most recent consecutive days (traditional training) and have not analyzed or compared the impact of calibrating on days similar to those being verified (i.e., conditional training). Additionally, most studies may not obtain optimal BMA estimates because they rely on a spatially invariant linear bias correction, which does not effectively remove bias at all locations and thresholds. Using fire threat days over the Northeast U.S., this talk focuses on the sensitivity of BMA performance to different bias correction techniques for 2-m temperature, 2-m mixing ratio, and 10-m wind speed from 2007-2009. In addition, differences between conditional and traditional training are explored for both the bias correction and BMA.
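For context, BMA in its standard formulation (e.g., Raftery et al. 2005) builds a predictive distribution as a weighted sum of kernels centered on the bias-corrected member forecasts; the specific kernel and weight-estimation choices used in this study may differ from this generic form:

p(y \mid f_1, \ldots, f_K) = \sum_{k=1}^{K} w_k \, g_k(y \mid f_k)

where f_k is the bias-corrected forecast from ensemble member k, w_k is that member's BMA weight (the weights are nonnegative, sum to one, and are typically estimated by maximum likelihood over the training period), and g_k is the member's predictive density (e.g., a normal kernel for temperature).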
The ensemble for this study comprises the 21-member Short Range Ensemble Forecast (SREF; 32- to 45-km grid spacing) run at the National Centers for Environmental Prediction (NCEP) as well as the 13-member Stony Brook University (SBU) ensemble (12-km grid spacing), which uses the Weather Research and Forecasting (WRF-ARW) model and the Penn State-NCAR Mesoscale Model (MM5). The members differ in their initial conditions and physical parameterizations (convective, boundary layer, and microphysics schemes). A spatially dependent cumulative distribution function (CDF) bias correction is compared to the linear regression bias correction used in most previous BMA studies. The CDF bias correction adjusts the model CDF toward the observed CDF separately for each unique land surface type.
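As a rough illustration of how a CDF (quantile-mapping) bias correction works, the minimal Python sketch below maps each new forecast value through the training-period model CDF and then through the inverse of the observed CDF. The function and variable names, and the grouping loop over land surface types, are illustrative assumptions rather than the authors' code:

import numpy as np

def cdf_bias_correct(model_train, obs_train, model_new):
    """Quantile-mapping (CDF) bias correction sketch.

    Maps each new model value through the training-period model CDF,
    then through the inverse of the observed CDF, so corrected forecasts
    follow the observed distribution at every threshold.
    """
    probs = np.linspace(0.01, 0.99, 99)
    model_q = np.quantile(model_train, probs)   # empirical model quantiles
    obs_q = np.quantile(obs_train, probs)       # empirical observed quantiles

    # F_model(x): non-exceedance probability of each new forecast value.
    p = np.interp(model_new, model_q, probs)
    # F_obs^{-1}(p): corrected value drawn from the observed distribution.
    return np.interp(p, probs, obs_q)

# Hypothetical per-land-surface-type application, mirroring the idea that
# the correction is fit separately for each unique land surface type:
# for lst in np.unique(land_surface_type):
#     mask = land_surface_type == lst
#     corrected[mask] = cdf_bias_correct(model_train[mask],
#                                        obs_train[mask],
#                                        model_new[mask])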
As will be discussed, the CDF bias correction removes model bias at all thresholds and locations more effectively than the linear regression. When verifying on fire threat days, results improve when the bias correction and BMA are trained conditionally on fire threat days rather than with traditional training, a consequence of model biases differing under different synoptic conditions. Anomalies in the synoptic pattern during fire weather events are also explored.