20th Conference on Weather Analysis and Forecasting/16th Conference on Numerical Weather Prediction
Symposium on Forecasting the Weather and Climate of the Atmosphere and Ocean

J6.1

Toward an effective short-range ensemble forecast system

F. Anthony Eckel, University of Washington and Air Force Weather Agency, Seattle, WA; and C. F. Mass and E. P. Grimit

At the University of Washington, we designed a unique test bed well suited for exploring and advancing the utility of short-range ensemble forecasting (SREF). In this study we compared 0-48 h forecasts from the following SREF systems for a total of 129 cool-season (Nov 2002 - March 2003) forecast cases over the Pacific Northwest, using model analyses for verification:

A) 8-member poor man's ensemble (PME) of low-resolution global models--a multimodel, multianalysis system.

B) 8-member mesoscale ensemble using the Pennsylvania State University-National Center for Atmospheric Research Mesoscale Model version 5 (MM5) (36/12-km nested domains, 32 levels) and the PME's analyses as initial/boundary conditions--a single-model, multianalysis system.

C) 8-member ensemble like B but with variations to MM5--a perturbed-model, multianalysis system.

D) 17-member ensemble as an expanded version of B--a single-model, multianalysis system augmented with centroid and mirrored analyses.

Ensemble system D was an attempt to alleviate the multianalysis technique's problem of small ensemble size by generating additional initial conditions (ICs) from the PME analyses. Our conclusion is that such an effort is not productive, at least by our method of linearly combining the analyses. The expanded system did not display improved skill commensurate with the increase in ensemble size. However, a valuable outcome from ensemble system D was that the MM5 run from the centroid analysis (i.e., mean of the 8 component analyses) displayed the best overall performance among the individual ensemble members. This likely means that the centroid analysis is the best representation of synoptic-scale truth.
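The linear combinations described above (a centroid analysis as the mean of the component analyses, plus mirrored analyses reflected through that centroid) can be sketched as follows. This is a hypothetical illustration, not the authors' code; the function name `expand_ics` and the array layout are assumptions.

```python
import numpy as np

def expand_ics(analyses):
    """Generate centroid and mirrored initial conditions by linearly
    combining component analyses (hypothetical sketch of the approach
    described for ensemble system D).

    analyses: array of shape (n_members, ...) holding each component
    analysis on a common grid.
    """
    # Centroid: the mean of the component analyses.
    centroid = analyses.mean(axis=0)
    # Mirror: reflect each analysis through the centroid, doubling
    # the number of perturbed states around the ensemble mean.
    mirrors = 2.0 * centroid - analyses
    return centroid, mirrors
```

Starting from 8 PME analyses, this yields 8 original + 1 centroid + 8 mirrored states, matching the 17 members of ensemble system D.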

Comparing ensemble systems A, B, and C, we confirmed that model deficiencies (both stochastic and systematic errors) play a significant role in SREF, and we then elucidated what that role is. Stochastic errors (i.e., random model errors) are a large source of uncertainty and must be accounted for within a SREF system in order to maximize utility, particularly for mesoscale, sensible weather phenomena. Systematic errors (i.e., model biases) are clearly not part of the forecast uncertainty but are a large part of the forecast error, and they can seriously degrade ensemble performance if not corrected.

To eliminate the bulk of the systematic error, we applied a simple (but effective) grid-based, 2-week, running-mean bias correction. This correction greatly improved the skill of probability forecasts by: 1) improving reliability by adjusting the mean of the ensemble's probability density function (PDF) to match the mean of the verification's PDF, and 2) improving resolution by narrowing the ensemble's PDF where members had opposing biases. Additionally, analyzing bias-corrected results led to firmer conclusions, including the importance of accounting for stochastic model error.
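A grid-based running-mean bias correction of the kind described above can be sketched as follows. This is an illustrative reading, not the authors' implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def bias_corrected_forecast(forecast, past_forecasts, past_verifications):
    """Grid-based running-mean bias correction (hypothetical sketch).

    past_forecasts, past_verifications: arrays of shape (n_days, ny, nx)
    from a trailing training window (e.g., the previous 2 weeks, matched
    by forecast lead time). The mean forecast-minus-verification error
    at each grid point estimates the systematic bias there.
    """
    # Average error over the training window, independently per grid point.
    bias = (past_forecasts - past_verifications).mean(axis=0)
    # Subtract the estimated bias from today's forecast grid.
    return forecast - bias
```

Because the bias is estimated per grid point and per lead time from only recent cases, the correction adapts to regime changes while removing the bulk of the stationary systematic error.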

Comparing the mesoscale ensemble systems B and C, we found that inclusion of model diversity dramatically increased dispersion toward statistical consistency but still fell well short of it. With the perturbed-model approach, it is extremely difficult to capture all of the model uncertainty. However, ensemble system C did display greatly improved probabilistic forecast skill (in both reliability and resolution) over ensemble system B, because stochastic errors are a large part of the forecast error. Results varied greatly by parameter but, in general, error growth due to model uncertainty exceeds the growth due to analysis uncertainty in the first 24 h. The impact of stochastic model error is likely even more pronounced in a summer regime with weak synoptic forcing.
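Statistical consistency, as used above, means the ensemble spread matches the error of the ensemble mean; under-dispersion means the spread falls short of that error. A minimal diagnostic, assuming members and verification on a common set of points (names and layout are illustrative, not the authors' verification code):

```python
import numpy as np

def spread_skill(members, verification):
    """Compare ensemble spread with ensemble-mean error
    (hypothetical sketch of a statistical-consistency check).

    members: array of shape (n_members, n_points)
    verification: array of shape (n_points,)

    Returns (mean spread, RMSE of the ensemble mean). A statistically
    consistent ensemble has spread roughly equal to the RMSE; an
    under-dispersive one has spread well below it.
    """
    ens_mean = members.mean(axis=0)
    spread = members.std(axis=0, ddof=1).mean()
    rmse = np.sqrt(((ens_mean - verification) ** 2).mean())
    return spread, rmse
```

Tracking this ratio by forecast lead time and parameter is one simple way to quantify how far a system such as B or C falls short of statistical consistency.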

Comparing the multimodel and the perturbed-model approaches, we found that the PME exhibits greater dispersion (actually slightly over-dispersive) and superior performance on the synoptic scale compared to ensemble system C. This is largely due to the PME's more complete representation of model uncertainty. Additionally, the PME grows the large-scale errors globally, whereas a mesoscale ensemble reduces error growth by running on a limited-area domain, even with updated lateral boundaries. This problem also impacts the small-scale errors, since much of the mesoscale uncertainty is driven by synoptic-scale error growth in the mid-latitude cool season. We found that, given a highly skilled large-scale model, the dependent MM5 solution is generally inferior on the synoptic scales. A possible way to improve our mesoscale SREF is to periodically nudge each member's MM5 forecast toward the large-scale model that forced it, thus improving the large-scale dispersion of the mesoscale SREF.


Joint Session 6, Probabilistic Forecasting/Ensembles. Part I (Joint between the Symposium on Forecasting the Weather and Climate of the Atmosphere and Ocean and the 20th Conference on Weather Analysis and Forecasting/16th Conference on Numerical Weather Prediction) (ROOM 6A)
Tuesday, 13 January 2004, 11:00 AM-12:15 PM, Room 6A
