P3.13
Identifying state-dependent model error in NWP
Jonathan R. Moskaitis, MIT, Cambridge, MA; and J. Hansen, Z. Toth, and Y. Zhu
Model forecasts of complex systems such as the atmosphere lose predictive skill because of two distinct sources of error: initial conditions error and model error. While much research has been done to determine the nature and consequences of initial conditions error in operational forecast models, very little has been done to identify the sources of model error and to quantify its effects on forecasts. Here, we attempt to "disentangle" model error from initial conditions error by applying a diagnostic tool in a simple model framework to identify poor forecasts for which model error is likely responsible. The diagnostic is based on the premise that for a perfect ensemble forecast, the verification should fall outside the range of ensemble forecast states only a small percentage of the time, at a rate determined by the ensemble size. Identifying these outlier verifications and comparing the statistics of their occurrence to those of a perfect ensemble can tell us about the role of model error in a quantitative, state-dependent manner. We then apply the same diagnostic to operational NWP models to quantify the role of model error in poor forecasts. From these results, we can infer which atmospheric processes the model cannot adequately simulate.
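The premise behind the diagnostic can be illustrated with a small simulation. For a perfect N-member ensemble, the verification and the members are exchangeable draws from the same distribution, so the verification is the minimum or maximum of the N+1 values with probability 2/(N+1). The sketch below (a hypothetical illustration, not the authors' code; the Gaussian distribution and ensemble size are assumptions) checks this baseline rate empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_forecasts = 10, 100_000

# Perfect-ensemble baseline: the verification is drawn from the
# same distribution as the ensemble members.
ensembles = rng.standard_normal((n_forecasts, n_members))
verif = rng.standard_normal(n_forecasts)

# An "outlier verification" falls outside the ensemble range.
outside = (verif < ensembles.min(axis=1)) | (verif > ensembles.max(axis=1))
observed_rate = outside.mean()

# Each of the N+1 exchangeable values is equally likely to be the
# extreme, so the expected outlier rate is 2/(N+1).
expected_rate = 2.0 / (n_members + 1)
print(observed_rate, expected_rate)
```

In a real forecast system, an outlier rate persistently above this baseline (after accounting for sampling variability) signals that the ensemble is not capturing all sources of error, which is the signature the diagnostic attributes to model error.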
Poster Session 3, Wednesday Posters
Wednesday, 14 January 2004, 2:30 PM-4:00 PM, Room 4AB