Impact Assessments of Adding Errors to Simulated Radiance Data in Observing System Simulation Experiments

Wednesday, 5 February 2014
Hall C3 (The Georgia World Congress Center)
Sean P. F. Casey, JCSDA/Earth System Science Interdisciplinary Center, College Park, MD; and L. P. Riishojgaard, M. Masutani, T. Zhu, J. S. Woollen, R. Atlas, Z. Li, and T. J. Schmit

Observing System Simulation Experiments (OSSEs) allow for assessment of new or relocated instruments and their impacts on numerical weather prediction. However, there are questions about how representative simulated radiances can be, and how this will affect conclusions on whether or not to build new instruments. This study focuses on the introduction of random errors to simulated radiances and on how analyses and forecasts differ based on the presence or absence of those errors. Two control radiance datasets are used, one with and one without added random error. These are combined with two test datasets simulating an Atmospheric Infrared Sounder (AIRS) instrument in geostationary orbit, one with and one without random error, for a total of four experiments. Optimal error weights and gross error check values are determined to give the best fit to an analysis using the Global Forecast System (GFS) 2012 operational model developed by the National Centers for Environmental Prediction (NCEP). Differences in optimal weights between instruments and experiments will be discussed, as well as the reduction of instrument penalty scores during the assimilation process. Two-month forecasting experiments will then be run to test the impact of the geostationary AIRS instrument on the forecast, and differences among the experiments will be discussed with an eye toward understanding the effects of random error on the forecast impact.
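The 2×2 experimental design described above (control and test radiances, each with and without added random error) can be sketched as follows. This is a minimal illustration, not the authors' code: the Gaussian error model, the noise level `sigma`, and the placeholder brightness temperatures are all assumptions for demonstration.

```python
import numpy as np

def add_random_error(radiances, sigma, seed=0):
    """Perturb simulated brightness temperatures (K) with zero-mean
    Gaussian noise; sigma is an assumed per-channel error std dev (K)."""
    rng = np.random.default_rng(seed)
    return radiances + rng.normal(0.0, sigma, size=radiances.shape)

# Hypothetical clean simulated radiances (brightness temperatures, K)
clean_control = np.full(1000, 250.0)   # control instrument suite
clean_geo_airs = np.full(1000, 255.0)  # geostationary AIRS test dataset

sigma = 0.5  # assumed noise magnitude, K

# The four experiments: every combination of clean/noisy control and test data
experiments = {
    "control_clean/test_clean": (clean_control, clean_geo_airs),
    "control_clean/test_noisy": (clean_control,
                                 add_random_error(clean_geo_airs, sigma, seed=1)),
    "control_noisy/test_clean": (add_random_error(clean_control, sigma, seed=2),
                                 clean_geo_airs),
    "control_noisy/test_noisy": (add_random_error(clean_control, sigma, seed=3),
                                 add_random_error(clean_geo_airs, sigma, seed=4)),
}

for name, (ctrl, test) in experiments.items():
    print(f"{name}: control std={ctrl.std():.2f} K, test std={test.std():.2f} K")
```

Each experiment pair would then be assimilated separately, so that differences in analyses and forecasts can be attributed to the presence or absence of the added error.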