18th Conference on Weather Analysis and Forecasting and the 14th Conference on Numerical Weather Prediction

P2.18

Toward a surface data time continuum: use of the Kalman Filter to Create a Continuous, Quality Controlled Surface Data Set (Formerly Paper Number 6.3)

John A. McGinley, NOAA/FSL, Boulder, CO

The demand for weather information at precise times and locations is rendering the synoptic hourly (or longer) product cycle obsolete. There is increasing demand for weather information at locations and times that match neither the resolution of observing networks nor the standard cycle of available weather products. Operational weather forecasting offices are uncovering a wealth of supplemental surface observations sponsored by state government departments, agricultural networks, industry, and others. These data often have high spatial density and time frequency, and can enhance local nowcasting and short-range forecasting if they can be acquired, quality checked, and integrated into office procedures. A previous series of papers discussed methods for handling this large flow of data to meet standard product times.

The focus of the current paper is to consider the data space in a time-continuous mode, allowing analysis products to be developed on sub-hourly cycles without concern for a fluctuating station count. This paper describes the extension of a quality control scheme based on the Kalman filter, described in a previous preprint paper. The Kalman filter approach applies a statistically based prediction model customized for each measured variable at each observation site within a domain. The Kalman scheme uses a linear model, allowing optimal extrapolation of all observations to any given time. The resulting data set does not depend on a uniform observational frequency, timely or reliable communication links, or built-in data latency for certain kinds of platforms. This ensures full continuity in station count and data quality. Product generation can then be done using inexpensive analysis schemes based on successive corrections or the like. Similar capabilities might be possible with a complex OI scheme or 3-D or 4-D Var, but the advantage of the current approach is that it operates in data space, with a two-order-of-magnitude saving in computation and storage.
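The per-station extrapolation described above can be illustrated with a minimal scalar Kalman filter. The function below is a hypothetical sketch, not the paper's implementation: it assumes a simple persistence (identity) prediction model with process noise variance `q` and measurement noise variance `r`, and updates a single variable's estimate and error variance whenever an observation arrives.

```python
def kalman_step(x_est, p_est, z, q, r):
    """One predict/update cycle of a scalar Kalman filter.

    x_est, p_est : prior state estimate and its error variance
    z            : new observation (arrives at an arbitrary time)
    q, r         : process and measurement noise variances (assumed known)
    """
    # Predict: persistence model (identity transition), inflate uncertainty
    x_pred = x_est
    p_pred = p_est + q
    # Update: blend prediction and observation via the Kalman gain
    k = p_pred / (p_pred + r)        # gain -> 1 when prediction is poor
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

Between observations, only the predict step runs, so an estimate (with a growing error variance) is available at any sub-hourly time regardless of when the last report arrived.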

The linear Kalman model is based on station self-trends, neighboring multi-station trends, background model trends, and diurnal climatology. The real-time evaluated performance of the linear model is integral to setting the Kalman gain, which may be considered a measure of the model's skill in providing time-extrapolated values. Because the Kalman gain is dynamic, the model evolves in time: from each datum received, no matter when it arrives, it learns which model components are producing the best estimates for any time of day and sets the weights accordingly.
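As a rough illustration of skill-based weighting, the hypothetical function below blends the four predictor components (self-trend, neighbor trend, background model, diurnal climatology), weighting each inversely by its recent mean squared error. This heuristic is only a stand-in for the paper's scheme, where the weights emerge from the dynamically updated Kalman gain.

```python
import numpy as np

def blended_estimate(component_preds, recent_mse, eps=1e-6):
    """Combine component predictions for one station variable.

    component_preds : predictions from each model component, e.g.
                      [self_trend, neighbor_trend, background, climatology]
    recent_mse      : recent mean squared error of each component
                      (lower error -> larger weight)
    """
    w = 1.0 / (np.asarray(recent_mse, dtype=float) + eps)
    w = w / w.sum()                      # normalize weights to sum to 1
    return float(np.dot(w, component_preds))
```

A component whose recent errors grow (e.g. a stale background field) is automatically down-weighted as new data arrive, mirroring the learning behavior described above.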

The approach offers a methodology to sustain data densities in environments where observations come at varying times, are delayed by poor communications, or are missing altogether. In presenting this paper we will discuss the idea of a data continuum and present some applications.

Extended Abstract (420K)

Poster Session 2, Poster Session - Numerical Data Assimilation or Analysis: Case Studies and Validation—with Coffee Break
Tuesday, 31 July 2001, 2:30 PM-4:00 PM
