The Early Years - Statistical Interpretation Systems, Verification, and Computer-Worded Forecasts (Keynote Presentation)

Tuesday, 6 January 2015: 9:15 AM
211A West Building (Phoenix Convention Center - West and North Buildings)
J. Paul Dallavalle, Retired, Davidsonville, MD

The 1960s and 1970s were a time of great change in meteorology. As knowledge of the atmosphere increased and observing systems improved, growing computer power made accurate objective weather forecasting a realistic possibility. Simple dynamical weather prediction models like the barotropic model were replaced by baroclinic models with sophisticated treatment of physical and thermodynamic processes. In the National Weather Service (NWS), a group of dedicated meteorologists led efforts to make weather forecasting an accurate, reliable service. The National Meteorological Center or NMC (later reorganized as the National Centers for Environmental Prediction) was responsible for the development and implementation of dynamical weather prediction models. A single model run, however, provided little information about the forecast uncertainty. While the model might predict measurable precipitation, the human forecaster had no indication of the confidence of that forecast. In addition, the early baroclinic models contained only rudimentary physics describing planetary boundary layer processes. Forecasts of air temperature or dew point at the observation shelter height were either unavailable or very inaccurate. Human forecasters subjectively interpreted the model forecasts, but an objective interpretation of model output seemed essential.

In 1964, the Techniques Development Laboratory or TDL (later reorganized as the Meteorological Development Laboratory) was created. TDL's mission was to develop techniques that turned dynamical model output into objective guidance useful to NWS forecasters. Two scientists in TDL were primarily responsible for bringing statistical methods of interpreting model output to the NWS. Bill Klein was the Director of TDL from 1964 until 1976. Bob Glahn, with TDL from its inception, became Director of TDL in 1976 and served in that capacity until 2012. Reviewing TDL's first 25 years, Bob wrote: “The age of computers freed researchers from depending for development and implementation upon tedious manual calculations …. With a large mainframe computer at NMC …, researchers could now think not only about multiple regression with many variables and large data samples for development, but also about distributing the results of such research to the field organization on a scheduled basis.”

Many of the statistical techniques that Bob Glahn imported for testing at TDL were developed during the 1950s and early 1960s at the Travelers Research Corporation. Various approaches were tried at TDL and found wanting for reasons of accuracy or complexity. Requirements for specific guidance influenced decisions. If a probabilistic product such as the probability of measurable precipitation was needed, logistic analysis could produce the requisite statistical relationships. If a categorical forecast like cloud cover was required, discriminant analysis could be used. For prediction of a continuous variable like temperature, multiple linear regression was appropriate. After experimentation, TDL chose multiple linear regression with numerous enhancements as a basis for a statistical interpretation system. The common approach, however, of collecting a small sample of data, punching the data on computer cards, and then writing software to analyze the data was both inefficient and error-prone. Bob Glahn ensured that digital databases were established and quality-controlled, software was written in a systematic and documented fashion, and guidance products were developed and improved within a formal statistical analysis framework.
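As an illustration of the regression-based approach TDL settled on, the sketch below fits a multiple linear regression relating a continuous predictand (shelter-height temperature) to two model-derived predictors. The predictors, coefficients, and data are synthetic stand-ins, not TDL's actual equations.

```python
import numpy as np

# Hypothetical MOS-style temperature equation via multiple linear regression.
# Predictors (model 850-hPa temperature and 1000-500-hPa thickness) and the
# observed 2-m temperature are all synthetic for illustration only.
rng = np.random.default_rng(0)
n = 200
t850 = rng.normal(0.0, 5.0, n)                              # 850-hPa temp (C)
thick = 5400.0 + 10.0 * t850 + rng.normal(0.0, 20.0, n)     # thickness (m)

# Assume the "true" station temperature depends linearly on both predictors,
# plus 1 C of observational noise.
obs = 2.0 + 1.1 * t850 + 0.05 * (thick - 5400.0) + rng.normal(0.0, 1.0, n)

# Least-squares fit: obs ~ b0 + b1*t850 + b2*thick
X = np.column_stack([np.ones(n), t850, thick])
coeffs, *_ = np.linalg.lstsq(X, obs, rcond=None)
pred = X @ coeffs

# With the correct functional form, the fit error approaches the noise level.
rmse = float(np.sqrt(np.mean((pred - obs) ** 2)))
print(f"regression RMSE: {rmse:.2f} C")
```

In operational practice the same machinery was applied station by station, with careful predictor screening; this toy fit only shows the mechanics of deriving one such equation.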

A second decision was critical early in Bob Glahn's career. Two approaches were possible in the objective interpretation of model output. In the “perfect prog” method, specification equations that related a meteorological variable like maximum temperature to observed or analyzed atmospheric conditions like upper-air heights or temperatures were developed. These equations were then applied to forecast output from a dynamical model. In the second approach, equations that related a meteorological variable to predicted variables from a dynamical model were developed. These equations were then applied to forecast output from the same or nearly the same dynamical model. Extensive testing showed that this latter approach, eventually known as Model Output Statistics or MOS, was superior to the perfect prog method. In time, an extensive suite of MOS guidance products was implemented.
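The distinction between the two approaches can be sketched with a toy example (all data synthetic): a perfect-prog equation fit to analyses inherits the model's systematic error when applied to model output, while a MOS equation fit directly to model forecasts absorbs that error into its coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
analysis = rng.normal(0.0, 5.0, n)                    # analyzed predictor
obs = 1.5 + 0.9 * analysis + rng.normal(0.0, 1.0, n)  # observed predictand

# The dynamical model forecast of the same predictor: biased and noisy.
model = analysis + 2.0 + rng.normal(0.0, 1.5, n)

# Perfect prog: regress obs on the *analysis*, then apply to model output.
A = np.column_stack([np.ones(n), analysis])
pp, *_ = np.linalg.lstsq(A, obs, rcond=None)
pp_fcst = pp[0] + pp[1] * model          # model bias passes straight through

# MOS: regress obs directly on the *model forecast*; the fit absorbs the
# model's bias and damps its noise.
M = np.column_stack([np.ones(n), model])
mos, *_ = np.linalg.lstsq(M, obs, rcond=None)
mos_fcst = mos[0] + mos[1] * model

def rmse(fcst):
    return float(np.sqrt(np.mean((fcst - obs) ** 2)))

print(f"perfect prog RMSE: {rmse(pp_fcst):.2f}")
print(f"MOS RMSE:          {rmse(mos_fcst):.2f}")  # MOS RMSE is lower
```

The flip side, which made perfect prog attractive in its day, is that perfect-prog equations are model-independent, whereas MOS equations must be redeveloped when the underlying model changes; this toy example shows only the accuracy trade-off.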

The changes to the forecast process created by implementation of dynamical models and MOS posed an existential threat to the human forecaster. If TDL could produce statistical forecasts of weather elements based on the dynamical models, did the human add value to the final product? Could the NWS issue a public or aviation forecast based solely on statistical interpretation of the models? Bob Glahn considered these issues as early as the late 1960s. Under his leadership, TDL developed early versions of computer-worded forecasts and supported a verification system that collected local NWS public and aviation weather forecasts. Verification showed that the skill of the local forecasts improved as the dynamical models and MOS improved. Moreover, the local forecaster added value, particularly at the shorter-range projections.

Bob Glahn devoted over 50 years to public service. While this is extraordinary in terms of longevity, consideration of the obstacles faced by Bob reveals the magnitude of the achievement. Particularly during the early years of MOS, computer resources were scarce. Development was slow, and implementation of new products faced substantial hurdles. Major scientific differences about the direction of weather forecasting existed among the dynamical modelers, the statistical developers, and the forecasters. New statistical guidance products were often greeted with skepticism. In this environment, a leader needed vision, persistence, discipline, and organizational skills to succeed.

In this talk, we consider some of the events important in Bob Glahn's career and the development of MOS. We focus on the years between 1968, when the first MOS product was implemented, and 1988, when development of MOS guidance based on the Nested Grid Model began. We look at the replacement of perfect prog by MOS and the subsequent implementation of MOS guidance products. Consideration of the computer-worded forecast, a short-range update scheme known as LAMP, and some of the results of the national forecast verification system shows that today's NWS forecast process had its roots in decisions made nearly four decades ago.