S75
Evaluating Numerical Predictions of Meteorological Features

Sunday, 23 January 2011
Kathleen Quardokus, Purdue University, West Lafayette, IN; and D. G. Burgin, J. A. Crespo, E. R. Fernandes, A. D. Hendricks, S. M. Hinkle, K. A. Hudson, R. T. Knutson, Z. L. Muchow, M. C. Sholty, E. L. Waterman, Z. T. Zobel, and M. E. Baldwin

Weather forecasters often use a “feature-specific” approach when warning about specific meteorological phenomena, such as hurricanes or severe thunderstorms. This approach involves identifying, characterizing, classifying, and tracking well-defined weather systems of interest, either in forecast guidance or in observational data. Researchers have recently proposed developing feature-specific prediction products that automatically identify features in numerical weather prediction output and provide information about the characteristics of those features to the forecasters who use that output in their forecasting process. While today's high-resolution operational and research numerical weather prediction models can provide valuable forecast information, they also contribute substantially to the volume of data that the forecaster needs to interpret. By identifying and characterizing predicted meteorological “features” of interest, forecasters can quickly obtain guidance on the most relevant events during the forecast period, improving their efficiency.
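The identification step described above can be sketched in a minimal form: threshold a two-dimensional model field (e.g., simulated reflectivity or precipitation) and group contiguous grid cells exceeding the threshold into discrete features, each summarized by a few characteristics. This is an illustrative assumption about how such automation might look, not the procedure used in the study; the function name, threshold convention, and 4-connectivity rule are all hypothetical choices.

```python
from collections import deque

def identify_features(field, threshold):
    """Label 4-connected regions of a 2D grid (list of lists) whose
    values meet or exceed `threshold`; return one dict per feature
    with its cell count and centroid (row, column)."""
    rows, cols = len(field), len(field[0])
    seen = [[False] * cols for _ in range(rows)]
    features = []
    for i in range(rows):
        for j in range(cols):
            if field[i][j] >= threshold and not seen[i][j]:
                # Breadth-first search over the connected region.
                cells, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:
                    r, c = queue.popleft()
                    cells.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and field[nr][nc] >= threshold
                                and not seen[nr][nc]):
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                n = len(cells)
                features.append({
                    "size": n,
                    "centroid": (sum(r for r, _ in cells) / n,
                                 sum(c for _, c in cells) / n),
                })
    return features
```

Summaries such as size and centroid are exactly the kind of condensed feature information that could accompany raw model output to reduce the volume of data a forecaster must sift through.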

As with any forecast, it is important to understand the quality of the predictions. Methods of evaluating “feature-specific” predictions are actively being developed by the research community. In this study, we apply subjective feature-based evaluation methods using a Euclidean distance approach to a series of numerical weather prediction forecasts. These results will be compared to “traditional” forecast verification statistics that are computed as a function of the difference between observed and predicted values. We will also compare the results of the subjective evaluation methods to automated techniques for evaluating feature-specific predictions. The goal is to gain insight into the quality and usefulness of the various forecast evaluation methods and to determine whether new objective verification methods provide information that is consistent with subjectively determined forecast evaluations.
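The contrast between the two families of verification methods can be sketched as follows: a traditional statistic pairs forecast and observed values point by point, while a feature-based evaluation first pairs predicted and observed features, here by Euclidean distance between centroids. The function names, the greedy nearest-neighbor pairing rule, and the distance cutoff are illustrative assumptions, not the study's actual procedure.

```python
import math

def rmse(forecast, observed):
    """Traditional gridpoint verification: root-mean-square error
    over paired forecast/observed values."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed))
                     / len(forecast))

def match_features(predicted, observed, max_dist):
    """Feature-based verification step: greedily pair each predicted
    feature centroid with the nearest unmatched observed centroid,
    accepting the pair only if it lies within `max_dist` grid units.
    Returns (predicted, observed, distance) triples; unpaired
    features would count as false alarms or misses."""
    unmatched = list(observed)
    pairs = []
    for p in predicted:
        if not unmatched:
            break
        nearest = min(unmatched, key=lambda o: math.dist(p, o))
        d = math.dist(p, nearest)
        if d <= max_dist:
            pairs.append((p, nearest, d))
            unmatched.remove(nearest)
    return pairs
```

A point-by-point score like RMSE penalizes a well-shaped but displaced feature twice (once where it was predicted, once where it occurred), whereas the matched-feature distances separate displacement error from errors in feature structure, which is the motivation for comparing the two approaches.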

This study was conducted as part of a new sophomore-level, research-oriented laboratory course in the Atmospheric Science program at Purdue University.