21st Conf. on Severe Local Storms and 19th Conf. on Weather Analysis and Forecasting/15th Conf. on Numerical Weather Prediction

Tuesday, 13 August 2002: 9:15 AM
Subjective verification of numerical models as a component of a broader interaction between research and operations
John S. Kain, NOAA/NSSL and CIMMS/Univ. Oklahoma, Norman, OK; and M. E. Baldwin, S. J. Weiss, P. R. Janish, G. W. Carbin, M. P. Kay, and L. Brown
Poster PDF (94.7 kB)
Since the Storm Prediction Center (SPC) became co-located with the National Severe Storms Laboratory (NSSL) in 1997, close proximity and a mutual interest in operationally relevant research problems have cultivated a strong working relationship between the two organizations. These interactions range from informal daily map discussions to individual research projects, but the cornerstone of the collaboration over the last couple of years has been an intensive multi-week research program conducted during the spring severe weather season, which has come to be known as the NSSL/SPC Spring Program.

A key element of the 2001 Spring Program was a systematic subjective evaluation of operational and experimental numerical models. This type of evaluation was inspired by several years of experience with real-time forecasting using an experimental version of the Eta model. Visual comparison of this experimental version with its operational counterpart often reveals significant differences, yet the character of these differences is often misrepresented (or not represented at all) by standard objective verification measures. Moreover, forecaster judgments of which model provides superior guidance are often at odds with standard objective interpretations.
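
The abstract does not name the specific objective measures involved, but for gridded precipitation forecasts of that era they typically included the frequency bias and the equitable threat score (ETS), computed from a 2x2 contingency table of event counts at a chosen threshold. A minimal Python sketch, purely for illustration:

    # Counts from a 2x2 contingency table at a chosen threshold:
    #   a: hits (event forecast and observed)
    #   b: false alarms (forecast, not observed)
    #   c: misses (observed, not forecast)
    #   d: correct negatives (neither forecast nor observed)

    def frequency_bias(a: int, b: int, c: int) -> float:
        """Ratio of forecast to observed event frequency (1.0 is unbiased)."""
        return (a + b) / (a + c)

    def equitable_threat_score(a: int, b: int, c: int, d: int) -> float:
        """Threat score adjusted for hits expected by random chance."""
        n = a + b + c + d
        a_random = (a + b) * (a + c) / n  # hits expected by chance
        return (a - a_random) / (a + b + c - a_random)

    # Illustrative counts only (not Spring Program data).
    print(frequency_bias(50, 30, 40))                 # ~0.89
    print(equitable_threat_score(50, 30, 40, 880))    # ~0.38

Because such scores reduce each forecast to categorical counts, a well-structured but slightly displaced forecast can score no better than a diffuse one, which is one way objective and subjective assessments can diverge.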

The subjective verification component of the 2001 Spring Program was designed to address several aspects of this problem. The primary goal was to determine whether subjective interpretation and evaluation of numerical model output can provide a valid measure of model performance when carried out using systematic, quantitative procedures. As corollary objectives, we sought to 1) document the disparity between widely used objective verification measures and human judgments of model performance, 2) develop a database of subjective verification statistics that could be used to calibrate new objective verification techniques under development at NSSL, and 3) develop a better understanding of how forecasters use model forecasts.
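
As an illustration of corollary objective 2, calibrating a candidate objective measure against a database of subjective ratings might, in its simplest form, amount to checking how well the measure ranks cases the way human evaluators did. The sketch below is hypothetical: the rating scale, variable names, and use of rank correlation are assumptions for illustration, not the Spring Program's actual procedure.

    # Hypothetical calibration check: does a candidate objective score
    # order forecast cases the way subjective evaluators did?
    from scipy.stats import spearmanr

    # One entry per verified case: an assumed 0-10 subjective rating and
    # a candidate objective score for the same case (illustrative values).
    subjective_ratings = [7.0, 4.5, 8.0, 3.0, 6.0]
    objective_scores   = [0.42, 0.18, 0.39, 0.11, 0.30]

    # Rank correlation near 1.0 means the objective measure orders cases
    # as the forecasters did; values near 0 indicate the kind of disparity
    # the Spring Program set out to document.
    rho, p_value = spearmanr(subjective_ratings, objective_scores)
    print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.2f})")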

The methodology used for subjective verification during the Spring Program will be presented, and the results will be compared with corresponding objective verification statistics. Implications for future evaluation and verification of numerical model output will be discussed.
