A key element in the 2001 Spring Program was a systematic subjective evaluation of operational and experimental numerical models. This type of evaluation was inspired by our experience over the past several years with real-time forecasting using an experimental version of the Eta model. Visual comparison of this experimental version of the Eta with its operational counterpart often reveals significant differences, yet the character of these differences is often misrepresented (or not represented at all) by standard objective verification measures. Moreover, forecaster judgements of which model is providing superior guidance are often at odds with standard objective interpretations.
The subjective verification component of the 2001 Spring Program was designed to address several aspects of this problem. The primary goal of this effort was to determine whether subjective interpretation and evaluation of numerical model output can provide a valid measure of model performance when conducted using systematic and quantitative procedures. As corollary objectives, we sought to 1) document the disparity between widely used objective verification measures and human judgements of model performance, 2) develop a database of subjective verification statistics that could be used to calibrate new objective verification techniques under development at NSSL, and 3) develop a better understanding of how forecasters use model forecasts.
The methodology used for subjective verification during this Spring Program will be presented. The results will be compared with corresponding objective verification statistics, and the implications for future evaluation and verification of numerical model output will be discussed.