16th Conference on Probability and Statistics in the Atmospheric Sciences

1.4

Determining the accuracy of small-scale information in forecast fields

PAPER WITHDRAWN

Michael E. Baldwin, CIMMS/Univ. of Oklahoma, NOAA/NSSL, and NOAA/NWS/SPC, Norman, OK; and S. Lakshmivarahan and J. S. Kain

Some of the difficulties associated with evaluating forecasts that contain small-scale spatial detail have been documented for many years (e.g., Anthes, 1983). For example, accuracy measures such as anomaly correlation and RMS error have traditionally been used to verify large-scale numerical weather prediction (NWP) models that do not possess small-scale detail. Applying these same measures to mesoscale NWP models that do contain significant small-scale detail may produce scores that are worse than those achieved by their smoother, large-scale counterparts. However, this does not necessarily mean that the larger-scale NWP forecast provides more valuable guidance than the mesoscale model. A typical and rather unappealing solution to this problem is to filter out the small-scale detail (often referred to as "noise") in mesoscale model forecasts before computing accuracy measures. Although this may improve the scores, it comes at the expense of removing the smaller-scale features, which may have provided useful information to certain users of a detailed forecast.

We propose an alternative solution in which the forecast and observed fields (for example, precipitation) are decomposed into sets of patterns or features. Pattern recognition techniques can be used to classify the dominant feature found within a region (e.g., convective lines, cells, stratiform, and orographic precipitation features). The set of features found in the observed field can then be compared to the set of features found in the forecast field, and the joint distribution of forecast and observed features interrogated following the general verification framework developed by Murphy and Winkler (1987). One could also measure errors in displacement, amplitude, areal extent, orientation, etc., for the various types of events classified by this scheme. We plan to use the NCEP 22-km Eta Model as well as a parallel version of the Eta Model using the Kain-Fritsch convective parameterization to develop these techniques, although the verification methods will be applicable to any model.
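As a minimal sketch of the joint-distribution idea, the classified features can be tabulated as (forecast type, observed type) pairs and normalized into an empirical joint distribution. The feature labels and counts below are hypothetical, purely for illustration; the abstract does not specify a data format.

```python
from collections import Counter

# Hypothetical classified dominant features per region/case:
# each entry is (forecast feature type, observed feature type).
pairs = [
    ("line", "line"), ("cell", "line"), ("stratiform", "stratiform"),
    ("line", "cell"), ("cell", "cell"), ("orographic", "stratiform"),
]

# Count each (forecast, observed) combination.
joint_counts = Counter(pairs)
n = len(pairs)

# Empirical joint distribution p(forecast, observed), the basic object
# in the Murphy and Winkler (1987) verification framework.
p = {fo: count / n for fo, count in joint_counts.items()}

for (fcst, obs), prob in sorted(p.items()):
    print(f"p(fcst={fcst}, obs={obs}) = {prob:.3f}")
```

From this joint distribution one can derive conditional distributions (e.g., the distribution of observed feature types given a forecast of "line"), which is where the diagnostic value of the framework lies.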

To develop an objective classification scheme, our initial work examines a small target data set containing relatively high-resolution (4 km x 4 km) observed hourly precipitation fields for 48 cases. To validate the objective classification, the dominant precipitation event in each case was subjectively assigned to one of four categories (lines, cells, stratiform, orographic). To reduce the dimensionality of the problem, a gamma distribution was fitted to the distribution of precipitation amounts for each case using the maximum likelihood estimation technique of Wilks (1990). Cluster analysis was then performed using the parameters of the gamma distribution as attributes. The resulting clusters successfully separate the convective events (lines, cells) from the non-convective events (stratiform, orographic). However, this simple analysis fails to distinguish between linear and cellular cases, or between stratiform and orographic events. Current results from this work in progress will be presented at the conference.
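The fit-then-cluster procedure above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic precipitation amounts, the SciPy maximum-likelihood gamma fit (standing in for the Wilks 1990 estimator), and the k-means clustering (one common choice of cluster analysis; the abstract does not name the algorithm) are all assumptions.

```python
import numpy as np
from scipy.stats import gamma
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)

# Synthetic hourly precipitation amounts for four hypothetical cases:
# two skewed, heavy-tailed "convective-like" cases (small shape, large scale)
# and two "stratiform-like" cases (larger shape, small scale).
cases = [
    rng.gamma(shape=0.5, scale=8.0, size=2000),
    rng.gamma(shape=0.6, scale=7.0, size=2000),
    rng.gamma(shape=2.0, scale=1.0, size=2000),
    rng.gamma(shape=2.2, scale=0.9, size=2000),
]

# Fit a gamma distribution to each case by maximum likelihood,
# fixing the location parameter at zero; keep (shape, scale) as
# the low-dimensional attributes describing the case.
params = []
for amounts in cases:
    shape, loc, scale = gamma.fit(amounts, floc=0)
    params.append((shape, scale))
features = np.array(params)

# Cluster the cases in (shape, scale) parameter space with k = 2.
centroids, labels = kmeans2(features, 2, minit="++", seed=1)
print("fitted (shape, scale) per case:", features.round(2))
print("cluster labels:", labels)
```

With parameters this well separated, the two convective-like cases fall in one cluster and the two stratiform-like cases in the other, mirroring the convective versus non-convective separation reported in the abstract; distinguishing the subtypes within each cluster would require additional attributes beyond the two gamma parameters.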

Session 1, forecast evaluation
Monday, 14 January 2002, 9:30 AM-3:30 PM
