Ninth Conference on Mesoscale Processes

8.3

Verification of mesoscale features in NWP models

Michael E. Baldwin, NOAA/NWS/SPC and NOAA/NSSL/CIMMS, Norman, OK; and S. Lakshmivarahan and J. S. Kain

Forecast verification is a vital, but often neglected, component of any modeling system. The importance of good verification information is fairly obvious: used properly, it can be a valuable tool for improving forecasts and the modeling system itself. However, if the biases or inadequacies of a verification technique are not taken into account, misleading information will be obtained and incorrect decisions by forecasters and/or model developers may result. For example, traditional verification measures, such as anomaly correlations and RMS errors of fields at standard levels (e.g., 500-mb heights), have been used to verify large-scale models that do not possess fine-scale detail. Applying these same measures to mesoscale models that contain significant small-scale detail will produce scores that are worse than those obtained by their smoother, large-scale counterparts. This does not necessarily mean that the larger-scale model forecast is providing better mesoscale guidance than the smaller-scale model. A typical, and rather unappealing, solution to this problem is to filter out the small-scale detail (often called "noise") in mesoscale model forecasts prior to computing any skill scores. Although this may improve the skill scores, it comes at the expense of removing smaller-scale features, some of which may have been correctly forecast by the mesoscale model.
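To make this penalty concrete, the toy sketch below (all fields synthetic and constructed purely for illustration; the grid, feature shape, and displacement are arbitrary choices) shows a sharp feature forecast slightly out of place scoring a worse RMS error than a smooth forecast containing no feature at all:

```python
# Toy illustration of the "double penalty" described above; all fields are
# synthetic and the grid, feature shape, and displacement are arbitrary.
import numpy as np

x = np.arange(100.0)
feature = lambda center: np.exp(-((x - center) / 3.0) ** 2)

observed = feature(40.0)                    # sharp observed precipitation feature
detailed = feature(48.0)                    # same feature, displaced 8 grid points
smooth = np.full_like(x, observed.mean())   # featureless, filtered-looking forecast

rmse = lambda fcst: np.sqrt(np.mean((fcst - observed) ** 2))
print(f"displaced detailed forecast RMSE: {rmse(detailed):.3f}")  # penalized twice
print(f"smooth featureless forecast RMSE: {rmse(smooth):.3f}")    # scores better
```

The detailed forecast is penalized once for missing the observed feature and again for placing a feature where none was observed, while the smooth forecast avoids both penalties despite containing no useful mesoscale information.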

We propose an alternative solution in which the forecast and observed fields (for example, precipitation) are decomposed into a set of patterns or features. Pattern recognition techniques can be used to classify coherent structures in the fields (e.g., convective lines, cells, stratiform, and orographic precipitation features). The set of features found in the observed field could then be compared to the set of features found in the forecast field, and the joint distribution of forecast and observed features interrogated following the general verification framework developed by Murphy and Winkler (1987). One could also measure errors in the displacement, amplitude, areal extent, orientation, etc., of the various types of events classified by this scheme. We also plan to explore techniques developed by other researchers, such as using wavelet transforms to partition the forecast and observed fields into components covering the range of spatial scales (Briggs and Levine, 1997). This allows one to examine the skill of a forecast as a function of spatial scale, or the contribution of each spatial scale to a given measure of skill. Another method examines the statistical structure of the horizontal variability of precipitation as a function of scale in both the observed and forecast fields (Zepeda-Arce et al., 2000), which allows one to determine whether the model captures the spatial variability of the observed precipitation field. We plan to use the NCEP 22-km Eta Model, as well as a parallel version of the Eta Model using the Kain-Fritsch convective parameterization, to develop these techniques, although the verification methods will be applicable to any model. Simple sketches of the feature-based and scale-decomposition approaches are given below.
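As a concrete illustration of the feature-based approach, the sketch below uses simple thresholding plus connected-component labeling to stand in for the pattern-recognition classification described above; the threshold, matching distance, and feature attributes are illustrative assumptions, not choices taken from this abstract.

```python
# A minimal sketch of feature-based verification, assuming gridded forecast
# and observed precipitation fields on the same grid. Thresholding plus
# connected-component labeling stands in for the pattern-recognition
# classification described in the text; the threshold, matching distance,
# and feature attributes are illustrative assumptions.
import numpy as np
from scipy import ndimage

def extract_features(field, threshold=1.0):
    """Label contiguous regions exceeding `threshold` and summarize each."""
    labeled, nfeat = ndimage.label(field >= threshold)
    features = []
    for i in range(1, nfeat + 1):
        mask = labeled == i
        features.append({
            "centroid": ndimage.center_of_mass(mask),  # (row, col) in grid units
            "area": int(mask.sum()),                   # areal extent, grid boxes
            "amplitude": float(field[mask].max()),     # peak value in the feature
        })
    return features

def compare_features(fcst_feats, obs_feats, max_dist=20.0):
    """Pair each observed feature with the nearest forecast feature (naive
    nearest-centroid matching) and report displacement, areal-extent, and
    amplitude errors for pairs closer than `max_dist` grid units."""
    errors = []
    for obs in obs_feats:
        if not fcst_feats:
            break
        fcst = min(fcst_feats, key=lambda f: np.hypot(
            f["centroid"][0] - obs["centroid"][0],
            f["centroid"][1] - obs["centroid"][1]))
        dist = np.hypot(fcst["centroid"][0] - obs["centroid"][0],
                        fcst["centroid"][1] - obs["centroid"][1])
        if dist <= max_dist:
            errors.append({"displacement": float(dist),
                           "area_error": fcst["area"] - obs["area"],
                           "amplitude_error": fcst["amplitude"] - obs["amplitude"]})
    return errors
```

From the resulting sets of matched and unmatched features, one could build the joint distribution of forecast and observed events needed for the Murphy and Winkler (1987) framework.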
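For the scale-decomposition approach in the spirit of Briggs and Levine (1997), a minimal sketch using the PyWavelets package is shown below; the Haar wavelet, decomposition depth, and RMS coefficient error are illustrative assumptions, not choices from the abstract.

```python
# A minimal sketch of scale-dependent skill via a 2-D discrete wavelet
# transform (PyWavelets), assuming forecast and observed fields of equal
# shape; wavelet family, depth, and error measure are illustrative.
import numpy as np
import pywt

def rmse_by_scale(fcst, obs, wavelet="haar", level=3):
    """Return the RMS difference between forecast and observed wavelet
    coefficients at each scale, coarsest (approximation) band first."""
    cf = pywt.wavedec2(fcst, wavelet, level=level)
    co = pywt.wavedec2(obs, wavelet, level=level)
    errors = [np.sqrt(np.mean((cf[0] - co[0]) ** 2))]  # coarsest-scale band
    for (fh, fv, fd), (oh, ov, od) in zip(cf[1:], co[1:]):
        # Pool horizontal, vertical, and diagonal detail coefficients
        diff = np.concatenate([(fh - oh).ravel(),
                               (fv - ov).ravel(),
                               (fd - od).ravel()])
        errors.append(np.sqrt(np.mean(diff ** 2)))
    return errors  # one value per scale band, coarse to fine
```

Plotting these values against scale indicates which scales dominate the total error, the kind of scale-by-scale diagnosis described above.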

Extended Abstract (100K)

Supplementary URL: http://www.nssl.noaa.gov/etakf/verf/

Session 8, Mesoscale Model Verification
Wednesday, 1 August 2001, 8:15 AM-9:30 AM
