The Fred Sanders Symposium

3.2

Mesoscale modeling and the scientific forecast process

Paul J. Roebber, University of Wisconsin, Milwaukee, WI

Studies of weather forecasting have shown that experienced forecasters are adept at using and integrating information from a variety of sources and thereby detecting patterns in the data. The forecast process is conducted by gradually building a “mental model” of the forecast scenario (hypothesis formation) and then critically examining this model through continuous evaluation of additional datasets, such as model output and updated observations (hypothesis testing).

Storm-scale model data provide an additional, powerful means for hypothesis formation and testing. Forecasters who fail to develop a conceptual model of a forecast situation cannot take full advantage of storm-scale model data and could profoundly degrade forecast skill through naive application of these data. We provide examples of these opportunities and risks using storm-scale model forecast data, drawn from a longitudinal verification of convective forecasts and from a study of the 3 May 1999 tornadic outbreak in Oklahoma.

The verification of 6-km grid spacing, short-range numerical model forecasts of warm-season convective occurrence, mode, and location was conducted over the Lake Michigan region for the 1999 warm season. Contingency measures show forecast skill for convective occurrence is high, with a day one (day two) equitable threat score of 0.69 (0.60). Forecast skill in predicting convective mode (defined as linear, multicellular, or isolated) is also high, with a true skill statistic of 0.91 (0.86) for day one (day two). Median timing errors for convective initiation/dissipation were within 2.5 hours for all modes of convection at both forecast ranges. Forecasts of the areal coverage of the 24-h accumulated precipitation in convective events were more problematic, exhibiting skill comparable to the lower-resolution operational models, with median threat scores at day one (day two) of 0.21 (0.24). Hence, while these data show opportunity by providing useful forecast guidance out to day two concerning the occurrence, timing, and mode of convection, information concerning location is less precise. Conceptual understanding on a case-by-case basis is critical to more accurate determination of location.
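For reference, the contingency measures cited above follow their standard definitions. The short Python sketch below is illustrative only, with hypothetical counts; it is not the verification code used in the study. It computes the threat score, equitable threat score, and two-category true skill statistic from hits (a), false alarms (b), misses (c), and correct negatives (d); the convective-mode forecasts above use the multicategory generalization of the true skill statistic.

# Illustrative sketch only: standard two-category skill scores of the kind
# cited above, computed from a 2x2 contingency table. Counts are hypothetical.

def threat_score(a, b, c):
    # Threat score (critical success index): hits / (hits + false alarms + misses).
    return a / (a + b + c)

def equitable_threat_score(a, b, c, d):
    # ETS: threat score adjusted for the number of hits expected by chance.
    n = a + b + c + d
    a_random = (a + b) * (a + c) / n
    return (a - a_random) / (a + b + c - a_random)

def true_skill_statistic(a, b, c, d):
    # TSS (Hanssen-Kuipers): probability of detection minus probability of false detection.
    return a / (a + c) - b / (b + d)

# Hypothetical warm-season tally of daily yes/no convective-occurrence forecasts:
a, b, c, d = 50, 15, 10, 125   # hits, false alarms, misses, correct negatives
print(round(threat_score(a, b, c), 2))               # 0.67
print(round(equitable_threat_score(a, b, c, d), 2))  # 0.55
print(round(true_skill_statistic(a, b, c, d), 2))    # 0.73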

Model experiments for the outbreak of 3 May 1999 show several sensitivities of the event to a jet streak in the subtropical branch of the jet stream: (1) convective initiation in the weakly forced environment was achieved in part through modification of an existing cap by synoptic-scale ascent associated with the jet streak; (2) weak-to-moderate forcing from the jet streak was most conducive to the production of long-lived supercells, while strong forcing resulted in a trend toward linear mesoscale convective systems; and (3) the cirrus shield associated with the jet streak was important in limiting the development of convection and thereby reducing competition between storms. The model information for this case suggests opportunity, most particularly by assisting in the revision of conceptual ideas about the evolution of the outbreak.

Substantial obstacles to operational implementation of such tools remain, however, including a lack of information concerning model biases, insufficient real-time observations with which to assess model prediction details, inconsistent forecaster education, and inadequate technology to support rapid scientific discovery in an operational setting.

Session 3, Forecasting
Monday, 12 January 2004, 1:30 PM-2:30 PM, Room 617
