11A.2
Evaluating WRF model output for severe-weather forecasting: The 2007 NOAA HWT Spring Experiment
Michael C. Coniglio, CIMMS/Univ. of Oklahoma, Norman, OK; and J. S. Kain, S. J. Weiss, D. R. Bright, J. J. Levit, M. Xue, M. L. Weisman, Z. I. Janjic, M. Pyle, J. Du, and D. Stensrud
The NOAA Hazardous Weather Testbed (HWT) will conduct the 2007 Spring Experiment (formerly known as the SPC/NSSL Spring Program) over a seven-week period during the peak severe convective season, from mid-April through early June. As in recent Spring Experiments, the primary focus will be an examination of near-cloud-resolving (dx = 2-4 km) configurations of the WRF model in a simulated severe-weather-forecasting environment. A new component of the experiment will be the use of a 10-member ensemble of 4-km WRF model simulations provided by the Center for Analysis and Prediction of Storms (CAPS). CAPS, the Environmental Modeling Center (EMC), and the National Center for Atmospheric Research (NCAR) will provide additional WRF simulations with grid spacings of 2-3 km. These storm-scale WRF simulations will be complemented by an ensemble of mesoscale (dx ~ 30 km) WRF simulations run by the National Severe Storms Laboratory that assimilates surface observations using an ensemble Kalman filter technique. These simulations, covering approximately the eastern three-fourths of the U.S., will be evaluated on their ability to 1) simulate the evolution of the pre-convective environment, particularly boundary-layer evolution, temperature and moisture stratification, and vertical wind profiles; 2) predict the location and timing of thunderstorm initiation and evolution; and 3) offer useful information on thunderstorm morphology, with an emphasis on higher-order classifications of discrete supercells and quasi-linear convective systems (QLCSs). The main purpose is to determine the value added by storm-scale ensemble output compared to traditional deterministic model output.
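To illustrate the ensemble Kalman filter technique mentioned above, the following is a minimal sketch of a perturbed-observation EnKF update step for a linear observation operator. It is purely illustrative and does not represent the actual NSSL configuration; all function and variable names here are hypothetical.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H, rng):
    """Perturbed-observation EnKF analysis step (illustrative sketch).

    ensemble: (n_members, n_state) array of prior model states
    obs: (n_obs,) observation vector
    obs_err_var: scalar observation-error variance
    H: (n_obs, n_state) linear observation operator
    """
    n_mem = ensemble.shape[0]
    # Ensemble mean and anomalies (deviations from the mean)
    xbar = ensemble.mean(axis=0)
    X = ensemble - xbar
    # Project anomalies into observation space
    HX = X @ H.T
    # Sample background covariances
    Pxy = X.T @ HX / (n_mem - 1)   # state-obs cross-covariance
    Pyy = HX.T @ HX / (n_mem - 1)  # obs-space background covariance
    R = np.eye(len(obs)) * obs_err_var
    # Kalman gain
    K = Pxy @ np.linalg.inv(Pyy + R)
    # Each member is updated toward its own perturbed copy of the observations
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_err_var), size=(n_mem, len(obs)))
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T
```

Assimilating a single accurate observation pulls the ensemble mean toward the observed value and shrinks the ensemble spread, which is the qualitative behavior a surface-observation EnKF exploits.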
Evaluation procedures for the deterministic and ensemble forecasts will be based on both subjective and objective verification strategies. Subjective approaches will rely on consensus assessments by expert operational forecasters and research scientists. Panels of experts will be anchored by SPC forecasters and NSSL scientists and will include a diverse group of researchers and forecasters from numerous meteorological centers and universities. The subjective evaluation will be conducted in the context of an experimental operational forecasting environment. Objective methods will include traditional metrics, such as equitable threat and bias scores, as well as object-oriented approaches for both deterministic and probabilistic model output. The goal of both approaches is to provide specific information to model developers that can guide their efforts to improve various components of the WRF model, and to examine the benefit of deterministic versus probabilistic output at the storm scale.
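The traditional metrics named above are computed from a 2x2 contingency table of forecast/observed event pairs. A minimal sketch of the standard formulas (not the experiment's actual verification code) follows; the bias here is frequency bias, and the equitable threat score discounts hits expected by chance.

```python
def forecast_scores(hits, false_alarms, misses, correct_negatives):
    """Frequency bias and equitable threat score (ETS) from a
    2x2 contingency table of yes/no event forecasts."""
    n = hits + false_alarms + misses + correct_negatives
    # Frequency bias: forecast event count relative to observed event count
    bias = (hits + false_alarms) / (hits + misses)
    # Hits expected from a random forecast with the same event frequencies
    hits_random = (hits + false_alarms) * (hits + misses) / n
    # ETS: skill of hits beyond chance; 1 is perfect, <= 0 is no skill
    ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    return bias, ets

bias, ets = forecast_scores(30, 10, 20, 40)
# bias = 0.8 (underforecasting), ets = 0.25
```

A bias near 1 with a high ETS indicates a forecast that both captures event frequency and places events skillfully, which is why the two scores are typically reported together.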
Preliminary results will be presented at the conference, focusing on assessments of the potential value of mesoscale and storm-scale ensembles and specific strengths and weaknesses of the different model configurations.
Session 11A, Mesoscale Model Applications
Thursday, 28 June 2007, 4:00 PM-6:00 PM, Summit A