88th Annual Meeting (20-24 January 2008)

Tuesday, 22 January 2008: 3:30 PM
Preliminary results from the spring 2007 experiment of the NOAA Hazardous Weather Test Bed: Application of LEAD to the explicit prediction of deep convection via ensembles and dynamically adaptive forecasts
207 (Ernest N. Morial Convention Center)
Kelvin Droegemeier, University of Oklahoma, Norman, OK; and M. Xue, F. Kong, K. W. Thomas, Y. Wang, K. Brewster, D. Weber, J. Alameda, A. Rossi, B. F. Jewett, S. Marru, M. Christie, D. Gannon, D. O'Neal, S. J. Weiss, J. S. Kain, D. R. Bright, and J. J. Levit
For more than a decade, the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma – a now-graduated NSF Science and Technology Center that pioneered the numerical analysis and prediction of high-impact local weather, with emphasis on deep convective storms – has collaborated with the NOAA Storm Prediction Center and National Severe Storms Laboratory to study fine-scale atmospheric predictability via real-time forecasts performed during the U.S. spring severe weather season. During spring 2005, this work used the Weather Research and Forecasting (WRF) model to create forecasts at 2 km grid spacing over two-thirds of the continental US, with initial conditions specified by the National Centers for Environmental Prediction (NCEP) operational model analysis. The forecasts provided dramatic evidence that the predictability of organized deep convection is, in some cases, an order of magnitude longer (one day) than suggested by prevailing theories of atmospheric predictability.

In the most recent experiment, conducted in spring 2007 as part of the NOAA Hazardous Weather Test Bed at the National Weather Center in Norman, Oklahoma, CAPS utilized a combination of its own capabilities and those developed by LEAD (Linked Environments for Atmospheric Discovery), an NSF Large Information Technology Research (ITR) project that is creating a service-oriented architecture to enable atmospheric sensors, data systems, models and analysis tools – and, most importantly, people – to interact dynamically with mesoscale weather. The 2007 experiment was a dramatic departure from its predecessors, addressing two important challenges in the explicit numerical prediction of deep convection: (a) the use of storm-resolving ensembles for specifying uncertainty in model initial conditions and quantifying uncertainty in model output; and (b) the application of dynamically adaptive, on-demand forecasts that are created automatically, or by humans, in response to existing or anticipated atmospheric conditions.
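
To make the second idea concrete, the sketch below illustrates one way an automatically triggered, on-demand nested forecast could work: a monitored field is scanned for favorable convective conditions, and a short, high-resolution forecast is launched over the flagged region. The thresholds, field names, and launch interface are hypothetical illustrations, not the actual LEAD/CAPS workflow software.

```python
# Hypothetical sketch of an automatically triggered, on-demand nested forecast.
# Thresholds, field names, and the launch callable are assumptions for
# illustration only; they do not describe the actual LEAD/CAPS interfaces.
import numpy as np

CAPE_THRESHOLD = 2000.0   # J/kg, assumed instability trigger
SHEAR_THRESHOLD = 20.0    # m/s, assumed deep-layer shear trigger


def find_trigger_regions(cape, shear, lats, lons):
    """Return (lat, lon) centers of regions where both thresholds are exceeded."""
    mask = (cape > CAPE_THRESHOLD) & (shear > SHEAR_THRESHOLD)
    if not mask.any():
        return []
    # Center a single nested domain on the centroid of the flagged area.
    return [(float(lats[mask].mean()), float(lons[mask].mean()))]


def maybe_launch_forecasts(cape, shear, lats, lons, launch):
    """Launch a 6-hour, 2 km nested forecast over each flagged region."""
    for center in find_trigger_regions(cape, shear, lats, lons):
        launch(center=center, grid_spacing_km=2.0, length_hr=6)
```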

The 2007 experiment extended from 23 April (the forecasts actually began on 15 April) through 8 June, with all forecasts run on dedicated NSF TeraGrid resources at the National Center for Supercomputing Applications (NCSA) and the Pittsburgh Supercomputing Center (PSC). The suite included a 33-hour, 10-member ensemble at 4 km grid spacing (run at PSC using a mixture of initial condition and physics perturbations) over the eastern two-thirds of the continental US, a 2 km grid spacing forecast in the same domain (also at PSC), 6-hour nested grid forecasts at 2 km grid spacing launched automatically over regions of expected severe weather (run at NCSA), and a 6-hour nested grid forecast at 2 km grid spacing launched manually when and where deemed most appropriate (run at NCSA).
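
As a compact restatement of that configuration, the four forecast streams could be summarized in a structure such as the following. This is an illustrative summary only; the field names are ours, not part of the LEAD/CAPS software, and the length of the 2 km deterministic run is not stated above.

```python
# Spring 2007 HWT/LEAD forecast streams, as described in this abstract.
# Illustrative summary only; names and fields are hypothetical.
forecast_streams = [
    {"name": "4 km ensemble", "members": 10, "dx_km": 4, "length_hr": 33,
     "site": "PSC", "perturbations": ["initial conditions", "physics"],
     "domain": "eastern two-thirds of CONUS"},
    {"name": "2 km deterministic", "members": 1, "dx_km": 2,
     "length_hr": None,  # forecast length not stated in the abstract
     "site": "PSC", "domain": "eastern two-thirds of CONUS"},
    {"name": "2 km automatic nest", "members": 1, "dx_km": 2, "length_hr": 6,
     "site": "NCSA", "trigger": "automatic, over regions of expected severe weather"},
    {"name": "2 km on-demand nest", "members": 1, "dx_km": 2, "length_hr": 6,
     "site": "NCSA", "trigger": "manual, when and where deemed most appropriate"},
]
```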

In this paper, we describe an initial assessment of the relative quality and value of the forecasts, particularly the comparative impact of the ensemble, the single high-resolution deterministic forecast, and the on-demand forecasts on decision-making. We further discuss the successes and challenges of running such a demanding experiment on the TeraGrid, and describe the impact and value of LEAD on the overall process. Concluding comments include a description of plans for experiments to be conducted during the spring 2008 and 2009 severe weather seasons.
