Evaluation of CAPS multi-model storm-scale ensemble forecast for the NOAA HWT 2010 Spring Experiment
Fanyou Kong, CAPS/Univ. of Oklahoma, Norman, OK; and M. Xue, K. W. Thomas, Y. Wang, K. Brewster, X. Wang, J. Gao, S. J. Weiss, A. Clark, J. S. Kain, M. C. Coniglio, and J. Du
In support of the NOAA Hazardous Weather Testbed (HWT) 2010 Spring Experiment, the Center for Analysis and Prediction of Storms (CAPS) once again produced multi-model storm-scale ensemble forecasts (SSEF) in real time from 26 April through 18 June. Several major changes were made relative to previous experiment seasons: most notably, the forecast domain was expanded to cover the full continental United States, and the number of 4-km ensemble members was increased from 20 to 26. The ensemble comprised 19 WRF-ARW, 5 WRF-NMM, and 2 ARPS members; WRF V3.1.1 was used for both WRF dynamic cores. Also new in the 2010 Spring Experiment was the generation of a large set of post-processed ensemble products from a 15-member sub-ensemble consisting of the multi-physics, IC/LBC-perturbation, and radar-analysis members. Post-processed products included the ensemble mean and maximum, probability-matched mean, frequency-based ensemble probability, and neighborhood probability of accumulated precipitation and reflectivity; environmental fields such as surface temperature, dew point, and wind; CAPE-shear parameters; and storm-attribute parameters such as updraft helicity, updraft speed, and integrated graupel. For all members but three (one from each model), full-resolution data from the nationwide WSR-88D radar network (both reflectivity and radial wind) were analyzed into the ICs using the ARPS 3DVAR and cloud analysis package. The 30-h forecasts, initialized at 0000 UTC, were produced on weekdays from 26 April to 18 June on a Cray XT4 supercomputer with over 18,000 cores at the National Institute for Computational Sciences (NICS) of the University of Tennessee. The post-processed ensemble products were made available in real time, as GEMPAK-format data files and on the web, to SPC, NSSL, DTC, and HPC, and were evaluated daily by HWT participants.
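Two of the post-processed products named above, the probability-matched mean and the neighborhood probability, can be illustrated with a minimal sketch. This is not CAPS's actual post-processing code; it is a generic implementation of the two standard techniques, with hypothetical function names, assuming member fields are stacked in a single NumPy array:

```python
import numpy as np

def prob_matched_mean(members):
    """Probability-matched ensemble mean (illustrative sketch).

    members: array of shape (n_members, ny, nx), e.g. accumulated precip.
    The result keeps the spatial pattern (rank order) of the plain
    ensemble mean but draws its amplitudes from the pooled distribution
    of all member values, restoring amplitudes smoothed out by averaging.
    """
    n, ny, nx = members.shape
    mean = members.mean(axis=0)
    # Rank the ensemble-mean grid points from smallest to largest
    order = np.argsort(mean, axis=None)
    # Pool and sort all member values, then take every n-th value so
    # the sample size matches the number of grid points
    pooled = np.sort(members, axis=None)[n // 2 :: n][: ny * nx]
    pmm = np.empty(ny * nx)
    pmm[order] = np.sort(pooled)  # smallest mean point gets smallest value
    return pmm.reshape(ny, nx)

def neighborhood_prob(members, thresh, radius):
    """Neighborhood exceedance probability (illustrative sketch).

    At each grid point: the fraction of member values within a square
    neighborhood of half-width `radius` grid points that meet or
    exceed `thresh`.
    """
    pp = (members >= thresh).mean(axis=0)  # point probability
    ny, nx = pp.shape
    out = np.empty_like(pp)
    for j in range(ny):
        for i in range(nx):
            out[j, i] = pp[max(0, j - radius): j + radius + 1,
                           max(0, i - radius): i + radius + 1].mean()
    return out
```

The probability-matched mean is commonly used for precipitation because a plain ensemble mean of displaced storms produces broad, weak maxima; re-mapping the amplitudes recovers realistic peak values while keeping the mean's placement.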
A total of 36 days of complete ensemble forecasts were produced during the experiment period. The SSEF QPF and probabilistic QPF performance has been evaluated using various traditional verification metrics and compared with operational 12-km NAM forecasts and 32-km SREF products. The real-time production and the evaluation results will be presented at the conference.
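The abstract does not specify which traditional verification metrics were used; one common choice for threshold-based QPF verification is the equitable threat score (ETS, also called the Gilbert skill score). A minimal sketch, assuming forecast and observed precipitation grids as NumPy arrays:

```python
import numpy as np

def equitable_threat_score(fcst, obs, thresh):
    """ETS (Gilbert skill score) for one precipitation threshold.

    Counts hits, misses, and false alarms for exceedance of `thresh`,
    then discounts the hits expected by random chance. ETS = 1 is a
    perfect forecast; ETS <= 0 shows no skill over chance.
    """
    f = fcst >= thresh
    o = obs >= thresh
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    n = f.size
    hits_random = (hits + misses) * (hits + false_alarms) / n
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom != 0 else 0.0
```

In practice such scores are computed per threshold (e.g. 0.5, 1, 2 in of accumulated precipitation) and aggregated over the experiment days to compare the SSEF against the NAM and SREF baselines.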
Poster Session 4, Forecasting Techniques and Warning Decision Making Posters I
Tuesday, 12 October 2010, 3:00 PM-4:30 PM, Grand Mesa Ballroom ABC