S111 Evaluating High-Resolution Guidance From Pre-Implementation Configurations

Sunday, 7 January 2018
Exhibit Hall 5 (ACC) (Austin, Texas)
Cody H. Snell, University of Maryland, College Park, Mount Airy, MD; and D. T. Kleist and G. Manikin

To improve communication between the weather enterprise and the public, computer guidance must improve as well. High-resolution guidance gives meteorologists the tools to pinpoint areas of interest during a specific weather event. Evaluating these tools before they become operational not only checks the accuracy of the models, but also ensures forecasters understand what the guidance should and should not be used for. Short-range, high-resolution guidance such as the HREF and HRRR can be applied to weather in all seasons; for example, specific information on severe weather, excessive rainfall, snowfall, and wind events has been forecast and evaluated. This gives meteorologists time to alert the public to specific threats and to answer precise questions about the forecast, rather than offering only broad, general statements.

Evaluation of these products under pre-implementation configurations covers several categories. The process tested skills such as timing, location, evolution, and mode. The Model Evaluation Group also examined instances where guidance either improved or worsened as a weather event approached. Null cases were studied as well, to understand how often, and how readily, "false alarms" occur. To help more professionals understand how these newly upgraded high-resolution guidance tools perform, we will present the evaluation aspects described above.
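As context for the false-alarm discussion, the sketch below shows a standard 2x2 contingency-table verification of event forecasts, which is the usual way false-alarm frequency is quantified. This is an illustrative example only, not the Model Evaluation Group's actual procedure; the function names and the sample forecast/observation flags are hypothetical.

```python
# Illustrative sketch (not the evaluation group's actual workflow):
# standard contingency-table verification of yes/no event forecasts.

def contingency_counts(forecast_events, observed_events):
    """Count hits, false alarms, misses, and correct nulls from
    paired boolean flags (one pair per case)."""
    hits = false_alarms = misses = correct_nulls = 0
    for fcst, obs in zip(forecast_events, observed_events):
        if fcst and obs:
            hits += 1
        elif fcst and not obs:
            false_alarms += 1
        elif not fcst and obs:
            misses += 1
        else:
            correct_nulls += 1
    return hits, false_alarms, misses, correct_nulls

def verification_scores(hits, false_alarms, misses):
    """Probability of detection (POD), false alarm ratio (FAR), and
    critical success index (CSI) from the contingency counts."""
    pod = hits / (hits + misses) if (hits + misses) else float("nan")
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else float("nan")
    csi = hits / (hits + misses + false_alarms) if (hits + misses + false_alarms) else float("nan")
    return pod, far, csi

# Hypothetical guidance flags vs. observed events for eight cases.
forecast = [True, True, False, True, False, True, False, False]
observed = [True, False, False, True, True, True, False, False]
h, fa, m, cn = contingency_counts(forecast, observed)
pod, far, csi = verification_scores(h, fa, m)
print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}")
```

In this framing, the null cases mentioned above correspond to the last two contingency-table columns: a null case verified well contributes a correct null, while guidance that signals an event that never materializes contributes a false alarm and raises the FAR.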
