3.7 User-Driven Verification of Tropical Cyclone Predictions

Monday, 13 January 2020: 3:45 PM
260 (Boston Convention and Exhibition Center)
Barbara G. Brown, NCAR, Boulder, CO; and L. B. Nance and C. L. Williams

In recent years, the global forecast verification community has emphasized the development and application of user-relevant metrics for the evaluation of weather and climate forecasts (e.g., Ebert et al. 2018), with the goal of making the information provided by forecast evaluations more relevant to the specific users of the forecasts (rather than applying the too-common approach of using the same standard metric for all applications). The process of creating such metrics involves clearly identifying the questions that are of interest to the forecast users and then defining metrics that can answer those questions. This presentation will consider user-relevant verification concepts in general, with a specific example provided by recent studies associated with the US Hurricane Forecast Improvement Project (HFIP), focused on evaluations of tropical cyclone (TC) predictions.
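
As a simple illustration of this question-to-metric mapping (not part of the HFIP evaluations themselves), a user question such as "How far, on average, is the predicted storm position from the observed position at a given lead time?" leads directly to a track-error metric defined as the great-circle distance between the forecast and best-track storm centers. The Python sketch below shows one way such a metric might be computed; the function and variable names are illustrative assumptions rather than any operational code.

# Minimal sketch (not HFIP/NCAR code): turning a user question about track
# accuracy into a concrete metric. Track error is taken here as the
# great-circle (haversine) distance between forecast and best-track centers.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance in kilometers between two lat/lon points (degrees)."""
    phi1, lam1, phi2, lam2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((phi2 - phi1) / 2) ** 2 + cos(phi1) * cos(phi2) * sin((lam2 - lam1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def mean_track_error(forecast_positions, best_track_positions):
    """Mean track error (km) over matched forecast/observed positions.

    Both arguments are equal-length lists of (lat, lon) pairs, already matched
    by storm, initialization time, and lead time (an illustrative assumption).
    """
    errors = [great_circle_km(f[0], f[1], o[0], o[1])
              for f, o in zip(forecast_positions, best_track_positions)]
    return sum(errors) / len(errors)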

Each year, TCs cause significant property damage and human impacts (death, injury, displacement) around the world. To mitigate these impacts, weather prediction centers provide forecasts of TC movement (i.e., track) and intensity, and warnings based on these forecasts are issued to emergency managers and the public. Guidance for operational forecasters includes predictions of storm motion and intensity from operational numerical weather prediction (NWP) models. In response to the need for improved predictions of TC track and intensity (with a major focus on intensity), the US National Weather Service (NWS) implemented HFIP in 2007, with the goal of making significant improvements to TC track and intensity predictions in both the Atlantic and Eastern Pacific basins (Gall et al. 2013). An important aspect of the project has been to improve the NWP guidance provided to forecasters. Thus, HFIP engaged mesoscale and global NWP model developers at universities and government laboratories to improve and test the TC guidance these models are able to produce. For five consecutive years, the HFIP program office undertook an annual intercomparison of models – including both operational and experimental systems – to select new modeling systems to be demonstrated to forecasters at the US National Hurricane Center (NHC) during the subsequent hurricane season.

Evaluation of the proposed systems was undertaken by scientists at the National Center for Atmospheric Research (NCAR). The capabilities of each experimental model were compared to the performance of predictions from current “baseline” operational models. A key aspect of the evaluations was the definition of the questions to be answered, which were elicited from the program managers as well as staff connected with NHC’s operational forecasting group. Once the questions were identified, appropriate statistically valid verification methods were developed for each question and applied to compare the predictions from the various experimental models with those from the baseline models. This process resulted in meaningful, actionable verification information that was used to select the demonstration models each year.
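
As a hedged sketch of the kind of statistically valid model-versus-baseline comparison described above (not the NCAR evaluation code itself), a paired, homogeneous-sample bootstrap can be used to judge whether an experimental model’s mean error differs meaningfully from a baseline’s. The per-case errors are assumed to be matched case by case (same storm, initialization time, and lead time); the 95% confidence level and the names below are illustrative assumptions. In practice, such comparisons also need to account for serial correlation among forecasts of the same storm, a refinement omitted here for brevity.

# Minimal sketch, not the NCAR evaluation code: paired (homogeneous-sample)
# bootstrap comparison of an experimental model against a baseline, using
# per-case errors (e.g., track errors in km) matched case by case.
import numpy as np

def paired_bootstrap_diff(exp_errors, base_errors, n_boot=10000, alpha=0.05, seed=0):
    """Bootstrap confidence interval for mean(experimental - baseline) error.

    A CI entirely below zero suggests the experimental model has lower error
    than the baseline for this sample; a CI spanning zero is inconclusive.
    """
    exp = np.asarray(exp_errors, dtype=float)
    base = np.asarray(base_errors, dtype=float)
    diffs = exp - base                      # paired differences, one per case
    rng = np.random.default_rng(seed)
    n = diffs.size
    # Resample cases (not models) with replacement to respect the pairing.
    samples = rng.choice(diffs, size=(n_boot, n), replace=True)
    boot_means = samples.mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return diffs.mean(), (lo, hi)

# Example usage with made-up numbers:
# mean_diff, (lo, hi) = paired_bootstrap_diff([95, 120, 80], [110, 130, 85])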

This presentation will describe the process of developing the verification approaches and the rationale behind each method that was applied. In addition, the specific types of decisions made via examination of these results will be discussed. Extension of the user-relevant verification concept to other types of verification studies – to achieve the goal of providing user-specific and actionable information – will also be considered.

References

Ebert, E., B. Brown, M. Goeber, T. Haiden, M. Mittermaier, P. Nurmi, L. Wilson, S. Jackson, P. Johnston, and D. Schuster, 2018: The WMO challenge to develop and demonstrate the best new user-oriented verification metric. Meteorologische Zeitschrift, 27, 435-440.

Gall, R., J. Franklin, F. Marks, E.N. Rappaport, and F. Toepfer, 2013: The Hurricane Forecast Improvement Project. Bulletin of the American Meteorological Society, 94, 329-343.
