Each year, tropical cyclones (TCs) cause significant property damage and human impacts (death, injury, displacement) around the world. To mitigate these impacts, weather prediction centers provide forecasts of TC movement (i.e., track) and intensity, and warnings based on these forecasts are issued to emergency managers and the public. Guidance for operational forecasters includes predictions of storm motion and intensity from operational numerical weather prediction (NWP) models. In response to the need for improved predictions of TC track and intensity (with a major focus on intensity), the US National Weather Service (NWS) implemented the Hurricane Forecast Improvement Project (HFIP) in 2007, with the goal of making significant improvements to TC track and intensity predictions in both the Atlantic and Eastern Pacific basins (Gall et al. 2013). An important aspect of the project has been improving the NWP guidance provided to forecasters. Thus, HFIP engaged mesoscale and global NWP model developers at universities and government laboratories to improve and test the TC guidance that the models produce. Each year for five years, the HFIP program office undertook an annual intercomparison of models, including both operational and experimental systems, to select new modeling systems to be demonstrated to forecasters at the US National Hurricane Center (NHC) during the subsequent hurricane season.
Evaluation of the proposed systems was undertaken by scientists at the National Center for Atmospheric Research (NCAR), who compared the capabilities of each experimental model to the performance of predictions from current “baseline” operational models. A critical aspect of the evaluations was defining the questions they were intended to answer; these questions were elicited from the program managers as well as staff connected with NHC’s operational forecasting group. Once the questions were identified, appropriate, statistically valid verification methods were developed for each question and applied to evaluate the predictions from the various experimental models relative to those from the baseline models. This process resulted in meaningful, actionable verification information that was used to select the demonstration models each year.
This presentation will describe the process of developing the verification approaches and the rationale behind each method that was applied. In addition, the specific types of decisions made through examination of these results will be discussed. Extension of the user-relevant verification concept to other types of verification studies, with the goal of providing user-specific and actionable information, will also be considered.
References
Ebert, E., B. Brown, M. Goeber, T. Haiden, M. Mittermaier, P. Nurmi, L. Wilson, S. Jackson, P. Johnston, and D. Schuster, 2018: The WMO challenge to develop and demonstrate the best new user-oriented verification metric. Meteorologische Zeitschrift, 27, 435-440.
Gall, R., J. Franklin, F. Marks, E.N. Rappaport, and F. Toepfer, 2013: The Hurricane Forecast Improvement Project. Bulletin of the American Meteorological Society, 94, 329-343.