In the 2016 experiment, the ProbSevere model provided forecasters with guidance on the likelihood of any severe weather associated with a storm over the next 60 min. Although forecaster feedback noted that some difficulty with storm tracking hindered usability, ProbSevere was judged dependable (consistently available), timely (little latency), and accurate (its probabilities reliably reflected the likelihood of severe weather) enough to warrant development within Hazard Services and continued development within the prototype Probabilistic Hazard Information (PHI) tool. During the 2017 experiment, the PHI prototype expanded to include hazard-specific (tornado, wind, and hail) automated guidance from ProbSevere for forecasters.
This presentation focuses on how forecasters utilized ProbSevere in the testbed during both case studies and real-time events. For example: How much did forecasters depend upon the guidance? How did access to and use of the automation affect perceived workload? Did forecasters add value, or did they decrease accuracy, when they modified the probabilities away from the guidance? How did this vary by the hazard or hazards (e.g., hail, wind, and/or tornado) for which the forecaster issued PHI?
The goal of this work is to contribute to the Forecasting a Continuum of Environmental Threats (FACETs) paradigm, which proposes to evolve National Weather Service (NWS) hazard services from product-centric watches and warnings to PHI.