Monday, 31 March 2014: 9:30 AM
Garden Ballroom (Town and Country Resort)
Emergency managers exposed to hurricane risk must prepare in advance for the full range of potential impacts. In many states, emergency management (EM) professionals reenact past events to better understand the decisions that were made as the event unfolded, and to foster better and more timely decisions in the future. To effectively improve the EM process, one must carefully evaluate all sources of uncertainty. In forecasting natural catastrophes, uncertainty can be assessed in two forms, objective and subjective. Subjective uncertainty arises from human factors, which can lead individuals exposed to the same objective information to different decisions; it is often addressed through improvements in technology, training, and well-designed EM protocols. Objective uncertainty falls into three classes: aleatory uncertainty represents the inherent randomness of the physical process, epistemic uncertainty reflects deficits in knowledge about that process, and ontological uncertainty is tied to the possibility of events occurring without anticipation (e.g., "black swan" events). With regard to hurricanes, uncertainty can be reduced and decision making improved with more research on past hurricanes, and over time with the data introduced by each new event. While hurricanes such as Katrina (2005) and Sandy (2012) may initially make us feel more anxious (less certain) about the risk, they explicitly reduce ontological uncertainty by providing real data on high-impact / low-probability events that previously could only be simulated, or were perhaps disregarded. In the study of uncertainty, such tail events are a critical component of emergency planning: by considering as complete a distribution as possible, one can account for a more complete range of outcomes and plan for a full range of responses.
From a practical point of view, aleatory uncertainty in hurricane tracks is quantified by the National Hurricane Center (NHC) via the cone of uncertainty (COU). The COU spans the expected track of an active storm by one standard deviation of forecast track error to the right and left, based on the last 5 to 10 years of operational forecast experience at NHC. This is a useful EM metric, as it provides an operational benchmark for decision makers, namely the volatility in historical track forecasts. Because the COU reflects +/- one sigma, it captures roughly a 67% confidence interval; that is, on average one can expect a storm to track outside the COU about 33% of the time. Unfortunately, EM decision makers require 95% or 99% confidence, even if that means preparing for a much wider range of outcomes.
AIR Worldwide Corp. has developed a real-time decision aid called ClimateCast® which estimates hazard, damage, and loss potential for active storms. The system provides a diverse range of 500 potential storm scenarios derived from the full operational forecast ensemble. One of the risk metrics developed for EM professionals is the Forecast Confidence Score for storm track (FCS-t). FCS-t is a normalized measure of real-time model convergence. The score is computed by taking the ratio of forecast track variance (in distance) at lead times of 1 to 5 days across the full 500-member distribution to the corresponding COU radii. The final score is obtained by weighting the per-lead-time scores, with the highest weights assigned to the shortest lead times, since a divergent 1-day forecast makes the final outcome more uncertain than a divergent 5-day forecast.
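As a rough illustration of how such a score might be assembled, the sketch below computes a lead-time-weighted ratio of COU radius to ensemble track spread from a set of member positions. The COU radii, lead-time weights, function names, and normalization below are assumptions made for illustration only; they are not the proprietary ClimateCast formulation described in the abstract.

```python
import numpy as np

# Placeholder COU radii (nautical miles) at 1-5 day lead times; the actual
# radii are republished by NHC each season and should be substituted here.
COU_RADII_NM = {24: 43, 48: 74, 72: 111, 96: 151, 120: 196}

# Assumed lead-time weights: shortest lead times weighted most heavily, since
# a divergent 1-day forecast is more consequential than a divergent 5-day one.
LEAD_WEIGHTS = {24: 5, 48: 4, 72: 3, 96: 2, 120: 1}


def ensemble_track_spread(positions_nm):
    """Spread (std. dev., nm) of member positions about the ensemble mean.

    positions_nm: array of shape (n_members, 2) giving each member's forecast
    position at one lead time, projected onto a local nautical-mile grid.
    """
    center = positions_nm.mean(axis=0)
    dist = np.linalg.norm(positions_nm - center, axis=1)
    return dist.std()


def forecast_confidence_score(member_positions_by_lead):
    """Illustrative FCS-t-style score: lead-time-weighted ratio of COU radius
    to ensemble track spread, so larger values mean the ensemble is more
    converged than the climatological cone would suggest.

    member_positions_by_lead: dict {lead_hr: (n_members, 2) array of nm offsets}
    """
    num, den = 0.0, 0.0
    for lead_hr, positions in member_positions_by_lead.items():
        spread = ensemble_track_spread(np.asarray(positions, dtype=float))
        ratio = COU_RADII_NM[lead_hr] / max(spread, 1e-6)  # >1 => tighter than COU
        w = LEAD_WEIGHTS[lead_hr]
        num += w * ratio
        den += w
    return num / den


# Example: a synthetic 500-member ensemble whose spread grows with lead time.
rng = np.random.default_rng(0)
members = {lead: rng.normal(0.0, 0.6 * r, size=(500, 2))
           for lead, r in COU_RADII_NM.items()}
print(f"FCS-t (illustrative): {forecast_confidence_score(members):.2f}")
```

The choice to divide the COU radius by the ensemble spread (rather than the reverse) simply makes higher scores correspond to greater convergence, consistent with the interpretation given in the next paragraph.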
A high FCS-t indicates that the real-time operational ensemble is more converged than the climatology-based COU would suggest. Validation has shown a higher (lower) likelihood of the central track verifying when FCS-t is high (low). The scoring metric allows EM professionals to establish FCS-t thresholds to assist in the timing and execution of their activities, and temporal trends in FCS-t can be useful when tracked through a decision window. Given the larger magnitude of error in forecast intensity, and the importance of intensity to damage and financial loss, future work will include the development of a Forecast Confidence Score (FCS-i) for maximum wind speed and storm surge.
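To make the threshold and decision-window idea concrete, a minimal sketch follows; the threshold value, trend rule, and function name are assumptions for illustration and are not part of the abstract or the ClimateCast product.

```python
def track_decision_window(fcs_series, threshold=1.5):
    """Given chronological FCS-t values from successive forecast cycles,
    report whether the latest score clears a user-chosen threshold and
    whether confidence is trending up or down across the window."""
    latest = fcs_series[-1]
    rising = len(fcs_series) > 1 and fcs_series[-1] > fcs_series[0]
    return {
        "latest_fcs_t": latest,
        "exceeds_threshold": latest >= threshold,
        "trend": "rising" if rising else "falling/flat",
    }

# Example: FCS-t from four successive advisories leading into a decision point.
print(track_decision_window([0.9, 1.1, 1.4, 1.7], threshold=1.5))
```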