Wednesday, 26 January 2011
Washington State Convention Center
Forecast verification strongly influences the design and development of new and existing forecast systems. In particular, changes to forecast systems are often made with the aim of improving the results of whatever verification system is in place. Presently, the US Navy forecast verification scorecard is dominated by science-relevant measures. For example, the FNMOC forecast verification scorecard includes metrics such as 500 mb anomaly correlations, tropical cyclone track errors, and hemispherically averaged wind errors. Each metric is weighted (500 mb heights and TC tracks are given the highest weight), and a summary scalar scorecard value is calculated as the sum of the weighted measures. Changes to the forecast system that improve the forecast verification scorecard value are operationalized, while changes that degrade the value are not implemented. It is not obvious, however, that improvements in this scorecard result in improved forecasts and decisions.
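The weighted-sum decision rule described above can be sketched as follows. The metric names, weights, and skill values here are purely illustrative placeholders, not the actual FNMOC scorecard contents:

```python
# Hypothetical sketch of a weighted verification scorecard.
# Metric names, weights, and values are illustrative only.

def scorecard_value(metrics, weights):
    """Summary scalar: sum of the weighted verification measures."""
    return sum(weights[name] * value for name, value in metrics.items())

# Illustrative skill scores (higher is better); 500 mb heights and
# TC tracks carry the highest weight, per the abstract.
weights = {"500mb_anom_corr": 3.0, "tc_track_skill": 3.0, "wind_skill": 1.0}
baseline = {"500mb_anom_corr": 0.85, "tc_track_skill": 0.70, "wind_skill": 0.60}
candidate = {"500mb_anom_corr": 0.87, "tc_track_skill": 0.72, "wind_skill": 0.59}

# Decision rule: operationalize the change only if it improves the
# summary scorecard value relative to the baseline system.
operationalize = scorecard_value(candidate, weights) > scorecard_value(baseline, weights)
```

Note that under such a rule a change can be operationalized even when it degrades an individual lightly weighted metric, which is one reason scorecard improvement need not translate into better decisions.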
The initial focus is on verifying guidance that is known to be critical in warfighter planning. Data on model output and verifying conditions will be analyzed to determine guidance performance. Ultimately, this information will be related to warfighter plans and outcomes. Forecast performance and operational performance will be compared to assess the impacts of forecasts on warfighter plans and outcomes, and to identify methods for improving those impacts. A similar process will be used to determine the impacts of forecaster recommendations.
The aim of the current User Scorecard is to identify a small set of measures, or metrics, of the impacts of METOC forecasts and recommendations on warfighter plans and outcomes. The Navy User Scorecard was developed through conversations and meetings with numerous stakeholders. The scorecard will be presented, along with lessons learned and scorecard results.