807 Meaningful evaluation strategies for numerical model forecasts

Wednesday, 9 January 2013
Exhibit Hall 3 (Austin Convention Center)
Tressa L. Fowler, NCAR, Boulder, CO; and B. G. Brown, J. K. Wolff, and L. B. Nance

The Developmental Testbed Center (DTC) serves as a bridge between research and operations for numerical weather forecasts. Accordingly, DTC staff have developed and incorporated a number of strategies for conducting meaningful evaluations of these forecasts. Meaningful evaluations ensure that the weather community can trust that real, significant improvements are realized prior to operational implementation. Use of appropriate metrics; accurate estimates of uncertainty; consistent, independent observations; and large, representative samples are essential elements of a meaningful evaluation. Spatial, temporal, and conditional analyses should be incorporated wherever appropriate. When possible, the evaluation should serve the needs of end users. Software used to complete evaluations should be thoroughly tested and documented, and preferably open source. These elements and other considerations will be discussed in detail, and examples of thorough evaluations from DTC projects will be shown.
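To make the abstract's pairing of "appropriate metrics" with "accurate estimates of uncertainty" concrete, the sketch below computes a common verification metric (RMSE) with a percentile-bootstrap confidence interval on matched forecast/observation pairs. This is a minimal illustration on synthetic data, not the DTC's evaluation software; the function names and parameters are the author's own assumptions.

```python
import numpy as np

def rmse(forecast, observed):
    """Root-mean-square error between paired forecast and observation arrays."""
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

def bootstrap_ci(forecast, observed, metric=rmse, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a verification metric.

    Resamples matched forecast/observation pairs with replacement so the
    pairing (and hence the error structure) is preserved in each replicate.
    """
    rng = np.random.default_rng(seed)
    n = len(forecast)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # indices of one bootstrap resample
        stats[i] = metric(forecast[idx], observed[idx])
    lo, hi = np.quantile(stats, [alpha / 2.0, 1.0 - alpha / 2.0])
    return lo, hi

# Synthetic example (hypothetical data): forecasts with a small bias
# plus random error, standing in for e.g. 2 m temperatures in K.
rng = np.random.default_rng(42)
obs = rng.normal(280.0, 5.0, size=500)
fcst = obs + 0.5 + rng.normal(0.0, 1.5, size=500)

score = rmse(fcst, obs)
lo, hi = bootstrap_ci(fcst, obs)
print(f"RMSE = {score:.2f} K, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

Reporting the interval alongside the score is what lets two model versions be compared honestly: an apparent improvement whose intervals overlap heavily may not be the "real, significant improvement" the abstract calls for.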