Wednesday, 9 January 2019: 1:30 PM
North 230 (Phoenix Convention Center - West and North Buildings)
The questions to be answered by a model evaluation are quite different from those that arise when validating a model implementation. Validating that a model configuration has been properly implemented on a different computing platform requires allowing for the expected noise stemming from differences in precision, operating system, etc. Nevertheless, many of the same statistics, and thus the same software, can be used to answer the implementation questions. In particular, direct model-to-model comparisons can detect implementation errors while ignoring the small, random differences that are expected when implementing an identical model configuration on a different computing platform. Conveniently, the MET verification software contains many tools that provide the analyst with a wide variety of checks over time, space, and level. For example, the MET series_analysis tool provides a grid-point-to-grid-point comparison over time or level for any model summary statistic. This tool can be used to find regional differences, mistakes, or systematic errors between model implementations. In this talk, we will present several model comparison examples using MET and discuss the interpretation of each.
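The sketch below is not MET itself; it is a minimal Python illustration of the grid-point-to-grid-point comparison idea described above, assuming two implementations of the same configuration stored as (time, lat, lon) arrays. The function name, the noise tolerance, and the synthetic data are hypothetical and chosen only for demonstration.

```python
import numpy as np

def gridpoint_series_stats(model_a, model_b, tolerance=1e-3):
    """Compare two model implementations grid point by grid point over time.

    model_a, model_b: arrays of shape (time, lat, lon) holding the same field
    from two implementations of an identical model configuration.
    tolerance: magnitude of mean difference treated as expected platform noise
    (precision, compiler, OS); an illustrative value, not a MET default.
    Returns per-grid-point mean difference, RMSE, and a mask of points whose
    mean difference exceeds the tolerance (possible implementation errors).
    """
    diff = model_a - model_b                      # (time, lat, lon)
    mean_diff = diff.mean(axis=0)                 # systematic offset at each point
    rmse = np.sqrt((diff ** 2).mean(axis=0))      # total difference at each point
    suspect = np.abs(mean_diff) > tolerance       # flags points beyond expected noise
    return mean_diff, rmse, suspect

# Toy usage: two synthetic "implementations" differing by tiny platform-level
# noise, plus a deliberate regional discrepancy injected into one quadrant.
rng = np.random.default_rng(0)
base = rng.normal(280.0, 5.0, size=(24, 90, 180))        # 24 times on a coarse grid
impl_a = base + rng.normal(0.0, 1e-6, size=base.shape)   # expected random noise
impl_b = base.copy()
impl_b[:, :45, :90] += 0.5                                # regional implementation error

mean_diff, rmse, suspect = gridpoint_series_stats(impl_a, impl_b)
print("grid points flagged:", int(suspect.sum()), "of", suspect.size)
```

In this toy case only the quadrant with the injected offset is flagged, while the precision-level noise falls below the tolerance, which mirrors the distinction the abstract draws between implementation errors and expected platform differences.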