Monday, 13 January 2020: 11:30 AM
260 (Boston Convention and Exhibition Center)
During model development, validation analyses are often performed to assess whether the model has been implemented correctly. Model implementation differences are not 'forecast errors' in the sense of model evaluation, and some noise is expected because of differences in precision, operating system, and the like. Direct model-to-model comparisons can detect biases, implementation errors, or other problems while ignoring the small, random errors typical of a new model implementation. Examining these comparisons can be tedious, and in practice it generally does not involve checking every individual field, level, and forecast time period for large differences. With the goal of streamlining the validation process, two non-traditional methods for determining whether two models are equivalent are explored. The first method takes a systematic approach to identifying spatial regions of model differences using the Model Evaluation Tools (MET) Method for Object-Based Diagnostic Evaluation (MODE). By running the MODE tool on fields of model differences, objects can be identified from user-defined thresholds that flag areas exceeding those limits, providing an automated approach for the user. The second method uses equivalence testing, which assumes that the two models differ and looks for evidence to the contrary. This relatively new form of hypothesis testing commonly consists of two one-sided hypothesis tests whose null hypotheses must both be rejected, based on a user-defined margin, for the models to be considered equivalent. Although the two approaches are very different, both address the need to streamline model validation by eliminating much of the manual procedure.
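
As a rough, hypothetical illustration of the two approaches, the Python sketch below is not the MET/MODE implementation itself: the first function is a simplified stand-in that thresholds an absolute difference field and labels contiguous exceedance regions as "objects" (MODE's actual convolution, thresholding, and object-attribute logic is far richer), and the second implements a standard two one-sided tests (TOST) equivalence check on paired model differences. The threshold, the equivalence margin delta, and the array names are assumptions made for illustration only.

    import numpy as np
    from scipy import ndimage, stats

    def difference_objects(field_a, field_b, threshold):
        """Crude stand-in for running MODE on a difference field: flag grid
        points where the absolute model-to-model difference exceeds a
        user-defined threshold and label the contiguous regions as objects."""
        diff = np.abs(np.asarray(field_a) - np.asarray(field_b))
        mask = diff > threshold
        labels, n_objects = ndimage.label(mask)  # connected-component labeling
        return labels, n_objects

    def tost_paired(a, b, delta, alpha=0.05):
        """Two one-sided tests (TOST) for equivalence of paired samples.

        Null hypothesis: |mean(a - b)| >= delta, i.e. the models differ by at
        least the user-defined margin. Equivalence is declared only when both
        one-sided nulls are rejected at level alpha."""
        d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        n = d.size
        mean, se = d.mean(), d.std(ddof=1) / np.sqrt(n)

        # Lower test: H0 mean <= -delta vs H1 mean > -delta
        p_lower = stats.t.sf((mean + delta) / se, df=n - 1)
        # Upper test: H0 mean >= +delta vs H1 mean < +delta
        p_upper = stats.t.cdf((mean - delta) / se, df=n - 1)

        equivalent = (p_lower < alpha) and (p_upper < alpha)
        return equivalent, max(p_lower, p_upper)

    # Hypothetical usage: two 2-m temperature fields from the same forecast case,
    # differing only by small implementation noise.
    rng = np.random.default_rng(0)
    control = rng.normal(280.0, 5.0, size=(100, 100))
    candidate = control + rng.normal(0.0, 0.02, size=(100, 100))
    labels, n_objects = difference_objects(control, candidate, threshold=0.5)
    equivalent, p = tost_paired(control.ravel(), candidate.ravel(), delta=0.1)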