Monday, 23 January 2012: 4:30 PM
Using the Model Evaluation Tools (MET) with Python
Room 346/347 (New Orleans Convention Center)
Model developers and end users of numerical forecast information often require statistical information about the performance of predicted variables. The forecast verification community has been working collaboratively on a set of evaluation tools called the Model Evaluation Tools (MET), which is being developed primarily at the WRF Developmental Testbed Center at NCAR. MET provides several programs for generating statistical measures of model performance, comparing forecast values to observations either at individual station locations or on matched grids. In this work, we will develop Python modules for ingesting and analyzing the values generated by MET, allowing users to explore the data and aggregate the statistics in a variety of ways, including visualization of forecast errors in space and time. A demonstration of using MET with Python will be presented at the conference.
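As an illustration of the kind of module described above, the sketch below ingests a MET .stat file, groups one statistic by forecast lead time, and plots the result. It assumes the general .stat layout (a header row of common columns ending at LINE_TYPE, followed on each data row by line-type-specific statistic columns); the file name, the stat_offset value, and all helper names are assumptions made for this example rather than code from MET or from the work presented here.

"""Sketch of ingesting and aggregating MET .stat output in Python.

A minimal illustration, not part of MET: it assumes the .stat layout
of a common-column header ending at LINE_TYPE, with line-type-specific
statistic columns following on each data row. File name, stat_offset,
and helper names are assumptions for this example.
"""

import matplotlib.pyplot as plt


def read_stat_lines(path, line_type="CNT"):
    """Yield (common_fields, stat_fields) for rows matching line_type."""
    with open(path) as fh:
        header = fh.readline().split()
        n_common = header.index("LINE_TYPE") + 1  # common columns end here
        for raw in fh:
            fields = raw.split()
            if len(fields) <= n_common:
                continue
            common = dict(zip(header[:n_common], fields[:n_common]))
            if common.get("LINE_TYPE") == line_type:
                yield common, fields[n_common:]


def aggregate_stat(path, line_type, stat_offset, group_key="FCST_LEAD"):
    """Average one statistic column, grouped by a common column.

    stat_offset is the position of the desired statistic within the
    line-type-specific columns; look it up in the MET User's Guide for
    your MET version. A plain mean is used only for illustration.
    """
    sums, counts = {}, {}
    for common, stats in read_stat_lines(path, line_type):
        key = common[group_key]
        sums[key] = sums.get(key, 0.0) + float(stats[stat_offset])
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}


if __name__ == "__main__":
    # Hypothetical file and column offset, used only to exercise the sketch.
    by_lead = aggregate_stat("point_stat_sample.stat", "CNT", stat_offset=0)
    leads = sorted(by_lead)
    plt.plot(leads, [by_lead[k] for k in leads], marker="o")
    plt.xlabel("Forecast lead time")
    plt.ylabel("Mean statistic (assumed column)")
    plt.title("Aggregated forecast error from MET .stat output")
    plt.show()

Note that the plain mean taken here is only illustrative; a production module would recombine MET's partial-sum line types (e.g. SL1L2) before computing aggregate statistics such as RMSE.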