Monday, 23 January 2012
Validation and Verification of the Operational Land Analysis Activities at the Air Force Weather Agency
Hall E (New Orleans Convention Center)
The importance of operational benchmarking becomes clear when one considers the wide range of performance characteristics that numerical land surface models can exhibit under different combinations of physics options, resolution, and forcing datasets, since the combination used in an operational implementation may differ from the one evaluated in any benchmarking conducted during prior development. Decisions concerning operational implementation can therefore be better informed by benchmarking performance under the blends of these factors that are actually encountered in operations. To meet this and other needs of land analysis activities at the Air Force Weather Agency (AFWA), the Model Evaluation Toolkit (MET), a joint product of the National Center for Atmospheric Research Developmental Testbed Center (NCAR DTC), AFWA, and the user community, and the Land Information System (LIS) Verification Toolkit (LVT), developed at the Goddard Space Flight Center (GSFC), have been adapted to the operational benchmarking needs of AFWA's land characterization activities, so that the performance of new land modeling and related activities can be compared against previous-generation activities and against observational or analyzed datasets.

In this talk, three examples of adaptations of MET and LVT to the evaluation of LIS-related operations at AFWA will be presented. The first example will compare new surface rainfall analysis capabilities, intended to force AFWA's LIS, with previous capabilities, evaluated against retrieval-, model-, and measurement-based precipitation fields. Results generated with MET's grid-stat, neighborhood, wavelet, and object-based (MODE) evaluation utilities, adapted to AFWA's needs, will be discussed. This example will be framed in the context of identifying optimal blends of land surface model (LSM) forcing data sources under various atmospheric and land surface conditions by considering multiple metrics and their tradeoffs.

The second example, conducted with both the adapted MET utilities and those of LVT, will compare several of the surface flux and state variables output by AFWA's LIS with those of the Agricultural Meteorology (AGRMET) model and with various retrieval- and measurement-based datasets in order to judge relative performance. The third example will highlight LVT's capabilities in support of data assimilation (DA) through LIS, and the benefits of LVT as both a benchmarking tool and a facilitator of LIS DA. All examples will highlight the benefits of, and lessons learned from, systematic benchmarking in an operational context.
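As a rough illustration of the kinds of grid-point, categorical, and neighborhood statistics referred to above, the following Python sketch computes bias, RMSE, correlation, POD/FAR/CSI at a rain/no-rain threshold, and a fractions skill score for a gridded precipitation analysis against a reference field. This is not MET or LVT code, and the 1 mm threshold, 3x3 neighborhood, and synthetic fields are illustrative assumptions rather than values from the AFWA configuration.

# Hedged sketch (not MET/LVT code): illustrates the kinds of statistics that
# grid-point and neighborhood verification of a precipitation analysis involves.
# The threshold, neighborhood size, and synthetic fields are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def continuous_stats(analysis, reference):
    """Bias, RMSE, and Pearson correlation over all grid points."""
    diff = analysis - reference
    bias = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    corr = np.corrcoef(analysis.ravel(), reference.ravel())[0, 1]
    return bias, rmse, corr

def categorical_stats(analysis, reference, threshold=1.0):
    """POD, FAR, and CSI for exceedance of a rain/no-rain threshold (mm)."""
    fcst_yes = analysis >= threshold
    obs_yes = reference >= threshold
    hits = np.sum(fcst_yes & obs_yes)
    misses = np.sum(~fcst_yes & obs_yes)
    false_alarms = np.sum(fcst_yes & ~obs_yes)
    pod = hits / (hits + misses) if hits + misses else np.nan
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else np.nan
    return pod, far, csi

def fractions_skill_score(analysis, reference, threshold=1.0, window=3):
    """Neighborhood verification: compare event fractions in window x window boxes."""
    f_frac = uniform_filter((analysis >= threshold).astype(float), size=window)
    o_frac = uniform_filter((reference >= threshold).astype(float), size=window)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref else np.nan

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.gamma(shape=0.5, scale=2.0, size=(100, 100))   # synthetic "observed" rain (mm)
    analysis = reference + rng.normal(scale=0.5, size=(100, 100))  # synthetic analysis with noise
    print("bias/rmse/corr:", continuous_stats(analysis, reference))
    print("pod/far/csi  :", categorical_stats(analysis, reference))
    print("fss (3x3)    :", fractions_skill_score(analysis, reference))

The same continuous statistics (bias, RMSE, correlation) are the sort typically examined when comparing modeled surface fluxes and states against retrieval- and measurement-based datasets, as in the second example described above.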