Joint Poster Session 2 Land Model Benchmarking: Benchmarking, Verification and Validation in Terrestrial Hydrology Posters

Monday, 11 January 2016: 2:30 PM-4:00 PM
New Orleans Ernest N. Morial Convention Center
Hosts: (Joint between the 30th Conference on Hydrology and the 23rd Conference on Probability and Statistics in the Atmospheric Sciences)
Chairperson/Person in Charge: Michael B. Ek, NOAA/NWS/NCEP/EMC, College Park, MD
Chair: Sujay Kumar, NASA/GSFC, Greenbelt, MD

There are significant challenges associated with assessing the quality and informativeness of both models and data products, largely related to scale, heterogeneity, complexity, and representativeness. These challenges compound when assessing spatially and temporally distributed model/data products. This session solicits contributions related to innovative methods for: (1) assessing the quality of model/data products, and (2) assessing the fidelity of models of complex terrestrial hydrologic systems. The former might include methods for measuring or interpreting accuracy, precision, uncertainty, information content, reliability, observability, etc., while the latter recognizes that models are valuable beyond simply their ability to make accurate predictions. Related to the latter, we encourage contributions on model diagnostics, identification, and benchmarking. We are particularly interested in benchmarking studies that evaluate model performance against a priori metrics and expectations of performance. The use of novel techniques to assess distributed data or models, with a focus on impacts to, and understanding of, coupled land-atmosphere and hydrometeorological processes and prediction, is also encouraged. Please contact the program organizer, Mike Ek (Michael.Ek@noaa.gov), or the session chair, Sujay Kumar (Sujay.V.Kumar@nasa.gov), for additional information.
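
As one concrete illustration of the a priori benchmarking theme described above, the following minimal Python sketch scores a hypothetical model time series against observations and against a simple benchmark using the Nash-Sutcliffe Efficiency; the data values, the benchmark choice, and the variable names are illustrative assumptions, not drawn from any of the listed papers.

    import numpy as np

    def nash_sutcliffe(obs, sim):
        # Nash-Sutcliffe Efficiency: 1.0 is a perfect match; 0.0 means the
        # simulation is no better than predicting the observed mean.
        obs = np.asarray(obs, dtype=float)
        sim = np.asarray(sim, dtype=float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Hypothetical daily streamflow (m^3/s): observations, a candidate land model,
    # and a simple a priori benchmark (e.g., a climatological seasonal cycle).
    obs       = np.array([12.0, 15.0, 30.0, 22.0, 18.0, 14.0])
    model     = np.array([11.0, 16.0, 26.0, 24.0, 17.0, 15.0])
    benchmark = np.array([14.0, 14.0, 20.0, 20.0, 16.0, 16.0])

    nse_model = nash_sutcliffe(obs, model)
    nse_bench = nash_sutcliffe(obs, benchmark)

    # The benchmarking question is not "is the model accurate?" but
    # "does the model exceed the skill we expected a priori?"
    print(f"model NSE = {nse_model:.2f}, benchmark NSE = {nse_bench:.2f}")
    print("model adds skill beyond the benchmark" if nse_model > nse_bench
          else "model does not exceed the a priori benchmark")

The point of such a comparison is that a model is judged against the performance expected a priori from simpler information, not only by the size of its errors.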

Papers:
89
Comparison of Global Mountain Snow Storage Estimates and the Prospect of Improvement with Regional Climate Modeling
Melissa L. Wrzesien, Ohio State University, Columbus, OH; and M. T. Durand and T. M. Pavelsky

90
Basin-Scale Evaluation of the Land Surface Water Budget in the NCEP Operational and Research NLDAS-2 Systems
Youlong Xia, IMSG at EMC/NCEP, College Park, MD; and M. Ek, B. Cosgrove, K. Mitchell, C. Peters-Lidard, M. J. Brewer, D. M. Mocko, S. V. Kumar, H. Wei, J. Meng, and L. Luo

92
Evaluating Evaporation Components in Flux Data and Model Output
Emma L. Robinson, Centre for Ecology and Hydrology, Wallingford, United Kingdom; and E. Blyth

93
Evaluating a High-resolution Operational LDAS/LSM with In-situ Soil Measurements Throughout North Carolina
John N. McHenry, Baron Advanced Meteorological Systems, LLC, Raleigh, NC; and A. Sims and D. T. Olerud

94
Similarity Assessment of NLDAS Multi-Model Ensemble Outputs
Shugong Wang, SAIC at NASA/GSFC, Greenbelt, MD; and S. Kumar, D. Mocko, C. Peters-Lidard, Y. Xia, and M. Ek

95
Quantifying the Mismatch between Snow and Climate in Global Reanalyses and Land Models
Patrick D. Broxton, University of Arizona, Tucson, AZ; and X. Zeng and N. Dawson

97
The North American Land Data Assimilation System (NLDAS) Science Testbed: An Environment for the Systematic Evaluation and Benchmarking of NLDAS Outputs
David M. Mocko, SAIC at NASA/GSFC, Greenbelt, MD; and S. V. Kumar, S. Wang, K. R. Arsenault, C. Peters-Lidard, G. S. Nearing, Y. Xia, M. B. Ek, and J. Dong

99
The Land Verification Toolkit: A Common Methodology for Benchmarks, Evaluation Procedures, and Metrics for the Land Surface Modeling Community
Jerry Wegiel, SAIC, Greenbelt, MD; and S. V. Kumar, C. Peters-Lidard, M. Best, M. B. Ek, S. G. Benjamin, J. D. Cetola, J. B. Eylander, K. R. Arsenault, J. Geiger, D. M. Mocko, S. Wang, C. Franks, R. L. Ruhge, E. D. Hunt, T. A. Lewiston, M. Freimund, N. Wright, T. Smirnova, S. Rheingrover, K. W. Harrison, Y. Tian, Y. Liu, J. A. Santanello Jr., and M. Shaw

100
Stochastic Analysis of Nonlinear Sorption at the Cape Cod Tracer Site
Neal T. Graham, University of Maryland, College Park, MD; and F. Miralles-Wilhelm
