141107 Strategies for evaluating quality assurance procedures

Thursday, 14 August 2008: 12:00 AM
Harmony AB (Telus Whistler Conference Centre)
Imke Durre, NOAA/NESDIS/NCDC, Asheville, NC; and M. J. Menne and R. S. Vose

In this presentation, we will outline evaluation strategies that facilitate the development and documentation of robust automated quality assurance (QA) procedures. Traditionally, thresholds for the QA of climate data have been based on target flag rates or statistical confidence limits. However, these approaches do not necessarily quantify a procedure's effectiveness at detecting true errors in the data. Rather, as will be illustrated by way of an "extremes check" for daily precipitation totals, information on the performance of a QA test is best obtained through a systematic manual inspection of samples of flagged values combined with a careful analysis of geographical and seasonal patterns of flagged observations. Such an evaluation process not only helps to document the effectiveness of each individual test, but when applied repeatedly throughout the development process, aids in choosing the optimal combination of QA procedures and associated thresholds. In addition, the approach constitutes a mechanism for reassessing system performance whenever revisions are made following initial development. In fact, these evaluation strategies were used to develop the suite of QA procedures currently applied to NCDC's daily Global Historical Climatology Network dataset.
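The kind of "extremes check" and flag-rate evaluation described above can be sketched as follows. This is a minimal illustration on synthetic data, not the actual GHCN-Daily procedure: the threshold value, the function names, and the sampling step are all assumptions made for demonstration.

```python
import random

def extremes_check(values, threshold):
    """Flag daily precipitation totals (mm) that exceed a fixed
    climatological threshold. Hypothetical illustration only; an
    operational check would use station- and season-specific limits."""
    return [i for i, v in enumerate(values) if v > threshold]

def flag_rate(flags, n):
    """Fraction of observations flagged -- the traditional tuning
    target that, by itself, says nothing about true-error detection."""
    return len(flags) / n if n else 0.0

# Synthetic one-year series: light precipitation plus two suspect spikes.
random.seed(0)
series = [round(random.uniform(0.0, 50.0), 1) for _ in range(365)]
series[100] = 1200.0  # plausible data-entry error (e.g., shifted decimal)
series[200] = 900.0

flags = extremes_check(series, threshold=500.0)  # threshold is illustrative
rate = flag_rate(flags, len(series))

# Draw a sample of flagged values for manual inspection, the step the
# abstract argues is essential for quantifying a test's effectiveness.
sample = [(i, series[i]) for i in flags[:10]]
print(f"flag rate: {rate:.3%}", sample)
```

In practice, the sampled flags would be inspected by hand (and cross-referenced against geographical and seasonal flag patterns) before the threshold is accepted, rather than tuning the threshold to hit a target flag rate.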