J3.10
Assessing the impact of systematic observation errors on climate and operational precipitation analyses
PAPER WITHDRAWN
Edward I. Tollerud, NOAA/FSL, Boulder, CO; and T. L. Fowler, B. G. Brown, and R. S. Collander
Apart from random transient errors, which may be thought to average out over monthly-to-climate time scales and which are at any rate often so flagrant as to be easily identified, precipitation observations are susceptible to types of systematic error that cannot be so easily dismissed. Examples from the Coop network include observers who take their daily observations at non-standard hours or who fail to report zero-precipitation observations at all. Other systematic measurement errors include inadequate metadata (inaccurate locations or unreported station moves, for instance). Automatic reporting gages from operational networks like the hourly HADS may not be prey to these human foibles, but they receive less direct attention and may suffer from subtle, long-lasting, and thus potentially critical breakdowns. Depending on their use, both of these dataset types can produce network-specific “representativeness” errors that arise from variations in gage-site density, unresolved terrain features, and so on.
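As an illustration of the kind of automated check such errors invite (the abstract does not specify the authors' algorithms; the function name, fields, and thresholds below are hypothetical), one simple screen flags gages that never report zero or that repeat the same nonzero value for an implausibly long run of hours:

```python
# A minimal sketch, not the authors' method: screen hourly gage records for
# two suspect behaviors named above, missing zero reports and "stuck" values.
from collections import defaultdict

def flag_suspect_gages(reports, stuck_hours=48, min_hours=720):
    """reports: iterable of (gage_id, hour_index, precip_mm) tuples."""
    series = defaultdict(dict)
    for gage_id, hour, value in reports:
        series[gage_id][hour] = value

    suspect = set()
    for gage_id, by_hour in series.items():
        if len(by_hour) < min_hours:
            continue  # record too short to judge
        values = [by_hour[h] for h in sorted(by_hour)]
        # Screen 1: gage never reports zero (possible missing-zero problem).
        if min(values) > 0.0:
            suspect.add(gage_id)
            continue
        # Screen 2: long run of identical nonzero values (possible stuck sensor).
        run, prev = 1, values[0]
        for v in values[1:]:
            run = run + 1 if (v == prev and v > 0.0) else 1
            prev = v
            if run >= stuck_hours:
                suspect.add(gage_id)
                break
    return suspect
```

A gage failing either screen could then be withheld or retained at different screening levels to quantify the quantity-vs.-quality tradeoff described below.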
The direct approach to these errors, of course, is to identify and screen them thoroughly. Given the difficulties in doing so, however, it must first be established whether a large set of imperfectly screened stations is in fact better than a smaller set of much more rigorously screened stations. We address this question of “quantity vs. quality” in the hourly HADS datastream by devising a set of automated screening algorithms for known gage errors and then quantitatively examining differences in applications of the HADS observations at various levels of screening. For the Coop network, we compare long-period time series based on verification measures (specifically, magnitude bias, frequency bias, and equitable threat score) that compare stations with their neighbors. We emphasize the question of how many station-neighbor pairs (and hence how long a diagnostic time period) are required to confidently identify changes in station reporting quality.
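For reference, the standard definitions of the neighbor-comparison scores named above, written for a 2×2 contingency table of threshold exceedances between a station and a neighbor (the thresholds and pairing scheme are not given in the abstract), are:

```latex
% a = both station and neighbor exceed the threshold (hits),
% b = station only (false alarms), c = neighbor only (misses),
% d = neither; n = a + b + c + d.
\[
  \text{frequency bias} = \frac{a+b}{a+c}, \qquad
  \mathrm{ETS} = \frac{a - a_r}{a + b + c - a_r},
  \quad \text{where } a_r = \frac{(a+b)(a+c)}{n}.
\]
```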
Joint Session 3, Data Quality Control and Metadata (Joint with Applied Climatology, SMOI, and AASC)
Thursday, 23 June 2005, 8:00 AM-12:00 PM, South Ballroom