Monday, 13 May 2002: 2:30 PM
A climatology of extreme hourly temperature variability across the United States: application to quality control.
Conventional temporal consistency checks in the quality control of hourly temperature data focus on the magnitude of the hourly rate of change. In an exploratory analysis, nearly all flagged historical hourly temperatures that were judged erroneous were spikes or dips, probably resulting from simple transcription errors. Using 1949–1958 hourly surface airways observations (SAO), two methods are developed that explicitly define a spike or dip as a temperature for which the signs of the two consecutive hourly differences are opposite. Spike/dip magnitude is defined either as the minimum of the two consecutive hourly difference magnitudes (method MDH2) or as the residual from a five-hour median smooth (method MSR5). The flagging threshold is set at the upper bound of the 95% confidence interval about the bootstrapped 99.95th percentile of all spike/dip magnitudes at a given station and season. Spatial and seasonal patterns emerge, with lower thresholds typically near the coasts and in winter, and higher thresholds in the intermountain West and in summer or the transition seasons. To evaluate the methods, 250 hourly observations were chosen at random from 1959–1963 SAO for each of seven randomized experiments designed to mimic human transcription errors (digit transposition, sign omission or commission, etc.). On average, method MSR5 flagged the greatest percentage of the deliberate errors, while a conventional method that flags an absolute hourly difference of 18°F (DT18) flagged the lowest. In flagging the original 1959–1963 SAO, MSR5 was more stringent than MDH2, catching obvious inconsistencies but also valid thunderstorm-related observations. MSR5 performance would likely improve if internal consistency checks were added.
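
A minimal sketch of the spike/dip definitions and the bootstrapped threshold is given below, assuming Python/NumPy (the abstract does not specify an implementation; function names, the resample count, and the two-sided percentile interval are illustrative assumptions, and the authors' seasonal stratification and missing-data handling are not reproduced here).

    import numpy as np

    def spike_dip_mask(t):
        # A spike/dip: the two consecutive hourly differences around an
        # interior hour have opposite signs.
        t = np.asarray(t, dtype=float)
        d_in = t[1:-1] - t[:-2]    # change into hour i
        d_out = t[2:] - t[1:-1]    # change out of hour i
        return (np.sign(d_in) * np.sign(d_out)) < 0, d_in, d_out

    def mdh2_magnitudes(t):
        # MDH2: magnitude is the smaller of the two absolute hourly
        # differences at each spike/dip; zero elsewhere.
        mask, d_in, d_out = spike_dip_mask(t)
        return np.where(mask, np.minimum(np.abs(d_in), np.abs(d_out)), 0.0)

    def msr5_magnitudes(t):
        # MSR5: magnitude is the absolute residual from a five-hour
        # median smooth centred on the hour in question.
        t = np.asarray(t, dtype=float)
        mask, _, _ = spike_dip_mask(t)
        mags = np.zeros(len(t) - 2)
        for i in range(2, len(t) - 2):       # need two hours on each side
            if mask[i - 1]:
                mags[i - 1] = abs(t[i] - np.median(t[i - 2:i + 3]))
        return mags

    def bootstrap_threshold(mags, q=99.95, n_boot=1000, ci=0.95, seed=0):
        # Flagging threshold: upper bound of the 95% confidence interval
        # about the bootstrapped 99.95th percentile of spike/dip magnitudes
        # (n_boot and the two-sided interval are assumed, not stated).
        rng = np.random.default_rng(seed)
        mags = np.asarray(mags)
        mags = mags[mags > 0]
        boot = [np.percentile(rng.choice(mags, size=mags.size, replace=True), q)
                for _ in range(n_boot)]
        return np.percentile(boot, 100 * (1 - (1 - ci) / 2))

For example, a transcription error of 75 for 65 in an otherwise smooth series is isolated by both definitions:

    t = np.array([61.0, 62.0, 75.0, 63.0, 62.0, 61.0])
    mdh2_magnitudes(t)   # [0., 12., 0., 0.]: min(|+13|, |-12|) = 12 at the spike
    msr5_magnitudes(t)   # [0., 13., 0., 0.]: |75 - median(61,62,75,63,62)| = 13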