Decisions involving climatic precipitation analyses often hinge on issues of data quantity and data quality. For instance, we frequently face the question: At what point do the liabilities of questionable historical gauge measurements outweigh the advantages of the maximum density of observations? When several independent (and often substantially different) gauge networks are available, what characteristics of the data, or of the analyses being undertaken, limit the value of combining them?
We address these questions using observations from a network of hourly reporting stations, the Hourly Precipitation Dataset (HPD), managed by the National Climatic Data Center. These data span periods of up to 50 years, and although sites east of the Mississippi are more evenly distributed, overall station density is fairly good across the contiguous 48 states. Data quality is generally good. One unfortunate characteristic of these data, however, is a gradual changeover over time from gauges that report precipitation to the hundredth of an inch to gauges limited to tenth-inch resolution. By treating the two sets of gauges as separate data sets and performing parallel analyses, we show significant differences in precipitation frequency that are obscured when all gauges are included irrespective of resolution. Longer-term averages and station extreme values are also examined with this resolution difference in mind. Finally, we use a Monte Carlo scheme to randomly vary the set of observations included in gridded analyses and thereby compute a measure of expected analysis variance. This measure is then used to compare analyses that include all available observations with others that are more selective.
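The effect of the resolution changeover on precipitation frequency can be illustrated with a toy calculation. The sketch below is not the paper's analysis: the synthetic exponential rain amounts and the simple rounding rule are assumptions chosen only to show why tenth-inch gauges undercount wet hours relative to hundredth-inch gauges.

```python
import random

random.seed(0)
# Synthetic hourly precipitation amounts (inches); mostly light events,
# drawn from an exponential distribution purely for illustration.
obs = [random.expovariate(20.0) for _ in range(10000)]

def quantize(x, res):
    """Report an amount at gauge resolution `res` by simple rounding.
    (Real gauges may truncate or accumulate in tips instead.)"""
    return round(x / res) * res

# Count "wet" hours at each resolution: an hour registers as wet only if
# the reported (quantized) amount reaches the gauge's smallest increment.
wet_hundredth = sum(1 for x in obs if quantize(x, 0.01) >= 0.01)
wet_tenth = sum(1 for x in obs if quantize(x, 0.10) >= 0.10)
print(wet_hundredth, wet_tenth)
```

Because most hourly amounts are small, many events that register on a hundredth-inch gauge fall below the tenth-inch threshold entirely, so pooling the two gauge types mixes two very different frequency distributions.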
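The Monte Carlo variance idea can be sketched as follows. This is a minimal illustration under stated assumptions, not the study's actual scheme: stations and values are synthetic, the gridding step is a simple inverse-distance weighting stand-in, and the subsampling fraction and trial count are arbitrary.

```python
import random

def idw_grid(stations, grid_pts, power=2.0):
    """Inverse-distance-weighted analysis of point observations onto grid
    points (a simple placeholder for the actual gridding scheme)."""
    field = []
    for gx, gy in grid_pts:
        wsum = vsum = 0.0
        for sx, sy, val in stations:
            d2 = (gx - sx) ** 2 + (gy - sy) ** 2
            if d2 == 0.0:
                wsum, vsum = 1.0, val  # grid point coincides with a station
                break
            w = 1.0 / d2 ** (power / 2.0)
            wsum += w
            vsum += w * val
        field.append(vsum / wsum)
    return field

def monte_carlo_variance(stations, grid_pts, n_trials=100, keep_frac=0.8, seed=0):
    """Randomly withhold stations, regrid each subsample, and return the
    per-grid-point variance across realizations as an analysis-spread measure."""
    rng = random.Random(seed)
    n_keep = max(2, int(keep_frac * len(stations)))
    fields = [idw_grid(rng.sample(stations, n_keep), grid_pts)
              for _ in range(n_trials)]
    variances = []
    for j in range(len(grid_pts)):
        vals = [f[j] for f in fields]
        mean = sum(vals) / len(vals)
        variances.append(sum((v - mean) ** 2 for v in vals) / len(vals))
    return variances

# Toy domain: 30 synthetic stations, 10x10 grid of cell centers.
random.seed(1)
stations = [(random.uniform(0, 10), random.uniform(0, 10),
             random.expovariate(1.0)) for _ in range(30)]
grid_pts = [(x + 0.5, y + 0.5) for x in range(10) for y in range(10)]
var = monte_carlo_variance(stations, grid_pts)
```

Comparing `var` fields computed from an all-inclusive station set against those from a more selective set gives one way to weigh the density-versus-quality tradeoff posed above.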