51 Comparison of Radar- and Gauge-Derived Precipitation Fields in Southeast River Forecast Center's Hydrologic Service Area, 2005–2016

Monday, 8 January 2018
Exhibit Hall 3 (ACC) (Austin, Texas)
Ian Blaylock, NWS, Peachtree City, GA

Historically, the precipitation-runoff processing components of National Weather Service river forecast models have been calibrated against gage-derived daily precipitation estimates. Radar-derived precipitation estimates, subject to rigorous quality control as part of daily operations, have served as the primary hydrometeorological forcing for the Southeast River Forecast Center’s (SERFC) river forecasting operations since 2008. This mismatch between the calibration and operational datasets risks inadvertently introducing biases into the forecast models. The magnitude and spatial consistency of these potential biases must be assessed to determine whether re-calibration is necessary and to prioritize those efforts strategically.

Locally archived radar- and gage-derived precipitation estimates for each operational forecast basin are available from early 2005 to the present. Detailed statistical analyses were performed to investigate the spatio-temporal relationship between basin-averaged gage- and radar-derived precipitation estimates. The effect of the quality of their input fields (gage density and radar coverage) was also assessed, and trends were evaluated with respect to factors including operational changes and diurnal and seasonal cycles. It was found that convective precipitation was associated with higher radar-derived estimates relative to gage-derived estimates, with the converse being true for stratiform environments. This relationship was more statistically significant in areas with higher gage density. Additionally, the positive bias toward radar-derived estimates was exaggerated with increasing distance from the radar.
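As a rough illustration of the kind of comparison described above, the following Python sketch computes a multiplicative bias and a correlation between daily basin-averaged radar- and gage-derived precipitation series. The column names, the warm/cool seasonal split, and the specific metrics are assumptions made for illustration only and do not reproduce the statistics actually used in the study.

    # Hypothetical sketch: compare basin-averaged radar- and gage-derived
    # daily precipitation for a single forecast basin. Column names and
    # metrics are illustrative assumptions, not the study's actual method.
    import pandas as pd

    def compare_precip(df: pd.DataFrame) -> dict:
        """df is indexed by date with columns 'radar_mm' and 'gage_mm'."""
        # Keep only days on which at least one estimate recorded precipitation.
        wet = df[(df["radar_mm"] > 0) | (df["gage_mm"] > 0)]

        # Multiplicative (mean-field) bias: values > 1 indicate that the
        # radar-derived estimate exceeds the gage-derived estimate on average.
        bias_all = wet["radar_mm"].sum() / wet["gage_mm"].sum()

        # Pearson correlation between the daily basin-averaged estimates.
        corr_all = wet["radar_mm"].corr(wet["gage_mm"])

        # Crude seasonal split as a stand-in for convective vs. stratiform
        # regimes (warm season May-September, cool season otherwise).
        warm_mask = wet.index.month.isin(range(5, 10))
        warm, cool = wet[warm_mask], wet[~warm_mask]
        return {
            "bias_all": bias_all,
            "corr_all": corr_all,
            "bias_warm": warm["radar_mm"].sum() / warm["gage_mm"].sum(),
            "bias_cool": cool["radar_mm"].sum() / cool["gage_mm"].sum(),
        }

In this sketch, a seasonal bias ratio above one in the warm season and below one in the cool season would be consistent with the convective versus stratiform behavior reported in the abstract.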
