Monday, 21 January 2008
The effect of hydrologic model calibration on seasonal streamflow forecast performance
Exhibit Hall B (Ernest N. Morial Convention Center)
Forecasters view hydrologic model calibration as a central strategy for improving the performance of model-based streamflow forecasts. Forecast errors are often evaluated using measures such as root mean squared error (RMSE), which can be heavily influenced by bias; that bias is clearly reduced by calibration. On the other hand, bias can also be reduced by post-processing -- e.g., using retrospective simulation error statistics to reduce or eliminate bias, leaving the variability of the forecasts about the true value as the major contribution to RMSE. This observation invites the question: how much does calibration reduce forecast error beyond what post-processing can accomplish by removing bias? We address this question through retrospective evaluation of forecast errors at eight streamflow forecast locations distributed across the western U.S., for lead times ranging from one to six months, and for forecasts initiated from December 1 through June 1, spanning the period when most runoff occurs from snowmelt-dominated western U.S. rivers. We evaluate Ensemble Streamflow Prediction (ESP) forecast errors both for uncalibrated forecasts to which a percentile-mapping bias correction is applied, and for forecasts from an objectively calibrated model without explicit bias correction. Using the coefficient of prediction (Cp), which is essentially the fraction of variance explained by the forecast, we find that calibration captures a modest increase in Cp beyond what is achievable by bias correction alone. This increment ranged from 0.02 to 0.15 across sites and forecast periods, with one case in which Cp was lower for calibrated forecasts than for uncalibrated forecasts with bias correction.
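The two quantitative tools named above -- percentile-mapping bias correction and the coefficient of prediction -- can be illustrated with a short sketch. This is not the authors' implementation; the function names, the empirical-CDF construction, and the Cp definition as one minus the ratio of forecast mean squared error to observed variance are assumptions for illustration only.

```python
import numpy as np

def quantile_map(fcst, fcst_clim, obs_clim):
    """Percentile-mapping bias correction (illustrative sketch):
    find each forecast value's percentile within the retrospective
    forecast climatology, then read off the observed climatology
    value at that same percentile."""
    fcst = np.asarray(fcst, dtype=float)
    # Empirical percentile of each forecast value in its own climatology.
    pct = np.searchsorted(np.sort(fcst_clim), fcst) / len(fcst_clim)
    pct = np.clip(pct, 0.0, 1.0)
    # Invert the observed empirical CDF at those percentiles.
    return np.quantile(np.sort(obs_clim), pct)

def coefficient_of_prediction(obs, fcst):
    """Cp = 1 - MSE(forecast) / Var(obs): an assumed form measuring
    the fraction of observed variance explained by the forecast.
    Cp = 1 for a perfect forecast; Cp = 0 for a forecast that always
    issues the climatological mean."""
    obs = np.asarray(obs, dtype=float)
    fcst = np.asarray(fcst, dtype=float)
    mse = np.mean((fcst - obs) ** 2)
    return 1.0 - mse / np.var(obs)
```

Under this formulation, bias correction shifts forecasts toward the observed distribution (raising Cp by removing the bias term in MSE), while calibration can additionally reduce the spread of forecasts about the true values -- the increment the abstract quantifies.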