18th Conference on Probability and Statistics in the Atmospheric Sciences

1.3

Probabilistic forecasts, calibration and sharpness

Tilmann Gneiting, University of Washington, Seattle, WA; and F. Balabdaoui and A. E. Raftery

We consider probabilistic forecasts of continuous or mixed discrete-continuous weather variables, such as temperature or wind speed, and propose a diagnostic approach to the evaluation of predictive performance. This is based on the paradigm of maximizing the sharpness of the forecast distributions subject to calibration. Calibration refers to the statistical consistency between the probabilistic forecasts and the observations, and is a joint property of the predictions and the verifications. Sharpness refers to the concentration of the predictive distributions, and is a property of the forecasts only.
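As a concrete illustration of the paradigm (not part of the abstract; the data-generating process, sample size and function name below are invented for the example), the following minimal Python sketch contrasts a climatological forecaster with an equally calibrated but sharper forecaster, using empirical interval coverage as a simple calibration check and average interval width as a simple sharpness measure.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical data-generating process: on each day t a signal mu_t ~ N(0, 1)
# is drawn, and the observation is mu_t plus N(0, 1) noise.
n = 10000
mu = rng.normal(0.0, 1.0, size=n)
obs = mu + rng.normal(0.0, 1.0, size=n)

def coverage_and_width(center, scale, obs, level=0.90):
    """Empirical coverage of central prediction intervals (a calibration
    check) and their average width (a simple sharpness measure)."""
    half = norm.ppf(0.5 + level / 2) * scale
    covered = np.abs(obs - center) <= half
    return covered.mean(), np.mean(2 * half)

# Climatological forecaster: N(0, sqrt(2)) every day -- calibrated but unsharp.
print(coverage_and_width(np.zeros(n), np.sqrt(2.0), obs))
# Informed forecaster: N(mu_t, 1) -- equally calibrated, yet sharper.
print(coverage_and_width(mu, 1.0, obs))

Both forecasters attain roughly the nominal 90% coverage, but the informed forecaster issues much narrower intervals; under the paradigm of maximizing sharpness subject to calibration, it is the preferable forecast.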

A simple game-theoretic framework allows us to distinguish probabilistic calibration, exceedance calibration and marginal calibration. We propose and study tools for checking calibration and sharpness, among them the probability integral transform (PIT) histogram, marginal calibration plots, sharpness diagrams and proper scoring rules. The continuous ranked probability score (CRPS) is a particularly attractive scoring rule, in that it permits the direct comparison of deterministic forecasts, raw ensemble forecasts and postprocessed forecasts. This is illustrated by an application to temperature forecasts using the University of Washington mesoscale ensemble.
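As a rough sketch of how two of these tools can be computed for an ensemble forecast (not part of the abstract; the function names, synthetic data and ensemble size are invented for the example), PIT values can be taken from the ensemble's empirical CDF, and the ensemble CRPS can be written as E|X - y| - 0.5 E|X - X'|, averaged over verification cases.

import numpy as np

def pit_values(ens, obs):
    """PIT of each observation under the empirical CDF of its ensemble;
    a near-uniform PIT histogram indicates probabilistic calibration."""
    return np.mean(ens <= obs[:, None], axis=1)

def crps_ensemble(ens, obs):
    """Mean continuous ranked probability score of an m-member ensemble,
    CRPS = E|X - y| - 0.5 E|X - X'|.  For a one-member (deterministic)
    forecast this reduces to the mean absolute error, which is what
    permits direct comparison with deterministic forecasts."""
    term1 = np.mean(np.abs(ens - obs[:, None]), axis=1)
    term2 = np.mean(np.abs(ens[:, :, None] - ens[:, None, :]), axis=(1, 2))
    return np.mean(term1 - 0.5 * term2)

# Hypothetical usage with synthetic data (sizes are illustrative only):
rng = np.random.default_rng(1)
obs = rng.normal(15.0, 3.0, size=500)                     # verifying temperatures
ens = obs[:, None] + rng.normal(0.0, 3.0, size=(500, 8))  # an 8-member ensemble
print(pit_values(ens, obs)[:5])
print(crps_ensemble(ens, obs))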

We close with a case study on probabilistic 2-hour forecasts of wind speed at the Stateline wind energy center in the US Pacific Northwest.

Session 1, Forecast Evaluation
Monday, 30 January 2006, 9:00 AM-11:45 AM, A304
