Choosing a scoring rule for verification of forecast probability distributions: Continuous ranked probability score or ignorance score?

Thursday, 21 January 2010: 12:00 PM
B305 (GWCC)
Jonathan R. Moskaitis, Naval Research Laboratory, Monterey, CA

Forecast probability distributions for a scalar predictand are often verified using the continuous ranked probability score (CRPS) and/or the ignorance score (IGN). Both of these scoring rules are strictly proper, meaning that the expected score is uniquely optimized by a prediction of the true forecast probability distribution. Strict propriety is a necessary property for a scoring rule to be viably used in the probabilistic forecast system development process. However, among strictly proper scoring rules such as CRPS and IGN, the probabilistic forecast system developer is still left with a choice: Which scoring rule is the better one to use?
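For a Gaussian forecast distribution, both scoring rules have well-known closed forms: the Gaussian CRPS expression of Gneiting and Raftery, and IGN as the negative log of the forecast density at the verifying observation. The following sketch (function names are illustrative, not from the paper) evaluates both for a single forecast–observation pair:

```python
import math

def _pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def crps_gaussian(mu, sigma, y):
    """CRPS of a N(mu, sigma^2) forecast at observation y (closed form)."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * _cdf(z) - 1.0)
                    + 2.0 * _pdf(z)
                    - 1.0 / math.sqrt(math.pi))

def ign_gaussian(mu, sigma, y):
    """Ignorance score: negative log forecast density at observation y."""
    z = (y - mu) / sigma
    return 0.5 * math.log(2.0 * math.pi * sigma * sigma) + 0.5 * z * z

# Score a standard Gaussian forecast against an observation at y = 0.5.
print(crps_gaussian(0.0, 1.0, 0.5))
print(ign_gaussian(0.0, 1.0, 0.5))
```

Both scores are negatively oriented (smaller is better); CRPS has the units of the predictand, while IGN is measured in nats.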

Insight into this question is gained by considering how CRPS and IGN score imperfect forecasts relative to each other and relative to the perfect forecast (the true distribution). Under the assumptions of a Gaussian forecast distribution and a standard Gaussian true distribution, expected CRPS and expected IGN are plotted as functions of the forecast mean and variance. For values of the forecast variance greater than the true variance, the expected CRPS and expected IGN fields are qualitatively similar, despite the radically different functional forms of CRPS and IGN. However, for values of the forecast variance less than the true variance, the expected score fields differ substantially. Relative to CRPS, IGN is expected to assign a very harsh penalty to a forecast with an erroneously low variance. Because of this property, it is argued that IGN verification is best suited for probabilistic prediction applications in which it is of paramount importance to avoid underprediction of the true variance.
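The asymmetry described above can be reproduced in a few lines. Under the abstract's Gaussian assumptions, the expected CRPS follows from the kernel representation CRPS(F, y) = E|X − y| − ½E|X − X′| with X, X′ ~ F, and the expected IGN is the Gaussian cross-entropy. The sketch below (illustrative names, not the paper's code) scores an underdispersed and an overdispersed forecast, each off by a factor of four in standard deviation, against a standard Gaussian truth:

```python
import math

def _cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mean_abs_normal(m, s):
    """E|Z| for Z ~ N(m, s^2)."""
    return (s * math.sqrt(2.0 / math.pi) * math.exp(-m * m / (2.0 * s * s))
            + m * (2.0 * _cdf(m / s) - 1.0))

def expected_crps(mu, sigma, mu_t=0.0, sigma_t=1.0):
    """Expected CRPS of a N(mu, sigma^2) forecast, truth N(mu_t, sigma_t^2).

    Uses CRPS(F, y) = E|X - y| - 0.5 E|X - X'|; here X - Y is Gaussian
    and 0.5 E|X - X'| = sigma / sqrt(pi)."""
    s_xy = math.sqrt(sigma ** 2 + sigma_t ** 2)
    return mean_abs_normal(mu - mu_t, s_xy) - sigma / math.sqrt(math.pi)

def expected_ign(mu, sigma, mu_t=0.0, sigma_t=1.0):
    """Expected IGN = cross-entropy of the truth w.r.t. the forecast."""
    return (0.5 * math.log(2.0 * math.pi * sigma ** 2)
            + (sigma_t ** 2 + (mu - mu_t) ** 2) / (2.0 * sigma ** 2))

# Forecast std. dev. 0.25 (underdispersed), 1.0 (perfect), 4.0 (overdispersed):
for s in (0.25, 1.0, 4.0):
    print(f"sigma={s:<5} E[CRPS]={expected_crps(0.0, s):.3f} "
          f"E[IGN]={expected_ign(0.0, s):.3f}")
```

Relative to the perfect forecast, the expected CRPS penalty is actually smaller for the underdispersed forecast than for the overdispersed one, whereas the expected IGN penalty for the underdispersed forecast is several times the overdispersed penalty, consistent with the argument that IGN punishes variance underprediction far more harshly.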