Toolbox for Evaluating Ensembles Using an Information Gain Measure

Monday, 5 January 2015
Hannah Aizenman, City College of New York, New York, NY; and M. Grossberg, I. Gladkova, and N. Krakauer

The monthly NCEP Climate Forecast System Version 2 (CFSv2) yields an ensemble of predictions of various variables a month in advance. We aggregate the temperature predictions into a probabilistic prediction to better capture the uncertainty in both the climatology and the ensemble predictions. By measuring the accuracy of these ensembles using information gain, we have also been able to use these metrics to diagnose specific problems in a forecast. Using the scientific Python ecosystem, we built a library for developing, evaluating, and visualizing these probabilistic forecasts.
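
As a rough illustration of this kind of scoring (a minimal sketch, not the library's actual API), the example below summarizes an ensemble as a Gaussian predictive distribution and computes an information-gain (log-score) measure against climatology; the function name information_gain, the Gaussian assumption, and the synthetic data are ours.

```python
# Minimal sketch: information gain of a Gaussian probabilistic forecast over
# climatology, measured as the mean difference in log likelihood at the
# observed values. Illustrative only; not the authors' implementation.
import numpy as np
from scipy.stats import norm

def information_gain(obs, fc_mean, fc_std, clim_mean, clim_std):
    """Mean gain in log likelihood (nats) of the forecast over climatology."""
    ll_forecast = norm.logpdf(obs, loc=fc_mean, scale=fc_std)
    ll_climatology = norm.logpdf(obs, loc=clim_mean, scale=clim_std)
    return np.mean(ll_forecast - ll_climatology)

# Example: 20 ensemble members at 100 grid points, summarized as a Gaussian
rng = np.random.default_rng(0)
ensemble = rng.normal(0.5, 1.0, size=(20, 100))   # members x grid points
obs = rng.normal(0.3, 1.0, size=100)              # synthetic observations
gain = information_gain(obs,
                        fc_mean=ensemble.mean(axis=0),
                        fc_std=ensemble.std(axis=0),
                        clim_mean=0.0, clim_std=1.0)
print(f"mean information gain over climatology: {gain:.3f} nats")
```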

Using our library, we found that the standard deviation of the climatology is a better measure of forecast uncertainty than the ensemble spread. These results led us to implement a 2-3 month lookback auto-regressive climatological model to show that the forecast provides more information than recent history alone. We then incorporated the forecast into this vectorized auto-regressive model, finding that combining climatology, recent history, and the forecast did better than any individual element.
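
The sketch below shows one plausible form of such a combined model, assuming a simple least-squares fit of the temperature anomaly on 2-3 months of lagged anomalies plus the forecast anomaly; the helper fit_ar_with_forecast and the synthetic series are purely illustrative, not the paper's implementation.

```python
# Minimal sketch: a lookback auto-regressive model that combines climatology
# (via anomalies), recent history (lagged anomalies), and the forecast anomaly
# as linear predictors. Assumed names and data; illustrative only.
import numpy as np

def fit_ar_with_forecast(temps, forecasts, clim_mean, lags=(1, 2, 3)):
    """Least-squares fit of anomaly[t] on lagged anomalies and the forecast anomaly."""
    anom = temps - clim_mean
    fc_anom = forecasts - clim_mean
    t = np.arange(max(lags), len(anom))      # months with a full lookback window
    X = np.column_stack([anom[t - lag] for lag in lags] + [fc_anom[t]])
    y = anom[t]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs                            # weights on recent history and forecast

# Example with synthetic monthly series
rng = np.random.default_rng(0)
temps = 15 + rng.normal(0, 1, size=120)           # ten years of monthly temperatures
forecasts = temps + rng.normal(0, 0.5, size=120)  # imperfect forecasts of the same months
print(fit_ar_with_forecast(temps, forecasts, clim_mean=15.0))
```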

The scientific Python stack was integral to our work. The simplicity of NumPy, SciPy, and Matplotlib made for a relatively quick turnaround from idea to code, and the use of IPython notebooks greatly sped up the write, run, and review-results cycle.