88th Annual Meeting (20-24 January 2008)

Wednesday, 23 January 2008: 1:30 PM
The value of information map: a tool for combining statistical and economic metrics of forecast quality
219 (Ernest N. Morial Convention Center)
Arthur A. Small III, Venti Risk Management, State College, PA
This paper introduces a tool – the value of information map – designed to integrate statistical and economic measures of forecast quality. This tool is proposed as a useful technique for facilitating communication between producers and consumers of prediction systems, in settings where neither side has access to the specialized knowledge of the other.

Support for weather forecasting activity is based largely on its perceived economic and societal value. Yet despite the utilitarian motivation for weather forecasting, producers and consumers of forecasting systems typically do not employ a common, shared set of standards for identifying and quantifying forecast quality. For forecast consumers, quality is naturally quantified in terms of the expected value of forecast information. Producers and developers of prediction systems, by contrast, typically measure improvements in terms of “generic” statistical indices (e.g., a decrease in RMSE for some selected predictand, or a related skill score). This difference matters, because notions of quality substantially dictate priorities about system development on many fronts: parameterization methods; model design; selection of ensemble members for ensemble-based predictions; decisions about the locations of observation nodes; and analyses of the likely value of additional investments.

If there were reliable rules by which the economic value of information could be related to “pure statistics,” then model developers might adopt these statistical measures of prediction quality as their benchmarks. They could then focus on improving their scores along these lines, with confidence that in so doing, they would also be doing their best to improve the utility of their predictions for users. Pursuing this line of thought, a small body of work has appeared that investigates the relationships between statistical and economic notions of prediction quality. In an examination of five hog price prediction models, Gerlow et al. find that economic and statistical criteria (e.g., RMSE) deliver different rankings of model quality. In their investigation of forecasts of foreign exchange rates, Satchell and Timmermann identify conditions under which a statistically “inferior” non-linear model can form the basis for a useful trading strategy, as compared with a simple random-walk model. Similarly, Leitch and Tanner find that statistically uninformative forecasts of interest rate movements can nonetheless serve as the basis for profitable trading strategies (insofar as they correctly predict the direction of rate movements, even if doing a poor job on magnitudes). These studies and others suggest that, except in a few narrowly-drawn theoretical cases, there is no straightforward mapping between statistical notions of quality and economic value to users. Divergence between the concepts seems especially likely for decision problems characterized by loss functions exhibiting asymmetries, non-linearities, and/or threshold effects.
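The Leitch-and-Tanner-style divergence can be reproduced in a toy setting. The sketch below compares two hypothetical forecast sequences: one that hugs zero (low RMSE, uninformative about direction) and one with wildly inflated magnitudes but correct signs. All numbers, names, and the sign-based trading rule are illustrative assumptions, not data from the cited studies.

```python
import math

def rmse(forecast, obs):
    """Root-mean-square error of a forecast sequence."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, obs)) / len(obs))

def trading_value(forecast, obs):
    """Illustrative economic value under a sign-based rule:
    go long when the forecast is positive, short when negative."""
    return sum(math.copysign(1.0, f) * o for f, o in zip(forecast, obs))

obs = [0.5, -0.3, 0.8, -0.6]   # realized rate changes (made-up values)
f_a = [0.1, 0.1, 0.1, 0.1]     # hugs zero: low RMSE, no directional information
f_b = [3.0, -3.0, 3.0, -3.0]   # poor magnitudes, but correct sign every time

print(rmse(f_a, obs), trading_value(f_a, obs))  # small error, small payoff
print(rmse(f_b, obs), trading_value(f_b, obs))  # large error, larger payoff
```

Under this asymmetric use of the forecast, the statistically "better" sequence f_a is economically inferior, which is exactly the kind of ranking reversal the studies above report.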

It is tempting to suggest that users of forecasting systems might help to guide system development simply by communicating their preferences to developers. In practice, the divergent perspectives of producers and consumers raise serious communications challenges. Developers of prediction systems are typically technical area specialists; they may not be sufficiently familiar with consumers' decision problems to derive value-of-information measures for their prediction products. Conversely, prediction consumers typically lack the area knowledge they would need to provide detailed feedback to producers in the specialized jargon used to discuss prediction model development priorities and progress. These communication challenges are compounded when the same prediction system serves multiple consumers with different information needs, or with needs that change over time.

The value of information (VoI) map is proposed as a mechanism for overcoming, or at least ameliorating, this communications challenge. The VoI map is a summary representation of how different statistical improvements in a forecasting system would be valued by users. For a given forecasting system, the VoI map is constructed by taking the joint probability distribution of predictions and observations, and overlaying contour lines for the expected value of small improvements in prediction resolution at different locations in the joint distribution. The resulting structure provides detailed information about the economic value of marginal changes in statistical properties of the prediction system. For example, the gradient of the value of information with respect to a sharpening of the prediction pdf towards the perfect prediction will be large in regions where sharpening the prediction will considerably improve decisions. (When quality is measured in terms of RMSE, by contrast, these iso-quality curves are straight lines, which parallel the perfect prediction line.)
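As a rough prototype of this construction, the sketch below evaluates the marginal value of sharpening at each point of a grid over the joint (prediction, observation) space, using a hypothetical piecewise-linear asymmetric loss as a stand-in for a real user's decision problem. The loss function, grid, and step size are all illustrative assumptions; the resulting field is what a VoI map would contour.

```python
import numpy as np

def asym_loss(action, outcome, under=3.0, over=1.0):
    """Piecewise-linear loss: under-prediction (action < outcome) is
    penalized more heavily than over-prediction (illustrative stand-in
    for a user's actual loss function)."""
    err = outcome - action
    return np.where(err > 0, under * err, -over * err)

# Grid over the joint (prediction, observation) space (arbitrary units).
preds = np.linspace(-2.0, 2.0, 81)
obs = np.linspace(-2.0, 2.0, 81)
P, O = np.meshgrid(preds, obs)

# Marginal value of sharpening: the loss reduction from nudging each
# prediction a small step toward the perfect-prediction diagonal P == O.
eps = 1e-3
step = eps * np.sign(O - P)
voi_field = (asym_loss(P, O) - asym_loss(P + step, O)) / eps
```

With this asymmetric loss, the field takes a larger value above the diagonal (where the forecast under-predicts) than below it, so contours of `voi_field` overlaid on the joint density of predictions and observations would tell developers which regions of forecast error matter most to this user. Under an RMSE criterion the analogous field would be symmetric about the diagonal.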

The paper describes the construction of a VoI map in the context of a simple decision problem, and discusses the tool's potential value as a communications mechanism between the producers and consumers of forecasting systems.
