J10.2
Computing the odds on a good probability forecast
Leonard Allen Smith, London School of Economics and Oxford University, London, United Kingdom
Operational weather forecasting contributed to significant advances in our understanding of statistics towards the end of the nineteenth century, and then to our understanding of dynamical systems towards the end of the twentieth. It is now widely accepted that uncertainty in the initial state of the atmosphere makes the hope of a single 'accurate' medium-range forecast unobtainable, motivating operational ensemble forecasts. Information from ensemble forecasts is crucial both for extracting the socio-economic value that has justified operational forecasts since those made by Fitzroy, and for the empirical connection to the atmosphere that turns model simulations into weather forecasts. It is less widely accepted, but equally certain, that model inadequacy (errors in the details of any model) will prevent accurate, accountable probability forecasts. The implications this fact holds for both users and modellers are explored. The more common foundations of objective probability theory are based upon the notion of equally likely events; this foundation is lost outside the perfect model scenario. Consideration of the evaluation and use of probability forecasts suggests the development of a truly multi-model framework (as opposed to a 'best model' framework) and an alternative approach to defining 'fair odds' in the context of risk management (as the "implied probabilities" corresponding to a set of odds need not sum to one).
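The final point about implied probabilities can be made concrete with a small worked example, not part of the original abstract: the implied probability of an outcome quoted at decimal odds d is 1/d, and for a realistic set of quoted odds these values typically sum to more than one (the bookmaker's overround), so no unique probability forecast can be read off a set of odds. A minimal Python sketch, with hypothetical odds values chosen only for illustration:

```python
# Sketch: implied probabilities from quoted decimal odds need not sum to one.
# The odds values below are hypothetical, chosen only for illustration.

decimal_odds = {"home win": 2.0, "draw": 3.2, "away win": 3.5}

# Implied probability of an outcome quoted at decimal odds d is 1/d.
implied = {outcome: 1.0 / d for outcome, d in decimal_odds.items()}

total = sum(implied.values())
print("Implied probabilities:", {k: round(p, 3) for k, p in implied.items()})
print(f"Sum of implied probabilities: {total:.3f}")  # ~1.098, not 1.0

# Dividing by the total gives one possible normalisation, but that choice
# is not unique, which is the difficulty in defining 'fair odds' for
# risk management from quoted odds alone.
normalised = {k: p / total for k, p in implied.items()}
print("One normalisation:", {k: round(p, 3) for k, p in normalised.items()})
```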
Joint Session 10, Probabilistic Forecasting/Ensembles: Part III (Joint between the Symposium on Forecasting the Weather and Climate of the Atmosphere and Ocean and the 20th Conference on Weather Analysis and Forecasting/16th Conference on Numerical Weather Prediction) (Room 6A)
Wednesday, 14 January 2004, 1:30 PM-2:45 PM, Room 6A