2A.1
Fog and thunderstorm forecasting in Melbourne, Australia
Harvey Stern, Bureau of Meteorology, Melbourne, Vic., Australia
A twelve-month "real-time" trial of a methodology utilised to generate Day-1 to Day-7 forecasts, by mechanically integrating judgmental (human) and automated predictions, was conducted between 20 August 2005 and 19 August 2006. After 365 days, the trial revealed that, overall, the various components (rainfall amount, sensible weather, minimum temperature, and maximum temperature) of the Melbourne forecasts so generated explained 41.3% of the variance of the weather, 7.9% more than the 33.4% explained by the human (official) forecasts alone (Stern, 2007).
The trial continues, and the purpose of the current paper is to report on its performance in predicting fog and thunderstorms between August 2005 and June 2007, a period of almost two years.
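The mechanics of the combining scheme are set out in Stern (2007) and are not reproduced here. As a rough illustration only, the Python sketch below blends a human and an automated forecast with an assumed fixed weight and scores the result as percentage of variance explained, taken here to be the squared correlation between forecast and observation; the weight, the sample values, and the scoring convention are assumptions made for the sketch, not details of the trial.

```python
import numpy as np

def blend(human, automated, w=0.5):
    """Illustrative 'mechanical' combination: a fixed-weight average.
    The weight w = 0.5 is an assumption for this sketch, not the
    weighting used in the trial."""
    return w * np.asarray(human) + (1.0 - w) * np.asarray(automated)

def variance_explained(forecast, observed):
    """Percentage of variance explained, taken here as the squared
    correlation between forecast and observation."""
    r = np.corrcoef(forecast, observed)[0, 1]
    return 100.0 * r ** 2

# Illustrative maximum-temperature series (degrees C), not trial data.
obs = np.array([24.1, 19.5, 31.0, 22.4, 27.8])
human_fc = np.array([23.0, 20.5, 29.5, 23.0, 26.0])
auto_fc = np.array([25.0, 18.5, 32.0, 21.0, 28.5])

combined = blend(human_fc, auto_fc)
print(variance_explained(human_fc, obs), variance_explained(combined, obs))
```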
With regard to the accuracy of forecasts of fog, for verification purposes, fog is deemed to have occurred in the metropolitan area on a particular day when at least one of the 0300, 0600, 0900, 1200, 1500, 1800, 2100, or 2400 Melbourne CBD and/or Melbourne Airport observations includes a report of fog (including shallow fog) and/or distant fog.
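As a minimal sketch of how this verification rule might be applied, the function below flags a "fog day" when any of the eight three-hourly observations from either site carries a fog-related report; the observation encoding (a list of dictionaries holding a station name, an hour, and a set of phenomenon strings) is an assumption made purely for illustration.

```python
# Sketch of the fog-day verification rule; the observation format is
# assumed for illustration only.
FOG_REPORTS = {"fog", "shallow fog", "distant fog"}
VERIFICATION_HOURS = {3, 6, 9, 12, 15, 18, 21, 24}
STATIONS = {"Melbourne CBD", "Melbourne Airport"}

def fog_day(observations):
    """True if at least one qualifying observation reports fog."""
    return any(
        ob["station"] in STATIONS
        and ob["hour"] in VERIFICATION_HOURS
        and ob["phenomena"] & FOG_REPORTS
        for ob in observations
    )

# Example: a single shallow-fog report at 0600 makes this a fog day.
obs = [{"station": "Melbourne Airport", "hour": 6,
        "phenomena": {"shallow fog"}}]
print(fog_day(obs))  # True
```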
The automated component of the system used to forecast fog is that described by Stern and Parkyn (1998, 1999, 2000, 2001). This component is a logistic model that, in summary, feeds observational data from the preceding afternoon into a set of prediction equations, developed by applying logistic regression to sets of synoptically stratified data, to yield an estimate of the probability of fog.
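In general form, such a prediction equation maps a linear combination of predictors onto a probability between 0 and 1 via the logistic function. The sketch below shows only the shape of that calculation; the predictors (afternoon dew-point depression and wind speed) and the coefficients are placeholders, not the fitted values of the Stern and Parkyn equations.

```python
import math

def fog_probability(predictors, coefficients, intercept):
    """Generic logistic prediction equation:
    p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
    In practice the coefficients would come from logistic regression
    fitted to a synoptically stratified data set; the values used in
    the example call below are placeholders."""
    z = intercept + sum(b * x for b, x in zip(coefficients, predictors))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical afternoon predictors: dew-point depression (deg C) and
# 10 m wind speed (knots). Coefficients are illustrative only.
p = fog_probability(predictors=[2.0, 5.0],
                    coefficients=[-0.4, -0.2],
                    intercept=1.0)
print(round(p, 2))
```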
The combining process was shown to lift the Critical Success Index (CSI) (Wilks, 1995) from 13.6% to 15.2%, and to lift the Probability of Detection (PoD) from 17.7% to 28.0%. However, the lift in the CSI and PoD for fog forecasts was achieved at a cost of a corresponding increase in the False Alarm Ratios (FARs), from 63.2% to 74.9%.
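These scores follow the standard 2x2 contingency-table definitions (Wilks, 1995): with a hits, b false alarms and c misses, CSI = a/(a+b+c), PoD = a/(a+c) and FAR = b/(a+b). The sketch below computes them from an arbitrary tally chosen only so that the scores land near the combined fog-forecast values quoted above; it is not the trial's actual count.

```python
def contingency_scores(hits, false_alarms, misses):
    """Standard yes/no forecast verification scores (Wilks, 1995)."""
    csi = hits / (hits + false_alarms + misses)  # Critical Success Index
    pod = hits / (hits + misses)                 # Probability of Detection
    far = false_alarms / (hits + false_alarms)   # False Alarm Ratio
    return csi, pod, far

# Arbitrary illustrative tally (not the trial's actual counts).
csi, pod, far = contingency_scores(hits=14, false_alarms=42, misses=36)
print(f"CSI={csi:.1%}  PoD={pod:.1%}  FAR={far:.1%}")
```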
When the verification data were analysed with each lead time taken separately, a lift occurred in most, though not all, instances. The exceptions were the Day-1 and Day-2 forecasts of fog, for which the CSIs were substantially below the corresponding CSIs for the human (official) forecasts. These forecasts are worthy of comment.
The inability (of the combining process) to improve on the Day-1 and Day-2 official forecasts of fog may very well be a consequence of the effort that the forecasting personnel of the Victorian Regional Office (and others) have invested over the years into short-term fog and low cloud forecasting at Melbourne Airport (Goodhead, 1978; Keith, 1978; Stern and Parkyn, 1998, 1999, 2000, 2001; Newham, 2004; Boneh et al., 2006; Weymouth, 2007; Newham, 2007), most recently using a Bayesian network to combine various components of forecasting guidance.
This effort may have resulted in such a high level of pre-existing human forecast skill at short-term fog prediction that mechanically combining human fog forecasts with automated fog forecasts (generated by a methodology more than five years old) actually caused a decline in accuracy.
With regard to the accuracy of forecasts of thunderstorms, for verification purposes, a thunderstorm is deemed to have occurred in the metropolitan area on a particular day when at least one of the 0300, 0600, 0900, 1200, 1500, 1800, 2100, or 2400 Melbourne CBD and/or Melbourne Airport observations includes a report of cumulonimbus with an anvil and/or lightning and/or funnel cloud and/or thunder (with or without precipitation).
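The same form of check as for fog applies, with thunderstorm indicators in place of fog reports; the observation encoding is again assumed purely for illustration.

```python
# Sketch of the thunderstorm-day verification rule, mirroring the fog
# check above; the observation format is assumed for illustration only.
THUNDERSTORM_REPORTS = {"cumulonimbus with anvil", "lightning",
                        "funnel cloud", "thunder"}
VERIFICATION_HOURS = {3, 6, 9, 12, 15, 18, 21, 24}
STATIONS = {"Melbourne CBD", "Melbourne Airport"}

def thunderstorm_day(observations):
    """True if at least one qualifying observation reports a
    thunderstorm indicator (with or without precipitation)."""
    return any(
        ob["station"] in STATIONS
        and ob["hour"] in VERIFICATION_HOURS
        and ob["phenomena"] & THUNDERSTORM_REPORTS
        for ob in observations
    )

obs = [{"station": "Melbourne CBD", "hour": 15, "phenomena": {"thunder"}}]
print(thunderstorm_day(obs))  # True
```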
The automated component of the system used to forecast thunderstorms is that described by Stern (2004). This component is a logistic model that, in summary, feeds a Quantitative Precipitation Forecast (QPF), and a Probability of Precipitation (PoP) estimate, into a set of prediction equations, developed by applying logistic regression to sets of synoptically stratified data, to yield an estimate of the probability of thunderstorms.
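The calculation again takes the general logistic form sketched earlier, this time driven by the QPF and PoP inputs; the coefficients below are placeholders, not the fitted values of Stern (2004).

```python
import math

def thunderstorm_probability(qpf_mm, pop,
                             coefficients=(0.05, 2.0), intercept=-3.0):
    """Generic logistic prediction equation taking a Quantitative
    Precipitation Forecast (mm) and a Probability of Precipitation
    (0-1) as inputs. The coefficients and intercept are illustrative
    placeholders, not the fitted values of the Stern (2004) equations."""
    b_qpf, b_pop = coefficients
    z = intercept + b_qpf * qpf_mm + b_pop * pop
    return 1.0 / (1.0 + math.exp(-z))

# Example: 10 mm QPF with a 70% PoP.
print(round(thunderstorm_probability(10.0, 0.7), 2))
```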
The combining process was shown to lift the Critical Success Index (CSI) (Wilks, 1995) from 13.7% to 18.2%, and to lift the Probability of Detection (PoD) from 15.9% to 26.7%. However, the lift in the CSI and PoD for thunderstorm forecasts was achieved at a cost of a corresponding increase in the False Alarm Ratios (FARs), from 50.9% to 63.5%. A lift in CSI occurred in every instance when the verification data were analysed with lead times taken separately, except for Day-2.
The data presented demonstrate that, overall, adopting a strategy of combining human (official) and automated predictions of fog and thunderstorms enhances the skill displayed by such predictions.
However, the data suggest that predictions prepared by operational meteorologists armed with the very latest techniques are sometimes capable of outperforming forecasts generated by combining those predictions with automated predictions produced utilising older techniques.
Fig 1 CSIs for fog forecasts
Fig 2 PoDs for fog forecasts
Fig 3 FARs for fog forecasts
Fig 4 CSIs for thunderstorm forecasts
Fig 5 PoDs for thunderstorm forecasts
Fig 6 FARs for thunderstorm forecasts
Supplementary URL: http://www.bom.gov.au
Session 2A, International Applications - Part I
Monday, 21 January 2008, 10:45 AM-11:45 AM, 206