The Probability of Precipitation forecasts are issued for 3-hour and 24-hour validity periods. The forecasts are issued on a 6 by 6 km (3.5 by 3.5 mile) grid, and represent the probability that a rain gauge in that grid cell records at least 0.2 mm (0.008 inch) of precipitation.
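As a concrete illustration of the event being forecast, the sketch below (in Python, with a hypothetical helper name) converts a gauge total for the validity period into the 0/1 outcome that the probability refers to:

```python
# Hypothetical sketch: the PoP verifies against the binary event
# "a rain gauge in the grid cell recorded at least 0.2 mm".
WET_THRESHOLD_MM = 0.2  # event threshold from the service definition

def observed_event(gauge_total_mm: float) -> int:
    """Return 1 if the gauge total meets the 0.2 mm threshold, else 0."""
    return int(gauge_total_mm >= WET_THRESHOLD_MM)

print(observed_event(0.0), observed_event(0.2), observed_event(5.4))  # 0 1 1
```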
The objective guidance is a bias-corrected consensus forecast based on Numerical Weather Prediction model output. The bias correction is calibrated using a 50 by 50 km (30 by 30 mile) gridded rainfall analysis. The guidance is interpolated to the 6 by 6 km grid before being presented to forecasters as a guidance source.
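The abstract does not specify the consensus, calibration or interpolation schemes, so the following is only a minimal sketch: an equal-weight consensus of NWP member probabilities, a simple additive bias correction on the 50 km analysis grid, and bilinear interpolation down to the 6 km grid. All names are illustrative.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def consensus(member_pops: np.ndarray) -> np.ndarray:
    """Equal-weight consensus of member PoP fields, shape (n_members, ny, nx)."""
    return member_pops.mean(axis=0)

def bias_correct(pop_50km, mean_forecast_50km, mean_observed_50km):
    """Additive correction toward the observed event frequency (an assumption;
    the operational calibration is not described here)."""
    return np.clip(pop_50km + (mean_observed_50km - mean_forecast_50km), 0.0, 1.0)

def downscale(pop_50km, lat50, lon50, lat6, lon6):
    """Bilinear interpolation from the 50 km grid to the 6 km grid.
    pop_50km has shape (len(lat50), len(lon50))."""
    interp = RegularGridInterpolator((lat50, lon50), pop_50km, method="linear",
                                     bounds_error=False, fill_value=None)
    yy, xx = np.meshgrid(lat6, lon6, indexing="ij")
    pts = np.stack([yy.ravel(), xx.ravel()], axis=-1)
    return interp(pts).reshape(yy.shape)
```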
Forecasters have been slow to embrace the objective Probability of Precipitation guidance. Consultation with forecasters raised two main issues. The first was distrust of the quality of the guidance. The second was a result of the business rules used to generate default pictorial (icon) and plain language (text) forecasts from the Probability of Precipitation forecasts. Where the rules are not well aligned with the meaning of the probability forecasts, forecasters have been setting the numbers to obtain the icon and text they desire, rather than optimising the numerical forecasts. We will focus this talk on our efforts to tackle the issue of trust in the quality of the guidance; the business rule issue is being addressed by others within the organization.
Previous verification of the guidance had been against the gridded rainfall analysis at the 50 by 50 km scale, while verification of the official forecasts had been against rain gauges, making the two sets of results difficult to compare.
We liaised with forecasters and other stakeholders to obtain agreement on verification techniques that would be meaningful. We chose to compare both the guidance and the official forecasts against rain gauges, matching the service definition of the official forecasts. We used data from 434 automatic weather stations around Australia, grouped by region and topography, with results calculated separately for each season.
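To make the comparison concrete, the sketch below pairs each station with its nearest 6 km grid cell and builds the matched forecast-observation table that the grouped results are computed from. Column and variable names are illustrative, not taken from the operational system.

```python
import numpy as np
import pandas as pd

def nearest_cell(lat, lon, grid_lats, grid_lons):
    """Indices of the 6 km grid cell nearest to a station (regular grid assumed)."""
    return int(np.argmin(np.abs(grid_lats - lat))), int(np.argmin(np.abs(grid_lons - lon)))

def match(pop_grid, grid_lats, grid_lons, stations: pd.DataFrame) -> pd.DataFrame:
    """stations needs columns: station_id, lat, lon, region, season, observed (0/1)."""
    rows = []
    for s in stations.itertuples():
        i, j = nearest_cell(s.lat, s.lon, grid_lats, grid_lons)
        rows.append({"station_id": s.station_id, "region": s.region,
                     "season": s.season, "forecast": pop_grid[i, j],
                     "observed": s.observed})
    return pd.DataFrame(rows)
```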
By aligning the verification of the guidance and the official forecasts we were able to make a direct comparison of the skill of each. Additionally, we were able to estimate our confidence that the official forecasts out-perform the guidance for various seasons, lead-times and groups of stations.
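The abstract does not say how this confidence was estimated; one common approach is to bootstrap the paired Brier score difference, resampling by day to limit the effect of serial correlation. A minimal sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

def brier(p, o):
    """Mean squared difference between probabilities and 0/1 outcomes."""
    return float(np.mean((np.asarray(p) - np.asarray(o)) ** 2))

def confidence_official_better(p_official, p_guidance, obs, days, n_boot=2000):
    """Fraction of day-level bootstrap resamples in which the official
    forecasts achieve a lower (better) Brier score than the guidance."""
    p_official, p_guidance = np.asarray(p_official), np.asarray(p_guidance)
    obs, days = np.asarray(obs), np.asarray(days)
    unique_days = np.unique(days)
    wins = 0
    for _ in range(n_boot):
        sample = rng.choice(unique_days, size=unique_days.size, replace=True)
        idx = np.concatenate([np.flatnonzero(days == d) for d in sample])
        wins += brier(p_official[idx], obs[idx]) < brier(p_guidance[idx], obs[idx])
    return wins / n_boot
```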
Preliminary results have been encouraging regarding the skill of the Probability of Precipitation guidance compared with the current official forecasts. Discussions about verification have increased awareness of the importance of the numerical Probability of Precipitation forecasts to the public, as well as appreciation of the skill of the guidance.
The verification results were made available on an internal, browser-based dashboard, allowing forecasters to examine them in detail. In addition, a written report and presentations were provided to the staff responsible for producing forecasts and to those responsible for the service definition.
The general story has been that, even when verified at the rain gauge scale, automated forecasts based on the guidance are of good quality and, as measured by the Brier score, often more skilful than the manual forecasts, particularly for lead-times of 3 days or more.
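For reference, the Brier score for N probability forecasts p_i with binary outcomes o_i is the mean of (p_i − o_i)², so lower scores are better. A pandas sketch computing it per forecast source and lead-time, assuming a matched table like the one above with hypothetical source and lead_days columns:

```python
# df columns assumed: source ("official" or "guidance"), lead_days,
# forecast (probability in [0, 1]), observed (0 or 1).
summary = (df.assign(sq_err=(df["forecast"] - df["observed"]) ** 2)
             .groupby(["source", "lead_days"])["sq_err"].mean()
             .rename("brier_score")
             .reset_index())
```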
The verification work, and the communication of it, is helping to make guidance that was already available to forecasters more useful to them. With evidence of its strengths and weaknesses, forecasters can use the guidance better than they could previously. Ongoing verification will show the extent to which they have been able to put this knowledge into practice.