Monday, 11 January 2016
New Orleans Ernest N. Morial Convention Center
Correct prediction of the surface precipitation type is of pivotal importance for streamflow forecasting, road preparations, and aviation. These forecasts are usually obtained by subjecting numerical-model output to one or more postprocessing algorithms that assign the precipitation type. The skill of these algorithms can be assessed by comparing their output to available observations. Yet precipitation-type observations carry errors and uncertainties of their own, which affect the apparent skill of the algorithms. The impacts of observational uncertainty are assessed through comparison of three different datasets: the augmented Automated Surface Observing System (ASOS), nonaugmented ASOS, and Meteorological Phenomena Identification Near the Ground (mPING) networks. Several statistically significant differences are found between these datasets, including an underreporting of freezing drizzle and ice pellets (PL) by the nonaugmented ASOS (a consequence of instrument limitations) and an overreporting of PL and underreporting of freezing rain (FZRA) by the mPING network. These observational biases cause the algorithms to appear more or less biased depending on which classifier is assessed and which validation dataset is used. Horizontal and temporal variability also limit classifier performance. This is particularly true for FZRA and PL, which typically vary on scales finer than operational models can resolve. The effects on model validation are profound: depending on how a hit is defined, probabilities of detection can range from 25% to 75%, strongly influencing the apparent performance of precipitation-type classifiers.
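To make the final point concrete, the sketch below shows how the choice of hit definition drives the probability of detection, POD = hits / (hits + misses). This is an illustrative sketch only: the toy observation/forecast pairs, the category labels at these times, and the time-window matching rule are hypothetical and are not the abstract's actual data or methodology.

```python
# Illustrative sketch (not from the abstract): how the definition of a "hit"
# changes the probability of detection (POD) when validating
# precipitation-type forecasts against point observations.
# All data below are hypothetical toy values chosen to show the mechanics.

from typing import List, Tuple

# (time_index, observed_type) pairs, e.g., from an ASOS or mPING report.
obs: List[Tuple[int, str]] = [
    (0, "FZRA"), (1, "FZRA"), (2, "PL"), (3, "PL"), (4, "SN"), (5, "RA"),
]

# Classifier output at the same times.
fcst: List[Tuple[int, str]] = [
    (0, "FZRA"), (1, "PL"), (2, "FZRA"), (3, "PL"), (4, "SN"), (5, "SN"),
]


def pod(event: str, window: int) -> float:
    """POD = hits / (hits + misses) for one precipitation type.

    A 'hit' is a forecast of `event` within +/- `window` time steps of an
    observation of `event`; window=0 is the strict, exact-match definition.
    """
    hits = misses = 0
    for t_obs, obs_type in obs:
        if obs_type != event:
            continue
        matched = any(
            f_type == event and abs(t_fc - t_obs) <= window
            for t_fc, f_type in fcst
        )
        hits += matched
        misses += not matched
    return hits / (hits + misses)


for event in ("FZRA", "PL"):
    print(event,
          "strict POD:", pod(event, window=0),
          " relaxed POD:", pod(event, window=1))
```

In this toy example, loosening the matching window turns near-misses in time into hits, raising the POD for both FZRA and PL from 0.5 to 1.0; this is the mechanism by which the same forecasts can yield widely differing detection rates under different hit definitions, as the abstract notes.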