The challenges of accurate snowfall density forecasts: Implications for observing strategies, snowfall predictions, and future research efforts
David M. Schultz, NOAA/NSSL, Norman, OK; and P. J. Roebber, S. L. Bruening, E. Ware, and H. E. Brooks
The cold-season quantitative precipitation forecasting problem requires insight into the physical processes controlling the depth of new snowfall via the snow density. Snow-density forecasting is important not only for operational weather forecasts for snowfall, but also for avalanche forecasting, snowmelt runoff forecasting, snowdrift forecasting, and as an input parameter in the snow accumulation algorithm for the WSR-88D radars. Accurate forecasts of the depth of the snowfall are critical to many snow removal operations, since these activities are triggered by exceedances of specific snow-depth thresholds.
Despite substantial improvement in mesoscale numerical models over the years, snowfall amounts are currently predicted either by empirical techniques of questionable scientific validity, or by applying a standard conversion to the liquid-equivalent precipitation, such as the ten-to-one rule. This rule, which supposes that the depth of the snowfall is ten times the liquid equivalent (a snow ratio of 10:1, reflecting an assumed snow density of 100 kg m^-3), is a particularly popular technique with operational forecasters, although it dates from a limited nineteenth-century study. Unfortunately, measurements of freshly fallen snow indicate that the snow ratio can vary from about 3:1 to as high as 100:1. Given this variability, quantitative snowfall forecasts could be substantially improved if a more robust method for forecasting the density of snow were available.
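The arithmetic behind the ten-to-one rule can be made explicit. The following is a minimal sketch (not from the abstract): it assumes a water density of 1000 kg m^-3, so that the snow ratio is simply the water density divided by the fresh-snow density, and the snow depth is the ratio times the liquid equivalent. The function names are illustrative only.

```python
# Sketch of the snow-ratio relationship described above.
# Assumption (not stated in the abstract): liquid water density
# of 1000 kg m^-3, so snow ratio = water density / snow density.

WATER_DENSITY = 1000.0  # kg m^-3


def snow_ratio_from_density(snow_density):
    """Snow ratio implied by a fresh-snow density (kg m^-3)."""
    return WATER_DENSITY / snow_density


def snow_depth(liquid_equivalent_mm, ratio=10.0):
    """Snow depth (mm) from liquid equivalent (mm) and an assumed ratio."""
    return ratio * liquid_equivalent_mm


# The ten-to-one rule: 100 kg m^-3 fresh snow implies a 10:1 ratio.
assert snow_ratio_from_density(100.0) == 10.0

# 12 mm of liquid equivalent at 10:1 gives 120 mm of snow; at the
# observed extremes (3:1 to 100:1) the same event spans 36-1200 mm.
print(snow_depth(12.0))          # -> 120.0
print(snow_depth(12.0, 3.0))     # -> 36.0
print(snow_depth(12.0, 100.0))   # -> 1200.0
```

The wide spread in the last two lines is exactly why a fixed 10:1 conversion can fail badly for a given storm.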
A review of the microphysical literature reveals that many factors may contribute to snow density, including in-cloud processes (crystal habit and size, the degree of riming and aggregation of the snowflake), sub-cloud processes (melting and sublimation), and surface processes (compaction and snowpack metamorphism). Despite this complexity, one purpose of this presentation is to explore the potential of surface and radiosonde data for the determination of snowfall density. A ten-member ensemble of artificial neural networks is employed to classify the snow ratio into one of three classes: heavy (1:1 < ratio < 9:1), average (9:1 <= ratio <= 15:1), and light (ratio > 15:1). The ensemble correctly diagnoses 60.4% of the cases, a substantial improvement over the 45.0% correct using the ten-to-one ratio, 41.7% correct using the sample climatology, and 51.7% correct using the National Weather Service (NWS) "new snowfall to estimated meltwater conversion" table. Measured another way, our approach shows a 184% improvement in Heidke skill score over the NWS table (0.341 versus 0.120, respectively).
Nevertheless, substantial improvements could still be made. Analysis of misclassified events suggests that the poor quality of snow measurements, and unobserved quantities such as snow crystal habits or ground surface temperatures, limit the present approach. Also, understanding remains poor of how snow crystal habit depends on temperature and relative humidity given an atmospheric profile, and of how snow density depends on crystal habit and size (or on the distribution of habits and sizes). A module within mesoscale numerical models that explicitly calculates snow density would be ideal, but such an effort is hampered by the lack of the above knowledge. Thus, indirect methods of determining snow density, like the neural network approach in this talk, seem to be the foreseeable future of snow density forecasting.
Joint Poster Session 2, Instrumentation and Remote Sensing (Joint with the Symposium on Observing and Understanding the Variability of Water in Weather and Climate and the 17th Conference on Hydrology)
Tuesday, 11 February 2003, 9:45 AM-9:45 AM