TJ1.4
Using mPING Data to Drive a Forecast Precipitation Type Algorithm

Monday, 5 January 2015: 4:45 PM
124B (Phoenix Convention Center - West and North Buildings)
Kimberly L. Elmore, CIMMS/Univ. of Oklahoma/NOAA/NSSL, Norman, OK; and H. Grams

Forecasting winter precipitation type can at times be a high-risk venture: decisions on how best to maintain transportation infrastructure are made in advance based on these forecasts, and an incorrect forecast can have significant consequences and incur high costs. Some of the precipitation type algorithms used by numerical models were developed using observed soundings. Yet numerical models tend to produce thermodynamic profiles that deviate from observed soundings because of parameterization or bias errors, and such deviations reduce the accuracy and skill of a precipitation type algorithm. As part of the Winter Surface Hydrometeor Classification Algorithm work at the National Severe Storms Laboratory, we approach the problem from the other direction: we use observed precipitation type and the model's internal characteristics to drive a random forest classifier that generates precipitation type forecasts from parameters extracted from forecast thermodynamic and kinematic profiles. The surface precipitation type observations come from the Meteorological Phenomena Identification Near the Ground (mPING) project, run out of the University of Oklahoma and NSSL. mPING uses crowdsourcing to collect surface precipitation type reports from citizen scientists and has so far gathered more than 620,000 observations, more than half of which are of winter weather. The resulting new algorithm is compared with the current algorithms to assess improvements in skill and accuracy. Because we generate these random forest classifiers independently for the RAP, NAM, and GFS models, we perform an inter-model comparison by testing the classifier generated for one model on the output of another, to assess how tightly coupled each classifier is to its parent model. Finally, we perform an intra-model comparison to assess how tightly coupled a classifier is to the particular forecast hour for the model on which it is developed.
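The approach described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature names (surface wet-bulb temperature, warm-layer depth, warm-layer maximum temperature, low-level lapse rate) and the synthetic training data are assumptions standing in for parameters extracted from real forecast profiles and for real mPING reports.

```python
# Hedged sketch of a random-forest precipitation-type classifier.
# Features and labels here are synthetic placeholders; in the actual
# work, features come from forecast thermodynamic/kinematic profiles
# and labels from crowdsourced mPING surface observations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500

# Hypothetical profile-derived features for n forecast soundings:
X = np.column_stack([
    rng.normal(0.0, 3.0, n),      # surface wet-bulb temperature (deg C)
    rng.uniform(0.0, 1500.0, n),  # depth of elevated warm layer (m)
    rng.normal(1.0, 2.0, n),      # maximum temperature in warm layer (deg C)
    rng.normal(6.5, 1.0, n),      # low-level lapse rate (deg C / km)
])

# Hypothetical mPING-style labels: rain, snow, freezing rain, ice pellets.
labels = np.array(["rain", "snow", "frzr", "pl"])
y = labels[rng.integers(0, 4, n)]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

# Class probabilities for a single new forecast profile.
probs = clf.predict_proba(X[:1])
print(dict(zip(clf.classes_, probs[0].round(2))))
```

One practical appeal of this design is that separate forests can be fit to RAP, NAM, and GFS output in exactly the same way, which is what makes the inter-model comparison (applying one model's classifier to another model's profiles) straightforward to run.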