3.2
SVR and Bayesian Neural Networks applied to statistical downscaling of precipitation

Tuesday, 4 February 2014: 8:45 AM
Room C204 (The Georgia World Congress Center)
Carlos Felipe Gaitan, University of Oklahoma - NOAA/GFDL, Princeton, NJ; and K. W. Dixon, J. Lanzante, V. Balaji, R. McPherson, B. Moore III, A. Radhakrishnan, and H. Vahlenkamp

When dealing with big-data problems in climate informatics, knowing the size of the data, the limitations of the numerical methods used, and the available software/hardware configuration is key to determining CPU running times and to scheduling the simulations/experiments. In this context, GFDL's “perfect model” experimental design to test the validity of the stationarity assumption, common to all statistical downscaling (SD) methods, involves downscaling ~22,000 30-year-long daily precipitation datasets over the US48 region using different methods, including support vector regression (SVR). SVR has been used previously in precipitation downscaling with promising results, but the algorithm's running time scales nonlinearly with the number of training points; thus, developing a skillful downscaling model at reasonable CPU cost is crucial.
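As a rough illustration of where that CPU cost comes from, the sketch below fits an RBF-kernel SVR with scikit-learn on placeholder data; the sample sizes, predictors, and hyperparameters are assumptions for the example, not the configuration used in the experiments.

# Illustrative sketch (assumed setup, not the authors' code): fitting an
# RBF-kernel support vector regression on placeholder wet-day precipitation
# data. SVR training cost grows roughly between O(n^2) and O(n^3) in the
# number of training samples n, which is why CPU time becomes a constraint
# when the fit is repeated over ~22,000 datasets.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_days, n_predictors = 2_000, 8               # small placeholder; 30 years of daily data is ~11,000 days
X = rng.normal(size=(n_days, n_predictors))   # stand-in for large-scale predictors
y = rng.gamma(2.0, 3.0, size=n_days)          # stand-in for wet-day precipitation amounts

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
svr.fit(X, y)                                 # this step dominates the running time
downscaled = svr.predict(X)

Timing this fit for increasing n_days gives a quick empirical sense of how the training cost grows with the length of the training record.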

Our downscaling models used classification algorithms such as CART and SVM to identify rainy days, and then ran SVR on those days to estimate precipitation amounts. Here we show results from 16 points across North America where we evaluated the mean absolute error skill score (MAE SS) of the different SD models and their corresponding CPU times. The results show non-homogeneous skill scores across the 16 points, with higher skill near the Rocky Mountains. They also show considerable differences in the Peirce skill score among the different classification algorithms, as well as the impact the choice of math library can have on overall running times.
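The following sketch outlines the assumed two-stage structure (a CART-style tree or an SVM classifier for wet days, then SVR for wet-day amounts) together with the standard definitions of the MAE skill score and the Peirce skill score; the data, reference forecast, and model settings are placeholders, not the authors' actual setup.

# Illustrative two-stage sketch (assumed structure, not the authors' code):
# 1) classify each day as wet or dry, 2) regress precipitation amount with
# SVR on the days classified as wet, then score the result.
import numpy as np
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeClassifier   # CART-style classifier

def mae_skill_score(y_obs, y_model, y_reference):
    """MAE SS = 1 - MAE_model / MAE_reference (1 is perfect, 0 matches the reference)."""
    mae_model = np.mean(np.abs(y_model - y_obs))
    mae_ref = np.mean(np.abs(y_reference - y_obs))
    return 1.0 - mae_model / mae_ref

def peirce_skill_score(obs_wet, pred_wet):
    """PSS = hit rate - false-alarm rate, from the 2x2 wet/dry contingency table."""
    hits = np.sum(pred_wet & obs_wet)
    misses = np.sum(~pred_wet & obs_wet)
    false_alarms = np.sum(pred_wet & ~obs_wet)
    correct_negatives = np.sum(~pred_wet & ~obs_wet)
    return hits / (hits + misses) - false_alarms / (false_alarms + correct_negatives)

# Placeholder predictors and observed daily precipitation (~40% wet days).
rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 8))
precip = np.where(rng.random(2_000) < 0.4, rng.gamma(2.0, 3.0, 2_000), 0.0)
obs_wet = precip > 0.0

wet_classifier = DecisionTreeClassifier(max_depth=5).fit(X, obs_wet)    # stage 1: wet/dry
amount_svr = SVR(kernel="rbf").fit(X[obs_wet], precip[obs_wet])         # stage 2: amounts on wet days

pred_wet = wet_classifier.predict(X).astype(bool)
pred = np.where(pred_wet, amount_svr.predict(X), 0.0)

print("Peirce skill score:", peirce_skill_score(obs_wet, pred_wet))
print("MAE SS vs. climatology:", mae_skill_score(precip, pred, np.full_like(precip, precip.mean())))

Swapping DecisionTreeClassifier for an SVM classifier (sklearn.svm.SVC) in stage 1 reproduces, in miniature, the kind of comparison between classification algorithms described above.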