TJ15.4 Examining the stationarity assumption in statistical downscaling of climate projections: Is past performance an indication of future results?

Tuesday, 8 January 2013: 9:15 AM
Ballroom C (Austin Convention Center)
Keith W. Dixon, NOAA/GFDL, Princeton, NJ; and K. Hayhoe, J. Lanzante, A. M. K. Stoner, and A. Radhakrishnan


Regional assessments of climate change impacts require climate projections at sufficiently high spatial resolution to translate the effects of global-scale changes to the local environment. This translation typically involves statistical or dynamical downscaling techniques that refine the results of coarser-resolution global climate models (GCMs) to the finer scales of interest to impacts researchers and stakeholders. Here we address one assumption inherent to all statistical downscaling of multi-decadal climate change projections: namely, that key statistical downscaling relationships determined to exist between observations and GCM simulations of the recent past remain applicable to future climate projections.

Lacking observations of the future, we utilize a perfect-model experimental design. Using the output of high-resolution global dynamical climate model simulations, we examine the ability of three different statistical downscaling methods to simulate current and future mean and extreme temperature and precipitation measures across the United States. The GCM used is the GFDL-HiRAM-C360 model, and the three downscaling techniques tested are the simple delta, monthly quantile mapping, and daily asynchronous quantile regression methods. The experimental design differs from the usual real-world application of statistical downscaling in that no observations are used. Instead, the study uses output from a set of high-resolution GCM experiments, some of which were run to simulate the climate of recent decades and others that simulate conditions at the end of the 21st century under a high greenhouse gas emissions scenario. Companion data sets were constructed by interpolating the high-resolution (~25 km) GCM output to a much coarser grid (~200 km). During the downscaling training step, statistical methods quantify relationships between the high-resolution and coarse-resolution data sets for the historical period. Then, using the coarsened data sets as input, we assess how well the downscaling relationships deduced from the historical period can reconstruct the high-resolution GCM output, both for the historical period and for the late 21st century projections. This perfect-model framework allows one to test the assumption of statistical stationarity by determining the extent to which a downscaling method's skill is degraded for future projections relative to the historical period.
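The perfect-model logic can be sketched with one of the named techniques, empirical quantile mapping. The example below is a minimal illustration under stated assumptions, not the authors' implementation: it uses synthetic Gaussian temperature series in place of GFDL-HiRAM-C360 output, a simple linear "coarsening", and NumPy's empirical quantiles; all variable and function names are hypothetical.

```python
import numpy as np

def train_quantile_map(coarse_hist, hires_hist, n_quantiles=100):
    """Fit an empirical quantile-mapping transfer function from
    coarse-resolution to high-resolution values over the training period."""
    probs = np.linspace(0.0, 1.0, n_quantiles)
    coarse_q = np.quantile(coarse_hist, probs)
    hires_q = np.quantile(hires_hist, probs)

    # Map each coarse value to the high-resolution value occupying the
    # same quantile of the historical (training) distribution.  Values
    # outside the training range are clamped by np.interp.
    def transfer(x):
        return np.interp(x, coarse_q, hires_q)

    return transfer

# Perfect-model setup: the "truth" is the high-resolution model output
# itself (synthetic here), and the coarse series is derived from it.
rng = np.random.default_rng(0)
hires_hist = rng.normal(15.0, 5.0, 10_000)   # historical high-res temps
coarse_hist = 0.8 * hires_hist + 3.0         # coarsened counterpart

downscale = train_quantile_map(coarse_hist, hires_hist)

# A warmer, more variable future; the same coarsening relationship holds.
hires_future = rng.normal(19.0, 6.0, 10_000)
coarse_future = 0.8 * hires_future + 3.0

# Skill in each period: any extra error in the future period measures the
# breakdown of the stationarity assumption for this trained mapping.
err_hist = np.mean(np.abs(downscale(coarse_hist) - hires_hist))
err_future = np.mean(np.abs(downscale(coarse_future) - hires_future))
```

In this toy case the historically trained mapping reconstructs the historical high-resolution series almost exactly, while the future reconstruction degrades in the warm tail, where the mapping must extrapolate beyond its training range — a simplified analogue of the skill degradation the perfect-model framework is designed to quantify.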

Results will be presented showing how the validity of the stationarity assumption varies regionally, seasonally, and by variable of interest. We also discuss how this methodology can be extended, including (a) exploring additional geographic regions and variables of interest, (b) using different GCMs and statistical downscaling methods, and (c) generating more challenging tests by altering the distribution of the coarsened data sets rather than merely interpolating from the high-resolution grid.
