Lacking observations of the future, we utilize a perfect-model experimental design. Using the output of high resolution global dynamical climate model simulations, we examine the ability of three different statistical downscaling methods to simulate current and future mean and extreme temperature and precipitation measures across the United States. The GCM used is the GFDL-HiRAM-C360 model, and the three downscaling techniques tested are the simple delta, monthly quantile mapping, and daily asynchronous quantile regression methods. The experimental design differs from the usual real-world application of statistical downscaling in that no observations are used. Instead, the study uses output from a set of high resolution GCM experiments, some run to simulate the climate of recent decades and others to simulate conditions at the end of the 21st century under a high greenhouse gas emissions scenario. Companion data sets were constructed by interpolating the high resolution (~25 km) GCM output to a much coarser grid (~200 km). During the downscaling training step, statistical methods quantify relationships between the high resolution and coarse resolution data sets for the historical period. Then, using the coarsened data sets as input, we assess how well the downscaling relationships deduced from the historical period can reconstruct the high resolution GCM output, both for the historical period and for the late 21st century projections. This perfect-model framework allows one to test the assumption of statistical stationarity by determining the extent to which a downscaling method's skill is degraded for future projections relative to the historical period.
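To make the training and evaluation steps concrete, the following is a minimal sketch, not the study's actual code, of the perfect-model evaluation loop for one simplified method (empirical quantile mapping without monthly stratification), applied to synthetic data at a single grid cell. All variable names, the synthetic series, and the linear "coarsening" bias are illustrative assumptions.

```python
# Minimal perfect-model sketch: train a quantile mapping on the historical period,
# then check how well it reconstructs high-resolution "truth" in both periods.
import numpy as np

def train_quantile_map(coarse_hist, hires_hist, n_quantiles=100):
    """Fit an empirical quantile mapping from coarse- to high-resolution values."""
    probs = np.linspace(0.0, 1.0, n_quantiles)
    return np.quantile(coarse_hist, probs), np.quantile(hires_hist, probs)

def apply_quantile_map(coarse_vals, coarse_q, hires_q):
    """Map coarse values through the fitted quantile relationship."""
    return np.interp(coarse_vals, coarse_q, hires_q)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)

# Synthetic "truth": daily temperature at one high-resolution grid cell.
hires_hist = rng.normal(15.0, 5.0, size=3650)     # historical period
hires_future = rng.normal(19.0, 6.0, size=3650)   # warmer, more variable future

# Companion coarse-resolution series (truth plus a smooth bias), standing in
# for the ~25 km output interpolated to a ~200 km grid.
coarse_hist = 0.9 * hires_hist + 2.0
coarse_future = 0.9 * hires_future + 2.0

# Training step: relate coarse and high-resolution data over the historical period.
cq, hq = train_quantile_map(coarse_hist, hires_hist)

# Evaluation step: reconstruct the high-resolution series for both periods.
recon_hist = apply_quantile_map(coarse_hist, cq, hq)
recon_future = apply_quantile_map(coarse_future, cq, hq)

print(f"historical RMSE: {rmse(recon_hist, hires_hist):.2f}")
print(f"future RMSE:     {rmse(recon_future, hires_future):.2f}")
# Future values outside the historical range are clamped to the trained quantile
# ends, so the future score degrades relative to the historical one; that
# degradation is the stationarity signal the perfect-model framework exposes.
```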
Results will be presented showing how the validity of assuming stationarity varies regionally, seasonally, and by variable of interest. We also discuss how this methodology can be extended, including (a) exploring additional geographic regions and variables of interest, (b) using different GCMs and statistical downscaling methods, and (c) generating more challenging tests by altering the distribution of the coarsened data sets rather than merely interpolating from the high resolution grid.