Thursday, 14 January 2016: 11:30 AM
Room 226/227 (New Orleans Ernest N. Morial Convention Center)
Here we expand upon a Perfect Model approach that allows one to quantitatively assess statistical downscaling (SD) skill both for the contemporary climate and for future projections. Our analysis approach is motivated by the recognition that different measures of weather or climate are key to different end-use applications (e.g., climate impacts studies). Whereas some climate impacts studies are most sensitive to projected changes in central tendencies (e.g., trends in monthly mean temperatures), for other applications it is changes in extremes, thresholds, or short-term exceedances (e.g., single-day precipitation, heat waves) that are most important.

By applying a variety of analysis metrics to data sets generated using different SD techniques, we explore the relative strengths and weaknesses of a set of SD methods. Some of the analysis metrics consider the entire distribution, whereas others focus on measures of central tendency, the tails, or climatic indices derived from daily temperature and precipitation time series. Hence, some analysis metrics will be more pertinent to particular end-use applications. We demonstrate that SD methods that perform similarly in terms of central-tendency metrics can exhibit noticeably different behavior in metrics that focus on other aspects of the distribution.

In our Perfect Model framework, the output of a high-resolution climate model serves as a proxy for the 'truth' (past and future), and a proxy for typical climate model output is created by degrading the high-resolution data sets via interpolation to a coarser resolution. The three time periods studied in the Perfect Model framework are 1979-2008, 2026-2035, and 2086-2095, and the region examined covers the contiguous United States. (See http://gfdl.noaa.gov/esd_eval_stationarity_pg1)
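The degradation step at the heart of the Perfect Model setup can be sketched as follows. This is a minimal illustration only, assuming simple block averaging and nearest-neighbour regridding on a synthetic field; the abstract does not specify the actual interpolation scheme or grids used, and the function names here are hypothetical.

```python
import numpy as np

def coarsen(field, factor):
    # Block-average a 2-D field by `factor` in each dimension, mimicking
    # the degradation of high-resolution output to a coarse-GCM-like grid.
    # (Illustrative choice of scheme; not necessarily the one used in the study.)
    ny, nx = field.shape
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

def refine(field, factor):
    # Nearest-neighbour re-expansion of the coarse field back onto the fine
    # grid, so the degraded data can be compared point-by-point with the
    # high-resolution 'truth'.
    return np.repeat(np.repeat(field, factor, axis=0), factor, axis=1)

# Synthetic high-resolution "truth": a smooth gradient plus fine-scale noise.
rng = np.random.default_rng(0)
truth = np.linspace(0.0, 10.0, 64)[None, :] + rng.normal(0.0, 1.0, (64, 64))

degraded = refine(coarsen(truth, 8), 8)

# A central-tendency metric (the domain mean) survives the degradation
# almost exactly, while a tail metric (the domain maximum) is damped,
# illustrating why metrics beyond central tendency matter.
print(abs(truth.mean() - degraded.mean()))  # near zero: block means preserve the mean
print(truth.max(), degraded.max())          # coarsening damps the extremes
```

This contrast (means preserved, extremes damped) is the kind of behavior the distribution-wide and tail-focused metrics in the abstract are designed to expose.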