Some progress has been made along those separate lines, but at least two important parts remain uncharacterized under that approach: (1) the known important interactions among those three characteristics, which are measured separately even in the selected subsets of the GCM ensembles, and (2) objective sampling schemes for the widening population of the other, non-GCM components in the modeling chain. Characterizing those interactions and successfully representing enough of the uncertainty space from which to subsample across all modeling components in an objective way requires substantial computational effort to create and visualize the future scenarios along with their associated uncertainty estimates.
Computing, visualizing, and using scenarios developed with different modeling-chain components is crucial for water-resource impact assessments because choices among component alternatives can produce important differences in the final assessments of vulnerability and impact. Those choices include the hydrologic model and its parameters and parameterizations; the climatological forcings that drive the hydrologic models, determined by various climate-downscaling approaches; the GCMs selected and the sets of output variables to be downscaled; and the driving global emissions scenarios. Real-world assessments of water-resource vulnerability and future climate impacts, however, can be highly time sensitive and strongly resource limited, so they typically cannot accommodate full uncertainty quantification for every component of the modeling chain. This is a significant deficiency: different methods for producing gridded meteorological fields, for example, have been shown to have very different effects on the hydrologic outcomes they drive, with uncertainties across those methods exceeding the climate-change signal in some cases. Similarly, many popular climate-downscaling methods simply rescale GCM precipitation, producing hydroclimatic projections with too much drizzle, incorrect representations of extreme events, and improper spatial scaling of variables crucial to characterizing hydrologic responses and thus to informing decisions about future actions. The result is assessments built on only partially revealed uncertainties, which can misrepresent significant sensitivities and impacts in the final assessments of climate threats and hydrologic vulnerabilities.
A team of earth scientists and applied hydrologists from the US Army Corps of Engineers, the Bureau of Reclamation, the University of Washington, and the National Center for Atmospheric Research is developing techniques to subsample uncertainties objectively across modeling-chain components and to integrate the results into quantitative hydrologic storylines of climate-changed futures. Importantly, these quantitative storylines are not drawn from a small sample of models or components; rather, they derive from a more comprehensive characterization of the full uncertainty space computed for each component and are anchored in actual water-management decisions potentially affected by climate change. This talk will describe part of our work in computing variability and uncertainty for multiple modeling-chain components and in characterizing component interactions, using newly developed observational data, models, and post-processing tools to reduce the computational load at the impacts end of the modeling chain and to make the resulting quantitative storylines more directly useful in water-resource planning applications.
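To make the idea of objective subsampling across modeling-chain components concrete, the sketch below draws a small, balanced set of chain configurations (a discrete Latin-hypercube-style design) instead of running the full factorial ensemble. The component names and option lists are purely illustrative assumptions for this example, not the project's actual models or methods.

```python
import itertools
import random

# Hypothetical modeling-chain components; the option names are illustrative
# placeholders, not the project's actual model or method list.
chain = {
    "emissions": ["scenario-1", "scenario-2"],
    "gcm": ["GCM-A", "GCM-B", "GCM-C", "GCM-D"],
    "downscaling": ["method-X", "method-Y", "method-Z"],
    "hydro_model": ["model-1", "model-2"],
}

def latin_hypercube_subsample(chain, n, seed=0):
    """Draw n chain configurations so that each component's options are
    used as evenly as possible, rather than running every combination."""
    rng = random.Random(seed)
    columns = {}
    for name, options in chain.items():
        # Repeat the option list enough times to cover n draws, truncate,
        # then shuffle, so every option appears a near-equal number of times.
        col = (options * (n // len(options) + 1))[:n]
        rng.shuffle(col)
        columns[name] = col
    # Pair up the shuffled columns row by row into full configurations.
    return [dict(zip(chain, row)) for row in zip(*columns.values())]

sample = latin_hypercube_subsample(chain, n=8)
full = list(itertools.product(*chain.values()))
print(len(full), "full-factorial runs vs", len(sample), "subsampled runs")
```

The design choice here is the trade-off the abstract describes: the full factorial chain (48 runs in this toy example) is replaced by a much smaller design (8 runs) that still exercises every option of every component, which is one simple way to keep the computational load at the impacts end of the chain manageable.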