84th AMS Annual Meeting

Tuesday, 13 January 2004
Three distinct goals for research with climate models
Hall 4AB
S. Fred Singer, Science & Environmental Policy Project, Arlington, VA
Summary: We visualize three distinct types of model studies. The first is a comparison with recent observations (since 1958), using all of the relevant forcings, for the purpose of model validation. The second is an intercomparison between models over a 100-year period, using only GH-gas forcing, designed to discover the reasons for, and to narrow, the current dispersion in values of climate sensitivity. The third is an intercomparison of long runs (>500 years) of unforced models to study internal variability (“noise”) and to determine to what extent it represents natural variability or is model-dependent.

Introduction:

In view of its potential policy interest, a great deal of money has been spent on climate research in the past decade by public and private funding sources throughout the world. The US government has spent some $20 billion and is preparing to continue spending at the rate of nearly $2 billion a year. Not only climate specialists but also geophysicists and indeed taxpayers have a stake in seeing that these funds are invested wisely.

A large fraction of these funds is spent on climate model experiments; the expertise of the modeling community has become an important national asset. Climate models are essential for predicting future climate changes. But which of the many models can one rely on? Can any of them be validated by comparison with observations? Can they account for the observed natural variability?

Three Tasks for Climate Models

In pursuing these questions, the climate modeling community should undertake three important and quite distinct research tasks – listed here according to a suggested priority.

1. Validation of climate models against observations:

The most important task is the validation of models by comparing their results with observations, primarily with temperatures of the past ~50 years (where good data are available), but also with precipitation, changes in ocean currents, and perhaps other climate parameters. [One gains little information by reaching back to the 19th century, since early forcings are poorly known and temperature data are quite incomplete.]

For this purpose, a model must simulate the various forcings as accurately as possible, including their spatial and temporal dependence. In addition to greenhouse (GH) gases, this would include the direct and indirect effects of different classes of aerosols (both natural and manmade), changes in stratospheric and tropospheric ozone, and variability of solar irradiance and solar wind effects. Changes in land use and possibly growing air traffic can also influence climate trends. The time resolution for specifying forcings should be about a month or two and the runs should have appropriate time resolution to permit the observation of day-night effects.

The model results should be the temperature fields and trends at different atmospheric levels (from surface to stratosphere) with a time resolution of 1 to 2 months and a spatial resolution of several latitude zones. In addition, one would like to explore any trends in the seasonal ratios and in the diurnal temperature range (DTR).

In reviewing the history of climate modeling, e.g., in the three major IPCC reports, one sees increasing sophistication in simulating important climate forcings. Starting with only GH-gases a decade ago, models have gradually added the interaction with ocean circulation, the direct effects of sulfate aerosols, then ozone changes, and finally volcanic events and solar irradiance changes. But current research identifies additional forcings that have not as yet been included in standard models. So while there has been remarkable progress, we may yet discover other important forcings and feedbacks that must be included [3].

Thus the claimed validation presented in the IPCC-TAR Summary is doubtful because it 1) shows only annual global mean temperature, 2) shows only surface values, and 3) uses only a limited number of forcings, ignoring some that are likely to be as large as or larger than the ones included (e.g., black carbon, indirect aerosol effects, and effects of the solar wind).

It is remarkable, however, that every IPCC Summary since 1990 has claimed that observed temperature trends are in agreement with model results and that the “climate sensitivity” (the temperature increase corresponding to a doubling of GH-gas forcing) is between 1.5 and 4.5 C. [Actually, even slight changes in cloud parameters (like droplet-size distribution) can yield higher (or lower) values.]
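To make the quoted sensitivity range concrete, here is a back-of-envelope sketch (my illustration, not part of the abstract) using the standard zero-dimensional relation: the radiative forcing of a CO2 doubling is roughly F_2x = 5.35 ln(2) ≈ 3.7 W/m², and the equilibrium warming is ΔT = λ·F_2x, where λ is the model-dependent climate feedback parameter. The IPCC range of 1.5 to 4.5 C then corresponds to λ between about 0.4 and 1.2 K per W/m².

```python
import math

# Illustrative back-of-envelope relation (not from the abstract):
# forcing of a CO2 doubling, F_2x = 5.35 * ln(2) W/m^2, and
# equilibrium warming dT = lambda * F_2x, with lambda the
# (model-dependent) climate feedback parameter in K/(W/m^2).
F_2X = 5.35 * math.log(2.0)  # ~3.7 W/m^2

def warming_for_doubling(lam: float) -> float:
    """Equilibrium temperature rise (K) for a CO2 doubling."""
    return lam * F_2X

# The quoted 1.5-4.5 C range maps onto lambda of roughly 0.40-1.21:
for lam in (0.40, 0.81, 1.21):
    print(f"lambda = {lam:.2f} K/(W/m^2) -> dT2x = {warming_for_doubling(lam):.1f} K")
```

The point of the sketch is that the entire disputed range traces back to a single uncertain parameter, λ, which bundles together all of the feedbacks discussed below.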

I personally believe that the temperature record of the past 50 years is dominated by non-GH forcings and that climate sensitivity is less than 1.5 C, and perhaps much less – in which case future anthropogenic temperature increases will be inconsequential.

2. Defining Climate Sensitivity: Predicting the climate of 2100 and beyond:

The steady increase in GH-gas concentration should eventually overcome other forcings, like short-lived aerosols and the more-or-less periodic variations of ozone and of solar activity.

The important task is to intercompare model results, using only GH-gas forcing according to some agreed-to standard scenario. The purpose here is not to predict a future warming but to discover how climate sensitivity depends on various parameterizations (e.g., of clouds) and assumptions incorporated into particular models, such as algorithms for convection, radiation transfer codes, and many others.

One measure of success would be the narrowing of the existing 300% dispersion in climate-sensitivity results. The most important task – involving considerable research effort – will be to simulate successfully the important feedbacks of the atmosphere-ocean-land-cryosphere system. In particular, one needs to establish the feedbacks, both positive and negative, of clouds (from stratus to cirrus) and of upper-tropospheric water vapor at different latitudes.
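Why small parameterization differences produce such a wide dispersion can be seen in a common textbook amplification form (again my illustration, with an assumed no-feedback Planck response of about 1.2 K for a CO2 doubling): the equilibrium warming is ΔT = ΔT0 / (1 − f), where f is the net feedback factor. Modest shifts in f, of the size a cloud parameterization change could plausibly produce, span the whole 1.5–4.5 C range.

```python
# Hypothetical illustration of why sensitivity disperses so widely:
# with a no-feedback (Planck) response dT0 ~ 1.2 K for a CO2 doubling,
# a net feedback factor f amplifies the warming as dT = dT0 / (1 - f).
DT0 = 1.2  # K, assumed no-feedback warming for a CO2 doubling

def equilibrium_warming(f: float) -> float:
    """Amplified warming for net feedback factor f (requires f < 1)."""
    if f >= 1.0:
        raise ValueError("f >= 1 implies a runaway feedback")
    return DT0 / (1.0 - f)

# Modest differences in f (e.g., from cloud parameterizations)
# span the entire quoted 1.5-4.5 K range:
for f in (0.2, 0.5, 0.73):
    print(f"f = {f:.2f} -> dT2x = {equilibrium_warming(f):.1f} K")
```

Because ΔT diverges as f approaches 1, the same absolute uncertainty in f is far more consequential at the high-sensitivity end – which is why narrowing the cloud and water-vapor feedbacks matters most.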

3. Studying natural climate variability:

It is often claimed that a long-duration model run without any external forcing can give a realistic value of natural climate variability arising from the interaction of the atmosphere with the oceans. A priori, this seems doubtful, as does the implicit assumption of stationarity.

This doubt is reinforced by the fact that different models show different results for “natural variability.” It would be interesting to establish the causes for these differences among models, especially for coupled atmosphere-ocean models.
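The model-dependence of unforced “noise” can be illustrated with a deliberately simple toy (not a climate model): two AR(1) processes driven by identical white noise but with different persistence – standing in for different internal ocean-atmosphere coupling – produce very different low-frequency variability, even though neither is externally forced.

```python
import random

# Toy illustration (not a climate model): the "natural variability" an
# unforced run produces depends strongly on internal model parameters.
# Two AR(1) processes with different persistence (phi) but identical
# white-noise input show very different variance.
def ar1_series(phi: float, n: int, seed: int = 0) -> list:
    """Generate an AR(1) series x[t] = phi * x[t-1] + eps[t]."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def variance(xs: list) -> float:
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

for phi in (0.3, 0.9):  # "fast" vs. "persistent" internal dynamics
    v = variance(ar1_series(phi, 5000))
    print(f"phi = {phi}: sample variance ~ {v:.2f} (theory: {1/(1-phi**2):.2f})")
```

The persistent process (phi = 0.9) also converges to its true variance much more slowly, which hints at why even >500-year unforced runs may not settle the question of what the “real” natural variability is, and why the stationarity assumption deserves scrutiny.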
