A core component of the much-needed climate information system is a climate observing system. Indeed, the economic risks of climate change are measured in trillions of dollars. Given this challenge, can the USGCRP put forth a vision of how such a system might be designed and built, and how its economic value to society might be estimated?
Recent research (Cooke et al. 2014, 2015) estimated the economic value of such a system at ~$10 trillion to the world economy in today's value (known as "net present value" in economics). In the simplest sense, this is the economic value of advancing climate science learning by 15 years through better observations, analysis, and modeling/prediction. The study further estimated that if the world tripled its current economic investment in climate research (observations, analysis, modeling) to achieve such an advanced observing system, the return on investment would be ~$50 for every $1 invested by society. Few investments can approach such a return. Compare that message to the current zero-sum economics of climate observations, in which one unresolved science question struggles for funding against another, both critical to achieve. We need to change the question from "which critical climate science observation is more important?" to "which climate science observations offer high value and return as a societal investment?"
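The net-present-value arithmetic behind such estimates can be illustrated with a short sketch. The function below is a generic NPV calculation; the cash flows and discount rate in the example are hypothetical placeholders, not the figures used by Cooke et al.

```python
def npv(cash_flows, discount_rate):
    """Net present value of a series of annual cash flows (first flow one year out)."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Hypothetical example: invest $3B/yr for 30 years (a tripled research budget)
# against $150B/yr of avoided climate damages -- placeholder numbers only.
rate = 0.03                          # illustrative discount rate
costs = npv([3.0] * 30, rate)        # present value of the investment, $B
benefits = npv([150.0] * 30, rate)   # present value of the benefits, $B
roi = benefits / costs               # dollars returned per dollar invested
```

With constant annual flows the discounting cancels in the ratio, so the ROI here is simply 150/3 = 50; the point of the NPV framing is that both streams are valued in today's dollars before they are compared.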
A great deal of work has already been done on the “Essential Climate Variables” (ECVs) needed for a climate observing system through the Global Climate Observing System (GCOS), with the latest assessment and recommendations just released in October 2015 (GCOS 2015). The ECVs have been developed in a fairly pragmatic way that takes into account the past record and capabilities as well as the needs, and may not include some climate variables regarded as vital but for which there is no current capability. In addition to assessing the ECVs, GCOS has also highlighted the needs for reprocessing and reanalysis of variables to produce consistent homogeneous datasets (see also Trenberth et al. 2013).
Recent discussions within USGCRP, AGU, AMS and WMO have indicated the need for a thoughtful approach to prioritizing and evaluating proposed climate observations to address specific climate science needs. Some of the conclusions of those discussions are that climate observations need to address specific climate questions and that proposed observations need to be evaluated based on whether they will be able to address those questions effectively.
Design of such an advanced and more rigorous international climate observing system would be a challenge in itself. Key elements of such a design might include:
Define quantified science goals or questions.
Identify the key variables or groups of variables needed to address the critical science questions.
Quantify the spatial coverage and resolution required to address the science questions.
Quantify the temporal duration and resolution required to meet the science requirements.
Quantify the accuracy or quality of the measurement needed to achieve the science goal (e.g., calibration, orbit or surface sampling, algorithm uncertainties).
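The checklist above can be made concrete by recording each proposed observation's quantified requirements in a structured form that candidate systems can be screened against. This is only a sketch: the field names, threshold conventions, and the sea-level example are all hypothetical, and a real design effort would define these per variable.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ObservationRequirement:
    """One proposed climate observation, expressed as quantified requirements
    (field names are illustrative, not a standard)."""
    science_question: str    # the quantified science goal it addresses
    variables: tuple         # key variable(s) or variable group
    resolution_km: float     # coarsest acceptable spatial resolution
    duration_years: float    # minimum record length
    revisit_days: float      # longest acceptable sampling interval
    accuracy: float          # largest acceptable measurement error

def meets(req, resolution_km, duration_years, revisit_days, accuracy):
    """True if a candidate system meets or exceeds every quantified threshold."""
    return (resolution_km <= req.resolution_km and
            duration_years >= req.duration_years and
            revisit_days <= req.revisit_days and
            accuracy <= req.accuracy)

# Hypothetical requirement: global sea-level trend to within 0.3 (in the
# variable's own units), 100 km resolution, a 20-year record, 10-day sampling.
sea_level = ObservationRequirement(
    science_question="Is global mean sea-level rise accelerating?",
    variables=("sea surface height",),
    resolution_km=100.0, duration_years=20.0,
    revisit_days=10.0, accuracy=0.3)
```

Encoding requirements this way forces each of the five design elements to be stated as a number or testable condition rather than a qualitative aspiration.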
Defining the science goals or questions is an area where organizations, including AMS, can greatly assist by focusing attention on the many under-observed areas of climate science.
For critical climate science questions, some groups have already organized thinking and identified priorities for climate research. Key among these are the IPCC Working Group I report (2013); the World Climate Research Programme (WCRP) Grand Challenges on Clouds, Circulation and Climate Sensitivity; Melting Ice and Global Consequences; Climate Extremes; Regional Sea-Level Change and Coastal Impacts; and Water Availability; and the work of GCOS and COSPAR. Further progress may come through the USGCRP or through the new NASA/NOAA/USGS NRC Decadal Survey. A shortcoming of many of these efforts to date, however, is that goals are often expressed as qualitative understanding rather than quantitative hypothesis tests. Through careful development of observing system simulation experiments, observing systems can be evaluated quantitatively, and, if designed properly, proposed observations across a variety of platforms and approaches can be intercompared.
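As a flavor of what such quantitative evaluation involves, the toy observing system simulation experiment below generates a synthetic "nature run" (a prescribed linear trend plus observation noise) and asks how often an observing system of a given accuracy recovers even the sign of the trend. Everything here (the trend, noise levels, and detectability metric) is a hypothetical illustration, not a real OSSE.

```python
import random

def toy_osse(trend_per_year, noise_sd, n_years, n_trials=500, seed=1):
    """Fraction of trials in which a noisy observing system recovers the
    correct sign of a prescribed positive trend: a crude detectability metric."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # synthetic "truth" plus observation error for each year
        obs = [trend_per_year * t + rng.gauss(0.0, noise_sd)
               for t in range(n_years)]
        # ordinary least-squares slope estimate of the observed record
        n = n_years
        tbar = (n - 1) / 2.0
        ybar = sum(obs) / n
        slope = (sum((t - tbar) * (y - ybar) for t, y in enumerate(obs)) /
                 sum((t - tbar) ** 2 for t in range(n)))
        hits += slope > 0
    return hits / n_trials
```

Running it with two noise levels, e.g. `toy_osse(0.02, 0.05, 30)` versus `toy_osse(0.02, 2.0, 30)`, shows the accurate system detecting the trend in essentially every trial while the noisy one often misses it. Replacing the sign test with a specific hypothesis test is the kind of step that turns a qualitative goal into a quantitative one, and lets proposed observing systems be ranked against it.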