1.17
Applying science policy research: The case of the carbon cycle science program

Wednesday, 1 February 2006: 4:15 PM
A307 (Georgia World Congress Center)
Lisa Dilling, Center for Science and Technology Policy Research/Univ. of Colorado, Boulder, CO; and G. E. Maricle and R. A. Pielke Jr.

As carbon cycle science has become more organized in the United States over the past three decades, it has repeatedly been justified by statements that the science conducted in the program will be useful in supporting decision making or informing policy. In one of the earliest examples, in 1978 carbon cycle research was a component of the Carbon Dioxide and Climate program of the newly formed Department of Energy, with the goal of “predicting the environmental, social and economic costs of increasing atmospheric concentrations of carbon dioxide with sufficient confidence to permit policy decisions to be made on the future use of fossil fuels”. With the re-emergence of carbon cycle science over the past several years as a prominent element of the U.S. Global Change Research Program (USGCRP), the Climate Change Research Initiative (CCRI), and now the Climate Change Science Program (CCSP), this goal has been reaffirmed. The U.S. Global Change Research Act, passed in 1990, stated that the program should produce “usable information on which to base policy decisions relating to global change”. In 1999, the community scientific planning document, A U.S. Carbon Cycle Science Plan, called for “coordinated rigorous, interdisciplinary research that is strategically prioritized to address societal needs” and stated that “the planned activities must not only enhance understanding of the carbon cycle, but also improve capabilities to anticipate future conditions and to make informed management decisions”. In 2001, under the President's Climate Change Research Initiative (CCRI), carbon cycle research was named the second of seven high-priority items that would “best support improved public debate and decision-making in the near term.” Most recently, the U.S. Administration reorganized the USGCRP under the Climate Change Science Program “to provide the best possible scientific information to support public discussion and decision making on climate-related issues”, with decision support added as a key element of the Strategic Plan.

There is little doubt that federally supported carbon cycle science programs seek to produce information that supports decision making. What is less clear is how carbon cycle research is being conducted so that it does, in fact, support decision making. There does not appear to be a dedicated part of the integrated U.S. carbon cycle program focused on understanding how to conduct carbon cycle research so that it will be of use to decision makers. Rather, carbon cycle science policy decisions on priorities and project selection are internally governed according to standard scientific norms, under an apparent assumption that good science will necessarily be of use to decision makers.

This exemplifies a very common approach to conducting scientific research, even for research justified explicitly as serving societal goals. The approach goes by various names, including the “linear model” and the “loading dock.” In essence, it holds that science should be conducted largely independently of societal concerns, and that society's use of any scientific results happens more or less automatically over time. No explicit mechanism is provided for understanding what information might be of use, or who the users might be, and therefore for guiding science policies more deliberately according to societal need. The argument in favor of this approach is that science produces its best results when driven by curiosity alone, and that societal demands would detrimentally impinge on the outcomes of scientific research.

While this model may work well for basic research, it has proven less effective at producing research that directly serves societal needs. As experience in several other areas of Earth science demonstrates, scientific research does not necessarily generate information that is useful to anyone outside the scientific community. For example, attempts to provide seasonal-to-interannual climate forecasts as a service to farmers and other natural resource managers have been disappointing: the information provided was not needed; the information that was needed was not provided; the information lacked regional specificity; the presentation and communication tools did not make the information accessible to potential users; potential users lacked trust in the information and the researchers; institutional constraints prevented use of new information; and so on.

We introduce and apply a methodology we call “Reconciling Supply and Demand” to analyze science policy decisions and to provide insight into possible options for improving the ability of carbon cycle science to support decision making. The notion of reconciling supply and demand is borrowed from the classic economic concept of markets driven by the supply of and demand for goods. We apply the concept to the use of information in order to identify where information needs and supply are well matched, and where there is a “missed opportunity”: a chance to better connect the supply of scientific information to societal need. This methodology is an example of policy research as described by Maricle et al. in a separate paper.

As part of our funded project “Science Policy Assessment and Research on Climate” (SPARC), we have engaged in policy research on reconciling supply and demand, including workshops with potential users, carbon cycle scientists, and science policy experts. It is clear that in some specific, limited circumstances, carbon cycle information is being used. In other situations, information may be needed but is not being provided, either because it does not exist or because it is insufficiently “translated”: a clear “missed opportunity.” In this presentation we will conclude with some potential steps forward to capitalize on this opportunity, including options for structuring research, institutional implications, and lessons learned.