The 1990s ushered in a new era of cloud observations and climate modeling that reached maturity in the first decade of this century. The NASA Earth Observing Program, originally conceived in the late 1980s, led to the launch of the Terra (1999) and Aqua (2002) platforms and of additional satellites that, together with Aqua, have become known as the A-train. The DOE Atmospheric Radiation Measurement Program conceived of permanent ground-based observing sites that were implemented in the mid-1990s and continue operating today. This vision encouraged the development of similar instrument concentrations in other countries.
The availability of faster and more capable computers led to both better climate models and much greater throughput, allowing more extensive simulations and model intercomparison studies that highlighted the differences in cloud properties and cloud feedbacks among models and the need for improved cloud and radiation parameterizations. The IPCC reports provide a fascinating history of the efforts to diagnose problems and improve the representation of clouds over the past several decades, but they also point out the relatively modest progress that has been made in reducing the spread in model sensitivities due to cloud processes.
At this point, we have ten to twenty years of incredibly rich datasets and thousands of years of simulations of the current climate. We have countless hours of science effort and hundreds of published papers. We have learned an incredible amount about clouds and cloud modeling. Nonetheless, we are still faced with large uncertainties in the representation of clouds in models. So why have we not achieved the goals that were so ambitiously laid out in the program plans and documents of the 1990s? Is this slow progress a result of limitations in the methods and datasets we use to address the cloud problem, and what can we do to resolve, at least to a substantial extent, model uncertainties in cloud properties and cloud feedbacks?
This talk will briefly survey this thirty-year history, focusing primarily on the use of data to confront models. In a nutshell, the history suggests that we have become quite good at identifying, through a variety of analysis techniques, what the models do and do not do well, but we have yet to develop a uniformly successful approach for moving from the identification of problems to using that information to improve model representations.
The future holds both promise and looming problems. The availability of ever faster and more capable computers suggests that we are entering an era of global high-resolution modeling that may remove the need for some parameterizations but also demand the development of more sophisticated cloud parameterizations. In particular, this may lead to a need for better microphysical schemes at the cloud-resolving model scale, which may well result in computational demands that are still beyond our reach for global simulations. In an era in which US funding for climate science is decreasing and our observational networks are aging and stressed, we will need to find a way to prioritize the measurements and methods we need to make progress, or to make the case that funding new observing systems and more research can accomplish our aims.