Q1, Q2, and beyond: a modeling perspective

Thursday, 27 January 2011: 4:30 PM
4C-3 (Washington State Convention Center)
Akio Arakawa, University of California, Los Angeles, CA

In Yanai (1961), a paper based on his Ph.D. thesis, Michio Yanai introduced the method of Q1/Q2 analysis to investigate the transition from a cold-core wave to a warm-core vortex. In this analysis, the emphasis is placed on the similarity between the horizontal distributions of Q1 and Q2, regarding them as independent estimates of the heat of condensation released in cumulus clouds. In addition to its fascinating description of the transition, the paper points out, “It is an important fact that the order of magnitude of local changes in potential temperature was very small compared with each of the heat source and the expected change from the dry adiabatic relation . . . this means that the stratification of the air was nearly neutral with respect to the moist-adiabatic process.” This statement, made at an amazingly early stage of his career, can be viewed as a prototype of the quasi-neutrality hypothesis postulated by Arakawa (1969) and Arakawa and Schubert (1974).

In large-scale numerical models, a cumulus parameterization is supposed to calculate Q1 and Q2 to predict the large-scale heat and moisture budgets. During the late 1960s and early 1970s, I was struggling to formulate Q1 and Q2 in terms of the “cumulus-induced” subsidence and detrainment of cloud air. My hope was that the use of such a formulation in a GCM could produce realistic profiles of Q1 and Q2. By reversing this procedure, I expected that observed vertical profiles of Q1 and Q2 could be used to infer those environmental processes through the use of a bulk cloud ensemble model. The highly applauded paper by Yanai, Esbensen, and Chu (1973) beautifully demonstrated this possibility. Since then, Q1 and Q2, called the “apparent heat source” and “apparent moisture sink”, have become well-accepted concepts in tropical meteorology and have given a rationale for mass-flux-based cumulus parameterizations.
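For readers less familiar with the diagnosis, the budget definitions commonly associated with this line of work (following the standard form in Yanai et al. 1973; notation sketched here, not quoted from the talk) are:

```latex
Q_1 \equiv \frac{\partial \bar{s}}{\partial t}
         + \bar{\mathbf{v}} \cdot \nabla \bar{s}
         + \bar{\omega}\,\frac{\partial \bar{s}}{\partial p},
\qquad
Q_2 \equiv -L \left( \frac{\partial \bar{q}}{\partial t}
         + \bar{\mathbf{v}} \cdot \nabla \bar{q}
         + \bar{\omega}\,\frac{\partial \bar{q}}{\partial p} \right),
```

where \(s = c_p T + gz\) is the dry static energy, \(q\) the water vapor mixing ratio, \(L\) the latent heat of vaporization, and overbars denote large-scale (grid-scale) averages. Because Q1 and Q2 are diagnosed as residuals of the observed large-scale budgets, they are “apparent” sources and sinks: they include the collective effects of unresolved cloud-scale condensation, evaporation, and eddy transports.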

Thanks to advances in computer technology, however, we can now afford increasingly high horizontal resolutions for numerical models of the atmosphere. Even a global cloud-resolving model (GCRM), which explicitly simulates the “true” heat source and moisture sink, can now be used for climate simulations, as the previous speakers, Satoh and Oouchi, have shown. Thus we now have two families of global models with qualitatively different model physics: that of conventional GCMs, which produce apparent sources and sinks, and that of GCRMs, which produce true sources and sinks. Each of these families is applicable to only a limited range of horizontal resolutions. Ideally, GCMs should converge to GCRMs as the resolution is refined, so that intermediate resolutions can be freely chosen without changing the formulation of model physics. For this convergence to take place, the effects of sub-grid eddy transports on Q1/Q2 must be formulated in a more general way, such that they automatically vanish as the GCM grid size approaches a CRM grid size.

Unfortunately, convergence in this sense does not take place with the conventional formulations of Q1/Q2 because they use the assumption σ<<1, either explicitly or implicitly, where σ is the fractional horizontal area of a grid cell covered by convective clouds. With this assumption, prediction of the mean over the grid cell essentially becomes that of the cloud environment as far as temperature and water vapor mixing ratio are concerned. As the resolution increases, however, the probability density of σ should become bimodal, with σ=1 for grid cells with clouds and σ=0 for grid cells without clouds. Correspondingly, the sub-grid eddy transports should vanish because they are zero for both σ=1 and σ=0. Then Q1 and Q2 become the “true” heat source and moisture sink in this limit.
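The vanishing of the sub-grid eddy transport in both limits can be illustrated with a simple two-value (“top-hat”) decomposition of a grid cell into cloud and environment. For any quantity taking one value in cloud and one in the environment, the grid-mean eddy flux reduces exactly to σ(1−σ)(w_c−w_e)(h_c−h_e), which is zero at σ=0 and σ=1. The sketch below is illustrative only; the variable names and numbers are assumptions, not taken from any particular model or from the talk.

```python
# Sketch: grid-mean sub-grid eddy flux under a two-value ("top-hat")
# decomposition of a grid cell into cloud (fraction sigma) and environment.
# The area-weighted covariance reduces algebraically to
#   <w'h'> = sigma * (1 - sigma) * (w_c - w_e) * (h_c - h_e),
# so the transport vanishes in both limits sigma -> 0 and sigma -> 1.
# All names and values here are illustrative assumptions.

def eddy_flux(sigma, w_c, w_e, h_c, h_e):
    """Grid-mean vertical eddy flux of h for a two-valued cloud/environment field."""
    w_bar = sigma * w_c + (1.0 - sigma) * w_e   # grid-mean vertical velocity
    h_bar = sigma * h_c + (1.0 - sigma) * h_e   # grid-mean transported quantity
    # area-weighted covariance of deviations from the grid mean
    return (sigma * (w_c - w_bar) * (h_c - h_bar)
            + (1.0 - sigma) * (w_e - w_bar) * (h_e - h_bar))

if __name__ == "__main__":
    w_c, w_e = 2.0, -0.02     # cloud updraft vs. weak environmental subsidence (m/s)
    h_c, h_e = 350e3, 340e3   # e.g. moist static energy (J/kg), illustrative
    for sigma in (0.0, 0.1, 0.5, 0.9, 1.0):
        print(f"sigma = {sigma:.1f}  flux = {eddy_flux(sigma, w_c, w_e, h_c, h_e):.3e}")
```

The factor σ(1−σ) peaks at σ=1/2 and vanishes at both ends, which is the algebraic core of why a parameterization written without the σ<<1 assumption can converge to an explicit simulation as σ→1.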

In this talk, I will present evidence for the existence of such tendencies through an analysis of CRM-simulated data using different resolutions for diagnosis. I will also show that relatively minor modifications of conventional parameterizations can drastically broaden their applicability, resulting in a “unified parameterization” that converges to an explicit simulation of individual clouds in the limit of σ→1. Conventional GCMs and GCRMs are then unified into a single family of models, allowing a flexible choice of resolution depending on the objective of the application, without changing the formulation of model physics. I call the approach described above “Route I” for the unification of GCMs and GCRMs. There is another route for unification, called “Route II”, which uses the recently developed Quasi-3D Multiscale Modeling Framework (Q3D MMF; Jung and Arakawa 2010). I will present a brief outline of the Q3D MMF and the highlights of its preliminary tests. The results of the tests are extremely encouraging. The Q3D MMF is about two orders of magnitude less expensive than a GCRM, although it is about two orders of magnitude more expensive than Route I. I believe that comparisons of these two routes with GCRMs and observations will significantly enhance our understanding of the multiscale roles of cloud-associated processes and our ability to interpret Q1 and Q2 in a broader context.