12.6 Seeing Inside Convective Clouds Using Scattering Tomography

Wednesday, 31 January 2024: 5:45 PM
Key 12 (Hilton Baltimore Inner Harbor)
Anthony B. Davis, NASA Jet Propulsion Laboratory / California Institute of Technology, Pasadena, CA; and L. Forster, N. LaHaye, S. Mauceri, and M. J. Kurowski

Convective clouds play key roles in the radiation and hydrological cycles of the Earth’s climate and in planetary boundary layer processes. All of these need better representation in global climate models to reduce the currently crippling uncertainty in future climate predictions. To characterize cloud microphysical properties on a global scale, we use space-based imaging sensors that measure sunlight reflected by clouds. The observed radiance is then related to cloud properties using pixel-level algorithms that assume plane-parallel cloud geometry and horizontally uniform internal structure. This radical simplification of cloud structure is required to invoke standard 1D radiative transfer (RT) models that account for multiple scattering in the cloudy volume. These 1D RT models are used to produce look-up tables (LUTs) that convert a pair of pixel-scale spectral radiances, one visible (VIS) and one shortwave IR (SWIR), into two cloud properties: optical thickness (largely from the VIS channel) and effective particle size (to which the SWIR channel is sensitive). In combination, these quantities yield the vertically integrated liquid- or ice-water path.
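To make the bispectral LUT idea concrete, here is a minimal Python sketch of the retrieval step, assuming a precomputed table. The analytic expressions below are placeholders standing in for actual 1D RT output, and the grid ranges and observation values are purely illustrative.

```python
# Minimal sketch of a bispectral LUT retrieval (optical thickness + effective radius).
# The LUT values below are placeholders; a real table would be produced by a
# 1D radiative-transfer model run over a grid of (optical thickness, effective radius).
import numpy as np

# Hypothetical LUT axes (illustrative ranges only).
tau_grid = np.linspace(1.0, 64.0, 64)          # optical thickness
reff_grid = np.linspace(4.0, 30.0, 27)         # effective radius [um]

# Placeholder "forward model" standing in for 1D RT output:
# VIS reflectance grows with tau; SWIR reflectance drops with reff (absorption).
TAU, REFF = np.meshgrid(tau_grid, reff_grid, indexing="ij")
lut_vis = TAU / (TAU + 8.0)                    # saturating growth with optical thickness
lut_swir = lut_vis * np.exp(-0.04 * REFF)      # particle-size absorption effect

def retrieve(r_vis, r_swir):
    """Return the (tau, reff) pair whose LUT reflectances best match the observation."""
    cost = (lut_vis - r_vis) ** 2 + (lut_swir - r_swir) ** 2
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return tau_grid[i], reff_grid[j]

tau, reff = retrieve(r_vis=0.62, r_swir=0.35)  # illustrative pixel observation
print(f"optical thickness ~ {tau:.1f}, effective radius ~ {reff:.1f} um")
```

In practice the inversion is done by interpolation within the table rather than a brute-force grid search, but the structure of the pixel-by-pixel lookup is the same.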

As expected, these algorithms perform better, though far from perfectly, for single-layer stratiform cloud scenes, which at least resemble the assumed plane-parallel cloud shape. But the LUT-based retrieval fails, often catastrophically, for vertically developed convective clouds. How can remote sensing address these important, spatially complex objects? We have mm-wave cloud radar, of course, since the single-scattering assumption in the radar equation becomes reasonable in that spectral region; where it does not quite hold, a multiple-scattering correction can be applied. However, cloud profiling radars sample the horizontal plane in just one direction, the sub-satellite track. We thus obtain only a 2D "curtain" of data, which limits the scientific output.

Can we have the best of both worlds: a 3D gridded reconstruction of convective clouds from imaging remote sensing observations? We claim that it is possible, as long as the sensor or sensor system can view the convective cloud scene from a sufficient number of diverse directions. Inspired in part by the "optical diffusion tomography" developed for non-invasive biomedical imaging diagnostics, we have devised a scattering-based 3D tomography of cloud masses. At the Symposium, we will survey this decade's worth of NASA-sponsored research.

First, we will show how the breakthrough happened using a physics-based forward model, namely, a deterministic computational 3D RT scheme, in combination with recent advances in inverse problem theory. However, this 3D cloud tomography demonstration is limited to relatively small cumulus clouds, whose reconstructions already consume large amounts of CPU time. Also, the pixel/voxel scale is limited to a few tens of meters, typical of airborne imaging sensors. These limitations impede practical processing of the many clouds observed, say, during a field campaign.
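The Python sketch below illustrates the structure of such a physics-based inversion: a gridded extinction field is adjusted until the forward model reproduces the multi-angle radiances. The expensive deterministic 3D RT solver and its adjoint are replaced here by a hypothetical linear stand-in (forward_rt); all sizes and data are synthetic and purely illustrative.

```python
# Schematic of physics-based cloud tomography: fit a voxelized extinction field
# to multi-angle radiance observations by gradient descent on a data-misfit cost.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_pixels = 200, 500                  # toy problem sizes
A = rng.normal(size=(n_pixels, n_voxels))      # stand-in sensitivity (Jacobian) matrix
beta_true = np.abs(rng.normal(size=n_voxels))  # "true" extinction field
y_obs = A @ beta_true                          # synthetic multi-angle radiances

def forward_rt(beta):
    """Placeholder for rendering the cloud with a deterministic 3D RT code."""
    return A @ beta

def misfit_and_gradient(beta):
    """Data-misfit cost and its gradient (adjoint-based in the real scheme)."""
    residual = forward_rt(beta) - y_obs
    return 0.5 * residual @ residual, A.T @ residual

beta = np.zeros(n_voxels)                      # initial guess: clear sky
step = 1.0 / np.linalg.norm(A, 2) ** 2         # safe gradient-descent step size
for it in range(500):
    cost, grad = misfit_and_gradient(beta)
    beta = np.maximum(beta - step * grad, 0.0) # project onto physical (non-negative) values
print("final data misfit:", cost)
```

The real problem replaces the linear operator with the full nonlinear 3D RT model, adds regularization, and is correspondingly far more expensive per iteration.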

We will then report on recent research into machine-learning (ML) methods that promise to be significantly faster once the ML model is trained on cloud fields from Large Eddy Simulations rendered with a high-fidelity Monte Carlo 3D RT model. Moreover, we have reason to believe that the ML-based approach to scattering cloud tomography will be agnostic to pixel size, and thus amenable to the large swaths of cloud imagery captured by space-based sensors.
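As a rough illustration of this surrogate idea, the sketch below trains a small convolutional network to map multi-view radiance images to a voxelized extinction field. The architecture, grid sizes, and random data are assumptions for illustration only, not the actual model; real training pairs would come from LES cloud fields rendered with a Monte Carlo 3D RT code.

```python
# Minimal sketch of an ML surrogate for scattering cloud tomography:
# multi-view radiance images in, gridded extinction field out.
import torch
import torch.nn as nn

n_views, nx, ny, nz = 9, 32, 32, 16            # assumed view count and voxel grid

model = nn.Sequential(                          # toy encoder-decoder
    nn.Conv2d(n_views, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, nz, kernel_size=3, padding=1),  # nz output channels = vertical columns
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch; in practice: LES cloud fields rendered with Monte Carlo 3D RT.
images = torch.rand(8, n_views, nx, ny)         # multi-angle radiances
extinction = torch.rand(8, nz, nx, ny)          # target 3D extinction field

for epoch in range(5):                          # illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), extinction)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Because the trained network replaces the iterative 3D RT inversion with a single forward pass, inference cost per cloud scene drops dramatically.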

Finally, we will describe how these cloud tomography methods can support present and future satellite missions. We will focus on futuristic cloud products that could shed new light on cloud physics and aerosol-cloud interactions.
