Wednesday, 31 January 2024
Hall E (The Baltimore Convention Center)
Bayesian parameter inference addresses a major challenge in developing atmospheric model parameterizations: estimating "optimal" parameters for a given set of target performance metrics, where the likelihood is a function of those metrics' values and uncertainties as observed or evaluated from both a target and the model with the parameterization of interest. Atmospheric models are too computationally expensive for direct application of Bayesian inference methods such as Markov chain Monte Carlo (MCMC), but machine learning can overcome this limitation. Here we use machine-learning-enabled Bayesian parameter inference to further develop a flexible bulk microphysics parameterization, the Bayesian Observationally constrained Statistical-physical Scheme (BOSS). Specifically, we constrain BOSS parameters in a large-eddy simulation (LES) using target performance metrics from the same LES run with bin microphysics or Lagrangian superdroplet microphysics. For the LES with BOSS, we use machine learning to emulate these metrics, with uncertainty estimates, as a function of the BOSS microphysics parameters and environmental conditions. We then perform Bayesian parameter inference via MCMC sampling using the computationally cheap emulator and estimates of its uncertainty rather than the expensive LES. After validating this workflow, we use it to compare the information gained from different sets of performance metrics, exploring how the parameter constraint depends on metric selection. We also use the workflow to evaluate the performance of different structural designs of the parameterization. This work addresses fundamental issues in parameterization development for atmospheric models: computationally efficient quantitative parameter selection, structural design comparison, and performance metric selection for limiting parameter equifinality.
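The emulator-plus-MCMC workflow described above can be illustrated with a minimal toy sketch. Everything here is hypothetical: a one-dimensional stand-in "emulator" replaces the ML model trained on LES output, the target metric plays the role of the bin/superdroplet LES benchmark, and a simple Metropolis-Hastings sampler stands in for whatever MCMC machinery the authors actually use. The key idea shown is folding the emulator's own predictive uncertainty into the likelihood alongside the target metric's uncertainty.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for an ML emulator: maps one microphysics
# parameter theta to a predicted performance metric, and reports its
# own predictive uncertainty sigma_em. (Illustrative, not from BOSS.)
def emulator(theta):
    return 2.0 * theta, 0.1  # (predicted metric, sigma_em)

# Target metric "observed" from the reference simulation, with its
# uncertainty; generated here from a known true parameter for testing.
theta_true = 1.5
y_target = emulator(theta_true)[0]
sigma_obs = 0.2

def log_likelihood(theta):
    mu, sigma_em = emulator(theta)
    # Combine target uncertainty and emulator uncertainty in quadrature.
    var = sigma_obs**2 + sigma_em**2
    return -0.5 * (y_target - mu) ** 2 / var - 0.5 * math.log(2 * math.pi * var)

# Metropolis-Hastings MCMC over the posterior of theta (flat prior),
# sampling the cheap emulator instead of the expensive simulation.
def mcmc(n_steps=20000, step=0.3, theta0=0.0):
    samples = []
    theta, logp = theta0, log_likelihood(theta0)
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)
        logp_prop = log_likelihood(prop)
        if math.log(random.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop  # accept
        samples.append(theta)
    return samples[n_steps // 2:]  # discard first half as burn-in

samples = mcmc()
post_mean = sum(samples) / len(samples)
```

With a linear emulator and Gaussian errors the posterior is Gaussian, so the sampled mean should recover the true parameter to within the combined uncertainty; in a realistic setting the emulator would be a neural network or Gaussian process fit to LES runs, and theta would be a vector of scheme parameters plus environmental conditions.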

