P1.9
Use of Markov chain Monte Carlo sampling methods to assess and improve variational MODIS cloud retrievals

Monday, 30 January 2006
Exhibit Hall A2 (Georgia World Congress Center)
Derek J. Posselt, University of Michigan, Ann Arbor, MI; and T. S. L'Ecuyer and G. L. Stephens


Most remote-sensing retrievals are based on Bayesian (optimal) estimation theory and, by necessity, assume mean-zero Gaussian error statistics. As a result, a Gaussian probability density function is imposed on the retrieval, restricting the description of its error statistics to a mean and (co)variance alone. If the retrieval problem is well posed and well constrained, so that a single, well-defined probability maximum exists, the optimal estimation method works well. However, there is no way to know whether the retrieval error distribution is in fact Gaussian, and hence whether the retrieval has produced a robust result. Although information content theory can be used to assess the performance of an optimal estimation retrieval, it offers little insight into the shape of the retrieved error distributions.
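For a linear, scalar forward model the Gaussian assumption makes the optimal estimate available in closed form, which illustrates why the retrieval error is then fully described by a mean and variance. A minimal sketch, with purely illustrative numbers (none are taken from a real MODIS retrieval):

```python
# Scalar optimal-estimation (Bayesian) retrieval under the mean-zero Gaussian
# assumption: prior x ~ N(x_a, s_a), observation y = k*x + e with e ~ N(0, s_e).
def optimal_estimate(y, k, x_a, s_a, s_e):
    # The posterior is Gaussian by construction, so its mean and variance
    # characterize the retrieval error completely.
    s_post = 1.0 / (1.0 / s_a + k * k / s_e)      # posterior variance
    x_post = s_post * (x_a / s_a + k * y / s_e)   # posterior mean
    return x_post, s_post

# Illustrative values: prior 0.5 +/- 0.5, observation 1.2 with error var 0.04
x_hat, s_hat = optimal_estimate(y=1.2, k=2.0, x_a=0.5, s_a=0.25, s_e=0.04)
```

The posterior variance is always smaller than both the prior variance and the observation-mapped variance, but nothing in this framework reveals whether the true error distribution is actually Gaussian.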

In theory, it would be useful to explicitly sample the conditional probability density function of each retrieved variable given the available observations. Given relevant observations and a range of realistic values for each retrieved parameter, Markov chain Monte Carlo sampling methods (hereafter MCMC) can be used to randomly sample these distributions, efficiently seeking regions of relatively high probability via a quantitative measure of the fit between observations and retrieved variables. In practice, in each MCMC iteration a retrieved variable is randomly perturbed, new forward-modeled reflectances are computed, and the value of the likelihood function is evaluated and compared to that of the current state. The new state is accepted and stored if its likelihood exceeds that of the current state, or, in a probabilistic manner, if the new likelihood is sufficiently close to the old. The MCMC algorithm is run for many successive iterations to allow a thorough sampling of the state space, and a conditional probability density distribution is built for each retrieved parameter from the set of accepted states. These resulting distributions provide valuable information on the robustness of the estimate of each retrieved variable.
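The iteration described above is essentially a Metropolis random-walk sampler. The sketch below illustrates it for a hypothetical two-parameter problem; the linear `forward` function is a stand-in for the actual radiative transfer model, and all parameter values, step sizes, and error magnitudes are illustrative assumptions, not values from the study:

```python
import math
import random

def forward(omega, tau):
    # Placeholder forward model mapping (single scatter albedo, optical depth)
    # to two channel reflectances; NOT a real radiative transfer calculation.
    return (0.8 * omega + 0.02 * tau, 0.1 * omega + 0.05 * tau)

def log_likelihood(omega, tau, y_obs, sigma=0.01):
    # Mean-zero Gaussian observation error in each channel.
    y = forward(omega, tau)
    return -0.5 * sum(((yo - ym) / sigma) ** 2 for yo, ym in zip(y_obs, y))

def metropolis(y_obs, n_iter=20000, seed=0):
    rng = random.Random(seed)
    omega, tau = 0.5, 5.0              # initial state inside the allowed ranges
    logp = log_likelihood(omega, tau, y_obs)
    chain = []
    for _ in range(n_iter):
        # Randomly perturb one parameter (symmetric random-walk proposal).
        if rng.random() < 0.5:
            o_new, t_new = omega + rng.gauss(0.0, 0.02), tau
        else:
            o_new, t_new = omega, tau + rng.gauss(0.0, 0.2)
        # Proposals outside the realistic parameter ranges are rejected.
        if 0.0 <= o_new <= 1.0 and t_new >= 0.0:
            logp_new = log_likelihood(o_new, t_new, y_obs)
            # Accept if more likely, or probabilistically if nearly as likely.
            if rng.random() < math.exp(min(0.0, logp_new - logp)):
                omega, tau, logp = o_new, t_new, logp_new
        chain.append((omega, tau))     # current state contributes to the PDF
    return chain

# Synthetic observations generated from "true" parameters (0.9, 10.0).
y_obs = forward(0.9, 10.0)
chain = metropolis(y_obs)
post = chain[5000:]                    # discard burn-in before building the PDF
mean_omega = sum(o for o, _ in post) / len(post)
mean_tau = sum(t for _, t in post) / len(post)
```

Histogramming `post` along each coordinate yields the conditional probability density of each parameter, whose shape (Gaussian or otherwise) is exactly the information an optimal estimation retrieval cannot provide.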

In this presentation, the properties of the MCMC sampler are demonstrated using a relatively simple two-parameter MODIS cloud retrieval. Observations consist of MODIS reflectances in visible and near-infrared channels (e.g., Nakajima and King, 1990), and the ranges of possible values of single scatter albedo and cloud optical depth define the probability space to be sampled. Results from an MCMC-based retrieval are quantitatively compared with those from an optimal estimation retrieval, and the utility of the MCMC method is explored. It is demonstrated that, while the MCMC method is far too computationally expensive to be applied operationally, it provides unique information on the error characteristics of the retrieved variables.