Robert D. T. Wendt* (1) and Wendell A. Nuss (1)
(1) Naval Postgraduate School, Monterey, CA, 93943, http://www.nps.edu/
Dynamical model predictions of sensible weather variables often manifest systematic patterns of forecast error. Ensemble prediction systems (EPS) exhibit the same deficiencies and, as a result, frequently produce biased central tendencies and underdispersive ensemble spread (e.g., Gneiting et al. 2005; Hodyss et al. 2016). Consequently, the NWP community has largely focused on upgrades to upstream model components (improvements in data assimilation, the governing dynamics, numerical techniques, and the various parameterizations of subgrid-scale processes) to effect additional skill from contemporary objective guidance. However, comparatively little attention has been given to rigorous statistical analysis of forecast data downstream of these model components. Statistical post-processing addresses this gap by exploiting correlations between forecast variables and corresponding observations to mitigate systematic defects and improve the reliability of deterministic and probabilistic forecast estimates. It seeks unbiased single-valued forecast estimates (for example, the influential regression method of model output statistics, or MOS, introduced by Glahn and Lowry 1972) and calibrated predictive distributions that have unbiased means and optimized sharpness subject to statistical consistency with the observations (vis-à-vis Gneiting et al. 2007).
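The MOS idea referenced above can be illustrated with a minimal sketch. The data below are synthetic (a raw forecast with multiplicative and additive biases relative to the observations), not the operational dataset; the correction is an ordinary least squares regression of observations on the raw forecast.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: raw 2-m temperature forecasts with a warm,
# conditional bias, plus matching (near-perfect) observations.
n = 500
truth = 15.0 + 8.0 * rng.standard_normal(n)
raw_forecast = 1.2 * truth + 2.0 + 1.5 * rng.standard_normal(n)
obs = truth + 0.5 * rng.standard_normal(n)

# MOS-style correction: regress observations on the raw forecast
# (design matrix with an intercept column) via ordinary least squares.
X = np.column_stack([np.ones(n), raw_forecast])
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)

corrected = X @ coef
raw_rmse = np.sqrt(np.mean((raw_forecast - obs) ** 2))
mos_rmse = np.sqrt(np.mean((corrected - obs) ** 2))
print(f"raw RMSE: {raw_rmse:.2f}  MOS-corrected RMSE: {mos_rmse:.2f}")
```

Because the raw forecast carries systematic (rather than purely random) error, the regression removes most of it, which is exactly the behavior MOS exploits in practice.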
This abstract explores the efficacy of a direct Bayesian extension of the method of ensemble model output statistics (EMOS), introduced by Gneiting et al. (2005) and later refined by Richter (2012), for short-timescale forecasts of surface temperature and wind speed made by the NCEP Short Range Ensemble Forecast (SREF) system. In particular, a hierarchical Bayesian probability model has been developed to invert the canonical probability statement and stochastically parametrize observable forecast variables with unobservable model parameters and hyperparameters within a multivariate multiple linear regression framework. A priori forecast beliefs are conditioned on a time series of previous model forecasts (i.e., predictors) and their corresponding observations (i.e., predictands) to train a hierarchical multivariate Bayesian predictive model. Finally, an adaptive variant of the random-walk Metropolis algorithm, with block-wise multidimensional parameter updates, has been developed within a Markov chain Monte Carlo (MCMC) sampling framework (Figure 1a) to generate Bayesian posterior statistical beliefs for latent model parameters (Figure 1b) and posterior predictive distributions (PPDs), that is, probabilistic forecast distributions over sensible weather variables. These Bayesian PPDs explicitly incorporate the coupled uncertainty of multiple parameter inferences and corresponding forecast variables through probability density functions conditioned on the joint probability model and available training data (e.g., Figures 1c and 1d).
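The sampling machinery described above can be sketched in simplified form. The toy below is a univariate, non-hierarchical stand-in (synthetic data, weakly informative priors, hand-rolled adaptation), so it only illustrates the flavor of the approach: block-wise random-walk Metropolis updates with crude proposal-scale adaptation during burn-in, followed by posterior predictive draws that propagate both parameter and observation uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: one predictor (ensemble-mean forecast) and observations.
n = 200
x = rng.uniform(0.0, 20.0, n)
y = 1.5 + 0.8 * x + rng.normal(0.0, 2.0, n)
X = np.column_stack([np.ones(n), x])

def log_post(theta):
    """Unnormalized log posterior: Gaussian likelihood with weakly
    informative priors on (intercept, slope, log_sigma)."""
    beta, log_sigma = theta[:2], theta[2]
    sigma = np.exp(log_sigma)
    resid = y - X @ beta
    loglik = -n * log_sigma - 0.5 * np.sum(resid**2) / sigma**2
    logprior = -0.5 * np.sum(beta**2) / 100.0 - 0.5 * log_sigma**2 / 4.0
    return loglik + logprior

# Block-wise random-walk Metropolis: update the regression-coefficient
# block and the variance block separately, nudging each proposal scale
# toward ~30% acceptance during burn-in.
theta = np.zeros(3)
blocks = [np.array([0, 1]), np.array([2])]
scales = [0.5, 0.5]                       # one proposal scale per block
accept = np.zeros(2)
n_iter, burn, draws = 6000, 2000, []
lp = log_post(theta)
for it in range(n_iter):
    for b, idx in enumerate(blocks):
        prop = theta.copy()
        prop[idx] += scales[b] * rng.standard_normal(idx.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
            accept[b] += 1
        if it < burn and (it + 1) % 100 == 0:   # crude scale adaptation
            scales[b] *= 1.1 if accept[b] / (it + 1) > 0.3 else 0.9
    if it >= burn:
        draws.append(theta.copy())
draws = np.array(draws)

# Posterior predictive draws at a new forecast value x* = 10: each draw
# combines sampled parameters with a fresh observation-noise realization.
x_star = np.array([1.0, 10.0])
ppd = draws[:, :2] @ x_star + np.exp(draws[:, 2]) * rng.standard_normal(len(draws))
print("posterior mean (intercept, slope):", draws[:, :2].mean(axis=0))
print("PPD mean and std at x*=10:", ppd.mean(), ppd.std())
```

The operational model in the abstract is multivariate and hierarchical (with hyperparameters), which this sketch deliberately omits; the block structure and adaptive scaling are the transferable ideas.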
This study will therefore explore three principal issues: 1) whether Bayesian parameter estimation and MCMC sampling techniques provide a computationally viable method of statistical post-processing when compared with canonical ensemble methods; 2) whether the Bayesian/MCMC post-processing approach provides meaningful improvements in forecast skill when compared with suitable reference forecasts (i.e., traditional linear regression fit by ordinary least squares and maximum likelihood estimation); and 3) whether the Bayesian PPDs provide reliable, calibrated forecast distributions whose probabilistic interpretations correspond to the observed frequency and variability of the predictands. This Bayesian estimation approach should yield a family of joint and marginal probability distributions that can be converted into intuitive probability statements that reliably communicate the uncertainty of the forecast.
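The calibration question (issue 3) is commonly assessed with probability integral transform (PIT) histograms and interval coverage. A minimal sketch, using synthetic Gaussian predictive distributions rather than actual SREF output: for a calibrated forecast, PIT values of the verifying observations are uniform on [0, 1] and a central 90% predictive interval covers about 90% of observations.

```python
from math import erf

import numpy as np

rng = np.random.default_rng(2)

# Synthetic verification set: each case has a Gaussian predictive
# distribution and a verifying observation drawn from that distribution
# (i.e., the forecasts are calibrated by construction).
n_cases = 1000
mu = rng.normal(10.0, 3.0, n_cases)       # predictive means
sigma = 2.0                                # predictive std (assumed known)
obs = mu + sigma * rng.standard_normal(n_cases)

# PIT: the predictive CDF evaluated at the observation.
pit = 0.5 * (1.0 + np.vectorize(erf)((obs - mu) / (sigma * np.sqrt(2.0))))

# Two simple diagnostics: empirical coverage of the central 90% interval
# and a coarse PIT histogram (flat counts indicate calibration).
coverage = np.mean((pit > 0.05) & (pit < 0.95))
hist, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
print(f"central 90% coverage: {coverage:.3f}")
print("PIT histogram counts:", hist)
```

An underdispersive EPS of the kind discussed in the opening paragraph would instead show a U-shaped PIT histogram and coverage well below the nominal level.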
Glahn, H. R., and D. A. Lowry, 1972: The use of model output statistics (MOS) in objective weather forecasting. J. Appl. Meteor., 11, 1203–1211.
Gneiting, T., A. E. Raftery, A. Westveld, and T. Goldman, 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Wea. Rev., 133, 1098–1118.
Gneiting, T., F. Balabdaoui, and A. E. Raftery, 2007: Probabilistic forecasts, calibration and sharpness. J. Roy. Statist. Soc., Ser. B, 69, 243–268.
Hodyss, D., E. Satterfield, J. McLay, T. M. Hamill, and M. Scheuerer, 2016: Inaccuracies with multimodel postprocessing methods involving weighted, regression-corrected forecasts. Mon. Wea. Rev., 144, 1649–1668.