Monday, 29 January 2024
Hall E (The Baltimore Convention Center)
In air quality modelling in support of regulatory programs, there is a pressing need for well-motivated and timely estimates of uncertainty to facilitate good policy decisions. If policy makers are not provided with confidence intervals, statistically insignificant differences between emissions scenarios may spuriously inform policy, translating into unintended consequences. While ensemble modelling would allow us to estimate uncertainties, the large computational cost and time needed to perform a large number of model runs are prohibitive. We describe our attempts to adequately quantify uncertainty within the constraints of limited computational resources. One approach we have used is the confidence ratio (CR), where the condition CR > 1 defines a spatial region over which two GEM-MACH (our chemical transport model) simulations are significantly different; this region is then used as a mask for subsequent calculations. One drawback of this method is that it assumes the values are normally distributed about their means. Another approach, currently under development, is to apply bootstrapping (random sampling with replacement) to the output of GEM-MACH simulations to place confidence intervals on standard pollutant metrics. An advantage of bootstrapping is that it is relatively simple to implement and non-parametric -- no assumptions are made about the probability distribution of the model data. A disadvantage is that bootstrapping does not account for model error (e.g. error in the choice of parameter values). To account for model error, the usual approach is to run an ensemble, but this is unaffordable in our context. A possible alternative, which we are in the early stages of exploring, is to combine a minimal ensemble with post-processing by a neural network -- this approach can address all types of error, is computationally affordable, and allows the computation of uncertainties.
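One possible realization of the confidence-ratio mask is sketched below. The abstract does not spell out the CR formula, so the definition used here is an assumption: the absolute difference of the two scenarios' means divided by the half-width of a two-sided (1 - alpha) confidence interval on that difference, under the normality assumption noted above. The array shapes, variable names, and alpha = 0.05 are illustrative choices only.

```python
import numpy as np
from scipy import stats

def confidence_ratio_mask(field_a, field_b, alpha=0.05):
    """Boolean mask of grid cells where two simulations differ significantly.

    field_a, field_b: arrays of shape (time, ny, nx) holding a pollutant
    field from two model runs.  Assumed CR definition: |mean difference|
    divided by the half-width of a (1 - alpha) confidence interval on that
    difference, which presumes the values are normally distributed about
    their means.
    """
    nt = field_a.shape[0]
    diff = field_a - field_b
    mean_diff = diff.mean(axis=0)
    # Standard error of the mean difference at each grid cell.
    se = diff.std(axis=0, ddof=1) / np.sqrt(nt)
    half_width = stats.t.ppf(1.0 - alpha / 2.0, df=nt - 1) * se
    cr = np.abs(mean_diff) / half_width
    # CR > 1 marks cells where the difference is statistically significant;
    # this mask would then restrict later calculations to those cells.
    return cr > 1.0
```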
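The bootstrapping idea can likewise be illustrated with a minimal sketch, assuming the model output for a single pollutant metric (for example, a daily metric at one grid cell) is available as a 1-D array; the function name, sample sizes, and the 95% percentile interval are illustrative assumptions rather than details from the abstract.

```python
import numpy as np

def bootstrap_ci(values, n_boot=1000, alpha=0.05, seed=None):
    """Percentile-bootstrap confidence interval for the mean of `values`.

    `values` is assumed to be a 1-D array of a pollutant metric from a
    single model simulation; resampling with replacement makes no
    assumption about the underlying probability distribution.
    """
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    # Resample with replacement and recompute the statistic each time.
    boot_means = np.array([
        rng.choice(values, size=values.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot_means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

In use, one would compute such intervals for the same metric from two emissions scenarios and compare them; note that, as stated above, this quantifies sampling variability in the model output but not model error itself.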

