In this talk, results will first be presented from an evaluation of the model at resolutions between 1.5 km and 100 m. Storms are identified from surface rain-rate thresholds and compared statistically with storms identified in the same way from the 5-min surface rain-rate estimates of the Met Office radar network. We find that the simulated storms become smaller as the grid length is reduced: storms in the 1.5-km model are typically too large compared with the radar, while those in the 200-m and 100-m simulations are typically too small. Moreover, the small storms in the higher-resolution simulations tend to be much too intense.
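The identification step described above can be sketched as connected-component labeling of the rain-rate field. The following is a minimal illustration only: the threshold value, grid, and connectivity choice are placeholder assumptions, not those used in the study.

```python
from collections import deque

# Hypothetical sketch of threshold-based storm identification: contiguous
# grid cells whose surface rain rate exceeds a threshold (mm/h) form one
# "storm". Threshold, field values, and 4-connectivity are illustrative
# assumptions, not the study's actual configuration.
def identify_storms(rain_rate, threshold=1.0):
    """Return a list of storms, each a list of (row, col) cells,
    found by flood-filling 4-connected cells above the threshold."""
    nrows, ncols = len(rain_rate), len(rain_rate[0])
    seen = [[False] * ncols for _ in range(nrows)]
    storms = []
    for r in range(nrows):
        for c in range(ncols):
            if rain_rate[r][c] > threshold and not seen[r][c]:
                cells, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    i, j = queue.popleft()
                    cells.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < nrows and 0 <= nj < ncols
                                and not seen[ni][nj]
                                and rain_rate[ni][nj] > threshold):
                            seen[ni][nj] = True
                            queue.append((ni, nj))
                storms.append(cells)
    return storms

# Toy field: one 4-cell rainy patch and one isolated 1-cell patch
field = [[0.0] * 6 for _ in range(6)]
for r, c in ((1, 1), (1, 2), (2, 1), (2, 2)):
    field[r][c] = 5.0
field[4][4] = 3.0
sizes = sorted(len(s) for s in identify_storms(field))
print(sizes)  # [1, 4]
```

With storms expressed as cell sets, size and area-averaged rain-rate statistics follow directly, which is what enables the like-for-like comparison against radar-derived storms.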
We then investigate the sensitivity of storm morphology in the model to the mixing length used in the sub-grid Smagorinsky turbulence scheme. As the sub-grid mixing length is decreased at a given resolution, the number of small storms with high area-averaged rain rates increases. Thus, by changing the mixing length, we can make a lower-resolution simulation produce storm morphologies closer to those of a higher-resolution simulation. However, no single mixing-length value appears to be optimal both for cases of deep convection in summer and for shallower, scattered showers in spring. The theoretical reasons for this will be discussed. Changes to the model's microphysics specification are found to have little effect on either storm scale or intensity.
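The leverage of the mixing length can be seen from the generic Smagorinsky relation, in which the eddy viscosity scales with the square of the mixing length. This is an illustrative sketch of the standard textbook form, not necessarily the model's exact formulation:

```python
# Generic Smagorinsky eddy-viscosity relation (illustrative only; the
# model's exact formulation may differ): nu_t = lam**2 * |S|, where lam
# is the sub-grid mixing length (m) and |S| the local strain rate (1/s).
def eddy_viscosity(mixing_length_m, strain_rate_per_s):
    return mixing_length_m ** 2 * strain_rate_per_s

# Halving the mixing length cuts the sub-grid diffusion by a factor of 4,
# so small, intense features are smoothed away less strongly.
nu_full = eddy_viscosity(100.0, 1e-3)  # placeholder values: 10.0 m^2/s
nu_half = eddy_viscosity(50.0, 1e-3)   # 2.5 m^2/s
print(nu_full / nu_half)  # 4.0
```

The quadratic dependence is why a modest reduction in mixing length markedly reduces sub-grid smoothing, allowing more small, intense storms to survive.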
Finally, by tracking storms in both the model and the observations, we find that simulated storms tend to progress through their life cycle too slowly. The sensitivity of this behavior to resolution and to the mixing-length specification will be reported in the talk.
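The tracking step can be sketched as overlap matching between storm cell sets in successive frames. This is a generic approach offered for illustration; the tracking algorithm actually used in the study may differ.

```python
# Generic overlap-based storm tracking sketch (an assumed approach, not
# necessarily the study's tracker): a storm at time t+1 continues the
# time-t storm it overlaps most; with no overlap it counts as new.
def match_storms(prev_storms, next_storms):
    """Return (prev_index or None, next_index) links, pairing each storm
    in next_storms with the prev storm sharing the most grid cells."""
    links = []
    for j, cells in enumerate(next_storms):
        best, best_overlap = None, 0
        for i, prev_cells in enumerate(prev_storms):
            overlap = len(set(cells) & set(prev_cells))
            if overlap > best_overlap:
                best, best_overlap = i, overlap
        links.append((best, j))
    return links

prev = [[(1, 1), (1, 2)], [(4, 4)]]  # two storms at time t
nxt = [[(1, 2), (1, 3)], [(0, 5)]]   # one advected storm, one new storm
print(match_storms(prev, nxt))  # [(0, 0), (None, 1)]
```

Chaining such links across frames yields storm lifetimes, from which lifecycle progression rates in model and observations can be compared.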