Tuesday, 24 January 2017
4E (Washington State Convention Center)
Frederic Fabry, McGill University, Montreal, QC, Canada
Common wisdom suggests that an atmospheric phenomenon of dimension Δx can be reasonably well resolved in observations or models only if it is observed or modeled at a resolution of Δx/5 or better. This implies that to minimally constrain the atmospheric fields in a thunderstorm cell measuring (10 km)³ and lasting 30 minutes for prediction purposes, 600+ observations or retrieved values (one every 2 km and 6 min) are needed for each of pressure, temperature, humidity, winds, clouds, and precipitation. To forecast phenomena of sub-storm size such as tornadoes or powerful downdrafts, even more is required. While radar provides such a data density for the end products of the storm process, namely precipitation and one component of target velocity, to the best of my knowledge there has yet to be a storm for which the quantities that dictate storm evolution (pressure, temperature, humidity, and clouds) have been measured with a data density approaching these minimum requirements.
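To make the arithmetic behind the "600+" figure explicit, the following is an illustrative back-of-the-envelope sketch (the script and its variable names are not part of the abstract; it simply applies the Δx/5 sampling heuristic stated above to a (10 km)³, 30-minute storm):

```python
# Illustrative calculation of the sampling requirement implied by the
# "resolve a feature of size dx only at dx/5 or better" rule of thumb.

storm_size_km = 10.0        # storm dimension along each axis
storm_lifetime_min = 30.0   # storm lifetime
dx_km = storm_size_km / 5   # required spatial sampling: 2 km
dt_min = storm_lifetime_min / 5  # required temporal sampling: 6 min

spatial_points = (storm_size_km / dx_km) ** 3    # 5^3 = 125 points
temporal_points = storm_lifetime_min / dt_min    # 5 time slices

samples_per_field = spatial_points * temporal_points  # 625, i.e. "600+"
fields = ["pressure", "temperature", "humidity", "winds", "clouds", "precipitation"]

print(f"Samples per field: {samples_per_field:.0f}")
print(f"Total for {len(fields)} fields: {samples_per_field * len(fields):.0f}")
```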
Proper prediction of storms hence requires a considerable expansion of the observation enterprise, in parallel with much better approaches to constrain unobserved quantities using the very few observed ones at our disposal. Only if we succeed in constraining atmospheric fields with such a density can we hope to predict storms numerically and markedly increase the lead times of the warnings that save lives and property. Otherwise, we will remain limited to relying on mesoscale forecasts that cannot capture the specificity of a particular threatening storm, and on empirical approaches based on existing storm signatures observed by radar or other tools, approaches whose lead times are intrinsically much shorter than the lifetime of storms. The physical, technical, and economic plausibility of such a plan will be discussed.