180 Killing Butterflies - How Meteorologists Get Rid of Undesirable Short Waves on Numerical Models

Monday, 23 January 2017
4E (Washington State Convention Center )
Isimar de Azevedo Santos, Universidade Estadual do Norte Fluminense Darcy Ribeiro, Macaé, Brazil; and J. Buchmann and N. S. Ferreira

Almost one hundred years ago, Richardson attempted a "numerical weather prediction" by solving the Navier-Stokes and mass-continuity equations (including the effect of the Earth's rotation), together with the first law of thermodynamics (under the ideal gas law), as a full set of prognostic equations describing how wind, pressure, density and temperature change in space and time in the atmosphere. In fact, Richardson's effort ended in total frustration because of two factors unknown at that time: (a) in applying finite-difference methods to discretize and solve the derivatives of his model, Richardson did not know that, for the numerical solution to be stable, a signal must travel no more than one grid length per time step (the CFL criterion); (b) Richardson inadvertently introduced noise into the solution through unbalanced pressure and density fields assimilated as the initial condition and, worse, the nonlinear character of the model's equations spuriously amplified that noise, totally disfiguring the forecast fields.
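The CFL criterion mentioned above can be illustrated with a short sketch. The wave speed and grid spacing below are illustrative values chosen for this example, not Richardson's:

```python
def cfl_max_timestep(wave_speed_ms: float, grid_spacing_m: float) -> float:
    """Largest stable time step (s) for a simple explicit advection scheme:
    a signal must not cross more than one grid length per step,
    i.e. c * dt / dx <= 1, so dt <= dx / c."""
    return grid_spacing_m / wave_speed_ms

# Example: fast gravity waves (~300 m/s) on a 200 km grid.
dt_max = cfl_max_timestep(300.0, 200_000.0)
print(f"Maximum stable time step: {dt_max:.0f} s")  # ~667 s, about 11 minutes
```

Halving the grid spacing halves the allowed time step, which is one reason higher-resolution models are so much more expensive to run.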

Linear models used in the past to simulate and predict atmospheric behavior exhibited regular motions in space and time; that is, well-behaved motions represented by continuous functions. Nonlinear models used today, although they represent the natural behavior of the atmosphere better, present sharp transitions even when forced by steady boundary conditions. Those unexpected transitions normally arise from nonlinear interactions among the solutions of these models during the integration phase. As Lorenz (1963) discovered, very small changes in some parameters can cause great differences in the results, a behavior Lorenz called chaos. The spectral response of a nonlinear system to oscillatory external forcing usually exhibits frequencies not present in the forcing, along with phase and frequency coupling, synchronization and other signatures of nonlinearity, which deteriorate the solutions after just a few days of model integration. Today the concern that really deserves attention is the sensitivity of modern numerical models to uncertainties in the initial conditions. Thanks to Lorenz's original discovery, it is now known that there are limits, or rupture points, on long-term prediction as a result of the chaotic character of nonlinear models. Although there are no "butterflies" or "tornadoes" in atmospheric models, Lorenz's discovery and its implications are today of central importance in efforts to model the Earth's climate.
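Lorenz's sensitivity to initial conditions is easy to reproduce. The sketch below integrates his 1963 convection model with the standard parameter values from two states differing by one part in 10^8, using a plain forward-Euler step for brevity (a real integration would use a higher-order scheme):

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # a "butterfly-sized" perturbation

for _ in range(2000):                # integrate 20 time units
    a, b = lorenz_step(a), lorenz_step(b)

# The initially negligible separation has grown by many orders of magnitude.
print("separation after 20 time units:", np.linalg.norm(a - b))
```

The two trajectories stay indistinguishable for a while and then diverge exponentially, exactly the behavior that limits deterministic forecast range.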

In the early days of weather forecasting, predictions started from the "analysis" of what were called "synoptic maps," prepared subjectively by experienced meteorologists who kept in mind important physical rules, such as geostrophy, and conceptual models of fronts and extratropical cyclones while drawing the isoline fields of those analyses. Today, several forms of interpolation of synoptic information have been incorporated into the analyses through data assimilation techniques based on the theory of optimal control. The construction of a set of atmospheric and surface data (that is, the analysis to be used as the initial condition in numerical weather prediction models) is treated as a Bayesian inversion problem, using observations, prior information from short-range forecasts, and their uncertainties as constraints on the desired simulations. These calculations, which involve a global minimization, are performed in four dimensions to produce a set of initial conditions that is physically consistent in space and time. In parallel with these developments in computational techniques, satellite data have come into increasing use, combining weather forecast models with computationally efficient radiative transfer models and a much-refined characterization of short-range forecasts. Thanks to better use of unconventional observations, together with significant improvements in physical parameterizations, numerical weather prediction has reached an excellent level of quality, which the public clearly perceives today.
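The Bayesian blending at the heart of data assimilation can be reduced to a single scalar for illustration. The temperatures and error variances below are made-up numbers; operational systems solve the analogous minimization over millions of variables in four dimensions:

```python
def scalar_analysis(xb, var_b, yo, var_o):
    """Minimum-variance blend of a background forecast xb and an
    observation yo, each weighted by its error variance. This is the
    scalar form of the update used in optimal interpolation / Kalman
    filtering."""
    k = var_b / (var_b + var_o)          # gain: trust the obs more when var_b is large
    xa = xb + k * (yo - xb)              # analysis value
    var_a = (1.0 - k) * var_b            # analysis error variance (reduced)
    return xa, var_a

# Background forecast: 15.0 C (error variance 4); observation: 17.0 C (variance 1).
xa, var_a = scalar_analysis(15.0, 4.0, 17.0, 1.0)
print(f"{xa:.2f} {var_a:.2f}")  # 16.60 0.80
```

The analysis lands closer to the observation because the background is the less certain of the two, and its error variance is smaller than either input's, which is why cycling short-range forecasts through repeated analyses steadily improves the initial state.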

As has been seen, the butterfly effect represents a real concern because chaos appears whenever weather forecasts are run on numerical models, although there is no evidence in nature that any butterfly could cause a tornado, as is popularly believed. Let us consider that the butterflies are actually any short waves, and that the methodologies used to integrate atmospheric numerical models can kill these short waves or at least prevent them from growing. Since the meteorological community began using nonlinear models to simulate and predict atmospheric behavior (like the convection model Lorenz was using when he discovered the effects of chaos), and thereby included the dangerous advective terms, sustained effort has been expended to minimize the natural amplification of those short waves caused by the chaos that characterizes such models. Proficient forecasters know that these short waves can be realistic, arising from observations made near clouds and storms, or fictitious, resulting from observational errors or imbalanced initial conditions. They also know that those short waves are dangerous at all times, because such small-scale disturbances can easily be amplified by the nonlinearity of the models. In the present context, the "butterflies" are any short waves in the models, and when techniques are used to eliminate those perturbations, we say the butterflies are killed, or at least kept under control to prevent their undesirable growth.
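One classical way to kill the shortest waves is a smoothing filter. The sketch below applies a simple 1-2-1 (Shapiro-type) smoother on an assumed periodic one-dimensional grid; its response is exactly zero for the two-grid-length wave, the shortest "butterfly" a grid can carry, while well-resolved waves pass almost unchanged:

```python
import numpy as np

def smooth_121(field):
    """One pass of the 1-2-1 smoother on a periodic domain.
    Its spectral response, 0.5 * (1 + cos(k * dx)), vanishes
    for the 2*dx wave and stays near 1 for long waves."""
    return 0.25 * np.roll(field, 1) + 0.5 * field + 0.25 * np.roll(field, -1)

n = 16
x = np.arange(n)
long_wave = np.cos(2 * np.pi * x / n)   # wavelength of 16 grid lengths
short_wave = (-1.0) ** x                # the 2*dx wave: +1, -1, +1, ...

print(np.max(np.abs(smooth_121(short_wave))))  # 0.0: the butterfly is killed
print(np.max(np.abs(smooth_121(long_wave))))   # ~0.96: barely damped
```

Operational models combine filters like this with implicit diffusion in the numerical schemes and with balanced initialization, so that fictitious short waves are suppressed while the meteorologically meaningful scales survive.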
