Challenges and Opportunities in Modeling of the Global Atmosphere
Progress in atmospheric modeling has always been tied to advances in computer technology. The unprecedented computing power now available allows operational application of horizontal resolutions on the order of 10 km on the global scale. However, current parallel computer architectures require that some widely adopted modeling paradigms be reconsidered in order to utilize as much of the power of parallel processing as possible.
For example, with parallel processing using distributed memory, each core may work on a small subdomain of the full integration domain and exchange only a few rows of halo data with the neighboring cores in order to specify the lateral boundary conditions it needs. This scenario promises good scaling because communication is restricted to the halo exchanges between neighboring cores. Note, however, that this simplified scenario implies that all discretization algorithms used in the model are Eulerian and horizontally local. Unfortunately, such algorithms exclude those based on spectral representation and semi-implicit time differencing, which require much more communication and therefore do not scale as well. Yet without the semi-implicit scheme, the semi-Lagrangian approach may become computationally prohibitively expensive.
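The halo-exchange pattern described above can be sketched in a few lines. The following is a minimal single-process illustration in which NumPy arrays stand in for per-core subdomains of a periodic one-dimensional field; in a real distributed-memory model each assignment across subdomains would be an MPI message, and the subdomain size and halo width chosen here are arbitrary.

```python
import numpy as np

def exchange_halos(subdomains, halo=1):
    """Fill each subdomain's halo cells from the interior of its periodic
    neighbors (stands in for an MPI halo exchange between cores)."""
    n = len(subdomains)
    for i, sub in enumerate(subdomains):
        left = subdomains[(i - 1) % n]
        right = subdomains[(i + 1) % n]
        sub[:halo] = left[-2 * halo:-halo]   # copy from left neighbor's interior
        sub[-halo:] = right[halo:2 * halo]   # copy from right neighbor's interior

# decompose a periodic 1-D field of 16 points into 4 subdomains,
# each padded with one (initially empty) halo cell on either side
field = np.arange(16.0)
halo = 1
subs = [np.concatenate(([0.0], field[i:i + 4], [0.0])) for i in range(0, 16, 4)]
exchange_halos(subs, halo)

# after the exchange, each "core" can apply a horizontally local stencil,
# e.g. a centered difference, without any further communication
d = (subs[1][2:] - subs[1][:-2]) / 2.0   # centered difference on subdomain 1
```

Because only the halo cells cross subdomain boundaries, the communication volume per core stays fixed as more cores are added, which is what makes this pattern scale well.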
Still, modelers designing limited-area models retain a wide freedom of choice. On the global scale, however, the treatment of spherical geometry remains an issue. The straightforward latitude-longitude grid with spatially local, explicit-in-time differencing was a natural early choice in the sixties and has remained in use ever since. The problem with this approach is that the grid size in the longitudinal direction tends to zero as the poles are approached, due to the convergence of the meridians. Thus, in addition to having unnecessarily high resolution near the poles, polar filtering has to be applied in order to keep the integration stable with a time step of reasonable size. However, polar filtering based on the conventional fast Fourier transform requires transpositions that involve extra communication and thus limits scaling.
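The convergence of the meridians, and its consequence for the explicit time step, can be quantified directly: the zonal spacing on a regular lat-lon grid is dx = a cos(lat) dlon, and the CFL condition dt <= dx / c then ties the stable time step to the smallest dx on the grid. The numbers below (Earth radius, a 1-degree grid, a gravity-wave speed of 300 m/s) are illustrative assumptions, not values taken from any particular model.

```python
import math

a = 6.371e6               # Earth radius [m]
dlon = math.radians(1.0)  # 1-degree longitudinal grid spacing (assumed)
c = 300.0                 # fast gravity-wave speed [m/s] (assumed)

# zonal grid spacing dx = a * cos(lat) * dlon shrinks toward the poles
for lat_deg in (0.0, 60.0, 89.0):
    dx = a * math.cos(math.radians(lat_deg)) * dlon
    print(f"lat {lat_deg:5.1f} deg: dx = {dx / 1000.0:8.2f} km")

# explicit CFL limit dt <= dx / c: without polar filtering, the row
# nearest the pole dictates the time step for the whole grid
dt_equator = a * dlon / c
dt_pole = a * math.cos(math.radians(89.0)) * dlon / c
```

At 89 degrees latitude the zonal spacing is already below 2 km on a 1-degree grid, so the unfiltered stable time step is almost two orders of magnitude shorter than the one the equatorial spacing would allow, which is precisely what the polar filter is meant to avoid.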
The discovery of the spectral transform method in the early seventies and, later, the development of semi-implicit semi-Lagrangian schemes in the eighties opened the way for wide application of the spectral representation in global models. With some variations, these techniques are still used operationally and for research at most major centers. However, in addition to spectral ringing, two-dimensional non-locality is inherent in the spectral representation and implicit time differencing, and it is often considered a bottleneck inhibiting scaling on large numbers of cores. In this respect, the lat-lon grid with the fast Fourier transform, which is non-local in only one dimension, represents a significant step in the right direction.
Other grid topologies with reduced variability of the grid distances were also considered at an early stage. Among these were, for example, the cubed-sphere and the hexagonal/pentagonal ("soccer ball") grids pioneered in the sixties. These grids suffered from inhomogeneities that could result in the development of large-scale (wavenumber 4 and 5) fictitious solutions ("grid imprinting") with significant amplitudes. Because their scales are comparable to those of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Consequently, these types of grids did not gain much popularity in the past. However, interest in them has been revived by the hope that their problems can be alleviated or eliminated by the resolutions that are now becoming affordable and by more refined grid definitions. It remains to be seen whether this hope will fully materialize.
Another challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Bearing in mind the sensitivity of extended deterministic forecasts to small disturbances, we may need global non-hydrostatic models sooner than we think, and that is likely to bring additional problems with the computational efficiency of non-local schemes.
As a recent example of a multi-scale model, consider the unified Nonhydrostatic Multiscale Model on the Arakawa B grid (NMMB), which is being developed for spatial scales ranging from meso to global at the National Centers for Environmental Prediction (NCEP) as a part of the new NOAA Environmental Modeling System (NEMS). The model follows the general modeling philosophy of NCEP's older WRF NMM grid-point regional dynamical core. The nonhydrostatic dynamics were designed so as to avoid overspecification. The global version has been run mostly on the latitude-longitude grid, but work on an advanced cubed-sphere option is under way as well. The regional version uses a rotated latitude-longitude grid to reduce the variation of the grid size.
The model formulation has been successfully tested on various scales. The regional version of the NMMB replaced the WRF NMM as NCEP's main operational short-range forecasting model for North America (NAM), as well as in a number of high-resolution nested runs. Work on upgrading NCEP's operational "Hurricane WRF" from the WRF NMM to the NMMB dynamics is under way. The latitude-longitude global NMMB has been run in parallel for testing. The system is initialized and verified using the spectral analyses of NCEP's Global Forecast System (GFS). Even so, the skill of the medium-range forecasts produced by the NMMB was comparable to that of other major medium-range forecasting systems. As an illustration of the model's performance, the one-year average global anomaly correlation coefficient at 500 hPa of the NMMB (red line) and the GFS (black line) for the 00 Z runs is shown in Fig. 1 as a function of forecast time.
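For readers unfamiliar with the verification score mentioned above, the following is a minimal sketch of one common (centered) definition of the anomaly correlation coefficient: the spatial correlation between forecast and analysis anomalies, both taken relative to a climatology. Operational scores such as the one in Fig. 1 also apply latitude-dependent area weighting, which is omitted here; the synthetic data below are purely illustrative.

```python
import numpy as np

def anomaly_correlation(forecast, analysis, climatology):
    """Centered anomaly correlation between a forecast field and the
    verifying analysis, with anomalies relative to a climatology.
    Area weighting (used in operational verification) is omitted."""
    fa = forecast - climatology          # forecast anomaly
    va = analysis - climatology          # verifying (analysis) anomaly
    fa = fa - fa.mean()                  # remove the mean anomaly
    va = va - va.mean()
    return float(np.sum(fa * va) /
                 np.sqrt(np.sum(fa ** 2) * np.sum(va ** 2)))

# synthetic illustration: a forecast that reproduces the analysis
# anomalies exactly scores 1 by construction
rng = np.random.default_rng(0)
clim = rng.standard_normal(100)          # stand-in climatology
anal = clim + rng.standard_normal(100)   # stand-in verifying analysis
acc = anomaly_correlation(anal.copy(), anal, clim)
```

A perfect forecast gives an ACC of 1, and in medium-range verification the score decays with forecast time, which is the behavior plotted in Fig. 1.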