Monday, 29 January 2024: 5:45 PM
345/346 (The Baltimore Convention Center)
Deep-learning global weather-prediction models have emerged in the past few years and evolved rapidly, such that they now produce forecasts competitive with the best operational systems in the world. Why these models exhibit this excellent performance is unclear, as is whether they have "learned" atmospheric dynamics or merely perform statistical pattern matching. Answering this question is crucial to establishing the utility of these models as tools for testing scientific hypotheses. Here we subject one such model, Pangu-Weather, to a set of four classical dynamical tests that do not resemble any of the model training data. One test involves perturbing the model dynamics and evaluating the solution that develops from a steady state. The remaining three tests involve initial-value problems aimed at simulating fast processes in one case and synoptic-scale phenomena in the other two. In all cases, Pangu-Weather produces solutions that closely resemble those from physical models, and we conclude that the model has "learned" real physics rather than mere statistical pattern matching. Although we do not perform identical experiments with a physics-based model, the results presented here motivate such a comparison in future research.
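
A minimal sketch of how one such initial-value test might be posed is given below, assuming the publicly released Pangu-Weather 24-hour ONNX model and the input layout of its reference inference script (an upper-air tensor of shape (5, 13, 721, 1440) and a surface tensor of shape (4, 721, 1440)). The base-state files, variable and level indices, and perturbation parameters are hypothetical placeholders, not the authors' actual experimental configuration.

```python
# Hypothetical sketch: integrate Pangu-Weather forward from a perturbed base state.
# Assumes the released 24-hour ONNX model and its reference input layout;
# file names, variable ordering, and the perturbation itself are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("pangu_weather_24.onnx",
                               providers=["CPUExecutionProvider"])

# Base-state fields (hypothetical files): 5 upper-air variables on 13 pressure
# levels and 4 surface variables, both on the 0.25-degree 721 x 1440 grid.
upper = np.load("base_upper.npy").astype(np.float32)      # (5, 13, 721, 1440)
surface = np.load("base_surface.npy").astype(np.float32)  # (4, 721, 1440)

# Add a localized temperature bump to pose the initial-value problem
# (the variable index 2 = T and level index 7 ~ 500 hPa are assumptions).
lat = np.linspace(90.0, -90.0, 721)[:, None]
lon = np.arange(0.0, 360.0, 0.25)[None, :]
bump = 1.0 * np.exp(-(((lat - 45.0) / 5.0) ** 2 + ((lon - 180.0) / 10.0) ** 2))
upper[2, 7] += bump.astype(np.float32)

# Integrate forward by repeatedly applying the 24-hour model to its own output.
states = [(upper, surface)]
for step in range(10):
    upper, surface = session.run(None, {"input": upper,
                                        "input_surface": surface})
    states.append((upper, surface))
```

The sequence of states produced this way could then be compared against the evolution of the same perturbation in a physics-based model, which is the style of comparison the abstract describes.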

