900 Analyzing and Exploring Training Recipes for Large-Scale Transformer-Based Weather Prediction

Thursday, 1 February 2024
Hall E (The Baltimore Convention Center)
Jared Daniel Willard, LBNL, Berkeley, CA; and P. Harrington, S. Subramanian, A. Mahesh, T. A. O'Brien, and W. D. Collins
Manuscript (320.2 kB)

The rapid rise of deep learning (DL) in numerical weather prediction (NWP) has led to a proliferation of models that forecast atmospheric variables with skill comparable or superior to that of traditional physics-based NWP. However, among these leading DL models, the training settings and architectures used vary widely. Further, the lack of thorough ablation studies makes it hard to discern which components are most critical to success. In this work, we show that it is possible to attain high forecast skill even with relatively off-the-shelf architectures, simple training procedures, and moderate compute budgets. Specifically, we train a minimally modified SwinV2 transformer on ERA5 data and find that it attains forecast skill superior to that of IFS, despite using considerably less compute than some competing models. We present ablations on key aspects of the training pipeline, including the loss function, data normalization, and multi-step fine-tuning, to investigate their effects. We also evaluate the model with metrics beyond the typical ACC and RMSE, and investigate how performance scales with model and dataset size.
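For readers unfamiliar with the evaluation and fine-tuning ingredients mentioned above, the sketch below illustrates common formulations of latitude-weighted RMSE and ACC on a regular lat-lon grid such as ERA5's, along with a multi-step fine-tuning loss in which the model is rolled out autoregressively over several forecast steps. This is a minimal illustration assuming PyTorch; all function and argument names are hypothetical and are not taken from the paper's code.

```python
import torch

def latitude_weights(lats_deg: torch.Tensor) -> torch.Tensor:
    """Cosine-of-latitude weights, normalized to mean 1 over the grid rows."""
    w = torch.cos(torch.deg2rad(lats_deg))
    return w / w.mean()

def lat_weighted_rmse(pred, target, lats_deg):
    """RMSE over (batch, channel, lat, lon), weighted by cos(latitude)."""
    w = latitude_weights(lats_deg).view(1, 1, -1, 1)
    return torch.sqrt((w * (pred - target) ** 2).mean(dim=(-2, -1))).mean()

def lat_weighted_acc(pred, target, climatology, lats_deg):
    """Anomaly correlation coefficient against a precomputed climatology."""
    w = latitude_weights(lats_deg).view(1, 1, -1, 1)
    ap, at = pred - climatology, target - climatology   # anomalies
    num = (w * ap * at).sum(dim=(-2, -1))
    den = torch.sqrt((w * ap ** 2).sum(dim=(-2, -1)) *
                     (w * at ** 2).sum(dim=(-2, -1)))
    return (num / den).mean()

def multistep_loss(model, x0, targets, lats_deg):
    """Average latitude-weighted MSE over an autoregressive rollout.

    `model` maps the state at time t to the state at t + dt, and `targets[k]`
    is the reanalysis state k + 1 steps ahead of the initial condition `x0`.
    """
    w = latitude_weights(lats_deg).view(1, 1, -1, 1)
    loss, state = 0.0, x0
    for tgt in targets:
        state = model(state)                       # one forecast step
        loss = loss + (w * (state - tgt) ** 2).mean()
    return loss / len(targets)
```

Multi-step fine-tuning of this kind penalizes error accumulation over the rollout rather than only the single-step prediction, which is one of the training choices the abstract's ablations examine.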