Thursday, 1 February 2024
Hall E (The Baltimore Convention Center)
Recent deep learning-based weather prediction models have overcome some of the prominent challenges faced by traditional Numerical Weather Prediction (NWP) methods. For example, existing data-driven models such as FourCastNet and Pangu-Weather have outperformed NWP models in computation time, cost, and accuracy. However, one major concern with these models is that they often produce highly skilled but “fuzzy” forecasts that, unlike those of a typical NWP model, are not physically realistic. To alleviate such concerns, our research examines the sharpness of data-driven predictions. Sharpness, a term traditionally used in photography, refers to how clearly details are rendered in an image and, importantly, is easily observable by end-users. Evaluation metrics for sharpness have been well studied in image processing, and the sharpness of images generated by deep learning models has received only brief attention. However, sharpness in the context of weather predictions, where outputs are not defined by pixel color, remains largely unexplored.
In this work, we build on Ebert-Uphoff et al. (2024), who demonstrated how various metrics can be used to measure the sharpness of weather forecasts. Our work uses these metrics to examine the sharpness of forecasts for a synthetic radar product as a function of the training loss and AI model architecture. These methods, when combined with preexisting deep learning-based weather prediction models, should lead to results more readily accepted by end-users.
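To illustrate how sharpness might be quantified for a gridded forecast field, the sketch below computes two common image-sharpness proxies (mean gradient magnitude and variance of the Laplacian) and compares a field against a Gaussian-smoothed, "fuzzy" version of itself. These particular metrics, the synthetic input field, and the grid size are illustrative assumptions for this sketch, not necessarily the metrics or data used in the study.

    # Minimal sketch of sharpness scores for a 2-D gridded forecast field.
    # Metric choices here (mean gradient magnitude, variance of the Laplacian)
    # are illustrative assumptions.
    import numpy as np
    from scipy import ndimage


    def gradient_sharpness(field: np.ndarray) -> float:
        """Mean gradient magnitude of a 2-D field; larger values indicate sharper fields."""
        gy, gx = np.gradient(field)
        return float(np.mean(np.hypot(gx, gy)))


    def laplacian_sharpness(field: np.ndarray) -> float:
        """Variance of the Laplacian, a common image-sharpness proxy."""
        return float(ndimage.laplace(field).var())


    if __name__ == "__main__":
        # Synthetic example: a sharp field vs. a smoothed ("fuzzy") version of it.
        rng = np.random.default_rng(0)
        sharp = rng.random((256, 256))
        fuzzy = ndimage.gaussian_filter(sharp, sigma=3)

        print("gradient sharpness:", gradient_sharpness(sharp), gradient_sharpness(fuzzy))
        print("laplacian sharpness:", laplacian_sharpness(sharp), laplacian_sharpness(fuzzy))

Under these assumed metrics, the smoothed field scores markedly lower than the original, which matches the intuitive notion of a "fuzzy" forecast losing fine-scale detail.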

