7B.2 Using Deep Learning to Detect Atmospheric Rivers across Climate Datasets and Resolutions

Wednesday, 15 January 2020: 8:45 AM
156A (Boston Convention and Exhibition Center)
Ankur Mahesh, Lawrence Berkeley National Lab, Berkeley, CA; ClimateAi, San Francisco, CA; and T. A. O'Brien, K. Kashinath, M. Mudigonda, M. Prabhat, C. A. Shields, J. J. Rutz, L. R. Leung, A. E. Payne, F. M. Ralph, M. Wehner, and W. D. Collins

Atmospheric rivers (ARs) are weather phenomena that can alleviate drought or cause intense precipitation. They are narrow filaments of moisture that transport large amounts of water vapor, primarily in association with extratropical cyclones. Because of their significance for human systems, determining trends in these events' intensity and frequency in historical and future data is of crucial importance. The Atmospheric River Tracking Method Intercomparison Project (ARTMIP) has developed a catalogue of AR labels in the MERRA-2 reanalysis dataset, and work is underway to develop catalogues for climate change simulations. These labels are based on algorithms with parameters (e.g., thresholds) that are set by expert opinion, which may not be appropriate for other datasets that operate at different resolutions or under different climate change scenarios. Moreover, exploring uncertainty in AR detection through ARTMIP requires a large amount of coordination and human effort.

We propose machine learning methods as a way to detect ARs in a wide variety of observations and atmospheric model outputs. Using the ARTMIP catalogue as training data, we investigate the ability of two machine learning techniques to emulate the average behavior of AR tracking algorithms on multiple climate datasets. In a previous presentation on this research, we demonstrated that deep learning methods can accurately predict the spatiotemporal pattern of the proportion of ARTMIP algorithms that detect ARs. This initial model had a major drawback: the quality of its output degraded substantially when it was applied to datasets on which it had not been trained. We propose and demonstrate two methods to address this drawback. First, to detect ARs across different spatial resolutions, we explore perceptual loss functions, which assess the quality of neural network predictions without using grid-cell-level comparisons between predictions and truth.
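The idea behind a perceptual loss can be sketched as follows: instead of comparing predicted and true AR masks cell by cell, both fields are passed through a feature extractor and compared in feature space, so that the loss is less sensitive to grid resolution. The abstract does not specify an implementation; the toy extractor below (average pooling plus spatial gradients, standing in for the activations of a pretrained network) and the function names are illustrative assumptions.

```python
import numpy as np

def avg_pool2d(x, k=2):
    # Downsample a 2-D field by k x k average pooling.
    h, w = x.shape
    return x[:h // k * k, :w // k * k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def features(x):
    # Toy "feature extractor": a pooled map plus horizontal/vertical gradients.
    # In practice this would be the activations of a pretrained CNN.
    p = avg_pool2d(x)
    return [p, np.diff(p, axis=1), np.diff(p, axis=0)]

def perceptual_loss(pred, truth):
    # Compare fields in feature space rather than grid-cell space.
    return sum(np.mean((fp - ft) ** 2)
               for fp, ft in zip(features(pred), features(truth)))
```

Because the comparison happens after pooling, small grid-scale discrepancies between prediction and truth are penalized less than in a plain per-cell mean squared error.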
Second, style transfer techniques are traditionally used to convert a photorealistic image to the style of a painting, such as Vincent Van Gogh's Starry Night. We explore style transfer to create a neural network that identifies ARs in both reanalysis and climate model output, despite the differing spatial characteristics of fields across datasets that arise from model differences.
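A common building block of style transfer, which may clarify the idea, is the Gram-matrix style loss of Gatys et al.: the "style" of a field is summarized by the second-order statistics of its feature activations, independent of their exact spatial layout. The abstract does not state how style transfer is implemented here; the sketch below is a minimal illustration of that standard loss, with assumed shapes and names.

```python
import numpy as np

def gram_matrix(feats):
    # feats: (channels, height * width) flattened feature activations.
    # The Gram matrix captures correlations between feature channels,
    # i.e., texture/"style" independent of spatial position.
    c, n = feats.shape
    return feats @ feats.T / n

def style_loss(feats_a, feats_b):
    # Penalize differences in second-order feature statistics between
    # two datasets (e.g., reanalysis vs. climate model fields).
    ga, gb = gram_matrix(feats_a), gram_matrix(feats_b)
    return np.mean((ga - gb) ** 2)
```

Minimizing such a loss between reanalysis and climate-model feature statistics is one way a single network could be encouraged to treat fields from both sources consistently.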