Monday, 29 January 2024: 2:15 PM
345/346 (The Baltimore Convention Center)
Explainable artificial intelligence (XAI) methods are gaining popularity as a means to explain the decision-making process of deep learning models. Shapley additive explanations (SHAP), an XAI method, fairly distributes a model prediction among the input features: it assesses each feature's contribution by replacing features in the input with a baseline and evaluating the impact on the prediction. Recent findings indicate that XAI results are strongly influenced by the chosen baseline, and that different baselines can be used to answer different scientific questions. Here, we utilize a convolutional neural network trained for nowcasting radar-based convective initiation (CI) in the summers of 2020 and 2021 using GOES-R infrared observations as predictors. Experiments with dry and moist baselines reveal the scientific insights underlying CI in dry and moist environments, respectively. Relative to a moist environmental baseline, lower-tropospheric moisture and cloud cover are the most important features behind CI, whereas relative to a dry baseline, mid-tropospheric moisture and cloud depth are the dominant features. This study underscores XAI's potential to support a nuanced understanding of deep learning models and to offer precise insights into the underlying mechanisms across diverse scenarios.
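The baseline-substitution idea in the abstract can be illustrated with a minimal sketch. This is not the study's actual SHAP-on-CNN pipeline; it is an occlusion-style, one-feature-at-a-time simplification with a hypothetical toy linear model standing in for the CI network (for a linear model this substitution recovers the exact Shapley values). The feature weights, baselines, and `predict` function below are all illustrative assumptions.

```python
import numpy as np

def baseline_attribution(predict, x, baseline):
    """Attribute a prediction to features by replacing each feature
    with its baseline value and recording the prediction change.
    A one-feature simplification of the Shapley substitution scheme."""
    base_pred = predict(x)
    attributions = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] = baseline[i]  # swap in the baseline value
        attributions[i] = base_pred - predict(x_pert)
    return attributions

# Hypothetical linear "CI model" over three stand-in features.
weights = np.array([0.5, 0.3, 0.2])  # illustrative weights, not fitted
predict = lambda x: float(weights @ x)

x = np.array([1.0, 1.0, 1.0])     # observed pre-CI environment (toy values)
dry = np.array([0.0, 0.0, 0.0])   # dry baseline: all features absent
moist = np.array([1.0, 0.5, 0.0]) # moist baseline: moisture already present

print(baseline_attribution(predict, x, dry))    # contributions vs. dry
print(baseline_attribution(predict, x, moist))  # contributions vs. moist
```

Against the dry baseline every feature contributes its full weighted value, while against the moist baseline the features the baseline already contains contribute little or nothing, which is the mechanism by which different baselines answer different scientific questions.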

