Tuesday, 30 January 2024: 8:45 AM
338 (The Baltimore Convention Center)
Mounir Chrit, University of North Dakota (UND), Grand Forks, ND; and M. Majdi
Artificial Intelligence (AI) and machine learning methods are increasingly used to provide weather predictions and to develop decision-support tools intended for operational use by the aerospace community. However, these models are “black boxes” that are difficult to explain and can be misleading because they are trained on finite amounts of data. It is therefore critical to develop trustworthy predictions. In addition, regulatory agencies increasingly require trustworthiness in AI systems used for decision-making. Two major components of trustworthy AI are uncertainty quantification and explainability. On the one hand, uncertainty awareness and decomposition improve our understanding of the limitations of models and of the data they were trained on, supporting better choices about model architecture and data collection; they also alert decision-makers to extrapolated predictions generated from out-of-distribution data. On the other hand, explainable AI (XAI) methods reveal which features drive predictions.
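The aleatoric/epistemic decomposition referred to above is commonly computed with the law of total variance over posterior samples of a Bayesian network: averaging the per-sample predicted noise gives the aleatoric part, and the variance of the per-sample means gives the epistemic part. A minimal NumPy sketch follows; the Gaussian-head posterior samples here are synthetic illustrations, not the authors' model or data.

```python
import numpy as np

def decompose_uncertainty(means, variances):
    """Split predictive uncertainty via the law of total variance.

    means, variances: arrays of shape (n_posterior_samples, n_points),
    the per-draw Gaussian mean and variance predicted by a Bayesian
    network (or ensemble member) at each test point.
    """
    aleatoric = variances.mean(axis=0)  # expected data noise
    epistemic = means.var(axis=0)       # disagreement across posterior draws
    total = aleatoric + epistemic       # total predictive variance
    return aleatoric, epistemic, total

# Toy example: 50 posterior draws at 3 test points (values are made up).
rng = np.random.default_rng(0)
means = rng.normal(10.0, 0.5, size=(50, 3))      # e.g. wind speed (m/s)
variances = rng.uniform(0.1, 0.3, size=(50, 3))  # per-draw noise estimates
aleo, epi, tot = decompose_uncertainty(means, variances)
```

High epistemic variance relative to aleatoric variance flags test points the model has effectively extrapolated to, where collecting more training data could help; high aleatoric variance indicates irreducible sensor or atmospheric noise.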
In this presentation, we will show how we applied a Bayesian Neural Network that accurately predicts wind and turbulence parameters (every 5 min, up to 6 hours ahead) along a predetermined Advanced Air Mobility airway over Chicago while providing information about prediction uncertainty. These predictions of winds and turbulence aloft are based on historical ground-based sensor data. We will focus on combining 1) uncertainty decomposition (into aleatoric and epistemic components) and 2) an explainable AI method, SHapley Additive exPlanations (SHAP), to globally understand the mapping between inputs and outputs. This presentation should give model developers and end-users guidance about the situations in which the model is expected to fail and whether gathering additional data samples in those situations would increase the model's confidence. We will also show which historical times have the greatest impact on predictions.
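SHAP attributes a prediction to input features as Shapley values: each feature's weighted average marginal contribution over all coalitions of the other features. For a linear model with features imputed from a background mean, the exact value has the closed form w_i (x_i − mean(x_i)). The sketch below is a brute-force illustration of that definition only; the feature vector and weights are hypothetical stand-ins for the sensor inputs, not the authors' setup (which would use the shap library on the trained network).

```python
import itertools
import math
import numpy as np

def shapley_values(predict, x, background):
    """Exact Shapley values by enumerating all feature coalitions.

    Features absent from a coalition are replaced by their background
    (training-set) mean -- a common interventional approximation.
    Exponential cost: only feasible for a handful of features.
    """
    n = len(x)
    base = background.mean(axis=0)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in itertools.combinations(others, size):
                # Shapley kernel weight for a coalition of this size.
                weight = (math.factorial(size) * math.factorial(n - size - 1)
                          / math.factorial(n))
                z = base.copy()
                z[list(subset)] = x[list(subset)]
                without_i = predict(z)  # coalition without feature i
                z[i] = x[i]
                with_i = predict(z)     # coalition with feature i added
                phi[i] += weight * (with_i - without_i)
    return phi

# Hypothetical linear surrogate over 3 sensor features.
w = np.array([2.0, -1.0, 0.5])
predict = lambda z: float(z @ w)
background = np.array([[1.0, 0.0, 2.0], [3.0, 2.0, 0.0]])  # "training" rows
x = np.array([4.0, 2.0, 3.0])                              # point to explain

phi = shapley_values(predict, x, background)
# Linear case: phi_i = w_i * (x_i - background_mean_i), i.e. [4., -1., 1.]
```

The attributions sum to the difference between the prediction at x and the prediction at the background mean, which is the efficiency property that makes SHAP summaries of input-output mappings internally consistent.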
