J10B.5 Forecaster Perceptions of Trustworthiness, Explainability, and Interpretability in the Context of AI-Derived Guidance

Wednesday, 31 January 2024: 11:45 AM
338 (The Baltimore Convention Center)
Mariana Goodall Cains, NCAR, Boulder, CO; and C. D. Wirz, J. L. Demuth, A. Bostrom, M. C. White, and J. T. Radford

Advances in meteorological artificial intelligence (AI) have outpaced our ability to fully understand the internal mechanisms of certain algorithms. This has prompted discussion about what it means for AI to be “explainable” (the degree to which AI functionality can be understood through post-hoc methods) or “interpretable” (the extent to which a person can understand a model’s functionality without supplementary techniques). These terms are used often in the literature but are conceptualized in different ways. Many researchers hypothesize that the explainability and interpretability of an AI model influence its trustworthiness, which itself is conceptualized in a variety of ways. Additionally, one goal of making AI algorithms and models explainable, interpretable, and trustworthy is to promote the use of AI-derived guidance by decision-makers, such as forecasters, who assess and manage hazardous weather risks. However, there is very limited understanding of how domain experts, as potential users of AI products, navigate or make sense of these concepts. To address this gap, we examine how National Weather Service forecasters, a target user group for meteorological AI products, perceive and understand the concepts of trustworthiness, interpretability, and explainability. We interviewed more than 25 forecasters and asked what the terms trustworthy, explainable, and interpretable meant to them in the context of AI-derived weather guidance. We systematically analyzed the data and synthesized forecasters’ perceptions, including how they align with and differ from the ways AI researchers and developers think about these terms.
Preliminary results suggest that users focus more on how AI-derived weather guidance translates into their decision-making and communication contexts, reflected in their emphasis on personal experience with a product, the AI user interface, and how the guidance would or would not factor into communication with other forecasters and with core partners (e.g., emergency managers, broadcast meteorologists). Conversely, academic and development representations in the AI literature focus on technical dimensions of the model, such as performance, technical understanding, and specific development techniques. In this presentation, we will elaborate on how forecasters conceptualize these ideas, how their conceptualizations do and do not align with AI developers’ thinking, and the implications of these results.