This work investigates how one explainable artificial intelligence (xAI) method, tree interpreter (TI), can provide context for predictions from the Colorado State University Machine Learning Probabilities (CSU-MLP) system: a tool that provides probabilistic severe weather guidance out to eight days for operational forecasters. TI is a package that disaggregates random forest-based probabilities into per-feature contributions, providing insight into how individual model inputs influence a final prediction on a point-by-point basis. Here, TI is used to extract feature contributions from approximately two years of CSU-MLP severe weather forecasts. Feature contributions from operational Global Ensemble Forecast System version 12 (GEFSv12) environmental fields (which serve as model predictors) are analyzed in aggregate over time and space for daily forecasts initialized from January 2021 through December 2022. Results focus on diurnal, seasonal, and spatial trends identified among the aggregated contributions for Day 2–4 forecasts. The patterns found among the aggregated feature contributions are largely consistent with conditions known to support severe weather, suggesting that the model’s statistical predictions align with physical expectations. For instance, tornado probabilities are often strongly dictated by contributions from LCL height, and contributions from surface-based CAPE and shear tend to play a dominant role in influencing probabilities across all severe hazards. An example case is also shown to demonstrate how TI can contextualize CSU-MLP probabilities for an individual forecast period. Collectively, this research argues that TI’s properties, such as its straightforward interpretability, make it a promising xAI tool for deciphering ML products in real-time forecasting settings.
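To make the TI decomposition described above concrete, the sketch below illustrates the core idea on a single decision tree: a prediction equals the root-node mean (the "bias") plus, for each split along the decision path, the change in node mean credited to the feature defining that split. The tree structure, feature names (`sbcape`, `shear`), and all numeric values are hypothetical illustrations, not CSU-MLP internals or the actual `treeinterpreter` implementation.

```python
def ti_decompose(node, x):
    """Return (prediction, bias, {feature: contribution}) for sample x.

    Walks the decision path of a tree (nested dicts) and credits the
    change in node mean at each split to the splitting feature, so that
    prediction == bias + sum(contributions.values()).
    """
    bias = node["value"]              # mean target value at the root
    contributions = {}
    while "feature" in node:          # descend until reaching a leaf
        feat = node["feature"]
        child = node["left"] if x[feat] <= node["threshold"] else node["right"]
        # attribute the shift in node mean to the feature that split here
        contributions[feat] = contributions.get(feat, 0.0) + child["value"] - node["value"]
        node = child
    return node["value"], bias, contributions


# Toy tree: severe weather probability split on CAPE, then shear
# (hypothetical thresholds and probabilities, for illustration only)
tree = {
    "feature": "sbcape", "threshold": 1000.0, "value": 0.15,
    "left": {"value": 0.05},
    "right": {
        "feature": "shear", "threshold": 20.0, "value": 0.35,
        "left": {"value": 0.20},
        "right": {"value": 0.55},
    },
}

pred, bias, contribs = ti_decompose(tree, {"sbcape": 1500.0, "shear": 25.0})
# TI identity: prediction = bias + sum of per-feature contributions
assert abs(pred - (bias + sum(contribs.values()))) < 1e-9
```

In a random forest, the same decomposition is averaged over all trees, which is how a final probability can be attributed to individual environmental predictors on a point-by-point basis.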

