The influence of grouping features on explainable artificial intelligence for a complex fog prediction deep learning model

Date

2022-04

Authors

Krell, Evan
Kamangir, Hamid
Friesand, Josh
Judge, Julianna
Collins, Waylon
King, Scott A.
Tissot, Philippe

Abstract

Advances in machine learning have enabled the modeling of complex nonlinear relationships. High-performance models increasingly rely on “black boxes” such as deep learning, for which it is impractical to determine why predictions are made. This limits users’ trust in the model and motivates the field of eXplainable Artificial Intelligence (XAI), which provides tools for understanding how models make decisions. XAI techniques are used here to explain FogNet, a complex model for predicting coastal fog whose input is a raster of 384 atmospheric variable channels. XAI techniques struggle with feature correlations and interactions, making it challenging to explain FogNet, whose data are highly correlated by design. For example, a group of 108 channels represents the lower-atmosphere thermodynamic profile. Forecasters use this gradient to predict fog, but it complicates XAI because of strong spatial and channel-wise autocorrelation. Grouping related features has been proposed as a way to improve XAI accuracy. Here, XAI techniques are applied with features grouped at multiple levels of granularity. The coarsest scheme divides the raster channels into five groups based on physical similarity; the second treats each individual channel as a feature; the finest uses superpixels within each channel. To analyze the sensitivity of explanations to the feature grouping used, the more granular outputs are aggregated into the coarser groups. This allows direct comparison of, for example, channel-wise explanations obtained when channels are the feature groups and when superpixels within those channels are the groups. The results indicate that the choice of feature grouping scheme influences the explanations, which can make interpretation of XAI results challenging. However, consistencies also emerge that provide confidence in certain aspects of the explanations. Combined with forecaster domain knowledge, the XAI outputs are used to generate hypotheses that drive the next phase of model development.

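The grouped analysis described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical example of group-level permutation feature importance, assuming a NumPy raster input of shape (samples, height, width, channels), a generic model with a predict method, and a user-supplied skill metric; the names and interfaces are assumptions for illustration, not FogNet's actual code.

```python
# Minimal sketch of group-level permutation feature importance
# (function and variable names are illustrative assumptions,
# not FogNet's actual implementation).
import numpy as np

def grouped_permutation_importance(model, X, y, channel_groups,
                                   metric, n_repeats=5, seed=None):
    """Permute all channels of each group together and report the metric drop.

    model          : any object with a predict(X) method
    X              : raster inputs, shape (samples, height, width, channels)
    y              : targets, shape (samples,)
    channel_groups : dict mapping group name -> list of channel indices
    metric         : callable(y_true, y_pred) -> float, higher is better
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = {}
    for name, channels in channel_groups.items():
        drops = []
        for _ in range(n_repeats):
            perm = rng.permutation(X.shape[0])
            X_perm = X.copy()
            # Shuffle the sample order identically for every channel in the
            # group, so within-group correlations are preserved.
            X_perm[..., channels] = X[perm][..., channels]
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[name] = float(np.mean(drops))
    return importances
```

Finer-grained attributions (e.g., superpixel SHAP values) could be compared against such group scores by summing the attributions over the pixels and channels belonging to each group.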

Keywords

model interpretability, explainability, permutation feature importance, SHAP, meteorology

Rights

Attribution-ShareAlike 4.0 International
