The influence of grouping features on explainable artificial intelligence for a complex fog prediction deep learning model

dc.contributor.author: Krell, Evan
dc.contributor.author: Kamangir, Hamid
dc.contributor.author: Friesand, Josh
dc.contributor.author: Judge, Julianna
dc.contributor.author: Collins, Waylon
dc.contributor.author: King, Scott A.
dc.contributor.author: Tissot, Philippe
dc.date.accessioned: 2022-05-05T14:21:00Z
dc.date.available: 2022-05-05T14:21:00Z
dc.date.issued: 2022-04
dc.description.abstract: Advances in machine learning have enabled modeling of complex nonlinear relationships. High-performance models increasingly rely on "black boxes" such as deep learning, where it is impractical to determine why predictions are made. This limits users' trust in the model, motivating the field of eXplainable Artificial Intelligence (XAI), which provides tools to understand how models make decisions. XAI techniques are used here to explain FogNet, a complex model for predicting coastal fog whose input is a raster of 384 atmospheric variable channels. XAI techniques struggle with feature correlations and interactions, making it challenging to explain FogNet, whose data is highly correlated by design. For example, a group of 108 channels represents the lower-atmosphere thermodynamic profile. This profile is used by forecasters to predict fog but complicates XAI with strong spatial and channel-wise autocorrelation. Grouping related features has been proposed to improve XAI accuracy. Here, XAI techniques are applied with features grouped at multiple levels of granularity. The coarsest scheme divides the raster channels into five groups based on physical similarity; the second treats each individual channel as a feature; the finest uses superpixels within each channel. To analyze the sensitivity of explanations to the feature grouping used, the more granular outputs are aggregated into the coarser groups. This allows direct comparison of, for example, channel-wise explanations obtained when channels themselves are the feature groups versus when superpixels within those channels are the groups. The results indicate that the choice of feature grouping scheme influences the explanations, which can make interpretation of XAI results challenging. However, consistencies also emerge that provide confidence in certain aspects of the explanations. Combined with forecaster domain knowledge, we demonstrate using the XAI outputs to generate hypotheses that drive the next phase of model development.
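
As a concrete illustration of the workflow the abstract describes, below is a minimal Python sketch of grouped permutation feature importance, where an entire feature group is permuted jointly, together with a helper that aggregates finer-grained attributions into channel-level scores for cross-scheme comparison. This is not the authors' implementation; it assumes NumPy, a fitted `model` with a `predict` method, and a `score_fn` metric, and all names are hypothetical placeholders.

```python
# A minimal sketch (assumed names; not the authors' implementation) of
# grouped permutation feature importance for a channels-last raster model,
# plus aggregation of pixel-level attributions to channel-level scores.
import numpy as np

def grouped_permutation_importance(model, X, y, groups, score_fn,
                                   n_repeats=5, seed=0):
    """Permute all channels in each group jointly across samples and
    report the mean drop in score.

    X      : (n_samples, height, width, n_channels) input raster
    groups : dict mapping group name -> list of channel indices
    """
    rng = np.random.default_rng(seed)
    base = score_fn(y, model.predict(X))
    importance = {}
    for name, channels in groups.items():
        drops = []
        for _ in range(n_repeats):
            perm = rng.permutation(len(X))
            Xp = X.copy()
            # Shuffle the group's channels as one block across samples,
            # so within-group correlations are preserved.
            Xp[..., channels] = X[perm][..., channels]
            drops.append(base - score_fn(y, model.predict(Xp)))
        importance[name] = float(np.mean(drops))
    return importance

def aggregate_to_channels(attributions):
    """Collapse (height, width, n_channels) attributions, e.g. from
    superpixel-level SHAP, into one score per channel by summing, so
    they can be compared against channel-level explanations."""
    return attributions.reshape(-1, attributions.shape[-1]).sum(axis=0)
```

Jointly permuting a group, rather than one channel at a time, keeps the sketch faithful to the grouping idea: correlated channels are destroyed or preserved together, so importance is not diluted across near-duplicate features.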
dc.identifier.uri: https://hdl.handle.net/1969.6/90558
dc.language.iso: en_US
dc.rights: Attribution-ShareAlike 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-sa/4.0/
dc.subject: model interpretability
dc.subject: explainability
dc.subject: permutation feature importance
dc.subject: shap
dc.subject: meteorology
dc.title: The influence of grouping features on explainable artificial intelligence for a complex fog prediction deep learning model
dc.type: Presentation

Files

Original bundle

Name: Krell_SSRS2022_poster_Evan Krell.pptx
Size: 2.11 MB
Format: Microsoft Powerpoint XML
Description: Poster

License bundle

Name: license.txt
Size: 1.72 KB
Description: Item-specific license agreed upon to submission