Interpretable Artificial Intelligence through the Lens of Feature Interaction

03/01/2021
by Michael Tsang, et al.

Interpretation of deep learning models is a very challenging problem because of their large number of parameters, complex connections between nodes, and unintelligible feature representations. Despite this, many view interpretability as a key solution to trustworthiness, fairness, and safety, especially as deep learning is applied to more critical decision tasks like credit approval, job screening, and recidivism prediction. There is an abundance of good research providing interpretability to deep learning models; however, many of the commonly used methods do not consider a phenomenon called "feature interaction." This work first explains the historical and modern importance of feature interactions and then surveys the modern interpretability methods which do explicitly consider feature interactions. This survey aims to bring to light the importance of feature interactions in the larger context of machine learning interpretability, especially in a modern context where deep learning models heavily rely on feature interactions.
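As a concrete illustration of the phenomenon the abstract refers to, consider a minimal NumPy sketch (not taken from the paper; all variable names are illustrative): when the target is a pure pairwise interaction such as x1 * x2, neither feature carries any signal on its own, so a purely additive attribution of the form f1(x1) + f2(x2) cannot explain the prediction, while the interaction term explains it exactly.

```python
import numpy as np

# Illustrative sketch (not from the paper): a multiplicative feature
# interaction, y = x1 * x2, where neither feature alone predicts the
# output. Any purely additive attribution f1(x1) + f2(x2) misses it,
# which is why interaction-aware interpretability methods matter.

rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=(10_000, 2))   # two independent binary features
y = x[:, 0] * x[:, 1]                           # pure pairwise interaction

# Marginal (main) effects are ~0: knowing one feature tells you nothing.
print("corr(y, x1):   ", np.corrcoef(y, x[:, 0])[0, 1].round(3))
print("corr(y, x2):   ", np.corrcoef(y, x[:, 1])[0, 1].round(3))

# The interaction term explains y exactly.
print("corr(y, x1*x2):", np.corrcoef(y, x[:, 0] * x[:, 1])[0, 1].round(3))
```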
