Exploring the cloud of feature interaction scores in a Rashomon set

05/17/2023
by Sichao Li, et al.

Interactions among features are central to understanding the behavior of machine learning models. Recent research has made significant strides in detecting and quantifying feature interactions in single predictive models. However, we argue that the feature interactions extracted from a single pre-specified model may not be trustworthy, for two reasons: a well-trained predictive model may not preserve the true feature interactions, and there exist multiple well-performing predictive models that differ in feature interaction strengths. Thus, we recommend exploring feature interaction strengths in a model class of approximately equally accurate predictive models. In this work, we introduce the feature interaction score (FIS) in the context of a Rashomon set, representing a collection of models that achieve similar accuracy on a given task. We propose a general and practical algorithm to calculate the FIS in the model class. We demonstrate the properties of the FIS via synthetic data and draw connections to other areas of statistics. Additionally, we introduce a Halo plot for visualizing the feature interaction variance in high-dimensional space and a swarm plot for analyzing FIS in a Rashomon set. Experiments with recidivism prediction and image classification illustrate how feature interactions can vary dramatically in importance for similarly accurate predictive models. Our results suggest that the proposed FIS can provide valuable insights into the nature of feature interactions in machine learning models.
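The core idea can be sketched in a few lines of Python. This is a minimal illustration, not the authors' exact algorithm: it builds a pool of models, keeps those within a tolerance ε of the best loss (a stand-in for the Rashomon set), and scores a feature pair per model by how much the loss change from jointly permuting both features exceeds the sum of the individual permutation effects. The ε value, model pool, dataset, and permutation-based score are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Friedman #1 has a known interaction between features 0 and 1.
X, y = make_friedman1(n_samples=500, noise=0.1, random_state=0)

# A pool of candidate models with varied hyperparameters.
models = [
    GradientBoostingRegressor(max_depth=d, n_estimators=n, random_state=0).fit(X, y)
    for d in (2, 3, 4) for n in (50, 100)
]

losses = np.array([mean_squared_error(y, m.predict(X)) for m in models])
eps = 0.1  # Rashomon tolerance (assumed value)
rashomon = [m for m, l in zip(models, losses) if l <= losses.min() * (1 + eps)]

def permute_loss(model, X, y, cols):
    """Loss after independently permuting the given columns."""
    Xp = X.copy()
    for c in cols:
        Xp[:, c] = rng.permutation(Xp[:, c])
    return mean_squared_error(y, model.predict(Xp))

def fis(model, X, y, i, j):
    """Excess of the joint permutation effect over the sum of individual effects."""
    base = mean_squared_error(y, model.predict(X))
    joint = permute_loss(model, X, y, [i, j]) - base
    solo = (permute_loss(model, X, y, [i]) - base) + (permute_loss(model, X, y, [j]) - base)
    return joint - solo

# One FIS value per model in the Rashomon set: the "cloud" of scores.
scores = [fis(m, X, y, 0, 1) for m in rashomon]
```

The spread of `scores` across the Rashomon set, rather than any single value, is what the paper proposes to examine; similarly accurate models can disagree substantially on it.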

Related research

- Variable Importance Clouds: A Way to Explore Variable Importance for the Set of Good Models (01/10/2019)
- Interpretable Artificial Intelligence through the Lens of Feature Interaction (03/01/2021)
- Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection (06/19/2020)
- Understanding and Exploring the Whole Set of Good Sparse Generalized Additive Models (03/28/2023)
- A Framework for Constructing Machine Learning Models with Feature Set Optimisation for Evapotranspiration Partitioning (04/29/2022)
- SHAP-IQ: Unified Approximation of any-order Shapley Interactions (03/02/2023)
- Faith-Shap: The Faithful Shapley Interaction Index (03/02/2022)
