Increasing Performance and Sample Efficiency with Model-Agnostic Interactive Feature Attributions

06/28/2023
by Joran Michiels, et al.

Model-agnostic feature attributions can provide local insight into complex ML models. If an explanation is correct, a domain expert can validate and trust the model's decision. However, when an explanation contradicts the expert's knowledge, related work only corrects the attributions of irrelevant features to improve the model. To allow for unlimited interaction, in this paper we provide model-agnostic implementations of two popular explanation methods (Occlusion and Shapley values) that can enforce entirely different attributions in the complex model. For a particular set of samples, we use the corrected feature attributions to generate extra local data, which is then used to retrain the model so that it produces the right explanations for those samples. Through simulated and real-data experiments on a variety of models, we show that the proposed approach can significantly improve a model's performance simply by augmenting its training dataset based on corrected explanations. Adding our interactive explanations to active learning settings significantly increases sample efficiency and outperforms existing explanatory interactive strategies. Additionally, we explore how a domain expert can provide feature attributions that are sufficiently correct to improve the model.
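To make the explanation side concrete, the sketch below shows what a basic model-agnostic Occlusion attribution can look like: each feature of a sample is replaced by a baseline value, and the resulting change in the model's score is taken as that feature's attribution. This is an illustrative sketch only, not the paper's implementation; the function name `occlusion_attributions`, the callable `model_fn`, the zero baseline, and the single-feature occlusion are assumptions chosen for brevity.

```python
import numpy as np

def occlusion_attributions(model_fn, x, baseline=None):
    """Model-agnostic occlusion attributions for a single sample.

    model_fn : callable mapping a 2-D array of samples to a 1-D array of scores.
    x        : 1-D feature vector to explain.
    baseline : per-feature replacement values (defaults to zeros).
    """
    x = np.asarray(x, dtype=float)
    if baseline is None:
        baseline = np.zeros_like(x)
    original_score = model_fn(x[None, :])[0]
    attributions = np.empty_like(x)
    for i in range(x.size):
        occluded = x.copy()
        occluded[i] = baseline[i]  # replace feature i with its baseline value
        # attribution = how much the score drops when feature i is occluded
        attributions[i] = original_score - model_fn(occluded[None, :])[0]
    return attributions

if __name__ == "__main__":
    # Toy usage: a linear "model" whose attributions are easy to verify.
    weights = np.array([1.0, -2.0, 0.5])
    model_fn = lambda X: X @ weights
    print(occlusion_attributions(model_fn, np.array([3.0, 1.0, 2.0])))
    # -> [ 3. -2.  1.], i.e. w_i * x_i for a zero baseline
```

Interactively corrected versions of attributions like these are what, per the abstract, are used to generate extra local training data for retraining the model.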
