Contrastive Explanations for Explaining Model Adaptations

by André Artelt et al., Bielefeld University

Many decision-making systems deployed in the real world are not static: they change over time, a phenomenon known as model adaptation. The need for transparency and interpretability of AI-based decision models is widely accepted, and explanation methods have therefore been studied extensively. However, these methods usually assume that the system to be explained is static. Explaining non-static systems is still an open research question, which poses the challenge of how to explain model adaptations. In this contribution, we propose and empirically evaluate a framework for explaining model adaptations by contrastive explanations. We also propose a method for automatically finding regions in data space that are affected by a given model adaptation and should therefore be explained.
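The two ideas in the abstract, locating the region of data space affected by an adaptation and contrasting the old and new models' decisions there, can be sketched with simple linear models. This is an illustrative assumption-laden sketch, not the authors' implementation: the data, the models, and the closed-form counterfactual step are all hypothetical choices for a linear classifier.

```python
# Illustrative sketch only: find where an "old" and an "adapted" model
# disagree, then give a contrastive (counterfactual) explanation for one
# affected point. All names and modelling choices are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: the old model is fit on one sample, the adapted model
# on a shifted sample with a slightly different labeling rule
# (simulating drift followed by model adaptation).
X_old = rng.normal(0.0, 1.0, size=(500, 2))
y_old = (X_old[:, 0] + X_old[:, 1] > 0).astype(int)
X_new = rng.normal(0.3, 1.0, size=(500, 2))
y_new = (X_new[:, 0] + 0.5 * X_new[:, 1] > 0).astype(int)

old_model = LogisticRegression().fit(X_old, y_old)
new_model = LogisticRegression().fit(X_new, y_new)

# Affected region: probe points on which the two models disagree.
X_probe = rng.normal(0.0, 1.5, size=(2000, 2))
disagree = old_model.predict(X_probe) != new_model.predict(X_probe)
affected = X_probe[disagree]
print(f"{disagree.mean():.1%} of probe points are affected by the adaptation")

# Contrastive explanation for one affected point: the smallest change
# (along the weight vector) that flips the adapted model's prediction --
# a closed-form counterfactual for a linear model.
x = affected[0]
w, b = new_model.coef_[0], new_model.intercept_[0]
delta = -(w @ x + b) / (w @ w) * w   # orthogonal step to the decision boundary
x_cf = x + 1.01 * delta              # step just across the boundary
assert new_model.predict([x_cf])[0] != new_model.predict([x])[0]
print("counterfactual change:", delta)
```

The disagreement mask is a crude stand-in for the paper's automatic detection of affected regions; the closed-form projection only works for linear models, whereas the paper's framework is model-agnostic.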


