Inverse Classification for Comparison-based Interpretability in Machine Learning

12/22/2017
by Thibault Laugel, et al.

In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier in the case where no information is available about either the classifier itself or the processed data (neither the training nor the test data). It proposes an instance-based approach whose principle is to determine the minimal changes needed to alter a prediction: given a data point whose classification must be explained, the proposed method identifies a close neighbour that is classified differently, where the notion of closeness integrates a sparsity constraint. This principle is implemented through observation generation in the Growing Spheres algorithm. Experimental results on two datasets illustrate the relevance of the proposed approach, which can be used to gain knowledge about the classifier.
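As a rough illustration of the comparison-based idea described in the abstract, the sketch below generates observations in hyperspherical layers of growing radius around the instance to explain, keeps the closest differently-classified observation, and then greedily resets features to their original values to enforce sparsity. This is a minimal sketch assuming only black-box query access to a prediction function; the function and parameter names (growing_spheres, sample_spherical_layer, sparsify, eta, n_samples) are illustrative and not the authors' reference implementation.

```python
import numpy as np


def sample_spherical_layer(center, r_min, r_max, n_samples, rng):
    """Draw points uniformly from the layer r_min <= ||z - center|| <= r_max."""
    d = center.shape[0]
    directions = rng.normal(size=(n_samples, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    # Radii chosen so that points are uniform in volume within the layer.
    u = rng.uniform(size=(n_samples, 1))
    radii = (u * (r_max**d - r_min**d) + r_min**d) ** (1.0 / d)
    return center + radii * directions


def growing_spheres(x, predict, eta=0.1, n_samples=1000, max_iter=100, seed=0):
    """Find a close neighbour of x that the black-box classifier labels differently."""
    rng = np.random.default_rng(seed)
    y_x = predict(x.reshape(1, -1))[0]
    r_min, r_max = 0.0, eta
    for _ in range(max_iter):
        candidates = sample_spherical_layer(x, r_min, r_max, n_samples, rng)
        enemies = candidates[predict(candidates) != y_x]
        if len(enemies) > 0:
            # Return the closest differently-classified observation found so far.
            return enemies[np.argmin(np.linalg.norm(enemies - x, axis=1))]
        r_min, r_max = r_max, r_max + eta  # grow the search layer outward
    return None


def sparsify(x, enemy, predict):
    """Greedily reset the smallest changes back to x while keeping the prediction different."""
    y_x = predict(x.reshape(1, -1))[0]
    e = enemy.copy()
    for i in np.argsort(np.abs(e - x)):
        old = e[i]
        e[i] = x[i]
        if predict(e.reshape(1, -1))[0] == y_x:
            e[i] = old  # this change was needed to keep the prediction different
    return e
```

With a scikit-learn model, for example, clf.predict can be passed as the predict argument, so the explanation is produced purely through queries to the classifier, consistent with the setting described above.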
