
Inverse Classification for Comparison-based Interpretability in Machine Learning

by   Thibault Laugel, et al.
Laboratoire d'Informatique de Paris 6

In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier in the setting where no information is available about either the classifier itself or the data it processes (neither the training nor the test data). It proposes an instance-based approach whose principle is to determine the minimal changes needed to alter a prediction: given a data point whose classification must be explained, the method identifies a close neighbour classified differently, where the definition of closeness integrates a sparsity constraint. This principle is implemented through observation generation in the proposed Growing Spheres algorithm. Experimental results on two datasets illustrate the relevance of the approach, which can be used to gain knowledge about the classifier.
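The principle described above can be sketched in code. The following is a minimal, illustrative implementation of the two ideas in the abstract, not the authors' reference code: a generation phase that samples observations uniformly in growing hyperspherical layers around the instance until a differently-classified neighbour (an "enemy") is found, and a sparsity phase that reverts changed features back to the original values whenever doing so preserves the class change. The `predict` callable stands in for the black-box classifier; all function and parameter names here are assumptions for the sketch.

```python
import numpy as np

def growing_spheres(predict, x, radius=0.1, step=0.1, n_samples=200, seed=0):
    """Sketch of the generation phase: search hyperspherical layers
    around x for the closest observation classified differently.
    `predict` is the black-box classifier (a callable on 2-D arrays)."""
    rng = np.random.default_rng(seed)
    target = predict(x.reshape(1, -1))[0]
    d = x.shape[0]
    a0, a1 = 0.0, radius
    while True:
        # Sample uniformly in the spherical layer between radii a0 and a1:
        # uniform directions on the sphere, radii via the inverse-CDF trick.
        directions = rng.normal(size=(n_samples, d))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        radii = rng.uniform(a0**d, a1**d, size=n_samples) ** (1.0 / d)
        candidates = x + directions * radii[:, None]
        labels = predict(candidates)
        enemies = candidates[labels != target]
        if len(enemies) > 0:
            # Return the closest differently-classified observation.
            dists = np.linalg.norm(enemies - x, axis=1)
            return enemies[np.argmin(dists)]
        a0, a1 = a1, a1 + step  # no enemy found: grow the layer outward

def sparsify(predict, x, enemy):
    """Sketch of the sparsity step: revert changed coordinates back to
    x's values as long as the prediction stays different from x's class."""
    target = predict(x.reshape(1, -1))[0]
    e = enemy.copy()
    # Try to revert features in order of smallest absolute change first.
    for i in np.argsort(np.abs(e - x)):
        old = e[i]
        e[i] = x[i]
        if predict(e.reshape(1, -1))[0] == target:
            e[i] = old  # reverting feature i undoes the class change: keep it
    return e
```

For instance, with a toy classifier `predict = lambda X: (X[:, 0] > 1.0).astype(int)` and `x = np.array([0.0, 0.0])`, the returned neighbour crosses the decision boundary on the first feature, and the sparsity step reverts the second feature to its original value.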

