Behavior of k-NN as an Instance-Based Explanation Method

09/14/2021
by Chhavi Yadav, et al.

The adoption of deep learning (DL) models in critical areas has led to an escalating demand for sound explanation methods. Instance-based explanation methods are a popular type that return selected instances from the training set to explain the prediction for a test sample. One way to connect these explanations with the prediction is to ask the following counterfactual question: how do the loss and prediction for a test sample change when the explanations are removed from the training set? Our paper answers this question for k-NNs, which are natural contenders for an instance-based explanation method. We first demonstrate empirically that the representation space induced by the last layer of a neural network is the best space in which to perform k-NN. Using this layer, we conduct our experiments and compare them to influence functions (IFs) <cit.>, which try to answer a similar question. Our evaluations do indicate a change in loss and predictions when explanations are removed, but we do not find a trend between k and the magnitude of that change. We find notably greater stability in the predictions and loss on MNIST than on CIFAR-10. Surprisingly, we do not observe much difference in the behavior of k-NNs vs. IFs on this question; we attribute this to training set subsampling for IFs.
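The procedure described above is straightforward to sketch. The following is a minimal, hypothetical illustration (not the authors' code) of the two steps in the abstract: retrieving the k nearest training instances in the last-layer representation space as explanations, and removing them from the training set so the counterfactual change in loss and prediction can be measured after retraining. The random feature arrays, the choice of k, and the commented-out retraining step are placeholders.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Stand-ins for last-layer activations of a trained network (assumption:
# a real pipeline would extract these from the penultimate/final hidden layer).
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 64))  # (N, d) training representations
test_feat = rng.normal(size=(64,))         # (d,) representation of the test sample

def knn_explanations(train_feats, test_feat, k):
    """Indices of the k nearest training instances in the representation space."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    _, idx = nn.kneighbors(test_feat.reshape(1, -1))
    return idx[0]

explanation_idx = knn_explanations(train_feats, test_feat, k=5)

# Counterfactual question from the abstract: drop the explanations from the
# training set and ask how the test sample's loss/prediction change.
keep = np.ones(len(train_feats), dtype=bool)
keep[explanation_idx] = False
# model_after = retrain(train_data[keep])   # hypothetical retraining step
# compare loss/prediction of model_before vs. model_after on the test sample
```

The same removal-and-retrain evaluation is what allows a side-by-side comparison with influence functions, which instead estimate the effect of removing a training point without retraining.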


