Regularized adversarial examples for model interpretability

11/18/2018
by Yoel Shoshan et al.

As machine learning algorithms continue to improve, there is an increasing need to explain why a model produces a certain prediction for a certain input. In recent years, several model interpretability methods have been developed, aiming to identify which regions of the model input are most responsible for the model's prediction. In parallel, the research community has invested significant effort in adversarial example generation methods that fool models without altering the true label of the input, as a human annotator would classify it. In this paper, we bridge the gap between adversarial example generation and model interpretability, and introduce a modification to the adversarial example generation process which encourages better interpretability. We analyze the proposed method on a public medical imaging dataset, both quantitatively and qualitatively, and show that it significantly outperforms the leading known alternative method. Our suggested method is simple to implement and can be easily plugged into most common adversarial example generation frameworks. Additionally, we propose an explanation quality metric, APE ("Adversarial Perturbative Explanation"), which measures how well an explanation describes model decisions.
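The general idea can be sketched concretely. The snippet below is a minimal illustration in PyTorch, not the paper's exact formulation: the regularizer choice (an L1 sparsity penalty on the perturbation) and all names (`explain`, `lam`, etc.) are assumptions for illustration. A perturbation `delta` is optimized to push the model toward a different prediction while the penalty keeps it small and localized, so that the magnitude of `delta` can be read as an explanation map.

```python
# Minimal sketch: a regularized adversarial perturbation used as an explanation.
# Assumes a trained PyTorch classifier `model` and a single input image `x`
# of shape [1, C, H, W]; the L1 regularizer and hyperparameters are
# illustrative assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F

def explain(model, x, target_class, steps=200, lr=0.05, lam=1e-2):
    """Optimize a sparse perturbation `delta` that pushes the model toward
    `target_class`; |delta| highlights the pixels driving the decision."""
    model.eval()
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class], device=x.device)
    for _ in range(steps):
        logits = model(x + delta)
        adv_loss = F.cross_entropy(logits, target)   # adversarial term: fool the model
        reg_loss = lam * delta.abs().sum()           # regularizer: keep delta sparse
        loss = adv_loss + reg_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()  # visualize |delta| as a saliency/explanation map
```

The weight `lam` trades off attack strength against sparsity: larger values yield smaller, more localized perturbations and therefore more readable explanations, at the cost of a weaker change in the model's output.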


Related research

01/25/2019 - Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples

03/22/2021 - ExAD: An Ensemble Approach for Explanation-based Adversarial Detection

12/14/2020 - Combining Similarity and Adversarial Learning to Generate Visual Explanation: Application to Medical Image Classification

01/31/2019 - An Evaluation of the Human-Interpretability of Explanation

11/01/2022 - Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small

05/18/2022 - The Solvability of Interpretability Evaluation Metrics
