Gradient-Based Interpretability Methods and Binarized Neural Networks

06/23/2021
by Amy Widdicombe, et al.

Binarized Neural Networks (BNNs) have the potential to revolutionize the way that deep learning is carried out on edge computing platforms. However, the effectiveness of interpretability methods on these networks has not been assessed. In this paper, we compare the performance of several widely used saliency map-based interpretability techniques (Gradient, SmoothGrad and GradCAM) when applied to Binarized or Full Precision Neural Networks (FPNNs). We found that the basic Gradient method produces very similar-looking maps for both types of network. However, SmoothGrad produces significantly noisier maps for BNNs. GradCAM also produces saliency maps which differ between network types, with some of the BNNs having seemingly nonsensical explanations. We comment on possible reasons for these differences in explanations and present this as an example of why interpretability techniques should be tested on a wider range of network types.
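To make the two gradient-based methods compared above concrete, here is a minimal sketch of the vanilla Gradient and SmoothGrad saliency computations. The `score` function below is a hypothetical stand-in for a network's class logit (not the BNN or FPNN models from the paper), with its gradient written analytically so the example stays self-contained; the `n_samples` and `noise_level` parameters follow the usual SmoothGrad formulation of averaging gradients over noisy copies of the input.

```python
import numpy as np

def score(x):
    # Toy stand-in for a network's class score (NOT the paper's models):
    # a weighted sum of squared inputs, chosen so the gradient is simple.
    w = np.linspace(-1.0, 1.0, x.size).reshape(x.shape)
    return np.sum(w * x**2)

def grad_score(x):
    # Analytic gradient of the toy score with respect to the input.
    w = np.linspace(-1.0, 1.0, x.size).reshape(x.shape)
    return 2.0 * w * x

def gradient_saliency(x):
    # Vanilla Gradient method: magnitude of d(score)/d(input).
    return np.abs(grad_score(x))

def smoothgrad_saliency(x, n_samples=50, noise_level=0.1, seed=0):
    # SmoothGrad: average the gradient over noisy copies of the input,
    # then take the magnitude. noise_level scales the input's value range.
    rng = np.random.default_rng(seed)
    sigma = noise_level * (x.max() - x.min())
    grads = [grad_score(x + rng.normal(0.0, sigma, x.shape))
             for _ in range(n_samples)]
    return np.abs(np.mean(grads, axis=0))
```

In a real experiment the analytic `grad_score` would be replaced by a backward pass through the trained network; the noisier SmoothGrad maps the paper reports for BNNs suggest that this averaging step interacts differently with binarized weights than with full-precision ones.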
