Understanding Deep Networks via Extremal Perturbations and Smooth Masks

10/18/2019
by Ruth Fong, et al.

The problem of attribution is concerned with identifying the parts of an input that are responsible for a model's output. An important family of attribution methods is based on measuring the effect of perturbations applied to the input. In this paper, we discuss some of the shortcomings of existing approaches to perturbation analysis and address them by introducing the concept of extremal perturbations, which are theoretically grounded and interpretable. We also introduce a number of technical innovations to compute extremal perturbations, including a new area constraint and a parametric family of smooth perturbations, which allow us to remove all tunable hyper-parameters from the optimization problem. We analyze the effect of perturbations as a function of their area, demonstrating excellent sensitivity to the spatial properties of the deep neural network under stimulation. We also extend perturbation analysis to the intermediate layers of a network. This application allows us to identify the salient channels necessary for classification, which, when visualized using feature inversion, can be used to elucidate model behavior. Lastly, we introduce TorchRay, an interpretability library built on PyTorch.
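At a high level, an extremal perturbation is a smooth mask of a prescribed area that, among all masks of that area, maximally preserves the network's response to a target class. The sketch below illustrates that idea in PyTorch. It is not the paper's exact algorithm: it replaces the ranking-based area constraint and the smooth parametric mask family with a simple soft area penalty and a low-resolution mask upsampled bilinearly, and every function name and hyper-parameter here is an illustrative assumption. The authors' reference implementation is the TorchRay library mentioned above.

```python
# Simplified sketch of perturbation-based attribution with a smooth mask and
# an (approximate) area constraint. Not the paper's exact method; all names
# and hyper-parameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def extremal_mask_sketch(model, x, target, area=0.1, steps=300, lr=0.05):
    """Optimize a smooth mask on image x (1, 3, H, W) for class `target`."""
    model.eval()
    H, W = x.shape[-2:]
    # Low-resolution mask parameters; bilinear upsampling keeps the mask smooth.
    m = torch.zeros(1, 1, H // 16, W // 16, requires_grad=True)
    # Blurred copy of the input used as the "deleted" baseline.
    baseline = F.avg_pool2d(x, 11, stride=1, padding=5)
    opt = torch.optim.Adam([m], lr=lr)
    for _ in range(steps):
        mask = F.interpolate(torch.sigmoid(m), size=(H, W),
                             mode='bilinear', align_corners=False)
        # Preserve the masked region, replace the rest with the blurred baseline.
        x_pert = mask * x + (1 - mask) * baseline
        score = model(x_pert)[0, target]
        # Soft area constraint: keep the mean mask value close to `area`.
        area_penalty = (mask.mean() - area) ** 2
        loss = -score + 10.0 * area_penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return F.interpolate(torch.sigmoid(m), size=(H, W),
                             mode='bilinear', align_corners=False)

# Hypothetical usage: attribute ImageNet class 281 ("tabby cat") for image x.
# import torchvision
# model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
# mask = extremal_mask_sketch(model, x, target=281, area=0.1)
```

Sweeping the `area` parameter over a range of values is what yields the area-versus-score analysis described in the abstract; the paper's actual optimization enforces the area exactly rather than through a soft penalty.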


