Towards Better Input Masking for Convolutional Neural Networks

11/26/2022
by Sriram Balasubramanian et al.

The ability to remove features from the input of machine learning models is very important for understanding and interpreting model predictions. However, this is non-trivial for vision models, since masking out parts of the input image and replacing them with a baseline color like black or grey typically causes a large distribution shift. Masking may even make the model focus on the masking patterns for its prediction rather than on the unmasked portions of the image. Recent work has shown that vision transformers are less affected by such issues, as one can simply drop the tokens corresponding to the masked image portions. They are thus more easily interpretable with techniques like LIME that rely on input perturbation. Using the same intuition, we devise a masking technique for CNNs called layer masking, which simulates running the CNN on only the unmasked input. We find that our method is (i) much less disruptive to the model's output and its intermediate activations, and (ii) much better suited than commonly used masking techniques to input-perturbation-based interpretability methods like LIME. Layer masking thus closes the interpretability gap between CNNs and transformers, and in many cases even makes CNNs more interpretable.
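
The abstract does not spell out the layer-masking procedure, but the core idea of "simulating running the CNN on only the unmasked input" can be illustrated by carrying the mask through the network alongside the activations. The PyTorch sketch below is only an illustration under assumed choices (zero-filling masked activations and updating the mask with a max-pool over each spatial layer's window); it is not the authors' implementation, and the toy network, the masked_forward helper, and the mask-update rule are hypothetical.

```python
# Illustrative sketch only, not the paper's layer-masking algorithm: it shows
# one way to propagate an input mask through a CNN layer by layer so that
# masked pixels never contribute to "valid" activations downstream.
import torch
import torch.nn as nn
import torch.nn.functional as F


def masked_forward(layers, x, mask):
    """Run x through a list of layers while carrying a binary mask
    (1 = unmasked/valid, 0 = masked) alongside the activations."""
    for layer in layers:
        if isinstance(layer, (nn.Conv2d, nn.MaxPool2d, nn.AvgPool2d)):
            x = layer(x * mask)  # hide masked pixels from the spatial layer
            # Assumed mask-update rule: an output location stays valid if any
            # input location in its window was valid (a max-pool of the mask
            # with the same kernel/stride/padding as the layer itself).
            mask = F.max_pool2d(mask, kernel_size=layer.kernel_size,
                                stride=layer.stride, padding=layer.padding)
            x = x * mask  # re-mask the output so masked regions stay zero
        else:
            # Pointwise layers (ReLU, BatchNorm, ...) leave the mask unchanged.
            x = layer(x)
    return x, mask


if __name__ == "__main__":
    # Toy CNN and a random image with the top-left quadrant masked out.
    layers = [nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
              nn.MaxPool2d(kernel_size=2, stride=2),
              nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU()]
    img = torch.randn(1, 3, 32, 32)
    mask = torch.ones(1, 1, 32, 32)
    mask[:, :, :16, :16] = 0.0            # mask out a 16x16 patch
    out, out_mask = masked_forward(layers, img, mask)
    print(out.shape, out_mask.shape)      # (1, 16, 16, 16) and (1, 1, 16, 16)
```

Note that in this simplified version, activations near the mask boundary still see zero-filled pixels inside their receptive field, which is exactly the kind of distribution shift the paper is concerned with; the actual layer-masking method is designed to avoid it.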

Related research

Curved Representation Space of Vision Transformers (10/11/2022)
Do Vision Transformers See Like Convolutional Neural Networks? (08/19/2021)
DRNet: Dissect and Reconstruct the Convolutional Neural Network via Interpretable Manners (11/20/2019)
The Convolutional Tsetlin Machine (05/23/2019)
Making Vision Transformers Truly Shift-Equivariant (05/25/2023)
Deducing neighborhoods of classes from a fitted model (09/11/2020)
Gradient Weighted Superpixels for Interpretability in CNNs (08/16/2019)
