MDM: Visual Explanations for Neural Networks via Multiple Dynamic Mask

07/17/2022
by   Yitao Peng, et al.

Identifying the regions of an input that a neural network attends to when making a decision provides a basis for interpreting its classification decisions. We propose Multiple Dynamic Mask (MDM), a general saliency-map search algorithm whose inference process is itself interpretable. MDM rests on the assumption that, when an image is fed to a trained network, the activation features relevant to classification drive the network's output, while features unrelated to classification have almost no effect on it. MDM is a learning-based, end-to-end algorithm for locating the regions a network relies on for classification, and it has the following advantages: (1) its reasoning process is interpretable; (2) it is universal, in that it can be applied to any neural network and does not depend on the network's internal structure; (3) its search performance is stronger, because the masks are generated by learning, so the algorithm adapts to different data and networks and outperforms previously proposed methods. We experimentally compared the performance of MDM against a range of saliency-map search methods using trained ResNet and DenseNet models, and MDM achieved state-of-the-art search performance. We also applied MDM to the interpretable networks ProtoPNet and XProtoNet, improving their interpretability and prototype search performance. Finally, we visualize how convolutional and Transformer architectures behave in saliency-map search.
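The abstract describes MDM only at a high level: learn masks such that keeping classification-relevant regions preserves the network's prediction while irrelevant regions are suppressed. The sketch below is not the authors' exact MDM procedure; it is a minimal illustration of that learned-mask idea in PyTorch, assuming a standard image classifier and made-up hyperparameters (step count, learning rate, area penalty).

```python
# Minimal learned-mask saliency sketch (illustrative, not the MDM algorithm).
# Assumes `model` is a trained PyTorch classifier and `image` is a
# preprocessed tensor of shape (1, 3, H, W).
import torch
import torch.nn.functional as F

def learn_saliency_mask(model, image, target_class,
                        steps=300, lr=0.05, area_weight=0.05):
    """Optimize a soft mask in [0, 1] over the input image."""
    model.eval()
    mask_logits = torch.zeros(1, 1, *image.shape[-2:], requires_grad=True)
    optimizer = torch.optim.Adam([mask_logits], lr=lr)
    baseline = torch.zeros_like(image)            # masked-out regions become zeros

    for _ in range(steps):
        mask = torch.sigmoid(mask_logits)         # soft mask in [0, 1]
        masked = mask * image + (1 - mask) * baseline
        score = F.log_softmax(model(masked), dim=1)[0, target_class]
        # Keep the target-class score high while keeping the mask small,
        # so only classification-relevant pixels remain un-masked.
        loss = -score + area_weight * mask.mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.sigmoid(mask_logits).detach()    # saliency map, H x W

# Usage sketch (torchvision ResNet-50 is an assumption, in the spirit of the
# abstract's ResNet experiments):
# model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
# saliency = learn_saliency_mask(model, preprocessed_image, target_class=243)
```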
