DProtoNet: Decoupling the inference module and the explanation module enables neural networks to have better accuracy and interpretability

10/15/2022
by Yitao Peng, et al.

The interpretation of decisions made by neural networks is the focus of recent research. Previous methods modify the architecture of a neural network so that it simulates the human reasoning process, that is, it makes decisions by finding decision elements, which gives its reasoning process interpretability. However, such a purpose-built interpretable architecture limits the fitting space of the network, resulting in lower classification performance, unstable convergence, and mediocre interpretability. We propose DProtoNet (Decoupling Prototypical Network), which stores the decision basis of the neural network as feature masks and uses Multiple Dynamic Masks (MDM) to explain the decision basis that the feature masks retain. DProtoNet decouples the inference module of the neural network from the interpretation module and removes the architectural restrictions of interpretable networks, so that the decision-making architecture keeps the original network architecture as much as possible; this makes the network more expressive and greatly improves both the classification performance and the interpretability of the explanatory network. We also propose replacing single-image prototype learning with multi-image prototype learning, which makes the prototypes more robust, speeds up the convergence of network training, and stabilizes the network's accuracy during learning. In tests on multiple datasets, DProtoNet improves the accuracy of recent advanced interpretable network models by 5% to 10%; its classification performance is comparable to that of backbone networks without interpretability, and it also achieves state-of-the-art interpretability performance.
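The abstract names three concrete mechanisms: a stock backbone whose decisions are routed through stored prototype vectors (the decoupled inference module), an MDM-style post-hoc mask optimization that explains which input regions a prototype responds to (the decoupled explanation module), and prototype updates computed from multiple images rather than one. Since only the abstract is reproduced here, the PyTorch sketch below is a hypothetical reconstruction of those three ideas; every name and hyperparameter in it (DProtoNetSketch, prototype_activations, mdm_explain, update_prototypes_multi, the mask resolution, the sparsity weight) is an assumption, not the paper's actual implementation.

```python
# Minimal sketch of the decoupling idea in PyTorch. Everything here
# (class name, method names, hyperparameters) is a hypothetical
# illustration written for this summary, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DProtoNetSketch(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int,
                 num_classes: int, protos_per_class: int = 10):
        super().__init__()
        self.backbone = backbone  # inference module: an ordinary, unmodified CNN
        n_protos = num_classes * protos_per_class
        # Prototype vectors act as the stored decision basis ("feature masks").
        self.prototypes = nn.Parameter(torch.randn(n_protos, feat_dim))
        self.classifier = nn.Linear(n_protos, num_classes, bias=False)

    def prototype_activations(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)                             # (B, C, H, W)
        pooled = F.adaptive_avg_pool2d(feats, 1).flatten(1)  # (B, C)
        # Similarity of the image's features to each stored prototype.
        return F.cosine_similarity(pooled.unsqueeze(1),
                                   self.prototypes.unsqueeze(0), dim=-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The decision depends only on prototype similarities, but the
        # backbone itself keeps its original, unconstrained architecture.
        return self.classifier(self.prototype_activations(x))

def mdm_explain(model: DProtoNetSketch, x: torch.Tensor, proto_idx: int,
                mask_size: int = 8, steps: int = 200,
                lr: float = 0.1, sparsity: float = 0.05) -> torch.Tensor:
    """Post-hoc, MDM-style explanation (simplified to a single mask scale):
    learn a low-resolution mask whose upsampled version, applied to the
    input, preserves one prototype's activation while staying sparse."""
    for p in model.parameters():
        p.requires_grad_(False)  # explanation is post hoc; the model is frozen
    mask = torch.zeros(1, 1, mask_size, mask_size, requires_grad=True)
    opt = torch.optim.Adam([mask], lr=lr)
    with torch.no_grad():
        target = model.prototype_activations(x)[:, proto_idx]
    for _ in range(steps):
        m = torch.sigmoid(mask)
        m_up = F.interpolate(m, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)
        act = model.prototype_activations(x * m_up)[:, proto_idx]
        # Keep the activation close to the unmasked one; penalize mask area.
        loss = (target - act).pow(2).mean() + sparsity * m.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach()

def update_prototypes_multi(model: DProtoNetSketch, feats_bank: torch.Tensor,
                            proto_idx: int, k: int = 5) -> None:
    """Multi-image prototype learning (one hypothetical reading of the
    abstract): move a prototype toward the mean of its k nearest pooled
    features drawn from many training images, instead of projecting it
    onto a single image's feature."""
    with torch.no_grad():
        p = model.prototypes[proto_idx]
        sims = F.cosine_similarity(feats_bank, p.unsqueeze(0), dim=-1)
        model.prototypes[proto_idx] = feats_bank[sims.topk(k).indices].mean(0)
```

The decoupling is visible in the sketch: forward never touches the mask machinery, so the backbone's fitting space is unrestricted, while mdm_explain can be run afterwards on any trained instance to visualize a prototype's decision basis.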

Related research

07/17/2022
MDM: Visual Explanations for Neural Networks via Multiple Dynamic Mask
The active region lookup of a neural network tells us which regions the ...

11/03/2021
On the Effectiveness of Interpretable Feedforward Neural Network
Deep learning models have achieved state-of-the-art performance in many ...

11/15/2019
Explanatory Masks for Neural Network Interpretability
Neural network interpretability is a vital component for applications ac...

12/04/2018
Prototype-based Neural Network Layers: Incorporating Vector Quantization
Neural networks currently dominate the machine learning community and th...

04/03/2019
Interpretable Deep Learning for Two-Prong Jet Classification with Jet Spectra
Classification of jets with deep learning has gained significant attenti...

02/17/2019
Attention-Based Prototypical Learning Towards Interpretable, Confident and Robust Deep Neural Networks
We propose a new framework for prototypical learning that bases decision...

03/27/2020
A copula-based visualization technique for a neural network
Interpretability of machine learning is defined as the extent to which h...
