LAP: An Attention-Based Module for Faithful Interpretation and Knowledge Injection in Convolutional Neural Networks

01/27/2022
by Rassa Ghavami Modegh, et al.

Despite the state-of-the-art performance of deep convolutional neural networks, they remain susceptible to bias and malfunction in unseen situations, and the complex computation behind their reasoning is not sufficiently human-understandable to develop trust. External explainer methods attempt to interpret network decisions in a human-understandable way, but the assumptions and simplifications they rely on expose them to fallacies. Inherently self-interpretable models, on the other hand, while more robust to such fallacies, cannot be applied to already-trained models. In this work, we propose a new attention-based pooling layer, called Local Attention Pooling (LAP), that provides self-interpretability and enables knowledge injection while improving model performance. Moreover, several weakly supervised knowledge-injection methodologies are provided to enhance training. We verify our claims by evaluating several LAP-extended models on three different datasets, including ImageNet. The proposed framework offers interpretations that are both more human-understandable and more faithful to the model than those of commonly used white-box explainer methods.
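The abstract does not spell out LAP's internals, but the core idea of an attention-based pooling layer that exposes its attention map for interpretation and supervision can be sketched concretely. The following is a minimal, illustrative PyTorch sketch, not the authors' implementation: the class name `LocalAttentionPool2d`, the 1x1-convolution scoring head, and the per-window softmax scheme are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalAttentionPool2d(nn.Module):
    """Illustrative attention-based pooling (NOT the paper's code).

    A 1x1-conv scoring head assigns each spatial position an attention
    logit. Pooling windows are aggregated by a softmax over the logits
    inside each window, so the layer both downsamples the features and
    yields a per-position importance map usable for interpretation or
    for weakly supervised knowledge injection.
    """

    def __init__(self, in_channels: int, kernel_size: int = 2, stride: int = 2):
        super().__init__()
        # Scoring head: one attention logit per spatial position.
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.kernel_size = kernel_size
        self.stride = stride

    def forward(self, x: torch.Tensor):
        # x: (N, C, H, W)
        logits = self.score(x)                                            # (N, 1, H, W)
        # Unfold into pooling windows so attention is normalized locally.
        patches = F.unfold(x, self.kernel_size, stride=self.stride)       # (N, C*k*k, L)
        weights = F.unfold(logits, self.kernel_size, stride=self.stride)  # (N, k*k, L)
        weights = weights.softmax(dim=1)                                  # softmax per window
        n, _, num_windows = patches.shape
        c = x.shape[1]
        patches = patches.view(n, c, -1, num_windows)                     # (N, C, k*k, L)
        pooled = (patches * weights.unsqueeze(1)).sum(dim=2)              # (N, C, L)
        h_out = (x.shape[2] - self.kernel_size) // self.stride + 1
        w_out = (x.shape[3] - self.kernel_size) // self.stride + 1
        pooled = pooled.view(n, c, h_out, w_out)
        # Return the attention map alongside the features so it can be
        # visualized or supervised by an auxiliary loss.
        return pooled, torch.sigmoid(logits)
```

Because the layer returns its attention map alongside the pooled features, weakly supervised knowledge injection could, for instance, take the form of an auxiliary loss encouraging the map to agree with coarse region labels; the actual methodologies are those described in the paper.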


Related research

09/22/2022 · Improving Attention-Based Interpretability of Text Classification Transformers
Transformers are widely used in NLP, where they consistently achieve sta...

08/20/2019 · Saccader: Improving Accuracy of Hard Attention Models for Vision
Although deep convolutional neural networks achieve state-of-the-art per...

05/09/2019 · Learning Interpretable Features via Adversarially Robust Optimization
Neural networks are proven to be remarkably successful for classificatio...

09/27/2018 · Introducing Noise in Decentralized Training of Neural Networks
It has been shown that injecting noise into the neural network weights d...

06/20/2018 · Towards Robust Interpretability with Self-Explaining Neural Networks
Most recent work on interpretability of complex machine learning models ...

10/20/2022 · Towards Better Guided Attention and Human Knowledge Insertion in Deep Convolutional Neural Networks
Attention Branch Networks (ABNs) have been shown to simultaneously provi...

12/02/2020 · Attention-gating for improved radio galaxy classification
In this work we introduce attention as a state of the art mechanism for ...
