B-cos Networks: Alignment is All We Need for Interpretability

05/20/2022
by Moritz Böhle, et al.

We present a new direction for increasing the interpretability of deep neural networks (DNNs) by promoting weight-input alignment during training. To this end, we propose to replace the linear transforms in DNNs with our B-cos transform. As we show, a sequence (network) of such transforms induces a single linear transform that faithfully summarises the full model computations. Moreover, the B-cos transform introduces alignment pressure on the weights during optimisation. As a result, the induced linear transforms become highly interpretable and align with task-relevant features. Importantly, the B-cos transform is designed to be compatible with existing architectures, and we show that it can easily be integrated into common models such as VGGs, ResNets, InceptionNets, and DenseNets whilst maintaining similar performance on ImageNet. The resulting explanations are of high visual quality and perform well under quantitative interpretability metrics. Code is available at https://www.github.com/moboehle/B-cos.
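The abstract does not spell out the B-cos transform itself; the sketch below illustrates one way such a layer can be written in PyTorch, scaling a unit-norm linear transform by the absolute cosine similarity between input and weight raised to the power B−1, which is what creates the alignment pressure described above. The class name BcosLinear, the default B=2, and the epsilon guard are choices made for this sketch; the exact formulation (including details such as MaxOut units and the precise normalisation) lives in the official code at https://www.github.com/moboehle/B-cos.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BcosLinear(nn.Module):
    """Illustrative sketch of a B-cos layer (not the official implementation).

    Computes |cos(x, w)|^(B-1) * (w_hat^T x) per output unit, where w_hat is
    the unit-norm weight vector, so each unit's effective linear contribution
    shrinks unless its weights align with the input direction.
    """

    def __init__(self, in_features: int, out_features: int, b: float = 2.0):
        super().__init__()
        self.b = b
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Row-normalise the weights so that w_hat^T x = ||x|| * cos(x, w).
        w_hat = F.normalize(self.weight, dim=1)
        lin = F.linear(x, w_hat)  # shape (..., out_features)
        # Cosine similarity between x and each weight row (eps guards ||x|| = 0).
        cos = lin / x.norm(dim=-1, keepdim=True).clamp_min(1e-6)
        # Down-scale the linear output by |cos|^(B-1); B = 1 recovers a
        # plain linear map with unit-norm rows.
        return cos.abs().pow(self.b - 1.0) * lin


# Example: drop-in replacement for nn.Linear in a classifier head.
layer = BcosLinear(512, 10, b=2.0)
out = layer(torch.randn(4, 512))  # -> shape (4, 10)
```

For B=1 the layer reduces to an ordinary linear transform; larger B suppresses outputs whose weights do not point in the input's direction, which is the alignment pressure that makes the induced linear transforms interpretable.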


Related research

- B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers (06/19/2023)
- Optimising for Interpretability: Convolutional Dynamic Alignment Networks (09/27/2021)
- Convolutional Dynamic Alignment Networks for Interpretable Classifications (03/31/2021)
- Regression-Based Image Alignment for General Object Categories (07/08/2014)
- Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks (11/18/2022)
- Gradient Alignment in Deep Neural Networks (06/16/2020)
- Explaining Neural Networks by Decoding Layer Activations (05/27/2020)
