Learning Interpretable Concept Groups in CNNs

09/21/2021
by Saurabh Varshneya, et al.

We propose a novel training methodology, Concept Group Learning (CGL), that encourages CNN filters to be interpretable by partitioning the filters in each layer into concept groups, each of which is trained to learn a single visual concept. We achieve this through a novel regularization strategy that forces filters within the same group of a given layer to be active in similar image regions. A second regularizer encourages a sparse weighting of the concept groups in each layer, so that a few groups can carry greater importance than the others. We quantitatively evaluate CGL's model interpretability using standard interpretability evaluation techniques and find that our method increases interpretability scores in most cases. Qualitatively, we compare the image regions that are most active under filters learned with CGL against filters learned without it, and find that CGL's activation regions concentrate more strongly on semantically relevant features.

