Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks

01/10/2018
by Ruth Fong, et al.

In an effort to understand the meaning of the intermediate representations captured by deep networks, recent papers have tried to associate specific semantic concepts with individual neural network filter responses, where interesting correlations are often found, largely by focusing on extremal filter responses. In this paper, we show that this approach can favor easy-to-interpret cases that are not necessarily representative of the average behavior of a representation. A more realistic but harder-to-study hypothesis is that semantic representations are distributed, and thus filters must be studied in conjunction. In order to investigate this idea while enabling systematic visualization and quantification of multiple filter responses, we introduce the Net2Vec framework, in which semantic concepts are mapped to vectorial embeddings based on corresponding filter responses. By studying such embeddings, we are able to show that (1) in most cases, multiple filters are required to code for a concept; (2) filters are often not concept specific and help encode multiple concepts; and (3) compared to single filter activations, filter embeddings better characterize the meaning of a representation and its relationship to other concepts.

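The core idea can be illustrated with a short, hedged sketch (not the authors' released code): a concept embedding is a vector w with one weight per filter of a chosen layer, trained so that the weighted combination of that layer's activation maps predicts where the concept appears in the image. The layer width K, the spatial resolution, and the `loader` of activation/mask pairs below are hypothetical placeholders.

    # Minimal sketch of a Net2Vec-style concept embedding: one learned weight
    # per filter, combined linearly and trained against binary concept masks.
    # K, H, W and `loader` are assumed placeholders, not values from the paper.
    import torch
    import torch.nn as nn

    K = 256          # number of filters in the probed layer (assumed)
    H, W = 14, 14    # spatial size of the activation maps (assumed)

    w = nn.Parameter(torch.zeros(K))   # the concept embedding: one weight per filter
    b = nn.Parameter(torch.zeros(1))
    optimizer = torch.optim.SGD([w, b], lr=1e-2)
    bce = nn.BCEWithLogitsLoss()

    def concept_logits(activations):
        """Linearly combine filter responses into one logit per spatial location."""
        # activations: (batch, K, H, W) -> logits: (batch, H, W)
        return torch.einsum('bkhw,k->bhw', activations, w) + b

    # Toy training loop; `loader` yields (activations, binary concept masks).
    for activations, concept_masks in loader:
        optimizer.zero_grad()
        loss = bce(concept_logits(activations), concept_masks.float())
        loss.backward()
        optimizer.step()

    # After training, w can be compared across concepts (e.g. cosine similarity)
    # or thresholded to estimate how many filters a concept actually needs.

After training, inspecting how the weight mass in w spreads over filters, and how embeddings of different concepts relate to each other, is what supports the paper's three findings above.
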
Related research

07/17/2019
Differentiable Disentanglement Filter: an Application Agnostic Core Concept Discovery Probe
It has long been speculated that deep neural networks function by discov...

01/15/2020
Filter Grafting for Deep Neural Networks
This paper proposes a new learning paradigm called filter grafting, whic...

05/14/2021
Cause and Effect: Concept-based Explanation of Neural Networks
In many scenarios, human decisions are explained based on some high-leve...

02/25/2023
Bayesian Neural Networks Tend to Ignore Complex and Sensitive Concepts
In this paper, we focus on mean-field variational Bayesian Neural Networ...

08/04/2020
Making Sense of CNNs: Interpreting Deep Representations and Their Invariances with INNs
To tackle increasingly complex tasks, it has become an essential ability...

12/06/2019
Visualizing Deep Neural Networks for Speech Recognition with Learned Topographic Filter Maps
The uninformative ordering of artificial neurons in Deep Neural Networks...

05/20/2021
Biologically Inspired Semantic Lateral Connectivity for Convolutional Neural Networks
Lateral connections play an important role for sensory processing in vis...