Partial Information Decomposition Reveals the Structure of Neural Representations

09/21/2022
by David A. Ehrlich, et al.

In neural networks, task-relevant information is represented jointly by groups of neurons. However, the specific way in which the information is distributed among the individual neurons is not well understood: While parts of it may only be obtainable from specific single neurons, other parts are carried redundantly or synergistically by multiple neurons. We show how Partial Information Decomposition (PID), a recent extension of information theory, can disentangle these contributions. From this, we introduce the measure of "Representational Complexity", which quantifies the difficulty of accessing information spread across multiple neurons. We show how this complexity is directly computable for smaller layers. For larger layers, we propose subsampling and coarse-graining procedures and prove corresponding bounds on the latter. Empirically, for quantized deep neural networks solving the MNIST task, we observe that representational complexity decreases both through successive hidden layers and over training. Overall, we propose representational complexity as a principled and interpretable summary statistic for analyzing the structure of neural representations.
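To make the redundant/unique/synergistic split concrete, the sketch below computes a bivariate decomposition for the classic XOR example, where two "neurons" X1 and X2 jointly determine a target T = XOR(X1, X2): neither neuron alone carries any information about T, so all of it is synergistic. This is a minimal illustration, not the paper's method; it uses the simple minimum-mutual-information redundancy (in the spirit of Williams and Beer), whereas the paper builds on a full PID framework.

```python
import numpy as np
from itertools import product

def mutual_information(joint):
    """Mutual information I(A;B) in bits from a 2-D joint probability table."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])))

# Joint distribution p(t, x1, x2) for T = XOR(X1, X2) with uniform binary inputs.
p = np.zeros((2, 2, 2))
for x1, x2 in product([0, 1], repeat=2):
    p[x1 ^ x2, x1, x2] = 0.25

# Marginal joint tables needed for the decomposition.
p_t_x1 = p.sum(axis=2)      # p(t, x1)
p_t_x2 = p.sum(axis=1)      # p(t, x2)
p_t_x12 = p.reshape(2, 4)   # p(t, (x1, x2)), inputs flattened into one variable

i1 = mutual_information(p_t_x1)     # I(T; X1)
i2 = mutual_information(p_t_x2)     # I(T; X2)
i12 = mutual_information(p_t_x12)   # I(T; X1, X2)

# Simple bivariate decomposition with redundancy = min(I1, I2) (an assumption
# made here for illustration; PID frameworks differ in this choice).
redundancy = min(i1, i2)
unique1 = i1 - redundancy
unique2 = i2 - redundancy
synergy = i12 - i1 - i2 + redundancy

print(f"I(T;X1)={i1:.3f}  I(T;X2)={i2:.3f}  I(T;X1,X2)={i12:.3f}")
print(f"redundant={redundancy:.3f}  unique={unique1:.3f}/{unique2:.3f}  synergistic={synergy:.3f}")
```

For XOR, the single-neuron informations I(T;X1) and I(T;X2) are both zero while the joint information I(T;X1,X2) is one bit, so the decomposition attributes the full bit to synergy; information of this kind is exactly what the representational complexity measure treats as hardest to access.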


Related research

10/13/2021 · Detecting Modularity in Deep Neural Networks
A neural network is modular to the extent that parts of its computationa...

06/03/2023 · Infomorphic networks: Locally learning neural networks derived from partial information decomposition
Understanding the intricate cooperation among individual neurons in perf...

10/31/2018 · Conceptual Content in Deep Convolutional Neural Networks: An analysis into multi-faceted properties of neurons
In this paper we analyze convolutional layers of VGG16 model pre-trained...

11/24/2015 · Convergent Learning: Do different neural networks learn the same representations?
Recent success in training deep neural networks have prompted active inv...

04/18/2018 · Understanding Individual Neuron Importance Using Information Theory
In this work, we characterize the outputs of individual neurons in a tra...

02/11/2018 · Understanding Convolutional Networks with APPLE: Automatic Patch Pattern Labeling for Explanation
With the success of deep learning, recent efforts have been focused on a...

08/10/2021 · Logical Information Cells I
In this study we explore the spontaneous apparition of visible intelligi...
