Decomposing neural networks as mappings of correlation functions

02/10/2022
by Kirsten Fischer, et al.

Understanding the functional principles of information processing in deep neural networks continues to be a challenge, in particular for networks with trained and thus non-random weights. To address this issue, we study the mapping between probability distributions implemented by a deep feed-forward network. We characterize this mapping as an iterated transformation of distributions, where the non-linearity in each layer transfers information between different orders of correlation functions. This allows us to identify essential statistics in the data, as well as different information representations that can be used by neural networks. Applied to an XOR task and to MNIST, we show that correlations up to second order predominantly capture the information processing in the internal layers, while the input layer also extracts higher-order correlations from the data. This analysis provides a quantitative and explainable perspective on classification.
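The transfer of information between orders of correlation functions can be illustrated on the XOR task mentioned in the abstract: with ±1 input coding, the class label has zero first-order correlation with the inputs (the label information sits entirely in the second-order input statistic x1·x2), but a nonlinear hidden layer moves that information into first-order statistics of the hidden activations. A minimal NumPy sketch, using arbitrarily chosen illustrative weights and biases rather than the paper's trained networks:

```python
import numpy as np

# XOR task with +/-1 coding: the label equals -x1*x2
X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])

def xcov(a, b):
    """Cross-covariance E[(a - E[a])(b - E[b])] over the four patterns."""
    return (a - a.mean()) @ (b - b.mean()) / len(b)

# First-order input statistics carry no label information ...
print(xcov(X[:, 0], y), xcov(X[:, 1], y))   # both exactly 0
# ... the information sits in a second-order correlation function
print(xcov(X[:, 0] * X[:, 1], y))           # -1

# A hidden layer with biases and a nonlinearity (weights chosen
# arbitrarily for illustration) transfers this second-order input
# information into *first-order* statistics of the hidden activations.
W = np.array([[1.0, -0.7], [0.5, 1.2]])
b = np.array([0.3, -0.4])
h = np.tanh(X @ W + b)

print([xcov(h[:, j], y) for j in range(2)])  # now nonzero
```

Note that the biases matter here: for an odd nonlinearity such as tanh with zero bias, the first-order hidden-label covariance cancels by symmetry, so the label would remain visible only in higher-order hidden statistics.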


Related research

- Fluctuation based interpretable analysis scheme for quantum many-body snapshots (04/12/2023)
  Microscopically understanding and classifying phases of matter is at the...

- On model selection and the disability of neural networks to decompose tasks (02/19/2002)
  A neural network with fixed topology can be regarded as a parametrizatio...

- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers (07/24/2021)
  Feed-forward neural networks consist of a sequence of layers, in which e...

- PathFinder: Discovering Decision Pathways in Deep Neural Networks (10/01/2022)
  Explainability is becoming an increasingly important topic for deep neur...

- Characterizing Inter-Layer Functional Mappings of Deep Learning Models (07/09/2019)
  Deep learning architectures have demonstrated state-of-the-art performan...

- Thermodynamics of Encoding and Encoders (11/12/2021)
  Non-isolated systems have diverse coupling relations with the external e...

- Origami in N dimensions: How feed-forward networks manufacture linear separability (03/21/2022)
  Neural networks can implement arbitrary functions. But, mechanistically,...
