Information Theoretic Interpretation of Deep Learning

03/21/2018
by Tianchen Zhao et al.

We interpret part of the experimental results of Shwartz-Ziv and Tishby [2017]. Inspired by these results, we establish a conjecture about the dynamics of deep neural network training. This conjecture can also explain the counterpart results of Saxe et al. [2018].
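
For context, the experiments of Shwartz-Ziv and Tishby track two mutual-information quantities for each hidden layer T over the course of training: I(X; T) between the input and the layer, and I(T; Y) between the layer and the label, plotted against each other in the "information plane". Below is a minimal sketch of the binning-based plug-in estimator commonly used for this kind of analysis; the function names, the bin count of 30, and the NumPy-based setup are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def bin_activations(t, n_bins=30):
    """Discretize a (n_samples, n_units) activation matrix.

    Each activation value is mapped to one of n_bins equal-width bins,
    and each row (one sample's activation pattern) is collapsed to a
    single integer id. n_bins=30 is an illustrative choice.
    """
    edges = np.linspace(t.min(), t.max(), n_bins + 1)
    digitized = np.digitize(t, edges)
    _, ids = np.unique(digitized, axis=0, return_inverse=True)
    return ids

def mutual_information(a, b):
    """Plug-in estimate of I(A; B) in bits from paired discrete samples."""
    n = len(a)
    joint, pa, pb = {}, {}, {}
    for ai, bi in zip(a, b):
        joint[(ai, bi)] = joint.get((ai, bi), 0) + 1
        pa[ai] = pa.get(ai, 0) + 1
        pb[bi] = pb.get(bi, 0) + 1
    mi = 0.0
    for (ai, bi), c in joint.items():
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) ), counts normalized by n.
        mi += (c / n) * np.log2(c * n / (pa[ai] * pb[bi]))
    return mi

# Toy usage: random stand-in activations for 1000 samples, binary labels.
# Treating every input as distinct makes I(X;T) reduce to H(T_binned),
# which matches the deterministic-network setting.
rng = np.random.default_rng(0)
activations = np.tanh(rng.normal(size=(1000, 8)))  # stand-in hidden layer
labels = rng.integers(0, 2, size=1000)
x_ids = np.arange(1000)

t_ids = bin_activations(activations)
print("I(X;T) ~", mutual_information(x_ids, t_ids))
print("I(T;Y) ~", mutual_information(t_ids, labels))
```

In the information-plane analysis, these two estimates are computed per layer at each training epoch, so that each layer traces a trajectory whose shape (a fitting phase followed by a compression phase, per the conjecture) is the subject of the disagreement between Shwartz-Ziv and Tishby [2017] and Saxe et al. [2018].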


