A Probabilistic Representation of Deep Learning

08/26/2019
by Xinjie Lan, et al.
In this work, we introduce a novel probabilistic representation of deep learning that explains Deep Neural Networks (DNNs) in three respects: (i) neurons define the energy of a Gibbs distribution; (ii) the hidden layers of a DNN formulate Gibbs distributions; and (iii) the whole architecture of a DNN can be interpreted as a Bayesian neural network. Based on this probabilistic representation, we investigate two fundamental properties of deep learning: hierarchy and generalization. First, we explicitly formulate the hierarchy property from the Bayesian perspective: some hidden layers formulate a prior distribution, and the remaining layers formulate a likelihood distribution. Second, we demonstrate that DNNs have an explicit regularization through learning a prior distribution, and that the learning algorithm is one reason the generalization ability of DNNs decreases. Moreover, we clarify two empirical phenomena of DNNs that traditional theories of generalization cannot explain. Simulation results on a synthetic dataset validate the proposed probabilistic representation and the resulting insights into these properties of deep learning.
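To make point (i) concrete, here is a minimal sketch of the standard Gibbs (Boltzmann) form the abstract refers to: taking a layer's pre-activations z_i as negative energies, the normalized distribution p_i = exp(z_i) / Z is a Gibbs distribution with partition function Z. The function name and example values below are illustrative, not taken from the paper.

```python
import numpy as np

def gibbs_from_energies(z):
    """Treat pre-activations z_i as negative energies E_i = -z_i and
    return the Gibbs distribution p_i = exp(z_i) / Z, where Z is the
    partition function sum_j exp(z_j)."""
    z = np.asarray(z, dtype=float)
    z = z - z.max()        # shift for numerical stability; leaves p unchanged
    w = np.exp(z)
    return w / w.sum()     # divide by the partition function Z

# Illustrative pre-activations of a three-neuron hidden layer
p = gibbs_from_energies([2.0, 1.0, 0.1])
```

Note that this is exactly the softmax function, so under this reading a softmax layer already outputs a Gibbs distribution whose energies are the negated neuron activations.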


