Towards a theory of machine learning

04/15/2020
by Vitaly Vanchurin et al.

We define a neural network as a septuple consisting of (1) a state vector, (2) an input projection, (3) an output projection, (4) a weight matrix, (5) a bias vector, (6) an activation map, and (7) a loss function. We argue that the loss function can be imposed either on the boundary (i.e. input and/or output neurons) or in the bulk (i.e. hidden neurons) for both supervised and unsupervised systems. We apply the principle of maximum entropy to derive a canonical ensemble of the state vectors subject to a constraint imposed on the bulk loss function by a Lagrange multiplier (or an inverse temperature parameter). We show that in equilibrium the canonical partition function must be a product of two factors: a function of the temperature and a function of the bias vector and weight matrix. Consequently, the total Shannon entropy consists of two terms, which represent, respectively, the thermodynamic entropy and the complexity of the neural network. We derive the first and second laws of learning: during learning, the total entropy must decrease until the system reaches equilibrium (i.e. the second law), and, in equilibrium, an increment in the loss function must be proportional to an increment in the thermodynamic entropy plus an increment in the complexity (i.e. the first law). We calculate the entropy destruction to show that the efficiency of learning is given by the Laplacian of the total free energy, which is to be maximized in an optimal neural architecture, and explain why the optimization condition is better satisfied in a deep network with a large number of hidden layers. The key properties of the model are verified numerically by training a supervised feedforward neural network using the method of stochastic gradient descent. We also discuss the possibility that the entire universe on its most fundamental level is a neural network.
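The numerical check mentioned in the abstract is the training of a supervised feedforward network by stochastic gradient descent. The sketch below is only an illustration of that setup, not the paper's code: the toy architecture, data, hyperparameters, and variable names are assumptions made here, and the comments map the pieces onto the septuple informally (weight matrices, bias vectors, activation map, and a loss imposed on the output, i.e. boundary, neurons).

```python
# Illustrative sketch (assumed, not from the paper): a small supervised
# feedforward network trained by stochastic gradient descent, with comments
# loosely mapping the components onto the "septuple" of the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised data: inputs fed to the input (boundary) neurons,
# targets compared against the output (boundary) neurons.
X = rng.uniform(-1.0, 1.0, size=(256, 2))
Y = np.sin(X[:, :1] + X[:, 1:])

# (4) weight matrices and (5) bias vectors of the network
W1 = rng.normal(scale=0.5, size=(2, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

f = np.tanh                                      # (6) activation map

def forward(x):
    """Propagate the (1) state vector; h plays the role of the bulk (hidden) neurons."""
    h = f(x @ W1 + b1)
    return h, h @ W2 + b2                        # output (boundary) state

def boundary_loss(pred, y):
    """(7) loss function imposed on the output (boundary) neurons."""
    return 0.5 * np.mean((pred - y) ** 2)

lr, batch = 0.05, 32
for step in range(2001):
    idx = rng.integers(0, len(X), size=batch)    # stochastic mini-batch
    x, y = X[idx], Y[idx]
    h, pred = forward(x)
    # backpropagate the boundary loss to the weights and biases
    d_out = (pred - y) / batch                   # dL/d(pred) for the mean squared error
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)        # tanh'(z) = 1 - tanh(z)**2
    dW1, db1 = x.T @ d_h, d_h.sum(axis=0)
    # stochastic gradient descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    if step % 500 == 0:
        print(f"step {step:4d}  loss {boundary_loss(forward(X)[1], Y):.5f}")
```

In the paper's language, quantities such as the thermodynamic entropy, the complexity, and the free energy would be computed from the ensemble of state vectors over such a training run; they are not implemented in this toy sketch.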


Related research

The world as a neural network (08/04/2020)
We discuss a possibility that the entire universe on its most fundamenta...

Energy-entropy competition and the effectiveness of stochastic gradient descent in machine learning (03/05/2018)
Finding parameters that minimise a loss function is at the core of many ...

Thermodynamics as Combinatorics: A Toy Theory (05/16/2022)
We discuss a simple toy model which allows, in a natural way, for derivi...

Low-Rank plus Sparse Decomposition of Covariance Matrices using Neural Network Parametrization (08/01/2019)
This paper revisits the problem of decomposing a positive semidefinite m...

Emergent Quantumness in Neural Networks (12/09/2020)
It was recently shown that the Madelung equations, that is, a hydrodynam...

RECAL: Reuse of Established CNN classifer Apropos unsupervised Learning paradigm (06/15/2019)
Recently, clustering with deep network framework has attracted attention...

Convergence Analysis of the Dynamics of a Special Kind of Two-Layered Neural Networks with ℓ_1 and ℓ_2 Regularization (11/19/2017)
In this paper, we made an extension to the convergence analysis of the d...
