Energy-entropy competition and the effectiveness of stochastic gradient descent in machine learning

03/05/2018
by Yao Zhang et al.

Finding parameters that minimise a loss function is at the core of many machine learning methods. The stochastic gradient descent (SGD) algorithm is widely used and delivers state-of-the-art results for many problems. Nonetheless, SGD typically cannot find the global minimum, so its empirical effectiveness has hitherto been mysterious. We derive a correspondence between parameter inference and free-energy minimisation in statistical physics, in which the degree of undersampling plays the role of temperature. Analogous to the energy-entropy competition in statistical physics, wide but shallow minima can be optimal if the system is undersampled, as is typical in many applications. Moreover, we show that the stochasticity in the algorithm has a non-trivial correlation structure which systematically biases it towards wide minima. We illustrate our argument with two prototypical models: image classification using deep learning, and a linear neural network where we can analytically reveal the relationship between entropy and out-of-sample error.
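The bias towards wide minima can be illustrated with a minimal sketch: noisy gradient descent on a hypothetical one-dimensional loss with a narrow, deep well and a wide, shallow well (all parameter values below are illustrative choices, not from the paper). At low noise, chains started in the narrow minimum stay there; at higher noise, playing the role of temperature, entropy wins and most chains end up in the wide basin despite its higher loss.

```python
import numpy as np

def grad(x):
    # Gradient of a toy loss with a narrow deep well at x = -2
    # (depth 1.0, width 0.15) and a wide shallow well at x = +2
    # (depth 0.8, width 1.0); both wells are Gaussian.
    return (1.0 * (x + 2.0) / 0.15**2 * np.exp(-(x + 2.0) ** 2 / (2 * 0.15**2))
            + 0.8 * (x - 2.0) * np.exp(-(x - 2.0) ** 2 / 2.0))

def noisy_gd(sigma, n_chains=300, n_steps=5000, eta=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(n_chains, -2.0)  # start every chain in the narrow, deeper minimum
    for _ in range(n_steps):
        x = x - eta * grad(x) + sigma * rng.standard_normal(n_chains)
        x = np.clip(x, -5.0, 5.0)  # keep iterates in a bounded window
    # Fraction of chains that end in the wide, shallow basin around x = +2.
    return float(np.mean(np.abs(x - 2.0) < 1.5))

frac_low = noisy_gd(sigma=0.01)  # low effective temperature: chains stay put
frac_high = noisy_gd(sigma=0.1)  # high effective temperature: wide basin wins
print(frac_low, frac_high)
```

The effective temperature here is set by the noise scale (roughly sigma squared over twice the step size), so raising sigma shifts the energy-entropy balance towards the wider basin, mirroring the paper's argument that undersampling favours wide but shallow minima.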


