SGD Through the Lens of Kolmogorov Complexity

11/10/2021
by Gregory Schwartzman, et al.

We prove that stochastic gradient descent (SGD) finds a solution that achieves (1-ϵ) classification accuracy on the entire dataset. We do so under two main assumptions: (1) local progress: the model's accuracy improves consistently over batches; and (2) models compute simple functions: the function computed by the model is simple, i.e., has low Kolmogorov complexity. Intuitively, this means that local progress of SGD implies global progress. Assumption 2 holds trivially for underparameterized models; hence, our work gives the first convergence guarantee for general underparameterized models. Furthermore, this is the first result that is completely model-agnostic: we do not require the model to have any specific architecture or activation function, and it need not even be a neural network. Our analysis makes use of the entropy compression method, first introduced by Moser and Tardos in the context of the Lovász local lemma.
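To make the "local progress" assumption concrete, here is a minimal, hypothetical sketch (not the paper's construction or analysis): a logistic-regression model trained with plain SGD on synthetic data, instrumented to measure how much accuracy on each batch improves after the gradient step taken on that batch. The dataset, model, and hyperparameters are illustrative choices only; the paper's result is model-agnostic and does not depend on this setup.

```python
# Toy illustration of the "local progress" quantity: per-batch accuracy
# improvement after each SGD step, alongside accuracy on the full dataset.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, roughly linearly separable binary data (illustrative assumption).
n, d = 2000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def accuracy(w, Xb, yb):
    # Fraction of examples whose sign prediction matches the label.
    return float(np.mean(((Xb @ w) > 0) == yb))

w = np.zeros(d)
lr, batch_size, epochs = 0.5, 50, 5
local_progress = []  # accuracy gain on each batch immediately after its step

for epoch in range(epochs):
    perm = rng.permutation(n)
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]
        Xb, yb = X[idx], y[idx]
        acc_before = accuracy(w, Xb, yb)
        # Gradient of the logistic loss on the current batch.
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
        grad = Xb.T @ (p - yb) / len(idx)
        w -= lr * grad
        local_progress.append(accuracy(w, Xb, yb) - acc_before)

print(f"mean local progress per batch: {np.mean(local_progress):+.4f}")
print(f"accuracy on the entire dataset: {accuracy(w, X, y):.3f}")
```

In this sketch the monitored quantity is only a stand-in for the paper's formal local-progress condition; the linear model is used purely because it is easy to run, not because the result requires it.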
