On the Power of Differentiable Learning versus PAC and SQ Learning

08/09/2021
by Emmanuel Abbe et al.

We study the power of learning via mini-batch stochastic gradient descent (SGD) on the population loss, and batch Gradient Descent (GD) on the empirical loss, of a differentiable model or neural network, and ask which learning problems can be learned with these paradigms. We show that SGD and GD can always simulate learning with statistical queries (SQ), but their ability to go beyond that depends on the precision ρ of the gradient calculations relative to the mini-batch size b (for SGD) and sample size m (for GD). With fine enough precision relative to the mini-batch size, namely when bρ is small enough, SGD can go beyond SQ learning and simulate any sample-based learning algorithm, and thus its learning power is equivalent to that of PAC learning; this extends prior work that achieved this result for b=1. Similarly, with fine enough precision relative to the sample size m, GD can also simulate any sample-based learning algorithm based on m samples. In particular, with polynomially many bits of precision (i.e., when ρ is exponentially small), SGD and GD can both simulate PAC learning regardless of the mini-batch size. On the other hand, when bρ^2 is large enough, the power of SGD is equivalent to that of SQ learning.
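
To make the two quantities in the abstract concrete, below is a minimal, illustrative Python/NumPy sketch (not from the paper) of mini-batch SGD in which every mini-batch gradient is rounded to precision ρ before the update. The batch size b and the rounding grid rho are the two parameters whose interplay (bρ small versus bρ^2 large) the results concern. The helper names quantize, minibatch_sgd, and grad_fn are hypothetical, and modelling finite precision as rounding to a ρ-grid is an assumption made here for illustration, not the paper's construction.

import numpy as np

def quantize(g, rho):
    """Round each gradient coordinate to the nearest multiple of rho,
    one simple way to model gradients computed with precision rho
    (illustrative assumption, not the paper's formalization)."""
    return rho * np.round(np.asarray(g) / rho)

def minibatch_sgd(grad_fn, w0, data, b, rho, lr=0.05, steps=500, seed=0):
    """Mini-batch SGD in which every mini-batch gradient is reported only
    up to precision rho; the pair (b, rho) is what the abstract's results
    say governs whether such dynamics can exceed SQ learning."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        idx = rng.choice(len(data), size=b, replace=False)
        g = np.mean([grad_fn(w, data[i]) for i in idx], axis=0)  # mini-batch gradient
        w -= lr * quantize(g, rho)                               # rho-precision update
    return w

# Toy usage: least-squares regression on synthetic data (illustrative only).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(256, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=256)
    data = list(zip(X, y))

    def grad_fn(w, example):
        x, t = example
        return 2.0 * (x @ w - t) * x  # gradient of the squared loss at one example

    w_hat = minibatch_sgd(grad_fn, np.zeros(5), data, b=8, rho=1e-4)
    print(w_hat)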


