Tight Risk Bounds for Gradient Descent on Separable Data

03/02/2023
by Matan Schliserman, et al.

We study the generalization properties of unregularized gradient methods applied to separable linear classification, a setting that has received considerable attention since the pioneering work of Soudry et al. (2018). We establish tight upper and lower (population) risk bounds for gradient descent in this setting, for any smooth loss function, expressed in terms of its tail decay rate. Our bounds take the form Θ(r_ℓ,T^2/(γ^2 T) + r_ℓ,T^2/(γ^2 n)), where T is the number of gradient steps, n is the size of the training set, γ is the data margin, and r_ℓ,T is a complexity term that depends on the tail decay rate of the loss function (and on T). Our upper bound matches the best known upper bounds due to Shamir (2021) and Schliserman and Koren (2022), while extending their applicability to virtually any smooth loss function and relaxing the technical assumptions they impose. Our risk lower bounds are the first in this context and establish the tightness of our upper bounds for any given tail decay rate and in all parameter regimes. The proof technique used to obtain these results is also markedly simpler than in previous work and extends straightforwardly to other gradient methods; we illustrate this by providing analogous results for Stochastic Gradient Descent.
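To make the stated rate concrete, here is the bound from the abstract written out as a LaTeX display; the reading of the two terms is an inference from the abstract, with the first term governed by the number of gradient steps T and the second by the sample size n:

    % Population risk of gradient descent after T steps on n separable examples,
    % in the notation of the abstract (margin γ, loss-dependent complexity term r_{ℓ,T}).
    \mathrm{Risk}_{\mathrm{GD}}(T, n)
      \;=\; \Theta\!\left(
        \frac{r_{\ell,T}^{2}}{\gamma^{2}\, T}
        \;+\;
        \frac{r_{\ell,T}^{2}}{\gamma^{2}\, n}
      \right).

For exponentially-tailed losses such as the logistic loss, one would expect r_ℓ,T to grow only polylogarithmically in T, in which case the display recovers, up to logarithmic factors, the 1/(γ^2 T) + 1/(γ^2 n) rates of Shamir (2021) and Schliserman and Koren (2022) that the abstract refers to; this reading is an illustration, not a claim quoted from the paper.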
