Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond

02/27/2022
by   Matan Schliserman, et al.

An influential line of recent work has focused on the generalization properties of unregularized gradient-based learning procedures applied to separable linear classification with exponentially-tailed loss functions. The ability of such methods to generalize well has been attributed to their implicit bias towards large-margin predictors, both asymptotically and in finite time. We give an additional, unified explanation for this generalization and relate it to two simple properties of the optimization objective, which we refer to as realizability and self-boundedness. We introduce a general setting of unconstrained stochastic convex optimization with these properties, and analyze the generalization of gradient methods through the lens of algorithmic stability. In this broader setting, we obtain sharp stability bounds for gradient descent and stochastic gradient descent which apply even for a very large number of gradient steps, and use them to derive general generalization bounds for these algorithms. Finally, as direct applications of the general bounds, we return to the setting of linear classification with separable data and establish several novel test-loss and test-accuracy bounds for gradient descent and stochastic gradient descent for a variety of loss functions with different tail decay rates. In some of these cases, our bounds significantly improve upon the existing generalization error bounds in the literature.
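
As a concrete illustration of the setting described above, the following is a minimal sketch (not taken from the paper) of unregularized full-batch gradient descent on a linearly separable classification problem with the logistic loss, an exponentially-tailed loss. The synthetic data, step size, and iteration budget are illustrative assumptions; the snippet simply tracks the training loss and the normalized margin of the iterates.

```python
# Minimal sketch: gradient descent on separable linear classification with the
# logistic loss. Data generation, step size, and iteration count are assumptions
# chosen for illustration, not the paper's experimental setup.
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable synthetic data: labels given by a ground-truth direction.
n, d = 200, 5
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)

def logistic_loss_and_grad(w):
    # Average logistic loss (1/n) * sum_i log(1 + exp(-y_i <x_i, w>)) and its gradient.
    margins = y * (X @ w)
    loss = np.mean(np.log1p(np.exp(-margins)))
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    return loss, grad

# Unregularized full-batch gradient descent with a constant step size.
w = np.zeros(d)
eta = 0.5  # illustrative step size
for t in range(1, 50001):
    loss, grad = logistic_loss_and_grad(w)
    w -= eta * grad
    if t % 10000 == 0:
        # Normalized margin of the current predictor: it keeps growing
        # even as the training loss tends to zero.
        norm_margin = np.min(y * (X @ w)) / np.linalg.norm(w)
        print(f"step {t:6d}  train loss {loss:.3e}  normalized margin {norm_margin:.4f}")
```

Running the sketch shows the behavior the abstract refers to: the training loss decays towards zero while the normalized margin of the iterates increases, consistent with the implicit bias of gradient descent towards large-margin predictors on separable data.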

Related research

09/15/2022 · Decentralized Learning with Separable Data: Generalization and Fast Algorithms
Decentralized learning offers privacy and communication efficiency when ...

06/30/2020 · Gradient Methods Never Overfit On Separable Data
A line of recent works established that when training linear predictors ...

03/02/2023 · Tight Risk Bounds for Gradient Descent on Separable Data
We study the generalization properties of unregularized gradient methods...

06/09/2019 · The Implicit Bias of AdaGrad on Separable Data
We study the implicit bias of AdaGrad on separable linear classification...

05/22/2023 · Fast Convergence in Learning Two-Layer Neural Networks with Separable Data
Normalized gradient descent has shown substantial success in speeding up...

05/27/2023 · Faster Margin Maximization Rates for Generic Optimization Methods
First-order optimization methods tend to inherently favor certain soluti...

02/03/2021 · The Instability of Accelerated Gradient Descent
We study the algorithmic stability of Nesterov's accelerated gradient me...
