Mirror Descent Maximizes Generalized Margin and Can Be Implemented Efficiently

05/25/2022
by Haoyuan Sun, et al.

Driven by the empirical success and wide use of deep neural networks, understanding the generalization performance of overparameterized models has become an increasingly important question. To this end, there has been substantial effort to characterize the implicit bias of the optimization algorithms used, such as gradient descent (GD), and the structural properties of their preferred solutions. This paper answers an open question in this literature: in the classification setting, what solution does mirror descent (MD) converge to? Specifically, motivated by its efficient implementation, we consider the family of mirror descent algorithms whose potential function is chosen as the p-th power of the ℓ_p-norm, an important generalization of GD. We call this algorithm p-GD. For this family, we characterize the solutions it obtains and show that it converges in direction to a generalized maximum-margin solution with respect to the ℓ_p-norm for linearly separable classification. While the MD update rule is in general expensive to compute and perhaps not suitable for deep learning, p-GD is fully parallelizable in the same manner as SGD and can be used to train deep neural networks with virtually no additional computational overhead. Using comprehensive experiments with both linear and deep neural network models, we demonstrate that p-GD can noticeably affect the structure and the generalization performance of the learned models.
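To make the update rule concrete, here is a minimal sketch of a p-GD step for a linear classifier, assuming the potential ψ(w) = (1/p)‖w‖_p^p, so that the mirror map ∇ψ(w) = sign(w)|w|^(p−1) and its inverse are applied coordinate-wise. The helper names (p_gd_step, logistic_loss_grad) and the toy separable data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mirror_map(w, p):
    # Gradient of psi(w) = (1/p) * ||w||_p^p, applied coordinate-wise.
    return np.sign(w) * np.abs(w) ** (p - 1)

def inverse_mirror_map(z, p):
    # Invert the mirror map: recover w from z = sign(w) * |w|^(p-1).
    return np.sign(z) * np.abs(z) ** (1.0 / (p - 1))

def logistic_loss_grad(w, X, y):
    # Gradient of the average logistic loss with labels y in {-1, +1}.
    margins = y * (X @ w)
    return -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)

def p_gd_step(w, grad, lr, p):
    # One p-GD step: take the gradient step in the dual (mirror) coordinates.
    z = mirror_map(w, p) - lr * grad
    return inverse_mirror_map(z, p)

# Toy linearly separable problem (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = np.sign(X @ w_true)

p, lr = 3.0, 0.1
w = np.zeros(5)  # zero initialization
for _ in range(2000):
    w = p_gd_step(w, logistic_loss_grad(w, X, y), lr, p)

print(w / np.linalg.norm(w, ord=p))  # direction of the learned classifier
```

The efficiency claim in the abstract is visible in this sketch: because the mirror map and its inverse act coordinate-wise, a p-GD step costs essentially the same as a GD step and parallelizes in the same way.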

Related research

06/24/2023  A Unified Approach to Controlling Implicit Regularization via Mirror Descent
08/15/2021  Implicit Regularization of Bregman Proximal Point Algorithm and Mirror Descent on Separable Data
12/11/2020  The Implicit Bias for Adaptive Optimization Algorithms on Homogeneous Neural Networks
05/23/2017  The Marginal Value of Adaptive Gradient Methods in Machine Learning
11/28/2020  On Generalization of Adaptive Methods for Over-parameterized Linear Regression
06/15/2023  Stochastic Re-weighted Gradient Descent via Distributionally Robust Optimization
06/09/2019  The Implicit Bias of AdaGrad on Separable Data
