Practical Sharpness-Aware Minimization Cannot Converge All the Way to Optima

06/16/2023
by Dongkuk Si, et al.

Sharpness-Aware Minimization (SAM) is an optimizer that takes a descent step based on the gradient at a perturbation y_t = x_t + ρ ∇f(x_t)/‖∇f(x_t)‖ of the current point x_t. Existing studies prove convergence of SAM for smooth functions, but they do so by assuming a decaying perturbation size ρ and/or no gradient normalization in y_t, which is detached from practice. To address this gap, we study deterministic and stochastic versions of SAM with practical configurations (i.e., constant ρ and gradient normalization in y_t) and explore their convergence properties on smooth functions with (non)convexity assumptions. Perhaps surprisingly, in many scenarios, we find that SAM has limited capability to converge to global minima or stationary points. For smooth strongly convex functions, we show that while deterministic SAM enjoys tight global convergence rates of Θ̃(1/T²), the convergence bound of stochastic SAM suffers an inevitable additive term O(ρ²), indicating convergence only up to neighborhoods of optima. In fact, such O(ρ²) factors arise for stochastic SAM in all the settings we consider, and also for deterministic SAM in nonconvex cases; importantly, we prove by examples that such terms are unavoidable. Our results highlight vastly different characteristics of SAM with vs. without decaying perturbation size or gradient normalization, and suggest that the intuitions gained from one version may not apply to the other.
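The practical SAM update described above can be made concrete with a short sketch. The Python snippet below is a minimal illustration under assumptions, not code from the paper: the names `sam_step`, `grad_fn`, `eta`, and `eps` are hypothetical. It performs one deterministic SAM step with constant ρ and gradient normalization, then runs it on a smooth strongly convex quadratic.

```python
import numpy as np

def sam_step(x, grad_fn, rho, eta, eps=1e-12):
    """One deterministic SAM step with constant perturbation size rho
    and gradient normalization (illustrative sketch only)."""
    g = grad_fn(x)
    # Perturbed point y_t = x_t + rho * grad f(x_t) / ||grad f(x_t)||
    y = x + rho * g / (np.linalg.norm(g) + eps)
    # Descent step based on the gradient evaluated at the perturbed point
    return x - eta * grad_fn(y)

# Example: f(x) = 0.5 * ||x||^2, a smooth strongly convex function
grad_fn = lambda x: x
x = np.array([3.0, -2.0])
for _ in range(200):
    x = sam_step(x, grad_fn, rho=0.1, eta=0.1)
# With a constant step size, the normalized perturbation keeps the iterates
# oscillating in a small neighborhood of the optimum rather than converging exactly.
print(np.linalg.norm(x))
```

A stochastic variant would replace `grad_fn` with a minibatch gradient oracle at both evaluation points; this is the setting in which the abstract reports the unavoidable additive O(ρ²) term in the convergence bound.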

