
Revisiting Subgradient Method: Complexity and Convergence Beyond Lipschitz Continuity

by   Xiao Li, et al.

The subgradient method is one of the most fundamental algorithmic schemes for nonsmooth optimization. The existing complexity and convergence results for this algorithm are mainly derived for Lipschitz continuous objective functions. In this work, we first extend the typical complexity results for the subgradient method to convex and weakly convex minimization without assuming Lipschitz continuity. Specifically, we establish an π’ͺ(1/√T) bound in terms of the suboptimality gap f(x) - f^* for the convex case and an π’ͺ(1/T^{1/4}) bound in terms of the gradient of the Moreau envelope function for the weakly convex case. Furthermore, we provide convergence results for non-Lipschitz convex and weakly convex objective functions using proper diminishing rules on the step sizes. In particular, when f is convex, we show an π’ͺ(log(k)/√k) rate of convergence in terms of the suboptimality gap. With an additional quadratic growth condition, the rate is improved to π’ͺ(1/k) in terms of the squared distance to the optimal solution set. When f is weakly convex, asymptotic convergence is derived. The central idea is that the dynamics of a properly chosen step size rule fully controls the movement of the subgradient method, which leads to boundedness of the iterates; a trajectory-based analysis can then be conducted to establish the desired results. To further illustrate the wide applicability of our framework, we extend the complexity results to the truncated subgradient, the stochastic subgradient, the incremental subgradient, and the proximal subgradient methods for non-Lipschitz functions.
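The scheme analyzed here is the classical subgradient iteration x_{k+1} = x_k - α_k g_k with g_k ∈ ∂f(x_k) and a diminishing step size rule. The following is a minimal generic sketch of that iteration, not the paper's exact scheme; the rule α_k = c/√(k+1) and the constant c are illustrative choices, and the test objective f(x) = ‖x‖² is chosen because it is convex but not globally Lipschitz (its gradient 2x is unbounded), matching the non-Lipschitz setting.

```python
import numpy as np

def subgradient_method(subgrad, x0, T, c=0.1):
    """Run T subgradient steps x <- x - alpha_k * g, g in the subdifferential,
    with the diminishing rule alpha_k = c / sqrt(k + 1).

    Illustrative sketch only: the step-size constant `c` and the 1/sqrt(k)
    decay are generic choices, not tuned to any particular analysis.
    """
    x = np.array(x0, dtype=float)
    for k in range(T):
        g = subgrad(x)              # any subgradient at the current iterate
        alpha = c / np.sqrt(k + 1)  # diminishing (non-summable) step size
        x = x - alpha * g
    return x

# f(x) = ||x||^2: convex, minimized at 0, gradient 2x is unbounded,
# so f is not Lipschitz continuous on R^n.
f = lambda x: float(np.dot(x, x))
subgrad = lambda x: 2.0 * x

x = subgradient_method(subgrad, x0=[5.0, -3.0], T=2000, c=0.1)
print(f(x))  # suboptimality gap f(x) - f^*, since f^* = 0
```

Because the steps are non-summable (Σ α_k = ∞) yet α_k → 0, the iterates contract toward the minimizer; for nonsmooth objectives one would typically also track the best iterate, since the function values need not decrease monotonically.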


