A subgradient method with constant step-size for ℓ_1-composite optimization

02/23/2023
by Alessandro Scagliotti et al.

Subgradient methods are the natural extension of classical gradient descent to non-smooth convex optimization problems. In general, however, they exhibit slow convergence rates and require decreasing step-sizes to converge. In this paper we propose a subgradient method with constant step-size for composite convex objectives with ℓ_1-regularization. If the smooth term is strongly convex, we establish a linear convergence rate for the function values. This result relies on a careful choice of the subgradient element used in the update, and on suitable corrective actions taken when regions of non-differentiability are crossed. We then propose an accelerated version of the algorithm, based on conservative inertial dynamics and an adaptive restart strategy. Finally, we test the performance of our algorithms on strongly and non-strongly convex examples.
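To make the described update concrete, here is a minimal illustrative sketch in Python. It is not the authors' exact scheme (that is in the full text): it assumes an objective F(x) = f(x) + λ‖x‖_1 with grad_f available in closed form, picks the minimal-norm element of the subdifferential at coordinates where x_i = 0, and clamps a coordinate to zero whenever a step would carry it across the non-differentiability region. The helper name l1_subgradient_step is hypothetical.

```python
import numpy as np

def l1_subgradient_step(x, grad_f, lam, alpha):
    """One constant-step update for F(x) = f(x) + lam * ||x||_1 (illustrative).

    Where x_i != 0, the subdifferential of lam * |x_i| is {lam * sign(x_i)};
    where x_i == 0, it is the interval [-lam, lam], and we take the element
    giving the minimal-norm subgradient of F (soft-thresholding grad_f).
    """
    g = grad_f(x).copy()
    nz = x != 0
    g[nz] += lam * np.sign(x[nz])
    # Minimal-norm subdifferential element at the zero coordinates.
    g[~nz] = np.sign(g[~nz]) * np.maximum(np.abs(g[~nz]) - lam, 0.0)
    x_new = x - alpha * g
    # If a coordinate would cross the non-differentiability region at 0,
    # stop it exactly at 0 instead of stepping over.
    x_new[x * x_new < 0.0] = 0.0
    return x_new

# Example: strongly convex smooth term f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
grad_f = lambda x: A.T @ (A @ x - b)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # constant step-size, 1 / L
x = np.zeros(20)
for _ in range(500):
    x = l1_subgradient_step(x, grad_f, lam=0.1, alpha=alpha)
```

The accelerated variant couples the same step with inertial dynamics and an adaptive restart; a generic heavy-ball-style rendering of that idea (again only a sketch, with a function-value restart test assumed rather than taken from the paper) could look like:

```python
def F(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + 0.1 * np.sum(np.abs(x))

x = np.zeros(20)
x_prev = x
for _ in range(500):
    y = x + 0.9 * (x - x_prev)                      # inertial extrapolation
    x_next = l1_subgradient_step(y, grad_f, lam=0.1, alpha=alpha)
    if F(x_next) > F(x):                            # adaptive restart:
        x_next = l1_subgradient_step(x, grad_f, lam=0.1, alpha=alpha)
        x_prev = x_next                             # kill the momentum
    else:
        x_prev = x
    x = x_next
```

Clamping at zero is what lets the iterates land on (and stay on) the sparse faces that ℓ_1-regularized minimizers typically occupy; a plain subgradient step with constant step-size would instead oscillate around them.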


Related research

11/18/2019 · Convergence Analysis of a Momentum Algorithm with Adaptive Step Size for Non Convex Optimization
Although ADAM is a very popular algorithm for optimizing the weights of ...

04/01/2022 · Learning to Accelerate by the Methods of Step-size Planning
Gradient descent is slow to converge for ill-conditioned problems and no...

07/01/2014 · SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
In this work we introduce a new optimisation method called SAGA in the s...

10/18/2016 · Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server
This paper presents an asynchronous incremental aggregated gradient algo...

09/07/2023 · An Element-wise RSAV Algorithm for Unconstrained Optimization Problems
We present a novel optimization algorithm, element-wise relaxed scalar a...

08/25/2022 · Accelerated Sparse Recovery via Gradient Descent with Nonlinear Conjugate Gradient Momentum
This paper applies an idea of adaptive momentum for the nonlinear conjug...

09/13/2021 · Minimizing Quantum Renyi Divergences via Mirror Descent with Polyak Step Size
Quantum information quantities play a substantial role in characterizing...
