Survey Descent: A Multipoint Generalization of Gradient Descent for Nonsmooth Optimization

11/30/2021
by X. Y. Han, et al.

For strongly convex objectives that are smooth, the classical theory of gradient descent ensures linear convergence relative to the number of gradient evaluations. An analogous nonsmooth theory is challenging. Even when the objective is smooth at every iterate, the corresponding local models are unstable, and the number of cutting planes invoked by traditional remedies is difficult to bound, leading to convergence guarantees that are sublinear relative to the cumulative number of gradient evaluations. We instead propose a multipoint generalization of the gradient descent iteration for local optimization. While designed with general objectives in mind, we are motivated by a “max-of-smooth” model that captures the subdifferential dimension at optimality. We prove linear convergence when the objective is itself max-of-smooth, and experiments suggest a more general phenomenon.
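To make the "multipoint" idea concrete, the following is a minimal numerical sketch, not the paper's exact iteration. It assumes a toy max-of-smooth objective f(x) = max_j 0.5*||x - c_j||^2 (the centers C, the step constants, and the linearization constraint are all our illustrative assumptions): each survey point takes a gradient step on its currently active smooth piece, then is projected onto the region where the other survey points' linear models respect its own guaranteed decrease.

```python
import numpy as np
from scipy.optimize import minimize

# Toy max-of-smooth objective: f(x) = max_j f_j(x), where each piece
# f_j(x) = 0.5*||x - c_j||^2 is 1-smooth and strongly convex.  The centers
# below are made-up illustration data, not taken from the paper.
C = np.array([[1.0, 0.0], [-1.0, 0.0]])   # piece centers (assumed)
L = 1.0                                    # smoothness constant of each piece

def piece_vals(x):
    return 0.5 * np.sum((x - C) ** 2, axis=1)

def f(x):
    return piece_vals(x).max()

def grad(x):
    # Gradient of the currently active piece (a subgradient of f at ties).
    j = int(np.argmax(piece_vals(x)))
    return x - C[j]

def survey_step(survey):
    """One multipoint update in the spirit of the abstract.

    Each survey point takes a classical gradient step on its active piece,
    then is projected onto the half-spaces where the other points'
    linearizations do not exceed its own guaranteed decrease.  The constants
    and constraint form are assumptions, not the paper's exact update.
    """
    new_survey = []
    for i, y in enumerate(survey):
        g = grad(y)
        target = y - g / L                  # plain gradient step
        bound = f(y) - g @ g / (2.0 * L)    # standard descent guarantee
        cons = [{'type': 'ineq',
                 'fun': lambda x, z=z, gz=grad(z), b=bound:
                        b - (f(z) + gz @ (x - z))}
                for j, z in enumerate(survey) if j != i]
        res = minimize(lambda x: np.sum((x - target) ** 2), y,
                       constraints=cons)
        new_survey.append(res.x)
    return new_survey

# One survey point started on each side of the nonsmooth ridge {x1 = 0}.
survey = [np.array([0.5, 1.0]), np.array([-0.5, 1.0])]
for _ in range(30):
    survey = survey_step(survey)
print([f(y) for y in survey])  # values approach min f = 0.5, attained at (0, 0)
```

In this two-piece example, a single iterate of ordinary gradient descent oscillates across the nonsmooth ridge, whereas the survey keeps one point per smooth piece and each subproblem reduces to a projection onto one half-space, so the per-iteration cost stays comparable to a handful of gradient steps.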

