Anderson acceleration of coordinate descent

11/19/2020
by Quentin Bertrand, et al.

Acceleration of first-order methods is mainly obtained via inertial techniques à la Nesterov, or via nonlinear extrapolation. The latter has seen a recent surge of interest, with successful applications to gradient and proximal gradient methods. On many machine learning problems, coordinate descent achieves performance significantly superior to full-gradient methods. Yet speeding up coordinate descent in practice is not easy: inertially accelerated versions of coordinate descent are accelerated in theory, but do not always yield practical speed-ups. We propose an accelerated version of coordinate descent based on extrapolation, showing considerable speed-ups in practice compared to inertially accelerated coordinate descent and extrapolated (proximal) gradient descent. Experiments on least squares, Lasso, elastic net and logistic regression validate the approach.
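To make the idea concrete, here is a minimal sketch of Anderson extrapolation wrapped around cyclic coordinate descent for least squares. It is an illustration under stated assumptions, not the authors' released implementation: the names (anderson_extrapolate, cd_anderson, K) are hypothetical, and the Tikhonov regularization and objective-decrease guard are standard stabilizers rather than necessarily the paper's exact scheme.

```python
# Anderson-extrapolated cyclic coordinate descent for 0.5 * ||Ax - b||^2.
# Hypothetical sketch: names and safeguards are ours, not the paper's.
import numpy as np


def anderson_extrapolate(iterates):
    """Combine K+1 past iterates (rows) into one extrapolated point."""
    U = np.diff(iterates, axis=0)          # successive differences, (K, n)
    G = U @ U.T                            # small (K, K) Gram matrix
    G += 1e-10 * np.eye(G.shape[0])        # regularize for numerical safety
    z = np.linalg.solve(G, np.ones(G.shape[0]))
    c = z / z.sum()                        # coefficients summing to one
    return c @ iterates[1:]                # weighted combination of iterates


def cd_anderson(A, b, n_epochs=100, K=5):
    n_samples, n_features = A.shape
    x = np.zeros(n_features)
    r = A @ x - b                          # residual, kept in sync with x
    lipschitz = (A ** 2).sum(axis=0)       # per-coordinate curvatures
    buf = []
    for _ in range(n_epochs):
        for j in range(n_features):        # one pass of cyclic CD
            if lipschitz[j] == 0.0:
                continue
            old = x[j]
            x[j] -= A[:, j] @ r / lipschitz[j]   # exact minimization in x_j
            r += (x[j] - old) * A[:, j]
        buf.append(x.copy())
        if len(buf) == K + 1:              # extrapolate every K + 1 passes
            x_e = anderson_extrapolate(np.asarray(buf))
            # guarded step: keep the extrapolated point only if it helps
            if np.linalg.norm(A @ x_e - b) < np.linalg.norm(r):
                x = x_e
                r = A @ x - b
            buf = []
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))
    b = rng.standard_normal(100)
    x_hat = cd_anderson(A, b)
    print(0.5 * np.linalg.norm(A @ x_hat - b) ** 2)
```

The guard matters in practice: the extrapolated point is not guaranteed to decrease the objective, so falling back to the plain coordinate descent iterate keeps the method safe.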


Related research

12/30/2015 · Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling
Accelerated coordinate descent is widely used in optimization due to its...

06/02/2020 · Acceleration of Descent-based Optimization Algorithms via Carathéodory's Theorem
We propose a new technique to accelerate algorithms based on Gradient De...

12/15/2017 · Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
We introduce a generic scheme for accelerating gradient-based optimizati...

03/02/2017 · Encrypted accelerated least squares regression
Information that is stored in an encrypted format is, by definition, usu...

10/26/2022 · Coordinate Descent for SLOPE
The lasso is the most famous sparse regression and feature selection met...

12/29/2016 · Geometric descent method for convex composite minimization
In this paper, we extend the geometric descent method recently proposed ...

09/10/2013 · Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization
We introduce a proximal version of the stochastic dual coordinate ascent...
