Acceleration of Descent-based Optimization Algorithms via Carathéodory's Theorem

06/02/2020
by Francesco Cosentino, et al.

We propose a new technique to accelerate algorithms based on Gradient Descent using Carathéodory's Theorem. In the case of the standard Gradient Descent algorithm, we analyse the theoretical convergence of the approach under convexity assumptions and empirically demonstrate its improvements. As a core contribution, we then present an application of the acceleration technique to Block Coordinate Descent methods. Experimental comparisons on least squares regression with a LASSO regularisation term show that the accelerated method remarkably outperforms the ADAM and SAG algorithms.
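To make the idea behind the title concrete, below is a minimal sketch of the Carathéodory-type measure reduction on which such accelerations rest: given N non-negatively weighted points in R^d, Carathéodory's theorem guarantees an equivalent representation supported on at most d + 1 of them, with the total weight and the weighted sum preserved. The function name `caratheodory_reduce` and the SVD-based null-space construction are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def caratheodory_reduce(points, weights, tol=1e-12):
    """Reduce a non-negatively weighted set of N points in R^d to a subset of
    at most d + 1 points carrying the same total weight and the same weighted
    sum (hence the same weighted mean), as guaranteed by Caratheodory's theorem.
    """
    pts = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    d = pts.shape[1]

    while pts.shape[0] > d + 1:
        n = pts.shape[0]
        # Stack the points with a row of ones: any null-space vector c then
        # satisfies  sum_i c_i x_i = 0  and  sum_i c_i = 0.
        A = np.vstack([pts.T, np.ones(n)])      # shape (d + 1, n)
        c = np.linalg.svd(A)[2][-1]             # null-space direction (exists since n > d + 1)
        pos = c > tol
        if not pos.any():                       # numerical safety: flip the sign of c
            c, pos = -c, -c > tol
        # Largest step that keeps every weight non-negative; it drives at least
        # one weight to zero while leaving sum(w) and sum(w_i x_i) unchanged.
        alpha = np.min(w[pos] / c[pos])
        w = w - alpha * c
        keep = w > tol
        pts, w = pts[keep], w[keep]

    return pts, w
```

In the setting of the abstract, the reduced points would play the role of per-sample gradient contributions (or per-block quantities in the Block Coordinate Descent variant), so a descent step computed on the small weighted subset matches the full-data step. The sketch above is the generic textbook construction; the paper's own recombination procedure may differ in how the reduction is computed and refreshed during the iterations.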


