A Stochastic Newton Algorithm for Distributed Convex Optimization

10/07/2021
by   Brian Bullins, et al.

We propose and analyze a stochastic Newton algorithm for homogeneous distributed stochastic convex optimization, where each machine can calculate stochastic gradients of the same population objective, as well as stochastic Hessian-vector products (products of an independent unbiased estimator of the Hessian of the population objective with arbitrary vectors), with many such stochastic computations performed between rounds of communication. We show that our method can reduce the number and frequency of required communication rounds compared to existing methods without hurting performance, by proving convergence guarantees for quasi-self-concordant objectives (e.g., logistic regression) and supporting them with empirical evidence.
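To make the structure described above concrete, below is a minimal, illustrative sketch (not the paper's actual algorithm) of a distributed Newton-type round for logistic regression: each machine approximately solves a damped Newton system by conjugate gradient, drawing a fresh stochastic Hessian-vector product at every CG step, and the resulting local directions are averaged in a single communication round. The function names, the damping constant, the CG step budget, and the averaging rule are all assumptions made for illustration.

```python
import numpy as np

def stochastic_grad(w, batch):
    """Unbiased stochastic gradient of the logistic loss on a mini-batch (X, y), y in {0, 1}."""
    X, y = batch
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / X.shape[0]

def stochastic_hvp(w, v, batch):
    """Unbiased stochastic Hessian-vector product for logistic regression:
    H(w) v = X^T diag(p * (1 - p)) X v / n, estimated on a mini-batch."""
    X, _ = batch
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p * (1 - p) * (X @ v)) / X.shape[0]

def local_newton_direction(w, grad, sample_batch, num_cg_steps=20, damping=1e-3):
    """Approximately solve (H + damping * I) d = grad via conjugate gradient,
    using a fresh stochastic HVP (a new mini-batch) at each CG iteration."""
    d = np.zeros_like(w)
    r = grad.copy()           # residual of the linear system at the current d
    p = r.copy()
    rs = r @ r
    for _ in range(num_cg_steps):
        Hp = stochastic_hvp(w, p, sample_batch()) + damping * p
        alpha = rs / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

def distributed_newton_round(w, machines, step_size=1.0):
    """One communication round: every machine forms a local Newton-type direction
    from its own stochastic gradients/HVPs; the directions are then averaged."""
    directions = []
    for sample_batch in machines:   # in practice these run in parallel on separate machines
        g = stochastic_grad(w, sample_batch())
        directions.append(local_newton_direction(w, g, sample_batch))
    return w - step_size * np.mean(directions, axis=0)
```

In this sketch, all Hessian information enters only through matrix-free Hessian-vector products, and each machine performs many local stochastic computations (one gradient plus up to `num_cg_steps` HVPs) per single round of communication, which is the regime the abstract describes.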
