A Hessian inversion-free exact second order method for distributed consensus optimization

04/06/2022
by   Dusan Jakovetic, et al.

We consider a standard distributed consensus optimization problem in which a set of agents connected over an undirected network minimizes the sum of their individual strongly convex local costs. The Alternating Direction Method of Multipliers (ADMM) and the Proximal Method of Multipliers (PMM) have proved to be effective frameworks for designing exact distributed second-order methods involving computation of local cost Hessians. However, existing methods require explicit computation of local Hessian inverses at each iteration, which can be very costly when the dimension of the optimization variable is large. In this paper we develop a novel method, termed INDO (Inexact Newton method for Distributed Optimization), that eliminates the need for Hessian inverse computation. INDO follows the PMM framework but, unlike existing work, approximates the Newton direction through a generic fixed-point method, e.g., Jacobi Overrelaxation, that does not involve Hessian inverses. We prove exact global linear convergence of INDO and provide an analytical study of how the degree of inexactness in the Newton direction computation affects the overall method's convergence factor. Numerical experiments on several real data sets demonstrate that INDO's speed is on par with or better than that of state-of-the-art methods iteration-wise, hence incurring a comparable communication cost. At the same time, for sufficiently large optimization problem dimensions n (even for n on the order of a couple of hundred), INDO reduces the computational cost by at least an order of magnitude.
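To illustrate the core idea of avoiding Hessian inversion, below is a minimal sketch of how a fixed-point scheme such as Jacobi Overrelaxation (JOR) can approximate a Newton direction d solving H d = -g using only matrix-vector products and the Hessian diagonal. The function name, the parameters (omega, num_inner_iters, tol), and the quadratic test problem are illustrative assumptions, not the paper's actual INDO implementation.

```python
# Sketch: approximate the Newton direction via Jacobi Overrelaxation (JOR),
# i.e., solve H d = -g iteratively without ever forming H^{-1}.
# All names and parameter values here are illustrative assumptions.
import numpy as np

def jor_newton_direction(H, g, omega=0.5, num_inner_iters=100, tol=1e-10):
    """Approximately solve H d = -g by JOR fixed-point iterations.

    H     : (n, n) symmetric positive definite Hessian
    g     : (n,) gradient
    omega : relaxation parameter (assumed small enough for convergence)
    """
    diag = np.diag(H)            # D = diag(H); positive entries since H is SPD
    d = np.zeros_like(g)         # start from the zero direction
    for _ in range(num_inner_iters):
        # JOR update: d <- (1 - omega) * d + omega * D^{-1} (-g - (H - D) d)
        rhs = -g - (H @ d - diag * d)
        d_next = (1.0 - omega) * d + omega * rhs / diag
        if np.linalg.norm(d_next - d) <= tol:
            return d_next
        d = d_next
    return d

# Usage on a small SPD system (illustrative):
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 10.0 * np.eye(5)   # well-conditioned SPD test Hessian
g = rng.standard_normal(5)
d = jor_newton_direction(H, g)
print(np.linalg.norm(H @ d + g))  # small residual => good Newton direction
```

Each inner iteration costs one Hessian-vector product (O(n^2) for a dense Hessian, less for sparse ones) rather than the O(n^3) of an explicit inverse, which is the source of the computational savings the abstract describes for large n.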

Related research

- 07/18/2018: Distributed Second-order Convex Optimization. Convex optimization problems arise frequently in diverse machine learnin...
- 12/01/2022: Second-order optimization with lazy Hessians. We analyze Newton's method with lazy Hessian updates for solving general...
- 02/18/2020: Distributed Adaptive Newton Methods with Globally Superlinear Convergence. This paper considers the distributed optimization problem over a network...
- 02/12/2021: Newton Method over Networks is Fast up to the Statistical Precision. We propose a distributed cubic regularization of the Newton method for s...
- 06/17/2022: FedNew: A Communication-Efficient and Privacy-Preserving Newton-Type Method for Federated Learning. Newton-type methods are popular in federated learning due to their fast ...
- 02/01/2018: Distributed Newton Methods for Deep Neural Networks. Deep learning involves a difficult non-convex optimization problem with ...
- 11/08/2019: Penalty Method for Inversion-Free Deep Bilevel Optimization. Bilevel optimizations are at the center of several important machine lea...
