Network-GIANT: Fully distributed Newton-type optimization via harmonic Hessian consensus

05/13/2023
by Alessio Maritan, et al.

This paper considers the problem of distributed multi-agent learning, where the global aim is to minimize a sum of local objective (empirical loss) functions through local optimization and information exchange between neighbouring nodes. We introduce Network-GIANT, a Newton-type fully distributed optimization algorithm based on GIANT, a federated learning algorithm that relies on a centralized parameter server. Network-GIANT combines gradient tracking with a Newton-type iterative update at each node, together with consensus-based averaging of the local gradient and Newton directions. We prove that, assuming strongly convex and smooth loss functions, the algorithm guarantees semi-global and exponential convergence to the exact solution over the network. We provide empirical evidence of the superior convergence performance of Network-GIANT over state-of-the-art distributed learning algorithms such as Network-DANE and Newton-Raphson Consensus.
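The abstract outlines the algorithmic ingredients: gradient tracking to estimate the global gradient, a local Newton-type step, and consensus averaging of the resulting directions (in GIANT, averaging the locally preconditioned directions amounts to using the harmonic mean of the local Hessians, hence "harmonic Hessian consensus"). Below is a minimal Python sketch of an iteration of this form on a toy quadratic problem. The ring topology, Metropolis-style mixing matrix W, damped step size alpha, and single mixing round per quantity are illustrative assumptions, not the paper's exact update rules.

import numpy as np

# Toy setup: n agents, local quadratics f_i(x) = 0.5 x^T A_i x - b_i^T x,
# with A_i symmetric positive definite, so the global minimizer is known.
rng = np.random.default_rng(0)
n, d = 5, 3                                        # agents, dimension
A = [np.diag(rng.uniform(1.0, 4.0, d)) for _ in range(n)]  # local Hessians
b = [rng.standard_normal(d) for _ in range(n)]

def grad(i, x):                                    # local gradient of f_i
    return A[i] @ x - b[i]

# Doubly stochastic mixing matrix for a ring network (assumed topology).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    W[i, i] = 1 / 3

x = np.zeros((n, d))                               # local iterates x_i
y = np.array([grad(i, x[i]) for i in range(n)])    # gradient trackers y_i
alpha = 0.5                                        # damped step (assumption)

for k in range(200):
    # Local Newton-type directions d_i = H_i^{-1} y_i; a consensus round
    # over these inverse-Hessian products is the harmonic-mean idea.
    dirs = np.array([np.linalg.solve(A[i], y[i]) for i in range(n)])
    dirs = W @ dirs                                # average the directions
    x_new = W @ x - alpha * dirs                   # mix states, Newton step
    # Gradient tracking: y_i accumulates local gradient increments so that
    # the network average of the y_i tracks the global gradient.
    g_old = np.array([grad(i, x[i]) for i in range(n)])
    g_new = np.array([grad(i, x_new[i]) for i in range(n)])
    y = W @ y + g_new - g_old
    x = x_new

x_star = np.linalg.solve(sum(A), sum(b))           # minimizer of the sum
print("max agent error:", np.abs(x - x_star).max())

Because W is doubly stochastic, the average of the trackers y_i equals the average of the local gradients at every iteration, which is what lets each node take an approximately global Newton step using only neighbour communication.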


Related research

12/10/2020
DONE: Distributed Newton-type Method for Federated Edge Learning
There is growing interest in applying distributed machine learning to ed...

02/18/2020
Distributed Adaptive Newton Methods with Globally Superlinear Convergence
This paper considers the distributed optimization problem over a network...

02/09/2021
Consensus Based Multi-Layer Perceptrons for Edge Computing
In recent years, storing large volumes of data on distributed devices ha...

06/29/2023
Fast and Robust State Estimation and Tracking via Hierarchical Learning
Fully distributed estimation and tracking solutions to large-scale multi...

11/15/2019
A System Theoretical Perspective to Gradient-Tracking Algorithms for Distributed Quadratic Optimization
In this paper we consider a recently developed distributed optimization ...

03/24/2021
The Gradient Convergence Bound of Federated Multi-Agent Reinforcement Learning with Efficient Communication
The paper considers a distributed version of deep reinforcement learning...

01/16/2019
DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization
For optimization of a sum of functions in a distributed computing enviro...
