Communication-Efficient Distributed Optimization in Networks with Gradient Tracking

09/12/2019
by Boyue Li, et al.

There is growing interest in large-scale machine learning and optimization over decentralized networks, e.g., in the context of multi-agent learning and federated learning. Driven by the pressing need to alleviate the communication burden, the study of communication-efficient distributed optimization algorithms --- particularly for empirical risk minimization --- has flourished in recent years. A large fraction of these algorithms have been developed for the master/slave setting, relying on a central parameter server that can communicate with all agents. This paper instead focuses on distributed optimization in the network-distributed, or decentralized, setting, where each agent is only allowed to aggregate information from its neighbors over a network (namely, no centralized coordination is present). By properly adjusting the global gradient estimate via a tracking term, we develop a communication-efficient approximate Newton-type method, called Network-DANE, which generalizes DANE [Shamir et al., 2014] to decentralized networks. We establish linear convergence of Network-DANE for quadratic losses, which sheds light on the impact of data homogeneity and network connectivity on the rate of convergence. Our key algorithmic ideas can be applied, in a systematic manner, to obtain decentralized versions of other master/slave distributed algorithms. A notable example is our development of Network-SVRG, which employs stochastic variance reduction [Johnson and Zhang, 2013] at each agent to accelerate local computation. The proposed algorithms are built upon the primal formulation without resorting to the dual. Numerical evidence is provided to demonstrate the appealing performance of our algorithms over competitive baselines, in terms of both communication and computation efficiency.
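To make the "tracking term" idea mentioned in the abstract concrete, the NumPy sketch below runs the generic gradient-tracking recursion (each agent mixes with its neighbors and maintains a local estimate of the network-wide average gradient) on simple quadratic losses. This is only an illustration of the tracking mechanism under assumed settings, not the authors' Network-DANE method; the ring topology, step size, and all variable names are assumptions made for the example.

```python
# Minimal sketch of decentralized gradient descent with gradient tracking
# (illustrative only; not the Network-DANE algorithm from the paper).
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Local quadratic losses f_i(x) = 0.5 * ||A_i x - b_i||^2 held by each agent.
A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(10) for _ in range(n_agents)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring network: each agent averages
# its value with its two neighbors and itself.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3

eta = 0.01                                  # assumed step size
x = np.zeros((n_agents, dim))               # local iterates, one row per agent
s = np.array([grad(i, x[i]) for i in range(n_agents)])  # gradient trackers

for _ in range(500):
    x_new = W @ x - eta * s                 # mix with neighbors, step along tracker
    # Tracking update: s_i follows the average of all local gradients
    # using only neighbor communication.
    s = W @ s + np.array([grad(i, x_new[i]) - grad(i, x[i])
                          for i in range(n_agents)])
    x = x_new

# After enough iterations the agents reach consensus on (approximately)
# the minimizer of the sum of local losses.
print(np.ptp(x, axis=0))  # spread across agents; small values indicate consensus
```

Network-DANE builds on the same tracking mechanism but replaces the plain gradient step with a DANE-style approximate Newton subproblem solved locally at each agent.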
