On Consensus-Optimality Trade-offs in Collaborative Deep Learning

05/30/2018
by Zhanhong Jiang et al.

In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality. In this paper, we build on recent algorithmic progress in distributed deep learning to explore various consensus-optimality trade-offs over a fixed communication topology. First, we propose the incremental consensus-based distributed SGD (i-CDSGD) algorithm, which performs multiple consensus steps (where each agent communicates information with its neighbors) within each SGD iteration. Second, we propose the generalized consensus-based distributed SGD (g-CDSGD) algorithm, which enables us to navigate the full spectrum from complete consensus (all agents agree) to complete disagreement (each agent converges to its own model parameters). We analytically establish convergence of the proposed algorithms for strongly convex and nonconvex objective functions; we also analyze the momentum variants of the algorithms in the strongly convex case. We support our algorithms with numerical experiments and demonstrate significant improvements over existing methods for collaborative deep learning.
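To make the two update rules concrete, below is a minimal NumPy sketch on a toy quadratic problem. The mixing matrix W, step size alpha, consensus-step count tau, interpolation weight omega, and the toy objective are all illustrative assumptions; the paper's exact algorithms may combine the consensus and gradient terms differently.

```python
import numpy as np

# Illustrative sketch of i-CDSGD / g-CDSGD-style updates on a toy problem.
# These are paraphrases of the abstract's descriptions, not the authors'
# exact algorithms; all names and constants here are assumptions.

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Doubly stochastic mixing matrix for a fixed ring topology (assumption).
W = np.zeros((n_agents, n_agents))
for j in range(n_agents):
    W[j, j] = 0.5
    W[j, (j - 1) % n_agents] = 0.25
    W[j, (j + 1) % n_agents] = 0.25

# Each agent j holds a private strongly convex objective 0.5 * ||x - b_j||^2.
B = rng.normal(size=(n_agents, dim))

def grad(X):
    # Full gradients on the toy objective; stochastic gradients in practice.
    return X - B

alpha = 0.1   # step size
tau = 3       # consensus (mixing) steps per iteration for i-CDSGD
omega = 0.7   # consensus weight in [0, 1] for g-CDSGD

X_i = rng.normal(size=(n_agents, dim))
X_g = X_i.copy()

for k in range(200):
    # i-CDSGD: tau consensus steps, then one gradient step per iteration.
    Xc = X_i
    for _ in range(tau):
        Xc = W @ Xc
    X_i = Xc - alpha * grad(X_i)

    # g-CDSGD: interpolate between a consensus pull and pure local SGD;
    # omega = 1 recovers full consensus, omega = 0 fully independent SGD.
    X_g = omega * (W @ X_g) + (1 - omega) * X_g - alpha * grad(X_g)

print("i-CDSGD disagreement:", np.linalg.norm(X_i - X_i.mean(axis=0)))
print("g-CDSGD disagreement:", np.linalg.norm(X_g - X_g.mean(axis=0)))
```

Sweeping omega from 0 to 1 in this sketch traces the consensus-optimality spectrum the abstract describes: larger omega shrinks agent disagreement while pulling each agent away from its private minimizer b_j.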


Related research

06/23/2017: Collaborative Deep Learning in Fixed Topology Networks
There is significant recent interest to parallelize deep learning algori...

10/21/2020: Decentralized Deep Learning using Momentum-Accelerated Consensus
We consider the problem of decentralized deep learning where multiple ag...

09/22/2019: Gradient-Consensus Method for Distributed Optimization in Directed Multi-Agent Networks
In this article, a distributed optimization problem for minimizing a sum...

11/17/2020: Consensus of Multi-Agent Systems Using Back-Tracking and History Following Algorithms
This paper proposes two algorithms, namely "back-tracking" and "history ...

06/01/2023: DSGD-CECA: Decentralized SGD with Communication-Optimal Exact Consensus Algorithm
Decentralized Stochastic Gradient Descent (SGD) is an emerging neural ne...

04/25/2017: Stochastic Optimization from Distributed, Streaming Data in Rate-limited Networks
Motivated by machine learning applications in networks of sensors, inter...

09/03/2023: Distributed robust optimization for multi-agent systems with guaranteed finite-time convergence
A novel distributed algorithm is proposed for finite-time converging to ...
