Trajectory Normalized Gradients for Distributed Optimization

01/24/2019
by Jianqiao Wangni, et al.

Recently, researchers have proposed various low-precision gradient compression schemes for efficient communication in large-scale distributed optimization. Building on this work, we attempt to reduce communication complexity from a new direction. We pursue an ideal bijective mapping between two spaces of gradient distributions, so that the mapped gradient carries greater information entropy after compression. In our setting, all servers share a reference gradient in advance and communicate via normalized gradients, which are the difference or quotient between the current gradients and the reference. To obtain a reference vector that yields a stronger signal-to-noise ratio, in each iteration we dynamically extract and fuse information from the past trajectory in hindsight and search for an optimal reference for compression. We call this approach trajectory normalized gradients (TNG). It bridges research from different communities, such as coding, optimization, systems, and learning. It is easy to implement and can be universally combined with existing algorithms. Our experiments on benchmark hard non-convex functions and on convex problems such as logistic regression demonstrate that TNG is more compression-efficient for communication in distributed optimization of general functions.
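A minimal sketch of the idea described above, not the paper's exact algorithm: since the abstract does not specify them, this sketch assumes the shared reference is a running average of past gradients (the "trajectory"), normalization is the difference between the current gradient and the reference, and compression is uniform 8-bit quantization. The names quantize, dequantize, beta, and the synthetic gradient are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_steps, beta = 1_000, 100, 0.9
reference = np.zeros(dim)                      # reference gradient shared by all workers in advance

def quantize(x, bits=8):
    """Uniform low-precision quantization: integer codes plus (offset, scale)."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # avoid division by zero for constant vectors
    codes = np.round((x - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return codes.astype(np.float64) * scale + lo

for step in range(num_steps):
    grad = rng.normal(size=dim)                # stand-in for a worker's stochastic gradient
    normalized = grad - reference              # normalize against the shared reference
    codes, lo, scale = quantize(normalized)    # only the low-precision codes are communicated
    recovered = dequantize(codes, lo, scale) + reference  # receiver adds the reference back
    # Fuse the past trajectory into the reference so that future residuals stay small
    # (an assumed running-average update, standing in for the paper's reference search).
    reference = beta * reference + (1 - beta) * recovered
```

The intuition behind such a scheme is that, with a well-chosen reference, the residual has a smaller dynamic range than the raw gradient, so the same bit budget preserves more of its information.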

Related research:

Gradient Sparsification for Communication-Efficient Distributed Optimization (10/26/2017)
Wyner-Ziv Gradient Compression for Federated Learning (11/16/2021)
Compressed Communication for Distributed Training: Adaptive Methods and System (05/17/2021)
Distributed learning with compressed gradients (06/18/2018)
PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization (05/31/2019)
Distributed Training with Heterogeneous Data: Bridging Median and Mean Based Algorithms (06/04/2019)
Communication Efficient Sparsification for Large Scale Machine Learning (03/13/2020)
