THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression

02/16/2023
by Minghao Li, et al.

Deep neural networks (DNNs) are the de facto standard for essential use cases such as image classification, computer vision, and natural language processing. As DNNs and datasets grow larger, they require distributed training on increasingly large clusters. A main bottleneck is then the resulting communication overhead, as workers exchange model updates (i.e., gradients) every round. A widely deployed approach to address this bottleneck and accelerate training is compression. However, previous deployments often build bi-directional compression schemes by simply applying a uni-directional gradient compression scheme in each direction. This results in significant computational overhead at the parameter server (PS) and increased compression error, leading to longer training and lower accuracy. We introduce Tensor Homomorphic Compression (THC), a novel bi-directional compression framework that enables direct aggregation of compressed values while optimizing the bandwidth-to-accuracy tradeoff, thereby eliminating the aforementioned overheads. Moreover, THC is compatible with in-network aggregation (INA), which allows for further acceleration. Evaluation on a testbed shows that THC improves time-to-accuracy over alternatives by up to 1.32x with a software PS and up to 1.51x using INA. Finally, we demonstrate that THC is scalable and tolerant to acceptable packet-loss rates.
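To illustrate the kind of "direct aggregation of compressed values" the abstract refers to, the sketch below uses a plain uniform quantizer with a scale shared by all workers; under that assumption, integer-coded gradients can be summed at the server without decompressing each update first. This is a minimal, hypothetical example, not THC's actual compression scheme, and the function names and parameters are illustrative only.

```python
# Minimal sketch (NOT THC's algorithm): a shared-scale uniform quantizer is
# additively homomorphic, so the parameter server can sum the compressed
# integer gradients directly and dequantize once at the end.
import numpy as np

def quantize(grad: np.ndarray, scale: float) -> np.ndarray:
    """Map float gradients to small integers using a globally agreed scale."""
    return np.rint(grad / scale).astype(np.int32)

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float gradients from the integer codes."""
    return q.astype(np.float32) * scale

# Toy setup: three workers, one shared quantization scale (assumed values).
rng = np.random.default_rng(0)
scale = 0.01
worker_grads = [rng.normal(size=8).astype(np.float32) for _ in range(3)]

# Workers send only integers; the PS aggregates them without decompressing.
compressed = [quantize(g, scale) for g in worker_grads]
aggregated_q = np.sum(compressed, axis=0)            # integer-only aggregation
approx_mean = dequantize(aggregated_q, scale) / 3.0  # single dequantization

exact_mean = np.mean(worker_grads, axis=0)
print(np.max(np.abs(approx_mean - exact_mean)))      # small quantization error
```

Because the server only adds integer codes, the same aggregation could in principle be offloaded to a programmable switch, which is the intuition behind THC's compatibility with in-network aggregation.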


