Related research:
- Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback
- Sparse Communication for Training Deep Networks
- PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning
- Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification
- GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training
- Trajectory Normalized Gradients for Distributed Optimization
- Adaptive Gradient Quantization for Data-Parallel SGD

PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization
We study gradient compression methods to alleviate the communication bottleneck in data-parallel distributed optimization. Despite the significant attention received, current compression schemes either do not scale well or fail to achieve the target test accuracy. We propose a new low-rank gradient compressor based on power iteration that can i) compress gradients rapidly, ii) efficiently aggregate the compressed gradients using all-reduce, and iii) achieve test performance on par with SGD. The proposed algorithm is the only method evaluated that achieves consistent wall-clock speedups when benchmarked against regular SGD with an optimized communication backend. We demonstrate reduced training times for convolutional networks as well as LSTMs on common datasets. Our code is available at https://github.com/epfml/powersgd.
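To make the power-iteration compressor described in the abstract concrete, below is a minimal single-worker sketch in NumPy. The function and variable names (powersgd_step, q, error) and the rank-2 toy example are illustrative assumptions, not the paper's reference code; in the actual distributed setting the factors p and q would be aggregated across workers with all-reduce, and the official PyTorch implementation is at https://github.com/epfml/powersgd.

```python
# Minimal sketch of one PowerSGD-style compression step (single worker, NumPy).
# Assumption: gradients are reshaped to 2-D matrices; p and q would be
# all-reduced across workers in a real data-parallel run.
import numpy as np

def orthonormalize(p):
    """QR step so the columns of p form an orthonormal basis."""
    q, _ = np.linalg.qr(p)
    return q

def powersgd_step(grad, q, error):
    """Compress a 2-D gradient to rank r with one power iteration.

    grad : (n, m) gradient matrix of one layer
    q    : (m, r) warm-started right factor from the previous step
    error: (n, m) error-feedback memory (what compression lost last time)
    Returns the rank-r approximation, the updated q, and the new error memory.
    """
    m = grad + error              # error feedback: add back what was lost
    p = m @ q                     # left factor; in practice: all-reduce p
    p = orthonormalize(p)
    q = m.T @ p                   # right factor; in practice: all-reduce q
    approx = p @ q.T              # low-rank approximation passed to the optimizer
    return approx, q, m - approx  # keep the residual for the next step

# Usage: rank-2 compression of a random 256 x 128 "gradient".
rng = np.random.default_rng(0)
grad = rng.standard_normal((256, 128))
q = rng.standard_normal((128, 2))
error = np.zeros_like(grad)
approx, q, error = powersgd_step(grad, q, error)
print(np.linalg.norm(grad - approx) / np.linalg.norm(grad))
```

Because only the small factors p (n x r) and q (m x r) need to be communicated, and both can be summed with a standard all-reduce, the per-step communication volume scales with r rather than with the full n x m gradient, which is the property the abstract highlights.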