Rate distortion comparison of a few gradient quantizers

08/23/2021
by Tharindu Adikari, et al.

This article addresses gradient compression, a popular technique for mitigating the communication bottleneck that arises when large machine learning models are trained in a distributed manner with gradient-based methods such as stochastic gradient descent. Assuming a Gaussian distribution for the gradient components, we find the rate distortion trade-off of gradient quantization schemes such as Scaled-sign and Top-K, and compare it with the Shannon rate distortion limit. A similar comparison with vector quantizers is also presented.
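As a rough illustration of the comparison described above (not the paper's own analysis), the sketch below applies a Scaled-sign quantizer and a Top-K sparsifier to an i.i.d. Gaussian vector and compares the empirical per-component distortion against the Gaussian Shannon rate distortion function D(R) = σ²·2^(−2R). The function names and the rate accounting (one sign bit per component for Scaled-sign; a 32-bit value plus a log₂(d)-bit index per retained component for Top-K) are our own simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                      # gradient dimension (illustrative)
sigma = 1.0
g = rng.normal(0.0, sigma, d)   # i.i.d. Gaussian "gradient" components

def scaled_sign(x):
    """Scaled-sign quantizer: transmit sign(x) plus a single scale (mean |x|)."""
    scale = np.mean(np.abs(x))
    return scale * np.sign(x)

def top_k(x, k):
    """Top-K sparsifier: keep the k largest-magnitude components, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def distortion(x, x_hat):
    """Per-component mean squared error."""
    return np.mean((x - x_hat) ** 2)

# Simplified rate accounting in bits per component (an assumption, not the paper's):
#   Scaled-sign: 1 sign bit per component (the single scale is negligible).
#   Top-K: k retained components, each costing a 32-bit value plus log2(d) index bits.
k = d // 100
rates = {
    "scaled-sign": 1.0,
    f"top-{k}":    k * (32 + np.log2(d)) / d,
}
dists = {
    "scaled-sign": distortion(g, scaled_sign(g)),
    f"top-{k}":    distortion(g, top_k(g, k)),
}

# Shannon rate distortion function of a Gaussian source: D(R) = sigma^2 * 2^(-2R).
for name, R in rates.items():
    shannon_D = sigma**2 * 2 ** (-2 * R)
    print(f"{name:12s}  R = {R:6.3f} bits/comp  "
          f"D = {dists[name]:.4f}  Shannon D(R) = {shannon_D:.4f}")
```

At these rates both schemes sit above the Shannon curve, which is the kind of gap the rate distortion comparison in the article quantifies.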
