Differentiable Product Quantization for End-to-End Embedding Compression

08/26/2019
by   Ting Chen, et al.

Embedding layers are commonly used to map discrete symbols into continuous embedding vectors that reflect their semantic meanings. As the number of symbols increases, the number of embedding parameters, and hence the size of the layer, grows linearly and becomes problematically large. In this work, we aim to reduce the size of the embedding layer by learning discrete codes and composing embedding vectors from those codes. More specifically, we propose a differentiable product quantization framework with two instantiations, which can serve as an efficient drop-in replacement for an existing embedding layer. Empirically, we evaluate the proposed method on three different language tasks and show that it enables end-to-end training of embedding compression, achieving significant compression ratios (14-238×) at almost no performance cost (sometimes even with improved performance).
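To make the idea concrete, here is a minimal sketch of a product-quantized embedding layer in PyTorch. It is a hypothetical softmax-relaxed variant, not a reproduction of the paper's two instantiations: each symbol selects one codeword from each of K small codebooks, and the output embedding is the concatenation of the selected codewords. Names such as `DPQEmbedding`, `num_groups`, and `codebook_size` are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DPQEmbedding(nn.Module):
    """Sketch of a product-quantized embedding layer (assumed design).

    Each symbol is represented by K discrete codes, one per group; the
    output embedding concatenates the selected codewords. A softmax
    relaxation keeps code selection differentiable during training.
    """
    def __init__(self, vocab_size, embed_dim, num_groups=4,
                 codebook_size=32, tau=1.0):
        super().__init__()
        assert embed_dim % num_groups == 0
        self.num_groups = num_groups
        self.tau = tau
        # Full-size "query" table used only during training; it can be
        # discarded after the discrete codes are extracted.
        self.query = nn.Parameter(torch.randn(vocab_size, embed_dim))
        # K codebooks, each with codebook_size codewords of size embed_dim/K.
        self.codebooks = nn.Parameter(
            torch.randn(num_groups, codebook_size, embed_dim // num_groups))

    def forward(self, ids):
        q = self.query[ids]                          # (..., embed_dim)
        q = q.view(*ids.shape, self.num_groups, -1)  # (..., K, d/K)
        # Dot-product scores between each query group and its codebook.
        logits = torch.einsum('...kd,kcd->...kc', q, self.codebooks)
        if self.training:
            # Soft code assignment: a differentiable relaxation.
            probs = F.softmax(logits / self.tau, dim=-1)
            out = torch.einsum('...kc,kcd->...kd', probs, self.codebooks)
        else:
            # Hard assignment at inference: only the K integer codes per
            # symbol plus the shared codebooks need to be stored.
            codes = logits.argmax(dim=-1)            # (..., K)
            out = torch.stack(
                [self.codebooks[k][codes[..., k]]
                 for k in range(self.num_groups)], dim=-2)
        return out.reshape(*ids.shape, -1)           # (..., embed_dim)

emb = DPQEmbedding(vocab_size=10000, embed_dim=128)
vecs = emb(torch.randint(0, 10000, (8, 20)))  # (8, 20, 128)
```

Under these assumptions, serving cost drops from one float vector per symbol to K·log2(C) bits of codes per symbol plus the small shared codebooks: with K = 4 groups and C = 32 codewords, a 128-dimensional float32 embedding (4096 bits) shrinks to 20 bits of codes, roughly the scale of the 14-238× ratios the abstract reports.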
