
Post-Training 4-bit Quantization on Embedding Tables

by Hui Guan, et al.

Continuous representations have been widely adopted in recommender systems, where a large number of entities are represented using embedding vectors. As the cardinality of the entities increases, the embedding components can easily contain millions of parameters and become the bottleneck in both storage and inference due to large memory consumption. This work focuses on post-training 4-bit quantization of the continuous embeddings. We propose row-wise uniform quantization with greedy search and codebook-based quantization, which consistently outperform state-of-the-art quantization approaches in reducing accuracy degradation. We deploy our uniform quantization technique on a production model at Facebook and demonstrate that it can reduce the model size to only 13.89% of the single-precision version while keeping model quality neutral.
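The row-wise scheme described above can be sketched in a few lines: each embedding row gets its own 4-bit uniform quantizer, and a greedy search shrinks the clipping range to reduce reconstruction error. This is a minimal illustration, not the paper's implementation; the function names, the linear shrink schedule, and the MSE objective are assumptions made for the sketch.

```python
import numpy as np

def quantize_dequantize(row, lo, hi, num_bits=4):
    """Uniformly quantize `row` onto 2**num_bits levels in [lo, hi], then dequantize."""
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.clip(np.round((row - lo) / scale), 0, levels)  # 4-bit integer codes
    return q * scale + lo  # reconstructed float values

def rowwise_greedy_quantize(table, num_bits=4, num_steps=50):
    """Per-row uniform quantization; greedily shrink the clip range to cut MSE.

    Sketch of a greedy range search: candidate ranges are the min/max of the
    row scaled by a shrinking factor, and the range with the lowest squared
    reconstruction error wins (the shrink schedule here is an assumption).
    """
    out = np.empty_like(table, dtype=np.float32)
    for i, row in enumerate(table):
        lo0, hi0 = row.min(), row.max()
        best, best_err = None, np.inf
        for step in range(num_steps):
            frac = 1.0 - step / num_steps  # frac=1.0 is the plain min/max range
            deq = quantize_dequantize(row, lo0 * frac, hi0 * frac, num_bits)
            err = np.square(row - deq).sum()
            if err < best_err:
                best, best_err = deq, err
        out[i] = best
    return out
```

Because each row stores only 16 distinct levels plus a per-row scale and offset, a 32-bit float table shrinks toward roughly 1/8 of its size, which is consistent with the 13.89% figure once per-row metadata overhead is included.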




Related research:

Q-Rater: Non-Convex Optimization for Post-Training Uniform Quantization

Hybrid and Non-Uniform quantization methods using retro synthesis data for efficient inference

Learning Binarized Graph Representations with Multi-faceted Quantization Reinforcement for Top-K Recommendation

Q-ViT: Fully Differentiable Quantization for Vision Transformer

Embedding Compression with Isotropic Iterative Quantization

Clustering Embedding Tables, Without First Learning Them

AutoShard: Automated Embedding Table Sharding for Recommender Systems