FBGEMM: Enabling High-Performance Low-Precision Deep Learning Inference

by Daya Khudia, et al.

Deep learning models typically use single-precision (FP32) floating-point data types for representing activations and weights, but a slew of recent research has shown that computations with reduced-precision data types (FP16, 16-bit integers, 8-bit integers, or even 4- or 2-bit integers) are sufficient to achieve the same accuracy as FP32 and are much more efficient. Therefore, we designed FBGEMM, a high-performance kernel library, from the ground up to perform high-performance quantized inference on current-generation CPUs. FBGEMM achieves efficiency by fusing common quantization operations with a high-performance GEMM implementation and by generating shape- and size-specific kernel code at runtime. The library has been deployed at Facebook, where it delivers greater than 2x performance gains with respect to our current production baseline.
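To make the idea of quantized inference concrete, here is a minimal sketch of 8-bit affine (scale/zero-point) quantization, the general scheme such libraries build on. This is an illustrative example, not FBGEMM's actual API; the function names `quantize` and `dequantize` are hypothetical.

```python
import numpy as np

def quantize(x, num_bits=8):
    # Map real values to unsigned integers via a scale and zero point
    # (hypothetical helper; not part of the FBGEMM API).
    qmin, qmax = 0, 2**num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate real values from the quantized representation.
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize(x)
x_hat = dequantize(q, s, z)
# Round-to-nearest bounds the per-element reconstruction error by about one scale step.
assert np.max(np.abs(x - x_hat)) <= s
```

Integer GEMM kernels then operate directly on the `uint8`/`int8` values, with the scales and zero points folded into a cheap requantization step after the matrix multiply, which is the kind of fusion the abstract refers to.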
