Quantizing Convolutional Neural Networks for Low-Power High-Throughput Inference Engines

05/21/2018
by Sean O. Settle, et al.

Deep learning as a means of inferencing has proliferated thanks to its versatility and its ability to approach or exceed human-level accuracy. These computational models have seemingly insatiable appetites for computational resources, not only during training but also when deployed at scales ranging from data centers all the way down to embedded devices. As such, increasing consideration is being given to maximizing computational efficiency under limited hardware and energy budgets, and, as a result, inferencing with reduced precision has emerged as a viable alternative to the IEEE 754 Standard for Floating-Point Arithmetic. We propose a quantization scheme that allows inferencing to be carried out using arithmetic that is fundamentally more efficient than even half-precision floating-point. Our quantization procedure is significant in that we determine the quantization scheme parameters by calibrating against the reference floating-point model using a single inference batch rather than (re)training, and we achieve end-to-end post-quantization accuracies comparable to the reference model.
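The abstract does not spell out the arithmetic format or the calibration procedure. As a rough illustration of the general idea of deriving quantization parameters from a single calibration batch of the reference floating-point model rather than (re)training, the sketch below uses plain symmetric 8-bit integer quantization as a stand-in; the helper names and the 8-bit format are assumptions, not the authors' scheme.

import numpy as np

def calibrate_scale(activations, num_bits=8):
    # Derive a symmetric scale from the dynamic range observed on one
    # calibration batch collected from the reference floating-point model.
    max_abs = float(np.max(np.abs(activations)))
    qmax = 2 ** (num_bits - 1) - 1  # e.g. 127 for 8-bit signed
    return max_abs / qmax if max_abs > 0 else 1.0

def quantize(x, scale, num_bits=8):
    # Map floating-point values onto the signed integer grid.
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    return np.clip(np.round(x / scale), qmin, qmax).astype(np.int8)

def dequantize(q, scale):
    # Map integers back to floating point to compare against the reference.
    return q.astype(np.float32) * scale

# Calibrate on a single batch, then quantize and reconstruct.
calib_batch = np.random.randn(32, 64, 7, 7).astype(np.float32)
scale = calibrate_scale(calib_batch)
q = quantize(calib_batch, scale)
approx = dequantize(q, scale)
print("max abs error:", float(np.max(np.abs(approx - calib_batch))))

In practice a separate scale would be calibrated per tensor (or per channel) from real activations of the reference model, and accuracy would then be measured end to end against that model, as the abstract describes.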


