Scalar Arithmetic Multiple Data: Customizable Precision for Deep Neural Networks

09/27/2018
by Andrew Anderson, et al.

Quantization of weights and activations in Deep Neural Networks (DNNs) is a powerful technique for network compression, and has enjoyed significant attention and success. However, much of the inference-time benefit of quantization is accessible only through customized hardware accelerators or FPGA implementations of quantized arithmetic. Building on prior work, we show how to construct arbitrary bit-precise signed and unsigned integer operations using a software technique that logically embeds a vector architecture with custom bit-width lanes in universally available fixed-width scalar arithmetic. We evaluate our approach on a high-end Intel Haswell processor and an embedded ARM processor. Our approach yields very fast implementations of bit-precise custom DNN operations, which often match or exceed the performance of operations quantized to the sizes supported in native arithmetic. At the strongest level of quantization, our approach yields a maximum speedup of 6× on the Intel platform and 10× on the ARM platform versus quantization to native 8-bit integers.
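The paper's full construction is not reproduced on this page, but the idea the abstract alludes to is in the spirit of the classic SWAR (SIMD Within A Register) idiom: several narrow integer lanes share one scalar register, and masks keep carries from spilling between lanes. Below is a minimal C sketch of that idiom for sixteen unsigned 4-bit lanes in a 64-bit word. The mask constants and the helper name add_u4x16 are illustrative assumptions, not code from the paper.

#include <stdint.h>
#include <stdio.h>

/* Illustrative masks for sixteen 4-bit lanes in a 64-bit word. */
#define LANE_HI 0x8888888888888888ULL  /* high bit of every lane    */
#define LANE_LO 0x7777777777777777ULL  /* low three bits of a lane  */

/* Lane-wise unsigned addition, modulo 16 per lane: add the low bits
 * (which cannot carry across a lane boundary), then fold each lane's
 * high-bit contribution back in with XOR. */
static uint64_t add_u4x16(uint64_t a, uint64_t b)
{
    uint64_t low = (a & LANE_LO) + (b & LANE_LO);
    return low ^ ((a ^ b) & LANE_HI);
}

int main(void)
{
    uint64_t a = 0x0123456789ABCDEFULL;
    uint64_t b = 0x1111111111111111ULL;
    /* Every 4-bit lane wraps independently: prints 123456789abcdef0. */
    printf("%016llx\n", (unsigned long long)add_u4x16(a, b));
    return 0;
}

With this packing, one 64-bit scalar add performs sixteen 4-bit additions, which is the source of the speedups the abstract reports; signed lanes and other bit widths follow the same masking pattern with different constants.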
