FQ-Conv: Fully Quantized Convolution for Efficient and Accurate Inference

12/19/2019
by Bram-Ernst Verhoef, et al.

Deep neural networks (DNNs) can be made hardware-efficient by reducing the numerical precision of the network's weights and activations and by improving its resilience to noise. However, this gain in efficiency often comes at the cost of significantly reduced accuracy. In this paper, we present a novel approach to quantizing convolutional neural networks. The resulting networks perform all computations in low precision, without requiring higher-precision batch normalization (BN) or nonlinearities, while remaining highly accurate. To achieve this, we employ a novel quantization technique that learns to optimally quantize the weights and activations of the network during training. Additionally, to improve training convergence, we use a new training technique called gradual quantization. We leverage the nonlinear and normalizing behavior of our quantization function to effectively remove the higher-precision nonlinearities and BN from the network. The resulting convolutional layers are fully quantized to low precision, from input to output, making them ideal for neural network accelerators at the edge. We demonstrate the potential of this approach on different datasets and networks, showing that ternary-weight CNNs with low-precision inputs and outputs perform virtually on par with their full-precision equivalents. Finally, we analyze the influence of noise on the weights, activations, and convolution outputs (multiply-accumulate, MAC), and propose a strategy to improve network performance under noisy conditions.
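The abstract describes learned quantization of weights and activations trained end-to-end, with the quantizer's clipping behavior standing in for the float nonlinearity and BN. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes a PACT-style learnable clipping threshold for activations, a simple threshold-based ternarizer for weights, and straight-through-estimator (STE) gradients. The names `LearnedActQuant`, `ternarize`, and `TernaryConv2d`, and all hyperparameters, are hypothetical.

```python
# Minimal sketch (not the paper's released code): learnable activation
# quantization with a trainable clipping value, ternary weights, and STE
# gradients so the quantizers can be trained with standard backprop.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnedActQuant(nn.Module):
    """Quantize activations to 2**bits - 1 uniform levels on [0, alpha].

    The clipping value `alpha` is a trainable parameter, so the quantizer
    learns its own range; the clipping also acts as the layer's nonlinearity.
    """

    def __init__(self, bits: int = 4, init_alpha: float = 3.0):
        super().__init__()
        self.levels = 2 ** bits - 1
        self.alpha = nn.Parameter(torch.tensor(init_alpha))

    def forward(self, x):
        alpha = self.alpha.abs()
        x_clip = torch.minimum(F.relu(x), alpha)      # gradient reaches alpha here
        scale = alpha / self.levels
        x_q = torch.round(x_clip / scale) * scale     # non-differentiable rounding
        return x_clip + (x_q - x_clip).detach()       # STE: identity gradient


def ternarize(w: torch.Tensor, frac: float = 0.05) -> torch.Tensor:
    """Map weights to {-s, 0, +s}; the threshold is a fraction of max |w|."""
    thr = frac * w.abs().max()
    mask = (w.abs() > thr).float()
    s = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)  # per-tensor scale
    w_t = s * torch.sign(w) * mask
    return w + (w_t - w).detach()                            # STE on the weights


class TernaryConv2d(nn.Conv2d):
    """Convolution with ternary weights; paired with LearnedActQuant, the layer
    is low-precision from input to output, with no float BN or ReLU in between."""

    def forward(self, x):
        return F.conv2d(x, ternarize(self.weight), self.bias,
                        self.stride, self.padding, self.dilation, self.groups)


if __name__ == "__main__":
    block = nn.Sequential(TernaryConv2d(3, 16, 3, padding=1, bias=False),
                          LearnedActQuant(bits=4))
    y = block(torch.randn(8, 3, 32, 32))
    y.sum().backward()   # gradients flow to the weights and to alpha via the STE
    print(y.shape, block[1].alpha.item())
```

The paper's gradual quantization could be emulated in such a setup by starting training at higher precision and lowering the bit width (or tightening the ternary threshold) over the course of training; the schedule above is not specified in the abstract.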


Related research

08/17/2018 · Joint Training of Low-Precision Neural Network with Quantization Interval Parameters
Optimization for low-precision neural network is an important technique ...

03/17/2020 · Efficient Bitwidth Search for Practical Mixed Precision Neural Network
Network quantization has rapidly become one of the most widely used meth...

07/11/2018 · FINN-L: Library Extensions and Design Trade-off Analysis for Variable Precision LSTM Networks on FPGAs
It is well known that many types of artificial neural networks, includin...

12/20/2022 · Redistribution of Weights and Activations for AdderNet Quantization
Adder Neural Network (AdderNet) provides a new way for developing energy...

02/03/2017 · Deep Learning with Low Precision by Half-wave Gaussian Quantization
The problem of quantizing the activations of a deep neural network is co...

09/13/2018 · High-Accuracy Inference in Neuromorphic Circuits using Hardware-Aware Training
Neuromorphic Multiply-And-Accumulate (MAC) circuits utilizing synaptic w...

12/18/2019 · Analytic expressions for the output evolution of a deep neural network
We present a novel methodology based on a Taylor expansion of the networ...
