Minimum Energy Quantized Neural Networks

11/01/2017
by Bert Moons, et al.

This work targets the automated minimum-energy optimization of Quantized Neural Networks (QNNs): networks using low-precision weights and activations. These networks are trained from scratch at an arbitrary fixed-point precision. At iso-accuracy, QNNs using fewer bits require deeper and wider network architectures than networks using higher-precision operators, while they require less complex arithmetic and fewer bits per weight. This fundamental trade-off is analyzed and quantified to find the minimum-energy QNN for any benchmark and hence optimize energy efficiency. To this end, the energy consumption of inference is modeled for a generic hardware platform. This allows drawing several conclusions across different benchmarks. First, energy consumption varies by orders of magnitude at iso-accuracy depending on the number of bits used in the QNN. Second, in a typical system, BinaryNets or int4 implementations lead to the minimum-energy solution, outperforming int8 networks by 2-10x at iso-accuracy. All code used for QNN training is available from https://github.com/BertMoons.
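The abstract's argument rests on a first-order model of inference energy: compute energy that grows with operand bitwidth, plus the energy of moving weights and activations through the memory hierarchy. The sketch below illustrates one plausible form of such a model in Python. All constants, the quadratic scaling of MAC energy with bitwidth, and the example workload sizes are assumptions for illustration only; they are not the paper's calibrated hardware numbers.

```python
# A minimal sketch of a bitwidth-dependent inference-energy model.
# All energy constants and the quadratic MAC scaling below are
# illustrative assumptions, not the paper's measured platform values.

def inference_energy(num_macs, num_weights, num_activations, bits,
                     e_mac_16b=3.7e-12,      # assumed energy per 16-bit MAC [J]
                     e_sram_access=5.0e-12,  # assumed energy per 16-bit SRAM access [J]
                     e_dram_access=640e-12): # assumed energy per 16-bit DRAM access [J]
    """Rough energy estimate for one forward pass of a QNN.

    Compute energy is assumed to scale quadratically with operand
    bitwidth (a common first-order model for digital multipliers),
    while memory energy scales linearly with the number of bits moved.
    """
    compute_scale = (bits / 16.0) ** 2
    e_compute = num_macs * e_mac_16b * compute_scale
    # Weights assumed fetched from off-chip DRAM, activations from on-chip SRAM.
    e_weights = num_weights * e_dram_access * (bits / 16.0)
    e_activations = num_activations * e_sram_access * (bits / 16.0)
    return e_compute + e_weights + e_activations


if __name__ == "__main__":
    # Compare a wider 4-bit network against a narrower 8-bit one,
    # assuming (hypothetically) that both reach the same accuracy.
    e_int4 = inference_energy(num_macs=2e9, num_weights=8e6, num_activations=4e6, bits=4)
    e_int8 = inference_energy(num_macs=1e9, num_weights=4e6, num_activations=2e6, bits=8)
    print(f"int4: {e_int4:.2e} J   int8: {e_int8:.2e} J")
```

Under these assumed constants, the wider low-precision network can still come out ahead because the per-operation and per-bit costs drop faster than the network grows, which is the trade-off the paper quantifies across benchmarks.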


