WRPN: Training and Inference using Wide Reduced-Precision Networks

04/10/2017
by Asit Mishra, et al.

For computer vision applications, prior works have shown the efficacy of reducing the numeric precision of model parameters (network weights) in deep neural networks, but also that reducing the precision of activations hurts model accuracy much more than reducing the precision of weights. We study schemes for training networks from scratch with reduced-precision activations without hurting model accuracy. We reduce the precision of activation maps (along with model parameters) using a novel quantization scheme and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly reduce the dynamic memory footprint, memory bandwidth, and computational energy, and speed up training and inference with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. Our results on the ILSVRC-12 dataset are better than previously reported accuracies while being computationally less expensive than previously reported reduced-precision networks.
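The two ingredients the abstract describes, uniform quantization of activations and weights followed by widening the layers, can be sketched as below. This is a minimal NumPy illustration, not the paper's verified specification: the clipping ranges ([0, 1] for activations, [-1, 1] for weights) and the step counts are assumptions about a typical k-bit uniform scheme, and the widening factor is an arbitrary example.

```python
import numpy as np

def quantize_activations(a, k=4):
    """k-bit uniform quantization of activations (assumed scheme):
    clip to [0, 1], then round onto a grid of 2^k - 1 steps."""
    a = np.clip(a, 0.0, 1.0)
    scale = 2 ** k - 1
    return np.round(a * scale) / scale

def quantize_weights(w, k=4):
    """k-bit uniform quantization of weights (assumed scheme):
    clip to [-1, 1]; one bit holds the sign, so the magnitude
    grid has 2^(k-1) - 1 steps."""
    w = np.clip(w, -1.0, 1.0)
    scale = 2 ** (k - 1) - 1
    return np.round(w * scale) / scale

def widen(filter_counts, factor=2):
    """Widening step: multiply each layer's filter-map count by a
    constant factor (factor=2 is an example, not the paper's value)."""
    return [int(c * factor) for c in filter_counts]
```

The widened, quantized network trades cheaper low-precision arithmetic for more filter maps, which is how the scheme recovers (or exceeds) baseline accuracy.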

