WRPN: Wide Reduced-Precision Networks

09/04/2017
by Asit Mishra, et al.

For computer vision applications, prior works have shown the efficacy of reducing the numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference steps when using mini-batches of inputs. One way to reduce this memory footprint is to reduce the precision of activations, but past works have shown that doing so hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve execution efficiency (e.g., reduce dynamic memory footprint, memory bandwidth, and computational energy) and speed up training and inference with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. We show that WRPN achieves better accuracy on the ILSVRC-12 dataset than previously reported reduced-precision networks, while being computationally less expensive.
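The core idea, quantizing both weights and activations while widening each layer's filter count, can be illustrated with a short sketch. The snippet below is a minimal PyTorch-style example, not the authors' reference implementation: the uniform k-bit quantizer with a straight-through estimator, the tanh-based weight mapping, the [0, 1] activation clamp, and the widen factor are common reduced-precision conventions assumed here for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QuantizeSTE(torch.autograd.Function):
    """Uniform k-bit quantizer with a straight-through estimator (STE) in the backward pass."""
    @staticmethod
    def forward(ctx, x, k):
        n = float(2 ** k - 1)
        return torch.round(x * n) / n

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pass gradients unchanged to the full-precision copy of the tensor.
        return grad_output, None

def quantize_activations(x, k=4):
    # Clamp activations to [0, 1] before uniform k-bit quantization (ReLU-style range).
    return QuantizeSTE.apply(torch.clamp(x, 0.0, 1.0), k)

def quantize_weights(w, k=2):
    # Map weights into [0, 1], quantize, then map back to [-1, 1].
    w = torch.tanh(w)
    w = w / (2.0 * w.abs().max()) + 0.5
    return 2.0 * QuantizeSTE.apply(w, k) - 1.0

class WideQuantConv2d(nn.Conv2d):
    """Conv layer with a widened filter count and quantized weights/inputs (illustrative)."""
    def __init__(self, in_ch, out_ch, kernel_size, widen=2, wbits=2, abits=4, **kwargs):
        # In a full WRPN-style network the input channels of later layers would also be widened.
        super().__init__(in_ch, out_ch * widen, kernel_size, **kwargs)
        self.wbits, self.abits = wbits, abits

    def forward(self, x):
        xq = quantize_activations(x, self.abits)
        wq = quantize_weights(self.weight, self.wbits)
        return F.conv2d(xq, wq, self.bias, self.stride, self.padding, self.dilation, self.groups)

Training proceeds as usual: full-precision master weights are updated by the optimizer, while the quantized copies are used in the forward and backward passes via the straight-through estimator.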


