WRPN & Apprentice: Methods for Training and Inference using Low-Precision Numerics

03/01/2018
by Asit Mishra, et al.

Today's high-performance deep learning architectures involve large models with numerous parameters. Low-precision numerics has emerged as a popular technique for reducing both the compute and memory requirements of these large models. However, lowering precision often degrades accuracy. We describe three schemes for training and efficient inference with low-precision numerics that do not hurt accuracy. Finally, we describe an efficient hardware accelerator that can take advantage of the proposed low-precision numerics.
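As a concrete illustration of the two named techniques, the sketch below shows a WRPN-style k-bit quantizer with a straight-through estimator and an Apprentice-style distillation loss in PyTorch. The tanh weight normalization, the temperature T, and the loss weighting alpha are illustrative assumptions for this sketch, not the exact formulations from the paper.

import torch
import torch.nn.functional as F

def quantize_ste(x: torch.Tensor, k: int) -> torch.Tensor:
    # Map x in [0, 1] onto 2^k - 1 uniform levels; the straight-through
    # estimator makes the rounding act as identity in the backward pass.
    levels = 2 ** k - 1
    xq = torch.round(x * levels) / levels
    return x + (xq - x).detach()

def quantize_weights(w: torch.Tensor, k: int) -> torch.Tensor:
    # Assumed normalization: squash weights into [-1, 1] before
    # quantizing (the paper's exact scheme may differ).
    w = torch.tanh(w)
    w = w / torch.max(torch.abs(w))
    return 2.0 * quantize_ste((w + 1.0) / 2.0, k) - 1.0

def apprentice_loss(student_logits, teacher_logits, labels,
                    T: float = 1.0, alpha: float = 0.5):
    # Hard-label cross-entropy plus a soft-target term distilled from
    # a full-precision teacher; alpha and T are hypothetical defaults.
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft

In a WRPN-style setup, quantize_weights would wrap each layer's weights (and a similar quantizer the activations) in the forward pass, with layer widths increased to recover the accuracy lost to reduced precision; in an Apprentice-style setup, the low-precision student is trained against a full-precision teacher via the loss above.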


Related research

11/15/2017
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on comp...

05/26/2021
Low-Precision Hardware Architectures Meet Recommendation Model Inference at Scale
Tremendous success of machine learning (ML) and the unabated growth in M...

02/27/2021
ProbLP: A framework for low-precision probabilistic inference
Bayesian reasoning is a powerful mechanism for probabilistic inference i...

06/17/2021
How Low Can We Go: Trading Memory for Error in Low-Precision Training
Low-precision arithmetic trains deep learning models using less energy, ...

10/29/2022
MinUn: Accurate ML Inference on Microcontrollers
Running machine learning inference on tiny devices, known as TinyML, is ...

10/09/2019
QPyTorch: A Low-Precision Arithmetic Simulation Framework
Low-precision training reduces computational cost and produces efficient...

04/10/2017
WRPN: Training and Inference using Wide Reduced-Precision Networks
For computer vision applications, prior works have shown the efficacy of...
