Training Deep Neural Networks Using Posit Number System

09/06/2019
by Jinming Lu, et al.

As Deep Neural Network (DNN) models grow in size, their high memory requirements and computational complexity have become an obstacle to efficient DNN implementations. To ease this problem, reduced-precision representations for DNN training and inference have attracted considerable interest from researchers. This paper first proposes a methodology for training DNNs with posit arithmetic, a type-3 universal number (Unum) format that is similar to floating point (FP) but has reduced precision. A warm-up training strategy and layer-wise scaling factors are adopted to stabilize training and to fit the dynamic range of DNN parameters. With the proposed training methodology, we demonstrate the first successful training of DNN models on the ImageNet image classification task with 16-bit posits and no accuracy loss. An efficient hardware architecture for the posit multiply-and-accumulate operation is then proposed, which achieves a significant improvement in energy efficiency over traditional floating-point implementations. The proposed design is useful for future low-power DNN training accelerators.
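To make the posit(n=16, es=1) format concrete, below is a minimal Python sketch that decodes a 16-bit posit bit pattern (sign, regime, exponent, fraction) into a real value, following the general type-3 Unum definition. This is an illustrative helper written for this summary, not the paper's implementation; the function name decode_posit and the test patterns are assumptions.

```python
def decode_posit(bits: int, n: int = 16, es: int = 1) -> float:
    """Decode an n-bit posit bit pattern (unsigned int) into a Python float."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")               # NaR: "not a real"
    sign = -1.0 if bits & (1 << (n - 1)) else 1.0
    if sign < 0.0:
        bits = (-bits) & mask             # negative posits are stored in two's complement
    body = bits & ((1 << (n - 1)) - 1)    # drop the sign bit
    i = n - 2                             # index of the most significant body bit
    first = (body >> i) & 1
    run = 0
    while i >= 0 and ((body >> i) & 1) == first:
        run += 1                          # regime: run length of identical leading bits
        i -= 1
    k = run - 1 if first else -run        # regime value from the run length
    i -= 1                                # skip the regime terminating bit (if present)
    exp = 0
    for _ in range(es):                   # exponent: next es bits, zero-padded if truncated
        exp = (exp << 1) | ((body >> i) & 1 if i >= 0 else 0)
        i -= 1
    frac_bits = max(i + 1, 0)             # remaining bits form the fraction
    frac = body & ((1 << frac_bits) - 1) if frac_bits else 0
    mantissa = 1.0 + frac / (1 << frac_bits) if frac_bits else 1.0
    useed = 1 << (1 << es)                # useed = 2^(2^es); 4 when es = 1
    return sign * (useed ** k) * (2.0 ** exp) * mantissa


# A few sanity checks on well-known posit(16,1) patterns:
for pattern in (0x4000, 0x5000, 0x0001, 0x7FFF, 0xC000):
    print(hex(pattern), decode_posit(pattern))
# 0x4000 -> 1.0, 0x5000 -> 2.0, 0x0001 -> 2^-28 (minpos),
# 0x7fff -> 2^28 (maxpos), 0xc000 -> -1.0
```

In a training flow like the one described in the abstract, a per-layer scaling factor (typically a power of two) would be applied to each tensor before quantizing it into this format, so that its values fall inside the posit's high-accuracy region; the exact scaling scheme is not detailed here and this description is only an assumption based on the abstract.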


research · 01/16/2020
Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks
Training with larger number of parameters while keeping fast iterations ...

research · 04/15/2021
All-You-Can-Fit 8-Bit Flexible Floating-Point Format for Accurate and Memory-Efficient Inference of Deep Neural Networks
Modern deep neural network (DNN) models generally require a huge amount ...

research · 03/13/2022
FlexBlock: A Flexible DNN Training Accelerator with Multi-Mode Block Floating Point Support
Training deep neural networks (DNNs) is a computationally expensive job,...

research · 02/10/2021
Hybrid In-memory Computing Architecture for the Training of Deep Neural Networks
The cost involved in training deep neural networks (DNNs) on von-Neumann...

research · 05/29/2023
Reversible Deep Neural Network Watermarking: Matching the Floating-point Weights
Static deep neural network (DNN) watermarking embeds watermarks into the...

research · 10/12/2020
TUTOR: Training Neural Networks Using Decision Rules as Model Priors
The human brain has the ability to carry out new tasks with limited expe...

research · 01/25/2021
CPT: Efficient Deep Neural Network Training via Cyclic Precision
Low-precision deep neural network (DNN) training has gained tremendous a...
