Deep Positron: A Deep Neural Network Using the Posit Number System

12/05/2018
by Zachariah Carmichael, et al.

The recent surge of interest in Deep Neural Networks (DNNs) has led to increasingly complex networks that tax computational and memory resources. Many DNNs presently use 16-bit or 32-bit floating-point operations. Significant performance and power gains can be obtained when DNN accelerators support low-precision numerical formats. Despite considerable research, there is still a knowledge gap on how low-precision operations can be realized for both DNN training and inference. In this work, we propose a DNN architecture, Deep Positron, with the posit numerical format operating successfully at ≤8 bits for inference. We propose a precision-adaptable FPGA soft core for exact multiply-and-accumulate, enabling a uniform comparison across three numerical formats: fixed-point, floating-point, and posit. Preliminary results demonstrate that 8-bit posit has better accuracy than 8-bit fixed-point or floating-point for three different low-dimensional datasets. Moreover, its accuracy is comparable to that of 32-bit floating-point on a Xilinx Virtex-7 FPGA device. The trade-offs between DNN performance and hardware resources, i.e., latency, power, and resource utilization, show that posit outperforms the other formats in accuracy and latency at 8 bits and below.
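For readers unfamiliar with the format: an n-bit posit encodes a sign bit, a variable-length regime, es exponent bits, and a fraction, and represents the value (-1)^s * useed^k * 2^e * (1 + f) with useed = 2^(2^es). Below is a minimal Python sketch of standard posit decoding; it is not the paper's FPGA implementation, and the word size n and exponent width es are free parameters here (the paper evaluates posits at 8 bits and below).

    def decode_posit(bits: int, n: int = 8, es: int = 2) -> float:
        """Decode an n-bit posit with 'es' exponent bits into a Python float.

        Layout: sign | regime | exponent | fraction.
        Value: (-1)^s * useed^k * 2^e * (1 + f), useed = 2^(2^es).
        """
        mask = (1 << n) - 1
        bits &= mask
        if bits == 0:
            return 0.0
        if bits == 1 << (n - 1):
            return float("nan")        # NaR (not a real)

        sign = bits >> (n - 1)
        if sign:                        # negative posits are stored in two's complement
            bits = (-bits) & mask

        # Work on the n-1 bits after the sign as a bit string for clarity.
        body = format(bits & ((1 << (n - 1)) - 1), f"0{n - 1}b")

        # Regime: run length of identical leading bits determines the scale k.
        r0 = body[0]
        run = len(body) - len(body.lstrip(r0))
        k = run - 1 if r0 == "1" else -run

        # Bits remaining after the regime and its terminating bit.
        rest = body[run + 1:]

        # Exponent: next es bits, zero-padded if the word runs out.
        exp_bits = (rest[:es] + "0" * es)[:es]
        e = int(exp_bits, 2) if es > 0 else 0

        # Fraction: whatever is left, interpreted as the hidden-one form 1.f.
        frac_bits = rest[es:]
        f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0

        useed = 2 ** (2 ** es)
        value = (useed ** k) * (2 ** e) * (1.0 + f)
        return -value if sign else value

    # Example: the 8-bit pattern 01000000 decodes to 1.0 for any es.
    print(decode_posit(0b01000000))     # -> 1.0

The tapered accuracy implied by the variable-length regime is what gives small posits their advantage near 1.0, which is where normalized DNN weights and activations tend to cluster.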

Related research

03/25/2019 · Performance-Efficiency Trade-off of Low-Precision Numerical Formats in Deep Neural Networks
Deep neural networks (DNNs) have been demonstrated as effective prognost...

06/15/2021 · Development of Quantized DNN Library for Exact Hardware Emulation
Quantization is used to speed up execution time and save power when runn...

11/06/2017 · Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks
Deep neural networks are commonly developed and trained in 32-bit floati...

05/11/2017 · Hardware-Software Codesign of Accurate, Multiplier-free Deep Neural Networks
While Deep Neural Networks (DNNs) push the state-of-the-art in many mach...

10/23/2018 · Deep Neural Network inference with reduced word length
Deep neural networks (DNN) are powerful models for many pattern recognit...

04/15/2021 · All-You-Can-Fit 8-Bit Flexible Floating-Point Format for Accurate and Memory-Efficient Inference of Deep Neural Networks
Modern deep neural network (DNN) models generally require a huge amount ...

04/30/2021 · PositNN: Training Deep Neural Networks with Mixed Low-Precision Posit
Low-precision formats have proven to be an efficient way to reduce not o...
