Deep Learning Inference on Embedded Devices: Fixed-Point vs Posit

05/22/2018
by Seyed H. F. Langroudi, et al.

Performing the inference step of deep learning in resource-constrained environments, such as embedded devices, is challenging, and success requires optimization at both the software and hardware levels. Low-precision arithmetic, and specifically low-precision fixed-point number systems, has become the standard for deep learning inference. However, a major drawback of fixed-point representation is that its values are uniformly spaced, while deep network data and parameters (e.g., weights) are non-uniformly distributed. The recently proposed posit number system, by contrast, represents numbers in a non-uniform manner. Motivated by this property, in this paper we explore using the posit number system to represent the weights of deep convolutional neural networks. We do not apply any quantization techniques, and hence the network weights do not require re-training. The results of this exploration show that the posit number system outperforms the fixed-point number system in terms of both accuracy and memory utilization.
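To make the contrast concrete, the sketch below is a minimal Python illustration, not the authors' implementation, under assumed parameters: an 8-bit posit with es = 0 and a signed 8-bit Q2.6 fixed-point format. It decodes a posit bit pattern and rounds a trained weight to the nearest representable value in each system, mirroring the no-retraining conversion described above. The posit grid is tapered, with values packed densely near zero where network weights typically concentrate, while the fixed-point grid has a uniform step.

```python
import numpy as np

def decode_posit(bits: int, n: int = 8, es: int = 0) -> float:
    """Decode an n-bit posit (es exponent bits) into a float."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):            # 1000...0 encodes NaR (not a real)
        return float("nan")
    sign = 1.0
    if bits >> (n - 1):                 # negative posits decode via two's complement
        sign, bits = -1.0, (-bits) & mask
    rest = bits & ((1 << (n - 1)) - 1)  # the n-1 bits after the sign bit
    first = (rest >> (n - 2)) & 1
    i, run = n - 2, 0
    while i >= 0 and ((rest >> i) & 1) == first:   # regime: run of identical bits
        run, i = run + 1, i - 1
    k = run - 1 if first else -run
    i -= 1                              # skip the regime-terminating bit
    exp = 0
    for _ in range(es):                 # exponent: next es bits, zero-padded if cut off
        exp = (exp << 1) | ((rest >> i) & 1 if i >= 0 else 0)
        i -= 1
    frac_bits = max(i + 1, 0)           # fraction: whatever bits remain, read as 1.f
    frac = rest & ((1 << frac_bits) - 1) if frac_bits else 0
    mantissa = 1.0 + frac / (1 << frac_bits) if frac_bits else 1.0
    return sign * 2.0 ** (k * (1 << es) + exp) * mantissa

# All representable 8-bit posit values: a non-uniform grid, dense near zero.
POSIT8 = np.array(sorted(v for b in range(256)
                         if not np.isnan(v := decode_posit(b))))

def to_posit8(w: float) -> float:
    """Round a trained weight to the nearest posit8 value (no re-training)."""
    return float(POSIT8[np.abs(POSIT8 - w).argmin()])

def to_fixed_q2_6(w: float) -> float:
    """Round to the nearest value on a signed 8-bit Q2.6 fixed-point grid."""
    q = min(max(round(w * 64), -128), 127)   # uniform step of 1/64
    return q / 64

for w in (0.02, -0.13, 0.55, 1.7):
    print(f"w={w:+.3f}  posit8={to_posit8(w):+.6f}  fixed={to_fixed_q2_6(w):+.6f}")
```

Running the loop shows the trade-off the abstract describes: near zero the posit grid offers finer resolution than the uniform 1/64 fixed-point step, at the cost of coarser spacing for large magnitudes. The es = 0 and Q2.6 choices here are illustrative assumptions, not the paper's reported configurations.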


Related research

03/08/2017 · Deep Convolutional Neural Network Inference with Floating-point Weights and Fixed-point Activations
Deep convolutional neural network (CNN) inference requires significant a...

03/03/2016 · Convolutional Neural Networks using Logarithmic Data Representation
Recent advances in convolutional neural networks have considered model c...

02/13/2018 · Training and Inference with Integers in Deep Neural Networks
Researches on deep neural networks with discrete parameters and their de...

11/14/2018 · QUENN: QUantization Engine for low-power Neural Networks
Deep Learning is moving to edge devices, ushering in a new age of distri...

11/19/2019 · IFQ-Net: Integrated Fixed-point Quantization Networks for Embedded Vision
Deploying deep models on embedded devices has been a challenging problem...

10/02/2020 · Hidden automatic sequences
An automatic sequence is a letter-to-letter coding of a fixed point of a...

05/10/2016 · CORDIC-based Architecture for Powering Computation in Fixed-Point Arithmetic
We present a fixed point architecture (source VHDL code is provided) for...
