All-You-Can-Fit 8-Bit Flexible Floating-Point Format for Accurate and Memory-Efficient Inference of Deep Neural Networks

04/15/2021
by Cheng-Wei Huang, et al.

Modern deep neural network (DNN) models generally require a huge number of weight and activation values to achieve good inference results. These data inevitably demand massive off-chip memory capacity and bandwidth, and the situation gets even worse when they are represented in high-precision floating-point formats. Efforts have been made to represent such data in various 8-bit floating-point formats; nevertheless, a notable accuracy loss remains unavoidable. In this paper we introduce an extremely flexible 8-bit floating-point (FFP8) format whose defining factors, namely the bit widths of the exponent and fraction fields, the exponent bias, and even the presence of the sign bit, are all configurable. We also present a methodology to properly determine these factors so that the accuracy of model inference can be maximized. The methodology is founded on a key observation: both the maximum magnitude and the value distribution differ considerably between the weights and the activations of most DNN models. Experimental results demonstrate that the proposed FFP8 format achieves an extremely low accuracy loss of 0.1%∼0.3% for several representative image classification models, even without model retraining. Moreover, a classical floating-point processing unit can easily be turned into an FFP8-compliant one at only a minor extra hardware cost.
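The abstract only sketches the format at a high level, so the following Python snippet is a rough illustration rather than the paper's actual encoding: it enumerates the values representable by a flexible 8-bit float with a configurable exponent width, fraction width, exponent bias, and optional sign bit, and rounds tensors to the nearest representable value. The function names (ffp8_grid, ffp8_quantize) and the example layouts are assumptions of ours, and special values such as infinities and NaNs are ignored for brevity.

```python
import numpy as np

def ffp8_grid(exp_bits, frac_bits, bias, signed=True):
    """Enumerate the real values representable by a hypothetical flexible
    8-bit float with `exp_bits` exponent bits, `frac_bits` fraction bits,
    an exponent `bias`, and an optional sign bit (sign + exponent + fraction
    bits must total 8). Special values (inf/NaN) are not modeled here."""
    assert (1 if signed else 0) + exp_bits + frac_bits == 8
    values = []
    for e in range(2 ** exp_bits):
        for f in range(2 ** frac_bits):
            if e == 0:   # subnormal range: no implicit leading 1
                mag = (f / 2 ** frac_bits) * 2.0 ** (1 - bias)
            else:        # normal range: implicit leading 1
                mag = (1 + f / 2 ** frac_bits) * 2.0 ** (e - bias)
            values.append(mag)
            if signed:
                values.append(-mag)
    return np.unique(np.array(values))

def ffp8_quantize(x, grid):
    """Round each element of x to the nearest representable value in `grid`
    (out-of-range inputs saturate to the smallest/largest grid value)."""
    x = np.asarray(x, dtype=np.float64)
    idx = np.searchsorted(grid, x).clip(1, len(grid) - 1)
    lo, hi = grid[idx - 1], grid[idx]
    return np.where(np.abs(x - lo) <= np.abs(hi - x), lo, hi)

# Example: a signed 1-4-3 layout for weights and an unsigned 5-3 layout for
# post-ReLU (non-negative) activations; both layouts are illustrative only.
w_grid = ffp8_grid(exp_bits=4, frac_bits=3, bias=7, signed=True)
a_grid = ffp8_grid(exp_bits=5, frac_bits=3, bias=15, signed=False)
print(ffp8_quantize([0.1234, -3.7, 42.0], w_grid))
print(ffp8_quantize([0.1234, 3.7, 42.0], a_grid))
```

Using different layouts for weights and activations in the example mirrors the abstract's observation that their magnitudes and distributions differ, which is what the paper's configurable exponent/fraction split, bias, and optional sign bit are meant to exploit.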


