EFloat: Entropy-coded Floating Point Format for Deep Learning

02/04/2021
by Rajesh Bordawekar, et al.

We describe the EFloat floating-point number format, which offers 4 to 6 additional bits of precision and a wider exponent range than existing floating-point (FP) formats of any width, including FP32, BFloat16, IEEE half precision, DLFloat, TensorFloat, and 8-bit floats. In a large class of deep learning models we observe that FP exponent values tend to cluster around a few unique values, which presents entropy-encoding opportunities. The EFloat format encodes frequent exponent values and signs with Huffman codes to minimize the average exponent field width. The saved bits then become available to the mantissa, increasing EFloat numeric precision on average by 4 to 6 bits compared to other FP formats of equal width. The proposed encoding concept may also benefit low-precision formats such as 8-bit floats. Training deep learning models with low-precision arithmetic is challenging; EFloat, with its increased precision, may provide an opportunity for those tasks as well. We currently use the EFloat format to compress data and save memory in large NLP deep learning models. A potential hardware implementation for alleviating the PCIe and memory bandwidth limitations of AI accelerators is also discussed.
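The mechanism described above (exponent values clustering around a few codes, Huffman coding of the sign-plus-exponent symbol, leftover bits reassigned to the mantissa) can be illustrated with a short, self-contained Python sketch. This is not the paper's implementation; the Gaussian sample data, the 16-bit word budget, and helper names such as sign_exponent and huffman_lengths are assumptions made purely for demonstration.

# Toy illustration of the EFloat idea (not the authors' implementation):
# extract the (sign, biased exponent) symbol of each FP32 value, build a
# Huffman code over those symbols, and see how many bits of a fixed-width
# word are left for the mantissa on average.
import heapq
import random
import struct
from collections import Counter

def sign_exponent(x: float):
    # Bit pattern of x as IEEE FP32: 1 sign bit, 8 exponent bits, 23 mantissa bits.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 31, (bits >> 23) & 0xFF

def huffman_lengths(freqs):
    # Return per-symbol code lengths of a Huffman code for the given counts.
    heap = [(count, i, {sym: 0}) for i, (sym, count) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate case: a single symbol
        return {sym: 1 for sym in freqs}
    tiebreak = len(heap)
    while len(heap) > 1:
        c1, _, d1 = heapq.heappop(heap)
        c2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (c1 + c2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Bell-shaped values, as commonly seen in deep-learning weight tensors, so the
# exponents cluster around a few codes and entropy coding pays off.
random.seed(0)
values = [random.gauss(0.0, 0.02) for _ in range(100_000)]

freqs = Counter(sign_exponent(v) for v in values)
lengths = huffman_lengths(freqs)
total = sum(freqs.values())
avg_exp_bits = sum(freqs[s] * lengths[s] for s in freqs) / total

TOTAL_BITS = 16  # assumed storage budget, same as BFloat16 / FP16
print(f"distinct (sign, exponent) symbols : {len(freqs)}")
print(f"average Huffman code length       : {avg_exp_bits:.2f} bits")
print(f"fixed sign + exponent in BFloat16 : 9 bits (7-bit mantissa)")
print(f"average mantissa bits at {TOTAL_BITS} bits   : {TOTAL_BITS - avg_exp_bits:.2f}")

For a tightly clustered value distribution, the Huffman-coded sign-plus-exponent field averages well under the 9 fixed bits that BFloat16 spends, which is where the extra mantissa bits claimed in the abstract would come from.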
