Comparative Study: Standalone IEEE 16-bit Floating-Point for Image Classification

05/18/2023
by Juyoung Yun, et al.

Reducing the number of bits needed to encode the weights and activations of neural networks is highly desirable, as it speeds up training and inference while reducing memory consumption. Unsurprisingly, considerable attention has been devoted to neural networks that employ lower-precision computation, including IEEE 16-bit, Google bfloat16, 8-bit, 4-bit floating-point or fixed-point, 2-bit, and various mixed-precision algorithms. Among these low-precision formats, IEEE 16-bit stands out for its universal compatibility with contemporary GPUs; this contrasts with bfloat16, which requires high-end GPUs, and with other non-standard formats with fewer bits, which typically require software emulation. This study therefore focuses on the widely accessible IEEE 16-bit format for comparative analysis. The analysis involves an in-depth theoretical investigation of the factors that lead to discrepancies between 16-bit and 32-bit models, including a formalization of floating-point error and tolerance that characterizes the conditions under which a 16-bit model can approximate 32-bit results. Contrary to literature that credits the success of noise-tolerant neural networks to regularization effects, our study, supported by a series of rigorous experiments, provides a quantitative explanation of why standalone IEEE 16-bit floating-point neural networks can perform on par with 32-bit and mixed-precision networks on various image classification tasks. Because no prior research has studied IEEE 16-bit as a standalone floating-point precision in neural networks, we believe our findings will have a significant impact, encouraging the adoption of standalone IEEE 16-bit networks in future neural network applications.
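To make the notion of a standalone IEEE 16-bit network concrete, the sketch below trains a small image classifier with every weight, activation, and gradient held in float16, in contrast to mixed precision, which computes in float16 but keeps float32 master weights. This is a minimal illustration assuming TensorFlow/Keras, not the authors' experimental setup; the MNIST dataset, the tiny convolutional model, and the enlarged backend epsilon are placeholder choices. For scale, IEEE float16 has a 10-bit mantissa, so a single rounding introduces a relative error of at most 2^-11 (about 4.9e-4), which is the kind of per-operation error a floating-point error/tolerance analysis reasons about.

from tensorflow import keras

# Standalone IEEE 16-bit: every variable and activation is created as float16.
keras.backend.set_floatx('float16')
# Use a fuzz factor well inside float16's normal range (min normal ~6e-5);
# the float32 default of 1e-7 is subnormal in half precision. This is an
# illustrative choice, not a setting taken from the paper.
keras.backend.set_epsilon(1e-4)

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = (x_train[..., None] / 255.0).astype('float16')
x_test = (x_test[..., None] / 255.0).astype('float16')

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(32, 3, activation='relu'),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Weights, activations, and gradients stay in float16 end to end.
model.fit(x_train, y_train, epochs=3, batch_size=128,
          validation_data=(x_test, y_test))

An equivalent 32-bit baseline is obtained by leaving floatx at its default of 'float32', which is the comparison the abstract describes.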

Related research

04/30/2021  PositNN: Training Deep Neural Networks with Mixed Low-Precision Posit
04/10/2021  Fixed-Posit: A Floating-Point Representation for Error-Resilient Applications
01/30/2023  The Hidden Power of Pure 16-bit Floating-Point Neural Networks
01/05/2021  An Investigation on Inherent Robustness of Posit Data Representation
09/11/2023  Compressed Real Numbers for AI: a case-study using a RISC-V CPU
07/12/2019  Posit NPB: Assessing the Precision Improvement in HPC Scientific Applications
06/23/2023  FPGA Implementation of Convolutional Neural Network for Real-Time Handwriting Recognition
