A Framework for Semi-Automatic Precision and Accuracy Analysis for Fast and Rigorous Deep Learning

02/10/2020 · by Christoph Lauter, et al.

Deep Neural Networks (DNNs) are a performance-hungry application, and Floating-Point (FP) and custom floating-point-like arithmetic satisfy this hunger. Yet while there is a need for speed, inference in DNNs does not seem to have any comparable need for precision: many papers experimentally observe that DNNs run successfully at almost ridiculously low precision. The aim of this paper is twofold. First, we shed some theoretical light on why a DNN's FP accuracy stays high even at low FP precision: we observe that the loss of relative accuracy in the convolutional steps is recovered by the activation layers, which are extremely well conditioned, and we give an interpretation of the link between precision and accuracy in DNNs. Second, we present a software framework for semi-automatic FP error analysis of the inference phase of deep learning. Compatible with common TensorFlow/Keras models, it leverages the frugally-deep Python/C++ library to transform a neural network into C++ code in order to analyze the network's need for precision. This rigorous analysis is based on Interval and Affine Arithmetic and computes absolute and relative error bounds for a DNN. We demonstrate our tool on several examples.
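To make the two ideas in the abstract concrete, here is a minimal sketch (not the paper's actual tool, and not the frugally-deep API) of how interval arithmetic can propagate absolute error bounds through one dense step followed by a ReLU activation. The function names `interval_dot` and `relu_interval` are illustrative inventions; the sketch only shows why a monotone, 1-Lipschitz activation such as ReLU never widens an error interval, which is one way to read the paper's claim that activation layers are well conditioned.

```python
def interval_dot(xs_lo, xs_hi, ws):
    """Interval dot product: each input is an interval [lo, hi]
    capturing its absolute error; weights are exact scalars."""
    lo = hi = 0.0
    for xl, xh, w in zip(xs_lo, xs_hi, ws):
        a, b = w * xl, w * xh          # a negative weight flips the interval
        lo += min(a, b)
        hi += max(a, b)
    return lo, hi

def relu_interval(lo, hi):
    """ReLU is monotone and 1-Lipschitz, so the output interval
    max(0, [lo, hi]) is never wider than the input interval:
    the activation does not amplify absolute error."""
    return max(0.0, lo), max(0.0, hi)

# Two inputs known to within +/- 0.01, exact weights:
lo, hi = interval_dot([0.99, -2.01], [1.01, -1.99], [0.5, 0.25])
rl, rh = relu_interval(lo, hi)
# The ReLU output interval width is <= the pre-activation width.
```

The same propagation idea extends to affine arithmetic, which additionally tracks correlations between error terms and thus yields tighter bounds than plain intervals on long dot products.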


Related research

- 06/15/2021 · Development of Quantized DNN Library for Exact Hardware Emulation
  Quantization is used to speed up execution time and save power when runn...

- 08/07/2018 · Rethinking Numerical Representations for Deep Neural Networks
  With ever-increasing computational demand for deep learning, it is criti...

- 03/25/2019 · Performance-Efficiency Trade-off of Low-Precision Numerical Formats in Deep Neural Networks
  Deep neural networks (DNNs) have been demonstrated as effective prognost...

- 10/23/2018 · Deep Neural Network inference with reduced word length
  Deep neural networks (DNN) are powerful models for many pattern recognit...

- 11/09/2017 · Stochastic Deep Learning in Memristive Networks
  We study the performance of stochastically trained deep neural networks ...

- 10/02/2016 · Accelerating Deep Convolutional Networks using low-precision and sparsity
  We explore techniques to significantly improve the compute efficiency an...

- 05/15/2023 · Marsellus: A Heterogeneous RISC-V AI-IoT End-Node SoC with 2-to-8b DNN Acceleration and 30
  Emerging Artificial Intelligence-enabled Internet-of-Things (AI-IoT) Sys...
