A Framework for Semi-Automatic Precision and Accuracy Analysis for Fast and Rigorous Deep Learning

02/10/2020
by Christoph Lauter, et al.
University of Nantes

Deep Neural Networks (DNNs) are a performance-hungry application. Floating-Point (FP) and custom floating-point-like arithmetic satisfy this hunger. While there is a need for speed, inference in DNNs does not seem to have any need for precision: many papers experimentally observe that DNNs can successfully run at almost ridiculously low precision. The aim of this paper is two-fold. First, we shed some theoretical light on why a DNN's FP accuracy stays high even at low FP precision. We observe that the loss of relative accuracy in the convolutional steps is recovered by the activation layers, which are extremely well-conditioned, and we give an interpretation of the link between precision and accuracy in DNNs. Second, the paper presents a software framework for semi-automatic FP error analysis of the inference phase of deep learning. Compatible with common TensorFlow/Keras models, it leverages the frugally-deep Python/C++ library to transform a neural network into C++ code in order to analyze the network's need for precision. This rigorous analysis is based on interval and affine arithmetic to compute absolute and relative error bounds for a DNN. We demonstrate our tool on several examples.
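To give a flavor of the kind of analysis described in the abstract, the following is a minimal C++ sketch, not the authors' implementation (which works on frugally-deep-generated network code), of interval-style rounding-error tracking through one dot product followed by a ReLU activation. The unit roundoff u, the weights, and the inputs are illustrative assumptions; rounding is modeled by the standard relative-error enclosure fl(r) in [r(1-u), r(1+u)], ignoring underflow.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Enclosure of every value the finite-precision computation can take.
struct Interval {
    double lo, hi;
};

// Enclose the rounding of an exact real r at unit roundoff u:
// fl(r) lies in [r*(1-u), r*(1+u)], whatever the sign of r.
static Interval round_enclose(double r, double u) {
    double a = r * (1.0 - u), b = r * (1.0 + u);
    return { std::min(a, b), std::max(a, b) };
}

// Interval addition, followed by one rounding per endpoint.
static Interval add(Interval x, Interval y, double u) {
    Interval lo = round_enclose(x.lo + y.lo, u);
    Interval hi = round_enclose(x.hi + y.hi, u);
    return { lo.lo, hi.hi };
}

// Multiplication of an interval by a point weight, then rounding.
static Interval mul_scalar(double w, Interval x, double u) {
    double p = w * x.lo, q = w * x.hi;
    Interval lo = round_enclose(std::min(p, q), u);
    Interval hi = round_enclose(std::max(p, q), u);
    return { lo.lo, hi.hi };
}

// ReLU is monotone, so applying it to the endpoints is exact; whenever the
// enclosure is clamped against 0, the relative accuracy lost in the dot
// product is recovered -- the effect the paper analyzes.
static Interval relu(Interval x) {
    return { std::max(0.0, x.lo), std::max(0.0, x.hi) };
}

int main() {
    const double u = std::ldexp(1.0, -11);   // binary16-like unit roundoff (assumption)
    const std::vector<double>   w = { 0.5, -1.25, 2.0 };                    // illustrative weights
    const std::vector<Interval> x = { {1.0, 1.0}, {0.75, 0.75}, {-0.5, -0.5} };  // illustrative inputs

    Interval acc { 0.0, 0.0 };               // dot product with per-operation rounding
    for (std::size_t i = 0; i < w.size(); ++i)
        acc = add(acc, mul_scalar(w[i], x[i], u), u);

    Interval out = relu(acc);
    std::printf("pre-activation : [%.8f, %.8f]  width %.3e\n", acc.lo, acc.hi, acc.hi - acc.lo);
    std::printf("post-activation: [%.8f, %.8f]  width %.3e\n", out.lo, out.hi, out.hi - out.lo);
    return 0;
}

Here the pre-activation enclosure has a small but nonzero width coming from the rounded multiply-adds, while the ReLU clamps the (negative) result to zero and the width vanishes. Affine arithmetic, also used by the framework, additionally tracks linear correlations between error terms and typically yields tighter enclosures than plain intervals.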

