Deep Neural Network inference with reduced word length

10/23/2018
by Lukas Mauch, et al.

Deep neural networks (DNNs) are powerful models for many pattern recognition tasks, yet their high computational complexity and memory requirements restrict them to high-performance computing platforms. In this paper, we propose a new method to evaluate DNNs trained with 32-bit floating point (float32) accuracy using only low-precision integer arithmetic combined with binary shift and clipping operations. Because these operations are much simpler to implement in hardware than high-precision floating point computation, our method enables efficient DNN inference on dedicated hardware. In experiments on MNIST, we demonstrate that DNNs trained with float32 can be evaluated without significant performance degradation using either 2-bit integer arithmetic plus a few float32 calculations per layer, or 3-bit integer arithmetic combined with binary shift and clipping alone.
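The core idea of the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the bit width, the symmetric uniform quantizer, and the fixed shift amount below are illustrative assumptions. It shows the two ingredients named in the paper: evaluating a layer with low-bit integer arithmetic, and replacing the floating-point rescaling step with a binary shift followed by clipping.

```python
import numpy as np

def quantize(w, bits):
    # Symmetric uniform quantization of float32 weights to signed integers
    # (an assumed quantizer, for illustration only).
    qmax = 2 ** (bits - 1) - 1              # e.g. 3 for 3-bit
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def int_layer(x_q, w_q, shift, bits):
    # Integer matrix multiply into a wide accumulator, then rescale by a
    # binary right shift and clip back to the low-bit integer range.
    acc = x_q.astype(np.int64) @ w_q.astype(np.int64)
    y = acc >> shift                         # binary shift replaces float rescale
    qmax = 2 ** (bits - 1) - 1
    return np.clip(y, -qmax - 1, qmax)

# Toy usage: quantize a random float32 weight matrix to 3-bit integers
# and evaluate one layer with integer arithmetic only.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 3)).astype(np.float32)
w_q, scale = quantize(w, bits=3)
x_q = np.array([[1, -2, 3, 0]], dtype=np.int32)
y = int_layer(x_q, w_q, shift=2, bits=3)
```

After the shift and clip, `y` again lies in the 3-bit range [-4, 3], so the next layer can consume it with the same integer arithmetic, which is the property that makes this pipeline attractive for dedicated hardware.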


Related research

- 12/05/2018: Deep Positron: A Deep Neural Network Using the Posit Number System
  The recent surge of interest in Deep Neural Networks (DNNs) has led to i...

- 01/08/2022: PocketNN: Integer-only Training and Inference of Neural Networks via Direct Feedback Alignment and Pocket Activations in Pure C++
  Standard deep learning algorithms are implemented using floating-point r...

- 11/29/2017: Transfer Learning with Binary Neural Networks
  Previous work has shown that it is possible to train deep neural network...

- 01/31/2023: Tricking AI chips into Simulating the Human Brain: A Detailed Performance Analysis
  Challenging the Nvidia monopoly, dedicated AI-accelerator chips have beg...

- 03/11/2020: Compressing deep neural networks on FPGAs to binary and ternary precision with HLS4ML
  We present the implementation of binary and ternary neural networks in t...

- 02/10/2020: A Framework for Semi-Automatic Precision and Accuracy Analysis for Fast and Rigorous Deep Learning
  Deep Neural Networks (DNN) represent a performance-hungry application. F...

- 03/30/2020: How Not to Give a FLOP: Combining Regularization and Pruning for Efficient Inference
  The challenge of speeding up deep learning models during the deployment ...
