Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training

04/09/2020
by Simon Wiedemann, et al.

Deep Neural Networks are successful but highly computationally expensive learning systems. One of the main sources of time and energy drain is the well-known backpropagation (backprop) algorithm, which roughly accounts for 2/3 of the computational complexity of training. In this work we propose a method for reducing the computational cost of backprop, which we name dithered backprop. It consists of applying a stochastic quantization scheme to intermediate results of the method. The particular quantization scheme, called non-subtractive dither (NSD), induces sparsity which can be exploited by computing efficient sparse matrix multiplications. Experiments on popular image classification tasks show that it induces 92% sparsity on average across a wide set of models at no or negligible accuracy drop in comparison to state-of-the-art approaches, thus significantly reducing the computational complexity of the backward pass. Moreover, we show that our method is fully compatible with state-of-the-art training methods that reduce the bit precision of training down to 8 bits, and can therefore further reduce the computational requirements. Finally, we discuss and show the potential benefits of applying dithered backprop in a distributed training setting, where both communication and compute efficiency may increase simultaneously with the number of participating nodes.
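The key operation can be illustrated with a short, self-contained sketch of non-subtractive dithered quantization. This is only an illustration of the general technique named in the abstract, not the authors' implementation; the function name, the step size, and the use of NumPy are assumptions made here for clarity.

```python
import numpy as np

def nsd_quantize(x, step, rng):
    """Minimal sketch of non-subtractive dithered (NSD) quantization.

    Uniform dither in [-step/2, step/2) is added before rounding to the
    nearest multiple of `step`, and (being non-subtractive) is not removed
    afterwards. Entries much smaller than `step` round to exact zero with
    high probability, so the quantized tensor is both low-precision and
    sparse, and could feed a sparse matrix multiplication in the backward pass.
    """
    dither = rng.uniform(-step / 2.0, step / 2.0, size=x.shape)
    return step * np.round((x + dither) / step)

# Toy usage: quantize a small "gradient" tensor and measure the induced sparsity.
rng = np.random.default_rng(0)
grad = rng.normal(scale=0.01, size=(4, 6)).astype(np.float32)
q = nsd_quantize(grad, step=0.05, rng=rng)
print("induced sparsity:", float(np.mean(q == 0.0)))
```

In such a scheme, a larger step size trades more sparsity (and hence more compute savings) against a coarser gradient signal.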


research
12/19/2021

Logarithmic Unbiased Quantization: Practical 4-bit Training in Deep Learning

Quantization of the weights and activations is one of the main methods t...

research
09/19/2017

Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks

A low precision deep neural network training technique for producing spa...

research
06/15/2018

Straggler-Resilient and Communication-Efficient Distributed Iterative Linear Solver

We propose a novel distributed iterative linear inverse solver method. O...

research
01/25/2018

Investigating the Effects of Dynamic Precision Scaling on Neural Network Training

Training neural networks is a time- and compute-intensive operation. Thi...

research
08/16/2018

Deeper Image Quality Transfer: Training Low-Memory Neural Networks for 3D Images

In this paper we address the memory demands that come with the processin...

research
03/04/2023

Hierarchical Training of Deep Neural Networks Using Early Exiting

Deep neural networks provide state-of-the-art accuracy for vision tasks ...

research
05/24/2022

Semi-Parametric Deep Neural Networks in Linear Time and Memory

Recent advances in deep learning have been driven by large-scale paramet...
