TaxoNN: A Light-Weight Accelerator for Deep Neural Network Training

09/06/2020
by Kossar Pourahmadi, et al.

Emerging intelligent embedded devices rely on Deep Neural Networks (DNNs) to interact with the real-world environment. This interaction calls for the ability to retrain DNNs, since environmental conditions change continuously over time. Stochastic Gradient Descent (SGD) is a widely used algorithm that trains DNNs by iteratively optimizing the parameters over the training data. In this work, we first present a novel approach that adds training capability to a baseline, inference-only DNN accelerator by splitting the SGD algorithm into simple computational elements. Then, based on this heuristic approach, we propose TaxoNN, a light-weight accelerator for DNN training. TaxoNN tunes the DNN weights by reusing the hardware resources of the inference datapath through a time-multiplexing approach and low-bitwidth arithmetic units. Our experimental results show that TaxoNN incurs, on average, only a 0.97% higher misclassification rate than a full-precision implementation, while providing 2.1× power savings and 1.65× area reduction over the state-of-the-art DNN training accelerator.
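As a rough illustration of the idea behind splitting SGD into simple computational elements, the sketch below writes the vanilla SGD update w ← w − η·∂L/∂w as per-weight multiply-accumulate (MAC) operations, the same primitive an inference-only datapath already provides. This is a minimal, assumed decomposition for intuition, not the paper's actual hardware mapping; the function name sgd_update and the learning rate value are hypothetical.

```python
import numpy as np

def sgd_update(weights, grad, lr=0.01):
    """Update each weight as w <- w - lr * grad, one element at a time."""
    updated = np.empty_like(weights)
    for i in range(weights.size):
        # One multiply (lr * g) and one accumulate (w - ...) per parameter:
        # the kind of MAC operation an inference accelerator already exposes,
        # which could be reused (time-multiplexed) for training.
        updated.flat[i] = weights.flat[i] - lr * grad.flat[i]
    return updated

# Usage: tune a small weight matrix with a dummy gradient.
w = np.random.randn(4, 4).astype(np.float32)
g = np.random.randn(4, 4).astype(np.float32)
w = sgd_update(w, g, lr=0.05)
```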
