TinyProp – Adaptive Sparse Backpropagation for Efficient TinyML On-device Learning

08/17/2023
by Marcus Rüb, et al.

Training deep neural networks with backpropagation is both memory- and compute-intensive. This makes it difficult to run on-device learning or to fine-tune neural networks on tiny embedded devices such as low-power microcontroller units (MCUs). Sparse backpropagation algorithms try to reduce the computational load of on-device learning by training only a subset of the weights and biases. Existing approaches train a static number of weights; a poor choice of this so-called backpropagation ratio either limits the computational gain or leads to severe accuracy losses. In this paper we present TinyProp, the first sparse backpropagation method that dynamically adapts the backpropagation ratio at each step of on-device training. TinyProp introduces a small computational overhead for sorting the elements of the gradient, which does not significantly reduce the computational gains. TinyProp works particularly well for fine-tuning pre-trained networks on MCUs, a typical use case for embedded applications. On three typical datasets, MNIST, DCASE2020, and CIFAR10, TinyProp is on average 5 times faster than non-sparse training with an average accuracy loss of only 1%. It is 2.9 times faster than existing static sparse backpropagation algorithms, and the accuracy loss is reduced on average by 6% compared to a typical static setting of the backpropagation ratio.
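
To make the idea concrete, below is a minimal NumPy sketch of sparse backpropagation with a per-step backpropagation ratio: only the largest-magnitude fraction of the gradient is kept and updated, which requires the small sorting (partitioning) overhead mentioned in the abstract. The adaptation rule for the ratio shown here is a hypothetical placeholder, not the rule from the paper, and all names (`sparse_update`, `bp_ratio`) are illustrative.

```python
# Minimal sketch of adaptive sparse backpropagation on a single weight matrix.
# Assumptions: plain NumPy, a stand-in gradient, and a hypothetical rule for
# adapting the backpropagation ratio per step (not the actual TinyProp rule).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(32, 10))   # layer weights
lr = 0.01                                  # learning rate

def sparse_update(W, grad, bp_ratio):
    """Update only the largest-magnitude fraction `bp_ratio` of the gradient entries."""
    k = max(1, int(bp_ratio * grad.size))
    flat = np.abs(grad).ravel()
    # Indices of the k largest-magnitude gradient entries; the partial sort is
    # the small computational overhead of selecting which weights to train.
    idx = np.argpartition(flat, -k)[-k:]
    mask = np.zeros(grad.size, dtype=bool)
    mask[idx] = True
    sparse_grad = np.where(mask.reshape(grad.shape), grad, 0.0)
    return W - lr * sparse_grad

for step in range(100):
    grad = rng.normal(size=W.shape)        # stand-in for the true backprop gradient
    # Hypothetical per-step adaptation: use a smaller ratio when gradients are small.
    bp_ratio = float(np.clip(np.abs(grad).mean() * 5.0, 0.05, 1.0))
    W = sparse_update(W, grad, bp_ratio)
```

With a static ratio, the fraction of trained weights stays fixed regardless of how informative the gradient is; adapting it per step is what lets the method trade accuracy against compute dynamically.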


research
11/17/2020

ZORB: A Derivative-Free Backpropagation Algorithm for Neural Networks

Gradient descent and backpropagation have enabled neural networks to ach...
research
07/15/2022

POET: Training Neural Networks on Tiny Devices with Integrated Rematerialization and Paging

Fine-tuning models on edge devices like mobile phones would enable priva...
research
02/09/2023

SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks

We provide a new efficient version of the backpropagation algorithm, spe...
research
06/08/2015

Learning both Weights and Connections for Efficient Neural Networks

Neural networks are both computationally intensive and memory intensive,...
research
02/18/2015

Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks

Large multilayer neural networks trained with backpropagation have recen...
research
05/28/2019

A Graph Theoretic Framework of Recomputation Algorithms for Memory-Efficient Backpropagation

Recomputation algorithms collectively refer to a family of methods that ...
research
12/22/2014

Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets

An important class of problems involves training deep neural networks wi...
