RSO: A Gradient Free Sampling Based Approach For Training Deep Neural Networks

05/12/2020
by   Rohun Tripathi, et al.

We propose RSO (random search optimization), a gradient-free, Markov Chain Monte Carlo search based approach for training deep neural networks. To this end, RSO adds a perturbation to a weight in a deep neural network and tests whether it reduces the loss on a mini-batch. If it does, the weight is updated; otherwise the existing weight is retained. Surprisingly, we find that repeating this process only a few times for each weight is sufficient to train a deep neural network. The number of weight updates for RSO is an order of magnitude smaller than for backpropagation with SGD. RSO can make aggressive weight updates in each step because there is no concept of a learning rate, and the weight update step for individual layers is not coupled with the magnitude of the loss. RSO is evaluated on classification tasks on the MNIST and CIFAR-10 datasets with deep neural networks of 6 to 10 layers, where it achieves an accuracy of 99.1% on MNIST. After updating the weights just 5 times, the algorithm obtains a classification accuracy of 98%.
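The accept/reject loop described in the abstract can be sketched in a few lines. This is not the authors' code, only a minimal illustration of the idea on a tiny logistic-regression model: the network size, the Gaussian perturbation scale, and the number of sweeps are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic binary-classification problem standing in for a mini-batch.
X = rng.normal(size=(64, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def loss(W, b, X, y):
    """Binary cross-entropy of a one-layer sigmoid model."""
    p = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    eps = 1e-9  # guard against log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

W = rng.normal(size=2)
b = 0.0
initial = loss(W, b, X, y)

for sweep in range(50):                  # a few passes over all weights
    for i in range(W.size):
        trial = W.copy()
        trial[i] += rng.normal(scale=0.1)  # sampled perturbation
        # Greedy accept/reject: keep the perturbed weight only if it
        # strictly lowers the mini-batch loss; otherwise retain the old one.
        if loss(trial, b, X, y) < loss(W, b, X, y):
            W = trial
    db = rng.normal(scale=0.1)           # same rule for the bias
    if loss(W, b + db, X, y) < loss(W, b, X, y):
        b = b + db

final = loss(W, b, X, y)
print(final <= initial)  # → True: accepted moves never increase the loss
```

Because a perturbation is kept only when it strictly reduces the loss, the loss is non-increasing by construction; no gradient, learning rate, or backpropagated error signal is needed.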


Related research

04/20/2018 — Revisiting Small Batch Training for Deep Neural Networks
Modern deep neural network training is typically based on mini-batch sto...

02/10/2020 — Reducing the Computational Burden of Deep Learning with Recursive Local Representation Alignment
Training deep neural networks on large-scale datasets requires significa...

04/30/2022 — Engineering flexible machine learning systems by traversing functionally invariant paths in weight space
Deep neural networks achieve human-like performance on a variety of perc...

03/11/2020 — Improving the Backpropagation Algorithm with Consequentialism Weight Updates over Mini-Batches
Least mean squares (LMS) is a particular case of the backpropagation (BP...

01/08/2021 — Infinite-dimensional Folded-in-time Deep Neural Networks
The method recently introduced in arXiv:2011.10115 realizes a deep neura...

12/22/2014 — Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets
An important class of problems involves training deep neural networks wi...

10/19/2018 — Sequenced-Replacement Sampling for Deep Learning
We propose sequenced-replacement sampling (SRS) for training deep neural...
