Evolving Learning Rate Optimizers for Deep Neural Networks

03/23/2021
by Pedro Carvalho, et al.

Artificial Neural Networks (ANNs) became popular due to their successful application to difficult problems such as image and speech recognition. However, when practitioners want to design an ANN, they must undergo a laborious process of selecting a topology and a set of parameters. Currently, several state-of-the-art methods allow for the automatic selection of some of these aspects. Learning rate optimizers are one such set of techniques that search for good learning rate values. While these techniques are effective and have yielded good results over the years, they are general solutions, i.e., they do not consider the characteristics of a specific network. We propose a framework called AutoLR to automatically design learning rate optimizers. Two versions of the system are detailed. The first, Dynamic AutoLR, evolves static and dynamic learning rate optimizers based on the current epoch and the previous learning rate. The second, Adaptive AutoLR, evolves adaptive optimizers that can fine-tune the learning rate for each network weight, which makes them generally more effective. The results are competitive with the best state-of-the-art methods, even outperforming them in some scenarios. Furthermore, the system evolved an optimizer, ADES, that appears to be novel and innovative since, to the best of our knowledge, its structure differs from that of state-of-the-art methods.

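To make the distinction between the two search spaces concrete, the sketch below (Python/NumPy, with function names and constants chosen here for illustration, not taken from the paper) contrasts a dynamic policy, which maps the current epoch and the previous learning rate to a new learning rate, with an adaptive rule, which gives each network weight its own effective learning rate based on a running statistic of its gradient. The adaptive rule shown is an RMSProp-like stand-in, not the evolved ADES optimizer.

import numpy as np

def dynamic_lr_policy(epoch, prev_lr):
    # Dynamic policy: the next learning rate depends only on the current
    # epoch and the previous learning rate (illustrative step decay).
    return prev_lr * 0.5 if epoch > 0 and epoch % 10 == 0 else prev_lr

def adaptive_step(weights, grads, state, base_lr=0.01, beta=0.9, eps=1e-8):
    # Adaptive rule: every weight gets its own effective learning rate,
    # scaled by a running statistic of its own gradient (RMSProp-like
    # sketch for illustration, not the evolved ADES optimizer).
    state = beta * state + (1.0 - beta) * grads ** 2   # per-weight statistic
    step = base_lr / (np.sqrt(state) + eps) * grads    # per-weight step
    return weights - step, state

# Toy demo on the quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w.
w_dyn = np.array([1.0, -2.0, 3.0])
w_ada = w_dyn.copy()
state = np.zeros_like(w_ada)
lr = 0.1
for epoch in range(30):
    lr = dynamic_lr_policy(epoch, lr)                   # one global rate per epoch
    w_dyn = w_dyn - lr * w_dyn                          # plain gradient step with that rate
    w_ada, state = adaptive_step(w_ada, w_ada, state)   # per-weight rates

print("dynamic-policy weights:", w_dyn)
print("adaptive-rule weights: ", w_ada)

The dynamic policy adjusts a single global learning rate over time, whereas the adaptive rule produces a different step size for every weight; the latter is the behaviour Adaptive AutoLR searches for.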