Learning Rate Dropout

11/30/2019
by Huangxing Lin, et al.

The performance of a deep neural network is highly dependent on its training, and finding better local optimal solutions is the goal of many optimization algorithms. However, existing optimization algorithms show a preference for descent paths that converge slowly and do not seek to avoid bad local optima. In this work, we propose Learning Rate Dropout (LRD), a simple gradient descent technique for training that is related to coordinate descent. LRD empirically helps the optimizer actively explore the parameter space by randomly setting some learning rates to zero; at each iteration, only the parameters whose learning rate is nonzero are updated. As the learning rates of different parameters are dropped, the optimizer samples a new loss descent path for the current update. The uncertainty of the descent path helps the model avoid saddle points and bad local minima. Experiments show that LRD is surprisingly effective in accelerating training while preventing overfitting.
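To make the update rule concrete, here is a minimal sketch of the idea as described in the abstract, applied to plain SGD in PyTorch: a random mask zeroes out the learning rate for some parameter entries, so only the surviving entries are updated in that iteration. The function name, the keep probability of 0.5, and the choice of element-wise (rather than per-tensor) masking are assumptions for illustration, not details taken from the paper.

```python
import torch

def sgd_step_with_lr_dropout(params, lr=0.1, keep_prob=0.5):
    """One plain-SGD step in which each parameter entry's learning rate
    is dropped (set to zero) with probability 1 - keep_prob, so only
    entries whose learning rate survives are updated this iteration."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            # Bernoulli mask: 1 keeps the learning rate, 0 drops it.
            mask = torch.bernoulli(torch.full_like(p, keep_prob))
            p -= (lr * mask) * p.grad

# Usage sketch: compute gradients as usual, then apply the masked step.
model = torch.nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
sgd_step_with_lr_dropout(model.parameters(), lr=0.1, keep_prob=0.5)
```

Because a fresh mask is drawn every iteration, the direction of each update is randomized over subsets of coordinates, which is the source of the "descent path uncertainty" the abstract credits with escaping saddle points and bad local minima.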
