Forget the Learning Rate, Decay Loss

04/27/2019
by Jiakai Wei, et al.

In standard deep neural network optimization, the learning rate is the most important hyperparameter and strongly affects the final convergence. Its purpose is to control the step size and to gradually reduce the impact of gradient noise on the network. In this paper, we instead use a fixed learning rate together with a decaying loss to control the magnitude of each update. We verify this method on image classification, semantic segmentation, and GANs. Experiments show that the loss-decay strategy can greatly improve the performance of the model.
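To make the idea concrete, here is a minimal PyTorch sketch of loss decay under plain SGD. The model, data, and the linear decay schedule are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy model and data for illustration; shapes and names are assumptions.
data = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32, shuffle=True)
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # fixed learning rate

def loss_decay(epoch, total_epochs, floor=0.01):
    # Assumed schedule: linear decay from 1.0 down to `floor`.
    # The paper's actual decay schedule may differ.
    return max(floor, 1.0 - epoch / total_epochs)

total_epochs = 30
for epoch in range(total_epochs):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        # Scale the loss (and hence every gradient) by the decaying
        # coefficient instead of shrinking the learning rate.
        (loss_decay(epoch, total_epochs) * loss).backward()
        optimizer.step()
```

For vanilla SGD, scaling the loss by a factor is mathematically equivalent to scaling the learning rate by the same factor, since gradients scale linearly with the loss; the two strategies diverge once momentum or adaptive optimizers such as Adam enter the picture.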

Related research

- k-decay: A New Method For Learning Rate Schedule (04/13/2020). It is well known that the learning rate is the most important hyper-para...
- How to decay your learning rate (03/23/2021). Complex learning rate schedules have become an integral part of deep lea...
- Event-Based Control for Online Training of Neural Networks (03/20/2020). Convolutional Neural Network (CNN) has become the most used method for i...
- Feedback Control for Online Training of Neural Networks (11/18/2019). Convolutional neural networks (CNNs) are commonly used for image classif...
- Finite-Time Performance Bounds and Adaptive Learning Rate Selection for Two Time-Scale Reinforcement Learning (07/14/2019). We study two time-scale linear stochastic approximation algorithms, whic...
- Dynamic learning rate using Mutual Information (05/18/2018). This paper demonstrates dynamic hyper-parameter setting, for deep neural...
- We Don't Need No Adam, All We Need Is EVE: On The Variance of Dual Learning Rate And Beyond (08/21/2023). In the rapidly advancing field of deep learning, optimising deep neural ...
