Adam with Bandit Sampling for Deep Learning

10/24/2020
by Rui Liu, et al.

Adam is a widely used optimization method for training deep learning models; it computes individual adaptive learning rates for different parameters. In this paper, we propose a generalization of Adam, called Adambs, that additionally adapts to different training examples based on their importance to the model's convergence. To achieve this, we maintain a distribution over all examples and select each mini-batch by sampling from this distribution, which we update with a multi-armed bandit algorithm. This ensures that examples more beneficial to training are sampled with higher probability. We show theoretically that Adambs improves on Adam's convergence rate, achieving O(√(log n / T)) rather than O(√(n / T)) in some cases. Experiments on various models and datasets demonstrate Adambs's fast convergence in practice.
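To make the sampling idea concrete, here is a minimal sketch of a bandit-style example sampler: it keeps a distribution over the n training examples, draws mini-batches from it, and boosts the probability of examples that return large feedback (e.g. a per-example gradient norm). The class name, hyperparameters, and the EXP3-style importance-weighted update are illustrative assumptions, not the authors' exact Adambs algorithm.

```python
import numpy as np

class BanditSampler:
    """Illustrative EXP3-style sampler over training examples (a sketch,
    not the paper's exact Adambs update)."""

    def __init__(self, n, eta=0.1, eps=0.05, seed=0):
        self.n = n
        self.eta = eta             # step size for the weight update
        self.eps = eps             # uniform mixing, keeps every example explorable
        self.log_w = np.zeros(n)   # log-weights, for numerical stability
        self.rng = np.random.default_rng(seed)

    def probs(self):
        # Softmax of the log-weights, mixed with the uniform distribution.
        w = np.exp(self.log_w - self.log_w.max())
        p = w / w.sum()
        return (1.0 - self.eps) * p + self.eps / self.n

    def sample(self, batch_size):
        # Draw a mini-batch of distinct example indices from the distribution.
        return self.rng.choice(self.n, size=batch_size, replace=False, p=self.probs())

    def update(self, idx, feedback):
        # Importance-weighted reward, as in EXP3: examples whose feedback
        # (e.g. gradient norm) is large gain probability mass.
        p = self.probs()
        self.log_w[idx] += self.eta * feedback / (p[idx] * self.n)


# Usage with synthetic feedback: pretend example 3 always yields the
# largest per-example gradient norm.
sampler = BanditSampler(n=10)
for _ in range(200):
    batch = sampler.sample(4)
    feedback = np.where(batch == 3, 1.0, 0.1)   # synthetic reward signal
    sampler.update(batch, feedback)
# After training, example 3 should carry the most probability mass.
```

In a real training loop the feedback would come from the optimizer (for instance, the norm of each example's gradient), and the sampled batch would be fed to an Adam step; the uniform-mixing term `eps` prevents any example's probability from collapsing to zero, which keeps the importance weights bounded.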


Related research

10/01/2021 · Asymptotic Performance of Thompson Sampling in the Batched Multi-Armed Bandits
We study the asymptotic performance of the Thompson sampling algorithm i...

02/06/2016 · Improved Dropout for Shallow and Deep Learning
Dropout has been witnessed with great success in training deep neural ne...

05/28/2021 · AutoSampling: Search for Effective Data Sampling Schedules
Data sampling acts as a pivotal role in training deep learning models. H...

01/28/2020 · Faster Activity and Data Detection in Massive Random Access: A Multi-armed Bandit Approach
This paper investigates the grant-free random access with massive IoT de...

08/08/2017 · Stochastic Optimization with Bandit Sampling
Many stochastic optimization algorithms work by estimating the gradient ...

04/22/2020 · Adaptive Operator Selection Based on Dynamic Thompson Sampling for MOEA/D
In evolutionary computation, different reproduction operators have vario...

06/25/2023 · Joint Learning of Network Topology and Opinion Dynamics Based on Bandit Algorithms
We study joint learning of network topology and a mixed opinion dynamics...
