Adapting Behaviour for Learning Progress

12/14/2019
by Tom Schaul, et al.

Determining what experience to generate to best facilitate learning (i.e. exploration) is one of the distinguishing features and open challenges in reinforcement learning. The advent of distributed agents that interact with parallel instances of the environment has enabled larger scales and greater flexibility, but has not removed the need to tune exploration to the task, because the ideal data for the learning algorithm necessarily depends on its process of learning. We propose to dynamically adapt the data generation by using a non-stationary multi-armed bandit to optimize a proxy of the learning progress. The data distribution is controlled by modulating multiple parameters of the policy (such as stochasticity, consistency or optimism) without significant overhead. The adaptation speed of the bandit can be increased by exploiting the factored modulation structure. We demonstrate on a suite of Atari 2600 games how this unified approach produces results comparable to per-task tuning at a fraction of the cost.
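As a rough illustration of the loop the abstract describes, the sketch below pairs a non-stationary multi-armed bandit with a factored space of behaviour modulations. The specific factors and their values, the constant step-size value estimates with epsilon-greedy arm selection, and the use of episodic-return improvement as the learning-progress proxy are all illustrative assumptions, not the paper's exact choices.

# Minimal sketch: a non-stationary bandit adapting factored behaviour modulations.
# Assumptions (not from the paper): the concrete factors and values, the
# epsilon-greedy rule with constant step size, and return improvement as the
# proxy for learning progress.

import itertools
import random


class NonStationaryBandit:
    """Value estimate per arm with a constant step size, so recent feedback
    dominates and the bandit can track a drifting reward signal."""

    def __init__(self, arms, step_size=0.1, epsilon=0.1):
        self.arms = list(arms)
        self.step_size = step_size
        self.epsilon = epsilon
        self.values = {arm: 0.0 for arm in self.arms}

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Constant step size (rather than 1/N) keeps the estimate non-stationary.
        self.values[arm] += self.step_size * (reward - self.values[arm])


# Factored modulation space: each arm is one combination of per-factor settings.
MODULATIONS = {
    "stochasticity": [0.01, 0.1, 0.4],  # e.g. epsilon of the behaviour policy
    "consistency":   [0.0, 0.5, 1.0],   # e.g. per-episode vs per-step resampling
    "optimism":      [0.0, 0.3, 1.0],   # e.g. exploration-bonus scale
}
arms = list(itertools.product(*MODULATIONS.values()))
bandit = NonStationaryBandit(arms)

prev_return = 0.0
for episode in range(1000):
    arm = bandit.select()
    modulation = dict(zip(MODULATIONS.keys(), arm))

    # Placeholder for running one episode with the modulated behaviour policy;
    # a fake return keeps the sketch runnable on its own.
    episode_return = sum(modulation.values()) + random.gauss(0.0, 0.1)

    # Learning-progress proxy (an assumption): improvement in episodic return.
    progress_proxy = episode_return - prev_return
    prev_return = episode_return
    bandit.update(arm, progress_proxy)

One way to exploit the factored modulation structure mentioned in the abstract would be to maintain independent value estimates per factor rather than per arm combination, which shrinks the number of quantities the bandit has to learn and so speeds up adaptation.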
