
The Role of Exploration for Task Transfer in Reinforcement Learning

by   Jonathan C Balloch, et al.
Georgia Institute of Technology

The exploration–exploitation trade-off in reinforcement learning (RL) is a well-known and much-studied problem that balances greedy action selection with novel experience, yet exploration methods are usually studied only in the context of learning the optimal policy for a single task. However, in the context of online task transfer, where the task changes during online operation, we hypothesize that exploration strategies that anticipate the need to adapt to future tasks can have a pronounced impact on the efficiency of transfer. As such, we re-examine the exploration–exploitation trade-off in the context of transfer learning. In this work, we review reinforcement learning exploration methods, define a taxonomy with which to organize them, analyze these methods' differences in the context of task transfer, and suggest avenues for future investigation.
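To ground the trade-off the abstract refers to: the simplest and most common exploration strategy is epsilon-greedy action selection, which takes a random (exploratory) action with probability epsilon and the greedy action otherwise. The sketch below is illustrative only; the function name and signature are not from the paper.

```python
import random


def epsilon_greedy_action(q_values, epsilon, rng=random):
    """Select an action index from estimated action values.

    With probability `epsilon`, explore: pick a uniformly random action.
    Otherwise, exploit: pick the action with the highest estimated value.
    """
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])


# Example: with epsilon = 0 the choice is always greedy.
q = [0.1, 0.9, 0.4]
greedy = epsilon_greedy_action(q, epsilon=0.0)
```

With `epsilon = 0.0` the agent always exploits (here, action 1); raising epsilon trades exploitation for the novel experience the abstract describes.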
