Separation of Concerns in Reinforcement Learning

12/15/2016
by Harm van Seijen, et al.

In this paper, we propose a framework for solving a single-agent task by using multiple agents, each focusing on different aspects of the task. This approach has two main advantages: 1) it allows for training specialized agents on different parts of the task, and 2) it provides a new way to transfer knowledge, by transferring trained agents. Our framework generalizes the traditional hierarchical decomposition, in which, at any moment in time, a single agent has control until it has solved its particular subtask. We illustrate our framework with empirical experiments on two domains.
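The paper itself is only abstracted here, so purely as an illustration of the general idea, the following is a minimal sketch (the toy corridor task, the agent names, and the aggregation rule are our own assumptions, not the paper's method): several tabular Q-learning agents are each trained on their own reward signal reflecting one concern of the task, and their value estimates are summed to produce a single joint behaviour.

```python
import random

class Agent:
    """Tabular Q-learning agent trained on its own reward signal."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s2, done):
        target = r + (0.0 if done else self.gamma * max(self.q[s2]))
        self.q[s][a] += self.alpha * (target - self.q[s][a])

def joint_action(agents, state):
    """Combine the agents: sum their Q-values and act greedily on the total."""
    n_actions = len(agents[0].q[state])
    scores = [sum(ag.q[state][a] for ag in agents) for a in range(n_actions)]
    return max(range(n_actions), key=scores.__getitem__)

# Toy corridor: states 0..4, actions 0 = left, 1 = right, goal at state 4.
# The task reward is split into two concerns, one per agent.
def step(s, a):
    s2 = max(0, min(4, s + (1 if a == 1 else -1)))
    done = s2 == 4
    reach_r = 1.0 if done else 0.0   # concern 1: reach the goal
    effort_r = -0.01                 # concern 2: minimise the number of steps
    return s2, reach_r, effort_r, done

random.seed(0)
agents = [Agent(5, 2), Agent(5, 2)]   # one agent per concern
for _ in range(500):                  # off-policy training under a
    s = 0                             # uniformly random behaviour policy
    for _ in range(20):
        a = random.randrange(2)
        s2, r1, r2, done = step(s, a)
        agents[0].update(s, a, r1, s2, done)
        agents[1].update(s, a, r2, s2, done)
        if done:
            break
        s = s2

# The combined greedy policy heads right toward the goal from every state.
policy = [joint_action(agents, s) for s in range(4)]
print(policy)
```

Summing Q-values is just one simple way to aggregate the agents; the paper's framework is more general and, in its hierarchical special case, gives a single agent full control until its subtask is solved.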


Related research

- Task Relabelling for Multi-task Transfer using Successor Features (05/20/2022): Deep Reinforcement Learning has been very successful recently with vario...
- Multi-agent navigation based on deep reinforcement learning and traditional pathfinding algorithm (12/05/2020): We develop a new framework for multi-agent collision avoidance problem. ...
- Hierarchical Reinforcement Learning with Deep Nested Agents (05/18/2018): Deep hierarchical reinforcement learning has gained a lot of attention i...
- Model Primitive Hierarchical Lifelong Reinforcement Learning (03/04/2019): Learning interpretable and transferable subpolicies and performing task ...
- Student/Teacher Advising through Reward Augmentation (02/07/2020): Transfer learning is an important new subfield of multiagent reinforceme...
- Co-design of Embodied Neural Intelligence via Constrained Evolution (05/21/2022): We introduce a novel co-design method for autonomous moving agents' shap...
- Sharing Lifelong Reinforcement Learning Knowledge via Modulating Masks (05/18/2023): Lifelong learning agents aim to learn multiple tasks sequentially over a...
