Identifying Reusable Macros for Efficient Exploration via Policy Compression

11/24/2017
by Francisco M. Garcia, et al.

Reinforcement learning agents often need to solve not a single task but several tasks from the same domain; in particular, each task corresponds to an MDP drawn from a family of related MDPs (a domain). An agent learning in this setting should be able to exploit policies it has learned in the past, on a given set of sample tasks, in order to more rapidly acquire policies for novel tasks. Consider, for instance, a navigation problem in which an agent must learn to navigate different (but related) mazes. Even though these correspond to distinct tasks (the goal and starting locations of the agent may change, as may the maze configuration itself), their solutions share common properties; e.g., in order to reach distant areas of the maze, an agent should not move in circles. After an agent has learned to solve a few sample tasks, it may be possible to leverage the acquired experience to facilitate solving novel tasks from the same domain. Our work is motivated by the observation that trajectory samples from optimal policies for tasks in a common domain often reveal useful underlying patterns for solving novel tasks. We propose an optimization objective that characterizes the problem of learning reusable temporally extended actions (macros). We introduce a computationally tractable surrogate objective that is equivalent to finding macros that allow for maximal compression of a given set of sampled trajectories. We develop a compression-based approach for obtaining such macros and propose an exploration strategy that takes advantage of them. We show that meaningful behavioral patterns can be identified from sample policies over both discrete and continuous action spaces, and we present evidence that the proposed exploration strategy improves learning time on novel tasks.
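To make the compression view concrete, here is a minimal sketch of how macros might be extracted from sampled trajectories by greedy subsequence substitution, in the spirit of byte-pair encoding. It assumes discrete actions; the function names (`identify_macros`, `substitute`), the savings heuristic, and the greedy loop are illustrative assumptions, not the paper's exact objective or algorithm.

```python
from collections import Counter

def substitute(seq, macro):
    """Replace every non-overlapping occurrence of `macro` in `seq`
    with a single macro symbol (the tuple itself)."""
    out, i, n = [], 0, len(macro)
    while i < len(seq):
        if tuple(seq[i:i + n]) == macro:
            out.append(macro)  # one symbol now stands for n primitive actions
            i += n
        else:
            out.append(seq[i])
            i += 1
    return out

def identify_macros(trajectories, num_macros=5, max_len=4):
    """Greedily pick the action subsequences whose substitution most
    shortens (i.e., best compresses) the sampled trajectories.
    Illustrative stand-in for a compression objective, not the
    authors' method."""
    trajs = [list(t) for t in trajectories]
    macros = []
    for _ in range(num_macros):
        counts = Counter()
        for t in trajs:
            for n in range(2, max_len + 1):
                for i in range(len(t) - n + 1):
                    counts[tuple(t[i:i + n])] += 1
        if not counts:
            break
        # Replacing a length-n subsequence occurring c times saves about
        # (n - 1) * c symbols; overlapping occurrences make this an
        # optimistic estimate, which is acceptable for a greedy sketch.
        macro = max(counts, key=lambda m: (len(m) - 1) * counts[m])
        macros.append(macro)
        # Substitute the chosen macro so later picks compress the residual.
        trajs = [substitute(t, macro) for t in trajs]
    return macros

# Example: repeated "up, up, right, right" segments in maze trajectories
# should surface as a macro.
sample = [tuple("UURRUURR"), tuple("RRUURRDD"), tuple("UURRUU")]
print(identify_macros(sample, num_macros=2))
```

Each pass picks the subsequence with the largest estimated compression savings and rewrites the trajectories with it, so later passes can compose macros out of earlier ones, which mirrors the intuition that macros useful for compression capture recurring behavioral patterns.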
