Principled Option Learning in Markov Decision Processes

09/18/2016
by   Roy Fox, et al.

It is well known that options can make planning more efficient, among their many benefits. Thus far, however, algorithms for autonomously discovering a set of useful options have been heuristic. A principled way of finding a set of useful options may be more promising and insightful. In this paper we suggest a mathematical characterization of good sets of options using tools from information theory. This characterization enables us to find conditions for a set of options to be optimal, and to derive an algorithm that outputs a useful set of options; we illustrate the proposed algorithm in simulation.
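For context, an option in the standard framework is a temporally extended action defined by an initiation set, an intra-option policy, and a termination condition. The sketch below illustrates that abstraction on a toy chain MDP; it is a generic illustration of the options formalism, not the paper's information-theoretic discovery algorithm, and all names (`Option`, `run_option`, the chain dynamics) are illustrative assumptions.

```python
# A minimal sketch of the options abstraction, not the paper's method.
from dataclasses import dataclass
from typing import Dict, Set, List

@dataclass
class Option:
    """A temporally extended action: initiation set, policy, termination set."""
    init_set: Set[int]
    policy: Dict[int, int]   # state -> primitive action (-1 left, +1 right)
    term_set: Set[int]       # states where the option terminates

def step(state: int, action: int, n: int = 6) -> int:
    """Deterministic 1-D chain with n states; moves are clipped at the ends."""
    return min(max(state + action, 0), n - 1)

def run_option(state: int, option: Option) -> List[int]:
    """Execute an option until termination; return the visited states."""
    assert state in option.init_set, "option not available in this state"
    trajectory = [state]
    while state not in option.term_set:
        state = step(state, option.policy[state])
        trajectory.append(state)
    return trajectory

# An option that walks to the right end of the chain in one decision step
go_right = Option(
    init_set={0, 1, 2, 3, 4},
    policy={s: +1 for s in range(5)},
    term_set={5},
)
print(run_option(0, go_right))  # [0, 1, 2, 3, 4, 5]
```

A planner that can invoke `go_right` as a single action covers the same trajectory in one decision instead of five, which is the efficiency gain the abstract refers to; the paper's contribution is characterizing *which* such options are worth having.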

