When Waiting is not an Option: Learning Options with a Deliberation Cost

09/14/2017
by Jean Harb et al.

Recent work has shown that temporally extended actions (options) can be learned fully end-to-end as opposed to being specified in advance. While the problem of "how" to learn options is increasingly well understood, the question of "what" good options should be has remained elusive. We formulate our answer to what "good" options should be in the bounded rationality framework (Simon, 1957) through the notion of deliberation cost. We then derive practical gradient-based learning algorithms to implement this objective. Our results in the Arcade Learning Environment (ALE) show increased performance and interpretability.
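The core idea can be illustrated with a small sketch of how a deliberation cost interacts with learning option terminations. In option-critic-style methods, the gradient for the termination probability of an option is driven by the advantage of that option over the best available alternative; adding a deliberation cost acts as a margin on that advantage, making termination look less attractive and yielding longer, more persistent options. The code below is a minimal illustrative sketch under these assumptions, not the paper's algorithm: the `Q` table, the value of `eta`, and the `termination_step` helper are all hypothetical.

```python
import numpy as np

eta = 0.4  # deliberation cost margin (hypothetical value)

# Hypothetical option values Q(s, omega): one state, two options.
Q = np.array([[1.0, 0.7]])
# Logits parameterizing termination probabilities beta(s, omega).
term_logits = np.zeros_like(Q)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def termination_step(s, omega, lr=0.5):
    """One ascent step on expected return w.r.t. a termination logit.

    The advantage A(s, omega) = Q(s, omega) - max_w Q(s, w) says how
    much worse it is to keep running this option than to switch. The
    deliberation cost enters as a margin eta added to the advantage:
    with eta > 0, terminating looks worse, so beta is pushed down and
    options become longer.
    """
    advantage = Q[s, omega] - Q[s].max() + eta
    beta = sigmoid(term_logits[s, omega])
    # Descent on beta * advantage w.r.t. the logit (chain rule through
    # the sigmoid gives the beta * (1 - beta) factor).
    term_logits[s, omega] -= lr * advantage * beta * (1.0 - beta)

before = sigmoid(term_logits[0, 1])   # beta starts at 0.5
termination_step(0, 1)
after = sigmoid(term_logits[0, 1])
# Effective advantage = 0.7 - 1.0 + 0.4 = 0.1 > 0, so beta decreases:
# the option is encouraged to persist rather than terminate.
```

With `eta = 0` the raw advantage here would be negative and the update would instead raise the termination probability; the margin is what tips the balance toward longer options.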

Related research

- 11/30/2017, Learning Options End-to-End for Continuous Action Tasks: We present new results on learning temporally extended actions for conti...
- 12/22/2022, Reusable Options through Gradient-based Meta Learning: Hierarchical methods in reinforcement learning have the potential to red...
- 01/10/2013, Decision-Theoretic Planning with Concurrent Temporally Extended Actions: We investigate a model for planning under uncertainty with temporally ext...
- 12/03/2016, A Matrix Splitting Perspective on Planning with Options: We show that the Bellman operator underlying the options framework leads...
- 07/21/2018, Safe Option-Critic: Learning Safety in the Option-Critic Architecture: Designing hierarchical reinforcement learning algorithms that induce a n...
- 02/10/2023, Mitigating Decentralized Finance Liquidations with Reversible Call Options: Liquidations in Decentralized Finance (DeFi) are both a blessing and a c...
- 09/07/2016, Feasibility of Post-Editing Speech Transcriptions with a Mismatched Crowd: Manual correction of speech transcription can involve a selection from p...
