Natural Option Critic

12/04/2018
by Saket Tiwari, et al.

The recently proposed option-critic architecture of Bacon et al. provides a stochastic policy gradient approach to hierarchical reinforcement learning. Specifically, it provides a way to estimate the gradient of the expected discounted return with respect to parameters that define a finite number of temporally extended actions, called options. In this paper we show how the option-critic architecture can be extended to estimate the natural gradient of the expected discounted return. To this end, the central questions that we consider are: 1) what is the definition of the natural gradient in this context, 2) what is the Fisher information matrix associated with an option's parameterized policy, 3) what is the Fisher information matrix associated with an option's parameterized termination function, and 4) how can a compatible function approximation approach be leveraged to obtain natural gradient estimates for both the parameterized policy and the parameterized termination function of an option, with per-time-step time and space complexity linear in the total number of parameters. Based on the answers to these questions, we introduce the natural option-critic algorithm. Experimental results demonstrate improvement over the vanilla gradient approach.
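The key idea behind question 4 is a classical result on compatible function approximation (Kakade, 2002): when the advantage is approximated by a linear function of the policy's score features, the fitted weight vector is itself the natural gradient, so the Fisher matrix never needs to be formed or inverted. Below is a minimal sketch of that mechanism for a plain (non-hierarchical) softmax policy, not the paper's option-critic algorithm; the synthetic advantage targets and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_policy_grad(theta, phi):
    """Return action probabilities and score features
    psi(s, a) = grad_theta log pi(a | s) for a softmax policy
    pi(a | s) proportional to exp(theta[a] . phi(s))."""
    n_actions, d = theta.shape
    prefs = theta @ phi
    probs = np.exp(prefs - prefs.max())
    probs /= probs.sum()
    # grad wrt theta[b] is (1[a == b] - pi(b | s)) * phi(s)
    psi = np.zeros((n_actions, n_actions * d))
    for a in range(n_actions):
        g = -np.outer(probs, phi)   # expected-feature (baseline) term
        g[a] += phi                 # indicator term for the taken action
        psi[a] = g.ravel()
    return probs, psi

# Synthetic batch of states, sampled actions, and advantage targets.
n_actions, d, n_samples = 3, 4, 500
theta = rng.normal(size=(n_actions, d))
Psi, Adv = [], []
for _ in range(n_samples):
    phi = rng.normal(size=d)
    probs, psi = log_policy_grad(theta, phi)
    a = rng.choice(n_actions, p=probs)
    Psi.append(psi[a])
    Adv.append(rng.normal())        # stand-in for estimated advantages

Psi, Adv = np.array(Psi), np.array(Adv)

# Compatible critic: fit Adv ~ w . psi by least squares. The fitted w
# equals F^{-1} grad J (the natural gradient), so the update below
# costs only linear time and space in the number of parameters.
w, *_ = np.linalg.lstsq(Psi, Adv, rcond=None)
natural_grad = w.reshape(n_actions, d)
theta = theta + 0.1 * natural_grad  # natural gradient ascent step
```

The same construction is what the paper extends to options: each intra-option policy and each termination function gets its own score features, and fitting a compatible critic on those features yields the corresponding natural gradient.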


