Hierarchical Actor-Critic

12/04/2017
by Andrew Levy, et al.

We present a novel approach to hierarchical reinforcement learning called Hierarchical Actor-Critic (HAC). HAC aims to make learning tasks with sparse binary rewards more efficient by enabling agents to learn how to break down tasks from scratch. The technique uses a set of actor-critic networks that learn to decompose tasks into a hierarchy of subgoals. We demonstrate that HAC significantly improves sample efficiency on a series of tasks that involve sparse binary rewards and require behavior over a long time horizon.
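As a rough illustration of the control flow the abstract describes, here is a minimal two-level sketch in Python. The ToyEnv environment, the proportional LevelPolicy controllers, the horizon H, and all hyperparameters are illustrative stand-ins, not the paper's learned actor-critic networks or its hindsight-transition machinery; the sketch only shows how a high level can propose subgoals that a low level pursues under a sparse binary reward.

```python
import numpy as np

class ToyEnv:
    """Illustrative 1-D environment: state is a position, action is a bounded step."""
    def __init__(self, goal=10.0):
        self.goal = goal
        self.state = 0.0

    def reset(self):
        self.state = 0.0
        return self.state

    def step(self, action):
        self.state += np.clip(action, -1.0, 1.0)
        return self.state

def sparse_reward(state, goal, tol=0.5):
    """Sparse binary reward: 0 when the goal is reached, -1 otherwise."""
    return 0.0 if abs(state - goal) < tol else -1.0

class LevelPolicy:
    """Placeholder actor for one level of the hierarchy.

    In HAC each level would pair an actor with a critic trained by
    off-policy RL; here the 'policy' is a crude proportional
    controller so that the sketch runs end to end."""
    def act(self, state, goal):
        return np.clip(goal - state, -1.0, 1.0)

def run_episode(env, high, low, H=5, max_subgoals=10):
    """Two-level rollout: the high level proposes a subgoal, and the low
    level gets at most H primitive actions to reach it."""
    state = env.reset()
    for _ in range(max_subgoals):
        # High level proposes a nearby subgoal in the direction of the end goal.
        subgoal = state + high.act(state, env.goal) * H
        for _ in range(H):
            action = low.act(state, subgoal)
            state = env.step(action)
            if sparse_reward(state, subgoal) == 0.0:
                break  # subgoal reached; hand control back to the high level
        if sparse_reward(state, env.goal) == 0.0:
            return True
    return False

env, high, low = ToyEnv(), LevelPolicy(), LevelPolicy()
print("reached goal:", run_episode(env, high, low))
```

In HAC proper, each level's act would be a learned actor paired with a critic, and hindsight-style relabeled transitions would make the sparse binary rewards usable for training; this sketch deliberately omits all learning.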

Related research

- TD-Regularized Actor-Critic Methods (12/19/2018)
  Actor-critic methods can achieve incredible performance on difficult rei...

- HAC Explore: Accelerating Exploration with Hierarchical Reinforcement Learning (08/12/2021)
  Sparse rewards and long time horizons remain challenging for reinforceme...

- Curious Hierarchical Actor-Critic Reinforcement Learning (05/07/2020)
  Hierarchical abstraction and curiosity-driven exploration are two common...

- A Supervised Goal Directed Algorithm in Economical Choice Behaviour: An Actor-Critic Approach (12/20/2013)
  This paper aims to find an algorithmic structure that affords to predict...

- Actor Critic with Differentially Private Critic (10/14/2019)
  Reinforcement learning algorithms are known to be sample inefficient, an...

- Effects of sparse rewards of different magnitudes in the speed of learning of model-based actor critic methods (01/18/2020)
  Actor critic methods with sparse rewards in model-based deep reinforceme...

- Hierarchical Soft Actor-Critic: Adversarial Exploration via Mutual Information Optimization (06/17/2019)
  We describe a novel extension of soft actor-critics for hierarchical Dee...
