Hierarchical Actor-Critic

12/04/2017
by Andrew Levy, et al.

We present a novel approach to hierarchical reinforcement learning called Hierarchical Actor-Critic (HAC). HAC aims to make learning tasks with sparse binary rewards more efficient by enabling agents to learn how to break down tasks from scratch. The technique uses a set of actor-critic networks that learn to decompose a task into a hierarchy of subgoals. We demonstrate that HAC significantly improves sample efficiency on a series of tasks that involve sparse binary rewards and require behavior over a long time horizon.
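To make the subgoal decomposition concrete, below is a minimal structural sketch of how a stack of actor-critic levels can produce nested subgoals: each level's actor proposes a subgoal (a desired state) for the level beneath it, and the bottom level emits a primitive action. The names (HACAgentSketch, HACLevelSketch) and the untrained linear placeholder policies are illustrative assumptions for this sketch, not the paper's learned networks or training procedure.

```python
# Minimal structural sketch of a HAC-style hierarchy (assumed layout,
# not the authors' implementation). Upper levels propose subgoals in
# state space; level 0 outputs primitive actions.
import numpy as np


class HACLevelSketch:
    """One level of the hierarchy: an actor pi(state, goal) -> output."""

    def __init__(self, state_dim, goal_dim, out_dim, rng):
        # Placeholder linear weights standing in for a trained actor network.
        self.actor_w = rng.normal(size=(state_dim + goal_dim, out_dim))

    def actor(self, state, goal):
        return np.concatenate([state, goal]) @ self.actor_w


class HACAgentSketch:
    """Stack of levels; level i proposes subgoals for level i - 1."""

    def __init__(self, num_levels, state_dim, action_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Upper levels output subgoals (same dimension as the state);
        # the lowest level outputs a primitive action.
        self.levels = [
            HACLevelSketch(state_dim, state_dim,
                           action_dim if i == 0 else state_dim, rng)
            for i in range(num_levels)
        ]

    def act(self, state, end_goal):
        # Descend the hierarchy: each level refines the goal for the
        # level below. In the full algorithm each subgoal would persist
        # while the lower level attempts it; this sketch shows one descent.
        goal = end_goal
        for level in reversed(self.levels[1:]):
            goal = level.actor(state, goal)        # proposed subgoal
        return self.levels[0].actor(state, goal)   # primitive action


if __name__ == "__main__":
    agent = HACAgentSketch(num_levels=3, state_dim=4, action_dim=2)
    state, end_goal = np.zeros(4), np.ones(4)
    print(agent.act(state, end_goal))  # -> a 2-dim primitive action
```

In this layout, adding levels only changes how many times the end goal is refined before a primitive action is chosen, which is what lets the agent plan over long horizons while each level learns over a short one.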
