
DAC: The Double Actor-Critic Architecture for Learning Options

by Shangtong Zhang et al.
University of Oxford

We reformulate the option framework as two parallel augmented MDPs. Under this novel formulation, all policy optimization algorithms can be used off the shelf to learn intra-option policies, option termination conditions, and a master policy over options. We apply an actor-critic algorithm on each augmented MDP, yielding the Double Actor-Critic (DAC) architecture. Furthermore, we show that, when state-value functions are used as critics, one critic can be expressed in terms of the other, and hence only one critic is necessary. Our experiments on challenging robot simulation tasks demonstrate that DAC outperforms previous gradient-based option learning algorithms by a large margin and significantly outperforms its hierarchy-free counterparts in a transfer learning setting.
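The abstract's decomposition into two parallel augmented MDPs can be illustrated with a minimal sketch. This is an illustrative toy, not the authors' implementation: the environment, the tabular uniform policies, and all function names (`master_policy`, `intra_option_policy`, `termination`, `dac_step`) are assumptions made for the example. The "high" MDP treats the pair (state, previous option) as its state and the option as its action; the "low" MDP treats (state, option) as its state and the primitive action as its action, so a standard actor-critic could in principle run on each.

```python
import random

random.seed(0)

N_STATES, N_OPTIONS, N_ACTIONS = 4, 2, 3

def master_policy(s):
    # pi(o | s): master policy over options (uniform for this sketch)
    return random.randrange(N_OPTIONS)

def intra_option_policy(s, o):
    # pi_o(a | s): intra-option policy (uniform for this sketch)
    return random.randrange(N_ACTIONS)

def termination(s, o):
    # beta_o(s): probability that option o terminates in state s
    return random.random() < 0.5

def env_step(s, a):
    # Toy deterministic environment with reward 1 in state 0
    return (s + a) % N_STATES, float(s == 0)

def dac_step(s, o_prev):
    """One environment transition, viewed as a step in both augmented MDPs."""
    # High MDP: from state (s, o_prev), re-decide the option only when the
    # previous one terminates (or at the start of an episode).
    if o_prev is None or termination(s, o_prev):
        o = master_policy(s)
    else:
        o = o_prev
    # Low MDP: from state (s, o), pick a primitive action.
    a = intra_option_policy(s, o)
    s_next, r = env_step(s, a)
    return s_next, o, r

# Roll out a short trajectory through both augmented MDPs.
s, o, total = 0, None, 0.0
for _ in range(10):
    s, o, r = dac_step(s, o)
    total += r
```

Because each augmented MDP is an ordinary MDP, any off-the-shelf policy optimization method could be applied to the high-level and low-level decisions separately, which is the reformulation the abstract describes.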


Related articles:

- Cautious Actor-Critic: The oscillating performance of off-policy learning and persisting errors...
- ACE: An Actor Ensemble Algorithm for Continuous Control with Tree Search: In this paper, we propose an actor ensemble algorithm, named ACE, for co...
- SOAC: The Soft Option Actor-Critic Architecture: The option framework has shown great promise by automatically extracting...
- Provably Convergent Off-Policy Actor-Critic with Function Approximation: We present the first provably convergent off-policy actor-critic algorit...
- Soft Options Critic: The option-critic paper and several variants have successfully demonstra...
- Multitask Soft Option Learning: We present Multitask Soft Option Learning (MSOL), a hierarchical multita...
- Natural Option Critic: The recently proposed option-critic architecture Bacon et al. provide a ...