Learning Compositional Neural Programs for Continuous Control

07/27/2020 ∙ by Thomas Pierrot, et al. ∙ Google, InstaDeep, UPMC

We propose a novel solution to challenging sparse-reward, continuous control problems that require hierarchical planning at multiple levels of abstraction. Our solution, dubbed AlphaNPI-X, involves three separate stages of learning. First, we use off-policy reinforcement learning algorithms with experience replay to learn a set of atomic goal-conditioned policies, which can be easily repurposed for many tasks. Second, we learn self-models describing the effect of the atomic policies on the environment. Third, the self-models are harnessed to learn recursive compositional programs with multiple levels of abstraction. The key insight is that the self-models enable planning by imagination, obviating the need for interaction with the world when learning higher-level compositional programs. To accomplish the third stage of learning, we extend the AlphaNPI algorithm, which applies AlphaZero to learn recursive neural programmer-interpreters. We empirically show that AlphaNPI-X can effectively learn to tackle challenging sparse manipulation tasks, such as stacking multiple blocks, where powerful model-free baselines fail.

1 Introduction

Deep reinforcement learning (RL) has advanced many control domains, including dexterous object manipulation (Akkaya et al., 2019; Andrychowicz et al., 2018; Nagabandi et al., 2019), agile locomotion (Tan et al., 2018) and navigation (Faust et al., 2018; Held et al., 2018). Despite these successes, several key challenges remain. Stuart Russell phrases one of these challenges eloquently in Russell (2019): “Intelligent behavior over long time scales requires the ability to plan and manage activity hierarchically, at multiple levels of abstraction” and “the main missing piece of the puzzle is a method for constructing the hierarchy of abstract actions in the first place”. If achieved, this capability would be “The most important step needed to reach human-level AI” (Russell, 2019). This challenge is particularly daunting when temporally extended tasks are combined with sparse binary rewards. In this case, the agent does not receive any feedback from the environment while having to decide on a complex course of action, and receives a non-zero reward only after having fully solved the task.

Low sample efficiency is another challenge. In the absence of demonstrations, model-free RL agents require many interactions with the environment to converge to a satisfactory policy (Akkaya et al., 2019). We argue in this paper that both challenges can be addressed by learning models and compositional neural programs. In particular, planning with a learned internal model of the world reduces the number of interactions needed with the environment. By imagining likely future scenarios, an agent can avoid making mistakes in the real environment and instead find a sound plan before acting in the environment.

Many real-world tasks are naturally decomposed into hierarchical structures. We hypothesize that learning a variety of skills which can be reused and composed to learn more complex skills is key to tackling long-horizon sparse reward tasks in a sample efficient manner. Such compositionality, formalised by hierarchical RL (HRL), enables agents to explore in a temporally correlated manner, improving sample efficiency by reusing previously trained lower level skills. Unfortunately, prior studies in HRL typically assume that the hierarchy is given, or learn very simple forms of hierarchy in a model-free manner.

We propose a novel method, AlphaNPI-X, to learn programmatic policies which can perform hierarchical planning at multiple levels of abstraction in sparse-reward continuous control problems. We first train low-level atomic policies, represented by a single goal-conditioned neural network, that can be recomposed and re-purposed. We leverage off-policy reinforcement learning with hindsight experience replay (Andrychowicz et al., 2017) to train these efficiently. Next, we learn a transition model over the effects of these atomic policies, to imagine likely future scenarios and remove the need to interact with the real environment. Lastly, we learn recursive compositional programs, which combine low-level atomic policies at multiple levels of hierarchy, by planning over the learnt transition model, alleviating the need to interact with the environment. This is made possible by extending the AlphaNPI algorithm (Pierrot et al., 2019), which applies AlphaZero-style planning (Silver et al., 2017) in a recursive manner to learn recombinable libraries of symbolic programs.

We show that our agent can learn to successfully combine skills hierarchically to solve challenging robotic manipulation tasks through look-ahead planning, even in the absence of any further interactions with the environment and where powerful model-free baselines struggle to get off the ground. Videos of agent behaviour are available at: https://sites.google.com/view/alphanpix

2 Related Work

Our work is motivated by the central concern of constructing a hierarchy of abstract actions to deal with multiple challenging continuous control problems with sparse rewards. In addition, we want these actions to be reused and recomposed across different levels of hierarchy through planning with a focus on sample efficiency. This brings several research areas together, namely multitask learning, hierarchical reinforcement learning (HRL) and model-based reinforcement learning (MBRL).

As argued above, learning continuous control from sparse binary rewards is difficult because it requires the agent to find long sequences of continuous actions from very little feedback. Other works tackling similar block stacking problems (Lanier et al., 2019; Li et al., 2019) have relied on a very precisely tuned curriculum, auxiliary tasks or reward engineering to succeed. In contrast, we show that by relying on planning, even in the absence of interactions with the environment, we can successfully learn from raw reward signals.

Multitask learning and the resulting transfer learning challenge have been extensively studied across a large variety of domains, ranging from supervised problems to goal-reaching in robotics and multi-agent settings (Taylor and Stone, 2009; Zhang and Yang, 2017). A common requirement is to modify an agent's behaviour depending on the currently inferred task. Universal value functions (Schaul et al., 2015) have been identified as a general factorized representation which is amenable to goal-conditioning agents and can be trained effectively (Barreto et al., 2019) when one has access to appropriately varied tasks to train on. Hindsight Experience Replay (HER) (Andrychowicz et al., 2017) helps improve training efficiency by allowing past experience to be relabelled retroactively and has shown great promise in a variety of domains. However, using a single task-conditioning vector has its limitations when addressing long-horizon problems. Other works explore different ways to break long-horizon tasks into sequences of sub-goals (Nair and Finn, 2019; Bharadhwaj et al., 2020), or leverage HER and curiosity signals to tackle similar robotic manipulation tasks (Lanier et al., 2019). In our work, we leverage HER to train low-level goal-conditioned policies, but additionally learn to combine them sequentially using a program-guided meta-controller.

The problem of learning to sequence sub-behaviours together has been well-studied in the context of HRL (Sutton et al., 1999; Bacon et al., 2016) and has seen consistent progress throughout the years (Levy et al., 2017, 2018; Vezhnevets et al., 2017; Nachum et al., 2018). Learning both which sub-behaviours could be useful and how to combine them still remains a hard problem, which is often addressed by pre-learning sub-behaviours using a variety of signals (e.g. rewards (Barreto et al., 2019), demonstrations (Gupta et al., 2019), state-space coverage (Eysenbach et al., 2018; Pong et al., 2019; Lee et al., 2019; Islam et al., 2019), empowerment (Gregor et al., 2016), among many others). Our work assumes we know that some sub-behaviours can be generically useful (like stacking or moving blocks (Riedmiller et al., 2018)). We learn them using goal-conditioned policies as explained above, but we expand how these can be combined by allowing sub-behaviours to call other sub-behaviours, enabling complex hierarchies of behaviours to be discovered. We share this motivation with (Levy et al., 2019) which explores learning policies with multiple levels of hierarchy.

Our work also relates to frameworks which decompose the value function of an MDP into a combination of smaller MDPs, such as MAXQ (Dietterich, 2000) or the factored MDP formulation of Guestrin et al. (2003), and to more recent work which leverages natural language to achieve behaviour compositionality and generalisation (Jiang et al., 2019; Andreas et al., 2017). As our meta-controller uses MCTS with a learned model, this also relates to advances in MBRL (Nagabandi et al., 2018; Nasiriany et al., 2019; Schrittwieser et al., 2019; Sekar et al., 2020). Our work extends this line to the use of hierarchical programmatic skills. This closely connects to the model-based HRL research area, which has seen only limited attention so far (Silver and Ciosek, 2012; Nasiriany et al., 2019; Illanes et al., 2020).

Finally, our work capitalizes on recent advances in Neural Programmer-Interpreters (NPI) (Reed and De Freitas, 2015), in particular AlphaNPI (Pierrot et al., 2019). NPI learns a library of program embeddings that can be recombined using a core recurrent neural network that learns to interpret arbitrary programs. AlphaNPI augments NPI with AlphaZero-style (Silver et al., 2016, 2017) MCTS search and RL, applied to symbolic tasks such as Tower of Hanoi and sorting. We extend AlphaNPI to challenging continuous control domains and learn a transition model over programs instead of using the environment. Our work shares a similar motivation with Xu et al. (2017), where NPI is applied to solve different robotic tasks, but we do not require execution traces or pre-trained low-level policies.

Figure 1: Illustrative example of an execution trace for the Clean_And_Stack program. This trace is not optimal as the program may be realised in fewer moves but it corresponds to one of the solutions found by AlphaNPI-X during training. Atomic program calls are shown in green and non-atomic program calls are shown in blue.

3 Problem Definition

In this paper, we aim to learn libraries of skills to solve a variety of tasks in continuous action domains with sparse rewards. Consider the task shown in Figure 1, where the agent's goal is to take the environment from its initial state, where four blocks are randomly placed, to the desired final state, where blocks are in their corresponding coloured zones and stacked on top of each other. We formalize skills and their combinations as programs. An example programmatic trace for solving this task is shown, where a sequence of programs is called to take the environment from the initial state to the final rewarding state. We specify two distinct types of programs: Atomic programs (shown in green) are low-level goal-conditioned policies which take actions in the environment for a fixed number of steps $T$. Non-atomic programs (shown in blue) are a combination of atomic and/or other non-atomic programs, allowing multiple possible levels of hierarchy in behaviour.


program              arguments           description
Stack                Two block ids       Stack a block on another block.
Move_To_Zone         Block id & colour   Move a block to a colour zone.
Stack_All_Blocks     No arguments        Stack all blocks together in any order.
Stack_All_To_Zone    Colour              Stack blocks of the same colour in a zone.
Move_All_To_Zone     Colour              Move blocks of the same colour to a zone.
Clean_Table          No arguments        Move all blocks to their colour zone.
Clean_And_Stack      No arguments        Stack blocks of the same colour in zones.
Table 1: Programs library for the fetch arm environment. We obtain 20 atomic and 7 non-atomic programs when expanding program arguments over all possible combinations. Please see Supp. Section A for a detailed explanation of all programs and Table 3 for all combinations when expanding program arguments.

We base our experiments on a set of robotic tasks with continuous action space. Due to the lack of any long-horizon hierarchical multi-task benchmarks, we extended the OpenAI Gym Fetch environment (Brockman et al., 2016) with tasks exhibiting such requirements. We consider a target set of tasks represented by a hierarchical library of programs, see Table 1. These tasks involve controlling a 7-DOF robotic arm to manipulate 4 coloured blocks in the environment. Tasks vary from simple block stacking to arranging all blocks into different areas depending on their colour. Initial block positions on the table and arm positions are randomized in all tasks. We consider 20 atomic programs that correspond to operating on one block at a time, as well as 7 non-atomic programs that require interacting with 2 to 4 blocks. We give the full specifications of these tasks in Supp. Table 3.

We consider a continuous action space $\mathcal{A}$, a continuous state space $\mathcal{S}$, an initial state distribution $\rho_0$ and a transition function $\mathcal{T}: \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$. The state vector contains the positions, rotations, linear and angular velocities of the gripper and all blocks. More precisely, a state has the form $s_t = (x^1_t, x^2_t, x^3_t, x^4_t, w_t)$, where $x^i_t$ is the position of block $i$ at time step $t$ and $w_t$ contains additional information about the gripper and velocities.

More formally, we aim to learn a set of programs $\mathcal{P} = \{p_1, \ldots, p_N\}$. A program $p_i$ is defined by its pre-condition $\text{pre}_i$, which assesses whether the program can start, and its post-condition $\text{post}_i$, which here plays the role of the reward function. Each program $p_i$ is associated to an MDP which can start only in states $s$ such that the pre-condition $\text{pre}_i(s)$ is satisfied, and whose reward function outputs 1 when the post-condition $\text{post}_i(s)$ is satisfied and 0 otherwise.

Atomic programs are represented by a goal-conditioned neural network with continuous action space. Non-atomic programs use a modified action space: we replace the original continuous action space $\mathcal{A}$ by a discrete action space $\{p_1, \ldots, p_N\} \cup \{\text{Stop}\}$, where actions call programs and the Stop action enables the current program to terminate and return the execution to its calling program. Atomic programs do not have a Stop action; they terminate after $T$ time steps.
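To make this formalism concrete, the following is a minimal Python sketch of how a program with a pre-condition and a post-condition (the sparse 0/1 reward) could be represented; the state layout, block size, tolerance and all names are illustrative assumptions rather than the paper's actual code.

from dataclasses import dataclass
from typing import Callable, Optional
import numpy as np

State = np.ndarray  # flat environment state vector

# Assumed layout for illustration: block i's (x, y, z) position at state[3*i : 3*i+3].
def block_pos(state: State, i: int) -> np.ndarray:
    return state[3 * i : 3 * i + 3]

@dataclass
class Program:
    """A skill defined by a pre-condition (when it may start) and a
    post-condition, which doubles as the sparse 0/1 reward."""
    name: str
    pre_condition: Callable[[State], bool]
    post_condition: Callable[[State], bool]
    level: int                   # 0 = atomic, >= 1 = non-atomic
    T: Optional[int] = None      # atomic programs run for T environment steps

    @property
    def is_atomic(self) -> bool:
        return self.level == 0

    def reward(self, state: State) -> float:
        return 1.0 if self.post_condition(state) else 0.0

# Illustrative Stack_1_2: block 1 should end up directly above block 2.
BLOCK_SIZE, EPS = 0.05, 0.05   # assumed values

def can_stack_on_2(state: State) -> bool:
    # Pre-condition: no other block already sits (roughly) on top of block 2.
    top = block_pos(state, 2) + np.array([0.0, 0.0, BLOCK_SIZE])
    return all(np.linalg.norm(block_pos(state, b) - top) > EPS for b in (0, 1, 3))

def block_1_on_2(state: State) -> bool:
    # Post-condition: block 1 lies within EPS of the slot above block 2.
    target = block_pos(state, 2) + np.array([0.0, 0.0, BLOCK_SIZE])
    return bool(np.linalg.norm(block_pos(state, 1) - target) < EPS)

stack_1_2 = Program("Stack_1_2", can_stack_on_2, block_1_on_2, level=0, T=50)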

4 AlphaNPI-X


Figure 2: AlphaNPI-X consists of three modules: A goal setter transforms an atomic program index into a goal that is executed by a goal-conditioned policy. A self-behavioural model transforms an atomic program index and an environment state into a prediction of the environment state once the goal-conditioned policy has been rolled for $T$ time steps to execute this program. A meta-controller plans through the behavioural model to execute non-atomic programs.

AlphaNPI-X learns to solve multiple tasks by composing programs at different levels of hierarchy. Given a one-hot encoding of a program and states from the environment, the meta-controller calls either an atomic program, a non-atomic program, or the Stop action. In the next sections, we describe how learning and inference are performed.

4.1 Training AlphaNPI-X

Learning in our system operates in three stages: first we learn the atomic programs, then we learn a transition model over their effects, and finally we train the meta-controller to combine them. We provide a detailed explanation of how these modules are learned below.

Learning Atomic Programs

Our atomic programs consist of two components: a) a goal-setter that, given the atomic program encoding and the current environment state, generates a goal vector representing the desired position of the blocks in the environment and b) a goal-conditioned policy that gets a “goal” as a conditioning vector, produces continuous actions and terminates after T time steps.

To execute an atomic program $p_i$ from an initial state $s_0$ satisfying the pre-condition $\text{pre}_i$, the goal-setter $G$ computes the goal $g = G(s_0, i)$. We then roll the goal-conditioned policy for $T$ time steps to achieve the goal. In this work, the goal-setter module is provided: it translates the atomic program specification into the corresponding goal vector indicating the desired positions of the blocks, which is also used to compute rewards.

We parametrize our shared goal-conditioned policy using Universal Value Function Approximators (UVFAs) (Schaul et al., 2015). UVFAs estimate a value function that generalises not only over states but also over goals. To accelerate training of this goal-conditioned UVFA, we leverage the "final" goal relabelling strategy introduced in HER (Andrychowicz et al., 2017). Past episodes of experience are relabelled retroactively with goals that differ from the goal aimed for during data collection and instead correspond to the goal achieved in the final state of the episode. The mapping from state vector to goal is done by extracting the block positions directly from the state vector $s_t$. To deal with continuous control, we arbitrarily use DDPG (Lillicrap et al., 2015) to train the goal-conditioned policy.

More formally, we define a goal space $\mathcal{G}$ which is a subspace of the state space $\mathcal{S}$. The goal-conditioned policy $\pi_g$ takes as inputs a state $s \in \mathcal{S}$ as well as a goal $g \in \mathcal{G}$. We define the function $m: \mathcal{S} \rightarrow \mathcal{G}$ that extracts the goal (the block positions) from a state. This policy is trained with the sparse reward function $r(s, g)$, equal to 1 when $\lVert m(s) - g \rVert < \epsilon$ and 0 otherwise, where $\epsilon$ is a fixed tolerance. The goal setter $G$ takes an initial state $s_0$ and an atomic program index $i$ and returns a goal $g = G(s_0, i)$ that realises the program's post-condition. We can thus express any atomic program policy as $\pi_i(a \mid s) = \pi_g(a \mid s, G(s_0, i))$.
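As an illustration of the "final" relabelling strategy and of the sparse reward described above, here is a small Python sketch; the episode format, the state layout assumed by the goal-extraction function and the tolerance value are assumptions for illustration only.

import numpy as np

def extract_goal(state: np.ndarray, n_blocks: int = 4) -> np.ndarray:
    # m(s): the achieved goal is the concatenation of the block positions.
    # Assumed layout: block i's (x, y, z) position at state[3*i : 3*i+3].
    return state[: 3 * n_blocks].copy()

def sparse_reward(achieved: np.ndarray, goal: np.ndarray, eps: float = 0.05) -> float:
    # 1 only if every block lies within eps of its desired position.
    per_block = np.linalg.norm((achieved - goal).reshape(-1, 3), axis=1)
    return float(np.all(per_block < eps))

def her_final_relabel(episode, eps: float = 0.05):
    """'final' strategy: replay the episode as if the goal had been the one
    actually achieved in its last state.  episode is a list of dicts with
    keys 'state', 'action', 'next_state' (assumed format)."""
    new_goal = extract_goal(episode[-1]["next_state"])
    relabelled = []
    for tr in episode:
        achieved = extract_goal(tr["next_state"])
        relabelled.append({
            "state": tr["state"],
            "action": tr["action"],
            "next_state": tr["next_state"],
            "goal": new_goal,
            "reward": sparse_reward(achieved, new_goal, eps),
        })
    return relabelled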

We first train the policy with HER to achieve goals sampled uniformly in $\mathcal{G}$, starting from initial states sampled from $\rho_0$. However, the distribution of initial states encountered by each atomic program may be very different when executing programs sequentially (as happens when non-atomic programs are introduced). Thus, after the initial training, we continue training the policy, but with a fixed probability we do not reset the environment between episodes, in order to approximate the distribution of initial states that arises when atomic programs are chained together. Hence, the initial state is either sampled randomly or kept as the last state observed in the previous episode. We later show our empirical results and analysis regarding both training phases.
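The sketch below illustrates this second training phase, where an episode starts from the terminal state of a previous episode with some fixed probability; the environment and policy interfaces (reset_to, step, and so on) are assumed for illustration.

import random

def collect_phase2_episode(env, policy, goal_setter, atomic_programs,
                           state_buffer, p_no_reset=0.5, T=50):
    # With probability p_no_reset, start from a terminal state of a previous
    # episode, so the policy sees the initial states produced by chaining
    # atomic programs; otherwise reset the environment as usual.
    if state_buffer and random.random() < p_no_reset:
        state = env.reset_to(random.choice(state_buffer))  # assumed helper
    else:
        state = env.reset()
    program = random.choice(atomic_programs)
    goal = goal_setter(state, program)
    trajectory = []
    for _ in range(T):
        action = policy(state, goal)
        next_state = env.step(action)          # assumed to return the next state
        trajectory.append((state, action, next_state, goal))
        state = next_state
    state_buffer.append(state)                 # terminal state feeds later episodes
    return trajectory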


Figure 3: AlphaNPI-X combines a goal setter and a goal-conditioned policy to execute atomic programs. Here we see an atomic program execution for stacking block 2 on top of block 3. The goal proposed by the goal setter is shown in red and indicates the desired position for block 2. The goal-conditioned policy takes this goal as an input and receives a positive reward only if it successfully reaches this goal.

Learning Self-Behavioural Model

After learning a set of atomic programs, we learn a transition model $M$ over their effects, parameterized with a neural network. As Figure 2-B shows, this module takes as input an initial state and an atomic program index, and outputs a prediction of the environment state obtained when rolling the policy associated with this program for $T$ time steps from the initial state. We use a fully connected MLP and train it by minimizing the mean-squared error to the ground-truth final states. This enables our model to make jumpy predictions over the effect of executing an atomic program during search, hence removing the need for any interactions with the environment during planning. Learning this model enables us to imagine the state of the environment following an atomic program execution and hence to avoid any further calls to atomic programs, which would each have to perform many actions in the environment.
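A minimal sketch of such a self-behavioural model and its training loop is given below (in PyTorch); the architecture follows the description above, but the exact sizes and the dataset interface are assumptions.

import torch
import torch.nn as nn

class SelfBehaviouralModel(nn.Module):
    # Predicts the state reached after rolling an atomic program for T steps.
    # Input: current state concatenated with a one-hot program index.
    def __init__(self, state_dim: int, n_atomic: int, hidden: int = 512):
        super().__init__()
        self.n_atomic = n_atomic
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_atomic, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, program_idx: torch.Tensor) -> torch.Tensor:
        one_hot = nn.functional.one_hot(program_idx, self.n_atomic).float()
        return self.net(torch.cat([state, one_hot], dim=-1))

def train_self_model(model, dataset, epochs: int = 500, lr: float = 1e-3):
    # dataset is assumed to yield (initial_state, program_idx, final_state) batches.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for s0, idx, sT in dataset:
            pred = model(s0, idx)
            loss = nn.functional.mse_loss(pred, sT)   # jumpy, T-step prediction error
            opt.zero_grad()
            loss.backward()
            opt.step()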

Learning the Meta-Controller

In order to compose atomic programs together into hierarchical stacks of non-atomic programs, we use a meta-controller inspired by AlphaNPI (Pierrot et al., 2019). The meta-controller interprets and selects the next program to execute using neural-network guided Monte Carlo Tree Search (MCTS) (Silver et al., 2017; Pierrot et al., 2019), conditioned on the current program index and states from the environment (see Figure 2 for an overview of a node expansion).

We train the meta-controller using the recursive MCTS strategy introduced in AlphaNPI (Pierrot et al., 2019): during search, if the selected action is non-atomic, we recursively build a new Monte Carlo tree for that program, starting from the same environment state. See Supp. Section B for a detailed description of the search process and pseudo-code.

In AlphaNPI (Pierrot et al., 2019), similar to AlphaZero, future scenarios were evaluated during the tree search by leveraging the ground-truth environment, without any temporal abstraction. In this work, we instead do not use the environment directly during planning, but replace it with our learnt transition model over the effects of the atomic programs, the self-behavioural model described in Section 4.1, resulting in a far more sample-efficient algorithm. In addition, AlphaNPI used a hand-coded curriculum where the agent gradually learned from easy to hard programs. Here we instead randomly sample, at each iteration, a program to learn from, removing the need for any supervision and hand-crafting of a curriculum. Inspired by recent work (Colas et al., 2018), we also implemented an automated curriculum strategy based on learning progress, but found that it does not outperform random sampling (see more detailed explanations and results in Supp. Section B).
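To make the planning procedure more concrete, the following is a simplified sketch of an AlphaZero-style search that uses the self-behavioural model as its simulator; it omits the recursion into nested non-atomic programs and the handling of the Stop action, and the network and model interfaces are assumptions.

import math
import numpy as np

class Node:
    def __init__(self, prior, state=None):
        self.prior, self.state = prior, state
        self.visits, self.value_sum = 0, 0.0
        self.children = {}                       # action index -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.0):
    # P-UCT: trade off value estimates against priors and visit counts.
    total = sum(c.visits for c in node.children.values())
    def score(item):
        _, child = item
        u = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
        return child.q() + u
    return max(node.children.items(), key=score)

def run_mcts(root_state, program_idx, network, self_model, n_sims=100):
    """network(state, program_idx) is assumed to return (priors, value) over the
    discrete program actions; self_model(state, a) predicts the state reached
    after imagining the execution of atomic program a."""
    priors, _ = network(root_state, program_idx)
    root = Node(prior=1.0, state=root_state)
    root.children = {a: Node(p) for a, p in enumerate(priors)}
    for _ in range(n_sims):
        node, path = root, [root]
        # Selection: descend the tree, imagining transitions with the model.
        while node.children:
            a, child = select_child(node)
            if child.state is None:
                child.state = self_model(node.state, a)   # jumpy imagined step
            node = child
            path.append(node)
        # Expansion and evaluation with the policy/value network.
        priors, value = network(node.state, program_idx)
        node.children = {a: Node(p) for a, p in enumerate(priors)}
        # Backup.
        for n in path:
            n.visits += 1
            n.value_sum += value
    counts = np.array([root.children[a].visits for a in sorted(root.children)])
    return counts / counts.sum()                 # tree policy (training target)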

4.2 Inference with AlphaNPI-X

At inference time, we compare three different strategies with AlphaNPI-X: 1) rely only on the policy network, without planning; 2) plan a whole trajectory using the learned transition model (i.e. the self-behavioural model) and execute it fully (open-loop planning); 3) use a receding planning horizon, inspired by Model Predictive Control (MPC) (Garcia et al., 1989). During execution, observed states can diverge from the predictions made by the self-behavioural model due to distribution shift, especially for long planning horizons, which deteriorates performance. To counter this, we do not commit to a plan for the full episode, but instead re-plan after executing any atomic program (non-atomic programs do not trigger re-planning). The comparison between these inference methods can be found in Table 2. We also provide a detailed example in Supp. Section C.
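A sketch of the receding-horizon (MPC-style) inference loop is shown below; it builds on the run_mcts sketch of the previous section, the environment and program interfaces are assumed, and the recursive handling of nested non-atomic program calls is omitted for brevity.

def execute_with_replanning(env, program_idx, network, self_model,
                            execute_atomic, programs, max_calls=30):
    # Strategy 3: plan with the learned self-behavioural model, execute only one
    # atomic program for real, observe the state actually reached, then re-plan.
    state = env.observe()                        # assumed accessor
    for _ in range(max_calls):
        if programs[program_idx].post_condition(state):
            return True                          # sparse reward obtained
        tree_policy = run_mcts(state, program_idx, network, self_model)
        action = int(tree_policy.argmax())       # exploitation: argmax of visit counts
        if programs[action].name == "STOP":
            break
        # Run the chosen atomic program for T real environment steps, then
        # re-plan from the observed (not imagined) state.
        state = execute_atomic(env, action)
    return programs[program_idx].post_condition(state)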

5 Experiments and Results

Figure 4: The goal policy is trained in two phases. During the first phase, goals and initial states are sampled uniformly. During the second phase, we do not reset the environment between two consecutive episodes with a fixed probability, to approximate the real distribution of initial states. Once the goal policy is trained, we learn a model of its behaviour. Left: comparing performance after phase 2 for different values of this probability. Middle: comparing performance between phase 1 and phase 2. Right: comparing model prediction performance when trained on different numbers of episodes generated with the goal policy.

We first train the atomic policies using DDPG with the HER relabelling described in Section 4.1. Initial block positions on the table as well as the gripper position are randomized. Additional details regarding the experimental setup are provided in Supp. Section D. In the first phase of training, we train the agent for 100 epochs with uniform goal sampling. In the second phase, we continue training for 150 epochs, changing the sampling distribution so that the environment is not always reset between episodes, to mimic the desired setting where skills are sequentially executed.

We evaluate the agent's performance on goals corresponding to single atomic programs as well as on two multi-step sequential atomic program executions: MultiStep-Moving, which requires 4 consecutive calls to move_to_zone, and MultiStep-Stacking, which requires 3 consecutive calls to stack (see Figure 4). We observe that while the agent obtains decent performance on executing atomic programs after the first phase, the second phase is indeed crucial to ensure success of the sequential execution of programs. We also observe that the value of the no-reset probability affects performance and trades off asymptotic training performance against the agent's ability to execute programs sequentially. More details are included in Supp. Section E.

Figure 5: Evolution of AlphaNPI-X performance on non-atomic programs during training. The performance shown is the one predicted by the behavioural model and can thus differ from the one measured in the environment.

We then train the self-behavioural model to predict the effect of atomic programs, using three datasets made of 10k, 50k and 100k episodes, respectively, played with the goal-conditioned policy. As in the second phase of policy training, we did not reset the environment between two episodes with a fixed probability. We trained the self-behavioural model for 500 epochs on each dataset. Figure 4 shows the performance aggregated across different variations of the stack and move_to_zone behaviours.

To train the meta-controller, we randomly sample from the set of non-atomic programs during training. In Figure 5, we report the performance on all non-atomic tasks over the course of training. We also investigated the use of an automated curriculum via learning progress but observed that it does not outperform random sampling (please see Supp. Section B.3 for details). After training, we evaluate the AlphaNPI-X agent on each non-atomic program in the library, as shown in Table 2. Our results indicate that removing planning significantly reduces performance, especially for stacking 4 blocks, where removing planning results in complete failure. We observe that re-planning during inference significantly improves the agent's performance overall.


Program              No Plan   Planning   Planning + Re-planning   Multitask DDPG   Multitask DDPG + HER
Clean_Table          0.02      0.54       0.63                     0.0              0.0
Clean_And_Stack      0.01      0.25       0.38                     0.0              0.0
Stack_All_Blocks     0.02      0.19       0.31                     0.0              0.0
Stack_All_To_Zone    0.82      0.92       0.94                     0.0              0.0
Move_All_To_Zone     0.68      0.95       0.95                     0.0              0.0
Table 2: We compare the performance of AlphaNPI-X in the different inference settings described in Section 4.2 as well as against 2 model-free baselines. Each program is executed 100 times with a randomized environment configuration. For programs with arguments, the performance is averaged over all possible argument combinations. The full list of programs is provided in Supp. Section A.

We compare our method to two baselines to illustrate the difficulty for standard RL methods to solve tasks with sparse reward signals. First, we implemented a multitask DDPG (M-DDPG) that takes as input the environment state and a one-hot encoding of a non-atomic program. At each iteration, M-DDPG selects a program index at random and plays one episode in the environment. Second, we implemented an M-DDPG + HER agent which leverages a goal setter for the non-atomic tasks described in Section 4.1. This non-atomic-program goal setter is not available to AlphaNPI-X, so this agent has more information than its M-DDPG counterpart: instead of receiving a program index, it receives a goal vector representing the desired end state of the blocks. As in HER, goals are relabelled during training. We observe that M-DDPG is unable to learn any non-atomic program. The same holds for M-DDPG + HER, despite its access to the additional goal representations. This shows that the standard exploration mechanisms of model-free agents such as DDPG, where Gaussian noise is added to the actions, are very unlikely to produce rewarding sequences, and learning is therefore hindered.

6 Conclusion

In this paper, we proposed AlphaNPI-X, a novel method for constructing a hierarchy of abstract actions in a rich object manipulation domain with sparse rewards and long horizons. Several ingredients proved critical in the design of AlphaNPI-X. First, by learning a self-behavioural model, we can leverage the power of recursive AlphaZero-style look-ahead planning across multiple levels of hierarchy, without ever interacting with the real environment. Second, planning with a receding horizon at inference time resulted in more robust performance. We also observed that our approach does not require the carefully designed curriculum commonly used with NPI; random task sampling can simply be used instead. Experimental results demonstrated that AlphaNPI-X, using abstract imagination-based reasoning, can simultaneously solve multiple complex tasks involving dexterous object manipulation that are beyond the reach of model-free methods.

A limitation of our work is that similar to AlphaNPI, we pre-specified a hierarchical library of the programs to be learned. While it enables human interpretability, this requirement is still quite strong. Thus, a natural next step would be to extend our algorithm to discover these programmatic abstractions during training so that our agent can specify and learn new skills hierarchically in a fully unsupervised manner.

Broader Impact

Our paper presents a sample efficient technique for learning in sparse-reward temporally extended settings. It does not require any human demonstrations and can learn purely from sparse reward signals. Moreover, after learning low-level skills, it can learn to combine them to solve challenging new tasks without any requirement to interact with the environment. We believe this has a positive impact in making reinforcement learning techniques more accessible and applicable in settings where interacting with the environment is costly or even dangerous such as robotics. Due to the compositional nature of our method, the interpretability of the agent’s policy is improved as high-level programs explicitly indicate the intention of the agent over multiple steps of behaviour. This participates in the effort of building more interpretable and explainable reinforcement learning agents in general. Furthermore, by releasing our code and environments we believe that we help efforts in reproducible science and allow the wider community to build upon and extend our work in the future.

References

  • I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino, M. Plappert, G. Powell, R. Ribas, et al. (2019) Solving rubik’s cube with a robot hand. arXiv preprint arXiv:1910.07113. Cited by: §1, §1.
  • J. Andreas, D. Klein, and S. Levine (2017) Modular multitask reinforcement learning with policy sketches. In

    Proceedings of the 34th International Conference on Machine Learning-Volume 70

    ,
    pp. 166–175. Cited by: §2.
  • M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, et al. (2018) Learning dexterous in-hand manipulation. arXiv preprint arXiv:1808.00177. Cited by: §1.
  • M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, P. Abbeel, and W. Zaremba (2017) Hindsight experience replay. CoRR abs/1707.01495. External Links: 1707.01495 Cited by: Appendix C, §E.4, §1, §2, §4.1.
  • P. Bacon, J. Harb, and D. Precup (2016) The option-critic architecture. CoRR abs/1609.05140. External Links: 1609.05140 Cited by: §2.
  • A. Barreto, D. Borsa, S. Hou, G. Comanici, E. Aygün, P. Hamel, D. Toyama, S. Mourad, D. Silver, D. Precup, et al. (2019) The option keyboard: combining skills in reinforcement learning. In Advances in Neural Information Processing Systems, pp. 13031–13041. Cited by: §2, §2.
  • H. Bharadhwaj, A. Garg, and F. Shkurti (2020) Dynamics-aware latent space reachability for exploration in temporally-extended tasks. arXiv preprint arXiv:2005.10934. Cited by: §2.
  • G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba (2016) OpenAI gym. CoRR abs/1606.01540. External Links: 1606.01540 Cited by: §D.1, §3.
  • C. Colas, P. Fournier, O. Sigaud, and P. Oudeyer (2018) CURIOUS: intrinsically motivated multi-task, multi-goal reinforcement learning. CoRR abs/1810.06284. External Links: 1810.06284 Cited by: §B.3, §E.4, §4.1.
  • T. G. Dietterich (2000) Hierarchical reinforcement learning with the maxq value function decomposition.

    Journal of artificial intelligence research

    13, pp. 227–303.
    Cited by: §2.
  • B. Eysenbach, A. Gupta, J. Ibarz, and S. Levine (2018) Diversity is all you need: learning skills without a reward function. arXiv preprint arXiv:1802.06070. Cited by: §2.
  • A. Faust, K. Oslund, O. Ramirez, A. Francis, L. Tapia, M. Fiser, and J. Davidson (2018) PRM-rl: long-range robotic navigation tasks by combining reinforcement learning and sampling-based planning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 5113–5120. Cited by: §1.
  • C. E. Garcia, D. M. Prett, and M. Morari (1989) Model predictive control: theory and practice—a survey. Automatica 25 (3), pp. 335–348. Cited by: §4.2.
  • K. Gregor, D. J. Rezende, and D. Wierstra (2016) Variational intrinsic control. arXiv preprint arXiv:1611.07507. Cited by: §2.
  • C. Guestrin, D. Koller, R. Parr, and S. Venkataraman (2003) Efficient solution algorithms for factored mdps. Journal of Artificial Intelligence Research 19, pp. 399–468. Cited by: §2.
  • A. Gupta, V. Kumar, C. Lynch, S. Levine, and K. Hausman (2019) Relay policy learning: solving long-horizon tasks via imitation and reinforcement learning. arXiv preprint arXiv:1910.11956. Cited by: §2.
  • D. Held, X. Geng, C. Florensa, and P. Abbeel (2018) Automatic goal generation for reinforcement learning agents. arXiv preprint arXiv:1705.06366. Cited by: §1.
  • L. Illanes, X. Yan, R. T. Icarte, and S. A. McIlraith (2020) Symbolic plans as high-level instructions for reinforcement learning. In Proceedings of the International Conference on Automated Planning and Scheduling, Vol. 30, pp. 540–550. Cited by: §2.
  • R. Islam, Z. Ahmed, and D. Precup (2019) Marginalized state distribution entropy regularization in policy optimization. arXiv preprint arXiv:1912.05128. Cited by: §2.
  • Y. Jiang, S. Gu, K. Murphy, and C. Finn (2019) Language as an abstraction for hierarchical deep reinforcement learning. CoRR abs/1906.07343. External Links: 1906.07343 Cited by: §2.
  • J. B. Lanier, S. McAleer, and P. Baldi (2019) Curiosity-driven multi-criteria hindsight experience replay. arXiv preprint arXiv:1906.03710. Cited by: §E.4, §2, §2.
  • L. Lee, B. Eysenbach, E. Parisotto, E. Xing, S. Levine, and R. Salakhutdinov (2019) Efficient exploration via state marginal matching. arXiv preprint arXiv:1906.05274. Cited by: §2.
  • A. Levy, R. Platt, and K. Saenko (2017) Hierarchical actor-critic. arXiv preprint arXiv:1712.00948. Cited by: §2.
  • A. Levy, R. Platt, and K. Saenko (2018) Hierarchical reinforcement learning with hindsight. arXiv preprint arXiv:1805.08180. Cited by: §2.
  • A. Levy, R. Platt, and K. Saenko (2019) Hierarchical reinforcement learning with hindsight. In International Conference on Learning Representations, Cited by: §2.
  • R. Li, A. Jabri, T. Darrell, and P. Agrawal (2019) Towards practical multi-object manipulation using relational reinforcement learning. arXiv preprint arXiv:1912.11032. Cited by: §2.
  • T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra (2015) Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971. Cited by: §4.1.
  • O. Nachum, S. Gu, H. Lee, and S. Levine (2018) Data-efficient hierarchical reinforcement learning. CoRR abs/1805.08296. External Links: 1805.08296 Cited by: §2.
  • A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine (2018) Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 7559–7566. Cited by: §2.
  • A. Nagabandi, K. Konoglie, S. Levine, and V. Kumar (2019) Deep dynamics models for learning dexterous manipulation. arXiv preprint arXiv:1909.11652. Cited by: §1.
  • S. Nair and C. Finn (2019)

    Hierarchical foresight: self-supervised learning of long-horizon tasks via visual subgoal generation

    .
    arXiv preprint arXiv:1909.05829. Cited by: §2.
  • S. Nasiriany, V. Pong, S. Lin, and S. Levine (2019) Planning with goal-conditioned policies. In Advances in Neural Information Processing Systems, pp. 14814–14825. Cited by: §2.
  • T. Pierrot, G. Ligner, S. E. Reed, O. Sigaud, N. Perrin, A. Laterre, D. Kas, K. Beguir, and N. de Freitas (2019) Learning compositional neural programs with recursive tree search and planning. In Advances in Neural Information Processing Systems 32, pp. 14646–14656. Cited by: Figure 6, §1, §2, §4.1, §4.1, §4.1.
  • V. H. Pong, M. Dalal, S. Lin, A. Nair, S. Bahl, and S. Levine (2019) Skew-fit: state-covering self-supervised reinforcement learning. arXiv preprint arXiv:1903.03698. Cited by: §2.
  • S. Reed and N. De Freitas (2015) Neural programmer-interpreters. arXiv preprint arXiv:1511.06279. Cited by: §2.
  • M. A. Riedmiller, R. Hafner, T. Lampe, M. Neunert, J. Degrave, T. V. de Wiele, V. Mnih, N. Heess, and J. T. Springenberg (2018) Learning by playing - solving sparse reward tasks from scratch. CoRR abs/1802.10567. External Links: 1802.10567 Cited by: §2.
  • S. Russell (2019) Human compatible: artificial intelligence and the problem of control. Penguin Publishing Group. External Links: ISBN 9780525558620, LCCN 2019029689, Link Cited by: §1.
  • T. Schaul, D. Horgan, K. Gregor, and D. Silver (2015) Universal value function approximators. In Proceedings of the 32nd International Conference on Machine Learning, F. Bach and D. Blei (Eds.), Proceedings of Machine Learning Research, Vol. 37, Lille, France, pp. 1312–1320. Cited by: §2, §4.1.
  • J. Schrittwieser, I. Antonoglou, T. Hubert, K. Simonyan, L. Sifre, S. Schmitt, A. Guez, E. Lockhart, D. Hassabis, T. Graepel, et al. (2019) Mastering atari, go, chess and shogi by planning with a learned model. arXiv preprint arXiv:1911.08265. Cited by: §2.
  • R. Sekar, O. Rybkin, K. Daniilidis, P. Abbeel, D. Hafner, and D. Pathak (2020) Planning to explore via self-supervised world models. arXiv preprint arXiv:2005.05960. Cited by: §2.
  • D. Silver and K. Ciosek (2012) Compositional planning using optimal option models. arXiv preprint arXiv:1206.6473. Cited by: §2.
  • D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529 (7587), pp. 484. Cited by: §2.
  • D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, et al. (2017) Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815. Cited by: §1, §2, §4.1.
  • R. S. Sutton, D. Precup, and S. Singh (1999) Between mdps and semi-mdps: a framework for temporal abstraction in reinforcement learning. Artificial intelligence 112 (1-2), pp. 181–211. Cited by: §2.
  • J. Tan, T. Zhang, E. Coumans, A. Iscen, Y. Bai, D. Hafner, S. Bohez, and V. Vanhoucke (2018) Sim-to-real: learning agile locomotion for quadruped robots. arXiv preprint arXiv:1804.10332. Cited by: §1.
  • M. E. Taylor and P. Stone (2009) Transfer learning for reinforcement learning domains: a survey. Journal of Machine Learning Research 10 (Jul), pp. 1633–1685. Cited by: §2.
  • A. S. Vezhnevets, S. Osindero, T. Schaul, N. Heess, M. Jaderberg, D. Silver, and K. Kavukcuoglu (2017) Feudal networks for hierarchical reinforcement learning. arXiv preprint arXiv:1703.01161. Cited by: §2.
  • D. Xu, S. Nair, Y. Zhu, J. Gao, A. Garg, L. Fei-Fei, and S. Savarese (2017) Neural task programming: learning to generalize across hierarchical tasks. arXiv preprint arXiv:1710.01813. External Links: 1710.01813 Cited by: §2.
  • Y. Zhang and Q. Yang (2017) A survey on multi-task learning. arXiv preprint arXiv:1707.08114. Cited by: §2.

Supplementary material

Appendix A Programs library

Our program library includes 20 atomic programs and 7 non-atomic ones. The full list of these programs can be viewed in Table 3. In the AlphaNPI paper, the program library contained only 5 atomic programs and 3 non-atomic programs. Thus, the branching factor of the tree search in AlphaNPI-X is on average much greater than in the AlphaNPI paper. Furthermore, in this work, the atomic programs are learned and therefore might not always execute as expected, while in AlphaNPI the five atomic programs are hard-coded in the environment and thus execute successfully whenever they are called.

While in this study we removed the need for a hierarchy of programs during learning, we still defined program levels for two purposes: (i) to control the dependencies between programs, as programs of lower levels cannot call programs of higher levels; (ii) to facilitate the tree search by relying on the level-balancing term introduced in the original AlphaNPI P-UCT criterion. In this context, we defined Clean_Table and Clean_And_Stack as level 2 programs, and Stack_All_Blocks, Stack_All_To_Zone and Move_All_To_Zone as level 1 programs. The atomic programs are defined as level 0 programs. Interestingly, the natural order learned during training (when sampling random program indices to learn from) matches this hierarchy, see Figure 11.


program description
Stack_0_1 Stack block number 0 on block number 1.
Stack_0_2 Stack block number 0 on block number 2.
Stack_0_3 Stack block number 0 on block number 3.
Stack_1_0 Stack block number 1 on block number 0.
Stack_1_2 Stack block number 1 on block number 2.
Stack_1_3 Stack block number 1 on block number 3.
Stack_2_0 Stack block number 2 on block number 0.
Stack_2_1 Stack block number 2 on block number 1.
Stack_2_3 Stack block number 2 on block number 3.
Stack_3_0 Stack block number 3 on block number 0.
Stack_3_1 Stack block number 3 on block number 1.
Stack_3_2 Stack block number 3 on block number 2.
Move_To_Zone_0_ORANGE Move block number 0 to orange zone.
Move_To_Zone_1_ORANGE Move block number 1 to orange zone.
Move_To_Zone_2_ORANGE Move block number 2 to orange zone.
Move_To_Zone_3_ORANGE Move block number 3 to orange zone.
Move_To_Zone_0_BLUE Move block number 0 to blue zone.
Move_To_Zone_1_BLUE Move block number 1 to blue zone.
Move_To_Zone_2_BLUE Move block number 2 to blue zone.
Move_To_Zone_3_BLUE Move block number 3 to blue zone.
Stack_All_To_Zone_ORANGE Stack orange blocks in the orange zone.
Stack_All_To_Zone_BLUE Stack blue blocks in the blue zone.
Move_All_To_Zone_ORANGE Move orange blocks to the orange zone.
Move_All_To_Zone_BLUE Move blue blocks to the blue zone.
Stack_All_Blocks Stack all blocks together in any order.
Clean_Table Move all blocks to their colour zone.
Clean_And_Stack Stack blocks of the same colour in zones.
Table 3: Programs library for the fetch arm environment. We show the flat list of all program possibilities obtained by expanding their arguments.

Appendix B Details of the AlphaNPI-X method

B.1 Table of symbols

Here we provide the list of symbols used in our method section:

Name                                 Symbol              Notes
Action space                         $\mathcal{A}$       continuous
State space                          $\mathcal{S}$       continuous
Goal space                           $\mathcal{G}$       sub-space of $\mathcal{S}$
Initial state distribution           $\rho_0$
Reward obtained at time $t$          $r_t$
Discount factor                      $\gamma$
Program                              $p_i$               atomic or non-atomic
Program index selected at time $t$   $i_t$
Environment state                    $s_t$
Program pre-condition                $\text{pre}_i$
Program post-condition               $\text{post}_i$
AlphaNPI-X policy                    $\pi$
Program policy                       $\pi_i$
Goal setter                          $G$
HER mapping function                 $m$                 extracts a goal from a state vector
Goal-conditioned policy              $\pi_g$
Self-behavioural model               $M$
Table 4: AlphaNPI-X table of symbols

B.2 AlphaNPI

Here we provide some further details on the AlphaNPI method, which we use and extend for our meta-controller. The AlphaNPI agent uses a recursion-augmented Monte Carlo Tree Search (MCTS) algorithm to learn libraries of hierarchical programs from sparse reward signals. The tree search is guided by an actor-critic network inspired by the Neural Programmer-Interpreter (NPI) architecture. A stack is used to handle programs as in a standard program execution: when a non-atomic program calls another program, the current NPI network's hidden state is saved on a stack and the next program execution starts. When it terminates, the execution goes back to the previous program and the network gets back its previous hidden state. The same mechanism is also applied to MCTS itself: when the current search decides to execute another program, the current tree is saved along with the network's hidden state on the stack and a new search starts to execute the desired program.

The neural network takes as input an environment state $s$ and a program index $i$ and returns a vector of probabilities $\pi$ over the next programs to call, as well as a prediction of the value $V$. The network is composed of five modules. An encoder encodes the environment state into a vector $e$, and a program embedding matrix contains a learnable embedding for each non-atomic program: the $i$-th row of the matrix contains the embedding of the program referred to by index $i$. An LSTM core takes the encoded state and the program embedding as input and returns its hidden state $h$. Finally, a policy head and a value head take the hidden state as input and return $\pi$ and $V$ respectively, see Figure 6.

Figure 6: The AlphaNPI architecture, slightly modified from Pierrot et al. [2019].
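Below is a compact PyTorch sketch of such a network; the module dimensions follow the hyper-parameters listed in Supp. Section D.3, while the exact layer compositions and the inclusion of a Stop action in the policy head are illustrative assumptions.

import torch
import torch.nn as nn

class AlphaNPINet(nn.Module):
    # Five modules: state encoder, program embedding matrix, LSTM core,
    # policy head and value head.
    def __init__(self, state_dim, n_non_atomic, n_actions,
                 enc_dim=128, emb_dim=256, hid_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                     nn.Linear(128, enc_dim), nn.Tanh())
        self.program_embeddings = nn.Embedding(n_non_atomic, emb_dim)
        self.core = nn.LSTMCell(enc_dim + emb_dim, hid_dim)
        self.policy_head = nn.Linear(hid_dim, n_actions)   # programs + Stop
        self.value_head = nn.Linear(hid_dim, 1)

    def forward(self, state, program_idx, hidden=None):
        e = self.encoder(state)                      # encode environment state
        m = self.program_embeddings(program_idx)     # embed the current program
        h, c = self.core(torch.cat([e, m], dim=-1), hidden)
        pi = torch.softmax(self.policy_head(h), dim=-1)  # priors over next calls
        v = self.value_head(h).squeeze(-1)               # value prediction
        return pi, v, (h, c)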

The guided MCTS process is used to generate data to train the AlphaNPI neural network. A search returns a sequence of transitions $(s_t, i_t, \pi^{\text{mcts}}_t, R)$, where $\pi^{\text{mcts}}_t$ corresponds to the tree policy at time $t$ and $R$ is the final episode reward. The tree policy is computed from the tree nodes' visit counts. These transitions are stored in a replay buffer. At training time, batches of trajectories are sampled and the network is trained to minimize the loss

$\mathcal{L} = \sum_t \left[ \big(V(s_t, i_t) - R\big)^2 - \pi^{\text{mcts}\,\top}_t \log \pi(s_t, i_t) \right]$    (1)

Note that in AlphaNPI-X, in contrast with the original AlphaNPI work, proper back-propagation through time (BPTT) over batches of trajectories is performed instead of gradient descent over batches of transitions, resulting in more stable updates.

AlphaNPI can explore and exploit. In exploration mode, many simulations per node are performed, Dirichlet noise is added to the priors to help exploration, and actions are chosen by sampling from the tree policy vectors. In exploitation mode, significantly fewer simulations are used, no noise is added, and actions are chosen by taking the argmax of the tree policy vectors. The tree is used in exploration mode to generate data for training and in exploitation mode at inference time.

B.3 Improvements over AlphaNPI

Distributed training and BPTT

We improved AlphaNPI training speed and stability by distributing the algorithm. We use 10 actors in parallel. Each actor holds a copy of the neural network weights, which it uses to guide its tree searches and collect experience. In each epoch, the experience collected by all the actors is sent to a centralized replay buffer. A learner, which also has a copy of the network weights, samples batches from the buffer, computes the losses and performs gradient descent to update the network weights. When the learner is done, it sends the new weights to all the actors. We use the MPI (Message Passing Interface) paradigm through its Python package mpi4py to implement the parallel processes. Each actor uses 1 CPU.

Another improvement over the standard AlphaNPI is leveraging back-propagation through time (BPTT). The AlphaNPI agent uses an LSTM. During tree search, the LSTM states are stored inside the tree nodes. However, in the original AlphaNPI, transitions are stored independently and gradient descents are performed on batches of uncorrelated transitions. While it worked on the examples presented in the original paper, we found that implementing BPTT improves training stability.

Curriculum Learning

At each training iteration, the agent selects the programs to train on. In the original AlphaNPI paper, the agent selects programs following a hard-coded curriculum based on program complexity. In this work, we instead select programs randomly and thus remove the need for extra supervision. We also implemented an automated curriculum learning paradigm based on a learning progress signal (Colas et al. [2018]) and observed that it does not improve over random sampling. We compare both strategies in Figure 5. We detail below the learning-progress-based curriculum we used as a comparison.

The curriculum based on program levels from the original AlphaNPI can be replaced by a curriculum based on learning progress. The agent focuses on programs for which its learning progress is maximal. This strategy requires fewer hyper-parameters and enables the agent to discover the same hierarchy implicitly. The learning progress for a program is defined as the derivative over time of the agent's performance on this program. At the beginning of each training iteration $t$, the agent attempts every non-atomic program a fixed number of times in exploitation mode. We denote by $\mu_i(t)$ the average performance, computed as the mean reward over these episodes, on program $i$ at iteration $t$. Rewards are still assumed to be binary: 0 or 1. We compute the learning progress for program $i$ as

$LP_i(t) = \big| \mu_i(t) - \mu_i(t-1) \big|$    (2)

where the absolute derivative over time of the agent's performance is approximated at first order. Finally, we compute the probability of choosing to train on program $i$ at iteration $t$ as

$p_i(t) = (1 - \epsilon)\, \frac{LP_i(t)}{\sum_{j=1}^{N} LP_j(t)} + \frac{\epsilon}{N}$    (3)

where $\epsilon$ is a hyperparameter and $N$ is the total number of non-atomic programs. The $\epsilon / N$ term is used to balance exploration and exploitation; it ensures that the agent tries all programs with a non-zero probability.
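A small sketch of this sampling rule, assuming the form of Equations (2) and (3) as reconstructed above, is given below; the epsilon value is illustrative.

import numpy as np

def curriculum_probabilities(perf_history, epsilon=0.2):
    # perf_history: array of shape (iterations, N) of per-program success rates.
    n_programs = perf_history.shape[1]
    if perf_history.shape[0] < 2:
        return np.full(n_programs, 1.0 / n_programs)       # no progress signal yet
    lp = np.abs(perf_history[-1] - perf_history[-2])       # learning progress, Eq. (2)
    if lp.sum() == 0.0:
        return np.full(n_programs, 1.0 / n_programs)
    return (1 - epsilon) * lp / lp.sum() + epsilon / n_programs   # Eq. (3)

# Example: sample the next program to train on.
# probs = curriculum_probabilities(history)
# idx = np.random.choice(len(probs), p=probs)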

B.4 Pseudo-codes

Goal policy training

for num_epoch do
    for num_cycle do
        for num_ep_per_cycle do
            Reset the environment to a state
            Sample a goal uniformly in the goal space
            Play one episode with the goal policy from this state, conditioned on the goal
            Store the trajectory transitions in a replay buffer
        end for
        for num_sgd_per_cycle do
            Sample a batch of transitions from the replay buffer
            Resample goals according to the HER strategy
            Compute the gradient on this batch
            Update weights with the gradient averaged over all actors
        end for
    end for
end for
Algorithm 1 Goal policy training: First phase

for num_epoch do
    for num_cycle do
        for num_ep_per_cycle do
            Choose an atomic (motion) program uniformly at random
            With probability 0.5, reset the environment;
            otherwise, sample from the state buffer a state satisfying the program pre-condition and reset the environment to it
            Compute with the goal setter the goal corresponding to this program
            Play one episode with the goal policy, conditioned on this goal
            Store the trajectory in a replay buffer
            Store the final state in the state buffer
        end for
        for num_sgd_per_cycle do
            Sample a batch of transitions from the replay buffer
            Resample goals according to the HER strategy
            Compute the gradient on this batch
            Update weights with the gradient averaged over all actors
        end for
    end for
end for
Algorithm 2 Goal policy training: Second phase

Self-behavioural model training

Data generation

for num_episode do
    Choose an atomic (motion) program uniformly at random
    With probability 0.5, reset the environment;
    otherwise, sample from the state buffer a state satisfying the program pre-condition and reset the environment to it
    Compute with the goal setter the corresponding goal
    Play one episode with the goal policy from the initial state, conditioned on this goal
    Store the initial state, the final state and the program index in a dataset
end for

Training

for num_epochs do
    for num_sgd_epoch do
        Sample a batch of tuples (initial state, program index, final state)
        Update the model to minimize the mean-squared error between its prediction and the final state
    end for
end for
Algorithm 3 Learning the self-behavioural model

AlphaNPI training

for num_epoch do
    for num_task_per_epoch do
        Sample a program according to the curriculum strategy
        for num_ep_per_task do
            Sample an initial state until the program pre-condition is satisfied
            Run an AlphaNPI search using the self-behavioural model to obtain a final state
            Compute the reward with the program post-condition
            Store the trajectory in a replay buffer
        end for
    end for
    for num_sgd_per_epoch do
        Sample a minibatch of transitions from the replay buffer
        Train the AlphaNPI neural net on it
    end for
    for every program do
        Play episodes with the tree in exploitation mode
        Record the averaged score
    end for
end for
Algorithm 4 AlphaNPI training

AlphaNPI inference

Input: a program index and an initial state

while the program has not terminated do
    Run an AlphaNPI search from the current state, relying on the self-behavioural model
    Compute the tree policy from the visit counts
    Select the next program to call
    if the selected program is atomic then
        Roll the goal-conditioned policy for T time steps in the environment to get the new state
    else
        if the selected action is STOP then
            Stop the search
        else
            Run a new search recursively to execute the selected program
        end if
    end if
end while
Algorithm 5 AlphaNPI execution with MPC during inference

Appendix C Example execution of the programs

Let us imagine we want to run the atomic program Stack_1_2. A state contains all positions, angles, velocities and angular velocities of the blocks and of the robot articulations at time $t$. A goal is the concatenation of the desired final positions of the four blocks. Assume the atomic program Stack_1_2 has index 0. The goal $g = G(s_0, 0)$ corresponding to this program keeps every block at its current position, except that the desired position of block 1 is placed directly above block 2, with a vertical offset determined by the block radius. This goal corresponds to positions where block 1 is stacked on block 2 and where the other blocks have not been moved. The pre-condition of this program verifies in the current state that no other block is already on top of block 2, so that stacking is possible. If the pre-condition is verified, the goal policy is rolled for $T$ time steps to execute this goal. When the goal policy terminates, the program post-condition is called on the final state and verifies, for all blocks, that their final position is within a sphere of radius $\epsilon$ around the expected final position described by the goal. The value we use for $\epsilon$ is the standard value used in HER [Andrychowicz et al., 2017].
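The goal computation and post-condition check described above could look like the following Python sketch; the block radius, the tolerance and the state layout are illustrative assumptions.

import numpy as np

BLOCK_RADIUS = 0.025   # assumed value
EPS = 0.05             # assumed HER-style success threshold

def block_pos(state, i):
    # Assumed layout: block i's (x, y, z) position at state[3*i : 3*i+3].
    return state[3 * i : 3 * i + 3]

def goal_stack_1_2(state):
    # Goal setter for Stack_1_2: every block keeps its current position,
    # except block 1 whose desired position is directly above block 2.
    goal = np.concatenate([block_pos(state, i) for i in range(4)])
    goal[3:6] = block_pos(state, 2) + np.array([0.0, 0.0, 2 * BLOCK_RADIUS])
    return goal

def post_condition(state, goal):
    # Success if every block lies within a sphere of radius EPS around its
    # desired position in the goal vector.
    achieved = np.concatenate([block_pos(state, i) for i in range(4)])
    dists = np.linalg.norm((achieved - goal).reshape(-1, 3), axis=1)
    return bool(np.all(dists < EPS))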

We train the goal-conditioned policy so that it can reach goals corresponding to the atomic programs from any initial position, as long as the initial position verifies the atomic program pre-condition. Once it is trained, we learn its self-behavioural model $M$. This model takes an environment state and an atomic program index and predicts the environment state obtained when the goal policy has been rolled for $T$ time steps conditioned on the corresponding goal and the initial environment state. Thus, this model performs jumpy predictions: it does not capture instantaneous effects but rather global effects, such as a block having been moved or a block having remained at its initial position.

When the model has been trained, we use it to learn non-atomic programs with our extended AlphaNPI. Let us imagine that we want to execute the Clean_And_Stack program. This program is expected to stack, in any order, both orange blocks in the orange zone and both blue blocks in the blue zone. Its pre-condition is always true since it can be called from any initial position. Its post-condition looks at a state and returns 1 if it finds an orange block with its centre of gravity in the orange zone and with an orange block stacked on top of it, and if the same holds for the blue blocks. The post-condition returns 0 otherwise. The stacking test is performed as explained above, looking at a sphere of radius $\epsilon$ around an ideal position.

A possible execution trace for this program is:

Figure 7: Illustrative example of an execution trace for the Clean_And_Stack program. This trace is not optimal as the program may be realised in fewer moves but it corresponds to one of the solutions found by AlphaNPI-X during training. Atomic program calls are shown in green and non-atomic program calls are shown in blue.

In this trace, the non-atomic programs are shown in blue and the atomic programs are shown in green. Non-atomic program execution happens as in the original AlphaNPI. The execution of an atomic program happens as described before: when the atomic program is called, the current environment state is transformed into a goal vector that is realised by the goal-conditioned policy. In the trace above, the goal policy has been called once per atomic program call, each time with a different goal vector.

Appendix D Experimental setting

D.1 Experiment setup


Figure 8: Our multi-task fetch arm environment. The agent first learns to manipulate all blocks on the table. Then it learns to perform a large set of manipulation tasks. These tasks require abstraction as well as precision. The agent learns to move blocks to zones depending on the block colour. It learns to stack all blocks together and to stack blocks of the same colour in the corresponding zone.

We base our experiments on a set of robotic tasks with a continuous action space. Due to the lack of long-horizon hierarchical multi-task benchmarks, we extended the OpenAI Gym Fetch environment Brockman et al. [2016] with tasks exhibiting such requirements. We reused the core of the Pick And Place environment and added two colored zones as well as four colored, indexed blocks. We did not modify the environment physics. The action space is continuous and corresponds to commands to the arm and gripper. In this setting, the observation space is the same as the state space and contains all block and robot joint positions together with their linear and angular velocities. The observation has a fixed dimension, with blocks ordered in the observations, removing the need to provide their colour or index. More precisely, a state has the form $s_t = (x_t^1, x_t^2, x_t^3, x_t^4, w_t)$, where $x_t^i$ is the position of block $i$ at time step $t$ and $w_t$ contains additional information about the gripper and the velocities. When executing atomic programs, we assume that we always aim to move the block which is at the first position in the observation vector. Then, to move any block $i$, we apply a circular permutation to the observation vector such that block $i$ arrives at the first position in the observation.
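
A possible implementation of this circular permutation, under the assumed observation layout from the previous paragraph (block positions first, remaining gripper/velocity information afterwards):

```python
def permute_observation(state, block_index, num_blocks=4):
    """Circularly permute the block entries so that `block_index` (0-based) comes first.
    Only the position part is permuted here; in the full observation the per-block
    velocity entries inside the remainder would need the same permutation."""
    state = np.asarray(state)
    positions = state[:3 * num_blocks].reshape(num_blocks, 3)
    rest = state[3 * num_blocks:]
    order = [(block_index + k) % num_blocks for k in range(num_blocks)]
    return np.concatenate([positions[order].flatten(), rest])
```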

We consider two different initial state distributions in the environment. During goal-conditioned policy training, the block at the first position in the observation vector starts in the gripper with some probability, under the gripper with some probability, and elsewhere, uniformly on the table, otherwise. All other blocks may start anywhere on the table. Once the first block is positioned, we place the other blocks in a random order. When a block is placed, it is placed on top of another block with some probability and on the table otherwise; in the latter case, its position is sampled uniformly on the table until there is no collision. During training, we anneal these probabilities, so that by the end of training all blocks are always initialised at random positions on the table.
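
The following sketch summarises this sampling procedure symbolically; the probability values are left as parameters since they are annealed during training and are not fixed here.

```python
import random

def sample_initial_block_layout(p_in_gripper, p_under_gripper, p_on_top):
    """Return a symbolic description of where each block starts an episode."""
    layout = []
    u = random.random()
    if u < p_in_gripper:
        layout.append("block_0_in_gripper")
    elif u < p_in_gripper + p_under_gripper:
        layout.append("block_0_under_gripper")
    else:
        layout.append("block_0_uniform_on_table")
    others = [1, 2, 3]
    random.shuffle(others)                      # remaining blocks placed in random order
    for b in others:
        if random.random() < p_on_top:
            layout.append(f"block_{b}_on_top_of_an_already_placed_block")
        else:
            layout.append(f"block_{b}_uniform_on_table")   # resampled until collision-free
    return layout
```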

D.2 Computational resources

We ran all experiments on 30 CPU cores. We used 28 CPU cores to train the goal-conditioned policy and 10 CPU cores to train the meta-controller. We trained the goal-conditioned policy for 2 days (counting both phases) and the meta-controller also for 2 days.

D.3 Hyper-parameters

We provide all hyper-parameters used in our experiments, for our method as well as for the baselines.

Atomic Programs
    DDPG actor hidden layers size: 256/256
    DDPG critic hidden layers size: 256/256
    discount factor: 0.99
    batch size (number of transitions): 256
    actor learning rate: –
    critic learning rate: –
    replay_k, ratio to resample transitions in HER: 4
    number of actors used in parallel: 28
    number of cycles per epoch: 40
    number of updates per cycle: 40
    replay buffer size (number of transitions): –
    number of epochs for phase 1: 100
    number of epochs for phase 2: 150

Behavioural Model
    hidden layers size: 512/512
    number of episodes collected to create dataset: 50000
    number of epochs: 500

AlphaNPI
    observation module hidden layer size: 128
    program embedding dimension: 256
    LSTM hidden state dimension: 128
    observation encoding dimension: 128
    discount factor to penalize long traces reward: 0.97
    number of simulations in the tree in exploration mode: 100
    number of simulations in the tree in exploitation mode: 5
    number of gradient descent steps per episode played: 2
    number of episodes at each iteration: 20
    number of episodes for validation: 10
    number of iterations: 700
    coefficient to encourage choice of higher level programs: 3.0
    number of actors used in parallel: 10
    coefficient to balance exploration/exploitation in MCTS: 0.5
    batch size (number of trajectories): 16
    maximum size of buffer memory (number of full trajectories): 100
    probability to draw positive reward experience in buffer: 0.5
    learning rate: –
    tree policies temperature coefficient: 1.3
    AlphaZero Dirichlet noise fraction: 0.25
    AlphaZero Dirichlet distribution parameter: 0.03

Table 5: Hyperparameters

Appendix E Additional Results

E.1 Results of Goal Policy Training

We train the goal-conditioned policy in two consecutive phases. In the first phase we sample the initial states and goal vectors randomly. During the second phase, we change the initial-state and goal distributions to ensure that the goal-conditioned policy still performs well when called sequentially by the meta-controller. For initial states, with a fixed probability we do not reset the environment between episodes, so that the initial state of some episodes is the final state reached in the previous episode. To simplify this process in a distributed setting, we store the final states of all episodes in a buffer and, at the start of each episode, sample with this probability a state from the buffer to be the initial state. Additionally, instead of sampling random goal vectors, we randomly select an atomic program and, given the initial state, compute the goal vector using the goal-setter. We study the impact of the reset probability as well as the usefulness of the second phase in Figure 9.
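
A condensed sketch of how a phase-two episode could be initialised under this scheme; env.set_state, the buffer handling and the name keep_state_prob are assumptions, not the authors' distributed implementation.

```python
def start_phase2_episode(env, final_state_buffer, keep_state_prob, goal_setters):
    """Phase-two episode initialisation: with probability `keep_state_prob` the episode
    starts from a previously reached final state (sampled from a shared buffer, as in
    the distributed setup); otherwise the environment is reset as usual. The goal comes
    from the goal-setter of a randomly drawn atomic program."""
    if final_state_buffer and random.random() < keep_state_prob:
        state = random.choice(final_state_buffer)
        env.set_state(state)                    # assumed helper restoring a simulator state
    else:
        state = env.reset()
    program_idx = random.randrange(len(goal_setters))
    goal = goal_setters[program_idx](state)
    return state, goal, program_idx
```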

Figure 9: Left: We compare the goal-conditioned policy's performance on executing the atomic programs stack and movetozone, as well as two multi-step sequential atomic program executions, after the completion of the second phase with different reset probabilities. Right: We compare the performance of the goal-conditioned policy after the first phase with its performance after the second phase. While the agent achieves good performance on atomic programs after the first phase alone, the second phase is required to be able to sequence them (as observed in performance on multi-step tasks). The reset probability also affects performance on multi-step tasks; we identify the value that gives the best trade-off between asymptotic training performance and the agent's ability to execute programs sequentially.

E.2 Results of self-behavioural model training

We represent the self-behavioural model with a 2-layer MLP that takes as input the initial environment state and the atomic program index and predicts the environment state after $T$ steps. Learning this model enables us to imagine the state of the environment following an atomic program execution, and hence to avoid further calls to atomic programs that would each have to perform many actions in the environment.

To train the model, we play $N$ episodes with the goal-conditioned policy in the environment on uniformly sampled atomic programs and record the initial and final environment states in a dataset. Then, we train the model to minimize its prediction error, computed as a mean squared error over this dataset. We study the impact of $N$ in Figure 10.
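
A possible training loop for the self-behavioural model, reusing the model class and the atomic-program execution helper sketched earlier; the optimiser, learning rate and batch size are assumptions, since they are not all specified in the text.

```python
from torch.utils.data import DataLoader, TensorDataset

def collect_self_model_dataset(env, goal_policy, goal_setters, num_episodes, horizon=50):
    """Record (initial state, program index, final state) triples by rolling the
    goal-conditioned policy on uniformly sampled atomic programs."""
    records = []
    for _ in range(num_episodes):
        state = env.reset()
        program_idx = random.randrange(len(goal_setters))
        final_state, _ = execute_atomic_program(env, state, program_idx, goal_policy, horizon)
        records.append((state, program_idx, final_state))
    return records

def train_self_model(model, records, epochs=500, lr=1e-3, batch_size=256):
    """Minimise the mean squared prediction error over the recorded dataset."""
    states = torch.as_tensor(np.stack([r[0] for r in records]), dtype=torch.float32)
    programs = torch.as_tensor([r[1] for r in records], dtype=torch.long)
    finals = torch.as_tensor(np.stack([r[2] for r in records]), dtype=torch.float32)
    loader = DataLoader(TensorDataset(states, programs, finals),
                        batch_size=batch_size, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for s, p, f in loader:
            loss = torch.nn.functional.mse_loss(model(s, p), f)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return model
```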


Figure 10: To train the self-behavioural model, we construct a dataset by recording initial and final states for $N$ episodes of the goal-conditioned policy's behaviour after training. At the beginning of each episode, an atomic program is chosen at random. In this figure, we study the impact of the number of episodes $N$.

E.3 Curriculum learning comparisons

We compare training the meta-controller with random program sampling against an automated curriculum based on a learning progress signal, detailed in section B.3. We observe in Figure 11 that both random sampling and the automated curriculum give rise to a natural hierarchy over programs: MoveAllToZone and StackAllToZone are learned first, and the traces generated by AlphaNPI-X after training confirm that these programs are used to execute the more complex programs CleanAndStack and CleanTable. Given the results shown in Figure 11, we cannot conclude that the learning-progress-based curriculum outperforms random sampling.
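
For illustration, here is one generic way to implement a learning-progress-based program sampler; it uses the absolute change in a windowed success rate as the progress signal, which is a common proxy and not necessarily the exact signal described in section B.3.

```python
class LearningProgressCurriculum:
    """Sample the next program to train on proportionally to the absolute change in a
    windowed success rate, a simple learning-progress proxy."""

    def __init__(self, num_programs, window=20, eps=0.1):
        self.history = [[] for _ in range(num_programs)]
        self.window = window
        self.eps = eps                        # floor so every program keeps being sampled

    def update(self, program_idx, success):
        self.history[program_idx].append(float(success))

    def sample(self):
        progress = []
        for h in self.history:
            if len(h) < 2 * self.window:
                progress.append(1.0)          # under-explored programs get high priority
            else:
                old = np.mean(h[-2 * self.window:-self.window])
                new = np.mean(h[-self.window:])
                progress.append(abs(new - old))
        probs = np.array(progress) + self.eps
        probs = probs / probs.sum()
        return int(np.random.choice(len(probs), p=probs))
```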


Figure 11: Evolution of AlphaNPI-X performance during training under two curriculum strategies. Solid lines correspond to an agent trained with randomly sampled programs, while dotted lines correspond to programs selected according to a learning-progress-based automatic curriculum.

E.4 Sample efficiency

We compare our method's sample efficiency with other works on the Fetch environment. For training the goal-conditioned policy, 28 DDPG actors sample 100 episodes per epoch in parallel, so 2800 episodes are sampled per epoch. In total, we train the agent for 250 epochs, resulting in $7\times10^5$ episodes. To train the self-behavioural model, we show that sampling $5\times10^4$ episodes is enough to perform well. Thus, in total we use only $7.5\times10^5$ episodes to master all tasks. Note that training the meta-controller does not require any interaction with the environment. In comparison, in the original HER paper [Andrychowicz et al., 2017], each task alone requires a large number of episodes to be mastered. In CURIOUS [Colas et al., 2018], training the agent likewise requires many episodes. Finally, in [Lanier et al., 2019], where the agent is trained only to perform the task Stack_All_Blocks, a large number of episodes had to be sampled.
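
These episode counts follow directly from the numbers above; a quick back-of-the-envelope check:

```python
episodes_per_epoch = 28 * 100                       # 28 DDPG actors, 100 episodes each per epoch
goal_policy_episodes = episodes_per_epoch * 250     # 100 + 150 epochs over the two phases
self_model_episodes = 50_000                        # dataset size for the self-behavioural model
total_episodes = goal_policy_episodes + self_model_episodes
print(episodes_per_epoch, goal_policy_episodes, total_episodes)   # 2800 700000 750000
```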