Deep reinforcement learning offers a promising framework for enabling agents to autonomously acquire complex skills, and has demonstrated impressive performance on continuous control problems [31, 50] and games such as Atari and Go. However, the ability to learn a variety of compositional, long-horizon skills while generalizing to novel concepts remains an open challenge. Long-horizon tasks demand sophisticated exploration strategies and structured reasoning, while generalization requires suitable representations. In this work, we consider the question: how can we leverage the compositional structure of language to enable agents to perform long-horizon tasks and systematically generalize to new goals?
To do so, we build upon the framework of hierarchical reinforcement learning (HRL), which offers a potential solution for learning long-horizon tasks by training a hierarchy of policies. However, the abstraction between these policies is critical for good performance. Hard-coded abstractions often lack modeling flexibility and are task-specific [57, 29, 24, 41], while learned abstractions often find degenerate solutions without careful tuning [4, 22]. One possible solution is for the higher-level policy to generate a sub-goal state and for the low-level policy to try to reach that goal state [36, 32]. However, using goal states still lacks some degree of flexibility (e.g. in comparison to goal regions or attributes), is challenging to scale naively to visual observations, and does not generalize systematically to new goals. In contrast to these prior approaches, language is a flexible representation for transferring a variety of ideas and intentions with minimal assumptions about the problem setting; its compositional nature makes it a powerful abstraction for representing combinatorial concepts and for transferring knowledge.
In this work, we propose to use language as the interface between high- and low-level policies in hierarchical RL. With a low-level policy that follows language instructions (Figure 1), the high-level policy can produce actions in the space of language, yielding a number of appealing benefits. First, the low-level policy can be re-used for different high-level objectives without retraining. Second, the high-level policies are human-interpretable, as the actions correspond to language instructions, making it easier to recognize and diagnose failures. Third, language abstractions can be viewed as a strict generalization of goal states, as an instruction can represent a region of states that satisfy some abstract criteria, rather than the entirety of an individual goal state. Finally, studies have suggested that humans use language as an abstraction for reasoning and planning [20, 43]; indeed, much of the knowledge and skill we acquire throughout our lives is transmitted through language.
While language is an appealing choice as the abstraction for hierarchical RL, training a low-level policy to follow language instructions is highly non-trivial [19, 5], as it involves learning from binary rewards that indicate completion of the instruction. To address this problem, we generalize prior work on goal relabeling to the space of language instructions, which operate on regions of state space rather than single states, allowing the agent to learn from many language instructions at once.
To empirically study the role of language abstractions for long-horizon tasks, we introduce a new environment inspired by the CLEVR engine that consists of procedurally-generated scenes of objects paired with programmatically-generated language descriptions. The low-level policy's objective is to manipulate the objects in the scene such that a description or statement is satisfied by the arrangement of objects in the scene. We find that our approach is able to learn a variety of vision-based long-horizon manipulation tasks such as object reconfiguration and sorting, while outperforming state-of-the-art RL and hierarchical RL approaches. Further, our experimental analysis finds that HRL with non-compositional abstractions struggles to learn the tasks, even when the non-compositional abstraction is derived from the language instructions themselves, demonstrating the critical role of compositionality in learning. Lastly, we find that our instruction-following agent is able to generalize to instructions that are systematically different from those seen during training.
In summary, the main contributions of our work are three-fold:
a framework for using language abstractions in HRL, with which we find that the structure and flexibility of language enable agents to solve challenging long-horizon control problems
an open-source continuous control environment for studying compositional, long-horizon tasks, integrated with language instructions inspired by the CLEVR engine 
empirical analysis that studies the role of compositionality in learning long-horizon tasks and achieving systematic generalization
2 Related Work
Designing, discovering, and learning meaningful and effective abstractions of MDPs has been studied extensively in hierarchical reinforcement learning (HRL) [15, 40, 57, 16, 4]. Classically, work on HRL has focused on learning only the high-level policy given a set of hand-engineered low-level policies [54, 33, 8], or, more generally, option policies with flexible termination conditions [57, 46].
Recent HRL works have begun to tackle more difficult control domains with both large state spaces and long planning horizons [24, 30, 59, 17, 36, 37]. These works can typically be categorized into two approaches. The first aims to learn effective low-level policies end-to-end directly from final task rewards with minimal human engineering, such as through the option-critic architecture [4, 22] or multi-task or meta learning [18, 52]. While appealing in theory, this end-to-end approach relies solely on final task rewards and has been shown to scale poorly to complex domains [4, 36], unless distributions of tasks are carefully designed. The second approach instead augments the low-level learning with auxiliary rewards that can provide a better inductive bias. These rewards include mutual-information-based diversity rewards [12, 17], hand-crafted rewards based on domain knowledge [29, 24, 30, 59], and goal-oriented rewards [15, 49, 3, 62, 36, 37]. Goal-oriented rewards have been shown to balance sufficient inductive bias for effective learning with minimal domain-specific engineering, and achieve performance gains on a range of domains [62, 36, 37]. Our work generalizes these lines of work by representing goal regions using language instructions, rather than individual goal states. Here, a region refers to a set of states (possibly disjoint and far apart from each other) that satisfy a more abstract criterion (e.g. "red ball in front of blue cube" can be satisfied by infinitely many states that are drastically different from each other in pixel space), rather than a simple $\epsilon$-ball around a single goal state, which serves only to make the goal a reachable set of non-zero measure. Further, our experiments demonstrate significant empirical gains over these prior approaches.
Since our low-level policy training is related to goal-conditioned HRL, we can benefit from algorithmic advances in multi-goal reinforcement learning [26, 49, 3, 44]. In particular, we extend the recently popularized goal relabeling strategy [26, 3] to instructions, allowing us to relabel based on achieving a language statement that describes a region of state space, rather than relabeling based on reaching an individual state.
Lastly, there are a number of prior works that study how language can guide or improve reinforcement learning [27, 5, 19, 11, 9]. While prior work has made use of language-based sub-goal policies in hierarchical RL [51, 13], the instruction representations used lack the diversity needed to benefit from the compositionality of language over one-hot goal representations. In concurrent work, Wu et al. show that language can help with learning difficult tasks where more naive goal representations lead to poor performance, even with hindsight goal relabeling. While we are also interested in using language to improve learning of challenging tasks, we focus on the use of language in the context of hierarchical RL, demonstrating that language can further be used to compose complex objectives for the agent. Andreas et al. leverage language descriptions to rapidly adapt to unseen environments through structured policy search in the space of language; each environment is described by one sentence. In contrast, we show that a high-level policy can effectively leverage the combinatorially many sub-policies induced by language by generating a sequence of instructions for the low-level agent. Further, we use language not only for adaptation but also for learning the lower-level control primitives, without the need for imitation learning from an expert. Another line of work focuses on RL for textual adventure games, where the state is represented as language descriptions and the actions are either the textual actions available at each state or all possible actions (even though not every action is applicable in every state). In general, these works consider text-based games with discrete, symbolic actions, while we consider continuous actions in physics-based environments. One may view such text-game agents as a high-level policy with oracular low-level policies specific to each state; the discrete nature of these games entails limited complexity of interaction with the environment.
3 Preliminaries
Standard reinforcement learning. The typical RL problem considers a Markov decision process (MDP) defined by the tuple $(\mathcal{S}, \mathcal{A}, p, r, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, the unknown transition probability $p(s_{t+1} \mid s_t, a_t)$ represents the probability density of reaching $s_{t+1}$ from $s_t$ by taking the action $a_t$, $\gamma \in [0, 1)$ is the discount factor, and the bounded real-valued function $r(s_t, a_t)$ represents the reward of each transition. We further denote $\rho_\pi(s_t)$ and $\rho_\pi(s_t, a_t)$ as the state marginal and the state-action marginal of the trajectory distribution induced by a policy $\pi(a_t \mid s_t)$. The objective of reinforcement learning is to find a policy $\pi^*$ such that the expected discounted future reward $\sum_t \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\left[\gamma^t r(s_t, a_t)\right]$ is maximized.
Goal-conditioned reinforcement learning. In goal-conditioned RL, we work with an augmented Markov decision process, defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{G}, p, r, \gamma, \rho_g)$. Most elements represent the same quantities as in a standard MDP. The additional element $\mathcal{G}$ is the space of all possible goals, and the reward function $r(s_t, a_t, g)$ represents the reward of each transition under a given goal $g \in \mathcal{G}$. Similarly, the policy $\pi(a_t \mid s_t, g)$ is now conditioned on the goal. Finally, $\rho_g$ represents a distribution over $\mathcal{G}$. The objective of goal-directed reinforcement learning is to find a policy $\pi^*$ such that the expected discounted future reward $\mathbb{E}_{g \sim \rho_g} \sum_t \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\left[\gamma^t r(s_t, a_t, g)\right]$ is maximized. While this objective can be expressed with a standard MDP by augmenting the state vector with a goal vector, the policy does not change the goal; making the distinction between goal and state explicit facilitates the discussion later.
Q-learning. Q-learning is a large class of off-policy reinforcement learning algorithms that focus on learning the Q-function, $Q^*(s_t, a_t)$, which represents the expected total discounted reward that can be obtained after taking action $a_t$ in state $s_t$, assuming the agent acts optimally thereafter. It can be recursively defined as:
$Q^*(s_t, a_t) = \mathbb{E}_{s_{t+1}}\left[ r(s_t, a_t) + \gamma \max_{a'} Q^*(s_{t+1}, a') \right]$
The optimal policy can then be recovered through $\pi^*(s) = \arg\max_a Q^*(s, a)$. In high-dimensional spaces, the Q-function is usually represented with function approximators and fit by minimizing the Bellman error on transition tuples $(s_t, a_t, s_{t+1}, r_t)$, which are stored in a replay buffer $\mathcal{B}$.
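As a concrete illustration of the recursive definition above, the following is a minimal tabular Q-learning sketch on a toy chain MDP; the environment and hyperparameters are illustrative and not from our experiments:

```python
import random

# Toy deterministic chain MDP: states 0..4, actions 0 (left) / 1 (right);
# reward 1.0 only on reaching the terminal state 4.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(s, a):
    s_next = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s_next, float(s_next == GOAL), s_next == GOAL

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = random.Random(0)

for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = rng.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda a_: Q[s][a_])
        s_next, r, done = step(s, a)
        # Bellman backup: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = r + (0.0 if done else gamma * max(Q[s_next]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s_next

greedy = [max(range(N_ACTIONS), key=lambda a_: Q[s][a_]) for s in range(GOAL)]
print(greedy)  # the greedy policy should move right in every non-terminal state
```

Deep Q-learning replaces the table with a neural network and samples the update batches from the replay buffer, but the backup has the same form.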
Hindsight experience replay (HER). HER is a data-augmentation technique for off-policy goal-conditioned reinforcement learning. For simplicity, assume that the goal is specified directly in the state space. A trajectory can be transformed into a sequence of goal-augmented transition tuples $(s_t, a_t, s_{t+1}, g, r_t)$. We can relabel each tuple's $g$ with $s_{t+1}$ or other future states visited in the trajectory, and adjust $r_t$ to the appropriate value. This makes the otherwise sparse reward signal much denser. The technique can also be seen as generating an implicit curriculum of increasing difficulty for the agent as it learns to interact with the environment more effectively.
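The relabeling step can be sketched as follows; the 1-D integer states, sparse indicator reward, and "final state" relabeling strategy here are illustrative simplifications (HER proposes several relabeling strategies):

```python
def reward_fn(s_next, g):
    # Sparse goal-reaching reward: 1 if the achieved state matches the goal.
    return 1.0 if s_next == g else 0.0

def her_relabel(trajectory, goal):
    """trajectory: list of (s, a, s_next) tuples collected under `goal`.
    Returns the original tuples plus copies relabeled with the final achieved state."""
    achieved = trajectory[-1][2]  # final state actually reached
    out = []
    for (s, a, s_next) in trajectory:
        out.append((s, a, s_next, goal, reward_fn(s_next, goal)))          # original goal
        out.append((s, a, s_next, achieved, reward_fn(s_next, achieved)))  # hindsight goal
    return out

# A failed trajectory (never reaches goal 9) still yields a reward-1 transition
# once relabeled with the state it actually reached.
traj = [(0, 1, 1), (1, 1, 2), (2, 1, 3)]
tuples = her_relabel(traj, goal=9)
print(sum(r for (*_, r) in tuples))  # 1.0: only the relabeled final transition earns reward
```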
4 Hierarchical Reinforcement Learning with Language Abstractions
In this section, we present our framework for training a 2-layer hierarchical policy with compositional language as the abstraction between the high-level policy and the low-level policy. We begin by formalizing the problem of solving temporally extended tasks with language, including our assumptions regarding the availability of supervision. We then discuss how to efficiently train the low-level policy $\pi_l$, conditioned on language instructions, in Section 4.2, and how a high-level policy $\pi_h$ can be trained using such a low-level policy in Section 4.3. We refer to this framework as Hierarchical Abstraction with Language (HAL, Figure 2, Appendix B.1).
4.1 Problem statement
We are interested in learning temporally extended tasks by leveraging the compositionality of language. Thus, in addition to the standard reinforcement learning assumptions laid out in Section 3, we also need some form of grounded language supervision in the environment during training. To this end, we assume access to a conditional distribution $\Omega(g \mid s)$ that maps a state $s$ to a distribution over language statements $g$ that describe $s$. This distribution can take the form of a supervised image-captioning model, a human supervisor, or a functional program that is executed on $s$, similar to CLEVR. Further, we define $\mathcal{G}$ to be the support of $\Omega$. Moreover, we assume access to a function $\Phi: \mathcal{S} \times \mathcal{G} \to \{0, 1\}$ that maps a state and an instruction to a single Boolean value indicating whether the instruction is satisfied by the state. Once again, $\Phi$ can be a VQA model, a human supervisor, or a program. Note that any goal specified in the state space can easily be expressed as a Boolean function of this form by checking whether two states are close to each other up to some threshold parameter. $\Phi$ can effectively act as the reward for the low-level policy.
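To make the instruction-satisfaction function concrete, here is a toy stand-in implemented as a program over a symbolic scene; the object names, the instruction format, and the single "left of" predicate are made up for illustration and are not the environment's actual interface:

```python
# Toy stand-in for the instruction-satisfaction function: a program that
# checks a spatial predicate against a symbolic scene.
def satisfied(scene, instruction):
    """scene: dict mapping object name -> (x, y); instruction: 'A left of B'."""
    subj, _, _, obj = instruction.split()
    return scene[subj][0] < scene[obj][0]

scene = {"red_ball": (0.2, 0.5), "blue_cube": (0.8, 0.5)}
print(satisfied(scene, "red_ball left of blue_cube"))  # True
print(satisfied(scene, "blue_cube left of red_ball"))  # False
```

Note that infinitely many scenes satisfy the same instruction, which is exactly why an instruction describes a region of state space rather than a single state.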
An example of a high-level task is arranging objects in the scene according to specific spatial relationships. This can mean putting the objects into a specific arrangement or ordering them according to their colors (Figure 3) by pushing the objects around (Figure 1). Details of these high-level tasks are described in Section 5. These tasks are complex but can be naturally decomposed into smaller sub-tasks, giving rise to a natural hierarchy and making them an ideal testbed for HRL algorithms. Problems of a similar nature include organizing a cluttered tabletop or stacking LEGO blocks to construct structures such as a castle or a bridge.
We train the low-level policy to solve the augmented MDP described in Section 3, where the goal space is the instruction set and the instruction-satisfaction function provides the reward. For simplicity, we assume that the instruction-generating distribution is uniform over its support. The low-level policy thus receives supervision by completing instructions. The high-level policy is trained to solve a standard MDP whose state space is the same as the low-level policy's and whose action space is the instruction set. In this case, the high-level policy's supervision comes from the reward function of the environment, which may be highly sparse.
We train the high-level and low-level policies separately, so the low-level policy is agnostic to the high-level policy. Since the policies share the same state space, the low-level policy can be reused for different high-level policies (Appendix B.3). Jointly fine-tuning the low-level policy with a specific high-level policy is a natural direction for future work (Appendix B.1).
4.2 Training a language-conditioned low-level policy
To train a goal-conditioned low-level policy, we need to define a suitable reward function for training such a policy and a mechanism for sampling language instructions. A straightforward reward for the low-level policy would be $r_l(s_t, g) = \Phi(s_t, g)$, where $\Phi$ is the Boolean instruction-satisfaction function described in Section 4.1, or, to ensure that the transition itself induces the reward:
$r_l(s_t, a_t, s_{t+1}, g) = \mathbb{1}\left[\Phi(s_{t+1}, g) \wedge \neg\Phi(s_t, g)\right]$
However, optimizing this reward directly is difficult because the signal is non-zero only when the goal is achieved. Unlike prior work (e.g. HER), which uses a state vector or a task-relevant part of the state vector as the goal, it is difficult to define a meaningful distance metric in the space of language statements [7, 47, 55] and, consequently, difficult to smooth the reward signal by assigning partial credit (unlike, e.g., the norm of the difference between two states). To overcome these difficulties, we propose a trajectory-relabeling technique for language instructions: instead of relabeling the trajectory with states reached, we relabel states in the trajectory with elements of the instruction set as the goal instruction. We refer to this procedure as hindsight instruction relabeling (HIR). The details of the relabeling strategy are given in Algorithm 4 in Appendix B.4; pseudocode for the method can be found in Algorithm 2 in Appendix B.2, and an illustration of the process in Figure 1.
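A minimal sketch of hindsight instruction relabeling follows. The strategy shown, relabeling each transition with every instruction newly satisfied at the successor state, is a simplification of the strategy detailed in Appendix B.4, and the threshold instructions and satisfaction function are toy stand-ins:

```python
def hir_relabel(trajectory, instruction_set, satisfied):
    """trajectory: list of (s, a, s_next); satisfied(s, g) -> bool.
    Relabels each transition with every instruction newly achieved at s_next."""
    relabeled = []
    for (s, a, s_next) in trajectory:
        for g in instruction_set:
            # Reward only transitions that newly satisfy g (g was not already true at s).
            if satisfied(s_next, g) and not satisfied(s, g):
                relabeled.append((s, a, s_next, g, 1.0))
    return relabeled

# Toy 1-D world: the state is the x-position of a single object;
# instructions are threshold predicates (illustrative, not our instruction set).
instructions = ["x>1", "x>2", "x>3"]
def sat(s, g):
    return s > int(g.split(">")[1])

traj = [(0, 1, 1.5), (1.5, 1, 2.5), (2.5, 1, 3.5)]
extra = hir_relabel(traj, instructions, sat)
print(extra)  # each transition earns the instruction it newly satisfies
```

Because a single transition can satisfy many instructions at once, one trajectory can produce many rewarded training tuples, which is what densifies the otherwise sparse signal.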
The proposed relabeling scheme, HIR, is reminiscent of HER . In HER, the goal is often the state or a simple function of the state, such as a masked representation. However, with high-dimensional observation spaces such as images, there is excessive information in the state that is irrelevant to the goal, while task-relevant information is not readily accessible. While one can use HER with generative models of images [48, 45, 38], the resulting representation of the state may not effectively capture the relevant aspects of the desired task. Language can be viewed as an alternative, highly-compressed representation of the state that explicitly exposes the task structure, e.g. the relation between objects. Thus, we can readily apply HIR to settings with image observations.
4.3 Acting in language with the high-level policy
We aim to learn a high-level policy for long-horizon tasks that can explore and act in the space of language by providing instructions to the low-level policy. The use of language abstractions allows the high-level policy to explore in a structured manner, with actions that are semantically meaningful and span multiple low-level actions.
In principle, the high-level policy $\pi_h$ can be trained with any reinforcement learning algorithm, given a suitable way to generate sentences as goals. However, generating coherent sequences of discrete tokens is generally difficult, and training such a generative model with existing reinforcement learning algorithms is unlikely to achieve favorable results. Fortunately, while the size of the instruction space scales exponentially with the size of the vocabulary, the elements of the instruction space are naturally structured and redundant: many elements correspond to effectively the same underlying instruction with different synonyms or grammar. While the low-level policy understands all the different instructions, in many cases the high-level policy only needs to generate instructions from a much smaller subset of the full instruction set $\mathcal{G}$ to direct the low-level policy. We denote such a subset as $\mathcal{G}'$.
If $\mathcal{G}'$ is relatively small, the problem can be recast as a discrete-action RL problem in which each action choice corresponds to an instruction, and can be solved with algorithms such as DQN. We adopt this simple approach in this work. We note that a high-level policy that integrates a recurrent or auto-regressive decoder [56, 14] could better capture the entire space of language instructions, a direction we leave for future work. As an instruction often represents a sequence of low-level actions, we execute the low-level policy for $k$ steps for every high-level instruction. $k$ can be a fixed number of steps, or it can be computed dynamically by a termination policy learned by the low-level agent, as in the options framework. We found a fixed $k$ to be sufficient in our experiments.
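The resulting control flow can be sketched as follows: the high-level Q-function selects one of the instructions in the discrete set, the low-level policy executes for $k$ steps under that instruction, and only then does the high level act again. All components below (the stub environment, the stub low-level policy, and the fixed Q-values) are toy placeholders, not our trained models:

```python
import random

K = 5  # low-level steps per high-level instruction (fixed in this sketch)
rng = random.Random(0)

def low_level_policy(state, instruction):
    # Stub: a language-conditioned policy would map (state, instruction) -> action.
    return rng.randrange(8)

def env_step(state, action):
    # Stub environment: the state is an opaque step counter here.
    return state + 1, 0.0, False

def hierarchical_rollout(q_values, instruction_set, state, horizon=4):
    """q_values(state) -> list of Q-values, one per instruction in instruction_set."""
    total_reward = 0.0
    for _ in range(horizon):
        # High-level action = argmax over the discrete instruction set.
        scores = q_values(state)
        g = instruction_set[max(range(len(scores)), key=scores.__getitem__)]
        for _ in range(K):  # the low level acts for K steps under instruction g
            a = low_level_policy(state, g)
            state, r, done = env_step(state, a)
            total_reward += r
            if done:
                return total_reward, state
    return total_reward, state

instructions = ["move red ball left of blue cube", "move green cube behind red ball"]
reward, final_state = hierarchical_rollout(lambda s: [0.1, 0.9], instructions, state=0)
print(final_state)  # horizon * K = 20 environment steps taken
```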
5 The Environment and Implementation
Environment. To empirically study how compositional language can aid long-horizon reasoning and generalization, we need an environment that tests the agent's ability to do so. While prior works have studied the use of language in navigation, instruction following in grid-worlds, and compositionality in question answering, we aim to develop a physical simulation environment where the agent must interact with and change the environment in order to accomplish long-horizon, compositional tasks. These criteria are particularly appealing for robotic applications, and, to the best of our knowledge, none of the existing benchmarks fulfills all of them simultaneously. To this end, we developed a new environment using the MuJoCo physics engine and the CLEVR language engine that tests an agent's ability to manipulate and rearrange objects of various shapes and colors. To succeed in this environment, the agent must be able to handle a varying number of objects with diverse visual and physical properties. Two versions of the environment, of varying complexity, are illustrated in Figures 3 and 1; further details are in Appendix A.1.
High-level tasks. We evaluate our framework on 6 challenging temporally extended tasks across two environments, all illustrated in Figure 3: (a) object arrangement: manipulate objects such that 10 pair-wise constraints are satisfied; (b) object ordering: order objects by color; (c) object sorting: arrange 4 objects around a central object; and, in a more diverse environment, (d) color ordering: order objects by color irrespective of shape; (e) shape ordering: order objects by shape irrespective of color; and (f) color & shape ordering: order objects by both shape and color. In all cases, the agent receives a binary reward only if all constraints are satisfied. Consequently, obtaining meaningful signal in these tasks is extremely challenging, as only a very small number of action sequences yield non-zero reward. For more details, see Appendix A.2.
Action and observation parameterization. The state-based observation is a vector representing the location of each object, and the corresponding action space consists of picking an object and pushing it in one of eight cardinal directions. The image-based observation is a rendering of the scene, and the corresponding action space consists of picking a location on a discretized grid and pushing in one of eight cardinal directions. For more details, see Appendix A.1.
The low-level policy encodes the instruction with a GRU and feeds the result, along with the state, into a neural network that predicts the Q-value of each action. The high-level policy is also a neural network Q-function. Both use Double DQN for training. The high-level policy uses sets of 80 and 240 instructions as the action space in the standard and diverse environments respectively, which sufficiently cover the relationships between objects. We roll out the low-level policy for $k$ steps for every high-level instruction. For details, see Appendix A.3.
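The shape of the low-level Q-function can be sketched as follows. To keep the sketch dependency-light, mean-pooled word embeddings stand in for the GRU encoder, and the vocabulary, embedding size, 2-D state, and single linear head are all illustrative choices rather than our actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = {w: i for i, w in enumerate("move the red blue ball cube left right of".split())}
EMB_DIM, N_ACTIONS = 8, 40

# Toy parameters; in our implementation the instruction is encoded with a GRU,
# here mean-pooled word embeddings stand in for that encoder.
embeddings = rng.normal(size=(len(VOCAB), EMB_DIM))
W = rng.normal(size=(EMB_DIM + 2, N_ACTIONS)) * 0.1  # state assumed 2-D here

def encode(instruction):
    ids = [VOCAB[w] for w in instruction.split()]
    return embeddings[ids].mean(axis=0)  # (EMB_DIM,) instruction encoding

def q_values(state, instruction):
    # Concatenate the state features with the instruction encoding, then a linear head.
    x = np.concatenate([encode(instruction), state])
    return x @ W  # (N_ACTIONS,): one Q-value per discrete push action

q = q_values(np.array([0.3, 0.7]), "move red ball left of blue cube")
print(q.shape)  # (40,)
```

Acting greedily then amounts to taking the argmax over the returned vector, exactly as in standard DQN, with the instruction held fixed for $k$ steps.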
6 Experiments
To evaluate our framework, and to study the role of compositionality in RL more generally, we design experiments to answer the following questions: (1) As a representation, how does language compare to alternative representations, such as those that are not explicitly compositional? (2) How well does the framework scale with the diversity of instructions and the dimensionality of the state (e.g. vision-based observations)? (3) Can the policy generalize in systematic ways by leveraging the structure of language? (4) Overall, how does our approach compare to state-of-the-art hierarchical RL approaches and to flat, homogeneous policies?
To this end, we first evaluate and analyze the training of effective low-level policies, which are critical for learning long-horizon tasks. Then, in Section 6.2, we evaluate the full HAL method on challenging temporally extended tasks. For details on the set-up and analysis, see Appendix C and D.
6.1 Low-level Policy
Role of compositionality and relabeling. We start by evaluating the fidelity of the low-level instruction-following policy, in isolation, with a variety of representations for the instruction. For these experiments, we use state-based observations. We start with a set of 600 instructions, which we paraphrase and substitute synonyms into to obtain more than 10,000 total instructions, allowing us to answer the first part of (2). We evaluate the performance of all low-level policies by the average number of instructions successfully achieved per episode (100 steps), measured over 100 episodes. To answer (1), and to evaluate the importance of compositionality, we compare to:
a one-hot encoding of each instruction, which discards the structure shared across instructions
a non-compositional representation with identical information content. To acquire this representation, we train a sequence autoencoder on the sentences, which achieves zero reconstruction error and is hence a lossless, non-compositional representation of the instructions (see Appendix C.2)
an ablation that trains without hindsight instruction relabeling (no HIR)
In the first comparison, we observe that while the one-hot encoded representation works on par with or better than language in the regime where the number of instructions is small, its performance quickly deteriorates as the number of instructions increases (Figure 4, middle). On the other hand, the language representation can leverage the structure shared by different instructions and does not suffer from an increasing number of instructions (Figure 4, right, blue); in fact, an improvement in performance is observed. This suggests, perhaps unsurprisingly, that one-hot representations and state-based relabeling scale poorly to large numbers of instructions, even when the underlying number of distinct instructions does not change, while, with instruction relabeling (HIR), the policy acquires better, more successful representations as the number of instructions increases.
In the second comparison, we observe that the agent is unable to make meaningful progress with the non-compositional representation, despite receiving the same amount of supervision as with language. This indicates that the compositionality of language is critical for effective learning. Finally, we find that relabeling is critical for good performance: without it (no HIR), the reward signal is significantly sparser.
Vision-based observations. To answer the second part of (2), we extend our framework to pixel observations. The agent reaches the same performance as the state-based model, albeit requiring a longer time to converge with the same hyperparameters. The one-hot representation, in contrast, reaches much worse relative performance with the same amount of experience (Figure 4, right).
Visual generalization. One of the most appealing aspects of language is the promise of combinatorial generalization, which allows for extrapolation rather than simple interpolation over the training set. To evaluate this (i.e. (3)), we design training and test instruction sets that are systematically distinct. We split the set of 600 instructions through the following procedure: (i) standard: a random 70/30 split of the instruction set; (ii) systematic: the training set consists only of instructions that do not contain the word red in the first half of the instruction, and the test set only of those that do. We emphasize that the agent has never seen the word red in the first part of a sentence during training; in other words, the task is zero-shot, as the training and test sets are disjoint (i.e. the distributions do not share support). From a purely statistical learning-theoretic perspective, the agent should do no better than chance on such a test set. Remarkably, we observe that the agent generalizes better with language than with the non-compositional representation (Table 1). This suggests that the agent recognizes the compositional structure of the language and achieves systematic generalization through such understanding.
Table 1: Train and test success rates, with the corresponding generalization gaps, for the standard and systematic instruction splits.
Figure 5: Results on the 3 standard high-level tasks (the same low-level policy is used for all 3 tasks). In all settings, HAL demonstrates faster learning than DDQN. Means and standard deviations of 3 random seeds are plotted.
6.2 High-level policy
Figure 6: Left: the low-level policy is first trained for a number of environment steps, so we start the x-axis there. Means and standard deviations of 3 seeds are plotted (near-zero variance for DDQN). Right: results of HRL on the 3 proposed diverse tasks (d)-(f); the low-level policy used is trained on image observations. 3 random seeds are plotted, and training has not converged.
Now that we have analyzed the low-level policy's performance, we evaluate the full HAL algorithm. To answer (4), we compare our framework in the state space against a non-hierarchical DDQN baseline and two representative hierarchical reinforcement learning frameworks, HIRO and Option-Critic (OC), on the proposed high-level tasks with sparse rewards (Section 5). We observe that neither HRL algorithm is able to learn a reasonable policy, while DDQN is able to solve only 2 of the 3 tasks, likely due to the sparse reward. HAL solves all 3 tasks consistently, with much lower variance and better asymptotic performance (Figure 5). We then show that our framework successfully transfers to high-dimensional observations (i.e. images) in all 3 tasks without loss of performance, whereas even the non-hierarchical DDQN fails to make progress. Finally, we apply the method to 3 additional diverse tasks (Figure 6). In the diverse setting, we observed that the high-level policy has difficulty learning from pixels alone, likely due to the visual diversity and the simplified high-level policy parameterization. As such, in the diverse setting the high-level policy receives state observations, while the low-level policy uses raw pixel observations. For more details, please refer to Appendix A.3.
7 Conclusion
We demonstrate that language abstractions can serve as an efficient, flexible, and human-interpretable representation for solving a variety of long-horizon control problems in an HRL framework. Through our proposed hindsight instruction relabeling and the inherent compositionality of language, we show that low-level, language-conditioned policies can be trained efficiently without hand-engineered reward shaping and with exceedingly large numbers of instructions, while exhibiting strong generalization. Our framework, HAL, can thereby leverage these policies to solve a range of difficult sparse-reward manipulation tasks with greater success and sample efficiency than training from scratch without language abstractions. Inspired by the CLEVR visual question answering dataset and built on the MuJoCo physics simulator, our benchmark environment also offers a novel platform for research at the intersection of language understanding and control.
While our method demonstrates promising results, one limitation is that it relies on instructions provided by a language supervisor that has access to instructions describing a scene. However, the language supervisor can, in principle, be replaced with an image-captioning model and a question-answering model so that the method can be deployed on real image observations for robotic control tasks, an exciting direction for future work. Another limitation is that the instruction set used is specific to our problem domain, providing a substantial amount of pre-defined structure to the agent. It remains an open question how to enable an agent to follow a much more diverse instruction set that is not specific to any particular domain, or to learn compositional abstractions without the supervision of language. Our experiments suggest that both would likely yield an HRL method that requires minimal domain-specific supervision while producing significant empirical gains over existing domain-agnostic works, indicating an exciting direction for future work. Overall, we believe this work represents a step towards RL agents that can effectively reason using compositional language to perform complex tasks, and we hope that our empirical analysis will inspire more research in compositionality at the intersection of language and reinforcement learning.
We would like to thank Jacob Andreas, Justin Fu, Sergio Guadarrama, Ofir Nachum, Vikash Kumar, Allan Zhou, Archit Sharma, and other colleagues at Google Research for helpful discussion and feedback on the draft of this work.
- Anderson et al.  Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments.
- Andreas et al.  Jacob Andreas, Dan Klein, and Sergey Levine. Learning with latent language, 2017.
- Andrychowicz et al.  Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pages 5048–5058, 2017.
- Bacon et al.  Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. In AAAI, pages 1726–1734, 2017.
- Bahdanau et al.  Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Pushmeet Kohli, and Edward Grefenstette. Learning to understand goal specifications by modelling reward, 2018.
- Battaglia et al.  Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
- Callison-Burch et al.  Chris Callison-Burch, Miles Osborne, and Philipp Koehn. Re-evaluating the role of bleu in machine translation research. In 11th Conference of the European Chapter of the Association for Computational Linguistics, 2006.
- Chentanez et al.  Nuttapong Chentanez, Andrew G Barto, and Satinder P Singh. Intrinsically motivated reinforcement learning. In Advances in neural information processing systems, pages 1281–1288, 2005.
- Chevalier-Boisvert et al.  Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. BabyAI: First steps towards grounded language learning with a human in the loop. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJeXCo0cYX.
- Cho et al.  Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches, 2014.
- Co-Reyes et al.  John D Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, John DeNero, Pieter Abbeel, and Sergey Levine. Meta-learning language-guided policy learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HkgSEnA5KQ.
- Daniel et al.  Christian Daniel, Gerhard Neumann, and Jan Peters. Hierarchical relative entropy policy search. In Artificial Intelligence and Statistics, pages 273–281, 2012.
- Das et al.  Abhishek Das, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Neural modular control for embodied question answering. arXiv preprint arXiv:1810.11181, 2018.
- Dauphin et al.  Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks, 2016.
- Dayan and Hinton  Peter Dayan and Geoffrey E Hinton. Feudal reinforcement learning. In Advances in neural information processing systems, pages 271–278, 1993.
- Dietterich  Thomas G Dietterich. Hierarchical reinforcement learning with the maxq value function decomposition. Journal of Artificial Intelligence Research, 13:227–303, 2000.
- Florensa et al.  Carlos Florensa, Yan Duan, and Pieter Abbeel. Stochastic neural networks for hierarchical reinforcement learning. arXiv preprint arXiv:1704.03012, 2017.
- Frans et al.  Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel, and John Schulman. Meta learning shared hierarchies. International Conference on Learning Representations (ICLR), 2018.
- Fu et al.  Justin Fu, Anoop Korattikara, Sergey Levine, and Sergio Guadarrama. From language to goals: Inverse reinforcement learning for vision-based instruction following. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=r1lq1hRqYQ.
- Gleitman and Papafragou  Lila Gleitman and Anna Papafragou. Language and thought. Cambridge handbook of thinking and reasoning, pages 633–661, 2005.
- Grice  H Paul Grice. Logic and conversation. pages 41–58, 1975.
- Harb et al.  Jean Harb, Pierre-Luc Bacon, Martin Klissarov, and Doina Precup. When waiting is not an option: Learning options with a deliberation cost. arXiv preprint arXiv:1709.04571, 2017.
- He et al.  Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. Deep reinforcement learning with a natural language action space. arXiv preprint arXiv:1511.04636, 2015.
- Heess et al.  Nicolas Heess, Greg Wayne, Yuval Tassa, Timothy Lillicrap, Martin Riedmiller, and David Silver. Learning and transfer of modulated locomotor controllers. arXiv preprint arXiv:1610.05182, 2016.
- Johnson et al.  Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2901–2910, 2017.
- Kaelbling  Leslie Pack Kaelbling. Hierarchical learning in stochastic domains: Preliminary results. In Proceedings of the tenth international conference on machine learning, volume 951, pages 167–173, 1993.
- Kaplan et al.  Russell Kaplan, Christopher Sauer, and Alexander Sosa. Beating atari with natural language guided reinforcement learning, 2017.
- Kingma and Ba  Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Konidaris and Barto  George Konidaris and Andrew G Barto. Building portable options: Skill transfer in reinforcement learning. In IJCAI, volume 7, pages 895–900, 2007.
- Kulkarni et al.  Tejas D Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In Advances in neural information processing systems, pages 3675–3683, 2016.
- Levine et al.  Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334–1373, 2016.
- Levy et al.  Andrew Levy, Robert Platt, and Kate Saenko. Hierarchical reinforcement learning with hindsight. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ryzECoAcY7.
- Mannor et al.  Shie Mannor, Ishai Menache, Amit Hoze, and Uri Klein. Dynamic abstraction in reinforcement learning via clustering. In Proceedings of the twenty-first international conference on Machine learning, page 71. ACM, 2004.
- Metz et al.  Luke Metz, Julian Ibarz, Navdeep Jaitly, and James Davidson. Discrete sequential prediction of continuous actions for deep rl. arXiv preprint arXiv:1705.05035, 2017.
- Mnih et al.  Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
- Nachum et al.  Ofir Nachum, Shixiang Gu, Honglak Lee, and Sergey Levine. Data-efficient hierarchical reinforcement learning, 2018.
- Nachum et al.  Ofir Nachum, Shixiang Gu, Honglak Lee, and Sergey Levine. Near-optimal representation learning for hierarchical reinforcement learning. International Conference on Learning Representations (ICLR), 2019.
- Nair et al.  Ashvin Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, and Sergey Levine. Visual reinforcement learning with imagined goals. CoRR, abs/1807.04742, 2018. URL http://arxiv.org/abs/1807.04742.
- Narasimhan et al.  Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941, 2015.
- Parr and Russell  Ronald Parr and Stuart J Russell. Reinforcement learning with hierarchies of machines. In Advances in neural information processing systems, pages 1043–1049, 1998.
- Peng et al.  Xue Bin Peng, Glen Berseth, KangKang Yin, and Michiel van de Panne. Deeploco: Dynamic locomotion skills using hierarchical deep reinforcement learning. ACM Transactions on Graphics (Proc. SIGGRAPH 2017), 36(4), 2017.
- Perez et al.  Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
- Piantadosi et al.  Steven T Piantadosi, Joshua B Tenenbaum, and Noah D Goodman. Bootstrapping in a language of thought: A formal model of numerical concept learning. Cognition, 123(2):199–217, 2012.
- Pong et al.  Vitchyr Pong, Shixiang Gu, Murtaza Dalal, and Sergey Levine. Temporal difference models: Model-free deep rl for model-based control, 2018.
- Pong et al.  Vitchyr H. Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl, and Sergey Levine. Skew-fit: State-covering self-supervised reinforcement learning. CoRR, abs/1903.03698, 2019. URL http://arxiv.org/abs/1903.03698.
- Precup  Doina Precup. Temporal abstraction in reinforcement learning. University of Massachusetts Amherst, 2000.
- Reiter  Ehud Reiter. A structured review of the validity of bleu. Computational Linguistics, 44(3):393–401, 2018.
- Sahni et al.  Himanshu Sahni, Toby Buckley, Pieter Abbeel, and Ilya Kuzovkin. Visual hindsight experience replay. CoRR, abs/1901.11529, 2019. URL http://arxiv.org/abs/1901.11529.
- Schaul et al.  Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International Conference on Machine Learning, pages 1312–1320, 2015.
- Schulman et al.  John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015.
- Shu et al.  Tianmin Shu, Caiming Xiong, and Richard Socher. Hierarchical and interpretable skill acquisition in multi-task reinforcement learning. arXiv preprint arXiv:1712.07294, 2017.
- Sigaud and Stulp  Olivier Sigaud and Freek Stulp. Policy search in continuous action domains: an overview. arXiv preprint arXiv:1803.04706, 2018.
- Silver et al.  David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
- Stolle and Precup  Martin Stolle and Doina Precup. Learning options in reinforcement learning. In International Symposium on abstraction, reformulation, and approximation, pages 212–223. Springer, 2002.
- Sulem et al.  Elior Sulem, Omri Abend, and Ari Rappoport. Bleu is not suitable for the evaluation of text simplification. arXiv preprint arXiv:1810.05995, 2018.
- Sutskever et al.  Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks, 2014.
- Sutton et al.  Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1-2):181–211, 1999.
- Tavakoli et al.  Arash Tavakoli, Fabio Pardo, and Petar Kormushev. Action branching architectures for deep reinforcement learning. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
- Tessler et al.  Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in minecraft. In AAAI, volume 3, page 6, 2017.
- Todorov et al.  Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE, 2012.
- van Hasselt et al.  Hado van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning, 2015.
- Vezhnevets et al.  Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. Feudal networks for hierarchical reinforcement learning. arXiv preprint arXiv:1703.01161, 2017.
- Wu et al.  Yuhuai Wu, Harris Chan, Jamie Kiros, Sanja Fidler, and Jimmy Ba. ACTRCE: Augmenting experience via teacher’s advice, 2019. URL https://openreview.net/forum?id=HyM8V2A9Km.
Appendix A Environment, Model Architectures, and Implementation
In this section, we describe the environment and tasks we designed, and various implementation details such as the architectures of the policy networks.
a.1 CLEVR-Robot Environment
We want an environment in which we can evaluate the agent's ability to learn policies for long-horizon tasks that have compositional structure. In robotics, manipulating and rearranging objects is a fundamental way for robots to interact with the environment, which is often cluttered and unstructured. Humans also tend to decompose complex tasks in such environments into smaller sub-tasks (e.g. putting together a complicated model or writing a program). While previous works have studied the use of language in navigation domains, we aim to develop an environment where the agent can physically interact with and change the environment. To that end, we designed a new environment for language and manipulation tasks in MuJoCo where the agent must interact with the objects in the scene. To succeed in this environment, the agent must be able to handle different numbers of objects with diverse visual and physical properties. Since our environment is inspired by the CLEVR dataset, we name it the CLEVR-Robot Environment.
We will refer to all the elements in the environment collectively as the world state. The environment can contain up to 5 objects. Each object is represented by a feature vector that contains the 3D coordinate of its center of mass and a one-hot representation of its 4 properties: color, shape, size, and material. The environment keeps an internal relation graph for all the objects currently in the scene. The relation graph is stored as an adjacency list whose i-th entry is a nested array storing object i's neighbors in the 4 cardinal directions: left, right, front, and behind. Object j is considered a neighbor of object i in a given direction if the two objects are sufficiently close and the angle between the displacement vector from i to j and the cardinal vector of that direction is smaller than a fixed threshold. After every interaction between the agent and the environment, the object states and the relation graph are updated to reflect the current world state.
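The neighbor criterion can be sketched as follows; the distance and angle thresholds and the cardinal-vector convention are illustrative placeholders, not the environment's actual constants.

```python
import math

# Cardinal unit vectors in the x-y plane (an assumed convention).
CARDINAL = {
    "left":   (-1.0, 0.0),
    "right":  (1.0, 0.0),
    "front":  (0.0, -1.0),
    "behind": (0.0, 1.0),
}

def is_neighbor(p_i, p_j, direction, max_dist=1.0, max_angle=math.pi / 4):
    """Return True if object j is a neighbor of object i in `direction`.

    p_i, p_j are (x, y) centers of mass; the thresholds are placeholders
    for the environment's (unspecified here) constants.
    """
    dx, dy = p_j[0] - p_i[0], p_j[1] - p_i[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > max_dist:
        return False
    cx, cy = CARDINAL[direction]
    cos_angle = (dx * cx + dy * cy) / dist
    cos_angle = max(-1.0, min(1.0, cos_angle))  # guard acos domain
    return math.acos(cos_angle) < max_angle
```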
The agent takes the form of a point mass that can teleport around, which is a mild assumption for standard robotic arms (other agents are possible as well). Before each interaction, the environment stores the set of language statements that are not satisfied by the current world state. These statements are re-evaluated after the interaction, and those whose values change to True during the interaction can be used as goals or instructions for relabeling the trajectory (cf. the pre- and post-conditions used in classical AI planning). Assuming the low-level policy only follows a single instruction at any given instant, the reward for every transition is 1 if the goal is achieved and 0 otherwise. The action space we use in this work consists of a point mass agent pushing one object in 1 of the 8 cardinal directions for a fixed number of frames, so the discrete action space has size 8N, where N is the number of objects.
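The pre/post-condition relabeling above can be sketched as below; `evaluate` is a hypothetical stand-in for executing a statement's functional program against a world state.

```python
def relabel_candidates(statements, evaluate, state_before, state_after):
    """Statements that flip from False to True during a transition become
    hindsight goals for that transition. `evaluate(stmt, state)` stands in
    for the language supervisor's functional-program execution."""
    # Statements unsatisfied before the interaction...
    unsatisfied = [s for s in statements if not evaluate(s, state_before)]
    # ...that became satisfied afterwards can relabel the trajectory.
    return [s for s in unsatisfied if evaluate(s, state_after)]
```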
The high-level policy's reward function can be tailored to the task of interest; we propose three difficult benchmark tasks with extremely sparse rewards.
a.2.1 Five Object Settings (Standard)
In this setting, we have a fixed set of 5 spheres of different colors: cyan, purple, green, blue, and red.
The first task we consider is object arrangement. We sample a random set of statements that can be simultaneously satisfied and, at every time step, the agent receives a reward of -10.0 if at least 1 statement is not satisfied and 0.0 only if all statements are satisfied. At the beginning of every episode, we reset the environment until none of the statements is satisfied. The exact arrangement constraints are: (1) red ball to the right of purple ball; (2) green ball to the right of red ball; (3) green ball to the right of cyan ball; (4) purple ball to the left of cyan ball; (5) cyan ball to the right of purple ball; (6) red ball in front of blue ball; (7) red ball to the left of green ball; (8) green ball in front of blue ball; (9) purple ball to the left of cyan ball; (10) blue ball behind the red ball.
The second task is object ordering. An example of such a task is "arrange the objects so that their colors range from red to blue in the horizontal direction, and keep the objects close vertically". In this case, the configuration can be specified with 4 pair-wise constraints between the objects. We reset the environment until at most 1 pair-wise constraint involving the x-coordinate and the y-coordinate is satisfied. At every time step, the agent receives a reward of -10.0 if at least 1 constraint is not satisfied and 0.0 only if all constraints are satisfied. The ordering of colors is: cyan, purple, green, blue, red from left to right.
The third task is object sorting. In this task, the agent needs to sort 4 objects around a central object; further, the 4 objects cannot be too far away from the central object. Once again, the agent receives a reward of -10.0 if at least 1 constraint is violated and 0.0 only if all constraints are satisfied, and the environment is reset until at most 1 constraint is satisfied. Images of the end goals for each high-level task are shown in Figure 3.
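The sparse high-level reward shared by these tasks can be sketched as follows; `evaluate` again stands in for the language supervisor's constraint evaluation.

```python
def task_reward(constraints, evaluate, state):
    """Sparse high-level reward: 0.0 only when every constraint holds in
    the current state, -10.0 otherwise. `evaluate(c, state)` is a stand-in
    for checking one arrangement/ordering/sorting constraint."""
    return 0.0 if all(evaluate(c, state) for c in constraints) else -10.0
```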
a.2.2 Diverse Object Settings
Here, instead of 5 fixed objects, we introduce 3 different shapes cube, sphere, and cylinder in combination with 5 colors. Both colors and shapes can repeat, but the same combination of color and shape does not repeat, so there are many possible object configurations. In this setting, we define the color hierarchy to be red, green, blue, cyan, purple from left to right and the shape hierarchy to be sphere, cube, cylinder from left to right. Sample goal states for each task are shown in Figure 3.
The first task is color ordering, where the agent needs to manipulate the objects such that their colors are in ascending order.
The second task is shape ordering, where the agent needs to manipulate the objects such that their shapes are in ascending order.
Finally, the last task is color & shape ordering, where the agent needs to manipulate the objects such that the colors are in ascending order and, within each color group, the shapes are also in ascending order.
As in the fixed-object setting, the agent only receives a reward of 0 when the objects are completely ordered; otherwise, the reward is always -10.
a.3 Implementation details
Language supervisor. In this work, each language statement generated by the environment is associated with a functional program that can be executed on the environment's relation graph to yield an answer reflecting the value of that statement in the current scene. The functional programs are built from simple elementary operations, such as querying the properties of objects in the scene, but they can represent a wide range of statements of different nature and can be efficiently executed on the relation graph. This scheme for generating language statements is reminiscent of the CLEVR dataset, whose code we drew on and modified for our use case. Note that a language statement that can be evaluated is equivalent to a question, and the instructions we use also take the form of questions. For simplicity and computational efficiency, we use a smaller subset of the question families defined in CLEVR that only involves pair-wise (one-hop) relationships between the objects. We plan to scale up to the full CLEVR scale and beyond in future work.
State-based low-level policy. When we have access to the ground-truth states of the objects in the scene, we use an object-centric representation of states, assuming the state is the set {o_1, …, o_N}, where o_i is the state representation of object i and N is the number of objects (which can change over time). We also assume the action space factorizes over objects, with each sub-action acting on an individual object.
We implement a specialized universal value function approximator for this case. To handle a variable number of relations between the different objects, and their changing properties, we built a goal-conditioned self-attention policy network. Given a set of objects {o_1, …, o_N}, we first create pair-wise concatenations of the objects, z_ij = (o_i, o_j). Then we transform every pair-wise vector with a single neural network into an embedding e_ij. A recurrent neural network with GRU cells embeds the instruction into a real-valued vector h. We use this embedding to attend over every pair of objects and compute a scalar weight per pair; a combination of all e_ij weighted by the softmax of these scalars collapses the pair embeddings into a single vector of fixed size. Each e_ij is concatenated with this pooled vector and h, and each resulting vector is transformed with another neural network whose outputs together represent all state-action values of the state. An illustration of the architecture is shown in Figure 7.
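A minimal sketch of the pair-wise attention step in plain Python: pair "embeddings" are simple concatenations here, and `score` plays the role of the learned, instruction-conditioned attention scorer — both are illustrative stand-ins for the networks described above.

```python
import itertools
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def pairwise_attention_pool(objects, instruction_emb, score):
    """Embed every ordered object pair, score each pair against the
    instruction embedding, and pool the pair features with softmax weights."""
    pairs = [oi + oj for oi, oj in itertools.permutations(objects, 2)]
    weights = softmax([score(p, instruction_emb) for p in pairs])
    dim = len(pairs[0])
    # Softmax-weighted sum collapses all pairs into one fixed-size vector.
    pooled = [sum(w * p[k] for w, p in zip(weights, pairs)) for k in range(dim)]
    return pairs, weights, pooled
```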
Image-based low-level policy. In reality, we often do not have access to a state representation of the scene; for many applications, a natural alternative is images. We assume the observation is an image of width W, height H, and C channels (in all experiments, W=64, H=64, C=3). Further, we need to adopt a more general action space, since we no longer have access to the state representation (e.g. the coordinates of each object). To that end, we discretize the 2D space into a grid, and an action involves picking a starting location out of the available grid cells and a direction out of the 8 cardinal directions to push. This induces a large discrete action space.
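One way to lay out such a discretized push action space is sketched below; the 10×10 grid is an assumption chosen so that the flat space has 800 entries, consistent with the 800-dimensional action space mentioned later in the appendix.

```python
def decode_push_action(index, grid=10, directions=8):
    """Decode a flat discrete action index into (grid_x, grid_y, direction).

    The grid resolution is an illustrative assumption; the induced flat
    action space has grid * grid * directions entries.
    """
    direction = index % directions          # which of the 8 cardinal pushes
    cell = index // directions              # which grid cell to start from
    return cell // grid, cell % grid, direction
```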
It is well known that reinforcement learning from raw image observations is difficult, and even off-policy methods require millions of interactions on Atari games, where the discrete action space is small. Increasing the action space understandably makes the already difficult exploration problem harder. A potential remedy lies in the fact that such high-dimensional action spaces can often be factorized into semantically meaningful groups (e.g. the pushing task can be broken down into discrete bins along the x and y axes plus a pushing direction). Previous works leveraged this observation by using auto-regressive models or by assuming conditional independence between the groups [34, 58]. We offer a new approach that aims to make the fewest assumptions. Following the group assumption, we assume the action space factorizes into groups, each consisting of a number of discrete sub-actions, so we can build a bijective look-up map between each action and a tuple of sub-actions.
We overload the notation so that each sub-action has an action feature conditioned on the state and goal, and each action corresponds to the tuple of its sub-actions' features. The value of each action is then computed by a single neural network, shared across all actions, applied to the concatenation of that action's sub-action features. This model does not require picking an order like an auto-regressive model and does not assume conditional independence between the groups. Most importantly, the number of parameters scales sublinearly with the dimension of the actions. The trade-off is that modeling the full joint distribution of actions at once can be memory-expensive; however, we found that this model performs well empirically for the pushing tasks considered in this work. We will refer to this operation as Tensor Concatenation.
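The tensor-concatenation idea can be sketched as follows, with `q_head` as a placeholder for the shared value head; a real implementation would batch this computation rather than enumerate actions in a Python loop.

```python
import itertools

def factored_q_values(group_features, q_head):
    """Compute a Q-value for every joint action in a factorized space.

    `group_features[g][i]` is the feature vector of sub-action i in group g
    (conditioned, in the real model, on state and goal). Each joint action is
    a tuple of one sub-action per group; its Q-value is a single shared head
    `q_head` applied to the concatenated sub-action features.
    """
    q = {}
    for combo in itertools.product(*[range(len(g)) for g in group_features]):
        feats = []
        for group_idx, sub_idx in enumerate(combo):
            feats.extend(group_features[group_idx][sub_idx])
        q[combo] = q_head(feats)
    return q
```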
The overall architecture for the UVFA is as follows: the image input is fed through 3 convolutional layers, and each convolution block is FiLM'd with the instruction embedding. The activations are then flattened spatially and projected to a feature vector, which is split into 3 action groups and fed through tensor concatenation. The shared value network is parameterized as a 2-layer MLP with 512 hidden units at each layer and an output dimension of 1, which is the Q-value.
Both policy networks are trained with HIR. Training mini-batches are uniformly sampled from the replay buffer. Each episode lasts for 100 steps. When the current instruction is accomplished, a new one that is currently not satisfied is sampled. To accelerate initial training and increase the diversity of instructions, we put a 10-step time limit on each instruction, so the policy does not get stuck if it is unable to finish the current instruction.
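The instruction-switching rule during data collection can be sketched as below; `sample_unsatisfied` is a stand-in for drawing a currently-unsatisfied instruction from the language supervisor.

```python
def maybe_resample_instruction(goal, steps_on_goal, achieved,
                               sample_unsatisfied, time_limit=10):
    """Resample the low-level goal when it is achieved or times out.

    Returns the (possibly new) goal and the step counter for that goal.
    The 10-step limit keeps the policy from getting stuck on an
    instruction it cannot finish.
    """
    if achieved or steps_on_goal >= time_limit:
        return sample_unsatisfied(), 0
    return goal, steps_on_goal
```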
High-level policy. For simplicity, we use a Double DQN to train the high-level policy. We use an instruction set consisting of 80 instructions (one set for the standard setting and one for the diverse setting) that sufficiently covers all relationships between the objects. We roll out the low-level policy for 5 steps for every high-level instruction. Training mini-batches are uniformly sampled from the replay buffer. The state-based high-level policy uses the same observation space as the low-level policy; the image-based high-level policy uses visual features extracted from the low-level policy and then extracts salient spatial points with a spatial softmax. The convolutional weights are frozen during training. This design choice is natural since humans also use a single visual cortex to process all initial visual signals, but training from scratch is certainly possible, if not ideal, should computation budget not matter. For the diverse visual tasks, we found that the convolutional features with spatial softmax did not yield a good representation for the downstream tasks. Due to time constraints, the experiments shown for the diverse high-level tasks use the ground-truth state, namely positions and one-hot encoded colors and shapes, for the high-level policy; the low-level policy, however, only has access to the image. We believe a learned convolutional layer would solve this problem. Finally, we note that the high-level policy picks each sentence independently and therefore does not leverage the structure of language. While generating language is generally hard, a generative model would bear a more natural resemblance to thought. We think this is an extremely important and exciting direction for future work.
Appendix B Algorithms
In this section we elaborate on our proposed algorithm and lay out finer details.
b.1 Overall algorithm
The overall hierarchical algorithm is as follows:
b.2 Training low-level policy
For both the state-based and vision-based experiments, we use DDQN as the low-level learner. For the state-based experiments, the agent receives a binary reward based on whether the action taken completes the instruction; for the vision-based experiments, we found it instrumental to add an object-movement bonus, i.e. if the agent changes the position of an object by some minimum threshold, it receives a 0.25 reward. This alleviates the exploration problem in the high-dimensional action space, though the algorithm is capable of learning without the exploration bonus (Algorithm 2). We adopt a similar setting to HER, where the unit of time consists of epochs, cycles, and episodes. Each cycle consists of 50 episodes and each episode consists of 100 steps. While we set the number of epochs to 50, in practice we never actually reach that point. We adopt an epsilon-greedy exploration strategy: every cycle we decrease epsilon by a factor of 0.993, starting from 1, but at the beginning we use 10 cycles to populate the buffer. The minimum epsilon is 0.1. We use a replay buffer of size 2e6. The target network is a 0.95 moving average of the model parameters, updated at the beginning of every cycle. Every episode, we update the network for 100 steps with the Adam optimizer, with mini-batches randomly sampled from the replay buffer.
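The per-cycle epsilon-greedy schedule described above (10 warm-up cycles, 0.993 multiplicative decay, 0.1 floor) can be written as:

```python
def epsilon_at_cycle(cycle, warmup_cycles=10, decay=0.993, eps_min=0.1):
    """Fully random exploration during warm-up cycles, then multiplicative
    decay of epsilon with a fixed floor."""
    if cycle < warmup_cycles:
        return 1.0
    return max(eps_min, decay ** (cycle - warmup_cycles))
```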
b.3 Training high-level policy
The high-level policy is also a DDQN. One set of hyperparameters is used for the standard experiments and another for the diverse experiments. The DDQN is trained for 2e6 steps with a uniform replay buffer of size 1e5, epsilon linearly annealed from 1 to 0.05 over 3e5 steps, the Adam optimizer, and batch size 256 for all our experiments, though the experience the network sees is equivalent to 1 step. For the diverse settings, due to time constraints, we use a prioritized replay buffer of size 1e6 for all diverse experiments, including the DDQN baselines (Algorithm 3). We use the Adam optimizer with an initial learning rate of 0.0001. Learning starts after 100 episodes, each of which lasts for 100 steps / 100 high-level actions.
b.4 Relabeling Strategy
HER demonstrated that the relabeling strategy for trajectories can have a significant impact on the performance of the policy. The most successful relabeling strategy is the "k-future" strategy, where the goal state and the reward are relabeled with states in the trajectory that are reached after the current time step, and the reward is discounted based on the discount factor and how many steps away the future state is. We modify this strategy for relabeling a language-conditioned policy. One challenge with language instructions is that the notion of distance is not well defined, as an instruction is under-determined and only captures part of the information about the actual state. As such, conventional metrics for describing distance between sequences of tokens (e.g. edit distance) do not capture the information we are interested in. Instead, we adopt a more "greedy" approach to relabeling, putting more focus on 1-step transitions where the instruction is actually fulfilled: we store all such transition tuples in the replay buffer (Algorithm 2). For future relabeling, we simply use the reward discounted by the number of time steps into the future to relabel the trajectory. While this discounted reward does not usually equal the "optimal" or true discounted reward, we found it to provide sufficient learning signal. Detailed steps are shown below (Algorithm 4).
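A sketch of this greedy future relabeling, assuming a per-step record of which instructions became satisfied; the discount value and the tuple layout are illustrative, not the paper's exact specification.

```python
def future_relabel(trajectory, achieved_at, gamma=0.9):
    """Relabel each transition with instructions fulfilled k steps later,
    using the time-discounted reward gamma**k as an approximate signal.

    trajectory: list of (state, action, next_state) tuples.
    achieved_at: dict mapping step index -> instructions satisfied there.
    The k = 0 case is the 1-step transition that fulfills the instruction.
    """
    relabeled = []
    for t, (state, action, next_state) in enumerate(trajectory):
        for future_t in range(t, len(trajectory)):
            for goal in achieved_at.get(future_t, []):
                k = future_t - t  # steps until the instruction is fulfilled
                relabeled.append((state, action, goal, gamma ** k, next_state))
    return relabeled
```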
In addition to relabeling, if an object is moved (using the 800-dimensional action space), we add to the replay buffer a transition where the instruction is the name of the object, such as "large rubber red ball", paired with a positive reward. We found this helps the agent learn the concept of objects. We refer to this operation as Unary Relabeling.
Appendix C Experimental Details
c.1 One-hot encoded representation
We assign each instruction a varying number of bins in the one-hot vector. Concretely, we give each of the 600 instructions 1, 4, 10, or 20 bins in the one-hot vector, which means the effective size of the one-hot vector is 600, 2400, 6000, or 12000, respectively. When sampling goals, each goal is uniformly dropped into one of its corresponding bins.
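The bin assignment can be sketched as follows, assuming instruction i owns the contiguous block of indices [i*b, (i+1)*b) in the enlarged one-hot vector; the contiguous layout is an assumption for illustration.

```python
import random

def onehot_goal_index(instruction_id, bins_per_instruction, rng=random):
    """Map an instruction to a uniformly sampled index among its bins.

    With b bins per instruction, instruction i owns indices
    [i*b, (i+1)*b), so with 600 instructions the effective one-hot
    size is 600*b.
    """
    base = instruction_id * bins_per_instruction
    return base + rng.randrange(bins_per_instruction)
```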
c.2 Non-compositional representation
To faithfully evaluate the importance of compositionality, we want a representation that carries the same information as the language instruction but without the explicit compositional property (though it may still be compositional to some degree). To this end, we use a Seq2Seq autoencoder with 64 hidden units to compress the 600 instructions into real-valued continuous vectors. The original tokens are fully recovered, which indicates that the compression is lossless and the latent's information content is the same as that of the original instruction. This embedding is used in place of the GRU instruction embedding. We also observed that adding regularization to the autoencoder decreases the performance of the resulting representation: decreasing the bottleneck size leads to worse performance, and so does adding dropout. Figure 4 uses an autoencoder with a dropout rate of 0.5 while Figure 1 uses one with no dropout; the performance without dropout is better. We hypothesize that adding regularization decreases the compositionality of the representation.
C.3 Non-hierarchical baseline
We use the Double DQN implementation from OpenAI baselines (https://github.com/openai/baselines/tree/master/baselines/deepq). The policy is a 2-layer MLP with 512 hidden units per layer, with the environment's action dimension as the output size.
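The Q-network head has the following shape. This is an illustrative pure-Python sketch of a 2-layer, 512-unit MLP, not the baselines code; the observation and action dimensions used here are placeholders.

```python
import math
import random

HIDDEN = 512  # hidden units per layer, as in the DDQN baseline


def init_layer(n_in, n_out, rng):
    """Uniform fan-in initialization for one linear layer."""
    scale = 1.0 / math.sqrt(n_in)
    w = [[rng.uniform(-scale, scale) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b


def linear(x, layer):
    w, b = layer
    return [sum(wi * xi for wi, xi in zip(row, x)) + bj for row, bj in zip(w, b)]


def relu(x):
    return [max(0.0, v) for v in x]


def q_network(obs, params):
    """2 hidden ReLU layers of 512 units, one Q-value per action."""
    l1, l2, out = params
    h1 = relu(linear(obs, l1))
    h2 = relu(linear(h1, l2))
    return linear(h2, out)
```

In Double DQN, the argmax over these Q-values from the online network selects the action, while the target network evaluates it.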
C.4 HRL baselines
In general, we note that it is difficult to compare different HRL algorithms in an apples-to-apples manner.
HIRO assumes a continuous goal space, so we modified the goal to be a vector representing the locations of each object rather than using language. In this regime, we observed that HIRO was unable to make good progress. We hypothesize that the highly sparse reward might be the culprit. It is also worth noting that HIRO uses a low-dimensional goal space for navigation (which is by itself a choice of abstraction, because the actual agent state space is much higher-dimensional), while ours is of higher dimensionality. The TensorFlow implementation of HIRO we use can be found at https://github.com/tensorflow/models/tree/master/research/efficient-hrl (this is the implementation from the original authors).
Option-critic (OC) aims to learn everything in a completely end-to-end manner, which means it does not use the supervision from language; it is therefore perhaps not surprising that the sparse tasks do not provide sufficient signal for OC. In other words, our method enjoys the benefit of a flexible but fixed abstraction, while OC needs to learn such an abstraction. We tried 8, 16, and 32 options for OC, whereas our method has many more sub-policies due to the combinatorial nature of language. The TensorFlow OC implementation we used can be found at https://github.com/yadrimz/option-critic.
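The combinatorial gap between a fixed option set and language can be illustrated with a toy count. The attribute vocabularies below are hypothetical (the actual environment's vocabulary may differ); the point is only that a few attribute sets compose into far more distinct object references than 8-32 options.

```python
from itertools import product

# Hypothetical attribute sets, ordered as in phrases like
# "large rubber red ball" (size, material, color, shape).
sizes = ["small", "large"]
materials = ["rubber", "metal"]
colors = ["red", "green", "blue", "purple", "cyan"]
shapes = ["ball", "cube", "cylinder"]

# Every combination of attributes names a distinct object, and hence a
# distinct instruction the low-level policy can be conditioned on.
object_phrases = [" ".join(p) for p in product(sizes, materials, colors, shapes)]
```

Even this small vocabulary yields 2 * 2 * 5 * 3 = 60 distinct object references, already double the largest option set we tried for OC, and composing verbs and spatial relations on top multiplies this further.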
C.5 Hardware Specs and Training Time
All our experiments are performed on a single Nvidia Tesla V100. We are unable to verify the specs of the virtual CPU. The low-level policy takes about 2 days to train for state-based observations and 6 days for image-based observations. The high-level policies take about 2 days to train for state-based observations and 3 days for image-based observations (wall-clock time). The implementations are not deliberately optimized for performance (the major bottleneck is actually the language supervisor), so it is very likely the training time could be dramatically shortened.
Appendix D More Experimental Results
D.1 Low-level policy for diverse environment
Figure 9 shows the training instructions completed per episode on the diverse environment. We see that the performance is worse than with a fixed number of objects given the same amount of experience. This is perhaps not surprising, considering the visual tasks are much more diverse and hence more challenging.
D.2 Why is the proposed environment difficult?
While DDQN worked on 2 cases in the state-based environment, it is unable to solve any of the problems in the visual domain. We hypothesize that the pixel observations and the increase in action space ( increase) make exploration difficult for DDQN. The difficulty of the tasks, in particular the 3 standard tasks, is reflected in the fact that the reward from non-hierarchical random actions is consistently 0 with small variance, meaning that under the sparse reward setting the agent rarely visits the goal state. On the other hand, the random-exploration reward is much higher for our method, because exploration in the space of language is structured and aware of the task-relevant structure of the environment.