Dynamics-Aware Unsupervised Discovery of Skills

07/02/2019, by Archit Sharma, et al., Google

Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment. A good model can potentially enable planning algorithms to generate a large variety of behaviors and solve diverse tasks. However, learning an accurate model for complex dynamical systems is difficult, and even then, the model might not generalize well outside the distribution of states on which it was trained. In this work, we combine model-based learning with model-free learning of primitives that make model-based planning easy. To that end, we aim to answer the question: how can we discover skills whose outcomes are easy to predict? We propose an unsupervised learning algorithm, Dynamics-Aware Discovery of Skills (DADS), which simultaneously discovers predictable behaviors and learns their dynamics. Our method can leverage continuous skill spaces, theoretically, allowing us to learn infinitely many behaviors even for high-dimensional state-spaces. We demonstrate that zero-shot planning in the learned latent space significantly outperforms standard MBRL and model-free goal-conditioned RL, can handle sparse-reward tasks, and substantially improves over prior hierarchical RL methods for unsupervised skill discovery.


1 Introduction

Deep reinforcement learning (RL) enables autonomous learning of diverse and complex tasks with rich sensory inputs, temporally extended goals, and challenging dynamics, such as discrete game-playing domains (Mnih et al., 2013; Silver et al., 2016), and continuous control domains including locomotion (Schulman et al., 2015; Heess et al., 2017) and manipulation (Rajeswaran et al., 2017; Kalashnikov et al., 2018; Gu et al., 2017). Most deep RL approaches learn a Q-function or a policy that is directly optimized for the training task, which limits their generalization to new scenarios. In contrast, MBRL methods (Li and Todorov, 2004; Deisenroth and Rasmussen, 2011; Watter et al., 2015) can acquire dynamics models that may be utilized to perform unseen tasks at test time. While this capability has been demonstrated in some recent works (Levine et al., 2016; Nagabandi et al., 2018; Chua et al., 2018b; Kurutach et al., 2018; Ha and Schmidhuber, 2018), learning an accurate global model that works for all state-action pairs can be exceedingly challenging, especially for high-dimensional systems with complex and discontinuous dynamics. The problem is further exacerbated because the learned global model has limited generalization outside the state distribution it was trained on, and exploring the whole state space is generally infeasible. Can we retain the flexibility of model-based RL, while using model-free RL to acquire proficient low-level behaviors under complex dynamics?

While learning a global dynamics model that captures all the different behaviors for the entire state-space can be extremely challenging, learning a model for a specific behavior that acts only in a small part of the state-space can be much easier. For example, consider learning a model for dynamics of all gaits of a quadruped versus a model which only works for a specific gait. If we can learn many such behaviors and their corresponding dynamics, we can leverage model-predictive control to plan in the behavior space, as opposed to planning in the action space. The question then becomes: how do we acquire such behaviors, considering that behaviors could be random and unpredictable? To this end, we propose Dynamics-Aware Discovery of Skills (DADS), an unsupervised RL framework for learning low-level skills using model-free RL with the explicit aim of making model-based control easy. Skills obtained using DADS are directly optimized for predictability, providing a better representation on top of which predictive models can be learned. Crucially, the skills do not require any supervision to learn, and are acquired entirely through autonomous exploration. This means that the repertoire of skills and their predictive model are learned before the agent has been tasked with any goal or reward function. When a task is provided at test-time, the agent utilizes the previously learned skills and model to immediately perform the task without any further training.

The key contribution of our work is an unsupervised reinforcement learning algorithm, DADS, grounded in mutual-information-based exploration. We demonstrate that our objective can embed learned primitives in continuous spaces, which allows us to learn a large, diverse set of skills. Crucially, our algorithm also learns to model the dynamics of the skills, which enables the use of model-based planning algorithms for downstream tasks. We adapt the conventional model predictive control algorithms to plan in the space of primitives, and demonstrate that we can compose the learned primitives to solve downstream tasks without any additional training.

2 Preliminaries

Mutual information has been used as an objective to encourage exploration in reinforcement learning (Houthooft et al., 2016; Mohamed and Rezende, 2015). By its definition, I(x; y) = H(x) - H(x | y), optimizing mutual information with respect to y amounts to maximizing the entropy of x while minimizing the conditional entropy of x given y. If x is a function of the state and y represents actions, this objective encourages the state entropy to be high, causing the underlying policy to be exploratory. Recently, multiple works (Eysenbach et al., 2018; Gregor et al., 2016; Achiam et al., 2018) apply this idea to learn diverse skills which maximally cover the state space.

To leverage planning-based control, MBRL estimates the true dynamics of the environment by learning a model p(s' | s, a). This allows it to predict a trajectory of states resulting from a sequence of actions without any additional interaction with the environment. A similar simulation of the trajectory can be carried out using a model parameterized as q(s' | s, z), where z denotes the skill that is being executed. This modification to MBRL not only mandates the existence of a policy executing the actual actions in the environment, but more importantly, requires the policy to execute these actions in a way that maintains predictability under q(s' | s, z). In this setup, skills are effectively an abstraction for the actions that are executed in the environment. This scheme forgoes the much harder task of learning a global model p(s' | s, a) in exchange for a collection of potentially simpler models of behavior-specific dynamics. In addition, the planning problem becomes easier, as the planner searches over a skill space whose elements act on longer horizons than granular actions.
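
To make the contrast concrete, here is a minimal sketch (ours, not from the paper) of the two kinds of rollouts: a standard MBRL rollout that needs one action per step from a global model, versus a rollout under a skill-conditioned model where a single latent skill is held over many steps. The `dynamics` and `skill_dynamics` callables are hypothetical placeholders for learned models.

```python
def rollout_action_model(dynamics, state, actions):
    """Simulate a trajectory with a global model p(s'|s, a).

    `dynamics(state, action)` is a hypothetical callable returning the
    predicted next state; the planner must supply one action per step."""
    states = [state]
    for a in actions:
        state = dynamics(state, a)
        states.append(state)
    return states


def rollout_skill_model(skill_dynamics, state, skills, hold_steps):
    """Simulate a trajectory with skill-dynamics q(s'|s, z).

    Each latent skill z is held for `hold_steps` steps, so a short sequence
    of skills covers a long horizon without any low-level actions."""
    states = [state]
    for z in skills:
        for _ in range(hold_steps):
            state = skill_dynamics(state, z)
            states.append(state)
    return states
```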

These seemingly unrelated ideas can be combined into a single optimization scheme, where we first discover skills (and their models) without any extrinsic reward and then compose these skills using model-based planning to optimize for the task defined at test time. At train time, we assume a reward-free Markov decision process (MDP) M1 = (S, A, p). The state space S and action space A are assumed to be continuous, with A bounded. We assume the transition dynamics p to be stochastic, such that s' ~ p(· | s, a). We learn a skill-conditioned policy π(a | s, z), where the skill z belongs to the space Z, detailed in Section 3. We assume that skills are sampled from a prior p(z) over Z. We simultaneously learn a skill-conditioned transition function q(s' | s, z), coined skill-dynamics, which predicts the transition to the next state s' from the current state s for the skill z under the given dynamics p. At test time, we assume an MDP M2 = (S, A, p, r), where S, A, p match those defined in M1, and r is the reward function for the test-time task. We plan in Z using q(s' | s, z) to compose the learned skills for optimizing r in M2, which we detail in Section 4.

3 Dynamics-Aware Discovery of Skills (DADS)

Initialize the policy π and skill-dynamics q;
while not converged do
        Sample a skill z ~ p(z) every episode;
        Collect new on-policy samples (s, z, a, s');
        Update q using multiple steps of gradient descent on the collected transitions;
        Compute the intrinsic reward r_z(s, a, s') for the collected transitions;
        Update π using any RL algorithm;
end while
Algorithm 1 Dynamics-Aware Discovery of Skills (DADS)
Figure 2: The agent π interacts with the environment to produce a transition (s, a, s'). The intrinsic reward is computed by evaluating the transition probability under q(s' | s, z) for the current skill z, compared against that under random skills sampled from the prior p(z). The agent π maximizes the intrinsic reward computed for a batch of episodes, while q maximizes the log-probability of the actual transitions observed in the environment.

We now establish a connection between mutual-information-based exploration and model-based RL by deriving an intrinsic reward that reflects predictability under skill-dynamics. For an episodic setting with horizon T, we aim to maximize:

(1)

for an arbitrary constant upper bound. The proposed objective encodes the intuition that every skill should be maximally informative about the resulting sequence of states in the MDP, while being minimally informative about the sequence of actions used. For clarity of discussion, we defer a more rigorous justification for this information-bottleneck-style (Tishby et al., 2000; Alemi et al., 2016) objective to Appendix B. We simplify Eq. 1:

(2)
(3)

by using the chain rule of mutual information to obtain Eq. 2, and the Markovian assumption on the dynamics to obtain Eq. 3. Returning to Eq. 1, we obtain our objective as:

(4)

where we formulate the dual objective using a Lagrange multiplier (and ignore the additive constant). Using the definition of mutual information, the resulting objective is given by:

(5)
(6)
(7)

where the expectations are taken under the stationary state-action distribution induced by the skill z. For Eq. 6, we use the non-negativity of the KL divergence to replace the marginal distribution over the policy's actions with the uniform prior over the bounded action space. Similarly, we again use the non-negativity of the KL divergence to introduce skill-dynamics as a parametric variational approximation q(s' | s, z) to the intractable transition distribution. Ignoring the resulting constant, we get our objective in Eq. 7.

Maximizing this objective immediately suggests an alternating optimization scheme between the policy π and skill-dynamics q, summarized in Figure 2 and Algorithm 1. Note that the gradient for skill-dynamics can be expressed as:

(8)    ∇ E_{(s, z, s')} [ log q(s' | s, z) ]

which is simply maximizing the likelihood of the transitions generated by the current policy.
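
Read as a loss, Eq. 8 is just maximum likelihood on the collected transitions. Below is a minimal sketch of that loss, assuming a hypothetical `predict_fn` that maps (state, skill) batches to the mean and log-standard-deviation of a diagonal Gaussian over the next state (or state delta); this is an illustration, not the paper's TF-Agents implementation.

```python
import numpy as np


def gaussian_log_prob(mean, log_std, target):
    """Log-density of a diagonal Gaussian, summed over the state dimensions."""
    var = np.exp(2.0 * log_std)
    return np.sum(
        -0.5 * ((target - mean) ** 2 / var + 2.0 * log_std + np.log(2.0 * np.pi)),
        axis=-1)


def skill_dynamics_loss(predict_fn, states, skills, next_states):
    """Negative log-likelihood of observed transitions under q(s'|s, z).

    `predict_fn(states, skills)` is a hypothetical model head returning the
    mean and log-std of the predicted next-state distribution; minimizing this
    loss with any gradient-based optimizer corresponds to the update in Eq. 8."""
    mean, log_std = predict_fn(states, skills)
    return -np.mean(gaussian_log_prob(mean, log_std, next_states))
```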

The optimization of the policy can be interpreted as entropy-regularized RL with a reward function r_z(s, a, s') = log q(s' | s, z) - log p(s' | s). Unfortunately, the marginal p(s' | s) is intractable to compute, so we need to resort to approximations. We choose to re-use the skill-dynamics model to approximate it as p(s' | s) ≈ (1/L) Σ_{i=1}^{L} q(s' | s, z_i), where each z_i is sampled from the prior p(z). The final reward function can be written as:

(9)    r_z(s, a, s') = log [ q(s' | s, z) / ( (1/L) Σ_{i=1}^{L} q(s' | s, z_i) ) ],    z_i ~ p(z)

In practice, we often include the current skill z amongst the samples z_i in the denominator of Eq. 9, to obtain a softmax-like construction that provides a smoother optimization landscape. For the actual algorithm, we collect a large on-policy batch of data in every iteration, so that it contains experience collected from different skills. In order to take multiple gradient steps on the same batch of data, we use soft actor-critic (Haarnoja et al., 2018a, b) as the optimization algorithm for the policy (although our method is agnostic to the choice of RL algorithm used to update the policy). The exact implementation details are discussed in Appendix A.
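
As a concrete illustration, the following sketch (our own, assuming hypothetical callables for the learned skill-dynamics log-density and the skill prior, not the released implementation) computes the intrinsic reward of Eq. 9 for a batch of transitions:

```python
import numpy as np


def intrinsic_reward(log_q_fn, states, skills, next_states,
                     prior_sample_fn, num_prior_samples=100):
    """Approximate r_z = log q(s'|s,z) - log[(1/L) sum_i q(s'|s,z_i)].

    `log_q_fn(states, skills, next_states)` returns per-transition log q(s'|s,z);
    `prior_sample_fn(batch_size)` draws skills from the prior p(z). Both are
    stand-ins for the learned skill-dynamics and the skill prior."""
    log_q_current = log_q_fn(states, skills, next_states)        # shape [B]
    alt_log_qs = []
    for _ in range(num_prior_samples):
        z_alt = prior_sample_fn(states.shape[0])                 # shape [B, skill_dim]
        alt_log_qs.append(log_q_fn(states, z_alt, next_states))
    alt_log_qs = np.stack(alt_log_qs, axis=0)                    # shape [L, B]
    # Log of the averaged density, computed stably in log-space.
    log_marginal = np.logaddexp.reduce(alt_log_qs, axis=0) - np.log(num_prior_samples)
    return log_q_current - log_marginal
```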

4 Planning using Skill Dynamics

Initialize the parameters of the plan distribution, one distribution per primitive in the plan;
for each planning step in the episode do
        for each refinement step do
               Sample candidate latent plans from the current plan distribution;
               Compute the reward of each plan on trajectories simulated with skill-dynamics q;
               Update the plan distribution parameters using the MPPI rule (Eq. 10);
        end for
        Sample the first primitive z from the refined plan distribution;
        Execute z in the environment for the hold horizon using the policy π;
end for
Algorithm 2 Latent Space Planner
Figure 3: At test time, the planner simulates transitions in the environment using skill-dynamics q(s' | s, z), and updates the distribution of plans according to the reward computed on the simulated trajectories. After a few updates to the plan, the first primitive is executed in the environment using the learned agent π.

Given the learned skills π(a | s, z) and their respective skill-transition dynamics q(s' | s, z), we can perform model-based planning in the latent space Z to optimize a reward given to the agent at test time. Note that this essentially allows us to perform zero-shot planning, given the unsupervised pre-training procedure described in Section 3.

In order to perform planning, we employ the model-predictive control (MPC) paradigm (Garcia et al., 1989), which in a standard setting generates a set of action plans over a fixed planning horizon. These plans can be generated because the planner is able to simulate trajectories, assuming access to the transition dynamics p(s' | s, a). In addition, the reward of each plan's trajectory is computed according to the reward function provided for the test-time task. Following the MPC principle, the planner selects the best plan according to the reward function and executes its first action. The planning algorithm repeats this procedure from the next state, iteratively, until it achieves its goal.

We use a similar strategy to design an MPC planner that exploits the previously learned skill-transition dynamics q(s' | s, z). Note that unlike conventional model-based RL, we generate a plan in the latent space Z, as opposed to the action space that would be used by a standard planner. Since the primitives are temporally meaningful, it is beneficial to hold a primitive for a horizon of multiple steps, unlike actions, which are usually held for a single step. Thus, the effective planning horizon of our latent-space planner is the number of primitives in the plan times the hold horizon, enabling longer-horizon planning using fewer primitives. Similar to the standard MPC setting, the latent-space planner simulates trajectories and computes their rewards. After a small number of trajectory samples, the planner selects the first latent action of the best plan, executes it for the hold horizon in the environment, and then repeats the process until goal completion.

The latent planner maintains a distribution over latent plans, each a sequence of primitives of fixed length. Each element in the sequence represents the distribution of the primitive to be executed at that step of the plan. For continuous spaces, each element of the sequence can be modelled using a normal distribution with a learned mean and fixed covariance. We refine the planning distributions over a number of update steps, using sampled latent plans and the rewards computed on their simulated trajectories. The update for the parameters follows that of the Model Predictive Path Integral (MPPI) controller (Williams et al., 2016):

(10)

While we keep the covariance matrices of the distributions fixed, it is possible to update them as well, as shown in Williams et al. (2016). We show an overview of the planning algorithm in Figure 3, and provide more implementation details in Appendix A.
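
Putting Algorithm 2 together, the sketch below illustrates one MPPI-style refinement and primitive-selection step. It is our own simplified rendering with placeholder callables and illustrative hyperparameters (the MPPI coefficient of 10 follows Appendix A.5); the released implementation may differ in details such as per-timestep noise and covariance handling.

```python
import numpy as np


def mppi_plan_step(skill_dynamics, reward_fn, state, plan_means,
                   hold_steps=10, num_samples=64, refine_steps=10,
                   noise_std=0.3, gamma=10.0):
    """Refine a latent plan with softmax-weighted (MPPI-style) updates.

    `plan_means` has shape [plan_len, skill_dim]; `skill_dynamics(s, z)` predicts
    the next state and `reward_fn(s)` scores a state. Both callables are assumed
    stand-ins for the learned skill-dynamics and the test-time reward."""
    plan_len, skill_dim = plan_means.shape
    for _ in range(refine_steps):
        # Sample candidate plans around the current means (fixed covariance).
        samples = plan_means + noise_std * np.random.randn(num_samples, plan_len, skill_dim)
        returns = np.zeros(num_samples)
        for k in range(num_samples):
            s = state
            for p in range(plan_len):
                for _ in range(hold_steps):      # hold each primitive for several steps
                    s = skill_dynamics(s, samples[k, p])
                    returns[k] += reward_fn(s)
        # Softmax-weighted average of the sampled plans.
        weights = np.exp(gamma * (returns - returns.max()))
        weights /= weights.sum()
        plan_means = np.einsum('k,kpd->pd', weights, samples)
    # The first primitive of the refined plan is executed in the real environment.
    return plan_means[0], plan_means
```

In the full controller, the selected primitive is passed to the learned policy π(a | s, z) for the hold horizon in the real environment, after which the plan is refined again from the newly observed state.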

5 Related Work

Central to our method is the concept of skill discovery via mutual information maximization. This principle, proposed in prior work that utilized purely model-free unsupervised RL methods (Daniel et al., 2012; Florensa et al., 2017; Eysenbach et al., 2018; Gregor et al., 2016; Warde-Farley et al., 2018), aims to learn diverse skills via a discriminability objective: a good set of skills is one where it is easy to distinguish the skills from each other, which means they perform distinct tasks and cover the space of possible behaviors. Building on this prior work, we distinguish our skills based on how they modify the original uncontrolled dynamics of the system. This simultaneously encourages the skills to be both diverse and predictable. We also demonstrate that constraining the skills to be predictable makes them more amenable for hierarchical composition and thus, more useful on downstream tasks.

Another line of work that is conceptually close to our method concerns intrinsic motivation (Oudeyer and Kaplan, 2009; Oudeyer et al., 2007; Schmidhuber, 2010), which is used to drive the agent's exploration. Examples of such works include empowerment (Klyubin et al., 2005; Mohamed and Rezende, 2015), count-based exploration (Bellemare et al., 2016; Oh et al., 2015; Tang et al., 2017; Fu et al., 2017), information gain about the agent's dynamics (Stadie et al., 2015) and forward-inverse dynamics models (Pathak et al., 2017). While our method uses an information-theoretic objective similar to these approaches, we use it to learn a variety of skills that can be directly used for model-based planning, in contrast to learning a better exploration policy for a single skill. We provide a discussion on the connection between empowerment and DADS in Appendix C.

The skills discovered using our approach can also provide extended actions and temporal abstraction, which enable more efficient exploration for the agent to solve various tasks, reminiscent of hierarchical RL (HRL) approaches. This ranges from the classic options framework (Sutton et al., 1999; Stolle and Precup, 2002; Perkins et al., 1999) to some of the more recent work (Bacon et al., 2017; Vezhnevets et al., 2017; Nachum et al., 2018; Hausman et al., 2018). However, in contrast to end-to-end HRL approaches (Heess et al., 2016; Peng et al., 2017), we can leverage a stable, two-phase learning setup. The primitives learned through our method provide action and temporal abstraction, while planning with skill-dynamics enables hierarchical composition of these primitives, bypassing many problems of end-to-end HRL.

In the second phase of our approach, we use the learned skill-transition dynamics models to perform model-based planning, an idea that has been explored numerous times in the literature. Model-based reinforcement learning has traditionally been approached with methods well-suited to low-data regimes, such as Gaussian processes (Rasmussen, 2003), showing significant data-efficiency gains over model-free approaches (Deisenroth et al., 2013; Kamthe and Deisenroth, 2017; Kocijan et al., 2004; Ko et al., 2007). More recently, due to the challenges of applying these methods to high-dimensional state spaces, MBRL approaches employ Bayesian deep neural networks (Nagabandi et al., 2018; Chua et al., 2018b; Gal et al., 2016; Fu et al., 2016; Lenz et al., 2015) to learn dynamics models. In our approach, we take advantage of deep dynamics models that are conditioned on the skill being executed, simplifying the modelling problem. In addition, the skills themselves are learned with the objective of being predictable, which further assists with the learning of the dynamics model. There have also been multiple approaches addressing the planning component of MBRL, including linear controllers for local models (Levine et al., 2016; Kumar et al., 2016; Chebotar et al., 2017), uncertainty-aware (Chua et al., 2018b; Gal et al., 2016) or deterministic planners (Nagabandi et al., 2018), and stochastic optimization methods (Williams et al., 2016). The main contribution of our work lies in discovering model-based skill primitives that can be further combined by a standard model-based planner; therefore, we take advantage of an existing planning approach, Model Predictive Path Integral (Williams et al., 2016), that can leverage our pre-trained setting.

6 Experiments

Through our experiments, we aim to demonstrate that: (a) DADS, as a general-purpose skill discovery algorithm, can scale to high-dimensional problems; (b) the discovered skills are amenable to hierarchical composition; and (c) not only is planning in the learned latent space feasible, it is competitive with strong baselines. In Section 6.1, we provide visualizations and qualitative analysis of the skills learned using DADS. We demonstrate in Section 6.2 and Section 6.4 that optimizing the primitives for predictability renders the skills more amenable to temporal composition for hierarchical RL. We benchmark against a state-of-the-art model-based RL baseline in Section 6.3, and against goal-conditioned RL in Section 6.5.

6.1 Qualitative Analysis

Figure 4: Skills learned on different MuJoCo environments in the OpenAI gym. DADS can discover diverse skills without any extrinsic rewards, even for problems with high-dimensional state and action spaces.

In this section, we provide a qualitative discussion of the unsupervised skills learned using DADS. We use the MuJoCo environments (Todorov et al., 2012) from the OpenAI gym as our test-bed (Brockman et al., 2016). We find that our proposed algorithm can learn diverse skills without any reward, even in problems with high-dimensional state and actuation, as illustrated in Figure 4. DADS can discover primitives for Half-Cheetah to run forward and backward with multiple different gaits, for Ant to navigate the environment using diverse locomotion primitives and for Humanoid to walk using stable locomotion primitives with diverse gaits and direction. The videos of the discovered primitives are available at: https://sites.google.com/view/dads-skill

Qualitatively, we find the skills discovered by DADS to be predictable and stable, in line with implicit constraints of the proposed objective. While the Half-Cheetah will learn to run in both backward and forward directions, DADS will disincentivize skills which make Half-Cheetah flip owing to the reduced predictability on landing. Similarly, skills discovered for Ant rarely flip over, and tend to provide stable navigation primitives in the environment. This also incentivizes the Humanoid, which is characteristically prone to collapsing and extremely unstable by design, to discover gaits which are stable for sustainable locomotion.

One of the significant advantages of the proposed objective is that it is compatible with continuous skill spaces, which has not been shown in prior work on skill discovery (Eysenbach et al., 2018). Not only does this allow us to embed a large and diverse set of skills into a compact latent space, but the smoothness of the learned space also allows us to interpolate between behaviors generated in the environment. We demonstrate this on the Ant environment (Figure 5), where we learn a two-dimensional continuous skill space with a uniform prior in each dimension, and compare it to a discrete skill space with a uniform prior over 20 skills. Similar to Eysenbach et al. (2018), we restrict the observation space of the skill-dynamics to the Cartesian coordinates (x, y). We hereby call this the x-y prior, and discuss its role in Section 6.2.

Figure 5: (Left) X-Y traces of Ant skills in the discrete skill space; (Centre) X-Y traces in the continuous skill space; (Right) heatmap showing the orientation of the Ant trajectory as a function of the learned continuous skill. The traces demonstrate that the continuous space offers far greater diversity of skills, while the heatmap demonstrates that the learned space is smooth, as the orientation of the X-Y trace varies smoothly as a function of the skill.

In Figure 5, we project the trajectories of the learned Ant skills from both discrete and continuous spaces onto the Cartesian plane. From the traces of the skills, it is clear that the continuous latent space can generate more diverse trajectories. We demonstrate in Section 6.3 that continuous primitives are more amenable to hierarchical composition and generally perform better on downstream tasks. More importantly, we observe that the learned skill space is semantically meaningful. The heatmap in Figure 5 shows the orientation of the trajectory (with respect to the x-axis) as a function of the skill z, which varies smoothly as z is varied, with explicit interpolations shown in Appendix D.

6.2 Skill Variance Analysis

Figure 6: (Top-Left) Standard deviation of the Ant's position as a function of steps in the environment, averaged over multiple skills and normalized by the norm of the position. (Top-Right to Bottom-Left, clockwise) X-Y traces of skills learned using DIAYN with the x-y prior, DADS with the x-y prior, and DADS without the x-y prior, where the same color represents trajectories resulting from the execution of the same skill in the environment. High-variance skills from DIAYN offer limited utility for hierarchical control.

In an unsupervised skill learning setup, it is important to optimize the primitives to be diverse. However, we argue that diversity is not sufficient for the learned primitives to be useful for downstream tasks. Primitives must exhibit low-variance behavior, which enables long-horizon composition of the learned skills in a hierarchical setup. We analyze the variance of the x-y trajectories in the environment, where we also benchmark the variance of the primitives learned by DIAYN (Eysenbach et al., 2018). For DIAYN, we use the x-y prior for the skill-discriminator, which biases the discovered skills to diversify in the x-y space. This step was necessary for that baseline to obtain a competitive set of navigation skills. Figure 6 (Left) demonstrates that DADS, which optimizes the primitives for predictability and diversity, yields significantly lower-variance primitives when compared to DIAYN, which only optimizes for diversity. This is starkly demonstrated in the plots of X-Y traces of skills learned in different setups. Skills learned by DADS show significant control over the trajectories generated in the environment, while skills from DIAYN exhibit high variance in the environment, which limits their utility for hierarchical control. This is further demonstrated quantitatively in Section 6.4.

While optimizing for predictability already significantly reduces the variance of the trajectories generated by a primitive, we find that using the x-y prior with DADS brings down the skill variance even further. For quantitative benchmarks in the next sections, we assume that the Ant skills are learned using an x-y prior on the observation space, for both DADS and DIAYN.

6.3 Model-Based Reinforcement Learning

The key utility of learning a parametric model q(s' | s, z) is to enable the use of planning algorithms for downstream tasks, which can be extremely sample-efficient. In our setup, we can solve test-time tasks zero-shot, that is, without any learning on the downstream task. We compare with a state-of-the-art model-based RL method (Chua et al., 2018a), which learns a dynamics model parameterized as p(s' | s, a), on the task of the Ant navigating to a specified goal with a dense reward. Given a goal g, the dense reward at any position is based on the distance to g. We benchmark our method against the following variants:

  • Random-MBRL (rMBRL): We train the model on randomly collected trajectories, and test the zero-shot generalization of the model on a distribution of goals.

  • Weak-oracle MBRL (WO-MBRL): We train the model on trajectories generated by the planner to navigate to a goal, randomly sampled in every episode. The distribution of goals during training matches the distribution at test time.

  • Strong-oracle MBRL (SO-MBRL): We train the model on trajectories generated by the planner to navigate to a specific goal, which is fixed for both training and test time.

Amongst the variants, only rMBRL matches our assumption of unsupervised, task-agnostic training. Both WO-MBRL and SO-MBRL benefit from goal-directed exploration during training, a significant advantage over DADS, which only uses mutual-information-based exploration.

As our metric, we use the distance to the goal averaged over the episode (with the same fixed horizon for all models and experiments), normalized by the initial distance to the goal. Lower values therefore indicate better performance, and values below 1 indicate that the agent moves closer to the goal. The test set of goals is fixed for all methods and sampled from the same distribution.
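
For concreteness, the metric described above can be computed from an episode's x-y positions as follows (a small helper of ours, not from the paper's code):

```python
import numpy as np


def normalized_goal_distance(positions, goal):
    """Average distance to the goal over an episode, normalized by the
    initial distance; values below 1 mean the agent got closer on average."""
    positions = np.asarray(positions, dtype=float)       # shape [T, 2], x-y positions
    goal = np.asarray(goal, dtype=float)
    distances = np.linalg.norm(positions - goal, axis=-1)
    return distances.mean() / np.linalg.norm(positions[0] - goal)
```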

Figure 7 demonstrates that the zero-shot planning significantly outperforms all model-based RL baselines, despite the advantage of the baselines being trained on the test goal(s). For the experiment depicted in Figure 7 (Right), DADS has an unsupervised pre-training phase, unlike SO-MBRL which is training directly for the task. A comparison with Random-MBRL shows the significance of mutual-information-based exploration, especially with the right parameterization and priors. This experiment also demonstrates the advantage of learning a continuous space of primitives, which outperforms planning on discrete primitives.

Figure 7: (Left) The MPPI controller on skills learned using DADS-c (continuous primitives) and DADS-d (discrete primitives) significantly outperforms state-of-the-art model-based RL. (Right) Planning for a new task does not require any additional training and outperforms model-based RL trained for the specific task.

6.4 Hierarchical Control with Unsupervised Primitives

Figure 8: (Left) An RL-trained meta-controller is unable to compose primitives learned by DIAYN to navigate the Ant to a goal, while it succeeds in doing so with the primitives learned by DADS. (Right) Goal-conditioned RL (GCRL-dense/sparse) does not generalize outside its training distribution, while the MPPI controller on learned skills (DADS-dense/sparse) experiences a significantly smaller degradation in performance.

We benchmark hierarchical control for primitives learned without supervision against our proposed scheme, which uses an MPPI-based planner on top of DADS-learned skills. We persist with the task of Ant-navigation as described in Section 6.3. We benchmark against Hierarchical DIAYN (Eysenbach et al., 2018), which learns the skills using the DIAYN objective, freezes the low-level policy, and learns a meta-controller that outputs the skill to be executed for a fixed number of subsequent steps. We provide the x-y prior to DIAYN's discriminator while learning the skills for the Ant agent. The performance of the meta-controller is constrained by the low-level policy; however, this hierarchical scheme is agnostic to the algorithm used to learn the low-level policy. To contrast the quality of primitives learned by DADS and DIAYN, we also benchmark against Hierarchical DADS, which learns a meta-controller the same way as Hierarchical DIAYN, but learns the skills using DADS.

From Figure 8 (Left), we find that the meta-controller is unable to compose the skills learned by DIAYN, while the same meta-controller can learn to compose the skills learned by DADS to navigate the Ant to different goals. This result seems to confirm our intuition from Section 6.2 that the high variance of the DIAYN skills limits their temporal compositionality. Interestingly, learning an RL meta-controller reaches performance similar to the MPPI controller, at the cost of additional samples per goal.

6.5 Goal-conditioned RL

To demonstrate the benefits of our approach over model-free RL, we benchmark against goal-conditioned RL on two versions of Ant-navigation: (a) with a dense reward, and (b) with a sparse reward that is non-zero only when the agent is within a small distance of the goal. We train the goal-conditioned RL agent using soft actor-critic, where the state variable of the agent is augmented with the position delta to the goal. The agent receives a randomly sampled goal at the beginning of every episode.
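
For illustration, the two reward variants could be implemented as below; the exact distance scaling and sparse-reward threshold used in the paper are not stated here, so those choices are assumptions.

```python
import numpy as np


def dense_reward(xy, goal):
    """Dense navigation reward: negative distance to the goal (assumed form)."""
    return -np.linalg.norm(np.asarray(xy) - np.asarray(goal))


def sparse_reward(xy, goal, threshold=2.0):
    """Sparse navigation reward: non-zero only within `threshold` metres of the
    goal (the threshold value is an illustrative assumption)."""
    return 1.0 if np.linalg.norm(np.asarray(xy) - np.asarray(goal)) <= threshold else 0.0
```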

In Figure 8 (Right), we measure the average performance of all the methods as a function of the initial distance to the goal, ranging from 5 to 30 metres. For dense-reward navigation, we observe that while model-based planning on DADS-learned skills degrades smoothly as the initial distance to the goal increases, goal-conditioned RL experiences a sudden deterioration outside the goal distribution it was trained on. Even within the goal distribution observed during the training of the goal-conditioned RL model, skill-space planning performs competitively. With sparse-reward navigation, goal-conditioned RL is unable to navigate, while MPPI demonstrates performance comparable to the dense-reward case up to about 20 metres. This highlights the utility of learning task-agnostic skills, which makes them more general, and shows that latent-space planning can be leveraged for tasks requiring long-horizon planning.

7 Conclusion

We have proposed a novel unsupervised skill learning algorithm that is amenable to model-based planning for hierarchical control on downstream tasks. We show that our skill learning method can scale to high-dimensional state-spaces while discovering a diverse set of low-variance skills. In addition, we demonstrated that, without any training on the specified task, we can compose the learned skills to outperform competitive model-based baselines that were trained with knowledge of the test tasks. We plan to extend the algorithm to work with off-policy data, potentially using relabelling tricks (Andrychowicz et al., 2017; Nachum et al., 2018), and to explore more nuanced planning algorithms. We also plan to apply the introduced method to different domains, such as manipulation, and to enable skill and model discovery directly from images, culminating in unsupervised skill discovery on robotic setups.

8 Acknowledgements

We would like to thank Evan Liu, Ben Eysenbach, and Anusha Nagabandi for their help in reproducing the baselines for this work. We are thankful to Ben Eysenbach for his comments and discussion on the initial drafts. We would also like to acknowledge Ofir Nachum, Alex Alemi, Daniel Freeman, Yiding Jiang, Allan Zhou and other colleagues at Google Brain for their helpful feedback and discussions at various stages of this work. We are also thankful to Michael Ahn and others in the Adept team for their support, especially with the infrastructure setup and scaling up the experiments.

References

Appendix A Implementation Details

All of our models are written in the open-source TensorFlow Agents framework [Guadarrama et al., 2018], based on TensorFlow [Abadi et al., 2015].

A.1 Skill Spaces

When using discrete spaces, we parameterize the skills z as one-hot vectors. These one-hot vectors are randomly sampled from a uniform prior over the number of skills, usually between 20 and 128. For continuous spaces, we sample z from a uniform prior in each dimension, with the dimensionality of the skill space generally varying from 2 (Ant learnt with the x-y prior) to 5 (Humanoid on the full observation space). The skills are sampled once at the beginning of the episode and fixed for the rest of the episode. However, it is possible to resample the skill from the prior within the episode, which allows every skill to experience a distribution different from the initialization distribution and encourages skills which are temporally compositional. The re-sampling frequency should be such that resampling happens at most once or twice per episode, so that every skill has sufficient time to act.

A.2 Agent

We use SAC as the optimizer for our agent, in particular EC-SAC [Haarnoja et al., 2018b]. The input to the policy generally excludes the global co-ordinates (x, y) of the centre of mass, available in many OpenAI gym environments, which helps produce skills agnostic to the location of the agent. We restrict ourselves to two hidden layers for our policy and critic networks. However, to improve the expressivity of skills, it is beneficial to increase the capacity of the networks: the hidden layer sizes can vary from (128, 128) for Half-Cheetah to (1024, 1024) for Humanoid. The critic is parameterized similarly. The target network for the critic is updated every iteration using soft updates. We use the Adam [Kingma and Ba, 2014] optimizer with a fixed learning rate and a fixed entropy coefficient. The policy is parameterized as a normal distribution with a diagonal covariance matrix, and its output undergoes a tanh transformation to constrain it to the bounded action space.
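
As an illustration of the tanh-squashed Gaussian policy head described above (a generic sketch with actions assumed scaled to [-1, 1]; not the TF-Agents implementation):

```python
import numpy as np


def sample_squashed_action(mean, log_std):
    """Sample from a diagonal Gaussian and squash with tanh so that the action
    lies in (-1, 1); returns the action and its log-probability."""
    std = np.exp(log_std)
    pre_tanh = mean + std * np.random.randn(*mean.shape)
    action = np.tanh(pre_tanh)
    # Gaussian log-density of the pre-tanh sample ...
    log_prob = np.sum(
        -0.5 * (((pre_tanh - mean) / std) ** 2 + 2 * log_std + np.log(2 * np.pi)),
        axis=-1)
    # ... with the change-of-variables correction for the tanh squashing.
    log_prob -= np.sum(np.log(1.0 - action ** 2 + 1e-6), axis=-1)
    return action, log_prob
```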

A.3 Skill-Dynamics

Skill-dynamics, denoted by q(s' | s, z), is parameterized by a deep neural network. A common trick in model-based RL is to predict the state delta Δs = s' - s, rather than the full next state s'. Hence, the prediction network models q(Δs | s, z). Note that both parameterizations can represent the same set of functions; however, the latter is easier to learn, as Δs is centred around 0. While the global co-ordinates are excluded from the input to q, it is useful to predict the change in the global co-ordinates, because reward functions for goal-based navigation generally rely on the position prediction from the model. Skill-dynamics has the same capacity as the agent/critic, with the same hidden layer sizes. The output distribution is modelled as a mixture of experts [Jacobs et al., 1991], where each expert is a diagonal, state-dependent Gaussian, and every expert has a weight dependent on the input. The number of experts is 4. Batch normalization was found to be useful for learning skill-dynamics; however, it is important to turn off the learnable parameters for the last layer for the sanity of the learning process.
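
A sketch of the mixture-of-Gaussians output described above, evaluating the log-density of a predicted state delta; the shapes and the absence of batch normalization are simplifications of ours, and the mixture parameters are assumed to come from a network conditioned on (s, z).

```python
import numpy as np


def mixture_log_prob(logits, means, log_stds, delta):
    """Log-density of delta = s' - s under a mixture of diagonal Gaussians.

    logits: [K] unnormalized mixture weights; means/log_stds: [K, D];
    delta: [D]. All of these would be produced by the skill-dynamics network."""
    var = np.exp(2.0 * log_stds)
    component_log_probs = np.sum(
        -0.5 * ((delta - means) ** 2 / var + 2.0 * log_stds + np.log(2.0 * np.pi)),
        axis=-1)                                              # shape [K]
    log_weights = logits - np.logaddexp.reduce(logits)        # log-softmax over experts
    return np.logaddexp.reduce(log_weights + component_log_probs)
```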

A.4 Other Hyperparameters

The episode horizon is generally kept shorter for stable agents like Ant (usually 200), and longer for unstable agents like Humanoid (500-1000). For Ant, longer episodes do not add value, but Humanoid can benefit from longer episodes, as they help filter out skills which are unstable. The optimization scheme is on-policy, and generally about 1000-4000 steps are collected in one iteration. The idea is to collect episodes from about 5-10 skills in a batch. Re-sampling skills within episodes can be useful when working with longer episodes. Once a batch of episodes is collected, the skill-dynamics is updated using the Adam optimizer with a fixed learning rate of 3e-4. The batch size is 128, and generally 20-50 steps of gradient descent are carried out. To compute the intrinsic reward, we need to resample skills from the prior to compute the denominator. For continuous spaces, we set the number of prior samples L between 50 and 500; for discrete spaces, we can marginalize over all skills. After the intrinsic reward is computed, the policy and critic networks are updated for 64-128 steps with a batch size of 128. This ensures that every sample in the batch is seen about 3-4 times, in expectation.

A.5 Planning and Evaluation Setups

For evaluation, we fix the episode horizon to 200 for all models in all evaluation setups. Depending upon the size of the latent space and the planning horizon, the number of samples from the planning distribution is varied between 10 and 200. The coefficient for MPPI is set to 10. We generally found that a short plan with a moderate hold horizon worked well, together with a small number of refinement steps. However, for sparse-reward navigation it is important to plan over a longer horizon, in which case we use a longer plan with a higher number of samples from the planning distribution. Also, when using longer planning horizons, we found that smoothing the sampled plans helps: each element of a sampled plan is replaced by an exponentially weighted average of itself and the preceding (smoothed) elements, with the smoothing coefficient generally kept high, between 0.8 and 0.95.
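
Under the exponential-averaging reading of the plan smoothing described above (our interpretation; the exact recursion is not spelled out in the text), a sketch looks like:

```python
import numpy as np


def smooth_plan(plan, beta=0.9):
    """Exponentially smooth a sampled latent plan of shape [plan_len, skill_dim]."""
    smoothed = np.array(plan, dtype=float)
    for t in range(1, len(smoothed)):
        smoothed[t] = beta * smoothed[t - 1] + (1.0 - beta) * smoothed[t]
    return smoothed
```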

For hierarchical controllers learning on top of the low-level unsupervised primitives, we use PPO [Schulman et al., 2017] for discrete skills and SAC for continuous skills. The meta-action is re-decided every 10 environment steps. The hidden layer sizes of the meta-controller are (128, 128). We use a learning rate of 1e-4 for PPO and 3e-4 for SAC.

Appendix B Graphical models, Information Bottleneck and Unsupervised Skill Learning

We now present a novel perspective on unsupervised skill learning, motivated by the literature on the information bottleneck. This section takes inspiration from [Alemi and Fischer, 2018], which helps us provide a rigorous justification for the objective proposed earlier. To obtain our unsupervised RL objective, we set up a graphical model P, shown in Figure 9, which represents the distribution of trajectories generated by a given policy π. The joint distribution is given by:

(11)

Figure 9: Graphical model for the world in which the trajectories are generated while interacting with the environment. Shaded nodes represent the distributions we optimize.

Figure 10: Graphical model Q, which is the desired representation of the world.

We set up another graphical model Q, which represents the desired model of the world. In particular, we are interested in approximating q(s' | s, z), which represents the transition function for a particular primitive. This abstraction lets us avoid knowing the exact actions, enabling model-based planning in behavior space (as discussed in the main paper). The joint distribution for Q, shown in Figure 10, is given by:

(12)

The goal of our approach is to optimize the distribution in the graphical model P so as to minimize the distance between the two distributions when transforming to the representation of the graphical model Q. In particular, we are interested in minimizing the KL divergence between P and Q. However, since Q is not known a priori, we set up the objective as the divergence from P to its projection onto the family defined by Q, which is the reverse information projection [Csiszár and Matus, 2003]. An alternate way to understand the objective is that we optimize the distribution P so that it optimally projects onto the graphical model Q. Note that if Q had the same structure as P, the information lost in the projection would be zero for any valid P. Interestingly, it was shown in Friedman et al. [2001] that:

(13)

where the two terms represent the multi-information of the distribution on the respective graphical models. The multi-information [Slonim et al., 2005] for a graphical model G with nodes x_1, ..., x_N is defined as:

(14)    I_G = Σ_{i=1}^{N} I( x_i ; Pa_G(x_i) )

where Pa_G(x_i) denotes the nodes upon which x_i has conditional dependence in G. Using this definition, we can compute the multi-information terms:

(15)

Here, the term corresponding to the environment dynamics is constant, as we assume the underlying dynamics to be fixed (and unknown), and we can safely ignore it. The final objective to be maximized is given by:

(16)
(17)
(18)

Here, we have used the non-negativity of mutual information, that is, I(x; y) ≥ 0. This yields the objective that we proposed to begin with, resulting in an unsupervised skill learning objective that explicitly fits a model of transition behaviors while providing a grounded connection with probabilistic graphical models. Note that, unlike the setup of control as inference [Levine, 2018; Ziebart et al., 2008], which casts policy learning as variational inference, the policy here is assumed to be part of the generative model itself (hence the resulting difference in the direction of the KL divergence).

Figure 11: Graphical model for the world P, representing the stationary state-action distribution. Shaded nodes represent the distributions we optimize.

Figure 12: Graphical model for the world Q, which is the representation we are interested in.

We can carry out the same exercise for the reward function in Diversity is All You Need (DIAYN) [Eysenbach et al., 2018] to provide a graphical-model interpretation of the objective used in that paper. To conform with that objective, we assume that we sample state-action pairs from skill-conditioned stationary distributions in the world P, rather than trajectories. Again, the objective to be maximized is given by

(19)
(20)
(21)
(22)

where we have used the variational inequalities to replace the intractable posteriors with variational approximations, and the action marginal with a uniform prior over the bounded actions (which is ignored as a constant).

Appendix C Interpretation as Empowerment in the Latent Space

Recall that the empowerment objective of Mohamed and Rezende [2015] can be stated as

(23)

where we are learning a flat policy π(a | s) and using a variational approximation for the true action posterior. We can connect our objective with empowerment if we assume a latent-conditioned policy π(a | s, z) and optimize the mutual information between the skill and the transition, which can be interpreted as empowerment in the latent space Z. There are two ways to decompose this objective:

(24)
(25)

Using the first decomposition, we can construct an objective using a variational lower bound that learns an inference network, which discriminates skills based on the transitions they generate in the environment rather than the state distribution induced by each skill. However, we are interested in learning the skill-dynamics network q(s' | s, z), which is why we work with the second decomposition. But again, we are stuck with the marginal transition entropy, which is intractable to compute. We can handle it in a couple of ways:

(26)
(27)

where p(s' | s) represents the distribution of transitions from the state s. Note that we approximate this marginal using skill-dynamics averaged over the prior. This approximation encodes the intuition that skill-dynamics should represent the distribution of transitions from s under different primitives, and thus the marginal of q(s' | s, z) over z should approximately represent p(s' | s). However, this procedure does not yield entropy-regularized RL by itself, but arguments similar to those provided for the Information Maximization algorithm by Mohamed and Rezende [2015] can be made here to justify it in this empowerment perspective.

Note that this procedure makes an assumption when approximating the marginal transition distribution. While every skill is expected to induce a different state distribution in principle, this is not a bad assumption to make, as we often expect skills to be almost state-independent (consider locomotion primitives, which can essentially be activated from the state distribution of any other locomotion primitive). The impact of this assumption can be further attenuated if skills are randomly re-sampled from the prior within an episode of interaction with the environment. Regardless, we can avoid making this assumption if we use the variational lower bounds from Agakov [2004], which is the second way to handle the marginal transition entropy. We use the following inequality, used in Hausman et al. [2018]:

(28)

where the introduced distribution is a variational approximation to the corresponding posterior.

(29)
(30)
(31)

where we have used the above inequality to introduce a variational posterior for skill inference, in addition to the conventional variational lower bound used to introduce skill-dynamics. Further decomposing the leftover entropy:

Reusing the variational lower bound for marginal entropy from Agakov [2004], we get:

(32)
(33)
(34)

Since the choice of posterior is up to us, we can choose it to induce a uniform distribution over the bounded action space. Notice that the underlying dynamics are independent of z, but the actions do depend upon z. Therefore, this corresponds to entropy-regularized RL when the dynamics of the system are deterministic. Even for stochastic dynamics, the analogy might be a good approximation, assuming the underlying dynamics are not very entropic. The final objective (making this low-entropy dynamics assumption) can be written as:

(35)

We defer experimentation with this objective to future work.

Appendix D Interpolation in Continuous Latent Space

Figure 13: Interpolation in the continuous primitive space learned using DADS on the Ant environment corresponds to interpolation in the trajectory space. Each panel interpolates from one skill (solid blue) to another (dotted cyan).

Appendix E Model Prediction

Figure 14: (Left) Prediction error in the Ant’s co-ordinates (normalized by the norm of the actual position) for Skill-Dynamics. (Right) X-Y traces of actual trajectories (colored) compared to trajectories predicted by Skill-Dynamics (dotted-black) for different skills.

From Figure 14, we observe that skill-dynamics can provide robust state predictions over long planning horizons. When learning skill-dynamics with the x-y prior, we observe that the prediction error rises more slowly with the horizon than the norm of the actual position. This provides strong evidence of cooperation between the primitives and the skill-dynamics learned using DADS with the x-y prior. As the error growth for skill-dynamics learned on the full observation space is sub-exponential, a similar argument can be made for DADS without the x-y prior as well (albeit to a weaker extent).