A key challenge for reinforcement learning (RL) consists of learning in environments with sparse extrinsic rewards. In contrast to current RL methods, humans are able to learn new skills with little or no reward by using various forms of intrinsic motivation. We propose AMIGo, a novel agent incorporating a goal-generating teacher that proposes Adversarially Motivated Intrinsic Goals to train a goal-conditioned "student" policy in the absence of (or alongside) environment reward. Specifically, through a simple but effective "constructively adversarial" objective, the teacher learns to propose increasingly challenging—yet achievable—goals that allow the student to learn general skills for acting in a new environment, independent of the task to be solved. We show that our method generates a natural curriculum of self-proposed goals which ultimately allows the agent to solve challenging procedurally-generated tasks where other forms of intrinsic motivation and state-of-the-art RL methods fail.
Deep Reinforcement Learning (RL) has been tremendously successful and continues to show impressive results on a wide range of tasks (e.g. Mnih et al., 2016; Silver et al., 2016, 2017; Vinyals et al., 2019). However, this success has so far been mostly confined to scenarios with reasonably dense rewards, or to those where a perfect model of the environment can be used for search (for example, in the game of Go (Silver et al., 2016; Duan et al., 2016; Moravcík et al., 2017)). Many real-world environments offer extremely sparse rewards, if any at all. In such environments, random exploration, which underpins many current RL approaches, is likely either not to yield sufficient reward signal to train an agent, or to be very sample-inefficient, as it requires the agent to stumble onto novel rewarding states by chance.
In contrast, humans are capable of dealing with rewards that are sparse and lie far in the future. For example, to a child, the future adult life involving education, work, or marriage provides no useful reinforcement signal. Instead, children devote much of their time to play, generating objectives and posing challenges to themselves as a form of intrinsic motivation. Solving such self-proposed tasks encourages them to explore, experiment, and invent; sometimes, as in many games and fantasies, without any direct link to reality or to any source of extrinsic reward. This kind of intrinsic motivation is related to Piaget’s idea of the child as an active learner (Schulz, 2012) and might be a crucial feature to enable learning in real-world environments.
To address this discrepancy between naïve deep RL exploration strategies and human capabilities, we present a novel method which learns to propose Adversarially Motivated Intrinsic Goals (AMIGo). AMIGo is composed of a goal-generating teacher and a goal-conditioned student policy. The teacher acts as a constructive adversary to the student: the teacher is incentivized to propose goals that are not too easy for the student to achieve, but not impossible either. This results in a natural curriculum of increasingly harder intrinsic goals that challenge the agent and encourage learning about the dynamics of a given environment.
As advocated in recent work (Cobbe et al., 2019; Zhong et al., 2020; Risi and Togelius, 2019; Küttler et al., 2020), we evaluate AMIGo for procedurally-generated environments instead of trying to learn to perform a specific grounded task. Procedurally-generated environments are challenging since agents have to deal with a parameterized family of tasks, resulting in large observation spaces where memorizing trajectories is infeasible. Instead, agents have to learn policies that generalize across different environment layouts and transition dynamics (Rajeswaran et al., 2017; Machado et al., 2018; Foley et al., 2018; Zhang et al., 2018).
We evaluate AMIGo on MiniGrid (Chevalier-Boisvert et al., 2018), a suite of fast-to-run procedurally-generated environments with a symbolic/discrete observation space (expressed in terms of objects like walls, doors, keys, chests and balls) which isolates the problem of exploration from that of visual perception. Furthermore, Raileanu and Rocktäschel (2020) found that MiniGrid presents a particular challenge for existing state-of-the-art intrinsic motivation approaches. Here, AMIGo sets a new state of the art on some of the hardest MiniGrid environments (Chevalier-Boisvert et al., 2018), being the only method based on intrinsic motivation capable of successfully obtaining extrinsic reward on some of them.
In summary, we make the following contributions: (i) we propose Adversarially Motivated Intrinsic Goals—an approach for learning a teacher that generates increasingly harder goals, (ii) we show, through 85 experiments on 6 challenging exploration tasks in procedurally generated environments, that agents trained with AMIGo gradually learn to interact with the environment and solve tasks which are too difficult for state-of-the-art methods, and (iii) we perform an extensive qualitative analysis and ablation study.
Intrinsic motivation is a well-studied topic in RL (Oudeyer et al., 2007; Oudeyer and Kaplan, 2009; Schmidhuber, 1991) and several recent methods have proven effective for various hard-exploration tasks (Mnih et al., 2016; Pathak et al., 2017; Bellemare et al., 2016). One prominent form is the use of novelty, which in its simplest form can be estimated with state visitation counts (Strehl and Littman, 2008) and has been extended to high-dimensional state spaces (Bellemare et al., 2016; Burda et al., 2019b; Ostrovski et al., 2017). Other, more sophisticated versions of curiosity (Schmidhuber, 1991) guide the agent to learn about environment dynamics by encouraging it to take actions that reduce its uncertainty (Stadie et al., 2015; Burda et al., 2019b) or actions whose consequences are in some way unpredictable (Burda et al., 2019a; Raileanu and Rocktäschel, 2020; Pathak et al., 2017).
Other forms of intrinsic motivation include empowerment (Klyubin et al., 2005) which encourages control of the environment by the agent, and goal diversity (Pong et al., 2019) which encourages maximizing the entropy of the goal distribution. Marino et al. (2019) use the difference between subsequent steps to generate intrinsic rewards, while Zhang et al. (2019) use the difference between successor features. In Lair et al. (2019), intrinsic goals are discovered from language supervision. In Savinov et al. (2018), novelty is combined with effort as measured by the number of steps required to reach previously visited states. Bahdanau et al. (2019) train reward models from expert examples to improve learning of instruction-conditioned agents. Pinto et al. (2017) use an adversarial framework to perturb the environment and induce robustness.
Curriculum learning (Bengio et al., 2009) is widely used in machine learning, but the curricula are typically handcrafted, which can be time-consuming. In our work, the curriculum is generated automatically in an unsupervised way. In a different approach to automatic curriculum learning, proposed by Schmidhuber (2011), an agent constantly searches the space of problems for the next solvable one. Similarly, Matiisen et al. (2017) train a teacher to select tasks in which the student is improving the most or in which the student's performance is decreasing, to avoid forgetting. Florensa et al. (2017) generate a curriculum by increasing the distance of the starting-point to a goal. Jabri et al. (2019) generate a curriculum of task distributions for a meta-learner by training a density model of the agent's trajectories. Self-Paced Learning approaches optimize the trade-off between exposing the learner to desired tasks and selecting tasks in which the algorithm performs well (Klink et al., 2019, 2020; Jiang et al., 2015). Racaniere et al. (2019) train a goal-conditioned policy and a goal-setter network in a non-adversarial way to propose feasible, valid and diverse goals. Their feasibility criterion is similar to ours, but requires training an additional discriminator to rank the difficulty of the goals, while our teacher is directly trained to generate goals with an appropriate level of difficulty.¹

¹Unfortunately, neither the code for their method—which is far from simple—nor that for their experimental settings has been made available by the authors. We therefore cannot run a fair implementation of their approach in our setting for comparison, nor can we be confident of successfully reimplementing it ourselves, as there is no way of reproducing their results without their code.
Our approach is loosely inspired by generative adversarial networks (GANs, Goodfellow et al., 2014), where a generative model is trained to fool a discriminator, adversarially trained to differentiate between the generated and the original examples. In contrast with GANs, AMIGo does not require a discriminator, and is only “constructively adversarial”, in that the goal-generating teacher is incentivized by its objective to propose goals which are hard for the policy while ensuring they remain feasible.
Closer to our work, Sukhbaatar et al. (2017) use an adversarial framework but require two modules that independently act and learn in the environment, where one module is encouraged to propose challenges to the other. This setup can be costly and is restricted to only proposing goals which have already been reached by the policy. In contrast, our method requires only one agent acting in the environment, and the teacher is less constrained in the space of goals it can propose.
Florensa et al. (2018) present GoalGAN, a generator that proposes goals with the appropriate level of difficulty as determined by a learned discriminator. While their work is similar in spirit with ours, there are several key differences. First, GoalGAN was created for and tested on locomotion tasks with continuous goals, whereas our method is designed for discrete action and goal spaces. While not impossible, adapting it to our setting is not trivial due to the GAN objective. Second, the authors do not condition the generator on the observation which is necessary in procedurally-generated environments that change with each episode. GoalGAN generates goals from a buffer, but previous goals can be unfeasible or nonsensical for the current episode. Hence, GoalGAN cannot be easily adapted to procedurally-generated environments.
A concurrent effort (Zhang et al., 2020) complements ours, but in the context of continuous control, by also generating a curriculum of goals which are neither too hard nor too easy using a measure of epistemic uncertainty based on an ensemble of value functions. Comparison of our methods on their respective settings will hopefully be the subject of future work.
AMIGo is composed of two subsystems: a goal-conditioned student policy and a goal-generating teacher (see Figure 1). The teacher proposes goals and is rewarded only when the student reaches the goal after a certain number of steps. The student receives reward for reaching the goal proposed by the teacher (discounted by the number of steps needed to reach the goal). The two components are trained adversarially in that the student maximizes reward by reaching goals as fast as possible while the teacher maximizes reward by proposing goals which the student can reach, though not too quickly. In addition to this intrinsic reward, both modules are rewarded when the agent reaches the extrinsic goal specified by the environment.
We consider the traditional RL framework of a Markov Decision Process with a state space, a set of actions, and a transition function which specifies the distribution over next states given a current state and action. At each time-step $t$, the agent in state $s_t$ takes an action $a_t$ by sampling from a goal-conditioned stochastic student policy $\pi(a_t \mid s_t, g; \theta)$, represented as a neural network with parameters $\theta$, where $g$ is the goal. The agent receives a reward $r_t$, which is the sum of the intrinsic and extrinsic rewards. The student is trained to maximize the discounted expected reward $\mathbb{E}\left[\sum_{t=0}^{T} \gamma^t r_t\right]$, where $\gamma \in [0, 1)$ is the discount factor, and the expectation is taken with respect to the student and the environment. We consider a finite time horizon $T$ as provided by the environment.
The teacher is a neural network which takes as input an initial state and outputs a goal for the student. The teacher proposes a new goal every time an episode begins or whenever the student reaches the intrinsic goal. We assume that some goal verification function $v(s_t, g)$ can be specified such that $v(s_t, g) = 1$ if the state $s_t$ satisfies the goal $g$, and $v(s_t, g) = 0$ otherwise. We define the undiscounted intrinsic reward as:

$$r^g_t = v(s_t, g)$$
We train the teacher using a reward computed every time an intrinsic goal is reached (or at the end of an episode) as a function of the student's performance on the goal it was set. To encourage the teacher to propose goals that are always pushing the student to improve, we penalize it if the student either cannot achieve the goal, or can do so too easily. There are different options for measuring the performance of the student here, but for simplicity we will use $t^+$, the number of steps it takes the student to reach an intrinsic goal since the intrinsic goal was set (treating the case where the student does not reach the goal before the episode ends as a failure). We define a threshold $t^*$ such that the teacher is positively rewarded when the student takes more steps than the threshold to reach the set goal, and negatively if it takes fewer steps or never reaches the goal. We thus define the teacher reward $r^T$ as follows, where $\alpha$ and $\beta$ are hyperparameters (see Section 4.2 for implementation details) specifying the weight of positive and negative teacher reward:

$$r^T = \begin{cases} +\alpha & \text{if the goal is reached and } t^+ \ge t^* \\ -\beta & \text{if } t^+ < t^* \text{ or the goal is never reached} \end{cases}$$
One can try to calibrate a fixed target threshold $t^*$ to force the teacher to propose harder and harder goals as the student improves. However, this threshold is likely to be different across environments. A more adaptive—albeit heuristic—approach is to linearly increase the threshold $t^*$ after a fixed number of times in which the student successfully reaches the intrinsic goals. Specifically, the threshold is increased by 1 whenever the student successfully reaches an intrinsic goal in more than $t^*$ steps ten times in a row. This increase in the target threshold provides an additional metric for visualizing the improvement of the student through the "difficulty" of its goals (see Figure 5).
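As a sketch, the teacher reward and this adaptive threshold schedule can be written as follows. All names are illustrative, and $\alpha$ and $\beta$ are passed in explicitly since they are tuned hyperparameters:

```python
# Sketch of the teacher reward and the adaptive threshold schedule
# (illustrative names; alpha and beta are hyperparameters).

def teacher_reward(reached_goal, steps_taken, threshold, alpha, beta):
    """+alpha if the student needed at least `threshold` steps to reach the
    goal; -beta if it reached the goal too quickly or never reached it."""
    if reached_goal and steps_taken >= threshold:
        return alpha
    return -beta

class ThresholdSchedule:
    """Increase the target threshold t* by 1 once the student has reached
    intrinsic goals in more than t* steps ten times in a row."""
    def __init__(self, initial_threshold=1, required_streak=10):
        self.threshold = initial_threshold
        self.streak = 0
        self.required_streak = required_streak

    def update(self, reached_goal, steps_taken):
        if reached_goal and steps_taken > self.threshold:
            self.streak += 1
            if self.streak >= self.required_streak:
                self.threshold += 1   # goals must now take longer to reward
                self.streak = 0
        else:
            self.streak = 0           # failure or too-easy goal resets streak
        return self.threshold
```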
While we can conceive of variants of AMIGo whereby goals are provided in the form of linguistic instructions, images, etc., to prove the concept we consider goals in the form of integer pairs, corresponding to absolute coordinates of the cell the agent must modify before the end of an episode (e.g. by moving to it, or causing the object in it to move or change state). The verification function is then trivially the indicator function of whether the cell state is different from its initial state at the beginning of the episode.
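For these coordinate goals, the verification function reduces to a cell-change indicator. A minimal sketch, assuming observations are plain 2D grids of hashable cell contents (the helper name is hypothetical):

```python
# Minimal verification function for coordinate goals: the goal is satisfied
# when the content of the target cell differs from its content at the start
# of the episode. Grids here are plain 2D lists indexed as grid[y][x].

def make_verifier(initial_grid, goal):
    """Return v(state) = 1 if the goal cell changed since episode start."""
    x, y = goal
    initial_cell = initial_grid[y][x]
    def verify(grid):
        return 1 if grid[y][x] != initial_cell else 0
    return verify
```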
Proposing coordinates can present a challenging and diverse set of goals, as the coordinates can not only be affected by reaching them but also by modifying what is on them. This includes picking up keys, opening doors and dropping objects onto empty tiles. Likewise, simply navigating to a set of coordinates (say, the corner of a locked room) might require solving several non-trivial sub-problems (e.g. identifying the right key, going to it, then going to the door, unlocking it, and finally going to the target location).
To complement our main form of intrinsic reward, we explore a few other criteria, including goal diversity, extrinsic reward, environment change and novelty. We report, in our experiments of Section 4, the results for AMIGo using these auxiliary losses. We present, in Appendix D, an ablation study of the effect of these losses, alongside some alternatives to the reward structure for the teacher network.
Diverse Goals. One desirable property is goal diversity (Pong et al., 2019; Raileanu and Rocktäschel, 2020), which is naturally encouraged in AMIGo through the entropy regularization term of the IMPALA loss used to train the teacher. This regularization, along with the scheduling of the threshold, helps the teacher avoid getting stuck in local minima. Relatedly, we consider rewarding the teacher for proposing novel goals similar to count-based exploration methods (Bellemare et al., 2016; Ostrovski et al., 2017) with the difference that in our case the counts are for goals instead of states.
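A goal-count novelty bonus of this kind can be sketched as follows; the $1/\sqrt{N}$ decay mirrors standard count-based exploration bonuses, and the exact form used in our experiments may differ:

```python
# Sketch of a count-based novelty bonus over *goals* rather than states
# (illustrative; the exact bonus form is an assumption).
from collections import Counter

class GoalCounts:
    def __init__(self):
        self.counts = Counter()

    def bonus(self, goal):
        """Reward proposing rarely-used goals; decays as 1/sqrt(count)."""
        self.counts[goal] += 1    # count at proposal time
        return 1.0 / self.counts[goal] ** 0.5
```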
Episode Boundary Awareness. When playing in a procedurally-generated environment, humans will notice the factors of variation and exploit them. In episodic training, RL agents and algorithms are informed if a particular state was an episode end. To softly bias AMIGo towards learning the factors of variation in an environment, while not giving it privileged information which other comparable intrinsic motivation systems and RL agents would not have access to, we reward the teacher if the content of the goal location it proposes changes at an episode boundary, regardless of whether this change was due to the agent. While this prior is not specific to MiniGrid tasks, it is not guaranteed to be relevant in every procedurally generated environment, and as such is not an essential part of AMIGo or its success, as AMIGo trained agents can still obtain reward without it on some of the harder tasks where competing methods fail (see the ablation study of Appendix D). Furthermore, it is neither straightforward nor obvious how to extend other intrinsic motivation methods to incorporate similar heuristics, and we leave this for future research.
Extrinsic Goals. To help transition into the extrinsic task and avoid local minima, we reward both the teacher and the student whenever the student reaches the extrinsic goal, even if it did not coincide with the intrinsic goal set by the teacher. This avoids the degenerate case where the student becomes good at satisfying the extrinsic goal, and the teacher is forced to encourage it “away” from it.
We follow Raileanu and Rocktäschel (2020) and evaluate our models on several challenging procedurally-generated environments from MiniGrid (Chevalier-Boisvert et al., 2018). This environment provides a good testbed for exploration in RL since the observations are symbolic rather than high-dimensional, which helps to disentangle the problem of exploration from that of visual understanding. We compare AMIGo with state-of-the-art methods that use various forms of exploration bonuses. We use TorchBeast (Küttler et al., 2019), a PyTorch platform for RL research based on IMPALA (Espeholt et al., 2018), for fast, asynchronous parallel training. The code for our method and experiments, along with details for running baselines from other codebases, is released at https://github.com/facebookresearch/adversarially-motivated-intrinsic-goals.
We evaluate AMIGo on the following MiniGrid environments: KeyCorrS3R3 (KCmedium), ObstrMaze1Dl (OMmedium), ObstrMaze2Dlhb (OMmedhard), KeyCorrS4R3 (KChard), KeyCorrS5R3 (KCharder), and ObstrMaze1Q (OMhard). The agent receives a full observation of the MiniGrid environment. The layout of the environment changes at every episode as it is procedurally-generated. Examples of these tasks can be found in Figure 2.
Each environment is a grid of size $N \times N$ ($N$ being environment-specific) where each tile contains at most one of the following colored objects: wall, door, key, ball, chest. In each episode, an object is selected as an extrinsic goal; the environment is reset when the agent reaches the extrinsic goal or a maximum number of time-steps is reached. The agent can take the following actions: turn left, turn right, move forward, pick up an object, drop an object, or toggle (open doors or interact with objects). Each tile is encoded using three integer values: the object, the color, and a type or flag indicating, for example, whether doors are open or closed. While policies could be learned from pixel observations alone, we will see below that the exploration problem is sufficiently complex with these semantic layers, owing to the procedurally-generated nature of the tasks. The observations are transformed before being fed to agents by embedding each tile of the observed frame into a compositional representation corresponding to the object, color and type/flag.
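The tile encoding can be illustrated with a dependency-free sketch, where each per-channel "embedding" is a plain lookup table standing in for a trainable embedding layer (in practice something like `torch.nn.Embedding`; all names here are hypothetical):

```python
# Sketch of the compositional tile embedding: each tile is a triple of
# integers (object, color, type/flag), and its representation is the
# concatenation of one embedding per channel. Lookup tables stand in for
# trainable embedding layers to keep the sketch dependency-free.

def embed_tile(tile, object_emb, color_emb, flag_emb):
    """Concatenate per-channel embeddings for one (object, color, flag) tile."""
    obj, color, flag = tile
    return object_emb[obj] + color_emb[color] + flag_emb[flag]  # list concat

def embed_observation(grid, object_emb, color_emb, flag_emb):
    """Embed every tile of an observed frame."""
    return [[embed_tile(t, object_emb, color_emb, flag_emb) for t in row]
            for row in grid]
```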
The extrinsic reward provided by each environment for reaching the extrinsic goal in $t$ steps is $1 - 0.9\,(t / t_{max})$ if the extrinsic goal is reached at $t \le t_{max}$, and $0$ otherwise, where $t_{max}$ is the maximum episode length (which is intrinsic to each environment and set by the MiniGrid designers). Episodes end when the goal is reached, and the step-dependent discount thus encourages agents to reach the goal as quickly as possible.
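Written as a function, and assuming the standard MiniGrid form of the reward, $1 - 0.9\,(t/t_{max})$, this is:

```python
# Extrinsic reward as a function of the step at which the goal is reached,
# following the standard MiniGrid reward shape (the 0.9 decay factor is the
# usual MiniGrid constant).

def extrinsic_reward(t, t_max, goal_reached):
    """1 - 0.9 * (t / t_max) if the goal is reached in time, else 0."""
    if not goal_reached or t > t_max:
        return 0.0
    return 1.0 - 0.9 * (t / t_max)
```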
The teacher is a dimensionality-preserving network of four convolutional layers interleaved with exponential linear units. Similarly, the student consists of four convolutional layers interleaved with exponential linear units followed by two linear layers with rectified linear units.
Both the student and the teacher are trained using the TorchBeast (Küttler et al., 2019) implementation of IMPALA (Espeholt et al., 2018), a distributed actor-critic algorithm. But while the teacher proposes goals only at the beginning of an episode or when the student reaches a goal, the student produces an action and gets a reward at every step. To replicate the structure of the reward for reaching extrinsic goals, the intrinsic reward for the student is discounted to $1 - 0.9\,(t^+ / t_{max})$ when $v(s_t, g) = 1$, and is $0$ otherwise. The hyperparameters $\alpha$ and $\beta$ of the teacher reward are grid searched, with the best values reported in Appendix B (see Appendix B for full hyperparameter search details).
We use IMPALA (Espeholt et al., 2018) without intrinsic motivation as a standard deep RL baseline. We then compare AMIGo to a series of methods that use intrinsic motivation to supplement extrinsic reward, as listed here. Count is Count-Based Exploration from Bellemare et al. (2016), which computes state visitation counts and gives higher rewards to less-visited states. RND is Random Network Distillation Exploration by Burda et al. (2019b), which uses a random network to compute a prediction error used as a bonus to reward novel states. ICM is the Intrinsic Curiosity Module from Pathak et al. (2017), which trains forward and inverse models to learn a latent representation used to compare the predicted and actual next states; the Euclidean distance between the two (as measured in the latent space) is used as intrinsic reward. RIDE, from Raileanu and Rocktäschel (2020), defines the intrinsic reward as the magnitude of the change between two consecutive state representations.
We have noted from the literature that some of these baselines were designed for partially observable environments (Raileanu and Rocktäschel, 2020; Pathak et al., 2017) so they might benefit from observing an agent-centric partial view of the environment rather than a full absolute view (Ye et al., 2020). Despite our environment being fully observable, for the strongest comparison with AMIGo we ran the baselines in each of the following four modes: full observation of the environment for both the intrinsic reward module and the policy network, full observation for the intrinsic reward and partial observation for the policy, partial view for the intrinsic reward and full view for the policy, and partial view for both. We use an LSTM for the policy network when it is provided with partial observations and a feed-forward network when provided with full observations. In Section 4.4, we report the best result (across all four modes) for each baseline and environment pair, with a full breakdown of the results in Appendix A. This, alongside a comprehensive hyperparameter search, ensures that AMIGo is compared against the baselines trained under their individually best-performing training arrangement.
We also compare AMIGo to the authors’ implementation222https://github.com/tesatory/hsp of Asymmetric Self-Play (ASP) (Sukhbaatar et al., 2017). In their reversible mode two policies are trained adversarially: Alice starts from a start-point and tries to reach goals, while Bob is tasked to travel in reverse from the goal to the start-point.
We summarize the main results of our experiments in Table 1. As discussed in Section 4.3, the reported result for each baseline and each environment is that of the best performing configuration for the policy and intrinsic motivation system for that environment, as reported in Tables 2–5 of Appendix A. This aggregation of 85 experiments (not counting the number of times experiments were run for different seeds) ensures that each baseline is given the opportunity to perform in its best setting, in order to fairly benchmark the performance of AMIGo.
| | Medium Difficulty Environments | | | Hard Environments | | |
| | KCmedium | OMmedium | OMmedhard | KChard | KCharder | OMhard |
| AMIGo | .93 ±.00 | .92 ±.00 | .83 ±.05 | .54 ±.45 | .44 ±.44 | .17 ±.34 |
IMPALA and Asymmetric Self-Play are unable to pass any of these medium or hard environments. ICM and Count struggle on the “easier” medium environments, and fail to obtain any reward from the hard ones. Only RND and RIDE perform competitively on the medium environments, but struggle to obtain any reward on the harder environments.
Our results demonstrate that AMIGo establishes a new state of the art on harder exploration problems in MiniGrid. On environments of medium difficulty such as KCmedium, OMmedium, and OMmedhard, AMIGo performs comparably to other state-of-the-art intrinsic motivation methods, while often being able to successfully reach the extrinsic goal even on the hardest tasks. To showcase results and sample complexity, we illustrate and discuss how the mean extrinsic reward changes during training in Appendix C. To analyze which components of the teacher loss were important, we present, in Appendix D, an ablation study over the components presented in Section 3.4.
Qualitatively, the learning trajectories of AMIGo display interesting and partially adversarial dynamics. These often involve periods in which both modules cooperate as the student becomes able to reach the proposed goals, followed by others in which the student becomes too good, forcing a drop in the teacher reward, in turn forcing the teacher to increase the difficulty of the proposed goals and forcing the student to further explore. In Appendix E, we provide a more thorough qualitative analysis of AMIGo, wherein we describe the different phases of evolution in the difficulty of the intrinsic goals proposed by the teacher, as exemplified in Figure 3. Further goal examples are shown in Figure 6 of Appendix F.
In this work, we propose AMIGo, a framework for generating a natural curriculum of goals that help train an agent as a form of intrinsic reward, to supplement extrinsic reward (or replace it if it is not available). This is achieved by having a goal generator as a teacher that acts as a constructive adversary, and a policy that acts as a student conditioning on those goals to maximize an intrinsic reward. The teacher is rewarded for proposing goals that are challenging but not impossible. We demonstrate that AMIGo surpasses state-of-the-art intrinsic motivation methods on challenging procedurally-generated tasks in a comprehensive comparison against multiple competitive baselines, in a series of 85 experiments across 6 tasks. Crucially, it is the only intrinsic motivation method which allows agents to obtain any reward on some of the harder tasks, where pure RL also fails.
The key contribution of this paper is a model-agnostic framework for improving the sample complexity and efficacy of RL algorithms in solving the exploration problems they face. In our experiments, the choice of goal type imposed certain constraints on the nature of the observation, in that both the teacher and student need to fully observe the environment, due to the goals being provided as absolute coordinates. Technically, this method could also be applied to partially observed environments where part of the full observation is uncertain or occluded (e.g. “fog of war” in StarCraft), as the only requirement is that absolute coordinates can be provided and acted on. However, this is not a fundamental requirement, and in future work we would wish to investigate the cases where the teacher could provide more abstract goals, perhaps in the form of language instructions which could directly specify sequences of subgoals.
Other extensions to this work worth investigating are its applicability to continuous control domains, visually rich domains, or more complex procedurally-generated environments such as those of Cobbe et al. (2019). Until then, we are confident we have proved the concept in a meaningful way, and that other researchers will already be able to easily adapt it to their model and RL algorithm of choice, in their domain of choice.
Tables 2–5 show the final performance of the intrinsic motivation baselines trained using one of four different training regimes enumerated in Section 4.3. If a baseline obtains positive return on KCMedium or OMmedium, we use the best hyperparameters found on that task (for that particular baseline and training regime) to train it on the remaining environments (i.e. on KChard and KCharder or OMmedhard and OMhard, respectively). Otherwise, we assume the baseline won’t be able to achieve positive return on the harder versions of these tasks and report a return of 0, indicating that this is an assumed return with an asterisk (*).
For IMPALA, the numbers reported for KCmedium and OMmedium are from the experiments in Raileanu and Rocktäschel (2020), while the numbers for the harder environments are presumed to be 0 because IMPALA fails to train on simpler environments.
For ASP specifically, we ran 5 experiments with different seeds for KCmedium and OMmedium, where it was unable to obtain any reward. Therefore, we assume ASP will similarly be unable to learn in harder environments, which we further verified by running one seed in OMmedhard and KChard, where we report the standard deviation as 0 with a dagger (†). As a sanity check, we verified that ASP learns in easier environments not considered here, such as MiniGrid-Empty-Random-5x5-v0 and MiniGrid-KeyCorridorS3R1-v0.
Tables 2–5 indicate that the best training regime for the intrinsic motivation baselines (for all the tasks they can reliably solve) is the one that uses a partially observed intrinsic reward and a partially observed policy (Table 5). When the intrinsic reward is based on a full view of the environment, Count and RND will consider almost all states to be "novel" since the environment is procedurally-generated. Thus, the reward they provide will not be very helpful for the agent since it does not transfer knowledge from one episode to another (as is the case in fixed environments (Bellemare et al., 2016; Burda et al., 2019b)). In the case of RIDE and ICM, the change in the full view of the environment produced by one action is typically a single number in the MiniGrid observation. For ICM, this means that the agent can easily learn to predict the next state representation, so the intrinsic reward might vanish early in training leaving the agent without any guidance for exploring (Raileanu and Rocktäschel, 2020). For RIDE, it means that the intrinsic reward will be largely uniform across all state-action pairs, thus not differentiating between more and less "interesting" states (which it can do when the intrinsic reward is based on partial observations (Raileanu and Rocktäschel, 2020)).
For AMIGo, we grid searched over the student and teacher batch sizes, the student and teacher learning rates, the unroll length, the student and teacher entropy costs, the embedding dimension for the observations, the embedding dimension of the student's last linear layer, and the teacher loss function parameters.
For RND, RIDE, Count, and ICM, we used the learning rate, batch size (32), unroll length (100), and RMSProp optimizer settings (momentum 0) that Raileanu and Rocktäschel (2020) found best for these methods on MiniGrid tasks. We further searched over the entropy coefficient and the intrinsic reward coefficient on KCmedium and OMmedium. The results reported in Tables 2, 3 and 4 use the best values found from these experiments, while the results reported in Table 5 use the best parameter values reported by Raileanu and Rocktäschel (2020). For ASP, we ran the authors' implementation in its reverse mode, using the defaults for most hyperparameters and grid searching only over sp_steps, sp_test_rate, and sp_alice_entr.
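A sweep of this kind can be sketched as a Cartesian product over candidate values (the lists below are hypothetical placeholders, not the actual grids we searched):

```python
from itertools import product

# Hypothetical placeholder grids -- each list stands in for the set of
# candidate values swept for that hyperparameter.
grid = {
    "student_lr": [1e-4, 1e-3],
    "teacher_lr": [1e-4, 1e-3],
    "student_entropy_cost": [5e-4, 5e-3],
    "teacher_entropy_cost": [1e-2, 5e-2],
}

def grid_configs(grid):
    """Yield one config dict per point of the Cartesian product of the grid."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid_configs(grid))  # 2 * 2 * 2 * 2 = 16 configurations
```

Each resulting config dict corresponds to one training run; the best configuration is then selected by final extrinsic reward.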
The best hyperparameters for AMIGo and each baseline are reported below:
AMIGo: a student batch size of 8, a teacher batch size of 150, a student learning rate of .001, a teacher learning rate of .001, an unroll length of 100, a student entropy cost of .0005, a teacher entropy cost of .01, an observation embedding dimension of 5, a student last-layer embedding dimension of 256, and the best-performing teacher loss function parameters from the search above.
RND: partially observed intrinsic reward, partially observed policy, entropy cost of .0005, intrinsic reward coefficient of .1
RIDE: for KCmedium, partially observed intrinsic reward, partially observed policy, entropy cost of .0005, intrinsic reward coefficient of .1; for OMmedium: fully observed intrinsic reward, partially observed policy, entropy cost of .0005, intrinsic reward coefficient of .1
COUNT: partially observed intrinsic reward, partially observed policy, entropy cost of .0005, intrinsic reward coefficient of .1
ICM: for KCmedium, partially observed intrinsic reward, fully observed policy, entropy cost of .0005, intrinsic reward coefficient of .1; for OMmedium: partially observed intrinsic reward, partially observed policy, entropy cost of .0005, intrinsic reward coefficient of .1
ASP: Best performing hyperparameters (in the easier environments) were 10 sp_steps, a sp_test_rate of .5, and Alice entropy of .003. All other hyperparameters used the defaults in the codebase.
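For reference, the best AMIGo configuration listed above can be collected into a single config dict (the key names are ours and hypothetical, not identifiers from the released codebase):

```python
# Best AMIGo hyperparameters as reported above.
AMIGO_BEST = {
    "student_batch_size": 8,
    "teacher_batch_size": 150,
    "student_learning_rate": 1e-3,
    "teacher_learning_rate": 1e-3,
    "unroll_length": 100,
    "student_entropy_cost": 5e-4,
    "teacher_entropy_cost": 1e-2,
    "observation_embedding_dim": 5,
    "student_last_layer_dim": 256,
}
```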
We show, in Figure 4, the mean extrinsic reward over time during training for the best configuration of each method. The first row consists of environments of intermediate difficulty, on which the different forms of intrinsic motivation perform similarly: the first two environments require fewer than 30 million steps, while OMmedhard and the three more challenging environments in the bottom rows require on the order of hundreds of millions of frames. Any plot where a method's line is not visible indicates that the method consistently fails to reach rewarding states throughout training.
The important point to note here is that on the two easiest environments, KCmedium and OMmedium, agents need about 10 million steps to converge, while on the four more challenging environments they need on the order of 100 million steps to learn the tasks, showcasing AMIGo's contribution not just to solving the exploration problem, but also to improving the sample complexity of agent training.
To further explore the effectiveness and robustness of our method, in this subsection we investigate the alternative criteria discussed in Section 3.4. We compare the Full Model with ablations and alternatives obtained by removing the extrinsic bonus (NoExtrinsic), removing the environment change bonus (NoEnvChange), or adding a novelty bonus (withNovelty).
We also considered two alternative reward forms for the teacher, meant to provide a more continuous and gradual signal than the previously introduced "all or nothing" threshold: a Gaussian reward centered on the target threshold, and a Linear-Exponential reward that grows linearly up to the threshold and then decays exponentially as the proposed goal becomes too hard (as measured by the number of steps).
We report these two alternative reward forms as Gaussian and Linear-Exponential in the ablation study below.
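One plausible instantiation of the two shapes described above is sketched below; the exact formulas used in our experiments are not reproduced here, and the width and decay hyperparameters `sigma` and `tau` are assumptions for illustration (`t` is the number of steps the student takes, `t_star` the target threshold):

```python
import math

def gaussian_reward(t, t_star, sigma=5.0):
    """Gaussian-shaped teacher reward peaking at the target threshold t*.
    sigma is an assumed width hyperparameter."""
    return math.exp(-((t - t_star) ** 2) / (2 * sigma ** 2))

def linexp_reward(t, t_star, tau=5.0):
    """Teacher reward growing linearly up to t*, then decaying
    exponentially as the goal becomes too hard.
    tau is an assumed decay hyperparameter."""
    if t <= t_star:
        return t / t_star
    return math.exp(-(t - t_star) / tau)
```

Both shapes reward goals whose difficulty is near the threshold, but unlike the threshold reward they also assign partial credit to goals that are somewhat too easy or too hard.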
[Table 6: results grouped into Medium Difficulty Environments and Hard Environments.]
Performance is shown in Table 6, where we report the number of steps needed to converge (to the final extrinsic reward). A positive number means the model learned to solve the task, while 0 means the model did not manage to obtain any extrinsic reward. For all models, we encourage goal diversity in the teacher with a high entropy coefficient of .05 (compared to .0005 for the student).
As the table shows, removing the extrinsic reward or the environment change bonus severely hurts the model, making it unable to solve the harder environments. The novelty bonus was marginally beneficial in one environment (namely KChard) but slightly detrimental in the others. The more gradual reward forms are not robust to the learning dynamics and often send the system down rabbit holes, in which the teacher learns to propose goals that yield sub-optimal reward and thus do not help solve the actual task. The best results across all environments were obtained by our Full Model, using the simple threshold reward function with entropy regularization, in combination with the extrinsic reward and environment change bonuses but without the novelty bonus.
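Based on the description above, the winning "all or nothing" threshold reward and a simplified difficulty schedule can be sketched as follows (the values of alpha and beta and the exact threshold-update rule are illustrative assumptions, not the paper's specification; see Section 3 for the actual definitions):

```python
def teacher_reward(t_plus, t_star, alpha=0.7, beta=0.3):
    """'All or nothing' threshold reward for the teacher, assuming the
    student reached the goal in t_plus steps: positive if the student
    needed at least t* steps (the goal was challenging enough),
    negative otherwise. alpha and beta are illustrative values."""
    return alpha if t_plus >= t_star else -beta

def update_threshold(t_star, successes, patience=10, increment=1):
    """Simplified difficulty schedule: raise the target threshold once the
    teacher has been positively rewarded `patience` times, then reset the
    success counter. Returns the (possibly updated) threshold and counter."""
    if successes >= patience:
        return t_star + increment, 0
    return t_star, successes
```

The step-function shape means the teacher gets no partial credit for almost-hard-enough goals, which, per the ablation, turns out to be more robust than the smoother Gaussian and Linear-Exponential alternatives.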
To better understand the learning dynamics of AMIGo, Figure 5 shows the intrinsic reward throughout training received by the student (top panel) as well as the teacher (middle panel). The bottom panel shows the difficulty of the proposed goals as measured by the target threshold used by the teacher (described in Section 3). The trajectories reflect interesting and complex learning dynamics.
For visualization purposes we divide this learning period into five phases. Phase 1: The student slowly becomes able to reach intrinsic goals of minimal difficulty, as the teacher first learns to propose easy nearby goals. Phase 2: Once the student learns to reach nearby goals, the adversarial dynamics cause a drop in the teacher's reward, forcing it to explore and propose harder goals. Phase 3: An equilibrium is found in which the student is pushed to learn to reach more challenging goals. Phase 4: The student becomes too capable again, and the teacher is forced to increase the difficulty of the proposed goals. Phase 5: The difficulty reaches a point that induces a new equilibrium, in which the student is unable to reach the goals and is forced to improve its policy.
AMIGo generates diverse and complex learning dynamics that lead to constant improvements of the agent’s policy. In some phases, both components benefit from learning in the environment (as is the case during the first phase), while some phases are completely adversarial (fourth phase), and some phases require more exploration from both components (i.e. third and fifth phases).
Figure 3, presented in Section 4.4, further exemplifies a typical curriculum in which the teacher learns to propose increasingly harder goals. The examples show some of the typical goals proposed at different learning phases. First, the teacher proposes nearby goals. After some training, it learns to propose goals that involve traversing rooms and opening doors. Eventually, the teacher proposes goals which involve interacting with different objects. Despite the increasing capacity of the agent to interact with the environment, OMhard remains a challenging task and AMIGo learns to solve it in only one of the five runs.
Figure 6 shows examples of goals proposed by the teacher during different stages of learning. Typically, in early stages the teacher learns to propose easy nearby goals. As learning progresses, it is incentivized to propose goals farther away, which often involve traversing rooms and opening doors. Finally, in later stages the teacher often learns to propose goals that involve removing obstacles and interacting with objects. We often observe this before the policy achieves any extrinsic reward.