
Forward-Backward Reinforcement Learning

Goals for reinforcement learning problems are typically defined through hand-specified rewards. To design such problems, developers of learning algorithms must inherently be aware of what the task goals are, yet we often require agents to discover them on their own without any supervision beyond these sparse rewards. While much of the power of reinforcement learning derives from the concept that agents can learn with little guidance, this requirement greatly burdens the training process. If we relax this one restriction and endow the agent with knowledge of the reward function, and in particular of the goal, we can leverage backwards induction to accelerate training. To achieve this, we propose training a model to learn to take imagined reversal steps from known goal states. Rather than training an agent exclusively to determine how to reach a goal while moving forwards in time, our approach travels backwards to jointly predict how we got there. We evaluate our work in Gridworld and Towers of Hanoi and empirically demonstrate that it yields better performance than standard DDQN.


1 Introduction

Reinforcement Learning (RL) problems are often formulated with the agent blind to the task reward of the environment. However, for many sparse reward problems, including goal-directed tasks such as point-to-point navigation, pick-and-place manipulation, assembly, etc., endowing the agent with knowledge of the reward function is both feasible and practical for learning generalizable behavior. In general, developers of these problems often know what the task goals are, but not necessarily how to solve them. In this paper, we will describe how we can leverage our knowledge of goals to enable learning of behaviors in these regions before the agent even reaches them. This formulation may be easier to solve than approaches that initialize learning from the start alone. For example, if we know the desired location, pose, or configuration of a task, then we can reverse the actions that brought us there, rather than forcing the agent to solve these difficult problems solely through random discovery.

In this paper, we introduce Forward-Backward Reinforcement Learning (FBRL), which introduces backward induction, to enable our agent to reason backwards in time. Through an iterative process, we both explore forwards from the start position and backwards from the target/goal. To achieve this we introduce a learned backwards dynamics model to explore in reverse from known goal states and update values within this local neighborhood. This has the effect of “spreading out” sparse rewards so that they are easier to discover, thus accelerating the learning process.

Standard model-based approaches aim to reduce the amount of experience necessary to learn good policies by imagining steps forward and using these hallucinated events to augment the training data. However, there is no guarantee that the projected states will lead to the goal, so these roll-outs may be inadequate. The ability to predict the result of an action does not necessarily provide guidance about which actions lead to the goal. In contrast, FBRL takes a more guided approach since, given an accurate model, we have confidence that each state visited in a backwards step has a path to the goal.

In the rest of the paper, we will describe the relevant background and related work. We will then formally introduce FBRL, followed by an empirical section in which we evaluate our approach in Gridworld and Towers of Hanoi and show that it yields better results than standard Double Deep Q-Networks (DDQN) (Van Hasselt et al., 2016). Finally, we will conclude with discussions of future work.

2 Background

Reinforcement Learning (RL) problems are specified through a Markov Decision Process (MDP) $\langle \mathcal{S}, \mathcal{A}, R, T \rangle$ (Sutton & Barto, 1998). Here, $\mathcal{S}$ describes the states in the environment, $\mathcal{A}$ defines the actions the agent can take, $R(s)$ refers to the rewards an agent receives within state $s$, and $T(s' \mid s, a)$ is a transition model that specifies the probability of entering state $s'$ after taking action $a$ in $s$. A policy $\pi(a \mid s)$ estimates the probability of taking action $a$ in state $s$, and we are typically interested in learning an optimal policy that maximizes the expected long-term discounted return. Model-free approaches do not have access to $T$, and instead learn an action-value function $Q(s, a; \theta)$ that predicts the return after experiencing samples in the environment:

$$L(\theta) = \mathbb{E}_{\langle s, a, r, s' \rangle \sim D}\left[\left(r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta)\right)^{2}\right]. \qquad (1)$$

Here, $D$ is a replay buffer that stores experiences $\langle s, a, r, s' \rangle$ (Mnih et al., 2015). This loss aims to minimize the TD-error, or the difference between the expected return and the current prediction.
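As a concrete illustration of this loss, the snippet below computes the (double) TD-error for a batch of sampled transitions. It is a sketch only: `q_online` and `q_target` are hypothetical callables returning per-action Q-values for a batch of states, not part of the paper's implementation.

```python
import numpy as np

def td_errors(q_online, q_target, batch, gamma=0.99):
    """TD-errors for a batch of (s, a, r, s', done) transitions, double-DQN style.

    q_online(states) and q_target(states) are assumed to return arrays of
    shape (batch, n_actions) with the current and target-network Q-values.
    """
    s, a, r, s_next, done = batch
    idx = np.arange(len(a))
    # The online network selects the greedy next action ...
    a_next = np.argmax(q_online(s_next), axis=1)
    # ... and the target network evaluates it (the double-Q correction).
    bootstrap = q_target(s_next)[idx, a_next]
    target = r + gamma * (1.0 - done) * bootstrap
    prediction = q_online(s)[idx, a]
    return target - prediction  # the loss minimizes the square of these errors
```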

Learning Q-values often requires a large quantity of samples. Rather than directly experiencing the states, an alternative is to jointly use model-based planning to predict values. DYNA-Q (Sutton, 1990) updates values using imagined experiences. In this case, the parameters in Equation 1 may also be trained on imagined experiences.
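For reference, a minimal tabular Dyna-Q-style update (a generic sketch, not taken from the paper) shows how imagined transitions drawn from a learned model pass through the same Q-update as real experience:

```python
import random

def dyna_q_update(Q, model, s, a, r, s_next, actions,
                  alpha=0.1, gamma=0.95, n_planning=10):
    """One real Q-learning update followed by n_planning imagined updates.

    Q is a dict mapping (state, action) -> value; model is a dict mapping
    (state, action) -> (reward, next_state) learned from observed transitions.
    """
    # Update from the real transition.
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
        r + gamma * max(Q.get((s_next, b), 0.0) for b in actions) - Q.get((s, a), 0.0))
    # Record the transition in the (deterministic) model.
    model[(s, a)] = (r, s_next)
    # Planning: replay imagined transitions sampled from the model.
    for _ in range(n_planning):
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        Q[(ps, pa)] += alpha * (
            pr + gamma * max(Q.get((ps_next, b), 0.0) for b in actions) - Q[(ps, pa)])
```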

3 Related Work

When we have access to the true dynamics model, purely model-based approaches such as dynamic programming can be used to compute values over all states (Sutton & Barto, 1998). However, when the state space is large or continuous, it may be intractable to iterate over the entire state space. Q-Learning is a model-free approach that updates values in an online manner by directly visiting states, and function approximation techniques such as Deep Q-Learning enable generalization to unseen states (Mnih et al., 2015). Hybrid approaches that combine model-based and model-free information can also be used. DYNA-Q (Sutton, 1990), for example, was an early approach that used imagined roll-outs to update Q-values as if they had been experienced in the true environment. More recent examples include NAF (Gu et al., 2016) and I2A (Weber et al., 2017). However, these approaches only use forward imagination.

A similar approach to our own does value iteration in reverse (Zang et al., 2007), but this is a purely model-based approach, and it does not learn a reverse model. A related approach performs bidirectional search from the start and goal (Baldassarre, 2003), but that work learns values only, whereas we aim to learn action-values. Another comparable work solves problems by using a reverse curriculum near goal states (Florensa et al., 2017). However, that approach assumes the agent can be initialized near the goal. We do not make this assumption, as knowing what the goal state is does not mean that we know how to get to it.

Many works have used domain knowledge to help speed up learning, for example through reward shaping (Ng et al., 1999). Another approach is to use the experiences in the replay buffer more efficiently. Prioritized experience replay (Schaul et al., 2015) aims to replay samples that have high TD-error. Hindsight experience replay (Andrychowicz et al., 2017) treats each state in an environment as a potential goal so that the system can learn even when it fails to reach the desired target.

The concept of using reverse dynamics is similar to inverse dynamics (Agrawal et al., 2016; Pathak et al., 2017). In those approaches, a system predicts the dynamics that yielded a transition between two states. In our approach, we use the state and action to predict the previous state. The purpose of this function is to reverse an action and use this unraveling to learn values near the goal.

4 Approach

while training do
       /* Forward step */ Take a step in the environment, add $\langle s_t, a_t, r_t, s_{t+1} \rangle$ to $D$, and train $Q$ and the backward model $b$ with samples from $D$
       /* Backward step (asynchronous) */ Sample a goal state $s \sim G$
       for $M$ imagination steps do
             Sample an action $a$ (random or greedy); predict the previous state $\hat{s} \leftarrow s - b(s, a)$
             Query $R$, add the imagined experience $\langle \hat{s}, a, r, s \rangle$ to $D$, and set $s \leftarrow \hat{s}$
       end for
end while
Algorithm 1 Forward-Backward RL
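Below is a minimal Python sketch of the loop in Algorithm 1. All interfaces (`env`, `agent`, `backward_model`, `replay`, `sample_goal`) are hypothetical stand-ins rather than the authors' implementation, and the backward step, which the paper runs asynchronously, is shown inline for simplicity.

```python
def fbrl_train_step(env, agent, backward_model, replay, sample_goal,
                    state, n_imagination=5):
    """One forward step of real experience plus one backward imagination roll-out."""
    # --- Forward step: act in the real environment and learn from the buffer.
    action = agent.act(state)
    next_state, reward, done = env.step(action)
    replay.add((state, action, reward, next_state, done))
    agent.train(replay.sample())           # e.g. a DDQN update
    backward_model.train(replay.sample())  # fit the reverse dynamics

    # --- Backward step: imagine in reverse from a known goal state.
    s = sample_goal()                      # s ~ G
    for _ in range(n_imagination):
        a = agent.sample_backward_action(s)              # random or greedy choice
        prev = s - backward_model.predict_delta(s, a)    # undo action a
        r = env.reward(s)                                # R is assumed queryable
        replay.add((prev, a, r, s, False))               # treat as ordinary experience
        s = prev

    return env.reset() if done else next_state
```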

We now introduce our approach, Forward-Backward Reinforcement Learning (FBRL). In this work, we utilize both imagined and real experiences to learn values. A forward step uses samples of real experiences originating from the start state to update Q-values, and a backward step uses imagined states that are asynchronously predicted in reverse from known goal states. We hypothesize that this approach will improve our model of values in the vicinity of the goal, and thus expedite learning. We now describe the preliminaries for our approach.

4.1 Preliminaries

We specify FBRL problems through a modified MDP $\langle \mathcal{S}, \mathcal{A}, R, G \rangle$. As before, $\mathcal{S}$ corresponds to the states in the environment, $\mathcal{A}$ are the actions the agent can take, and $R(s)$ represents the rewards an agent receives in state $s$. We assume that $R$ does not distinguish between real and imagined inputs and can be queried at any time. Finally, $G$ is a distribution of goal states from which we can sample uniformly.
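A minimal sketch of how such a problem specification might be represented in code is shown below; the interface and names are illustrative assumptions, not part of the paper.

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence

State = Any  # e.g. a coordinate pair or a bit-string

@dataclass
class FBRLProblem:
    """Modified MDP for FBRL: the reward is queryable and goal states are samplable."""
    actions: Sequence[int]              # A: the discrete action set
    reward: Callable[[State], float]    # R(s): accepts real and imagined states alike
    sample_goal: Callable[[], State]    # draws uniformly from the goal distribution G
```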

4.2 Backwards model

We aim to learn a backward transition model that captures what happens if we undo an action in a state. We use a tuple of experience $\langle s_t, a_t, s_{t+1} \rangle$ to learn the model. Rather than predicting the previous state directly, we aim to learn the difference between the two states, $\Delta_t = s_{t+1} - s_t$. This allows the model to learn how states change, rather than absolute positional information. It reduces the expected range of output values and generally centers them around zero, resulting in a more stable estimate. This formulation is appropriate since we use states from the start of the problem to train the backwards model, which is then applied near goal states, where there is initially little training data.

The backwards model is a neural network $b(s_{t+1}, a_t; \theta_b)$ that is trained to predict $\hat{\Delta}_t$, where $\Delta_t = s_{t+1} - s_t$. We can then predict the previous state as $\hat{s}_t = s_{t+1} - \hat{\Delta}_t$. The loss for the backward model is $L_b = H\big(\Delta_t, b(s_{t+1}, a_t; \theta_b)\big)$, where $H$ denotes a Huber loss.
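As a rough sketch of such a model (assuming a PyTorch implementation, which the paper does not specify, and an arbitrary hidden size), it could look like the following:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BackwardModel(nn.Module):
    """Predicts delta = s_{t+1} - s_t from (s_{t+1}, a_t), so that s_t ~ s_{t+1} - delta."""

    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.n_actions = n_actions
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_actions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, next_state, action):
        # Condition the prediction on a one-hot encoding of the action.
        a_onehot = F.one_hot(action, self.n_actions).float()
        return self.net(torch.cat([next_state, a_onehot], dim=-1))

def backward_loss(model, s, a, s_next):
    """Huber (smooth L1) loss between the true and predicted state difference."""
    return F.smooth_l1_loss(model(s_next, a), s_next - s)
```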

In some environments, it may be impossible to learn an accurate deterministic backward model, even if the problem has deterministic actions. For example, if an agent is next to a wall, we might not know whether it previously bumped into the wall or took a step towards it. Additionally, for discrete-valued problems, it may be difficult to learn a network that predicts discrete values. These issues are compounded further in stochastic settings. To address this, we formulate the problem using a variational approach. If we know the distribution over $\Delta_t$, then we can predict a distribution over potential outcomes. In this formulation, $b(s_{t+1}, a_t; \theta_b)$ represents a probability distribution over each state variable, which can be trained using a cross-entropy loss against the true distribution.
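One way such a distributional backward model could be set up (again a sketch under assumed details, not the paper's code) is to predict a categorical distribution per discrete state variable and train it with a per-variable cross-entropy:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CategoricalBackwardModel(nn.Module):
    """For each discrete state variable, predicts a distribution over its previous value."""

    def __init__(self, n_vars, n_values, n_actions, hidden=64):
        super().__init__()
        self.n_vars, self.n_values, self.n_actions = n_vars, n_values, n_actions
        self.net = nn.Sequential(
            nn.Linear(n_vars * n_values + n_actions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_vars * n_values),
        )

    def forward(self, next_state_onehot, action):
        a = F.one_hot(action, self.n_actions).float()
        x = torch.cat([next_state_onehot.flatten(1), a], dim=-1)
        return self.net(x).view(-1, self.n_vars, self.n_values)   # per-variable logits

def categorical_backward_loss(model, prev_state_indices, action, next_state_onehot):
    """Cross-entropy between predicted distributions and the true previous values."""
    logits = model(next_state_onehot, action)          # (batch, n_vars, n_values)
    return F.cross_entropy(logits.flatten(0, 1), prev_state_indices.flatten())
```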

4.3 Action sampling

Another important consideration is how to sample actions that lead to useful updates. Our approach either samples actions at random or takes a more greedy step that aims to direct the roll-outs back towards the start by moving to imagined predecessor states with high Q-values.
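A simple version of this sampling rule might look like the following sketch; the greedy branch scores each candidate predecessor by its best Q-value, which is one plausible reading of the criterion, and `backward_model` and `q_values` are hypothetical interfaces.

```python
import random

def sample_backward_action(state, actions, backward_model, q_values, eps=0.5):
    """Pick an action to undo: random with probability eps, otherwise greedy.

    backward_model(state, a) returns the imagined predecessor state, and
    q_values(s) returns a sequence of per-action Q-values for state s.
    """
    if random.random() < eps:
        return random.choice(list(actions))
    # Greedy: prefer the action whose imagined predecessor looks most valuable.
    return max(actions, key=lambda a: max(q_values(backward_model(state, a))))
```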

4.4 Backwards Imagination

Algorithm 1 shows the pseudo-code for our approach. In the forward step, we train the agent using experiences from the replay buffer, according to whichever learning paradigm we choose. In this work, we use DDQN. We additionally use real experiences to update the backward model.

The backward step takes place asynchronously. During this process, we use backward imagination for a limited number of steps. Starting from a sampled goal state, the approach samples an action, uses the model to imagine one step backwards, and then repeats the process from the resulting state. These imagined experiences are used to augment the replay buffer.
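To make the effect of these imagined experiences concrete, here is a small self-contained toy (not from the paper): a ten-state corridor with an exact hand-written reverse model, where a few backward roll-outs from the goal already assign value to states the agent has never visited forwards.

```python
import numpy as np

# A 10-state corridor: the agent starts at state 0, the goal is state 9.
# Actions: 0 = left, 1 = right. Reward 1 only when the goal is entered.
N, GOAL, GAMMA, ALPHA = 10, 9, 0.9, 0.5
Q = np.zeros((N, 2))

def backward_step(s, a):
    """Exact reverse model for the corridor: the state from which action a leads to s."""
    prev = s - 1 if a == 1 else s + 1
    return min(max(prev, 0), N - 1)

# Backward imagination: roll out in reverse from the goal and update Q
# as if the imagined transitions had been experienced.
for _ in range(20):                 # several short roll-outs from the goal
    s = GOAL
    for _ in range(5):              # imagination steps per roll-out
        a = np.random.randint(2)
        prev = backward_step(s, a)
        r = 1.0 if s == GOAL else 0.0
        Q[prev, a] += ALPHA * (r + GAMMA * Q[s].max() - Q[prev, a])
        s = prev

print(Q[6:])  # states near the goal typically already carry value
```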

It is important to note that initially the backwards model is unlikely to accurately predict the true dynamics. The model starts by being trained on experience near the starting region, and the dynamics exercised outside this initial region can vary significantly, especially near the goal. For example, consider a maze navigation task where the layout beyond the explored region is unknown, or the difference in dynamics between a humanoid lying down and one standing up.

While the model may start out inaccurate, it provides a constantly improving signal that helps shape the value function, which is then used to guide exploration. In this way, it acts like an intrinsic reward, providing a predicted direction of exploration for the model. Consider again the navigation problem, where the model in the immediate region will learn a factored representation for locomotion but cannot predict the walls of the maze further away. The hallucinated experience will likely predict movement through walls. While this is inaccurate, it does provide a shape for the value function that encourages traveling towards the goal until a wall is discovered. Once a wall is discovered, the model will update and the value function will shift to anticipate its presence. As training progresses, the system will capture larger regional dynamics and start to predict potential global dynamics, e.g., the presence of walls beyond what has been directly observed. As the system approaches the goal, the backward model will converge to the real model.

5 Experiments

Figure 1: Gridworld and Towers of Hanoi environments.

The purpose of our experiments is to demonstrate that FBRL can significantly speed up learning in environments with sparse rewards. We evaluate our approach in Gridworld and Towers of Hanoi, illustrated in Figure 1. We instantiate FBRL by augmenting DDQN and compare it against a standard DDQN baseline.

5.1 Gridworld

Figure 2: Results for Gridworld at increasing grid sizes, using a fixed horizon for each size. The results are averaged over multiple trials.

We first evaluate our approach in an $n \times n$ Gridworld. We use this environment because it allows us to easily show the benefits of our approach as the reward becomes more sparse. The agent's actions are to move up, down, left, and right by a single unit, and its state consists of its cartesian coordinates. The agent is initialized in the bottom-left corner of the grid, receives a positive reward when it reaches the top-right corner, and incurs a small step cost per time-step. The inputs to the backward model are the next state and action, and it must learn to predict the state difference $\Delta$. The model architecture is a fully-connected layer followed by a ReLU, followed by another fully-connected layer with one output per state dimension. For FBRL, we used a fixed number of imagination steps with a single asynchronous stream.
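For concreteness, a minimal Gridworld environment matching this description might look as follows; the reward and cost constants are illustrative assumptions, since the exact values are not reproduced here.

```python
import numpy as np

class GridWorld:
    """n x n grid: start at the bottom-left (0, 0), goal at the top-right (n-1, n-1)."""

    MOVES = [(0, 1), (0, -1), (-1, 0), (1, 0)]  # up, down, left, right

    def __init__(self, n=10, goal_reward=1.0, step_cost=0.01):
        self.n, self.goal_reward, self.step_cost = n, goal_reward, step_cost
        self.pos = np.array([0, 0])

    def reset(self):
        self.pos = np.array([0, 0])
        return self.pos.copy()

    def step(self, action):
        dx, dy = self.MOVES[action]
        self.pos = np.clip(self.pos + np.array([dx, dy]), 0, self.n - 1)
        done = bool((self.pos == self.n - 1).all())
        reward = self.goal_reward if done else -self.step_cost
        return self.pos.copy(), reward, done
```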

Figure 2 shows the results for different grid sizes. As we increase the size of the grid, i.e., as the goal gets further away, there is a clear advantage to using reverse imagination: the performance gap between FBRL and DDQN widens as the grid grows. This suggests the approach is better suited to longer-horizon sparse-reward environments, while not degrading performance on short-horizon tasks.

5.2 Towers of Hanoi

Figure 3: Results for Towers of Hanoi with varying numbers of discs, using a fixed horizon for each setting. The results are averaged over multiple trials.

The next environment we evaluate in is Towers of Hanoi with a varying number of discs. In this problem, the agent needs to move the discs from the first to the third pillar, but it can only place a disc on top of another disc if the moved disc is smaller. The actions are to move each disc to the first, second, or third pillar. The agent receives a positive reward when all discs are on the third pillar and incurs a step cost per time-step. The inputs to the backward model are bit-strings indicating which pillar each disc is on; for example, the environment in Figure 1, with the small disc on the first pillar and the large disc on the third, has the corresponding bits set for those positions. The backward model predicts a distribution over possible values for each bit. The model architecture is a fully-connected layer followed by a ReLU, followed by another fully-connected layer whose outputs represent the distribution over each bit. For FBRL, we used a fixed number of imagination steps with multiple asynchronous streams.
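One natural encoding of this bit-string representation (one one-hot group of three bits per disc; an assumption, since the paper's exact layout is not reproduced here) is sketched below.

```python
def encode_hanoi(disc_pillars, n_pillars=3):
    """Encode disc positions as a bit-string: one one-hot group per disc.

    disc_pillars[i] is the 0-based pillar index of disc i, smallest disc first.
    """
    bits = []
    for pillar in disc_pillars:
        group = [0] * n_pillars
        group[pillar] = 1
        bits.extend(group)
    return bits

# Small disc on the first pillar, large disc on the third pillar:
print(encode_hanoi([0, 2]))  # [1, 0, 0, 0, 0, 1]
```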

Figure 3 shows the results for running Towers of Hanoi with different numbers of discs. We again see an advantage for FBRL as the goal gets further away: as the number of discs increases, FBRL outperforms DDQN. We did find, though, that FBRL's performance degraded in one of the disc settings, which may be due to overfitting.

6 Conclusion

In this paper, we have introduced an approach for speeding up learning in problems with sparse rewards. We introduced FBRL, which takes imagined steps in reverse from the goal, and demonstrated that this approach can perform better than DDQN in Gridworld and Towers of Hanoi. There are many directions for extending this work. We were interested in evaluating a backward planner, but we could also train using both forward and backward imagination. Another direction is to improve the planning policy: we used an exploratory and a greedy approach, but did not evaluate how to balance the two. We could also use prioritized sweeping (Moore & Atkeson, 1993), which chooses actions that lead to states with high TD-error.

7 Acknowledgements

We thank Anoop Korattikara, Himanshu Sahni, Sergio Guadarrama, and Shixiang Gu for useful discussions and feedback about this work.

References

Appendix A Experimental setup

For each experiment, we used a batch-size of . The discount factor was . The exploration parameter was initialized to and decayed to . The replay memory had a size of and we collected initial samples before training DDQN. The architectures for the backwards models are described in the main text.

A.1 Gridworld

The architecture for DDQN was a fully-connected layer followed by a ReLU, followed by another fully-connected layer with one output per action. The target network was updated at a fixed interval. FBRL used the same settings, except with an increased learning rate.

A.2 Towers of Hanoi

The architecture for DDQN was a fully-connected layer followed by a ReLU, followed by another fully-connected layer with one output per action. The target network was updated at a fixed interval. As with Gridworld, FBRL used the same architecture as DDQN, but we obtained better results with a reduced learning rate.