
Perceptual Values from Observation

05/20/2019
by Ashley D. Edwards, et al.

Imitation from observation is an approach for learning from expert demonstrations that lack action information, such as videos. Recent approaches to this problem fall into two broad categories: training dynamics models that aim to predict the actions taken between states, and learning rewards, or features from which to compute them, for Reinforcement Learning (RL). In this paper, we introduce a novel approach that learns values, rather than rewards, directly from observations. We show that by using values we can significantly speed up RL, relative to sparse-reward specifications, by removing the need to bootstrap action-values.


1 Introduction

When people solve tasks, there is often a clear ordering over which steps are preferable to others. For example, when building a piece of furniture, we may assume that it is better to have the pieces outside of the box than inside it, and a leg screwed into its base rather than lying on the floor. The final steps of a problem are often more desirable than earlier ones, assuming the task is being completed optimally, because they indicate that we have fewer steps left to go. Put another way, these later steps are typically more valuable than those seen at the beginning of the problem.

In this paper, we use this insight to compute values from expert observations without access to the underlying actions or rewards. Because task goals are often achieved at or near the end of a demonstration trajectory, it is likely that later states should have more value than early ones. Hence, given expert state trajectories, we compute the expected value of each state by assuming that the reward at the last state in the trajectory is 1 and 0 everywhere else, and then backing up values to the start of the trace by utilizing knowledge of the length of the trajectory in a self-supervised manner. We show how these values can be used to learn action-values for reinforcement learning more efficiently than training from sparse rewards.

We formally introduce our approach, Perceptual Values from Observation (PVO), which aims to learn values directly from expert observations. We show that this approach learns meaningful values that increase as the goal nears, and that these values can be used to train a reinforcement learning agent. We demonstrate the learned values in a maze environment (Zuo, 2018), a liquid pouring task (Sermanet et al., 2016), and a task for picking up objects (Goyal et al., 2017), and show that PVO can be used to train RL agents in OpenAI’s CoinRun environment (Cobbe et al., 2018).

2 Related work

There has recently been a great deal of interest in learning behaviors from expert state observations. This setting opens up several opportunities for obtaining training examples for agents; there is a wealth of pre-existing videos of humans and other entities, such as animated characters and animals, performing tasks that we might like an agent to learn. Learning in this manner becomes more difficult, however, because the underlying actions and rewards are unknown. In order to make use of the abundance of video data available on the web, we should consider how we can learn goals and values without access to this information.

Given a set of single-goal observations, one recent approach is to train a classifier to predict if a state is a goal or not and then use this discriminator as a reward signal (Xie et al., 2018; Singh et al., 2019). However, goal-prediction is essentially a sparse-reward problem and thus may not shape behavior. As such, while single-goal representations require little demonstration data, they may require more environment interactions to train reinforcement learning agents than methods that provide more guidance. In general, we should expect a trade-off between the amount of experience we need to provide an agent and the amount of time it will take the agent to learn.

To that point, we can train models using already existing videos or other forms of observation. This work focuses on learning to imitate from such sequences. One approach to this problem is to learn or use pre-existing features for computing rewards (Edwards et al., 2016; Liu et al., 2017; Sermanet et al., 2017; Aytar et al., 2018; Yu et al., 2019). Such approaches likely offer a better shaped reward than goal-prediction based rewards because they are based on the distance to the goal. Another approach is to learn rewards directly (Sermanet et al., 2016; Edwards & Isbell Jr, 2017) or in an adversarial manner (Torabi et al., 2018b). Finally, we can avoid learning rewards at all by learning dynamics that aim to infer the actions taken in the state sequences (Pathak et al., 2018; Torabi et al., 2018a; Edwards et al., 2018). However, learning dynamics can often be difficult and may require a large amount of demonstration or environmental data.

This paper introduces another mechanism for learning from state observations. In particular, we are interested in learning values because they allow us to bypass engineering reward functions that may be susceptible to locally sub-optimal solutions. As we will show, by using values we can additionally remove the bootstrapped component of training reinforcement learning.

3 Formalities

We are interested in solving problems specified through a Markov Decision Process (MDP), where we do not have access to the transition function or environment rewards, and the states consist of visual inputs. We are given a set of expert state observations $\mathcal{D} = \{\tau^{(1)}, \ldots, \tau^{(N)}\}$, where each trajectory $\tau = \langle s_0, s_1, \ldots, s_T \rangle$ consists of states only, and we assume we also do not have access to the underlying expert actions or rewards.

4 Approach

Figure 1: Value assignment for a length-$T$ trajectory sampled from expert demonstrations. We can train these values in a self-supervised manner by utilizing the number of steps a state is from the end of the trajectory.

Given a trajectory of expert observations $\tau = \langle s_0, s_1, \ldots, s_T \rangle$, PVO aims to learn a value function $V_\phi$ that approximates the expert value function. As we noted, we are not given the underlying reward function with these demonstrations. Rather, we enforce a surrogate reward based on a simple assumption: tasks obtained from expert observations can be specified through a sparse reward of 1 at the end of the trajectory and 0 elsewhere.

This hypothesis comes from the observation that the goal will often occur at the end of the trajectory, especially in goal-directed tasks. However, we enforce this reward function even if a trajectory does not actually end at the goal. Using this assumption, we may backtrack values from the end of a trajectory to the start without knowing the actions taken. We then use this value function to learn values of novel states and to learn action-values for RL.

4.1 Step 1: Learning values from observation

Algorithm 1: Perceptual Values from Observation (PVO)

The first step of this approach aims to obtain values from expert observations. Given a length-$T$ trajectory $\langle s_0, s_1, \ldots, s_T \rangle$, we first make the assumption that $s_T$ is a terminal goal state, and so its reward is assigned to $r(s_T) = 1$.

Note that the expected value of some state $s_t$ can be expressed as:

$V(s_t) = \mathbb{E}\big[\sum_{k=0}^{\infty} \gamma^k r_{t+k}\big]$ (1)

Because the reward at $s_T$ is $1$, we can assign the values using samples from the demonstration as $V(s_T) = 1$, $V(s_{T-1}) = \gamma$, $V(s_{T-2}) = \gamma^2$, and so on.

In general, we express the value of some observation $s_t$ as:

$V(s_t) = \gamma^{T - t}$ (2)

This update is shown in figure 1. Here $T - t$ is effectively the number of steps remaining in the trajectory. It corresponds to how much the value at the goal will be discounted from state $s_t$ before reaching the terminal state. Because we have a sequence of optimal expert observations, we know how many steps remain.
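As a concrete illustration of equation 2 (with an assumed discount of $\gamma = 0.9$ and a trajectory of length $T = 3$), the targets back up geometrically from the terminal state:

$V(s_3) = 1, \quad V(s_2) = 0.9, \quad V(s_1) = 0.81, \quad V(s_0) = 0.729.$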

We use a deep neural network $V_\phi$ to learn the values, and aim to minimize the following loss:

$\mathcal{L}(\phi) = \big(V_\phi(s_t) - \gamma^{T - t}\big)^2$ (3)

This simple yet effective approach is shown in Algorithm 1.
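Since the pseudocode of Algorithm 1 is not reproduced above, the following is a minimal PyTorch-style sketch of the value-learning step as described in the text. The convolutional architecture, trajectory format, and hyperparameters (`gamma`, the optimizer, the number of updates) are illustrative assumptions rather than the authors' exact implementation.

```python
import random
import torch
import torch.nn as nn

class ValueNet(nn.Module):
    """Illustrative value network: a small convolutional encoder over image
    observations followed by a scalar value head (architecture is assumed)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(1)

    def forward(self, obs):
        return self.head(self.encoder(obs)).squeeze(-1)

def train_pvo_values(trajectories, gamma=0.99, steps=10_000, lr=1e-4):
    """Fit V_phi(s_t) to the self-supervised target gamma**(T - t).

    `trajectories` is a list of state-only demonstrations, each a float
    tensor of shape (T + 1, C, H, W) ordered from start to terminal state.
    """
    value_net = ValueNet()
    optimizer = torch.optim.Adam(value_net.parameters(), lr=lr)
    for _ in range(steps):
        tau = random.choice(trajectories)          # sample a demonstration
        T = tau.shape[0] - 1                       # index of the terminal state
        t = random.randint(0, T)                   # sample a state s_t
        target = torch.tensor(gamma ** (T - t))    # equation (2)
        pred = value_net(tau[t].unsqueeze(0))[0]   # V_phi(s_t)
        loss = (pred - target) ** 2                # equation (3)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return value_net
```

In practice the updates would be batched over many sampled states, but the per-sample form above maps one-to-one onto equations 2 and 3.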

4.2 Step 2: Learning action-values from values

Figure 2: Heatmap of values learned by PVO in unseen maze environments. Brighter colors indicate larger values.
Figure 3: Values learned by PVO in the pouring dataset. The top row represents select frames from a single, unseen video. The bottom row represents learned values for each frame in the video.

Given the learned values, we aim to use RL to learn action-values and a corresponding policy. We introduce two approaches to this problem: 1) using the values to replace bootstrapping in Q-learning and 2) using the values as a potential-based shaping reward.

4.2.1 Replacing bootstrapping in Q-learning

The typical loss update for Q-learning can be defined as:

$\mathcal{L}(\theta) = \big(Q_\theta(s_t, a_t) - y_t\big)^2$ (4)

where $y_t = r_t + \gamma \max_{a'} Q_{\theta^-}(s_{t+1}, a')$ and $\theta^-$ are the parameters of a target network. The problem with this approach is that it requires making estimates based on a moving target. We aim to remove this bootstrapped step by replacing the target network with our estimate of the value function.

The Bellman equation states that the maximal action-value is equivalent to the value of a state under the optimal policy (Sutton & Barto, 1998):

$V^*(s) = \max_a Q^*(s, a)$ (5)

Given this definition, we can replace the max operator from equation 4 with the learned value function $V_\phi$, and modify the target accordingly:

$y_t = r_t + \gamma V_\phi(s_{t+1})$ (6)

Because we assume a sparse reward obtained only at the goal, and because we do not compute action-values at terminal states, $r_t$ can be replaced with the surrogate reward of $0$, and so the target becomes:

$y_t = \gamma V_\phi(s_{t+1})$ (7)
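As a rough sketch of how this target could be used in a standard deep Q-learning update (reusing the hypothetical value network from the earlier sketch; `q_net` and the batch tensors are likewise illustrative), the target-network term is simply swapped for the frozen learned values:

```python
import torch
import torch.nn.functional as F

def pvo_value_q_loss(q_net, value_net, obs, actions, next_obs, gamma=0.99):
    """Q-learning loss with the bootstrapped target replaced by learned values.

    Implements the target of equation 7, y_t = gamma * V_phi(s_{t+1}),
    with the surrogate reward r_t taken to be 0.
    """
    # Q_theta(s_t, a_t) for the integer action indices taken in the batch
    q_values = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():                      # V_phi is held fixed; no gradient flows into it
        targets = gamma * value_net(next_obs)  # y_t = gamma * V_phi(s_{t+1})
    return F.mse_loss(q_values, targets)
```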

4.2.2 Potential-based shaping reward

If the value function is incorrect for some states, using it as a replacement for bootstrapping may provide too strong a signal. Because this formulation aims to directly maximize the learned value function, the agent may get stuck in locally sub-optimal regions if that value function is not truly optimal.

As such, we also introduce the use of a potential-based shaping reward (Ng et al., 1999):

$F(s_t, s_{t+1}) = \gamma V_\phi(s_{t+1}) - V_\phi(s_t)$ (8)
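For illustration only, a shaping term of this form could be added to the environment reward in a thin wrapper; the sketch below assumes the classic four-value Gym step API and reuses the hypothetical value network from the earlier snippets, with image preprocessing elided.

```python
import gym
import torch

class PVOShapingWrapper(gym.Wrapper):
    """Adds the potential-based shaping term of equation 8 to the reward."""

    def __init__(self, env, value_net, gamma=0.99):
        super().__init__(env)
        self.value_net = value_net
        self.gamma = gamma
        self._prev_value = None

    def _value(self, obs):
        # The learned value function acts as the potential and is held fixed.
        # Any observation preprocessing (channel order, scaling) is elided here.
        with torch.no_grad():
            obs_t = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
            return self.value_net(obs_t).item()

    def reset(self, **kwargs):
        obs = self.env.reset(**kwargs)
        self._prev_value = self._value(obs)                    # potential of s_t
        return obs

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        next_value = self._value(obs)                          # potential of s_{t+1}
        shaping = self.gamma * next_value - self._prev_value   # F(s_t, s_{t+1})
        self._prev_value = next_value
        return obs, reward + shaping, done, info
```

Because the shaping term is potential-based, it does not change the set of optimal policies (Ng et al., 1999).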

5 Experiments

Figure 4: Values learned by PVO in the something something dataset. The top row represents select frames from a single, unseen video. The bottom row represents learned values for each frame in the video.

Our experiments aim to demonstrate that PVO can learn values from observation only and that these values can be used to train reinforcement learning agents. We evaluate the agent within unseen environments and aim to determine if PVO learns a general value function that can infer values outside of the training environments.

5.0.1 Environments

In this section, we discuss the environments used for evaluation. We were interested in goal-directed tasks that consisted of a desired target state. We were additionally interested in demonstrating generalization and thus also evaluated within procedurally generated environments.

5.0.2 Maze environment

The maze environment, shown in figure 2, consists of procedurally generated mazes. The agent can take actions up, down, left, and right. The game ends when the agent (blue) reaches some target goal (green). We used search to obtain demonstrations in this environment. The demonstration set only consisted of mazes of sizes 4x4 to 20x20; we aim to determine if PVO can learn values in unseen mazes of size 25x25. We obtained 1000 episodes of demonstrations for a simple empty maze and a more complicated one where the agent must navigate around obstacles to reach the goal.

5.0.3 Liquid pouring dataset

The liquid pouring dataset has been used to train robots to learn to pour from videos of humans (Sermanet et al., 2016). We use pouring demonstrations to train values and aim to determine if PVO can infer values in an unseen video.

5.0.4 Something something dataset

The something something dataset (Goyal et al., 2017) consists of videos of humans doing something to something, for example, pouring something into something, plugging something into something, etc. We use videos of people picking up something from a surface to determine if PVO can infer values in an unseen video.

5.0.5 CoinRun environment

Figure 5: CoinRun reinforcement learning results. The trials were averaged over runs with a different, unseen, procedurally generated level for each method. The policy was evaluated for runs every steps.

The CoinRun environment (Cobbe et al., 2018) consists of procedurally generated platformer levels. The background, player, enemies, platforms, obstacles, and goal locations are all randomly instantiated. The agent can take actions left, right, jump, down, jump-left, jump-right, and do-nothing. The game ends when the agent reaches a single coin. We trained PPO (Schulman et al., 2017) for 2.5 million steps to obtain 1000 episodes of expert demonstrations. We evaluate on unseen easy levels.
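As a sketch of how such observation-only demonstrations might be collected from a trained policy (the Gym-style environment handle and the `policy` callable are assumptions; the paper only states that PPO was trained for 2.5 million steps), rollouts can be recorded with the actions and rewards discarded:

```python
def collect_state_only_demos(env, policy, num_episodes=1000):
    """Roll out a trained policy and keep only the observation sequences.

    `env` is assumed to follow the classic Gym API and `policy` maps an
    observation to an action. Actions and rewards are discarded so the
    resulting dataset matches the observation-only setting of PVO.
    """
    demos = []
    for _ in range(num_episodes):
        obs, done = env.reset(), False
        states = [obs]
        while not done:
            obs, reward, done, info = env.step(policy(obs))
            states.append(obs)
        demos.append(states)  # a single state-only trajectory s_0, ..., s_T
    return demos
```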

5.1 Results

In this section, we discuss the results of using PVO to learn values and to train RL agents. Our experiments in the maze environment aim to demonstrate that PVO can learn meaningful values in unseen environments. Figure 2 shows a heatmap of the values learned using this approach. Not only is PVO capable of detecting where the goal is, but it can also infer the values of states around the goal.

We also demonstrate value learning in the liquid pouring task, as shown in figure 3. PVO has clearly learned a meaningful value function for this task, even though it was only trained with 10 demonstrations. The initial image is an empty glass without any pouring and the value is clearly low. As the glass becomes more full, the values increase.

Finally, we show value learning for the “picking up something” task in the something something dataset, as shown in figure 4. PVO has again learned meaningful values that increase as the task nears completion.

Our experiments in the CoinRun environment aim to demonstrate that PVO can be used for training reinforcement learning agents in unseen environments. The results are shown in figure 5. We call the PVO method that replaces bootstrapping with the learned values "PVO value" and the method that uses the values as a shaping reward "PVO shaping". Both methods learn significantly faster than standard RL. We have thus demonstrated that PVO can be used for imitation and can generalize to unseen environments from observation data alone. Additionally, PVO can be used to replace bootstrapping for RL, and the shaping reward was similarly effective. One reason for this may be that using the value function to replace bootstrapping essentially initializes the Q-values, which has been shown to be equivalent to potential-based reward shaping (Wiewiora, 2003).

6 Conclusion

In this paper, we have demonstrated that PVO is able to learn values for difficult tasks, and that it can be used to train reinforcement learning agents. We have shown that this approach can generalize to unseen configurations. Finally, we have demonstrated that PVO can significantly speed up reinforcement learning within sparse reward settings.

References