Distributional Reward Decomposition for Reinforcement Learning

11/06/2019 · by Zichuan Lin, et al.

Many reinforcement learning (RL) tasks have specific properties that can be leveraged to modify existing RL algorithms to adapt to those tasks and further improve performance, and a general class of such properties is the presence of multiple reward channels. In such environments the full reward can be decomposed into sub-rewards obtained from different channels. Existing work on reward decomposition either requires prior knowledge of the environment to decompose the full reward, or decomposes the reward without prior knowledge but at the cost of degraded performance. In this paper, we propose Distributional Reward Decomposition for Reinforcement Learning (DRDRL), a novel reward decomposition algorithm that captures the multiple-reward-channel structure under a distributional setting. Empirically, our method captures the multi-channel structure and discovers meaningful reward decompositions without any prior knowledge. Consequently, our agent achieves better performance than existing methods on environments with multiple reward channels.


1 Introduction

Reinforcement learning has achieved great success in decision-making problems since Deep Q-learning was proposed by Mnih et al. (2015). While general RL algorithms have been studied extensively, here we focus on RL tasks with specific properties that can be exploited to modify general RL algorithms for better performance. Specifically, we focus on RL environments with multiple reward channels where only the full reward is observable.

Reward decomposition has been proposed to investigate such properties. For example, in the Atari game Seaquest, the environment reward can be decomposed into sub-rewards for shooting sharks and sub-rewards for rescuing divers. Reward decomposition views the total reward as the sum of sub-rewards that are usually disentangled and can be obtained independently (Sprague and Ballard (2003); Russell and Zimdars (2003); Van Seijen et al. (2017); Grimm and Singh (2019)), and aims at decomposing the total reward into these sub-rewards. The sub-rewards may further be leveraged to learn better policies.

Van Seijen et al. (2017) propose to split a state into different sub-states, each with a sub-reward obtained by training a general value function, and to learn multiple value functions with the sub-rewards. This architecture is rather limited because it requires prior knowledge of how to split the state into sub-states. Grimm and Singh (2019) propose a more general method for reward decomposition via maximizing the disentanglement between sub-rewards. In their work, an explicit reward decomposition is learned by maximizing the disentanglement of two sub-rewards estimated with action-value functions. However, their method requires that the environment can be reset to arbitrary states and therefore cannot be applied to general RL settings where states can hardly be revisited. Furthermore, despite the meaningful reward decomposition they achieve, they do not leverage the decomposition to learn better policies.

In this paper, we propose Distributional Reward Decomposition for Reinforcement Learning (DRDRL), an RL algorithm that captures the latent multiple-channel structure of the reward under the distributional RL setting. Distributional RL differs from value-based RL in that it estimates the distribution rather than the expectation of returns, and therefore captures richer information. We propose an RL algorithm that estimates the distributions of the sub-returns and combines them to obtain the distribution of the total return. To avoid trivial decompositions such as all-or-nothing or half-and-half splits, we further propose a disentanglement regularization term that encourages the sub-returns to diverge. To better separate reward channels, we also design our network to learn different state representations for different channels.

We test our algorithm on selected Atari games with multiple reward channels. Empirically, our method:

  • Discovers meaningful reward decomposition.

  • Requires no external information.

  • Achieves better performance than existing RL methods.

2 Background

We consider a general reinforcement learning setting in which the interaction of the agent and the environment can be viewed as a Markov Decision Process (MDP). Denote the state space by $\mathcal{X}$, the action space by $\mathcal{A}$, the state transition function by $P$, the state-action dependent reward function by $R$, and the discount factor by $\gamma$; we write this MDP as $(\mathcal{X}, \mathcal{A}, R, P, \gamma)$.

Given a fixed policy $\pi$, reinforcement learning estimates the action-value function of $\pi$, defined by $Q^{\pi}(x, a) = \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^{t} R(x_t, a_t)\big]$, where $(x_t, a_t)$ is the state-action pair at time $t$ with $(x_0, a_0) = (x, a)$, and $R(x_t, a_t)$ is the corresponding reward. The Bellman equation characterizes the action-value function by temporal equivalence:

$$Q^{\pi}(x, a) = \mathbb{E}\big[R(x, a)\big] + \gamma\, \mathbb{E}_{x', a'}\big[Q^{\pi}(x', a')\big],$$

where $x' \sim P(\cdot \mid x, a)$ and $a' \sim \pi(\cdot \mid x')$. To maximize the expected total return, one common approach is to find the fixed point of the Bellman optimality operator

$$\mathcal{T} Q(x, a) = \mathbb{E}\big[R(x, a)\big] + \gamma\, \mathbb{E}_{x'}\big[\max_{a'} Q(x', a')\big]$$

by minimizing the temporal difference (TD) error

$$\delta_t = R(x_t, a_t) + \gamma \max_{a'} Q(x_{t+1}, a') - Q(x_t, a_t)$$

over samples along the trajectory. Mnih et al. (2015) propose Deep Q-Networks (DQN), which learn the action-value function with a neural network and achieve human-level performance on the Atari-57 benchmark.
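As an illustrative aside (not taken from the paper), the sketch below computes the one-step TD target and error for a discrete-action Q-function; the array shapes and function names are assumptions made for the example.

```python
import numpy as np

def td_error(q_s, q_next, action, reward, gamma=0.99, done=False):
    """One-step TD error for a discrete-action Q-function.

    q_s    : np.ndarray [num_actions], estimates of Q(s, .)
    q_next : np.ndarray [num_actions], estimates of Q(s', .)
    action : index of the action taken at s
    reward : observed reward R(s, a)
    """
    target = reward + (0.0 if done else gamma * q_next.max())
    return target - q_s[action]

# Toy usage with arbitrary values.
delta = td_error(np.array([0.2, 0.5]), np.array([0.3, 0.6]), action=1, reward=1.0)
```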

2.1 Reward Decomposition

Studies of reward decomposition also lead to state decomposition (Laversanne-Finot et al. (2018); Thomas et al. (2017)), where the state decomposition is leveraged to learn different policies. Extending this line of work, Grimm and Singh (2019) explore decomposing the reward function directly, which is the work most closely related to ours. Denote the $i$-th ($i = 1, 2, \ldots, N$) sub-reward function at state-action pair $(s, a)$ as $r_i(s, a)$; the complete reward function is then given by

$$r(s, a) = \sum_{i=1}^{N} r_i(s, a).$$

For each sub-reward function, consider the sub-value function $Q^{\pi}_{r_i}$ and the corresponding policy $\pi_{r_i}$:

$$Q^{\pi}_{r_i}(s, a) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t} r_i(s_t, a_t)\Big], \qquad \pi_{r_i} = \arg\max_{\pi} Q^{\pi}_{r_i},$$

where $(s_0, a_0) = (s, a)$.

In their work, a reward decomposition is considered meaningful if each sub-reward is obtained independently (i.e., $\pi_{r_i}$ should not obtain $r_j$ for $j \neq i$) and each sub-reward is obtainable.

To evaluate these two desiderata, their work proposes the following values:

$$J_{\text{independent}}(r_1, \ldots, r_N) = \mathbb{E}_{s}\Big[\sum_{i}\sum_{j \neq i} Q^{\pi_{r_i}}_{r_j}\big(s, \pi_{r_i}(s)\big)\Big], \qquad (1)$$

$$J_{\text{nontrivial}}(r_1, \ldots, r_N) = \mathbb{E}_{s}\Big[\sum_{i} Q^{\pi_{r_i}}_{r_i}\big(s, \pi_{r_i}(s)\big)\Big], \qquad (2)$$

where a coefficient $\alpha$ controls the relative weight of the two terms and is set to 1 in their work for simplicity. During training, the network maximizes $J_{\text{nontrivial}} - \alpha J_{\text{independent}}$ to achieve the desired reward decomposition.

2.2 Distributional Reinforcement Learning

In most reinforcement learning settings, the environment is not deterministic. Moreover, RL models are typically trained with an $\epsilon$-greedy policy to allow exploration, which makes the agent stochastic as well. To better analyze the randomness under this setting, Bellemare et al. (2017) propose the C51 algorithm and conduct a theoretical analysis of distributional RL.

In distributional RL, the reward $R(x, a)$ is viewed as a random variable, and the total return is defined by $Z(x, a) = \sum_{t=0}^{\infty} \gamma^{t} R(x_t, a_t)$. The expectation of $Z(x, a)$ is the traditional action-value $Q(x, a)$, and the distributional Bellman optimality operator is given by

$$\mathcal{T} Z(x, a) \stackrel{D}{=} R(x, a) + \gamma\, Z\Big(x', \arg\max_{a'} \mathbb{E}\big[Z(x', a')\big]\Big),$$

where $A \stackrel{D}{=} B$ denotes that the random variables $A$ and $B$ follow the same distribution.

In C51, the random variable $Z$ is characterized by a categorical distribution over a fixed set of values, and C51 outperforms all previous variants of DQN on the Atari domain.
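As a concrete (and purely illustrative) sketch of the categorical parameterization used by C51-style agents, the snippet below stores Z(s, a) as a probability vector over fixed atoms and picks the greedy action from the expected returns; the atom range and count are assumptions, not the settings used in this paper.

```python
import numpy as np

NUM_ATOMS, V_MIN, V_MAX = 51, -10.0, 10.0        # illustrative support
SUPPORT = np.linspace(V_MIN, V_MAX, NUM_ATOMS)   # atom values z_0 .. z_{M-1}

def greedy_action(probs):
    """probs: [num_actions, NUM_ATOMS] categorical distributions of Z(s, a).
    Returns argmax_a E[Z(s, a)]."""
    q_values = probs @ SUPPORT                   # expectation of each Z(s, a)
    return int(np.argmax(q_values))
```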

3 Distributional Reward Decomposition for Reinforcement Learning

3.1 Distributional Reward Decomposition

Figure 1: (a) Distributional reward decomposition network architecture. (b) Examples of multiple reward channels in Atari games: the top row shows examples of Seaquest in which the submarine receives rewards from both shooting sharks and rescuing divers; the bottom row shows examples of Hero where the hero receives rewards from both shooting bats and rescuing people.

In many reinforcement learning environments, there are multiple sources from which an agent receives reward, as shown in Figure 1. Our method is mainly designed for environments with this property.

Under the distributional setting, we assume the reward and the sub-rewards are random variables and denote them by $R$ and $R_i$ respectively. In our architecture, the categorical distribution of each sub-return $Z_i$ is the output of a network, denoted by $F_i(\cdot \mid s, a)$. Note that in most cases the sub-returns are not independent, i.e., $P(Z_i \mid Z_j) \neq P(Z_i)$. So theoretically we would need the joint distribution of all sub-returns to obtain the distribution of the full return. We call this architecture the non-factorial or full-distribution model; its architecture is shown in the appendix. However, experiments show that using an approximate form of the joint distribution, so that only the marginals $F_i$ are required, performs much better than directly modeling the joint distribution over all sub-returns; we believe this is due to the increased sample number per distribution. In this paper, we approximate the conditional probability $P(Z_i \mid Z_j)$ with the marginal $P(Z_i)$, i.e., we treat the sub-returns as independent.

Consider two categorical distribution functions $F_1$ and $F_2$ with the same number of atoms $k$, where the $j$-th atom has value $z_j = j \times l$ for some constant $l$. Let the random variables $Z_1 \sim F_1$ and $Z_2 \sim F_2$; from basic probability theory we know that the distribution function of $Z_1 + Z_2$ is the convolution of $F_1$ and $F_2$:

$$F_{Z_1 + Z_2}(z_m) = \sum_{j} F_1(z_j)\, F_2(z_m - z_j) = (F_1 * F_2)(z_m). \qquad (3)$$

When we use $N$ sub-returns, the distribution function of the total return is then given by $F_Z = F_1 * F_2 * \cdots * F_N$, where $*$ denotes linear 1D-convolution.
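The following minimal numpy sketch mirrors Eq. 3: when the sub-returns share the same atom spacing and are treated as independent, the distribution of their sum is the linear 1D convolution of their probability vectors (the atom counts below are arbitrary).

```python
import numpy as np
from functools import reduce

def combine_sub_returns(sub_pmfs):
    """sub_pmfs: list of 1-D probability vectors, one per sub-return,
    all defined on atoms {0, l, 2l, ...} with the same spacing l.
    Returns the categorical distribution of the total return Z = sum_i Z_i."""
    return reduce(np.convolve, sub_pmfs)

# Two 3-atom sub-return distributions convolve into a 5-atom total-return distribution.
f1 = np.array([0.2, 0.5, 0.3])
f2 = np.array([0.6, 0.3, 0.1])
f_total = combine_sub_returns([f1, f2])
assert np.isclose(f_total.sum(), 1.0)
```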

While reward decomposition is not explicitly performed in our algorithm, we can derive the decomposed rewards from a trained agent. Recall that the total return follows the Bellman equation, so naturally we have

$$Z_i(s_t, a_t) \stackrel{D}{=} R_i(s_t, a_t) + \gamma\, Z_i(s_{t+1}, a_{t+1}), \qquad (4)$$

where $Z_i(s_{t+1}, a_{t+1})$ represents the sub-return on the next state-action pair. Note that we only have access to a sample of the full reward $r$; the split into sub-rewards is not observed, and for visualization a direct way of deriving them is given by

$$R_i = Z_i(s_t, a_t) - \gamma\, Z_i(s_{t+1}, a_{t+1}). \qquad (5)$$

In the next section we will present an example of these sub-rewards by taking their expectation $\mathbb{E}[R_i]$. Note that our reward decomposition is latent: we do not need the sub-rewards for our algorithm, and Eq. 5 only provides a way to visualize the learned decomposition.
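As a sketch of how the visualization implied by Eq. 5 might be computed from a trained agent, the snippet below approximates the expected sub-rewards from the per-channel categorical outputs at two consecutive steps; the function and argument names are hypothetical.

```python
import numpy as np

def expected_sub_rewards(sub_pmfs_t, sub_pmfs_tp1, support, gamma=0.99):
    """Approximate E[R_i] = E[Z_i(s_t, a_t)] - gamma * E[Z_i(s_{t+1}, a_{t+1})].

    sub_pmfs_t, sub_pmfs_tp1 : [N, num_atoms] categorical sub-return
        distributions for the chosen actions at steps t and t+1.
    support : [num_atoms] atom values shared by all sub-returns.
    Returns one scalar per channel.
    """
    return sub_pmfs_t @ support - gamma * (sub_pmfs_tp1 @ support)
```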

3.2 Disentangled Sub-returns

To obtain a meaningful reward decomposition, we want the sub-rewards to be disentangled. Inspired by Grimm and Singh (2019), we compute the disentanglement of the distributions of two sub-returns $Z_i$ and $Z_j$ at state $s$ with the following value:

$$D_{i,j}(s) = H\Big(F_Z\big(s, a_i^{*}\big),\, F_Z\big(s, a_j^{*}\big)\Big), \quad a_i^{*} = \arg\max_{a} \mathbb{E}\big[Z_i(s, a)\big], \quad a_j^{*} = \arg\max_{a} \mathbb{E}\big[Z_j(s, a)\big], \qquad (6)$$

where $F_Z$ is the estimated categorical distribution of the full return and $H(\cdot, \cdot)$ denotes the cross-entropy term of the KL divergence.

Intuitively, $D_{i,j}(s)$ estimates the disentanglement of sub-returns $Z_i$ and $Z_j$ by first obtaining the actions that maximize the expectation of each sub-return, and then computing the divergence between the estimated total-return distributions under those actions. If $Z_i$ and $Z_j$ are independent, the actions maximizing the two sub-returns will differ, and this difference is reflected in the estimated total return. By maximizing this value, we can expect a meaningful reward decomposition with disentangled sub-rewards.
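A sketch of Eq. 6 as we read it: pick the greedy action of each sub-return and compare the full-return distributions at those two actions with the cross-entropy term of the KL divergence. The array layouts are assumptions made for illustration.

```python
import numpy as np

def disentanglement(sub_pmfs, full_pmfs, support, i, j, eps=1e-8):
    """sub_pmfs : [N, num_actions, num_atoms] per-channel sub-return distributions.
    full_pmfs  : [num_actions, total_atoms] full-return distribution per action
                 (the convolution of the sub-return distributions).
    Returns H(F_Z(s, a_i*), F_Z(s, a_j*))."""
    a_i = int(np.argmax(sub_pmfs[i] @ support))   # action maximizing E[Z_i(s, a)]
    a_j = int(np.argmax(sub_pmfs[j] @ support))   # action maximizing E[Z_j(s, a)]
    p, q = full_pmfs[a_i], full_pmfs[a_j]
    return -np.sum(p * np.log(q + eps))           # cross-entropy term of KL(p || q)
```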

3.3 Projected Bellman Update with Regularization

Following C51 (Bellemare et al. (2017)), we use a projected Bellman update for our algorithm. When the Bellman optimality operator is applied, the atoms of $\mathcal{T}Z$ are shifted by $r$ and shrunk by $\gamma$. However, to compute the loss, usually the KL divergence between $\mathcal{T}Z$ and $Z$, the two categorical distributions must be defined on the same set of atoms, so the target distribution needs to be projected onto the original set of atoms before the Bellman update. Consider a sample transition $(s, a, r, s')$; the projection operator $\Phi$ proposed in C51 is given by

$$\big(\Phi \hat{\mathcal{T}} Z(s, a)\big)_j = \sum_{k=0}^{M-1} \bigg[1 - \frac{\big|\,[r + \gamma z_k]_{V_{\min}}^{V_{\max}} - z_j\,\big|}{\Delta z}\bigg]_0^1\, p_k(s', a^{*}), \qquad (7)$$

where $M$ is the number of atoms in C51 and $[\cdot]_a^b$ bounds its argument in $[a, b]$. The sample loss for the transition is then given by the cross-entropy term of the KL divergence between $\Phi\hat{\mathcal{T}} Z(s, a)$ and $Z(s, a)$:

$$\mathcal{L}(s, a, r, s') = -\sum_{j} \big(\Phi \hat{\mathcal{T}} Z(s, a)\big)_j \log p_j(s, a). \qquad (8)$$
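The numpy sketch below follows the standard C51 categorical projection corresponding to Eq. 7 and the cross-entropy sample loss of Eq. 8 for a single transition; the names and structure are ours, not the paper's implementation.

```python
import numpy as np

def project_target(next_probs, reward, gamma, support):
    """Project the shifted/shrunk target atoms r + gamma * z_k back onto the
    fixed support (Eq. 7). next_probs holds p_k(s', a*)."""
    v_min, v_max = support[0], support[-1]
    delta_z = support[1] - support[0]
    tz = np.clip(reward + gamma * support, v_min, v_max)
    b = (tz - v_min) / delta_z                      # fractional atom index
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    projected = np.zeros_like(support)
    np.add.at(projected, lower, next_probs * (upper - b))
    np.add.at(projected, upper, next_probs * (b - lower))
    np.add.at(projected, lower, next_probs * (lower == upper))  # exact hits
    return projected

def sample_loss(projected_target, pred_probs, eps=1e-8):
    """Cross-entropy term of KL(projected_target || pred_probs), Eq. 8."""
    return -np.sum(projected_target * np.log(pred_probs + eps))
```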

Let $F_{\theta}$ be the neural network parameterized by $\theta$. We combine the distributional TD error with the disentanglement term to jointly update $\theta$. For each sample transition $(s, a, r, s')$, $\theta$ is updated by taking a gradient step on the following objective function:

$$\mathcal{L}(\theta) = \mathcal{L}_{\mathrm{TD}}(s, a, r, s'; \theta) - \lambda \sum_{i \neq j} D_{i,j}(s; \theta), \qquad (9)$$

i.e., $\theta \leftarrow \theta - \alpha \nabla_{\theta}\mathcal{L}(\theta)$, where $\alpha$ denotes the learning rate and $\lambda$ weights the disentanglement term.

3.4 Multi-channel State Representation

One complication of the approach outlined above is that very often a distribution $F_i$ cannot distinguish itself from the other distributions (e.g., $F_j$ with $j \neq i$) during learning, since they all depend on the same state feature input. This brings difficulties in maximizing disentanglement during joint training, as the different distribution functions are exchangeable. A naive idea is to split the state feature into $N$ pieces so that each distribution depends on a different sub-feature of the state. However, we empirically found that this method is not enough to help learn well-disentangled sub-returns.

To address this problem, we utilize an idea similar to universal value function approximation (UVFA) (Schaul et al. (2015)). The key idea is to take a one-hot embedding of the channel index as an additional input that conditions the categorical distribution function, and to apply element-wise multiplication $\odot$ to force interaction between the state features and the one-hot embedding feature:

$$f_i(s) = \psi(s) \odot \varphi(e_i), \qquad (10)$$

where $e_i$ denotes the one-hot embedding whose $i$-th element is one, $\psi(s)$ denotes the state features, and $\varphi$ denotes a one-layer non-linear neural network that is updated by backpropagation during training.

In this way, the agent explicitly learns different distribution functions for different channels. The complete network architecture is shown in Figure 1.
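A plain-numpy sketch of the channel conditioning in Eq. 10: a shared state feature is modulated by a learned embedding of the one-hot channel index through element-wise multiplication. The layer sizes and the single ReLU layer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM, NUM_CHANNELS = 128, 3

# Hypothetical parameters of the one-layer non-linear embedding phi.
W_embed = rng.normal(scale=0.1, size=(NUM_CHANNELS, FEAT_DIM))
b_embed = np.zeros(FEAT_DIM)

def channel_feature(state_feat, channel):
    """Compute f_i = psi(s) (element-wise *) phi(e_i), as in Eq. 10."""
    one_hot = np.eye(NUM_CHANNELS)[channel]               # e_i
    phi = np.maximum(0.0, one_hot @ W_embed + b_embed)    # one-layer ReLU embedding
    return state_feat * phi                               # forced interaction

state_feat = rng.normal(size=FEAT_DIM)                    # psi(s) from the shared torso
per_channel_features = [channel_feature(state_feat, i) for i in range(NUM_CHANNELS)]
```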

4 Experiment Results

We test our algorithm on games from the Arcade Learning Environment (ALE; Bellemare et al. (2013)). We conduct experiments on six Atari games, some with complicated rules and some with simple rules. We implement our algorithm on top of Rainbow (Hessel et al. (2018)), an advanced variant of C51 (Bellemare et al. (2017)) that achieved state-of-the-art results in the Atari domain. We replace the update rule of Rainbow with Eq. 9 and the network architecture of Rainbow with our architecture shown in Figure 1, which convolves the sub-return distributions. In Rainbow, the Q-value is bounded in $[V_{\min}, V_{\max}]$ with $V_{\max} = -V_{\min}$. In our method, we bound the categorical distribution of each sub-return to the range $[V_{\min}/N, V_{\max}/N]$. Rainbow uses a categorical distribution with $M$ atoms. For fair comparison, we assign $M/N$ atoms to the distribution of each sub-return, which results in the same network capacity as the original architecture.

Our code is built upon the dopamine framework (Castro et al. (2018)). We use the default, well-tuned hyper-parameter settings in dopamine. For our update rule in Eq. 9, we use a fixed value for the disentanglement weight $\lambda$. We run our agents for 100 epochs, each with 0.25 million training steps and 0.125 million evaluation steps. For evaluation, we follow the common practice of Van Hasselt et al. (2016), starting the game with up to 30 no-op actions to provide random starting positions for the agent. All experiments are performed on NVIDIA Tesla V100 16GB graphics cards.

4.1 Comparison with Rainbow

To verify that our architecture achieves reward decomposition without degrading performance, we compare our method with Rainbow. We are not able to compare against Van Seijen et al. (2017) or Grimm and Singh (2019), since they require either pre-defined state preprocessing or environments that can be reset to specific states. We test our reward decomposition (RD) with 2 and 3 channels, denoted RD(2) and RD(3). The results are shown in Figure 2. Our methods perform significantly better than Rainbow on the environments we tested, which implies that our distributional reward decomposition can help accelerate the learning process. We also observe that on some environments RD(3) performs better than RD(2), while on the rest the two have similar performance. We conjecture that this is due to the intrinsic settings of the environments. For example, in Seaquest and UpNDown the rules are relatively complicated, so RD(3) characterizes such complex reward structure better. In simpler environments like Gopher and Asterix, RD(2) and RD(3) obtain similar performance, and sometimes RD(2) even outperforms RD(3).

Figure 2: Performance comparison with Rainbow. RD(N) represents using N-channel reward decomposition. Each training curve is averaged over three random seeds.

4.2 Reward Decomposition Analysis

Here we use Seaquest to illustrate our reward decomposition. Figure 3 shows the sub-rewards obtained by taking the expectation of the left-hand side of Eq. 5, together with the original reward, along an actual trajectory. We observe that while $r_1$ and $r_2$ essentially add up to the original reward $r$, $r_1$ dominates when the submarine is close to the surface, i.e., when it rescues divers and refills oxygen. When the submarine scores by shooting sharks, $r_2$ becomes the main source of reward. We also monitor the distributions of the different sub-returns while the agent plays the game. In Figure 4 (a), the submarine floats to the surface to rescue divers and refill oxygen, and $Z_1$ has higher values. In Figure 4 (b), as the submarine dives into the sea and shoots sharks, the expected values of $Z_2$ (orange) are higher than those of $Z_1$ (blue). This result implies that the reward decomposition indeed captures different sources of return, in this case shooting sharks versus rescuing divers and refilling oxygen. We also provide action statistics for quantitative support. In Figure 6 (a), we count the occurrence of each action chosen by the induced sub-policies $\pi_1$ and $\pi_2$ in a single trajectory, using the same policy as in Figure 4. We see that while $\pi_1$ prefers going up, $\pi_2$ prefers going down with fire.

Figure 3: Reward decomposition along the trajectory. While the sub-rewards $r_1$ and $r_2$ usually add up to the original reward $r$, the proportion of each sub-reward greatly depends on how the original reward is obtained.
Figure 4: An illustration of how the sub-returns differ at different stages of the game. In panel (a) the submarine is refilling oxygen, while in panel (b) the submarine is shooting sharks.

4.3 Visualization by Saliency Maps

Figure 5: Sub-distribution saliency maps on the Atari game Seaquest, for a trained DRDRL of two channels (N=2). One channel learns to pay attention to the oxygen, while another channel learns to pay attention to the sharks.

To better understand the roles of the different sub-rewards, we train a DRDRL agent with two channels (N=2) and compute saliency maps (Simonyan et al. (2013)). Specifically, to visualize the salient parts of the images as seen by different sub-policies, we compute the absolute value of the Jacobian of each channel's output with respect to the input frame. Figure 5 shows the visualization results. We find that channel 1 (red region) focuses on refilling oxygen, while channel 2 (green region) pays more attention to shooting sharks as well as to the positions where sharks are likely to appear.

4.4 Direct Control using Induced Sub-policies

We also provide videos (https://sites.google.com/view/drdpaper) of running the sub-policies defined by $\pi_i(s) = \arg\max_{a} \mathbb{E}\big[Z_i(s, a)\big]$. To clarify, the sub-policies are never rolled out during training or evaluation; they are only used to compute the disentanglement term in Eq. 6. We execute these sub-policies and observe how they differ from the main policy to get a better visual sense of the reward decomposition. Take Seaquest in Figure 6 (b) as an example: the two sub-policies show distinctive preferences. As $Z_1$ mainly captures the reward for surviving and rescuing divers, $\pi_1$ tends to stay close to the surface. In contrast, $Z_2$ represents the return gained from shooting sharks, so $\pi_2$ appears much more aggressive than $\pi_1$. Also, without the surfacing behaviour of $\pi_1$, $\pi_2$ dies quickly from running out of oxygen.

Figure 6: (a) Action statistics in an example trajectory of Seaquest. (b) Direct control using the two induced sub-policies: the top picture shows that $\pi_1$ prefers to stay at the top to keep the agent alive; the bottom picture shows that $\pi_2$ prefers the aggressive action of shooting sharks.

5 Related Work

Our method is closely related to previous work on reward decomposition. Reward function decomposition has been studied by, among others, Russell and Zimdars (2003) and Sprague and Ballard (2003). While these earlier works mainly focus on how to achieve an optimal policy given decomposed reward functions, several recent works attempt to learn latent decomposed rewards. Van Seijen et al. (2017) construct an easy-to-learn value function by decomposing the reward function of the environment into different reward functions. To ensure the learned decomposition is non-trivial, they propose to split a state into different pieces following domain knowledge and then feed each state piece into its own reward function branch. While this method can accelerate the learning process, it requires pre-defined preprocessing based on prior knowledge. Other work explores learning a reward decomposition network end-to-end. Grimm and Singh (2019) investigate how to learn independently-obtainable reward functions. While their method learns interesting reward decompositions, it requires that the environment be resettable to specific states, since it needs multiple trajectories from the same starting state to compute the objective function. Besides, their method aims at learning a different optimal policy for each decomposed reward function. Different from the works above, our method learns a meaningful implicit reward decomposition without any requirement on prior knowledge. Moreover, our method can leverage the decomposed sub-rewards to find better behaviour for a single agent.

Our work also relates to Horde (Sutton et al. (2011)). The Horde architecture consists of a large number of 'sub-agents', or demons, that learn in parallel via off-policy learning. Each demon trains a separate general value function (GVF) based on its own policy and pseudo-reward function, where a pseudo-reward can be any feature-based signal that encodes useful information. Horde focuses on building up general knowledge about the world, encoded via a large number of GVFs. UVFA (Schaul et al. (2015)) extends Horde in a different direction by enabling value functions to generalize across different goals. Our method instead focuses on learning an implicit reward decomposition in order to learn a control policy more efficiently.

6 Conclusion

In this paper, we propose Distributional Reward Decomposition for Reinforcement Learning (DRDRL), a novel reward decomposition algorithm that captures the multiple-reward-channel structure under a distributional setting. Our algorithm significantly outperforms the state-of-the-art RL method Rainbow on Atari games with multiple reward channels. We also provide experimental analyses that give insight into our algorithm. In future work, we plan to develop reward decomposition methods based on quantile networks (Dabney et al. (2018a, b)).

Acknowledgments

This work was supported in part by the National Key Research & Development Plan of China (grant No. 2016YFA0602200 and 2017YFA0604500), and by Center for High Performance Computing and System Simulation, Pilot National Laboratory for Marine Science and Technology (Qingdao).

References

  • M. G. Bellemare, W. Dabney, and R. Munos (2017) A distributional perspective on reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 449–458. Cited by: §2.2, §3.3, §4.
  • M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling (2013) The arcade learning environment: an evaluation platform for general agents. Journal of Artificial Intelligence Research 47, pp. 253–279. Cited by: §4.
  • P. S. Castro, S. Moitra, C. Gelada, S. Kumar, and M. G. Bellemare (2018) Dopamine: A Research Framework for Deep Reinforcement Learning. External Links: Link Cited by: §4.
  • W. Dabney, G. Ostrovski, D. Silver, and R. Munos (2018a) Implicit quantile networks for distributional reinforcement learning. In International Conference on Machine Learning, pp. 1104–1113. Cited by: §6.
  • W. Dabney, M. Rowland, M. G. Bellemare, and R. Munos (2018b) Distributional reinforcement learning with quantile regression. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §6.
  • C. Grimm and S. Singh (2019) Learning independently-obtainable reward functions. arXiv preprint arXiv:1901.08649. Cited by: §1, §1, §2.1, §3.2, §4.1, §5.
  • M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver (2018) Rainbow: combining improvements in deep reinforcement learning. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §4.
  • A. Laversanne-Finot, A. Péré, and P. Oudeyer (2018) Curiosity driven exploration of learned disentangled goal spaces. arXiv preprint arXiv:1807.01521. Cited by: §2.1.
  • V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529. Cited by: §1, §2.
  • S. J. Russell and A. Zimdars (2003) Q-decomposition for reinforcement learning agents. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 656–663. Cited by: §1, §5.
  • T. Schaul, D. Horgan, K. Gregor, and D. Silver (2015) Universal value function approximators. In International Conference on Machine Learning, pp. 1312–1320. Cited by: §3.4, §5.
  • K. Simonyan, A. Vedaldi, and A. Zisserman (2013) Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034. Cited by: §4.3.
  • N. Sprague and D. Ballard (2003) Multiple-goal reinforcement learning with modular Sarsa(0). Cited by: §1, §5.
  • R. S. Sutton, J. Modayil, M. Delp, T. Degris, P. M. Pilarski, A. White, and D. Precup (2011) Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pp. 761–768. Cited by: §5.
  • V. Thomas, J. Pondard, E. Bengio, M. Sarfati, P. Beaudoin, M. Meurs, J. Pineau, D. Precup, and Y. Bengio (2017) Independently controllable features. arXiv preprint arXiv:1708.01289. Cited by: §2.1.
  • H. Van Hasselt, A. Guez, and D. Silver (2016) Deep reinforcement learning with double q-learning. In Thirtieth AAAI Conference on Artificial Intelligence, Cited by: §4.
  • H. Van Seijen, M. Fatemi, J. Romoff, R. Laroche, T. Barnes, and J. Tsang (2017) Hybrid reward architecture for reinforcement learning. In Advances in Neural Information Processing Systems, pp. 5392–5402. Cited by: §1, §1, §4.1, §5.

Appendix

Non-factorial model

To show that the factorized approximation $P(Z_i \mid Z_j) \approx P(Z_i)$ is reasonable, we also implemented a non-factorial version of DRDRL with 3 channels, i.e., one that models the full joint distribution of the sub-returns. The architecture of the non-factorial model is shown in Figure 7. The three sub-distributions have $k_1$, $k_2$ and $k_3$ atoms respectively, and they multiply to form a full joint distribution with $k_1 \times k_2 \times k_3$ atoms. To maintain a network capacity similar to C51, we set $k_1 = k_2 = k_3 = 4$, forming a full distribution of 64 atoms.
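To illustrate why the atom counts multiply in the non-factorial model, the short sketch below enumerates the joint support of three categorical sub-returns under the 4-4-4 setting described above; the atom spacing and the explicit enumeration are assumptions made only for illustration.

```python
import numpy as np
from itertools import product

k1 = k2 = k3 = 4                          # atoms per sub-return
l = 0.5                                   # hypothetical atom spacing
atom_values = [np.arange(k) * l for k in (k1, k2, k3)]

# One joint outcome per combination of sub-return atoms: k1 * k2 * k3 = 64 entries,
# each corresponding to a possible value of the total return Z1 + Z2 + Z3.
joint_support = np.array([sum(combo) for combo in product(*atom_values)])
print(joint_support.shape)                # (64,)
```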

Figure 7: Model architecture of the full-distribution (non-factorial) method mentioned in Section 3.1.

Ablative results of the non-factorial model are shown in the following section together with other tricks.

Ablative Analysis

Figure 8 shows how each component of our method contributes to DRDRL. Specifically, we test the performance of '1D convolution', '1D convolution + KL', '1D convolution + one-hot' and '1D convolution + KL + one-hot', as well as the non-factorial model.

One may argue that the superior performance of DRDRL comes from the fact that fitting N simple sub-distributions with M/N categories each is easier than fitting a single distribution with M categories. To examine this argument, we include another set of experiments that use M atoms instead of M/N atoms for each sub-distribution.

Figure 8: Training curves.

The results show that every part of DRDRL is important. To our surprise, the KL term not only enables reward decomposition; we also observe significant differences between the curves with and without it. This suggests that learning a decomposed reward can greatly boost the performance of RL algorithms. Together with the degraded performance when using M atoms instead of M/N atoms per sub-distribution, this is sufficient to suggest that DRDRL's success does not come from simply fitting easier distributions with M/N atoms.