1 Introduction
Reinforcement learning has achieved great success in decision-making problems since Deep Q-learning was proposed by Mnih et al. (2015). While general RL algorithms have been studied in depth, here we focus on RL tasks with specific properties that can be exploited to modify general RL algorithms for better performance. Specifically, we focus on RL environments with multiple reward channels in which only the full (total) reward is observable.
Reward decomposition has been proposed to investigate such properties. For example, in the Atari game Seaquest, the environment reward can be decomposed into sub-rewards for shooting sharks and sub-rewards for rescuing divers. Reward decomposition views the total reward as the sum of sub-rewards that are usually disentangled and can be obtained independently (Sprague and Ballard, 2003; Russell and Zimdars, 2003; Van Seijen et al., 2017; Grimm and Singh, 2019), and aims at decomposing the total reward into such sub-rewards. The sub-rewards may further be leveraged to learn better policies.
Van Seijen et al. (2017) propose to split a state into different sub-states, each with a sub-reward obtained by training a general value function, and to learn multiple value functions with the sub-rewards. The architecture is rather limited, as it requires prior knowledge of how to split states. Grimm and Singh (2019) propose a more general method for reward decomposition via maximizing disentanglement between sub-rewards. In their work, an explicit reward decomposition is learned by maximizing the disentanglement of two sub-rewards estimated with action-value functions. However, their method requires that the environment can be reset to arbitrary states, so it cannot be applied to general RL settings where states can hardly be revisited. Furthermore, despite the meaningful reward decomposition they achieve, they do not leverage the decomposition to learn better policies.
In this paper, we propose Distributional Reward Decomposition for Reinforcement Learning (DRDRL), an RL algorithm that captures the latent multiple-channel structure of the reward under the setting of distributional RL. Distributional RL differs from value-based RL in that it estimates the distribution rather than the expectation of returns, and therefore captures richer information than value-based RL. We propose an RL algorithm that estimates the distributions of the sub-returns and combines the sub-returns to obtain the distribution of the total return. To avoid trivial decompositions, such as assigning all reward to one channel or splitting it evenly, we further propose a disentanglement regularization term that encourages the sub-returns to diverge. To better separate reward channels, we also design our network to learn different state representations for different channels.
We test our algorithm on selected Atari games with multiple reward channels. Empirically, our method achieves the following:

- Discovers meaningful reward decomposition.
- Requires no external information.
- Achieves better performance than existing RL methods.
2 Background
We consider a general reinforcement learning setting in which the interaction of the agent and the environment can be viewed as a Markov Decision Process (MDP). Denote the state space by $\mathcal{S}$, the action space by $\mathcal{A}$, the state transition function by $P$, the state-action dependent reward function by $R$ and the discount factor by $\gamma$; we write this MDP as $(\mathcal{S}, \mathcal{A}, R, P, \gamma)$. Given a fixed policy $\pi$, reinforcement learning estimates the action-value function of $\pi$, defined by $Q^{\pi}(s,a) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} r_t \mid s_0 = s, a_0 = a\right]$, where $(s_t, a_t)$ is the state-action pair at time $t$ and $r_t = R(s_t, a_t)$ is the corresponding reward. The Bellman equation characterizes the action-value function by temporal equivalence:
$$Q^{\pi}(s,a) = \mathbb{E}\left[R(s,a)\right] + \gamma\, \mathbb{E}_{s' \sim P(\cdot\mid s,a),\, a' \sim \pi(\cdot\mid s')}\left[Q^{\pi}(s',a')\right],$$
where $s'$ and $a'$ denote the state and action at the next time step. To maximize the total return, one common approach is to find the fixed point of the Bellman optimality operator
$$\mathcal{T}Q(s,a) = \mathbb{E}\left[R(s,a)\right] + \gamma\, \mathbb{E}_{s'}\left[\max_{a'} Q(s',a')\right]$$
by minimizing the temporal difference (TD) error
$$\delta_t = r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t)$$
over samples along the trajectory. Mnih et al. (2015) propose Deep Q-Networks (DQN), which learn the action-value function with a neural network and achieve human-level performance on the Atari-57 benchmark.
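As a minimal illustration of the TD error above, the following sketch performs one tabular Q-learning step toward the Bellman optimality target. The table sizes and the values of `gamma` and `alpha` are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical tabular sketch of the TD update underlying DQN-style learning.
# Q is a |S| x |A| table; gamma and alpha are illustrative constants.
def td_update(Q, s, a, r, s_next, gamma=0.99, alpha=0.1):
    """One temporal-difference step toward the Bellman optimality target."""
    target = r + gamma * np.max(Q[s_next])   # r + gamma * max_a' Q(s', a')
    td_error = target - Q[s, a]
    Q[s, a] += alpha * td_error              # move Q(s, a) toward the target
    return td_error

Q = np.zeros((2, 2))                         # tiny 2-state, 2-action MDP
delta = td_update(Q, s=0, a=1, r=1.0, s_next=1)
```

DQN replaces the table with a neural network and minimizes the squared TD error over sampled transitions, but the target construction is the same.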
2.1 Reward Decomposition
Studies of reward decomposition have also led to state decomposition (Laversanne-Finot et al., 2018; Thomas et al., 2017), where state decomposition is leveraged to learn different policies. Extending this line of work, Grimm and Singh (2019) explore the decomposition of the reward function directly, which is most closely related to our work. Denote the $i$th ($i = 1, 2, \dots, N$) sub-reward function at state-action pair $(s, a)$ as $r_i(s, a)$; the complete reward function is given by
$$r(s, a) = \sum_{i=1}^{N} r_i(s, a).$$
For each sub-reward function, consider the sub-value function $Q_i^{\pi_i}$ and corresponding policy $\pi_i$:
$$Q_i^{\pi_i}(s, a) = \mathbb{E}_{\pi_i}\left[\sum_{t=0}^{\infty} \gamma^{t} r_i(s_t, a_t) \,\middle|\, s_0 = s,\, a_0 = a\right],$$
where $\pi_i(s) = \arg\max_a Q_i^{\pi_i}(s, a)$.
In their work, a reward decomposition is considered meaningful if each reward is obtained independently (i.e. $\pi_i$ should not obtain $r_j$ for $j \neq i$) and each reward is obtainable.
To evaluate these two desiderata, the work proposes the following values:
(1) $J_{\text{nontrivial}}(s) = \sum_{i=1}^{N} \mathbb{E}_{a \sim \pi_i}\left[r_i(s, a)\right]$

(2) $J_{\text{independent}}(s) = \sum_{i=1}^{N} \sum_{j \neq i} \mathbb{E}_{a \sim \pi_i}\left[r_j(s, a)\right]$

where a coefficient $\alpha$ controls the relative weight of the two terms and is set to 1 in their work for simplicity. During training, the network maximizes $J_{\text{nontrivial}} - \alpha J_{\text{independent}}$ to achieve the desired reward decomposition.
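The combined objective can be sketched as follows. This is a hedged reconstruction: `Q_sub` and `r_sub` are hypothetical per-channel action-value and sub-reward tables at a single state, and the exact form in Grimm and Singh (2019) may differ in detail:

```python
import numpy as np

# Sketch of the two desiderata: each sub-reward should be obtainable by its own
# greedy policy (non-triviality) and not obtained by the other policies
# (independence). alpha weights the independence term, set to 1 as in the text.
def decomposition_objective(Q_sub, r_sub, alpha=1.0):
    greedy = [int(np.argmax(q)) for q in Q_sub]           # a_i* = argmax_a Q_i(s, a)
    nontrivial = sum(r_sub[i][greedy[i]] for i in range(len(Q_sub)))
    independent = sum(r_sub[j][greedy[i]]
                      for i in range(len(Q_sub))
                      for j in range(len(Q_sub)) if j != i)
    return nontrivial - alpha * independent               # maximized during training

Q_sub = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]      # channel-specific values
r_sub = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]      # fully disentangled case
obj = decomposition_objective(Q_sub, r_sub)
```

In this fully disentangled example each policy collects only its own sub-reward, so the independence penalty vanishes and the objective is maximal.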
2.2 Distributional Reinforcement Learning
In most reinforcement learning settings, the environment is not deterministic. Moreover, RL models are generally trained with an $\epsilon$-greedy policy to allow exploration, which makes the agent stochastic as well. To better analyze the randomness in this setting, Bellemare et al. (2017) propose the C51 algorithm and conduct a theoretical analysis of distributional RL.
In distributional RL, the reward $R(s, a)$ is viewed as a random variable, and the total return is defined by $Z^{\pi}(s, a) = \sum_{t=0}^{\infty} \gamma^{t} R(s_t, a_t)$. The expectation of $Z^{\pi}$ is the traditional action-value $Q^{\pi}$, and the distributional Bellman optimality operator is given by
$$\mathcal{T}Z(s, a) \stackrel{D}{=} R(s, a) + \gamma Z\!\left(s', \arg\max_{a'} \mathbb{E}\left[Z(s', a')\right]\right),$$
where $X \stackrel{D}{=} Y$ denotes that the random variables $X$ and $Y$ follow the same distribution.
In C51, the random variable $Z$ is characterized by a categorical distribution over a fixed set of values, and the algorithm outperforms all previous variants of DQN on the Atari domain.
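The categorical representation used by C51 can be sketched as follows. The support bounds and atom count follow the C51 convention; the probability mass below is illustrative:

```python
import numpy as np

# Minimal sketch of how C51 represents a return distribution: a categorical
# distribution over a fixed, evenly spaced support of atoms.
V_MIN, V_MAX, N_ATOMS = -10.0, 10.0, 51
atoms = np.linspace(V_MIN, V_MAX, N_ATOMS)   # fixed support z_0 .. z_50

probs = np.zeros(N_ATOMS)
probs[25] = 0.5                              # mass on the atom at value 0.0
probs[30] = 0.5                              # mass on the atom at value 2.0
q_value = float(np.dot(probs, atoms))        # E[Z] recovers the scalar Q-value
```

Taking the expectation over the support collapses the distribution back to the usual action-value, which is how a distributional agent still selects greedy actions.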
3 Distributional Reward Decomposition for Reinforcement Learning
3.1 Distributional Reward Decomposition
In many reinforcement learning environments, there are multiple sources from which an agent receives reward, as shown in Figure 1. Our method is designed mainly for environments with this property.
Under the distributional setting, we assume the reward and sub-rewards are random variables and denote them by $R$ and $R_i$ respectively. In our architecture, the categorical distribution of each sub-return $Z_i$ is the output of a network, denoted by $F_{Z_i}(\cdot \mid s, a)$. Note that in most cases the sub-returns are not independent, i.e. $F_{Z_i \mid Z_j} \neq F_{Z_i}$, so theoretically we need the conditional distributions $F_{Z_i \mid Z_j}$ for each pair $i, j$ to obtain the distribution of the full return. We call this architecture the non-factorial, or full-distribution, model; its architecture is shown in the appendix. However, experiments show that an approximate form requiring only the marginals $F_{Z_i}$ performs much better than computing $F_{Z_i \mid Z_j}$ for every pair by brute force; we believe this is due to the increased number of samples per atom. In this paper, we approximate the conditional distribution $F_{Z_i \mid Z_j}$ with the marginal $F_{Z_i}$.
Consider categorical distribution functions $F_{Z_1}$ and $F_{Z_2}$ with the same number of atoms $k$, where the $j$th atom has value $x_j = j \cdot l$ for a constant $l$. Let random variables $Z_1 \sim F_{Z_1}$ and $Z_2 \sim F_{Z_2}$; from basic probability theory we know that the distribution function of $Z = Z_1 + Z_2$ is the convolution of $F_{Z_1}$ and $F_{Z_2}$:

(3) $F_Z(x_m) = \sum_{j} F_{Z_1}(x_j)\, F_{Z_2}(x_m - x_j) = \left(F_{Z_1} * F_{Z_2}\right)(x_m).$

When we use $N$ sub-returns, the distribution function of the total return is given by $F_Z = F_{Z_1} * F_{Z_2} * \cdots * F_{Z_N}$, where $*$ denotes linear 1-D convolution.
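The convolution in Eq. 3 can be sketched directly with `np.convolve`; the two probability vectors below are illustrative categorical distributions on a shared evenly spaced support:

```python
import numpy as np

# Sketch of Eq. 3: under the independence approximation, the categorical
# distribution of Z = Z_1 + Z_2 is the 1-D convolution of the sub-return
# distributions. Index k corresponds to atom value k * l.
p1 = np.array([0.5, 0.5, 0.0])   # P(Z_1 = k*l) for k = 0, 1, 2
p2 = np.array([1.0, 0.0, 0.0])   # Z_2 is 0 with probability 1
p_total = np.convolve(p1, p2)    # distribution of Z_1 + Z_2 over k = 0 .. 4
```

Because `Z_2` is deterministically zero here, the result simply reproduces `p1` on the larger support; with more channels one chains further convolutions, which is how the network's sub-distributions are combined into the full-return distribution.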
While reward decomposition is not explicitly performed in our algorithm, we can derive the decomposed rewards from trained agents. Recall that the total return follows the Bellman equation, so naturally we have

(4) $Z_i(s_t, a_t) \stackrel{D}{=} R_i(s_t, a_t) + \gamma Z_i(s_{t+1}, a_{t+1}),$

where $Z_i(s_{t+1}, a_{t+1})$ represents the sub-return at the next state-action pair. Note that we only have access to a sample of the full reward $r_t$, so the sub-rewards are latent; for better visualization, a direct way of deriving them is given by

(5) $R_i(s_t, a_t) \stackrel{D}{=} Z_i(s_t, a_t) - \gamma Z_i(s_{t+1}, a_{t+1}).$

In the next section we present an example of these sub-rewards by taking their expectation $r_i = \mathbb{E}[R_i]$. Note that our reward decomposition is latent and the sub-rewards are not needed by our algorithm; Eq. 5 only provides an approach to visualize the decomposition.
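Taking expectations of both sides of Eq. 5 gives a scalar sub-reward per step. The sketch below does exactly that for one channel; the supports and probability vectors are illustrative:

```python
import numpy as np

# Hedged sketch of the visualization in Eq. 5: recover a sub-reward as
# r_i = E[Z_i(s_t, a_t)] - gamma * E[Z_i(s_{t+1}, a_{t+1})].
def sub_reward(atoms, p_now, p_next, gamma=0.99):
    """Expected sub-reward implied by consecutive sub-return distributions."""
    return float(np.dot(p_now, atoms) - gamma * np.dot(p_next, atoms))

atoms = np.array([0.0, 1.0, 2.0])                 # illustrative support
r1 = sub_reward(atoms,
                p_now=np.array([0.0, 0.0, 1.0]),  # E[Z_i(s_t, a_t)] = 2.0
                p_next=np.array([0.0, 1.0, 0.0])) # E[Z_i(s_{t+1}, a_{t+1})] = 1.0
```

Plotting such per-step values for each channel along a trajectory is what produces the decomposition curves discussed in the experiments.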
3.2 Disentangled Sub-returns
To obtain a meaningful reward decomposition, we want the sub-rewards to be disentangled. Inspired by Grimm and Singh (2019), we compute the disentanglement of the distributions of two sub-returns $Z_i$ and $Z_j$ at state $s$ with the following value:

(6) $D_{ij}(s) = D_{\mathrm{KL}}\!\left(F_Z(\cdot \mid s, a_i^*) \,\big\|\, F_Z(\cdot \mid s, a_j^*)\right), \quad a_i^* = \arg\max_a \mathbb{E}\left[Z_i(s, a)\right],$

where $D_{\mathrm{KL}}$ denotes the cross-entropy term of the KL divergence.
Intuitively, $D_{ij}(s)$ estimates the disentanglement of sub-returns $Z_i$ and $Z_j$ by first obtaining the actions that maximize $\mathbb{E}[Z_i(s, \cdot)]$ and $\mathbb{E}[Z_j(s, \cdot)]$ respectively, and then computing the KL divergence between the estimated total-return distributions under those two actions. If $Z_i$ and $Z_j$ are independent, the actions maximizing the two sub-returns will differ, and that difference is reflected in the estimates of the total return. By maximizing this value, we can expect a meaningful reward decomposition with independently obtained rewards.
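The disentanglement value of Eq. 6 can be sketched as follows. The arrays are illustrative stand-ins for network outputs at one state, and a small `eps` is added for numerical safety (an assumption, not part of the paper):

```python
import numpy as np

# Sketch of Eq. 6: take the greedy action of each sub-return, then measure the
# cross-entropy between the full-return distributions of those two actions.
def disentanglement(q_sub1, q_sub2, full_dists, eps=1e-8):
    a1 = int(np.argmax(q_sub1))                 # argmax_a E[Z_1(s, a)]
    a2 = int(np.argmax(q_sub2))                 # argmax_a E[Z_2(s, a)]
    p, q = full_dists[a1], full_dists[a2]
    return float(-np.sum(p * np.log(q + eps)))  # cross-entropy term of KL(p || q)

full_dists = np.array([[0.9, 0.1],              # F_Z(. | s, a) for two actions
                       [0.1, 0.9]])
d = disentanglement(np.array([1.0, 0.0]), np.array([0.0, 1.0]), full_dists)
```

When the two greedy actions coincide, the value reduces to the entropy of a single distribution; it grows as the channels prefer actions whose total-return distributions diverge, which is exactly what the regularizer rewards.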
3.3 Projected Bellman Update with Regularization
Following C51 (Bellemare et al. (2017)), we use a projected Bellman update in our algorithm. When the Bellman optimality operator is applied, the atoms of $Z$ are shifted by $r$ and shrunk by $\gamma$. However, computing the loss, usually the KL divergence between $\hat{\mathcal{T}}Z$ and $Z$, requires the two categorical distributions to be defined on the same set of atoms, so the target distribution must be projected onto the original set of atoms before the Bellman update. Consider a sample transition $(s, a, r, s')$; the projection operator $\Phi$ proposed in C51 is given by

(7) $\left(\Phi \hat{\mathcal{T}} Z(s, a)\right)_j = \sum_{k=0}^{M-1} \left[1 - \frac{\left|\left[r + \gamma x_k\right]_{x_0}^{x_{M-1}} - x_j\right|}{\Delta x}\right]_0^1 F_Z\!\left(x_k \mid s', a^*\right),$

where $M$ is the number of atoms in C51 and $[\cdot]_a^b$ bounds its argument in $[a, b]$. The sample loss for the transition is given by the cross-entropy term of the KL divergence of $\Phi \hat{\mathcal{T}} Z$ and $Z$:

(8) $\mathcal{L}_{s,a,r,s'}(\theta) = -\sum_{j} \left(\Phi \hat{\mathcal{T}} Z(s, a)\right)_j \log F_Z\!\left(x_j \mid s, a\right).$
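The projection of Eq. 7 can be sketched with an explicit loop over source atoms; each shifted atom's mass is split between its two nearest neighbors on the original support. Support size and constants below are illustrative:

```python
import numpy as np

# Sketch of the C51-style projection in Eq. 7: shift the support by
# r + gamma * z, clip to [v_min, v_max], and distribute each target atom's
# probability onto its two nearest original atoms.
def project(probs, atoms, r, gamma, v_min, v_max):
    n = len(atoms)
    dz = (v_max - v_min) / (n - 1)
    out = np.zeros(n)
    for p, z in zip(probs, atoms):
        tz = np.clip(r + gamma * z, v_min, v_max)  # shifted-and-shrunk atom
        b = (tz - v_min) / dz                      # fractional index on support
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:
            out[lo] += p                           # target landed on an atom
        else:
            out[lo] += p * (hi - b)                # split mass to neighbors
            out[hi] += p * (b - lo)
    return out

atoms = np.linspace(-1.0, 1.0, 5)                  # support -1, -0.5, 0, 0.5, 1
probs = np.array([0.0, 0.0, 1.0, 0.0, 0.0])        # all mass at value 0
target = project(probs, atoms, r=0.5, gamma=0.9, v_min=-1.0, v_max=1.0)
```

The returned vector is the projected target that enters the cross-entropy loss of Eq. 8.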
Let $Z_{\theta}$ be a neural network parameterized by $\theta$; we combine the distributional TD error and the disentanglement term to jointly update $\theta$. For each sample transition $(s, a, r, s')$, $\theta$ is updated according to the following objective:

(9) $\theta \leftarrow \theta - \alpha \nabla_{\theta}\left(\mathcal{L}_{s,a,r,s'}(\theta) - \lambda \sum_{i \neq j} D_{ij}(s)\right),$

where $\alpha$ denotes the learning rate and $\lambda$ controls the weight of the disentanglement term.
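The scalar objective inside Eq. 9 can be sketched as a plain function of the predicted distribution, the projected target, and the disentanglement value. The coefficient `lam` and the `eps` smoothing are illustrative assumptions:

```python
import numpy as np

# Sketch of the per-sample objective in Eq. 9: the distributional TD loss
# (cross-entropy against the projected target, Eq. 8) minus the weighted
# disentanglement bonus, to be minimized with respect to theta.
def drdrl_loss(pred, target, disentanglement, lam=0.1, eps=1e-8):
    td_loss = float(-np.sum(target * np.log(pred + eps)))  # Eq. 8
    return td_loss - lam * disentanglement                 # gradient descent target

loss = drdrl_loss(pred=np.array([0.5, 0.5]),
                  target=np.array([0.5, 0.5]),
                  disentanglement=1.0)
```

Minimizing this quantity simultaneously fits the projected Bellman target and pushes the sub-return channels apart.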
3.4 Multi-channel State Representation
One complication of our approach outlined above is that very often the distribution $F_{Z_i}$ cannot distinguish itself from the other distributions (e.g., $F_{Z_j}$ for $j \neq i$) during learning, since they all depend on the same state-feature input. This makes it difficult to maximize disentanglement by joint training, as the different distribution functions are exchangeable. A naive idea is to split the state feature into $N$ pieces (e.g., $\phi_1(s), \phi_2(s), \dots, \phi_N(s)$) so that each distribution depends on a different sub-state-feature. However, we empirically found that this is not enough to learn well-disentangled sub-returns.
To address this problem, we utilize an idea similar to universal value function approximation (UVFA) (Schaul et al. (2015)). The key idea is to take a one-hot embedding as an additional input that conditions the categorical distribution function, and to apply element-wise multiplication $\odot$ to force interaction between the state features and the one-hot embedding feature:

(10) $\phi_i(s) = \phi(s) \odot g(e_i),$

where $e_i$ denotes the one-hot embedding whose $i$th element is one, and $g$ denotes a one-layer nonlinear neural network that is updated by backpropagation during training.
In this way, the agent explicitly learns different distribution functions for different channels. The complete network architecture is shown in Figure 1.
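The channel conditioning of Eq. 10 can be sketched as follows. The random weights stand in for the trained one-layer network $g$, and the dimensions are illustrative:

```python
import numpy as np

# Sketch of Eq. 10: a shared state feature phi(s) is modulated per channel by a
# learned embedding of the one-hot channel id, via element-wise multiplication.
rng = np.random.default_rng(0)
n_channels, feat_dim = 3, 8
phi = rng.standard_normal(feat_dim)               # shared state feature phi(s)
W = rng.standard_normal((n_channels, feat_dim))   # stand-in parameters of g

def channel_feature(i):
    e = np.zeros(n_channels)
    e[i] = 1.0                                    # one-hot channel embedding e_i
    gate = np.maximum(0.0, e @ W)                 # g(e_i): one nonlinear layer
    return phi * gate                             # element-wise interaction

f0, f1 = channel_feature(0), channel_feature(1)   # distinct per-channel features
```

Because each channel gates the shared feature differently, the downstream distribution heads are no longer exchangeable, which is what makes the disentanglement term effective.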
4 Experiment Results
We test our algorithm on games from the Arcade Learning Environment (ALE; Bellemare et al. (2013)). We conduct experiments on six Atari games, some with complicated rules and some with simple rules. We implement our algorithm on top of Rainbow (Hessel et al. (2018)), an advanced variant of C51 (Bellemare et al. (2017)) that achieves state-of-the-art results in the Atari games domain. We replace the update rule of Rainbow with Eq. 9 and the network architecture of Rainbow with our convolution-based architecture, as shown in Figure 1. In Rainbow, the Q-value is bounded by $[-V_{\max}, V_{\max}]$ for a constant $V_{\max}$. In our method, we bound the categorical distribution of each sub-return by a range of $[-V_{\max}/N, V_{\max}/N]$. Rainbow uses a categorical distribution with $M$ atoms. For fair comparison, we assign $M/N$ atoms to the distribution of each sub-return, which results in the same network capacity as the original architecture.
Our code is built upon the Dopamine framework (Castro et al. (2018)), and we use its default well-tuned hyperparameter settings. For the update rule in Eq. 9, we set the disentanglement weight $\lambda$ to a fixed constant. We run our agents for 100 epochs, each with 0.25 million training steps and 0.125 million evaluation steps. For evaluation, we follow the common practice in Van Hasselt et al. (2016), starting the game with up to 30 no-op actions to provide random starting positions for the agent. All experiments are performed on NVIDIA Tesla V100 16GB graphics cards.
4.1 Comparison with Rainbow
To verify that our architecture achieves reward decomposition without degrading performance, we compare our method with Rainbow. We are not able to compare with Van Seijen et al. (2017) or Grimm and Singh (2019), since they require either predefined state preprocessing or environments resettable to specific states. We test our reward decomposition (RD) with 2 and 3 channels (denoted RD(2) and RD(3)). The results are shown in Figure 2. Our methods perform significantly better than Rainbow on the environments we tested, which implies that our distributional reward decomposition method can help accelerate the learning process. We also find that on some environments RD(3) performs better than RD(2), while on the rest the two perform similarly. We conjecture that this is due to the intrinsic settings of the environments. For example, in Seaquest and UpNDown the rules are relatively complicated, so RD(3) characterizes such complex reward structure better. In simple environments like Gopher and Asterix, RD(2) and RD(3) obtain similar performance, and sometimes RD(2) even outperforms RD(3).
4.2 Reward Decomposition Analysis
Here we use Seaquest to illustrate our reward decomposition. Figure 3 shows the sub-rewards obtained by taking the expectation of the LHS of Eq. 5, together with the original reward, along an actual trajectory. We observe that while $r_1$ and $r_2$ basically add up to the original reward $r$, $r_1$ dominates when the submarine is close to the surface, i.e. when it rescues the divers and refills oxygen. When the submarine scores by shooting sharks, $r_2$ becomes the main source of reward. We also monitor the distributions of the different sub-returns while the agent plays the game. In Figure 4(a), the submarine floats to the surface to rescue the divers and refill oxygen, and $Z_1$ has higher values. In Figure 4(b), as the submarine dives into the sea and shoots sharks, the expected values of $Z_2$ (orange) are higher than those of $Z_1$ (blue). This result implies that the reward decomposition indeed captures different sources of return, in this case shooting sharks and rescuing divers/refilling oxygen. We also provide statistics on actions for quantitative support. In Figure 6(a), we count the occurrence of each action selected by $\pi_1$ and $\pi_2$ in a single trajectory, using the same policy as in Figure 4. We see that while $\pi_1$ prefers going up, $\pi_2$ prefers going down with fire.
4.3 Visualization by Saliency Maps
To better understand the roles of the different sub-rewards, we train a DRDRL agent with two channels ($N = 2$) and compute saliency maps (Simonyan et al. (2013)). Specifically, to visualize the salient parts of the images as seen by the different sub-policies, we compute the absolute value of the Jacobian of each channel's output with respect to the input image. Figure 5 shows the visualization results. We find that channel 1 (red region) focuses on refilling oxygen, while channel 2 (green region) pays more attention to shooting sharks, as well as to positions where sharks are likely to appear.
4.4 Direct Control using Induced Sub-policies
We also provide videos (https://sites.google.com/view/drdpaper) of running the sub-policies defined by $\pi_i(s) = \arg\max_a \mathbb{E}[Z_i(s, a)]$. To clarify, the sub-policies are never rolled out during training or evaluation and are only used to compute $a_i^*$ in Eq. 6. We execute these sub-policies and observe their differences from the main policy to get a better visual sense of the reward decomposition. Take Seaquest in Figure 6(b) as an example: the two sub-policies show distinctive preferences. As $Z_1$ mainly captures the reward for surviving and rescuing divers, $\pi_1$ tends to stay close to the surface. $Z_2$ represents the return gained from shooting sharks, so $\pi_2$ appears much more aggressive than $\pi_1$. Also, without refilling oxygen, we see that $\pi_2$ dies quickly from running out of oxygen.
5 Related Work
Our method is closely related to previous work on reward decomposition. Reward function decomposition has been studied, among others, by Russell and Zimdars (2003) and Sprague and Ballard (2003). While these earlier works mainly focus on how to achieve an optimal policy given decomposed reward functions, several recent works attempt to learn latent decomposed rewards. Van Seijen et al. (2017) construct an easy-to-learn value function by decomposing the reward function of the environment into different reward functions. To ensure the learned decomposition is non-trivial, they split a state into different pieces following domain knowledge and feed each state piece into a separate reward-function branch. While this method can accelerate the learning process, it requires predefined preprocessing techniques. Other work explores learning reward decomposition networks end-to-end. Grimm and Singh (2019) investigate how to learn independently-obtainable reward functions. While they learn an interesting reward decomposition, their method requires that the environment be resettable to specific states, since it needs multiple trajectories from the same starting state to compute the objective function. Besides, their method aims at learning a different optimal policy for each decomposed reward function. Different from the works above, our method learns a meaningful implicit reward decomposition without any prior knowledge, and can leverage the decomposed sub-rewards to find better behaviour for a single agent.
Our work also relates to Horde (Sutton et al. (2011)). The Horde architecture consists of a large number of 'demons' that learn in parallel via off-policy learning. Each demon trains a separate general value function (GVF) based on its own policy and pseudo-reward function, where a pseudo-reward can be any feature-based signal that encodes useful information. Horde is focused on building up general knowledge about the world, encoded via a large number of GVFs. UVFA (Schaul et al. (2015)) extends Horde in a different direction, enabling value functions to generalize across different goals. Our method instead focuses on learning an implicit reward decomposition in order to learn a control policy more efficiently.
6 Conclusion
In this paper, we propose Distributional Reward Decomposition for Reinforcement Learning (DRDRL), a novel reward decomposition algorithm that captures the multiple-reward-channel structure under the distributional setting. Our algorithm significantly outperforms the state-of-the-art RL method Rainbow on Atari games with multiple reward channels. We also provide experimental analysis that gives insight into our algorithm. In the future, we may develop reward decomposition methods based on quantile networks (Dabney et al., 2018a; 2018b).
Acknowledgments
This work was supported in part by the National Key Research & Development Plan of China (grant No. 2016YFA0602200 and 2017YFA0604500), and by Center for High Performance Computing and System Simulation, Pilot National Laboratory for Marine Science and Technology (Qingdao).
References

Bellemare, M. G., Dabney, W., and Munos, R. (2017). A distributional perspective on reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 449-458.
Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. (2013). The arcade learning environment: an evaluation platform for general agents. Journal of Artificial Intelligence Research 47, pp. 253-279.
Castro, P. S., Moitra, S., Gelada, C., Kumar, S., and Bellemare, M. G. (2018). Dopamine: a research framework for deep reinforcement learning. arXiv preprint arXiv:1812.06110.
Dabney, W., Ostrovski, G., Silver, D., and Munos, R. (2018a). Implicit quantile networks for distributional reinforcement learning. In International Conference on Machine Learning, pp. 1104-1113.
Dabney, W., Rowland, M., Bellemare, M. G., and Munos, R. (2018b). Distributional reinforcement learning with quantile regression. In Thirty-Second AAAI Conference on Artificial Intelligence.
Grimm, C. and Singh, S. (2019). Learning independently-obtainable reward functions. arXiv preprint arXiv:1901.08649.
Hessel, M., Modayil, J., Van Hasselt, H., Schaul, T., Ostrovski, G., Dabney, W., Horgan, D., Piot, B., Azar, M., and Silver, D. (2018). Rainbow: combining improvements in deep reinforcement learning. In Thirty-Second AAAI Conference on Artificial Intelligence.
Laversanne-Finot, A., Péré, A., and Oudeyer, P.-Y. (2018). Curiosity driven exploration of learned disentangled goal spaces. arXiv preprint arXiv:1807.01521.
Mnih, V., Kavukcuoglu, K., Silver, D., et al. (2015). Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529-533.
Russell, S. and Zimdars, A. (2003). Q-decomposition for reinforcement learning agents. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 656-663.
Schaul, T., Horgan, D., Gregor, K., and Silver, D. (2015). Universal value function approximators. In International Conference on Machine Learning, pp. 1312-1320.
Simonyan, K., Vedaldi, A., and Zisserman, A. (2013). Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.
Sprague, N. and Ballard, D. (2003). Multiple-goal reinforcement learning with modular sarsa(0). In IJCAI.
Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., and Precup, D. (2011). Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 2, pp. 761-768.
Thomas, V., Pondard, J., Bengio, E., et al. (2017). Independently controllable features. arXiv preprint arXiv:1708.01289.
Van Hasselt, H., Guez, A., and Silver, D. (2016). Deep reinforcement learning with double Q-learning. In Thirtieth AAAI Conference on Artificial Intelligence.
Van Seijen, H., Fatemi, M., Romoff, J., Laroche, R., Barnes, T., and Tsang, J. (2017). Hybrid reward architecture for reinforcement learning. In Advances in Neural Information Processing Systems, pp. 5392-5402.
Appendix
Non-factorial model
To show that approximating $F_{Z_i \mid Z_j}$ with $F_{Z_i}$ is reasonable, we also implemented a non-factorial version of DRDRL, i.e. one that models the joint distribution of the sub-returns directly, with 3 channels. The architecture of the non-factorial model is shown in Figure 7. The three sub-distributions have $k_1$, $k_2$ and $k_3$ atoms respectively, and they multiply to form a full joint distribution with $k_1 k_2 k_3$ atoms. To maintain network capacity similar to C51, we set $k_1 = k_2 = k_3 = 4$ to form a full distribution of 64 atoms.
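The multiplicative atom count can be sketched by enumerating the joint support explicitly; the per-channel atom values below are illustrative, only the counts matter:

```python
import numpy as np

# Sketch of why the non-factorial model's atom counts multiply: a joint
# categorical distribution over three sub-returns enumerates every combination
# of per-channel atoms, so 4 atoms per channel give 4**3 = 64 joint atoms.
per_channel_atoms = np.arange(4) * 0.5            # 4 illustrative atoms per channel
z1, z2, z3 = np.meshgrid(per_channel_atoms, per_channel_atoms,
                         per_channel_atoms, indexing="ij")
joint_support = (z1 + z2 + z3).ravel()            # total-return value of each joint atom
n_joint_atoms = joint_support.size
```

This makes explicit why the non-factorial model's capacity grows exponentially with the number of channels, while the factorial (marginal) model grows only linearly.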
Ablative results of the nonfactorial model are shown in the following section together with other tricks.
Ablative Analysis
Figure 8 shows how each component of our method contributes to DRDRL. Specifically, we test the performance of '1D convolution', '1D convolution + KL', '1D convolution + one-hot' and '1D convolution + KL + one-hot', as well as the non-factorial model.
One may argue that the superior performance of DRDRL comes simply from the fact that fitting $N$ simple sub-distributions with $M/N$ categories each is easier than fitting one distribution with $M$ categories. We include another set of experiments to examine this argument, using $M$ atoms instead of $M/N$ atoms for each sub-distribution.
Results show that every part of DRDRL is important. To our surprise, the KL term not only enables reward decomposition; we also observe significant differences between the curves with and without it. This suggests that learning a decomposed reward can greatly boost the performance of RL algorithms. Together with the degraded performance when using $M$ atoms instead of $M/N$ atoms per sub-distribution, this suffices to show that DRDRL's success does not come merely from fitting easier distributions with $M/N$ atoms.