Introduction
The mean of the return has long been the focus of reinforcement learning (RL), and many methods exist to learn this mean quantity (Sutton 1988; Watkins and Dayan 1992; Mnih et al. 2015). Thanks to advances in distributional RL (Jaquette 1973; Bellemare, Dabney, and Munos 2017), we are able to learn the full distribution of a state-action value, not only its mean. In particular, Dabney et al. (2017) used a set of quantiles to approximate this value distribution. However, decision making in prevailing distributional RL methods is still based on the mean (Bellemare, Dabney, and Munos 2017; Dabney et al. 2017; Barth-Maron et al. 2018; Qu et al. 2018). The main motivation of this paper is to answer two questions: how to make decisions based on the full distribution, and whether an agent can benefit from this for better exploration. In this paper, we propose the Quantile Option Architecture (QUOTA) for control. In QUOTA, decision making is based on all quantiles of a state-action value distribution, not only the mean.
In traditional RL and recent distributional RL, an agent selects an action greedily with respect to the mean of the action values. In QUOTA, we propose to select an action greedily with respect to a certain quantile of the action value distribution. A high quantile represents an optimistic estimation of the action value, and action selection based on a high quantile indicates an optimistic exploration strategy. A low quantile represents a pessimistic estimation of the action value, and action selection based on a low quantile indicates a pessimistic exploration strategy. (Both exploration strategies are related to risk-sensitive RL, which will be discussed later.) We first compare different exploration strategies in two Markov chains, where naive mean-based RL algorithms fail to explore efficiently because they cannot exploit the distribution information during training, which is crucial for efficient exploration. In the first chain, faster exploration comes from a high quantile (i.e., an optimistic exploration strategy). In the second chain, however, exploration benefits from a low quantile (i.e., a pessimistic exploration strategy). Different tasks need different exploration strategies. Even within one task, an agent may need different exploration strategies at different stages of learning. To address this issue, we use the option framework (Sutton, Precup, and Singh 1999). We learn a high-level policy to decide which quantile to use for action selection. In this way, different quantiles function as different options, and we name this special option the
quantile option. QUOTA adaptively switches between pessimistic and optimistic exploration, resulting in improved exploration consistently across different tasks. We make two main contributions in this paper:

First, we propose QUOTA for control in discrete-action problems, combining distributional RL with options. Action selection in QUOTA is based on certain quantiles instead of the mean of the state-action value distribution, and QUOTA learns a high-level policy to decide which quantile to use for decision making.

Second, we extend QUOTA to continuous-action problems. In a continuous action space, quantile-based action selection is not straightforward. To address this issue, we introduce quantile actors. Each quantile actor is responsible for proposing an action that maximizes one specific quantile of a state-action value distribution.
We show empirically that QUOTA improves the exploration of RL agents, resulting in a performance boost in both challenging video games (Atari games) and physical robot simulators (Roboschool tasks).
In the rest of this paper, we first present some preliminaries of RL. We then show two Markov chains where naive mean-based RL algorithms fail to explore efficiently. Next, we present QUOTA for both discrete- and continuous-action problems, followed by empirical results. Finally, we give an overview of related work and closing remarks.
Preliminaries
We consider a Markov Decision Process (MDP) of a state space $\mathcal{S}$, an action space $\mathcal{A}$, a reward "function" $R: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, which we treat as a random variable in this paper, a transition kernel $p(s' | s, a)$, and a discount ratio $\gamma \in [0, 1)$. We use $\pi$ to denote a stochastic policy. We use $Z^\pi(s, a)$ to denote the random variable of the sum of the discounted rewards in the future, following the policy $\pi$ and starting from the state $s$ and the action $a$. We have $Z^\pi(s, a) \doteq \sum_{t=0}^{\infty} \gamma^t R(S_t, A_t)$, where $S_0 = s$, $A_0 = a$, $S_{t+1} \sim p(\cdot | S_t, A_t)$, and $A_{t+1} \sim \pi(\cdot | S_{t+1})$. The expectation of the random variable $Z^\pi(s, a)$ is $Q^\pi(s, a) \doteq \mathbb{E}[Z^\pi(s, a)]$, which is usually called the state-action value function. We have the Bellman equation
$$Q^\pi(s, a) = \mathbb{E}[R(s, a)] + \gamma \mathbb{E}_{s' \sim p(\cdot | s, a), a' \sim \pi(\cdot | s')}[Q^\pi(s', a')].$$
In an RL problem, we are usually interested in finding an optimal policy $\pi^*$ such that $Q^{\pi^*}(s, a) \geq Q^\pi(s, a)$ for all $(\pi, s, a)$. All possible optimal policies share the same (optimal) state-action value function $Q^*$. This is the unique fixed point of the Bellman optimality operator (Bellman 2013)
$$\mathcal{T}Q(s, a) \doteq \mathbb{E}[R(s, a)] + \gamma \mathbb{E}_{s' \sim p(\cdot | s, a)}\big[\max_{a'} Q(s', a')\big].$$
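As a concrete illustration of these definitions, the following sketch (a toy example of ours, not from the paper) draws Monte Carlo samples of the random return $Z(s, a)$ and estimates $Q(s, a)$ as their mean; the per-step reward distribution and horizon are arbitrary choices:

```python
import random

def rollout_return(gamma=0.9, n_steps=3, rng=None):
    """Sample one discounted return G = sum_t gamma^t * R_t for a toy MDP
    whose per-step reward is +1 or -1 with equal probability."""
    rng = rng or random.Random()
    return sum(gamma ** t * rng.choice([-1.0, 1.0]) for t in range(n_steps))

rng = random.Random(0)
samples = [rollout_return(rng=rng) for _ in range(10000)]  # samples of Z(s, a)
q_estimate = sum(samples) / len(samples)                   # Q(s, a) = E[Z(s, a)]
# The mean is close to 0, but individual returns spread over [-2.71, 2.71]:
# the distribution carries information that the mean discards.
```

The spread of `samples` is exactly the information that distributional RL retains and mean-based RL throws away.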
With tabular representation, we can use Q-learning (Watkins and Dayan 1992) to estimate $Q^*$. The incremental update per step is
$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \big( r_{t+1} + \gamma \max_a Q(s_{t+1}, a) - Q(s_t, a_t) \big), \quad (1)$$
where $\alpha$ is a step size and the quadruple $(s_t, a_t, r_{t+1}, s_{t+1})$ is a transition. There is a large body of research extending Q-learning to linear function approximation (Sutton and Barto 2018; Szepesvári 2010). In this paper, we focus on Q-learning with neural networks. Mnih et al. (2015) proposed Deep-Q-Network (DQN), where a deep convolutional neural network $\theta$ is used to parameterize $Q$. At every time step, DQN performs stochastic gradient descent to minimize
$$\tfrac{1}{2} \big( r_{t+1} + \gamma \max_a Q_{\theta^-}(s_{t+1}, a) - Q_\theta(s_t, a_t) \big)^2,$$
where the quadruple $(s_t, a_t, r_{t+1}, s_{t+1})$ is a transition sampled from the replay buffer (Lin 1992) and $\theta^-$ is the target network (Mnih et al. 2015), which is a copy of $\theta$ and is synchronized with $\theta$ periodically. To speed up training and reduce the memory requirement of DQN, Mnih et al. (2016) further proposed $n$-step asynchronous Q-learning with multiple workers (detailed in Supplementary Material), where the loss function at time step $t$ is
$$\tfrac{1}{2} \Big( \sum_{i=1}^{n} \gamma^{i-1} r_{t+i} + \gamma^n \max_a Q_{\theta^-}(s_{t+n}, a) - Q_\theta(s_t, a_t) \Big)^2.$$

Distributional RL
Analogous to the Bellman equation of $Q^\pi$, Bellemare, Dabney, and Munos (2017) proposed the distributional Bellman equation for the state-action value distribution $Z^\pi$ given a policy $\pi$ in the policy evaluation setting,
$$Z^\pi(s, a) \overset{D}{=} R(s, a) + \gamma Z^\pi(s', a'), \quad s' \sim p(\cdot | s, a), \ a' \sim \pi(\cdot | s'),$$
where $X \overset{D}{=} Y$ means the two random variables $X$ and $Y$ are distributed according to the same law. Bellemare, Dabney, and Munos (2017) also proposed a distributional Bellman optimality operator for control,
$$\mathcal{T}Z(s, a) \overset{D}{=} R(s, a) + \gamma Z\big(s', \arg\max_{a'} \mathbb{E}[Z(s', a')]\big), \quad s' \sim p(\cdot | s, a).$$
When making decisions, the action selection is still based on the expected state-action value (i.e., the mean of $Z$). Given the optimality operator, we now need a representation for $Z$. Dabney et al. (2017) proposed to approximate $Z(s, a)$ by a set of quantiles. The distribution of $Z$ is represented by a uniform mix of $N$ supporting quantiles:
$$Z_\theta(s, a) \doteq \frac{1}{N} \sum_{i=1}^{N} \delta_{q_i(s, a; \theta)},$$
where $\delta_x$ denotes a Dirac at $x \in \mathbb{R}$, and each $q_i$ is an estimation of the quantile corresponding to the quantile level (a.k.a. quantile index) $\hat{\tau}_i \doteq \frac{\tau_{i-1} + \tau_i}{2}$ with $\tau_i \doteq \frac{i}{N}$ for $0 \leq i \leq N$. The state-action value $Q(s, a)$ is then approximated by $\frac{1}{N} \sum_{i=1}^{N} q_i(s, a)$. Such approximation of a distribution is referred to as quantile approximation. The quantile estimations (i.e., $\{q_i\}$) are trained via the Huber quantile regression loss (Huber 1964). To be more specific, at time step $t$ the loss is
$$\frac{1}{N} \sum_{i=1}^{N} \sum_{i'=1}^{N} \rho_{\hat{\tau}_i}^\kappa \big( y_{t, i'} - q_i(s_t, a_t) \big),$$
where $y_{t, i'} \doteq r_{t+1} + \gamma q_{i'}(s_{t+1}, a^*)$ with $a^* \doteq \arg\max_{a'} \frac{1}{N} \sum_i q_i(s_{t+1}, a')$, and $\rho_{\hat{\tau}_i}^\kappa(u) \doteq |\hat{\tau}_i - \mathbb{I}\{u < 0\}| \frac{\mathcal{L}_\kappa(u)}{\kappa}$, where $\mathbb{I}$ is the indicator function and $\mathcal{L}_\kappa$ is the Huber loss,
$$\mathcal{L}_\kappa(u) \doteq \begin{cases} \frac{1}{2} u^2 & \text{if } |u| \leq \kappa \\ \kappa \big( |u| - \frac{\kappa}{2} \big) & \text{otherwise.} \end{cases}$$
The resulting algorithm is the Quantile Regression DQN (QR-DQN). QR-DQN also uses experience replay and a target network similar to DQN. Dabney et al. (2017) showed that quantile approximation has better empirical performance than the earlier categorical approximation (Bellemare, Dabney, and Munos 2017). More recently, Dabney et al. (2018) approximated the distribution by learning a quantile function directly with the Implicit Quantile Network, resulting in a further performance boost. Distributional RL has enjoyed great success in various domains (Bellemare, Dabney, and Munos 2017; Dabney et al. 2017; Hessel et al. 2017; Barth-Maron et al. 2018; Dabney et al. 2018).
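The quantile regression machinery above can be sketched in a few lines. The following is our own minimal illustration (not the paper's or QR-DQN's code): the asymmetric weighting is what makes the loss recover quantiles, and for a small $\kappa$ the scalar estimate minimizing the loss over return samples approaches the target quantile.

```python
import numpy as np

def huber(u, kappa=1.0):
    """Huber loss L_kappa(u): quadratic near zero, linear in the tails."""
    return np.where(np.abs(u) <= kappa,
                    0.5 * u ** 2,
                    kappa * (np.abs(u) - 0.5 * kappa))

def quantile_huber_loss(td_errors, tau, kappa=1.0):
    """rho^kappa_tau(u) = |tau - 1{u < 0}| * L_kappa(u) / kappa, averaged over samples."""
    u = np.asarray(td_errors, dtype=float)
    return np.mean(np.abs(tau - (u < 0)) * huber(u, kappa) / kappa)

# Minimizing the loss over a scalar estimate approximately recovers the
# tau-quantile of the sampled distribution (here the 0.9-quantile of N(0, 1)).
rng = np.random.default_rng(0)
z = rng.normal(size=10000)                  # samples of a return distribution
grid = np.linspace(-3.0, 3.0, 601)
losses = [quantile_huber_loss(z - q, tau=0.9, kappa=0.01) for q in grid]
best = grid[int(np.argmin(losses))]         # close to the 0.9-quantile of N(0, 1)
```

Note that a large $\kappa$ smooths the loss at the cost of biasing the minimizer away from the exact quantile; QR-DQN's stochastic-gradient training applies this loss per quantile head rather than by grid search.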
Deterministic Policy
Silver et al. (2014) used a deterministic policy for continuous control problems with linear function approximation, and Lillicrap et al. (2015) extended it with deep networks, resulting in the Deep Deterministic Policy Gradient (DDPG) algorithm. DDPG is an off-policy algorithm. It has an actor $\mu$ and a critic $Q$, parameterized by $\psi$ and $\theta$ respectively. At each time step, $\theta$ is updated to minimize
$$\tfrac{1}{2} \big( r_{t+1} + \gamma Q_{\theta^-}(s_{t+1}, \mu_{\psi^-}(s_{t+1})) - Q_\theta(s_t, a_t) \big)^2,$$
and the policy gradient for $\psi$ in DDPG is
$$\nabla_\psi Q_\theta(s_t, \mu_\psi(s_t)).$$
This gradient update follows from the chain rule of gradient ascent w.r.t. $\psi$, where $Q_\theta$ is interpreted as an approximation to $Q^\mu$. Silver et al. (2014) provided policy improvement guarantees for this gradient.

Option
An option (Sutton, Precup, and Singh 1999) is a temporal abstraction of action. Each option $\omega \in \Omega$ is a triple $(\mathcal{I}_\omega, \pi_\omega, \beta_\omega)$, where $\Omega$ is the option set. We use $\mathcal{I}_\omega$ to denote the initiation set for the option $\omega$, describing where the option can be initiated. We use $\pi_\omega$ to denote the intra-option policy of $\omega$. Once the agent has committed to the option $\omega$, it chooses an action based on $\pi_\omega$. We use $\beta_\omega$ to denote the termination function for the option $\omega$: at each time step $t$, the option $\omega$ terminates with probability $\beta_\omega(S_t)$. In this paper, we consider the call-and-return option execution model (Sutton, Precup, and Singh 1999), where an agent commits to an option $\omega$ until $\omega$ terminates according to $\beta_\omega$. The option-value function $Q(s, \omega)$ is used to describe the utility of an option $\omega$ at state $s$, and we can learn this function via intra-option Q-learning (Sutton, Precup, and Singh 1999). The update is
$$Q(s_t, \omega_t) \leftarrow Q(s_t, \omega_t) + \alpha \Big( r_{t+1} + \gamma \big( (1 - \beta_{\omega_t}(s_{t+1})) Q(s_{t+1}, \omega_t) + \beta_{\omega_t}(s_{t+1}) \max_{\omega'} Q(s_{t+1}, \omega') \big) - Q(s_t, \omega_t) \Big),$$
where $\alpha$ is a step size and $(s_t, \omega_t, r_{t+1}, s_{t+1})$ is a transition in the cycle in which the agent is committed to the option $\omega_t$.
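The update above can be sketched as follows; this is our minimal reading of the standard intra-option Q-learning target, with table layout and names of our own choosing:

```python
def intra_option_q_update(q, s, omega, r, s_next, beta, alpha=0.1, gamma=0.99):
    """One intra-option Q-learning step on an option-value table q[state][option].

    With probability (1 - beta) the current option continues at s_next; with
    probability beta it terminates and the greedy option is re-selected, so the
    bootstrap target mixes the two cases.
    """
    continuation = (1.0 - beta) * q[s_next][omega]
    termination = beta * max(q[s_next].values())
    target = r + gamma * (continuation + termination)
    q[s][omega] += alpha * (target - q[s][omega])
    return q[s][omega]

# Two states, two options; option 1 is the better option at state 1.
q = {0: {0: 0.0, 1: 0.0}, 1: {0: 1.0, 1: 2.0}}
intra_option_q_update(q, s=0, omega=0, r=0.0, s_next=1, beta=0.5, alpha=1.0, gamma=1.0)
# Target = 0 + 1.0 * (0.5 * q[1][0] + 0.5 * max(q[1].values())) = 0.5 + 1.0 = 1.5
```

With `beta=0` the option never terminates and the update reduces to SARSA-style bootstrapping on the same option; with `beta=1` it reduces to ordinary Q-learning over options.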
A Failure of Mean
Figure 1: (a) Two Markov chains illustrating the inefficiency of mean-based decision making. (b, c) The steps required to find the optimal policy for each algorithm vs. the chain length, for Chain 1 and Chain 2 respectively. The required steps are averaged over 10 trials, and standard errors are plotted as error bars. For each trial, the number of steps is capped at a fixed maximum.

We now present two simple Markov chains (Figure 1) to illustrate that mean-based RL algorithms can fail to explore efficiently.
Chain 1 has $N$ nonterminal states and two actions {LEFT, UP}. The agent starts at state 1 in each episode. The action UP ends the episode immediately with reward 0. The action LEFT leads to the next state, with a reward sampled from a zero-mean normal distribution. Once the agent reaches the terminal state G, the episode ends with a positive reward. There is no discounting. The optimal policy is always moving left.

We first consider tabular Q-learning with $\epsilon$-greedy exploration. To learn the optimal policy, the agent has to reach the state G first. Unfortunately, this is particularly difficult for Q-learning, for two reasons. First, due to the $\epsilon$-greedy mechanism, the agent sometimes selects UP by chance; the episode then ends immediately, and the agent has to wait for the next episode. Second, before the agent reaches the state G, the expected return of either LEFT or UP at any state is 0, so the agent cannot distinguish between the two actions under the mean criterion. As a result, the agent cannot benefit from the value estimation, a mean, at all.
Suppose now the agent learns the distribution of the returns of LEFT and UP. Before reaching the state G, the learned action-value distribution of LEFT is a normal distribution with mean 0. A high quantile of this distribution is greater than 0, which is an optimistic estimation. If the agent behaves according to this optimistic estimation, it can quickly reach the state G and find the optimal policy.
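This mechanism is easy to check numerically. In the sketch below (our illustration, with an arbitrary unit-variance return distribution), the mean gives no signal for preferring LEFT over UP, while high and low quantiles do:

```python
import numpy as np

rng = np.random.default_rng(0)
returns_left = rng.normal(loc=0.0, scale=1.0, size=100000)  # LEFT before reaching G
returns_up = np.zeros(100000)                               # UP: episode ends, return 0

# Under the mean criterion the two actions are indistinguishable ...
mean_left, mean_up = returns_left.mean(), returns_up.mean()

# ... but a high quantile makes LEFT strictly preferable (optimism),
# and a low quantile makes it strictly worse (pessimism).
q90_left = np.quantile(returns_left, 0.9)   # roughly +1.28 for N(0, 1)
q10_left = np.quantile(returns_left, 0.1)   # roughly -1.28 for N(0, 1)
```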
Chain 2 has the same state space and action space as Chain 1. However, the reward for LEFT is now a small negative constant, except that reaching the state G gives a positive reward. The reward for UP is sampled from a zero-mean normal distribution. There is no discounting. When the negative reward for LEFT is small in magnitude, the optimal policy is still always moving left. Before reaching G, the estimated expected return of LEFT at any nonterminal state is less than 0, which means a Q-learning agent would prefer UP. This preference is harmful in this chain, as UP ends the episode immediately, which prevents further exploration.
We now present experimental results for four algorithms in the two chains: Q-learning, quantile regression Q-learning (QR, the tabular version of QR-DQN), optimistic quantile regression Q-learning (OQR), and pessimistic quantile regression Q-learning (PQR). OQR / PQR is the same as QR except that the behavior policy is always derived from a high / low quantile, not the mean, of the state-action value distribution. We used $\epsilon$-greedy behavior policies for all the above algorithms. We measured the number of steps each algorithm needed to find the optimal policy. The agent is said to have found the optimal policy at time step $t$ if and only if the policy derived from the estimation (for Q-learning) or from the mean of the estimated distribution (for the other algorithms) at time step $t$ is to move left at all nonterminal states.
All the algorithms were implemented with a tabular state representation, and $\epsilon$ was fixed to the same value for all of them. For quantile regression, we used 3 quantiles. We varied the chain length and tracked the number of steps an algorithm needed to find the optimal policy. Figures 1b and 1c show the results in Chain 1 and Chain 2 respectively. Figure 1b shows the best algorithm for Chain 1 was OQR, where a high quantile is used to derive the behavior policy, indicating an optimistic exploration strategy; PQR performed poorly as the chain length increased. Figure 1c shows the best algorithm for Chain 2 was PQR, where a low quantile is used to derive the behavior policy, indicating a pessimistic exploration strategy; OQR performed poorly as the chain length increased. The mean-based algorithms (Q-learning and QR) performed poorly in both chains. Although QR did learn the distribution information, it did not use that information properly for exploration.
The results in the two chains show that quantiles influence exploration efficiency, and that quantile-based action selection can improve exploration if the quantile is properly chosen. The results also demonstrate that different tasks need different quantiles; no single quantile is universally best. As a result, a high-level policy for quantile selection is necessary.
The Quantile Option Architecture
We now introduce QUOTA for discrete-action problems. We have $N$ quantile estimations $\{q_i\}_{i=1,\dots,N}$ for quantile levels $\{\hat{\tau}_i\}_{i=1,\dots,N}$. We construct $K$ options $\{\omega_j\}_{j=1,\dots,K}$. For simplicity, in this paper all the options share the same initiation set $\mathcal{S}$ and the same termination function $\beta$, which is a constant function. We use $\pi_{\omega_j}$ to denote the intra-option policy of an option $\omega_j$. $\pi_{\omega_j}$ proposes actions based on the mean of the $j$-th window of quantiles, i.e., the quantiles with indices $(j-1)M + 1, \dots, jM$, where $M \doteq N / K$ is the window size. (We assume $N$ is divisible by $K$ for simplicity.) We thus have $K$ windows in total. To be more specific, in order to compose the $j$-th option, we first define a state-action value function $Q_j$ by averaging a local window of quantiles:
$$Q_j(s, a) \doteq \frac{1}{M} \sum_{i=(j-1)M+1}^{jM} q_i(s, a).$$
We then define the intra-option policy of the $j$-th option to be an $\epsilon$-greedy policy with respect to $Q_j$. Here we compose an option with a window of quantiles, instead of a single quantile, to increase stability. Although $Q_j$ takes the form of a mean, it is not the mean of the full state-action value distribution. QUOTA learns the option-value function $Q(s, \omega)$ via intra-option Q-learning for option selection. The quantile estimations are learned via QR-DQN.
To summarize, at each time step $t$, we reselect a new option with probability $\beta$ and continue executing the previous option with probability $1 - \beta$. The reselection of the new option is done via a $\mu$-greedy policy derived from $Q(s, \omega)$, where $\mu$ is the random option selection probability. Once the current option $\omega_t$ is determined, we select an action according to the intra-option policy of $\omega_t$. The pseudo-code of QUOTA is provided in Supplementary Material.
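The quantile-windowing construction can be sketched as follows (array shapes and names are ours, not the paper's implementation):

```python
import numpy as np

def option_action_values(quantiles, n_options):
    """quantiles: array of shape (n_actions, n_quantiles) for one state, with
    quantile estimates sorted by quantile level. Returns Q_j(s, .) of shape
    (n_options, n_actions), where Q_j averages the j-th window of
    n_quantiles // n_options consecutive quantiles."""
    n_actions, n_quantiles = quantiles.shape
    window = n_quantiles // n_options
    return quantiles.reshape(n_actions, n_options, window).mean(axis=2).T

# Two actions, six quantiles, three options (window size M = 2):
quantiles = np.array([[-2.0, -1.0, 0.0, 1.0, 2.0, 3.0],   # action 0: high spread
                      [-0.5, -0.4, 0.1, 0.2, 0.3, 0.4]])  # action 1: low spread
q_opts = option_action_values(quantiles, n_options=3)
greedy = q_opts.argmax(axis=1)
# The pessimistic option (lowest window) prefers the low-spread action 1,
# while the optimistic option (highest window) prefers the high-spread action 0.
```

This makes the role of the high-level policy concrete: each row of `q_opts` induces a different greedy behavior, and $Q(s, \omega)$ decides which row to act on.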
QUOTA for Continuous Control
QR-DDPG
Quantile Regression DDPG (QR-DDPG, detailed in Supplementary Material) is a new algorithm obtained by modifying DDPG's critic. Instead of learning $Q$, we learn the distribution $Z$ directly, in the same manner as the discrete-action algorithm QR-DQN. Barth-Maron et al. (2018) also learned $Z$ instead of $Q$; however, they parameterized $Z$ with a categorical approximation and a mixture-of-Gaussians approximation. To the best of our knowledge, QR-DDPG is the first algorithm to parameterize $Z$ with quantile estimations in continuous-action problems. We use QR-DDPG and DDPG as baselines.
QUOTA
Given the distribution of the state-action value approximated by quantiles, the main question now is how to select an action according to a certain quantile. For a finite discrete action space, action selection is done according to an $\epsilon$-greedy policy with respect to $Q_j$, where we iterate through all possible actions in the whole action space. To get the action that maximizes a quantile of a distribution in continuous-action problems, we instead perform gradient ascent for different intra-option policies, in analogy to DDPG. We have $K$ options $\{\omega_j\}_{j=1,\dots,K}$. The intra-option policy for the option $\omega_j$ is a deterministic mapping $\pi_j: \mathcal{S} \rightarrow \mathcal{A}$. We train $\pi_j$ to approximate the greedy action $\arg\max_a Q_j(s, a)$ via gradient ascent. To be more specific, the gradient for $\pi_j$ (parameterized by $\phi_j$) at time step $t$ is
$$\nabla_{\phi_j} Q_j(s_t, \pi_j(s_t; \phi_j)).$$
To compute the update target for the critic, we also need one more actor $\pi_{K+1}$ maximizing the mean of the distribution (i.e., $Q$), as it is impossible to iterate through all the actions in a continuous action space. Note $\pi_{K+1}$ is the same as the actor of QR-DDPG. We augment QUOTA's option set with $\pi_{K+1}$, giving $K + 1$ options in total. We name $\pi_j$ the $j$-th quantile actor ($j = 1, \dots, K$). QUOTA for continuous-action problems is detailed in Supplementary Material.
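The following toy sketch (a hand-built concave "critic" of our own, not a learned network) illustrates the per-actor objective: actors maximizing different quantile windows settle on different actions.

```python
import numpy as np

# Toy critic: the i-th quantile of Z(s, a) peaks at a different action
# (action tau_hat[i]), so each quantile window "prefers" a different action.
tau_hat = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # N = 5 quantile levels

def window_q(action, window):
    """Mean of one window of quantile estimates, evaluated at `action`."""
    return float(np.mean([-(action - t) ** 2 for t in tau_hat[window]]))

def window_q_grad(action, window):
    """Analytic gradient of window_q w.r.t. the action."""
    return float(np.mean([-2.0 * (action - t) for t in tau_hat[window]]))

# One deterministic actor per window, trained by plain gradient ascent on its
# own window-averaged critic (the analogue of the quantile-actor update).
windows = [slice(0, 2), slice(3, 5)]   # a pessimistic and an optimistic window
actions = np.zeros(2)
for _ in range(500):
    for j, w in enumerate(windows):
        actions[j] += 0.1 * window_q_grad(actions[j], w)
# The two actors settle on different actions: near 0.2 and near 0.8.
```

In the real algorithm each `window_q` is a learned quantile critic and the gradient flows through the actor network's parameters, but the structure of the update is the same.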
Experiments
We designed experiments to study whether QUOTA improves exploration and whether it can scale to challenging tasks. All our implementations are publicly available at https://github.com/ShangtongZhang/DeepRL.
Does QUOTA improve exploration?
We benchmarked the discrete-action version of QUOTA in the two chains. We used a tabular state representation and three quantiles, as in the previous experiments. Both $\epsilon$ and $\mu$ were fixed at 0.1, and $\beta$ was fixed at 0, which means an option never terminated and lasted for a whole episode. The results are reported in Figures 1b and 1c. QUOTA consistently performed well in both chains.
Although QUOTA did not achieve the best performance in either chain, it consistently reached a performance level comparable with the best algorithm in both. Note that the best algorithm in each chain was designed with chain-specific domain knowledge, and the best algorithm in one chain performed poorly in the other. We do not expect QUOTA to achieve the best performance in both chains because it uses no chain-specific knowledge. The mean-based algorithms (Q-learning and QR) consistently performed poorly in both chains, so QUOTA achieved more efficient exploration than Q-learning and QR.
Can QUOTA scale to challenging tasks?
To verify the scalability of QUOTA, we evaluated it in both the Arcade Learning Environment (ALE) (Bellemare et al. 2013) and Roboschool (https://blog.openai.com/roboschool/), both of which are general-purpose RL benchmarks.
Arcade Learning Environment
We evaluated QUOTA in the same 49 Atari games from the ALE as Mnih et al. (2015). Our baseline algorithm is QR-DQN. We implemented QR-DQN with multiple workers (Mnih et al. 2016; Clemente, Castejón, and Chandra 2017) and an $n$-step return extension (Mnih et al. 2016; Hessel et al. 2017), resulting in reduced wall time and memory consumption compared with an experience-replay-based implementation. We implemented QUOTA in the same way. Details and further justification for this implementation can be found in Supplementary Material.
We used the same network architecture as Dabney et al. (2017) to process the input pixels. For QUOTA, we added an extra head to produce the option values $Q(s, \omega)$ after the second-to-last layer of the network. For both QUOTA and QR-DQN, we used 16 synchronous workers with a rollout length of 5, resulting in a batch size of 80. We trained each agent for 40M steps with a frameskip of 4, i.e., 160M frames in total. We used an RMSProp optimizer, and the discount factor was 0.99. The exploration rate $\epsilon$ for action selection was linearly decayed from 1.0 to 0.05 over the first 4M training steps and remained 0.05 afterwards. All the hyperparameter values above were based on an $n$-step Q-learning baseline with synchronous workers from Farquhar et al. (2018), and all our implementations inherited these values. We used 200 quantiles to approximate the distribution and set the Huber loss parameter $\kappa$ to 1, as used by Dabney et al. (2017). We used $K = 10$ options in QUOTA, and $\mu$ was linearly decayed from 1.0 to 0 over the 40M training steps. $\beta$ was fixed at 0.01, tuned in the game Freeway; the schedule of $\mu$ was also tuned in Freeway.

We measured the final performance at the end of training (i.e., the mean episode return of the last 1,000 episodes) and the cumulative rewards during training. The results are reported in Figure 2. In terms of final performance / cumulative rewards, QUOTA outperformed QR-DQN in 23 / 21 games and underperformed QR-DQN in 14 / 13 games. (Here we only count performance changes larger than 3%.) In particular, in the 10 most challenging games (according to the DQN scores reported by Mnih et al. (2015)), QUOTA achieved a 97.2% cumulative reward improvement on average.
In QUOTA, randomness comes from both $\epsilon$ and $\mu$, while in QR-DQN randomness comes only from $\epsilon$. So QUOTA does have more randomness (i.e., more exploration) than QR-DQN when $\epsilon$ is the same. For a fair comparison, we also implemented an alternative version of QR-DQN, referred to as QR-DQN-Alt, where all the hyperparameter values were the same except that $\epsilon$ was linearly decayed from 1.0 to 0 over the whole 40M training steps, like $\mu$. In this way, QR-DQN-Alt had an amount of exploration comparable with QUOTA's.
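The decay schedules described above can be sketched with a simple helper (our own illustration; the exact schedule code in the released implementation may differ):

```python
def linear_schedule(start, end, duration, step):
    """Linearly anneal from `start` to `end` over `duration` steps, then hold `end`."""
    frac = min(max(step / duration, 0.0), 1.0)
    return start + frac * (end - start)

# epsilon: 1.0 -> 0.05 over the first 4M steps, then held constant;
# mu:      1.0 -> 0.0 over all 40M steps.
eps_at_2m = linear_schedule(1.0, 0.05, 4_000_000, 2_000_000)    # midway: 0.525
mu_at_20m = linear_schedule(1.0, 0.0, 40_000_000, 20_000_000)   # midway: 0.5
```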
We also benchmarked QR-DQN-Alt in the 49 Atari games. In terms of final performance / cumulative rewards, QUOTA outperformed QR-DQN-Alt in 27 / 42 games and underperformed it in 14 / 5 games, indicating that naively increasing exploration by tuning $\epsilon$ does not guarantee a performance boost.
All the original learning curves and scores are reported in Supplementary Material.
Roboschool
Roboschool is a free port of Mujoco (http://www.mujoco.org/) by OpenAI, where a state contains the joint information of a robot and an action is a multi-dimensional continuous vector. We consider DDPG and QR-DDPG as our baselines, implemented with experience replay. For DDPG, we used the same hyperparameter values and exploration noise as Lillicrap et al. (2015), except for replacing a regularizer, which brought a performance boost in our Roboschool tasks. Our implementations of QUOTA and QR-DDPG inherited these hyperparameter values. For QR-DDPG, we used 20 output units after the second-to-last layer of the critic network to produce 20 quantile estimations of the action value distribution. For QUOTA, we used another two-hidden-layer network with 400 and 300 hidden units to compute $Q(s, \omega)$. (This network has the same architecture as the actor networks in DDPG and QR-DDPG.) We used 6 options in total, including the option corresponding to the actor maximizing the mean value. $\mu$ was linearly decayed from 1.0 to 0 over the whole 1M training steps, and $\beta$ was fixed at 1.0, which means we reselected an option at every time step; this choice was tuned in the game Ant. We trained each algorithm for 1M steps and performed 20 deterministic evaluation episodes every 10K training steps.

Visualization
To understand option selection during training, we plot the frequency of the greedy option according to $Q(s, \omega)$ at different stages of training in Figure 4. At different training stages, the high-level policy did propose different quantile options. The quantile options corresponding to the mean or the median did not dominate training; in fact, the mean-maximizing option was rarely proposed by the high-level policy. This indicates that the traditional mean-centered action selection (adopted in standard RL and prevailing distributional RL) can be improved by quantile-based exploration.
Related Work
There have been many methods for exploration in the model-free setting based on the idea of optimism in the face of uncertainty (Lai and Robbins 1985). Different approaches are used to achieve this optimism, e.g., count-based methods (Auer 2002; Kocsis and Szepesvári 2006; Bellemare et al. 2016; Ostrovski et al. 2017; Tang et al. 2017) and Bayesian methods (Kaufmann, Cappé, and Garivier 2012; Chen et al. 2017; O'Donoghue et al. 2017). Those methods make use of optimism about the parametric uncertainty (i.e., the uncertainty from estimation). In contrast, QUOTA makes use of both optimism and pessimism about the intrinsic uncertainty (i.e., the uncertainty from the MDP itself). Recently, Moerland, Broekens, and Jonker (2017) combined the two kinds of uncertainty via the Double Uncertain Value Network and demonstrated a performance boost in simple domains.
There is another line of related work in risk-sensitive RL. Classic risk-sensitive RL is also based on the intrinsic uncertainty, where a utility function (von Neumann and Morgenstern 1953) is used to distort a value distribution, resulting in risk-averse or risk-seeking policies (Howard and Matheson 1972; Marcus et al. 1997; Chow and Ghavamzadeh 2014). The expectation of the utility-function-distorted value distribution can be interpreted as a weighted sum of quantiles (Dhaene et al. 2012), meaning that quantile-based action selection implicitly adopts the idea of a utility function. In particular, Morimura et al. (2012) used a small quantile for control, resulting in a safe policy in a cliff grid world. Maddison et al. (2017) employed an exponential utility function over the parametric uncertainty, also resulting in a performance boost in a cliff grid world. Recently, Dabney et al. (2018) proposed various risk-sensitive policies by applying different utility functions to a learned quantile function. The high-level policy in QUOTA can be interpreted as a special utility function, and the optimistic and pessimistic exploration in QUOTA can be interpreted as risk-sensitive policies. However, the "utility function" in QUOTA is formalized in the option framework and is learnable; we do not need extra labor to pre-specify a utility function.
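The weighted-sum-of-quantiles view can be checked numerically. The sketch below (our own illustration) computes a risk-averse distorted expectation, CVaR at level 0.25, in two ways: directly as the mean of the worst outcomes, and as a weighted sum of empirical quantiles with all weight placed on quantile levels below 0.25.

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.sort(rng.normal(size=4000))       # sorted samples of a value distribution

# Risk-averse distorted expectation: average of the worst 25% of outcomes.
alpha = 0.25
k = int(alpha * len(z))
cvar = z[:k].mean()

# The same quantity as a weighted sum of quantiles: uniform weight on
# quantile levels below alpha, zero weight above.
taus = (np.arange(len(z)) + 0.5) / len(z)
weights = (taus < alpha) / (taus < alpha).sum()
cvar_from_quantiles = (weights * z).sum()
# The two computations agree, and both lie below the undistorted mean.
```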
Moreover, Tang and Agrawal (2018) combined Bayesian parameter updates with distributional RL for efficient exploration; however, improvements were demonstrated only in simple domains.
Closing Remarks
QUOTA achieves an on-the-fly decision between optimistic and pessimistic exploration, resulting in improved performance in various domains. QUOTA provides a new dimension for exploration: in this optimism-pessimism dimension, an agent may be able to find an effective exploration strategy more quickly than in the original action space. QUOTA thus provides option-level exploration.
At first glance, QUOTA introduces three extra hyperparameters: the number of options $K$, the random option probability $\mu$, and the termination probability $\beta$. For $K$, we simply used 10 and 5 options for Atari games and Roboschool tasks respectively. For $\mu$, we used a linear schedule decaying from 1.0 to 0 over the whole training period, which is a natural choice. We do expect performance improvements if $K$ and $\mu$ are further tuned. For $\beta$, we tuned it in only two environments and used the same value for all the other environments, so little extra labor was involved. Furthermore, $\beta$ can also be learned directly in end-to-end training via the termination gradient theorem (Bacon, Harb, and Precup 2017). We leave this for future work.
Bootstrapped-DQN (BDQN, Osband et al. 2016) approximates the distribution of the expected return via a set of heads. At the beginning of each episode, BDQN uniformly samples one head and commits to that head for the whole episode. This uniform sampling and episode-long commitment is crucial to the deep exploration of BDQN and inspired our choices of $\mu$ and $\beta$. However, this special configuration only improved performance in certain tasks. Each head in BDQN is an estimation of $Q^*$, and all heads are expected to converge to the optimal state-action value at the end of training; as a result, a simple uniform selection over the heads in BDQN does not hurt performance. In QUOTA, however, each head (i.e., quantile option) is an estimation of one quantile of $Z$, and not all quantiles are useful for control, so a learned selection among quantile options is necessary. One direction for future work is to combine QUOTA with BDQN or other parametric-uncertainty-based algorithms (e.g., count-based methods) by applying them to each quantile estimation.
References

 [Auer2002] Auer, P. 2002. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research.
 [Bacon, Harb, and Precup2017] Bacon, P.-L.; Harb, J.; and Precup, D. 2017. The option-critic architecture. In Proceedings of the 31st AAAI Conference on Artificial Intelligence.
 [Barth-Maron et al.2018] Barth-Maron, G.; Hoffman, M. W.; Budden, D.; Dabney, W.; Horgan, D.; Muldal, A.; Heess, N.; and Lillicrap, T. 2018. Distributed distributional deterministic policy gradients. arXiv preprint arXiv:1804.08617.
 [Bellemare et al.2013] Bellemare, M. G.; Naddaf, Y.; Veness, J.; and Bowling, M. 2013. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research.
 [Bellemare et al.2016] Bellemare, M.; Srinivasan, S.; Ostrovski, G.; Schaul, T.; Saxton, D.; and Munos, R. 2016. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems.
 [Bellemare, Dabney, and Munos2017] Bellemare, M. G.; Dabney, W.; and Munos, R. 2017. A distributional perspective on reinforcement learning. arXiv preprint arXiv:1707.06887.
 [Bellman2013] Bellman, R. 2013. Dynamic programming. Courier Corporation.
 [Chen et al.2017] Chen, R. Y.; Sidor, S.; Abbeel, P.; and Schulman, J. 2017. Ucb exploration via qensembles. arXiv preprint arXiv:1706.01502.
 [Chow and Ghavamzadeh2014] Chow, Y., and Ghavamzadeh, M. 2014. Algorithms for CVaR optimization in MDPs. In Advances in Neural Information Processing Systems.
 [Clemente, Castejón, and Chandra2017] Clemente, A. V.; Castejón, H. N.; and Chandra, A. 2017. Efficient parallel methods for deep reinforcement learning. arXiv preprint arXiv:1705.04862.
 [Dabney et al.2017] Dabney, W.; Rowland, M.; Bellemare, M. G.; and Munos, R. 2017. Distributional reinforcement learning with quantile regression. arXiv preprint arXiv:1710.10044.
 [Dabney et al.2018] Dabney, W.; Ostrovski, G.; Silver, D.; and Munos, R. 2018. Implicit quantile networks for distributional reinforcement learning. arXiv preprint arXiv:1806.06923.
 [Dhaene et al.2012] Dhaene, J.; Kukush, A.; Linders, D.; and Tang, Q. 2012. Remarks on quantiles and distortion risk measures. European Actuarial Journal.
 [Farquhar et al.2018] Farquhar, G.; Rocktäschel, T.; Igl, M.; and Whiteson, S. 2018. Treeqn and atreec: Differentiable treestructured models for deep reinforcement learning. arXiv preprint arXiv:1710.11417.
 [Hessel et al.2017] Hessel, M.; Modayil, J.; Van Hasselt, H.; Schaul, T.; Ostrovski, G.; Dabney, W.; Horgan, D.; Piot, B.; Azar, M.; and Silver, D. 2017. Rainbow: Combining improvements in deep reinforcement learning. arXiv preprint arXiv:1710.02298.
 [Howard and Matheson1972] Howard, R. A., and Matheson, J. E. 1972. Risk-sensitive Markov decision processes. Management Science.
 [Huber and others1964] Huber, P. J., et al. 1964. Robust estimation of a location parameter. The Annals of Mathematical Statistics.
 [Jaquette1973] Jaquette, S. C. 1973. Markov decision processes with a new optimality criterion: Discrete time. The Annals of Statistics.
 [Kaufmann, Cappé, and Garivier2012] Kaufmann, E.; Cappé, O.; and Garivier, A. 2012. On Bayesian upper confidence bounds for bandit problems. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics.
 [Kocsis and Szepesvári2006] Kocsis, L., and Szepesvári, C. 2006. Bandit based Monte-Carlo planning. In European Conference on Machine Learning.
 [Lai and Robbins1985] Lai, T. L., and Robbins, H. 1985. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics.
 [Lillicrap et al.2015] Lillicrap, T. P.; Hunt, J. J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; and Wierstra, D. 2015. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
 [Lin1992] Lin, L.-J. 1992. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning.
 [Maddison et al.2017] Maddison, C. J.; Lawson, D.; Tucker, G.; Heess, N.; Doucet, A.; Mnih, A.; and Teh, Y. W. 2017. Particle value functions. arXiv preprint arXiv:1703.05820.
 [Marcus et al.1997] Marcus, S. I.; Fernández-Gaucherand, E.; Hernández-Hernández, D.; Coraluppi, S.; and Fard, P. 1997. Risk-sensitive Markov decision processes. In Systems and Control in the Twenty-first Century. Springer.
 [Mnih et al.2015] Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; et al. 2015. Human-level control through deep reinforcement learning. Nature.
 [Mnih et al.2016] Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning.
 [Moerland, Broekens, and Jonker2017] Moerland, T. M.; Broekens, J.; and Jonker, C. M. 2017. Efficient exploration with double uncertain value networks. arXiv preprint arXiv:1711.10789.
 [Morgenstern and Von Neumann1953] Morgenstern, O., and Von Neumann, J. 1953. Theory of games and economic behavior. Princeton university press.
 [Morimura et al.2012] Morimura, T.; Sugiyama, M.; Kashima, H.; Hachiya, H.; and Tanaka, T. 2012. Parametric return density estimation for reinforcement learning. arXiv preprint arXiv:1203.3497.
 [O’Donoghue et al.2017] O’Donoghue, B.; Osband, I.; Munos, R.; and Mnih, V. 2017. The uncertainty bellman equation and exploration. arXiv preprint arXiv:1709.05380.
 [Osband et al.2016] Osband, I.; Blundell, C.; Pritzel, A.; and Van Roy, B. 2016. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems.
 [Ostrovski et al.2017] Ostrovski, G.; Bellemare, M. G.; Oord, A. v. d.; and Munos, R. 2017. Count-based exploration with neural density models. arXiv preprint arXiv:1703.01310.
 [Qu, Mannor, and Xu2018] Qu, C.; Mannor, S.; and Xu, H. 2018. Nonlinear distributional gradient temporal-difference learning. arXiv preprint arXiv:1805.07732.
 [Silver et al.2014] Silver, D.; Lever, G.; Heess, N.; Degris, T.; Wierstra, D.; and Riedmiller, M. 2014. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning.
 [Sutton and Barto2018] Sutton, R. S., and Barto, A. G. 2018. Reinforcement learning: An introduction (2nd Edition). MIT press.
 [Sutton, Precup, and Singh1999] Sutton, R. S.; Precup, D.; and Singh, S. 1999. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence.
 [Sutton1988] Sutton, R. S. 1988. Learning to predict by the methods of temporal differences. Machine Learning.
 [Szepesvári2010] Szepesvári, C. 2010. Algorithms for reinforcement learning. Morgan and Claypool.
 [Tang and Agrawal2018] Tang, Y., and Agrawal, S. 2018. Exploration by distributional reinforcement learning. arXiv preprint arXiv:1805.01907.
 [Tang et al.2017] Tang, H.; Houthooft, R.; Foote, D.; Stooke, A.; Chen, O. X.; Duan, Y.; Schulman, J.; DeTurck, F.; and Abbeel, P. 2017. #Exploration: A study of count-based exploration for deep reinforcement learning. In Advances in Neural Information Processing Systems.
 [Watkins and Dayan1992] Watkins, C. J., and Dayan, P. 1992. Q-learning. Machine Learning.
Supplementary Material
QUOTA for Discrete-Action Problems
The algorithm is presented in an online form for simplicity. Implementation details are illustrated in the next section.
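To make the online form concrete, the two-level action selection in QUOTA can be sketched as follows. This is an illustrative sketch with hypothetical names, not the paper's training code: the high-level policy is shown as epsilon-greedy over quantile options, and each option selects the action greedily w.r.t. the mean of its window of quantiles.

```python
import numpy as np

def select_action(quantiles, option_values, n_options, epsilon=0.05):
    """Quantile-option action selection (illustrative sketch).

    quantiles:     array of shape (n_actions, n_quantiles), the estimated
                   quantiles of each action's return distribution.
    option_values: array of shape (n_options,), the high-level values used
                   to choose among quantile options.
    Returns (action, option).
    """
    n_actions, n_quantiles = quantiles.shape
    window = n_quantiles // n_options  # quantiles covered by each option

    # High-level policy: epsilon-greedy over the quantile options.
    if np.random.rand() < epsilon:
        option = np.random.randint(n_options)
    else:
        option = int(np.argmax(option_values))

    # Low-level policy: greedy w.r.t. the mean of the chosen quantile window,
    # so a high option is optimistic and a low option is pessimistic.
    lo, hi = option * window, (option + 1) * window
    q_option = quantiles[:, lo:hi].mean(axis=1)
    return int(np.argmax(q_option)), option
```

With two options over four quantiles, option 1 ranks actions by the mean of their two highest quantiles, i.e., an optimistic estimate.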
Experimental Results in Atari Games
DQN used experience replay to stabilize the training of the convolutional neural network function approximator. However, a replay buffer storing raw pixels consumes a large amount of memory, and the training of DQN is slow. Mnih et al. (2016) proposed asynchronous methods to speed up training, where experience replay was replaced by multiple asynchronous workers. Each worker has its own environment instance and its own copy of the learning network. The workers interact with their environments and compute gradients of the learning network in parallel, asynchronously, and only the gradients are collected by a master worker. This master worker updates the learning network with the collected gradients and broadcasts the updated network to each worker. However, asynchronous methods cannot take full advantage of a modern GPU (Mnih et al. 2016). To address this issue, Clemente, Castejón, and Chandra (2017) proposed batched training with multiple synchronous workers. Besides multiple workers, Mnih et al. (2016) also used n-step Q-learning. Recently, the n-step extension of Q-learning was shown to be a crucial component of the Rainbow architecture (Hessel et al. 2017), which maintains state-of-the-art performance in Atari games. n-step Q-learning with multiple workers has been widely used as a baseline algorithm (Oh, Singh, and Lee 2017; Farquhar et al. 2018). In our experiments, we implemented both QUOTA for discrete-action control and QR-DQN with multiple synchronous workers and an n-step return extension.
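The n-step target used by these baselines can be sketched generically (this is the standard n-step return, not the exact implementation): given the n rewards of a segment and a bootstrap value from the target network at the segment's end, the return is folded backwards.

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """n-step return G = sum_i gamma^i r_i + gamma^n * V_bootstrap.

    rewards:         the n rewards along the segment, r_0 .. r_{n-1}.
    bootstrap_value: e.g. max_a Q_target(s_n, a), used to bootstrap
                     at the end of the segment.
    """
    g = bootstrap_value
    for r in reversed(rewards):  # fold backwards: G_t = r_t + gamma * G_{t+1}
        g = r + gamma * g
    return g
```

Each synchronous worker contributes one such segment per update, and the gradients are applied in a single batch.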
Game  QUOTA  QR-DQN  QR-DQN-Alt 

Alien  1821.91  1760.00  1566.83 
Amidar  571.46  567.97  257.50 
Assault  3511.17  3308.78  3948.54 
Asterix  6112.12  6176.05  5135.85 
Asteroids  1497.62  1305.30  1173.98 
Atlantis  965193.00  978385.30  516660.60 
BankHeist  735.27  644.72  866.66 
BattleZone  25321.67  22725.00  23858.00 
BeamRider  5522.60  5007.83  6411.33 
Bowling  34.08  27.64  31.98 
Boxing  96.16  95.02  91.12 
Breakout  316.74  322.17  334.45 
Centipede  3537.92  4330.31  3492.00 
ChopperCommand  3793.03  3421.10  2623.57 
CrazyClimber  113051.70  107371.67  109117.37 
DemonAttack  61005.12  80026.68  23286.76 
DoubleDunk  21.56  21.66  21.97 
Enduro  1162.35  1220.06  641.76 
FishingDerby  59.09  9.60  71.70 
Freeway  31.04  30.60  29.78 
Frostbite  2208.58  2046.36  503.82 
Gopher  6824.34  9443.89  7352.45 
Gravitar  457.65  414.33  251.93 
IceHockey  9.94  9.87  12.60 
Jamesbond  495.58  601.75  523.37 
Kangaroo  2555.80  2364.60  1730.33 
Krull  7747.51  7725.47  8071.42 
KungFuMaster  20992.57  17807.43  21586.23 
MontezumaRevenge  0.00  0.00  0.00 
MsPacman  2423.57  2273.35  2100.19 
NameThisGame  7327.55  7748.26  9509.96 
Pitfall  30.76  32.99  25.22 
Pong  20.03  19.66  20.59 
PrivateEye  114.16  419.35  375.61 
Qbert  11790.29  10875.34  5544.53 
Riverraid  10169.87  9710.47  8700.98 
RoadRunner  27872.27  27640.73  35419.80 
Robotank  37.68  45.11  35.44 
Seaquest  2628.60  1690.57  4108.77 
SpaceInvaders  1553.88  1387.61  1137.42 
StarGunner  52920.00  49286.60  45910.30 
Tennis  23.70  22.74  23.17 
TimePilot  5125.13  6417.70  5275.07 
Tutankham  195.44  173.26  198.89 
UpNDown  24912.70  30443.61  29886.25 
Venture  26.53  5.30  3.73 
VideoPinball  44919.13  123425.46  78542.20 
WizardOfWor  4582.07  5219.00  3716.80 
Zaxxon  8252.83  6855.17  6144.97 
Quantile Regression DDPG
The algorithm is presented in an online learning form for simplicity. In our experiments, however, we used experience replay and a target network, the same as in DDPG.
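For reference, the critic in quantile regression DDPG is trained with the quantile Huber loss of Dabney et al. (2017), fitting the i-th quantile estimate at the midpoint tau_i = (2i+1)/(2N). A minimal numpy sketch of that loss (illustrative, not the training code):

```python
import numpy as np

def quantile_huber_loss(theta, targets, kappa=1.0):
    """Quantile regression Huber loss (Huber 1964; Dabney et al. 2017).

    theta:   shape (N,), the N quantile estimates being trained.
    targets: shape (M,), samples of the target return distribution.
    """
    n = theta.shape[0]
    tau = (2 * np.arange(n) + 1) / (2.0 * n)   # quantile midpoints tau_i
    u = targets[None, :] - theta[:, None]      # pairwise TD errors, shape (N, M)
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # Asymmetric weights: underestimates weighted by tau, overestimates by 1 - tau.
    weight = np.abs(tau[:, None] - (u < 0.0).astype(float))
    return (weight * huber).mean()
```

Minimizing this loss drives each theta_i toward the tau_i-quantile of the target distribution.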
QUOTA for Continuous-Action Problems
The algorithm is presented in an online learning form for simplicity. In our experiments, however, we used experience replay and a target network, the same as in DDPG.
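The target network is maintained exactly as in DDPG, with Polyak averaging toward the online network. A minimal sketch, assuming parameters are stored as a flat list of arrays and a mixing rate tau (a hyperparameter):

```python
def soft_update(target_params, online_params, tau=0.001):
    """DDPG-style soft target update:
    theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    return [tau * w + (1.0 - tau) * w_t
            for w, w_t in zip(online_params, target_params)]
```

A small tau makes the target network trail the online network slowly, which stabilizes the bootstrapped quantile targets.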
Experimental Results in Roboschool
References
 [Coulom2006] Coulom, R. 2006. Efficient selectivity and backup operators in Monte-Carlo tree search. In Proceedings of the International Conference on Computers and Games.
 [Farquhar et al.2018] Farquhar, G.; Rocktäschel, T.; Igl, M.; and Whiteson, S. 2018. TreeQN and ATreeC: Differentiable tree-structured models for deep reinforcement learning. arXiv preprint arXiv:1710.11417.
 [Hessel et al.2017] Hessel, M.; Modayil, J.; Van Hasselt, H.; Schaul, T.; Ostrovski, G.; Dabney, W.; Horgan, D.; Piot, B.; Azar, M.; and Silver, D. 2017. Rainbow: Combining improvements in deep reinforcement learning. arXiv preprint arXiv:1710.02298.
 [Mnih et al.2016] Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning.
 [Oh, Singh, and Lee2017] Oh, J.; Singh, S.; and Lee, H. 2017. Value prediction network. In Advances in Neural Information Processing Systems.