QUOTA: The Quantile Option Architecture for Reinforcement Learning

11/05/2018 · Shangtong Zhang et al. · Huawei Technologies Co., Ltd., Auburn University, University of Alberta

In this paper, we propose the Quantile Option Architecture (QUOTA) for exploration, based on recent advances in distributional reinforcement learning (RL). In QUOTA, decision making is based on quantiles of a value distribution, not only the mean. QUOTA provides a new dimension for exploration by making use of both optimism and pessimism of a value distribution. We demonstrate the performance advantage of QUOTA in both challenging video games and physical robot simulators.


Introduction

The mean of the return has long been at the center of reinforcement learning (RL), and there are many methods for learning this mean quantity (sutton1988learning; watkins1992q; mnih2015human). Thanks to advances in distributional RL (jaquette1973markov; bellemare2017distributional), we are able to learn the full distribution of a state-action value, not only its mean. In particular, dabney2017distributional used a set of quantiles to approximate this value distribution. However, decision making in prevailing distributional RL methods is still based on the mean (bellemare2017distributional; dabney2017distributional; barth2018distributed; qu2018nonlinear). The main motivation of this paper is to answer two questions: how to make decisions based on the full distribution, and whether an agent can benefit from doing so through better exploration. In this paper, we propose the Quantile Option Architecture (QUOTA) for control. In QUOTA, decision making is based on all quantiles of a state-action value distribution, not only the mean.

In traditional RL and recent distributional RL, an agent selects an action greedily with respect to the mean of the action values. In QUOTA, we propose to select an action greedily w.r.t. a certain quantile of the action-value distribution. A high quantile represents an optimistic estimation of the action value, and action selection based on a high quantile indicates an optimistic exploration strategy. A low quantile represents a pessimistic estimation of the action value, and action selection based on a low quantile indicates a pessimistic exploration strategy. (The two exploration strategies are related to risk-sensitive RL, which will be discussed later.) We first compare different exploration strategies in two Markov chains, where naive mean-based RL algorithms fail to explore efficiently because they cannot exploit the distribution information during training, which is crucial for efficient exploration. In the first chain, faster exploration comes from a high quantile (i.e., an optimistic exploration strategy). In the second chain, however, exploration benefits from a low quantile (i.e., a pessimistic exploration strategy). Different tasks need different exploration strategies. Even within one task, an agent may still need different exploration strategies at different stages of learning. To address this issue, we use the option framework (sutton1999between). We learn a high-level policy to decide which quantile to use for action selection. In this way, different quantiles function as different options, and we name this special option the quantile option. QUOTA adaptively chooses between pessimistic and optimistic exploration strategies, resulting in improved exploration consistently across different tasks.

We make two main contributions in this paper:

  • First, we propose QUOTA for control in discrete-action problems, combining distributional RL with options. Action selection in QUOTA is based on certain quantiles instead of the mean of the state-action value distribution, and QUOTA learns a high-level policy to decide which quantile to use for decision making.

  • Second, we extend QUOTA to continuous-action problems. In a continuous-action space, applying quantile-based action selection is not straightforward. To address this issue, we introduce quantile actors. Each quantile actor is responsible for proposing an action that maximizes one specific quantile of a state-action value distribution.

We show empirically that QUOTA improves the exploration of RL agents, resulting in a performance boost in both challenging video games (Atari games) and physical robot simulators (Roboschool tasks).

In the rest of this paper, we first present some preliminaries of RL. We then show two Markov chains where naive mean-based RL algorithms fail to explore efficiently. Then we present QUOTA for both discrete- and continuous-action problems, followed by empirical results. Finally, we give an overview of related work and closing remarks.

Preliminaries

We consider a Markov Decision Process (MDP) of a state space $\mathcal{S}$, an action space $\mathcal{A}$, a reward "function" $R: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, which we treat as a random variable in this paper, a transition kernel $p(s' \mid s, a)$, and a discount ratio $\gamma \in [0, 1)$. We use $\pi: \mathcal{S} \times \mathcal{A} \to [0, 1]$ to denote a stochastic policy. We use $Z^\pi(s, a)$ to denote the random variable of the sum of the discounted rewards in the future, following the policy $\pi$ and starting from the state $s$ and the action $a$. We have $Z^\pi(s, a) \doteq \sum_{t=0}^{\infty} \gamma^t R(S_t, A_t)$, where $S_0 = s, A_0 = a$ and $S_{t+1} \sim p(\cdot \mid S_t, A_t), A_{t+1} \sim \pi(\cdot \mid S_{t+1})$. The expectation of the random variable $Z^\pi(s, a)$ is $Q^\pi(s, a) \doteq \mathbb{E}[Z^\pi(s, a)]$, which is usually called the state-action value function. We have the Bellman equation

$$Q^\pi(s, a) = \mathbb{E}[R(s, a)] + \gamma \mathbb{E}_{s' \sim p(\cdot \mid s, a),\, a' \sim \pi(\cdot \mid s')}\big[ Q^\pi(s', a') \big].$$

In an RL problem, we are usually interested in finding an optimal policy $\pi^*$ such that $Q^{\pi^*}(s, a) \geq Q^\pi(s, a)$ for all $(\pi, s, a)$. All the possible optimal policies share the same (optimal) state-action value function $Q^*$, which is the unique fixed point of the Bellman optimality operator $\mathcal{T}$ (bellman2013dynamic),

$$(\mathcal{T}Q)(s, a) \doteq \mathbb{E}[R(s, a)] + \gamma \mathbb{E}_{s' \sim p(\cdot \mid s, a)}\big[ \max_{a'} Q(s', a') \big].$$

With a tabular representation, we can use Q-learning (watkins1992q) to estimate $Q^*$. The incremental update per step is

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \big( r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \big), \qquad (1)$$

where $\alpha$ is a step size and the quadruple $(s_t, a_t, r_{t+1}, s_{t+1})$ is a transition. There is much research extending Q-learning to linear function approximation (sutton2018reinforcement; szepesvari2010algorithms). In this paper, we focus on Q-learning with neural networks.
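Before moving to neural networks, the following is a minimal Python sketch of the tabular update (1); the function name, environment interface, and hyper-parameter values here are ours for illustration, not the paper's.

import numpy as np

def q_learning_step(Q, s, a, r, s_next, done, alpha=0.1, gamma=0.99):
    # Q is a (num_states, num_actions) array of action-value estimates.
    # Bootstrap from the greedy action in the next state, unless the episode ended.
    target = r if done else r + gamma * np.max(Q[s_next])
    # Move Q(s, a) towards the TD target by a step size alpha, as in Eq. (1).
    Q[s, a] += alpha * (target - Q[s, a])
    return Q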

mnih2015human proposed the Deep-Q-Network (DQN), where a deep convolutional neural network $Q(\cdot, \cdot; \theta)$ is used to parameterize $Q^*$. At every time step, DQN performs stochastic gradient descent to minimize

$$\frac{1}{2}\big( r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a'; \theta^-) - Q(s_t, a_t; \theta) \big)^2,$$

where the quadruple $(s_t, a_t, r_{t+1}, s_{t+1})$ is a transition sampled from the replay buffer (lin1992self) and $\theta^-$ is the target network (mnih2015human), which is a copy of $\theta$ and is synchronized with $\theta$ periodically. To speed up training and reduce the required memory of DQN, mnih2016asynchronous further proposed $n$-step asynchronous Q-learning with multiple workers (detailed in Supplementary Material), where the loss function at time step $t$ is

$$\frac{1}{2}\Big( \sum_{i=1}^{n} \gamma^{i-1} r_{t+i} + \gamma^{n} \max_{a'} Q(s_{t+n}, a'; \theta^-) - Q(s_t, a_t; \theta) \Big)^2.$$

Distributional RL

Analogous to the Bellman equation of $Q^\pi$, bellemare2017distributional proposed the distributional Bellman equation for the state-action value distribution $Z^\pi$ given a policy $\pi$ in the policy evaluation setting,

$$Z^\pi(s, a) \stackrel{D}{=} R(s, a) + \gamma Z^\pi(S', A'), \quad S' \sim p(\cdot \mid s, a),\ A' \sim \pi(\cdot \mid S'),$$

where $X \stackrel{D}{=} Y$ means the two random variables $X$ and $Y$ are distributed according to the same law. bellemare2017distributional also proposed a distributional Bellman optimality operator for control,

$$(\mathcal{T}Z)(s, a) \stackrel{D}{=} R(s, a) + \gamma Z\big(S', \arg\max_{a'} \mathbb{E}[Z(S', a')]\big), \quad S' \sim p(\cdot \mid s, a).$$

When making a decision, action selection is still based on the expected state-action value (i.e., $\arg\max_a \mathbb{E}[Z(s, a)]$). Given the optimality operator, we now need a representation for $Z$. dabney2017distributional proposed to approximate $Z$ by a set of quantiles. The distribution of $Z$ is represented by a uniform mix of $N$ supporting quantiles:

$$Z_\theta(s, a) \doteq \frac{1}{N} \sum_{i=1}^{N} \delta_{q_i(s, a; \theta)},$$

where $\delta_x$ denotes a Dirac at $x \in \mathbb{R}$, and each $q_i$ is an estimation of the quantile corresponding to the quantile level (a.k.a. quantile index) $\hat{\tau}_i \doteq \frac{\tau_{i-1} + \tau_i}{2}$, with $\tau_i \doteq \frac{i}{N}$ for $i = 0, \dots, N$. The state-action value $Q(s, a)$ is then approximated by $\frac{1}{N}\sum_{i=1}^{N} q_i(s, a)$. Such approximation of a distribution is referred to as quantile approximation. Those quantile estimations (i.e., $\{q_i\}$) are trained via the Huber quantile regression loss (huber1964robust). To be more specific, at time step $t$ the loss is

$$\frac{1}{N} \sum_{i=1}^{N} \sum_{i'=1}^{N} \rho_{\hat{\tau}_i}^{\kappa}\big( y_{t, i'} - q_i(s_t, a_t) \big),$$

where $y_{t, i'} \doteq r_{t+1} + \gamma q_{i'}\big(s_{t+1}, \arg\max_{a'} \tfrac{1}{N}\sum_{j} q_j(s_{t+1}, a')\big)$ and $\rho_{\hat{\tau}_i}^{\kappa}(u) \doteq |\hat{\tau}_i - \mathbb{I}\{u < 0\}|\, \mathcal{L}_\kappa(u)$, where $\mathbb{I}$ is the indicator function and $\mathcal{L}_\kappa$ is the Huber loss,

$$\mathcal{L}_\kappa(u) \doteq \begin{cases} \frac{1}{2} u^2 & \text{if } |u| \leq \kappa \\ \kappa \big( |u| - \frac{1}{2}\kappa \big) & \text{otherwise.} \end{cases}$$

The resulting algorithm is Quantile Regression DQN (QR-DQN). QR-DQN also uses experience replay and a target network, similar to DQN. dabney2017distributional showed that quantile approximation has better empirical performance than the previous categorical approximation (bellemare2017distributional). More recently, dabney2018implicit approximated the distribution by learning a quantile function directly with the Implicit Quantile Network, resulting in a further performance boost. Distributional RL has enjoyed great success in various domains (bellemare2017distributional; dabney2017distributional; hessel2017rainbow; barth2018distributed; dabney2018implicit).
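As an illustration, here is a minimal PyTorch sketch of the Huber quantile regression loss for a single transition; the tensor shapes, names, and normalization constant are ours, not necessarily those of the paper's implementation.

import torch

def quantile_huber_loss(quantiles, targets, tau_hat, kappa=1.0):
    # quantiles: (N,) current estimates q_i(s_t, a_t)
    # targets:   (N,) target samples y_{t, i'} = r_{t+1} + gamma * q_{i'}(s_{t+1}, a*)
    # tau_hat:   (N,) quantile levels \hat{tau}_i
    u = targets.detach().unsqueeze(0) - quantiles.unsqueeze(1)   # pairwise errors, (N, N)
    huber = torch.where(u.abs() <= kappa,
                        0.5 * u.pow(2),
                        kappa * (u.abs() - 0.5 * kappa))         # Huber loss L_kappa(u)
    weight = (tau_hat.unsqueeze(1) - (u < 0).float()).abs()      # |tau_hat_i - 1{u < 0}|
    # Averaging over both i and i' differs from the paper's normalization only by a constant.
    return (weight * huber).mean()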

Deterministic Policy

silver2014deterministic used a deterministic policy $\mu: \mathcal{S} \to \mathcal{A}$ for continuous control problems with linear function approximation, and lillicrap2015continuous extended it with deep networks, resulting in the Deep Deterministic Policy Gradient (DDPG) algorithm. DDPG is an off-policy algorithm. It has an actor $\mu$ and a critic $Q$, parameterized by $\phi$ and $\theta$ respectively. At each time step, $\theta$ is updated to minimize

$$\frac{1}{2}\big( r_{t+1} + \gamma Q(s_{t+1}, \mu(s_{t+1}; \phi^-); \theta^-) - Q(s_t, a_t; \theta) \big)^2.$$

The policy gradient for $\phi$ in DDPG is

$$\nabla_\phi \mu(s_t; \phi)\, \nabla_a Q(s_t, a; \theta)\big|_{a = \mu(s_t; \phi)}.$$

This gradient update comes from applying the chain rule for gradient ascent w.r.t. $\phi$, where $Q(s, \mu(s; \phi))$ is interpreted as an approximation of $\max_a Q(s, a)$. silver2014deterministic provided policy improvement guarantees for this gradient.
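For concreteness, the following is a minimal PyTorch sketch of one DDPG update, assuming the actor and critic are torch modules taking (state) and (state, action) respectively; the batch format and the omission of target-network soft updates are our simplifications.

import torch

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99):
    s, a, r, s_next, done = batch                  # tensors sampled from a replay buffer
    # Critic: one-step TD target bootstrapped through the target actor and target critic.
    with torch.no_grad():
        y = r + gamma * (1 - done) * target_critic(s_next, target_actor(s_next))
    critic_loss = 0.5 * (y - critic(s, a)).pow(2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's value of the actor's own action (the chain-rule gradient above).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()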

Option

An option (sutton1999between) is a temporal abstraction of action. Each option $\omega \in \Omega$ is a triple $(\mathcal{I}_\omega, \pi_\omega, \beta_\omega)$, where $\Omega$ is the option set. We use $\mathcal{I}_\omega$ to denote the initiation set for the option $\omega$, describing where the option can be initiated. We use $\pi_\omega$ to denote the intra-option policy for $\omega$. Once the agent has committed to the option $\omega$, it chooses an action based on $\pi_\omega$. We use $\beta_\omega: \mathcal{S} \to [0, 1]$ to denote the termination function for the option $\omega$. At each time step $t$, the option $\omega_{t-1}$ terminates with probability $\beta_{\omega_{t-1}}(s_t)$. In this paper, we consider the call-and-return option execution model (sutton1999between), where an agent commits to an option $\omega$ until $\omega$ terminates according to $\beta_\omega$.

The option-value function $Q_\Omega(s, \omega)$ describes the utility of an option $\omega$ at a state $s$, and we can learn this function via Intra-option Q-learning (sutton1999between). The update is

$$Q_\Omega(s_t, \omega_t) \leftarrow Q_\Omega(s_t, \omega_t) + \alpha \Big( r_{t+1} + \gamma \big( (1 - \beta_{\omega_t}(s_{t+1})) Q_\Omega(s_{t+1}, \omega_t) + \beta_{\omega_t}(s_{t+1}) \max_{\omega'} Q_\Omega(s_{t+1}, \omega') \big) - Q_\Omega(s_t, \omega_t) \Big),$$

where $\alpha$ is a step size and $(s_t, \omega_t, r_{t+1}, s_{t+1})$ is a transition in the cycle during which the agent is committed to the option $\omega_t$.
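A tabular sketch of this update in Python (names and hyper-parameter values are ours for illustration):

import numpy as np

def intra_option_q_update(Q_omega, s, w, r, s_next, beta, alpha=0.1, gamma=0.99):
    # Q_omega: (num_states, num_options) option-value table; w indexes the current option.
    # With probability (1 - beta) the option continues; with probability beta it terminates
    # and the agent re-selects greedily among all options.
    continuation = (1 - beta) * Q_omega[s_next, w] + beta * np.max(Q_omega[s_next])
    target = r + gamma * continuation
    Q_omega[s, w] += alpha * (target - Q_omega[s, w])
    return Q_omega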

A Failure of Mean

Figure 1: (a) Two Markov chains illustrating the inefficiency of mean-based decision making. (b) and (c): the steps required to find the optimal policy for each algorithm versus the chain length, for Chain 1 and Chain 2 respectively. The required steps are averaged over 10 trials, and standard errors are plotted as error bars. For each trial, the number of steps is capped at a fixed maximum.

We now present two simple Markov chains (Figure 1) to illustrate that mean-based RL algorithms can fail to explore efficiently.

Chain 1 consists of a sequence of non-terminal states (the chain length is varied in our experiments) and two actions {LEFT, UP}. The agent starts at state 1 in each episode. The action UP ends the episode immediately with reward 0. The action LEFT leads to the next state, with a reward sampled from a zero-mean normal distribution. Once the agent reaches the terminal state G, the episode ends with a positive reward. There is no discounting. The optimal policy is always moving left.

We first consider tabular Q-learning with $\epsilon$-greedy exploration. To learn the optimal policy, the agent has to reach the G state first. Unfortunately, this is particularly difficult for Q-learning. The difficulty comes from two aspects. First, due to the $\epsilon$-greedy mechanism, the agent sometimes selects UP at random. The episode then ends immediately, and the agent has to wait for the next episode. Second, before the agent reaches the G state, the expected return of either LEFT or UP at any state is 0, so the agent cannot distinguish between the two actions under the mean criterion: the expected returns are the same. As a result, the agent cannot benefit from the value estimation, a mean, at all.

Suppose now the agent learns the distribution of the returns of LEFT and UP. Before reaching the state G, the learned action-value distribution of LEFT is a normal distribution with mean 0. A high quantile of this distribution is greater than 0, which is an optimistic estimation. If the agent behaves according to this optimistic estimation, it can quickly reach the state G and find the optimal policy.

Chain 2 has the same state space and action space as Chain 1. However, the reward for LEFT is now a small negative constant, except that reaching the G state gives a large positive reward. The reward for UP is sampled from a zero-mean normal distribution. There is no discounting. When the per-step penalty is small, the optimal policy is still always moving left. Before reaching G, the estimated expected return of LEFT at any non-terminal state is less than 0, which means a Q-learning agent would prefer UP. This preference is bad for this chain, as it ends the episode immediately, which prevents further exploration.
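A minimal sketch of the two chains as Python environments is given below; the exact reward magnitudes (terminal reward and per-step penalty) are illustrative placeholders, not the paper's values.

import numpy as np

class Chain:
    # variant=1 reproduces the structure of Chain 1, variant=2 that of Chain 2.
    def __init__(self, length=5, variant=1, terminal_reward=10.0, step_penalty=0.1):
        self.length, self.variant = length, variant
        self.terminal_reward, self.step_penalty = terminal_reward, step_penalty

    def reset(self):
        self.s = 0
        return self.s

    def step(self, action):  # action 0 = LEFT (towards G), 1 = UP (terminate)
        if action == 1:
            # UP ends the episode: reward 0 in Chain 1, zero-mean noise in Chain 2.
            r = 0.0 if self.variant == 1 else np.random.randn()
            return self.s, r, True
        self.s += 1
        if self.s == self.length:
            return self.s, self.terminal_reward, True        # reached the terminal state G
        # LEFT: zero-mean noisy reward in Chain 1, a small penalty in Chain 2.
        r = np.random.randn() if self.variant == 1 else -self.step_penalty
        return self.s, r, False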

We now present some experimental results of four algorithms in the two chains: Q-learning, quantile regression Q-learning (QR, the tabular version of QR-DQN), optimistic quantile regression Q-learning (O-QR), and pessimistic quantile regression Q-learning (P-QR). O-QR / P-QR is the same as QR except that the behavior policy is always derived from a high / low quantile, not the mean, of the state-action value distribution. We used $\epsilon$-greedy behavior policies for all the above algorithms. We measured the number of steps each algorithm needs to find the optimal policy. The agent is said to have found the optimal policy at time step $t$ if and only if the policy derived from the value estimation (for Q-learning) or from the mean of the quantile estimations (for the other algorithms) at time step $t$ is to move left at all non-terminal states.

All the algorithms were implemented with a tabular state representation. $\epsilon$ was fixed during training. For quantile regression, we used 3 quantiles. We varied the chain length and tracked the number of steps an algorithm needed to find the optimal policy. Figure 1b and Figure 1c show the results in Chain 1 and Chain 2 respectively. Figure 1b shows that the best algorithm for Chain 1 was O-QR, where a high quantile is used to derive the behavior policy, indicating an optimistic exploration strategy; P-QR performed poorly as the chain length increased. Figure 1c shows that the best algorithm for Chain 2 was P-QR, where a low quantile is used to derive the behavior policy, indicating a pessimistic exploration strategy; O-QR performed poorly as the chain length increased. The mean-based algorithms (Q-learning and QR) performed poorly in both chains. Although QR did learn the distribution information, it did not use that information properly for exploration.
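The sketch below shows a tabular quantile-regression update (without the Huber smoothing) and the quantile-based behavior policies used by O-QR / P-QR; the function names and hyper-parameters are ours for illustration.

import numpy as np

def qr_update(q, s, a, r, s_next, done, tau_hat, alpha=0.1):
    # q: (num_states, num_actions, N) table of quantile estimates; no discounting in the chains.
    a_next = int(np.argmax(q[s_next].mean(axis=-1)))            # greedy w.r.t. the mean
    targets = r + (np.zeros_like(q[s, a]) if done else q[s_next, a_next])
    for i, tau in enumerate(tau_hat):
        # Quantile regression: push q_i towards the tau_i-quantile of the target samples.
        q[s, a, i] += alpha * np.mean(tau - (targets < q[s, a, i]).astype(float))
    return q

def act(q, s, quantile_index=None, epsilon=0.1):
    # O-QR / P-QR behave greedily w.r.t. a high / low quantile; QR uses the mean.
    values = q[s].mean(axis=-1) if quantile_index is None else q[s, :, quantile_index]
    if np.random.rand() < epsilon:
        return np.random.randint(len(values))
    return int(np.argmax(values))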

The results in the two chains show that quantiles influence exploration efficiency, and quantile-based action selection can improve exploration if the quantile is properly chosen. The results also demonstrate that different tasks need different quantiles: no single quantile is universally best. As a result, a high-level policy for quantile selection is necessary.

The Quantile Option Architecture

We now introduce QUOTA for discrete-action problems. We have $N$ quantile estimations $\{q_i\}_{i=1,\dots,N}$ for the quantile levels $\{\hat{\tau}_i\}_{i=1,\dots,N}$. We construct $M$ options $\{\omega_j\}_{j=1,\dots,M}$. For simplicity, in this paper all the options share the same initiation set $\mathcal{S}$ and the same termination function $\beta$, which is a constant function. We use $\pi_{\omega_j}$ to denote the intra-option policy of an option $\omega_j$. $\pi_{\omega_j}$ proposes actions based on the mean of the $j$-th window of quantiles, i.e., the quantiles with indices from $(j-1)K + 1$ to $jK$, where $K \doteq N / M$. (We assume $N$ is divisible by $M$ for simplicity.) Here $K$ represents a window size, and we have $M$ windows in total. To be more specific, in order to compose the $j$-th option, we first define a state-action value function $Q_j$ by averaging a local window of quantiles:

$$Q_j(s, a) \doteq \frac{1}{K} \sum_{i=(j-1)K + 1}^{jK} q_i(s, a).$$

We then define the intra-option policy $\pi_{\omega_j}$ of the $j$-th option to be an $\epsilon$-greedy policy with respect to $Q_j$. Here we compose an option with a window of quantiles, instead of a single quantile, to increase stability. Although $Q_j$ takes a mean form, it is not the mean of the full state-action value distribution. QUOTA learns the option-value function $Q_\Omega$ via Intra-option Q-learning for option selection. The quantile estimations $\{q_i\}$ are learned via QR-DQN.

To summarize, at each time step $t$, we reselect a new option with probability $\beta$ and continue executing the previous option with probability $1 - \beta$. The reselection of a new option is done via an $\epsilon_\Omega$-greedy policy derived from $Q_\Omega$, where $\epsilon_\Omega$ is the random option selection probability. Once the current option is determined, we select an action according to its intra-option policy. The pseudo-code of QUOTA is provided in Supplementary Material.
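A sketch of this option and action selection for a single state is shown below, assuming the quantile head and the option-value head have already been evaluated; the tensor shapes and names are ours.

import torch

def select_option_and_action(quantiles, q_option, prev_option, beta,
                             eps_option, eps_action):
    # quantiles: (num_actions, N) quantile estimates q_i(s_t, .) for the current state
    # q_option:  (M,) option values Q_Omega(s_t, .)
    num_actions, N = quantiles.shape
    M = q_option.shape[0]
    K = N // M                                              # window size per option
    # Terminate the previous option with probability beta, then reselect eps_Omega-greedily.
    if prev_option is None or torch.rand(1).item() < beta:
        if torch.rand(1).item() < eps_option:
            option = int(torch.randint(M, (1,)))
        else:
            option = int(q_option.argmax())
    else:
        option = prev_option
    # Q_j(s, a): mean of the j-th window of quantiles (options are 0-indexed here).
    q_j = quantiles[:, option * K:(option + 1) * K].mean(dim=-1)
    if torch.rand(1).item() < eps_action:
        action = int(torch.randint(num_actions, (1,)))
    else:
        action = int(q_j.argmax())
    return option, action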

QUOTA for Continuous Control

QR-DDPG

Quantile Regression DDPG (QR-DDPG, detailed in Supplementary Material) is a new algorithm obtained by modifying DDPG's critic. Instead of learning $Q$, we learn the distribution $Z$ directly, in the same manner as the discrete-action controller QR-DQN. barth2018distributed also learned $Z$ instead of $Q$. However, they parameterized $Z$ through categorical approximation and mixed Gaussian approximation. To our best knowledge, QR-DDPG is the first to parameterize $Z$ with quantile estimations in continuous-action problems. We use QR-DDPG and DDPG as baselines.

QUOTA

Given the distribution of the state-action value approximated by quantiles, the main question now is how to select an action according to a certain quantile. For a finite discrete action space, action selection is done according to an $\epsilon$-greedy policy with respect to $Q_j$, where we iterate through all possible actions in the whole action space. To get the action that maximizes a quantile of a distribution in continuous-action problems, we perform gradient ascent for different intra-option policies, in analogy to DDPG. We have $M$ options $\{\omega_j\}_{j=1,\dots,M}$. The intra-option policy for the option $\omega_j$ is a deterministic mapping $\mu_j: \mathcal{S} \to \mathcal{A}$. We train $\mu_j$ to approximate $\arg\max_a Q_j(s, a)$ via gradient ascent. To be more specific, the gradient for $\mu_j$ (parameterized by $\phi_j$) at time step $t$ is

$$\nabla_{\phi_j} \mu_j(s_t; \phi_j)\, \nabla_a Q_j(s_t, a)\big|_{a = \mu_j(s_t; \phi_j)}.$$

To compute the update target for the critic $\{q_i\}$, we also need one more actor $\mu_0$ that maximizes the mean of the distribution (i.e., $\frac{1}{N}\sum_i q_i$), as it is impossible to iterate through all the actions in a continuous-action space. Note $\mu_0$ is the same as the actor of QR-DDPG. We augment QUOTA's option set with this mean-maximizing actor, giving $M + 1$ options. We refer to $\mu_j$ as the $j$-th quantile actor ($j = 0, 1, \dots, M$). QUOTA for continuous-action problems is detailed in Supplementary Material.
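A minimal PyTorch sketch of the quantile-actor updates is given below, assuming a critic that maps (states, actions) to (batch, N) quantile estimates and a list of actors in which index 0 is the mean-maximizing actor; the structure and names are ours.

import torch

def update_quantile_actors(critic, actors, optimizers, states, K):
    # actors[0] maximizes the mean of all quantiles; actors[j] (j >= 1) maximizes Q_j,
    # the mean of the j-th window of K quantiles. Each optimizer holds only its actor's
    # parameters, so the critic is not updated here.
    for j, (actor, opt) in enumerate(zip(actors, optimizers)):
        quantiles = critic(states, actor(states))              # (batch, N)
        if j == 0:
            value = quantiles.mean(dim=-1)                      # mean of the full distribution
        else:
            value = quantiles[:, (j - 1) * K: j * K].mean(dim=-1)
        loss = -value.mean()                                    # gradient ascent on the value
        opt.zero_grad()
        loss.backward()
        opt.step()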

Experiments

We designed experiments to study whether QUOTA improves exploration and whether it can scale to challenging tasks. All the implementations are publicly available at https://github.com/ShangtongZhang/DeepRL.

Does QUOTA improve exploration?

We benchmarked QUOTA for discrete-action problems in the two chains. We used a tabular state representation and three quantiles to approximate $Z$, as in the previous experiments. Both $\epsilon$ and $\epsilon_\Omega$ were fixed at 0.1, and $\beta$ was fixed at 0, which means an option never terminated and lasted for a whole episode. The results are reported in Figures 1b and 1c. QUOTA consistently performed well in both chains.

Although QUOTA did not achieve the best performance in the two chains, it consistently reached a performance level comparable with the best algorithm in each chain. Not only was the best algorithm in each chain designed with certain domain knowledge, but the best algorithm in one chain also performed poorly in the other chain. We do not expect QUOTA to achieve the best performance in both chains, because it does not use chain-specific knowledge. The mean-based algorithms (Q-learning and QR) consistently performed poorly in both chains. QUOTA achieved more efficient exploration than Q-learning and QR.

Can QUOTA scale to challenging tasks?

To verify the scalability of QUOTA, we evaluated it in both the Arcade Learning Environment (ALE) (bellemare2013arcade) and Roboschool (https://blog.openai.com/roboschool/), both of which are general-purpose RL benchmarks.

Arcade Learning Environment

We evaluated QUOTA in the 49 Atari games from ALE, as in mnih2015human. Our baseline algorithm is QR-DQN. We implemented QR-DQN with multiple workers (mnih2016asynchronous; clemente2017efficient) and an $n$-step return extension (mnih2016asynchronous; hessel2017rainbow), resulting in reduced wall time and memory consumption compared with an experience-replay-based implementation. We implemented QUOTA in the same way. Details and further justification for this implementation can be found in Supplementary Material.

We used the same network architecture as dabney2017distributional to process the input pixels. For QUOTA, we added an extra head after the second last layer of the network to produce the option value $Q_\Omega$. For both QUOTA and QR-DQN, we used 16 synchronous workers with a rollout length of 5, resulting in a batch size of 80. We trained each agent for 40M steps with a frameskip of 4, i.e., 160M frames in total. We used an RMSProp optimizer. The discount factor is 0.99. The $\epsilon$ for action selection was linearly decayed from 1.0 to 0.05 in the first 4M training steps and remained 0.05 afterwards. All the hyper-parameter values above, including the initial learning rate, were based on an $n$-step Q-learning baseline with synchronous workers from farquhar2018treeqn, and all our implementations inherited these values. We used 200 quantiles to approximate the distribution and set the Huber loss parameter $\kappa$ to 1, as used by dabney2017distributional. We used 10 options in QUOTA ($M = 10$), and $\epsilon_\Omega$ was linearly decayed from 1.0 to 0 during the 40M training steps. $\beta$ was fixed at 0.01 and was tuned in the game Freeway from a small set of candidate values. The schedule of $\epsilon_\Omega$ was also tuned in Freeway.

We measured the final performance at the end of training (i.e., the mean episode return of the last 1,000 episodes) and the cumulative rewards during training. The results are reported in Figure 2. In terms of final performance / cumulative rewards, QUOTA outperformed QR-DQN in 23 / 21 games and underperformed QR-DQN in 14 / 13 games. Here we only considered performance changes larger than 3%. In particular, in the 10 most challenging games (according to the DQN scores reported in mnih2015human), QUOTA achieved a 97.2% cumulative reward improvement on average.

In QUOTA, randomness comes from both $\epsilon$ and $\epsilon_\Omega$. In QR-DQN, however, randomness comes only from $\epsilon$. So QUOTA does have more randomness (i.e., exploration) than QR-DQN when $\epsilon$ is the same. To make the comparison fair, we also implemented an alternative version of QR-DQN, referred to as QR-DQN-Alt, where all the hyper-parameter values were the same except that $\epsilon$ was linearly decayed from 1.0 to 0 during the whole 40M training steps, like $\epsilon_\Omega$ in QUOTA. In this way, QR-DQN-Alt had a comparable amount of exploration to QUOTA.

We also benchmarked QR-DQN-Alt in the 49 Atari games. In terms of final performance / cumulative rewards, QUOTA outperformed QR-DQN-Alt in 27 / 42 games and underperformed QR-DQN-Alt in 14 / 5 games, indicating that naively increasing exploration by tuning $\epsilon$ does not guarantee a performance boost.

All the original learning curves and scores are reported in Supplementary Material.

Figure 2: (a) Final performance improvement of QUOTA over QR-DQN. (b) Cumulative rewards improvement of QUOTA over QR-DQN. Numbers indicate the relative improvement of QUOTA's score over QR-DQN's score, where scores in (a) are the final performance (average episode return of the last 1,000 episodes at the end of training) and scores in (b) are cumulative rewards during training. All scores are averaged across 3 independent runs. Bars above / below the horizon indicate performance gain / loss.

Roboschool

Roboschool is a free port of Mujoco (http://www.mujoco.org/) by OpenAI, where a state contains joint information of a robot and an action is a multi-dimensional continuous vector. We consider DDPG and QR-DDPG as our baselines, implemented with experience replay. For DDPG, we used the same hyper-parameter values and exploration noise as lillicrap2015continuous, except for one change to the activation and regularizer configuration that we found brought a performance boost in our Roboschool tasks. Our implementations of QUOTA and QR-DDPG inherited these hyper-parameter values. For QR-DDPG, we used 20 output units after the second last layer of the critic network to produce 20 quantile estimations of the action-value distribution. For QUOTA, we used another two-hidden-layer network with 400 and 300 hidden units to compute $Q_\Omega$. (This network is the same as the actor network in DDPG and QR-DDPG.) We used 6 options in total, including one option that corresponds to the actor maximizing the mean value. $\epsilon_\Omega$ was linearly decayed from 1.0 to 0 over the whole 1M training steps. $\beta$ was fixed at 1.0, which means we reselected an option at every time step; it was tuned in the Ant task from a small set of candidate values. We trained each algorithm for 1M steps and performed 20 deterministic evaluation episodes every 10K training steps.

The results are reported in Figure 3. QUOTA demonstrated improved performance over both DDPG and QR-DDPG in 5 out of the 6 tasks shown. For the other six Roboschool tasks, all the compared algorithms had large variance and are hard to compare; those results are reported in Supplementary Material.

Visualization

To understand option selection during training, we plot in Figure 4 the frequency of the greedy option according to $Q_\Omega$ at different stages of training. At different training stages, $Q_\Omega$ did propose different quantile options. The quantile option corresponding to the mean or the median did not dominate training. In fact, the mean-maximizing options were rarely proposed by the high-level policy during training. This indicates that the traditional mean-centered action selection (adopted in standard RL and prevailing distributional RL) can be improved by quantile-based exploration.

Figure 3: Evaluation curves of Roboschool tasks. The x-axis is training steps, and the y-axis is score. Blue lines indicate QUOTA, green lines indicate DDPG, and red lines indicate QR-DDPG. The curves are averaged over 5 independent runs, and standard errors are plotted as shadow.
Figure 4: Frequency of the greedy option w.r.t. $Q_\Omega$. The color represents the frequency with which an option was proposed by $Q_\Omega$ at different training stages. The frequencies in each column sum to 1. The darker a grid is, the more frequently the option was proposed at that time.

Related Work

There have been many methods for exploration in the model-free setting based on the idea of optimism in the face of uncertainty (lai1985asymptotically). Different approaches are used to achieve this optimism, e.g., count-based methods (auer2002using; kocsis2006bandit; bellemare2016unifying; ostrovski2017count; tang2017exploration) and Bayesian methods (kaufmann2012bayesian; chen2017ucb; o2017uncertainty). Those methods make use of optimism about the parametric uncertainty (i.e., the uncertainty from estimation). In contrast, QUOTA makes use of both optimism and pessimism about the intrinsic uncertainty (i.e., the uncertainty from the MDP itself). Recently, moerland2017efficient combined the two kinds of uncertainty via the Double Uncertain Value Network and demonstrated a performance boost in simple domains.

There is another line of related work in risk-sensitive RL. Classic risk-sensitive RL is also based on the intrinsic uncertainty, where a utility function (morgenstern1953theory) is used to distort a value distribution, resulting in risk-averse or risk-seeking policies (howard1972risk; marcus1997risk; chow2014algorithms). The expectation of the utility-function-distorted value distribution can be interpreted as a weighted sum of quantiles (dhaene2012remarks), meaning that quantile-based action selection implicitly adopts the idea of a utility function. In particular, morimura2012parametric used a small quantile for control, resulting in a safe policy in a cliff grid world. chris2017particle employed an exponential utility function over the parametric uncertainty, also resulting in a performance boost in a cliff grid world. Recently, dabney2018implicit proposed various risk-sensitive policies by applying different utility functions to a learned quantile function. The high-level policy in QUOTA can be interpreted as a special utility function, and the optimistic and pessimistic exploration in QUOTA can be interpreted as risk-sensitive policies. However, the "utility function" in QUOTA is formalized in the option framework and is learnable: we do not need extra labor to pre-specify a utility function.

Moreover, tang2018exploration combined Bayesian parameter updates with distributional RL for efficient exploration. However, improvements were demonstrated only in simple domains.

Closing Remarks

QUOTA achieves an on-the-fly decision between optimistic and pessimistic exploration, resulting in improved performance in various domains. QUOTA provides a new dimension for exploration: in this optimism-pessimism dimension, an agent may be able to find an effective exploration strategy more quickly than in the original action space. QUOTA thus provides option-level exploration.

At first glance, QUOTA introduces three extra hyper-parameters: the number of options $M$, the random option selection probability $\epsilon_\Omega$, and the termination probability $\beta$. For $M$, we simply used 10 and 5 options for Atari games and Roboschool tasks respectively. For $\epsilon_\Omega$, we used a linear schedule decaying from 1.0 to 0 over the whole training period, which is a natural choice. We do expect performance improvements if $M$ and $\epsilon_\Omega$ are further tuned. For $\beta$, we tuned it in only two environments and used the same value for all the other environments, so the labor involved was small. Furthermore, $\beta$ can also be learned directly in end-to-end training via the termination gradient theorem (bacon2017option). We leave this for future work.

Bootstrapped-DQN (BDQN, osband2016deep) approximated the distribution of the expected return via a set of heads. At the beginning of each episode, BDQN uniformly samples one head and commits to that head for the whole episode. This uniform sampling and episode-long commitment are crucial to the deep exploration of BDQN and inspired us to try an analogous configuration of $\epsilon_\Omega$ and $\beta$ (uniform option selection with episode-long commitment). However, this special configuration only improved performance in certain tasks. Each head in BDQN is an estimation of the value itself: all heads are expected to converge to the optimal state-action value at the end of training. As a result, a simple uniform selection over the heads in BDQN does not hurt performance. In QUOTA, however, each head (i.e., quantile option) is an estimation of one quantile of $Z$. Not all quantiles are useful for control, so a learned selection among quantile options is necessary. One direction for future work is to combine QUOTA with BDQN or other parametric-uncertainty-based algorithms (e.g., count-based methods) by applying them to each quantile estimation.


Supplementary Material

QUOTA for Discrete-Action Problems

Input:
$\epsilon$: random action selection probability
$\epsilon_\Omega$: random option selection probability
$\beta$: option termination probability
$\{q_i\}$: quantile estimation functions, parameterized by $\theta$
$Q_\Omega$: an option value function, parameterized by $w$
Output:
parameters $\theta$ and $w$
for each time step t do
       Observe the state $s_t$
       Select an option $\omega_t$: with probability $\beta$ (or if $t$ is the first step of an episode), reselect an option $\epsilon_\Omega$-greedily w.r.t. $Q_\Omega(s_t, \cdot)$; otherwise $\omega_t \leftarrow \omega_{t-1}$
       Select an action $a_t$ (assuming $\omega_t$ is the $j$-th option): $a_t$ is $\epsilon$-greedy w.r.t. $Q_j(s_t, \cdot)$, the mean of the $j$-th window of quantiles
       /* Note the action selection here is not based on the mean of the value distribution */
       Execute $a_t$, get reward $r_{t+1}$ and next state $s_{t+1}$
       Compute the quantile regression targets $y_{t, i'} \leftarrow r_{t+1} + \gamma q_{i'}(s_{t+1}, a^*)$ for $i' = 1, \dots, N$, with $a^* \leftarrow \arg\max_a \frac{1}{N}\sum_i q_i(s_{t+1}, a)$
       Update $\theta$ by one step of gradient descent on the Huber quantile regression loss
       Update $w$ with the Intra-option Q-learning update
end for
Algorithm 1 QUOTA for discrete-action problems

The algorithm is presented in an online form for simplicity. Implementation details are illustrated in the next section.
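As a complement to Algorithm 1, the sketch below computes, for a single transition, the two update targets used at each step: the quantile-regression targets for the quantile head (greedy w.r.t. the mean, as in QR-DQN) and the intra-option Q-learning target for the option head. The tensor shapes and names are ours.

import torch

def quota_targets(quantiles_next, q_option_next, r, gamma, option, beta):
    # quantiles_next: (num_actions, N) quantile estimates at s_{t+1} (from the target network)
    # q_option_next:  (M,) option values Q_Omega(s_{t+1}, .)
    a_star = int(quantiles_next.mean(dim=-1).argmax())          # greedy w.r.t. the mean
    quantile_targets = r + gamma * quantiles_next[a_star]       # (N,) target samples
    # Intra-option Q-learning target: continue with prob. (1 - beta), re-select with prob. beta.
    option_target = r + gamma * ((1 - beta) * q_option_next[option]
                                 + beta * q_option_next.max())
    return quantile_targets, option_target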

Experimental Results in Atari Games

DQN used experience replay to stabilize the training of the convolutional neural network function approximator. However, a replay buffer storing pixels consumes much memory, and the training of DQN is slow. mnih2016asynchronous proposed asynchronous methods to speed up training, where experience replay is replaced by multiple asynchronous workers. Each worker has its own environment instance and its own copy of the learning network. The workers interact with their environments and compute gradients of the learning network in parallel and asynchronously. Only the gradients are collected by a master worker, which updates the learning network with the collected gradients and broadcasts the updated network to each worker. However, asynchronous methods cannot take full advantage of a modern GPU (mnih2016asynchronous). To address this issue, clemente2017efficient proposed batched training with multiple synchronous workers. Besides multiple workers, mnih2016asynchronous also used $n$-step Q-learning. Recently, the $n$-step extension of Q-learning was shown to be a crucial component of the Rainbow architecture (hessel2017rainbow), which achieves state-of-the-art performance in Atari games. $n$-step Q-learning with multiple workers has been widely used as a baseline algorithm (oh2017value; farquhar2018treeqn). In our experiments, we implemented both QUOTA for discrete-action control and QR-DQN with multiple workers and an $n$-step return extension.
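A minimal sketch of computing the $n$-step bootstrap targets from a length-$n$ synchronous rollout (as used by the baselines above); the tensor layout is our assumption.

import torch

def n_step_targets(rewards, dones, bootstrap_values, gamma=0.99):
    # rewards, dones:    (n, num_workers) tensors collected during the rollout
    # bootstrap_values:  (num_workers,) max_a Q(s_{t+n}, a) from the target network
    n = rewards.shape[0]
    targets = torch.zeros_like(rewards)
    ret = bootstrap_values
    # Walk backwards through the rollout, cutting the bootstrap at episode boundaries.
    for i in reversed(range(n)):
        ret = rewards[i] + gamma * (1 - dones[i]) * ret
        targets[i] = ret
    return targets   # targets[i] is the (n - i)-step return target for the i-th state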

Figure 5: Learning curves of the 49 Atari games. The x-axis is training steps, and the y-axis is the mean return of the most recent 1,000 episodes up to the time step. Blue lines indicate QUOTA, green lines indicate QR-DQN, and red lines indicate QR-DQN-Alt. All the scores are averaged over 3 independent runs, and standard errors are plotted as shadow.
Game QUOTA QR-DQN QR-DQN-Alt
Alien 1821.91 1760.00 1566.83
Amidar 571.46 567.97 257.50
Assault 3511.17 3308.78 3948.54
Asterix 6112.12 6176.05 5135.85
Asteroids 1497.62 1305.30 1173.98
Atlantis 965193.00 978385.30 516660.60
BankHeist 735.27 644.72 866.66
BattleZone 25321.67 22725.00 23858.00
BeamRider 5522.60 5007.83 6411.33
Bowling 34.08 27.64 31.98
Boxing 96.16 95.02 91.12
Breakout 316.74 322.17 334.45
Centipede 3537.92 4330.31 3492.00
ChopperCommand 3793.03 3421.10 2623.57
CrazyClimber 113051.70 107371.67 109117.37
DemonAttack 61005.12 80026.68 23286.76
DoubleDunk -21.56 -21.66 -21.97
Enduro 1162.35 1220.06 641.76
FishingDerby -59.09 -9.60 -71.70
Freeway 31.04 30.60 29.78
Frostbite 2208.58 2046.36 503.82
Gopher 6824.34 9443.89 7352.45
Gravitar 457.65 414.33 251.93
IceHockey -9.94 -9.87 -12.60
Jamesbond 495.58 601.75 523.37
Kangaroo 2555.80 2364.60 1730.33
Krull 7747.51 7725.47 8071.42
KungFuMaster 20992.57 17807.43 21586.23
MontezumaRevenge 0.00 0.00 0.00
MsPacman 2423.57 2273.35 2100.19
NameThisGame 7327.55 7748.26 9509.96
Pitfall -30.76 -32.99 -25.22
Pong 20.03 19.66 20.59
PrivateEye 114.16 419.35 375.61
Qbert 11790.29 10875.34 5544.53
Riverraid 10169.87 9710.47 8700.98
RoadRunner 27872.27 27640.73 35419.80
Robotank 37.68 45.11 35.44
Seaquest 2628.60 1690.57 4108.77
SpaceInvaders 1553.88 1387.61 1137.42
StarGunner 52920.00 49286.60 45910.30
Tennis -23.70 -22.74 -23.17
TimePilot 5125.13 6417.70 5275.07
Tutankham 195.44 173.26 198.89
UpNDown 24912.70 30443.61 29886.25
Venture 26.53 5.30 3.73
VideoPinball 44919.13 123425.46 78542.20
WizardOfWor 4582.07 5219.00 3716.80
Zaxxon 8252.83 6855.17 6144.97
Table 1: Final scores of the 49 Atari games. Scores are the average episode return of the last 1,000 episodes at the end of training. All scores are averaged over 3 independent runs.

Quantile Regression DDPG

Input:
$\mathcal{N}$: a noise process
$\{q_i\}$: quantile estimation functions, parameterized by $\theta$
$\mu$: a deterministic policy, parameterized by $\phi$
Output:
parameters $\theta$ and $\phi$
for each time step t do
       Observe the state $s_t$
       $a_t \leftarrow \mu(s_t) + \mathcal{N}_t$
       Execute $a_t$, get reward $r_{t+1}$ and next state $s_{t+1}$
       Compute the quantile regression targets $y_{t, i'} \leftarrow r_{t+1} + \gamma q_{i'}(s_{t+1}, \mu(s_{t+1}))$, for $i' = 1, \dots, N$
       Update $\theta$ by one step of gradient descent on the Huber quantile regression loss
       Update $\phi$ by one step of gradient ascent on $\frac{1}{N}\sum_i q_i(s_t, \mu(s_t; \phi))$
end for
Algorithm 2 QR-DDPG

The algorithm is presented in an online learning form for simplicity. In our experiments, however, we used experience replay and a target network, as in DDPG.

QUOTA for Continuous-Action Problems

Input:
$\mathcal{N}$: a noise process
$\epsilon_\Omega$: random option selection probability
$\beta$: option termination probability
$\{q_i\}$: quantile estimation functions, parameterized by $\theta$
$\{\mu_j\}_{j=0,\dots,M}$: quantile actors, parameterized by $\{\phi_j\}$
$Q_\Omega$: option value function, parameterized by $w$
Output:
parameters $\theta$, $\{\phi_j\}$, and $w$
for each time step t do
       Observe the state $s_t$
       Select a candidate option $\omega_t$: with probability $\beta$ reselect an option $\epsilon_\Omega$-greedily w.r.t. $Q_\Omega(s_t, \cdot)$; otherwise $\omega_t \leftarrow \omega_{t-1}$
       Get the quantile actor $\mu_j$ associated with the option $\omega_t$
       $a_t \leftarrow \mu_j(s_t) + \mathcal{N}_t$
       Execute $a_t$, get reward $r_{t+1}$ and next state $s_{t+1}$
       /* The quantile actor $\mu_0$ is to maximize the mean return and is used for computing the update target for the critic. */
       Compute the quantile regression targets $y_{t, i'} \leftarrow r_{t+1} + \gamma q_{i'}(s_{t+1}, \mu_0(s_{t+1}))$, for $i' = 1, \dots, N$
       Update $\theta$ by one step of gradient descent on the Huber quantile regression loss
       Update $w$ with the Intra-option Q-learning update
       for $j = 1, \dots, M$ do
              Update $\phi_j$ by one step of gradient ascent on $Q_j(s_t, \mu_j(s_t; \phi_j))$
       end for
       Update $\phi_0$ by one step of gradient ascent on $\frac{1}{N}\sum_i q_i(s_t, \mu_0(s_t; \phi_0))$
end for
Algorithm 3 QUOTA for continuous-action problems

The algorithm is presented in an online learning form for simplicity. In our experiments, however, we used experience replay and a target network, as in DDPG.

Experimental Results in Roboschool

Figure 6: Evaluation curves of Roboschool tasks. The x-axis is training steps, and the y-axis is score. Blue lines indicate QUOTA, green lines indicate DDPG, and red lines indicate QR-DDPG. The curves are averaged over 5 independent runs, and standard errors are plotted as shadow.
