Increasing the Action Gap: New Operators for Reinforcement Learning

December 15, 2015 · Marc G. Bellemare et al. · Google, Carnegie Mellon University

This paper introduces new optimality-preserving operators on Q-functions. We first describe an operator for tabular representations, the consistent Bellman operator, which incorporates a notion of local policy consistency. We show that this local consistency leads to an increase in the action gap at each state; increasing this gap, we argue, mitigates the undesirable effects of approximation and estimation errors on the induced greedy policies. This operator can also be applied to discretized continuous space and time problems, and we provide empirical results evidencing superior performance in this context. Extending the idea of a locally consistent operator, we then derive sufficient conditions for an operator to preserve optimality, leading to a family of operators which includes our consistent Bellman operator. As corollaries we provide a proof of optimality for Baird's advantage learning algorithm and derive other gap-increasing operators with interesting properties. We conclude with an empirical study on 60 Atari 2600 games illustrating the strong potential of these new operators.


Background

We consider a Markov decision process $M := (\mathcal{X}, \mathcal{A}, P, R, \gamma)$, where $\mathcal{X}$ is the state space, $\mathcal{A}$ is the finite action space, $P$ is the transition probability kernel, $R$ is the reward function mapping state-action pairs to a bounded subset of $\mathbb{R}$, and $\gamma \in [0, 1)$ is the discount factor. We denote by $\mathcal{Q}$ and $\mathcal{V}$ the space of bounded real-valued functions over $\mathcal{X} \times \mathcal{A}$ and $\mathcal{X}$, respectively. For $Q \in \mathcal{Q}$ we write $V(x) := \max_a Q(x, a)$, and follow this convention for related quantities ($\tilde{V}$ for $\tilde{Q}$, $V_k$ for $Q_k$, etc.) whenever convenient and unambiguous. In the context of a specific $(x, a) \in \mathcal{X} \times \mathcal{A}$ we further write $\mathbb{E}_P$ to mean the expectation with respect to $P(\cdot \mid x, a)$, with the convention that $x'$ always denotes the next-state random variable.

A deterministic policy $\pi : \mathcal{X} \to \mathcal{A}$ induces a Q-function $Q^\pi \in \mathcal{Q}$ whose Bellman equation is

$$Q^\pi(x, a) := R(x, a) + \gamma\, \mathbb{E}_P\, Q^\pi(x', \pi(x')).$$

The state-conditional expected return $V^\pi(x) := Q^\pi(x, \pi(x))$ is the expected discounted total reward received from starting in $x$ and following $\pi$.

The Bellman operator $\mathcal{T}$ is defined pointwise as

$$\mathcal{T} Q(x, a) := R(x, a) + \gamma\, \mathbb{E}_P \max_{b \in \mathcal{A}} Q(x', b). \tag{1}$$

$\mathcal{T}$ is a contraction mapping in supremum norm (Bertsekas and Tsitsiklis 1996) whose unique fixed point is the optimal Q-function

$$Q^*(x, a) = R(x, a) + \gamma\, \mathbb{E}_P \max_{b \in \mathcal{A}} Q^*(x', b),$$

which induces the optimal policy $\pi^*$:

$$\pi^*(x) := \operatorname*{arg\,max}_{a \in \mathcal{A}} Q^*(x, a) \qquad \forall\, x \in \mathcal{X}.$$

A Q-function $Q \in \mathcal{Q}$ induces a greedy policy $\pi(x) := \operatorname{arg\,max}_a Q(x, a)$. For $x \in \mathcal{X}$ we call $\pi(x)$ the greedy action with respect to $Q$ and any $a \ne \pi(x)$ a nongreedy action; for $\pi^*$ these are the usual optimal and suboptimal actions, respectively.

We emphasize that while we focus on the Bellman operator, our results easily extend to its variations such as SARSA (Rummery and Niranjan 1994), policy evaluation (Sutton 1988), and fitted Q-iteration (Ernst, Geurts, and Wehenkel 2005). In particular, our new operators all have a sample-based form, i.e., an analogue to the Q-Learning rule of Watkins (1989).
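As a concrete reference for the operators discussed throughout, the following is a minimal NumPy sketch of the tabular Bellman operator (1) and of its sample-based analogue, the Q-Learning rule. The array layout (integer-indexed states and actions, `P` of shape `(states, actions, states)`) is an illustrative convention, not anything prescribed above; later sketches reuse `bellman_operator` and this layout.

```python
import numpy as np

def bellman_operator(Q, P, R, gamma):
    """Bellman operator (1) for a tabular Q-function.

    Q: (num_states, num_actions); R: (num_states, num_actions);
    P: (num_states, num_actions, num_states); gamma: discount in [0, 1).
    """
    V = Q.max(axis=1)                # V(x') = max_b Q(x', b)
    return R + gamma * P.dot(V)      # (TQ)(x, a) = R(x, a) + gamma * E_P[V(x')]

def q_learning_update(Q, x, a, r, x_next, gamma, step_size=0.1):
    """Sample-based analogue of (1): one Q-Learning step on an observed transition."""
    target = r + gamma * Q[x_next].max()
    Q[x, a] += step_size * (target - Q[x, a])
    return Q
```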

The Consistent Bellman Operator

It is well known (and implicit in our notation) that the optimal policy for $M$ is stationary (i.e., time-independent) and deterministic. In looking for $\pi^*$, we may therefore restrict our search to the space $\Pi$ of stationary deterministic policies. Interestingly, as we now show, the Bellman operator on $\mathcal{Q}$ is not, in a sense, restricted to $\Pi$.

Figure 1: A two-state MDP illustrating the non-stationary aspect of the Bellman operator. Here, $p$ and $r$ indicate transition probabilities and rewards, respectively. In state $x_1$ the agent may either eat cake to receive a reward of 1 and transition to $x_2$ with probability $p$, or abstain for no reward. State $x_2$ is a low-value absorbing state.

To begin, consider the two-state MDP depicted in Figure 1. This MDP abstracts a Faustian situation in which an agent repeatedly chooses between an immediately rewarding but ultimately harmful option ($a_1$), or an unrewarding alternative ($a_2$). For concreteness, we imagine the agent as faced with an endless supply of delicious cake, and call $a_1$ and $a_2$ the "cake" and "no cake" actions, respectively.

Eating cake can cause a transition to $x_2$, the "bad state", whose (negative) value is independent of the agent's policy; we denote it $V(x_2)$.

In state $x_1$, however, the Q-values depend on the agent's future behaviour. For a policy $\pi$, the value of eating cake is

$$Q^\pi(x_1, a_1) = 1 + \gamma \big[ (1 - p)\, Q^\pi(x_1, \pi(x_1)) + p\, V(x_2) \big]. \tag{2}$$

By contrast, the value of abstaining is

$$Q^\pi(x_1, a_2) = \gamma\, Q^\pi(x_1, \pi(x_1)),$$

which is greater than (2) whenever $V(x_2)$ is sufficiently negative, as it is in Figure 1. It follows that not eating cake is optimal, and thus $V^*(x_1) = Q^*(x_1, a_2) = 0$. Furthermore, (2) tells us that the value difference between the optimal and second-best action, or action gap, is

$$V^*(x_1) - Q^*(x_1, a_1) = -1 - \gamma\, p\, V(x_2).$$

Notice that $Q^*(x_1, a_1)$ does not describe the value of any stationary policy. That is, the stationary policy $\pi$ with $\pi(x_1) = a_1$ has value

$$Q^\pi(x_1, a_1) = \frac{1 + \gamma\, p\, V(x_2)}{1 - \gamma (1 - p)}, \tag{3}$$

and in particular this value is lower than $Q^*(x_1, a_1)$. Instead, $Q^*(x_1, a_1)$ describes the value of a nonstationary policy which eats cake once, but then subsequently abstains.

So far we have considered the Q-functions of given stationary policies $\pi$, and argued that the quantity $Q^*(x_1, a_1)$ is in this sense nonstationary. We now make a similar statement about the Bellman operator: for any $Q \in \mathcal{Q}$, the nongreedy components of $\mathcal{T} Q$ do not generally describe the expected return of stationary policies. Hence the Bellman operator is not restricted to $\Pi$.

When the MDP of interest can be solved exactly, this nonstationarity is a non-issue, since only the Q-values for optimal actions matter. In the presence of estimation or approximation error, however, small perturbations in the Q-function may result in erroneously identifying the optimal action. Our example illustrates this effect: an estimate of $Q^*(x_1, \cdot)$ whose error is on the order of the action gap can induce a pessimal greedy policy (i.e., one which always eats cake).

To address this issue, we may be tempted to define a new Q-function which explicitly incorporates stationarity: the value of taking action $a$ in state $x$, and of taking $a$ again on every subsequent visit to $x$, while following $\pi$ elsewhere:

$$\tilde{Q}^\pi(x, a) := \mathbb{E} \left[ \sum_{t=0}^{\infty} \gamma^t R(x_t, a_t) \;\middle|\; x_0 = x,\; a_t = \begin{cases} a & \text{if } x_t = x, \\ \pi(x_t) & \text{otherwise.} \end{cases} \right] \tag{4}$$

Under this new definition, the action gap of the optimal policy in our example is $\big({-1 - \gamma\, p\, V(x_2)}\big) / \big(1 - \gamma(1 - p)\big)$, strictly larger than before. Unfortunately, (4) does not visibly yield a useful operator on $\mathcal{Q}$. As a practical approximation we now propose the consistent Bellman operator, which preserves a local form of stationarity:

$$\mathcal{T}_C Q(x, a) := R(x, a) + \gamma\, \mathbb{E}_P \Big[ \mathbb{I}_{[x \ne x']} \max_{b \in \mathcal{A}} Q(x', b) + \mathbb{I}_{[x = x']}\, Q(x, a) \Big]. \tag{5}$$

Effectively, our operator redefines the meaning of Q-values: if from state $x$ an action $a$ is taken and the next state is $x' = x$, then $a$ is taken again. In our example, this new Q-value describes the expected return for repeatedly eating cake until a transition to the unpleasant state $x_2$.
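A tabular sketch of the consistent Bellman operator (5), under the same assumed array layout as above: when the next state equals the current state, the bootstrap value is $Q(x, a)$ itself rather than $\max_b Q(x, b)$.

```python
import numpy as np

def consistent_bellman_operator(Q, P, R, gamma):
    """Consistent Bellman operator (5) for a tabular representation.

    On self-transitions (x' == x) the bootstrap value is Q(x, a) itself rather
    than max_b Q(x, b); elsewhere it coincides with the Bellman operator.
    """
    num_states, num_actions = Q.shape
    V = Q.max(axis=1)
    TQ = np.empty_like(Q)
    for x in range(num_states):
        for a in range(num_actions):
            bootstrap = V.copy()
            bootstrap[x] = Q[x, a]              # I{x' == x} Q(x, a)
            TQ[x, a] = R[x, a] + gamma * P[x, a].dot(bootstrap)
    return TQ
```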

Since the optimal policy $\pi^*$ is stationary, we may intuit that iterated application of this new operator also yields $\pi^*$. In fact, below we show that the consistent Bellman operator is both optimality-preserving and, in the presence of direct loops in the corresponding transition graph, gap-increasing:

Definition 1.

An operator $\mathcal{T}'$ is optimality-preserving if, for any $Q_0 \in \mathcal{Q}$ and $x \in \mathcal{X}$, letting $Q_{k+1} := \mathcal{T}' Q_k$,

$$\tilde{V}(x) := \lim_{k \to \infty} \max_{a \in \mathcal{A}} Q_k(x, a)$$

exists, is unique, $\tilde{V}(x) = V^*(x)$, and for all $a \in \mathcal{A}$,

$$Q^*(x, a) < V^*(x) \;\Longrightarrow\; \limsup_{k \to \infty} Q_k(x, a) < V^*(x).$$

Thus under an optimality-preserving operator at least one optimal action remains optimal, and suboptimal actions remain suboptimal.

Definition 2.

Let $M$ be an MDP. An operator $\mathcal{T}'$ for $M$ is gap-increasing if for all $Q_0 \in \mathcal{Q}$, $x \in \mathcal{X}$, $a \in \mathcal{A}$, letting $Q_{k+1} := \mathcal{T}' Q_k$ and $V_k(x) := \max_b Q_k(x, b)$,

$$\liminf_{k \to \infty} \big[ V_k(x) - Q_k(x, a) \big] \;\ge\; V^*(x) - Q^*(x, a). \tag{6}$$

We are particularly interested in operators which are strictly gap-increasing, in the sense that (6) is a strict inequality for at least one $(x, a)$ pair.

Our two-state MDP illustrates the first benefit of increasing the action gap: a greater robustness to estimation error. Indeed, under our new operator the optimal Q-value of eating cake becomes

$$\tilde{Q}(x_1, a_1) = \frac{1 + \gamma\, p\, V(x_2)}{1 - \gamma (1 - p)},$$

which is, again, smaller than $V^*(x_1) = 0$ whenever $1 + \gamma\, p\, V(x_2) < 0$. In the presence of approximation error in the Q-values, we may thus expect the estimate of $Q(x_1, a_2)$ to exceed that of $Q(x_1, a_1)$ more frequently than the converse.
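The exact rewards of Figure 1 are not reproduced in this text, so the sketch below instantiates a two-state MDP with the same structure under assumed values ($\gamma = 0.99$, $p = 0.1$, a per-step reward of $-2$ in the bad state) and compares the limiting action gap at $x_1$ under the two operators. It assumes `bellman_operator` and `consistent_bellman_operator` from the sketches above are in scope.

```python
import numpy as np

gamma, p = 0.99, 0.1                  # assumed values, not those of Figure 1
P = np.zeros((2, 2, 2))
P[0, 0] = [1 - p, p]                  # x1, cake: stay w.p. 1 - p, fall to x2 w.p. p
P[0, 1] = [1.0, 0.0]                  # x1, no cake: remain in x1
P[1, :, 1] = 1.0                      # x2 is absorbing
R = np.array([[1.0, 0.0],             # x1: cake rewards 1, abstaining rewards 0
              [-2.0, -2.0]])          # x2: assumed per-step reward of -2

Q_bellman = np.zeros((2, 2))
Q_consistent = np.zeros((2, 2))
for _ in range(2000):
    Q_bellman = bellman_operator(Q_bellman, P, R, gamma)
    Q_consistent = consistent_bellman_operator(Q_consistent, P, R, gamma)

for name, Q in (("Bellman", Q_bellman), ("consistent", Q_consistent)):
    print(f"{name}: Q(x1, cake) = {Q[0, 0]:8.1f}, "
          f"Q(x1, no cake) = {Q[0, 1]:.1f}, action gap = {Q[0, 1] - Q[0, 0]:.1f}")
```

With these assumed values the greedy action at $x_1$ is the same under both operators, but the action gap under the consistent operator is roughly an order of magnitude larger.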

Aggregation Methods

At first glance, the use of an indicator function in (5) may seem limiting: the self-transition probability $P(x \mid x, a)$ may be zero or close to zero everywhere, or the state may be described by features which preclude a meaningful identity test $\mathbb{I}_{[x = x']}$. There is, however, one important family of value functions which have "tabular-like" properties: aggregation schemes (Bertsekas 2011). As we now show, the consistent Bellman operator is well-defined for all aggregation schemes.

An aggregation scheme for $\mathcal{X}$ is a tuple $(\mathcal{Z}, A, D)$, where $\mathcal{Z}$ is a set of aggregate states, $A$ is a mapping from $\mathcal{X}$ to distributions over $\mathcal{Z}$, and $D$ is a mapping from $\mathcal{Z}$ to distributions over $\mathcal{X}$. For $z \in \mathcal{Z}$ let $x \sim D(\cdot \mid z)$ and $z' \sim A(\cdot \mid x')$, where as before we assign specific roles to $x$, $x'$, $z$, and $z'$. We define the aggregation Bellman operator $\mathcal{T}_A$ as

$$\mathcal{T}_A Q(z, a) := \mathbb{E}_{x \sim D}\, \mathbb{E}_{x' \sim P}\, \mathbb{E}_{z' \sim A} \Big[ R(x, a) + \gamma \max_{b \in \mathcal{A}} Q(z', b) \Big]. \tag{7}$$

When $\mathcal{Z}$ is a finite subset of $\mathcal{X}$ and $D$ corresponds to the identity transition function, i.e. $D(x \mid z) = \mathbb{I}_{[x = z]}$, we recover the class of averagers (Gordon 1995; e.g., multilinear interpolation, illustrated in Figure 2) and kernel-based methods (Ormoneit and Sen 2002). If $A$ also corresponds to the identity and $\mathcal{Z} = \mathcal{X}$ is finite, $\mathcal{T}_A$ reduces to the Bellman operator (1) and we recover the familiar tabular representation (Sutton and Barto 1998).

Figure 2: Multilinear interpolation in two dimensions. The value at $x$ is approximated as a convex combination of the values at the four surrounding grid points, with coefficients given by the corresponding interpolation weights.
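A sketch of the bilinear (two-dimensional multilinear) interpolation weights of Figure 2, viewed as the mapping $A$ from a continuous point to a distribution over the four surrounding grid nodes. The grid representation and function name are assumptions for illustration.

```python
import numpy as np

def bilinear_weights(point, grid_x, grid_y):
    """Interpolation coefficients of `point` with respect to the four surrounding
    nodes of a rectangular grid (an averager in the sense of Gordon 1995).

    Assumes `point` lies within the grid; returns (nodes, weights) with
    non-negative weights summing to 1.
    """
    x, y = point
    i = int(np.clip(np.searchsorted(grid_x, x, side="right") - 1, 0, len(grid_x) - 2))
    j = int(np.clip(np.searchsorted(grid_y, y, side="right") - 1, 0, len(grid_y) - 2))
    tx = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
    ty = (y - grid_y[j]) / (grid_y[j + 1] - grid_y[j])
    nodes = [(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)]
    weights = [(1 - tx) * (1 - ty), tx * (1 - ty), (1 - tx) * ty, tx * ty]
    return nodes, weights
```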

Generalizing (5), we define the consistent Bellman operator $\mathcal{T}_C$ over $\mathcal{Z}$:

$$\mathcal{T}_C Q(z, a) := \mathbb{E}_{x \sim D}\, \mathbb{E}_{x' \sim P}\, \mathbb{E}_{z' \sim A} \Big[ R(x, a) + \gamma \big( \mathbb{I}_{[z \ne z']} \max_{b \in \mathcal{A}} Q(z', b) + \mathbb{I}_{[z = z']}\, Q(z, a) \big) \Big]. \tag{8}$$

Intuitively (see, e.g., Bertsekas 2011), the $A$ and $D$ mappings induce a new MDP over $\mathcal{Z}$, with

$$R'(z, a) := \mathbb{E}_{x \sim D}\, R(x, a), \qquad P'(z' \mid z, a) := \mathbb{E}_{x \sim D}\, \mathbb{E}_{x' \sim P}\, A(z' \mid x').$$

In this light, we see that our original definition (5) and (8) only differ in their interpretation of the transition kernel. Thus the consistent Bellman operator remains relevant in cases where $P$ is a deterministic transition kernel, for example when applying multilinear or barycentric interpolation to continuous space MDPs (e.g. Munos and Moore 1998).

Q-Value Interpolation

Aggregation schemes as defined above do not immediately yield a Q-function over $\mathcal{X} \times \mathcal{A}$. Indeed, the Q-value at an arbitrary $x \in \mathcal{X}$ is defined (in the ordinary Bellman operator sense) as

$$Q(x, a) := \mathbb{E}_{x' \sim P}\, \mathbb{E}_{z' \sim A} \Big[ R(x, a) + \gamma \max_{b \in \mathcal{A}} Q(z', b) \Big], \tag{9}$$

which may only be computed from a full or partial model of the MDP, or by inverting $D$. It is often the case that neither is feasible. One solution is instead to perform Q-value interpolation:

$$Q_A(x, a) := \mathbb{E}_{z \sim A(\cdot \mid x)}\, Q(z, a),$$

which is reasonable when the coefficients $A(\cdot \mid x)$ are interpolation coefficients (one then typically, but not always, takes $D$ to be the identity). This gives a related Bellman operator:

namely $\mathcal{T}_{\mathrm{QVI}}$, obtained from (7) by replacing $\max_b Q(z', b)$ with $\max_b Q_A(x', b)$; by convexity of the $\max$ operation, $\mathcal{T}_{\mathrm{QVI}} Q(z, a) \le \mathcal{T}_A Q(z, a)$. From here one may be tempted to define the corresponding consistent operator by applying the same substitution to (8). While the resulting operator remains a contraction, the inequality required by Condition 1 of Theorem 1 below is not guaranteed, and it is easy to show that the operator is not optimality-preserving. Instead we define the consistent Q-value interpolation Bellman operator $\mathcal{T}_{\mathrm{CQVI}}$ by taking a pointwise minimum that enforces this inequality:

(10)

As a corollary to Theorem 1 below we will prove that $\mathcal{T}_{\mathrm{CQVI}}$ is also optimality-preserving and gap-increasing.
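A minimal sketch of the Q-value interpolation step itself, under the interface assumed in the interpolation sketch above: the Q-value at an arbitrary $x$ is the coefficient-weighted combination of the Q-values stored at the aggregate states.

```python
def interpolated_q(x, a, Q, aggregation):
    """Q-value interpolation: Q(x, a) is approximated as sum_z A(z | x) Q(z, a).

    `aggregation(x)` returns aggregate states and interpolation coefficients,
    e.g. the bilinear sketch above; `Q` maps an aggregate state to its action values.
    """
    nodes, weights = aggregation(x)
    return sum(w * Q[z][a] for z, w in zip(nodes, weights))
```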

Experiments on the Bicycle Domain

We now study the behaviour of our new operators on the bicycle domain (Randlov and Alstrom 1998). In this domain, the agent must simultaneously balance a simulated bicycle and drive it to a goal 1 km north of its initial position. Each time step consists of a hundredth of a second, with a successful episode typically lasting 50,000 or more steps. The driving aspect of this problem is particularly challenging for value-based methods, since each step contributes little to an eventual success and the "curse of dimensionality" (Bellman 1957) precludes a fine representation of the state space. In this setting our consistent operator provides significantly improved performance and stability.

We approximated value functions using multilinear interpolation on a uniform grid over a 6-dimensional feature vector. The first four components describe relevant angles and angular velocities, while the last two are polar coordinates describing the bicycle's position relative to the goal. We approximated Q-functions using Q-value interpolation (10) over this grid, since in a typical setting we may not have access to a forward model.

We are interested here in the quality of the value functions produced by different operators. We thus computed our Q-functions using value iteration, rather than a trajectory-based method such as Q-Learning. More precisely, at each iteration we simultaneously apply our operator to all grid points, with expected next-state values estimated from samples. The interested reader may find full experimental details and videos in the appendix (videos: https://youtu.be/0pUFjNuom1A).
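A schematic of this evaluation protocol, with the bicycle-specific pieces (`simulate`, `backup`, the grid itself) left as assumed placeholders: at each iteration the chosen operator is applied synchronously at every grid point, with expectations over next states replaced by averages over sampled transitions.

```python
def sampled_value_iteration(grid_points, actions, simulate, backup, gamma,
                            num_iterations=100, num_samples=10):
    """Synchronous value iteration over a fixed set of grid points.

    `simulate(x, a)` is assumed to return one (reward, next_state) sample;
    `backup(Q, x, a, samples, gamma)` implements the chosen operator
    (Bellman, consistent, ...) from the sampled transitions.
    """
    Q = {(x, a): 0.0 for x in grid_points for a in actions}
    for _ in range(num_iterations):
        updated = {}
        for x in grid_points:
            for a in actions:
                samples = [simulate(x, a) for _ in range(num_samples)]
                updated[(x, a)] = backup(Q, x, a, samples, gamma)
        Q = updated          # all grid points are updated simultaneously
    return Q
```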

While the limiting value functions of the two operators coincide on the grid points (by the optimality-preserving property), they may differ significantly elsewhere: for states off the grid, the interpolated Q-functions do not agree in general. This is especially relevant in the relatively high-dimensional bicycle domain, where a fine discretization of the state space is not practical and most of the trajectories take place "far" from grid points. As an example, consider the relative angle to the goal: each grid cell covers an arc much larger than the change in angle produced by a single time step.

Figure 3: Top. Falling and goal-reaching frequency for greedy policies derived from value iteration. Bottom. Sample bicycle trajectories after a fixed number of iterations. In this coarse-resolution regime, the Bellman operator initially yields policies which circle the goal forever, while the consistent operator quickly yields successful trajectories.

Figure 3 summarizes our results. Policies derived from our consistent operator can safely balance the bicycle earlier on, and also reach the goal earlier than policies derived from the Bellman operator. Note, in particular, the striking difference in the trajectories followed by the resulting policies. The effect is even more pronounced when using a coarser grid (results provided in the appendix). Effectively, by decreasing suboptimal Q-values at grid points we produce much better policies within the grid cells. This phenomenon is consistent with the theoretical results of Farahmand (2011) relating the size of action gaps to the quality of derived greedy policies. Thus we find a second benefit to increasing the action gap: it improves policies derived from Q-value interpolation.

A Family of Convergent Operators

One may ask whether it is possible to extend the consistent Bellman operator to Q-value approximation schemes which lack a probabilistic interpretation, such as linear approximation (Sutton 1996), locally weighted regression (Atkeson 1991), neural networks (Tesauro 1995), or even information-theoretic methods (Veness et al. 2015). In this section we answer in the affirmative.

The family of operators which we describe here is applicable to arbitrary Q-value approximation schemes. While these operators are in general no longer contractions, they are gap-increasing, and optimality-preserving when the Q-function is represented exactly. Theorem 1 is our main result; one corollary is a convergence proof for Baird's advantage learning (Baird 1999). Incidentally, our taking the minimum in (10) was in fact no accident, but rather a simple application of this theorem.

Theorem 1.

Let $\mathcal{T}$ be the Bellman operator defined by (1). Let $\mathcal{T}'$ be an operator with the property that there exists an $\alpha \in [0, 1)$ such that for all $Q \in \mathcal{Q}$, $x \in \mathcal{X}$, $a \in \mathcal{A}$, and letting $V(x) := \max_b Q(x, b)$,

  1. $\mathcal{T}' Q(x, a) \le \mathcal{T} Q(x, a)$, and

  2. $\mathcal{T}' Q(x, a) \ge \mathcal{T} Q(x, a) - \alpha \big[ V(x) - Q(x, a) \big]$.

Then $\mathcal{T}'$ is both optimality-preserving and gap-increasing.

Thus any operator which satisfies the conditions of Theorem 1 will eventually yield an optimal greedy policy, assuming an exact representation of the Q-function. Condition 2, in particular, states that we may subtract up to (but not including) $V(x) - Q(x, a)$ from $\mathcal{T} Q(x, a)$ at each iteration. This quantity is exactly the action gap at $x$, but for the current $Q$, rather than the optimal $Q^*$. For a particular $x$, this implies we may initially devalue the optimal action in favour of the greedy action. But our theorem shows that the optimal action cannot be undervalued infinitely often, and in fact its value must ultimately reach $V^*(x)$. (When two or more actions are optimal, we are only guaranteed that one of them will ultimately be correctly valued. The "1-lazy" operator described below exemplifies this possibility.) The proof of this perhaps surprising result may be found in the appendix.
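The two conditions of Theorem 1 can be checked numerically for a candidate operator and a given Q-function; the sketch below does so pointwise, reusing `bellman_operator` from the Background sketch. It verifies the conditions for one $Q$ only, which is of course weaker than the "for all $Q$" requirement of the theorem.

```python
import numpy as np

def satisfies_theorem1_conditions(T_prime, Q, P, R, gamma, alpha, tol=1e-10):
    """Check, for this particular Q, the two pointwise conditions of Theorem 1:
       (1)  T'Q(x, a) <= TQ(x, a)
       (2)  T'Q(x, a) >= TQ(x, a) - alpha * (V(x) - Q(x, a)),  V(x) = max_b Q(x, b)
    """
    TQ = bellman_operator(Q, P, R, gamma)
    TprimeQ = T_prime(Q, P, R, gamma)
    slack = alpha * (Q.max(axis=1, keepdims=True) - Q)
    return bool(np.all(TprimeQ <= TQ + tol) and np.all(TprimeQ >= TQ - slack - tol))
```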

To the best of our knowledge, Theorem 1 is the first result to show the convergence of iterates of dynamic programming-like operators without resorting to a contraction argument. Indeed, the conditions of Theorem 1 are particularly weak: we do not require $\mathcal{T}'$ to be a contraction, nor do we assume the existence of a fixed point (in the Q-function space $\mathcal{Q}$) of $\mathcal{T}'$. In fact, the conditions laid out in Theorem 1 characterize the set of optimality-preserving operators on $\mathcal{Q}$, in the following sense:

Remark 1.

There exists a single-state MDP and an operator $\mathcal{T}'$ with either

  1. $\mathcal{T}' Q(x, a) > \mathcal{T} Q(x, a)$, or

  2. $\mathcal{T}' Q(x, a) = \mathcal{T} Q(x, a) - \alpha \big[ V(x) - Q(x, a) \big]$ for some $\alpha > 1$,

and in both cases there exists a $Q_0 \in \mathcal{Q}$ for which $\lim_{k \to \infty} \max_a Q_k(x, a) \ne V^*(x)$.

We note that the above remark does not cover the case where Condition 2 holds with equality and $\alpha = 1$ (i.e., $\mathcal{T}' Q(x, a) = \mathcal{T} Q(x, a) - [V(x) - Q(x, a)]$). We leave as an open problem the existence of a divergent example for $\alpha = 1$.

Corollary 1.

The consistent Bellman operator (8) and consistent Q-value interpolation Bellman operator (10) are optimality-preserving.

In fact, it is not hard to show that the consistent Bellman operator (8) is a contraction, and thus enjoys even stronger convergence guarantees than those provided by Theorem 1. Informally, whenever Condition 2 of the theorem is strengthened to a strict inequality, we may also expect our operators to be strictly gap-increasing; this is in fact the case for both of our consistent operators.

To conclude this section, we describe a few operators which satisfy the conditions of Theorem 1, and are thus optimality-preserving and gap-increasing. Critically, none of these operators are contractions; one of them, the “lazy” operator, also possesses multiple fixed points.

Baird’s Advantage Learning

The method of advantage learning was proposed by Baird (1999) as a means of increasing the gap between the optimal and suboptimal actions in the context of residual algorithms applied to continuous-time problems. (Advantage updating, also by Baird, is a popular but different idea in which an agent maintains both a value function and an advantage function.) The corresponding operator involves a time-discretization constant; absorbing this constant into a single parameter $\alpha \in [0, 1)$, we define a new operator with the same fixed point but a now-familiar form:

$$\mathcal{T}_{\mathrm{AL}}\, Q(x, a) := \mathcal{T} Q(x, a) - \alpha \big[ V(x) - Q(x, a) \big].$$

Note that, while the two operators are motivated by the same principle and share the same fixed point, they are not isomorphic. We believe our version to be more stable in practice, as it avoids multiplying the backed-up values by the inverse of the (typically small) time constant.
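A tabular sketch of the advantage learning operator in the re-parameterized form above, again under the assumed array layout and reusing `bellman_operator`.

```python
def advantage_learning_operator(Q, P, R, gamma, alpha):
    """Advantage learning, T_AL Q = TQ - alpha * (V - Q): a fraction of the
    current action gap is subtracted from each backup; at the greedy action
    the subtracted term is zero, so T_AL and T agree there."""
    TQ = bellman_operator(Q, P, R, gamma)
    V = Q.max(axis=1, keepdims=True)
    return TQ - alpha * (V - Q)
```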

Corollary 2.

For $\alpha \in [0, 1)$, the advantage learning operator has a unique limit $Q_{\mathrm{AL}} \in \mathcal{Q}$, and $V_{\mathrm{AL}}(x) = V^*(x)$ for all $x \in \mathcal{X}$.

While our consistent Bellman operator originates from different principles, there is in fact a close relationship between it and the advantage learning operator. Indeed, we can rewrite (5) as

$$\mathcal{T}_C Q(x, a) = \mathcal{T} Q(x, a) - \gamma\, P(x \mid x, a) \big[ V(x) - Q(x, a) \big],$$

which corresponds to advantage learning with an $(x, a)$-dependent parameter $\alpha(x, a) := \gamma\, P(x \mid x, a)$.

Persistent Advantage Learning

In domains with a high temporal resolution, it may be advantageous to encourage greedy policies which infrequently switch between actions — to encourage a form of persistence. We define an operator which favours repeated actions:

$$\mathcal{T}_{\mathrm{PAL}}\, Q(x, a) := \max \Big\{ \mathcal{T}_{\mathrm{AL}}\, Q(x, a),\; R(x, a) + \gamma\, \mathbb{E}_P\, Q(x', a) \Big\}.$$

Note that the second term of the $\max$ can also be written as

$$\mathcal{T} Q(x, a) - \gamma\, \mathbb{E}_P \big[ V(x') - Q(x', a) \big].$$

As we shall see below, persistent advantage learning achieves excellent performance on Atari 2600 games.
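A corresponding sketch of persistent advantage learning, building on the advantage learning sketch above; the `np.einsum` call computes the expectation $\mathbb{E}_{x'}\, Q(x', a)$ under the repeated action.

```python
import numpy as np

def persistent_advantage_learning_operator(Q, P, R, gamma, alpha):
    """Persistent A.L.: the backup is the larger of the advantage learning
    backup and the value of repeating the same action in the next state."""
    al = advantage_learning_operator(Q, P, R, gamma, alpha)
    repeat = R + gamma * np.einsum('xay,ya->xa', P, Q)   # R(x,a) + gamma * E[Q(x', a)]
    return np.maximum(al, repeat)
```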

The Lazy Operator

As a curiosity, consider the $\epsilon$-lazy operator, $\epsilon \in [0, 1)$, which leaves $Q(x, a)$ unchanged whenever updating it would not affect the greedy policy, and otherwise applies the Bellman backup.

And yet, Theorem 1 applies! Hence the $\epsilon$-lazy operator is optimality-preserving and gap-increasing, even though it may possess a multitude of fixed points in $\mathcal{Q}$. Of note, while Theorem 1 does not apply to the 1-lazy operator, the latter is also optimality-preserving; in this case, however, we are only guaranteed that one optimal action remains optimal.

Experimental Results on Atari 2600

We evaluated our new operators on the Arcade Learning Environment (ALE; Bellemare et al. 2013), a reinforcement learning interface to Atari 2600 games. In the ALE, a frame lasts 1/60th of a second, with actions typically selected every four frames. Intuitively, the ALE setting is related to continuous domains such as the bicycle domain studied above, in the sense that each individual action has little effect on the game.

For our evaluation, we trained agents based on the Deep Q-Network (DQN) architecture of Mnih et al. (2015). DQN acts according to an $\epsilon$-greedy policy over a learned neural-network Q-function $Q_\theta$. DQN uses an experience replay mechanism to train this Q-function, performing gradient descent on the sample squared error $\big( y_{\mathrm{DQN}} - Q_\theta(x, a) \big)^2$, where

$$y_{\mathrm{DQN}} := r + \gamma \max_{b \in \mathcal{A}} Q_{\theta^-}(x', b),$$

where $(x, a, r, x')$ is a previously observed transition and $\theta^-$ denotes the target network's parameters. We define the corresponding errors for our operators as

$$y_{\mathrm{AL}} := y_{\mathrm{DQN}} - \alpha \big[ V_{\theta^-}(x) - Q_{\theta^-}(x, a) \big], \qquad y_{\mathrm{PAL}} := \max \big\{ y_{\mathrm{AL}},\; y_{\mathrm{DQN}} - \alpha \big[ V_{\theta^-}(x') - Q_{\theta^-}(x', a) \big] \big\},$$

where we further parametrized the weight $\alpha$ given to $\big[ V_{\theta^-}(x') - Q_{\theta^-}(x', a) \big]$ in persistent advantage learning (compare with $\mathcal{T}_{\mathrm{PAL}}$).
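One plausible rendering of these targets in code, matching the operator definitions above; the batch layout, function names, and the shared weight $\alpha$ for the persistent term are assumptions, and the exact parameterization used in the experiments may differ.

```python
import numpy as np

def training_targets(batch, q_target, gamma, alpha, variant="bellman"):
    """Per-transition regression targets for the three update rules.

    `batch` holds arrays (x, a, r, x_next) of previously observed transitions;
    `q_target(states)` returns the target network's Q-values, shape (batch, actions).
    """
    x, a, r, x_next = batch
    q_x, q_next = q_target(x), q_target(x_next)
    rows = np.arange(len(a))

    y_dqn = r + gamma * q_next.max(axis=1)
    if variant == "bellman":
        return y_dqn
    y_al = y_dqn - alpha * (q_x.max(axis=1) - q_x[rows, a])        # advantage learning
    if variant == "advantage_learning":
        return y_al
    # persistent A.L.: also consider the gap at x' for the same action a
    y_repeat = y_dqn - alpha * (q_next.max(axis=1) - q_next[rows, a])
    return np.maximum(y_al, y_repeat)
```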

Our first experiment used one of the new standard versions of the ALE, which we call here the Stochastic Minimal setting. This setting includes stochasticity applied to the Atari 2600 controls, no death information, and a per-game minimal action set. Specifically, at each frame (not time step) the environment accepts the agent's action with probability $1 - p$, or rejects it with probability $p$. If an action is rejected, the previous frame's action is repeated. In our setting the agent selects a new action every four frames: the stochastic controls therefore approximate a form of reaction delay. As evidenced by a lower DQN performance, Stochastic Minimal is more challenging than previous settings.

We trained each agent for 100 million frames using either regular Bellman updates, advantage learning (A.L.), or persistent advantage learning (P.A.L.). We optimized the parameters over 5 training games and tested our algorithms on 55 more games using 10 independent trials each.

For each game, we performed a paired t-test (99% C.I.) on the post-training evaluation scores obtained by our algorithms and DQN. A.L. and P.A.L. are statistically better than DQN on 37 and 35 out of 60 games, respectively; each performs worse on a single game (Atlantis and James Bond, respectively). P.A.L. often achieves higher scores than A.L., and is statistically better on 16 games and worse on 6. These results are especially remarkable given that the only difference between DQN and our operators is a simple modification to the update rule.

For comparison, we also trained agents using the Original DQN setting (Mnih et al. 2015), in particular using a longer 200 million frames of training. Figure 4 depicts learning curves for two games, Asterix and Space Invaders. These curves are representative of our results, rather than exceptional: on most games, advantage learning outperforms Bellman updates, and persistent advantage learning further improves on this result. Across games, the median score improvement over DQN is 8.4% for A.L. and 9.1% for P.A.L., while the average score improvement is respectively 27.0% and 32.5%. Full experimental details are provided in the appendix.

Figure 4: Learning curves for two Atari 2600 games in the Original DQN setting.

The learning curve for Asterix illustrates the poor performance of DQN on certain games. Recently, van Hasselt, Guez, and Silver (2016) argued that this poor performance stems from the instability of the Q-functions learned from Bellman updates, and provided conclusive empirical evidence to this effect. In the spirit of their work, we compared our learned Q-functions on a single trajectory generated by a trained DQN agent playing Space Invaders in the Original DQN setting. For each Q-function and each state $x$ along the trajectory, we computed $V(x)$ as well as the action gap at $x$.

Figure 5: Action gaps (left) and value functions (right) for a single episode of Space Invaders (Original DQN setting). Our operators yield markedly increased action gaps and lower values.

The value functions and action gaps resulting from this experiment (videos: https://youtu.be/wDfUnMY3vF8) are depicted in Figure 5. As expected, the action gaps are significantly greater for both of our operators, in comparison to the action gaps produced by DQN. Furthermore, the value estimates are themselves lower, and correspond to more realistic estimates of the true value function. In their experiments, van Hasselt et al. observed a similar effect on the value estimates when replacing the Bellman updates with Double Q-Learning updates, one of many solutions recently proposed to mitigate the negative impact of statistical bias in value function estimation (van Hasselt 2010; Azar et al. 2011; Lee, Defourny, and Powell 2013). This bias is positive and is a consequence of the max term in the Bellman operator. We hypothesize that the lower value estimates observed in Figure 5 are also a consequence of bias reduction. Specifically, increased action gaps are consistent with a bias reduction: it is easily shown that the value estimation bias is strongest when Q-values are close to each other. If our hypothesis holds true, the third benefit of increasing the action gap is thus to mitigate the statistical bias of Q-value estimates.
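A small simulation of the max-operator bias mentioned here: with zero-mean noise added to two Q-values, the expected maximum of the noisy estimates exceeds the true maximum, and the overestimation shrinks as the action gap grows. The noise scale and gap values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
noise_scale, num_trials = 1.0, 100_000

for gap in (0.0, 1.0, 5.0):
    true_q = np.array([1.0, 1.0 - gap])            # two actions separated by `gap`
    noisy = true_q + rng.normal(0.0, noise_scale, size=(num_trials, 2))
    bias = noisy.max(axis=1).mean() - true_q.max()
    print(f"action gap {gap:.1f}: mean overestimation of the max = {bias:+.3f}")
```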

Open Questions

Weaker Conditions for Optimality. At the core of our results lies the redefinition of Q-values in order to facilitate approximate value estimation. Theorem 1 and our empirical results indicate that there are many practical operators which do not preserve suboptimal Q-values. Naturally, preserving the optimal value function is itself unnecessary, as long as the iterates converge to a Q-function whose greedy policy is optimal. It may well be that even weaker conditions for optimality exist than those required by Theorem 1. At present, however, our proof technique does not appear to extend to this case.

Statistical Efficiency of New Operators. Advantage learning (as given by our redefinition) may be viewed as a generalization of the consistent Bellman operator to the case where the self-transition probability $P(x \mid x, a)$ is unknown or irrelevant. In this light, we ask: is there a probabilistic interpretation to advantage learning? We further wonder about the statistical efficiency of the consistent Bellman operator: is it ever less efficient than the usual Bellman operator, when considering the probability of misclassifying the optimal action? Both of these answers might shed some light on the differences in performance observed in our experiments.

Maximally Efficient Operator. Having revealed the existence of a broad family of optimality-preserving operators, we may now wonder which of these operators, if any, should be preferred to the Bellman operator. Clearly, there are trivial MDPs on which any optimality-preserving operator performs equally well. However, we may ask whether there is, for a given MDP, a “maximally efficient” optimality-preserving operator; and whether a learning agent can benefit from simultaneously searching for this operator while estimating a value function.

Concluding Remarks

We presented in this paper a family of optimality-preserving operators, of which the consistent Bellman operator is a distinguished member. At the center of our pursuits lay the desire to increase the action gap; we showed through experiments that this gap plays a central role in the performance of greedy policies over approximate value functions, and how significantly increased performance could be obtained by a simple modification of the Bellman operator. We believe our work highlights the inadequacy of the classical Q-function at producing reliable policies in practice, calls into question the traditional policy-value relationship in value-based reinforcement learning, and illustrates how revisiting the concept of value itself can be fruitful.

Acknowledgments

The authors thank Michael Bowling, Csaba Szepesvári, Craig Boutilier, Dale Schuurmans, Marty Zinkevich, Lihong Li, Thomas Degris, and Joseph Modayil for useful discussions, as well as the anonymous reviewers for their excellent feedback.

References

  • [Atkeson1991] Atkeson, C. G. 1991. Using locally weighted regression for robot learning. In Proceedings of 1991 IEEE International Conference on Robotics and Automation, 958–963.
  • [Azar et al.2011] Azar, M. G.; Munos, R.; Ghavamzadeh, M.; and Kappen, H. J. 2011. Speedy Q-learning. In Advances in Neural Information Processing Systems 24.
  • [Baird1999] Baird, L. C. 1999. Reinforcement learning through gradient descent. Ph.D. Dissertation, Carnegie Mellon University.
  • [Bellemare et al.2013] Bellemare, M. G.; Naddaf, Y.; Veness, J.; and Bowling, M. 2013. The Arcade Learning Environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research 47:253–279.
  • [Bellman1957] Bellman, R. E. 1957. Dynamic programming. Princeton, NJ: Princeton University Press.
  • [Bertsekas and Tsitsiklis1996] Bertsekas, D. P., and Tsitsiklis, J. N. 1996. Neuro-Dynamic Programming. Athena Scientific.
  • [Bertsekas and Yu2012] Bertsekas, D. P., and Yu, H. 2012. Q-learning and enhanced policy iteration in discounted dynamic programming. Mathematics of Operations Research 37(1):66–94.
  • [Bertsekas2011] Bertsekas, D. P. 2011. Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications 9(3):310–335.
  • [Deisenroth and Rasmussen2011] Deisenroth, M. P., and Rasmussen, C. E. 2011. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the International Conference on Machine Learning.
  • [Ernst, Geurts, and Wehenkel2005] Ernst, D.; Geurts, P.; and Wehenkel, L. 2005. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research 6:503–556.
  • [Farahmand2011] Farahmand, A. 2011. Action-gap phenomenon in reinforcement learning. In Advances in Neural Information Processing Systems 24.
  • [Gordon1995] Gordon, G. 1995. Stable function approximation in dynamic programming. In Proceedings of the Twelfth International Conference on Machine Learning.
  • [Hoburg and Tedrake2009] Hoburg, W., and Tedrake, R. 2009. System identification of post stall aerodynamics for UAV perching. In Proceedings of the AIAA Infotech Aerospace Conference.
  • [Jiang and Powell2015] Jiang, D. R., and Powell, W. B. 2015. Optimal hour ahead bidding in the real time electricity market with battery storage using approximate dynamic programming. INFORMS Journal on Computing 27(3):525 – 543.
  • [Kushner and Dupuis2001] Kushner, H., and Dupuis, P. G. 2001. Numerical methods for stochastic control problems in continuous time. Springer.
  • [Lee, Defourny, and Powell2013] Lee, D.; Defourny, B.; and Powell, W. B. 2013. Bias-corrected Q-learning to control max-operator bias in Q-learning. In Symposium on Adaptive Dynamic Programming And Reinforcement Learning.
  • [Mnih et al.2015] Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; et al. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529–533.
  • [Munos and Moore1998] Munos, R., and Moore, A. 1998. Barycentric interpolators for continuous space & time reinforcement learning. In Advances in Neural Information Processing Systems 11.
  • [Munos and Moore2002] Munos, R., and Moore, A. 2002. Variable resolution discretization in optimal control. Machine learning 49(2-3):291–323.
  • [Ormoneit and Sen2002] Ormoneit, D., and Sen, Ś. 2002. Kernel-based reinforcement learning. Machine learning 49(2-3):161–178.
  • [Randlov and Alstrom1998] Randlov, J., and Alstrom, P. 1998. Learning to drive a bicycle using reinforcement learning and shaping. In Proceedings of the Fifteenth International Conference on Machine Learning.
  • [Riedmiller et al.2009] Riedmiller, M.; Gabel, T.; Hafner, R.; and Lange, S. 2009. Reinforcement learning for robot soccer. Autonomous Robots 27(1):55–73.
  • [Rummery and Niranjan1994] Rummery, G. A., and Niranjan, M. 1994. On-line Q-learning using connectionist systems. Technical report, Cambridge University Engineering Department.
  • [Sutton and Barto1998] Sutton, R. S., and Barto, A. G. 1998. Reinforcement learning: An introduction. MIT Press.
  • [Sutton et al.2011] Sutton, R.; Modayil, J.; Delp, M.; Degris, T.; Pilarski, P.; White, A.; and Precup, D. 2011. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In Proceedings of the Tenth International Conference on Autonomous Agents and Multiagents Systems.
  • [Sutton1988] Sutton, R. S. 1988. Learning to predict by the methods of temporal differences. Machine Learning 3(1):9–44.
  • [Sutton1996] Sutton, R. S. 1996. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Advances in Neural Information Processing Systems 8, 1038–1044.
  • [Tesauro1995] Tesauro, G. 1995. Temporal difference learning and TD-Gammon. Communications of the ACM 38(3).
  • [Togelius et al.2009] Togelius, J.; Karakovskiy, S.; Koutník, J.; and Schmidhuber, J. 2009. Super Mario evolution. In Symposium on Computational Intelligence and Games.
  • [van Hasselt, Guez, and Silver2016] van Hasselt, H.; Guez, A.; and Silver, D. 2016. Deep reinforcement learning with double Q-learning. In Proceedings of the AAAI Conference on Artificial Intelligence (to appear).
  • [van Hasselt2010] van Hasselt, H. 2010. Double Q-learning. In Advances in Neural Information Processing Systems 23.
  • [Veness et al.2015] Veness, J.; Bellemare, M. G.; Hutter, M.; Chua, A.; and Desjardins, G. 2015. Compress and control. In Proceedings of the AAAI Conference on Artificial Intelligence.
  • [Watkins1989] Watkins, C. 1989. Learning From Delayed Rewards. Ph.D. Dissertation, Cambridge University, Cambridge, England.

Appendix

This appendix is divided into three sections. In the first section we present the proofs of our theoretical results. In the second we provide experimental details and additional results for the Bicycle domain. In the final section we provide details of our experiments on the Arcade Learning Environment, including results on 60 games.

Theoretical Results

Lemma 1.

Let and be the policy greedy with respect to . Let be an operator with the properties that, for all ,

  1. , and

  2. .

Consider the sequence with , and let . Then the sequence converges, and furthermore, for all ,

Proof.

By Condition 1, we have that

since has a unique fixed point. From this we deduce the second claim. Now, for a given , let and . We have

where in the second line we used Condition 2 of the lemma, and in the third the definition of applied to . Thus we have

and by induction

(11)

where is the -step transition kernel at derived from the nonstationary policy . Let . We now show that also. First note that Conditions 1 and 2, together with the boundedness of , ensure that is also bounded and thus . By definition, for any and , such that . Since is a nonexpansion in -norm, we have

and for all ,

such that

It follows that for any and , we can choose an to make small enough such that for all , . Hence

and thus converges. ∎

Lemma 2.

Let be an operator satisfying the conditions of Lemma 1, and let . Then for all and all ,

(12)
Proof.

Following the derivation of Lemma 1, we have

By the same derivation, for we have

But then

from which the lower bound follows. Now let be defined as in the proof of Lemma 1, and assume the upper bound of (12) holds up to . Then

and combined with the fact that (12) holds for this proves the upper bound. ∎

Theorem 2.

Let $\mathcal{T}$ be the Bellman operator ((1) in the main text). Let $\mathcal{T}'$ be an operator with the property that there exists an $\alpha \in [0, 1)$ such that for all $Q \in \mathcal{Q}$, $x \in \mathcal{X}$, $a \in \mathcal{A}$, and letting $V(x) := \max_b Q(x, b)$,

  1. $\mathcal{T}' Q(x, a) \le \mathcal{T} Q(x, a)$, and

  2. $\mathcal{T}' Q(x, a) \ge \mathcal{T} Q(x, a) - \alpha \big[ V(x) - Q(x, a) \big]$.

Consider the sequence $Q_{k+1} := \mathcal{T}' Q_k$ with $Q_0 \in \mathcal{Q}$, and let $V_k(x) := \max_b Q_k(x, b)$. Then $\mathcal{T}'$ is optimality-preserving: for all $x \in \mathcal{X}$, $V_k(x)$ converges,

$$\lim_{k \to \infty} V_k(x) = V^*(x),$$

and

$$Q^*(x, a) < V^*(x) \;\Longrightarrow\; \limsup_{k \to \infty} Q_k(x, a) < V^*(x).$$

Furthermore, $\mathcal{T}'$ is also gap-increasing:

$$\liminf_{k \to \infty} \big[ V_k(x) - Q_k(x, a) \big] \;\ge\; V^*(x) - Q^*(x, a).$$

Proof.

Note that these conditions imply the conditions of Lemma 1. Thus for all , converges to the limit . Now let . We have

(13)
(14)
(15)

where in (13) we used Jensen’s inequality, and (14) follows from the commutativity of and . Now

(16)

Now, by Lemma 1 converges to . Furthermore, using Lemma 2 and Lebesgue’s dominated convergence theorem, we have

(17)

We now take the of both sides of (16), which Lemma 2 guarantees exists, and obtain

Thus

Combining the above with (15), we deduce that

and, by uniqueness of the fixed point of the Bellman operator over