Provably Efficient Q-Learning with Low Switching Cost

05/30/2019 ∙ by Yu Bai, et al. ∙ University of Illinois at Urbana-Champaign, The Regents of the University of California, Stanford University

We take initial steps in studying PAC-MDP algorithms with limited adaptivity, that is, algorithms that change their exploration policy as infrequently as possible during regret minimization. This is motivated by the difficulty of running fully adaptive algorithms in real-world applications (such as medical domains), and we propose to quantify adaptivity using the notion of local switching cost. Our main contribution, Q-Learning with UCB2 exploration, is a model-free algorithm for H-step episodic MDPs that achieves sublinear regret whose local switching cost in K episodes is O(H^3SA log K), and we provide a lower bound of Ω(HSA) on the local switching cost for any no-regret algorithm. Our algorithm can be naturally adapted to the concurrent setting, which yields nontrivial results that improve upon prior work in certain aspects.


1 Introduction

This paper is concerned with reinforcement learning (RL) under limited adaptivity or low switching cost, a setting in which the agent is allowed to act in the environment for a long period but is constrained to switch its policy at most a small number of times. A small switching cost restricts the agent from frequently adjusting its exploration strategy based on feedback from the environment.

There are strong practical motivations for developing RL algorithms under limited adaptivity. The setting of restricted policy switching captures various real-world settings where deploying new policies comes at a cost. For example, in medical applications where actions correspond to treatments, it is often unrealistic to execute fully adaptive RL algorithms – instead, one can only run a fixed policy approved by domain experts to collect data, and a separate approval process is required every time one would like to switch to a new policy [18, 2, 3]. In personalized recommendation [24], it is computationally impractical to adjust the policy online based on instantaneous data, and a more common practice is to aggregate data over a long period before deploying a new policy. In problems where we run RL for compiler optimization [4] and hardware placement [19], as well as for learning to optimize databases [17], it is often desirable to limit the frequency of changes to the policy, since it is costly to recompile the code, to run profiling, to reconfigure an FPGA device, or to restructure a deployed relational database. The problem is even more prominent in RL-guided materials discovery, as it takes time to fabricate the materials and set up the experiments [23, 20]. In many of these applications, adaptivity is often the real bottleneck.

Understanding limited adaptivity RL is also important from a theoretical perspective. First, algorithms with low adaptivity (a.k.a. "batched" algorithms) that are as effective as their fully sequential counterparts have been established in bandits [22, 11], online learning [8], and optimization [10], and it would be interesting to extend such understanding to RL. Second, algorithms with few policy switches are naturally easy to parallelize, as there is no need for parallel agents to communicate if they just execute the same policy. Third, limited adaptivity is closely related to off-policy RL (in particular, the extreme case of no policy switching corresponds to off-policy RL, where the algorithm can only choose one data collection policy [13]) and offers a relaxation that is less challenging than the pure off-policy setting.

In this paper, we take initial steps towards studying theoretical aspects of limited adaptivity RL by designing low-regret algorithms with limited adaptivity. We focus on model-free algorithms, in particular Q-Learning, which was recently shown by Jin et al. [15] to achieve a sublinear regret bound using UCB exploration and a careful stepsize choice. Our goal is to design Q-Learning type algorithms that achieve similar regret bounds with a bounded switching cost.

The main contributions of this paper are summarized as follows:

  1. We propose a notion of local switching cost that captures the adaptivity of an RL algorithm in episodic MDPs (Section 2). Algorithms with lower local switching cost make fewer switches in their deployed policies.

  2. Building on insights from the UCB2 algorithm for multi-armed bandits [5] (Section 3), we propose our main algorithms, Q-Learning with UCB2-{Hoeffding, Bernstein} exploration. We prove that these two algorithms achieve Õ(√(H^4SAT)) and Õ(√(H^3SAT)) regret respectively, with an O(H^3SA log K) local switching cost (Section 4). The regret matches that of their vanilla counterparts in [15], but the switching cost is only logarithmic in the number of episodes.

  3. We show how our low switching cost algorithms can be applied in the concurrent RL setting [12], in which multiple agents can act in parallel (Section 5). The parallelized versions of our algorithms with UCB2 exploration give rise to Concurrent Q-Learning algorithms, which achieve a nearly linear speedup in execution time and compare favorably against existing concurrent algorithms in sample complexity for exploration.

  4. We show a simple Ω(HSA) lower bound on the switching cost for any sublinear-regret algorithm, which leaves at most an H^2 log K gap from our upper bound (Section 7).

1.1 Prior work

Low-regret RL

Sample-efficient RL has been studied extensively since the classical works of Kearns and Singh [16] and Brafman and Tennenholtz [7], with a focus on obtaining a near-optimal policy in polynomial time, i.e. PAC guarantees. A subsequent line of work initiated the study of regret in RL and provided algorithms that achieve √T-type regret [14, 21, 1]. In our episodic MDP setting, the information-theoretic lower bound on the regret is Ω(√(H^2SAT)), which is matched in recent work by the UCBVI [6] and ORLC [9] algorithms. On the other hand, while all of the above low-regret algorithms are essentially model-based, the recent work of [15] shows that model-free algorithms such as Q-learning are able to achieve Õ(√(H^3SAT)) regret, which is only a factor of √H worse than the lower bound.

Low switching cost / batched algorithms

Auer et al. [5] propose UCB2 for bandit problems, which achieves the same regret bound as UCB but has a switching cost that is only logarithmic in T instead of the naive O(T). Cesa-Bianchi et al. [8] study the switching cost in online learning in both the adversarial and the stochastic setting, and design an algorithm for stochastic bandits that achieves optimal regret with only a double-logarithmic (in T) switching cost.

Learning algorithms whose switching cost is bounded by a fixed constant M are often referred to as M-batch (or batched) algorithms. Minimax rates for batched algorithms have been established in various problems such as bandits [22, 11] and convex optimization [10]. In all these scenarios, minimax optimal M-batch algorithms are obtained for all M, and their rate matches that of fully adaptive algorithms once M exceeds a modest threshold (for example, O(log log T) batches suffice in bandit problems).

2 Problem setup

In this paper, we consider undiscounted episodic tabular MDPs of the form (S, A, H, P, r). The MDP has horizon H, with trajectories of the form (x_1, a_1, ..., x_H, a_H, x_{H+1}), where x_h ∈ S and a_h ∈ A. The state space and action space are discrete, with |S| = S and |A| = A. The initial state x_1 can be either adversarial (chosen by an adversary who has access to our algorithm) or stochastic, specified by some distribution over S. For any (x, a, h), the transition probability is denoted as P_h(·|x, a), and the reward is denoted as r_h(x, a) ∈ [0, 1], which we assume to be deterministic (our results can be straightforwardly extended to the case with stochastic rewards). We assume in addition that the transition out of step H is the same for all (x, a), so that the last state x_{H+1} is effectively an (uninformative) absorbing state.

A deterministic policy π consists of sub-policies π_h : S → A for h ∈ [H]. For any deterministic policy π, let V_h^π and Q_h^π denote its value function and state-action value function at the h-th step, respectively. Let π* denote an optimal policy, and let V_h* and Q_h* denote the optimal V and Q functions for all h. As a convenient shorthand, we denote [P_h V](x, a) := E_{x' ∼ P_h(·|x, a)}[V(x')], and in the proofs we also use an empirical counterpart of this operator to denote the observed transition. Unless otherwise specified, we focus on deterministic policies in this paper, which is without loss of generality as there exists at least one deterministic policy that is optimal.
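For illustration, the following is a minimal sketch of evaluating a deterministic policy by backward induction, assuming states and actions are integer-coded and a policy is stored as an H × S action array; the array layout and the helper name evaluate_policy are our own choices, not notation from this paper.

import numpy as np

def evaluate_policy(P, r, pi):
    # P:  (H, S, A, S) array, P[h, x, a, y] = probability of moving to y from (x, a) at step h
    # r:  (H, S, A) array of deterministic rewards in [0, 1]
    # pi: (H, S) integer array, pi[h, x] = action the deterministic policy takes at (h, x)
    H, S, A, _ = P.shape
    V = np.zeros((H + 1, S))      # V[H] = 0: the state after step H is an uninformative absorbing state
    Q = np.zeros((H, S, A))
    for h in reversed(range(H)):
        Q[h] = r[h] + P[h] @ V[h + 1]         # Q_h^pi(x, a) = r_h(x, a) + E_{x' ~ P_h(.|x,a)} V_{h+1}^pi(x')
        V[h] = Q[h][np.arange(S), pi[h]]      # V_h^pi(x) = Q_h^pi(x, pi_h(x))
    return V, Q

Replacing the last line of the loop with V[h] = Q[h].max(axis=1) yields the optimal value functions V* and Q* instead.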

Regret

We focus on the regret for measuring the performance of RL algorithms. Let K be the number of episodes that the agent plays (so that the total number of steps is T = KH). The regret of an algorithm is defined as

Regret(K) := Σ_{k=1}^{K} [ V_1*(x_1^k) − V_1^{π_k}(x_1^k) ],

where π_k is the policy it employs before episode k starts, and V_1* is the optimal value function for the entire episode.

Miscellaneous notation

We use standard Big-Oh notation in this paper: f = O(g) means that there exists an absolute constant C > 0 such that f ≤ C·g (similarly for Ω). f = Õ(g) means that f ≤ C·g where C depends at most poly-logarithmically on all the problem parameters.

2.1 Measuring adaptivity through local switching cost

To quantify the adaptivity of RL algorithms, we consider the following notion of local switching cost for RL algorithms. The local switching cost (henceforth also "switching cost") between any pair of policies (π, π') is defined as the number of (h, x) pairs on which π and π' differ:

n_switch(π, π') := |{ (h, x) ∈ [H] × S : π_h(x) ≠ π'_h(x) }|.

For an RL algorithm that employs policies (π_1, ..., π_K), its local switching cost is defined as

N_switch := Σ_{k=1}^{K−1} n_switch(π_k, π_{k+1}).

Note that (1) N_switch is in general a random variable, as π_k can depend on the outcome of the MDP; (2) we have the trivial bounds n_switch(π, π') ≤ HS for any pair (π, π') and N_switch ≤ HS(K − 1) for any algorithm.

Remark The local switching cost naturally extends the notion of switching cost in online learning [8] and is suitable in scenarios where the cost of deploying a new policy scales with the portion of (h, x) pairs on which the action is changed.

A closely related notion of adaptivity is the global switching cost, which simply measures how many times the algorithm switches its entire policy:

N_switch^gl := Σ_{k=1}^{K−1} 1{ π_k ≠ π_{k+1} }.

As π_k ≠ π_{k+1} implies n_switch(π_k, π_{k+1}) ≥ 1, we have the trivial bound N_switch^gl ≤ N_switch. However, the global switching cost can be substantially smaller for algorithms that tend to change the policy "entirely" rather than "locally". In this paper, we focus on bounding N_switch, and leave the task of obtaining tighter bounds on N_switch^gl as future work.
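For illustration, the following minimal sketch computes both quantities, assuming deterministic policies are stored as H × S integer action arrays (the helper names are ours):

import numpy as np

def local_switching_cost(policies):
    # sum over consecutive episodes of the number of (h, x) pairs where the action changes
    return sum(int(np.sum(p1 != p2)) for p1, p2 in zip(policies[:-1], policies[1:]))

def global_switching_cost(policies):
    # number of consecutive episodes between which the policy changes at all
    return sum(int(np.any(p1 != p2)) for p1, p2 in zip(policies[:-1], policies[1:]))

# Example: K = 3 episodes with H = 2 and S = 3; the trivial bounds are HS(K - 1) = 12 and K - 1 = 2.
pis = [np.zeros((2, 3), dtype=int), np.ones((2, 3), dtype=int), np.ones((2, 3), dtype=int)]
assert local_switching_cost(pis) == 6 and global_switching_cost(pis) == 1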

3 UCB2 for multi-armed bandits

To gain intuition about the switching cost, we briefly review the UCB2 algorithm [5] on multi-armed bandit problems, which achieves the same regret bound as the original UCB but has a substantially lower switching cost.

The multi-armed bandit problem can be viewed as an RL problem with H = 1 and S = 1, so that the agent needs only play one action a and observe the (random) reward r(a). The reward distributions are unknown to the agent, and the goal is to achieve low regret.

The UCB2 algorithm is a variant of the celebrated UCB (Upper Confidence Bound) algorithm for bandits. UCB2 also maintains upper confidence bounds on the true means of the arms, but plays each arm multiple times rather than just once when it is found to maximize the upper confidence bound. Specifically, when an arm is found to maximize the UCB for the r-th time, UCB2 will play it τ(r) − τ(r − 1) times, where

τ(r) := ⌈(1 + α)^r⌉    (1)

for r ≥ 1 and some parameter α > 0 to be determined. (For convenience, here we ignore integrality issues; in Q-learning we cannot make this approximation, as we choose α very small, and we will massage the sequence to deal with it.) The full UCB2 algorithm is presented in Algorithm 1.

0:  Parameter 0 < α < 1.
  Initialize: r_j ← 0 for each arm j. Play each arm once. Set t ← A.
  while t < T do
     Select the arm j that maximizes x̄_j + a_{t, r_j}, where x̄_j is the average reward obtained from arm j and a_{t, r_j} is a confidence radius (with some specific choice).
     Play arm j exactly τ(r_j + 1) − τ(r_j) times.
     Set r_j ← r_j + 1 and increase t by the number of plays just made.
  end while
Algorithm 1 UCB2 for multi-armed bandits

[Auer et al. [5]] For any 0 < α < 1, the UCB2 algorithm achieves an expected regret bound of order Σ_{j : Δ_j > 0} log T / Δ_j (up to α-dependent constants), where Δ_j is the gap between arm j and the optimal arm. Further, the switching cost is at most O(A log_{1+α}(T/A)). The switching cost bound in Theorem 1 comes directly from the fact that Σ_j τ(r_j) ≤ T implies Σ_j r_j ≤ O(A log_{1+α}(T/A)), by the convexity of τ and Jensen's inequality. Such an approach is fairly general, and we will follow it in the sequel to develop RL algorithms with low switching cost.

4 Q-learning with UCB2 exploration

In this section, we propose our main algorithm, Q-learning with UCB2 exploration, and show that it achieves sublinear regret as well as logarithmic local switching cost.

4.1 Algorithm description

High-level idea

Our algorithm maintains two sets of optimistic estimates: a running estimate, which is updated after every episode, and a delayed estimate, which is only updated occasionally but is used to select the action. In between two updates to the delayed estimate, the policy stays fixed, so the number of policy switches is bounded by the number of updates to the delayed estimate.

To describe our algorithm, let be defined as

and define the triggering sequence as

(2)

where the parameters will be inputs to the algorithm. Define for all the quantities

Two-stage switching strategy

The triggering sequence (2) defines a two-stage strategy for switching policies. Suppose that, for a given (x, a, h), the algorithm decides to take this particular action for the t-th time, and has observed the transition and updated the running estimate accordingly. Then, whether to also update the deployed policy is decided as follows:

  • Stage I: if the visitation count t is below a threshold, then always perform the update π_h(x) ← argmax_{a'} Q_h(x, a').

  • Stage II: otherwise, perform the above update only if t lies in the triggering sequence, that is, only at the counts specified by (2).

In other words, for any state-action pair, the algorithm performs eager policy updates during its first visitations, and switches to delayed policy updates afterwards according to the UCB2 scheduling.
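The following sketch illustrates the two-stage schedule; the exact triggering sequence and the length of Stage I used here are assumptions of ours (stand-ins for the sequence defined in (2)), so only the structure, eager updates first and geometrically spaced updates afterwards, should be read off.

import math

def triggering_set(alpha, eager_visits, max_count):
    # A stand-in for the triggering sequence in (2): the first `eager_visits` visits always
    # trigger a policy update (Stage I); afterwards, updates are triggered only at roughly
    # geometrically spaced visitation counts (Stage II, UCB2-style).
    trigger = set(range(1, eager_visits + 1))
    r = 1
    while True:
        t = eager_visits + math.ceil((1 + alpha) ** r)
        if t > max_count:
            break
        trigger.add(t)
        r += 1
    return trigger

def should_switch(visit_count, trigger):
    # Update pi_h(x) <- argmax_a Q_h(x, a) only when the visit count lies in the triggering set.
    return visit_count in trigger

# Example: with eager_visits = 5 and alpha = 1, updates happen at counts 1,...,5, 7, 9, 13, 21, ...
print(sorted(triggering_set(alpha=1.0, eager_visits=5, max_count=100)))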

Optimistic exploration bonus

We employ either a Hoeffding-type or a Bernstein-type exploration bonus to make sure that our running estimates are optimistic. The full algorithm with Hoeffding-style bonus is presented in Algorithm 2.

0:  Parameters α, p, and c.
  Initialize: Q_h(x, a) ← H, N_h(x, a) ← 0, and π_h(x) arbitrarily, for all (x, a, h).
  for episode k = 1, …, K do
     Receive x_1.
     for step h = 1, …, H do
        Take action a_h ← π_h(x_h), and observe x_{h+1}.
        t = N_h(x_h, a_h) ← N_h(x_h, a_h) + 1;
        b_t ← c √(H^3 ι / t) (Hoeffding-type bonus, where ι is a log factor);
        Q_h(x_h, a_h) ← (1 − α_t) Q_h(x_h, a_h) + α_t [r_h(x_h, a_h) + V_{h+1}(x_{h+1}) + b_t].
        V_h(x_h) ← min{H, max_{a'} Q_h(x_h, a')}.
        if t lies in the triggering sequence (defined in (2)) then
           (Update policy) π_h(x_h) ← argmax_{a'} Q_h(x_h, a').
        end if
     end for
  end for
Algorithm 2 Q-learning with UCB2-Hoeffding (UCB2H) Exploration
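As a companion to the listing above, here is a minimal Python sketch of one episode as we read Algorithm 2; env, the trigger set, and the constant c are placeholders of ours, and the stepsize and bonus follow the Q-learning analysis of [15] rather than being quoted verbatim from this paper.

import numpy as np

def run_episode(env, Q, V, N, pi, H, c, iota, trigger):
    # One episode of Q-learning with UCB2-Hoeffding exploration (sketch).
    # Q: (H, S, A) running optimistic estimates; V: (H + 1, S) with V[H] = 0; N: (H, S, A) visit counts.
    # pi: (H, S) delayed greedy policy actually used for acting.
    # trigger: set of visit counts at which the policy is allowed to switch (cf. (2)).
    # env is a stand-in for the episodic MDP: env.reset() -> x_1, env.step(h, x, a) -> (x_next, reward).
    x = env.reset()
    for h in range(H):
        a = pi[h, x]                                     # act with the *delayed* policy
        x_next, reward = env.step(h, x, a)
        N[h, x, a] += 1
        t = N[h, x, a]
        alpha_t = (H + 1) / (H + t)                      # stepsize from [15]
        b_t = c * np.sqrt(H ** 3 * iota / t)             # Hoeffding-type bonus (constant c assumed)
        Q[h, x, a] = (1 - alpha_t) * Q[h, x, a] + alpha_t * (reward + V[h + 1, x_next] + b_t)
        V[h, x] = min(H, Q[h, x].max())                  # running estimate, updated every step
        if t in trigger:                                 # delayed policy update (two-stage schedule)
            pi[h, x] = int(np.argmax(Q[h, x]))
        x = x_next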

4.2 Regret and switching cost guarantee

We now present our main results. [Q-learning with UCB2H exploration achieves sublinear regret and low switching cost] With appropriate choices of the parameters (α, c), with probability at least 1 − p, the regret of Algorithm 2 is bounded by Õ(√(H^4SAT)). Further, the local switching cost is bounded as N_switch ≤ O(H^3SA log K). Theorem 4.2 shows that the total regret of Q-learning with UCB2 exploration is Õ(√(H^4SAT)), the same as the UCB version of [15]. In addition, the local switching cost of our algorithm is only O(H^3SA log K), which is logarithmic in K, whereas the UCB version can in the worst case incur the trivial bound HS(K − 1). We give a high-level overview of the proof of Theorem 4.2 in Section 6, and defer the full proof to Appendix A.

Bernstein version

Replacing the Hoeffding-type bonus with a Bernstein-type bonus, we can achieve Õ(√(H^3SAT)) regret (a factor of √H better than UCB2H) and the same switching cost bound. [Q-learning with UCB2B exploration achieves sublinear regret and low switching cost] With appropriate choices of the parameters, with probability at least 1 − p, the regret of Algorithm 3 is bounded by Õ(√(H^3SAT)) as long as T is sufficiently large. Further, the local switching cost is bounded as N_switch ≤ O(H^3SA log K). The full algorithm description, as well as the proof of Theorem 4.2, are deferred to Appendix B.

Compared with Q-learning with UCB [15], Theorems 4.2 and 4.2 demonstrate that "vanilla" low-regret RL algorithms such as Q-Learning can be turned into low switching cost versions without any sacrifice in the regret bound.

4.3 PAC guarantee

Our low switching cost algorithms also achieve PAC learnability guarantees. Specifically, we have the following. [PAC bound for Q-Learning with UCB2 exploration] Suppose (WLOG) that x_1 is deterministic. For any ε > 0, Q-Learning with {UCB2H, UCB2B} exploration can output a (stochastic) policy π̂ such that with high probability V_1*(x_1) − V_1^π̂(x_1) ≤ ε, after a number of episodes that is polynomial in (S, A, H, 1/ε). The proof of Corollary 4.3 turns the regret bounds in Theorems 4.2 and 4.2 into PAC bounds using the online-to-batch conversion, similarly to [15]. The full proof is deferred to Appendix C.
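The standard online-to-batch argument behind such a conversion can be summarized as follows (a generic sketch, not the exact construction used in Appendix C): output the policy of a uniformly random episode, so that its expected suboptimality equals the average regret,

\[
\hat\pi := \pi_{\hat k}, \quad \hat k \sim \mathrm{Unif}\{1, \dots, K\}
\quad\Longrightarrow\quad
\mathbb{E}\big[ V_1^*(x_1) - V_1^{\hat\pi}(x_1) \big]
= \frac{1}{K} \sum_{k=1}^{K} \big[ V_1^*(x_1) - V_1^{\pi_k}(x_1) \big]
= \frac{\mathrm{Regret}(K)}{K},
\]

so a regret bound that grows like √K yields an ε-optimal policy once K is of order 1/ε² (times the relevant problem-dependent factors).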

5 Application: Concurrent Q-Learning

Our low switching cost Q-Learning can be applied to develop algorithms for Concurrent RL [12] – a setting in which multiple RL agents act in parallel, hopefully accelerating the exploration in wall-clock time.

Setting

We assume there are multiple agents / machines, where each machine can interact with an independent copy of the episodic MDP (so that the transitions and rewards on the MDPs are mutually independent). Within each episode, the machines must play synchronously and cannot communicate, and can only exchange information after the entire episode has finished. Note that our setting is in a way more stringent than that of [12], which allows communication after each timestep.

We define a "round" as the duration in which the machines simultaneously finish one episode and (optionally) communicate and update their policies. We measure the performance of a concurrent algorithm by the number of rounds it requires to find an ε-near-optimal policy. With more machines, we expect this number of rounds to be smaller, and the best we can hope for is a linear speedup, in which the number of rounds scales inversely with the number of machines.

Concurrent Q-Learning

Intuitively, any low switching cost algorithm can be made into a concurrent algorithm, as its execution can be parallelized in between two consecutive policy switches. Indeed, we can design concurrent versions of our low switching cost Q-Learning algorithms and achieve a nearly linear speedup. [Concurrent Q-Learning achieves nearly linear speedup] There exist concurrent versions of Q-Learning with {UCB2H, UCB2B} exploration such that, given a budget of parallel machines, they return an ε-near-optimal policy in a number of rounds that scales inversely with the number of machines, plus a constant overhead term (see Appendix D for the precise bound). Theorem 5 shows that concurrent Q-Learning attains a linear speedup so long as the number of machines is not too large. In particular, in high-accuracy (small ε) cases, the constant overhead term is negligible and we essentially have a linear speedup over a wide range of machine counts. The proof of Theorem 5 is deferred to Appendix D.
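To make the parallelization concrete, here is a sketch of a single round (ours, using the same hypothetical env interface as above): because the policy is fixed within the round, the machines need not communicate until it ends.

def concurrent_round(envs, pi, H):
    # One synchronous round: every machine runs the *same* fixed policy for one episode,
    # and trajectories are only shared after the episode ends (no mid-episode communication).
    trajectories = []
    for env in envs:                      # in practice these rollouts execute in parallel
        x = env.reset()
        episode = []
        for h in range(H):
            a = pi[h, x]
            x_next, reward = env.step(h, x, a)
            episode.append((h, x, a, reward, x_next))
            x = x_next
        trajectories.append(episode)
    return trajectories                   # fed into the central updates before the next round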

Comparison with existing concurrent algorithms

Theorem 5 implies a PAC mistake bound as well: there exist concurrent algorithms on multiple machines, Concurrent Q-Learning with {UCB2H, UCB2B}, that perform a near-optimal action on all but a bounded number of actions with high probability (detailed argument in Appendix D.2).

We compare ourselves with the Concurrent MBIE (CMBIE) algorithm of [12], which considers discounted, infinite-horizon MDPs, and has a mistake bound stated in terms of the number of states, the number of actions, and the discount factor of the discounted infinite-horizon MDP.

Our concurrent Q-Learning compares favorably against CMBIE in terms of the mistake bound:

  • Dependence on ε. CMBIE has an O(1/ε^3) dependence, whereas our algorithm achieves O(1/ε^2), better by a factor of 1/ε.

  • Dependence on S, A, H. These are not comparable in general, but under the "typical" correspondence (one can transform an episodic MDP with S states into an infinite-horizon MDP with SH states, and the "effective" horizon of a discounted MDP is 1/(1 − γ)), CMBIE has a higher dependence on the number of states, as well as an additional term due to its model-based nature.

6 Proof overview of Theorem 4.2

The proof of Theorem 4.2 involves two parts: the switching cost bound and the regret bound. The switching cost bound results directly from the UCB2 switching schedule, similarly to the bandit case (cf. Section 3). However, such a switching schedule results in delayed policy updates, which makes establishing the regret bound technically challenging.

The key to the regret bound for "vanilla" Q-Learning in [15] is a propagation of error argument, which shows that the regret (technically, an upper bound on the regret) from the h-th step onward (henceforth the h-regret), defined as the cumulative gap between the optimistic estimate V_h^k and the realized value V_h^{π_k} at step h, is bounded by (1 + 1/H) times the (h+1)-regret, plus a bounded error term. As (1 + 1/H)^H ≤ e, this fact can be applied recursively for h = H, H − 1, …, 1, which results in a total regret bound that is not exponential in H. The control of the (excess) error propagation factor by 1/H and the ability to converge are then achieved simultaneously via the stepsize choice α_t = (H + 1)/(H + t).
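Schematically, writing δ_h for the h-regret and err_h for the per-step error term, the recursion from [15] unrolls as follows (a simplified illustration, not the exact inequality used in the appendix):

\[
\delta_h \le \Big(1 + \tfrac{1}{H}\Big)\,\delta_{h+1} + \mathrm{err}_h, \qquad \delta_{H+1} = 0
\quad\Longrightarrow\quad
\delta_1 \le \sum_{h=1}^{H} \Big(1 + \tfrac{1}{H}\Big)^{h-1} \mathrm{err}_h
\le e \sum_{h=1}^{H} \mathrm{err}_h .
\]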

In contrast, our low-switching version of Q-Learning updates the exploration policy in a delayed fashion according to the UCB2 schedule. Specifically, the policy at episode k does not correspond to the argmax of the current running estimate, but rather to that of an earlier version of it. This introduces a mismatch between the estimate used for exploration and the estimate being updated, and it is a priori unclear whether such a mismatch will blow up the propagation of error.

We resolve this issue via a novel error analysis, which at a high level consists of the following steps:

  1. We show that the quantity of interest is upper bounded by a max error (Lemma A.3). On the right-hand side, the first term does not have a mismatch and can be bounded similarly as in [15]. The second term is a perturbation term, which we bound in a precise way that relates the stepsizes in between the relevant episodes to the regret at later steps (Lemma A.3).

  2. We show that, under the UCB2 scheduling, the combined error above results in only a mild blowup in the relation between the h-regret and the (h+1)-regret: the multiplicative factor can now be bounded by a slightly larger constant (Lemma A.4). Choosing α small enough keeps the multiplicative factor under control and lets the propagation of error argument go through.

We hope that the above analysis can be applied more broadly in analyzing exploration problems with delayed updates or asynchronous parallelization.

7 Lower bound on switching cost

Let M denote the set of episodic MDPs satisfying the conditions in Section 2. For any RL algorithm whose local switching cost is below a threshold of order HSA, the worst-case regret over M is linear in K. Theorem 7 implies that the switching cost of any no-regret algorithm is lower bounded by Ω(HSA), which is quite intuitive, as one would like to play each action at least once at every (h, x) pair. Compared with this lower bound, the switching cost we achieve through the UCB2 scheduling is off by at most an H^2 log K factor. We believe that the log K factor is not necessary, as there exist algorithms achieving double-logarithmic switching cost in bandits [8], and we would also like to leave the tightening of these factors as future work. The proof of Theorem 7 is deferred to Appendix E.
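For concreteness, comparing our O(H^3SA log K) upper bound with the Ω(HSA) lower bound (under our reading of the bounds above) gives a ratio of

\[
\frac{H^3 SA \log K}{HSA} \;=\; H^2 \log K .
\]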

8 Conclusion

In this paper, we take steps toward studying limited adaptivity RL. We propose a notion of local switching cost to account for the adaptivity of RL algorithms. We design a Q-Learning algorithm with infrequent policy switching that achieves Õ(√(H^4SAT)) regret (Õ(√(H^3SAT)) with a Bernstein-type bonus) while switching its policy at most O(H^3SA log K) times. Our algorithm works in the concurrent setting through parallelization and achieves nearly linear speedup and favorable sample complexity. Our proof involves a novel perturbation analysis for exploration algorithms with delayed updates, which could be of broader interest.

There are many interesting future directions, including (1) low switching cost algorithms with tighter regret bounds, most likely via model-based approaches; (2) algorithms with even lower switching cost; and (3) investigating the connection to other settings such as off-policy RL.

References

  • Agrawal and Jia [2017] S. Agrawal and R. Jia. Optimistic posterior sampling for reinforcement learning: worst-case regret bounds. In Advances in Neural Information Processing Systems, pages 1184–1194, 2017.
  • Almirall et al. [2012] D. Almirall, S. N. Compton, M. Gunlicks-Stoessel, N. Duan, and S. A. Murphy. Designing a pilot sequential multiple assignment randomized trial for developing an adaptive treatment strategy. Statistics in medicine, 31(17):1887–1902, 2012.
  • Almirall et al. [2014] D. Almirall, I. Nahum-Shani, N. E. Sherwood, and S. A. Murphy. Introduction to smart designs for the development of adaptive interventions: with application to weight loss research. Translational behavioral medicine, 4(3):260–274, 2014.
  • Ashouri et al. [2018] A. H. Ashouri, W. Killian, J. Cavazos, G. Palermo, and C. Silvano. A survey on compiler autotuning using machine learning. ACM Computing Surveys (CSUR), 51(5):96, 2018.
  • Auer et al. [2002] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2-3):235–256, 2002.
  • Azar et al. [2017] M. G. Azar, I. Osband, and R. Munos. Minimax regret bounds for reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 263–272. JMLR. org, 2017.
  • Brafman and Tennenholtz [2002] R. I. Brafman and M. Tennenholtz. R-max-a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3(Oct):213–231, 2002.
  • Cesa-Bianchi et al. [2013] N. Cesa-Bianchi, O. Dekel, and O. Shamir. Online learning with switching costs and other adaptive adversaries. In Advances in Neural Information Processing Systems, pages 1160–1168, 2013.
  • Dann et al. [2018] C. Dann, L. Li, W. Wei, and E. Brunskill. Policy certificates: Towards accountable reinforcement learning. arXiv preprint arXiv:1811.03056, 2018.
  • Duchi et al. [2018] J. Duchi, F. Ruan, and C. Yun. Minimax bounds on stochastic batched convex optimization. In Conference On Learning Theory, pages 3065–3162, 2018.
  • Gao et al. [2019] Z. Gao, Y. Han, Z. Ren, and Z. Zhou. Batched multi-armed bandits problem. arXiv preprint arXiv:1904.01763, 2019.
  • Guo and Brunskill [2015] Z. Guo and E. Brunskill. Concurrent PAC RL. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
  • Hanna et al. [2017] J. P. Hanna, P. S. Thomas, P. Stone, and S. Niekum. Data-efficient policy evaluation through behavior policy search. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1394–1403. JMLR. org, 2017.
  • Jaksch et al. [2010] T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563–1600, 2010.
  • Jin et al. [2018] C. Jin, Z. Allen-Zhu, S. Bubeck, and M. I. Jordan. Is Q-learning provably efficient? In Advances in Neural Information Processing Systems, pages 4868–4878, 2018.
  • Kearns and Singh [2002] M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. Machine learning, 49(2-3):209–232, 2002.
  • Krishnan et al. [2018] S. Krishnan, Z. Yang, K. Goldberg, J. Hellerstein, and I. Stoica. Learning to optimize join queries with deep reinforcement learning. arXiv preprint arXiv:1808.03196, 2018.
  • Lei et al. [2012] H. Lei, I. Nahum-Shani, K. Lynch, D. Oslin, and S. A. Murphy. A ”smart” design for building individualized treatment sequences. Annual review of clinical psychology, 8:21–48, 2012.
  • Mirhoseini et al. [2017] A. Mirhoseini, H. Pham, Q. V. Le, B. Steiner, R. Larsen, Y. Zhou, N. Kumar, M. Norouzi, S. Bengio, and J. Dean. Device placement optimization with reinforcement learning. In International Conference on Machine Learning (ICML-17), pages 2430–2439. JMLR.org, 2017.
  • Nguyen et al. [2019] P. Nguyen, T. Tran, S. Gupta, S. Rana, M. Barnett, and S. Venkatesh. Incomplete conditional density estimation for fast materials discovery. In Proceedings of the 2019 SIAM International Conference on Data Mining, pages 549–557. SIAM, 2019.
  • Osband et al. [2013] I. Osband, D. Russo, and B. Van Roy. (more) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, pages 3003–3011, 2013.
  • Perchet et al. [2016] V. Perchet, P. Rigollet, S. Chassang, E. Snowberg, et al. Batched bandit problems. The Annals of Statistics, 44(2):660–681, 2016.
  • Raccuglia et al. [2016] P. Raccuglia, K. C. Elbert, P. D. Adler, C. Falk, M. B. Wenny, A. Mollo, M. Zeller, S. A. Friedler, J. Schrier, and A. J. Norquist. Machine-learning-assisted materials discovery using failed experiments. Nature, 533(7601):73, 2016.
  • Theocharous et al. [2015] G. Theocharous, P. S. Thomas, and M. Ghavamzadeh. Personalized ad recommendation systems for life-time value optimization with guarantees. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.

Appendix A Proof of Theorem 4.2

This section is structured as follows. We collect notation in Section A.1, list some basic properties of the running estimate in Section A.2, establish useful perturbation bounds in Section A.3, and present the proof of the main theorem in Section A.4.

A.1 Notation

Let Q_h^k and V_h^k denote the estimates Q_h and V_h in Algorithm 2 before the k-th episode has started.

Recall the stepsize α_t = (H + 1)/(H + t), and define the sequences

α_t^0 := ∏_{j=1}^{t} (1 − α_j),    α_t^i := α_i ∏_{j=i+1}^{t} (1 − α_j).

For t ≥ 1, we have α_t^0 = 0 and Σ_{i=1}^{t} α_t^i = 1; for t = 0, we have α_0^0 = 1.

With the definition of α_t^i in hand, we have the following explicit formula for Q_h^k:

Q_h^k(x, a) = α_t^0 · H + Σ_{i=1}^{t} α_t^i [ r_h(x, a) + V_{h+1}^{k_i}(x_{h+1}^{k_i}) + b_i ],

where t = N_h^k(x, a) is the number of updates on (x, a, h) prior to the k-th episode, and k_1 < k_2 < … < k_t < k are the indices of the episodes in which those updates happened. Note that x_{h+1}^{k_i} is the next state observed when the algorithm takes action a at state x on the h-th step of episode k_i.

Throughout the proof, we let ι := log(SAT/p) denote a log factor, where we recall that p is the pre-specified tail probability.

A.2 Basics

[Properties of α_t^i; Lemma 4.1, [15]] The following properties hold for the sequence (α_t^i):

  1. 1/√t ≤ Σ_{i=1}^{t} α_t^i/√i ≤ 2/√t for every t ≥ 1.

  2. max_{i ∈ [t]} α_t^i ≤ 2H/t and Σ_{i=1}^{t} (α_t^i)^2 ≤ 2H/t for every t ≥ 1.

  3. Σ_{t=i}^{∞} α_t^i ≤ 1 + 1/H for every i ≥ 1.
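A quick numerical sanity check of properties 2 and 3 under the definitions above (our reading of the notation from [15]; the choice H = 5, the truncation level, and the tolerances are arbitrary):

import numpy as np

H, T = 5, 2000                                                   # arbitrary choices for the check
alpha = np.array([(H + 1) / (H + t) for t in range(1, T + 1)])   # alpha_t = (H + 1)/(H + t)

def weights(t):
    # Returns (alpha_t^0, [alpha_t^1, ..., alpha_t^t]) under the definitions above.
    a0 = np.prod(1 - alpha[:t])
    w = np.array([alpha[i - 1] * np.prod(1 - alpha[i:t]) for i in range(1, t + 1)])
    return a0, w

for t in [1, 10, 100, 1000]:
    a0, w = weights(t)
    assert abs(a0 + w.sum() - 1) < 1e-9                # the weights sum to one
    assert w.max() <= 2 * H / t + 1e-9                 # max_i alpha_t^i <= 2H/t
    assert (w ** 2).sum() <= 2 * H / t + 1e-9          # sum_i (alpha_t^i)^2 <= 2H/t
for i in [1, 5, 50]:                                   # sum_{t >= i} alpha_t^i <= 1 + 1/H (truncated at T)
    assert sum(weights(t)[1][i - 1] for t in range(i, T + 1)) <= 1 + 1 / H + 1e-6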

[Q is optimistic and accurate; Lemmas 4.2 & 4.3, [15]] We have, for all (x, a, h, k), that

(Q_h^k − Q_h^*)(x, a) = α_t^0 (H − Q_h^*(x, a)) + Σ_{i=1}^{t} α_t^i [ (V_{h+1}^{k_i} − V_{h+1}^*)(x_{h+1}^{k_i}) + ((P̂_h^{k_i} − P_h) V_{h+1}^*)(x, a) + b_i ],    (3)

where t = N_h^k(x, a) and P̂_h^{k_i} denotes the observed (empirical) transition.

Further, with probability at least 1 − p, choosing b_i = c √(H^3 ι / i) for some absolute constant c, we have for all (x, a, h, k) that

0 ≤ (Q_h^k − Q_h^*)(x, a) ≤ α_t^0 H + β_t,

where β_t = O(√(H^3 ι / t)). Remark. The first part of the Lemma, i.e. the expression of Q_h^k − Q_h^* in terms of rewards and value functions, is an aggregated form of the Q functions under the Q-Learning updates, and is independent of the actual exploration policy as well as the bonus.

A.3 Perturbation bound under delayed Q updates

For any (k, h), let

δ_h^k := (V_h^k − V_h^{π_k})(x_h^k)   and   φ_h^k := (V_h^k − V_h^*)(x_h^k)    (4)

denote the errors of the estimated V_h^k relative to V_h^{π_k} and V_h^*. As V_h^k is optimistic, the regret can be bounded as

Regret(K) ≤ Σ_{k=1}^{K} δ_1^k.

The goal of the propagation of error argument is to relate Σ_k δ_h^k to Σ_k δ_{h+1}^k.

We begin by showing that is controlled by the max of and , where . [Max error under delayed policy update] We have

(5)

where (which depends on .) In particular, if , then and the upper bound reduces to .

Proof.

We first show (5). By definition of we have ,

so it suffices to show that

Indeed, we have

On the other hand, maximizes . Due to the scheduling of the delayed update, was set to , and was not updated since then before , so .

Now, defining

the vectors

and only differ in the -th component (which is the only action taken and therefore also the only component that is updated). If is also maximized at , then we have ; otherwise it is maximized at some and we have

Putting these together, we get

which implies (5).

Lemma A.3 suggests bounding the error by bounding the "main term" and the "perturbation term" separately. We now establish the bound on the perturbation term. [Perturbation bound] For any indices such that the perturbation term is non-zero, we have

(6)

where