# Learning to Control in Metric Space with Optimal Regret

We study online reinforcement learning for finite-horizon deterministic control systems with arbitrary state and action spaces. Suppose that the transition dynamics and reward function are unknown, but the state and action spaces are endowed with a metric that characterizes the proximity between different states and actions. We provide a surprisingly simple upper-confidence reinforcement learning algorithm that uses a function approximation oracle to estimate optimistic Q-functions from experiences. We show that the regret of the algorithm after K episodes is O(HL(KH)^{(d-1)/d}), where L is a smoothness parameter and d is the doubling dimension of the state-action space with respect to the given metric. We also establish a near-matching regret lower bound. The proposed method can be adapted to work for more structured transition systems, including the finite-state case and the case where value functions are linear combinations of features, where the method also achieves the optimal regret.


## 1 Introduction

Reinforcement learning has proved to be a powerful approach for online control of complicated systems [Bertsekas, 1995, Sutton and Barto, 2018]. Given an unknown transition system with unknown rewards, we aim to learn to control the system on-the-fly by exploring available actions and receiving real-time feedback. Learning to control efficiently requires the algorithm to actively explore the problem space and dynamically update the control policy.

A major challenge with effective exploration is how to generalize past experiences to unseen states. Extensive research has focused on reinforcement learning with parametric models, for example linear-quadratic control [Dean et al., 2018], linear models for value function approximation [Parr et al., 2008], and state aggregation models [Singh et al., 1995]. While these parametric models can substantially reduce the complexity or regret of reinforcement learning, their practical performance is at risk of model misspecification.

In this paper, we focus on finite-horizon deterministic control systems without any parametric model. We suppose that the control system is endowed with a metric dist that characterizes the proximity between state-action pairs. We assume that the transition and reward functions are continuous with respect to dist, i.e., states that are close to each other have similar values. Proximity measures on state-action pairs have been extensively studied in the literature (see e.g. [Ferns et al., 2004, Ortner, 2007, Castro and Precup, 2010, Ferns et al., 2012a, Ferns et al., 2012b, Tang and van Breugel, 2016] and references therein).

Under this very general assumption, we develop a surprisingly simple upper-confidence reinforcement learning algorithm. It adaptively updates the control policy through episodes of online learning. The algorithm keeps track of an experience buffer, as is common in practical deep reinforcement learning methods [Mnih et al., 2013]. After each episode, the algorithm recomputes the Q-functions from the updated experience buffer through a function approximation oracle. The function approximation oracle is required to find an upper-confidence Q-function that fits the known data and optimistically estimates the value of unseen states and actions. We show that the oracle can be realized using a nearest-neighbor construction. The optimistic nature of the algorithm encourages exploration of unseen states and actions. We show that for an arbitrary metric state-action space, the algorithm achieves the sublinear regret

 O\big[ H L (KH)^{(d-1)/d} \big],

where D is the diameter of the state-action space, L is a smoothness parameter, and d is the doubling dimension of the state-action space with respect to the metric. This regret is "sublinear" in the number of episodes played; therefore the average number of mistakes decreases as more experiences are collected. When the state-action space is a smooth and compact manifold, its intrinsic dimension d can be substantially smaller than the observed ambient dimension. We use an information-theoretic approach to show that this regret is optimal.

The algorithm we propose is surprisingly general and easy to implement. It uses a function approximation oracle to find "optimistic" Q-functions, which are later used to control the system in the next episode. By picking suitable function approximators, we can adapt our method and analysis to more structured classes of control systems. As an example, we show that the method can be adapted to the setting where the value functions are linear combinations of features. In this setting, we show the method achieves a state-of-the-art regret upper bound of O(dH), where d is the dimension of the feature space. This regret is also known to be optimal. We believe our method can be adapted to work with a broader family of function approximators, including both classical spline methods and deep neural networks. Understanding the regret for learning to control using these function classes is left for future research.

## 2 Related Literature

The complexity and regret of reinforcement learning on stochastic systems have received significant attention. A basic setting is the Markov decision process (MDP), where the next state at a given state s and action a is drawn from some probability distribution P(·|s,a). In the case of a finite-state-action MDP without any structural knowledge, efficient reinforcement learning methods typically achieve regret that scales on the order of \sqrt{SAT} (up to factors of H), where S is the number of discrete states, A is the number of discrete actions, and T is the number of time steps (see for example [Jaksch et al., 2010, Agrawal and Jia, 2017, Azar et al., 2017, Osband and Van Roy, 2016]). The work of [Jaksch et al., 2010] provided a regret lower bound of order \sqrt{SAT} for the H-horizon MDP, as well as regret bounds for weakly communicating infinite-horizon average-reward MDP. The number of sample transitions needed to learn an approximate policy has been considered by [Strehl et al., 2006, Lattimore and Hutter, 2014a, Lattimore and Hutter, 2014b, Dann and Brunskill, 2015, Szita and Szepesvári, 2010, Kakade et al., 2003]. The optimal sample complexity for finding an ε-optimal policy is achieved by [Sidford et al., 2018].

In the regime of continuous-state MDP, the complexity and regret of online reinforcement learning have been explored under structural assumptions. [Lattimore et al., 2013] studies the complexity when the true transition system belongs to a finite or compact hypothesis class, and shows sample complexity that depends polynomially on the cardinality or covering number of the model class. Ortner and Ryabko [Ortner and Ryabko, 2012] develop a model-based algorithm with a sublinear regret bound for Lipschitz MDP problems with continuous state spaces and Hölder-continuous transition kernels in terms of the total variation divergence. Pazis and Parr [Pazis and Parr, 2013] consider MDP with continuous state spaces, under the assumption that the Q-functions are Lipschitz continuous, and establish a sample complexity bound that involves an approximate covering number. Ok et al. [Ok et al., 2018] studied structured MDP with a finite state space and a finite action space, including the special case of Lipschitz MDP, and provide various regret upper bounds.

Unfortunately, existing results for Lipschitz MDP do not apply to deterministic control systems with continuity in a metric space. In the preceding works on Lipschitz MDP, the transition kernel is assumed to be continuous in the sense that the total variation distance between P(·|s,a) and P(·|s′,a′) is bounded by a constant multiple of dist[(s,a),(s′,a′)]. However, this assumption almost never holds for deterministic systems. Due to the deterministic nature, the transition density is always a Dirac measure, which is discontinuous in the total variation norm. In fact, the total variation distance between P(·|s,a) and P(·|s′,a′) attains its maximal value whenever f(s,a) ≠ f(s′,a′), regardless of how close (s,a) and (s′,a′) are.

In contrast to the vast literature on reinforcement learning for MDP, online learning for deterministic control has been studied by few, even though deterministic transitions are common in applications like robotics and self-driving cars. A closely related and significant result is [Wen and Van Roy, 2017], which studies the complexity and regret of online learning in episodic deterministic systems. Under the assumption that the optimal Q-function belongs to a known hypothesis class, it provides an optimistic constraint propagation method that achieves optimal regret. This result applies to many important special cases, including the case of finitely many states, the cases of linear models and state aggregation, and beyond. The result of [Wen and Van Roy, 2017] points out a significant observation: the complexity of learning to control depends on the complexity of the functional class in which the Q-functions reside. However, the algorithm provided by [Wen and Van Roy, 2017] is rather abstract (due to the generality of the method). In comparison to [Wen and Van Roy, 2017], our paper focuses on the setting where the only structural knowledge is continuity with respect to a metric. In such a setting, [Wen and Van Roy, 2017] would imply an infinite regret, as the Eluder dimension can be infinite in this case. We achieve a sublinear regret with respect to the number of episodes played, we show that it is optimal in this setting, and our algorithm is based on an upper-confidence function approximator that is easier to implement and generalize.

Upon finishing this paper, we became aware of several recent papers working on similar settings independently. For instance, [Song and Sun, 2019] and [Zhu and Dunson, 2019] study the metric-space reinforcement learning problem under stochastic rewards and transitions. Although we obtain a similar regret bound, our contribution focuses on more general function approximators that capture a variety of settings, of which the metric space is a special case. Another group [Wang and Du, 2019] is performing a comprehensive study of the linear value function setting. Our function approximators can potentially be combined with their results to obtain a better algorithm in the linear setting.

## 3 Problem Formulation

We review the basics of Markov decision problems and the notion of regret.

### 3.1 Deterministic MDP

Consider a deterministic finite-horizon Markov decision process (MDP) M = (S, A, f, H, r), where S is an arbitrary set of states, A is an arbitrary set of actions, f: S × A → S is a deterministic transition function, H is the horizon, and r: S × A → [0, 1] is a reward function. A policy is a function π: S × [H] → A ([H] denotes the set of integers {1, …, H}). The optimal policy maximizes the cumulative reward in H time steps, from any fixed initial state s_0:

 \max_{\pi} \sum_{h=1}^{H} r(s_h, a_h) \quad \text{subject to} \quad s_{h+1} = f(s_h, a_h), \; a_h = \pi(s_h, h), \; s_1 = s_0.

Given a policy π, the value function V^π_h is defined recursively as follows:

 \forall s \in \mathcal{S}: \quad V^{\pi}_H(s) = r(s, \pi(s, H))

and

 V^{\pi}_h(s) = r(s, \pi(s, h)) + V^{\pi}_{h+1}\big[ f(s, \pi(s, h)) \big].

An optimal policy π* satisfies

 \forall h: \quad V^{\pi^*}_h := V^*_h = \max_{\pi} V^{\pi}_h \quad \text{entrywise}.

In particular, the optimal value function satisfies the following Bellman equations:

 \forall s \in \mathcal{S}: \quad V^*_H(s) = \max_{a \in \mathcal{A}} r(s, a)

and

 \forall s \in \mathcal{S}, \; h \in [H-1]: \quad V^*_h(s) = \max_{a \in \mathcal{A}} \big\{ r(s, a) + V^*_{h+1}[f(s, a)] \big\}.

We also define the Q-functions, Q^π_h: S × A → R, as

 Q^{\pi}_h(s, a) = r(s, a) + V^{\pi}_{h+1}[f(s, a)] = r(s, a) + Q^{\pi}_{h+1}\big[ f(s, a), \pi(f(s, a), h+1) \big],

where V^π_{H+1} ≡ 0. We further denote Q^*_h := Q^{π*}_h for h ∈ [H].
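The backward recursion above is easy to state concretely. Below is a minimal sketch, where the two-state, two-action MDP and its rewards are made-up illustrations, not taken from the paper:

```python
# Backward induction for V*_h(s) = max_a { r(s,a) + V*_{h+1}(f(s,a)) }, with
# boundary condition V*_{H+1} = 0, on a toy deterministic finite-horizon MDP.

def optimal_values(states, actions, f, r, H):
    V = {(H + 1, s): 0.0 for s in states}      # V*_{H+1} = 0
    for h in range(H, 0, -1):                  # sweep backward in time
        for s in states:
            V[(h, s)] = max(r[(s, a)] + V[(h + 1, f[(s, a)])] for a in actions)
    return V

states, actions, H = [0, 1], [0, 1], 3
f = {(s, a): a for s in states for a in actions}    # action a jumps to state a
r = {(s, a): 1.0 if (s, a) == (1, 1) else 0.0 for s in states for a in actions}
V = optimal_values(states, actions, f, r, H)        # V[(1, s)] is V*_1(s)
```

Because the system is deterministic, the recursion involves no expectation: each (h, s) entry is a max over finitely many successor values.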

### 3.2 Episodic Reinforcement Learning and Regret

We focus on the online episodic reinforcement learning problem, in which the learning agent does not know f or r to begin with. The agent repetitively controls the system for episodes of H time steps, where each episode starts from some initial state s_0 that does not depend on the history. We denote the total number of episodes played by the agent by K.

Suppose that the learning agent is an algorithm A (possibly randomized). It can observe all the state transitions and rewards generated by the system and adaptively pick the next action. We define the regret of this algorithm as

 \mathrm{Regret}(K) = \mathbb{E}_{\mathcal{A}}\Big[ K \cdot V^*_1(s_0) - \sum_{k=1}^{K} \sum_{h=1}^{H} r(s^{(k)}_h, a^{(k)}_h) \Big],

where the action a^{(k)}_h is generated by the algorithm A at step h of episode k based on the entire past history, and the expectation is taken over the randomness of the algorithm A. In words, the regret of A measures the difference between the total reward collected by the algorithm and that collected by the optimal policy after K episodes.
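As a bookkeeping quantity, this regret is trivial to compute once the per-episode rewards are recorded; a minimal sketch with hypothetical reward lists:

```python
def empirical_regret(v_star, episode_rewards):
    """Regret(K) = K * V*_1(s0) - total reward collected over the K episodes."""
    K = len(episode_rewards)
    return K * v_star - sum(sum(ep) for ep in episode_rewards)

# Two episodes against an optimal per-episode value of 3.0 (made-up numbers):
reg = empirical_regret(3.0, [[1.0, 1.0], [1.0, 2.0]])
```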

## 4 The Basic Case of Finitely Many States and Actions

We provide Algorithm 1 for the case where the state space S is a finite set of size S and the action space A is a finite set of size A, without assuming any structural knowledge. Note that although [Wen and Van Roy, 2017] has provided a regret-optimal algorithm for this setting, we provide a simpler algorithm based on upper-confidence bounds. Despite the simplicity of this setting, we include the result to illustrate our idea, which might be of independent interest.

Algorithm 1 always maintains an upper bound of the optimal value function using the past experiences, and uses this upper bound to plan future actions. After each episode, the upper bound is improved based on the newly obtained data. Since the exploration is driven by an upper bound of the value function, it always encourages visiting unexplored actions. The bound is improved in such a way that once a regret is paid, the algorithm always gains some new information, so the same regret will not be paid again. The guarantee of the algorithm is presented in the following theorem.
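Algorithm 1 itself is not reproduced in this excerpt, so the sketch below is our own reading of the description above: an unseen state-action pair gets the maximal optimistic value H − h + 1, a visited pair is replayed exactly (the system is deterministic), and the greedy policy therefore steers toward unexplored actions. The toy MDP and all names are illustrative.

```python
def optimistic_q(states, actions, known, H):
    """Backward induction; an unseen (s, a) gets the optimistic value H - h + 1,
    the maximum possible reward-to-go when rewards lie in [0, 1]."""
    Q = {}
    for h in range(H, 0, -1):
        for s in states:
            for a in actions:
                if (s, a) in known:                       # one visit reveals r and f
                    rew, s2 = known[(s, a)]
                    fut = 0.0 if h == H else max(Q[(h + 1, s2, b)] for b in actions)
                    Q[(h, s, a)] = rew + fut
                else:
                    Q[(h, s, a)] = float(H - h + 1)       # upper-confidence value
    return Q

def learn(f, r, states, actions, s0, H, K):
    known, totals = {}, []
    for _ in range(K):
        Q = optimistic_q(states, actions, known, H)       # recompute after each episode
        s, total = s0, 0.0
        for h in range(1, H + 1):
            a = max(actions, key=lambda b: Q[(h, s, b)])  # greedy w.r.t. optimistic Q
            known[(s, a)] = (r[(s, a)], f[(s, a)])        # grow the experience buffer
            total += r[(s, a)]
            s = f[(s, a)]
        totals.append(total)
    return totals

states, actions, H = [0, 1], [0, 1], 3
f = {(s, a): a for s in states for a in actions}
r = {(s, a): 1.0 if (s, a) == (1, 1) else 0.0 for s in states for a in actions}
totals = learn(f, r, states, actions, 0, H, 6)
```

On this toy instance the first episode earns nothing, but after a few episodes of optimistic exploration the per-episode reward reaches the optimal value V*_1(0) = 2.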

###### Theorem 1.

After K episodes, Algorithm 1 obtains the regret bound

 \mathrm{Regret}(K) \le SAH.

The proof is presented in the appendix. Whenever a state-action pair is visited for the first time, an instant regret of at most H is paid and the algorithm "gains confidence" by setting the corresponding confidence bound to zero. This can happen at most SA times, which yields the SAH bound. Theorem 1 matches the regret upper and lower bounds in the finite case, which were proved in [Wen and Van Roy, 2017]. Note that our setting is slightly different from that of [Wen and Van Roy, 2017] (they assumed r to be time-dependent). For completeness, we include a rigorous regret lower bound proof for our setting. See Theorem 8 in the appendix.

## 5 Policy Exploration In Metric Space

Now we consider the more general case where the state-action space S × A is arbitrarily large. For example, the state in a video game can be a raw-pixel image, and the state of a robotic system can be a vector of positions, velocities and accelerations. In these problems the state space can be considered a smooth manifold in a high-dimensional ambient space.

### 5.1 Metric and Continuity

The major challenge in reinforcement learning is to generalize past experiences to unseen states. For the sake of generality, we only assume that a proper notion of distance between states is given, with the interpretation that states that are closer to each other have similar values.

Suppose we have a metric¹ dist over the state-action space S × A, i.e., dist: (S × A) × (S × A) → [0, ∞) is symmetric and satisfies the triangle inequality.

¹ In fact, our analysis does not require the condition that dist[(s,a), (s′,a′)] = 0 implies (s,a) = (s′,a′). Hence the metric space can be further relaxed to a pseudometric space.

###### Assumption 1 (MDP in metric space with Lipschitz continuity).

Let the optimal action-value function be Q^*_h: S × A → R. Then there exist constants L_1, L_2 > 0 such that for all h ∈ [H] and all (s, a), (s′, a′) ∈ S × A,

 \big| Q^*_h(s, a) - Q^*_h(s', a') \big| \le L_1 \cdot \mathrm{dist}\big[ (s, a), (s', a') \big] \qquad (1)

and

 \max_{a''} \mathrm{dist}\big[ (f(s, a), a''), (f(s', a'), a'') \big] \le L_2 \cdot \mathrm{dist}\big[ (s, a), (s', a') \big]. \qquad (2)

We further denote by L a combined smoothness constant, determined by L_1 and L_2, for convenience.

### 5.2 Optimistic Function Approximation

To handle the curse of dimensionality of a general state space, we will use a function approximator to compute optimistic Q-functions from experiences. The function approximator needs to satisfy the following conditions.

###### Assumption 2 (Function Approximation Oracle).

Let q: X → R be a function. Let B = {(x_i, q(x_i))}_{i=1}^N be a set of key-value pairs generated by the function q. Let L > 0 be a parameter. Then there exists a function approximator, FuncApprox, which, given (B, L), outputs a function q̂: X → R that satisfies

1. q̂ is L-Lipschitz continuous;

2. q̂(x) ≥ q(x) for all x ∈ X;

3. q̂(x_i) = q(x_i) for all i ∈ [N].²

² This condition can be further relaxed to |q̂(x_i) − q(x_i)| ≤ ε for some error parameter ε > 0. In this case, the regret bound depends linearly on ε.

One way to meet the conditions required of the function approximator is to use the nearest-neighbor approach, given by

 \forall x \in X: \quad \hat{q}(x) := \min_{i \in [N]} \big\{ q(x_i) + L \cdot \mathrm{dist}(x, x_i) \big\}, \qquad (3)

where the distance regularization overestimates the value at an unseen point using its near neighbors.
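A direct implementation of (3) is only a few lines. In this sketch, dist is an assumed example metric (absolute difference on the real line) and the buffer holds hypothetical key-value pairs:

```python
def func_approx(buffer, L, dist):
    """Nearest-neighbor optimistic approximator (3):
    q_hat(x) = min_i { q(x_i) + L * dist(x, x_i) }."""
    def q_hat(x):
        return min(q_xi + L * dist(x, xi) for xi, q_xi in buffer)
    return q_hat

buffer = [(0.0, 0.5), (1.0, 0.8)]          # (x_i, q(x_i)) pairs
dist = lambda x, y: abs(x - y)
q_hat = func_approx(buffer, L=1.0, dist=dist)
```

At the stored points q_hat reproduces the data exactly, and in between it over-estimates by at most L times the distance to the nearest stored point, matching the three conditions of Assumption 2.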

###### Lemma 2.

Suppose the function q is L-Lipschitz. Then the nearest-neighbor approximator given by (3) is a function approximator satisfying Assumption 2 with Lipschitz constant L.

###### Proof.

First, we observe that, by the triangle inequality, for any x, x′ ∈ X,

 \big| \hat{q}(x) - \hat{q}(x') \big| \le \max_i \big| q(x_i) + L \cdot \mathrm{dist}(x, x_i) - q(x_i) - L \cdot \mathrm{dist}(x', x_i) \big| \le L \cdot \max_i \big| \mathrm{dist}(x, x_i) - \mathrm{dist}(x', x_i) \big| \le L \cdot \mathrm{dist}(x, x').

Therefore (1) of Assumption 2 holds.

Secondly, since q is L-Lipschitz continuous, we have, for all x ∈ X and i ∈ [N],

 q(x) \le q(x_i) + L \cdot \mathrm{dist}(x, x_i).

Thus

 q(x) \le \min_i \big\{ q(x_i) + L \cdot \mathrm{dist}(x, x_i) \big\} = \hat{q}(x),

and (2) of Assumption 2 holds.

We now verify (3) of Assumption 2. For all j ∈ [N], we have

 q(x_j) \le \hat{q}(x_j) = \min_i \big\{ q(x_i) + L \cdot \mathrm{dist}(x_j, x_i) \big\} \le q(x_j) + L \cdot \mathrm{dist}(x_j, x_j) = q(x_j),

as desired. ∎

More generally, one can construct the function approximator by solving a regression problem. For example, suppose that q̂ is integrable with respect to a measure μ over the state-action space. Then we can find it using

 \max_{\hat{q} \in \mathcal{F}, \, \hat{q} \ge q} \int \hat{q} \, d\mu(x) \quad \text{s.t.} \quad \hat{q}(x_i) = q(x_i), \; \forall i \in [N],

or, for some arbitrarily small δ > 0,

 \max_{\hat{q} \in \mathcal{F}, \, \hat{q} \ge q} \; \sum_{i=1}^{N} -\big( q(x_i) - \hat{q}(x_i) \big)^2 + \delta \int \hat{q} \, d\mu(x),

where F is the set of L-Lipschitz functions with infinity norm bounded by H. This formulation is compatible with a broader family of function classes, where F can be replaced by a parametric family that is sufficient to express the unknown Q-function, including spline interpolations and deep neural networks.

### 5.3 Regret-Optimal Algorithm in Metric Space

Next we provide a regret-optimal algorithm for the metric MDP. The algorithm needs no structural assumption other than the Lipschitz continuity of Q^* and f. It combines a UCB-type algorithm with nearest-neighbor search: it measures confidence by coupling the Lipschitz constant with the distance of a newly observed state to its nearest observed state. The algorithm is formally presented in Algorithm 2.

The algorithm keeps an experience buffer that grows as new sample transitions are observed. It optimistically explores the policy space during online training using upper estimates of the Q-values. These Q-values are computed recursively using the function approximator, following the dynamic programming principle.

In particular, if the function approximation oracle is given by the nearest-neighbor construction (3), Steps 11 and 12 of the algorithm take the form

 \hat{r}^{(k+1)}(s, a) = \min\Big[ \min_{(s', a') \in B^{(k+1)}} \big( r(s', a') + L_1 \cdot \mathrm{dist}[(s, a), (s', a')] \big), \; 1 \Big]

 Q^{(k+1)}_H(s, a) \leftarrow \hat{r}^{(k+1)}(s, a)

 Q^{(k+1)}_h(s, a) \leftarrow \min_{(s', a') \in B^{(k+1)}} \Big[ r(s', a') + \sup_{a'' \in \mathcal{A}} Q^{(k+1)}_{h+1}\big( f(s', a'), a'' \big) + L_1 \cdot \mathrm{dist}[(s', a'), (s, a)] \Big] \qquad (4)

for all h ∈ [H−1]. In this case, the nearest-neighbor function approximator will prioritize exploring state-action pairs that are farther away from the seen ones. Note that the function approximators defined in (4) satisfy Assumption 2 (see Lemma 2).

Algorithm 2 provides a general framework for reinforcement learning with a function approximator in deterministic control systems. It can be adapted to work with a broad class of function approximators.

## 6 Regret Analysis

In this section, we prove the main results of this paper.

### 6.1 Main Results

For a metric space X, we call a set N(ϵ) ⊂ X an ϵ-net if

 \forall x \in X: \; \exists x' \in N(\epsilon) \ \text{ s.t. } \ \mathrm{dist}(x, x') \le \epsilon.

If X is compact, we denote by N(X, ϵ) the minimum size of an ϵ-net for X. We also define a similar concept, an ϵ-packing C(ϵ) ⊂ X, as a set such that

 \forall x, x' \in C(\epsilon), \; x \ne x': \quad \mathrm{dist}(x, x') > \epsilon.

If X is compact, we denote by C(X, ϵ) the maximum size of an ϵ-packing for X. In general, N(X, ϵ) and C(X, ϵ) are of the same order: we have C(X, 2ϵ) ≤ N(X, ϵ) ≤ C(X, ϵ).
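A greedy pass over the space produces a set that is simultaneously an ϵ-net and an ϵ-packing, which is the standard way the two quantities are related. A toy sketch on a grid over the unit interval (the point cloud and metric are illustrative):

```python
def greedy_net(points, eps, dist):
    """Add a point as a center iff it is more than eps from every chosen center.
    The result is an eps-packing by construction, and an eps-net because every
    rejected point lies within eps of some center."""
    centers = []
    for x in points:
        if all(dist(x, c) > eps for c in centers):
            centers.append(x)
    return centers

points = [i / 100 for i in range(101)]                 # grid on [0, 1]
net = greedy_net(points, 0.1, lambda x, y: abs(x - y))
```

On this grid the construction returns about 1/ϵ centers, matching the Θ((D/ϵ)^d) scaling with d = 1 and D = 1.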

Next we show that the regret until reaching ϵ-optimality is upper bounded by a quantity proportional to the size of the ϵ-net. We will show later that the regret is lower bounded by a quantity proportional to the size of the ϵ-packing.

###### Theorem 3 (Regret till ϵ-optimality).

Suppose we have an episodic deterministic MDP that satisfies Assumption 1. Let S × A be the state-action space with diameter D, and let L_1, L_2 be the parameters specified in Assumption 1. Suppose we use the Lipschitz-continuous function approximator defined in (4), and suppose the state-action space admits an ϵ-net N(ϵ) for any ϵ > 0. Then after K episodes, Algorithm 2 obtains the regret bound

 \mathrm{Regret}(K) \le H \, |N(\epsilon)| + 2 \epsilon L K H,

where L is the combined smoothness constant determined by L_1 and L_2.

Suppose d is the doubling dimension of the state-action space S × A. The doubling dimension of a metric space is the smallest positive integer d such that every ball can be covered by 2^d balls of half the radius. Then we can show the following regret bound.

###### Theorem 4 (Optimal Regret for Metric Space).

Suppose the state-action space S × A is compact with diameter D and has doubling dimension d. Then after K episodes, Algorithm 2 with the nearest-neighbor function approximator (4) obtains the regret bound

 \mathrm{Regret}(K) = O\big( (DLK)^{d/(d+1)} \big) \cdot H.

The regret bound is sublinear in the number of episodes K and linear with respect to the smoothness constant L and the diameter D.

The regret depends on the doubling dimension d, the intrinsic dimension of S × A, which is often very small even when the observed state space has high dimension. For example, the raw-pixel images in a video game often belong to a smooth manifold with small intrinsic dimension. Algorithm 2 uses nearest-neighbor function approximation; it can be thought of as learning the manifold state space at the same time as solving the dynamic program. It does not need any parametric model or feature map to capture the small intrinsic dimension.

### 6.2 Proofs of the Main Theorems

To prove the above theorems, we need several core lemmas. The following lemma shows that the approximated Q-function in Algorithm 2 is always an upper bound of the optimal Q-function. Note that this lemma holds for every function approximator that satisfies Assumption 2.

###### Lemma 5 (Optimism).

Suppose Assumption 2 holds for the FuncApprox in Algorithm 2. Then for any k ∈ [K], h ∈ [H], and (s, a) ∈ S × A, we have

 Q^*_h(s, a) \le Q^{(k)}_h(s, a).
###### Proof.

By the properties of FuncApprox, we have

 r \le \hat{r}^{(k)} \quad \text{entrywise}.

We prove the result by induction. When h = H, we have

 Q^*_H(s, a) = r(s, a) \le \hat{r}^{(k)}(s, a) =: Q^{(k)}_H(s, a).

Suppose the relation holds for h + 1; then we have

 Q^{(k)}_h \leftarrow \mathrm{FuncApprox}\Big( \big\{ \big( (s, a), \; r(s, a) + \sup_{a' \in \mathcal{A}} Q^{(k)}_{h+1}(f(s, a), a') \big) \big\}_{(s, a) \in B^{(k+1)}} \Big).

Thus we have

 Q^*_h(s, a) = r(s, a) + \sup_{a' \in \mathcal{A}} Q^*_{h+1}(f(s, a), a') \le r(s, a) + \sup_{a' \in \mathcal{A}} Q^{(k)}_{h+1}(f(s, a), a') \le Q^{(k)}_h(s, a).

This completes the proof. ∎

Given a finite set B ⊂ S × A, for any (s, a) ∈ S × A we define the nearest-neighbor operator NN and the function b_B as follows:

 \mathrm{NN}(B, (s, a)) = \arg\min_{(s', a') \in B} \mathrm{dist}\big[ (s, a), (s', a') \big], \qquad b_B(s, a) = \mathrm{dist}\big[ (s, a), \mathrm{NN}(B, (s, a)) \big].
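These two definitions translate directly into code; a minimal sketch over a hypothetical buffer of state-action pairs with an assumed product metric:

```python
def nn(B, x, dist):
    """NN(B, x): the buffer element closest to x."""
    return min(B, key=lambda y: dist(x, y))

def b(B, x, dist):
    """b_B(x): distance from x to its nearest neighbor in B."""
    return dist(x, nn(B, x, dist))

dist = lambda x, y: abs(x[0] - y[0]) + abs(x[1] - y[1])   # assumed metric
B = [(0.0, 0.0), (1.0, 1.0)]                               # toy (s, a) buffer
```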

The next lemma shows that Algorithm 2 with the function approximator (4) does not incur too much per-step error.

###### Lemma 6 (Induction).

Suppose the FuncApprox in Algorithm 2 is given by (4). Then for any k ∈ [K] and h ∈ [H−1], we have

 Q^{(k)}_h(s^{(k)}_h, a^{(k)}_h) \le r(s^{(k)}_h, a^{(k)}_h) + Q^{(k)}_{h+1}(s^{(k)}_{h+1}, a^{(k)}_{h+1}) + L \cdot b_{B^{(k)}}(s^{(k)}_h, a^{(k)}_h).
###### Proof.

Let

 (s^{(k)*}_h, a^{(k)*}_h) = \arg\min_{(s', a') \in B^{(k)}} \mathrm{dist}\big[ (s^{(k)}_h, a^{(k)}_h), (s', a') \big].

By definition of b_{B^{(k)}}, we have

 Q^{(k)}_h(s^{(k)}_h, a^{(k)}_h) \le Q^{(k)}_h(s^{(k)*}_h, a^{(k)*}_h) + L_1 \cdot \mathrm{dist}\big[ (s^{(k)*}_h, a^{(k)*}_h), (s^{(k)}_h, a^{(k)}_h) \big] \quad \text{(by the Lipschitz continuity of } Q^{(k)}_h \text{, Lemma 2)}

 \le r(s^{(k)*}_h, a^{(k)*}_h) + \sup_{a'' \in \mathcal{A}} Q^{(k)}_{h+1}\big( f(s^{(k)*}_h, a^{(k)*}_h), a'' \big) + L_1 \cdot \mathrm{dist}\big[ (s^{(k)*}_h, a^{(k)*}_h), (s^{(k)}_h, a^{(k)}_h) \big] \quad \text{(by the definition of } Q^{(k)}_h \text{ in (4))}

 \le r(s^{(k)}_h, a^{(k)}_h) + Q^{(k)}_{h+1}(s^{(k)}_{h+1}, a^{(k)}_{h+1}) + L \cdot b_{B^{(k)}}(s^{(k)}_h, a^{(k)}_h) \quad \text{(by the Lipschitz continuity of } Q^{(k)}_{h+1} \text{ and } r \text{).} \; ∎

###### Proof of Theorem 3.

From Lemma 5, we have

 Q^*_h(s, a) \le Q^{(k)}_h(s, a), \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}, \; h \in [H], \; k \in [K].

Denote the policy at episode k by π^{(k)}. We can rewrite the regret as

 \mathrm{Regret}(K) = \sum_{k=1}^{K} \Big[ V^*_1(s^{(k)}_1) - \sum_{h=1}^{H} r(s^{(k)}_h, a^{(k)}_h) \Big]

 = \sum_{k=1}^{K} \Big[ \max_{a \in \mathcal{A}} Q^*_1(s^{(k)}_1, a) - Q^{\pi^{(k)}}_1(s^{(k)}_1, a^{(k)}_1) \Big]

 \le \sum_{k=1}^{K} \Big[ \max_{a \in \mathcal{A}} Q^{(k)}_1(s^{(k)}_1, a) - Q^{\pi^{(k)}}_1(s^{(k)}_1, a^{(k)}_1) \Big]

 = \sum_{k=1}^{K} \Big[ Q^{(k)}_1(s^{(k)}_1, a^{(k)}_1) - Q^{\pi^{(k)}}_1(s^{(k)}_1, a^{(k)}_1) \Big].

Next we consider Q^{(k)}_h(s^{(k)}_h, a^{(k)}_h) − Q^{π^{(k)}}_h(s^{(k)}_h, a^{(k)}_h). We have

 Q^{(k)}_h(s^{(k)}_h, a^{(k)}_h) - Q^{\pi^{(k)}}_h(s^{(k)}_h, a^{(k)}_h)

 \le L \cdot b_{B^{(k)}}(s^{(k)}_h, a^{(k)}_h) + r(s^{(k)}_h, a^{(k)}_h) + Q^{(k)}_{h+1}(s^{(k)}_{h+1}, a^{(k)}_{h+1}) - r(s^{(k)}_h, a^{(k)}_h) - Q^{\pi^{(k)}}_{h+1}(s^{(k)}_{h+1}, a^{(k)}_{h+1}) \quad \text{(by Lemma 6)}

 = L \cdot b_{B^{(k)}}(s^{(k)}_h, a^{(k)}_h) + Q^{(k)}_{h+1}(s^{(k)}_{h+1}, a^{(k)}_{h+1}) - Q^{\pi^{(k)}}_{h+1}(s^{(k)}_{h+1}, a^{(k)}_{h+1})

 \le L \cdot b_{B^{(k)}}(s^{(k)}_h, a^{(k)}_h) + L \cdot b_{B^{(k)}}(s^{(k)}_{h+1}, a^{(k)}_{h+1}) + Q^{(k)}_{h+2}(s^{(k)}_{h+2}, a^{(k)}_{h+2}) - Q^{\pi^{(k)}}_{h+2}(s^{(k)}_{h+2}, a^{(k)}_{h+2})

 \le \cdots \le L \cdot \sum_{h'=h}^{H} b_{B^{(k)}}(s^{(k)}_{h'}, a^{(k)}_{h'}).

Moreover, we immediately have

 Q^{(k)}_1(s^{(k)}_1, a^{(k)}_1) - Q^{\pi^{(k)}}_1(s^{(k)}_1, a^{(k)}_1) \le H.

Therefore,

 \mathrm{Regret}(K) \le \sum_{k=1}^{K} \min\Big\{ L \cdot \sum_{h=1}^{H} b_{B^{(k)}}(s^{(k)}_h, a^{(k)}_h), \; H \Big\}.

We consider an ϵ-net N(ϵ) that covers S × A, and connect each (s^{(k)}_h, a^{(k)}_h) to its nearest neighbor in N(ϵ). Denote

 \mathrm{NN}_{\epsilon}(s, a) = \arg\min_{(s', a') \in N(\epsilon)} \mathrm{dist}\big[ (s, a), (s', a') \big].

At episode k, if for some k′ < k and some h′ ∈ [H] we have NN_ϵ(s^{(k′)}_{h′}, a^{(k′)}_{h′}) = (s, a), we say that (s, a) ∈ N(ϵ) has been visited. Thus if (s, a) = NN_ϵ(s^{(k)}_h, a^{(k)}_h) has been visited, we can upper bound

 b_{B^{(k)}}(s^{(k)}_h, a^{(k)}_h) \le \mathrm{dist}\big[ (s^{(k)}_h, a^{(k)}_h), (s^{(k')}_{h'}, a^{(k')}_{h'}) \big] \le \mathrm{dist}\big[ (s^{(k)}_h, a^{(k)}_h), (s, a) \big] + \mathrm{dist}\big[ (s^{(k')}_{h'}, a^{(k')}_{h'}), (s, a) \big] \le 2\epsilon.

On the other hand, if (s, a) has not been visited, we upper bound the regret of the entire episode by H. This case can happen at most |N(ϵ)| times, since by the next episode (s, a) will have been visited. Therefore,

 \mathrm{Regret}(K) \le H \, |N(\epsilon)| + 2 \epsilon L K H,

as desired.

We are now ready to prove Theorem 4.

###### Proof of Theorem 4.

Since the metric space has doubling dimension d, for every ϵ > 0 there is an ϵ-net of size

 |N(\epsilon)| = \Theta\big( (D/\epsilon)^d \big).

By Theorem 3, the regret is upper bounded by

 Regret(