# Practical Open-Loop Optimistic Planning

We consider the problem of online planning in a Markov Decision Process when given only access to a generative model, restricted to open-loop policies, i.e. sequences of actions, and under budget constraint. In this setting, the Open-Loop Optimistic Planning (OLOP) algorithm enjoys good theoretical guarantees but is overly conservative in practice, as we show in numerical experiments. We propose a modified version of the algorithm with tighter upper-confidence bounds, KL-OLOP, that leads to better practical performance while retaining the sample complexity bound. Finally, we propose an efficient implementation that significantly improves the time complexity of both algorithms.


## 1 Introduction

In a Markov Decision Process (MDP), an agent observes its current state s from a state space 𝒮 and picks an action a from an action space 𝒜, before transitioning to a next state s′ drawn from a transition kernel P(s′|s, a) and receiving a bounded reward r drawn from a reward kernel. The agent must act so as to optimise its expected cumulative discounted reward ∑_t γ^t r_t, also called expected return, where γ ∈ (0, 1) is the discount factor. In Online Planning [14], we do not consider that these transition and reward kernels are known as in Dynamic Programming [1], but rather only assume access to the MDP through a generative model (e.g. a simulator) which yields samples of the next state and reward when queried. Finally, we consider a fixed-budget setting where the generative model can only be called a maximum number of times, called the budget n.

Monte-Carlo Tree Search (MCTS) algorithms were historically motivated by the application of computer Go, and made a first appearance in the CrazyStone software [8]. They were later reformulated in the setting of Multi-Armed Bandits by [12] with their Upper Confidence bounds applied to Trees (UCT) algorithm. Despite its popularity, UCT has been shown to suffer from several limitations: its sample complexity can be at least doubly-exponential for some problems (e.g. when a narrow optimal path is hidden in a suboptimal branch), which is much worse than uniform planning [7]. The Sparse Sampling algorithm of [11] achieves better worst-case performance, but it is still non-polynomial and does not adapt to the structure of the MDP. In stark contrast, the Optimistic Planning for Deterministic systems (OPD) algorithm considered by [10] in the case of deterministic transitions and rewards exploits the structure of the cumulative discounted reward to achieve a problem-dependent polynomial bound on sample complexity. A similar line of work in a deterministic setting is that of SOOP and OPC by [3, 4], though they focus on continuous action spaces. OPD was later extended to stochastic systems with the Open-Loop Optimistic Planning (OLOP) algorithm introduced by [2] in the open-loop setting: only sequences of actions are considered, independently of the states that they lead to. This restriction of the policy space causes a loss of optimality, but greatly simplifies the planning problem when the state space is large or infinite. More recent work such as StOP [15] and TrailBlazer [9] focuses on the probably approximately correct (PAC) framework: rather than simply recommending an action to maximise the expected rewards, these algorithms return an ε-approximation of the value at the root that holds with high probability. This highly demanding framework puts a severe strain on these algorithms, which were developed for theoretical analysis only and cannot be applied to real problems.

##### Contributions

The goal of this paper is to study the practical performance of OLOP when applied to numerical problems. Indeed, OLOP was introduced along with a theoretical sample complexity analysis, but no experiment was carried out. Our contribution is threefold:

• First, we show that in our experiments OLOP is overly pessimistic, especially in the low-budget regime, and we provide an intuitive explanation by casting light on an unintended effect that alters the behaviour of OLOP.

• Second, we circumvent this issue by leveraging modern tools from the bandits literature to design and analyse a modified version with tighter upper-confidence bounds called KL-OLOP. We show that we retain the asymptotic regret bounds of OLOP while improving its performance by an order of magnitude in numerical experiments.

• Third, we provide a time and memory efficient implementation of OLOP and KL-OLOP, bringing an exponential speedup that allows these algorithms to scale to high sample budgets.

The paper is structured as follows: in Section 2, we present OLOP, give some intuition on its limitations, and introduce KL-OLOP, whose sample complexity is analysed in Section 3. In Section 4, we propose an efficient implementation of the two algorithms, and Section 5 is devoted to the proof of Theorem 3.1. Finally, in Section 6, we evaluate them in several numerical experiments.

#### 1.0.1 Notations

Throughout the paper, we follow the notations from [2] and use the standard notations over alphabets: a finite word a ∈ 𝒜^h of length h represents a sequence of actions (a_1, …, a_h). Its prefix of length 1 ≤ t ≤ h is denoted a_{1:t}. 𝒜^∞ denotes the set of infinite sequences of actions. Two finite sequences a and b can be concatenated as ab, the sets of finite and infinite suffixes of a are respectively a𝒜^* and a𝒜^∞, and the empty sequence is ∅.

During the planning process, the agent iteratively selects sequences of actions of length L until it reaches the allowed budget of n actions. More precisely, at time t of episode m, the agent has played a^m_{1:t} and receives a reward Y^m_t. We denote the probability distribution of this reward as ν(a^m_{1:t}), and its mean as μ(a^m_{1:t}).

After this exploration phase, the agent selects an action a(n) so as to minimise the simple regret r_n = V − V(a(n)), where V = max_{a∈𝒜} V(a) and V(a) refers to the value of a sequence of actions a, that is, the maximum expected discounted cumulative reward one may obtain after executing a:

 V(a) = sup_{b ∈ a𝒜^∞} ∑_{t=1}^{∞} γ^t μ(b_{1:t}) (1)

## 2 Kullback-Leibler Open-Loop Optimistic Planning

In this section we present KL-OLOP, a combination of the OLOP algorithm of [2] with the tighter Kullback-Leibler upper confidence bounds from [5]. We first frame both algorithms in a common structure before specifying their implementations.

### 2.1 General structure

First, following OLOP, the total sample budget n is split into M trajectories of length L in the following way:

 M is the largest integer such that M ⌈log M / (2 log 1/γ)⌉ ≤ n;  L = ⌈log M / (2 log 1/γ)⌉.

The look-ahead tree of depth L is denoted 𝒯.
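As a concrete illustration, the allocation rule above can be evaluated by direct search. This is a hedged sketch: the function name `allocate_budget` and the search loop are ours, not from the paper.

```python
import math

def allocate_budget(n, gamma):
    """Sketch of the OLOP budget split: the largest M with
    M * ceil(log M / (2 log 1/gamma)) <= n, and the matching L."""
    M, L = 1, 1
    for m in range(2, n + 1):
        # trajectory length associated with m episodes
        length = math.ceil(math.log(m) / (2 * math.log(1 / gamma)))
        if m * length <= n:
            M, L = m, length  # keep the largest feasible m
    return M, L
```

For instance, with a budget n = 100 and γ = 0.9, this yields M = 9 trajectories of length L = 11, using 99 of the 100 allowed samples.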

Then, we introduce some useful definitions. Consider an episode 1 ≤ m ≤ M. For any 1 ≤ h ≤ L and a ∈ 𝒜^h, let

 T_a(m) ≝ ∑_{s=1}^{m} 1{a^s_{1:h} = a}

be the number of times we played an action sequence starting with a, and S_a(m) the sum of rewards collected at the last transition of these sequences:

 S_a(m) ≝ ∑_{s=1}^{m} Y^s_h 1{a^s_{1:h} = a}

The empirical mean reward of a is μ̂_a(m) ≝ S_a(m)/T_a(m) if T_a(m) > 0, and +∞ otherwise. Here, we provide a more general form for upper and lower confidence bounds on these empirical means:

 U^μ_a(m) ≝ max{q ∈ I : T_a(m) d(S_a(m)/T_a(m), q) ≤ f(m)} (2)
 L^μ_a(m) ≝ min{q ∈ I : T_a(m) d(S_a(m)/T_a(m), q) ≤ f(m)} (3)

where I is an interval, d is a divergence on I² and f is a non-decreasing function. They are left unspecified for now, and their particular implementations and associated properties will be discussed in the following sections.

These upper bounds for intermediate rewards finally enable us to define an upper bound U_a(m) for the value of the entire sequence of actions a:

 U_a(m) ≝ ∑_{t=1}^{h} γ^t U^μ_{a_{1:t}}(m) + γ^{h+1}/(1−γ) (4)

where the term γ^{h+1}/(1−γ) comes from upper-bounding by one every reward-to-go in the sum (1), for t ≥ h + 1. In [2], there is an extra step to "sharpen the bounds" of sequences a ∈ 𝒜^L by taking:

 B_a(m) ≝ inf_{1≤t≤L} U_{a_{1:t}}(m) (5)

The general algorithm structure is shown in Algorithm 1. We now discuss two specific implementations that differ in their choice of divergence d and non-decreasing function f. They are compared in Table 1.
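To make the combination in (4) concrete, here is a small sketch of how a sequence upper bound aggregates per-step reward bounds with the tail term; the function name and list-based interface are illustrative assumptions, not the paper's code.

```python
def sequence_upper_bound(reward_ucbs, gamma):
    """U-value in the spirit of eq. (4): discounted sum of per-step
    reward upper bounds for the explored prefix of length h, plus
    gamma^(h+1)/(1-gamma) from bounding every future reward by one."""
    h = len(reward_ucbs)
    tail = gamma ** (h + 1) / (1 - gamma)
    # rewards are indexed from t = 1, hence the gamma^(t+1) factor
    return sum(gamma ** (t + 1) * u for t, u in enumerate(reward_ucbs)) + tail
```

As a sanity check, if every per-step bound equals one, the result is ∑_{t≥1} γ^t = γ/(1−γ) regardless of the prefix length, as expected.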

### 2.2 OLOP

To recover the original OLOP algorithm of [2] from Algorithm 1, we can use a quadratic divergence d_QUAD on the interval I = ℝ and a constant function f, defined as d_QUAD(p, q) ≝ 2(p − q)² and f(m) ≝ 4 log M.

Indeed, in this case U^μ_a(m) can then be explicitly computed as:

 U^μ_a(m) = max{q ∈ ℝ : 2(S_a(m)/T_a(m) − q)² ≤ 4 log M / T_a(m)} = μ̂_a(m) + √(2 log M / T_a(m))

which is the Chernoff-Hoeffding bound used originally in section 3.1 of [2].
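For illustration, this closed-form bound is straightforward to compute; the helper below is our own sketch (names are ours, not code from [2]).

```python
import math

def hoeffding_ucb(sum_rewards, count, M):
    """OLOP's Chernoff-Hoeffding bound: empirical mean plus the
    exploration bonus sqrt(2 log M / T_a(m)).
    Note it is NOT clipped to the reward interval [0, 1]."""
    return sum_rewards / count + math.sqrt(2 * math.log(M) / count)
```

Note that after a single observed reward of 1 with M = 100, the bound is roughly 4.03, far above the maximal reward of 1; this is precisely the regime discussed in the next section.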

### 2.3 An unintended behaviour

From the definition of U_a(m) as an upper bound of the value of the sequence a, we expect longer sequences to have non-increasing upper bounds: indeed, every new action encountered along the sequence is a potential loss of optimality. However, this property only holds if the upper bound U^μ defined in (2) belongs to the reward interval [0, 1].

###### Lemma 1

(Monotonicity of U_a(m) along a sequence)

• If U^μ_b(m) ∈ [0, 1] for all b ∈ 𝒜^*, then for any a ∈ 𝒜^L the sequence (U_{a_{1:h}}(m))_{1≤h≤L} is non-increasing, and we simply have B_a(m) = U_a(m).

• Conversely, if U^μ_b(m) ≥ 1 for all b ∈ 𝒜^*, then for any a ∈ 𝒜^L the sequence (U_{a_{1:h}}(m))_{1≤h≤L} is non-decreasing, and we have B_a(m) = U_{a_{1:1}}(m).

###### Proof

We prove the first proposition, and the same reasoning applies to the second. For a ∈ 𝒜^L and 1 ≤ h ≤ L − 1, we have by (4):

 U_{a_{1:h+1}}(m) − U_{a_{1:h}}(m) = γ^{h+1} U^μ_{a_{1:h+1}}(m) + γ^{h+2}/(1−γ) − γ^{h+1}/(1−γ) = γ^{h+1} (U^μ_{a_{1:h+1}}(m) − 1) ≤ 0,

where the last inequality uses U^μ_{a_{1:h+1}}(m) ∈ [0, 1].

We can conclude that (U_{a_{1:h}}(m))_{1≤h≤L} is non-increasing, and that B_a(m) = inf_{1≤t≤L} U_{a_{1:t}}(m) = U_a(m). ∎

Yet, the Chernoff-Hoeffding bounds used in OLOP start in the regime U^μ_a(m) ≥ 1 – initially U^μ_a(m) = +∞ – and can remain in this regime for a long time, especially in the near-optimal branches where the mean reward is close to one.

Under these circumstances, Lemma 1 has a drastic effect on the search behaviour. Indeed, as long as a subtree under the root verifies U^μ_a(m) ≥ 1 for every sequence a it contains, all these sequences share the same B-value B_a(m) = U_{a_{1:1}}(m). This means that OLOP cannot differentiate them and exploit information from their shared history as intended, and behaves as uniform sampling instead. Once the early depths have been explored sufficiently, OLOP resumes its intended behaviour, but the problem is only shifted to deeper unexplored subtrees.

This consideration motivates us to leverage the recent developments in the Multi-Armed Bandits literature, and modify the upper-confidence bounds for the expected rewards so that they respect the reward bounds.

### 2.4 KL-OLOP

We propose a novel implementation of Algorithm 1 where we leverage the analysis of the kl-UCB algorithm from [5] for multi-armed bandits with general bounded rewards. Likewise, we use the Bernoulli Kullback-Leibler divergence d_BER, defined on the interval I = [0, 1] by:

 d_BER(p, q) ≝ p log(p/q) + (1 − p) log((1 − p)/(1 − q))

with, by convention, 0 log 0 = 0 log(0/0) = 0 and x log(x/0) = +∞ for x > 0. This divergence and the corresponding bounds are illustrated in Figure 1.

U^μ_a(m) and L^μ_a(m) can be efficiently computed using Newton iterations, as for any p ∈ [0, 1] the function q ↦ d_BER(p, q) is strictly convex and increasing (resp. decreasing) on the interval [p, 1] (resp. [0, p]).
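As an illustration, the bounds (2)-(3) with d_BER can be computed numerically. This sketch uses plain bisection rather than the Newton iterations mentioned above (bisection is simpler but slower); all function names are our own assumptions.

```python
import math

def d_ber(p, q):
    """Bernoulli KL divergence, with the conventions 0 log 0 = 0
    handled here by clamping to the open interval (0, 1)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_bounds(mean, count, threshold):
    """Lower and upper bounds {q in [0,1] : count * d(mean, q) <= threshold},
    found by bisection using the monotonicity of d(mean, .) on each side."""
    def solve(lo, hi, increasing):
        for _ in range(64):
            mid = (lo + hi) / 2
            ok = count * d_ber(mean, mid) <= threshold
            if increasing:   # upper bound: search on [mean, 1]
                lo, hi = (mid, hi) if ok else (lo, mid)
            else:            # lower bound: search on [0, mean]
                lo, hi = (lo, mid) if ok else (mid, hi)
        return (lo + hi) / 2
    return solve(0.0, mean, False), solve(mean, 1.0, True)
```

By construction both bounds stay in [0, 1], and the confidence interval shrinks as the number of samples grows, in contrast with the unclipped Hoeffding bound of OLOP.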

Moreover, we again use a constant function f; this choice is justified at the end of Section 5. Because this f(m) is lower than OLOP's 4 log M, Figure 1 shows that the bounds are tighter and hence less conservative than those of OLOP, which should increase performance, provided that their associated probability of violation does not invalidate the regret bound of OLOP.

###### Remark 1 (Upper bounds sharpening)

The introduction of the B-values was made necessary in OLOP by the use of Chernoff-Hoeffding confidence bounds, which are not guaranteed to belong to [0, 1]. On the contrary, in KL-OLOP we have U^μ_a(m) ∈ [0, 1] by construction. By Lemma 1, the upper-bound sharpening step in line 1 of Algorithm 1 is now superfluous, as we trivially have B_a(m) = U_a(m) for all a ∈ 𝒜^L.

## 3 Sample complexity

We say that a = Õ(b) if there exist constants α, β > 0 such that a ≤ α (log b)^β b. Let us denote by κ′ the proportion of near-optimal nodes, formally defined in Section 5:

###### Theorem 3.1 (Sample complexity)

We show that KL-OLOP enjoys the same asymptotic regret bounds as OLOP. More precisely, KL-OLOP satisfies:

 E r_n = Õ(n^{−(log 1/γ)/(log κ′)}) if γ√κ′ > 1,  and E r_n = Õ(n^{−1/2}) if γ√κ′ ≤ 1.

## 4 Time and memory complexity

After having considered the sample efficiency of OLOP and KL-OLOP, we now turn to study their time and memory complexities. We will only mention the case of KL-OLOP for ease of presentation, but all results easily extend to OLOP.

Algorithm 1 requires, at each episode, computing and storing in memory the reward upper bounds and U-values of all nodes in the tree 𝒯. Hence, its time and memory complexities are

 C(KL-OLOP) = O(M|𝒯|) = O(M K^L). (6)

The curse of dimensionality brought by the branching factor K and horizon L makes it intractable in practice to actually run KL-OLOP in its original form, even for small problems. However, most of this computation and memory usage is wasted, as with reasonable sample budgets the vast majority of the tree will not actually be explored and hence does not hold any valuable information.

We propose in Algorithm 2 a lazy version of KL-OLOP which only stores and processes the explored subtree, as shown in Figure 2, while preserving the inner workings of the original algorithm.

###### Theorem 4.1 (Consistency)

The set of sequences returned by Algorithm 2 is the same as the one returned by Algorithm 1. In particular, Algorithm 2 enjoys the same regret bounds as in Theorem 3.1.

###### Property 1 (Time and memory complexity)

Algorithm 2 has time and memory complexities of:

 C(Lazy KL-OLOP) = O(K L M²)

The corresponding complexity gain compared to the original Algorithm 1 is:

 C(Lazy KL-OLOP) / C(KL-OLOP) = O(n / K^{L−1})

which highlights that only a subtree corresponding to the sample budget n is processed instead of the whole search tree 𝒯.

###### Proof

At episode m, we compute and store in memory the reward upper bounds and U-values of all nodes in the explored subtree 𝒯_m. Moreover, this subtree is constructed iteratively by adding K nodes at most L times at each episode from 1 to m. Hence, |𝒯_m| = O(KLm). This yields directly C(Lazy KL-OLOP) = ∑_{m=1}^{M} O(|𝒯_m|) = O(KLM²). ∎
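The lazily-grown look-ahead tree of Algorithm 2 can be sketched with an on-demand node structure; this minimal illustration (class and field names are our assumptions, not the paper's implementation) only materialises the prefixes that are actually visited.

```python
class Node:
    """A node of the lazily-grown look-ahead tree: it stores the
    sufficient statistics T_a(m) and S_a(m) of one action prefix, and
    only instantiates a child when that action is actually explored."""
    def __init__(self, num_actions):
        self.count = 0           # T_a(m): visits of this prefix
        self.sum_rewards = 0.0   # S_a(m): rewards at the last transition
        self.children = {}       # action index -> Node, created on demand
        self.num_actions = num_actions

    def get_child(self, action):
        # lazy expansion: the subtree below unvisited actions never exists
        if action not in self.children:
            self.children[action] = Node(self.num_actions)
        return self.children[action]

    def size(self):
        # number of materialised nodes, i.e. |T_m| in the proof above
        return 1 + sum(c.size() for c in self.children.values())
```

After an episode of length L, at most L new nodes are materialised along the played sequence (plus their sibling placeholders, depending on the expansion policy), which is the mechanism behind the O(KLM²) bound.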

## 5 Proof of Theorem 3.1

We follow step by step the pyramidal proof of [2], and adapt it to the Kullback-Leibler upper confidence bound. The adjustments resulting from the change of confidence bounds are highlighted. The proofs of lemmas which are not significantly altered are provided in the Supplementary Material.

We start by recalling their notations. Let 1 ≤ h ≤ L, and let a* ∈ 𝒜^∞ be an optimal sequence, such that V(a*) = V. Considering sequences of actions of length h, we define the subset I_h of near-optimal sequences and the subset J_h of sub-optimal sequences that were near-optimal at depth h − 1:

 I_h = {a ∈ 𝒜^h : V − V(a) ≤ 2γ^{h+1}/(1−γ)},  J_h = {a ∈ 𝒜^h : a_{1:h−1} ∈ I_{h−1} and a ∉ I_h}

By convention, I_0 = {∅}. From the definition of κ′, there exists a constant C such that for any h,

 |I_h| ≤ C κ′^h

Hence, we also have |J_h| ≤ K|I_{h−1}| ≤ KC κ′^{h−1}.

Now, for 1 ≤ h ≤ h′ ≤ L and a ∈ 𝒜^h, we define the set of suffixes of a in J_{h′} that have been played at least a certain number of times:

 P^a_{h,h′}(m) = {b ∈ a𝒜^{h′−h} ∩ J_{h′} : T_b(m) ≥ 2f(m)(h+1)² γ^{2(h′−h+1)} + 1}

and the random variable:

 τ^a_{h,h′}(m) = 1{T_a(m−1) < 2f(m)(h+1)² γ^{2(h′−h+1)} + 1 ≤ T_a(m)}
###### Lemma 2 (Regret and sub-optimal pulls)

The following holds true:

 r_n ≤ 2Kγ^{H+1}/(1−γ) + (3K/M) ∑_{h=1}^{H} ∑_{a∈J_h} (γ^h/(1−γ)) T_a(M)

The rest of the proof is devoted to the analysis of the term ∑_{a∈J_h} T_a(M). The next lemma describes under which circumstances a sub-optimal sequence of actions in J_h can be selected.

###### Lemma 3 (Conditions for sub-optimal pull)

Assume that at episode m + 1 we select a sub-optimal sequence a^{m+1}: there exists 1 ≤ h ≤ L such that a = a^{m+1}_{1:h} ∈ J_h. Then, this implies that one of the following propositions is true:

 U_{a*}(m) < V, (UCB violation)

or

 ∑_{t=1}^{h} γ^t L^μ_{a_{1:t}}(m) ≥ V(a), (LCB violation)

or

 ∑_{t=1}^{h} γ^t (U^μ_{a_{1:t}}(m) − L^μ_{a_{1:t}}(m)) > γ^{h+1}/(1−γ) (Large CI)
###### Proof

As a^{m+1} ∈ a𝒜^{L−h} and because the U-values are monotonically non-increasing along sequences of actions (see Remark 1 and Lemma 1), we have U_a(m) ≥ U_{a^{m+1}}(m). Moreover, by Algorithm 1, a^{m+1} maximises the B-values, which equal the U-values by Remark 1, so U_{a^{m+1}}(m) ≥ U_{a*}(m) and finally U_a(m) ≥ U_{a*}(m).

Assume that (UCB violation) is false, then:

 ∑_{t=1}^{h} γ^t U^μ_{a_{1:t}}(m) + γ^{h+1}/(1−γ) = U_a(m) ≥ U_{a*}(m) ≥ V (7)

Assume that (LCB violation) is false, then:

 ∑_{t=1}^{h} γ^t L^μ_{a_{1:t}}(m) < V(a) (8)

By taking the difference (7) − (8), we obtain:

 ∑_{t=1}^{h} γ^t (U^μ_{a_{1:t}}(m) − L^μ_{a_{1:t}}(m)) + γ^{h+1}/(1−γ) > V − V(a)

But a ∉ I_h, so V − V(a) > 2γ^{h+1}/(1−γ), which yields (Large CI) and concludes the proof. ∎

In the following lemma, for each episode m, we bound the probability of (UCB violation) or (LCB violation) by a desired confidence level, whose choice we postpone until the end of this proof. For now, we simply assume that the function f was picked so that the deviation inequality of [5] below applies. We also denote δ_m ≝ e⌈f(m) log m⌉e^{−f(m)} and Δ_M ≝ ∑_{m=1}^{M} δ_m.

###### Lemma 4 (Boundary crossing probability)

The following holds true, for any 1 ≤ h ≤ L and 1 ≤ m ≤ M:

 P((UCB violation) or (LCB violation) is true) = O((L + h)δ_m)
###### Proof

Since V ≤ ∑_{t=1}^{L} γ^t μ(a*_{1:t}) + γ^{L+1}/(1−γ), we have:

 P((UCB violation)) = P(U_{a*}(m) ≤ V) ≤ P(∑_{t=1}^{L} γ^t U^μ_{a*_{1:t}}(m) ≤ ∑_{t=1}^{L} γ^t μ(a*_{1:t})) ≤ P(∃ 1 ≤ t ≤ L : U^μ_{a*_{1:t}}(m) ≤ μ(a*_{1:t})) ≤ ∑_{t=1}^{L} P(U^μ_{a*_{1:t}}(m) ≤ μ(a*_{1:t}))

In order to bound this quantity, we reduce the question to the application of a deviation inequality. For all 1 ≤ t ≤ L, on the event {U^μ_{a*_{1:t}}(m) ≤ μ(a*_{1:t})} we have U^μ_{a*_{1:t}}(m) < 1. Therefore, for all δ > 0, by definition (2) of U^μ_{a*_{1:t}}(m):

 d(μ̂_{a*_{1:t}}(m), U^μ_{a*_{1:t}}(m) + δ) > f(m)/T_{a*_{1:t}}(m)

As d(μ̂_{a*_{1:t}}(m), ·) is continuous, we have by letting δ → 0 that:

 d(μ̂_{a*_{1:t}}(m), U^μ_{a*_{1:t}}(m)) ≥ f(m)/T_{a*_{1:t}}(m)

Since d(μ̂_{a*_{1:t}}(m), ·) is non-decreasing on [μ̂_{a*_{1:t}}(m), 1] and μ(a*_{1:t}) ≥ U^μ_{a*_{1:t}}(m) on this event,

 d(μ̂_{a*_{1:t}}(m), μ(a*_{1:t})) ≥ d(μ̂_{a*_{1:t}}(m), U^μ_{a*_{1:t}}(m)) ≥ f(m)/T_{a*_{1:t}}(m)

We have thus shown the following inclusion:

 {U^μ_{a*_{1:t}}(m) ≤ μ(a*_{1:t})} ⊆ {μ(a*_{1:t}) > μ̂_{a*_{1:t}}(m) and d(μ̂_{a*_{1:t}}(m), μ(a*_{1:t})) ≥ f(m)/T_{a*_{1:t}}(m)}

Decomposing according to the values of T_{a*_{1:t}}(m), and denoting μ̂_{a*_{1:t},n} the empirical mean reward of a*_{1:t} after its first n plays, yields:

 {U^μ_{a*_{1:t}}(m) ≤ μ(a*_{1:t})} ⊆ ⋃_{n=1}^{m} {μ(a*_{1:t}) > μ̂_{a*_{1:t},n} and d(μ̂_{a*_{1:t},n}, μ(a*_{1:t})) ≥ f(m)/n}

We now apply the deviation inequality provided in Lemma 2 of Appendix A in [5]: for any ε > 1,

 P(⋃_{n=1}^{m} {μ(a*_{1:t}) > μ̂_{a*_{1:t},n} and n d_BER(μ̂_{a*_{1:t},n}, μ(a*_{1:t})) ≥ ε}) ≤ e⌈ε log m⌉ e^{−ε}.

By choosing ε = f(m), we obtain:

 P((UCB violation)) ≤ ∑_{t=1}^{L} e⌈f(m) log m⌉ e^{−f(m)} = O(L δ_m)

The same reasoning gives P((LCB violation)) = O(h δ_m). ∎

###### Lemma 5 (Confidence interval length and number of plays)

Let 1 ≤ h ≤ L, a ∈ J_h and 0 ≤ h′ ≤ h. Then (Large CI) is not satisfied if the following propositions are true:

 ∀ 0 ≤ t ≤ h′, T_{a_{1:t}}(m) ≥ 2f(m)(h+1)² γ^{2(t−h−1)} (9)

and

 T_a(m) ≥ 2f(m)(h+1)² γ^{2(h′−h−1)} (10)
###### Proof

We start by providing an explicit upper bound for the length U^μ_b(m) − L^μ_b(m) of the confidence interval. By Pinsker's inequality:

 d_BER(p, q) ≥ 2(p − q)²

Hence for all C ≥ 0,

 d_BER(p, q) ≤ C ⟹ 2(q − p)² ≤ C ⟹ |q − p| ≤ √(C/2)

And thus, for all b ∈ 𝒜^*, by definition (2)-(3) of U^μ_b(m) and L^μ_b(m):

 U^μ_b(m) − L^μ_b(m) ≤ √(2f(m)/T_b(m))

Now, assume that (9) and (10) are true. Then, we clearly have:

 ∑_{t=1}^{h} γ^t (U^μ_{a_{1:t}}(m) − L^μ_{a_{1:t}}(m)) ≤ ∑_{t=1}^{h′} γ^t √(2f(m)/T_{a_{1:t}}(m)) + ∑_{t=h′+1}^{h} γ^t √(2f(m)/T_{a_{1:t}}(m))
 ≤ (γ^{h+1}/(h+1)) ∑_{t=1}^{h′} 1 + (γ^{h+1}/(h+1)) ∑_{t=h′+1}^{h} γ^{t−h′}
 ≤ (γ^{h+1}/(h+1)) (h′ + γ/(1−γ)) ≤ γ^{h+1}/(1−γ). ∎
###### Lemma 6

Let 1 ≤ h ≤ h′ ≤ L and a ∈ I_h. Then τ^a_{h,h′}(m) = 1 implies that either (UCB violation) or (LCB violation) is satisfied, or the following proposition is true:

 ∃ 1 ≤ t ≤ h′ : |P^{a_{1:t}}_{h,h′}(m)| < γ^{2(t−h′)} (11)
###### Lemma 7

Let and . Then the following holds true,

###### Lemma 8

Let 1 ≤ h ≤ L. The following holds true:

 E ∑_{a∈J_h} T_a(M) = Õ(γ^{−2h} + (κ′)^h (1 + M Δ_M + Δ_M) + (κ′ γ^{−2})^h Δ_M)

Thus, by combining Lemmas 2 and 8, we obtain the bound on the expected simple regret E r_n stated in Theorem 3.1. ∎