1 Introduction
The Monte Carlo Tree Search (MCTS) is a technique popularized by the artificial intelligence (AI) community (Coulom, 2007) for solving sequential decision problems with finite state and action spaces. To avoid searching through an intractably large decision tree, MCTS instead iteratively builds the tree and attempts to focus on regions composed of states and actions that an optimal policy might visit. A heuristic known as the default policy is used to provide Monte Carlo estimates of downstream values, which serve as a guide for MCTS to explore promising regions of the search space. When the allotted computational resources have been expended, the hope is that the best first-stage decision recommended by the partial decision tree is a reasonably good estimate of the optimal decision that would have been implied by the full tree.
The applications of MCTS are broad and varied, but the strategy is traditionally most often applied to gameplay AI (Chaslot et al., 2008). To name a few specific applications, these include Go (Chaslot et al., 2006; Gelly and Silver, 2011; Gelly et al., 2012; Silver et al., 2016), Othello (Hingston and Masek, 2007; Nijssen, 2007; Osaki et al., 2008; Robles et al., 2011), Backgammon (Van Lishout et al., 2007), Poker (Maitrepierre et al., 2008; Van den Broeck et al., 2009; Ponsen et al., 2010), 16×16 Sudoku (Cazenave, 2009), and even general game playing AI (Méhat and Cazenave, 2010). We remark that a characteristic of games is that the transitions from state to state are deterministic; because of this, the standard specification for MCTS deals with deterministic problems. The "Monte Carlo" descriptor in the name of MCTS therefore refers to stochasticity in the default policy. A particularly thorough review of both the MCTS methodology and its applications can be found in Browne et al. (2012).
The adaptive sampling algorithm by Chang et al. (2005), introduced within the operations research (OR) community, leverages a well-known bandit algorithm called UCB (upper confidence bound) for solving MDPs. The UCB approach is also extensively used for successful implementations of MCTS (Kocsis and Szepesvári, 2006). Although the two techniques share similar ideas, the OR community has generally not taken advantage of the MCTS methodology in applications, with the exception of two recent papers. The paper Bertsimas et al. (2014) compares MCTS with rolling horizon mathematical optimization techniques (a standard method in OR) on a large-scale dynamic resource allocation problem, specifically that of tactical wildfire management.
Al-Kanj et al. (2016) applies MCTS to an information-collecting vehicle routing problem, which is an extension of the classical vehicle routing model where the decisions now depend on a belief state. Not surprisingly, both of these problems are intractable via standard Markov decision process (MDP) techniques, and results from these papers suggest that MCTS could be a viable alternative to other approximation methods (e.g., approximate dynamic programming). However,
Bertsimas et al. (2014) finds that MCTS is competitive with rolling horizon techniques only on smaller instances of the problems, and their evidence suggests that MCTS can be quite sensitive to large action spaces. In addition, they observe that large action spaces are more detrimental to MCTS than large state spaces. These observations form the basis of our first research motivation: can we control the action branching factor by making "intelligent guesses" at which actions may be suboptimal? If so, potentially suboptimal actions can be ignored.

Next, let us briefly review the currently available convergence theory. The work of Kocsis and Szepesvári (2006) uses the UCB algorithm to sample actions in MCTS, resulting in an algorithm called UCT (upper confidence trees). A key property of UCB is that every action is sampled infinitely often, and Kocsis and Szepesvári (2006) exploit this to show that the probability of selecting a suboptimal action converges to zero at the root of the tree.
Silver and Veness (2010) use the UCT result as a basis for showing convergence of a variant of MCTS for partially observed MDPs. Couëtoux et al. (2011) extend MCTS for deterministic, finite state problems to stochastic problems with continuous state spaces using a technique called double progressive widening. The paper Auger et al. (2013) provides convergence results for MCTS with double progressive widening under an action sampling assumption. In these papers, the asymptotic convergence of MCTS relies on some form of "exploring every node infinitely often." However, given that the spirit of the algorithm is to build partial trees that are biased towards nearly optimal actions, we believe that an alternative line of thinking deserves further study. Thus, our second research motivation is: can we design a version of MCTS that asymptotically does not expand the entire tree, yet is still optimal?

By far, the most significant recent development in this area is Google DeepMind's development of AlphaGo, the first computer to defeat a professional human player in the game of Go, in which MCTS plays a major role (Silver et al., 2016). The authors state, "The strongest current Go programs are based on MCTS, enhanced by policies that are trained to predict human expert moves." To be more precise, the default policy
used by AlphaGo is carefully constructed through several steps: (1) a classifier to predict expert moves is trained using 29.4 million game positions from 160,000 games on top of a deep convolutional neural network (consisting of 13 layers); (2) the classifier is then played against itself and a policy gradient technique is used to develop a policy that aims to win the game rather than simply mimic human players; (3) another deep convolutional neural network is used to approximate the
value function of the heuristic policy; and (4) a combination of the two neural networks, dubbed the policy and value networks, provides an MCTS algorithm with the default policy and the estimated downstream values. This discussion perfectly illustrates our third research motivation: if such a remarkable amount of effort is used to design a default policy, can we develop techniques to further exploit this heuristic guidance within the MCTS framework?

In this paper, we address each of these questions by proposing a novel MCTS method, called Primal-Dual MCTS (the name is inspired by Andersen and Broadie (2004)), that takes advantage of the information relaxation bound idea (also known as martingale duality) first developed in Haugh and Kogan (2004) and later generalized by Brown et al. (2010). The essence of information relaxation is to relax nonanticipativity constraints (i.e., allow the decision maker to use future information) in order to produce upper bounds on the objective value (assuming a maximization problem). To account for the issue that a naive use of future information can produce weak bounds, Brown et al. (2010) describes a method to penalize the use of future information so that one may obtain a tighter (smaller) upper bound. This is called a dual approach, and it is shown that the value of the upper bound can be made equal to the optimal value if a particular penalty function is chosen that depends on the optimal value function of the original problem. Information relaxation has been used successfully to estimate the suboptimality of policies in a number of application domains, including option pricing (Andersen and Broadie, 2004), portfolio optimization (Brown and Smith, 2011), valuation of natural gas (Lai et al., 2011; Nadarajah et al., 2015), optimal stopping (Desai et al., 2012), and vehicle routing (Goodson et al., 2016). More specifically, the contributions of this paper are as follows.

We propose a new MCTS method called Primal-Dual MCTS that utilizes the information relaxation methodology of Brown et al. (2010) to generate dual upper bounds. These bounds are used when MCTS needs to choose actions to explore (this is known as expansion in the literature). When the algorithm considers performing an expansion step, we obtain sampled upper bounds (i.e., in expectation, they are greater than the optimal value) for a set of potential actions and select an action with an upper bound that is better than the value of the current optimal action. Correspondingly, if all remaining unexplored actions have upper bounds lower than the value of the current optimal action, then we do not expand further. This addresses our first research motivation of reducing the branching factor in a principled way.

We prove that our method converges to the optimal action (and optimal value) at the root node. This holds even though our proposed technique does not preclude the possibility of a partially expanded tree in the limit. By carefully utilizing the upper bounds, we are able to “provably ignore” entire subtrees, thereby reducing the amount of computation needed. This addresses our second research motivation, which extends the current convergence theory of MCTS.

Although there are many ways to construct the dual bound, one special instance of Primal-Dual MCTS uses the default policy (the heuristic for estimating downstream values) to induce a penalty function. This addresses our third research motivation: the default policy can provide actionable information in the form of upper bounds, in addition to its original purpose of estimating downstream values.

Lastly, we present a model of the stochastic optimization problem faced by a single driver who provides transportation for fare-paying customers while navigating a graph. The problem is motivated by the need for ridesharing platforms (e.g., Uber and Lyft) to be able to accurately simulate the operations of an entire ridesharing system/fleet. Understanding human drivers' behaviors is crucial to a smooth integration of platform-controlled driverless vehicles with the traditional contractor model (e.g., in Pittsburgh, Pennsylvania). Our computational results show that Primal-Dual MCTS dramatically reduces the breadth of the search tree when compared to standard MCTS.
The paper is organized as follows. In Section 2, we describe a general model of a stochastic sequential decision problem and review the standard MCTS framework along with the duality and information relaxation procedures of Brown et al. (2010). We present the algorithm, Primal-Dual MCTS, in Section 3, and provide the convergence analysis in Section 4. The ridesharing model and the associated numerical results are discussed in Section 5, and we provide concluding remarks in Section 6.
2 Preliminaries
In this section, we first formulate the mathematical model of the underlying optimization problem as an MDP. Because we are in the setting of decision trees and information relaxations, we need to extend traditional MDP notation with some additional elements. We also introduce the existing concepts, methodologies, and relevant results that are used throughout the paper.
2.1 Mathematical Model
As is common in MCTS, we consider an underlying MDP formulation with a finite horizon, where the set of decision epochs is $\mathcal{T} = \{0, 1, \ldots, T\}$. Let $\mathcal{S}$ be a state space and $\mathcal{A}$ be an action space, and we assume a finite state and action setting: $|\mathcal{S}| < \infty$ and $|\mathcal{A}| < \infty$. The set of feasible actions for state $s$ is $\mathcal{A}(s)$, a subset of $\mathcal{A}$. The set $\{(s, a) : s \in \mathcal{S},\, a \in \mathcal{A}(s)\}$ contains all feasible state-action pairs.

The dynamics from one state to the next depend on the action $a_t$ taken at time $t$ and an exogenous (i.e., independent of states and actions) random process $\{W_t\}$ taking values in a finite space $\mathcal{W}$. For simplicity, we assume that $W_1, W_2, \ldots, W_T$ are independent across time $t$. The transition function is given by $S_{t+1} = f(S_t, a_t, W_{t+1})$. We denote the deterministic initial state by $S_0 = s_0$ and let $\{S_t\}_{t=0}^{T}$ be the random process describing the evolution of the system state. To distinguish from the random variable $S_t$, we shall refer to a particular element of the state space by lowercase variables, e.g., $s \in \mathcal{S}$. The contribution (or reward) function at stage $t$ is given by $c_t(s, a, w)$. For a fixed state-action pair $(s, a)$, the contribution is the random quantity $c_t(s, a, W_{t+1})$, which we assume is bounded.

Because there are a number of other "policies" that the MCTS algorithm takes as input parameters (to be discussed in Section 2.2), we call the main MDP policy of interest the operating policy. Let $\Pi$ be the set of all policies for the MDP, with a generic element $\pi = (\pi_0, \ldots, \pi_{T-1})$. Each decision function $\pi_t$ is a deterministic map from the state space to the action space, such that $\pi_t(s) \in \mathcal{A}(s)$ for any state $s$. Finally, we define the objective function, which is to maximize the expected cumulative contribution over the finite time horizon:

$$\max_{\pi \in \Pi}\; \mathbf{E} \left[ \sum_{t=0}^{T-1} c_t\bigl(S_t, \pi_t(S_t), W_{t+1}\bigr) \right]. \quad (1)$$
Let $V_t^*(s)$ be the optimal value function at state $s$ and time $t$. It can be defined via the standard Bellman optimality recursion:

$$V_t^*(s) = \max_{a \in \mathcal{A}(s)}\; \mathbf{E}\left[ c_t(s, a, W_{t+1}) + V_{t+1}^*\bigl(f(s, a, W_{t+1})\bigr) \right].$$

The state-action formulation of the Bellman recursion is also necessary for the purposes of MCTS, as the decision tree contains both state and state-action nodes. The state-action value function is defined as:

$$Q_t^*(s, a) = \mathbf{E}\left[ c_t(s, a, W_{t+1}) + V_{t+1}^*\bigl(f(s, a, W_{t+1})\bigr) \right].$$

For consistency, it is also useful to let $V_T^*(s) = 0$ and $Q_T^*(s, a) = 0$ for all $(s, a)$. It thus follows that $V_t^*(s) = \max_{a \in \mathcal{A}(s)} Q_t^*(s, a)$. Likewise, the optimal policy $\pi^*$ from the set $\Pi$ is characterized by $\pi_t^*(s) \in \arg\max_{a \in \mathcal{A}(s)} Q_t^*(s, a)$.
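On a small instance with known transition probabilities, the recursions above can be computed by backward induction. The following sketch is purely illustrative: the two-state, two-action instance and all numbers in it are invented for the example, not taken from the text.

```python
# Backward induction for a finite-horizon MDP (illustrative toy instance).
# P[s][a][s'] are transition probabilities; R[s][a] are expected rewards.
T = 3
S = [0, 1]
A = [0, 1]
P = {0: {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}},
     1: {0: {0: 0.5, 1: 0.5}, 1: {0: 0.7, 1: 0.3}}}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.0, 1: 2.0}}

# V[t][s]: optimal value-to-go; Q[t][(s, a)]: state-action values.
V = [{s: 0.0 for s in S} for _ in range(T + 1)]   # terminal condition V_T = 0
Q = [dict() for _ in range(T)]
for t in reversed(range(T)):
    for s in S:
        for a in A:
            # Q_t(s, a) = r(s, a) + E[V_{t+1}(s')]
            Q[t][(s, a)] = R[s][a] + sum(P[s][a][sp] * V[t + 1][sp] for sp in S)
        V[t][s] = max(Q[t][(s, a)] for a in A)    # Bellman optimality step
```

The terminal condition and the relation between V and the maximum over Q mirror the definitions above; MCTS can be viewed as approximating this computation when the full sweep over states is intractable.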
It is also useful for us to define the value of a particular operating policy $\pi$, starting from a state $s$ at time $t$, given by the value function $V_t^\pi(s)$. If we let $V_T^\pi(s) = 0$, then the following recursion holds:

$$V_t^\pi(s) = \mathbf{E}\left[ c_t\bigl(s, \pi_t(s), W_{t+1}\bigr) + V_{t+1}^\pi\bigl(f(s, \pi_t(s), W_{t+1})\bigr) \right]. \quad (2)$$

Similarly, we have

$$Q_t^\pi(s, a) = \mathbf{E}\left[ c_t(s, a, W_{t+1}) + V_{t+1}^\pi\bigl(f(s, a, W_{t+1})\bigr) \right], \quad (3)$$

the state-action value functions for a given operating policy $\pi$.
Suppose we are at a fixed time $t$. Due to the notational needs of information relaxation, let $s_{t'}(s, \mathbf{a}, \mathbf{w})$ be the deterministic state reached at time $t' > t$, given that we are in state $s$ at time $t$, implement a fixed sequence of actions $\mathbf{a} = (a_t, \ldots, a_{t'-1})$, and observe a fixed sequence of exogenous outcomes $\mathbf{w} = (w_{t+1}, \ldots, w_{t'})$. For succinctness, the time subscripts have been dropped from the vector representations. Similarly, let $s_{t'}(s, \pi, \mathbf{w})$ be the deterministic state reached at time $t'$ if we follow a fixed policy $\pi$.

Finally, we need to refer to the future contributions starting from time $t$, state $s$, and a sequence of exogenous outcomes $\mathbf{w}$. For convenience, we slightly abuse notation and use two versions of this quantity, one using a fixed sequence of actions $\mathbf{a}$ and another using a fixed policy $\pi$:

$$h_t(s, \mathbf{a}, \mathbf{w}) = \sum_{t'=t}^{T-1} c_{t'}\bigl(s_{t'}(s, \mathbf{a}, \mathbf{w}),\, a_{t'},\, w_{t'+1}\bigr), \qquad h_t(s, \pi, \mathbf{w}) = \sum_{t'=t}^{T-1} c_{t'}\bigl(s_{t'}(s, \pi, \mathbf{w}),\, \pi_{t'}(s_{t'}),\, w_{t'+1}\bigr).$$

Therefore, if we define the random process $\mathbf{W} = (W_{t+1}, \ldots, W_T)$, then the quantities $h_t(s, \mathbf{a}, \mathbf{W})$ and $h_t(s, \pi, \mathbf{W})$ represent the random downstream cumulative reward starting at state $s$ and time $t$, following a deterministic sequence of actions or a policy. For example, the objective function of the MDP given in (1) can be rewritten more concisely as $\max_{\pi \in \Pi} \mathbf{E}\, h_0(s_0, \pi, \mathbf{W})$.
2.2 Monte Carlo Tree Search
The canonical MCTS algorithm iteratively grows and updates a decision tree, using the default policy as a guide towards promising subtrees. Because sequential systems evolve from a (pre-decision) state $S_t$, to an action $a_t$, to a post-decision state or a state-action pair $(S_t, a_t)$, to new information $W_{t+1}$, and finally, to another state $S_{t+1}$, there are two types of nodes in a decision tree: state nodes (or "pre-decision states") and state-action nodes (or "post-decision states"). The layers of the tree are chronological and alternate between these two types of nodes. A child of a state node is a state-action node connected by an edge that represents a particular action. Similarly, a child of a state-action node is a state node for the next stage, where the edge represents an outcome of the exogenous information process $W_{t+1}$.
Since we are working within the decision tree setting, it is necessary to introduce some additional notation that departs from the traditional MDP style. A state node is represented by an augmented state that contains the entire path down the tree from the root node :
where and . Let be the set of all possible (representing all possible paths to states at time ). A state-action node is represented via the notation where . Similarly, let be the set of all possible . We can take advantage of the Markovian property along with the fact that any node or contains information about to write (again, a slight abuse of notation)
At iteration of MCTS, each state node is associated with a value function approximation and each state-action node is associated with the state-action value function approximation . Moreover, we use the following shorthand notation:
There are four main phases in the MCTS algorithm: selection, expansion, simulation, and backpropagation
(Browne et al., 2012). Oftentimes, the first two phases are together called the tree policy because they traverse and expand the tree; it is in these two phases where we will introduce our new methodology. Let us now summarize the steps of MCTS while employing the double progressive widening (DPW) technique (Couëtoux et al., 2011) to control the branching at each level of the tree. As its name suggests, DPW slowly expands the branching factor of the tree, at both state nodes and state-action nodes. The following steps summarize MCTS at a particular iteration.
Selection. We are given a selection policy, which determines a path down the tree at each iteration. When no progressive widening is needed, the algorithm traverses the tree until it reaches a leaf node, i.e., an unexpanded state node, and proceeds to the simulation step. On the other hand, when progressive widening is needed, the traversal is performed until an expandable node, i.e., one for which there exists a child that has not yet been added to the tree, is reached. This could be either a state node or a stateaction node; the algorithm now proceeds to the expansion step.

Expansion. We now utilize a given expansion policy to decide which child to add to the tree. The simplest method, of course, is to add an action at random or add an exogenous state transition at random. Assuming that the expansion of a state-action node always follows the expansion of a state node, we are now at a leaf state node.

Simulation. The aforementioned default policy is now used to generate a sample of the value function evaluated at the current state node. The estimate is constructed using a sample path of the exogenous information process. This step of MCTS is also called a rollout.

Backpropagation. The last step is to recursively update the values up the tree until the root node is reached: for state-action nodes, a weighted average is performed on the values of its child nodes to update , and for state nodes, a combination of a weighted average and maximum of the values of its child nodes is taken to update . These operations correspond to a backup operator discussed in Coulom (2007) that achieves good empirical performance. We now move on to the next iteration by starting once again with the selection step.
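The four phases can be assembled into a single iteration loop. The sketch below is only a schematic of the phase ordering on an invented toy problem: the deterministic transitions, the payoff function, the uniform-random default policy, UCB1 selection, and the running-average backup are all simplifying assumptions for illustration, not the full algorithm described in this paper.

```python
import math
import random

random.seed(0)

# Toy problem: binary actions over T stages, deterministic transitions,
# reward collected only at the terminal state (all numbers invented).
T = 3

def step(state, action):
    return state + (action,)

def terminal_reward(state):
    return sum(state) + 0.5 * state[-1]

class Node:
    def __init__(self, state):
        self.state, self.children = state, {}
        self.visits, self.value = 0, 0.0

def rollout(state):
    # Simulation phase: a uniform-random default policy completes the
    # trajectory and returns a sampled downstream value.
    while len(state) < T:
        state = step(state, random.choice((0, 1)))
    return terminal_reward(state)

def select_child(node):
    # Selection phase: UCB1 over already-expanded children.
    return max(node.children.values(),
               key=lambda c: c.value + math.sqrt(2 * math.log(node.visits) / c.visits))

def mcts_iteration(root):
    path, node = [root], root
    # Selection: descend while the node is fully expanded and non-terminal.
    while len(node.state) < T and len(node.children) == 2:
        node = select_child(node)
        path.append(node)
    # Expansion: add one untried action (if the node is not terminal).
    if len(node.state) < T:
        a = next(a for a in (0, 1) if a not in node.children)
        node.children[a] = Node(step(node.state, a))
        node = node.children[a]
        path.append(node)
    # Simulation from the leaf, then backpropagation up the path.
    reward = rollout(node.state)
    for n in path:
        n.visits += 1
        n.value += (reward - n.value) / n.visits   # running-average backup

root = Node(())
for _ in range(200):
    mcts_iteration(root)
```

Each iteration performs one selection descent, at most one expansion, one rollout, and one backpropagation pass; after the budget is spent, the root child with the highest value estimate would be recommended.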
Once a prespecified number of iterations have been run, the best action out of the root node is chosen for implementation. After landing in a new state in the real system, MCTS can be run again with the new state as the root node. A practical strategy is to use the relevant subtree from the previous run of MCTS to initialize the new process (Bertsimas et al., 2014).
2.3 Information Relaxation Bounds
We next review the information relaxation duality ideas from Brown et al. (2010); see also Brown and Smith (2011) and Brown and Smith (2014). Here, we adapt the results of Brown et al. (2010) to our setting, where we require the bounds to hold for arbitrary subproblems of the MDP. Specifically, we state the theorems from the point of view of a specific time and initial state-action pair . Also, we focus on the perfect information relaxation, where one assumes full knowledge of the future in order to create upper bounds. In this case, we have
which means that the value achieved by the optimal policy starting from time is upper bounded by the value of the policy that selects actions using perfect information. As we described previously, the main idea of this approach is to relax nonanticipativity constraints to provide upper bounds. Because these bounds may be quite weak, they are subsequently strengthened by imposing penalties for the use of future information. To be more precise, we would like to subtract away a penalty defined by a function so that the right-hand side is decreased to: .
Consider the subproblem (or subtree) starting in stage and state . A dual penalty is a function that maps an initial state, a sequence of actions , and a sequence of exogenous outcomes to a penalty . As we did in the definition of , the same quantity is written when the sequence of actions is generated by a policy . The set of dual feasible penalties for a given initial state are those that do not penalize admissible policies; it is given by the set
(4) 
where . Therefore, the only “primal” policies (i.e., policies for the original MDP) for which a dual feasible penalty could assign positive penalty in expectation are those that are not in .
We now state a theorem from Brown et al. (2010) that illuminates the dual bound method. The intuition is best described from a simulation point of view: we sample an entire future trajectory of the exogenous information and, using full knowledge of this information, the optimal actions are computed. It is clear that after taking the average of many such trajectories, the corresponding averaged objective value will be an upper bound on the value of the optimal (nonanticipative) policy. The dual penalty is simply a way to improve this upper bound by penalizing the use of future information; the only property required in the proof of Theorem 1 is the definition of dual feasibility. The proof is simple and we repeat it here so that we can state a small extension later in the paper (in Proposition 1). The right-hand side of the inequality below is a penalized perfect information relaxation.
Theorem 1 (Weak Duality, Brown et al. (2010)).
Fix a stage and initial state . Let be a feasible policy and be a dual feasible penalty, as defined in (4). It holds that
(5) 
where .
Proof.
By definition, . Thus, it follows by dual feasibility that
The second inequality follows because a policy that uses future information achieves a value at least as high as that of any admissible policy. In other words, the set of admissible policies is contained within the set of policies that are not constrained by nonanticipativity. ∎
Note that the left-hand side of (5) is known as the primal problem and the right-hand side is the dual problem, so it is easy to see that the theorem is analogous to classical duality results from linear programming. The next step, of course, is to identify some dual feasible penalties. For each $t$, let $\nu_t : \mathcal{S} \to \mathbf{R}$ be any function and define

$$z_t(s, a, w) = \nu_{t+1}\bigl(f(s, a, w)\bigr) - \mathbf{E}\left[\nu_{t+1}\bigl(f(s, a, W_{t+1})\bigr)\right]. \quad (6)$$

Brown et al. (2010) suggests the following additive form for a dual penalty:

$$z^{\nu}_t(s, \mathbf{a}, \mathbf{w}) = \sum_{t'=t}^{T-1} z_{t'}\bigl(s_{t'}, a_{t'}, w_{t'+1}\bigr), \quad (7)$$

and it is shown in the paper that this form is indeed dual feasible. We refer to this as the dual penalty generated by $\nu$. The standard dual upper bound is obtained without penalizing, i.e., by setting $\nu_t \equiv 0$ for all $t$. As we will show in our empirical results on the ridesharing model, this upper bound is simple to implement and may be quite effective.
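To make the standard (zero-penalty) dual bound concrete, the sketch below estimates it on an invented two-stage problem in which a stage reward equal to the exogenous outcome can be claimed only before that outcome is observed; the hindsight ("inner") problem on each sampled trajectory is then trivial to solve. All model details here are assumptions for illustration.

```python
import random

random.seed(1)

T = 2
W_SUPPORT = (-1.0, 1.0)       # exogenous outcomes, equally likely (toy model)

# Toy contribution: at each stage, action a = 1 earns the not-yet-seen w,
# while a = 0 earns nothing. A nonanticipative policy must commit to a
# before observing w, so its best expected stage reward is max(0, E[w]) = 0
# and the optimal value is V* = 0.
V_STAR = 0.0

def hindsight_value(w_path):
    # Inner deterministic problem: with the whole trajectory known,
    # pick a = 1 exactly when w > 0.
    return sum(max(0.0, w) for w in w_path)

# Standard (zero-penalty) dual bound: average of hindsight-optimal values
# over many sampled trajectories of the exogenous process.
paths = [[random.choice(W_SUPPORT) for _ in range(T)] for _ in range(5000)]
dual_bound = sum(hindsight_value(p) for p in paths) / len(paths)
# Analytically, E[sum_t max(0, w_t)] = T * 0.5 = 1.0, strictly above V* = 0.
```

The averaged hindsight value (about 1.0 here) strictly exceeds the nonanticipative optimum of 0, illustrating both the validity and the potential looseness of the unpenalized bound that a good penalty is meant to tighten.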
However, in situations where the standard dual upper bound is too weak, a good choice of penalty can generate tighter bounds. It is shown that if the optimal value function is used to generate the penalty in (6), then the best upper bound is obtained. In particular, a form of strong duality holds: when Theorem 1 is invoked using the optimal policy and this penalty, the inequality (5) is achieved with equality. The interpretation is that the penalty can be thought of informally as the "value gained from knowing the future." Thus, the intuition behind this result is as follows: if one knows precisely how much can be gained by using future information, then a perfect penalty can be constructed so as to recover the optimal value of the primal problem.
However, strong duality is hard to exploit in practical settings, given that both sides of the equation require knowledge of the optimal policy. Instead, a viable strategy is to use approximate value functions on the right-hand side of (6) in order to obtain "good" upper bounds on the optimal value function on the left-hand side of (5). This is where we can potentially take advantage of the default policy of MCTS to improve upon the standard dual upper bound; the value function associated with this policy can be used to generate a dual feasible penalty. We now state a specialization of Theorem 1 that is useful for our MCTS setting.
Proposition 1 (State-Action Duality).
Proof.
Choose a policy (restricted to stage onwards) such that the first decision function maps to and the remaining decision functions match those of the optimal policy :
Using this policy and the separability of given in (7), an argument analogous to the proof of Theorem 1 can be used to obtain the result. ∎
For convenience, let us denote the dual upper bound generated using the functions by
Therefore, the dual bound can be simply stated as . For a state-action node in the decision tree, we use the notation . The proposed algorithm will keep estimates of the upper bound on the right-hand side of (8) in order to make tree expansion decisions. As the algorithm progresses, the estimates of the upper bound are refined using a stochastic gradient method.
3 Primal-Dual MCTS Algorithm
In this section, we formally describe the proposed Primal-Dual MCTS algorithm. The core of the algorithm is MCTS with double progressive widening (Couëtoux et al., 2011), except in our case, the dual bounds generated by the functions play a specific role in the expansion step. Let be the set of all possible state nodes and let be the set of all possible state-action nodes. At any iteration , our tree is described by the set of expanded state nodes, the set of expanded state-action nodes, the value function approximations and , the estimated upper bounds , the number of visits to expanded nodes, and the number of information relaxation upper bounds, or "lookaheads," performed on unexpanded nodes. The terminology "lookahead" is used to mean a stochastic evaluation of the dual upper bound given in Proposition 1. In other words, we "look ahead" into the future and then exploit this information (thereby relaxing nonanticipativity) to produce an upper bound.
The root node of , for all , is . Recall that any node contains full information regarding the path from the initial state . Therefore, in this paper, the edges of the tree are implied and we do not need to explicitly refer to them; however, we will use the following notation. For a state node , let be the child state-action nodes (i.e., already expanded nodes) of at iteration (dependence on is suppressed) and be the unexpanded state-action nodes of :
Furthermore, we write .
Similarly, for , let be the child state nodes of and be the unexpanded state nodes of :
For mathematical convenience, we have , , , , and taking the value zero for all elements of their respective domains. For each and , let and represent the estimates of and , respectively. Note that although is defined (and equals zero) prior to the expansion of , it does not gain meaning until . The same holds for the other quantities.
Each unexpanded state node is associated with an estimated dual upper bound . A state node is called expandable on iteration if is nonempty. Similarly, a state-action node is expandable on iteration if is nonempty. In addition, let and count the number of times that and are visited by the selection policy (so becomes positive after expansion). The tally counts the number of dual lookaheads performed at each unexpanded state. We also need stepsizes and to track the estimates generated by for leaf nodes and for leaf nodes .
Lastly, we define two sets of progressive widening iterations, and . When , we consider expanding the state node (i.e., adding a new state-action node stemming from ), and when , we consider expanding the state-action node (i.e., adding a downstream state node stemming from ).
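One common way to instantiate such widening sets is a power-law schedule in the visit count, in the spirit of Couëtoux-style DPW; the constants C and α below are assumed tuning parameters, as the text does not fix a particular schedule.

```python
import math

def needs_widening(num_visits, num_children, C=1.0, alpha=0.5):
    """Illustrative DPW rule: a node is considered for expansion only while
    its child count is below ceil(C * visits^alpha). C and alpha are
    tuning parameters (assumed defaults, not prescribed by the text)."""
    return num_children < math.ceil(C * max(num_visits, 1) ** alpha)

# Early on, widening triggers often; as visits accumulate, it slows, so the
# branching factor grows sublinearly in the number of visits.
```

Under this schedule, the iterations at which a given node satisfies the rule play the role of its progressive widening set.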
3.1 Selection
Let be a selection policy that steers the algorithm down the current version of the decision tree. It is independent from the rest of the system and depends only on the current state of the decision tree. We use the same notation for both types of nodes: for and , we have
Let us emphasize that contains no logic for expanding the tree and simply provides a path down the partial tree . The most popular MCTS implementations (Chang et al., 2005; Kocsis and Szepesvári, 2006) use the UCB1 policy (Auer et al., 2002) when acting on state nodes. The UCB1 policy balances exploration and exploitation by selecting the state-action node by solving
$$\max_{a}\; \bar{Q}^n(x, a) + c\, \sqrt{\frac{\log N^n(x)}{N^n(x, a)}}, \quad (9)$$

where $\bar{Q}^n(x, a)$ denotes the current state-action value estimate, $N^n(x)$ and $N^n(x, a)$ are visit counts, and $c > 0$ is an exploration constant.
The second term is an "exploration bonus" which decreases as nodes are visited. Other multi-armed bandit policies may also be used; for example, we may instead prefer to implement an $\epsilon$-greedy policy, where we exploit with probability $1 - \epsilon$ and explore with probability (w.p.) $\epsilon$:
When acting on state-action nodes, selects a downstream state node; for example, given , the selection policy may select with probability , normalized by the total probability of reaching expanded nodes . We require the condition that once all downstream states are expanded, the sampling probabilities match the transition probabilities of the original MDP. We now summarize the selection phase of Primal-Dual MCTS.

Start at the root node and descend the tree using the selection policy until one of the following is reached: Condition (S1), an expandable state node with ; Condition (S2), an expandable state-action node with ; or Condition (S3), a leaf state node.

If the selection policy ends with conditions (S1) or (S2), then we move on to the expansion step. Otherwise, we move on to the simulation and backpropagation steps.
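The two selection rules above can be sketched compactly; the exploration constant, the value of ε, and the toy child statistics below are illustrative assumptions.

```python
import math
import random

def ucb1_select(children, parent_visits, c=math.sqrt(2)):
    """UCB1 as in (9). children: action -> (mean_value, visit_count),
    with all counts >= 1; c is an assumed exploration constant."""
    def score(a):
        mean, n = children[a]
        return mean + c * math.sqrt(math.log(parent_visits) / n)
    return max(children, key=score)

def epsilon_greedy_select(children, eps=0.1, rng=random.Random(0)):
    """Epsilon-greedy alternative: exploit w.p. 1 - eps, explore w.p. eps."""
    if rng.random() < eps:
        return rng.choice(sorted(children))                   # explore
    return max(children, key=lambda a: children[a][0])        # exploit

# Toy statistics: a1 has the higher mean, but a2 has been tried only twice.
children = {"a1": (1.0, 10), "a2": (0.8, 2)}
# Under UCB1, a2's large exploration bonus can outweigh a1's higher mean.
```

This illustrates why UCB1 revisits rarely-tried children early on, while the ε-greedy rule explores at a constant rate regardless of visit counts.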
3.2 Expansion
Case 1: First, suppose that on iteration , the selection phase of the algorithm returns to be expanded, for some . Due to the possibly large set of unexpanded actions, we first sample a subset of candidate actions (e.g., a set of actions selected uniformly at random from those in that have not been expanded). Application-specific heuristics may be employed when sampling the set of candidates. Then, for each candidate, we perform a lookahead to obtain an estimate of the perfect information relaxation dual upper bound. The lookahead is evaluated by solving a deterministic optimization problem on one sample path of the random process . In the most general case, this is a deterministic dynamic program. However, other formulations may be more natural and/or easier to solve for some applications. If the contribution function is linear, the deterministic problem could be as simple as a linear program (for example, the asset acquisition problem class described in Nascimento and Powell (2009)). See also Al-Kanj et al. (2016) for an example where the information relaxation is a mixed-integer linear program. The resulting stochastic upper bound is then smoothed with the previous estimate via the stepsize . We select the action with the highest upper bound to expand, but only if the upper bound is larger than the current best value function . Otherwise, we skip the expansion step because our estimates tell us that none of the candidate actions are optimal. The following steps comprise the expansion phase of Primal-Dual MCTS for a state node .

Sample a subset of candidate actions according to a prespecified sampling policy and consider those actions that are unexpanded:

Obtain a single sample path of the exogenous information process. For each candidate action , compute the optimal value of the deterministic optimization “inner” problem of (8):

For each candidate action , smooth the newest observation of the upper bound with the previous estimate via a stochastic gradient step:
(10) State-action nodes elsewhere in the tree that are not considered for expansion retain the same upper bound estimates, i.e., .

Let be the candidate action with the best dual upper bound. If no candidate is better than the current best, i.e., , then we skip this potential expansion and return to the selection phase to continue down the tree.

Otherwise, if the candidate is better than the current best, i.e., , then we expand action by adding the node as a child of . We then immediately sample a downstream state using from the set and add it as a child of (every state-action expansion triggers a state expansion). After doing so, we are ready to move on to the simulation and backpropagation phase from the leaf node .
Case 2: Now suppose that we entered the expansion phase via a state-action node. In this case, we simply sample a single downstream state satisfying the sampling condition and add it as a child of the state-action node. Next, we continue to the simulation and backpropagation phase from the new leaf node.
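Case 2 amounts to a single sampling step. A sketch, under the assumption that the sampling condition restricts attention to downstream states not already in the tree:

```python
import random

def expand_state_action_node(successors, children, rng):
    """Case-2 sketch: sample one downstream state that is not already a
    child of this state-action node and add it as a new leaf child.
    Returns the new leaf, or None if every successor is already expanded."""
    fresh = [s for s in successors if s not in children]
    if not fresh:
        return None
    leaf = rng.choice(fresh)   # uniform choice stands in for the
    children.append(leaf)      # application's transition sampling
    return leaf
```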
3.3 Simulation and Backpropagation
We are now at a leaf node of the tree. At this point, we cannot descend further, so we proceed to the simulation and backpropagation phase. The last two steps of the algorithm are relatively simple: first, we run the default policy to produce an estimate of the leaf node's value, and then we update the values "up" the tree via equations resembling (2) and (3). The steps are as follows.

Obtain a single sample path of the exogenous information process and, using the default policy, compute the value estimate in (11). If the leaf node corresponds to the final stage, the value estimate is simply the terminal value of zero. The value of the leaf node is then updated by taking a stochastic gradient step that smooths the new observation with previous observations.
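The leaf-value update uses the same smoothing recursion as the dual-bound update in the expansion phase. A sketch (the harmonic stepsize schedule below is an assumed example):

```python
def smooth(estimate, observation, stepsize):
    """Stochastic gradient step: move the running estimate a fraction
    `stepsize` of the way toward the newest Monte Carlo observation."""
    return (1.0 - stepsize) * estimate + stepsize * observation

# With the harmonic stepsize 1/n, the recursion reproduces the sample
# mean of the observations seen so far.
v = 0.0
for n, obs in enumerate([2.0, 4.0, 6.0], start=1):
    v = smooth(v, obs, 1.0 / n)
```

After the loop, `v` equals the running average 4.0; other stepsize schedules trade off responsiveness to new observations against noise.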

After simulation, we backpropagate the information up the tree. Working backwards from the leaf node, we extract a "path," i.e., the sequence of state and state-action nodes traversed on this iteration, starting at the root. For each node on this path, the backpropagation equations are:
(12)–(14), where the final update depends on a mixture parameter. State and state-action nodes that are not part of the path down the tree retain their values, as stated in (15).
The first update (12) maintains the estimates of the state-action value function as weighted averages of child node values. The second update (13) performs a similar recursive averaging scheme for the state nodes. Finally, the third update (14) sets the value of a state node to a mixture between the weighted average of its child state-action node values and the maximum value among those children.
The naive update for the state node value is to simply take the maximum over the child state-action nodes (i.e., following the Bellman equation), removing the need to track a separate averaged value. Empirical evidence from Coulom (2007), however, shows that this type of update can create instability; furthermore, the author states that "the mean operator is more accurate when the number of simulations is low, and the max operator is more accurate when the number of simulations is high." Taking this recommendation, we require the mixture parameter to tend toward the max operator so that asymptotically we achieve the Bellman update, yet allow the averaging scheme to create stability in the earlier iterations. The update (14) is similar to the "mix" backup suggested by Coulom (2007), which achieves superior empirical performance.
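The mixture update can be sketched as follows; the schedule `n / (n + c)` is an assumed example of a mixture parameter tending to 1, so the backup approaches the pure Bellman (max) update asymptotically while averaging dominates early on:

```python
def mix_backup(avg_children, max_children, lam):
    """"Mix" backup in the spirit of update (14): blend the weighted
    average of child state-action values with their maximum."""
    return (1.0 - lam) * avg_children + lam * max_children

def mixture_parameter(n, c=100.0):
    """Assumed schedule: close to 0 for small iteration counts n
    (stable averaging), tending to 1 as n grows (Bellman max)."""
    return n / (n + c)
```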
The end of the simulation and backpropagation phase marks the conclusion of one iteration of the Primal-Dual MCTS algorithm. We now return to the root node and begin a new selection phase. Algorithm 1 gives a concise summary of Primal-Dual MCTS. Moreover, Figure 1 illustrates some aspects of the algorithm and emphasizes two key properties:

The utilization of dual bounds allows entire subtrees to be ignored (even in the limit), thereby providing potentially significant computational savings.

The optimal action at the root node can be found without its subtree necessarily being fully expanded.
We will analyze these properties in the next section, but we first present an example that illustrates in detail the steps taken during the expansion phase.
Example 1 (Shortest Path with Random Edge Costs).
In this example, we consider applying Primal-Dual MCTS to a shortest path problem with random edge costs (note that the algorithm is stated for maximization, while shortest path is a minimization problem). The graph used for this example is shown in Figure 1(a). An agent starts at vertex 1 and aims to reach vertex 6 at minimum expected cumulative cost. The cost of each edge is random, independent of the costs of other edges, and independent across time. At every decision epoch, the agent chooses an edge to traverse out of the current vertex without knowing the realized costs. After the decision is made, a realization of edge costs is revealed and the agent incurs the one-stage cost associated with the traversed edge.
The means of the cost distributions are also shown in Figure 1(a). The optimal path achieves an expected cumulative cost of 3.5. Consider applying Primal-Dual MCTS at vertex 1, meaning that we are choosing among the four edges leaving vertex 1. The shortest expected cumulative costs after committing to each of the three suboptimal first edges are 4, 5, and 5.5, respectively.