1 Introduction
Monte-Carlo tree search (MCTS) (Coulom, 2006; Browne et al., 2012), or more specifically its most common variant UCT (Upper Confidence Trees; see Section 2) (Kocsis and Szepesvári, 2006), has seen great successes recently and has propelled, especially in combination with deep neural networks, the performance of computer Go past professional levels (Silver et al., 2016, 2017). The robust nature of MCTS, versus a traditional approach like depth-first search with alpha-beta pruning, has not only enabled a leapfrog in performance in computer Go, but has also led to its utilization in other games where it is difficult to evaluate states, as well as in other domains (Browne et al., 2012).
However, MCTS is known to suffer from slow convergence in certain situations (Coquelin and Munos, 2007), in particular when the precise calculation of a narrow tactical sequence is critical for success. For example, in board games, (Ramanujan et al., 2010) define a level-$k$ search trap for a player as a state of the game in which the opponent has a guaranteed $k$-move winning strategy. More relevantly, they show through a series of experiments that MCTS performs poorly even in shallow traps, in contrast to regular minimax search; see also (Ramanujan et al., 2011; Ramanujan and Selman, 2011).
To better understand this phenomenon, we take a closer look at the update rule
(1)  $\bar{X}_n = \bar{X}_{n-1} + \dfrac{1}{n}\left(x_n - \bar{X}_{n-1}\right)$
which is performed during the backpropagation phase of MCTS. Here, $x_n$ denotes the return of the $n$-th simulation through a state and $\bar{X}_n$ the current estimate of the value of that state, so that the estimate is the simple average of all returns accrued upon visiting that state. Proceeding, we discuss various methods which seek to improve backpropagation by challenging the basic assumptions implied by (1):

Value estimation by averaging returns:
Instead of updating a parent node’s value with that of its MAX (MIN) child as in minimax search, backpropagation in MCTS averages all returns to obtain a good signal in noisy environments (this is equivalent to setting the value of the parent node to be the weighted average (by visits) of its children’s values). 
Stationarity:
The returns are assumed to follow a stationary distribution.
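The averaging backup in (1) can be sketched as follows. This is a minimal illustration with names of our own choosing, not taken from any particular MCTS implementation:

```python
class Node:
    """Minimal MCTS node holding a running average of returns."""
    def __init__(self):
        self.visits = 0
        self.value = 0.0  # mean of all returns backed up through this node

    def backup(self, ret):
        # Incremental form of the simple average in (1):
        # X_n = X_{n-1} + (x_n - X_{n-1}) / n
        self.visits += 1
        self.value += (ret - self.value) / self.visits

node = Node()
for r in [1.0, 0.0, 1.0, 1.0]:
    node.backup(r)
print(node.value)  # mean of the four returns: 0.75
```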
With regard to the first point, one of the first published works on MCTS (Coulom, 2006) posits that taking the value of the best child leads to an overestimation (cf. (Blumenthal and Cohen, 1968)) of the value of a MAX node, whereas taking the weighted average (by number of visits) of the children's values leads to an underestimation. The paper proposes using an interpolated value, with weights dependent on the current number of visits of the best child:
(2)  $v \leftarrow \dfrac{W\,\bar{v} + n_{\mathrm{best}}\, v_{\mathrm{best}}}{W + n_{\mathrm{best}}}$
Here, $n_{\mathrm{best}}$ and $v_{\mathrm{best}}$ respectively denote the number of visits and backed-up value of the best child, $\bar{v}$ denotes the weighted average of the children's values, and $W$ is a variable which slowly increases after $n_{\mathrm{best}}$ passes some fixed threshold, to dampen the increasing weight of the best child.
Similarly, in (Khandelwal et al., 2016) a backup strategy MaxMCTS($\lambda$) is proposed, where an eligibility parameter $\lambda \in [0, 1]$ can be adjusted to strike a balance between taking the weighted average of the children's values ($\lambda = 1$) and taking the value of the best child ($\lambda = 0$). In addition, they show that the optimal value of $\lambda$ depends on the context; e.g., in Grid World experiments, it is demonstrated that the more obstacles are present in the grid, the lower $\lambda$ has to be in order to maintain good performance. This corresponds with the findings in (Ramanujan et al., 2010; Ramanujan and Selman, 2011; Ramanujan et al., 2011), in that standard MCTS may perform well in environments where global strategy is more important, for example in the opening stages of Go, but its performance tends to degrade in highly tactical situations; see also (Baier and Winands, 2018).
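The average/max interpolation performed by such a backup can be sketched schematically as follows. This is a deliberate simplification for illustration, not the actual eligibility-trace formulation of MaxMCTS($\lambda$), and the function and variable names are ours:

```python
def interpolated_backup(child_values, child_visits, lam):
    """Blend the visit-weighted average of the children (lam = 1) with
    the value of the best child (lam = 0). A schematic simplification of
    MaxMCTS(lambda); the real algorithm propagates eligibility-decayed
    returns rather than recomputing child statistics at each node."""
    total = sum(child_visits)
    avg = sum(v * n for v, n in zip(child_values, child_visits)) / total
    best = max(child_values)
    return lam * avg + (1 - lam) * best

vals, visits = [0.2, 0.6, 0.4], [10, 5, 5]
print(interpolated_backup(vals, visits, 1.0))  # visit-weighted average: 0.35
print(interpolated_backup(vals, visits, 0.0))  # best child value: 0.6
```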
Moving on to the second premise, while stationarity may be a viable assumption in multi-armed bandit problems, and although MCTS can be viewed as a sequential multi-armed bandit problem, it is evident that the later simulations explore a larger tree than the earlier simulations. This implies that the sequence of rewards follows a non-stationary distribution, where the returns from later simulations are more informative than the earlier ones, and hence it would be natural to weight them more heavily.
One way of doing this is to simply employ the exponential recency-weighted average (ERWA) update (Sutton and Barto, 2018), where (1) is replaced by
(3)  $\bar{X}_n = \bar{X}_{n-1} + \alpha\left(x_n - \bar{X}_{n-1}\right)$
for a fixed step size $\alpha \in (0, 1]$;
see also (Hashimoto et al., 2011) where they employ a similar backup strategy.
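A minimal sketch of the ERWA update (3), assuming a fixed step size `alpha` and initializing the estimate at the first return:

```python
def erwa(returns, alpha):
    """Exponential recency-weighted average (3): a constant step size
    alpha weights recent returns exponentially more than older ones."""
    est = returns[0]
    for x in returns[1:]:
        est += alpha * (x - est)
    return est

# With alpha = 0.5, the most recent return dominates the estimate.
print(erwa([0.0, 0.0, 1.0], 0.5))  # 0.5
```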
A more sophisticated method, called feedback adjustment policy, is explored in (Xie and Liu, 2009), where four different weight profiles of varying shapes are tested. The following figure provides an illustration.
Experiments on 9x9 Go identify the profile which gives the highest winning rate over original MCTS. Just as importantly, they show that although all of the profiles monotonically weight the later simulations more heavily, the differences in their particular shapes have a big impact on performance; one profile was found to provide no significant advantage over standard MCTS, in contrast to a 26.6% boost from another (Xie and Liu, 2009).
Despite differences in these various methods, we can summarize the overarching principles they have in common as follows:

the best child should be weighted more heavily as the number of simulations increases;

later simulations should be weighted more strongly than earlier ones.
Taking this into account, in this paper we propose Monotone MCTS and Softmax MCTS, two backpropagation strategies which aim to generalize and improve upon the previous methods. We first represent the weights as a function of the number of visits of the node in question, naturally constrained to be monotone. We then propose to use black-box Bayesian optimization to find the optimal monotone functions.
The rest of the paper is structured as follows. In Section 2, we give a brief review of MCTS and of Bayesian optimization using a Gaussian process prior. In Section 3, we go into the details of Monotone MCTS and Softmax MCTS. We show the effectiveness of our approach in experiments on 9x9 and 19x19 Go in Section 4. Finally, we conclude in the last section with some directions for future work.
2 Preliminaries
2.1 Monte-Carlo tree search
In comparison to depth-first search with alpha-beta pruning, MCTS uses best-first search to gather information for planning the next action. This is important particularly in computer Go, where the branching factor is large and the tree is best explored asymmetrically, striking a balance between searching deep sequences in tactical situations and searching broadly to factor in strategic considerations. This also makes it an anytime algorithm, in that terminating the search prematurely can still yield acceptable results.
MCTS consists of the following four steps:

Selection: Starting from the root node, the search process descends down the tree by successively selecting child nodes according to the tree policy. In particular, if Upper Confidence Bound 1 (UCB1)
(4)  $\bar{X}_j + c \sqrt{\dfrac{\ln N}{n_j}}$
is used as the tree policy, then this variant of MCTS is called UCT (Kocsis and Szepesvári, 2006). More recently, PUCT (Silver et al., 2016; Auger et al., 2013)
(5)  $\bar{X}_j + c\, P_j \dfrac{\sqrt{N}}{1 + n_j}$
has been gaining popularity. Here, $\bar{X}_j$ and $n_j$ denote the mean and visits respectively of child $j$, $N$ denotes the number of visits of the parent, $P_j$ is the prior probability of child $j$ (e.g. from a policy network), and $c$ is a constant that can be tuned to balance exploration vs. exploitation.
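The two selection rules (4) and (5) can be sketched as score functions over children. The exploration terms below follow the standard UCB1 and PUCT forms; the function names are ours:

```python
import math

def ucb1(mean, visits, parent_visits, c):
    # UCB1 score (4): exploitation term plus an exploration bonus that
    # shrinks as the child accumulates visits.
    return mean + c * math.sqrt(math.log(parent_visits) / visits)

def puct(mean, visits, parent_visits, prior, c):
    # PUCT score (5): exploration bonus scaled by a prior probability.
    return mean + c * prior * math.sqrt(parent_visits) / (1 + visits)

# A rarely visited child receives a larger exploration bonus.
parent = 100
print(ucb1(0.5, 50, parent, 1.0) < ucb1(0.5, 5, parent, 1.0))  # True
print(puct(0.5, 50, parent, 0.2, 1.0) < puct(0.5, 5, parent, 0.2, 1.0))  # True
```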

Expansion: When the selection phase reaches a leaf node, children of the leaf node are added to the tree, and one of them is selected by the tree policy.

Simulation: One (or more) random playouts are performed until a terminal node is reached. More recently, the simulation can be augmented or even replaced by a suitable evaluation function such as a neural network.

Backpropagation: The result of the playout is computed and (1) is used to update each node visited in the selection phase.
Averaging the results in each node is essential in noisy environments and when it is critical not to back up values in a manner such that outliers affect the algorithm adversely. However, it can be slow to converge to the optimal value in minimax trees, particularly in nodes where the siblings of an optimal child node are all lower in value (Fu, 2017).
2.2 Bayesian optimization with a Gaussian process prior
Given an index set $T$, $(f(x))_{x \in T}$ is a Gaussian process if for any finite set of indices $x_1, \dots, x_n$ of $T$, $(f(x_1), \dots, f(x_n))$ is a multivariate normal random variable. By specifying a mean function $m(x)$ and a symmetric, positive semidefinite kernel function $k(x, x')$, one can uniquely define a Gaussian process by setting
$\mathbb{E}[f(x_i)] = m(x_i), \qquad \operatorname{Cov}(f(x_i), f(x_j)) = k(x_i, x_j)$
for any finite subset of $T$. Here, the covariance function refers to
$\operatorname{Cov}(f(x_i), f(x_j)) = \mathbb{E}\left[(f(x_i) - m(x_i))(f(x_j) - m(x_j))\right].$
In addition, we assume that the model is perturbed with noise,
$y = f(x) + \varepsilon,$
where $\varepsilon \sim \mathcal{N}(0, \sigma^2_{\mathrm{noise}})$, and $\varepsilon$ is assumed to be independent between samples.
In many machine learning problems, the objective function $f$ to optimize is a black-box function which does not have an analytic expression, or has one that is too costly to compute. Hence, Gaussian processes are used as surrogate models to approximate the true function, as they yield closed-form solutions. For example, if we stipulate that $m \equiv 0$, then given a history of input-observation pairs $(x_1, y_1), \dots, (x_n, y_n)$ and a new input point $x_{n+1}$, we can predict $y_{n+1}$ by computing the posterior distribution, which is Gaussian with mean $\mu(x_{n+1})$ and variance $\sigma^2(x_{n+1})$ given by the formulas
(6)  $\mu(x_{n+1}) = \mathbf{k}^\top \left(K + \sigma^2_{\mathrm{noise}} I\right)^{-1} \mathbf{y}$
(7)  $\sigma^2(x_{n+1}) = k(x_{n+1}, x_{n+1}) - \mathbf{k}^\top \left(K + \sigma^2_{\mathrm{noise}} I\right)^{-1} \mathbf{k}$
Here, we denote $\mathbf{k} = (k(x_{n+1}, x_1), \dots, k(x_{n+1}, x_n))^\top$, $\mathbf{y} = (y_1, \dots, y_n)^\top$, and $K$ is the covariance matrix corresponding to the first $n$ inputs, $K_{ij} = k(x_i, x_j)$. For more information on Gaussian processes for machine learning, we refer the reader to (Williams and Rasmussen, 2006).
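A minimal sketch of the posterior formulas (6) and (7), assuming a zero prior mean and using a squared-exponential kernel for illustration (Spearmint itself uses a Matérn kernel):

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel on 1-D inputs, for illustration only."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_posterior(x_train, y_train, x_new, noise=1e-6):
    """Posterior mean (6) and variance (7) under a zero-mean GP prior."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(x_train, x_new)          # k vector for each new point
    solved = np.linalg.solve(K, k_star)   # (K + sigma^2 I)^{-1} k
    mean = solved.T @ y_train
    var = rbf(x_new, x_new).diagonal() - np.sum(solved * k_star, axis=0)
    return mean, var

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 0.5])
mu, var = gp_posterior(x, y, np.array([1.0]))
print(mu, var)  # at a training point: mean close to 1.0, variance near 0
```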
Another reason for Bayesian optimization becomes apparent when evaluating $f$ is costly, for example in high-dimensional problems where performing a grid search to find the optimal value is prohibitive, e.g. hyperparameter tuning in deep learning models. In such cases, an acquisition function is selected to guide sampling to areas where one has an increased probability of finding the optimum. Two common examples are Expected Improvement (EI),
$\alpha_{\mathrm{EI}}(x) = \mathbb{E}\left[\max\left(f(x) - f^{+}, 0\right)\right],$
where $f^{+}$ denotes the maximum value of $f$ found so far, and Upper Confidence Bound (UCB),
$\alpha_{\mathrm{UCB}}(x) = \mu(x) + \kappa\,\sigma(x).$
In both examples, $\mu(x)$ and $\sigma(x)$ are obtained from (6) and (7), and there is a trade-off between exploration and exploitation in the selection of the next point.
For greater efficiency, we use Spearmint (Snoek et al., 2012), which allows the optimization procedure to be run in parallel on multiple cores. Spearmint adopts the Matérn 5/2 kernel for the Gaussian process prior, and chooses the next point based on the expected acquisition function under all possible outcomes of the pending evaluations. It was shown to be effective for many algorithms, including latent Dirichlet allocation and hyperparameter tuning in convolutional neural networks (Snoek et al., 2012).
In the context of computer Go, Bayesian optimization with a Gaussian process prior has also previously been used in (Chen et al., 2018). With regard to MCTS, they perform optimization over the UCT exploration constant and the mixing ratio between fast rollouts and neural network evaluations.
3 Methods
To find optimal backpropagation strategies, we first parameterize a family of smooth monotone functions, then perform Bayesian optimization with a Gaussian process prior as reviewed in the previous section. To describe this family of functions, we invoke the following lemma.
Lemma 3.1.
A function $f : [0, \infty) \to \mathbb{R}$ is continuously differentiable with $f' > 0$ (in particular, strictly increasing) if and only if there exists a continuous function $g$ such that
(8)  $f(t) = f(0) + \displaystyle\int_0^t e^{g(s)}\, ds.$
Proof.
If $f$ is continuously differentiable with $f' > 0$, set $g = \log f'$, which is continuous; (8) then follows from the fundamental theorem of calculus. Conversely, any $f$ of the form (8) is continuously differentiable with $f'(t) = e^{g(t)} > 0$. ∎
Finding the optimal monotone function, even when restricted to continuously differentiable ones, is a functional Bayesian optimization (Vien et al., 2018) problem, as the optimization takes place over an infinite-dimensional Hilbert space of functions. However, for practical reasons, we instead restrict the class of functions $g$ we optimize over to be $\mathcal{G}_d$, the $d$-dimensional space of functions obtained by linearly interpolating between the points $(0, g_1), (\Delta, g_2), \dots, ((d-1)\Delta, g_d)$, which are uniformly separated by an interval $\Delta = N/(d-1)$, where $N$ denotes the number of simulations.
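The construction can be sketched as follows: interpolate the knot values to obtain a continuous $g$, then integrate $e^{g}$ as in (8). The knot placement and the simple numerical integration scheme below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def monotone_weight(knots, n_sims, t):
    """Evaluate the weight function w of (8) at time t: linearly
    interpolate the knot values (g_1, ..., g_d) over [0, n_sims] to get
    a continuous g, then integrate exp(g) with the trapezoid rule."""
    xs = np.linspace(0.0, n_sims, len(knots))
    grid = np.linspace(0.0, t, 200)
    g = np.interp(grid, xs, knots)  # piecewise-linear g
    vals = np.exp(g)                # integrand is strictly positive
    integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))
    return 1.0 + integral           # w(0) = 1 plus the integral

w = [monotone_weight([-1.0, 0.0, 1.0], 100, t) for t in (0, 25, 50, 100)]
print(w)  # strictly increasing, starting at 1
```

Because the integrand $e^{g}$ is strictly positive, any knot values whatsoever yield a strictly increasing weight function, which is what makes this an unconstrained parameterization of monotone functions.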
3.1 Monotone MCTS
The first backpropagation strategy we propose is Monotone MCTS. We first run the optimization procedure, with respect to win rate, over the parameters $(g_1, \dots, g_d)$. Each set of parameters yields a continuous function $g \in \mathcal{G}_d$ by interpolation, and a monotone weight function $w$ using (8), with $w(0)$ set to 1. Upon choosing the optimal set of parameters, the update rule is then modified to be
(9)  $\bar{X}_n = \bar{X}_{n-1} + \dfrac{w_n}{W_n}\left(x_n - \bar{X}_{n-1}\right),$
where we denote $w_n = w(n)$ and $W_n = \sum_{i=1}^{n} w_i$.
Despite being a subset of all possible monotone functions, we consider this family to be sufficiently rich, as it contains all increasing linear functions starting at 1,
$w(t) = 1 + bt, \quad b > 0,$
all exponential functions
$w(t) = e^{ct}, \quad c > 0,$
as well as their linear combinations and other monotone functions such as those in (Xie and Liu, 2009). As an example, the following simple proposition shows how to convert between ERWA with parameter $\alpha$ and our formulation.
Proposition 3.2.
ERWA with parameter $\alpha$ can be obtained by setting $g(t) = \log c + ct$, where $c = -\log(1 - \alpha)$.
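The correspondence can be checked numerically: exponential weights of the form $(1-\alpha)^{-n}$ make the weighted update rule (9) track ERWA, up to an initialization bias that vanishes geometrically. The data below is arbitrary:

```python
def monotone_update(returns, weights):
    """Weighted update rule (9): X_n = X_{n-1} + (w_n / W_n)(x_n - X_{n-1})."""
    est, W = 0.0, 0.0
    for x, w in zip(returns, weights):
        W += w
        est += (w / W) * (x - est)
    return est

def erwa(returns, alpha):
    est = returns[0]
    for x in returns[1:]:
        est += alpha * (x - est)
    return est

alpha = 0.1
returns = [(i * 7919) % 100 / 100 for i in range(200)]  # deterministic pseudo-data
weights = [(1 - alpha) ** (-n) for n in range(1, 201)]  # w(n) = (1 - alpha)^(-n)
print(abs(monotone_update(returns, weights) - erwa(returns, alpha)) < 1e-6)  # True
```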
3.2 Softmax MCTS
For our second backpropagation strategy, we draw inspiration from the softmax distribution
$\sigma_\beta(x)_i = \dfrac{e^{\beta x_i}}{\sum_j e^{\beta x_j}},$
which converges as $\beta \to \infty$ to the one-hot vector $e_{i^*}$, with 1 in the $i^*$-th position, when $x_{i^*}$ is the maximum of $x_1, \dots, x_m$. We develop a new robust method, Softmax MCTS, for interpolating between the theoretical minimax value of the node and the original averaged value in standard MCTS, as follows.
Let $\bar{X}_i$ and $n_i$ respectively denote the mean and number of visits of the $i$-th child. In Softmax MCTS, we define the backpropagation update after every simulation for every parent node as
$\bar{X} \leftarrow \sum_i \omega_i \bar{X}_i, \quad \text{where} \quad \omega_i = \dfrac{n_i\, e^{\beta \bar{X}_i}}{\sum_j n_j\, e^{\beta \bar{X}_j}}.$
Here, $\beta$ is a monotonically increasing function of the number of visits of the parent node, which will be optimized in the same manner as given in the previous subsection, with the difference that now $\beta(0)$ is set to 0. In early stages, when $\beta$ is close to 0, each factor $e^{\beta \bar{X}_i}$ is approximately 1, which means that
$\omega_i \approx \dfrac{n_i}{\sum_j n_j}.$
This is equivalent to the weighted-average update rule of standard MCTS. As $\beta$ increases with the number of visits, the weights will gradually favour the child with the maximum mean (minimum if the parent is a MIN node).
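The behaviour of these softmax-style weights at the two extremes of $\beta$ can be illustrated as follows (function and variable names are ours):

```python
import math

def softmax_weights(means, visits, beta):
    """Visit-and-value weights of the Softmax MCTS backup:
    w_i proportional to n_i * exp(beta * mean_i)."""
    raw = [n * math.exp(beta * m) for m, n in zip(means, visits)]
    total = sum(raw)
    return [r / total for r in raw]

means, visits = [0.2, 0.6, 0.4], [10, 5, 5]

# beta = 0 recovers the visit-weighted average of standard MCTS ...
print(softmax_weights(means, visits, 0.0))  # [0.5, 0.25, 0.25]

# ... while a large beta concentrates the weight on the best child.
w = softmax_weights(means, visits, 50.0)
print(w[1] > 0.99)  # True
```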
We believe that this is more robust than the method given in (Coulom, 2006) as at any given time the interpolation is taken between the soft maximum and the averaged value, rather than between the averaged value and the hard maximum which is volatile to outliers in the returns.
Another noteworthy point is that in our method, as well as in (Coulom, 2006) and (Khandelwal et al., 2016), the best performance comes from an update rule which invariably underestimates the max function. This leads us to hypothesize that the main trade-off in weighting the best child more or less heavily may not be so much a question of overestimation versus underestimation, but rather one of robustness. The experiments in the next section demonstrate that Softmax MCTS outperforms the method in (Coulom, 2006) over a number of different parameters.
4 Experiments
In this section, we use 9x9 and 19x19 Go as a testbed to run Monotone MCTS and Softmax MCTS against several methods in the literature. We first establish a baseline by running these methods against standard MCTS. The table below records the win rates (%).

Method               9x9 Go   19x19 Go
Coulom (2, 16)        44.9      53.3
Coulom (4, 32)        45.8      51.2
Coulom (8, 64)        48.3      50.2
Feedback policy 1     50.2      56.1
Feedback policy 2     51.8      54.5
ERWA, α1              51.2      53.9
ERWA, α2              50.3      56.9
ERWA, α3              52.9      55.1

Coulom (a, b) refers to the method in (Coulom, 2006), where a is proportional to the mean-weight parameter in (2) and b controls when it begins increasing. Feedback policies 1 and 2 are the two best feedback adjustment policies in (Xie and Liu, 2009) (see Figure 1), and we also test ERWA at three settings α1, α2, α3 of the step-size parameter.
All experiments are run with 5000 simulations per move for 9x9 Go, and 1600 simulations per move for 19x19 Go. For this and all subsequent tests, the win rates are computed over 1000 games.
We follow the architecture in (Silver et al., 2017) for the neural nets. This consists of an input convolutional layer, followed by several residual blocks with batch normalization (4 blocks for 9x9, 10 blocks for 19x19), followed by two "heads", one which outputs the policy vector and the other the value of the position. Both heads start with a convolutional layer, followed by a fully-connected layer before the output. The input layer has 10 channels (18 for 19x19) encoding the current position and the previous 4 positions (8 for 19x19). In each residual block, the convolutional layers use 32 filters for 9x9 and 64 filters for 19x19.
The neural net for 9x9 was trained tabula rasa using reinforcement learning (Silver et al., 2017) over 600,000 training steps, with each step processing a minibatch of 16 inputs. We trained the 19x19 neural net by supervised learning on the GoGoD database of approximately 15 million datapoints from 80,000 games.
In all tests, we use PUCT (5) with the exploration constant set to 0.5, and weight the exploration term by the distribution given by the policy vector output of the neural networks (Silver et al., 2016, 2017).
4.1 Monotone MCTS
We run Spearmint over the parameters $(g_1, \dots, g_d)$ to optimize the win rate of Monotone MCTS against standard MCTS. For every set of parameters, 400 games were run to determine the win rate during the optimization phase (we run 1000 games for testing). The tables below record the results for 9x9 Go of Monotone MCTS versus standard MCTS, ERWA, and the feedback adjustment policies in (Xie and Liu, 2009), for the two best sets of parameters found.
Opponent             Win rate (%), parameter set 1
Standard MCTS         53.1
ERWA, α1              50.3
ERWA, α2              52.8
ERWA, α3              52.8
Feedback policy 1     51.5
Feedback policy 2     51.0

Opponent             Win rate (%), parameter set 2
Standard MCTS         54.5
ERWA, α1              52.3
ERWA, α2              50.5
ERWA, α3              53.1
Feedback policy 1     52.3
Feedback policy 2     51.1
We also test Monotone MCTS against standard MCTS in 19x19 Go and find that the first set of parameters achieves a win rate of 56.0%, whereas the second set of parameters achieves a win rate of 54.3%.
The figure below shows a graph of the weight profiles.
4.2 Softmax MCTS
As in the previous subsection, Spearmint was run over the parameters defining $\beta$ to optimize the win rate of Softmax MCTS versus standard MCTS. We present the results of the two best sets of parameters in the tables below.
Opponent          Win rate (%), parameter set 1
Standard MCTS      56.3
Coulom (2, 16)     59.3
Coulom (4, 32)     55.5
Coulom (8, 64)     55.1

Opponent          Win rate (%), parameter set 2
Standard MCTS      57.8
Coulom (2, 16)     57.6
Coulom (4, 32)     59.5
Coulom (8, 64)     51.9
We also test Softmax MCTS against standard MCTS in 19x19 Go and find that the first set of parameters achieves a win rate of 53.2%, whereas the second set of parameters achieves a win rate of 55.9%.
5 Conclusion and Future Work
In this paper, we present a unifying framework for backpropagation strategies in MCTS for minimax trees. Our proposed method allows one to perform optimization in two orthogonal directions. The first method, Monotone MCTS, generalizes previous work in adapting the update rule to get the most accurate estimate of a node's value in a non-stationary setting, and the second, Softmax MCTS, finds an optimal schedule for weighting the best child more heavily as the tree grows.
Doing so requires optimization over the space of monotone functions, a high-dimensional problem we overcome efficiently by using parallelized Bayesian optimization with a Gaussian process prior. Once the parameters that define the optimal monotone function are found, they can be incorporated into MCTS with negligible overhead. Our experiments show that this new approach is superior to previous methods in the literature.
To conclude, we would like to note also that it is possible, indeed advisable, to perform the optimization in conjunction with the exploration constant in the selection phase, but we have decided in this paper to focus solely on the backpropagation phase and to elucidate the effects of different monotone weight profiles on the winrate. Combining these optimal backup strategies with other phases of MCTS will be the topic of future work.
References
 Auger et al. (2013) Auger, D., Couetoux, A., and Teytaud, O. (2013). Continuous upper confidence trees with polynomial exploration–consistency. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 194–209. Springer.

Baier and Winands (2018) Baier, H. and Winands, M. H. M. (2018). MCTS-minimax hybrids with state evaluations. Journal of Artificial Intelligence Research, 62:193–231.
Blumenthal and Cohen (1968) Blumenthal, S. and Cohen, A. (1968). Estimation of the larger of two normal means. Journal of the American Statistical Association, 63(323):861–876.
 Browne et al. (2012) Browne, C. B., Powley, E., Whitehouse, D., Lucas, S. M., Cowling, P. I., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S., and Colton, S. (2012). A survey of Monte-Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43.
 Chen et al. (2018) Chen, Y., Huang, A., Wang, Z., Antonoglou, I., Schrittwieser, J., Silver, D., and de Freitas, N. (2018). Bayesian optimization in AlphaGo. arXiv preprint 1812.06855v1.
 Coquelin and Munos (2007) Coquelin, P.-A. and Munos, R. (2007). Bandit algorithms for tree search. In Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence, pages 67–74.
 Coulom (2006) Coulom, R. (2006). Efficient selectivity and backup operators in Monte-Carlo tree search. In International Conference on Computers and Games, pages 72–83. Springer.
 Fu (2017) Fu, M. (2017). Monte-Carlo tree search and minimax combination. MSc thesis, University of Maryland, College Park.
 Hashimoto et al. (2011) Hashimoto, J., Kishimoto, A., Yoshizoe, K., and Ikeda, K. (2011). Accelerated UCT and its application to two-player games. In Advances in Computer Games, pages 1–12. Springer.
 Khandelwal et al. (2016) Khandelwal, P., Liebman, E., Niekum, S., and Stone, P. (2016). On the analysis of complex backup strategies in Monte-Carlo tree search. In International Conference on Machine Learning, pages 1319–1328.
 Kocsis and Szepesvári (2006) Kocsis, L. and Szepesvári, C. (2006). Bandit based Monte-Carlo planning. In Proceedings of the 17th European Conference on Machine Learning, pages 282–293.
 Ramanujan et al. (2010) Ramanujan, R., Sabharwal, A., and Selman, B. (2010). On adversarial search spaces and samplingbased planning. In Twentieth International Conference on Automated Planning and Scheduling.
 Ramanujan et al. (2011) Ramanujan, R., Sabharwal, A., and Selman, B. (2011). On the behavior of UCT in synthetic search spaces. In Proc. 21st Int. Conf. Automat. Plan. Sched., Freiburg, Germany.
 Ramanujan and Selman (2011) Ramanujan, R. and Selman, B. (2011). Tradeoffs in samplingbased adversarial planning. In TwentyFirst International Conference on Automated Planning and Scheduling.
 Silver et al. (2016) Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529:484–489.
 Silver et al. (2017) Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. (2017). Mastering the game of Go without human knowledge. Nature, 550:354–359.
 Snoek et al. (2012) Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pages 2951–2959.
 Sutton and Barto (2018) Sutton, R. S. and Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.
 Vien et al. (2018) Vien, N. A., Zimmermann, H., and Toussaint, M. (2018). Bayesian functional optimization. In ThirtySecond AAAI Conference on Artificial Intelligence.
 Williams and Rasmussen (2006) Williams, C. K. and Rasmussen, C. E. (2006). Gaussian processes for machine learning, volume 2. MIT press Cambridge, MA.
 Xie and Liu (2009) Xie, F. and Liu, Z. (2009). Backpropagation modification in Monte-Carlo game tree search. In 2009 Third International Symposium on Intelligent Information Technology Application, volume 2, pages 125–128. IEEE.