
Train on Small, Play the Large: Scaling Up Board Games with AlphaZero and GNN

by Shai Ben-Assayag et al.

Playing board games is considered a major challenge for both humans and AI researchers. Because some complicated board games are quite hard to learn, humans usually begin by playing on smaller boards and incrementally advance to mastering larger-board strategies. Most neural network frameworks currently tasked with playing board games neither perform such incremental learning nor possess the capability to scale up automatically. In this work, we view the board as a graph and embed a graph neural network architecture inside the AlphaZero framework, along with several other innovative improvements. Our ScalableAlphaZero is capable of learning to play incrementally on small boards and advancing to play on large ones. Our model can be trained quickly to play different challenging board games on multiple board sizes, without using any domain knowledge. We demonstrate the effectiveness of ScalableAlphaZero and show, for example, that by training it for only three days on small Othello boards, it can defeat, on a large board, an AlphaZero model that was trained on that large board for 30 days.



1 Introduction

Learning a simple instance of a problem with the goal of solving a more complicated one is a common approach in various fields. Both humans and AI programs use such incremental learning, particularly when the large-scale problem instance is too hard or too expensive to learn from scratch. This paper is concerned with applying incremental learning to the challenge of mastering board games. When playing board games, humans have the advantage of being able to learn the game on a small board, recognize the main patterns, and then apply the strategies they have acquired, possibly with some adjustments, on a larger board. In contrast, machine learning algorithms usually cannot generalize well between board sizes. While simple heuristics, such as zero padding of the board or analyzing local neighborhoods, can alleviate this generalization problem, they do not scale well for enlarged boards (see, e.g., Section 4.2).


In this paper we propose ScalableAlphaZero (SAZ), a deep reinforcement learning (RL) based model that can generalize to multiple board sizes of a specific game. SAZ is trained on small boards and is expected to scale successfully to larger ones. Our technique should be usable for scalable board games, whose rules for one board size apply to all feasible board sizes (typically, infinitely many). For instance, Go is scalable but standard chess is not. A strong motivation for finding such a model is the potential for a substantial reduction in training time. As we demonstrate in this paper, training a model on small boards takes an order of magnitude less time than on large ones. The reason is that the state dimension is significantly smaller and games require fewer turns to complete.

The proposed model is based on two modifications of the well-known AlphaZero (AZ) algorithm (Silver et al., 2017a). To the best of our knowledge, AZ is presently the strongest superhuman RL-based system for two-player zero-sum games. The main drawback of AZ is that it limits the user to training and playing only on a specific board size. This is the result of using a convolutional neural network (CNN) (Atlas et al., 1987) for predictive pruning of the AZ tree. To overcome this obstacle, in SAZ we replace the CNN by a graph neural network (GNN) (Scarselli et al., 2008). A GNN is a scalable neural network, i.e., an architecture that is not tied to a fixed input dimension. This scalability enables us to train and play on different board sizes and to scale up to arbitrarily large boards with a constant number of parameters. To further improve the pruning of the AZ tree search, we propose an ensemble-like node prediction using subgraph sampling; namely, we use the same GNN to evaluate a few subgraphs of the full board and then combine their scores to reduce the overall prediction uncertainty.

We conduct experiments on three scalable board games and measure the quality of SAZ by comparing it to various opponents on different board sizes. Our results indicate that SAZ, trained on a maximal board size of , can generalize well to larger boards (e.g., ). Furthermore, we evaluate it by competing against the original AZ player, trained on a large board. Our model, with around ten times less training (computation) time on the same hardware, and without training at all on the actual board size that was used for playing, performs surprisingly well and achieves comparable results.

The main contributions of this work are: (1) a model that is capable of successfully scaling up board game strategies; as far as we know, this is the first work that combines RL with GNNs for this task; (2) a subgraph sampling technique that effectively decreases the prediction uncertainty of GNNs in our context and is of potential independent interest; (3) extensive experiments, on three different board games, showing that our model requires an order of magnitude less training time than the original AZ but can still defeat AZ on large boards.

2 Related work

The solution proposed in this paper instantiates a GNN model inside the AlphaZero model for the task of scalable board game playing. In this section, we briefly review early work in AI and board games, focusing on the AlphaZero (Silver et al., 2017a) algorithm. We further describe the GNN design and review various works that use GNN to guide an RL model. Finally, we summarize existing methods that aim to deal with scalable board games and accelerate the generalization between sizes.

2.1 AlphaZero for board games

Given an optimization problem, deep RL aims at learning a strategy for maximizing the problem’s objective function. The majority of RL programs do not use any expert knowledge about the environment, and learn the optimal strategy by exploring the state and action spaces with the goal of maximizing their cumulative reward.

AlphaGo (AG) (Silver et al., 2016), which defeated a professional Go player in 2016, is an RL framework that employs a policy network trained with examples taken from human games, a value network trained by selfplay, and Monte Carlo tree search (MCTS) (Coulom, 2006). About a year later, AlphaGo Zero (AGZ) (Silver et al., 2017b) was released, improving AlphaGo's performance without handcrafted game-specific heuristics; however, it was still tested only on the game of Go. AlphaZero (Silver et al., 2017a) validated the general framework of AGZ by adapting the same mechanism to the games of Chess and Shogi. AG and AGZ have a three-stage training pipeline: selfplay, optimization and evaluation, whereas AZ skips the evaluation step. AGZ and AZ do not use their neural network to make move decisions directly. Instead, they use it to identify the most promising actions for the search to explore, as well as to estimate the values of nonterminal states.

2.2 Graph neural networks

GNNs, introduced in Scarselli et al. (2008), are a promising family of neural networks for graph-structured data. GNNs have shown encouraging results in various fields, including natural language processing, computer vision, logical reasoning and combinatorial optimization. Over the last few years, several variants of GNNs have been developed (e.g., Hamilton et al. (2017); Gilmer et al. (2017); Li et al. (2015); Veličković et al. (2017); Defferrard et al. (2016)); the selection of the variant that suits a specific problem depends on the particularities of the task.

In their basic form, GNNs update the features associated with the elements of an input graph G = (V, E), based on the connections between these elements in the graph. A message passing algorithm iteratively propagates information between nodes, updates their state accordingly, and uses the final state of a node, also called its "node embedding", to compute the desired output. Appendix B.1 provides more details about the message passing procedure. In this paper we use graph isomorphism networks (GINs) (Xu et al., 2018), a powerful, well-known variant of GNNs. For further details about GINs, see Appendix B.2.
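As a concrete (toy) illustration of one message-passing step, the GIN-style update below sums each node's neighbor features into its own state; the learnable MLP is stubbed out as the identity map, so this is only a sketch of the aggregation, not of a trained network, and all names are ours:

```python
def gin_step(features, adjacency, eps=0.0):
    """One message-passing iteration: each node aggregates (sums) its
    neighbors' features and combines them with its own state, following
    the GIN update h_v <- MLP((1 + eps) * h_v + sum of neighbor features).
    The MLP is stubbed out as the identity for clarity."""
    updated = {}
    for node, feat in features.items():
        neighbor_sum = sum(features[n] for n in adjacency[node])
        updated[node] = (1.0 + eps) * feat + neighbor_sum
    return updated

# A 3-node path graph: a - b - c, with scalar node features
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
h0 = {"a": 1.0, "b": 2.0, "c": 3.0}
h1 = gin_step(h0, adj)  # one round of neighborhood aggregation
```

Iterating this step several times lets information flow across longer paths in the graph, which is what allows a fixed parameter set to process boards of any size.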

2.3 Scalable deep reinforcement learning

Recently, several works have tackled the problem of scalability in RL in the context of combinatorial optimization, using GNNs, which are natural models for such challenges. For example, Lederman et al. (2018) utilized the REINFORCE algorithm (Williams, 1992) for clause selection in a QBF solver using a GNN, and successfully solved arbitrarily large formulas. Abe et al. (2019) combined graph isomorphism networks (Xu et al., 2018) and the AGZ framework to solve small instances of NP-complete combinatorial problems on graphs. Dai et al. (2017) proposed a framework that combines RL with the structure2vec graph embedding (Dai et al., 2016) to construct incremental solutions for the Traveling Salesman and other problems. Other RL models that deal with combinatorial optimization problems include Yolcu and Póczos (2019) and Xing and Tu (2020).

A fundamental difference between scaling combinatorial optimization problems and our task is that a reductionist approach is much less natural for scaling up board games. For example, when solving a large-scale SAT instance (as in Yolcu and Póczos (2019)), the problem necessarily shrinks as the search advances. More specifically, by setting a literal ℓ to true or false, all clauses that contain ℓ or its negation can be deleted or simplified (whether the formula is in conjunctive or disjunctive normal form). In contrast, in a board game the problem size remains the same during the entire search, and the search keeps encountering rare, much more challenging boards that have not been seen before.
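The clause-deletion argument can be made concrete with a few lines of code. Literals are encoded as signed integers (a common DIMACS-style convention, not taken from the cited work):

```python
def assign(cnf, literal):
    """Simplify a CNF formula (list of clauses, each a list of signed int
    literals, e.g. 2 for x2 and -2 for not-x2) under the assignment that
    makes `literal` true: clauses containing the satisfied literal are
    dropped, and the falsified literal is removed from the rest."""
    simplified = []
    for clause in cnf:
        if literal in clause:        # clause satisfied -> disappears
            continue
        simplified.append([l for l in clause if l != -literal])
    return simplified

cnf = [[1, -2], [2, 3], [-1, 2]]     # (x1 or not-x2)(x2 or x3)(not-x1 or x2)
after = assign(cnf, 2)               # set variable 2 to true
```

After the assignment only one shrunken clause remains, illustrating how the SAT search space necessarily contracts; no analogous contraction happens on a game board.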

Among existing work on using learning to scale up board games, the closest to our approach is that of Schaul and Schmidhuber (2009). To enable size generalization for Go-inspired board games, they presented MDLSTM, a scalable neural network based on MDRNNs and LSTM cells, computing four shared-weight swiping layers, one for each diagonal direction on the board. For each position on the board, they combine these four values into a single output representing the action probabilities. Their results show that MDLSTM transfers the strategies learned on small boards to large ones, reaching a level of play on larger boards that is on par with human beginners. Other similar approaches include those of Gauci and Stanley (2010) and Wu and Baldi (2007). Gauci and Stanley extrapolated Go solutions to larger boards, thus speeding up the training. Wu and Baldi designed a DAG-RNN for Go and demonstrated that systems trained on a set of amateur games achieve surprisingly high correlation with the strategies of a professional players' test set.

None of the above models aimed at scaling up board games incorporates an RL framework, for either training or playing. In contrast to our model, which starts its training as a tabula rasa (i.e., without using any specific domain knowledge), the training processes of Schaul and Schmidhuber and of Gauci and Stanley are based on playing against a fixed heuristic-based opponent, while Wu and Baldi trained their model using records of games played by humans.

3 Scalable AlphaZero for board games

In this section we describe in detail our RL-based model for scalable board games. Our model is based on AZ, equipped with additional components that allow it to train on small board sizes and play on larger ones. The board game environment encodes the rules of the game and maintains the board state. We denote by A the set of possible actions and by S the set of possible board states.

As mentioned in Section 2.1, the AZ player is an RL model consisting of a combined neural network, f_θ, with parameters θ, and an MCTS. The network takes as input the raw board representation of the current state s and outputs (p, v) = f_θ(s), where the probability vector p represents the probabilities of selecting each action on the board, and the value v estimates the chances of the current player winning the game (i.e., -1 for losing, 0 for a tie and +1 for winning), given its current state. At each state s, an f_θ-guided MCTS is activated. The MCTS procedure then outputs the probability for playing each valid move. For a full description of the MCTS procedure, see Appendix A.1. The pseudocode for our model, including the MCTS procedure, is provided in Appendix C.
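The network-guided action selection inside AlphaZero-style MCTS follows the standard PUCT rule, in which the prior p steers exploration and the running value estimate Q steers exploitation. Below is a hedged, library-free sketch (variable names and the c_puct value are ours, for illustration only):

```python
import math

def puct_select(priors, visit_counts, q_values, c_puct=1.0):
    """Return the action maximizing Q(s,a) + c * p(a) * sqrt(N) / (1 + n(a)),
    the PUCT score used by AlphaZero-style tree searches."""
    total_visits = sum(visit_counts.values())

    def score(a):
        u = c_puct * priors[a] * math.sqrt(total_visits) / (1 + visit_counts[a])
        return q_values[a] + u

    return max(priors, key=score)

# An unvisited action with a decent prior outranks a well-visited one:
priors = {"a1": 0.7, "a2": 0.3}
visits = {"a1": 10, "a2": 0}
q = {"a1": 0.1, "a2": 0.0}
best = puct_select(priors, visits, q)  # exploration term favors "a2"
```

The example shows why a well-calibrated prior matters: a poor p would repeatedly pull the search toward weak moves before their low Q values are discovered.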

To summarize, the main changes we made to the original AZ are

  • Replacing the CNN by our GNN.

  • Adding subgraph sampling for guidance of the MCTS search.

  • Removing rotation and reflection augmentations in the training set.

The next sections elaborate on each of these components.

3.1 Replacing the CNN

The main difference between our scalable RL player and AZ comes from the choice of neural network type. AZ uses a CNN as the network f_θ. As already mentioned, CNN architectures are limited by the specific input dimension they require, so they do not enjoy the potential computational benefits of scalable methods. The message passing technique used in a GNN (Gilmer et al., 2017) (see Section 2.2) allows the network to receive a variable-sized graph, with no limitation on either the number of nodes or the number of edges. In fact, a GNN only requires a fixed feature dimension for each node (and each edge, if edge features are used). This last observation makes a GNN a scalable neural network according to the definition above. Consequently, replacing the original CNN in the AZ framework with a GNN is the key step in our construction of a scalable player mechanism.

To instantiate f_θ as a GNN, we first need to translate the board state into a graph. We define the graph G = (V, E), where the nodes in V are the positions on the board (for a grid-like square board of size n × n there are n² such positions), and the edges in E connect "geographically" adjacent positions on the board (for the grid-like example above we connect only vertical and horizontal neighbors and discard diagonal neighbors). For a node v we denote by x_v the initial feature representing the piece currently placed on v (+1 for a light piece, -1 for a dark piece and 0 for an empty square). Last, we add a dummy node (as demonstrated in Gilmer et al. (2017)) that is connected to all other nodes in V, allowing us to improve the long-distance data flow between nodes. The dummy node is assigned its own distinct initial feature. Figure 1 illustrates the graph generation procedure for the initial board of the Othello game.

(a) Othello initial board of size
(b) Corresponding graph
Figure 1: Illustration of the graph generated from the initial board state of the game Othello. The blue and green nodes correspond to the light and dark pieces, respectively. Our additional dummy node is the central red node (with black border), connected to all other nodes; it is the only node that does not represent a square on the board.
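A minimal sketch of this board-to-graph translation is given below. The +1/-1/0 piece encoding and the 4-connectivity follow the description above, while the helper name and the dummy node's feature value (0 here) are our illustrative choices:

```python
def board_to_graph(board):
    """Convert an n x n board (entries: +1 light, -1 dark, 0 empty) into a
    node-feature list and an undirected edge list. A dummy node connected
    to every square improves long-range information flow."""
    n = len(board)
    features = [board[r][c] for r in range(n) for c in range(n)]
    edges = []
    for r in range(n):
        for c in range(n):
            v = r * n + c
            if c + 1 < n:                  # horizontal neighbor
                edges.append((v, v + 1))
            if r + 1 < n:                  # vertical neighbor
                edges.append((v, v + n))
    dummy = n * n                          # extra node; feature 0 is our choice
    features.append(0)
    edges += [(dummy, v) for v in range(n * n)]
    return features, edges

# 4x4 Othello-style initial board
B = [[0, 0, 0, 0],
     [0, 1, -1, 0],
     [0, -1, 1, 0],
     [0, 0, 0, 0]]
feats, edges = board_to_graph(B)
```

Nothing in the function depends on n, which is the point: the same translation (and hence the same GNN) applies unchanged to any board size.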

Our GNN receives the generated graph as input and outputs, for each node, the probability of playing the action corresponding to that node, together with the value of the current state (i.e., of the whole graph). The final GNN architecture, which is based on the GIN model (see the discussion in Section 2.2) with extra skip connections, is illustrated in Figure 2. The architecture was implemented using PyTorch Geometric (Fey and Lenssen, 2019). It contains the following modules:

  1. Three GIN layers with layer normalization and an activation function.

  2. Concatenation of all previous intermediate representations.

  3. Two fully-connected layers with batch normalization, an activation function and dropout.

  4. The computation is separated into two heads, computing the policy p and the value v. p is computed using one fully-connected layer followed by a softmax operation, yielding the probability vector. v is computed using one fully-connected layer, followed by a global mean pooling layer (i.e., the mean over all nodes) and, finally, a tanh nonlinearity.
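The two-headed output structure can be sketched without any deep-learning library. The snippet below only illustrates the head separation (softmax over per-node outputs for p; global mean pooling plus tanh for v), with toy scalar "embeddings" and weights standing in for the real GIN layers:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def two_headed_output(node_embeddings, w_pi, w_v):
    """node_embeddings: per-node scalars (stand-ins for embedding vectors);
    w_pi / w_v: toy scalar weights for the two linear heads."""
    policy = softmax([w_pi * h for h in node_embeddings])          # pi head
    pooled = sum(w_v * h for h in node_embeddings) / len(node_embeddings)
    value = math.tanh(pooled)                                      # v head
    return policy, value

pi, v = two_headed_output([1.0, 2.0, 0.5], w_pi=1.0, w_v=0.5)
```

Because both heads operate per node (policy) or over a node mean (value), the output shapes adapt automatically to the number of nodes, i.e., to the board size.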

Figure 2: Neural network architecture

3.2 Guiding MCTS

The second change we made concerns the guidance of the MCTS by the network f_θ. In MCTS, f_θ(s) is computed for each nonterminal leaf node s discovered during the game. These values are used to update the MCTS variables, propagating along the path visited in the current game simulation and updating their values accordingly.

Here we can take advantage of the scalability of our network f_θ to enhance the performance of the tree search. Upon arriving at a new leaf node s, we sample a few subgraphs of the graph generated from s and send them to f_θ. For each subgraph we first sample its size and then sample the nodes it contains. The subgraph size should be large enough to form an "interesting" new state that includes enough legal actions. The subgraphs' size range and the number of sampled subgraphs are two hyperparameters of our model. Note that sending more than one graph to the network for each newly visited leaf node can be implemented efficiently using batches, which increases the prediction time by only a small factor. Our experiments show that even a small number of subgraphs improves the player's performance remarkably.

The MCTS variables are updated in our model according to p' = p ⊙ p̄, where p is the probability vector taken from the full-graph evaluation f_θ(s), p̄ is the scatter mean/max of the probability vectors computed on the subgraphs (i.e., it takes into account how many times each node was sampled), and ⊙ stands for element-wise multiplication. Propagating v remains unchanged.¹

¹ GitHub repository: pytorch_scatter (released under the MIT license).
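The combination rule can be sketched as below. The handling of never-sampled nodes (a neutral weight of 1) and all names are our illustrative choices; the paper's implementation relies on the pytorch_scatter package:

```python
def scatter_mean(subgraph_preds, num_nodes):
    """subgraph_preds: list of {node: prob} dicts, one per sampled subgraph.
    Averages per node over however many subgraphs sampled that node."""
    sums = [0.0] * num_nodes
    counts = [0] * num_nodes
    for pred in subgraph_preds:
        for node, p in pred.items():
            sums[node] += p
            counts[node] += 1
    # Nodes never sampled keep a neutral weight of 1.0 (our choice here).
    return [s / c if c else 1.0 for s, c in zip(sums, counts)]

def combine(full_graph_probs, subgraph_preds):
    """Element-wise product of the full-graph prediction with the scatter
    mean of the subgraph predictions, renormalized to a distribution."""
    mean = scatter_mean(subgraph_preds, len(full_graph_probs))
    mixed = [p * m for p, m in zip(full_graph_probs, mean)]
    total = sum(mixed)
    return [x / total for x in mixed]

full = [0.5, 0.3, 0.2]                       # full-graph policy
subs = [{0: 0.8, 1: 0.2}, {0: 0.6, 2: 0.4}]  # two sampled subgraphs
combined = combine(full, subs)
```

Actions that look strong both globally and on the sampled subgraphs are reinforced, while disagreement between the two views damps the corresponding probability, which is the intended variance-reduction effect.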

3.3 Training pipeline

The training pipeline, as in the AZ model, comprises a loop alternating between the selfplay and optimization stages. The game result, z, of each selfplay game is propagated to all the states visited during that game. The player plays against itself, thus accumulating positive and negative examples. The neural network parameters are optimized at the end of the selfplay stage to match the MCTS probabilities and the winner z. For more details about the AZ training pipeline, see Appendix A.2.

For each training example produced during selfplay, AGZ generates extra examples by considering rotations and reflections of the board. In contrast, AZ did not use these extra training examples, demonstrating the strength of its guiding network. By viewing the board as a graph, our GNN takes these invariances into account, which justifies removing the extra examples without having to strengthen the guiding network (e.g., by increasing the number of parameters). Consequently, removing the rotation and reflection examples massively reduces the required training resources and substantially speeds up training (by 5x).
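The invariance argument can be checked concretely: rotating the board permutes the nodes but leaves the graph isomorphic, so a message-passing network receives the same input up to relabeling. The sketch below compares a crude isomorphism invariant (the multiset of node features paired with sorted neighbor features; it can coincide for non-isomorphic graphs, but agreement is what the argument predicts here) before and after a 90° rotation; all helper names are ours:

```python
def grid_graph(board):
    """4-connected grid graph over an n x n board of piece features."""
    n = len(board)
    feat = {(r, c): board[r][c] for r in range(n) for c in range(n)}
    nbrs = {}
    for r in range(n):
        for c in range(n):
            nbrs[(r, c)] = [(r + dr, c + dc)
                            for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))
                            if 0 <= r + dr < n and 0 <= c + dc < n]
    return feat, nbrs

def invariant(board):
    """Crude isomorphism invariant: sorted multiset of
    (node feature, sorted neighbor features)."""
    feat, nbrs = grid_graph(board)
    sig = [(feat[v], tuple(sorted(feat[u] for u in nbrs[v]))) for v in feat]
    return sorted(sig)

def rotate90(board):
    n = len(board)
    return [[board[n - 1 - c][r] for c in range(n)] for r in range(n)]

B = [[0, 1, 0],
     [-1, 1, 0],
     [0, 0, -1]]
same = invariant(B) == invariant(rotate90(B))  # rotation leaves graph isomorphic
```

A CNN, by contrast, sees the rotated board as a different input tensor, which is why AGZ had to manufacture the augmented examples in the first place.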

4 Evaluation

We conduct our experiments on three scalable board games: (1) Othello (Landau, 1985), also known as Reversi: players alternately place stones on the board, trying to "capture" the opponent's stones; any straight-line sequence of the opponent's stones lying between the just-placed stone and another stone of the current player is turned over and switches color; the winner is the player whose color holds the majority of the stones at the end. (2) Gomoku, also known as 'Five in a row' or Gobang: players take turns placing stones on the board; the first player to place k (here k = 5) stones in a row, a column or a diagonal wins. (3) Go: the well-known game of Go (Smith, 1908); two players alternately place stones on intersections of the board with the goal of surrounding more territory than the opponent. Table 1 analyzes the complexity of the games used for testing.

Table 1: Strategic complexity of small/large Othello, Gomoku and Go, given by evaluations (upper bounds) of their state and action space sizes.

We define two reference opponents for each game: a random player that randomly chooses a legal move, and a greedy player that chooses its action based on a hand-coded tactical heuristic score. The specific heuristics for each game are described in Appendix D. The greedy opponent provides a sufficient challenge to demonstrate the utility of generalization. Note that both reference players can play on every board size without any changes to their action-choosing mechanism.
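The two reference opponents amount to size-independent move selectors, sketched below. The heuristic passed to the greedy player here is a hypothetical stand-in; the paper's actual per-game heuristics are in its Appendix D:

```python
import random

def random_player(legal_moves, rng=random):
    """Reference opponent 1: uniformly random legal move."""
    return rng.choice(sorted(legal_moves))

def greedy_player(legal_moves, score_after_move):
    """Reference opponent 2: pick the move maximizing a hand-coded
    heuristic score of the resulting position (tie-broken by move order)."""
    return max(sorted(legal_moves), key=score_after_move)

# Hypothetical example: three legal moves with made-up heuristic scores.
moves = {"a", "b", "c"}
scores = {"a": 1, "b": 5, "c": 2}
choice = greedy_player(moves, scores.get)
```

Neither selector depends on the board size, which is exactly the property that makes them usable as opponents across all tested sizes.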

As a measure of success we use the average outcome of 100 games against one of the reference opponents, counted as +1 for a win, 0 for a tie and -1 for a loss. Each player plays half the games with the dark pieces (playing first) and half with the light pieces (playing second). We also analyze individually each main change we made. Furthermore, we play against the original AZ player, trained on a large board, which enables us to measure the effect of our improvements on training speed and realtime playing performance. The full CNN architecture of the AZ player is described in Appendix D. All tables and graphs provided include standard errors (five independent runs).
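The success measure above amounts to a simple average over game results; as a sketch:

```python
def average_outcome(results):
    """Average outcome over a series of games, scored +1 / 0 / -1 from the
    evaluated player's perspective; ranges from -1 (all losses) to +1."""
    score = {"win": 1, "tie": 0, "loss": -1}
    results = list(results)
    return sum(score[r] for r in results) / len(results)

# E.g., 60 wins, 10 ties, 30 losses out of 100 games:
avg = average_outcome(["win"] * 60 + ["tie"] * 10 + ["loss"] * 30)
```

Under this scoring, an average outcome of 0 corresponds to parity with the opponent, which is how "reaching parity" is read in the training-time analysis below.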

4.1 Experimental setup

Our RL infrastructure runs on a physical computing cluster. To train SAZ, we use one GPU (TITAN X (Pascal)/PCIe/SSE2) and one CPU (Intel Core i7), referred to as one resource unit. For each experiment conducted, we use the same resources to train. Our Othello player was trained for three days on boards of all sizes, between and . Our Gomoku player was trained for days on boards of random sizes, between and . The hyperparameters were selected via preliminary results on small boards. The training parameters for SAZ and the original AZ are presented in Appendix D.²

² Both the code and the model weights will be available upon acceptance.

4.2 Model analysis

For the model analysis we define some baseline players, each trained for three days (unless otherwise specified), as our model was:

  • Model1 refers to training the original AZ (with a CNN instead of our GNN) on the actual board size used for testing. We used a shallower CNN than the one used in the AZ model, due to our limited computational resources (the architecture is described in Appendix D). Note that because we failed to train a competitive AZ player with the shallow CNN, we reused symmetries of the training examples (see Section 3.3), as proposed in the AGZ model.

  • Model2 refers to training SAZ on the actual board size used for testing, rather than smaller boards.

  • Model3 is the same player as SAZ without the subgraph sampling component, i.e., the action probabilities are taken directly from the output of on the full graph.

  • Model4 is the same as SAZ except here we discard the output of on the full graph; thus, the action probabilities are calculated only according to the sampled subgraphs’ mean.

  • Model5 refers to an MCTS guided by a small CNN. The small CNN was trained by the AZ model on a smaller board of size . The action probabilities are taken as the scatter mean of the network output on all the sub-boards of size of the state that is evaluated.

The merits of our modified components:

We begin with a small ablation study in which we evaluate the contributions of our main changes: we take the complete SAZ and leave one component out at a time, both for training and for realtime playing. Note that in this experiment we focus on the first two changed components presented in Section 3. Removal of the third component was tested as well but, as expected, has no effect on performance, since the GNN framework is invariant to rotations and reflections; it does, however, increase the training time significantly.

Table 2 shows the average outcome (see the definition in Section 4) of each model playing against the greedy opponent on a board for Othello, and for Gomoku. Blue and red colors represent whether or not a player wins more than of the games against the greedy opponent. In general, removing each component results in a decrease in performance. Both model1 and model2 produce the poorest results, probably due to insufficient training time on the large board. Model3 already achieves fair results, and our full SAZ slightly improves on it. We further discuss the contribution of subgraph sampling in the next experiment.

Model Othello Gomoku
SAZ [complete model]
AZ trained on tested board [model1]
SAZ trained on tested board [model2]
only full graph [model3]
only subgraphs [model4]
Table 2: Leave-one-out study (test average outcome against the greedy opponent)

Generalization to larger boards:

As mentioned, SAZ was designed to allow training and playing on different sizes of input. The generalization study is presented in Figure 3 and shows the average outcome against the reference opponents for Othello and Gomoku, on various board sizes. We also include other baseline players’ performance. All models tested in this experiment were trained for three days on our machine. Overall, SAZ performs significantly better than other methods, consistently winning over of the games against the greedy opponent in all cases.

Among all baseline players, model4 and model5 exhibit the worst performance against both opponents and suffer the greatest performance decrease as the board gets larger. The results of both models suggest that a small network applied only to local areas of the full board does not provide good generalization power, probably because long-range relations are necessary to fully observe the state. Model3 is fairly stable across board sizes, achieving, reasonably enough, its best results on the board sizes on which it was trained. Observe that our Othello SAZ reaches its peak efficacy on a board size that it had not seen during training.

Figure 3: Average outcome of scalable players against the reference opponents on various board sizes and games. The shadowed areas represent the standard errors (5 independent runs).

We further examine the generalization power geometrically by considering the latent space of the GNN's actions. We constructed synthetic Othello boards of a specific form, shown in Figure 3(a), in different sizes from to . We apply principal component analysis (PCA) (Wold et al., 1987) to the embedding provided by the GNN for two specific actions: one that we consider a "good action" (top-left corner, capturing all opponent pieces in the first column) and a second that we deem a "bad action" (bottom-right corner, which captures no pieces at all). Figure 3(b) shows the first two components of the PCA of both actions (on the X,Y plane) as a function of the board size (Z axis). Clearly, except for a few outliers, most of the good actions (blue) are easily separated from the bad ones (red), showing that the latent space successfully encodes the underlying structure of the actions on the board, even for massive board sizes.

(a) Synthetic Othello boards
(b) 2d PCA projection of good (blue) and bad (red) action embeddings as a function of the board size.
Figure 4: (a) The synthetic Othello boards of increasing sizes we created. A similar board of the same form was created for all board sizes between and . (b) The first two principal components of the embeddings provided by our GNN (X,Y plane) as a function of the board size (Z axis). Blue points refer to the embedding of the “good action” of placing a dark piece in the top-left corner. Red ones refer to the “bad action” of placing a dark piece in the bottom-right corner.

Training time analysis:

Figure 5 shows the progression of our GNN during training. We measure the GNN's skill by evaluating the average outcome of model3 (i.e., an MCTS guided by the GNN) at each training stage, against the greedy opponent on an Othello board and a Gomoku board. Since we test the GNN on larger boards than the ones used for training, this can be seen as another measure of generalization power. As a comparison, we train model1 (i.e., the original CNN) on the larger boards for days and evaluate it along the training time as well.

We observe that as training advances, model3 gets stronger, reaching parity with the greedy player after a few hours of training and achieving around an win rate at the end of training. In contrast, model1 needed between four and five days of training to reach parity, and achieved model3's final win rate against the greedy player only after days (Othello) and days (Gomoku).

Figure 5: Progression of GNN skill along training. The average outcome is evaluated by playing against the greedy opponent on Othello (board size ) and Gomoku (board size ).

Comparison to AZ:

Table 3 shows the average outcome of various scalable players (rows) against the original AZ guided by a CNN (columns). Entries in the table represent the average outcome of the game with respect to the row player. Blue and red colors represent whether or not a specific (row) player wins more than of the games against AZ. The scalable players include our model as well as other baseline players, all trained for three days on small boards (up to ). AZ players were trained for days on the large board of the size that was used for testing ( or ).

The results show that SAZ wins all competitions, with a more than win rate on Othello and on Gomoku. Model3, which does not use the subgraph sampling technique, also competes fairly well with AZ, but its performance on Othello is still lower by . Neither the model4 nor the model5 Othello player is competitive with AZ, showing again that global dependencies on the board are critical for gameplay. Nevertheless, both models produce a positive win rate against AZ on Gomoku, showing that local structures are more helpful for mastering this game. To further illustrate the capabilities of SAZ compared to AZ, we conduct the same experiment with Othello and Gomoku boards. The effect is much stronger, as SAZ wins of the Othello games against AZ. The AZ Gomoku player performs poorly in all cases, suggesting that enlarging the board must be accompanied either by a more powerful CNN architecture or by more training.

Othello AZ Gomoku AZ
SAZ [complete model]
only full graph [model3]
only subgraphs [model4]
small CNN [model5]
Table 3: Average outcome of scalable players (rows), trained on small boards, against the original AZ players (columns), trained on the tested board size over nearly more training time.

Go evaluation:

Training AZ on the game of Go with full boards is computationally challenging with our available resources. Recall that DeepMind used TPUs for days to train the AZ Go player. We therefore trained our SAZ for three days on Go boards of maximal size . To test our model we trained two AZ players on boards of sizes and for and days, respectively. Our analysis suggests that SAZ wins around (on a board) and (on a board) of the games against AZ. These results, as well as the extensive experiments on Othello and Gomoku, which share some properties with Go, indicate that our method can lead to solutions that master the game of Go with much less computational overhead.

5 Conclusion and future work

In this paper we presented an end-to-end RL model for training on and playing scalable board games. Central to our approach is the combination of a scalable neural network (a GNN) and the AZ algorithm. The use of GNNs facilitated enhancing the model with the subgraph sampling technique, and enabled scaling from small boards to large ones. Through extensive experimental evaluation, we demonstrated the effectiveness of our method in learning game strategies, which we validated using different games and various board sizes. The generalization analysis suggests that learning on small boards is faster and more practical than learning solely on large boards. The experiments in this paper suggest that SAZ offers a promising new technique for learning to play on large boards, requiring an order of magnitude less training while keeping the performance level intact.

We have left a number of potential improvements to future work. First, to date we have focused on board games whose actions correspond to nodes of the graph. This focus was natural because GNNs output a feature vector for each node. Nevertheless, the same approach can be applied to another family of board games by using GNNs that estimate edge features (e.g., the game of Chess can be formulated as a graph problem in which edges correspond to actions on the board). A promising way to achieve this could be the method of Berg et al. (2017), who employ the incident node features to derive edge representations. Furthermore, our subgraph sampling technique, which effectively improved our model's performance by reducing the GNN's uncertainty, is of potential independent interest; it would be interesting to validate this approach in different domains. Another promising idea is to take a model pretrained with our approach and fine-tune it on a larger board, which would possibly enhance performance at that size. Finally, it would be important to consider deeper GNN architectures, which may enable discovering longer-term dependencies on the board.


  • K. Abe, Z. Xu, I. Sato, and M. Sugiyama (2019) Solving NP-hard problems on graphs by reinforcement learning without domain knowledge. arXiv preprint arXiv:1905.11623. Cited by: §2.3.
  • L. Atlas, T. Homma, and R. Marks (1987) An artificial neural network for spatio-temporal bipolar patterns: application to phoneme classification. In Neural Information Processing Systems, pp. 31–40. Cited by: §1.
  • R. v. d. Berg, T. N. Kipf, and M. Welling (2017) Graph convolutional matrix completion. arXiv preprint arXiv:1706.02263. Cited by: §5.
  • R. Coulom (2006) Efficient selectivity and backup operators in Monte-Carlo tree search. In International conference on computers and games, pp. 72–83. Cited by: §2.1.
  • H. Dai, B. Dai, and L. Song (2016) Discriminative embeddings of latent variable models for structured data. In International conference on machine learning, pp. 2702–2711. Cited by: §2.3.
  • H. Dai, E. B. Khalil, Y. Zhang, B. Dilkina, and L. Song (2017) Learning combinatorial optimization algorithms over graphs. arXiv preprint arXiv:1704.01665. Cited by: §2.3.
  • M. Defferrard, X. Bresson, and P. Vandergheynst (2016) Convolutional neural networks on graphs with fast localized spectral filtering. arXiv preprint arXiv:1606.09375. Cited by: §2.2.
  • M. Fey and J. E. Lenssen (2019) Fast graph representation learning with PyTorch Geometric. arXiv preprint arXiv:1903.02428. Cited by: Appendix D, §3.1.
  • J. Gauci and K. O. Stanley (2010) Indirect encoding of neural networks for scalable Go. In International Conference on Parallel Problem Solving from Nature, pp. 354–363. Cited by: §2.3, §2.3.
  • J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl (2017) Neural message passing for quantum chemistry. In International Conference on Machine Learning, pp. 1263–1272. Cited by: §2.2, §3.1, §3.1.
  • W. L. Hamilton, R. Ying, and J. Leskovec (2017) Inductive representation learning on large graphs. arXiv preprint arXiv:1706.02216. Cited by: §2.2.
  • T. Landau (1985) Othello: brief & basic. US Othello Association 920, pp. 22980–23425. Cited by: §4.
  • G. Lederman, M. N. Rabe, E. A. Lee, and S. A. Seshia (2018) Learning heuristics for quantified boolean formulas through deep reinforcement learning. arXiv preprint arXiv:1807.08058. Cited by: §2.3.
  • Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel (2015) Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493. Cited by: §2.2.
  • F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini (2008) The graph neural network model. IEEE transactions on neural networks 20 (1), pp. 61–80. Cited by: §1, §2.2.
  • T. Schaul and J. Schmidhuber (2009) Scalable neural networks for board games. In International Conference on Artificial Neural Networks, pp. 1005–1014. Cited by: §2.3, §2.3.
  • D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529 (7587), pp. 484–489. Cited by: §2.1.
  • D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, et al. (2017a) Mastering Chess and Shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815. Cited by: Appendix A, §1, §2.1, §2.
  • D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al. (2017b) Mastering the game of Go without human knowledge. Nature 550 (7676), pp. 354–359. Cited by: §2.1.
  • A. Smith (1908) The game of Go: the national game of Japan. Moffat, Yard. Cited by: §4.
  • P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio (2017) Graph attention networks. arXiv preprint arXiv:1710.10903. Cited by: §2.2.
  • R. J. Williams (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning 8 (3-4), pp. 229–256. Cited by: §2.3.
  • S. Wold, K. Esbensen, and P. Geladi (1987) Principal component analysis. Chemometrics and intelligent laboratory systems 2 (1-3), pp. 37–52. Cited by: §4.2.
  • L. Wu and P. Baldi (2007) A scalable machine learning approach to Go. Advances in Neural Information Processing Systems 19, pp. 1521. Cited by: §2.3, §2.3.
  • Z. Xing and S. Tu (2020) A graph neural network assisted Monte Carlo tree search approach to the traveling salesman problem. IEEE Access 8, pp. 108418–108428. Cited by: §2.3.
  • K. Xu, W. Hu, J. Leskovec, and S. Jegelka (2018) How powerful are graph neural networks?. arXiv preprint arXiv:1810.00826. Cited by: §B.2, §2.2, §2.3.
  • E. Yolcu and B. Póczos (2019) Learning local search heuristics for Boolean satisfiability. In NeurIPS, pp. 7990–8001. Cited by: §2.3, §2.3.

Appendix A AlphaZero

As mentioned in Section 2.1, Silver et al. [2017a] proposed an RL algorithm for board game playing. It uses a neural network $f_\theta$, which guides the internal steps of an MCTS. $f_\theta$ gets as input a state $s$ and outputs a probability vector $\vec{p}$ over all possible moves, and a scalar $v$, which corresponds to the network's confidence regarding the current player's chances of winning the game.

a.1 Monte Carlo tree search:

The tree search is designed to explore the game states and actions, and to provide an improved probability vector $\vec{\pi}$. Here we describe the MCTS variant used in the AZ framework. For each state-action pair $(s, a)$, it stores the following variables:

  • $Q(s,a)$: The action value.

  • $P(s,a)$: The prior probability of choosing $a$ from the state $s$.

  • $N(s,a)$: The visit count of the pair $(s,a)$.

  • $U(s,a)$: The upper confidence bound of the pair $(s,a)$, computed by:

    $U(s,a) = Q(s,a) + c_{\mathrm{puct}} \cdot P(s,a) \cdot \frac{\sqrt{\sum_b N(s,b)}}{1 + N(s,a)}$

    where $c_{\mathrm{puct}}$ is a hyperparameter that controls the exploration-exploitation trade-off.
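As an illustration only (the dictionary-based interface and names are ours, not from the paper), the upper confidence bound above can be computed as:

```python
import math

def puct(Q, P, N, c_puct=1.0):
    """Compute U(s, a) for every action a of a state s.

    Q, P, N are dicts mapping each action to its action value, prior
    probability, and visit count; c_puct trades off exploration
    against exploitation.
    """
    total_visits = sum(N.values())
    return {
        a: Q[a] + c_puct * P[a] * math.sqrt(total_visits) / (1 + N[a])
        for a in Q
    }
```

An unvisited action with a non-negligible prior gets a large bonus, which is what drives the tree search to explore it.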

Each round of MCTS consists of:

  1. Selection: Start at the root and repeatedly select a child node maximizing $Q(s,a) + U(s,a)$ until an unexpanded node $s_L$ is reached.

  2. Expansion: If $s_L$ is a terminal state (i.e., has a decisive result $z$: a win, a tie, or a loss), let $v = z$. Otherwise, evaluate $f_\theta(s_L) = (\vec{p}, v)$ and store $P(s_L, \cdot) = \vec{p}$.

  3. Backpropagation: Traverse all the pairs $(s,a)$ visited along the path to $s_L$ and update:

    $Q(s,a) \leftarrow \frac{N(s,a) \cdot Q(s,a) + v}{N(s,a) + 1}, \qquad N(s,a) \leftarrow N(s,a) + 1$

After a predefined number of rounds, calculate the improved probability vector $\vec{\pi}$. The vector element in the location corresponding to the action $a$ is:

$\pi_a = \frac{N(s,a)^{1/\tau}}{\sum_b N(s,b)^{1/\tau}}$

where $\tau$ is a temperature parameter. When $\tau$ is large, the probability vector is much closer to a uniform distribution; when $\tau \rightarrow 0$, the probability of the most visited action approaches $1$. Usually, we reduce $\tau$ as the learning advances.
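A minimal sketch of this visit-count-to-policy conversion, under our own naming:

```python
def improved_policy(visit_counts, tau=1.0):
    """Turn MCTS visit counts into the improved policy pi.

    A large tau flattens the distribution towards uniform;
    tau = 0 is treated as the argmax limit.
    """
    if tau == 0:  # deterministic limit: all mass on the most visited action
        best = max(range(len(visit_counts)), key=lambda i: visit_counts[i])
        return [1.0 if i == best else 0.0 for i in range(len(visit_counts))]
    scaled = [n ** (1.0 / tau) for n in visit_counts]
    total = sum(scaled)
    return [x / total for x in scaled]
```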

a.2 Training pipeline:

The training is composed of a loop between two independent stages:

  • Selfplay: The player plays against itself, using MCTS guided by the latest weights of $f_\theta$. The selfplay accumulates training examples of the form $(s, \vec{\pi}, z)$, where $s$ is the state (usually in a canonical form), $\vec{\pi}$ is the probability vector obtained from MCTS, and $z$ is the final result of the game (when using the canonical form for $s$, we always take the perspective of a specific player). At the end of this stage, AZ updates the training set to include all the boards that can be constructed by a rotation or reflection of an example in the training set.

  • Optimization: After constructing the training set in the previous stage, the neural network is trained to maximize the similarity between $\vec{p}$ and $\vec{\pi}$, and to minimize the difference between $v$ and $z$. The loss function used to achieve this goal (for a single example) is:

    $\ell = (z - v)^2 - \vec{\pi}^{\top} \log \vec{p} + c \|\theta\|^2$

    where $c$ is a regularization factor.

The training examples are kept between iterations. When one iteration ends, the oldest training examples are partially removed.
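The per-example loss above can be sketched in plain Python (the argument names are ours, not from the paper):

```python
import math

def alphazero_loss(p, v, pi, z, theta=None, c=0.0):
    """Per-example AZ loss: (z - v)^2 - pi^T log p + c * ||theta||^2.

    p: predicted move probabilities, v: predicted value,
    pi: MCTS policy target, z: game outcome,
    theta: flat list of weights, c: regularization factor.
    """
    value_loss = (z - v) ** 2
    policy_loss = -sum(t * math.log(q) for t, q in zip(pi, p))
    reg = c * sum(w * w for w in theta) if theta is not None else 0.0
    return value_loss + policy_loss + reg
```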

Appendix B Graph neural networks

b.1 Message passing procedure

The message passing algorithm is a central component of graph neural networks. It uses a predefined number of iterations to propagate information between the nodes of the graph. Here we describe it in detail. In its basic form, the message passing algorithm receives as input a graph $G = (V, E)$ and the number of overall iterations $T$, and stores hidden representations $h_v^t \in \mathbb{R}^{d_t}$ of the graph nodes $v \in V$, where $t \in \{0, \ldots, T\}$ and $d_t$ is the hidden dimension of layer $t$.

At iteration $t$, each node $v$ receives messages from its graph neighbors, denoted by $N(v)$. Messages are generated by applying a message function $M_t$ to the hidden states of nodes in the graph, and are then combined by an aggregation function $AGG$, e.g., a sum or a mean (Equation 3). An update function $U_t$ is later used to compute a new hidden state for every node (Equation 4). Finally, after $T$ iterations, a readout function $R$ outputs the final prediction, based on the final node embeddings (see Equation 5 for node prediction and Equation 6 for graph prediction). Neural networks are often used for $M_t$, $U_t$, and $R$:

$m_v^{t+1} = AGG\left(\{M_t(h_v^t, h_u^t) : u \in N(v)\}\right)$ (3)

$h_v^{t+1} = U_t(h_v^t, m_v^{t+1})$ (4)

$\hat{y}_v = R(h_v^T)$ (5)

$\hat{y} = R\left(\{h_v^T : v \in V\}\right)$ (6)
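The generic message passing loop can be sketched as follows (our own minimal interface; real implementations such as PyTorch Geometric vectorize this):

```python
def message_passing(neighbors, h, message_fn, aggregate, update_fn, T):
    """Generic message passing over T iterations.

    neighbors: list of neighbor-index lists, h: list of node states,
    message_fn: M_t, aggregate: AGG (e.g., sum), update_fn: U_t.
    Returns the final node representations.
    """
    n = len(neighbors)
    for _ in range(T):
        # each node collects and aggregates messages from its neighbors
        msgs = [aggregate([message_fn(h[v], h[u]) for u in neighbors[v]])
                for v in range(n)]
        # every hidden state is updated from the aggregated message
        h = [update_fn(h[v], msgs[v]) for v in range(n)]
    return h
```

A readout function applied to the returned states would complete the node- or graph-level prediction.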


b.2 Graph isomorphism networks

Xu et al. proved that the graph isomorphism network (GIN) model is as powerful as the Weisfeiler-Lehman graph isomorphism test and is the most expressive among the class of message-passing GNNs. We describe a hidden feature update layer of GIN from a message passing perspective. At iteration number $t$, each node $v$ is updated by:

$h_v^{t+1} = \mathrm{MLP}^t\left((1 + \epsilon^t) \cdot h_v^t + \sum_{u \in N(v)} h_u^t\right)$

where node features are aggregated by a summation operation, $\epsilon^t$ is either a learnable parameter or a fixed scalar, and $\mathrm{MLP}^t$ denotes a neural network (i.e., an MLP). The same update rule can be computed in a matrix form as:

$H^{t+1} = \mathrm{MLP}^t\left((A + (1 + \epsilon^t) \cdot I) \cdot H^t\right)$

where $A$ is the adjacency matrix of $G$ and $I$ is the identity matrix. Note that in our GNN architecture we used a two-headed network for computing the policy (a node regression task) and the value (a graph classification task).
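A dependency-free sketch of the matrix-form GIN update (names are ours; the MLP is passed in as a function):

```python
def gin_layer(adj, H, mlp, eps=0.0):
    """One GIN update in matrix form: MLP((A + (1 + eps) * I) H).

    adj: adjacency matrix as a list of lists, H: node-feature matrix,
    mlp: function applied to the mixed features, eps: the GIN scalar.
    """
    n = len(adj)
    # compute (A + (1 + eps) * I) H row by row; (u == v) contributes I
    mixed = [
        [sum((adj[v][u] + (1.0 + eps) * (u == v)) * H[u][j]
             for u in range(n))
         for j in range(len(H[0]))]
        for v in range(n)
    ]
    return mlp(mixed)
```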

Appendix C Pseudocode

Here we provide the pseudocode for our ScalableAlphaZero model. Algorithm 1 describes the MCTS parameter updates starting from an initial state, Algorithm 2 describes the update rule for the policy vector based on the updated MCTS, and Algorithm 3 describes the training pipeline.

Input: an initialized MCTS tree , a state (root), number of subgraphs to use , size of subgraphs .
if  is terminal then
       get game result of
convert to graph
while  is not terminal do
       if  is not expanded then
             sample subgraphs of size between and
       action that maximizes
       next state of after selecting
end while
while  do
       previous action
       previous state
end while
Output: an updated .
Algorithm 1
Input: a state , a temperature , the number of MCTS simulations .
for  to  do
end for
Algorithm 2 compute
Input: maximal board size for training (squared), the number of AZ iterations , a GNN , an MCTS , the number of AZ iterations to include in history .
for  to  do
       sample board size w.r.t. the probability vector
       training examples selfplay()
       add training examples to history
       if length(history)> then
             pop history
       for batch in shuffled history do
       end for
end for
compute total loss
optimize GNN parameters to minimize total loss
Output: optimized GNN
Algorithm 3 train
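The history bookkeeping of the training pipeline above can be sketched as follows (the callback names are hypothetical; a bounded deque pops the oldest iteration automatically):

```python
from collections import deque

def train_pipeline(num_iters, history_len, sample_board_size, selfplay, train_step):
    """Sketch of the SAZ training loop: keep examples from the last
    `history_len` selfplay iterations and train on all of them."""
    history = deque(maxlen=history_len)
    for _ in range(num_iters):
        n = sample_board_size()          # board size for this iteration
        history.append(selfplay(n))      # one iteration's training examples
        batch = [ex for examples in history for ex in examples]
        train_step(batch)                # optimize the GNN on the kept examples
```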

Appendix D Global setup


We used three layers of GIN with a nonlinearity and a hidden dimension of .


The number of MCTS simulations was set to . We used for the exploration and exploitation parameter. The temperature was set to at the beginning of the tree search and, after search iterations, was changed to (i.e., the action is chosen by argmax). Consider a board of size . The number of sampled subgraphs grows with the board size and is set to . For the parameter that controls the subgraph size we used or .

CNN architecture:

The CNN architecture is relevant to the experiments that include the original AZ player (i.e., model1). It contains the following modules:

  1. 2d convolutional layers with channels, a kernel of size three, stride and padding, followed by 2d batch normalization and an activation function.

  2. 2d convolutional layers with channels, a kernel of size three and stride, followed by 2d batch normalization and an activation function.

  3. A fully-connected layer with a hidden dimension of size 1024 and dropout, followed by 1d batch normalization and an activation function.

  4. A fully-connected layer with a hidden dimension of size 512 and dropout, followed by 1d batch normalization and an activation function.

  5. The computation is separated into two different heads, computing the policy $\vec{p}$ and the value $v$. $\vec{p}$ is computed using one fully-connected layer from input of size to output of size (the number of possible actions), followed by a log-softmax operation, yielding the probability vector. $v$ is computed using one fully-connected layer from input of size to output of size , followed by a nonlinearity.

Greedy players heuristics:

As mentioned in Section 4, for our challenging baseline opponent we defined a greedy player, which chooses its actions based on a hand-coded heuristic score. The heuristics are unique for each game: for the game of Othello, the state score is the difference between the player's stone count and the opponent's; for the game of Gomoku, the score is the length of the maximal sequence of the current player's stones minus the length of the maximal sequence of the opponent's stones; for the game of Go, the score is the difference between the player's territories and the opponent's.
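For instance, the Othello stone-difference heuristic can be written as follows (the +1/-1 board encoding is our illustrative assumption):

```python
def othello_greedy_score(board, player):
    """Stone-difference heuristic for the greedy Othello player.

    board: 2D list with +1 / -1 for the two players' stones and 0
    for empty squares; player: +1 or -1. Returns the player's
    stone count minus the opponent's.
    """
    return player * sum(cell for row in board for cell in row)
```

The greedy player simply picks the legal move whose resulting state maximizes this score.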

Training and environment:

Our loss function did not include a regularization term (i.e., the regularization factor was set to zero). The training set included examples from iterations of selfplay and optimization.

For our multiple-sized SAZ training, we randomly sampled a board size at the beginning of each game in the selfplay procedure (see Section A), drawn from a probability distribution proportional to the board size. For example, for training Gomoku we used boards of sizes and the probability vector for choosing each size was . The full algorithm is described in Section C.
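The size-proportional sampling rule can be sketched as follows (a hypothetical helper; Python's `random.choices` accepts unnormalized weights):

```python
import random

def sample_board_size(sizes, rng=None):
    """Sample a training board size with probability proportional
    to the size itself, so larger boards are selfplayed more often."""
    rng = rng or random.Random()
    return rng.choices(sizes, weights=sizes, k=1)[0]
```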

We used PyTorch Geometric [Fey and Lenssen, 2019] for the implementation of the GNN. We used the alpha-zero-general GitHub repository (released under the MIT license) for the re-implementation of the AlphaZero model with our modified components, and used the Go environment from the alpha-zero-general-with-go-game GitHub repository (released under the MIT license).