Systematic N-tuple Networks for Position Evaluation: Exceeding 90% in the Othello League

N-tuple networks have been successfully used as position evaluation functions for board games such as Othello or Connect Four. The effectiveness of such networks depends on their architecture, which is determined by the placement of the constituent n-tuples, the sequences of board locations providing input to the network. The most popular method of placing n-tuples consists in randomly generating a small number of long, snake-shaped board location sequences. In comparison, we show that learning n-tuple networks is significantly more effective if they involve a large number of systematically placed, short, straight n-tuples. Moreover, we demonstrate that in order to obtain the best performance and the steepest learning curve for Othello it is enough to use n-tuples of size just 2, yielding a network consisting of only 288 weights. The best such network evolved in this study has been evaluated in the online Othello League, obtaining a performance of nearly 0.96, the best among the players submitted to date.


I Introduction

Board games have always attracted attention in AI due to their clear rules, mathematical elegance, and simplicity. Since the early works of Claude Shannon on Chess [1] and Arthur Samuel on Checkers [2], a lot of research has been conducted in the area of board games towards finding either perfect players (Connect Four [3]) or stronger-than-human players (Othello [4]). The bottom line is that board games still constitute valuable test-beds for improving general artificial and computational intelligence game-playing methods such as reinforcement learning, Monte Carlo tree search, branch and bound, and (co)evolutionary algorithms.

Most of these techniques employ a position evaluation function to quantify the value of a given game state. In the context of Othello, one of the most successful position evaluation functions is the tabular value function [5], also known as the n-tuple network [6]. It consists of a number of n-tuples, each associated with a lookup table, which maps the contents of board fields into a real value. The effectiveness of an n-tuple network highly depends on the placement of its n-tuples [7]. Typically, n-tuple network architectures consist of a small number of long, randomly generated, snake-shaped n-tuples [8, 7, 9].

Despite the importance of network architecture, to the best of our knowledge no study exists that systematically studies and evaluates different ways of placing n-tuples on the board.

In this paper, we propose an n-tuple network architecture consisting of a large number of short, straight n-tuples, generated in a systematic way. In extensive computational experiments, we show that for learning position evaluation for Othello, such an architecture is significantly more effective than one involving randomly generated n-tuples. We also investigate how the length of n-tuples affects the learning results. Finally, the performance of the best evolved n-tuple network is evaluated in the online Othello League.

II Methods

II-A Othello

Figure 1: An Othello position in which white has legal moves (dashed gray circles). If white places a piece on one of these fields, the black pieces enclosed between it and another white piece are reversed to white.

Othello (a.k.a. Reversi) is a two-player, deterministic, perfect-information strategic game played on an 8×8 board. There are 64 pieces, black on one side and white on the other. The game starts with two white and two black pieces forming an askew cross in the center of the board. The players take turns putting one piece on the board with their color facing up. A legal move consists in placing a piece on a field so that it forms a vertical, horizontal, or diagonal line with another piece of the moving player, with a continuous, non-empty sequence of the opponent’s pieces in between (see Fig. 1); these pieces are reversed after the piece is placed. A player passes if and only if it cannot make a legal move. The game ends when both players pass consecutively. Then, the player having more pieces with their color facing up wins.
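
For concreteness, the move-legality rule can be sketched in a few lines of Java (an illustrative sketch, not the implementation used in this paper; the board representation and names are assumptions):

  // Assumed representation: int[8][8] board with 0 = empty, 1 = black, 2 = white.
  public final class OthelloRules {
      static final int[][] DIRS =
          {{-1,-1},{-1,0},{-1,1},{0,-1},{0,1},{1,-1},{1,0},{1,1}};

      // A move is legal if, in some direction, a non-empty run of opponent
      // pieces is followed by one of the mover's own pieces; that run is
      // exactly the set of pieces that gets reversed.
      static boolean isLegal(int[][] board, int row, int col, int player) {
          if (board[row][col] != 0) return false;
          int opponent = 3 - player;                       // 1 <-> 2
          for (int[] d : DIRS) {
              int r = row + d[0], c = col + d[1], run = 0;
              while (r >= 0 && r < 8 && c >= 0 && c < 8 && board[r][c] == opponent) {
                  r += d[0]; c += d[1]; run++;
              }
              if (run > 0 && r >= 0 && r < 8 && c >= 0 && c < 8 && board[r][c] == player)
                  return true;
          }
          return false;
      }
  }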

Othello has been estimated to have around 10^28 legal positions [10] and has not been solved; this is one reason why it has become such a popular domain for computational intelligence methods [11, 12, 13, 14, 15, 16, 7, 17].

II-B Position Evaluation Functions

In this paper, our goal is not to design state-of-the-art Othello players, but to evaluate position evaluation functions. That is why our players are simple state evaluators in a 1-ply setup: given the current state of the board, a player generates all legal moves and applies the position evaluation function to the resulting states. The state gauged as the most desirable determines the move to be played. Ties are resolved at random.

The simplest position evaluation function is the position-weighted piece counter (WPC), a linear board evaluation function. It assigns a weight w_i to each board location i and uses a scalar product to calculate the utility f of a board state b:

f(b) = Σ_{i=1}^{64} w_i b_i,

where b_i is 0 in the case of an empty location, +1 if a black piece is present, or −1 in the case of a white piece.
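
Since the WPC is a plain scalar product, its evaluation is a one-liner; a minimal Java sketch under the encoding above (names are illustrative):

  // board[i] is 0 for an empty field, +1 for black, -1 for white.
  static double wpc(double[] weights, int[] board) {
      double utility = 0.0;
      for (int i = 0; i < 64; i++)
          utility += weights[i] * board[i];   // f(b) = sum_i w_i * b_i
      return utility;
  }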

A WPC player often used in Othello research as an expert opponent [18, 6, 19, 13, 16, 7] is the Standard WPC Heuristic Player (swh). Its weights, hand-crafted by Yoshioka et al. [20], are presented in Table I.

Table I: The weights of the Standard WPC Heuristic player (swh)

 1.00 -0.25  0.10  0.05  0.05  0.10 -0.25  1.00
-0.25 -0.25  0.01  0.01  0.01  0.01 -0.25 -0.25
 0.10  0.01  0.05  0.02  0.02  0.05  0.01  0.10
 0.05  0.01  0.02  0.01  0.01  0.02  0.01  0.05
 0.05  0.01  0.02  0.01  0.01  0.02  0.01  0.05
 0.10  0.01  0.05  0.02  0.02  0.05  0.01  0.10
-0.25 -0.25  0.01  0.01  0.01  0.01 -0.25 -0.25
 1.00 -0.25  0.10  0.05  0.05  0.10 -0.25  1.00

II-C Othello Position Evaluation Function League

WPC is only one of many possible position evaluation functions; other popular ones include neural networks and n-tuple networks. To allow a direct comparison between various position evaluation functions and algorithms capable of learning their parameters, Lucas and Runarsson [18] established the Othello Position Evaluation Function League (http://algoval.essex.ac.uk:8080/othello/League.jsp). The Othello League, for short, is an on-line ranking of 1-ply Othello state-evaluator players. The players submitted to the league are evaluated against SWH (the Standard WPC Heuristic Player).

Both the game itself and the players are deterministic (except for the rare situation when at least two positions have the same evaluation value). Therefore, to provide a more continuous performance measure, the Othello League introduces some randomization into Othello: both players are forced to make random moves with probability ε = 0.1. As a consequence, the players no longer play (deterministic) Othello, but stochastic ε-Othello. However, it has been argued that the ability to play ε-Othello is highly correlated with the ability to play Othello [18].
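
The forced-random-move rule of ε-Othello can be sketched as a thin wrapper around any 1-ply state evaluator (an illustrative Java sketch; the method and its parameters are assumptions, not the league's actual code):

  // With probability epsilon, a uniformly random legal move overrides the
  // move selected greedily by the position evaluation function.
  static int pickMove(java.util.List<Integer> legalMoves, int greedyMove,
                      double epsilon, java.util.Random rng) {
      if (rng.nextDouble() < epsilon)
          return legalMoves.get(rng.nextInt(legalMoves.size()));
      return greedyMove;
  }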

The performance in the Othello League is determined by the number of wins against the SWH player in ε-Othello in a series of double games, each consisting of two single games, played once as white and once as black. To aggregate the performance into a scalar value, we assume that a win counts as 1 point and a draw as 0.5 points. The average score obtained in this way against SWH constitutes the Othello League performance measure, which we adopt in this paper.
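
The aggregation itself is straightforward; a minimal Java sketch (GameResult is a hypothetical type introduced here for illustration):

  enum GameResult { WIN, DRAW, LOSS }

  // Average score over a series of games against SWH: a win counts as
  // 1 point, a draw as 0.5 points (the Othello League performance measure).
  static double leaguePerformance(java.util.List<GameResult> results) {
      double points = 0.0;
      for (GameResult r : results) {
          if (r == GameResult.WIN) points += 1.0;
          else if (r == GameResult.DRAW) points += 0.5;
      }
      return points / results.size();
  }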

II-D N-tuple Network

Figure 2: An n-tuple employed eight times to take advantage of board symmetries (symmetric sampling). The eight symmetric expansions of the n-tuple return eight values, which are summed, for the given board state.

The best performing evaluation function in the Othello League is the n-tuple network [16]. N-tuple networks were first applied to the optical character recognition problem by Bledsoe and Browning [21]. For games, they were first used by Buro under the name of tabular value functions [5], and later popularized by Lucas [6]. According to Szubert et al., the main advantages of n-tuple networks “include conceptual simplicity, speed of operation, and capability of realizing nonlinear mappings to spaces of higher dimensionality” [7].

(a) A rand-m×n network consisting of m randomly generated snake-shaped n-tuples.
(b) An all-n network consisting of all straight n-tuples.
Figure 3: Comparison of the rand-* and all-* n-tuple network architectures. The “main” n-tuples are shown in red, their symmetric expansions in light gray.

An n-tuple network consists of m n-tuples, where n is the tuple’s size. For a given board position b, it returns the sum of the values returned by the individual n-tuples. The ith n-tuple, for i = 1, …, m, consists of a predetermined sequence of board locations loc_i1, …, loc_in, and a lookup table LUT_i. The latter contains a value for each board pattern that can be observed on the sequence of board locations. Thus, an n-tuple network is a function

f(b) = Σ_{i=1}^{m} LUT_i[ index(b_loc_i1, …, b_loc_in) ].

Among the possible ways to map the observed sequence of board values to an index in the lookup table, the following one is arguably convenient and computationally efficient:

index(v) = Σ_{j=1}^{n} v_j c^{j−1},

where c is a constant denoting the number of possible values of a single board square, and v is the sequence of board values (the observed pattern) such that v_j ∈ {0, …, c−1} for j = 1, …, n. In the case of Othello, c = 3, and white, empty, and black squares are encoded as 0, 1, and 2, respectively.
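
The index computation amounts to reading the observed pattern as a base-c number; a Java sketch for c = 3 (illustrative, assuming the 0/1/2 encoding above):

  // pattern[j-1] holds the value v_j observed at the j-th location of the n-tuple.
  static int index(int[] pattern) {
      int idx = 0;
      for (int j = pattern.length - 1; j >= 0; j--)
          idx = idx * 3 + pattern[j];   // index(v) = sum_j v_j * 3^(j-1)
      return idx;
  }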

The effectiveness of n-tuple networks is improved by using symmetric sampling, which exploits the inherent symmetries of the Othello board [11]. In symmetric sampling, a single n-tuple is employed eight times, returning one value for each possible board rotation and reflection. See Fig. 2 for an illustration.
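
The eight expansions can be generated by composing a transposition with horizontal and vertical flips; a Java sketch (the coordinate convention is an assumption, not taken from the paper):

  // Returns 8 location sequences (as row-wise field numbers, 0..63); the
  // n-tuple's lookup table is queried once for each of them and the eight
  // values are summed.
  static int[][] symmetricExpansions(int[] rows, int[] cols) {
      int n = rows.length;
      int[][] expansions = new int[8][n];
      for (int s = 0; s < 8; s++)
          for (int j = 0; j < n; j++) {
              int r = rows[j], c = cols[j];
              if ((s & 4) != 0) { int t = r; r = c; c = t; }  // transpose
              if ((s & 1) != 0) r = 7 - r;                    // vertical flip
              if ((s & 2) != 0) c = 7 - c;                    // horizontal flip
              expansions[s][j] = 8 * r + c;
          }
      return expansions;
  }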

II-E N-tuple Network Architecture

Due to the spatial nature of game boards, n-tuples are usually consecutive snake-shaped sequences of locations, although this is not a formal requirement. If each of the m n-tuples in a network is of the same size n, we denote it as an m×n-tuple network; it has m × 3^n weights. Apart from choosing m and n, an important design issue of an n-tuple network architecture is the location of the individual n-tuples on the board [7].

II-E1 Random Snake-shaped N-tuple Network

Given this, it is surprising that most investigations in game strategy learning have involved randomly generated snake-shaped n-tuple networks. Lucas [6] generated individual n-tuples by starting from a random board location and then taking a random walk in any of the eight orthogonal or diagonal directions. Repeated locations were ignored, so the resulting n-tuples could be shorter than the walk length. Krawiec and Szubert used the same method for generating their n-tuple networks [22, 7], and Thill et al. [23] used it for generating n-tuple networks playing Connect Four.

An n-tuple network generated in this way will be denoted as rand-m×n (see Fig. 3a for an example).

II-E2 Systematic Straight N-tuple Network

Alternatively, we propose a deterministic method of constructing n-tuple networks. Our systematic straight n-tuple networks consist of all possible vertical, horizontal, and diagonal n-tuples placed on the board. The smallest representative is a network of 1-tuples: thanks to symmetric sampling, only 10 of them are required to cover the Othello board, and such a 1-tuple network, which we denote as all-1, contains 30 weights. The all-2 network, containing 32 2-tuples, is shown in Fig. 3b. Table II shows the number of weights in selected architectures of rand-* and all-* networks.
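
Enumerating all straight n-tuples is a simple scan over starting fields and four directions; a Java sketch (tuples equivalent under the eight board symmetries would then be merged, which is what reduces, e.g., the 210 straight 2-tuple placements to 32 distinct 2-tuples):

  // Every horizontal, vertical, and diagonal run of n consecutive fields,
  // each encoded as row-wise field numbers 0..63.
  static java.util.List<int[]> allStraightTuples(int n) {
      int[][] dirs = {{0, 1}, {1, 0}, {1, 1}, {1, -1}};
      java.util.List<int[]> tuples = new java.util.ArrayList<>();
      for (int r = 0; r < 8; r++)
          for (int c = 0; c < 8; c++)
              for (int[] d : dirs) {
                  int endR = r + (n - 1) * d[0], endC = c + (n - 1) * d[1];
                  if (endR < 0 || endR > 7 || endC < 0 || endC > 7) continue;
                  int[] t = new int[n];
                  for (int j = 0; j < n; j++)
                      t[j] = 8 * (r + j * d[0]) + (c + j * d[1]);
                  tuples.add(t);
              }
      return tuples;
  }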

II-E3 Other Approaches

Logistello [4], the computer player that beat the human Othello world champion in 1997, used n-tuples hand-crafted by an expert. External knowledge has also been used by Manning [8], who generated a diverse n-tuple network using the random inputs method from Breiman’s Random Forests, basing it on a set of labeled random games.

II-F Learning to Play Both Sides

When a single player defined by its evaluation function is meant to play both as black and as white, it must interpret the result of the evaluation function in a complementary way, depending on the color it plays. There are three methods serving this purpose.

The first one is the doubled function (e.g., [23]), which simply employs two separate functions: one for playing white and the other for playing black. It allows the strategies for white and black to be fully separated. Its disadvantage, however, is that twice as many weights must be learned, and the experience gathered when playing as black is not used when playing as white, and vice versa.

Output negation and board inversion (e.g., [9]) are alternatives to the doubled function. They use only a single set of weights, which reduces the search space and allows experience to be transferred between the white and black sides. When using output negation, black selects the move leading to a position with the maximal value of the evaluation function, whereas white selects the move leading to a position with the minimal value.

If a player uses board inversion, it learns only to play black. As the best black move, it selects the one leading to the position with the maximal value. If it has to play white, it temporarily flips all the pieces on the board, so that it can interpret the board as if it played black. Then it selects the best ‘black’ move, flips all the pieces back, and puts a white piece on the selected location.
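
A board-inversion wrapper is a few lines of Java, assuming the +1/0/−1 piece encoding used for the WPC above (with the 0/1/2 n-tuple encoding, the inversion would be v → 2 − v instead):

  // evaluate is any position evaluation function trained to play black.
  static double evaluateAs(int[] board, boolean playsBlack,
                           java.util.function.ToDoubleFunction<int[]> evaluate) {
      if (playsBlack) return evaluate.applyAsDouble(board);
      int[] inverted = board.clone();
      for (int i = 0; i < inverted.length; i++)
          inverted[i] = -inverted[i];        // flip all pieces: black <-> white
      return evaluate.applyAsDouble(inverted);
  }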

The swh player uses output negation.

III Experiments and Results

architecture           weights    architecture    weights
all-2 (32 2-tuples)    288        rand-10x3       270
all-3 (24 3-tuples)    648        rand-8x4        648
all-4 (21 4-tuples)    1701       rand-7x5        1701
Table II: The number of weights for three pairs of systematic straight (all-*) and random snake-shaped (rand-*) n-tuple network architectures.

III-A Common Settings

III-A1 Evolutionary Setup

In order to compare different n-tuple network architectures, we performed several computational experiments. In each of them, the weights of the n-tuple networks were learned by an evolution strategy [24] over a fixed number of generations. The weights of the individuals in the initial population were drawn uniformly at random from a fixed interval, and the evolution strategy used Gaussian mutation. An individual’s fitness was calculated using the Othello League performance measure estimated over a series of double games (cf. Section II-C).

The total number of games played in each evolutionary run makes our experiments exceptionally large compared to previous studies: both a recent study concerning n-tuple networks [7] and, despite its much simpler WPC representation, the study of Samothrakis et al. [16] used substantially fewer games per run.

Such an extensive experiment was possible thanks to an efficient n-tuple network and Othello implementation in Java, capable of playing a large number of games per second on a single CPU core. Thanks to it, we were able to finish one evolutionary run within hours on a 4-core Intel Core i7-2600 CPU @ 3.40 GHz.

III-A2 Performance Evaluation

We repeated each evolutionary run multiple times. At regular intervals, we measured the (Othello League) performance of the fittest individual in the population using an additional set of double games. The performance of the fittest individual from the last generation is identified with the method’s performance. Since the sample size per method is small, for the statistical analysis of the following experiments we used the non-parametric Wilcoxon rank-sum test (a.k.a. the Mann–Whitney U test) with Holm’s correction when comparing more than two methods at once.

III-B Preliminary: Board Inversion vs. Output Negation

Figure 4 presents the results of learning with board inversion and with output negation for representatives of both types of n-tuple network architectures: rand-8x4, having 648 weights, and all-1, with 30 weights.

The figure shows that board inversion surpasses output negation regardless of the player architecture, which confirms a previous study of the two methods for preference learning [9]. The differences between the methods are statistically significant (see also the detailed results in Table IV).

Moreover, visual inspection of the violin plots reveals that board inversion leads to more robust learning, since the variance of performances is lower. Therefore, in the following experiments we employ exclusively board inversion.

Figure 4: Comparison of output negation against board inversion for two n-tuple architectures (rand-8x4 and all-1). The performance is measured as the average score obtained against the Standard WPC Heuristic Player at ε = 0.1 (Othello League performance). In each violin shape, the white dot marks the median, the black box ranges from the lower to the upper quartile, and the thin black lines represent 1.5× the interquartile range. Outliers beyond this range are denoted by black dots. The outer shape shows the probability density of the data.

III-C All Short Straight vs. Random Long Snake-shaped N-tuples

In the main experiment, we compare n-tuple networks consisting of all possible short straight n-tuples (all-2, all-3, and all-4) with long random snake-shaped ones (rand-10x3, rand-8x4, and rand-7x5). We chose the numbers and sizes of the n-tuples so that the numbers of weights in the corresponding architectures are equal or, if that was impossible, similar (see Table II).

all-2-inv

rand-10x3-inv

all-3-inv

rand-8x4-inv

all-4-inv

rand-7x5-inv

performance
Figure 5: The comparison of all short straight n-tuple networks (all-*) with random long snake-shaped n-tuple networks (rand-*). The distribution of performances is presented as violin plots (see Fig. 4 for explanation).

The results of the experiment are shown in Figure 5 as violin plots. Statistical analysis of the three pairs having an equal or similar number of weights reveals that:

  • all-2 is better than rand-10x3,

  • all-3 is better than rand-8x4, and

  • all-4 is better than rand-7x5.

Let us notice that the differences in performance are substantial: even for the pair for which the difference is the smallest, the best result obtained by the rand-* network is still lower than the worst result obtained by its all-* counterpart (see Table IV for details).

The all-* architectures are also more robust, as witnessed by their lower variances compared to the rand-* architectures (cf. Fig. 5). This is because the variance of the rand-* architectures stems from both their random initialization and the non-deterministic learning process, while the variance of the all-* architectures is due to the latter only.

III-D 2-tuples are Long Enough

Intuitively, longer n-tuples should lead to higher network performance, since they can ‘react’ to patterns that shorter ones cannot. However, the results presented in Fig. 5 show no evidence that this is the case. Despite having more than twice as many weights, all-3 does not provide better performance than all-2 (no statistically significant difference). Furthermore, all-4 is significantly worse than both all-2 and all-3.

Figure 6 shows the pace of learning for each of the six analyzed architectures. It plots the methods’ performance as a function of computational effort, which is proportional to the number of generations.

The figure suggests that all-2 is not only the best (together with all-3) in the long run, but is also the method that learns the quickest. all-3 eventually catches up with all-2, but it does not seem to be able to surpass it. all-4 learns even more slowly than all-3. Although the gap between all-4 and all-2 decreases over time, it is still noticeable at the end of the runs.

Thus, our results suggest that for Othello, all-2, with just 288 weights the smallest among the six considered n-tuple network architectures, is also the best one.

Figure 6: Pace of learning of the six analyzed n-tuple network architectures. Each point on the plot denotes the average performance of the method’s fittest individual in a given generation.

III-E Othello League Results

The best player obtained in this research consists of all straight 2-tuples; its performance is 0.9592. This result is significantly higher than the best results reported to date in the Othello League (see Table III). Notice also how small this network is (in terms of the number of weights) compared to other players in the league. Unfortunately, the on-line Othello League accepts only players employing output negation; it does not allow for board inversion. Thus, our player could not be submitted to the Othello League.

date        player name         encoding         weights  performance
n/a         all-2-inv           n-tuple network  288      0.9592
2013-09-17  wj-1-2-3-tuples     n-tuple network  966      0.9149
2011-01-30  epTDLmpx_12x6 [7]   n-tuple network
2011-01-28  prb_nt15_001        n-tuple network
2011-01-25  epTDLxover [7]      n-tuple network
2008-05-03  t15x6x8             n-tuple network
2008-05-03  x30x6x8             n-tuple network
2008-03-28  Stunner             n-tuple network
2007-09-14  MLP(…)312-ties0.FF  neural network
Table III: Selected milestones (improvements) in the on-line Othello Position Evaluation Function League since September 2007. The table also includes all-2-inv, which was not submitted to the League since it uses board inversion. The performances of all but the three best players come from the Othello League website. The performances of the all-2-inv and wj-1-2-3-tuples players have been estimated using a large number of double games, and the performance of epTDLmpx_12x6 was reported in [7].

To be accepted in the Othello League, we also performed some experiments with output negation. The best output-negation player we were able to evolve was submitted under the name wj-1-2-3-tuples. It consists of all straight 1-, 2-, and 3-tuples, thus having 966 weights in total.

wj-1-2-3-tuples took the lead in the league and is the first player to exceed a performance of 0.9. The score it obtained in the league should be taken with care, however, since to evaluate a player’s performance the Othello League plays only a limited number of games. Basing on a larger number of double games, we estimate its performance at 0.9149.

We suspect that the performance of ca. 0.96 against the Standard WPC Heuristic player, to which all-2 and all-3 converge, cannot be significantly improved at 1-ply: the random moves forced in all games of ε-Othello mean that even a perfectly playing player cannot guarantee not losing a game.

Despite the first place obtained in the Othello League, the evolved player is not good ‘in general’, i.e., against a variety of opponents, because it was evolved specifically to play against the Standard WPC Heuristic player. When evaluated against random WPC players (the expected utility measure [14, 25]), the best all-2 player obtains a markedly lower score; indeed, with considerably less computational effort than used in this paper, it is possible to evolve an n-tuple player scoring better on this measure [26, 7]. However, our goal here was not to design players that are good in general, but to compare different position evaluation functions.

The best all- player evolved in this paper is printed in Fig. 7.

IV Discussion: The More Weights, the Worse for Evolution?

We have shown that among the all-* methods, the more weights, the worse the results; the same applies to the rand-* methods (see Fig. 5). This finding confirms that of Szubert et al. [7], who found that among rand-* networks of three different sizes, it is the smallest one that allows a (co)evolutionary algorithm to obtain the best results. The authors attributed this effect to the higher dimensionality of the search space, for which “the weight mutation operator is not sufficiently efficient to elaborate fast progress”.

Although we do not challenge this claim, our results suggest that the number of weights in a network is not the only performance factor. all-4 has 1701 weights; thus, the dimensionality of its search space is considerably higher than the ones of rand-10x3 and rand-8x4, which have 270 and 648 weights, respectively. Nonetheless, among these three architectures, it is the all-4 network that obtains the highest performance (see Fig. 5). Therefore, the second factor influencing the performance of a learned n-tuple network is the architecture itself.

Finally, let us notice that an alternative to a fixed n-tuple network architecture is a self-adaptive one, which can change in response to variation operators such as mutation or crossover [7]. Although such an architecture is, in principle, more flexible, it adds another dimension to the search space, making the learning problem even harder.

V Conclusions

In this paper, we have analyzed n-tuple network architectures for position evaluation functions in board games. We have shown that a network consisting of all possible, systematically generated, short n-tuples leads to significantly better play than the long, random, snake-shaped tuples originally used by Lucas [11]. With a simple network consisting of all possible straight 2-tuples (with just 288 weights), we were able to beat the best result in the on-line Othello League (achieved by networks usually having many times more weights).

Moreover, our results suggest that tuples longer than 2 give no advantage while at the same time slowing down learning. This is surprising, since capturing an opponent’s pieces in Othello requires a line of at least three pieces (e.g., white, black, white).

Let us emphasize that our results have been obtained in an intensive computational experiment, involving an order of magnitude more games than other studies in this domain. Nevertheless, it remains to be seen whether they hold for different experimental settings. We used evolution against an expert player in 1-ply ε-Othello. The interesting questions are: i) whether our systematic short 2-tuple networks are also advantageous for reinforcement learning methods such as temporal difference learning, and ii) whether such networks are also profitable for other board games, e.g., Connect Four.

Acknowledgment

This work has been supported by the Polish National Science Centre grant no. DEC-2013/09/D/ST6/03932. The computations have been performed in Poznań Supercomputing and Networking Center. The author would like to thank Marcin Szubert for his helpful remarks on an earlier version of this article.

architecture    mean    median
all-2-inv
all-3-inv
all-4-inv
rand-10x3-inv
rand-8x4-inv
rand-7x5-inv
rand-8x4-neg
all-1-inv
all-1-neg
Table IV: Performances obtained in the evolutionary runs of all n-tuple network architectures considered in this study. Each value is an average score in double games against the Standard WPC Heuristic player in ε-Othello with ε = 0.1.
{ 32
  { 2 8 { 6 7 } { 55 63 } { 7 15 } { 56 57 } { 62 63 } { 0 1 } { 48 56 } { 0 8 }
    { 57.64 -91.70 111.05 -82.30 74.42 -96.30 211.47 -53.91 142.45 }  }
  { 2 4 { 49 56 } { 0 9 } { 7 14 } { 54 63 }
    { 84.51 -33.37 -29.83 -72.21 -199.31 -18.72 -98.04 22.34 185.84 }  }
  { 2 8 { 1 2 } { 15 23 } { 8 16 } { 40 48 } { 57 58 } { 5 6 } { 47 55 } { 61 62 }
    { -42.97 109.76 -66.10 84.67 158.09 -148.21 -11.94 -94.92 111.52 }  }
  { 2 8 { 48 49 } { 54 55 } { 8 9 } { 54 62 } { 6 14 } { 14 15 } { 49 57 } { 1 9 }
    { -144.66 16.16 -78.97 -83.93 -7.37 -15.97 -102.98 -51.68 -3.99 }  }
  { 2 8 { 41 48 } { 8 17 } { 6 13 } { 50 57 } { 53 62 } { 1 10 } { 46 55 } { 15 22 }
    { -115.74 -43.49 -117.28 33.32 -14.29 12.85 -18.33 2.95 91.17 }  }
  { 2 8 { 4 5 } { 16 24 } { 39 47 } { 58 59 } { 60 61 } { 32 40 } { 2 3 } { 23 31 }
    { 31.28 -32.65 37.46 -20.60 273.39 -90.25 60.16 -151.55 -30.45 }  }
  { 2 8 { 2 10 } { 16 17 } { 46 47 } { 22 23 } { 50 58 } { 40 41 } { 5 13 } { 53 61 }
    { 93.18 -27.12 -16.53 -2.52 -80.43 62.23 36.46 39.86 -97.27 }  }
  { 2 8 { 52 61 } { 23 30 } { 51 58 } { 16 25 } { 5 12 } { 2 11 } { 38 47 } { 33 40 }
    { -42.75 36.91 44.65 -49.19 -19.01 -55.54 10.60 -36.50 74.37 }  }
  { 2 4 { 24 32 } { 59 60 } { 3 4 } { 31 39 }
    { 43.77 -122.45 55.43 -5.38 284.39 -103.51 79.44 -94.92 157.02 }  }
  { 2 8 { 52 60 } { 24 25 } { 51 59 } { 32 33 } { 4 12 } { 30 31 } { 38 39 } { 3 11 }
    { 99.45 20.89 -10.97 15.39 -28.77 16.65 -16.89 -12.47 -18.57 }  }
  { 2 8 { 30 39 } { 25 32 } { 3 12 } { 51 60 } { 24 33 } { 31 38 } { 52 59 } { 4 11 }
    { 14.85 88.04 -3.28 26.88 33.46 -19.67 -5.65 28.85 -43.03 }  }
  { 2 8 { 39 46 } { 53 60 } { 4 13 } { 17 24 } { 50 59 } { 32 41 } { 3 10 } { 22 31 }
    { -32.77 22.27 -51.36 -2.01 -65.96 -33.23 16.39 8.59 -28.07 }  }
  { 2 8 { 47 54 } { 5 14 } { 40 49 } { 49 58 } { 2 9 } { 14 23 } { 9 16 } { 54 61 }
    { -82.49 -5.56 10.19 -68.91 29.84 -37.59 -69.56 -0.20 10.63 }  }
  { 2 4 { 6 15 } { 48 57 } { 55 62 } { 1 8 }
    { -34.15 20.89 -135.36 79.22 30.55 20.13 35.27 -11.57 16.43 }  }
  { 2 8 { 13 14 } { 9 10 } { 41 49 } { 9 17 } { 53 54 } { 46 54 } { 49 50 } { 14 22 }
    { 20.98 -44.43 30.94 -64.79 -27.71 -37.59 17.05 4.34 -17.03 }  }
  { 2 4 { 45 54 } { 9 18 } { 14 21 } { 42 49 }
    { 43.08 104.43 14.69 24.49 31.96 -14.64 -51.11 -22.12 14.48 }  }
  { 2 8 { 38 46 } { 12 13 } { 52 53 } { 22 30 } { 50 51 } { 10 11 } { 33 41 } { 17 25 }
    { 19.75 39.63 -16.15 1.75 -38.84 9.21 6.77 14.85 19.99 }  }
  { 2 8 { 41 42 } { 13 21 } { 45 46 } { 45 53 } { 10 18 } { 21 22 } { 17 18 } { 42 50 }
    { 5.28 -38.02 12.64 -90.60 60.60 5.96 60.38 27.61 3.00 }  }
  { 2 8 { 37 46 } { 10 19 } { 17 26 } { 43 50 } { 22 29 } { 44 53 } { 13 20 } { 34 41 }
    { -13.44 19.48 -13.49 0.72 -59.65 -3.23 45.27 45.31 30.39 }  }
  { 2 4 { 25 33 } { 30 38 } { 11 12 } { 51 52 }
    { 8.66 14.83 15.73 -34.15 32.08 -9.15 15.15 41.61 66.03 }  }
  { 2 8 { 25 26 } { 12 20 } { 11 19 } { 33 34 } { 44 52 } { 43 51 } { 37 38 } { 29 30 }
    { -71.70 12.07 -54.50 18.12 86.36 22.27 -56.07 -4.46 -43.54 }  }
  { 2 8 { 30 37 } { 43 52 } { 29 38 } { 26 33 } { 44 51 } { 25 34 } { 11 20 } { 12 19 }
    { 11.34 25.64 -28.34 41.82 73.33 -26.18 -0.64 -25.88 -29.12 }  }
  { 2 8 { 42 51 } { 21 30 } { 45 52 } { 11 18 } { 12 21 } { 18 25 } { 38 45 } { 33 42 }
    { -32.37 28.60 13.65 -48.41 -13.25 -63.15 -30.60 18.99 22.64 }  }
  { 2 4 { 10 17 } { 13 22 } { 46 53 } { 41 50 }
    { 31.89 -78.88 -32.75 44.88 -42.65 39.91 26.48 -12.34 -46.59 }  }
  { 2 8 { 34 42 } { 21 29 } { 20 21 } { 18 26 } { 18 19 } { 42 43 } { 37 45 } { 44 45 }
    { 0.66 -29.13 12.95 -17.71 -71.59 11.40 31.52 -4.66 79.89 }  }
  { 2 4 { 35 42 } { 18 27 } { 21 28 } { 36 45 }
    { 76.53 141.05 -5.04 61.89 16.13 43.95 -4.87 182.85 -92.46 }  }
  { 2 4 { 26 34 } { 43 44 } { 29 37 } { 19 20 }
    { -10.60 11.60 -8.56 -6.06 -137.99 -8.81 -3.62 3.30 -39.96 }  }
  { 2 8 { 20 28 } { 34 35 } { 28 29 } { 36 37 } { 26 27 } { 35 43 } { 36 44 } { 19 27 }
    { 38.03 56.90 -26.64 -61.84 21.69 -116.98 7.91 53.48 -58.83 }  }
  { 2 8 { 19 28 } { 36 43 } { 29 36 } { 20 27 } { 26 35 } { 27 34 } { 35 44 } { 28 37 }
    { -17.67 81.78 -63.71 15.62 2.16 16.63 -14.70 43.61 -24.67 }  }
  { 2 4 { 34 43 } { 37 44 } { 20 29 } { 19 26 }
    { -18.74 -52.99 -3.10 -31.68 -181.22 -24.62 32.63 58.19 40.79 }  }
  { 2 4 { 35 36 } { 27 28 } { 28 36 } { 27 35 }
    { -100.00 -28.48 -4.99 -63.27 -80.05 -55.95 22.63 22.57 133.65 }  }
  { 2 2 { 28 35 } { 27 36 }
    { 20.00 -52.52 -12.54 40.64 192.61 14.14 4.70 -1.81 -42.70 }  }
}
Figure 7: The n-tuple network representing the best evolved all-2 player, in the online Othello League format. The player contains 32 2-tuples. Each one has at most 8 symmetric expansions (sometimes 4 or 2). The fields are numbered from 0 to 63 in a row-wise fashion. The network has 288 weights. The player uses board inversion. Its Othello League performance is 0.9592.

References

  • [1] C. E. Shannon, “XXII. Programming a computer for playing chess,” Philosophical Magazine, vol. 41, no. 314, pp. 256–275, 1950.
  • [2] A. L. Samuel, “Some studies in machine learning using the game of checkers,” IBM Journal of Research and Development, vol. 3, no. 3, pp. 211–229, 1959.
  • [3] L. V. Allis, A knowledge-based approach of connect-four.   Vrije Universiteit, Subfaculteit Wiskunde en Informatica, 1988.
  • [4] M. Buro, “Experiments with Multi-ProbCut and a new high-quality evaluation function for Othello,” Games in AI Research, pp. 77–96, 2000.
  • [5] ——, “An evaluation function for Othello based on statistics,” NEC Research Institute, Princeton, NJ, Tech. Rep. NECI 31, 1997.
  • [6] S. Lucas, “Learning to play Othello with n-tuple systems,” Australian Journal of Intelligent Information Processing Systems, no. 4, pp. 1–20, 2008.
  • [7] M. Szubert, W. Jaśkowski, and K. Krawiec, “On Scalability, Generalization, and Hybridization of Coevolutionary Learning: a Case Study for Othello,” IEEE Transactions on Computational Intelligence and AI in Games, 2013.
  • [8] E. P. Manning, “Using Resource-Limited Nash Memory to Improve an Othello Evaluation Function,” IEEE Transactions on Computational Intelligence and AI in Games, vol. 2, no. 1, pp. 40–53, 2010.
  • [9] T. Runarsson and S. Lucas, “Preference Learning for Move Prediction and Evaluation Function Approximation in Othello,” Computational Intelligence and AI in Games, IEEE Transactions on, 2014.
  • [10] L. V. Allis, “Searching for solutions in games and artificial intelligence,” Ph.D. dissertation, University of Limburg, Maastricht, The Netherlands, 1994.

  • [11] S. M. Lucas, “Learning to play Othello with N-tuple systems,” Australian Journal of Intelligent Information Processing Systems, Special Issue on Game Technology, vol. 9, no. 4, pp. 1–20, 2007.
  • [12] Y. Osaki, K. Shibahara, Y. Tajima, and Y. Kotani, “An Othello evaluation function based on Temporal Difference Learning using probability of winning,” 2008 IEEE Symposium On Computational Intelligence and Games, pp. 205–211, Dec. 2008
  • [13] E. P. Manning, “Using resource-limited nash memory to improve an othello evaluation function,” Computational Intelligence and AI in Games, IEEE Transactions on, vol. 2, no. 1, pp. 40 –53, march 2010.
  • [14] S. Y. Chong, P. Tino, D. C. Ku, and X. Yao, “Improving Generalization Performance in Co-Evolutionary Learning,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 1, pp. 70–85, 2012.
  • [15] S. van den Dries and M. A. Wiering, “Neural-Fitted TD-Leaf Learning for Playing Othello With Structured Neural Networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 11, pp. 1701–1713, Nov. 2012
  • [16] S. Samothrakis, S. Lucas, T. Runarsson, and D. Robles, “Coevolving Game-Playing Agents: Measuring Performance and Intransitivities,” IEEE Transactions on Evolutionary Computation, no. 99, pp. 1–15, 2012
  • [17] W. Jaśkowski, M. Szubert, and P. Liskowski, “Multi-criteria comparison of coevolution and temporal difference learning on othello,” in EvoGames, ser. Lectures Notes in Computer Science, 2014.
  • [18] S. M. Lucas and T. P. Runarsson, “Temporal difference learning versus co-evolution for acquiring othello position evaluation,” in IEEE Symposium on Computational Intelligence and Games.   IEEE, 2006, pp. 52–59.
  • [19] M. Szubert, W. Jaśkowski, and K. Krawiec, “Coevolutionary Temporal Difference Learning for Othello,” in IEEE Symposium on Computational Intelligence and Games, 2009, pp. 104–111.
  • [20] T. Yoshioka, S. Ishii, and M. Ito, “Strategy acquisition for the game ‘Othello’ based on reinforcement learning,” IEICE Transactions on Information and Systems, vol. E82-D, no. 12, pp. 1618–1626, 1999.
  • [21] W. W. Bledsoe and I. Browning, “Pattern recognition and reading by machine,” in Proc. Eastern Joint Comput. Conf., 1959, pp. 225–232.
  • [22] K. Krawiec and M. Szubert, “Learning n-tuple networks for Othello by coevolutionary gradient search,” in GECCO 2011: Proceedings of the Annual Conference on Genetic and Evolutionary Computation. ACM, 2011, pp. 355–362.
  • [23] M. Thill, P. Koch, and W. Konen, “Reinforcement Learning with N-tuples on the Game Connect-4,” in Parallel Problem Solving from Nature - PPSN XII, ser. Lecture Notes in Computer Science, C. A. C. Coello, V. Cutello, K. Deb, S. Forrest, G. Nicosia, and M. Pavone, Eds., vol. 7491.   Springer, 2012, pp. 184–194.
  • [24] H.-G. Beyer and H.-P. Schwefel, “Evolution strategies–a comprehensive introduction,” Natural computing, vol. 1, no. 1, pp. 3–52, 2002.
  • [25] W. Jaśkowski, P. Liskowski, M. Szubert, and K. Krawiec, “Improving coevolution by random sampling,” in GECCO’13: Proceedings of the 15th annual conference on Genetic and Evolutionary Computation, C. Blum, Ed.   Amsterdam, The Netherlands: ACM, July 2013, pp. 1141–1148.
  • [26] P. Liskowski, “Co-Evolution Versus Evolution with Random Sampling for Acquiring Othello Position Evaluation,” Ph.D. dissertation, Poznan University of Technology, 2012.