Search Algorithms for Mastermind

by Anthony D. Rhodes, et al.

This paper presents two novel approaches to solving the classic board game mastermind: a variant of simulated annealing (SA) and a technique we term maximum expected reduction in consistency (MERC). In addition, we compare search results for these algorithms against two baseline search methods: a random, uninformed search, and the method of minimizing maximum query partition sets as originally developed by Donald Knuth and Peter Norvig.



I Introduction

Mastermind is a popular code-breaking two-player game originally invented in the 1970s. The gameplay closely resembles the antecedent pen-and-paper game called "Bulls and Cows", which dates back at least a century.
Mastermind consists of three components: a decoding board, which includes a dozen or so rows of holes for query pegs in addition to smaller holes for key pegs, along with a shielded space for the placement of the master code provided by the code-maker; code pegs of different colors placed by the code-breaker (six is the default number of colors, although many variations exist); and key pegs of two colors placed by the code-maker (see Figure 1).
The code-maker chooses a pattern of four code pegs for the master code. The master code is placed in the four holes covered by the shield, visible to the code-maker but not to the code-breaker. For each turn of the game, the code-breaker attempts to guess the master code with respect to both order and color. Each query is made by placing a row of code pegs on the decoding board adjacent to the last query row. After each guess, the code-maker provides feedback to the code-breaker in the form of key pegs. Between zero and four key pegs are placed next to the query code on the current turn to indicate the fidelity of the current query – a colored or black key peg connotes a query code peg that is correct in both color and position, whereas a white key peg indicates the existence of a correct color code peg placed in the wrong position. The goal of the game is for the code-breaker to determine the master code using a minimal number of queries.
Due to its status as a relatively simple, query-based game of incomplete information, mastermind has served as an enduring test-bed for a diverse array of search algorithms in A.I. and related fields.

Fig. 1: Mastermind game schematic, including decoding board, code pegs and key pegs. A completed game is shown.

II Previous Work and Preliminaries

Mastermind and its variants have inspired a good deal of research, particularly in the domains of combinatorics and search algorithms.
With four pegs and six colors (a variant we henceforth denote MM(6,4)) there are 6^4 = 1296 possible codes. One of the most essential properties surrounding efficient search in mastermind is the notion of code consistency:


c ~ m  ⟺  Q(c_i, c) = Q(c_i, m) for all previous queries c_i,   (1)

where Q above connotes the query operation, outputting a 2-d key code [b, w] for a given query code input (e.g. [2, 1] indicates the query code generated two black and one white key peg responses, respectively, for the given master code); m in equation (1) denotes the master code. In short, code consistency forms an equivalence relation (indicated by ~) over the set of all possible codes.
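For concreteness, the query operation Q and the consistency test can be sketched in Python; `key_pegs` and `consistent` are illustrative names of our own, not from the paper. Codes are represented as tuples of color indices:

```python
from collections import Counter

def key_pegs(query, master):
    """Return the key-peg response [b, w]: b black pegs (right color,
    right position) and w white pegs (right color, wrong position)."""
    b = sum(q == m for q, m in zip(query, master))
    # Color matches counted with multiplicity, ignoring position.
    common = sum((Counter(query) & Counter(master)).values())
    return [b, common - b]

def consistent(code, history):
    """A code is consistent if it would have produced the observed
    key-peg response for every past query in the history."""
    return all(key_pegs(q, code) == r for q, r in history)

# Example: query 1122 against master 1234 scores one black, one white.
print(key_pegs((1, 1, 2, 2), (1, 2, 3, 4)))  # [1, 1]
```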
It is not difficult to determine the total number of distinct query partition classes for a generic game. Notice that all key code combinations [b, w] with b + w ≤ p are possible, with the exception of [p−1, 1]: if p − 1 pegs are correct in both color and position, the remaining peg cannot be a correct color in the wrong position. Thus the total number of distinct query partition classes for p pegs is given by:

(p + 1)(p + 2)/2 − 1.   (2)

In particular, for p = 4 this yields 14 classes; see Table I for more details.

b\w  0      1      2      3      4
0    [0,0]  [0,1]  [0,2]  [0,3]  [0,4]
1    [1,0]  [1,1]  [1,2]  [1,3]  X
2    [2,0]  [2,1]  [2,2]  X      X
3    [3,0]  X      X      X      X
4    [4,0]  X      X      X      X
TABLE I: Legal query partition classes for p = 4, where b + w ≤ 4 and [3,1] is excluded
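As a quick check of the partition-class count, one can enumerate the legal pairs [b, w] directly; this Python sketch (our own, with an illustrative function name) assumes p = 4:

```python
def legal_classes(p=4):
    """All key-code classes [b, w] with b + w <= p, excluding the
    impossible [p-1, 1]: with p-1 exact matches, the last peg cannot
    be a correct color in the wrong position."""
    return [[b, w] for b in range(p + 1) for w in range(p - b + 1)
            if [b, w] != [p - 1, 1]]

# Matches the closed form (p+1)(p+2)/2 - 1 = 14 for p = 4.
assert len(legal_classes(4)) == (4 + 1) * (4 + 2) // 2 - 1 == 14
```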

One may, alternatively, generate general formulae for the cardinality of all possible codes for MM(k, p) by appealing to elementary combinatorics [2].
Consider the total number of possible codes as the sum of all of the possible codes containing exactly i colors. We denote the number of possible codes of p pegs using exactly i colors as N_i. Each such code, for a fixed choice of i distinct colors, corresponds to a surjection of the p positions onto the i colors (a multinomial-type count), whereby:

N_i = C(k, i) · Σ_{j=0}^{i} (−1)^j C(i, j) (i − j)^p,   with   Σ_i N_i = k^p.   (3)

Confirming the above formula for MM(6,4) yields 6 + 210 + 720 + 360 = 1296 = 6^4.
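The decomposition by number of distinct colors can be verified numerically via inclusion-exclusion; the Python sketch below is our own (illustrative function names), checking the MM(6,4) total:

```python
from math import comb

def surjections(p, i):
    """Number of onto maps from p peg positions to i colors
    (inclusion-exclusion)."""
    return sum((-1) ** j * comb(i, j) * (i - j) ** p for j in range(i + 1))

def codes_with_exactly_i_colors(k, p, i):
    # Choose which i of the k colors appear, then place them onto the
    # p positions so that each chosen color is used at least once.
    return comb(k, i) * surjections(p, i)

# MM(6,4): the counts for i = 1..4 colors sum to 6^4 = 1296.
totals = [codes_with_exactly_i_colors(6, 4, i) for i in range(1, 5)]
assert sum(totals) == 6 ** 4
```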
An early and remarkable result for the mastermind search problem was provided by Knuth [3], which proves that five questions suffice to guarantee a solution to MM(6,4). We now consider a simple generalized bound related to this claim.
Theorem. The minimum number of guesses required to guarantee a solution for the general 2-position mastermind game with k colors (which we denote MM(k,2)) is ⌊k/2⌋ + 2 [2].
Proof. Guess ⌊k/2⌋ times, using two new colors each turn. From these guesses, the code-breaker can receive a positive response (i.e. a black or white key peg) at most twice. If the code-breaker receives two key pegs of any color in response across these queries, one can show that there are at most two possible consistent master codes, which can be resolved with at most two further guesses. Conversely, if the code-breaker receives no positive responses in total, then k must be odd and the master code consists of the lone unused color, requiring one further guess. In either case, we have:

f(MM(k,2)) ≤ ⌊k/2⌋ + 2,

as desired. Together with the matching lower bound shown in [2], the theorem follows. In particular, for k = 6 the bound gives ⌊6/2⌋ + 2 = 5, matching the five guesses of Knuth's result.
Bondt [11] showed that solving a mastermind board of arbitrary size is nevertheless an NP-complete problem, via a reduction from the 3SAT problem. Moreover, the mastermind satisfiability problem – which asks, given a set of queries and corresponding key peg replies, whether there exists a master code that satisfies this set of query-key conditions – has been shown to be NP-complete [12].

A broad range of search algorithms have been previously applied to mastermind and its variants. Knuth's method from 1977 (detailed in the next section) applies a minimax search using a heuristic based on the size of query-partitions. This method yielded 4.467 expected queries, with a maximum of five queries, over all possible codes in MM(6,4). Of note, the MM(6,4) variant was effectively solved in 1993 by Koyama and Lai using exhaustive, depth-first search, achieving 4.34 expected queries – the search time per puzzle at publication was, however, on the order of several hours.

Beyond complete search, genetic algorithms (GAs) have been applied extensively to mastermind, including [6], [7] and [10]. In the GA paradigm, a large set of "eligible" codes (e.g. consistent codes – although several approaches show that inconsistent codes are sometimes more informative for mastermind search) is considered for each generation. The "goodness" of these codes is determined using a fitness function which assigns a score to each code based on its probability of being the master code using various meta-heuristics. At each generation, standard genetic operations including crossover, mutation and inversion are applied in order to render the new population. In particular, [7] achieved 4.39 expected queries for MM(6,4) using a fitness function defined by a weighted sum of L1 key peg differences between candidate codes and query codes.
Shapiro [13] adopts a method which simply draws queries from the set of codes consistent with all previous queries; Blair et al. [5] use an MCMC approach; [6] combines hill-climbing and heuristics, while Cover and Thomas [14] introduce an information-theoretic strategy.
In the current work we apply Simulated Annealing to the mastermind problem in addition to introducing a novel search heuristic which aims to maximize the expected reduction in the set of codes consistent with the master code.

III Knuth's Method

Knuth's method, the "five guess algorithm" for MM(6,4), works as follows. The first guess is deterministically chosen as 1122 (Knuth provides examples showing that beginning with a different choice, such as 1123 or 1234, can lead to situations where more than five queries are required to solve the puzzle). Following this initial guess, the code-maker responds with the corresponding key pegs, and using this response, the code-breaker generates a set of consistent codes S. Next, the code-breaker applies a minimax technique to the set S, where each candidate code is evaluated according to the sizes of the query-partitions it induces; in particular, the evaluation function returns the size of the maximum partition for each consistent code, and the code whose maximum partition is of minimal size is chosen as the next query. This process is repeated until termination.

  Code-breaker sets initial query code: c ← 1122.
  Code-maker replies with key pegs: [b, w] ← Q(c, m);
  while b != 4 do
     Generate consistency set S
     Compute the size of the maximum query-partition for each code in S; set c ← the code in S whose maximum partition is smallest
     Code-maker replies with key pegs: [b, w] ← Q(c, m);
  end while
Algorithm 1 Knuth’s Method
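The minimax selection step can be sketched in Python as below. Following the description above, candidate guesses are drawn from the consistency set S; the `key_pegs` helper implementing Q and the function names are our own illustrative choices:

```python
from collections import Counter

def key_pegs(query, master):
    """The query operation Q: (black, white) key-peg response."""
    b = sum(q == m for q, m in zip(query, master))
    common = sum((Counter(query) & Counter(master)).values())
    return (b, common - b)

def knuth_next_query(S):
    """Minimax step: pick the code whose largest query-partition
    over the consistency set S is smallest."""
    best, best_worst = None, float("inf")
    for guess in S:
        # Partition S by the response each candidate master would give;
        # the worst case for this guess is its largest partition.
        worst = max(Counter(key_pegs(guess, m) for m in S).values())
        if worst < best_worst:
            best, best_worst = guess, worst
    return best
```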

To further illustrate Knuth's method, Table II explicitly shows the size of each query-partition for MM(6,4) (recall that 6^4 = 1296) for the generic initial query types: 1111, 1112, 1122, 1123, and 1234. Note that according to Knuth's criterion, 1122 would be chosen in this case because it yields the smallest of all maximum partition sets of the given queries.

[b,w]  1111  1112  1122  1123  1234
[0,0] 625 256 256 81 16
[0,1] 0 308 256 276 152
[0,2] 0 61 96 222 312
[0,3] 0 0 16 44 136
[0,4] 0 0 1 2 9
[1,0] 500 317 256 182 108
[1,1] 0 156 208 230 252
[1,2] 0 27 36 84 132
[1,3] 0 0 0 4 8
[2,0] 150 123 114 105 96
[2,1] 0 24 32 40 48
[2,2] 0 3 4 5 6
[3,0] 20 20 20 20 20
[4,0] 1 1 1 1 1
TABLE II: Enumeration of all query-partition sizes for various initial codes: 1111, 1112, 1122, 1123, 1234. Based on the heuristic used in Knuth’s method, code 1122 is chosen because it generated the smallest maximum size partition.

IV Simulated Annealing

We apply Simulated Annealing (SA) to the mastermind search problem. More concretely, our method combines elements of stochastic, local hill-climbing (à la SA) with non-local, consistency-based search. At each step of the algorithm we construct the consistency set S comprising the subset of codes consistent with all previous queries. We augment this set with a "neighborhood" N(c) consisting of the set of codes with Hamming distance one from the current query code c (observe that these codes need not be consistent with the given queries) to form the set S' = S ∪ N(c). We score these neighbors using the following scoring function:

score(c') = Σ_i ( |b(c_i, c') − b_i| + |w(c_i, c') − w_i| ),   (4)

where b and w indicate the number of black and white key pegs generated for a given query code, respectively, and [b_i, w_i] is the response received for past query c_i; observe that score(c') = 0 iff c' is consistent. Finally, we randomly sample codes from the augmented set S'. If the sampled code is consistent with the query histories it is automatically chosen as the next query; otherwise, it is accepted as the next query with acceptance probability exp(−α · n · score(c')), where n is the algorithm step number (we use a fixed α in experiments). This acceptance probability formula encodes an implicit "temperature", since the acceptance probability of inconsistent codes decreases in proportion to the algorithm step number – which is to say the algorithm "anneals" over time. We give a pseudo-code description of our SA procedure below.

  Code-breaker sets initial query code c randomly.
  Code-maker replies with key pegs: [b, w] ← Q(c, m);
  while b != 4 do
     Generate consistency set S
     Generate neighborhood N(c) of c, using Hamming distance of one
     Form augmented set: S' = S ∪ N(c)
     Sample random code c' ∈ S'; if c' is consistent, set c ← c'
     if c' is inconsistent then
        with probability exp(−α · n · score(c')), set c ← c'
     end if
     Code-maker replies with key pegs: [b, w] ← Q(c, m);
  end while
Algorithm 2 Simulated Annealing
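The scoring and acceptance rule can be sketched as follows. The L1 key-peg discrepancy score and the constant `alpha` are our reconstructions (the published formula and constant are not reproduced in this text), and all function names are illustrative:

```python
import math
import random
from collections import Counter

def key_pegs(query, master):
    """The query operation Q: [black, white] key-peg response."""
    b = sum(q == m for q, m in zip(query, master))
    common = sum((Counter(query) & Counter(master)).values())
    return [b, common - b]

def score(code, history):
    """Sum of L1 key-peg discrepancies against the query history;
    zero iff the code is consistent with every past response."""
    return sum(abs(key_pegs(q, code)[0] - r[0]) +
               abs(key_pegs(q, code)[1] - r[1]) for q, r in history)

def sa_accept(code, history, step, alpha=1.0, rng=random):
    """Accept a sampled code: always if consistent, otherwise with a
    probability that shrinks as the step count grows (implicit annealing)."""
    s = score(code, history)
    return s == 0 or rng.random() < math.exp(-alpha * step * s)
```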

V Maximum Expected Reduction in Consistency

We introduce a novel search heuristic defined as the expected reduction in the size of the consistent code set for mastermind search. At each step of the MERC algorithm, we first determine the expected reduction in the size of the consistency set S for each code c ∈ S. Concretely, we generate the responses r = Q(c, m') over all possible candidate master codes m' ∈ S. Next, for each such response r, we count the size of the set of codes m' ∈ S such that Q(c, m') = r. Averaging these partition sizes, weighted by the probability of each response, yields the expected size of the consistency set with respect to code c. By choosing the code for which the expected size of the resulting consistency set is smallest, we render the maximum expected reduction in the consistency set.

  Code-breaker sets initial query code: c ← 1122.
  Code-maker replies with key pegs: [b, w] ← Q(c, m);
  while b != 4 do
     Generate consistency set S
     for c ∈ S do
        generate responses r = Q(c, m') over all possible candidate master codes m' ∈ S
        for each response r do
           compute Q(c, m') – with c fixed – and count the size of the set of codes m' ∈ S such that Q(c, m') = r
        end for
     end for
     Set c ← the code for which the expected size of the resulting consistency set is smallest
     Code-maker replies with key pegs: [b, w] ← Q(c, m);
  end while
Algorithm 3 Maximum Expected Reduction in Consistency (MERC)
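A minimal Python sketch of the MERC selection step, assuming a `key_pegs` helper for Q (function names are our own): a response r occurs with probability |partition_r| / |S| and leaves |partition_r| consistent codes, so the expected post-query size is Σ_r |partition_r|² / |S|.

```python
from collections import Counter

def key_pegs(query, master):
    """The query operation Q: (black, white) key-peg response."""
    b = sum(q == m for q, m in zip(query, master))
    common = sum((Counter(query) & Counter(master)).values())
    return (b, common - b)

def merc_next_query(S):
    """Choose the code minimizing the expected size of the post-query
    consistency set (maximum expected reduction in |S|)."""
    def expected_size(c):
        # Partition S by the response each candidate master would yield.
        parts = Counter(key_pegs(c, m) for m in S)
        return sum(n * n for n in parts.values()) / len(S)
    return min(S, key=expected_size)
```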

VI Experimental Results

We generated experimental results using simulated games for the MM(6,4), MM(5,4) and MM(4,4) variants of mastermind. In addition to Knuth's method and the SA and MERC algorithms described previously, we also implemented a baseline uninformed random search algorithm. For each experiment, we report the mean, median, standard deviation and maximum number of queries required. The Knuth and MERC algorithms used a deterministic choice of initial code (1122), while the random and SA algorithms used a random initial code. Our experimental results are summarized in Table III.

Algorithm   MM(6,4)          MM(5,4)          MM(4,4)
Random      mean: 639.9      mean: 315.1      mean: 131.2
            max: 1296        max: 625         max: 256
            median: 634.5    median: 315      median: 132
            STD: 370.8       STD: 180.2       STD: 74.13
Knuth       mean: 4.468      mean: 4.105      mean: 3.631
            max: 7           max: 6           max: 5
            median: 5.0      median: 4.0      median: 4.0
            STD: .7322       STD: .7321       STD: .6843
SA          mean: 5.7916     mean: 5.1306     mean: 4.3826
            max: 13          max: 12          max: 11
            median: 6.0      median: 5.0      median: 4.0
            STD: 1.673       STD: 1.488       STD: 1.229
MERC        mean: 4.714      mean: 4.206      mean: 3.751
            max: 7           max: 7           max: 6
            median: 5.0      median: 4.0      median: 4.0
            STD: .8954       STD: .8472       STD: .820
TABLE III: Experimental results summary for Random, Knuth, SA and MERC algorithms applied to MM(6,4), MM(5,4) and MM(4,4) using simulated games.

VII References

[1] Goddard, Wayne. Mastermind Revisited. 2004.

[2] Ville, Geoffroy. An Optimal Mastermind (4,7) Strategy and More Results in the Expected Case, 2013.

[3] Knuth, D.E. The Computer as Mastermind. Journal of Recreational Mathematics, 1976-77, 1–6.

[4] Peter Norvig. 1984. Playing Mastermind optimally. SIGART Bull. 90 (October 1984), 33-34.

[5] Blair, Nathan, et al. Mastering Mastermind with MCMC. 2018.

[6] Temporel, Alexandre, et al. A Heuristic Hill-Climbing Algorithm for Mastermind. 2004.

[7] Berghman, Lotte, et al. Efficient Solutions for Mastermind Using Genetic Algorithms. Computers and Operations Research, Volume 36, Issue 6, June 2009.

[8] Kooi, Barteld. Yet Another Mastermind Strategy. ICGA Journal, 28. 10.3233/ICG-2005-28105. 2005.

[9] Singley, Andrew. Heuristic Solution Methods for the 1-Dimensional and the 2-Dimensional Mastermind Problem. 2005 (MS thesis).

[10] Kalisker, Tom, et al. Solving Mastermind Using Genetic Algorithms. GECCO 2003, LNCS 2724. 2003.

[11] Bondt, Michiel. NP-completeness of Master Mind and Minesweeper. 2004.

[12] Stuckman, J., and Zhang, G. Mastermind is NP-Complete. ArXiv. 2005.

[13] Shapiro, Ehud. Playing Mastermind Logically. SIGART Bull. 85 (July 1983).

[14] Cover, Thomas and Joy A. Thomas. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, New York, NY, USA. 2006.