Tester versus Bug: A Generic Framework for Model-Based Testing via Games

We propose a generic game-based approach for test case generation. We set up a game between the tester and the System Under Test, in such a way that test cases correspond to game strategies, and the conformance relation ioco corresponds to alternating refinement. We show that different test assumptions from the literature can be easily incorporated, by slightly varying the moves in the games and their outcomes. In this way, our framework allows a wide range of game-theoretic techniques to be deployed for model-based testing.






1 Games

We consider games played by two players on a game graph. In each state, both players choose one of their enabled actions, and together these determine the next states the game can be in. Since both actions are used simultaneously by a Moves function to decide on the next state, game arenas (Definition 1) describe concurrent 2-player games.

Definition 1.

A game arena is a tuple G = (Q, q₀, Act₁, Act₂, Γ₁, Γ₂, Moves) where, for i ∈ {1, 2}:

  • Q is a finite set of states,

  • q₀ ∈ Q is the initial state,

  • Actᵢ is a finite and non-empty set of Player i actions,

  • Γᵢ : Q → 2^Actᵢ \ {∅} is an enabling condition, which assigns to each state a non-empty set of actions available to Player i in that state, and

  • Moves : Q × Act₁ × Act₂ → 2^Q is a function that, given the actions of Player 1 and 2, determines the set of next states the game can be in. We require that Moves(q, a₁, a₂) ≠ ∅ iff a₁ ∈ Γ₁(q) and a₂ ∈ Γ₂(q).

A play is an infinite path in a game arena, i.e., a sequence of states and actions of both players. We use prefixes of plays as their finite descriptions. A play is winning if it visits some state in a given reachability goal, i.e. a designated set of states.

Definition 2.

A play π of a game arena is an infinite sequence

π = q₀ a₁ b₁ q₁ a₂ b₂ q₂ …

with aⱼ ∈ Γ₁(qⱼ₋₁), bⱼ ∈ Γ₂(qⱼ₋₁), and qⱼ ∈ Moves(qⱼ₋₁, aⱼ, bⱼ) for all j ≥ 1. We write π[j], π¹[j], and π²[j] for the j-th state, player 1 action, and player 2 action respectively. The set of all plays of a game arena G is denoted Plays(G).

We define π[..j] as the prefix of play π up to the j-th state. With |ρ| we denote the length of a prefix ρ, i.e. the number of states in ρ. The set of all prefixes of a set of plays P is denoted Pref(P). We define Pref(G) = Pref(Plays(G)).

Definition 3.

A play of a game arena is winning with respect to a reachability goal R (a set of states of the arena), if it reaches some state in R, i.e., there exists a j such that the j-th state of the play is in R. We write Win(R) for the set of winning plays with respect to R.

The players will choose their actions for making a move according to a strategy (Definition 4). When the players execute their strategies in the game, we obtain a set of plays, called game outcomes. A strategy is winning if all the game outcomes are winning, no matter how the other player plays.

Definition 4.

A strategy for player i in game G is a function σᵢ : Pref(G) → Actᵢ, such that σᵢ(ρ) is an action enabled for player i in the last state of ρ, for any ρ ∈ Pref(G). We write Σᵢ(G) for the set of all player i strategies in G. The outcome Outc(σ₁, σ₂) of two strategies σ₁ and σ₂ is the set of plays that occur when Player 1 plays according to σ₁ and Player 2 according to σ₂. The plays occurring for strategy σ₁ alone are Outc(σ₁) = ⋃_{σ₂ ∈ Σ₂(G)} Outc(σ₁, σ₂).

Strategy σ₁ is winning with respect to reachability goal R if every play in Outc(σ₁) is winning w.r.t. R. WinStrats₁(G, R) denotes the set of all winning player 1 strategies. Game G is winning for player 1 w.r.t. goal R iff WinStrats₁(G, R) ≠ ∅.
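To make these notions concrete, the following sketch encodes a small concurrent arena and unrolls the outcome of two memoryless strategies. The dictionary encoding, the arena itself, and all identifiers are illustrative inventions of ours, not part of the paper's formal development:

```python
# A tiny concurrent game arena in the style of Definition 1.
# The arena and all names are made up for illustration.
ARENA = {
    "init": "s0",
    "enabled1": {"s0": {"a"}, "s1": {"a"}, "s2": {"a"}},
    "enabled2": {"s0": {"x", "y"}, "s1": {"x"}, "s2": {"x"}},
    # Moves(q, a1, a2) -> set of possible successor states.
    "moves": {
        ("s0", "a", "x"): {"s1"},
        ("s0", "a", "y"): {"s2"},
        ("s1", "a", "x"): {"s1"},
        ("s2", "a", "x"): {"s2"},
    },
}

def play_prefix(arena, sigma1, sigma2, pick, length):
    """Unroll a play prefix: in each state both players apply their
    (here memoryless) strategies simultaneously, and `pick` resolves
    which successor in Moves(q, a1, a2) is taken."""
    q = arena["init"]
    prefix = [q]
    for _ in range(length):
        a1, a2 = sigma1[q], sigma2[q]
        q = pick(arena["moves"][(q, a1, a2)])
        prefix.append(q)
    return prefix

def winning_prefix(prefix, goal):
    """A play is winning for a reachability goal if it visits a goal
    state; on a finite prefix we can only check up to its length."""
    return any(q in goal for q in prefix)
```

For instance, if Player 2 always chooses x, the play from s0 moves to s1 and stays there, so the resulting prefix is winning for the goal {s1} but not for {s2}.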

2 Model-based testing

Model-based testing improves test efficiency by automatically generating and executing test cases from a specification model, so that much manual (repetitive) labour can be avoided. The specification is given as an automaton with inputs and outputs that describes a restriction of the desired system behavior. The goal of model-based testing is then to determine whether a given System Under Test (SUT) behaves as described by its specification, i.e. whether the SUT conforms to the specification.

Figure 0(a) illustrates the model-based testing process. A test case generation algorithm derives a set of test cases from the specification. Here, test cases are finite scenarios composed of inputs and outputs. These test cases are then executed automatically on the SUT. The SUT is treated as a black box: we only see its input/output behavior, not its internal workings or source code. Finally, every test case is assigned a test verdict, Pass or Fail. This verdict is determined in accordance with a conformance relation; in Section 5 we consider the input/output conformance relation ioco [18, 17].

[Figure: test cases are derived from the specification by test generation, executed on the System Under Test, and assigned verdicts according to the conformance relation.]
(a) Model-based testing

(b) SA specification of an MP3 player.

2.1 Specifications as suspension automata

Following [5], we use suspension automata (SAs) as system specifications. These are determinised variants of the labeled transition systems with inputs and outputs from [18, 16].

For a partial function f : X ⇀ Y, we write f(x)↓ to denote that f(x) is defined, and f(x)↑ to denote that f(x) is undefined.

Definition 5.

A suspension automaton (SA) is a 5-tuple S = (Q, Act_I, Act_O, T, q₀) where

  • Q is a non-empty finite set of states,

  • Act_I is a finite set of input labels,

  • Act_O is a finite set of output labels, with δ ∈ Act_O and Act_I ∩ Act_O = ∅,

  • T : Q × (Act_I ∪ Act_O) ⇀ Q is a partial transition function, and

  • q₀ ∈ Q is the initial state.

We write Act = Act_I ∪ Act_O. For q ∈ Q, we denote the set of enabled inputs and outputs in q by in(q) = {a ∈ Act_I | T(q, a)↓} and out(q) = {x ∈ Act_O | T(q, x)↓} respectively. We require that an SA is non-blocking: out(q) ≠ ∅ for all q ∈ Q.

We assume that any SA uses a special output label δ to indicate quiescence, i.e., the absence of an observable output [18]. Handling quiescence is crucial in testing: if the SUT does not respond with any output, we must know whether or not this is allowed by the specification; otherwise we cannot reach the correct verdict. We formalize this with the non-blocking requirement in Definition 5.

Example 1.

Figure 0(b) shows a specification of an MP3 player as an SA. In the initial state, no songs are played. Hence, this state has a self-loop labeled δ. After a play? input, the system moves to a state in which songs are being played, until either endPlayList! occurs, or the quit? button is pressed. The MP3 player also features a repeat function, which can be switched on and off via the repeat? and quitRepeat? actions respectively. Thus, in the repeat state, songs are played continuously, until the quit? action occurs.
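The MP3 player can be transcribed as a small data structure. The state names (idle, playing, repeating) and the exact transition layout are our guesses from the prose, since the figure is not reproduced here; the sketch checks the non-blocking requirement of Definition 5 and computes the states enabling both inputs and non-quiescent outputs (the mixed states of the next subsection):

```python
DELTA = "delta"  # the quiescence output label

# Transcription of the MP3 player SA of Example 1; state names and the
# precise transition structure are inferred from the text.
MP3 = {
    "inputs": {"play?", "quit?", "repeat?", "quitRepeat?"},
    "outputs": {"song!", "endPlayList!", DELTA},
    "init": "idle",
    # partial transition function T: (state, label) -> state
    "T": {
        ("idle", DELTA): "idle",
        ("idle", "play?"): "playing",
        ("playing", "song!"): "playing",
        ("playing", "endPlayList!"): "idle",
        ("playing", "quit?"): "idle",
        ("playing", "repeat?"): "repeating",
        ("repeating", "song!"): "repeating",
        ("repeating", "quit?"): "idle",
        ("repeating", "quitRepeat?"): "playing",
    },
}

def states(sa):
    return {q for q, _ in sa["T"]} | set(sa["T"].values())

def inputs_of(sa, q):   # in(q)
    return {a for a in sa["inputs"] if (q, a) in sa["T"]}

def outputs_of(sa, q):  # out(q)
    return {x for x in sa["outputs"] if (q, x) in sa["T"]}

def non_blocking(sa):
    """Definition 5 requires every state to enable some output,
    possibly the quiescence label delta."""
    return all(outputs_of(sa, q) for q in states(sa))

def mixed_states(sa):
    """States enabling both an input and a non-quiescent output."""
    return {q for q in states(sa)
            if inputs_of(sa, q) and outputs_of(sa, q) - {DELTA}}
```

Under this transcription, the idle state enables only δ, while the two song-playing states come out as mixed.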

2.2 Test assumptions in case of input-output conflicts

SAs may feature states that enable both inputs and (non-quiescent) output actions. We call such a state mixed: formally, q is mixed if in(q) ≠ ∅ and out(q) \ {δ} ≠ ∅. Two states in Figure 0(b) are mixed. Mixed states may give rise to input-output conflicts: if the tester wants to take an input action while the SUT simultaneously wants to take an output action, the question arises which of the actions will be carried out. The literature introduces different ways to handle input/output conflicts, i.e. test assumptions on the interaction between the tester and the SUT. Note that there is no ‘best’ test assumption; this depends on the ‘hostility’ of the SUT towards the tester. We list four test assumptions below.

  • A test interaction is input-eager (IE) if the tester is always able to provide an input, even when the SUT wants to produce an output. This assumption prioritizes inputs over outputs and thereby makes mixed states fully controllable for the tester. This underlies the framework in [15].

  • The converse of input-eager is an output-eager (OE) test interaction: the SUT always produces an output, unless δ is the only possible output. The authors of [12] use such an assumption. This assumption prioritizes outputs over inputs and thereby makes mixed states fully uncontrollable for the tester.

  • A test interaction is nondeterministic (ND) if it is determined nondeterministically whether the SUT is able to take an output transition, or the tester to take an input transition in a mixed state. No guarantees are given on whether the tester is able to take an input transition in a mixed state, though this is not excluded as with the output-eager assumption. This assumption is similar to the test interaction used in the original theory for labeled transition systems with ioco [18].

  • A test interaction is input fair (IF) if the tester is eventually able (after trying finitely many times) to take any input transition in a mixed state. Hence, mixed states are controllable for the tester, at the expense of trying multiple times. This assumption is made in [5].

In all test interactions of the assumptions above, either an input or an output action is ignored. One could also take both into account, by executing them both, but in nondeterministic order. This is a concurrent interpretation of an input-output conflict, which may be well suited for systems dealing with concurrent processes. If the SUT is able to receive more inputs than its specification specifies, this may lead to different interpretations of this assumption. Therefore, we will delay formalizing this assumption to future work. All test assumptions from the list above are formalized in Section 3, by incorporating the assumption in the Moves function from the underlying game arena of the specification.

Example 2.

An MP3 player as described in Figure 0(b) does not function properly under the output-eager or nondeterministic test assumption: a real-world MP3 player normally responds to input (though there may be some delay). With the printer from Figure 0(c), we show that all test assumptions can impose a useful interpretation on the test interaction between the tester and the SUT. This specification models a printer which can handle printing and scanning in an interleaved way, i.e. printing does not need to be finished before scanning starts, and vice versa.

  • The input-eager assumption allows the tester to always provide the inputs print? and scan? before receiving the outputs printed! and scanned!. Only if the tester decides to wait for an output can the SUT produce these.

  • The output-eager test assumption expresses that the specification is too complicated for the SUT that is being tested: the SUT cannot print and scan at the same time, because a printed! or scanned! output will occur after the tester has sent the respective input.

  • With the nondeterministic test assumption, the tester may succeed in sending both inputs print? and scan? before receiving outputs, but no guarantees on success can be given. Furthermore, providing a print? input in a mixed state may result in taking this transition, but the scanned! output transition may also be taken instead.

  • The input-fair test assumption is similar to the nondeterministic test assumption, with the difference that it guarantees that the two input transitions print? and scan? can be taken from the mixed states, before the outputs are produced, after trying a few times.













(c) SA specification of a printer. Mixed states are black.





















(d) Test case of the SA of Figure 0(c). For readability, Pass and Fail are displayed multiple times, and their self-loops have been omitted. See Example 3 for dotted and dashed transitions.

2.3 Test cases

We give a definition of test cases in the spirit of [18]. As shown in Figure 0(d), a test case is a finite and acyclic SA. A test case is constructed by repeatedly taking one of the following test steps:

  1. Choose an input from the input actions enabled in the current specification state, execute the input action, and move to the next state of the test case. If the specification is in a mixed state, then an output may be observed before the input is executed. Therefore, if an input is enabled in a state of the test case, then all outputs are enabled as well.

  2. Observe an output from the SUT. In case an output is observed that is prohibited by the specification, a Fail verdict is emitted. Otherwise, one moves to the next state of the test case.

  3. Stop testing and emit a Pass verdict. Note that all Fail verdicts are handled in step 2.
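The three steps can be sketched as a recursive construction over a toy SA. The dictionary encoding, the `policy` parameter, and the two-state SPEC are hypothetical choices of ours, and the sketch ignores the per-assumption refinements discussed later:

```python
DELTA = "delta"  # quiescence output

# A hypothetical two-state SA used only to drive the sketch.
SPEC = {
    "inputs": {"a?"},
    "outputs": {"x!", DELTA},
    "init": "s0",
    "T": {("s0", DELTA): "s0", ("s0", "a?"): "s1", ("s1", "x!"): "s0"},
}

def gen_test(sa, q, depth, policy):
    """Build a test-case tree following the three steps: `policy(q, ins)`
    returns an enabled input to try (step 1) or None to observe (step 2);
    at depth 0 we stop with a Pass verdict (step 3). Outputs forbidden
    by the specification lead to Fail."""
    if depth == 0:
        return "Pass"                                   # step 3: stop
    node = {}
    for x in sa["outputs"] - {DELTA}:                   # step 2: outputs
        succ = sa["T"].get((q, x))
        node[x] = gen_test(sa, succ, depth - 1, policy) if succ else "Fail"
    ins = sorted(i for i in sa["inputs"] if (q, i) in sa["T"])
    i = policy(q, ins)
    if i is not None:                                   # step 1: input
        node[i] = gen_test(sa, sa["T"][(q, i)], depth - 1, policy)
    else:                                               # observe delta
        succ = sa["T"].get((q, DELTA))
        node[DELTA] = gen_test(sa, succ, depth - 1, policy) if succ else "Fail"
    return node
```

For example, `gen_test(SPEC, "s0", 2, lambda q, ins: None)` yields a test case that only observes, failing on any non-quiescent output in the initial state.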

Before defining test cases formally, we first define some auxiliary notation.

Definition 6.

Let be an SA, , , , , and the empty sequence. Then we define:

Definition 7.

A test case for an SA is an SA such that:

  • There are two special states Pass, Fail such that and for all , and for .

  • has no cycles except those in Pass and Fail.

  • Every state enables all non-quiescent outputs, and either one input (matching step 1 above) or δ (matching step 2). Exception: in case of an input-eager test interaction, the non-quiescent outputs are only required in states that choose δ.

  • Traces of leading to Pass, are traces of , while traces to Fail are not.

Example 3.

Figure 0(d) shows a test case for the printer specification of Figure 0(c). In the first state, the tester provides the input print?. This state also has transitions for the non-quiescent outputs printed! and scanned!, because some test assumptions allow these to occur instead of the SUT accepting input print?. Since neither of these outputs is allowed by the specification in Figure 0(c), they lead to a Fail verdict. In the next state, the tester provides input scan?, but now one of the non-quiescent outputs is allowed, namely printed!. In the state after that, the tester decides to observe an output from the SUT. She may observe one of three things: (1) quiescence (i.e. δ), leading to a Pass verdict, since quiescence is allowed in the specification, (2) printed!, which is not allowed, and (3) scanned!, which is also not allowed. After observing quiescence, the tester decides to stop testing and conclude verdict Pass.

Test cases are constructed without taking any specific test assumption into account, by including all inputs and outputs relevant for any of them. The dashed output transitions (and everything below them) can be omitted in case of the input-eager test assumption. In case of the output-eager test assumption, input scan? cannot be taken after print?, because the corresponding specification state is mixed. One can adapt the test case to fit the output-eager test assumption by omitting the scan? transition (and everything below it), while adding the dotted transition to satisfy the third rule of Definition 7.

3 Specifications are game arenas

To study the connection between test cases and games, we associate to each specification SA a game arena. In this arena, the tester (player 1) and the SUT (player 2) play on the state space of the SA, extended with a sink state and a number indicating whether the state was reached via a player 1 (input) or a player 2 (output) action. The latter is important because the tester observes the SUT via its traces, so we must record whose action was carried out.

To advance the game , both the tester and the SUT choose an action from the current state:

  • The tester chooses either an enabled input from the specification SA, or one of the special inputs θ and stop?. The action θ expresses that the tester desires to take no input, and allows the SUT to execute any output he wishes; the stop? action indicates that the tester wants to stop testing, which brings the game to the sink state.

  • The SUT chooses one of the enabled outputs from the specification.

Then the game moves to a next state, according to the function Moves, which reflects how the tester and the SUT interact. Hence, different test assumptions made in the literature give rise to different definitions of the Moves function.

The explicit game definition (Definition 8) and the encoding of test assumptions (Section 3.1) set our work apart from earlier game-based approaches to testing. In particular, [7] derives optimal test cases from a specification that is already given as a game. The encoding also allows us to study the relation between test cases and game strategies (Section 4), and between alternating refinement and conformance (Section 5).

Definition 8.

Let be an SA. The game arena underlying is defined by where:

  • and ,

  • for all and , we take and ,

  • we take and , and

  • the function encodes one of the different test assumptions and is given in Subsection 3.1. Besides the requirement from Definition 1 for moves with undefined action, we require , and .

For the remainder of the paper, we fix a specification , and its underlying game .

3.1 Encoding test assumptions

We formalize the test interaction by implementing a Moves function for each of the test assumptions. The Moves function takes a game state, an input from the tester, and an output from the SUT, and returns the set of next states reached for that input and output, using the transition function of the SA for which the Moves function is defined.

All the Moves functions from Definition 9 use δ and θ as special symbols that transfer control to the tester or SUT respectively. The symbol δ is used with its usual semantics, i.e. it denotes quiescence. When the SUT is quiescent, the tester is always able to provide an input. However, δ can only actually be observed if the tester chooses the input θ. Hence, θ is used as an artificial input to model that the tester is waiting for the SUT to take an output transition. This corresponds with how θ is used in [18].

In practice, δ can be observed by setting a timeout; it is then assumed that the SUT does not produce any regular output anymore after this timeout. Input θ can be implemented in practice by waiting for the SUT to produce an output.
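A minimal sketch of this: reading the SUT's outputs from a queue with a timeout, and mapping the timeout to a quiescence observation. The queue-based harness is our own assumption about how the SUT adapter is wired:

```python
import queue

DELTA = "delta"  # quiescence observation

def observe(out_queue, timeout):
    """Implements the artificial waiting input: block until the SUT
    produces an output; if nothing arrives before the timeout, we
    assume the SUT is quiescent and report delta."""
    try:
        return out_queue.get(timeout=timeout)
    except queue.Empty:
        return DELTA
```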

The behaviour of the Moves functions from Definition 9 differs for the regular inputs and outputs of a state of the SA, in order to resolve input/output conflicts according to one of the test assumptions from Section 2. We discuss how to implement the input-fair test assumption in Section 3.2, because its semantics cannot be captured directly with only a Moves function.

Definition 9.

Let be an SA. The various test assumptions from Section 2 give rise to the following functions (we do not include the moves from Definition 8 for , stop? and undefined actions here):

  • In the input-eager regime, an input is always executed, unless the tester decides not to perform an input, i.e. she proposes θ.

  • In the output-eager regime, the output action is always executed, unless the output is δ and the tester proposes a regular input.

  • In the nondeterministic regime, a nondeterministic choice is made whether to execute the input (unless the tester proposes θ) or the output (unless the output is δ and the tester proposes a regular input). We take the same Moves function for the input-fair regime, and explain this in Section 3.2.
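The three regimes can be sketched as Moves functions over an SA's partial transition map. We omit Definition 8's bookkeeping (the sink state and the who-moved tag), and DELTA/THETA stand for the quiescence output and the waiting input; the SA fragment and all names are illustrative:

```python
DELTA, THETA = "delta", "theta"

# Example SA fragment with a mixed state m: input a? and output x!
# are both enabled; all names are illustrative.
SA = {"T": {("m", "a?"): "qa", ("m", "x!"): "qx", ("m", DELTA): "m"}}

def moves_input_eager(sa, q, a, b):
    """Inputs win every conflict: the proposed input is executed unless
    the tester proposes theta (i.e. waits for an output)."""
    return {sa["T"][(q, b)]} if a == THETA else {sa["T"][(q, a)]}

def moves_output_eager(sa, q, a, b):
    """Outputs win every conflict: the output is executed unless it is
    delta while the tester proposes a real input."""
    if b != DELTA or a == THETA:
        return {sa["T"][(q, b)]}
    return {sa["T"][(q, a)]}

def moves_nondet(sa, q, a, b):
    """A genuine conflict may go either way: both successors are
    possible next states (also used for the input-fair regime)."""
    if a == THETA:
        return {sa["T"][(q, b)]}
    if b == DELTA:
        return {sa["T"][(q, a)]}
    return {sa["T"][(q, a)], sa["T"][(q, b)]}
```

On the mixed state m, the input-eager regime yields the input successor, the output-eager regime the output successor, and the nondeterministic regime both.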

Testing with a hard reset.

Furthermore, it is often useful to have a hard reset function. In testing, a reset is used to execute multiple test cases from the initial state. In the specification, a hard reset is enabled in any state, and the tester can always use it to go back to the initial state, no matter whether the SUT wanted to do an output action. In practice, an SUT often needs to be instrumented to implement such a reset function in a fast way, as rebooting a system can take a lot of time. In Definition 10, we adapt Definition 8 to include this special reset input.

Definition 10.

Let be an SA. The resettable game arena underlying is defined by where:

  • and

  • for all and , we take and ,

  • we take and , and

  • the function encodes one of the test assumptions from Definition 9, with the modification that .

3.2 Encoding the input-fair test assumptions via fair plays

To define the input-fair test assumption from Section 2, we introduce the notion of a fair play. In essence, a play is input-fair if, whenever the tester wants to perform an input action from a state, it is eventually executed [5]. Given a state q and an input action a that is enabled in q, we say that a play is fair with respect to q and a if a is actually taken in q at some point in time. A play is fair if it is fair with respect to any state q appearing in the play and any input action that was proposed in q.

Definition 11.

Let . Then is input-fair w.r.t. some state and some input if:

Play is input-fair if:

Clearly, restricting the underlying game arena of an SA with the nondeterministic test assumption to input-fair plays only satisfies the description of the input-fair test assumption from Section 2. Instead of restricting the plays of the game, we leave the game with the nondeterministic test assumption as is, and consider input-fair strategies in Definition 12.

Definition 12.

Let be the underlying game arena of SA with the nondeterministic Moves function from Definition 9. Then a strategy is input-fair if it is winning on the set of all input-fair plays of .

Requiring input-fairness may appear restrictive for specifications with states that cannot be reached again once left via some transition. However, adding a reset to the game (Definition 10) resolves this, as a state reached by some play can then always be reached again by resetting and following the same play. Of course, the SUT may be able to prevent the tester from following this play right away, but input-fairness ensures that it is possible after trying a few times.

3.3 Comparing strategies for different test assumptions

This section compares strategies for game arenas based on the different test assumptions, constructed from the same specification. In particular, we show that, for any reachability objective , either both the input-eager and input-fair games are winning for player 1, or both not. Similarly, for objective , either both the nondeterministic and output-eager games are winning for player 1 or both not.

To show that the input-eager and input-fair games coincide, we observe that any winning player 1 strategy of an input-fair game is also winning in an input-eager game. The converse does not hold directly, because an input-eager game has fewer plays than the input-fair game: the input-eager game has no plays in which a proposed input transition is not taken. However, by reaching the same state multiple times, the winning action proposed in the input-eager game is eventually carried out in the input-fair game.

A similar reasoning holds for the winning player 1 strategies of games based on the nondeterministic and output-eager test assumptions. Either there are no guarantees for taking an input transition in a mixed state (under the nondeterministic test assumption), or it is simply not possible (under the output-eager test assumption). Hence, in both cases a winning strategy cannot rely on taking input transitions in mixed states, and so the existence of winning strategies is the same in both games.

Theorem 1.

Let be an SA, and the underlying game arenas of for the input-eager, input-fair, output-eager, and nondeterministic test assumption, respectively. Let be a reachability goal. Then:

  1. is winning for player 1 w.r.t. if and only if is winning for player 1 w.r.t. , and

  2. is winning for player 1 w.r.t if and only if is winning for player 1 w.r.t. .

4 Test cases are game strategies (and vice versa)

This section establishes a strong correspondence between Player 1 strategies and test cases. To achieve this result, we observe that test cases and strategies share many features, but differ on two aspects: (1) Test cases are finite, while strategies play forever. Hence, test cases correspond to Player 1 strategies that are finite, i.e., eventually provide a stop? action. (2) Test cases base their decisions on the observed traces only, while strategies can use all information contained in the plays, especially the proposed actions for which the corresponding transition was not taken. Therefore, test cases correspond to finite trace-based Player 1 strategies. Thus, we establish a bijection between test cases and finite, trace-based strategies. We first explain how a strategy can be extracted from a test case, and then how a test case can be extracted from a strategy.

4.1 From strategies to test cases

We derive a test case from each finite and trace-based Player 1 strategy (Definition 13). A strategy is trace-based if its choices only depend on the observed traces. Finite strategies eventually provide a stop? action. Note that, after one stop? action from Player 1, all subsequent Player 1 actions are stop?. Hence, only the prefix before the stop?-action matters (Definition 14).

The traces of these prefixes, cut off just before the stop? action, are exactly what is used to construct a test case. Each of these traces leads either to the Pass state, or to a unique test case state. A test case consisting of only these traces does not yet contain any output transitions to the Fail state, because those outputs do not occur in the specification. Hence, they need to be added. This construction proves Theorem 2.

Definition 13.

The function assigns to each play prefix , a sequence of action labels given by if and otherwise. A strategy is called trace-based if:

A strategy is finite if:

Example 4.

A strategy is not trace-based if it returns different actions for two plays that consist of the same executed actions (and hence have the same trace), but differ in their non-executed actions. This situation cannot occur for the printer from Figure 0(c), so we give an example player 1 strategy for the MP3 player of Figure 0(b):

The song-playing state enables two outputs (namely song! and endPlayList!), neither of which is executed in the two plays mentioned above, because the input transition for quit? has been taken (as indicated by the 1 in the last state). Nevertheless, the strategy returns either play? or repeat? based on the non-executed outputs.

Definition 14.

Let be a player 1 trace-based, finite strategy in . We define a trace set .

Theorem 2.

The trace set of a finite, trace-based player 1 strategy (Definition 14) characterizes a unique test case.

Example 5.

Let be defined as follows:

Note that strategy is finite and trace-based. The trace set of is , in case of the input-fair or nondeterministic test assumption. This set is exactly the prefix-closed set of the traces leading to a Pass state in the test case of Figure 0(d). Note that if uses the input-eager test assumption, traces print?printed! and are not included in . The traces , and print?scan?scanned! are not included in in case of the output-eager test assumption.

4.2 From test cases to strategies

Given a test case for an SA, we construct a game strategy as follows. On play prefixes whose traces are included in the test case, the strategy returns the input action enabled in the test case state reached by this trace, if there is one. If the trace leads to (or passes through) the Pass state, then the strategy returns the action stop?. In all other cases, we let the strategy return θ, because the trace then either reaches a test case state with no enabled input transition, or belongs to a play that does not occur in the outcome of the game when using this strategy.

Definition 15.

Let be a test case for an SA . We define a strategy of as follows:

Note that the strategy is well-defined because, by Definition 7, the input in the first clause is unique if it exists. Theorem 3 then states that it is a finite and trace-based Player 1 strategy. Further, it is unique, i.e. from a test case exactly one strategy can be derived.

Theorem 3.

Let be a test case for SA . Then we have

  1. The strategy is a Player 1 strategy in .

  2. is finite and trace-based.

  3. If then .

Example 6.

We use Definition 15 to construct the following strategy from the test case of Figure 0(d):

Note that this strategy is equivalent to the one from Example 5.

4.3 Test case generation is strategy synthesis

We can now establish that we have defined a bijection between test cases and strategies, using the translation from strategy to test case from Theorem 2, with the translation from test case to strategy from Theorem 3 as its inverse.

Theorem 4.

The function is a bijection from the set of test cases of to the set of finite trace-based strategies of .

A consequence of Theorem 4 is that game synthesis algorithms can be used to derive test cases for specific test objectives. Test objectives describe what a tester wants to achieve during testing.

Various test objectives exist. Reachability goals [6] are states in the specification that the tester wants to reach. For example, one may want to see that the MP3 player is able to play songs. To do so, the tester wants to reach any state with an outgoing song-transition (and see if this transition can be executed), corresponding to the song-playing states in Figure 0(b). Test purposes [20] generalize reachability goals in the sense that a whole scenario needs to be executed; for example, one wants to see if the MP3 player can produce songs after a quit? action. Since such scenarios can be adaptive, we model a test purpose as an SA with final states, in which the test purpose has been successfully executed. This idea is common in model-based testing [20], but has not been exploited in a game-theoretic setting. The interaction between the specification and the test purpose is modeled via a composition operator. Finally, (state) coverage [7] can be a test objective, where the tester tries to cover as many states in the specification as possible. As stated, our framework enables strategy synthesis for these test objectives for any test regime.
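For reachability goals, strategy synthesis amounts to the classical attractor construction: iterate a controllable-predecessor operator until a fixpoint. The sketch below, over an invented arena encoding, computes the states from which player 1 can enforce the goal; the action justifying a state's inclusion is a winning move there:

```python
def forces(arena, q, a1, target):
    """Does action a1 force the next state into `target`, whatever
    Player 2 plays in state q?"""
    for a2 in arena["enabled2"][q]:
        succ = arena["moves"].get((q, a1, a2), set())
        if not succ or not succ <= target:
            return False
    return True

def cpre(arena, target):
    """Controllable predecessor: states where some Player 1 action
    forces the play into `target` in one step."""
    return {q for q in arena["states"]
            if any(forces(arena, q, a1, target)
                   for a1 in arena["enabled1"][q])}

def attractor(arena, goal):
    """Least fixpoint of goal ∪ cpre(..): the Player 1 winning region
    of the reachability game."""
    attr = set(goal)
    while True:
        new = attr | cpre(arena, attr)
        if new == attr:
            return attr
        attr = new

# An invented arena: from g0, action 'b' forces g1 whatever Player 2
# does, while action 'a' lets Player 2 escape to g2.
GAME = {
    "states": {"g0", "g1", "g2"},
    "enabled1": {"g0": {"a", "b"}, "g1": {"a"}, "g2": {"a"}},
    "enabled2": {"g0": {"x", "y"}, "g1": {"x"}, "g2": {"x"}},
    "moves": {
        ("g0", "a", "x"): {"g1"}, ("g0", "a", "y"): {"g2"},
        ("g0", "b", "x"): {"g1"}, ("g0", "b", "y"): {"g1"},
        ("g1", "a", "x"): {"g1"}, ("g2", "a", "x"): {"g2"},
    },
}
```

Here the attractor of {g1} contains g0 because of action b, while g2 is winning only for goals containing g2 itself.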

5 Conformance is alternating trace inclusion

A popular conformance relation for model-based testing is input-output conformance, ioco for short [18]. This relation formalizes what it means for an SUT, modeled as an input-enabled suspension automaton, to conform to a specification, modeled as an SA (Definition 16); an SA is input-enabled if all its states enable all inputs. The ioco relation allows the implementation to implement more inputs, and fewer outputs, than the specification. Indeed, the implementation may implement more services than specified, but on the specified inputs it must behave as prescribed by the specification.
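For the finite, deterministic SAs used in this paper, ioco can be checked by a synchronized walk over the specification's traces, comparing the enabled outputs at each reached pair of states. The dictionary encoding and the example SAs are illustrative, and the sketch assumes the implementation is input-enabled for the specified inputs:

```python
DELTA = "delta"

def outs(sa, q):
    """out(q): the outputs (including delta) enabled in state q."""
    return {x for x in sa["outputs"] if (q, x) in sa["T"]}

def ioco_check(impl, spec):
    """Deterministic case: explore the synchronized product along the
    specification's traces, requiring out(impl after s) <= out(spec
    after s) for every reachable trace s."""
    seen = set()
    stack = [(impl["init"], spec["init"])]
    while stack:
        qi, qs = stack.pop()
        if (qi, qs) in seen:
            continue
        seen.add((qi, qs))
        if not outs(impl, qi) <= outs(spec, qs):
            return False
        for a in spec["inputs"]:          # follow specified inputs
            if (qs, a) in spec["T"]:
                stack.append((impl["T"][(qi, a)], spec["T"][(qs, a)]))
        for x in outs(impl, qi):          # follow implementation outputs
            stack.append((impl["T"][(qi, x)], spec["T"][(qs, x)]))
    return True

# Illustrative SAs: GOOD conforms; BAD adds a forbidden output y!.
SPEC = {"inputs": {"a?"}, "outputs": {"x!", DELTA}, "init": "s0",
        "T": {("s0", DELTA): "s0", ("s0", "a?"): "s1",
              ("s1", "x!"): "s0"}}
GOOD = {"inputs": {"a?"}, "outputs": {"x!", DELTA}, "init": "i0",
        "T": {("i0", DELTA): "i0", ("i0", "a?"): "i1",
              ("i1", "a?"): "i1", ("i1", "x!"): "i0"}}
BAD = {"inputs": {"a?"}, "outputs": {"x!", "y!", DELTA}, "init": "i0",
       "T": {("i0", DELTA): "i0", ("i0", "a?"): "i1",
             ("i1", "a?"): "i1", ("i1", "x!"): "i0",
             ("i1", "y!"): "i0"}}
```

GOOD implements an extra input transition (a? in its second state), which ioco allows; BAD produces the unspecified output y! and is rejected.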

This viewpoint corresponds to Player 2 alternating trace inclusion for games [3]. Game is 2-alternating-trace included in game , if any trace set that can be enforced by Player 2 in can also be enforced by Player 2 in (Theorem 5).

In the definition of alternating trace inclusion (Definition 18), player 1 chooses an input in one game, and then needs to choose a corresponding input in the other. However, we need to take care that player 1 does not cheat in the second game, by choosing an input that was not chosen in the first (Definition 17).

Definition 16.

Let and be SAs over the same label sets and assume that is input-enabled. Then we say that if for all we have .

Definition 17.

Let be a game arena corresponding to some SA, and a play prefix of . Then the action decision sequence of is:

Let be two game arenas corresponding to an input-enabled SA , and an SA , respectively. Let , be two player 1 strategies in these games. Strategy cheats on if:

Definition 18.

Let and be game arenas corresponding to an input-enabled SA , and an SA , respectively. We say that is alternating trace included in , denoted iff

Theorem 5.

Let and be SAs over the same label sets and assume that is input-enabled. Let and be their respective underlying game arenas for the nondeterministic test assumption. Then:

The relation between game refinement and ioco has been studied before: [2, 19] show that, on interface automata, ioco corresponds to alternating simulation. Theorem 5 differs from these results in three ways: (1) [2, 19] compare ioco and alternating simulation on interface automata, while we compare ioco on SAs with alternating trace inclusion on games; (2) our games consider concurrent moves by both players, whereas interface automata compare different transitions of the same player; (3) alternating trace inclusion is a linear-time relation, whereas alternating simulation is a branching-time relation. One could argue that simulation and trace inclusion coincide for deterministic systems (including our SAs). However, we prefer the formulation in terms of alternating traces, because we conjecture that this formulation extends to the nondeterministic case.

6 Conclusions and Future Work

We have established a fundamental connection between model-based testing and 2-player concurrent games, where specifications are game arenas, test cases are game strategies, test case derivation is strategy synthesis, and conformance is alternating trace inclusion. This connection allows a wide range of game synthesis techniques to be deployed for test case generation.

The game-theoretic setting spawns several game-theoretic questions. While the games we propose are concurrent, because both the tester and the SUT propose moves at the same time, one could argue that they are only semi-concurrent, since only one of these moves is actually carried out at each step. Therefore, we believe that our test games have various properties that do not hold for concurrent games in general. In particular, we conjecture that, whereas concurrent games in general require randomized strategies to win reachability objectives, our games require only deterministic strategies, and we also believe that our games are determined in that case.
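To illustrate the kind of synthesis technique this conjecture points at: for turn-based reachability games, the classical attractor construction yields positional, hence deterministic, winning strategies. The sketch below is a minimal illustration under our own encoding (successor lists and a state-ownership map); it is not the construction used in this paper.

```python
# Sketch: attractor-based synthesis of a deterministic (positional)
# reachability strategy in a turn-based game. Assumed encoding:
# edges[v] lists successors, owner[v] is 1 or 2, target is a set of states.

def attractor(states, edges, owner, target):
    """Compute the Player-1 attractor of the target set, together with a
    positional Player-1 strategy that moves one step closer to the target.
    Player 1 wins from a state iff the state ends up in the attractor."""
    attr = set(target)
    strategy = {}
    changed = True
    while changed:
        changed = False
        for v in states:
            if v in attr:
                continue
            succ = edges.get(v, [])
            if owner[v] == 1 and any(u in attr for u in succ):
                # Player 1 picks any successor already in the attractor.
                strategy[v] = next(u for u in succ if u in attr)
                attr.add(v)
                changed = True
            elif owner[v] == 2 and succ and all(u in attr for u in succ):
                # Player 2 cannot avoid the attractor.
                attr.add(v)
                changed = True
    return attr, strategy
```

The returned strategy is a plain function from states to moves, with no randomization, which is exactly the shape of strategy the conjecture above claims suffices for our test games.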


We thank Ramon Janssen and Frits Vaandrager for their comments and support.


For proofs of the theorems in this paper, we refer the reader to https://petravdbos.nl/


  • [1]
  • [2] Fides Aarts & Frits Vaandrager (2010): Learning I/O Automata. In: International Conference on Concurrency Theory, Springer, pp. 71–85, doi:10.1007/978-3-642-15375-4_6.
  • [3] Rajeev Alur, Thomas Henzinger, Orna Kupferman & Moshe Vardi (1998): Alternating Refinement Relations. In: International Conference on Concurrency Theory, Springer, pp. 163–178, doi:10.1007/BFb0055622.
  • [4] Roderick Bloem, Robert Könighofer, Ingo Pill & Franz Röck (2016): Synthesizing Adaptive Test Strategies from Temporal Logic Specifications. In: Formal Methods in Computer-Aided Design, IEEE, pp. 17–24, doi:10.1109/FMCAD.2016.7886656.
  • [5] Petra van den Bos, Ramon Janssen & Joshua Moerman (2017): n-Complete Test Suites for IOCO. In: IFIP International Conference on Testing Software and Systems, Springer, pp. 91–107, doi:10.1007/978-3-319-67549-7_6.
  • [6] Laura Brandán Briones & Hendrik Brinksma (2004): A Test Generation Framework for Quiescent Real-Time Systems. In: Proc. Formal Approaches to Testing of Software (4th International Workshop), pp. 71–85, doi:10.1007/978-3-540-31848-4_5.
  • [7] Krishnendu Chatterjee, Luca De Alfaro & Rupak Majumdar (2008): The Complexity of Coverage. In: Asian Symposium on Programming Languages and Systems, Springer, pp. 91–106, doi:10.1007/978-3-540-89330-1_7.
  • [8] Eric Dallal, Daniel Neider & Paulo Tabuada (2016): Synthesis of Safety Controllers Robust to Unmodeled Intermittent Disturbances. In: Decision and Control (CDC), 2016 IEEE 55th Conference on, IEEE, pp. 7425–7430, doi:10.1109/CDC.2016.7799416.
  • [9] Alexandre David, Kim Larsen, Shuhao Li & Brian Nielsen (2008): Cooperative Testing of Timed Systems. Electronic Notes in Theoretical Computer Science, pp. 79–92, doi:10.1016/j.entcs.2008.11.007.
  • [10] Alexandre David, Kim Larsen, Shuhao Li & Brian Nielsen (2008): A Game-Theoretic Approach to Real-Time System Testing. In: Design, Automation and Test in Europe, IEEE, pp. 486–491, doi:10.1145/1403375.1403491.
  • [11] Alexandre David, Kim Larsen, Shuhao Li & Brian Nielsen (2009): Timed Testing under Partial Observability. In: International Conference on Software Testing Verification and Validation, IEEE, pp. 61–70, doi:10.1109/ICST.2009.38.
  • [12] Niklas Krafczyk & Jan Peleska (2017): Effective Infinite-State Model Checking by Input Equivalence Class Partitioning. In: IFIP International Conference on Testing Software and Systems, Springer, pp. 38–53, doi:10.1007/978-3-319-67549-7_3.
  • [13] Lev Nachmanson, Margus Veanes, Wolfram Schulte, Nikolai Tillmann & Wolfgang Grieskamp (2004): Optimal Strategies for Testing Nondeterministic Systems. In: ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 55–64, doi:10.1145/1013886.1007520.
  • [14] Christos Papadimitriou (2001): Algorithms, Games, and the Internet. In: Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, ACM Press, pp. 749–753, doi:10.1145/380752.380883.
  • [15] Adenilso Simao & Alexandre Petrenko (2014): Generating Complete and Finite Test Suite for ioco: Is It Possible? In: Proceedings of the Ninth Workshop on Model-Based Testing, pp. 56–70, doi:10.4204/EPTCS.141.5.
  • [16] Willem Stokkink, Mark Timmer & Mariëlle Stoelinga (2013): Divergent Quiescent Transition Systems. In: Proceedings seventh conference on Tests and Proofs, LNCS, doi:10.1007/978-3-642-38916-0_13.
  • [17] Mark Timmer, Hendrik Brinksma & Mariëlle Stoelinga (2011): Model-Based Testing. In: Software and Systems Safety: Specification and Verification, NATO Science for Peace and Security, IOS Press, pp. 1–32, doi:10.3233/978-1-60750-711-6-1.
  • [18] Jan Tretmans (2008): Model Based Testing with Labelled Transition Systems. In: Formal methods and testing, Springer, pp. 1–38, doi:10.1007/978-3-540-78917-8_1.
  • [19] Margus Veanes & Nikolaj Bjørner (2010): Alternating Simulation and IOCO. In: IFIP International Conference on Testing Software and Systems, Springer, pp. 47–62, doi:10.1007/978-3-642-16573-3_5.
  • [20] René de Vries & Jan Tretmans (2001): Towards Formal Test Purposes. Formal Approaches to Testing of Software, FATES’01: A Satellite Workshop of CONCUR’01 Proceedings, pp. 61–76.
  • [21] Farn Wang, Sven Schewe & Jung-Hsuan Wu (2015): Complexity of Node Coverage Games. Theoretical Computer Science, pp. 45–60, doi:10.1016/j.tcs.2015.02.002.