Strategy Representation by Decision Trees in Reactive Synthesis

02/02/2018 · by Tomáš Brázdil, et al.

Graph games played by two players over finite-state graphs are central in many problems in computer science. In particular, graph games with ω-regular winning conditions, specified as parity objectives, which can express properties such as safety, liveness, fairness, are the basic framework for verification and synthesis of reactive systems. The decisions for a player at various states of the graph game are represented as strategies. While the algorithmic problem for solving graph games with parity objectives has been widely studied, the most prominent data-structure for strategy representation in graph games has been binary decision diagrams (BDDs). However, due to the bit-level representation, BDDs do not retain the inherent flavor of the decisions of strategies, and are notoriously hard to minimize to obtain succinct representation. In this work we propose decision trees for strategy representation in graph games. Decision trees retain the flavor of decisions of strategies and allow entropy-based minimization to obtain succinct trees. However, decision trees work in settings (e.g., probabilistic models) where errors are allowed, and overfitting of data is typically avoided. In contrast, for strategies in graph games no error is allowed, and the decision tree must represent the entire strategy. We develop new techniques to extend decision trees to overcome the above obstacles, while retaining the entropy-based techniques to obtain succinct trees. We have implemented our techniques to extend the existing decision tree solvers. We present experimental results for problems in reactive synthesis to show that decision trees provide a much more efficient data-structure for strategy representation as compared to BDDs.

1 Introduction

Graph games. We consider nonterminating two-player graph games played on finite-state graphs. The vertices of the graph are partitioned into states controlled by the two players, namely, player 1 and player 2, respectively. In each round the state changes according to a transition chosen by the player controlling the current state. Thus the outcome of the game, played for an infinite number of rounds, is an infinite path through the graph, which is called a play. An objective for a player specifies whether the resulting play is winning or losing for that player. We consider zero-sum games where the objectives of the players are complementary. A strategy for a player is a recipe that specifies the choice of transitions at the states controlled by the player. Given an objective, a winning strategy for a player from a state ensures the objective irrespective of the strategy of the opponent.

Games and synthesis. These games play a central role in several areas of computer science. One important application arises when the vertices and edges of a graph represent the states and transitions of a reactive system, and the two players represent controllable versus uncontrollable decisions during the execution of the system. The synthesis problem for reactive systems asks for the construction of a winning strategy in the corresponding graph game. This problem was first posed independently by Church [17] and Büchi [14], and has been extensively studied [47, 15, 29, 38]. Other than applications in synthesis of discrete-event and reactive systems [48, 45], game-theoretic formulations play a crucial role in modeling [21, 1], refinement [31], verification [20, 3], testing [6], compatibility checking [19], and many other applications. In all the above applications, the objectives are ω-regular, and the ω-regular sets of infinite paths provide an important and robust paradigm for reactive-system specifications [37, 53].

Parity games. Graph games with parity objectives are relevant in reactive synthesis, since all common specifications for reactive systems are expressed as ω-regular objectives that can be transformed to parity objectives. In particular, a convenient specification formalism in reactive synthesis is LTL (linear-time temporal logic). The LTL synthesis problem asks, given a specification over input and output variables in LTL, whether there is a strategy for the output sequences to ensure the specification irrespective of the behavior of the input sequences. The conversion of LTL to non-deterministic Büchi automata, and non-deterministic Büchi automata to deterministic parity automata, gives rise to a parity game to solve the LTL synthesis problem. Formally, the algorithmic problem asks for a given graph game with a parity objective and a starting state, whether player 1 has a winning strategy. This problem is central in verification and synthesis. While it is a major open problem whether the problem can be solved in polynomial time, it has been widely studied in the literature [56, 16, 51].

Strategy representation. In graph games, the strategies are the most important objects, as they represent the witness to a player's win. For example, winning strategies represent controllers in the controller synthesis problem. Hence all parity-game solvers produce the winning strategies as their output. While the algorithmic problem of solving parity games has received huge attention, quite surprisingly, data-structures for the representation of strategies have received little attention. While data-structures for strategies could be relevant within particular algorithms for parity games (e.g., the strategy-iteration algorithm), our focus is very different from improving such algorithms. Our main focus is the representation of the strategies themselves, which are the main output of the parity-game solvers; hence our strategy representation serves as post-processing of the output of the solvers. The standard data-structure for representing strategies is binary decision diagrams (BDDs) [2, 13], used as follows: a strategy is interpreted as a lookup table of pairs that specifies for every controlled state of the player the transition to choose, and then the lookup table is represented as a binary decision diagram (BDD).

Strategies as BDDs. The desired properties of data-structures for strategies are as follows: (a) succinctness, i.e., small strategies are desirable, since strategies correspond to controllers, and smaller strategies represent efficient controllers that are required in resource-constrained environments such as embedded systems; (b) explanatory, i.e., the representation explains the decisions of the strategies. In this work we consider a different data-structure for representation of strategies in graph games. The key drawbacks of BDDs to represent strategies in graph games are as follows. First, the size of BDDs crucially depends on the variable ordering. The variable ordering problem is notoriously difficult: the optimal variable ordering problem is NP-complete, and for large dimensions no heuristics are known to work well. Second, since strategies have to be input to the BDD construction as Boolean formulae, the representation, though succinct, does not retain the inherent important choice features of the decisions of the strategies (for an illustration see Example 2).

Strategies as decision trees. In this work, we propose to use decision trees [40] for strategy representation in graph games. A decision tree is a structure similar to a BDD, but with nodes labelled by various predicates over the system’s variables. In the basic algorithm for decision trees, the tree is constructed using an unfolding procedure where the branching for the decision making is done in order to maximize the information gain at each step.

The key advantages of decision trees over BDDs are as follows:

  • The first two advantages are conceptual. First, while in BDDs, a level corresponds to one variable, in decision trees, a predicate can appear at different levels and different predicates can appear at the same level. This allows for more flexibility in the representation. Second, decision trees utilize various predicates over the given features in order to make decisions, and ignore all the unimportant features. Thus they retain the inherent flavor of the decisions of the strategies.

  • The other important advantage is algorithmic. Since the data-structure is based on information gain, sophisticated algorithms based on entropy exist for their construction. These algorithms result in a succinct representation, whereas for BDDs there is no good algorithmic approach for variable reordering.

Key challenges.

While there are several advantages of decision trees, and decision trees have been extensively studied in the machine learning community, there are several key challenges and obstacles for representation of strategies in graph games by decision trees.

  • First, decision trees have been mainly used in the probabilistic setting. In such settings, research from the machine learning community has developed techniques to show that decision trees can be effectively pruned to obtain succinct trees, while allowing small error probabilities. However, in the context of graph games, no error is allowed in the strategic choices.

  • Second, decision trees have been used in the machine learning community in classification, where an important aspect is to ensure that there is no overfitting of the training data. In contrast, in the context of graph games, the decision tree must fit the entire representation of the strategies.

While for probabilistic models such as Markov decision processes (MDPs) decision trees can be used as a blackbox [10], in the setting of graph games their use is much more challenging. In summary, in previous settings where decision trees are used, small error rates are allowed in favor of succinctness, and overfitting is not permitted, whereas in our setting no error is allowed, and the complete fitting of the tree has to be ensured. The basic algorithm for decision-tree learning (called the ID3 algorithm [46, 40]) suffers from the curse of dimensionality, and the error allowance is used to handle the dimensionality. Hence we need to develop new techniques for strategy learning with decision trees in graph games.

Our techniques. We present a new technique for learning strategies with decision trees based on look-ahead. In the basic algorithm for decision trees, at each step of the unfolding, the algorithm proceeds as long as there is some information gain. However, suppose that no possible branching yields any information gain; this represents the situation where the local (i.e., one-step) decision making fails. We extend this process so that look-ahead is allowed, i.e., we consider the possible information gain over multiple steps. The look-ahead along with complete unfolding ensures that there is no error in the strategy representation. While the look-ahead approach provides a systematic principle to obtain precise strategy representation, it is computationally expensive, and we present heuristics used together with look-ahead for computational efficiency and succinctness of strategy representation.

Implementation and experimental results. Since in our setting existing decision tree solvers cannot be used as a blackbox, we extended the existing solvers with our techniques mentioned above. We have then applied our implementation to compare decision trees and BDDs for representation of strategies for problems in reactive synthesis. First, we compared our approach against BDDs for two classical examples of reactive synthesis from SYNTCOMP benchmarks [33]. Second, we considered randomly generated LTL formulae, and the graph games obtained for the realizability of such formulae. In both the above experiments the decision trees represent the winning strategies much more efficiently as compared to BDDs.

Related work. Previous non-explicit representations of strategies for verification or synthesis purposes typically used BDDs [55] or automata [41, 43], and do not explain the decisions by the current valuation of variables. Decision trees have been used extensively in the area of machine learning as a classifier that naturally explains a decision [40]. They have also been considered for approximate representation of values in states, and thus implicitly for an approximate representation of strategies, for the model of Markov decision processes (MDPs) in [9, 8]. Recently, in the context of verification, this approach has been modified to capture strategies guaranteed to be ε-optimal, for MDPs [10] and partially observable MDPs [11]. Learning a compact decision tree representation of an MDP strategy was also investigated in [36] for the case of body sensor networks. Besides, decision trees are becoming more popular in verification and programming languages in general; for instance, they are used to capture program invariants [35, 28]. To the best of our knowledge, decision trees have so far been used only in the context of (possibly probabilistic) systems with a single player. Our decision-tree approach is thus the first in the two-player game setting that is required in reactive synthesis.

Summary. To summarize, our main contributions are:

  1. We propose decision trees as a data-structure for strategy representation in graph games.

  2. The representation of strategies with decision trees poses many obstacles, as in contrast to the probabilistic setting no error is allowed in games. We present techniques that overcome these obstacles while still retaining the algorithmic advantages (such as entropy-based methods) of decision trees to obtain succinct decision trees.

  3. We extend existing decision tree solvers with our techniques and present experimental results to demonstrate the effectiveness of our approach in reactive synthesis.

2 Graph Games and Strategies

Graph games. A graph game consists of a tuple G = ⟨S, A₁, A₂, δ⟩, where:

  • S is a finite set of states partitioned into player 1 states S₁ and player 2 states S₂;

  • A₁ (resp., A₂) is the set of actions for player 1 (resp., player 2); and

  • δ : (S₁ × A₁) ∪ (S₂ × A₂) → S is the transition function that, given a player 1 state and a player 1 action, or a player 2 state and a player 2 action, gives the successor state.

Plays. A play is an infinite sequence of state-action pairs ω = s₀a₀s₁a₁… such that for all i ≥ 0 we have that if sᵢ ∈ Sⱼ for j ∈ {1, 2}, then aᵢ ∈ Aⱼ and sᵢ₊₁ = δ(sᵢ, aᵢ). We denote by Plays(G) the set of all plays of a graph game G.

Strategies. A strategy is a recipe for a player to choose actions to extend finite prefixes of plays. Formally, a strategy for player 1 is a function σ : S*·S₁ → A₁ that given a finite sequence of visited states chooses the next action. The definitions for player 2 strategies π are analogous. We denote by Σ and Π the sets of all strategies for player 1 and player 2 in graph game G, respectively. Given strategies σ ∈ Σ and π ∈ Π, and a starting state s₀ in G, there is a unique play ω(σ, π, s₀) = s₀a₀s₁a₁… such that for all i ≥ 0, if sᵢ ∈ S₁ (resp., sᵢ ∈ S₂) then aᵢ = σ(s₀s₁…sᵢ) (resp., aᵢ = π(s₀s₁…sᵢ)). A memoryless strategy is a strategy that does not depend on the finite prefix of the play but only on the current state, i.e., a function σ : S₁ → A₁ or π : S₂ → A₂.

Objectives. An objective for a graph game G is a set φ ⊆ Plays(G). We consider the following objectives:

  • Reachability and safety objectives. A reachability objective is defined by a set T ⊆ S of target states, and the objective requires that a state in T is visited at least once. Formally, Reach(T) = {ω ∈ Plays(G) : sᵢ ∈ T for some i ≥ 0}. The dual of reachability objectives are safety objectives, defined by a set F ⊆ S of safe states, and the objective requires that only states in F are visited. Formally, Safe(F) = {ω ∈ Plays(G) : sᵢ ∈ F for all i ≥ 0}.

  • Parity objectives. For an infinite play ω we denote by Inf(ω) the set of states that occur infinitely often in ω. Let p : S → {0, 1, …, d} be a priority function. The parity objective Parity(p) = {ω ∈ Plays(G) : min{p(s) : s ∈ Inf(ω)} is even} requires that the minimum of the priorities of the states visited infinitely often be even.

Winning region and strategies. Given a game graph G and an objective φ, a winning strategy σ from state s for player 1 is a strategy such that for all strategies π ∈ Π we have ω(σ, π, s) ∈ φ. Analogously, a winning strategy π from s for player 2 ensures that for all strategies σ ∈ Σ we have ω(σ, π, s) ∉ φ. The winning region W₁ (resp., W₂) for player 1 (resp., player 2) is the set of states from which player 1 (resp., player 2) has a winning strategy. A fundamental result for graph games with parity objectives shows that the winning regions form a partition of the state space, and if there is a winning strategy for a player, then there is a memoryless winning strategy [25].
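For concreteness, the following minimal Python sketch (ours, purely illustrative; all state and action names are hypothetical) represents a graph game explicitly and unrolls the unique play induced by a pair of memoryless strategies.

# Illustrative sketch (ours, not the paper's artifact): an explicit graph
# game with states owned by the two players, and memoryless strategies
# given as dictionaries from owned states to actions.

S1 = {"s0", "s2"}          # player 1 states
S2 = {"s1"}                # player 2 states

# Transition function delta: (state, action) -> successor state.
delta = {
    ("s0", "a"): "s1",
    ("s0", "b"): "s2",
    ("s1", "x"): "s0",
    ("s1", "y"): "s2",
    ("s2", "a"): "s2",
}

sigma = {"s0": "a", "s2": "a"}   # memoryless strategy of player 1
pi = {"s1": "y"}                 # memoryless strategy of player 2

def play_prefix(s, rounds):
    """Unroll a prefix of the unique play induced by (sigma, pi) from s."""
    prefix = [s]
    for _ in range(rounds):
        a = sigma[s] if s in S1 else pi[s]
        s = delta[(s, a)]
        prefix.append(s)
    return prefix

print(play_prefix("s0", 4))  # ['s0', 's1', 's2', 's2', 's2']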

LTL synthesis and objectives. Reachability and safety objectives are the most basic objectives to specify properties of reactive systems. Most properties that arise in practice for analysis of reactive systems are ω-regular objectives. A convenient logical framework to express ω-regular objectives is the LTL (linear-time temporal logic) framework. The problem of synthesis from specifications, in particular LTL synthesis, has received huge attention [18]. LTL objectives can be translated to parity automata, and the synthesis problem reduces to solving games with parity objectives.

In reactive synthesis it is natural to consider games where the state space is defined by a set of variables, and the game is played by an input player and an output player who choose the respective input and output signals. We describe such games below; they correspond directly to graph games.

I/O games with variables. Consider a finite set X of variables from a finite domain; for simplicity, we consider Boolean variables only. A valuation is an assignment to each variable; in our case, 2^X denotes the set of all valuations. Let X be partitioned into input signals I, output signals O, and state variables V, i.e., X = I ⊎ O ⊎ V. Consider the alphabet Σ_I = 2^I (resp., Σ_O = 2^O) where each letter represents a subset of the input (resp., output) signals, and the alphabet Σ_V = 2^V where each letter represents a subset of the state variables. The input/output choices affect the valuation of the variables, which is given by the next-step valuation function Δ : Σ_V × Σ_I × Σ_O → Σ_V. Consider a game played as follows: at every round the input player chooses a set of input signals (i.e., a letter from Σ_I), and given the input choice the output player chooses a set of output signals (i.e., a letter from Σ_O). The above game can be represented as a graph game as follows:

  • S = Σ_V ∪ (Σ_V × Σ_I);

  • player 1 represents the input player and S₁ = Σ_V; player 2 represents the output player and S₂ = Σ_V × Σ_I;

  • A₁ = Σ_I and A₂ = Σ_O; and

  • given a valuation v ∈ Σ_V and i ∈ A₁ we have δ(v, i) = (v, i), and for o ∈ A₂ we have δ((v, i), o) = Δ(v, i, o).

In this paper, we use decision trees to represent memoryless strategies in such graph games, where states are represented as vectors of Boolean values. In Section 5 we show how such games arise from various sources (AIGER specifications [32], LTL synthesis) and why it is sufficient to consider memoryless strategies only.
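As an illustration of the construction above, the following Python sketch (ours; the next-step function is a hypothetical stand-in) builds the graph-game transition function of an I/O game over bit tuples.

from itertools import product

# Illustrative sketch (ours; next_step is a hypothetical stand-in for the
# next-step valuation function): building the graph-game transition
# function of an I/O game over bit tuples.
N_IN, N_OUT, N_STATE = 1, 1, 2   # |I|, |O|, |V|

def all_valuations(n):
    return list(product((0, 1), repeat=n))

def next_step(v, i, o):
    # Hypothetical next-step function: the state latches input and output.
    return (i[0], o[0])

# Player 1 states are valuations v; player 2 states are pairs (v, i).
S1 = all_valuations(N_STATE)
S2 = [(v, i) for v in S1 for i in all_valuations(N_IN)]

delta = {}
for v in S1:
    for i in all_valuations(N_IN):
        delta[(v, i)] = (v, i)                    # input move: remember i
for (v, i) in S2:
    for o in all_valuations(N_OUT):
        delta[((v, i), o)] = next_step(v, i, o)   # output move: update v

print(delta[((0, 0), (1,))])          # ((0, 0), (1,))
print(delta[(((0, 0), (1,)), (0,))])  # (1, 0)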

3 Decision Trees and Decision Tree Learning

In this section we recall decision trees and learning decision trees. A key application domain of games on graphs is reactive synthesis (such as safety synthesis from SYNTCOMP benchmarks as well as LTL synthesis) and our comparison for strategy representation is against BDDs. BDDs are particularly suitable for states and actions represented as bitvectors. Hence for a fair comparison against BDDs, we consider a simple version of decision trees over bitvectors, though decision trees and their corresponding methods can be naturally extended to richer domains (such as vectors of integers as used in [10]).

3.0.1 Decision trees.

A decision tree over {0, 1}^d is a finite rooted binary (ordered) tree T together with two functions ρ and θ, where ρ assigns to every inner node a number from {1, …, d}, and θ assigns to every leaf a value YES or NO.

The language L(T) of the tree is defined as follows. For a vector x = (x₁, …, x_d) ∈ {0, 1}^d, we find a path p from the root to a leaf such that for each inner node n on the path, x_{ρ(n)} = 0 iff the first child of n is on p. Denote the leaf on this particular path by ℓ. Then x is in the language L(T) of T iff θ(ℓ) = YES.
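A minimal Python sketch of this semantics (ours; it fixes the convention that the first child is the 0-branch): a tree is a nested tuple, and classification follows the path determined by the tested bits.

# Illustrative sketch (ours). A tree is a nested tuple (i, child0, child1)
# testing bit i (0-based), or a leaf "YES"/"NO". The example tree has the
# shape of the tree in Example 1 below: test the third value, then
# possibly the second.
tree = (2, "YES", (1, "NO", "YES"))

def classify(tree, x):
    """Follow the path determined by x and return the label of the leaf."""
    while not isinstance(tree, str):
        i, child0, child1 = tree
        tree = child1 if x[i] == 1 else child0
    return tree

print(classify(tree, (0, 1, 0)))  # YES: third value is 0
print(classify(tree, (0, 0, 1)))  # NO: third value 1, second value 0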

Example 1

Consider dimension d = 3. The language of the tree depicted in Fig. 1 can be described by the regular expression ({0,1}{0,1}0) ∪ ({0,1}11). Intuitively, the root node represents the predicate of the third value, the other inner node represents the predicate of the second value. For each inner node, the first and second children correspond to the cases where the value at the position specified by the predicate of the inner node is 0 and 1, respectively. We supply the edge labels to depict the tree clearly. The leftmost leaf corresponds to the subset of {0,1}³ where the third value is 0; the rightmost leaf corresponds to the subset of {0,1}³ where the third value is 1 and the second value is 1.

Figure 1: A decision tree over {0,1}³ (edges labelled =0 and =1)

Standard DT learning. We describe the standard process of binary classification using decision trees (see Algorithm 1). Given a training set Train ⊆ {0, 1}^d, partitioned into two subsets Good and Bad, the process of learning according to the algorithm ID3 [46, 40] computes a decision tree T that assigns YES to all elements of Good and NO to all elements of Bad. In the algorithm, we identify a leaf ℓ with the subset of Train whose path leads to ℓ; a leaf ℓ is mixed if ℓ has a non-empty intersection with both Good and Bad. To split a leaf ℓ on i ∈ {1, …, d} means that ℓ becomes an internal node with the two new leaves ℓ₀ and ℓ₁ as its children. Then, the leaf ℓ₁ contains the samples of ℓ where the value in the position i equals 1, and the leaf ℓ₀ contains the rest of the samples of ℓ, since these have the value in the position i equal to 0. The entropy of a node ℓ is defined as

H(ℓ) = −(|ℓ ∩ Good| / |ℓ|) · log₂(|ℓ ∩ Good| / |ℓ|) − (|ℓ ∩ Bad| / |ℓ|) · log₂(|ℓ ∩ Bad| / |ℓ|).

The information gain of a given i (and thus also of the split of ℓ into ℓ₀ and ℓ₁) is defined by

Gain(ℓ, i) = H(ℓ) − (|ℓ₀| / |ℓ|) · H(ℓ₀) − (|ℓ₁| / |ℓ|) · H(ℓ₁)    (1)

where ℓ_b is the set of all x ∈ ℓ with x_i = b, for b ∈ {0, 1}. Finally, given a leaf ℓ we define θ(ℓ) = YES if |ℓ ∩ Good| ≥ |ℓ ∩ Bad|, and θ(ℓ) = NO otherwise.

1:Inputs: Train ⊆ {0,1}^d partitioned into subsets Good and Bad.
2:Outputs: A decision tree T such that L(T) ∩ Train = Good.
3:/* train on positive set Good and negative set Bad */
4:T ← a single leaf ℓ containing all samples of Train
5:while a mixed leaf ℓ exists do
6:      i ← an element of {1, …, d} that maximizes the information gain (1)
7:     split ℓ on i into two leaves ℓ₀ and ℓ₁,
8:      ℓ₀ ← {x ∈ ℓ : x_i = 0} and ℓ₁ ← {x ∈ ℓ : x_i = 1}
9:return T
Algorithm 1 ID3 learning algorithm

Intuitively, splitting on the component with the highest gain splits the set so that it maximizes the portion of Good in one subset and the portion of Bad in the other one.
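The following Python sketch (ours, not the authors' implementation) mirrors Algorithm 1: it recursively splits on the bit with maximal information gain until no leaf is mixed. Note the restriction to bits that actually split the current samples; when every such bit has zero gain, plain ID3 still has to pick one arbitrarily, which is precisely the situation addressed by the look-ahead of Section 4.

from math import log2

# Illustrative sketch (ours, not the authors' implementation): plain ID3
# on bitvector samples, fitting Good/Bad exactly.

def entropy(good, bad):
    n = len(good) + len(bad)
    h = 0.0
    for part in (good, bad):
        p = len(part) / n
        if p > 0:
            h -= p * log2(p)
    return h

def split(samples, i):
    return ([x for x in samples if x[i] == 0],
            [x for x in samples if x[i] == 1])

def info_gain(good, bad, i):
    g0, g1 = split(good, i)
    b0, b1 = split(bad, i)
    n = len(good) + len(bad)
    gain = entropy(good, bad)
    for g, b in ((g0, b0), (g1, b1)):
        if g or b:
            gain -= (len(g) + len(b)) / n * entropy(g, b)
    return gain

def id3(good, bad, d):
    if not bad:
        return "YES"
    if not good:
        return "NO"
    # Consider only bits that actually split the current samples; this
    # guarantees progress even when every candidate has zero gain.
    cands = [i for i in range(d)
             if 0 < sum(x[i] for x in good + bad) < len(good) + len(bad)]
    i = max(cands, key=lambda i: info_gain(good, bad, i))
    g0, g1 = split(good, i)
    b0, b1 = split(bad, i)
    return (i, id3(g0, b0, d), id3(g1, b1, d))

print(id3([(0, 1), (1, 1)], [(0, 0), (1, 0)], d=2))  # (1, 'NO', 'YES')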

Remark 1 (Optimizations)

The basic ID3 algorithm for decision tree learning suffers from the curse of dimensionality. However, decision trees are primarily applied to machine learning problems where small errors are allowed to obtain succinct trees. Hence the allowance of error is crucially used in existing solvers (such as WEKA [30]) to combat dimensionality. In particular, the error rate is exploited in the unfolding, where the unfolding proceeds only when the information gain exceeds the error threshold. Further error is also introduced in the pruning of the trees, which ensures that the overfitting of training data is avoided.

4 Learning Winning Strategies Efficiently

In this section we present our contributions. We first start with the representation of strategies as training sets, and then present our strategy decision-tree learning algorithm.

4.1 Strategies as Training Sets and Decision Trees

Strategies as training sets. Let us consider a game G. We represent strategies of both players using the same method, so in what follows we consider either of the players and denote by S_p and A_p the sets of states and actions of that player, respectively. We fix σ, a memoryless strategy of the player.

We assume that G is an I/O game with binary variables, which means S_p ⊆ {0,1}^m and A_p ⊆ {0,1}^n for some m, n. A memoryless strategy is then a partial function σ : {0,1}^m → {0,1}^n. Furthermore, we fix an initial state s₀, and let S_σ be the set of all states reachable from s₀ using σ against some strategy of the other player. We consider all objectives only on plays starting in the initial state s₀. Therefore, the strategy σ can be seen as a function σ : S_σ → A_p.

Now we define Good = {s·σ(s) : s ∈ S_σ} and Bad = {s·a : s ∈ S_σ, a ∈ A_p, a ≠ σ(s)}, where s·a denotes the concatenation of the state and action bitvectors. The set of all training examples is the disjoint union Train = Good ⊎ Bad ⊆ {0,1}^{m+n}.

As we do not use any pruning or stopping rules, the ID3 algorithm returns a decision tree T that fits the training set exactly. This means that for all x ∈ Train we have that x ∈ L(T) iff x ∈ Good. Thus T represents the strategy σ. Note that for any sample of {0,1}^{m+n} ∖ Train, whether it belongs to L(T) or not is immaterial to us. Thus strategies are naturally represented as decision trees, and we present an illustration below.
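A small Python sketch (ours; the concrete strategy is hypothetical) of the construction of Good and Bad from a memoryless strategy given as a dictionary:

# Illustrative sketch (ours; the strategy itself is hypothetical): from a
# memoryless strategy on reachable states to the training set Good/Bad.
ACTIONS = [(0,), (1,)]            # A_p, here a single action bit

sigma = {                         # reachable state -> chosen action
    (0, 0, 0): (1,),
    (0, 1, 0): (0,),
    (1, 1, 0): (1,),
}

good, bad = [], []
for state, chosen in sigma.items():
    for action in ACTIONS:
        sample = state + action   # concatenation s.a of the bitvectors
        (good if action == chosen else bad).append(sample)

print(len(good), len(bad))        # 3 3
# A decision tree fitting this set exactly represents sigma; samples
# outside Train (e.g. for unreachable states) may be classified either way.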

Figure 2: Tree representation of strategy σ
Example 2

Let the state binary variables be labeled as state1, state2, and state3, respectively, and let the action binary variable be labeled as action. Consider a strategy σ defined on four states, assigning to each of them a value of action; the training set Train then contains one Good and one Bad sample per state. Fig. 2 depicts a decision tree representing the strategy σ.

Remark 2

The above example demonstrates the conceptual advantages of decision trees over BDDs. First, in decision trees, different predicates can appear at the same level of the tree (e.g., two different predicates appear at the second level). At the same time, a predicate can appear at different levels of the tree (e.g., one predicate appears once at the second level and twice at the third level).

The second advantage is a bit technical, but crucial. In the example there is no pair of samples x ∈ Good and y ∈ Bad that differs only in the value of state3. This suggests that the feature state3 is unimportant w.r.t. differentiating between Good and Bad, and indeed the decision tree in Fig. 2 contains no predicate state3 while still representing σ. However, to construct a BDD that ignores state3 is very difficult, since a Boolean formula is provided as the input to the BDD construction, and this formula inevitably sets the value for every sample. Therefore, it is impossible to declare “the samples outside Train can be resolved either way”. One way to construct a BDD B would be to require L(B) = Good. But then B classifies differently some pair of vectors that differ only in the value of state3, so state3 has to be used in the representation of B. Another option could be L(B) = Good ∪ ({0,1}^{m+n} ∖ Train), but then again two vectors differing only in state3 are classified differently, so state3 still has to be used in the representation.

Example 3

Consider, e.g., Good = {10101} and Bad = {10100}, a single pair of samples over five variables that differ only in the value of the last variable. Algorithm 1 outputs a simple decision tree differentiating between Good and Bad only according to the value of the last variable. On the other hand, a BDD B constructed as L(B) = Good contains nodes for all five variables.

4.2 Strategy-DT Learning

4.2.1 Challenges.

In contrast to other machine learning domains, where errors are allowed, strategies in graph games must be represented precisely, and several challenges arise. Most importantly, the machine-learning philosophy of classifiers is to generalize the experience, trying to achieve good predictions on any (not just training) data. In order to do so, overfitting the training data must be avoided. Indeed, specializing the classifier to cover the training data precisely leads to classifiers reflecting the concrete instances of random noise instead of generally useful predictors. Overfitting is prevented using a tolerance on learning all details of the training data. Consequently, the training data are not learnt exactly. Since in our case the training set is exactly what we want to represent, our approach must be entirely different. In particular, the optimizations applicable in the setting where errors are allowed (see Remark 1) do not help to handle the curse of dimensionality here; notably, it may be necessary to unfold the decision tree even in situations where none of the one-step unfolds induces any information gain.

4.2.2 Solution: look-ahead.

In the ID3 algorithm (Algorithm 1), when none of the splits has a positive information gain (see Formula (1)), the corresponding node is split arbitrarily. This can result in very large decision trees. We propose a better solution. Namely, we extend ID3 with a “look-ahead”: if no split results in a positive information gain, one can pick a split so that next, when splitting the children, the information gain is positive. If still no such split exists, one can try and pick a split and splits of children so that afterwards there is a split of grandchildren with positive information gain. And so on, possibly up to a constant depth k, yielding a k-look-ahead.

Before we define the look-ahead formally, we have a look at a simple example:

Example 4

Consider Good = {00, 11} and Bad = {01, 10}, characterising the set of vectors whose two values are equal. Splitting on any i ∈ {1, 2} does not give a positive information gain. Standard DT learning procedures would either stop here and not expand this leaf any more, or split arbitrarily. With the look-ahead, one can see that using x₁ and then x₂, the information gain is positive and we obtain a decision tree classifying the set perfectly.

Here we could as well introduce more complex predicates such as x₁ = x₂ instead of the look-ahead. However, in general the look-ahead has the advantage that each of the two branches of the split on x₁ may afterwards split on different bits (the currently best ones), whereas with x₁ = x₂ we commit to using x₂ in both branches.

The example illustrates the 2-look-ahead with the following formal definition. (For explanatory reasons, the general case follows afterwards.) Consider a node ℓ. For every triple (i, j₀, j₁) ∈ {1, …, d}³, consider splitting ℓ on i and subsequently the 0-child on j₀ and the 1-child on j₁. This results in a partition of ℓ into the four sets ℓ_{b,c} = {x ∈ ℓ : x_i = b, x_{j_b} = c} for b, c ∈ {0, 1}. We assign to (i, j₀, j₁) its 2-look-ahead information gain defined by

Gain₂(ℓ, i, j₀, j₁) = H(ℓ) − Σ_{b,c ∈ {0,1}} (|ℓ_{b,c}| / |ℓ|) · H(ℓ_{b,c}).

The 2-look-ahead information gain of i is defined as max_{j₀, j₁} Gain₂(ℓ, i, j₀, j₁).

We say that i maximizes the 2-look-ahead information gain if no other predicate has a larger 2-look-ahead information gain.

In general, we define the k-step weighted entropy E_k(ℓ, i) of a node ℓ with respect to a predicate i by

E₁(ℓ, i) = (|ℓ₀| / |ℓ|) · H(ℓ₀) + (|ℓ₁| / |ℓ|) · H(ℓ₁)

and

E_k(ℓ, i) = (|ℓ₀| / |ℓ|) · min_{j₀} E_{k−1}(ℓ₀, j₀) + (|ℓ₁| / |ℓ|) · min_{j₁} E_{k−1}(ℓ₁, j₁).

Then we say that i maximizes the k-look-ahead information gain in ℓ if it maximizes H(ℓ) − E_k(ℓ, i), i.e., minimizes E_k(ℓ, i).

Note that 1-look-ahead coincides with the choice of split by ID3. For a fixed k, if the information gain of each k′-look-ahead, k′ ≤ k, is zero, we split based on a heuristic on Line 14 of Algorithm 2. This heuristic is detailed in the following subsection. Note that Algorithm 2 is correct-by-construction since we enforce representation of the entire input training set. We present a formal correctness proof in Appendix 0.B.
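The following Python sketch (our reading of the definitions above, not the authors' code; helpers are repeated for self-containment) computes the k-step weighted entropy recursively and selects a split with maximal k-look-ahead information gain. On the sets of Example 4 it succeeds with k = 2 where k = 1 fails.

from math import log2

# Illustrative sketch (ours): choosing a split by k-look-ahead gain.

def entropy(good, bad):
    n = len(good) + len(bad)
    h = 0.0
    for part in (good, bad):
        p = len(part) / n
        if p > 0:
            h -= p * log2(p)
    return h

def split(samples, i):
    return ([x for x in samples if x[i] == 0],
            [x for x in samples if x[i] == 1])

def weighted_entropy_k(good, bad, i, k, d):
    """k-step weighted entropy of splitting on bit i: each child may be
    split on its own best bit for k-1 further steps."""
    g0, g1 = split(good, i)
    b0, b1 = split(bad, i)
    n = len(good) + len(bad)
    total = 0.0
    for g, b in ((g0, b0), (g1, b1)):
        if not (g or b):
            continue
        if k == 1 or not (g and b):   # pure children need no look-ahead
            h = entropy(g, b)
        else:
            h = min(weighted_entropy_k(g, b, j, k - 1, d) for j in range(d))
        total += (len(g) + len(b)) / n * h
    return total

def best_split(good, bad, d, k):
    """Bit maximizing H(node) minus the k-step weighted entropy."""
    h = entropy(good, bad)
    return max(range(d),
               key=lambda i: h - weighted_entropy_k(good, bad, i, k, d))

# The set of Example 4: no single bit helps, but 2-look-ahead succeeds.
good, bad = [(0, 0), (1, 1)], [(0, 1), (1, 0)]
print(best_split(good, bad, d=2, k=2))  # 0 (each child then splits on bit 1)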

Remark 3 (Properties of look-ahead algorithm)

We now highlight some desirable properties of the look-ahead algorithm.

  • Incrementality. First, the algorithm presents an incremental approach: computation of the k-look-ahead can be done by further refining the results of the (k−1)-look-ahead analysis, due to the recursive nature of our definition. Thus the algorithm can start with k = 1 and increase k only when required.

  • Entropy-based minimization. Second, the look-ahead approach naturally extends the predicate choice of ID3, and thus the entropy-based minimization for decision trees is still applicable.

  • Reduction of dimensionality. Finally, Algorithm 2 uses the look-ahead method in an incremental fashion, thus only considering more complex “combinations” of predicates when necessary. Consequently, we do not produce all these combinations in advance, avoiding excessively high dimensionality and experiencing only local blowups.

In general, k-look-ahead clearly requires resources exponential in k. However, in our benchmarks, it was typically sufficient to apply the look-ahead for k equal to two, which is computationally feasible.

A different look-ahead-based technique was considered in order to dampen the greedy nature of decision tree construction [24], examining the predicates yielding the highest information gains. In contrast, our technique retains the greedy approach but focuses on the case where none of the predicates provides any information gain at all, and thus ID3-based techniques fail to advance. The main goal of our technique is to capture strong dependence between the features of the training set, which is a different problem from the one treated in [24]. Moreover, the look-ahead description in [24] is very informal, which prevents us from implementing their solution and comparing the two approaches experimentally.

1:Inputs: Train ⊆ {0,1}^d partitioned into subsets Good and Bad.
2:Outputs: A decision tree T such that L(T) ∩ Train = Good.
3:/* train on positive set Good and negative set Bad */
4:T ← a single leaf ℓ containing all samples of Train
5:while a mixed leaf ℓ exists do
6:     if there is i with a positive 1-look-ahead information gain then
7:          i ← an element of {1, …, d} that maximizes the 1-look-ahead information gain
8: ▹ maximum information gain is positive
9:
10:     else if there is i with a positive k-look-ahead information gain then
11:          i ← an element of {1, …, d} that maximizes the k-look-ahead information gain
12: ▹ maximum k-look-ahead information gain is positive
13:     else
14:          i ← the element of {1, …, d} maximizing the statistical-heuristic formula of Section 4.3
15:     split ℓ on i into two leaves ℓ₀ and ℓ₁,
16:      ℓ₀ ← {x ∈ ℓ : x_i = 0} and ℓ₁ ← {x ∈ ℓ : x_i = 1}
17:return T
Algorithm 2 k-look-ahead ID3

4.3 Heuristics

4.3.1 Statistical split-decision.

The look-ahead mentioned above provides a very systematic principle for resolving splitting decisions. However, the computation can be demanding in terms of computational resources. Therefore we present a very simple statistical heuristic that gives us one more option to decide a split. The precise formula for choosing the split i of a leaf ℓ is

argmax_{i ∈ {1,…,d}} max( |ℓ₁ ∩ Good| / |ℓ₁| + |ℓ₀ ∩ Bad| / |ℓ₀| ,  |ℓ₀ ∩ Good| / |ℓ₀| + |ℓ₁ ∩ Bad| / |ℓ₁| )

where ℓ₀ and ℓ₁ are as in Algorithm 1, and terms with an empty denominator are taken to be 0. Intuitively, we choose an i that maximizes the portion of good samples in one subset and the portion of bad samples in the other subset, which mimics the entropy-based method, and at the same time is very fast to compute. One can consider using this heuristic exclusively every time the basic ID3-based splitting technique fails. However, in our experiments, using 2-look-ahead first and then (once needed) proceeding with the heuristic yields better results, and is still computationally undemanding.
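A sketch of the heuristic as we read it, again in Python (ours; fractions over empty branches are treated as 0):

# Illustrative sketch (ours, following the formula above).

def frac(part, whole):
    return len(part) / len(whole) if whole else 0.0

def statistical_score(good, bad, i):
    g0 = [x for x in good if x[i] == 0]; g1 = [x for x in good if x[i] == 1]
    b0 = [x for x in bad if x[i] == 0];  b1 = [x for x in bad if x[i] == 1]
    side0, side1 = g0 + b0, g1 + b1
    return max(frac(g1, side1) + frac(b0, side0),
               frac(g0, side0) + frac(b1, side1))

def heuristic_split(good, bad, d):
    return max(range(d), key=lambda i: statistical_score(good, bad, i))

print(heuristic_split([(0, 1, 1)], [(0, 1, 0), (1, 1, 0)], d=3))  # 2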

4.3.2 Chain disjunction.

The entropy-based approach favors the splits where one of the branches contains a completely resolved data set (a subset of Good or of Bad), as this provides notable information gain. Therefore, as the algorithm proceeds, it often happens that at some point multiple splits provide a resolved data set in one of the branches. We consider a heuristic that chains all such splits together and computes the information gain of the resulting disjunction. More specifically, when considering each bit as a split candidate (line 6 of Algorithm 2), we also consider (a) the disjunction of all bits that yield a subset of Good in one of their branches, and (b) the disjunction of all bits that yield a subset of Bad in one of their branches. Then we choose the candidate that maximizes the information gain. These two extra checks are very fast to compute, and can improve succinctness and readability of the decision trees substantially, while maintaining that a decision tree fits its training set exactly. Appendix 0.D provides two examples, each showing first the decision tree obtained without this heuristic and then the decision tree obtained with it.
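The following Python sketch (ours, a simplified reading that inspects only the positive branch of each bit) shows the core of the chain heuristic: collecting all bits whose positive branch is already resolved and evaluating the single disjunctive split they induce.

# Illustrative sketch (ours, simplified to the positive branch of each bit).

def pure_good_bits(good, bad, d):
    bits = []
    for i in range(d):
        good_hit = [x for x in good if x[i] == 1]
        bad_hit = [x for x in bad if x[i] == 1]
        if good_hit and not bad_hit:     # positive branch resolved to Good
            bits.append(i)
    return bits

def disjunction_split(samples, bits):
    hit = [x for x in samples if any(x[i] == 1 for i in bits)]
    miss = [x for x in samples if all(x[i] == 0 for i in bits)]
    return hit, miss

good = [(1, 0, 0), (0, 1, 0)]
bad = [(0, 0, 1)]
bits = pure_good_bits(good, bad, d=3)
hit, miss = disjunction_split(good + bad, bits)
print(bits, hit, miss)  # [0, 1] [(1, 0, 0), (0, 1, 0)] [(0, 0, 1)]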

5 Experimental Results

In our experiments we use two sources of problems reducible to the representation of memoryless strategies in I/O games with binary variables: AIGER specifications [32] and LTL specifications [44]. Given a game, we use an explicit solver to obtain a strategy in the form of a list of played and non-played actions for each state, which can be directly used as a training set. Throughout our experiments, we compare succinctness of representation (expressed as the number of inner nodes) using decision trees and BDDs.

We implemented our method in the programming language Java. We used the external library CUDD [52] for the manipulation of BDDs. We used Algorithm 2 with k = 2 to compute the decision trees. We obtained all the results on a single machine with an Intel(R) Core(TM) i5-6200U CPU (2.40 GHz), with the heap size limited to 8 GB.

5.1 AIGER specifications

SYNTCOMP [33] is the most important competition of synthesis tools, running yearly since 2014. Most of the benchmarks have the form of AIGER specifications [32], describing safety properties using circuits with input, output, and latch variables. This reduces directly to the I/O games with variables, since the latches describe the current configuration of the circuit, corresponding to the state variables of the game. Since the objectives here are safety/reachability, the winning strategies can be computed and guaranteed to be memoryless.

We consider two benchmarks: scheduling of washing cycles in a washing system and a simple bit shifter model (the latter presented only in Appendix 0.D due to space constraints), introduced in SYNTCOMP 2015 [33] and SYNTCOMP 2014, respectively.

5.1.1 Scheduling of Washing Cycles.

The goal is to design a centralized controller for a washing system, composed of several tanks running in parallel [33]. The model of the system is parametrized by the number of tanks, the maximum allowed reaction delay before filling a tank with water, the delay after which the tank has to be emptied again, and the number of tanks that share a water pipe. The controller should satisfy a safety objective, that is, avoid reaching an error state, which means that the objective of the other player is reachability. In total, we obtain 406 graph games with safety/reachability objectives. In 394 cases we represent a winning strategy of the safety player, in the remaining 12 cases a winning strategy of the reachability player. The number of states of the graph games ranges from 30 to 43203, the size of training example sets ranges from 40 to 3359232.

Figure 3: Washing cycles – safety

The left plot in Fig. 3 displays the size of our decision tree representation of the controller winning safety strategies versus the size of their BDD representations. The decision tree is smaller than the corresponding BDD in all 394 cases, and the arithmetic, geometric, and harmonic averages of the ratio of decision tree size to BDD size all confirm a substantial advantage.

In these experiments, we obtain the BDD representation as follows: we consider 1000 randomly chosen variable orderings, construct a corresponding BDD for each, and in the end keep the BDD with the minimal size. As a different set of experiments, we compare against BDDs obtained using several algorithms for variable reordering, namely Sift [49], Window4 [27], a simulated-annealing-based algorithm [7], and a genetic algorithm [22]. The results with these algorithms are very similar and are provided in Appendix 0.C. Furthermore, information about execution time is also provided in Appendix 0.C.

Moreover, in the experiments described above, we do not use the chain heuristic described in Section 4.3, in order to provide a fair comparison of decision trees and BDDs. The right plot in Fig. 3 displays the difference in decision tree size once the chain heuristic is enabled. Each dot represents the ratio of decision tree size with and without it.

The decision trees also allow us to get some insight into the winning strategies. Namely, for a fixed number of water tanks and a fixed empty delay, we obtain a solution that is affected by different values of the fill delay in a minimal way, and is easily generalizable for all the values of the parameter. This fact becomes more apparent once the chain heuristic described in Section 4.3 is enabled. This phenomenon is not present in the case of BDDs, as they differ significantly, even in size, for different values of the parameter (see Table 1 in Appendix 0.C). For two tanks and empty delay of one, the solution is small enough to be humanly readable and understandable, see Fig. 4. Additional examples of the parametric solutions can be found in Appendix 0.C. This example suggests that decision tree representation might be useful in solving parametrized synthesis (and verification) problems.

Figure 4: A solution for two tanks and empty delay of one, illustrated for a fixed fill delay. Solutions for other values of the fill delay are the same except for replacing two constants, so a parametric solution could be obtained by a simple syntactic analysis of the difference of any two instance solutions.
Name
wash_3_1_1_3:   102   3   7   40     45    3   1
wash_4_1_1_3:   466   4   9   144    76    4   1
wash_4_1_1_4:   346   4   9   96     78    4   1
wash_4_2_1_4:   958   4   9   432    157   4   1
wash_4_2_2_4:   3310  4   9   432    301   4   1
wash_5_1_1_3:   1862  5   11  416    127   5   1
wash_5_1_1_4:   1630  5   11  352    121   5   1
wash_5_2_1_4:   5365  5   11  2368   255   5   1
wash_5_2_2_4:   27919 5   11  2368   554   5   1
wash_6_1_1_3:   6962  6   13  1088   193   6   1
wash_6_1_1_4:   6622  6   13  1024   172   6   1
wash_6_2_1_4:   27412 6   13  10432  419   6   1
Figure 5: Washing cycles – reachability

The table in Fig. 5 summarizes the results for the cases where the controller cannot be synthesized and we synthesize a counterexample winning reachability strategy of the environment. The benchmark parameters specify the total number of tanks, the fill delay, the empty delay, and the number of tanks sharing a pipe, respectively. In all of these cases, the size of the decision tree is substantially smaller than that of its BDD counterpart. The decision trees also provide some structural insight that may easily be used in debugging. Namely, the trees have a simple repeating structure where the number of repetitions depends just on the number of tanks; this is even easier to see once the chain heuristic of Section 4.3 is used, for instance in the tree solutions for three and six tanks. This structural phenomenon is not apparent from the BDDs at all.

5.2 Random LTL

In reactive synthesis, the objectives are often specified as LTL (linear-time temporal logic) formulae over input/output letters. In our experiments, we use formulae randomly generated using SPOT [23]. (First, we run randltl from the Spot tool-set: randltl -n10000 5 --tree-size=20..25 --seed=0 --simplify=3 -p --ltl-priorities 'ap=3,false=1,true=1,not=1,F=1,G=1,X=1,equiv=1,implies=1,xor=0,R=0,U=1,W=0,M=0,and=1,or=1' | ltlfilt --unabbreviate="eiMRW^" to obtain the formulae; then we run Rabinizer to obtain the respective automata and retain those with at least 100 states.) LTL formulae can be translated into deterministic parity automata; for this translation we use the tool Rabinizer [34]. Finally, given a parity automaton, we consider various partitions of the atomic propositions into input/output letters, which gives rise to graph games with parity objectives. See Appendix 0.F for more details on the translation. We retain all formulae that result in games with at most three priorities.

Consequently, we use two ways of encoding states of the graph games as binary vectors. First, a naive encoding, enabled by the fact that the output of tools such as [23, 34] in the HOA format [4] always assigns an id to each state; as this id is an integer, we may use its binary encoding. Second, we use a more sophisticated Rabinizer encoding, obtained by using the internal structure of the states produced by Rabinizer [34]. Here the states are of the form “formula, set of formulae, permutation, priority”. We propose a very simple, yet efficient procedure for encoding the state structure information into bitvectors. Although the resulting bitvectors are longer than in the naive encoding, some structural information of the game is preserved, which can be utilized by decision trees to provide a more succinct representation. BDDs perform a lot better on the naive encoding than on the Rabinizer encoding, since they are unable to exploit the preserved state information. As a result, we consider the naive encoding with BDDs, and both the naive and the Rabinizer encodings with decision trees.
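For illustration, the naive encoding amounts to the following one-liner in Python (ours; the Rabinizer encoding additionally serializes the structured state components into bits):

# Illustrative sketch (ours): the naive encoding turns the integer state
# id assigned in the HOA output into a fixed-width bit tuple.
def naive_encode(state_id, width):
    return tuple((state_id >> k) & 1 for k in reversed(range(width)))

print(naive_encode(37, 7))  # (0, 1, 0, 0, 1, 0, 1); 7 bits suffice for 100+ states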

We consider 976 examples where the goal of the player whose strategy is being represented is that the least priority occurring infinitely often is odd.

Figure 6: BDDs vs DTrees
Figure 7: DTrees improvement with Rabinizer enc.

Fig. 6 plots the size ratios when we compare BDDs and decision trees (note the logarithmic scale of the plot). For each case, we consider 1000 random variable orderings and choose the BDD that is minimal in size, and after that we construct a decision tree (without the chain heuristic of Section 4.3). For BDDs, we also consider all the ordering algorithms mentioned in the previous set of experiments; however, they provide no improvement compared to the random orderings.

In 925 out of 976 cases, the resulting decision tree is smaller than the corresponding BDD (in 3 cases they are of the same size, and in 48 cases the BDD is smaller). The arithmetic, geometric, and harmonic averages of the ratio of decision tree size to BDD size again favor the decision trees.

Fig. 7 demonstrates how the decision tree representation improves once the features of the game-structural information can be utilized. Each dot corresponds to the ratio of the decision tree size with the Rabinizer encoding to the size with the naive encoding. In 638 cases the Rabinizer encoding is superior, in 309 cases there is no difference, and in 29 cases the naive encoding is superior. In Appendix 0.E we present the further improvement of decision trees once we use the chain heuristic of Section 4.3.

6 Conclusion

In this work we propose decision trees for strategy representation in graph games. While decision trees have been used in probabilistic settings where errors are allowed and overfitting of data is avoided, in graph games strategies must be represented entirely and without errors. Hence the optimization techniques of existing decision-tree solvers do not apply, and we develop new techniques and present experimental results to demonstrate the effectiveness of our approach. Moreover, decision trees have several other advantages. First, the nodes of a decision tree represent predicates, and in richer domains, e.g., where variables represent integers, the internal nodes can represent predicates of the corresponding domain, e.g., a comparison between an integer variable and a constant. Hence richer domains can be represented directly as decision trees, without the conversion to bitvectors that BDDs require. We nevertheless restricted ourselves to the Boolean domain to show that decision trees improve over BDDs even in the domains that BDDs are designed for. Second, as illustrated in our examples, decision trees can often provide a similar and scalable solution when some parameters vary. This is quite attractive in reactive synthesis, where certain parameters vary yet affect the strategy only in a minimal way; our examples show that decision trees exploit this much better than BDDs, and can be useful in parametrized synthesis. Our work opens up many interesting directions of future work. For instance, richer versions of decision trees that are still well-readable could be used instead, such as decision trees with more complex expressions in leaves [42]. The use of decision trees in other applications related to reactive synthesis is another interesting direction, as is the application of the look-ahead technique in probabilistic settings.

Data Availability Statement and Acknowledgments.

This work has been partially supported by the Czech Science Foundation, Grant No. P202/12/G061, Vienna Science and Technology Fund (WWTF) Project ICT15-003, Austrian Science Fund (FWF) NFN Grant No. S11407-N23 (RiSE/SHiNE), ERC Starting grant (279307: Graph Games), DFG Grant No KR 4890/2-1 (SUV: Statistical Unbounded Verification), TUM IGSSE Grant 10.06 (PARSEC) and EU Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant No. 665385. We thank Fabio Somenzi for detailed information about variable reordering in BDDs. The source code and binary files used to obtain the results presented in this paper are available in the figshare repository: https://doi.org/10.6084/m9.figshare.5923915 [12].

References

  • [1] M. Abadi, L. Lamport, and P. Wolper. Realizable and unrealizable specifications of reactive systems. In ICALP’89, LNCS 372, pages 1–17. Springer, 1989.
  • [2] S. B. Akers. Binary decision diagrams. IEEE Trans. Comput., C-27(6):509–516, 1978.
  • [3] R. Alur, T. Henzinger, and O. Kupferman. Alternating-time temporal logic. Journal of the ACM, 49:672–713, 2002.
  • [4] T. Babiak, F. Blahoudek, A. Duret-Lutz, J. Klein, J. Kretínský, D. Müller, D. Parker, and J. Strejcek. The Hanoi omega-automata format. In CAV, Part I, pages 479–486, 2015.
  • [5] D. Berwanger, K. Chatterjee, M. D. Wulf, L. Doyen, and T. A. Henzinger. Strategy construction for parity games with imperfect information. Inf. Comput., 208(10):1206–1220, 2010.
  • [6] A. Blass, Y. Gurevich, L. Nachmanson, and M. Veanes. Play to test. In FATES’05, 2005.
  • [7] B. Bollig, M. Löbbing, and I. Wegener. Simulated annealing to improve variable orderings for OBDDs. 1995. Presented at the International Workshop on Logic Synthesis, Granlibakken, CA.
  • [8] C. Boutilier and R. Dearden. Approximate value trees in structured dynamic programming. In L. Saitta, editor, ICML, pages 54–62. Morgan Kaufmann, 1996.
  • [9] C. Boutilier, R. Dearden, and M. Goldszmidt. Exploiting structure in policy construction. In IJCAI, pages 1104–1113. Morgan Kaufmann, 1995.
  • [10] T. Brázdil, K. Chatterjee, M. Chmelík, A. Fellner, and J. Křetínský. Counterexample explanation by learning small strategies in Markov decision processes. In CAV, Part I, pages 158–177, 2015.
  • [11] T. Brázdil, K. Chatterjee, M. Chmelík, A. Gupta, and P. Novotný. Stochastic shortest path with energy constraints in POMDPs: (extended abstract). In AAMAS, pages 1465–1466, 2016.
  • [12] T. Brázdil, K. Chatterjee, J. Křetínský, and V. Toman. Artifact and instructions to generate experimental results for TACAS 2018 paper Strategy Representation by Decision Trees in Reactive Synthesis. Figshare, 2018. https://doi.org/10.6084/m9.figshare.5923915.
  • [13] R. Bryant. Graph-based algorithms for boolean function manipulation. IEEE Transactions on Computers, C-35(8):677–691, 1986.
  • [14] J. Büchi. On a decision method in restricted second-order arithmetic. In E. Nagel, P. Suppes, and A. Tarski, editors, Proceedings of the First International Congress on Logic, Methodology, and Philosophy of Science 1960, pages 1–11. Stanford University Press, 1962.
  • [15] J. Büchi and L. Landweber. Solving sequential conditions by finite-state strategies. Transactions of the AMS, 138:295–311, 1969.
  • [16] C. S. Calude, S. Jain, B. Khoussainov, W. Li, and F. Stephan. Deciding parity games in quasipolynomial time. In STOC, 2017 (to appear).
  • [17] A. Church. Logic, arithmetic, and automata. In Proceedings of the International Congress of Mathematicians, pages 23–35. Institut Mittag-Leffler, 1962.
  • [18] E. Clarke, T. Henzinger, and H. Veith, editors. Chapter: Games and Synthesis: Handbook of Model Checking. Springer, 2017 (to appear).
  • [19] L. de Alfaro and T. Henzinger. Interface automata. In FSE’01, pages 109–120. ACM, 2001.
  • [20] L. de Alfaro, T. Henzinger, and F. Mang. Detecting errors before reaching them. In CAV’00, pages 186–201, 2000.
  • [21] D. Dill. Trace Theory for Automatic Hierarchical Verification of Speed-independent Circuits. The MIT Press, 1989.
  • [22] R. Drechsler, B. Becker, and N. Gockel. Genetic algorithm for variable ordering of OBDDs. May 1995. Presented at the International Workshop on Logic Synthesis, Granlibakken, CA.
  • [23] A. Duret-Lutz, A. Lewkowicz, A. Fauchille, T. Michaud, E. Renault, and L. Xu. Spot 2.0 - A framework for LTL and ω-automata manipulation. In ATVA, pages 122–129, 2016.
  • [24] T. Elomaa and T. Malinen. On lookahead heuristics in decision tree learning. In ISMIS 2003, pages 445–453, 2003.
  • [25] E. Emerson and C. Jutla. Tree automata, mu-calculus and determinacy. In FOCS’91, pages 368–377. IEEE, 1991.
  • [26] O. Friedmann and M. Lange. Solving parity games in practice. In ATVA, pages 182–196, 2009.
  • [27] M. Fujita, Y. Matsunaga, and T. Kakuda. On variable ordering of binary decision diagrams for the application of multi-level logic synthesis. In EURO-DAC, pages 50–54, 1991.
  • [28] P. Garg, D. Neider, P. Madhusudan, and D. Roth. Learning invariants using decision trees and implication counterexamples. In POPL, 2016.
  • [29] Y. Gurevich and L. Harrington. Trees, automata, and games. In STOC’82, pages 60–65. ACM Press, 1982.
  • [30] M. A. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten. The WEKA data mining software: an update. SIGKDD Explorations, 11(1):10–18, 2009.
  • [31] T. Henzinger, O. Kupferman, and S. Rajamani. Fair simulation. I&C, 173:64–81, 2002.
  • [32] S. Jacobs. Extended AIGER format for synthesis. CoRR, abs/1405.5793, 2014.
  • [33] S. Jacobs, R. Bloem, R. Brenguier, R. Könighofer, G. A. Pérez, J. Raskin, L. Ryzhyk, O. Sankur, M. Seidl, L. Tentrup, and A. Walker. The second reactive synthesis competition (SYNTCOMP 2015). In SYNT, pages 27–57, 2015.
  • [34] Z. Komárková and J. Křetínský. Rabinizer 3: Safraless translation of LTL to small deterministic automata. In ATVA, pages 235–241, 2014.
  • [35] S. Krishna, C. Puhrsch, and T. Wies. Learning invariants using decision trees. CoRR, abs/1501.04725, 2015.
  • [36] S. Liu, A. Panangadan, C. S. Raghavendra, and A. Talukder. Compact representation of coordinated sampling policies for body sensor networks. In In Proceedings of Workshop on Advances in Communication and Networks (Smart Homes for Tele-Health), pages 6–10. IEEE, 2010.
  • [37] Z. Manna and A. Pnueli. The Temporal Logic of Reactive and Concurrent Systems: Specification. Springer-Verlag, 1992.
  • [38] R. McNaughton. Infinite games played on finite graphs. Annals of Pure and Applied Logic, 65:149–184, 1993.
  • [39] P. J. Meyer and M. Luttenberger. Solving mean-payoff games on the GPU. In ATVA, pages 262–267, 2016.
  • [40] T. M. Mitchell. Machine Learning. McGraw-Hill, Inc., New York, NY, USA, 1 edition, 1997.
  • [41] D. Neider. Small strategies for safety games. In ATVA, pages 306–320, 2011.
  • [42] D. Neider, S. Saha, and P. Madhusudan. Synthesizing piece-wise functions by learning classifiers. In TACAS, pages 186–203, 2016.
  • [43] D. Neider and U. Topcu. An automaton learning approach to solving safety games over infinite graphs. In TACAS, pages 204–221, 2016.
  • [44] A. Pnueli. The temporal logic of programs. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science, pages 46–57. IEEE Computer Society Press, 1977.
  • [45] A. Pnueli and R. Rosner. On the synthesis of a reactive module. In POPL’89, pages 179–190. ACM Press, 1989.
  • [46] J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81–106, 1986.
  • [47] M. Rabin. Automata on Infinite Objects and Church’s Problem. Number 13 in Conference Series in Mathematics. American Mathematical Society, 1969.
  • [48] P. Ramadge and W. Wonham. Supervisory control of a class of discrete-event processes. SIAM Journal of Control and Optimization, 25(1):206–230, 1987.
  • [49] R. Rudell. Dynamic variable ordering for ordered binary decision diagrams. In ICCAD, pages 42–47. IEEE Computer Society Press, 1993.
  • [50] S. Safra. On the complexity of ω-automata. In Proceedings of the 29th Annual Symposium on Foundations of Computer Science, pages 319–327. IEEE Computer Society Press, 1988.
  • [51] S. Schewe. Solving Parity Games in Big Steps. JCSS, 84:243–262, 2017.
  • [52] F. Somenzi. CUDD: CU decision diagram package release 3.0.0, 2015.
  • [53] W. Thomas. Languages, automata, and logic. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 3, Beyond Words, chapter 7, pages 389–455. Springer, 1997.
  • [54] M. Vardi and P. Wolper. An automata-theoretic approach to automatic program verification. In LICS, pages 322–331. IEEE Computer Society Press, 1986.
  • [55] R. Wimmer, B. Braitling, B. Becker, E. M. Hahn, P. Crouzen, H. Hermanns, A. Dhama, and O. Theel. Symblicit calculation of long-run averages for concurrent probabilistic systems. In QEST, pages 27–36, Washington, DC, USA, 2010. IEEE Computer Society.
  • [56] W. Zielonka. Infinite games on finitely coloured graphs with applications to automata on infinite trees. In Theoretical Computer Science, volume 200(1-2), pages 135–183, 1998.

Appendix

Appendix 0.A Artifact Description

We provide instructions to replicate the experimental results presented in this paper, using our artifact that is openly available at [12]. All the results can be obtained with the heap size limited to 8 GB.

Results for Scheduling of Washing Cycles (Section 5.1). Running this batch takes roughly 30 hours and generates 7.1GB of training data. Note that we did not include around 30 most resource-demanding benchmarks of this batch in the artifact. (i) in folder art, execute ./run.sh wTOTAL, (ii) observe the results at art/results/reports/reprWash{2,3,4,reach}.txt, (iii) in folder art/results, execute python plotsWash.py and observe the plots that correspond to Figure 3. Alternatively, to run a subset of this batch that takes only 30 minutes to run and generates only 265MB of training data, in (i) execute ./run.sh wPART. To additionally generate dot representations of DTs/BDDs, in (i) execute either ./run.sh wTOTALdot or ./run.sh wPARTdot.

Results for Scheduling of Washing Cycles BDD reordering (Appendix 0.C). Running this batch takes roughly 30 minutes. (i) make sure you have the training data obtained by running the batch above, (ii) in folder art/results, execute ./runBDDreorder.sh, (iii) observe the results at art/results/reports/BDDreorder.txt.

Results for Random LTL (Section 5.2). Running this batch takes roughly 2 hours and generates 84MB of training data. (i) in folder art, execute ./run.sh rTOTAL, (ii) observe the results at art/results/reports/reprRandomLTL{naive,encoded}.txt, (iii) in folder art/results, execute python plotsRandomLTL.py and observe the plots that correspond to Figure 6 and Figure 7.

Results for Bit Shifter (Appendix 0.D). Running this experiment batch takes roughly 5 minutes. Note that we did not include two benchmarks in the artifact since they take considerable execution time. (i) in folder art, execute ./run.sh aTOTAL, (ii) observe the results at art/results/reports/reprAiger.txt.

Appendix 0.B Correctness of Algorithm $k$-look-ahead ID3

Theorem 0.B.1

Let $G$ be an I/O game with binary variables, and let $\sigma$ be a memoryless strategy that defines a training set $\mathit{Train} = \mathit{Good} \uplus \mathit{Bad}$. Algorithm 2 with input $\mathit{Train}$ outputs a decision tree $\mathcal{T}$ such that $L(\mathcal{T}) \cap \mathit{Train} = \mathit{Good}$, which means that for all $\bar{x} \in \mathit{Train}$ we have that $\bar{x} \in L(\mathcal{T})$ iff $\bar{x} \in \mathit{Good}$. Thus $\mathcal{T}$ represents the strategy $\sigma$.

Proof

Recall that a strategy $\sigma$ defines:

  • $\mathit{Train} = \mathit{Good} \uplus \mathit{Bad}$ (where $\uplus$ denotes a disjoint union), with $\mathit{Good}$ containing the state-action pairs chosen by $\sigma$ and $\mathit{Bad}$ the remaining available state-action pairs.

Since we consider I/O games with binary variables, states and actions are labeled by bitvectors, so $\mathit{Train} \subseteq \{0,1\}^d$, where $d$ is the number of features for states plus the number of features for actions. Given a leaf $\ell$ of the constructed tree, let $\mathit{data}(\ell) \subseteq \mathit{Train}$ denote the training samples whose evaluation reaches $\ell$; we define $c(\ell)$ as $1$ if $\mathit{data}(\ell) \subseteq \mathit{Good}$, and as $0$ otherwise. A leaf $\ell$ is mixed if $\mathit{data}(\ell)$ has a non-empty intersection with both $\mathit{Good}$ and $\mathit{Bad}$. Finally, recall that in a decision tree $\mathcal{T}$, the labeling $f$ assigns to every inner node a feature from $\{1,\dots,d\}$, and $c$ assigns to every leaf a value $0$ or $1$; a bitvector $\bar{x}$ belongs to $L(\mathcal{T})$ iff the leaf it reaches carries value $1$.
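For concreteness, the following minimal Python sketch builds the sets $\mathit{Good}$ and $\mathit{Bad}$ from a memoryless strategy; all names are ours, and state_bits/action_bits stand for hypothetical feature encoders, so this is an illustration of the definitions rather than the paper's implementation.

```python
def training_data(states, actions, strategy, state_bits, action_bits):
    """Split state-action pairs into Good (pairs chosen by the strategy)
    and Bad (pairs available but not chosen). Each sample is one bitvector
    of d features: state features followed by action features."""
    good, bad = [], []
    for s in states:
        for a in actions[s]:  # actions available in state s
            sample = state_bits(s) + action_bits(a)
            (good if strategy[s] == a else bad).append(sample)
    return good, bad  # Train is the disjoint union of the two lists
```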

Partial correctness.

Consider the algorithm with input $\mathit{Train}$, and let $\mathcal{T}$ be the output decision tree. Consider an arbitrary $\bar{x} \in \mathit{Train}$, and note that it belongs to $\mathit{Good} \uplus \mathit{Bad}$. Consider the leaf corresponding to $\bar{x}$ in $\mathcal{T}$, i.e., the leaf $\ell$ with $\bar{x} \in \mathit{data}(\ell)$. The decision tree $\mathcal{T}$ has no mixed leaves, since otherwise the main while-loop (line 5), and hence the algorithm, would not have terminated. Therefore $\ell$ is not mixed, and thus either (i) $\mathit{data}(\ell) \subseteq \mathit{Good}$, implying $\bar{x} \in \mathit{Good}$, or (ii) $\mathit{data}(\ell) \subseteq \mathit{Bad}$, implying $\bar{x} \in \mathit{Bad}$. Additionally, $c(\ell) = 1$ iff $\mathit{data}(\ell) \subseteq \mathit{Good}$, which was set at line 16 during the iteration of the main while-loop that processed the parent of $\ell$. Since $\bar{x} \in \mathit{data}(\ell)$, we obtain $\bar{x} \in L(\mathcal{T})$ iff $c(\ell) = 1$. Finally, since $c(\ell) = 1$ iff $\mathit{data}(\ell) \subseteq \mathit{Good}$, and $\bar{x} \in \mathit{Good}$ iff $\mathit{data}(\ell) \subseteq \mathit{Good}$ (by (i) and (ii)), we obtain $\bar{x} \in L(\mathcal{T})$ iff $\bar{x} \in \mathit{Good}$.

Total correctness.

We maintain the invariant that the length of an arbitrary root-to-leaf path in the constructed tree is at most $d$, i.e., the number of features in $\mathit{Train}$. We prove this by showing that for every feature $i \in \{1,\dots,d\}$ and for every path in the tree, at most one inner node $n$ of the path has $f(n) = i$.

Consider a path with a mixed leaf $\ell$, and let $F$ be the set of features appearing in this path. All elements of $\mathit{data}(\ell)$ coincide in the values of the features of $F$. Additionally, from the definition of a mixed leaf it follows that $F$ is indeed a strict subset of $\{1,\dots,d\}$ and that there exists an element of $\mathit{data}(\ell) \cap \mathit{Good}$ and an element of $\mathit{data}(\ell) \cap \mathit{Bad}$; let $j$ be an arbitrary feature where these two elements differ (necessarily $j \notin F$, since the two elements coincide on $F$). Consider the smallest $k$ such that the maximum $k$-look-ahead information gain is positive (such $k$ exists and its value is bounded by $d$). For every $i \in F$, its $k$-look-ahead information gain is zero, since (i) its information gain is zero, and (ii) in case $k > 1$, its $k$-look-ahead information gain is bounded by the maximum $(k-1)$-look-ahead information gain, which is zero by the minimality of $k$. Therefore the feature maximizing the $k$-look-ahead information gain does not belong to $F$. When the heuristic at line 14 is computed, the value of the computation formula for feature $j$ is positive, whereas for every $i \in F$ we obtain undefined terms in the computation formula, so we explicitly define the value as $0$. Therefore the feature maximizing the value of the formula does not belong to $F$. Finally, at line 15 of the algorithm, we set $f(n)$ for the new inner node $n$ to the selected feature, and by the arguments presented above, $f(n) \notin F$.

Since every iteration of the main while-loop adds two vertices to the decision tree, and by the above invariant the depth of the tree (and hence its size) is bounded, the algorithm terminates. Together with partial correctness, this gives total correctness.∎
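To make the $k$-look-ahead information gain concrete, here is a minimal Python sketch of one natural reading of the definition; the helper names and the exact recursion are ours and are not taken from Algorithm 2 verbatim. Samples are pairs of a feature bitvector and a Boolean flag marking membership in $\mathit{Good}$.

```python
import math

def entropy(samples):
    """Shannon entropy of the Good/Bad split among (bitvector, is_good) samples."""
    if not samples:
        return 0.0
    p = sum(1 for _, is_good in samples if is_good) / len(samples)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def split(samples, feature):
    """Partition samples by the value of one binary feature."""
    left = [s for s in samples if s[0][feature] == 0]
    right = [s for s in samples if s[0][feature] == 1]
    return left, right

def gain(samples, feature):
    """Classical ID3 information gain of splitting on `feature`."""
    if not samples:
        return 0.0
    left, right = split(samples, feature)
    n = len(samples)
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(samples) - remainder

def lookahead_gain(samples, feature, k, features):
    """k-look-ahead gain: the immediate gain of `feature`, or, if that is zero
    and k > 1, the best (k-1)-look-ahead gain achievable in either child."""
    g = gain(samples, feature)
    if k == 1 or g > 0:
        return g
    left, right = split(samples, feature)
    return max((lookahead_gain(part, f, k - 1, features)
                for part in (left, right) if part
                for f in features), default=0.0)
```

Under this reading, a feature with zero immediate gain still scores positively whenever some cascade of at most $k$ splits separates $\mathit{Good}$ from $\mathit{Bad}$, which is exactly the property the termination argument above exploits.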

Appendix 0.C Details of Section 5.1: Scheduling of Washing Cycles

Execution time. The average time spent on constructing a decision tree for a given benchmark is a matter of seconds. For BDDs, the construction algorithm uses the optimized CUDD [52] library and constructs each BDD faster. Therefore, we consider a number of randomly chosen variable orderings and in the end retain the smallest BDD; this grants the BDD construction a much higher time budget per example. Finally, the average time spent on one example for decision trees with the chain heuristic (see Section 4.3) is shorter than with the heuristic turned off. This shows that the heuristic incurs minimal computational overhead while saving resources by performing what is essentially multiple splits at once.
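The "best of several random orderings" step can be sketched as follows; build_bdd is a hypothetical stand-in for constructing the strategy BDD under a fixed variable order (e.g., via bindings to CUDD), returning an object exposing its node count.

```python
import random

def smallest_random_order_bdd(variables, build_bdd, tries):
    """Build BDDs under `tries` random variable orderings and keep the smallest.
    `build_bdd(order)` is a hypothetical constructor; `.size()` its node count."""
    best = None
    for _ in range(tries):
        order = list(variables)
        random.shuffle(order)
        candidate = build_bdd(order)
        if best is None or candidate.size() < best.size():
            best = candidate
    return best
```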

Reordering algorithms. We compare decision trees (obtained without the chain heuristic of Section 4.3) against BDDs obtained as follows. For each example, we consider four BDDs obtained using the following reordering algorithms: Sift [49], Window4 [27], a simulated-annealing-based algorithm [7], and a genetic algorithm [22]. We then retain the smallest of the four BDDs.

In 332 out of 394 cases, none of the reordering algorithms provides any improvement over the default variable ordering, which suggests that the default natural ordering is already quite good for the BDD representation. Fig. 8 plots the results of the comparison; the red dots correspond to the cases where the reordering algorithms provide an improvement. The decision tree is smaller in 386 cases, and the BDD is smaller in 8 cases. We additionally compute the arithmetic, geometric, and harmonic averages of the ratio of decision tree size to BDD size.
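For reference, writing $r_i = |\mathrm{DT}_i| / |\mathrm{BDD}_i|$ for the size ratio on the $i$-th of the $n$ benchmarks, the three reported averages are the standard ones:

\[
  \text{arithmetic: } \frac{1}{n}\sum_{i=1}^{n} r_i, \qquad
  \text{geometric: } \Big(\prod_{i=1}^{n} r_i\Big)^{1/n}, \qquad
  \text{harmonic: } \frac{n}{\sum_{i=1}^{n} 1/r_i}.
\]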

Figure 8: Washing cycles – safety; BDDs with reordering algorithms

Parametric solutions – comparison with BDDs. Table 1 shows a snippet of the table containing detailed information about the experimental results for safety. The benchmark parameters, encoded in the name, specify the total number of tanks, the fill delay, the empty delay, and the number of tanks sharing a pipe, respectively. The snippet shows how the BDD sizes vary across instances for which the decision trees provide solutions that differ only minimally and generalize easily to all the instances.

Parametric solutions – more examples. For two tanks and empty delay of one, Section 5.1 provides one illustration of the generalizable solution for a fixed fill delay. Fig. 9 provides an illustration for a different fill delay, showing how some labels change when the parameter value changes (note that the structure of the decision tree remains the same). Finally, Fig. 10 presents the parametric solution for all values of the fill delay, which can be easily obtained by a syntactic analysis of the difference of any two instance solutions.

Name  …  BDD  DT  DT (chain)
wash_2_1_1_1 35 2 5 800 50 18 9
wash_2_2_1_1 53 2 5 800 54 18 9
wash_2_3_1_1 75 2 5 800 57 18 9
wash_2_4_1_1 101 2 5 800 64 18 9
wash_2_5_1_1 131 2 5 800 74 18 9
wash_2_6_1_1 165 2 5 800 77 18 9
wash_2_7_1_1 203 2 5 800 84 18 9
wash_2_8_1_1 245 2 5 800 85 18 9
wash_2_9_1_1 291 2 5 800 75 18 9
wash_2_2_2_1 118 2 5 2592 94 33 22
wash_2_3_2_1 150 2 5 2592 121 33 22
wash_2_4_2_1 186 2 5 2592 113 33 22
wash_2_5_2_1 226 2 5 2592 154 33 22
wash_2_6_2_1 270 2 5 2592 138 33 22
wash_2_7_2_1 318 2 5 2592 165 33 22
wash_2_8_2_1 370 2 5 2592 126 33 22
wash_2_9_2_1 426 2 5 2592 185 33 22
wash_3_1_1_1 153 3 7 16000 139 39 21
wash_3_2_1_1 281 3 7 16000 142 39 21
wash_3_3_1_1 469 3 7 16000 132 39 21
wash_3_4_1_1 729 3 7 16000 181 39 21
wash_3_5_1_1 1073 3 7 16000 185 39 21
wash_3_6_1_1 1513 3 7 16000 215 39 21
wash_3_7_1_1 2061 3 7 16000 217 39 21
wash_3_8_1_1 2729 3 7 16000 217 39 21
wash_3_9_1_1 3529 3 7 16000 253 39 21

Table 1: Washing cycles – safety; snippet of the results

We have observed the parametric-solution phenomenon in multiple cases. We present one more example, for the case of three tanks and empty delay of one, in Fig. 11 (for a fixed fill delay).

Appendix 0.D Details of Section 5.1: Bit Shifter

The specification for a bit shifter circuit is one of the toy example benchmarks for SYNTCOMP. The benchmark set is parametrized by the length of the input bit array.

Name  …  BDD  DT  DT (chain)
bs16n 305 4 1 64 39 11 3
bs32n 1121 5 1 128 72 13 3
bs64n 4289 6 1 256 137 15 3
bs128n 16769 7 1 512 266 17 3
bs256n 66305 8 1 1024 523 19 3
bs512n 263681 9 1 2048 1036 21 3
Table 2: Bit shifter

Table 2 summarizes the results. The decision trees are smaller in every case, and the difference grows with the benchmark parameter.

Figure 9: A solution for two tanks and empty delay of one; illustration for a fixed fill delay.
Figure 10: A parametric solution for two tanks and empty delay of one, with the fill delay as a parameter.
Figure 11: A solution for three tanks and empty delay of one; illustration for a fixed fill delay.

Moreover, unlike BDDs, the computed decision trees provide a scalable universal solution for the whole family of benchmarks. Fig. 13 shows the decision trees computed for the benchmarks with the lowest and the highest parameter value, respectively. The scalable universal solution becomes more apparent and easier to understand once the chain heuristic described in Section 4.3 is used to construct the decision trees. Fig. 14 shows the two decision trees with the chain heuristic enabled.

Appendix 0.E Details of Section 5.2: Random LTL

Fig. 12 plots the ratios of decision tree sizes with and without the chain heuristic described in Section 4.3. In 21 cases the two trees have the same size; in the remaining 955 cases the decision tree becomes smaller after applying the heuristic. All three types of the average ratio (arithmetic, geometric, and harmonic) are roughly the same.

Figure 12: Basic vs Chained decision trees
Figure 13: bs16n decision tree and bs512n decision tree
Figure 14: bs16n decision tree and bs512n decision tree when constructed with the chain heuristic

Appendix 0.F Details of Section 5.2: From LTL to I/O Games

Objective transformation. LTL formulae can be translated into non-deterministic Büchi automata [54], which can then be determinized into deterministic parity automata [50]. The synchronous product of the game graph and the deterministic parity automaton thus gives rise to a graph game with a parity objective.

While this translation is doubly exponential in the worst case, the blow-up rarely materializes in practice, and there are efficient tools [23, 34] that can translate reasonably sized formulae. Moreover, the number of priorities can often be kept small. For instance, the GR(1) fragment can be translated to parity automata with three priorities.

The first conclusion is that the resulting parity automaton corresponds to the arena of the game, and each state of the automaton can be encoded in binary, resulting in a sequence of state variables for an I/O game with binary variables (see the main body of the text). The second conclusion is that it suffices to consider positional strategies in these games, since parity games admit memoryless winning strategies.
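As an illustration of this encoding step, the following sketch (our own naming, not the paper's tool chain) assigns every automaton state a bitvector of $\lceil \log_2 n \rceil$ binary features, on which a decision tree can then branch bit by bit:

```python
from math import ceil, log2

def encode_states(states):
    """Map every state to a bitvector of ceil(log2(n)) binary features
    (most significant bit first)."""
    states = sorted(states)
    width = ceil(log2(max(len(states), 2)))  # at least one bit
    return {s: tuple((i >> b) & 1 for b in reversed(range(width)))
            for i, s in enumerate(states)}

# Example: encode_states(["q0", "q1", "q2"]) yields
# {"q0": (0, 0), "q1": (0, 1), "q2": (1, 0)}.
```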

It remains to show how to solve the parity games in our setting efficiently.

Strategy construction in parity games. There are several algorithms to solve parity games, and several solvers are available [26, 39]. Here we use the classical algorithm of Zielonka, tailored to three priorities, which covers fragments such as GR(1) in polynomial time. The algorithm is recursive; assume that 0 is the least priority in the game. Let $\Phi_1$ and $\Phi_2$ denote the parity objectives of player 1 and player 2, respectively. The algorithm repeats the following steps:

  1. The algorithm first computes the attractor $A$, i.e., the set of states from which player 1 can ensure reaching the set of states with priority 0.

  2. Consider the subgame without the set $A$ (which has one priority fewer). The subgame is solved recursively; let $W_2$ denote the winning region of player 2 in the subgame.

  3. The set of vertices of the original game from which player 2 can ensure reaching $W_2$ is removed as part of the winning region of player 2, and the algorithm then repeats the above steps on the remaining game graph.

The algorithm stops when $W_2$ is empty; the remaining states then form the winning region of player 1. The winning strategy in such games is obtained from winning strategies for reachability and safety objectives. An explicit construction of winning strategies in parity games from winning strategies for reachability and safety objectives is presented in [5] (even in the context of partial-information games).
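As a sketch of the recursive structure, the general min-priority recursion (of which the three-priority case used here is an instance) can be written as follows; this is our own illustration, not the implementation used in the experiments. A game consists of states with an owner, a priority, and non-empty successor lists:

```python
from dataclasses import dataclass

@dataclass
class Game:
    states: set     # state identifiers
    owner: dict     # state -> 1 or 2
    priority: dict  # state -> non-negative integer
    succ: dict      # state -> list of successors (assumed non-empty)

def subgame(g, removed):
    """Restrict the game to the states outside `removed`."""
    keep = g.states - removed
    return Game(keep,
                {s: g.owner[s] for s in keep},
                {s: g.priority[s] for s in keep},
                {s: [t for t in g.succ[s] if t in keep] for s in keep})

def attractor(g, player, target):
    """States from which `player` can force a visit to `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for s in g.states - attr:
            succ = g.succ[s]
            if (g.owner[s] == player and any(t in attr for t in succ)) or \
               (g.owner[s] != player and all(t in attr for t in succ)):
                attr.add(s)
                changed = True
    return attr

def zielonka(g):
    """Winning regions (W1, W2): player 1 wins a play iff the least
    priority occurring infinitely often along it is even."""
    if not g.states:
        return set(), set()
    p = min(g.priority[s] for s in g.states)
    me = 1 if p % 2 == 0 else 2          # player favoured by priority p
    opp = 3 - me
    target = {s for s in g.states if g.priority[s] == p}
    a = attractor(g, me, target)
    sub = zielonka(subgame(g, a))
    w_opp = sub[opp - 1]
    if not w_opp:                        # `me` wins everywhere
        return (g.states, set()) if me == 1 else (set(), g.states)
    b = attractor(g, opp, w_opp)         # `opp` keeps this part for sure
    w1, w2 = zielonka(subgame(g, b))
    return (w1 | b, w2) if opp == 1 else (w1, w2 | b)
```

The first attractor computation matches step 1 above, the recursive call on the subgame matches step 2, and the removal of the opponent's attractor matches step 3.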