Finite-state Strategies in Delay Games

09/07/2017 ∙ by Martin Zimmermann, Universität Saarland

What is a finite-state strategy in a delay game? We answer this surprisingly non-trivial question and present a very general framework for computing such strategies: they exist for all winning conditions that are recognized by automata with acceptance conditions that satisfy a certain aggregation property. Our framework also yields upper bounds on the complexity of determining the winner of such delay games and upper bounds on the necessary lookahead to win the game. In particular, we cover all previous results of that kind as special cases of our uniform approach.


1 Introduction

What is a finite-state strategy in a delay game? The answer to this question is surprisingly non-trivial due to the nature of delay games, in which one player is granted a lookahead on her opponent's moves. This gives her an advantage when it comes to winning games, i.e., there are games that can only be won with lookahead, but not without. A simple example is a game where one has to predict the third move of the opponent with one's first move. This is impossible when moving in alternation, but possible if one has access to the opponent's first three moves before making the first move. More intriguingly, lookahead also allows Player O to improve the quality of her winning strategies in games with quantitative winning conditions, i.e., there is a tradeoff between quality and amount of lookahead [Zimmermann17].

However, managing (and, if necessary, storing) the additional information gained by the lookahead can be a burden. Consider another game where one just has to copy the opponent’s moves. This is obviously possible with or without lookahead (assuming the opponent moves first). In particular, without lookahead one just has to remember the last move of the opponent and copy it. However, when granted lookahead, one has to store the last moves of the opponent in a queue to implement the copying properly. This example shows that lookahead is not necessarily advantageous when it comes to minimizing the memory requirements of a strategy.

In this work, we are concerned with Gale-Stewart games [GaleStewart53], abstract games without an underlying arena. (The models of Gale-Stewart games and arena-based games are interreducible, but delay games are naturally presented as a generalization of Gale-Stewart games, which is why we prefer this model here.) In such a game, both players produce an infinite sequence of letters and the winner is determined by the combination of these sequences. If this combination is in the winning condition, a set of such combinations, then the second player wins; otherwise, the first one wins. In a classical Gale-Stewart game, both players move in alternation, while in a delay game, the second player skips moves to obtain a lookahead on the opponent's moves. Which moves are skipped is part of the rules of the game and known to both players.

Delay games have recently received a considerable amount of attention after being introduced by Hosch and Landweber [HoschLandweber72] only three years after the seminal Büchi-Landweber theorem [BuechiLandweber69]. Büchi and Landweber had shown how to solve infinite two-player games with ω-regular winning conditions. Forty years later, delay games were revisited by Holtmann, Kaiser, and Thomas [HoltmannKaiserThomas12], who initiated the first comprehensive study, which settled many basic problems like the exact complexity of solving ω-regular delay games and the amount of lookahead necessary to win such games [KleinZimmermann16]. Furthermore, Martin's seminal Borel determinacy theorem [Martin75] for Gale-Stewart games has been lifted to delay games [KleinZimmermann15] and winning conditions beyond the ω-regular ones have been investigated [FridmanLoedingZimmermann11, KleinZimmermann16a, Zimmermann16, Zimmermann17]. Finally, the uniformization problem for relations over infinite words boils down to solving delay games: a relation is uniformized by a continuous function (in the Cantor topology) if, and only if, the delaying player wins the delay game whose winning condition is induced by the relation. We refer to [HoltmannKaiserThomas12] for details.

What makes finite-state strategies in infinite games particularly useful and desirable is that a general strategy is an infinite object, as it maps finite play prefixes to next moves. On the other hand, a finite-state strategy is implemented by a transducer, an automaton with output, and therefore finitely represented: the automaton reads a play prefix and outputs the next move to be taken. Thus, the transducer computes a finite abstraction of the play’s history using its state space as memory and determines the next move based on the current memory state.

In Gale-Stewart games, finite-state strategies suffice for all ω-regular games [BuechiLandweber69] and even for deterministic ω-contextfree games, if one allows pushdown transducers [Walukiewicz01]. For Gale-Stewart games (and arena-based games), the notion is well-established, and one of the most basic questions about a class of winning conditions is that about the existence and size of finite-state winning strategies for such games.

While foundational questions for delay games have been answered and many results have been lifted from Gale-Stewart games to those with delay, the issue of computing tractable and implementable strategies has not been addressed before. However, this problem is of great importance, as the existence and computability of finite-state strategies is a major reason for the successful application of infinite games to diverse problems like reactive synthesis, model-checking of fixed-point logics, and automata theory.

In previous work, restricted classes of strategies for delay games have been considered [KleinZimmermann15]. However, those restrictions are concerned with the amount of information about the lookahead’s evolution a strategy has access to, and do not restrict the size of the strategies: in general, they are still infinite objects. On the other hand, it is known that bounded lookahead suffices for many winning conditions of importance, e.g., the ω-regular ones [KleinZimmermann16], those recognized by parity and Streett automata with costs [Zimmermann17], and those definable in (parameterized) linear temporal logics [KleinZimmermann16a]. Furthermore, for all those winning conditions, the winner of a delay game can be determined effectively. In fact, all these proofs rely on the same basic construction that was already present in the work of Holtmann, Kaiser, and Thomas [HoltmannKaiserThomas12], i.e., a reduction to a Gale-Stewart game using equivalence relations that capture the behavior of the automaton recognizing the winning condition. These reductions and the fact that finite-state strategies suffice for the games obtained in the reductions imply that (some kind of) finite-state strategies exist.

Indeed, in his master’s thesis [Salzmann15], Salzmann recently introduced the first notion of finite-state strategies in delay games and, using these reductions, presented an algorithm computing them for several types of acceptance conditions, e.g., parity conditions and related ω-regular ones. However, the exact nature of finite-state strategies in delay games is not as canonical as for Gale-Stewart games. We discuss this issue in depth in Sections 3 and 5 by proposing two notions of finite-state strategies: a delay-oblivious one, which yields strategies whose size grows with the lookahead, and a delay-aware one, which follows naturally from the reductions to Gale-Stewart games mentioned earlier. In particular, the number of states of delay-aware strategies is independent of the size of the lookahead, but often larger in the size of the automaton recognizing the winning condition. However, this is offset by the fact that strategies of the second type are simpler to compute than the delay-oblivious ones and have fewer states overall if the lookahead is large. In comparison to Salzmann’s notion, where strategies syntactically depend on a given automaton representing the winning condition, our strategies are independent of the representation of the winning condition and therefore more general. Also, our framework is more abstract and therefore applicable to a wider range of acceptance conditions (e.g., quantitative ones) and yields in general smaller strategies, but there are of course some similarities, which we discuss in detail.

To present these notions, we first introduce some definitions in Section 2, e.g., delay games and finite-state strategies for Gale-Stewart games. After introducing the two notions of finite-state strategies for delay games in Section 3, we show how to compute such strategies in Section 4. To this end, we present a generic account of the reduction from delay games to Gale-Stewart games which subsumes, to the best of our knowledge, all decidability results presented in the literature. Furthermore, we show how to obtain the desired strategies from our construction. Then, in Section 5, we compare the two different definitions of finite-state strategies for delay games proposed here and discuss their advantages and disadvantages. Also, we compare our approach to that of Salzmann. We conclude by mentioning some directions for further research in Section 6.

Proofs and constructions omitted due to space restrictions can be found in the full version [Zimmermann17c].

Related Work

As mentioned earlier, the existence of finite-state strategies is the technical core of many applications of infinite games, e.g., in reactive synthesis one synthesizes a correct-by-construction system from a given specification by casting the problem as an infinite game between a player representing the system and one representing the antagonistic environment. It is a winning strategy for the system player that yields the desired implementation, which is finite if the winning strategy is finite-state. Similarly, Gurevich and Harrington’s game-based proof of Rabin’s decidability theorem for monadic second-order logic over infinite binary trees [Rabin1969] relies on the existence of finite-state strategies. (The proof is actually based on positional strategies, a further restriction of finite-state strategies for arena-based games, because they are simpler to handle. Nevertheless, the same proof also works for finite-state strategies.)

These facts explain the need for studying the existence and properties of finite-state strategies in infinite games [Khoussainov03, Rabinovich09, LeRouxPauly16, Thomas94]. In particular, the seminal work by Dziembowski, Jurdziński, and Walukiewicz [DziembowskiJW97] addressed the problem of determining upper and lower bounds on the size of finite-state winning strategies in games with Muller winning conditions. Nowadays, one of the most basic questions about a given winning condition is that about such upper and lower bounds. For most conditions in the literature, tight bounds are known, see, e.g., [ChatterjeeHenzingerHorn11, Horn05, WallmeierHuettenThomas03]. But there are also surprising exceptions to that rule, e.g., generalized reachability games [FijalkowH13]. More recently, Colcombet, Fijalkow, and Horn presented a very general technique that yields tight upper and lower bounds on memory requirements in safety games, which even hold for games in infinite arenas, provided their degree is finite [ColcombetFH14].

2 Preliminaries

We denote the non-negative integers by ℕ. Given two ω-words α over Σ_I and β over Σ_O, we write (α, β) for the ω-word over Σ_I × Σ_O whose i-th letter is the pair consisting of the i-th letters of α and β. Similarly, we define (x, y) for finite words x and y with |x| = |y|.

ω-automata

A (deterministic and complete) ω-automaton is a tuple 𝒜 = (Q, Σ, q_I, δ, Acc) where Q is a finite set of states, Σ is an alphabet, q_I ∈ Q is the initial state, δ: Q × Σ → Q is the transition function, and Acc is the set of accepting runs (here, and whenever convenient, we treat δ as a relation δ ⊆ Q × Σ × Q). A finite run of 𝒜 is a sequence of transitions (q_0, a_0, q_1)(q_1, a_1, q_2) ··· (q_{n−1}, a_{n−1}, q_n). As usual, we say that this run starts in q_0, ends in q_n, and processes a_0 ··· a_{n−1}. Infinite runs on infinite words are defined analogously. If we speak of the run of 𝒜 on a word, then we mean the unique run of 𝒜 starting in q_I processing that word. The language L(𝒜) of 𝒜 contains all those ω-words whose run of 𝒜 is accepting. The size of 𝒜 is |Q|.

This definition is very broad, which allows us to formulate our theorems as generally as possible. In examples, we consider parity and Muller automata, whose sets of accepting runs are finitely represented: An ω-automaton 𝒜 is a parity automaton if there is a coloring Ω: Q → ℕ such that a run is accepting if, and only if, the maximal color occurring infinitely often along it is even. To simplify our notation, we extend Ω to finite sequences of states by taking the maximal color occurring in the sequence. Furthermore, 𝒜 is a Muller automaton if there is a family ℱ of sets of states such that a run ρ is accepting if, and only if, Inf(ρ) ∈ ℱ, where Inf(ρ) is the set of states visited infinitely often by ρ.
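
To make these acceptance conditions concrete, here is a minimal Python sketch (our own illustration, not taken from the paper) of a deterministic parity automaton together with an acceptance test for ultimately periodic words of the form u·v^ω; all names are illustrative.

```python
class ParityAutomaton:
    def __init__(self, states, init, delta, color):
        self.states = states   # finite set of states Q
        self.init = init       # initial state q_I
        self.delta = delta     # transition function: (state, letter) -> state
        self.color = color     # coloring: state -> non-negative integer

    def run_states(self, word, start):
        """States visited while processing `word` from `start` (excluding `start`)."""
        q, visited = start, []
        for letter in word:
            q = self.delta[(q, letter)]
            visited.append(q)
        return q, visited

    def accepts_lasso(self, u, v):
        """Acceptance of u * v^omega (v non-empty): the maximal color seen infinitely often is even."""
        q, _ = self.run_states(u, self.init)
        starts, visits = [], []          # state before / states during each v-iteration
        while q not in starts:
            starts.append(q)
            q, visited = self.run_states(v, q)
            visits.append(visited)
        # From the first repetition of q onward, the same iterations recur forever.
        recurring = [s for chunk in visits[starts.index(q):] for s in chunk]
        return max(self.color[s] for s in recurring) % 2 == 0

# Example: 'a' leads to q0 (color 2), 'b' leads to q1 (color 1), so the automaton
# accepts exactly the words with infinitely many 'a'.
A = ParityAutomaton(
    states={'q0', 'q1'}, init='q0',
    delta={('q0', 'a'): 'q0', ('q0', 'b'): 'q1',
           ('q1', 'a'): 'q0', ('q1', 'b'): 'q1'},
    color={'q0': 2, 'q1': 1})
assert A.accepts_lasso('bb', 'ab')       # 'a' infinitely often: max recurring color 2
assert not A.accepts_lasso('a', 'b')     # eventually only 'b': max recurring color 1
```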

Delay Games

A delay function is a mapping f: ℕ → ℕ \ {0}, which is said to be constant if f(i) = 1 for all i ≥ 1. A delay game Γ_f(L) consists of a delay function f and a winning condition L ⊆ (Σ_I × Σ_O)^ω for some alphabets Σ_I and Σ_O. Such a game is played in rounds i = 0, 1, 2, … as follows: in round i, first Player I picks a word u_i ∈ Σ_I^{f(i)}, then Player O picks a letter v_i ∈ Σ_O. Player O wins a play (u_0, v_0)(u_1, v_1)(u_2, v_2)… if the outcome (u_0 u_1 u_2 …, v_0 v_1 v_2 …) is in L; otherwise, Player I wins.

A strategy for Player I in Γ_f(L) is a mapping τ_I: Σ_O^* → Σ_I^* satisfying |τ_I(w)| = f(|w|), while a strategy for Player O is a mapping τ_O: Σ_I^* → Σ_O. A play (u_0, v_0)(u_1, v_1)(u_2, v_2)… is consistent with τ_I if u_i = τ_I(v_0 ··· v_{i−1}) for all i, and it is consistent with τ_O if v_i = τ_O(u_0 ··· u_i) for all i. A strategy for Player p ∈ {I, O} is winning if every play that is consistent with the strategy is won by Player p.

An important special case is that of delay-free games, i.e., those with respect to the delay function f mapping every i to 1. In this case, we drop the subscript f and write Γ(L) for the game with winning condition L. Such games are typically called Gale-Stewart games [GaleStewart53].

Finite-state Strategies in Gale-Stewart Games

A strategy for Player O in a Gale-Stewart game Γ(L) is still a mapping τ_O: Σ_I^* → Σ_O. Such a strategy is said to be finite-state if there is a deterministic finite transducer 𝒯 that implements τ_O in the following sense: 𝒯 is a tuple (Q, Σ_I, q_I, δ, Σ_O, λ) where Q is a finite set of states, Σ_I is the input alphabet, q_I ∈ Q is the initial state, δ: Q × Σ_I → Q is the deterministic transition function, Σ_O is the output alphabet, and λ: Q × Σ_I → Σ_O is the output function. Let δ*(q, w) denote the unique state that is reached by 𝒯 when processing w from q. Then, the strategy τ_𝒯 implemented by 𝒯 is defined as τ_𝒯(wa) = λ(δ*(q_I, w), a). We say that a strategy is finite-state if it is implementable by some transducer. Slightly abusively, we identify finite-state strategies with transducers implementing them and talk about finite-state strategies with some number of states. Thus, we focus on the state complexity (i.e., the number of memory states necessary to implement a strategy) and ignore the other components of a transducer (which are anyway of size polynomial in |Q|, if we assume Σ_I and Σ_O to be fixed).

3 What is a Finite-state Strategy in a Delay Game?

Before we answer this question, we first ask what properties a finite-state strategy should have, i.e., what makes finite-state strategies in Gale-Stewart games useful and desirable? A strategy is in general an infinite object and does not necessarily have a finite representation. Furthermore, to execute such a strategy, one needs to store the whole sequence of moves made by Player I thus far: unbounded memory is needed to execute it.

On the other hand, a finite-state strategy is finitely described by a transducer 𝒯 implementing it. To execute it, one only needs to store a single state of 𝒯 and to have access to the transition function δ and the output function λ of 𝒯. Assume the current state is q at the beginning of some round i (initialized with q_I before round 0). Then, Player I makes his move by picking some a ∈ Σ_I, which is processed by updating the memory state to δ(q, a). Then, λ prescribes picking λ(q, a) and round i is completed. Thus, there are two aspects that make finite-state strategies desirable: (1) the next move depends only on a finite amount of information about the history of the play, i.e., a state of the automaton, which is (2) easily updated. In particular, the strategy is completely specified by the transition function and the output function.
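
The following Python sketch illustrates this execution model; the interfaces and names are our own illustrative choices, not the paper's notation. The transducer's memory is a single state, updated each round, and the output function prescribes Player O's reply.

```python
import random

# A minimal sketch of executing a finite-state strategy for Player O in a
# Gale-Stewart game: only the current memory state is kept between rounds.
def play_gale_stewart(delta, lam, q_init, player_I_moves, rounds=10):
    """delta: (state, input letter) -> state; lam: (state, input letter) -> output letter.
    player_I_moves: callable producing Player I's next letter given the play so far."""
    q, history = q_init, []
    for _ in range(rounds):
        a = player_I_moves(history)   # Player I moves first in each round
        b = lam[(q, a)]               # Player O's reply from the memory state and the letter
        q = delta[(q, a)]             # update the (finite) memory
        history.append((a, b))
    return history

# The copy condition without lookahead: Player O repeats Player I's current letter.
# A single memory state suffices, since the reply depends only on the letter just read.
delta = {(0, 'a'): 0, (0, 'b'): 0}
lam   = {(0, 'a'): 'a', (0, 'b'): 'b'}
outcome = play_gale_stewart(delta, lam, 0, lambda hist: random.choice('ab'))
assert all(a == b for a, b in outcome)
```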

Further, there is a generic framework to compute such strategies by reducing them to arena-based games (see, e.g., [GraedelThomasWilke02] for an introduction to such games). As an example, consider a game Γ(L(𝒜)) where 𝒜 is a parity automaton with set Q of states and transition function δ. We describe the construction of an arena-based parity game contested between Player I and Player O whose solution allows us to compute the desired strategies (formal details are presented in the full version [Zimmermann17c]). The positions of Player I are transitions of 𝒜 while those of Player O are pairs (q, a) where q ∈ Q and where a ∈ Σ_I is an input letter. From a vertex (p, (a, b), q), Player I can move to every vertex (q, a′) for a′ ∈ Σ_I, from which Player O can move to every vertex (q, (a′, b′), δ(q, (a′, b′))) for b′ ∈ Σ_O. Finally, Player O wins a play if the run constructed during the infinite play is accepting. It is easy to see that the resulting game is a parity game with 𝒪(|Q| · |Σ_I| · |Σ_O|) vertices and has the same winner as Γ(L(𝒜)). The winner of the arena-based game has a positional winning strategy [EmersonJutla91, Mostowski91] (a strategy in an arena-based game is positional if its output only depends on the last vertex of the play’s history, not on the full history; see, e.g., [GraedelThomasWilke02]), which can be computed in quasipolynomial time [CJKLS16, FJSSW17, JL17]. Such a positional winning strategy can easily be turned into a finite-state winning strategy with |Q| states for Player O in the game Γ(L(𝒜)), which is implemented by an automaton with state set Q. This reduction can be generalized to arbitrary classes of Gale-Stewart games whose winning condition is recognized by an ω-automaton with set Q of states: if Player O has a finite-state winning strategy with n states in the arena-based game obtained by the construction described above, then she has a finite-state winning strategy with n · |Q| states for the original Gale-Stewart game. Such a strategy is obtained by solving an arena-based game with 𝒪(|Q| · |Σ_I| · |Σ_O|) vertices.
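
The sketch below assembles such an arena for a given parity automaton over Σ_I × Σ_O, following the construction just described; the concrete vertex encoding and the coloring are our own illustrative choices, not the paper's formal definition.

```python
from itertools import product

def build_arena(states, delta, color, sigma_I, sigma_O, q_init):
    """delta: (state, (a, b)) -> state; color: state -> non-negative integer."""
    # Player O owns "pending" vertices (q, a): state q with input letter a awaiting an answer.
    # Player I owns "transition" vertices (p, (a, b), q): the transition just taken.
    vertices_O = set(product(states, sigma_I))
    vertices_I = {(p, (a, b), delta[(p, (a, b))])
                  for p, a, b in product(states, sigma_I, sigma_O)}
    edges = set()
    for (p, ab, q) in vertices_I:                  # Player I proposes the next input letter
        edges |= {((p, ab, q), (q, a2)) for a2 in sigma_I}
    for (q, a) in vertices_O:                      # Player O answers with an output letter
        edges |= {((q, a), (q, (a, b), delta[(q, (a, b))])) for b in sigma_O}
    # Parity colors: a vertex inherits the color of the automaton state it tracks.
    colors = {v: color[v[2]] for v in vertices_I}
    colors.update({v: color[v[0]] for v in vertices_O})
    # The play starts with Player I choosing one of these opening vertices.
    opening = {(q_init, a) for a in sigma_I}
    return vertices_I, vertices_O, edges, colors, opening

# Tiny usage: a one-state automaton over {'a','b'} x {'0','1'} yields
# 1*2 = 2 Player O vertices and 1*2*2 = 4 Player I vertices.
delta = {('q', (a, b)): 'q' for a in 'ab' for b in '01'}
v_I, v_O, edges, colors, opening = build_arena({'q'}, delta, {'q': 0}, 'ab', '01', 'q')
assert len(v_O) == 2 and len(v_I) == 4
```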

So, what is a finite-state strategy in a delay game? In the following, we discuss this question for the case of delay games with respect to constant delay functions, which is the most important case. In particular, constant lookahead suffices for all ω-regular winning conditions [KleinZimmermann16], i.e., Player O wins with respect to an arbitrary delay function if, and only if, she wins with respect to a constant one. Similarly, constant lookahead suffices for many quantitative conditions like (parameterized) temporal logics [KleinZimmermann16a] and parity conditions with costs [Zimmermann17]. For winning conditions given by parity automata, there is an exponential upper bound on the necessary constant lookahead. On the other hand, there are exponential lower bounds already for winning conditions specified by deterministic automata with reachability or safety acceptance (which are subsumed by parity acceptance).

3.1 Delay-oblivious Finite-state Strategies for Delay Games

Technically, a strategy for Player O in a delay game Γ_f(L) is still a mapping τ_O: Σ_I^* → Σ_O. Hence, the definition of finite-state strategies via transducers as given above for Gale-Stewart games is also applicable to delay games. As a (cautionary) example, consider a delay game whose winning condition requires Player O to copy Player I’s moves, which she can do with respect to every delay function: Player O wins this game for every f. However, a finite-state strategy has to remember the whole lookahead, i.e., those moves by which Player I is ahead of Player O, in order to copy them. Thus, an automaton implementing a winning strategy for Player O needs at least |Σ_I|^{f(0)−1} states, if f is a constant delay function. Hence, the memory requirements grow with the size of the lookahead granted to Player O, i.e., lookahead is a burden, not an advantage. She even needs unbounded memory in the case of unbounded lookahead.
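
The following sketch makes the queue explicit for the copy example just described (the interface is an illustrative choice): with a constant lookahead of k letters, the delay-oblivious strategy's memory is precisely the buffer of pending letters, of which there are |Σ_I|^k possible contents.

```python
from collections import deque

def copy_with_lookahead(input_word, k):
    """Round 0: Player I grants k+1 letters, Player O answers one; afterwards one
    letter per round each.  Player O copies Player I's moves with a FIFO buffer of size k."""
    buffer, output = deque(), []
    stream = iter(input_word)
    for _ in range(k + 1):                # Player I's opening move (including the lookahead)
        buffer.append(next(stream))
    while True:
        output.append(buffer.popleft())   # Player O copies the oldest pending letter
        try:
            buffer.append(next(stream))   # Player I's next letter
        except StopIteration:
            return ''.join(output)

assert copy_with_lookahead('abbaba', k=2) == 'abba'   # the last k letters stay pending
```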

On the other hand, an advantage of this “delay-oblivious” definition is that finite-state strategies can be obtained by a trivial extension of the reduction presented for Gale-Stewart games above: the vertices of the arena now additionally store a queue of length k over Σ_I, which contains the lookahead granted to Player O; Player I’s moves append letters to the queue while Player O’s moves remove the letter that the automaton processes next. Coming back to the parity example, this approach yields a finite-state strategy with 𝒪(|Q| · |Σ_I|^k) states. To obtain such a strategy, one has to solve a parity game with 𝒪(|Q| · |Σ_I|^{k+1} · |Σ_O|) vertices, which is of doubly-exponential size in |𝒜| if k is close to the (tight) exponential upper bound on the necessary lookahead. This can be done in doubly-exponential time, as the game still has the same number of colors as the automaton 𝒜. Again, this reduction can be generalized to arbitrary classes of delay games with constant delay whose winning conditions are recognized by an ω-automaton with set Q of states: a finite-state winning strategy for the arena-based game obtained by the construction yields a finite-state winning strategy for the delay game with constant lookahead of size k whose number of states is linear in |Q| and in the memory size of the arena-based strategy, but exponential in k. Also, to obtain the strategy for the delay game, one has to solve an arena-based game whose number of vertices is exponential in k.

3.2 Block Games

We show that one can do better by decoupling the history tracking and the handling of the lookahead, i.e., by using delay-aware finite-state strategies. In the delay-oblivious definition, we hardcode a queue into the arena-based game, which results in a blowup of the arena and therefore also in a blowup in the solution complexity and in the number of memory states for the arena-based game, which is turned into one for the delay game. To overcome this, we introduce a slight variation of delay games with respect to constant delay functions, so-called block games (Holtmann, Kaiser, and Thomas already introduced a notion of block game in connection with delay games [HoltmannKaiserThomas12]; however, their notion differs from ours in several aspects, most importantly in that the length of their blocks may vary within some bounds specified by the delay function, while our block length is fixed), present a notion of finite-state strategy in block games, and show how to transfer strategies between delay games and block games. Then, we show how to solve block games and how to obtain finite-state strategies for them.

The motivation for introducing block games is to eliminate the queue containing the letters Player I is ahead of Player O, which is cumbersome to maintain and causes the blowup in the case of the copy game described above. Instead, in a block game, both players pick blocks of letters of a fixed length, with Player I being one block ahead to account for the delay, i.e., Player I has to pick two blocks in round 0 and then one in every subsequent round, while Player O picks one block in every round. This variant of delay games lies implicitly or explicitly at the foundation of all arguments establishing upper bounds on the necessary lookahead and at the foundation of all algorithms solving delay games [HoltmannKaiserThomas12, KleinZimmermann16, KleinZimmermann16a, Zimmermann16, Zimmermann17]. Furthermore, we show how to transform a (winning) strategy for a delay game into a (winning) strategy for a block game and vice versa, i.e., Player O wins the delay game if, and only if, she wins the corresponding block game. (Due to their prevalence and importance for solving delay games, one could even argue that the notion of block games is more suitable to model delay in infinite games.)

Formally, the block game with block length k > 0 and winning condition L ⊆ (Σ_I × Σ_O)^ω is played in rounds i = 0, 1, 2, … as follows: in round 0, Player I picks two blocks u_0, u_1 ∈ Σ_I^k, then Player O picks a block v_0 ∈ Σ_O^k. In round i > 0, Player I picks a block u_{i+1} ∈ Σ_I^k, then Player O picks a block v_i ∈ Σ_O^k. Player O wins the resulting play if the outcome (u_0 u_1 u_2 …, v_0 v_1 v_2 …) is in L.

A strategy for Player I in the block game is a map τ_I from sequences of blocks over Σ_O to sequences of blocks over Σ_I such that τ_I(ε) consists of two blocks and τ_I(w) consists of a single block for non-empty w. A strategy for Player O is a map τ_O from non-empty sequences of blocks over Σ_I to blocks over Σ_O. A play as above is consistent with τ_I if u_0 u_1 = τ_I(ε) and u_{i+1} = τ_I(v_0 ··· v_{i−1}) for every i > 0; it is consistent with τ_O if v_i = τ_O(u_0 ··· u_{i+1}) for every i. Winning strategies and winning a block game are defined as for delay games.
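
A minimal simulation of these rules, with illustrative strategy interfaces of our own choosing, looks as follows; it also shows that copying is trivial in the block game, since Player O always answers with a block she has already seen in full.

```python
# A sketch of the block-game rules defined above: Player I opens with two blocks and
# then adds one block per round, Player O answers with one block per round.
def play_block_game(strategy_I, strategy_O, rounds=4):
    """strategy_I(o_blocks) returns Player I's next block(s): two blocks in round 0,
    one block afterwards; strategy_O(i_blocks) returns Player O's next block."""
    i_blocks, o_blocks = [], []
    i_blocks += strategy_I(o_blocks)             # round 0: two opening blocks
    for _ in range(rounds):
        o_blocks.append(strategy_O(i_blocks))    # Player O answers one block
        i_blocks += strategy_I(o_blocks)         # Player I adds one further block
    return ''.join(i_blocks), ''.join(o_blocks)

# Copying: Player O's j-th block is the j-th input block, which she has already seen
# in full because Player I is one block ahead.
k = 3
strat_I = lambda o_blocks: ['abc', 'bca'] if not o_blocks else ['cab']
strat_O = lambda i_blocks: i_blocks[len(i_blocks) - 2]
alpha, beta = play_block_game(strat_I, strat_O)
assert alpha.startswith(beta) and len(alpha) == len(beta) + 2 * k
```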

In the following, we call strategies for block games delay-aware and strategies for delay games delay-oblivious. The next lemma relates delay games with constant lookahead and block games: Player O wins a delay game with winning condition L (with respect to some delay function) if, and only if, she wins a block game with winning condition L (for some block length).

Lemma 1.

Let L ⊆ (Σ_I × Σ_O)^ω.

  1. If Player O wins Γ_f(L) for some constant delay function f, then she also wins the block game with block length f(0) and winning condition L.

  2. If Player O wins the block game with block length k and winning condition L, then she also wins Γ_f(L) for the constant delay function f with f(0) = 2k.

Proof.

1.) Let τ be a winning strategy for Player O in Γ_f(L) and fix k = f(0). Now, define a strategy τ′ for Player O in the block game with block length k that simulates τ: to determine her block in a round, Player O determines the letters prescribed by τ one at a time, which is possible as she has already seen at least as many input letters as at the corresponding positions of the delay game.

A straightforward induction shows that for every play consistent with τ′ there is a play consistent with τ that has the same outcome. Thus, as τ is a winning strategy, so is τ′.

2.) Now, let τ be a winning strategy for Player O in the block game with block length k. We define a strategy τ′ for Player O in Γ_f(L) for the constant delay function f with f(0) = 2k. To this end, let w be a possible input produced by Player I at a position where Player O has to pick a letter. By the choice of f, w is long enough to be decomposed into complete blocks over Σ_I of length k, followed by a remainder of length less than k, such that τ already prescribes the output block containing the next letter Player O has to pick. Then, we define τ′(w) to be that letter.

Again, a straightforward induction shows that for every play consistent with τ′ there is a play consistent with τ that has the same outcome. Thus, τ′ is a winning strategy. ∎

3.3 Delay-aware Finite-state Strategies in Block Games

Now fix a block game with block length k and winning condition L ⊆ (Σ_I × Σ_O)^ω. A finite-state strategy for Player O in this game is implemented by a transducer 𝒯 = (Q, Σ_I^k, q_I, δ, Σ_O^k, λ), where Q, q_I, δ, and λ are defined as in Subsection 2.2. However, the transition function δ: Q × Σ_I^k → Q processes full input blocks and the output function λ: Q × Σ_I^k × Σ_I^k → Σ_O^k maps a state and a pair of input blocks to an output block. The strategy τ_𝒯 implemented by 𝒯 is defined as τ_𝒯(u_0 ··· u_{i+1}) = λ(δ*(q_I, u_0 ··· u_{i−1}), u_i, u_{i+1}) for every i.

Again, we identify delay-aware strategies with transducers implementing them and are interested in the number of states of the transducer. This definition captures the amount of information that is differentiated in order to implement the strategy. Note, however, that it ignores the representation of the transition function and the output function. These are no longer “small” (in the number of states), as is the case for transducers implementing strategies for Gale-Stewart games. When focussing on executing such strategies, these factors become relevant, but for our purposes they are not: we have decoupled the history tracking from the lookahead handling. The former is implemented by the automaton as usual, while the latter is taken care of by the output function. In particular, the size of the automaton is (a priori) independent of the block size. In the conclusion, we revisit the issue of representing the transition function and the output function.
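
The following sketch (with illustrative interfaces of our own choosing) shows the decoupling: the memory state abstracts only the blocks that have already been answered, while the output function is handed the two pending lookahead blocks directly. In particular, the copy condition now needs a single memory state, independently of the block length.

```python
# A sketch of executing a delay-aware finite-state strategy for a block game.
def run_delay_aware(delta, lam, q_init, input_blocks):
    """delta: (state, block) -> state; lam: (state, block, block) -> output block.
    Answers block j from the memory state reached on blocks 0..j-1 and the two
    pending blocks j and j+1."""
    q, outputs = q_init, []
    for j in range(len(input_blocks) - 1):
        outputs.append(lam(q, input_blocks[j], input_blocks[j + 1]))
        q = delta(q, input_blocks[j])      # block j is now part of the tracked history
    return outputs

# The copy condition needs a single memory state, regardless of the block length:
delta = lambda q, block: 0
lam   = lambda q, current, lookahead: current
assert run_delay_aware(delta, lam, 0, ['ab', 'ba', 'aa']) == ['ab', 'ba']
```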

In the next section, we present a very general approach to computing finite-state strategies for block games whose winning conditions are specified by automata with acceptance conditions that satisfy a certain aggregation property. For example, for block games with winning conditions given by deterministic parity automata, we obtain a strategy implemented by a transducer with exponentially many states, which can be obtained by solving a parity game of exponential size. In both aspects, this is an exponential improvement over the delay-oblivious variant for classical delay games.

To conclude the introduction of block games, we strengthen Lemma 1 to transfer finite-state strategies between delay games and block games.

Lemma 2.

Let L ⊆ (Σ_I × Σ_O)^ω.

  1. If Player O has a delay-oblivious finite-state winning strategy with n states for Γ_f(L) for some constant delay function f, then she also has a delay-aware finite-state winning strategy with n states for the block game with block length f(0).

  2. If Player O has a delay-aware finite-state winning strategy with n states for the block game with block length k, then she also has a delay-oblivious finite-state winning strategy for Γ_f(L) for the constant delay function f with f(0) = 2k, whose number of states is linear in n but exponential in k.

Proof.

It is straightforward to achieve the strategy transformations described in the proof of Lemma 1 by transforming transducers that implement finite-state strategies. ∎

The blowup in the direction from block games to delay games is in general unavoidable, as delay-oblivious finite-state winning strategies for the copy game described above need a number of states exponential in the lookahead in order to store it, while delay-aware winning strategies for the corresponding block game need only one state, independently of the block size.

4 Computing Finite-state Strategies for Block Games

The aim of this section is twofold. Our main aim is to compute finite-state strategies for block games (and, by extension, for delay games with constant lookahead). We do so by presenting a general framework for analyzing delay games with winning conditions specified by ω-automata whose acceptance conditions satisfy a certain aggregation property. The technical core is a reduction to a Gale-Stewart game, i.e., we remove the delay from the game. This framework yields upper bounds on the necessary (constant) lookahead to win a given game, but also allows us to determine the winner and a finite-state winning strategy, if the resulting Gale-Stewart game can be effectively solved.

Slightly more formally, let 𝒜 be the automaton recognizing the winning condition of the block game. Then, the winning condition of the Gale-Stewart game constructed in the reduction is recognized by an automaton that can be derived from 𝒜. In particular, the acceptance condition of the derived automaton simulates the acceptance condition of 𝒜. Many types of acceptance conditions are preserved by this simulation, e.g., starting with a parity automaton, we end up with a parity automaton. Thus, the resulting Gale-Stewart game can be effectively solved.

Our second aim is to present a framework as general as possible to obtain upper bounds on the necessary lookahead and on the solution complexity for a wide range of winning conditions. In fact, our framework is a generalization and abstraction of techniques first developed for the case of ω-regular winning conditions [KleinZimmermann16], which were later generalized to other winning conditions [KleinZimmermann16a, Zimmermann16, Zimmermann17]. Here, we cover all these results in a uniform way.

4.1 Aggregations

Let us begin by giving some intuition for the construction. The winning condition of the game is recognized by an automaton 𝒜. Thus, as usual, the exact input can be abstracted away; only the induced behavior in 𝒜 is relevant. Such a behavior is characterized by the state transformations induced by processing the input and by the effect on the acceptance condition triggered by processing it. For many acceptance conditions, this effect can be aggregated, e.g., for parity conditions, one can decompose runs into non-empty pieces and then only consider the maximal colors of the pieces. For quantitative winning conditions, one typically needs an additional bound on the lengths of these pieces (cp. [Zimmermann16, Zimmermann17]).

Thus, we begin by introducing two types of such aggregations of different strength. Fix an ω-automaton 𝒜 and a function mapping its non-empty finite runs to elements of some finite set. Given a decomposition of a run into non-empty pieces, applying this function to each piece yields an infinite sequence of aggregated values.

  • We say that the function is a strong aggregation (function) for the acceptance condition if, for all decompositions of runs ρ and ρ′ into non-empty pieces such that both decompositions yield the same sequence of aggregated values and the pieces of ρ′ are of bounded length, the following implication holds: if ρ is accepting, then so is ρ′.

  • We say that the function is a weak aggregation (function) for the acceptance condition if, for all decompositions of runs ρ and ρ′ into non-empty pieces such that both decompositions yield the same sequence of aggregated values and the pieces of both ρ and ρ′ are of bounded length, the following implication holds: if ρ is accepting, then so is ρ′.

Example 1.
  • The function mapping a non-empty finite run to the maximal color occurring in it is a strong aggregation for a parity automaton 𝒜 with coloring Ω (recall that we extended Ω to finite sequences of states accordingly).

  • The function mapping a non-empty finite run to the set of states occurring in it is a strong aggregation for a Muller automaton 𝒜.

  • The exponential time algorithm for delay games with winning conditions given by parity automata with costs, a quantitative generalization of parity automata, is based on a strong aggregation [Zimmermann17].

  • The algorithm for delay games with winning conditions given by max automata [Bojanczyk11], another quantitative automaton model, is based on a weak aggregation [Zimmermann16].

Due to symmetry, we can replace the implication by an equivalence in the definition of a weak aggregation. Also, every strong aggregation is trivially a weak one as well.

Let us briefly comment on the difference between strong and weak aggregations using the examples of parity automata with costs and max-automata: the acceptance condition of the former is a boundedness condition on some counters, while the acceptance condition of the latter is a Boolean combination of boundedness and unboundedness conditions on some counters. The aggregations for these acceptance conditions capture whether a piece of a run induces an increment of a counter or not, but abstract away the actual number of increments if it is non-zero. Now, consider the parity condition with costs, which requires the counters to be bounded. Assume the counters in some run are bounded and that we are given pieces of bounded length having the same aggregation. Then, the increments in a piece of the second decomposition have at least one corresponding increment in the matching piece of the first run. Thus, if a counter in the second run were unbounded, then it would also be unbounded in the first run, which yields a contradiction. Hence, the implication holds. For details, see [Zimmermann17]. On the other hand, to preserve both boundedness and unboundedness properties, one needs to bound the lengths of the pieces of both decompositions. Hence, there is only a weak aggregation for max-automata. Again, see [Zimmermann16] for details.

Given a weak aggregation  for with acceptance condition , let

Next, we consider aggregations that are trackable by automata. A monitor for an automaton 𝒜 with transition function δ consists of a finite set of memory elements, an empty memory element, and an update function mapping a memory element (or the empty one) and a transition of 𝒜 to a memory element. Note that the empty memory element is only used to initialize the memory; it is not in the image of the update function. The monitor computes a function from non-empty finite runs to memory elements, obtained by initializing the memory with the empty element and then applying the update function along the transitions of the run.

Example 2.

Recall Example 1. The strong aggregation for a parity automaton, mapping a run piece to its maximal color, is computed by the monitor whose memory elements are the colors and whose update function takes the maximum of the current memory element and the color seen along the processed transition (starting from that color when the memory is still empty).
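
A minimal sketch of such a monitor for the parity aggregation (the data layout is our own choice): while the automaton processes a block, the monitor keeps the maximal color seen so far; at a block border, the aggregated value is read off.

```python
def process_block(delta, color, state, block, memory=None):
    """Simulate the automaton on one block and aggregate the maximal color on the way.
    `memory=None` plays the role of the empty memory element."""
    for letter in block:
        state = delta[(state, letter)]
        c = color[state]
        memory = c if memory is None else max(memory, c)
    return state, memory   # state transformation and aggregated effect of the block

# Usage with a small parity automaton: 'a' leads to q0 (color 2), 'b' leads to q1 (color 1).
delta = {('q0', 'a'): 'q0', ('q0', 'b'): 'q1',
         ('q1', 'a'): 'q0', ('q1', 'b'): 'q1'}
color = {'q0': 2, 'q1': 1}
state, agg = process_block(delta, color, 'q0', 'abba')
assert (state, agg) == ('q0', 2)   # ends in q0; maximal color along the block is 2
```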

Next, we take the product of 𝒜 and the monitor for 𝒜, which simulates 𝒜 and simultaneously aggregates the acceptance condition. The states of the product are pairs consisting of a state of 𝒜 and a memory element, and its transition function updates both components in the expected way. Note that the product has an empty set of accepting runs, as these are irrelevant to us.

4.2 Removing Delay via Aggregation

Consider a play prefix in a delay game: Player I has produced a sequence of input letters while Player O has produced a sequence of output letters that is, in general, shorter. Now, she has to determine her next letter. The automaton 𝒜 can process the joint sequence of pairs produced so far, but not the remaining input letters, as Player O has not yet picked the corresponding output letters. However, one can determine which states are reachable via some completion by projecting away the Σ_O-component.

Thus, from now on, we assume that the alphabet of 𝒜 is Σ_I × Σ_O and consider a powerset transition function that processes input letters only.

Intuitively, it is obtained as follows: take the product of 𝒜 and the monitor, project away Σ_O, and apply the power set construction (while discarding the anyway empty acceptance condition). Then, the powerset transition function is the transition function of the resulting deterministic automaton. As usual, we extend it from single letters to finite words over Σ_I.
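
The following sketch illustrates this idea for the parity aggregation (the encoding is our own choice, not the paper's formal construction): since Player O's letters are not yet fixed, we track, per input letter, which pairs of automaton state and aggregated maximal color are reachable for some completion.

```python
def powerset_step(delta, color, state_set, input_block, sigma_O):
    """delta: (state, (a, b)) -> state over Sigma_I x Sigma_O.
    Returns all (state, max color) pairs reachable on `input_block` for some choice
    of output letters, starting from any state in `state_set`."""
    current = {(q, None) for q in state_set}
    for a in input_block:
        nxt = set()
        for q, agg in current:
            for b in sigma_O:
                q2 = delta[(q, (a, b))]
                c = color[q2]
                nxt.add((q2, c if agg is None else max(agg, c)))
        current = nxt
    return current

# Usage: state q0 (color 2) means "Player O has copied every letter so far",
# state q1 (color 1) is a rejecting sink reached after the first mismatch.
delta = {}
for a in 'ab':
    for b in 'ab':
        delta[('q0', (a, b))] = 'q0' if a == b else 'q1'
        delta[('q1', (a, b))] = 'q1'
color = {'q0': 2, 'q1': 1}
reachable = powerset_step(delta, color, {'q0'}, 'ab', sigma_O='ab')
assert ('q0', 2) in reachable and ('q1', 1) in reachable
```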

Remark 1.

The following are equivalent for all sets S of states of the product automaton, all states of the product automaton, and all input words w over Σ_I:

  1. The state is contained in the set obtained by applying the powerset transition function to S and w.

  2. There is a word over Σ_I × Σ_O whose projection to Σ_I is w such that the run of the product automaton processing this word, starting in some state of S, ends in the given state.

We use this property to define an equivalence relation formalizing the idea that input words having the same behavior in this construction do not need to be distinguished. To this end, to every input word we assign its transition summary, which maps each set of states of the product automaton to the set reached from it when processing the word. Having the same transition summary is an equivalence relation of finite index over Σ_I^*. For an equivalence class, the transition summary is independent of the choice of representatives. In the following, we only consider the infinite equivalence classes.

Now, we define a Gale-Stewart game in which Player I determines an infinite sequence of infinite equivalence classes. By picking representatives, this induces an ω-word over Σ_I. Player O picks states of the product automaton that aggregate a run of 𝒜 on some completion of this word. Player O wins if the picked states imply that this run is accepting. To account for the delay, Player I is always one move ahead, which is achieved by adding a dummy move for Player O in round 0.

Formally, in round 0, Player I picks an infinite equivalence class and Player O has to pick a dummy move. In round i > 0, first Player I picks an infinite equivalence class, then Player O picks a state of the product automaton. Player O wins the resulting play if the picked states are locally consistent and witness an accepting run (note that the dummy move is ignored). The notions of (finite-state and winning) strategies are inherited from Gale-Stewart games, as this game is indeed such a game for some automaton that can be derived from 𝒜 and the monitor.

Formally, we define for some arbitrary , some arbitrary , , and if, and only if,

  • ,

  • for all , and

  • .

It is straightforward to prove that this automaton has the desired properties.

Note that, due to our very general definition of acceptance conditions, we are able to express the local consistency requirement on Player O’s picks using the acceptance condition. For less general acceptance modes, e.g., parity, one has to check this property using the state space of the automaton, which leads to a polynomial blowup, as one has to store each pick for the duration of one transition.

Theorem 2.

Let 𝒜 be an ω-automaton and let ℳ be a monitor for 𝒜 such that the function computed by ℳ is a strong aggregation for the acceptance condition of 𝒜, let the Gale-Stewart game be constructed from 𝒜 and ℳ as above, and let k denote the block length obtained from this construction.

  1. If Player O wins Γ_f(L(𝒜)) for some delay function f, then she also wins the Gale-Stewart game.

  2. If Player O wins the Gale-Stewart game, then she also wins the block game with block length k and winning condition L(𝒜). Moreover, if she has a finite-state winning strategy for the Gale-Stewart game, then she has a delay-aware finite-state winning strategy for the block game.

By applying both implications and Item 2 of Lemma 1, we obtain upper bounds on the complexity of determining, for a given 𝒜, whether Player O wins Γ_f(L(𝒜)) for some f, and on the constant lookahead necessary to do so.

Corollary 1.

Let 𝒜, ℳ, and k be as in Theorem 2. Then, the following are equivalent:

  1. Player O wins Γ_f(L(𝒜)) for some delay function f.

  2. Player O wins Γ_f(L(𝒜)) for the constant delay function f with f(0) = 2k.

  3. Player O wins the Gale-Stewart game constructed from 𝒜 and ℳ.

Thus, determining whether, given 𝒜, Player O wins Γ_f(L(𝒜)) for some f is achieved by determining the winner of the Gale-Stewart game and, independently, we obtain an exponential (in the size of the product of 𝒜 and ℳ) upper bound on the necessary constant lookahead.

Example 3.

Continuing our example for the parity acceptance condition, we obtain an exponential upper bound on the constant lookahead necessary to win the delay game and an exponential-time algorithm for determining the winner, as the automaton recognizing the winning condition of the Gale-Stewart game has exponentially many states, but the same number of colors as 𝒜. Both upper bounds are tight [KleinZimmermann16].

In case there is no strong aggregation for the acceptance condition, but only a weak one, one can still show that finite-state strategies exist, if Player O wins with respect to some constant delay function at all.

Theorem 3.

Let 𝒜 be an ω-automaton and let ℳ be a monitor for 𝒜 such that the function computed by ℳ is a weak aggregation for the acceptance condition of 𝒜, let the Gale-Stewart game be constructed from 𝒜 and ℳ as above, and let k denote the block length obtained from this construction.

  1. If Player O wins Γ_f(L(𝒜)) for some constant delay function f, then she also wins the Gale-Stewart game.

  2. If Player O wins the Gale-Stewart game, then she also wins the block game with block length k and winning condition L(𝒜). Moreover, if she has a finite-state winning strategy for the Gale-Stewart game, then she has a delay-aware finite-state winning strategy for the block game.

Again, we obtain upper bounds on the solution complexity (here, with respect to constant delay functions) and on the necessary constant lookahead.

Corollary 2.

Let 𝒜, ℳ, and k be as in Theorem 3. Then, the following are equivalent:

  1. Player O wins Γ_f(L(𝒜)) for some constant delay function f.

  2. Player O wins Γ_f(L(𝒜)) for the constant delay function f with f(0) = 2k.

  3. Player O wins the Gale-Stewart game constructed from 𝒜 and ℳ.

5 Discussion

Let us compare the two approaches presented in the previous sections with three use cases: delay games whose winning conditions are given by deterministic parity automata, by deterministic Muller automata, and by LTL formulas. All three formalisms define only ω-regular languages, but they vary in their succinctness.

The following facts about arena-based games will be useful for the comparison:

  • The winner of arena-based parity games has positional winning strategies [EmersonJutla91, Mostowski91], i.e., finite-state strategies with a single state.

  • The winner of an arena-based Muller game has a finite-state strategy whose number of states is factorial in the number of vertices of the arena [McNaughton93].

  • The winner of an arena-based LTL game has a finite-state strategy with doubly-exponentially many states in the size of the formula specifying the winning condition [PnueliRosner89a].

Also, we need the following bounds on the necessary lookahead in delay games:

  • In delay games whose winning conditions are given by deterministic parity automata, exponential (in the size of the automata) constant lookahead is both sufficient and in general necessary [KleinZimmermann16].

  • In delay games whose winning conditions are given by deterministic Muller automata, doubly-exponential (in the size of the automata) constant lookahead is sufficient. This follows from the transformation of deterministic Muller automata into deterministic parity automata of exponential size (see, e.g., [GraedelThomasWilke02]). However, the best lower bound is the exponential one for parity automata, which are also Muller automata.

  • In delay games whose winning conditions are given by LTL formulas, triply-exponential (in the size of the formula) constant lookahead is both sufficient and in general necessary [KleinZimmermann16a].

Using these facts, we obtain the following complexity results for finite-state strategies: Figure 1 shows the upper bounds on the number of states of delay-oblivious finite-state strategies for delay games and on the number of states of delay-aware finite-state strategies for block games. In all three cases, the former strategies are at least exponentially larger. This illustrates the advantage of decoupling tracking the history from managing the lookahead.

                   parity        Muller           LTL
  delay-oblivious  doubly-exp.   quadruply-exp.   quadruply-exp.
  delay-aware      exp.          doubly-exp.      triply-exp.
Figure 1: Memory size for delay-oblivious strategies (for delay games) and delay-aware finite-state strategies (for block games), measured in the size of the representation of the winning condition. For the sake of readability, we only present the orders of magnitude, but not exact values.

Finally, let us compare our approach to that of Salzmann. Fix a delay game and assume Player I has produced a sequence of input letters while Player O has produced a shorter sequence of output letters. His strategies are similar to our delay-aware ones for block games. The main technical difference is that his strategies have access to the state reached by the specification automaton when processing the pairs produced so far. Thus, his strategies explicitly depend on the specification automaton while ours are independent of it. In general, his strategies are therefore smaller than ours, as our transducers have to simulate the automaton if they need access to the current state. On the other hand, our aggregation-based framework is more general and readily applicable to quantitative winning conditions as well, while he only presents results for selected qualitative conditions like parity, weak parity, and Muller.

6 Conclusion

We have presented a very general framework for analyzing delay games. If the automaton recognizing the winning condition satisfies a certain aggregation property, our framework yields upper bounds on the necessary lookahead to win the game, an algorithm for determining the winner (under some additional assumptions on the acceptance condition), and finite-state winning strategies for Player O, if she wins the game at all. These results cover all previous results on the first two aspects (although not necessarily with optimal complexity of determining the winner).

Thereby, we have lifted another important aspect of the theory of infinite games to the setting with delay. However, many challenging open questions remain, e.g., a systematic study of memory requirements in delay games is now possible. For delay-free games, tight upper and lower bounds on these requirements are known for almost all winning conditions.

Another exciting question concerns the tradeoff between memory and amount of lookahead: can one trade memory for lookahead? In other settings, such tradeoffs exist, e.g., lookahead allows Player O to improve the quality of her strategies [Zimmermann17]. Salzmann has presented some tradeoffs between memory and lookahead, e.g., linear lookahead allows exponential reductions in memory size in comparison to delay-free strategies [Salzmann15]. In current work, we investigate whether these results are inherent to his setting, which differs subtly from the one proposed here, or whether they exist in our setting as well.

Finite-state strategies in arena-based games are typically computed by game reductions, which turn a game with a complex winning condition into one in a larger arena with a simpler winning condition. In future work, we plan to lift this approach to delay games. Note that the algorithm for computing finite-state strategies presented here can already be understood as a reduction, as we turn a delay game into a Gale-Stewart game. This removes the delay, but preserves the type of winning condition. However, it is also conceivable that staying in the realm of delay games yields better results, i.e., by keeping the delay while simplifying the winning condition. In future work, we address this question.

In our study here we focussed on the state complexity of the automata implementing the strategies, i.e., we measure the quality of a strategy in the number of states of a transducer implementing it. However, this is not the true size of such a machine, as we have ignored the need to represent the transition function and the output function, which have an exponential domain (in the block size) in the case of delay-aware strategies. Thus, when represented as lookup tables, they are prohibitively large. However, our delay-removing reduction hints at these functions also being implementable by transducers. For the transition function this is straightforward; in current work, we investigate whether this is also possible for the output function.

Finally, in future work we will determine the complexity of computing finite-state strategies in delay games and investigate notions of finite-state strategies for Player I, which should be much simpler since he does not have to deal with the lookahead.

Acknowledgements

The author is very grateful to the anonymous reviewers whose feedback significantly improved the exposition.

References