A Subexponential Algorithm for ARRIVAL

by Bernd Gärtner et al.
ETH Zurich

The ARRIVAL problem is to decide the fate of a train moving along the edges of a directed graph, according to a simple (deterministic) pseudorandom walk. The problem is in NP ∩ coNP but not known to be in P. The currently best algorithms have runtime 2^Θ(n), where n is the number of vertices. This is not much better than just performing the pseudorandom walk. We develop a subexponential algorithm with runtime 2^O(√n·log n). We also give a polynomial-time algorithm if the graph is almost acyclic. Both results are derived from a new general approach to solve ARRIVAL instances.






1. Introduction

Informally, the ARRIVAL problem is the following (we quote from Dohrau et al. [5]):

Suppose that a train is running along a railway network, starting from a designated origin, with the goal of reaching a designated destination. The network, however, is of a special nature: every time the train traverses a switch, the switch will change its position immediately afterwards. Hence, the next time the train traverses the same switch, the other direction will be taken, so that directions alternate with each traversal of the switch.
Given a network with origin and destination, what is the complexity of deciding whether the train, starting at the origin, will eventually reach the destination?

ARRIVAL is arguably the simplest problem in NP ∩ coNP that is not known to be in P. Due to its innocence and at the same time unresolved complexity status, ARRIVAL has attracted quite some attention recently. The train run can be interpreted as a deterministic simulation of a random walk that replaces random decisions at a switch by perfectly fair decisions. Such pseudorandom walks have been studied before under the names of Eulerian walkers [16], rotor-router walks [11], and Propp machines [3]. The reachability question as well as NP and coNP membership are due to Dohrau et al. [5].

Viewed somewhat differently, ARRIVAL is a zero-player game (a process that runs without a controller); in contrast, three other well-known graph games in NP ∩ coNP that are not known to be in P are two-player (involving two controllers). These are simple stochastic games, mean-payoff games and parity games [2, 20, 12]. Moreover, it is stated in (or easily seen from) these papers that the one-player variants (the strategy of one controller is fixed) have polynomial-time algorithms. In light of this, one might expect a zero-player game such as ARRIVAL to be really simple. But so far, no polynomial-time algorithm could be found.

On the positive side, the complexity upper bound could be strengthened in various ways. ARRIVAL is in UP ∩ coUP, meaning that there are efficient verifiers that accept unique proofs [9]. A search version of ARRIVAL has been introduced by Karthik C. S. and shown to be in PLS [14], then in CLS [9], and finally in UEOPL [9, 7]. The latter complexity class, established by Fearnley et al. [7], has an intriguing complete problem, but there is no evidence that ARRIVAL is complete for UEOPL.

Concerning complexity lower bounds, there is one result: ARRIVAL is NL-hard [6]. This is not a very strong statement; it means that every problem that can be solved by a nondeterministic log-space Turing machine reduces (in log-space) to ARRIVAL.

Much more interesting are the natural one- and two-player variants of ARRIVAL that have been introduced in the same paper by Fearnley et al. [6]. These variants allow a better comparison with the previously mentioned graph games. It turns out that the one-player variant of ARRIVAL is NP-complete, and that the two-player variant is PSPACE-hard [6]. This shows that the s-player variant of ARRIVAL is probably strictly harder than the s-player variants of the other graph games mentioned before, for s ∈ {1, 2}. This makes it a bit less surprising that ARRIVAL itself (s = 0) could so far not be shown to lie in P.

On the algorithmic side, the benchmark is the obvious algorithm for solving ARRIVAL on a graph with n vertices: simulate the train run. This is known to take at most 2^O(n) steps (after this, we can conclude that the train runs forever) [5]. There is also a 2^Ω(n) lower bound for the simulation [5]. The upper bound was improved to 2^(n/2)·p(n) (in expectation) for some polynomial p, using a way to efficiently sample from the run [9]. The same bound was later achieved deterministically [10, 17], and the approach can be refined to yield a slightly better exponential runtime, the currently best one for general ARRIVAL instances [17].

In this paper, we prove that ARRIVAL can be decided in subexponential time 2^O(√n·log n). While this is still far away from the desired polynomial-time algorithm, the new upper bound makes the first significant progress on the runtime. We also prove that polynomial runtime can be achieved if the graph is close to acyclic, meaning that it can be made acyclic by removing a constant number of vertices.

As the main technical tool from which we derive both results, we introduce a generalization of ARRIVAL. In this multi-run variant, there is a subset S of vertices where additional trains may start and also terminate. It turns out that if we start the right numbers of trains from the vertices in S, we also solve the original instance, so the problem is reduced to searching for these right numbers. We show that this search problem is well-behaved and can be solved by systematic guessing, where the number of guesses is exponential in |S|, not in n.

We are thus interested in cases where S is small but at the same time allows a sufficiently fast evaluation of a given guess. For the subexponential algorithm, we choose S as a set of size roughly √n, with the property that a train can only take a subexponential number of steps until it terminates (in S or at a destination). For almost acyclic graphs, we choose S as a minimum feedback vertex set, a set whose removal makes the graph acyclic. In this case, a train can visit any vertex only once before it terminates.

The multi-run variant itself is an interesting new approach to the ARRIVAL problem, and other applications of it might be found in the future.

2. Arrival

The ARRIVAL problem was introduced by Dohrau et al. [5] as the problem of deciding whether the train arrives at a given destination or runs forever. Here, we work in a different but equivalent setting (implicitly established by Dohrau et al. already) in which the train always arrives at one of two destinations, and we have to decide at which one. The definitions and results from Dohrau et al. [5] easily adapt to our setting. We still provide independent proofs, derived from the more general setting that we introduce in Section 3.

Given a finite set V of vertices, an origin o ∈ V, two destinations d, d̄ ∉ V, and two functions s_even, s_odd : V → V ∪ {d, d̄}, the 6-tuple A = (V, o, d, d̄, s_even, s_odd) is an ARRIVAL instance. The vertices s_even(v) and s_odd(v) are called the even and the odd successor of v.

An ARRIVAL instance A defines a directed graph, connecting each vertex to its even and its odd successor. We call this the switch graph of A and denote it by G(A). To avoid special treatment of the origin later, we introduce an artificial vertex ō (think of it as the “train yard”) that only connects to the origin o. Formally, G(A) = (V ∪ {ō, d, d̄}, E) where E = Ê ∪ {(ō, o)} and Ê = {(v, s_even(v)) : v ∈ V} ∪ {(v, s_odd(v)) : v ∈ V}. We also refer to E simply as the edges of A. An edge in Ê is called proper.

The run procedure is the following. For every vertex we maintain a current and a next successor, initially the even and the odd one, respectively. We put a token (usually referred to as the train) at ō and move it along switch graph edges until it reaches either d or d̄. Whenever the train is at a vertex v ∈ V, we move it to v’s current successor and then swap the current and the next successor of v; see Algorithm 1 for a formal description and Figure 1 for an example.

Input: ARRIVAL instance A = (V, o, d, d̄, s_even, s_odd)
Output: destination of the train: either d or d̄
Let s_curr and s_next be arrays indexed by the vertices of V
for v ∈ V do
       s_curr[v] ← s_even(v); s_next[v] ← s_odd(v)
v ← o   /* traversal of edge (ō, o) */
while v ≠ d and v ≠ d̄ do
       w ← s_curr[v]
       swap(s_curr[v], s_next[v])
       v ← w   /* traversal of edge (v, w) */
return v
Algorithm 1 Run Procedure

Algorithm 1 (Run Procedure) may cycle, but we can avoid this by assuming that from every vertex v ∈ V, one of d and d̄ is reachable along a directed path in G(A). We call such an ARRIVAL instance terminating, since it guarantees that either d or d̄ is eventually reached.
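For concreteness, the Run Procedure can be sketched in Python as follows. The dictionary-based instance encoding (s_even, s_odd mapping each vertex to its successor, with destinations 'd' and 'dbar' as plain labels) and the function name run_procedure are our own assumptions, not notation from the paper.

```python
def run_procedure(vertices, origin, d, dbar, s_even, s_odd):
    # Current and next successor of every vertex; the even one comes first.
    curr = {v: s_even[v] for v in vertices}
    nxt = {v: s_odd[v] for v in vertices}
    profile = {}          # proper edge -> number of traversals
    pos = origin          # the train enters via the edge (train yard, origin)
    while pos not in (d, dbar):   # stops only for terminating instances
        target = curr[pos]
        profile[(pos, target)] = profile.get((pos, target), 0) + 1
        curr[pos], nxt[pos] = nxt[pos], curr[pos]   # the switch flips
        pos = target
    return pos, profile
```

On a terminating instance the loop is guaranteed to stop; on a non-terminating one it would run forever, matching the discussion above.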


Figure 1. A terminating ARRIVAL instance and the train run. Bold edges go to the even successors, dashed edges to the odd successors. The two successors may coincide (lower left vertex). The numbers indicate how often each edge is traversed by the train.
Lemma 1.

Let A be a terminating ARRIVAL instance. Let v ∈ V and suppose that the shortest path from v to a destination in G(A) has length ℓ. Then v is visited (the train is at v) at most 2^ℓ times by Algorithm 1 (Run Procedure).


Let v = v_0, v_1, …, v_ℓ be the sequence of vertices on a shortest path from v to a destination. Consider the first 2^ℓ visits to v_0 (if there are fewer, we are done). Once every two consecutive visits, the train moves on to v_1, so we can consider the first 2^(ℓ−1) visits to v_1 and repeat the argument from there to show that v_i is visited at least 2^(ℓ−i) times for all i, before v_0 exceeds 2^ℓ visits. In particular, v_ℓ (a destination) is visited, so the run indeed terminates within at most 2^ℓ visits to v. ∎

Lemma 2.

Let A be a terminating ARRIVAL instance with |V| = n. Let L be the maximum length of the shortest path from a vertex in V to a destination. Algorithm 1 (Run Procedure) traverses at most n·2^L proper edges.


By Lemma 1, the total number of visits to vertices in V is bounded by Σ_ℓ n_ℓ·2^ℓ, where n_ℓ is the number of vertices with a shortest path of length ℓ to a destination. We have n_ℓ > 0 only if ℓ ≤ L, and hence the sum is at most Σ_ℓ n_ℓ·2^L = n·2^L. The number of proper edges being traversed (one after every visit of a vertex in V) is the same. ∎

Given a terminating instance, ARRIVAL is the problem of deciding whether Algorithm 1 (Run Procedure) returns d (YES instance) or d̄ (NO instance). It is unknown whether ARRIVAL ∈ P, but it is in NP ∩ coNP, due to the existence of switching flows that are certificates for the output of Algorithm 1 (Run Procedure).

2.1. Switching Flows

For a vertex v and a set of edges E, we will denote the set of outgoing edges of v by E⁺(v). Analogously, we will denote the set of incoming edges of v by E⁻(v). Furthermore, for a function x : E → ℕ_0, we will also use the notation x_e instead of x(e) to denote the value of x at some edge e. Lastly, given some vertex v, edges E and a function x : E → ℕ_0, we will use x(E⁺(v)) = Σ_{e ∈ E⁺(v)} x_e to denote the outflow of x at v and x(E⁻(v)) = Σ_{e ∈ E⁻(v)} x_e to denote the inflow of x at v. For two functions x, y : E → ℕ_0, we write x ≤ y if this holds componentwise, i.e. x_e ≤ y_e for all e ∈ E.

Definition 1 (Switching Flow [5]).

Let A be a terminating ARRIVAL instance with edges E. A function x : E → ℕ_0 is a switching flow for A if

       x_{(ō,o)} = 1,
       x(E⁺(v)) = x(E⁻(v)) for all v ∈ V,   (flow conservation)
       0 ≤ x_{(v,s_odd(v))} ≤ x_{(v,s_even(v))} ≤ x_{(v,s_odd(v))} + 1 for all v ∈ V.   (switching behavior)

Moreover, x is called a switching flow to d if x(E⁻(d)) = 1.

Note that due to flow conservation, a switching flow is a switching flow either to d or to d̄: exactly one of the destinations must absorb the unit of flow emitted by ō. If we set x_e to the number of times the edge e is traversed in Algorithm 1 (Run Procedure), we obtain a switching flow to the output; see Figure 1 for an example. Indeed, every time the train enters a vertex v ∈ V, it also leaves it; this yields flow conservation. The strict alternation between the successors (beginning with the even one) yields switching behavior.

Hence, the existence of a switching flow to the output is necessary for obtaining the output. Interestingly, it is also sufficient. For that, it remains to prove that we cannot have switching flows to both and for the same instance.
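The two defining properties of a switching flow can be verified mechanically. The following sketch assumes a dictionary encoding where s_even, s_odd map each vertex to its successors and x_even[v], x_odd[v] hold the flow on v's even and odd edge (all names are ours); it returns the destination the flow is "to", or None if the properties fail.

```python
def is_switching_flow(vertices, origin, d, dbar, s_even, s_odd, x_even, x_odd):
    # Accumulate inflows, including the one unit from the train yard.
    inflow = {v: 0 for v in list(vertices) + [d, dbar]}
    inflow[origin] += 1
    for v in vertices:
        inflow[s_even[v]] += x_even[v]
        inflow[s_odd[v]] += x_odd[v]
    for v in vertices:
        if x_even[v] + x_odd[v] != inflow[v]:     # flow conservation at v
            return None
        if not (0 <= x_odd[v] <= x_even[v] <= x_odd[v] + 1):  # switching behavior
            return None
    return d if inflow[d] == 1 else dbar          # the absorbing destination
```

By flow conservation, exactly one destination absorbs the emitted unit, so checking the inflow at d suffices.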

Theorem 3 (Switching flows are certificates [5]).

Let A be a terminating ARRIVAL instance and t ∈ {d, d̄}. Algorithm 1 (Run Procedure) outputs t if and only if there exists a switching flow to t.

The switching flow corresponding to the actual train run can be characterized as follows.

Theorem 4 (The run profile is the minimal switching flow [5]).

Let A be a terminating ARRIVAL instance with edges E. Let x* be the run profile of A, meaning that x*_e counts the number of times edge e is traversed during Algorithm 1 (Run Procedure). Then x* ≤ x for all switching flows x. In particular, x* is the unique minimizer of the total flow Σ_{e∈E} x_e over all switching flows x.

We note that this provides the missing direction of Theorem 3. Indeed, x* is a switching flow and hence a switching flow either to d or to d̄. By x* ≤ x, every switching flow x is to the same destination as x*. In general, there can be switching flows x ≠ x* [5].

We will derive Theorem 4 as a special case of Theorem 6 in the next section.

3. A General Framework

In order to solve the ARRIVAL problem, we can simulate Algorithm 1 (Run Procedure), which takes exponential time in the worst case [5]; alternatively, we can try to get hold of a switching flow: via Theorem 3, this also allows us to decide ARRIVAL.

According to Definition 1, a switching flow can be obtained by finding a feasible solution to an integer linear program (ILP); this is a hard task in general, and it is unknown whether switching-flow ILPs can be solved more efficiently than general ILPs.

In this section, we develop a framework that allows us to reduce the problem to that of solving a number of more constrained ILPs. At the same time, we provide direct methods for solving them that do not rely on using general purpose ILP solvers.

3.1. The idea

Given a terminating ARRIVAL instance, we consider the ILP whose (unique) optimal solution is the run profile, see Theorem 4. Given an arbitrary fixed subset S ⊆ V of vertices, we drop the flow conservation constraints at the vertices in S, but at the same time prescribe outflow values g_v for v ∈ S that we can think of as guesses for their values in the run profile.

If we optimally solve this guessing ILP, it turns out that we still obtain a unique solution (Theorem 6 (i) below) and hence unique inflow values for the vertices in S. If we happen to stumble upon a fixed point of the mapping from outflow guesses to inflow values, we recover flow conservation also at S, which means that our guesses were correct and we have obtained the run profile.

The crucial property is that the previously described mapping is monotone (Theorem 6 (ii) below), meaning that the theory of Tarski fixed points applies, which guarantees the existence of a fixed point as well as efficient algorithms for finding it (Lemma 9 below).

Hence, we reduce the computation of the run profile to a benign search problem (for a Tarski fixed point), where every search step requires us to solve a “simpler” ILP. But how much simpler this is (if at all) depends on the properties of the switch graph and an appropriate choice of the set S. We next present a “rail” way of solving the guessing ILP that turns out to be more efficient in the worst case (and also simpler) than general purpose ILP solvers.

3.2. The Multi-Run Procedure

Given S ⊆ V and g ∈ ℕ_0^S (guesses for the outflows from the vertices in S), we start one train from ō and g_v trains from each v ∈ S, and let the trains run until they arrive back in S, or at a destination. In this way, we produce inflow values for the vertices in S ∪ {d, d̄}.

By starting, we mean that we move each of the trains by one step: the one on ō moves to o, while ⌈g_v/2⌉ of the g_v trains at v ∈ S move to the even successor of v, and ⌊g_v/2⌋ to the odd successor. Trains that are now on vertices in V \ S are called waiting (to move on).

For all v ∈ V, we initialize the current and next successors as before in Algorithm 1 (Run Procedure). Then we (nondeterministically) repeat the following until there are no more trains waiting.

We pick a vertex v ∈ V \ S where some trains are waiting and call the number of waiting trains k. We choose a number m ≤ k of trains to move on; we move ⌈m/2⌉ of them to the current successor and ⌊m/2⌋ to the next successor. If m is odd, we afterwards swap the current and the next successor at v.

Algorithm 2 (Multi-Run Procedure) provides the details. For S = ∅, the procedure becomes deterministic and is equivalent to Algorithm 1 (Run Procedure).

Input: terminating ARRIVAL instance A = (V, o, d, d̄, s_even, s_odd) with edges E;
S ⊆ V, g ∈ ℕ_0^S (one train starts from ō, and g_v trains start from each v ∈ S).
Output: numbers of trains arriving at d, at d̄, and in S, respectively
Let w be a zero-initialized array indexed by the vertices of G(A)
w[o] ← w[o] + 1   /* traversal of (ō, o) */
for v ∈ S do
       w[s_even(v)] ← w[s_even(v)] + ⌈g_v/2⌉   /* ⌈g_v/2⌉ traversals of (v, s_even(v)) */
       w[s_odd(v)] ← w[s_odd(v)] + ⌊g_v/2⌋   /* ⌊g_v/2⌋ traversals of (v, s_odd(v)) */
Let s_curr and s_next be arrays indexed by the vertices of V
for v ∈ V do
       s_curr[v] ← s_even(v); s_next[v] ← s_odd(v)
while w[v] > 0 for some v ∈ V \ S do
       pick v ∈ V \ S with w[v] > 0 and choose m with 1 ≤ m ≤ w[v]; w[v] ← w[v] − m
       w[s_curr[v]] ← w[s_curr[v]] + ⌈m/2⌉   /* ⌈m/2⌉ traversals of (v, s_curr[v]) */
       w[s_next[v]] ← w[s_next[v]] + ⌊m/2⌋   /* ⌊m/2⌋ traversals of (v, s_next[v]) */
       if m is odd then swap(s_curr[v], s_next[v])
return w[d], w[d̄], (w[v])_{v ∈ S}
Algorithm 2 Multi-Run Procedure
Lemma 5.

Algorithm 2 (Multi-Run Procedure) terminates.


This is a qualitative version of the argument in Lemma 1. Let t record how many times each edge has been traversed in total, at any given time of Algorithm 2 (Multi-Run Procedure). For v ∈ V \ S, we always have t(E⁺(v)) = t(E⁻(v)) − w(v), where w(v) is the number of trains currently waiting at v. Suppose for a contradiction that the Multi-Run Procedure cycles. Then t(E⁺(v)) is unbounded for at least one v ∈ V \ S, which means that t(E⁻(v)) is also unbounded, since w(v) is bounded. This in turn means that t_{(v,s_even(v))} and t_{(v,s_odd(v))} are unbounded as well, since we distribute evenly between the two successors. Repeating this argument, we see that t(E⁺(u)) is unbounded for all vertices u reachable from v. But the numbers of trains arriving at d, at d̄, and in S are bounded (by the number of trains that we started), so neither d, d̄ nor any vertex of S is reachable from v. This is a contradiction to A being terminating. ∎

3.3. Candidate switching flows

After Algorithm 2 (Multi-Run Procedure) has terminated, let t_e be the number of times the edge e was traversed. We then have flow conservation at the vertices in V \ S, switching behavior at all vertices, and outflow g_v from each v ∈ S. Indeed, every train that enters a vertex in V \ S eventually also leaves it; moreover, the procedure is designed such that it simulates moving trains out of a vertex individually, strictly alternating between successors. Finally, as we start g_v trains from v ∈ S and stop all trains once they arrive in S, we also have outflow exactly g_v from each v ∈ S.

We remark that we do not have any control over how many trains end up at d, at d̄, or in S. Also, t could in principle depend on the order in which we pick vertices, and on the chosen values of m. We will show in Theorem 6 below that it does not. So far, we have only argued that t is a candidate switching flow according to the following definition.

Definition 2 (Candidate Switching Flow).

Let A be a terminating ARRIVAL instance with edges E, S ⊆ V, g ∈ ℕ_0^S.

A function x : E → ℕ_0 is a candidate switching flow for A (w.r.t. S and g) if

       x_{(ō,o)} = 1,
       x(E⁺(v)) = x(E⁻(v)) for all v ∈ V \ S,   (flow conservation)
       x(E⁺(v)) = g_v for all v ∈ S,   (prescribed outflows)
       0 ≤ x_{(v,s_odd(v))} ≤ x_{(v,s_even(v))} ≤ x_{(v,s_odd(v))} + 1 for all v ∈ V.   (1)

Theorem 6 (Each Multi-Run profile is the minimal candidate switching flow).

Let A, S, g be as in Definition 2 and let t be a Multi-Run profile of A, meaning that t_e is the number of times edge e was traversed during some run of Algorithm 2 (Multi-Run Procedure). Then the following statements hold.

  • (i) t ≤ x for all candidate switching flows x (w.r.t. S and g). In particular, t is the unique minimizer of the total flow Σ_{e∈E} x_e over all candidate switching flows x.

  • (ii) For fixed S, define F(g) := (t(E⁻(v)))_{v ∈ S}, the inflows at S produced by the Multi-Run Procedure with outflow guesses g. Then the function F is monotone, meaning that g ≤ g′ implies that F(g) ≤ F(g′).


We prove part (i) by the pebble argument [5]: Let x be any candidate switching flow w.r.t. S and g. For every edge e, we initially put x_e pebbles on e, and whenever a train traverses e in Algorithm 2 (Multi-Run Procedure), we let it collect a pebble from e. If we can show that we never run out of pebbles, t ≤ x follows. By “running out of pebbles”, we concretely mean that we are for the first time trying to collect a pebble from an edge with no pebbles left.

Since x is a candidate switching flow, we cannot run out of pebbles while starting the trains. In fact, we exactly collect all the pebbles on (ō, o) and on the outgoing edges of the vertices in S. It remains to show that we cannot run out of pebbles while processing a picked vertex v ∈ V \ S. For this, we prove that we maintain the following additional invariants (which hold immediately after starting the trains). Let p record for each edge the remaining number of pebbles on it. Then for all v ∈ V \ S,

  • (a) p(E⁺(v)) = w(v) + p(E⁻(v)), where w(v) is the number of trains waiting at v; in particular, p(E⁺(v)) ≥ w(v);

  • (b) 0 ≤ p_{(v,s_curr[v])} − p_{(v,s_next[v])} ≤ 1.

Suppose that these invariants hold when picking a vertex v and moving m ≤ w(v) trains. As we have not run out of pebbles before, (a) guarantees that we have at least w(v) ≥ m pebbles on the outgoing edges of v; by (b), ⌈p(E⁺(v))/2⌉ of them are on (v, s_curr[v]) and ⌊p(E⁺(v))/2⌋ on (v, s_next[v]). From the former, we collect ⌈m/2⌉, and from the latter ⌊m/2⌋, so we do not run out of pebbles. We maintain (a) at v, where both p(E⁺(v)) and w(v) are reduced by m. We also maintain (a) at the successors; there, the gain in waiting trains exactly compensates the loss in incoming pebbles. Finally, we maintain (b) at v: If m is even, both p_{(v,s_curr[v])} and p_{(v,s_next[v])} shrink by m/2. If m is odd, we have −1 ≤ p_{(v,s_curr[v])} − p_{(v,s_next[v])} ≤ 0 after collecting one more pebble from (v, s_curr[v]) than from (v, s_next[v]), but then we reverse the sign by swapping s_curr[v] and s_next[v].

For S = ∅, this proves Theorem 4, and for general S, we have now proved (i). In particular, the order in which we move trains in Algorithm 2 (Multi-Run Procedure) does not matter.

The proof of (ii) is now an easy consequence; recall that F(g)_v is the number of trains that arrive at v ∈ S. If g ≤ g′, we run Algorithm 2 (Multi-Run Procedure) with input g′ such that it first simulates a run with input g; for this, we keep the extra trains corresponding to g′ − g waiting where they are after the start, until all other trains have terminated. At this point, we have inflow F(g) + c at S, where c ≥ 0 accounts for the extra trains that have already reached S right after the start. We finally run the extra trains that are still waiting, and as this can only further increase the inflows at S, we get F(g) ≤ F(g′). ∎

3.4. Runtime

As we have proved in Theorem 6 (i), the Multi-Run Procedure always generates the unique flow-minimal candidate switching flow. But the number of steps depends on the order in which vertices are picked, and on the chosen values of m. We start with an upper bound on the number of edge traversals that generalizes Lemma 2.

Lemma 7.

Let A be a terminating ARRIVAL instance with |V| = n, S ⊆ V, g ∈ ℕ_0^S. Let L be the maximum length of the shortest path from a vertex in V to a vertex in S ∪ {d, d̄}. Further suppose that at the beginning of some iteration in Algorithm 2 (Multi-Run Procedure), W trains are still waiting. Then all subsequent iterations traverse at most W·n·2^L edges in total.


We continue to run each of the W waiting trains individually and proceed with the next one only when the previous one has terminated. In Algorithm 2 (Multi-Run Procedure), this corresponds to always choosing m = 1 and picking the next vertex as the head of the previously traversed edge, for each of the W trains. So we effectively perform Algorithm 1 (Run Procedure) for W trains.

As each train terminates once it reaches a vertex in S ∪ {d, d̄}, Lemmata 1 and 2 are easily seen to hold also here, after redefining “destination” as any vertex in S ∪ {d, d̄}. As a consequence, each train traverses at most n·2^L edges until it reaches a vertex in S ∪ {d, d̄}. This leads to at most W·n·2^L edge traversals overall. By Theorem 6 (i), this upper bound holds for all ways of continuing Algorithm 2 (Multi-Run Procedure). ∎

With Lemma 7, we obtain an upper bound on the total number of loop iterations, since each iteration traverses at least one edge. But it turns out that we can be significantly faster (and polynomial in the encoding size of g) when we proceed in a greedy fashion, i.e. we always pick the next vertex as the one with the largest number of waiting trains, and move all these trains at once.

Lemma 8.

Let A, S, g, L be as in Lemma 7, and suppose that in each iteration of Algorithm 2 (Multi-Run Procedure), we pick v maximizing w(v) and further choose m = w(v). Then the number of iterations is at most n²·2^L·ln M + 1, where M = (1 + Σ_{v∈S} g_v)·(n·2^L + 1) bounds the total flow of the Multi-Run profile.


As in the proof of Theorem 6, we let each train collect a pebble as it traverses an edge, where we initially put t_e pebbles on edge e, with t being the unique Multi-Run profile. This means that we eventually collect all pebbles. Now consider an iteration and suppose that W trains are still waiting. In the greedy algorithm, we move at least W/n of them in this iteration and collect at least that many pebbles. On the other hand, with W trains still waiting, and with L as in Lemma 7, there can be no more than W·n·2^L pebbles left, as all of them will be collected in the remaining at most that many edge traversals, due to Lemma 7.

In summary, the number of pebbles is guaranteed to be reduced by a factor of 1 − 1/(n²·2^L) in each iteration, starting from at most M pebbles before the first iteration. After j iterations, we therefore have at most

       M·(1 − 1/(n²·2^L))^j ≤ M·e^(−j/(n²·2^L))

pebbles left (using 1 + x ≤ e^x). Hence, after at most n²·2^L·ln M + 1 iterations, the greedy version of Algorithm 2 (Multi-Run Procedure) has indeed terminated. ∎

We remark that essentially the same runtime can be achieved by a round-robin version that repeatedly cycles through V \ S in some fixed order.
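The greedy Multi-Run Procedure can be sketched compactly in Python. The dictionary-based instance encoding and the names multi_run_greedy and g are our own; trains terminate on reaching S or a destination, and each iteration moves all trains waiting at a fullest vertex, as in Lemma 8. Since trains entering a vertex of S stop there, the switch states of S-vertices are never consulted after the start.

```python
def multi_run_greedy(vertices, origin, d, dbar, s_even, s_odd, S, g):
    curr = {v: s_even[v] for v in vertices}   # current successors
    nxt = {v: s_odd[v] for v in vertices}     # next successors
    stops = set(S) | {d, dbar}                # trains terminate here
    inflow = {t: 0 for t in stops}
    waiting = {v: 0 for v in vertices if v not in S}

    def deliver(target, count):               # `count` trains arrive at `target`
        if target in stops:
            inflow[target] += count
        else:
            waiting[target] += count

    deliver(origin, 1)                        # the train from the yard
    for v in S:                               # start g[v] trains from v
        deliver(s_even[v], (g[v] + 1) // 2)   # ceil(g/2) via the even edge
        deliver(s_odd[v], g[v] // 2)          # floor(g/2) via the odd edge
    while any(waiting.values()):
        v = max(waiting, key=waiting.get)     # greedy pick (Lemma 8)
        m, waiting[v] = waiting[v], 0         # move all waiting trains at once
        deliver(curr[v], (m + 1) // 2)
        deliver(nxt[v], m // 2)
        if m % 2 == 1:                        # odd batch: the switch ends up flipped
            curr[v], nxt[v] = nxt[v], curr[v]
    return inflow                             # trains arrived in S, at d, at dbar
```

With S = ∅ and g = {} this degenerates to a simulation of the Run Procedure, as noted above.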

3.5. Tarski fixed points

Tarski fixed points arise in the study of order-preserving functions on complete lattices [18]. For our application, it suffices to consider finite sets of the form Λ = {0, 1, …, N}^k for some N, k ∈ ℕ. For such a set, Tarski’s fixed point theorem [18] states that any monotone function f : Λ → Λ has a fixed point, i.e. some x ∈ Λ such that f(x) = x. Moreover, the problem of finding such a fixed point has been studied: Dang, Qi and Ye [4] have shown that a fixed point can be found using O(log^k N) evaluations of f. Recently, Fearnley and Savani [8] improved this to O(log^{2⌈k/3⌉} N) for k ≥ 3.

Via Theorem 6, we have reduced the problem of deciding a terminating ARRIVAL instance to the problem of finding a fixed point of the monotone function F, assuming that we can efficiently evaluate F. Indeed, if we have such a fixed point g, the corresponding flow-minimal candidate switching flow is the flow-minimal actual switching flow and hence the run profile, by Theorem 4.

The function F depends on a set S ⊆ V that we can choose freely (we will do so in the subsequent sections).

Here, we still need to argue that we can restrict F to a finite set so that the Tarski fixed point theorem applies. We already know that outflow (and hence inflow) values never exceed 2^n in the run profile (Lemma 1), so we simply restrict to this range and at the same time cap the function values accordingly.

Lemma 9.

Let A be a terminating ARRIVAL instance with |V| = n, S ⊆ V, k = |S|. Let F be the function defined in Theorem 6 (ii), let N = 2^n, and consider the function F̂ : {0, …, N}^S → {0, …, N}^S defined by

       F̂(g)_v = min(F(g)_v, N),  v ∈ S.

Then F̂ is monotone and has a fixed point g* that can be found with O(n^k) evaluations of F̂ for k ≤ 2, or O(n^{2⌈k/3⌉}) evaluations of F̂ for k ≥ 3. Moreover, g* is also a fixed point of F, and when we apply Theorem 6 (i) with g = g*, we obtain the run profile of A.


Monotonicity is clear: if g ≤ g′, then F(g) ≤ F(g′) by monotonicity of F; see Theorem 6 (ii). But then F̂(g) ≤ F̂(g′) also holds for the capped values. Hence, the Tarski fixed point theorem [18] yields a fixed point g* of F̂, and the algorithm of Dang, Qi and Ye [4] finds it using O(log^k N) = O(n^k) evaluations. For k ≥ 3, we can use the algorithm by Fearnley and Savani [8] to find the fixed point after O(n^{2⌈k/3⌉}) evaluations.

It remains to prove that g* is a fixed point of F. Suppose for a contradiction that it is not. Then F(g*) ≠ F̂(g*), i.e. some values were actually capped, and so F(g*)_v > g*_v = N for at least one v ∈ S. As we also have F(g*) ≥ F̂(g*) = g*, we get

       Σ_{v∈S} F(g*)_v > Σ_{v∈S} g*_v.   (2)

On the other hand, consider the candidate switching flow (1) with g = g*. At most the total flow emitted (at ō and the vertices of S) is absorbed at S, so we have

       Σ_{v∈S} F(g*)_v ≤ 1 + Σ_{v∈S} g*_v.   (3)

Putting this together with (2), we get equality in (3). In particular, v is the only vertex whose inflow value was capped (by one), all emitted flow is absorbed at S, and no flow arrives at d or d̄.

But this is a contradiction to the latter: By the same arguments as in the proof of Lemma 1, based on flow conservation (at all vertices in V \ S) and switching behavior, one of these outflow units is guaranteed to arrive at d or d̄. ∎
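The cited fixed-point algorithms are nontrivial; as an illustrative stand-in (not the method of Dang, Qi and Ye, nor of Fearnley and Savani), plain bottom-up iteration already finds the least fixed point of a monotone f on {0, …, N}^k, albeit with up to k·N + 1 evaluations instead of polylogarithmically many:

```python
def least_tarski_fixed_point(f, k):
    # Kleene-style iteration from the bottom element (0, ..., 0).
    # For monotone f on a finite lattice {0..N}^k the iterates form an
    # increasing chain, so the loop stops after at most k*N + 1 steps.
    x = (0,) * k
    while True:
        y = f(x)
        if y == x:
            return x
        x = y
```

For the capped function F̂ of Lemma 9, N = 2^n, so this naive search is far too slow; it merely illustrates why a fixed point exists and why monotonicity is the key property.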

4. Subexponential Algorithm for Arrival

In this section, we present our main application of the general framework developed in the previous section.

Given a terminating ARRIVAL instance with |V| = n, the plan is to construct a set S of size roughly √n such that from any vertex, the length of the shortest path in G(A) to a vertex in S ∪ {d, d̄} is bounded by roughly √n·log n. Since S is that small, we can find the run profile with a subexponential number of F̂-evaluations; and since shortest paths are that short, each F̂-evaluation can also be done in subexponential time using the Multi-Run Procedure. An overall subexponential algorithm ensues.

Lemma 10.

Let A be a terminating ARRIVAL instance with |V| = n. Let ψ ≥ 1 be a real number. In O(n) time, we can construct a ψ-set S, meaning a set S ⊆ V such that

  • (i) |S| ≤ n/ψ;

  • (ii) for all v ∈ V, the shortest path from v to S ∪ {d, d̄} in G(A) has length at most 2ψ·ln n + 1.


We adapt the ball-growing technique of Leighton and Rao [15], as explained by Trevisan [19].

We first decompose the switch graph into layers, based on the distance of the vertices to a destination [10]. More formally, for v ∈ V, we denote by ℓ(v) the length of the shortest path from v to a destination in G(A). Then the layers are defined as L_i = {v ∈ V : ℓ(v) = i} for i = 1, …, m, where m = max_{v∈V} ℓ(v). We can compute the layer decomposition using breadth-first search in O(n) time (the switch graph has at most 2n + 1 edges).

Consider the following procedure that computes a ψ-set S as a union of layers:

Input: ARRIVAL instance A with layer decomposition L_1, …, L_m; ψ ≥ 1
Output: a ψ-set S
S ← ∅; B ← ∅
for i = 1, …, m do
       if |L_i| ≤ |B|/ψ then
              S ← S ∪ L_i; B ← ∅
       else
              B ← B ∪ L_i
return S
Algorithm 3 Procedure to compute a ψ-set

It is clear that the procedure runs in O(n) time. To prove (i), we observe that whenever we add a layer L_i to S, we have |L_i| ≤ |B|/ψ; moreover, the B’s considered in these inequalities are mutually disjoint subsets of V. Hence, |S| ≤ n/ψ.

For (ii), let v ∈ V. Then v ∈ L_j for some j. Let i be the largest index i ≤ j such that L_i ⊆ S (with i = 0 if there is none). Then the shortest path from v to a vertex in S ∪ {d, d̄} has length at most j − i. It remains to bound j − i. The interesting case is i < j.

Consider the above algorithm. After the i-th iteration, we have B = ∅. Moreover, |L_{i′}| > |B|/ψ for i < i′ ≤ j, meaning that for each iteration in this range (once B is nonempty), the size of B has grown by a factor of at least 1 + 1/ψ. Hence, after the j-th iteration, |B| ≥ (1 + 1/ψ)^(j−i−1). This implies j − i ≤ 2ψ·ln n + 1, where we use the inequality ln(1 + x) ≥ x/2 for 0 ≤ x ≤ 1. ∎
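The layer decomposition and cutting procedure can be sketched as follows; the helper name psi_set, the plain vertex/edge-list encoding, and the exact cut rule are our reading of Algorithm 3, not verbatim from the paper.

```python
from collections import deque

def psi_set(vertices, edges, destinations, psi):
    # dist[v] = length of a shortest path from v to some destination,
    # computed by BFS on the reversed graph.
    rev = {v: [] for v in list(vertices) + list(destinations)}
    for (u, w) in edges:
        rev[w].append(u)
    dist = {t: 0 for t in destinations}
    queue = deque(destinations)
    while queue:
        w = queue.popleft()
        for u in rev[w]:
            if u not in dist:
                dist[u] = dist[w] + 1
                queue.append(u)
    depth = max(dist.values())
    layers = [[v for v in vertices if dist.get(v) == i] for i in range(1, depth + 1)]
    # Ball growing: cut a layer into S when it is small relative to the
    # ball B grown since the last cut; otherwise the ball keeps growing.
    S, ball = [], 0
    for layer in layers:
        if len(layer) <= ball / psi:
            S.extend(layer)
            ball = 0
        else:
            ball += len(layer)
    return S
```

The trade-off in ψ is visible here: a large ψ cuts rarely (small S, long paths to S), a small ψ cuts often (large S, short paths).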

Theorem 11.

Let A be a terminating ARRIVAL instance with |V| = n. A can be decided (and the run profile be computed) in time 2^O(√n·log n)·p(n), for some polynomial p.


By Lemma 10, we can find a ψ-set S in O(n) time, for any ψ ≥ 1. As k = |S| ≤ n/ψ, by Lemma 9, we can then decide A with n^O(n/ψ) evaluations of the function F̂. Each evaluation in turn requires us to evaluate the function F in Theorem 6 (ii) for a given g. We can do this by applying Algorithm 2 (Multi-Run Procedure). By Lemma 8 and the definition of a ψ-set in Lemma 10, running this algorithm in a greedy fashion requires at most n²·2^L·ln M + 1 iterations, where L ≤ 2ψ·ln n + 1 and M = (1 + Σ_{v∈S} g_v)·(n·2^L + 1). Further, from the choice of g ∈ {0, …, 2^n}^S, we have ln M = O(n). Therefore, the number of iterations is 2^O(ψ·log n)·q(n) for some polynomial q. At each iteration, we need to find the vertex with the highest number of waiting trains, as stated in Lemma 8, and move the trains from the chosen vertex. All these operations take polynomial time (on numbers with O(n) bits).

In total, the runtime of the whole process is 2^O((n/ψ)·log n)·2^O(ψ·log n)·p(n) for some polynomial p. Choosing ψ = √n, the runtime becomes 2^O(√n·log n). ∎

5. Feedback Vertex Sets

In the previous section, we used our framework to obtain an improved algorithm for ARRIVAL in general. In this section, we will instantiate the framework differently to obtain a polynomial-time algorithm for a certain subclass of ARRIVAL.

A subset S of vertices in a directed graph is called a feedback vertex set if and only if the subgraph induced by the remaining vertices is acyclic (i.e. it contains no directed cycle). Karp [13] showed that the problem of finding a smallest feedback vertex set is NP-hard. However, there exists a parameterized algorithm by Chen et al. [1] which can find a feedback vertex set of size k in time 4^k·k!·n^O(1) in a directed graph on n vertices, or report that no such set exists.
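Chen et al.'s algorithm is involved; for intuition only, a brute-force search over all vertex subsets of size at most k (running in n^O(k) time rather than 4^k·k!·n^O(1); all names are of our choosing) looks as follows:

```python
from itertools import combinations

def is_acyclic(vertices, edges):
    # Kahn's algorithm: repeatedly remove vertices of in-degree zero.
    indeg = {v: 0 for v in vertices}
    for (u, w) in edges:
        indeg[w] += 1
    stack = [v for v in vertices if indeg[v] == 0]
    removed = 0
    while stack:
        u = stack.pop()
        removed += 1
        for (a, b) in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    stack.append(b)
    return removed == len(vertices)   # leftovers imply a directed cycle

def feedback_vertex_set(vertices, edges, k):
    # Return some set of at most k vertices whose removal leaves a DAG, or None.
    for size in range(k + 1):
        for cand in combinations(vertices, size):
            rest = [v for v in vertices if v not in cand]
            sub = [(u, w) for (u, w) in edges if u in rest and w in rest]
            if is_acyclic(rest, sub):
                return set(cand)
    return None
```

For constant k this is still polynomial, which is all that the application below needs.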

If we apply Theorem 6 with S being a feedback vertex set, it turns out that we can compute the Multi-Run profile in polynomial time, meaning that we get a polynomial-time algorithm for ARRIVAL if there is a feedback vertex set of constant size k.

Theorem 12.

Let A be a terminating ARRIVAL instance with switch graph G(A). If G(A) has a feedback vertex set of size k (assumed to be constant as n → ∞), then A can be decided in time n^O(k).


Using the algorithm by Chen et al. [1], we can find a feedback vertex set S of size k in 4^k·k!·n^O(1) time if it exists. According to Lemma 9, we can then decide A with O(n^k) evaluations of the function F̂ if k ≤ 2, or with O(n^{2⌈k/3⌉}) evaluations of the function F̂ if k ≥ 3. Each evaluation in turn requires us to evaluate the function F in Theorem 6 (ii) for a given g. To do this, we apply Algorithm 2 (Multi-Run Procedure) where we pick the vertices of V \ S in topological order and choose m = w(v) always. As we never send any trains back to vertices that have previously been picked, we terminate within n iterations, each of which can be performed in polynomial time as it involves numbers with O(n) bits. Hence, F̂(g) can be computed in polynomial time.

In total, for , the runtime is . For , the runtime is . Hence, the claimed runtime holds for all . ∎
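The one-pass idea in the proof above — process the switch vertices of an acyclic graph in topological order, splitting the trains arriving at each vertex between its two successors — can be sketched as follows. The (even, odd) convention, the initial switch positions, and the helper names are assumptions on our part; `graphlib` requires Python 3.9+.

```python
from graphlib import TopologicalSorter  # Python 3.9+

def multi_run_dag(succ, origin, trains):
    """Route `trains` trains through an acyclic switch graph in one pass.

    succ maps each switch vertex v to its (even, odd) successors; the switch
    starts at the even successor and flips per traversal, so of t trains
    entering v, ceil(t/2) leave via the even edge and floor(t/2) via the
    odd one.  Vertices absent from succ are sinks.  Returns, for every
    vertex, the number of trains that ever arrive there.
    """
    # Predecessor sets, the dependency format graphlib expects.
    deps = {v: set() for v in succ}
    for v, (a, b) in succ.items():
        for w in (a, b):
            deps.setdefault(w, set()).add(v)
    arrived = {v: 0 for v in deps}
    arrived[origin] = trains
    # Topological order guarantees all trains for v are known when v is processed.
    for v in TopologicalSorter(deps).static_order():
        t = arrived[v]
        if t and v in succ:
            a, b = succ[v]
            arrived[a] += (t + 1) // 2
            arrived[b] += t // 2
    return arrived
```

Because every vertex is flushed exactly once and trains never return to an already-processed vertex, the pass mirrors the termination argument in the proof.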

We remark that even if is not constant, we can still beat the subexponential algorithm in Section 4, as long as for some .


References

  • [1] J. Chen, Y. Liu, S. Lu, B. O'Sullivan, and I. Razgon (2008) A fixed-parameter algorithm for the directed feedback vertex set problem. J. ACM 55(5), Art. 21, 19 pp.
  • [2] A. Condon (1992) The complexity of stochastic games. Information and Computation 96(2), pp. 203–224.
  • [3] J. Cooper, B. Doerr, J. Spencer, and G. Tardos (2007) Deterministic random walks on the integers. European Journal of Combinatorics 28(8), pp. 2072–2090 (EuroComb '05).
  • [4] C. Dang, Q. Qi, and Y. Ye (2020) Computations and complexities of Tarski's fixed points and supermodular games. arXiv:2005.09836.
  • [5] J. Dohrau, B. Gärtner, M. Kohler, J. Matoušek, and E. Welzl (2017) ARRIVAL: a zero-player graph game in NP ∩ coNP. In A Journey Through Discrete Mathematics, pp. 367–374.
  • [6] J. Fearnley, M. Gairing, M. Mnich, and R. Savani (2018) Reachability switching games. In 45th International Colloquium on Automata, Languages, and Programming (ICALP), LIPIcs Vol. 107, Art. No. 124, 14 pp.
  • [7] J. Fearnley, S. Gordon, R. Mehta, and R. Savani (2020) Unique end of potential line. J. Comput. System Sci. 114, pp. 1–35.
  • [8] J. Fearnley and R. Savani (2020) A faster algorithm for finding Tarski fixed points. arXiv:2010.02618.
  • [9] B. Gärtner, T. D. Hansen, P. Hubáček, K. Král, H. Mosaad, and V. Slívová (2018) ARRIVAL: next stop in CLS. In 45th International Colloquium on Automata, Languages, and Programming (ICALP), LIPIcs Vol. 107, Art. No. 60, 13 pp.
  • [10] B. Gärtner and H. P. Hoang (2021) ARRIVAL with two vertices per layer. Manuscript in preparation.
  • [11] A. E. Holroyd and J. Propp (2010) Rotor walks and Markov chains. In Algorithmic Probability and Combinatorics, Contemp. Math. Vol. 520, pp. 105–126.
  • [12] M. Jurdziński (1998) Deciding the winner in parity games is in UP ∩ co-UP. Information Processing Letters 68(3), pp. 119–124.
  • [13] R. M. Karp (1972) Reducibility among combinatorial problems. In Complexity of Computer Computations, pp. 85–103.
  • [14] C. S. Karthik (2017) Did the train reach its destination: the complexity of finding a witness. Inform. Process. Lett. 121, pp. 17–21.
  • [15] T. Leighton and S. Rao (1999) Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms. J. ACM 46(6), pp. 787–832.
  • [16] V. B. Priezzhev, D. Dhar, A. Dhar, and S. Krishnamurthy (1996) Eulerian walkers as a model of self-organized criticality. Phys. Rev. Lett. 77, pp. 5079–5082.
  • [17] G. Rote (2020) Personal communication.
  • [18] A. Tarski (1955) A lattice-theoretical fixpoint theorem and its applications. Pacific J. Math. 5, pp. 285–309.
  • [19] L. Trevisan (2005) Approximation algorithms for unique games. In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05), pp. 197–205.
  • [20] U. Zwick and M. Paterson (1996) The complexity of mean payoff games on graphs. Theoretical Computer Science 158, pp. 343–359.