1. Introduction
Vector addition systems with states
(VASS) are essentially finite-state systems with vectors of integers as transition weights, as depicted in Fig. 1.
Their semantics, starting from an initial vector of natural numbers, simply adds the weights of the successive transitions componentwise, but the current values must remain nonnegative at all times on every coordinate. For instance, in the three-dimensional system of Fig. 1,
a suitable path witnesses that the target configuration can be reached from the initial one, but a step whose weight would drive some coordinate of the current vector below zero, e.g. the first coordinate, is not a valid execution step.
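To make the step semantics concrete, here is a minimal Python sketch of componentwise addition with the nonnegativity check; the vectors used below are illustrative and are not the ones of Fig. 1.

```python
# Hedged sketch of the VASS step semantics on finite configurations:
# a step adds a transition weight componentwise, and is invalid if any
# coordinate would become negative.

def step(config, weight):
    """Apply one transition weight; return None on an invalid step."""
    result = tuple(c + w for c, w in zip(config, weight))
    return result if all(v >= 0 for v in result) else None

def run(config, weights):
    """Fire a sequence of weights, aborting on the first invalid step."""
    for w in weights:
        config = step(config, w)
        if config is None:
            return None
    return config
```

For instance, `run((1, 0, 0), [(-1, 1, 0), (0, -1, 2)])` yields `(0, 0, 2)`, while `step((0, 1, 1), (-1, 0, 0))` is rejected because of the negative first coordinate.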
Vector addition systems with states are equivalent to Petri nets, and well-suited whenever one needs to model discrete resources, for instance threads in concurrent computations, molecules in chemical reactions, organisms in biological processes, etc. They are also a crucial ingredient in many algorithms. In particular, the decidability of their reachability problem [21, 13, 14, 17] is the cornerstone of many decidability results—see for instance [28, Sec. 5] for a large sample of problems interreducible with VASS reachability in logic, formal languages, verification, etc.
In spite of its relevance to a wide range of problems, the complexity of the VASS reachability problem is still not well understood. Indeed, it turns out that this seemingly simple problem is both conceptually and computationally very complex.
On a conceptual level,
the 1981 decidability proof by Mayr [21] was the culmination of more than a decade of research on the topic and is considered one of the great achievements of theoretical computer science. Both Mayr's decomposition algorithm and its proof are however quite intricate. Kosaraju [13] and Lambert [14] contributed several simplifications of Mayr's original arguments [21], and Leroux and Schmitz [18] recast the decomposition algorithm in a more abstract framework based on well-quasi-order ideals, while Leroux [17] provides a very simple algorithm with a short but non-constructive proof; but none of these developments can be called 'easy', and the problem seems inherently involved.
On a computational level,
on the one hand, the best known lower bound—which was, from 1976 until very recently, EXPSPACE-hardness [19]—is now TOWER-hardness [6]. This new lower bound puts the problem firmly in the realm of non-elementary complexity. In this realm, complexity is measured using the 'fast-growing' complexity classes $(\mathbf{F}_\alpha)_\alpha$ from [27], which form a strict hierarchy indexed by ordinals. The already mentioned TOWER $= \mathbf{F}_3$ corresponds to problems solvable in time bounded by a tower of exponentials; each $\mathbf{F}_k$ for a finite $k$ is primitive-recursive, and $\mathbf{F}_\omega$ corresponds to problems solvable with Ackermannian resources (see Fig. 2). On the other hand, due to the intricacy of the decomposition algorithm, it eluded analysis for a long time until a 'cubic Ackermann' upper bound was obtained in [18] at level $\mathbf{F}_{\omega^3}$, with a slightly improved upper bound in [29].
This leaves a gigantic gap between the known lower and upper bounds. This is however mitigated by the fact that the decomposition algorithm, from which the upper bounds were obtained, provably has a non-primitive-recursive complexity. This was already observed by Müller [22], due to the algorithm's reliance on Karp and Miller trees [12]. Moreover, the full decomposition produced by the algorithm contains more information than just the existence of a reachability witness (which exists if and only if the full decomposition is not empty). For instance, Lambert [14] exploits the full decomposition to derive a pumping lemma for labelled VASS languages, Habermehl et al. [10] further show that one can compute a finite-state automaton recognising the downward-closure of a labelled VASS language with respect to the scattered subword ordering, and Czerwiński et al. [5] show how to exploit the decomposition for deciding language boundedness properties. In particular, the result of Habermehl et al. means that one can decide, given two labelled VASS, whether an inclusion holds between the downward-closures of their languages, which is an ACKERMANN-hard problem [32]. Thus any algorithm that returns such a full decomposition must be non-primitive-recursive.
Contributions.
In this paper, we show that VASS reachability is in $\mathbf{F}_\omega$, and more precisely in $\mathbf{F}_{d+4}$ when the dimension $d$ of the system is fixed. This improvement over the bounds shown in [29] (both in general and in fixed dimension) is obtained by analysing a decomposition algorithm similar to those of Mayr [21], Kosaraju [13], and Lambert [14]. In a nutshell, a decomposition algorithm defines both

a structure, here the KLM sequences of Sec. 3, and

a condition on this structure that ensures there is an execution witnessing reachability (called respectively 'consistent marking', 'property $\theta$', and 'perfectness' in these three works)—see Sec. 4.3.3.

The algorithms compute a decomposition by successive refinements of the structure until the condition is fulfilled, by which time the existence of an execution becomes guaranteed—see Sec. 4.
We work in this paper with a decomposition algorithm quite similar to that of Kosaraju [13], for which the reader will find good expositions for instance in [22, 25, 15]. We benefit however from two key insights (which in turn require significant adaptations throughout the algorithm).
The first key insight is a new termination argument for the decomposition process, based on the dimensions of the vector spaces spanned by the cycles of the structure (see Sec. 3.2). On its own, this new termination argument would already be enough to yield Ackermannian upper bounds, and primitive-recursive ones in fixed dimension.
The second key insight lies within the decomposition process itself, where we show using techniques inspired by Rackoff [24] that we can eschew the computation of Karp and Miller's coverability trees, and therefore the worst-case Ackermannian blowup that arises from their use [3]—see Sec. 4.2.1. In itself, this new decomposition algorithm would not bring the complexity below the previous bounds, but combined with the first insight, it yields rather tight upper bounds, at level $\mathbf{F}_{d+4}$ in fixed dimension $d$—see Sec. 5.
In fact, the new upper bounds apply to other decision problems. As we discuss in Sec. 6, Zetzsche's lower bound [32] can be refined to prove that the inclusion problem between the downward-closures of two labelled VASS languages is hard for levels of the fast-growing hierarchy that grow with the fixed dimension $d$, thus close to matching the upper bound one obtains by applying the results of Habermehl et al. [10] to our decomposition algorithm.
2. Background
Notations
Let $\mathbb{N}_\omega$ extend the set $\mathbb{N}$ of natural numbers with an infinite element $\omega$ satisfying $n < \omega$ for all $n \in \mathbb{N}$. We also use the partial order $\sqsubseteq$ over $\mathbb{N}_\omega$ defined by $m \sqsubseteq n$ if $m = n$ or $n = \omega$.
Let $d \in \mathbb{N}$ be a dimension. The relations $\leq$ and $\sqsubseteq$ are extended componentwise to vectors in $\mathbb{N}_\omega^d$. The components of a vector that are equal to $\omega$ intuitively denote arbitrarily large values; we call a vector in $\mathbb{N}^d$ finite. Given a vector $\mathbf{x} \in \mathbb{N}_\omega^d$ and a subset $I \subseteq \{1, \dots, d\}$ of the components, we denote by $\mathbf{x}_I$ the vector obtained from $\mathbf{x}$ by replacing the components not in $I$ by $\omega$. Note that $\mathbf{x} \sqsubseteq \mathbf{x}_I$ and that $(\mathbf{x}_I)_J = \mathbf{x}_{I \cap J}$ for all $\mathbf{x}$ and $I, J$. For instance, for $d = 2$ and $\mathbf{x} = (3, 5)$, we have $\mathbf{x}_{\{1\}} = (3, \omega)$ and $\mathbf{x} \sqsubseteq \mathbf{x}_{\{1\}} \sqsubseteq \mathbf{x}_\emptyset$. We let $\mathbf{0}$ denote the zero vector and $\boldsymbol{\omega}$ the vector with $\boldsymbol{\omega}(i) = \omega$ for all $1 \leq i \leq d$. Observe that $\mathbf{x}_\emptyset = \boldsymbol{\omega}$ for all $\mathbf{x}$.
For a vector $\mathbf{x} \in \mathbb{N}_\omega^d$, its norm is defined over its finite components as $\|\mathbf{x}\| \stackrel{\text{def}}{=} \sum_{i \,:\, \mathbf{x}(i) \neq \omega} \mathbf{x}(i)$ (a sum over an empty set is zero); for a vector $\mathbf{z} \in \mathbb{Z}^d$, we let as usual $\|\mathbf{z}\| \stackrel{\text{def}}{=} \sum_{1 \leq i \leq d} |\mathbf{z}(i)|$. For instance, $\|(3, \omega)\| = 3$ and $\|\boldsymbol{\omega}\| = 0$.
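The conventions on $\omega$, the order $\sqsubseteq$, and the norm can be mirrored in a short Python sketch; representing $\omega$ by `None` is our choice, not the paper's.

```python
# Hedged sketch of the extended naturals N_omega: OMEGA marks an
# arbitrarily large component, is above every natural in the order,
# and is ignored by the norm (which sums the finite components).
OMEGA = None  # our representation of the infinite element omega

def norm(vec):
    """Sum of the finite components (an empty sum is zero)."""
    return sum(v for v in vec if v is not OMEGA)

def sqsubseteq(x, y):
    """Componentwise partial order: m ⊑ n iff m = n or n = omega."""
    return all(b is OMEGA or a == b for a, b in zip(x, y))
```

For example, `norm((3, OMEGA))` is `3` and `sqsubseteq((3, 5), (3, OMEGA))` holds, matching the definitions above.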
Vector Addition Systems
While we focus in this paper on reachability in vector addition systems with a finite set of control states, we also rely on notations for the simpler case of vector addition systems.
A vector addition system (VAS) [12] of dimension $d$ is a finite set $\mathbf{A} \subseteq \mathbb{Z}^d$ of vectors called actions. The semantics of a VAS is defined over configurations in $\mathbb{N}_\omega^d$. We associate to an action $\mathbf{a} \in \mathbf{A}$ the binary relation $\xrightarrow{\mathbf{a}}$ over configurations by $\mathbf{x} \xrightarrow{\mathbf{a}} \mathbf{y}$ if $\mathbf{y} = \mathbf{x} + \mathbf{a}$, where addition is performed componentwise with the convention that $\omega + z = \omega$ for every $z \in \mathbb{Z}$. Given a finite word $\sigma = \mathbf{a}_1 \cdots \mathbf{a}_k$ of actions we also define the binary relation $\xrightarrow{\sigma}$ over configurations by $\mathbf{x} \xrightarrow{\sigma} \mathbf{y}$ if there exists a sequence $\mathbf{c}_0, \dots, \mathbf{c}_k$ of configurations such that
$\mathbf{x} = \mathbf{c}_0 \xrightarrow{\mathbf{a}_1} \mathbf{c}_1 \xrightarrow{\mathbf{a}_2} \cdots \xrightarrow{\mathbf{a}_k} \mathbf{c}_k = \mathbf{y}$.
The VAS reachability problem consists in deciding, given two finite configurations $\mathbf{x}, \mathbf{y} \in \mathbb{N}^d$ and a VAS $\mathbf{A}$, whether there exists a word $\sigma \in \mathbf{A}^*$ such that $\mathbf{x} \xrightarrow{\sigma} \mathbf{y}$.
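Purely as an illustration of these definitions, the following sketch explores the reachability relation of a VAS breadth-first up to a step bound; it is of course not the decision procedure studied in this paper, since reachability requires no a priori bound.

```python
from collections import deque

def vas_reach_bounded(actions, x, y, max_steps):
    """Breadth-first exploration of the VAS step relation on finite
    configurations, up to max_steps steps; illustrative only."""
    frontier = deque([(x, 0)])
    seen = {x}
    while frontier:
        c, n = frontier.popleft()
        if c == y:
            return True
        if n == max_steps:
            continue
        for a in actions:
            nxt = tuple(ci + ai for ci, ai in zip(c, a))
            # a configuration must stay nonnegative on every coordinate
            if all(v >= 0 for v in nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, n + 1))
    return False
```

For instance, with the single action $(-1, 1)$, the configuration $(0, 2)$ is reached from $(2, 0)$ in two steps, while the converse direction is blocked by the nonnegativity constraint.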
Vector Addition Systems with States
A vector addition system with states (VASS) [11] of dimension $d$ is a tuple $G = (Q, q_{\mathrm{in}}, q_{\mathrm{out}}, T)$ where $Q$ is a nonempty finite set of states, $q_{\mathrm{in}} \in Q$ is the input state, $q_{\mathrm{out}} \in Q$ is the output state, and $T$ is a finite set of transitions in $Q \times \mathbb{Z}^d \times Q$; $A \stackrel{\text{def}}{=} \{\mathbf{a} \mid \exists p, q : (p, \mathbf{a}, q) \in T\}$ is the associated set of actions.
Example 2.1.
Figure 1 depicts a VASS $G$ of dimension 3, whose states and transitions are as shown in the figure.
We focus on VASSes in this paper rather than VASes, because we exploit the properties of their underlying directed graphs. A path in a VASS from a state $p$ to a state $q$ labelled by a word $\sigma = \mathbf{a}_1 \cdots \mathbf{a}_k$ of actions is a word $\pi = t_1 \cdots t_k$ of transitions of $G$ of the form $t_j = (q_{j-1}, \mathbf{a}_j, q_j)$ with $q_0 = p$, $q_k = q$, and $t_j \in T$ for all $1 \leq j \leq k$. Such a path is complete if $p$ and $q$ are the input and output states of $G$. A cycle on a state $q$ is a path from $q$ to $q$.
Example 2.2.
For instance, in Ex. 2.1, the execution presented in the introduction corresponds to a path of $G$ labelled by the word of its successive transition weights, and this path is complete.
We write $p \sim q$ if there exists a path from $p$ to $q$ and a path from $q$ to $p$; this defines an equivalence relation whose equivalence classes are called the strongly connected components of $G$. In Ex. 2.1, the strongly connected components can be read off Fig. 1. A VASS is said to be strongly connected if $Q$ is a strongly connected component of $G$.
The Parikh image of a path $\pi$ is the function $\varphi : T \to \mathbb{N}$ that maps each transition $t$ to its number of occurrences in $\pi$. The displacement of a path $\pi$ labelled by a word $\sigma = \mathbf{a}_1 \cdots \mathbf{a}_k$ of actions is the vector $\Delta(\pi) \stackrel{\text{def}}{=} \sum_{j=1}^{k} \mathbf{a}_j$; note that this is equal to $\sum_{(p, \mathbf{a}, q) \in T} \varphi((p, \mathbf{a}, q)) \cdot \mathbf{a}$ if $\varphi$ is the Parikh image of $\pi$.
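The Parikh image and the displacement, together with the identity relating them, can be checked on small instances with the following sketch; transitions are represented as `(source, action, target)` triples, and the helper names are ours.

```python
from collections import Counter

def parikh(path):
    """Parikh image: maps each transition to its number of occurrences."""
    return Counter(path)

def displacement(path, dim):
    """Displacement: componentwise sum of the action labels on the path."""
    total = [0] * dim
    for (_, action, _) in path:
        for i, v in enumerate(action):
            total[i] += v
    return tuple(total)

def displacement_from_parikh(phi, dim):
    """The same vector, computed from a Parikh image as
    sum over transitions of (occurrence count) * (action)."""
    total = [0] * dim
    for (_, action, _), count in phi.items():
        for i, v in enumerate(action):
            total[i] += count * v
    return tuple(total)
```

On any path the two computations agree, mirroring the note above.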
Example 2.3.
For the example path from Ex. 2.2, the Parikh image and the displacement can be computed directly from the transitions of Fig. 1.
A state-configuration of a VASS is a pair $(q, \mathbf{x}) \in Q \times \mathbb{N}_\omega^d$, denoted by $q(\mathbf{x})$ in the sequel. Given an action $\mathbf{a}$ we define the step relation $\xrightarrow{\mathbf{a}}$ over state-configurations by $p(\mathbf{x}) \xrightarrow{\mathbf{a}} q(\mathbf{y})$ if $(p, \mathbf{a}, q) \in T$ and $\mathbf{x} \xrightarrow{\mathbf{a}} \mathbf{y}$. By extension, given a word $\sigma = \mathbf{a}_1 \cdots \mathbf{a}_k$ of actions, $p(\mathbf{x}) \xrightarrow{\sigma} q(\mathbf{y})$ if there exists a sequence of state-configurations such that
$p(\mathbf{x}) = q_0(\mathbf{c}_0) \xrightarrow{\mathbf{a}_1} q_1(\mathbf{c}_1) \xrightarrow{\mathbf{a}_2} \cdots \xrightarrow{\mathbf{a}_k} q_k(\mathbf{c}_k) = q(\mathbf{y})$.
Notice that $p(\mathbf{x}) \xrightarrow{\sigma} q(\mathbf{y})$ if, and only if, there exists a path in $G$ from $p$ to $q$ labelled by $\sigma$ such that $\mathbf{x} \xrightarrow{\sigma} \mathbf{y}$. In Ex. 2.1, the execution presented in the introduction is such a step sequence. Finally, we write $p(\mathbf{x}) \xrightarrow{*} q(\mathbf{y})$ if there exists $\sigma \in A^*$ such that $p(\mathbf{x}) \xrightarrow{\sigma} q(\mathbf{y})$.
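A sketch of the step relation over state-configurations, restricted to finite configurations and resolving the nondeterminism by collecting all successors; the pair representation of state-configurations is an implementation choice.

```python
def fire_word(transitions, sc, word):
    """All state-configurations reachable from sc by reading word,
    following the step relation of this section; transitions are
    (source, action, target) triples, configurations are finite."""
    current = {sc}
    for a in word:
        nxt = set()
        for (p, x) in current:
            for (src, act, dst) in transitions:
                if src == p and act == a:
                    y = tuple(xi + ai for xi, ai in zip(x, a))
                    if all(v >= 0 for v in y):  # stay nonnegative
                        nxt.add((dst, y))
        current = nxt
    return current
```

Note how a word may lead to several state-configurations when distinct transitions carry the same action.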
Reachability
We focus in this paper on the following decision problem.
Problem 1 (VASS reachability).
input: a VASS $G$ of dimension $d$ and two finite configurations $\mathbf{x}, \mathbf{y} \in \mathbb{N}^d$
question: does $q_{\mathrm{in}}(\mathbf{x}) \xrightarrow{*} q_{\mathrm{out}}(\mathbf{y})$ hold?
The previously mentioned VAS reachability problem reduces to Problem 1: given a VAS $\mathbf{A}$ and two finite configurations $\mathbf{x}, \mathbf{y}$, it suffices to consider Problem 1 with input the one-state VASS $(\{q\}, q, q, \{(q, \mathbf{a}, q) \mid \mathbf{a} \in \mathbf{A}\})$ and the same configurations $\mathbf{x}, \mathbf{y}$. A converse reduction is possible by encoding the states, at the expense of increasing the dimension by three [11].
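The first reduction simply wraps the VAS into a one-state VASS whose transitions are self-loops; a sketch, assuming the tuple representation of VASSes used above.

```python
def vas_to_vass(actions):
    """One-state VASS (Q, q_in, q_out, T) equivalent to a VAS:
    every action of the VAS becomes a self-loop on the unique state,
    which serves as both input and output state."""
    q = 'q'  # the unique state; the name is arbitrary
    transitions = [(q, a, q) for a in actions]
    return ({q}, q, q, transitions)
```

Any word of actions firable in the VAS then labels a complete path of this VASS with the same effect on configurations, and conversely.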
3. Decomposition Structures
The version of the decomposition algorithm we present in Sec. 4 proceeds globally as those of Mayr, Kosaraju, and Lambert, and we call the underlying structures KLM sequences after them.
3.1. KLM Sequences
A KLM sequence of dimension $d$ is a sequence
(1) $\xi = \mathbf{x}_0\, G_0\, \mathbf{y}_0\; \mathbf{a}_1\; \mathbf{x}_1\, G_1\, \mathbf{y}_1\; \cdots\; \mathbf{a}_k\; \mathbf{x}_k\, G_k\, \mathbf{y}_k$
where $\mathbf{x}_0, \mathbf{y}_0, \dots, \mathbf{x}_k, \mathbf{y}_k \in \mathbb{N}_\omega^d$ are configurations, $G_0, \dots, G_k$ are VASSes of dimension $d$, and $\mathbf{a}_1, \dots, \mathbf{a}_k \in \mathbb{Z}^d$ are actions. KLM sequences are essentially the same as Kosaraju's 'generalised VASSes' [13], except that we do not require the $G_i$ to be strongly connected.
The action language of a KLM sequence $\xi$ is the set of words of actions of the form $\sigma_0 \mathbf{a}_1 \sigma_1 \cdots \mathbf{a}_k \sigma_k$ such that $\sigma_i$ is the label of a complete path of $G_i$ for every $0 \leq i \leq k$, and such that there exists a sequence of configurations $\mathbf{c}_0, \mathbf{d}_0, \dots, \mathbf{c}_k, \mathbf{d}_k$ in $\mathbb{N}^d$ such that
(2) $\mathbf{c}_0 \xrightarrow{\sigma_0} \mathbf{d}_0 \xrightarrow{\mathbf{a}_1} \mathbf{c}_1 \xrightarrow{\sigma_1} \mathbf{d}_1 \cdots \xrightarrow{\mathbf{a}_k} \mathbf{c}_k \xrightarrow{\sigma_k} \mathbf{d}_k$
where $\mathbf{c}_i \sqsubseteq \mathbf{x}_i$ and $\mathbf{d}_i \sqsubseteq \mathbf{y}_i$ for every $0 \leq i \leq k$.
Note that the reachability problem for a VASS $G$ and two finite configurations $\mathbf{x}, \mathbf{y}$ reduces to the non-emptiness of the action language of the KLM sequence $\mathbf{x}\, G\, \mathbf{y}$. In fact, in that case, the action language is the set of words $\sigma$ such that $q_{\mathrm{in}}(\mathbf{x}) \xrightarrow{\sigma} q_{\mathrm{out}}(\mathbf{y})$.
Example 3.1.
In Ex. 2.1, the initial and final configurations from the introduction together with the VASS of Fig. 1 form a KLM sequence, whose action language consists of the labels of the complete executions between them.
3.2. Ranks and Sizes
Vector Spaces
We associate to a transition $t$ of a VASS $G$ the vector space $V_t$ spanned by the displacements of the cycles that contain $t$. The following lemma shows that this vector space only depends on the strongly connected components of $G$.
Lemma 3.2.
Let $t$ be a transition of a strongly connected VASS $G$. Then the vector space $V_t$ is equal to the vector space spanned by the displacements of the cycles of $G$.
Proof.
Let $V$ be the vector space spanned by the displacements of the cycles of $G$. Naturally, we have $V_t \subseteq V$. For the converse, let us consider a sequence $\theta_1, \dots, \theta_n$ of cycles such that $\theta_j$ is a cycle on a state $q_j$ for every $1 \leq j \leq n$, and such that $\Delta(\theta_1), \dots, \Delta(\theta_n)$ span the vector space $V$. Since $G$ is strongly connected, there exists a path $\pi_j$ from $q_j$ to $q_{j+1}$ for every $1 \leq j \leq n$, with the convention $q_{n+1} = q_1$. Moreover, we can assume without loss of generality that $t$ occurs in the cycle $\theta \stackrel{\text{def}}{=} \pi_1 \cdots \pi_n$. Let $\rho_j$ be the cycle obtained from $\theta$ by inserting $\theta_j$ in $\theta$ at the state $q_j$, formally defined as $\rho_j \stackrel{\text{def}}{=} \pi_1 \cdots \pi_{j-1}\, \theta_j\, \pi_j \cdots \pi_n$. Observe that $\Delta(\theta)$ and $\Delta(\rho_j)$ are both in $V_t$ since $t$ occurs in the cycles $\theta$ and $\rho_j$. As $\Delta(\rho_j) = \Delta(\theta) + \Delta(\theta_j)$, it follows that $\Delta(\theta_j) \in V_t$. We derive that the vector space spanned by $\Delta(\theta_1), \dots, \Delta(\theta_n)$ is included in $V_t$. Hence $V \subseteq V_t$. ∎
As a corollary, if two transitions $s$ and $t$ belong to the same strongly connected component of a VASS $G$, then $V_s = V_t$.
Ranks
The rank of a VASS $G$ of dimension $d$ is the tuple $\mathrm{rank}(G) \stackrel{\text{def}}{=} (r_d, \dots, r_1)$ where $r_i$ is the number of transitions $t$ such that the dimension of $V_t$ is equal to $i$. The rank of a KLM sequence $\xi$ defined as in (1) is the vector $\mathrm{rank}(\xi) \stackrel{\text{def}}{=} \sum_{i=0}^{k} \mathrm{rank}(G_i)$.
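Computing a rank boils down to computing the dimensions $\dim V_t$, i.e. the dimensions of spaces spanned by integer displacement vectors. A minimal sketch of that linear-algebra step, using exact rational elimination; the displacement vectors in the usage example are made up for illustration.

```python
from fractions import Fraction

def span_dim(vectors):
    """Dimension of the vector space spanned by a family of integer
    vectors, via Gaussian elimination over the rationals (exact)."""
    rows = [[Fraction(v) for v in vec] for vec in vectors]
    ncols = len(rows[0]) if rows else 0
    dim, col = 0, 0
    while dim < len(rows) and col < ncols:
        # find a pivot for the current column among the remaining rows
        pivot = next((r for r in range(dim, len(rows)) if rows[r][col] != 0),
                     None)
        if pivot is None:
            col += 1
            continue
        rows[dim], rows[pivot] = rows[pivot], rows[dim]
        for r in range(len(rows)):
            if r != dim and rows[r][col] != 0:
                f = rows[r][col] / rows[dim][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[dim])]
        dim += 1
        col += 1
    return dim
```

For instance, the displacements $(1,0,0)$, $(0,1,0)$, and $(1,1,0)$ span a space of dimension 2, so a transition whose cycles have exactly these displacements would contribute to the count $r_2$.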