The Polynomial Complexity of Vector Addition Systems with States

07/01/2019
by   Florian Zuleger, et al.

Vector addition systems are an important model in theoretical computer science and have been used in a variety of areas. In this paper, we consider vector addition systems with states over a parameterized initial configuration. For these systems, we are interested in the standard notion of computational complexity, i.e., we want to understand the length of the longest trace for a fixed vector addition system with states depending on the size of the initial configuration. We show that the asymptotic complexity of a given vector addition system with states is either Θ(N^k) for some computable integer k, where N is the size of the initial configuration, or at least exponential. We further show that k can be computed in polynomial time in the size of the considered vector addition system. Finally, we show that 1 ≤ k ≤ 2^n, where n is the dimension of the considered vector addition system.



1 Introduction

Vector addition systems (VASs) [13], which are equivalent to Petri nets, are a popular model for the analysis of parallel processes [7]. Vector addition systems with states (VASSs) [10] are an extension of VASs with a finite control and are a popular model for the analysis of concurrent systems, because the finite control can, for example, be used to model a shared global memory [12]. In this paper, we consider VASSs over a parameterized initial configuration. For these systems, we are interested in the standard notion of computational complexity, i.e., we want to understand the length of the longest execution for a fixed VASS depending on the size of the initial configuration. VASSs over a parameterized initial configuration naturally arise in two areas: 1) For concurrent systems the number of system processes is often not known in advance, and thus the system is designed such that a template process can be instantiated an arbitrary number of times. The parameterized verification problem, i.e., the problem of analyzing the concurrent system for all possible system sizes, is a common theme in the literature [9, 8, 1, 11, 4, 2, 3]. 2) VASSs have been used as backend for the computational complexity analysis of programs [18, 19, 20]. Here, suitable abstractions are applied to a program under analysis in order to derive a VASS. The soundness of the abstraction guarantees that the complexity of the VASS is an upper bound on the complexity of the program under analysis. The VASS needs to be considered over a parameterized initial configuration in order to model the dependence of the computational complexity on the input parameters of the program.

Two recent papers have considered the computational complexity of VASSs over a parameterized initial configuration. [15] presents a PTIME procedure for deciding whether a VASS is polynomial or at least exponential, but does not give a precise analysis in case of polynomial complexity. [5] establishes the precise asymptotic computational complexity for the special case of VASSs whose configurations are linearly bounded in the size of the initial configuration. In this paper, we generalize both results and fully characterize the asymptotic behaviour of VASSs with polynomial complexity: We show that the asymptotic complexity of a given VASS is either Θ(N^k) for some computable integer k, where N is the size of the initial configuration, or at least exponential. We further show that k can be computed in PTIME in the size of the considered VASS. Finally, we show that 1 ≤ k ≤ 2^n, where n is the dimension of the considered VASS.

1.1 Overview and Illustration of Results

We discuss our approach on the VASS stated in Figure 1 (left), which will serve as running example. The VASS has dimension 3 (i.e., the vectors annotating the transitions have dimension 3) and four states. In this paper we will always represent vectors using a set of variables whose cardinality equals the dimension of the VASS; for the running example we use three variables as indices for the first, second and third component of 3-dimensional vectors. The configurations of a VASS are pairs of states and valuations of the variables to non-negative integers. A step of a VASS moves along a transition from the current state to a successor state, and adds the vector labelling the transition to the current valuation; a step can only be taken if the resulting valuation is non-negative. In this paper, we will only consider connected VASSs because non-connected VASSs can be decomposed into strongly-connected components, which can then be analyzed in isolation. For the computational complexity analysis of VASSs, we consider traces (sequences of steps) whose initial configurations consist of a valuation whose maximal value is bounded by N (the parameter used for bounding the size of the initial configuration) and an arbitrary initial state (because of connectivity, a fixed initial state would result in the same computational complexity up to a constant). The computational complexity is then the length of the longest trace whose initial configuration is bounded by N.
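The step semantics just described can be sketched in a few lines of Python. This is a minimal sketch under our own conventions (states as strings, updates as integer tuples); the two-state system below is a hypothetical example, not the running example from Figure 1:

```python
def step(config, transition):
    """Take one step of a VASS, or return None if the step is not enabled.
    A configuration is (state, valuation); a transition is (source, update, target)."""
    state, vals = config
    src, update, tgt = transition
    if state != src:
        return None  # transition starts in a different state
    new_vals = tuple(v + u for v, u in zip(vals, update))
    if any(v < 0 for v in new_vals):
        return None  # a step must keep all components non-negative
    return (tgt, new_vals)

# hypothetical 2-dimensional VASS with states p and q
t1 = ("p", (-1, 1), "q")
t2 = ("q", (0, -1), "p")

c0 = ("p", (2, 0))
c1 = step(c0, t1)  # ("q", (1, 1))
c2 = step(c1, t2)  # ("p", (1, 0))
```

Note that from the configuration ("p", (0, 0)) the transition t1 is not enabled, since the first component would become negative.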

In order to analyze the computational complexity of a considered VASS, our approach computes variable bounds and transition bounds. A variable bound is the maximal value of a variable reachable by any trace whose initial configuration is bounded by N. A transition bound is the maximal number of times a transition appears in any trace whose initial configuration is bounded by N. For the running example, our approach establishes the linear variable bound Θ(N) for two of the variables, and the quadratic bound Θ(N²) for the third. We note that because one variable bound is quadratic and not linear, the running example cannot be analyzed by the procedure of [5]. Our approach further establishes an asymptotically precise bound for every transition; the computational complexity of the VASS is then the maximum of all transition bounds. In general, our main algorithm (Algorithm 1, presented in Section 4) either establishes that the VASS under analysis has at least exponential complexity or computes asymptotically precise variable and transition bounds Θ(N^k), with k computable in PTIME and 1 ≤ k ≤ 2^n, where n is the dimension of the considered VASS. We note that our upper bound also improves on the analysis of [15], which reports an exponential dependence on the number of transitions (and not only on the dimension).

We further state a family of VASSs which illustrates that k can indeed be exponential in the dimension n: the i-th member of the family uses a number of variables and states that grows with i, and its transitions are constructed so that the precise asymptotic bound of some of its variables and transitions is N raised to a power exponential in i (Algorithm 1 can be used to find these bounds). Figure 1 (right) depicts a small member of the family, with the vector components stated in the order of the variables.

Figure 1: the running-example VASS (left) and a member of the family of VASSs witnessing the exponential dependence of k on the dimension (right)

1.2 Related Work

A celebrated result on VASs is the EXPSPACE-completeness [16, 17] of the boundedness problem. Deciding termination for a VAS with a fixed initial configuration can be reduced to the boundedness problem, and is therefore also EXPSPACE-complete; this also applies to VASSs, whose termination problem can be reduced to the VAS termination problem. In contrast, deciding the termination of VASSs for all initial configurations is in PTIME. It is not hard to see that termination over all initial configurations is equivalent to the existence of non-negative cycles (e.g., using Dickson's Lemma [6]). Kosaraju and Sullivan have given a PTIME procedure for the detection of zero-cycles [14], which can easily be adapted to non-negative cycles. The existence of zero-cycles is decided by the repeated use of a constraint system in order to remove transitions that can definitely not be part of a zero-cycle. The algorithm of Kosaraju and Sullivan forms the basis for both cited papers [15, 5], as well as the present paper.
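The connection between non-negative cycles and non-termination can be illustrated on a single candidate cycle: a cycle whose summed update is component-wise non-negative can be repeated forever from a sufficiently large initial valuation. A minimal sketch (the helper names are ours, not from [14]):

```python
def cycle_value(cycle):
    """Component-wise sum of the updates along a cycle, given as a list of
    update vectors (the state sequence is irrelevant for the value)."""
    dim = len(cycle[0])
    return tuple(sum(u[j] for u in cycle) for j in range(dim))

def is_nonnegative_cycle(cycle):
    """A cycle with component-wise non-negative value witnesses
    non-termination from some initial configuration."""
    return all(v >= 0 for v in cycle_value(cycle))

# a cycle that first consumes from the first counter and then refills it
assert is_nonnegative_cycle([(-1, 1), (1, -1)])
# a cycle with a strictly negative net effect on the first counter
assert not is_nonnegative_cycle([(-1, 1), (0, -1)])
```

Of course, the hard part solved by Kosaraju and Sullivan is searching over all (multi-)cycles at once via a constraint system, rather than checking one given cycle.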

A line of work [18, 19, 20] has used VASSs (and their generalizations) as backends for the automated complexity analysis of C programs and given sound algorithms for obtaining safe estimations of variable and transition bounds. These algorithms have been designed for practical applicability, but are not complete, and no theoretical analysis of their precision has been given. We point out, however, that these papers have inspired the Bound Proof Principle in Section 5.

2 Preliminaries

Basic Notation.

For a set I we denote by |I| the number of elements of I. Let G be either ℕ or ℤ. We write G^I for the set of vectors over G indexed by some set I. We write G^{I×J} for the set of matrices over G indexed by I and J. We write a for the vector which has entry a in every component. Given b ∈ G^I, we write b(i) for the entry at line i ∈ I of b, and ‖b‖ = max_{i ∈ I} |b(i)| for the maximum absolute value of b. Given b ∈ G^I and J ⊆ I, we denote by b|_J ∈ G^J the restriction of b to J, i.e., we set b|_J(j) = b(j) for all j ∈ J. Given A ∈ G^{I×J}, we write A(j) for the vector in column j ∈ J of A and A(i, j) for the entry in column j ∈ J and row i ∈ I of A. Given A ∈ G^{I×J} and J' ⊆ J, we denote by A|_{J'} ∈ G^{I×J'} the restriction of A to J', i.e., we set A|_{J'}(j) = A(j) for all j ∈ J'. We write Id for the square matrix which has entries 1 on the diagonal and 0 otherwise. Given a, b ∈ G^I, we write a + b for component-wise addition, c·a for multiplying every component of a by some c ∈ G, and a ≥ b for component-wise comparison. Given A ∈ G^{I×J}, B ∈ G^{J×K} and b ∈ G^J, we write AB for the standard matrix multiplication, Ab for the standard matrix-vector multiplication, A^T for the transposed matrix of A, and b^T for the transposed vector of b.

Vector Addition System with States (VASS).

Let Var be a finite set of variables. A vector addition system with states (VASS) V = (St, Trn) consists of a finite set of states St and a finite set of transitions Trn ⊆ St × ℤ^Var × St; we call n = |Var| the dimension of V. We write s →_d s' to denote a transition (s, d, s') ∈ Trn; we call the vector d the update of the transition. A path π of V is a finite sequence s_0 →_{d_1} s_1 →_{d_2} ⋯ →_{d_k} s_k with s_{i-1} →_{d_i} s_i ∈ Trn for all 1 ≤ i ≤ k. We define the length of π by length(π) = k and the value of π by value(π) = Σ_{1 ≤ i ≤ k} d_i. Let instances(π, t) be the number of times π contains the transition t, i.e., the number of indices i such that t = s_{i-1} →_{d_i} s_i. We remark that length(π) = Σ_{t ∈ Trn} instances(π, t) for every path π of V. Given a finite path π_1 and a path π_2 such that the last state of π_1 equals the first state of π_2, we write π_1π_2 for the path obtained by joining the last state of π_1 with the first state of π_2; we call π_1π_2 the concatenation of π_1 and π_2, and π_1, π_2 a decomposition of π_1π_2. We say π is a sub-path of π', if there is a decomposition π' = π_1ππ_2 for some π_1, π_2. A cycle is a path that has the same start- and end-state. A multi-cycle is a finite set of cycles. The value of a multi-cycle is the sum of the values of its cycles. V is connected, if for every pair of states s, s' ∈ St there is a path from s to s'. A VASS V' = (St', Trn') is a sub-VASS of V, if St' ⊆ St and Trn' ⊆ Trn. Sub-VASSs V_1 and V_2 are disjoint, if their sets of states are disjoint. A strongly-connected component (SCC) of a VASS V is a maximal non-empty sub-VASS V' of V such that V' is connected.
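The path notions above can be made concrete in a short sketch, assuming as before that transitions are triples (source, update, target); the identifiers are ours:

```python
def value(path):
    """Component-wise sum of the updates along a path (a list of transitions)."""
    dim = len(path[0][1])
    total = [0] * dim
    for _src, update, _tgt in path:
        for j, u in enumerate(update):
            total[j] += u
    return tuple(total)

def instances(path, t):
    """Number of times transition t occurs on the path."""
    return sum(1 for trans in path if trans == t)

def is_cycle(path):
    """A cycle starts and ends in the same state."""
    return path[0][0] == path[-1][2]

t1 = ("p", (-1, 1), "q")
t2 = ("q", (1, -1), "p")
pi = [t1, t2, t1]
# length(pi) equals the sum of the instances of all transitions
assert len(pi) == instances(pi, t1) + instances(pi, t2)
assert value(pi) == (-1, 1)
assert not is_cycle(pi)      # pi starts in p but ends in q
assert is_cycle([t1, t2])
```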

Let V = (St, Trn) be a VASS. The set of valuations Val = ℕ^Var consists of Var-vectors over the natural numbers (we assume ℕ includes 0). The set of configurations Cfg = St × Val consists of pairs of states and valuations. A step is a triple ((s, ν), t, (s', ν')) such that t = s →_d s' ∈ Trn and ν' = ν + d. We write (s, ν) →_t (s', ν') to denote a step of V. A trace of V is a finite sequence ζ = (s_0, ν_0) →_{t_1} (s_1, ν_1) →_{t_2} ⋯ →_{t_k} (s_k, ν_k) of steps. We lift the notions of length and instances from paths to traces in the obvious way: we set length(ζ) = length(π) and instances(ζ, t) = instances(π, t), where π is the path that consists of the transitions used by ζ. We denote by ‖ζ‖ = ‖ν_0‖ the maximum absolute value of the starting valuation of ζ. We say that ζ reaches a valuation ν, if ν = ν_k. The complexity of V is the function which returns, for every N, the supremum over the lengths of the traces ζ with ‖ζ‖ ≤ N. The variable bound of a variable x is the function which returns, for every N, the supremum over the values of x reachable by traces ζ with ‖ζ‖ ≤ N. The transition bound of a transition t is the function which returns, for every N, the supremum over the number of instances of t in traces ζ with ‖ζ‖ ≤ N.

Rooted Tree.

A rooted tree is a connected undirected acyclic graph in which one node has been designated as the root. We note that for every node v in a rooted tree there is a unique path from v to the root. The parent of a node v is the node connected to v on the path to the root. Node u is a child of a node v, if v is the parent of u. Node v is a descendent of node u, if u lies on the path from v to the root; v is a strict descendent, if additionally v ≠ u. Node u is an ancestor of v, if v is a descendent of u; u is a strict ancestor, if additionally u ≠ v. The distance of a node v to the root is the number of nodes on the path from v to the root. We denote by layer i the set of all nodes with the same distance i to the root; we note that layer 1 is a singleton set that only contains the root.
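Layers can be computed by a breadth-first traversal; a small sketch (our own helper, following the convention above that the root has distance 1):

```python
from collections import defaultdict, deque

def layers(root, children):
    """Group the nodes of a rooted tree by their distance to the root;
    children maps each node to the list of its children. The distance counts
    the nodes on the path to the root, so the root itself has distance 1."""
    by_distance = defaultdict(list)
    queue = deque([(root, 1)])
    while queue:
        node, d = queue.popleft()
        by_distance[d].append(node)
        for child in children.get(node, []):
            queue.append((child, d + 1))
    return dict(by_distance)

tree = {"root": ["a", "b"], "a": ["c"]}
assert layers("root", tree) == {1: ["root"], 2: ["a", "b"], 3: ["c"]}
```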

All proofs are stated in the appendix.

3 A Duality Result

We will make use of the following matrices associated to a VASS throughout the paper: Let V = (St, Trn) be a VASS. We define the update matrix U ∈ ℤ^{Var×Trn} by setting U(x, t) = d(x) for all transitions t = s →_d s'. We define the flow matrix F ∈ ℤ^{St×Trn} by setting F(s, t) = -1 and F(s', t) = 1 for transitions t = s →_d s' with s ≠ s', and F(s, t) = 0 for transitions t = s →_d s with the same source and target; in both cases we further set F(s'', t) = 0 for all states s'' different from s and s'. We note that every column of F either contains exactly one -1 and one 1 entry (in case the source and target of transition t are different) or only 0 entries (in case the source and target of transition t are the same).
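Both matrices can be built directly from the transition list. A sketch using plain lists of lists (rows of U indexed by variables, rows of F by states; the two-state example system is hypothetical):

```python
def build_matrices(states, variables, transitions):
    """Build the update matrix U (one row per variable, one column per
    transition) and the flow matrix F (one row per state, one column per
    transition) of a VASS whose transitions are (src, update_dict, tgt)."""
    U = [[upd.get(x, 0) for (_s, upd, _t) in transitions] for x in variables]
    F = []
    for s in states:
        row = []
        for (src, _upd, tgt) in transitions:
            if src == tgt:
                row.append(0)    # self-loop: the whole column is 0
            elif s == src:
                row.append(-1)   # transition leaves s
            elif s == tgt:
                row.append(1)    # transition enters s
            else:
                row.append(0)
        F.append(row)
    return U, F

U, F = build_matrices(["p", "q"], ["x"],
                      [("p", {"x": -1}, "q"), ("q", {"x": 2}, "q")])
assert U == [[-1, 2]]
assert F == [[-1, 0], [1, 0]]   # the self-loop column of F is all zeros
```

As a sanity check, every column of F sums to 0, reflecting that each non-self-loop transition leaves one state and enters another.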

Example 1

We state the update and flow matrices U and F for the running example from Section 1. The columns of both matrices are indexed by the transitions of the VASS (from left to right), the rows of U by the variables and the rows of F by the states (from top to bottom); each column of U is the update vector of the corresponding transition, and each column of F marks the source state of the transition with -1 and the target state with 1.

We now consider the constraint systems (I) and (II), stated below, which have maximization objectives. The constraint systems will be used, slightly adapted, by our main algorithm in Section 4. We observe that both constraint systems are always satisfiable (setting all coefficients to zero gives a trivial solution). We further observe that the solutions of both constraint systems are closed under addition. Hence, both constraint systems have a unique optimal solution in terms of the set of inequalities for which the maximization objective is satisfied. The maximization objectives can be implemented by suitable linear objective functions. Hence, both constraint systems can be solved in PTIME over the integers, because we can first obtain rational solutions using linear programming and then scale these solutions to the integers by multiplying with the least common multiple of the denominators.
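The final scaling step can be sketched with Python's exact rationals (a minimal sketch; `math.lcm` requires Python 3.9+):

```python
from fractions import Fraction
from math import lcm

def scale_to_integers(solution):
    """Turn a rational solution into an integer one by multiplying with the
    least common multiple of all denominators; since the solution sets of the
    constraint systems are closed under addition (and hence under positive
    integer scaling), feasibility and the satisfied strict inequalities are
    preserved."""
    m = lcm(*(q.denominator for q in solution))
    return [int(q * m) for q in solution]

assert scale_to_integers([Fraction(1, 2), Fraction(2, 3), Fraction(0)]) == [3, 4, 0]
```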

constraint system (I): there exists v ∈ ℤ^Trn with v ≥ 0, Fv = 0, Uv ≥ 0
Maximization Objective: Maximize the number of inequalities with v(t) > 0 and (Uv)(x) > 0
constraint system (II): there exist r ∈ ℤ^Var, z ∈ ℤ^St with r ≥ 0, z ≥ 0, r^T U + z^T F ≤ 0
Maximization Objective: Maximize the number of inequalities with r(x) > 0 and (r^T U + z^T F)(t) < 0

The solutions of (I) and (II) are characterized by the following two lemmata:

Lemma 1 (Cited from [14])

A vector v is a solution to constraint system (I) iff there exists a multi-cycle M with value(M) ≥ 0 and v(t) instances of transition t for every t ∈ Trn.

Lemma 2 (Cited from [5])

Let (r, z) be a solution to constraint system (II). Let f : Cfg → ℤ be the function f(s, ν) = r^T ν + z(s). Then,

  • for all configurations (s, ν) we have f(s, ν) ≥ 0,

  • for all transitions t = s →_d s' and valuations ν with ν + d ≥ 0 we have f(s', ν + d) ≤ f(s, ν); moreover, the inequality is strict for every t with (r^T U + z^T F)(t) < 0.

We now state a duality between optimal solutions to constraint systems (I) and (II), which will be obtained by an application of Farkas' Lemma. This duality is the main reason why we are able to compute the precise asymptotic complexity of VASSs with polynomial bounds.

Lemma 3

Let (r, z) be an optimal solution to constraint system (II) and let v be an optimal solution to constraint system (I). Then, for all variables x we either have r(x) > 0 or (Uv)(x) > 0, and for all transitions t we either have (r^T U + z^T F)(t) < 0 or v(t) > 0.

4 Main Algorithm

Our main algorithm – Algorithm 1 – computes complexity and variable bounds for a given input VASS V. The algorithm will either detect that V has at least exponential complexity or will compute the precise asymptotic bounds for the transitions and variables of V (up to a constant factor): Algorithm 1 will compute values vb(x) such that the variable bound of x is Θ(N^{vb(x)}) for every variable x, and values tb(t) such that the transition bound of t is Θ(N^{tb(t)}) for every transition t.

Initialization.

We assume U to be the update matrix and F to be the flow matrix associated to V as discussed in Section 3. The algorithm maintains a rooted tree T. At initialization, T consists only of the root node. Every node b of T will always be labelled by a sub-VASS V(b) of V. The nodes in the same layer of T will always be labelled by disjoint sub-VASSs of V. We initialize the label of the root to the input VASS V, i.e., the root is labelled by the input. The main loop of Algorithm 1 will extend T by one layer per loop iteration. The variable i always contains the layer that is going to be processed next; as Algorithm 1 is going to process layer 1 (the root) and add layer 2 to T in the first loop iteration, we initialize i = 1. For computing variable and transition bounds, Algorithm 1 maintains the functions vb and tb. We initialize vb(x) = ⊥ for all variables x and tb(t) = ⊥ for all transitions t.

The constraint systems solved during each loop iteration.

In loop iteration i, Algorithm 1 will set tb(t) = i for some transitions t and vb(x) = i for some variables x. In order to determine those transitions and variables, Algorithm 1 instantiates constraint systems (I) and (II) from Section 3 over the set of transitions Trn_i, which contains all transitions associated to nodes in layer i of T. However, instead of a direct instantiation using U|_{Trn_i} and F|_{Trn_i} (i.e., the restriction of U and F to the transitions Trn_i), we need to work with an extended set of variables and an extended update matrix. We set Var_i = {x_b | x ∈ Var, b ∈ layer i}. This means that we use a different copy x_b of variable x for every node b in layer i. We note that for every variable x there is only a single copy of x in Var_1, because the root is the only node in layer 1. We define the extended update matrix U_i ∈ ℤ^{Var_i×Trn_i} by setting U_i(x_b, t) = U(x, t) if transition t belongs to the sub-VASS V(b), and U_i(x_b, t) = 0 otherwise.

Constraint systems (I_i) and (II_i) stated in Figure 2 can be recognized as instantiations of constraint systems (I) and (II) with matrices U_i and F|_{Trn_i} and variables Var_i, and hence the duality stated in Lemma 3 holds. We explain key properties of constraint system (II_i) and discuss the choice of U_i in Section 5, when we outline the proof of the upper bound. We explain key properties of constraint system (I_i) in Section 6, when we outline the proof of the lower bound.

We note that Algorithm 1 does not use the optimal solution v_i to constraint system (I_i) for the computation of the vb(x) and tb(t), and hence the computation of this optimal solution could be removed from the algorithm. The solution v_i is, however, needed for the extraction of lower bounds in Sections 6 and 8, and this is the reason why it is stated here. The extraction of lower bounds is not explicitly added to the algorithm in order not to clutter the presentation.

Discovering transition bounds.

After an optimal solution (r_i, z_i) to constraint system (II_i) has been found, Algorithm 1 collects all transitions t ∈ Trn_i with (r_i^T U_i + z_i^T F|_{Trn_i})(t) < 0 in a set D_i (note that the optimization criterion in constraint system (II_i) tries to find as many such t as possible). Algorithm 1 then sets tb(t) = i for all t ∈ D_i. The transitions in D_i will not be part of layer i + 1 of T.

Construction of the next layer in .

For each node b in layer i, Algorithm 1 will create children by removing the transitions in D_i. This is done as follows: Given a node b in layer i, Algorithm 1 considers the VASS V' obtained from the VASS V(b) associated to b by removing the transitions in D_i. Then, V' is decomposed into its SCCs. Finally, for each SCC W of V' a child b' of b is created with V(b') = W. Clearly, the new nodes in layer i + 1 are labelled by disjoint sub-VASSs of V.
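The child-creation step amounts to removing the discovered transitions and decomposing what remains into SCCs. A self-contained sketch of the decomposition using Kosaraju's two-pass algorithm (the names are ours, and the graph below is a hypothetical four-state control graph after transition removal):

```python
def sccs(nodes, succ):
    """Strongly connected components of a directed graph, computed with
    Kosaraju's two-pass algorithm; succ maps a node to its successors."""
    def dfs(v, adj, seen, out):
        # iterative depth-first search, appending nodes in post-order
        stack = [(v, iter(adj.get(v, [])))]
        seen.add(v)
        while stack:
            u, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(adj.get(w, []))))
                    break
            else:
                stack.pop()
                out.append(u)
    # pass 1: post-order on the original graph
    order, seen = [], set()
    for v in nodes:
        if v not in seen:
            dfs(v, succ, seen, order)
    # build the reversed graph
    pred = {}
    for u, vs in succ.items():
        for v in vs:
            pred.setdefault(v, []).append(u)
    # pass 2: DFS on the reversed graph in reverse post-order
    seen, components = set(), []
    for v in reversed(order):
        if v not in seen:
            comp = []
            dfs(v, pred, seen, comp)
            components.append(frozenset(comp))
    return components

# a 4-state graph with two SCCs: {a, b} and {c, d}
succ = {"a": ["b"], "b": ["a", "c"], "c": ["d"], "d": ["c"]}
assert set(sccs(["a", "b", "c", "d"], succ)) == {frozenset("ab"), frozenset("cd")}
```

Each returned component would then label one child node in the next layer of T.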

The transitions of the next layer.

We claim that the new layer i + 1 of T contains all transitions of layer i except for the transitions in D_i, i.e., Trn_{i+1} = Trn_i \ D_i. By Lemma 1 there is a multi-cycle M with v_i(t) instances of every transition t ∈ Trn_i. By Lemma 3 we have Trn_i \ D_i = {t ∈ Trn_i | v_i(t) > 0}. Hence, Trn_i \ D_i is the set of transitions that appear in the multi-cycle M. Because the sub-VASSs V(b) of the nodes b in layer i are disjoint, the transitions of every cycle of M belong to only a single sub-VASS V(b) for some b in layer i. Now we consider some node b in layer i. Let V' be the VASS obtained from V(b) by removing the transitions in D_i. Clearly, every cycle of M whose transitions belong to V(b) must be part of an SCC of V' (recall that the SCCs are the maximal strongly-connected sub-VASSs). Now the claim follows, because for every SCC W of V' there is a node b' in layer i + 1 with V(b') = W.

Discovering variable bounds.

For each variable x with vb(x) = ⊥, Algorithm 1 checks whether r_i(x_b) > 0 holds for all nodes b in layer i (we point out that the optimization criterion in constraint system (II_i) tries to find as many copies x_b with r_i(x_b) > 0 as possible). Algorithm 1 then sets vb(x) = i for all those variables.

The check for exponential complexity.

In each loop iteration, Algorithm 1 checks whether it has made progress, i.e., whether there are variables x or transitions t whose bound was set to i in the current iteration. If there are none, then we can conclude that the complexity of V is at least exponential (see Theorem 4.1 below) and Algorithm 1 returns. Otherwise, Algorithm 1 increments i and continues with the construction of the next layer in the next loop iteration.

Termination criterion.

The algorithm proceeds until either exponential complexity has been detected or until vb(x) ≠ ⊥ and tb(t) ≠ ⊥ for all variables x and transitions t (i.e., bounds have been computed for all variables and transitions).

Input: a connected VASS V with update matrix U and flow matrix F
T := single root node with label V;
i := 1;
vb(x) := ⊥ for all variables x;
tb(t) := ⊥ for all transitions t;
repeat
       let Trn_i := set of transitions associated to the nodes in layer i of T;
       let Var_i := {x_b | x ∈ Var, b ∈ layer i}, where x_b is a copy of x for each node b;
       let U_i be the matrix defined by
                         U_i(x_b, t) := U(x, t) if t belongs to V(b), and U_i(x_b, t) := 0 otherwise;
       find optimal solutions v_i and (r_i, z_i) to constraint systems (I_i) and (II_i);
       let D_i := {t ∈ Trn_i | (r_i^T U_i + z_i^T F|_{Trn_i})(t) < 0};
       set tb(t) := i for all t ∈ D_i;
       foreach node b in layer i do
              let V' be the VASS obtained from V(b) by removing the transitions in D_i;
              decompose V' into SCCs;
              foreach SCC W of V' do
                    create a child b' of b with V(b') := W;
                   
             
       foreach x ∈ Var with vb(x) = ⊥ do
              if r_i(x_b) > 0 for all nodes b in layer i then  set vb(x) := i;
             
       if there are no x ∈ Var, t ∈ Trn_i whose bound was set to i then
              return "V has at least exponential complexity"
       i := i + 1;
       
until vb(x) ≠ ⊥ and tb(t) ≠ ⊥ for all variables x and transitions t;
Algorithm 1 Computes transition and variable bounds for a VASS V
constraint system (I_i): there exists v ∈ ℤ^{Trn_i} with v ≥ 0, F|_{Trn_i} v = 0, U_i v ≥ 0
Maximization Objective: Maximize the number of inequalities with v(t) > 0 and (U_i v)(x_b) > 0
constraint system (II_i): there exist r ∈ ℤ^{Var_i}, z ∈ ℤ^St with r ≥ 0, z ≥ 0, r^T U_i + z^T F|_{Trn_i} ≤ 0
Maximization Objective: Maximize the number of inequalities with r(x_b) > 0 and (r^T U_i + z^T F|_{Trn_i})(t) < 0
Figure 2: Constraint Systems (I_i) and (II_i) used by Algorithm 1

Invariants.

We now state some simple invariants maintained by Algorithm 1, which are easy to verify:

  • For every node b' that is a descendent of some node b we have that V(b') is a sub-VASS of V(b).

  • The value of vb(x) resp. tb(t) is changed at most once for each input; when the value is changed, it is changed from ⊥ to some value i ≥ 1.

  • For every transition t and layer i of T, we have that either tb(t) ≠ ⊥ or there is a node b in layer i such that t belongs to V(b).

  • We have t ∈ Trn_i if and only if there is a node b in layer i with t belonging to V(b) and there is no j < i with tb(t) = j.