1 Introduction
Static analysis for quantitative bounds. Static analysis of programs reasons about programs without running them. The most basic and important problem about liveness properties studied in program analysis is the termination problem that given a program asks whether it always terminates. The above problem seeks a qualitative or Boolean answer. However, given the recent interest in analysis of resourceconstrained systems, such as embedded systems, as well as for performance analysis, it is vital to obtain quantitative performance characteristics. In contrast to the qualitative termination, the quantitative termination problem asks to obtain bounds on the number of steps to termination. The quantitative problem, which is more challenging than the qualitative one, is of great interest in program analysis in various domains, e.g., (a) in applications domains such as hard realtime systems, worstcase guarantees are required; and (b) the bounds are useful in early detection of egregious performance problems in large code bases [34].
Approaches for quantitative bounds. Given the importance of the quantitative termination problem, significant research effort has been devoted to it, including important projects such as SPEED and COSTA [34, 35, 1]. Some prominent approaches are the following:

The worst-case execution time (WCET) analysis is an active field of research on its own (with primary focus on sequential loop-free code and hardware aspects) [69].

Advanced program-analysis techniques have also been developed for asymptotic bounds, such as resource analysis using abstract interpretation and type systems [35, 1, 45, 36, 37], e.g., linear invariant generation to obtain disjunctive and non-linear upper bounds [19], or potential-based methods [36, 37].
In summary, the WCET approach does not consider asymptotic bounds, while the other approaches consider asymptotic bounds and present sound, but not complete, methods for upper bounds.
VASS and their modeling power. Vector Addition Systems (VASs) [50], or equivalently Petri nets, are fundamental models for the analysis of parallel processes [25]. Enriching VASs with an underlying finite-state transition structure gives rise to Vector Addition Systems with States (VASS). Intuitively, a VASS consists of a finite set of control states and transitions between the control states, and a set of counters that hold nonnegative integer values, where at every transition between the control states each counter is incremented or decremented by a constant amount. VASS are a fundamental model for concurrent processes [25], and thus are often used for performing analysis of such processes [22, 30, 48, 49]. Besides that, VASS have been used as models of parametrized systems [6], as abstract models of programs for bounds and amortized analysis [66], as well as models of interactions between components of an API in component-based synthesis [27]. Thus VASS provide a rich modeling framework for a wide class of problems in program analysis.
Previous results for VASS. For a VASS, a configuration is a control state along with the values of the counters. The termination problem for VASS can be defined as follows: (a) counter termination, where the VASS terminates when one of the counters reaches the value 0; (b) control-state termination, where, given a set of terminating control states, the VASS terminates when one of the terminating states is reached. The termination question for VASS, given an initial configuration, asks whether all paths from the configuration terminate. The counter-termination problem is known to be EXPSPACE-complete: the EXPSPACE-hardness is shown in [56, 23], and the upper bound follows from [71, 5, 26].
Asymptotic bounds analysis for VASS. While the qualitative termination problem has been studied extensively for VASS, the problem of quantitative bounds for the termination problem has received much less attention. In general, even for VASS whose termination can be guaranteed, the number of steps required to terminate can be nonelementary (a tower of exponentials) in the magnitude of the initial configuration (i.e., in the maximal counter value appearing in the configuration). For practical purposes, bounds such as nonelementary or even exponential are too high as asymptotic complexity bounds, and the relevant complexity bounds are the polynomial ones. In this work we study the problem of computing asymptotic bounds for VASS, focusing on polynomial asymptotic bounds. Given a VASS and a configuration, let n denote the maximum value of the counters in the configuration. If all paths starting from all configurations terminate, then let L(n) denote the worst-case termination time over configurations with maximal counter value n (i.e., the maximum number of steps till termination among all paths starting from such configurations). The quantitative termination problem with polynomial asymptotic bound, given a VASS and an integer k ≥ 1, asks whether the asymptotic worst-case termination time is at most a polynomial of degree k, i.e., whether there exists a constant c such that for all n we have L(n) ≤ c·n^k. Note that with k = 1 (resp., k = 2, 3) the problem asks for asymptotic linear (resp., quadratic, cubic) bounds on the worst-case termination time. The asymptotic bound problem is rather different from the qualitative termination problem for VASS, and even the decidability of this problem is not obvious.
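In symbols, writing L for the worst-case termination time as above (the notation is introduced formally in the preliminaries), the decision problem reads:

```latex
% Quantitative termination with polynomial asymptotic bound of degree k:
\exists\, c > 0 \;\; \forall\, n \in \mathbb{N} : \quad
  L(n) \;\le\; c \cdot n^{k}
\qquad \text{i.e., } \; L \in O(n^{k}).
```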
Limitations of the previous approaches for polynomial bounds for VASS. In the analysis of asymptotic bounds there are three key aspects, namely, (a) soundness, (b) completeness, and (c) precise (or tight complexity) bounds. For asymptotic bounds, previous approaches (such as ranking functions, potential-based methods, etc.) are sound (but not complete) for upper bounds. In other words, if the approaches obtain linear, quadratic, or cubic bounds, then such bounds are guaranteed as asymptotic upper bounds (i.e., soundness is guaranteed); however, even if the asymptotic bound is linear or quadratic, the approaches may fail to obtain any asymptotic upper bound (i.e., completeness is not guaranteed). Another approach that has been considered for complexity analysis of programs is lexicographic ranking functions [4]. We show that with respect to polynomial bounds lexicographic ranking functions are not sound, i.e., there exist VASS for which a lexicographic ranking function exists but the asymptotic complexity is exponential (see Example 4.11). Finally, none of the existing approaches are applicable for tight complexity bounds, i.e., the approaches consider upper (big-O) bounds and are not applicable for precise (Θ) bounds. In summary, previous approaches do not provide a sound and complete method for polynomial asymptotic complexity of VASS, and no approach provides techniques for precise complexity analysis.
Our contributions. Our main contributions are related to the complexity of quantitative termination with polynomial asymptotic bounds for VASS; our results apply to counter termination.

We start with the important special case of linear asymptotic bounds. We present the first sound and complete algorithm that decides linear asymptotic bounds for all VASS. Moreover, our algorithm is efficient: it has polynomial time complexity. This contrasts sharply with the EXPSPACE-hardness of the qualitative termination problem and shows that deciding fast (linear) termination, which seems even more relevant for practical purposes, is computationally easier than deciding qualitative termination.

Next, we turn our attention to polynomial asymptotic bounds. For simplicity, we restrict ourselves to VASS where the underlying finite-state transition structure is strongly connected (see Section 7 for more comments). Given such a VASS, for every short cycle (a cycle is short if its length is bounded by the number of control states of the given VASS), the effect of executing the short cycle once can be represented as a d-dimensional vector, an analogue of a loop summary (ignoring any nested subloops) for classical programs. We call these short-cycle effects increments, and we investigate the geometric properties of the set of all increments to derive complexity bounds on the termination time. The property playing a key role is whether all cycle effects lie on one side of some hyperplane in R^d. Formally, each hyperplane is uniquely determined by its normal vector n (a vector perpendicular to the hyperplane), and the hyperplane defined by n covers a vector effect u if n·u ≤ 0, where "·" is the dot product of vectors. Geometrically, the hyperplane defined by n splits the whole d-dimensional space into two halves such that the normal n points into one of the halves, and its negative −n points into the other half. The hyperplane then "covers" a vector u if −n points into the same (closed) half as u. Depending on the properties of the set of all normals that cover all cycle effects, we can distinguish the following cases:

(A) No normal: no covering normal exists (Fig. 0(a));

(B) Negative normal: covering normals exist, but all of them have a negative component (Fig. 0(b));

(C) Positive normal: there exists a covering normal whose components are all positive (Fig. 0(c));

(D) Singular normal: (C) does not hold, but there exists a covering normal whose components are all nonnegative (in which case some component of the normal is zero, Fig. 0(d)).
Figure 1: Classification of VASS into 4 subclasses according to the geometric properties of the vectors of cycle effects, pictured on 2D examples. Each figure pictures (as red arrows) the vectors of simple-cycle effects in some VASS (for each figure, it is easy to construct a VASS whose simple-cycle effects are exactly those pictured). The green dashed line, if present, represents the hyperplane (in 2D, a line) covering the set of cycle effects. The thick blue arrow represents the normal defining the covering hyperplane. The pink shaded area represents the cone generated by the cycle effects (see Section 2.3). Intuitively, we seek hyperplanes that do not intersect the interior of the cone (but can touch its boundary).

First, we observe that given a VASS, we can decide to which of the above categories it belongs, in time polynomial in the number of control states of the given VASS for every fixed dimension (i.e., the algorithm is exponential only in the dimension d; see Section 2.2 for more comments). Second, we also show that if a VASS belongs to one of the first two categories, then there exist configurations with nonterminating runs from them (see Theorem 4.2). Hence asymptotic bounds are not applicable to the first two categories, and we focus on the last two categories for polynomial asymptotic bounds.
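For intuition, the covering condition and the four-way classification can be checked mechanically. The sketch below brute-forces candidate normals on a grid, which is adequate for these small 2D illustrations only; the decision procedure described later in the paper uses linear programming instead. All concrete vectors are our own illustrative choices, not the ones pictured in the figures.

```python
import itertools

def dot(n, u):
    return sum(a * b for a, b in zip(n, u))

def covers(n, effects):
    # A normal n covers all cycle effects if n . u <= 0 for every effect u.
    return all(dot(n, u) <= 0 for u in effects)

def classify_2d(effects, step=0.05):
    """Classify a 2D set of cycle effects into categories (A)-(D) by
    brute-forcing candidate normals on a grid (demonstration only)."""
    found_any = found_nonneg = False
    grid = [i * step for i in range(-40, 41)]
    for n in itertools.product(grid, grid):
        if n == (0.0, 0.0) or not covers(n, effects):
            continue
        found_any = True
        if all(c > 0 for c in n):
            return "C"                      # positive covering normal
        if all(c >= 0 for c in n):
            found_nonneg = True             # singular normal candidate
    if found_nonneg:
        return "D"
    return "B" if found_any else "A"

print(classify_2d([(1, 0), (0, 1), (-1, -1)]))   # A: no covering normal
print(classify_2d([(1, 1), (0, -1)]))            # B: only normals with a negative component
print(classify_2d([(1, -1), (-1, 1), (0, -1)]))  # C: e.g. the positive normal (1, 1)
print(classify_2d([(1, 0), (0, -1)]))            # D: e.g. the singular normal (0, 1)
```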


For the positive-normal category (C) we show that either there exist nonterminating runs or else the worst-case termination time is of the form Θ(n^k), where k is an integer with 1 ≤ k ≤ d. We show that given a VASS in this category, we can first decide whether all runs are terminating, and if yes, then we can compute the optimal asymptotic polynomial degree k such that the worst-case termination time is Θ(n^k) (see Theorem 4.8). Again, this is achievable in time polynomial in the number of control states of the given VASS for every fixed dimension. In other words, for this class of VASS we present an efficient approach that is sound, complete, and obtains precise polynomial complexity bounds. To the best of our knowledge, no previous work presents a complete approach for asymptotic complexity bounds for VASS, and the existing techniques only consider upper (big-O) bounds, not precise (Θ) bounds.

We show that singularities in the normal are the key reason for complex asymptotic bounds in VASS. More precisely, for VASS falling into the singular-normal category (D), the asymptotic bounds are in general not polynomial, and we show that (a) by slightly adapting the results of [58], it follows that the termination complexity of a VASS in category (D) cannot, in general, be bounded by any primitive recursive function in the size of the VASS; (b) even with three dimensions, the asymptotic bound is exponential in general (see Example 4.9); (c) even with four dimensions, the asymptotic bound is nonelementary in general (see Example 4.10).
The main technical contribution of this paper is a novel geometric approach, based on hyperplane separation techniques, for asymptotic time complexity analysis of VASS. Our methods are sound for arbitrary VASS and complete for a nontrivial subclass.
2 Preliminaries
2.1 Basic Notation
We use N, Q, and R to denote the sets of nonnegative integers, rational numbers, and real numbers. The subsets of all positive elements of N, Q, and R are denoted by N+, Q+, and R+. Further, we use N∞ to denote the set N ∪ {∞}, where ∞ is treated according to the standard conventions. The cardinality of a given set M is denoted by |M|. When no confusion arises, we also use |x| to denote the absolute value of a given x ∈ R.
Given a function f: N → N, we use O(f) and Ω(f) to denote the sets of all g: N → N such that g(n) ≤ a·f(n) and g(n) ≥ b·f(n), respectively, for all sufficiently large n, where a, b > 0 are some constants. If g ∈ O(f) and g ∈ Ω(f), we write g ∈ Θ(f).
Let d ≥ 1. The elements of R^d are denoted by bold letters such as u, v, z. The i-th component of v is denoted by v(i), i.e., v = (v(1), …, v(d)). For every n ∈ N, we use bold n to denote the constant vector whose components all equal n. The scalar product of u and v is denoted by u·v, i.e., u·v = u(1)·v(1) + … + u(d)·v(d). The other standard operations and relations on R, such as +, ≤, and <, are extended to R^d componentwise. In particular, v is positive if v > 0, i.e., all components of v are positive. The norm of v is defined by ‖v‖ = max{ |v(i)| : 1 ≤ i ≤ d }.
Halfspaces and Cones.
An open half-space of R^d determined by a normal vector n ∈ R^d, where n ≠ 0, is the set of all x ∈ R^d such that n·x < 0. A closed half-space is defined in the same way, but the above inequality is non-strict. Given a finite set of vectors U = {u_1, …, u_k}, we use cone(U) to denote the set of all vectors of the form c_1·u_1 + … + c_k·u_k, where c_i is a nonnegative real constant for every 1 ≤ i ≤ k.
Example 2.1.
In Fig. 1, the cone generated by the cycle effects (i.e., by the "red" vectors), or more precisely the part of the cone that intersects the displayed area of R^2, is the pink-shaded area. As for the half-spaces, e.g., in Fig. 0(d), the closed half-space determined by the pictured normal consists of all vectors whose scalar product with the normal is nonpositive, while the open half-space determined by the same normal consists of the vectors whose scalar product with it is strictly negative. Intuitively, each normal vector n determines a hyperplane (pictured by dashed lines in Fig. 1) that cuts R^d in two halves, and the half-space is the half which does not contain n: depending on whether we are interested in the closed or the open half-space, we include the separating hyperplane into it or not, respectively.
2.2 Syntax and semantics of VASS
In this subsection we present the syntax of VASS, represented as finite-state graphs with transitions labelled by vectors of counter changes.
Definition 2.2.
Let d ≥ 1. A d-dimensional vector addition system with states (VASS) is a pair A = (Q, T), where Q is a finite set of states and T ⊆ Q × {−1, 0, 1}^d × Q is a set of transitions.
Example 2.3.
In some cases, we design algorithms whose time complexity is not polynomial in the size of A, but polynomial in |Q| and exponential just in d. Then, we say that the running time is polynomial in |Q| for a fixed d.
We use a simple operational semantics for VASS, based on the view of VASS as finite-state machines augmented with nonnegative integer-valued counters.
A configuration of A is a pair pv, where p ∈ Q and v ∈ N^d. The set of all configurations of A is denoted by conf(A). The size of pv is defined as max{ v(i) : 1 ≤ i ≤ d }, i.e., the maximal counter value in v.
A finite path in A of length k is a finite sequence of the form p_0, u_1, p_1, u_2, …, u_k, p_k, where (p_{i−1}, u_i, p_i) ∈ T for all 1 ≤ i ≤ k. If p_0 = p_k, then the path is a cycle. A cycle is short if its length is at most |Q|. The effect of a path α, denoted by eff(α), is the sum u_1 + … + u_k. Given two finite paths α and β such that the last state of α equals the first state of β, we use αβ to denote the finite path obtained by concatenating α and β.
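The effect of a path is just the componentwise sum of the update vectors along it; a minimal sketch (paths are encoded here simply as lists of update vectors, our own choice):

```python
def effect(updates):
    """Effect of a path, given as the list of its d-dimensional update vectors."""
    d = len(updates[0])
    eff = [0] * d
    for u in updates:
        for i in range(d):
            eff[i] += u[i]
    return tuple(eff)

# A cycle alternating the updates (1, -1) and (-1, 1) has zero effect,
# while appending a (0, -1) step makes the overall effect negative:
print(effect([(1, -1), (-1, 1)]))           # (0, 0)
print(effect([(1, -1), (-1, 1), (0, -1)]))  # (0, -1)
```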
Let α be a finite path in A. A decomposition of α into short cycles, denoted by decomp(α), is a finite list of short cycles (repetitions allowed) defined recursively as follows. (A standard technique for analysing paths in VASS is decomposition into simple cycles, where all states except the first and the last one are pairwise different. The reason why we use short cycles instead of simple ones is clarified in Lemma 2.5.)

If α does not contain any short cycle, then decomp(α) is the empty list.

If α = βγδ, where γ is the first short cycle occurring in α, then decomp(α) is the list consisting of γ followed by decomp(βδ) (i.e., the two lists are concatenated).
Observe that if decomp(α) is the empty list, then the length of α is at most |Q| − 1. Since the length of every short cycle is bounded by |Q|, the length of α is asymptotically the same as the number of elements of decomp(α), assuming a fixed VASS A.
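The recursive definition can be turned into code directly. The sketch below works on the sequence of visited states (ignoring update vectors), repeatedly splicing out the first short cycle; "first" is read here as the cycle with the earliest end point, which is one natural reading of the definition above.

```python
def decompose(states, q):
    """Decompose a path, given as its sequence of visited states, into short
    cycles (length at most q, the number of control states). Returns the
    list of short cycles and the residual path containing no short cycle."""
    path = list(states)
    cycles = []
    while True:
        found = None
        for j in range(1, len(path)):
            for i in range(max(0, j - q), j):
                if path[i] == path[j]:      # a cycle of length j - i <= q
                    found = (i, j)
                    break
            if found:
                break
        if found is None:
            return cycles, path
        i, j = found
        cycles.append(path[i:j + 1])
        del path[i:j]                       # splice the cycle out

cycles, rest = decompose(["p", "q", "p", "q", "p"], 2)
print(cycles)   # [['p', 'q', 'p'], ['p', 'q', 'p']]
print(rest)     # ['p']
```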
Given a path α = p_0, u_1, …, u_k, p_k and an initial configuration p_0 v_0, the execution of α in p_0 v_0 is the finite sequence of configurations p_0 v_0, p_1 v_1, …, p_k v_k, where v_i = v_{i−1} + u_i for all 1 ≤ i ≤ k. If v_i ≥ 0 (componentwise) for all 1 ≤ i ≤ k, we say that α is executable in p_0 v_0.
2.3 Termination Complexity of VASS
A zero-avoiding computation of length k initiated in a configuration p_0 v_0 is a finite sequence of configurations p_0 v_0, p_1 v_1, …, p_k v_k such that all counters are positive in every configuration except possibly the last one, and for each 1 ≤ i ≤ k there is a transition (p_{i−1}, u_i, p_i) ∈ T such that v_i = v_{i−1} + u_i. Every zero-avoiding computation initiated in p_0 v_0 determines a unique finite path α in A such that the computation is the execution of α in p_0 v_0.
Definition 2.4.
Let A be a d-dimensional VASS. For every configuration pv of A, let L(pv) be the least ℓ ∈ N∞ such that the length of every zero-avoiding finite computation initiated in pv is bounded by ℓ. The termination complexity of A is a function L: N → N∞ defined by L(n) = max{ L(pv) : pv ∈ conf(A) and the size of pv is n }.
If L(n) = ∞ for some n ∈ N, we say that A is nonterminating; otherwise it is terminating.
Observe that if A is nonterminating, then L(n) = ∞ for all sufficiently large n. Further, if A is terminating, then L(n) ∈ Ω(n), because every transition changes each counter by at most one. In particular, if L ∈ O(n), we also have L ∈ Θ(n).
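For very small instances, L(pv) can be computed by exhaustive search. The sketch below memoizes over configurations; the example VASS (two states with compensating self-loops and switch transitions) is our own toy, a "zigzag" pattern of the kind discussed later in the paper, whose termination complexity is quadratic. This blows up quickly and is for intuition only.

```python
from functools import lru_cache

def longest_zero_avoiding(transitions, state, counters):
    """Length of the longest zero-avoiding computation from (state, counters):
    counters stay positive in every configuration except possibly the last
    one. Assumes the reachable configuration graph is acyclic, which holds
    for the terminating toy VASS below."""
    @lru_cache(maxsize=None)
    def go(p, v):
        if any(c == 0 for c in v):
            return 0                        # a counter reached zero: stop
        best = 0
        for (src, u, dst) in transitions:
            if src != p:
                continue
            w = tuple(c + x for c, x in zip(v, u))
            if all(c >= 0 for c in w):
                best = max(best, 1 + go(dst, w))
        return best
    return go(state, tuple(counters))

# Zigzag VASS: self-loops transfer one unit between the two counters,
# switch transitions cost one unit of the second counter.
T = [("p", (1, -1), "p"), ("q", (-1, 1), "q"),
     ("p", (0, -1), "q"), ("q", (0, -1), "p")]

for n in range(1, 7):
    print(n, longest_zero_avoiding(T, "p", (n, n)))
```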
Let Inc(A) = { eff(γ) : γ is a short cycle of A }. The elements of Inc(A) are called increments. Note that if u ∈ Inc(A), then u(i) ∈ {−|Q|, …, |Q|} for all 1 ≤ i ≤ d. Hence, |Inc(A)| is polynomial in |Q|, assuming d is a fixed constant. Although the total number of all short cycles can be exponential in |Q|, the set Inc(A) is computable efficiently. (Note that Lemma 2.5 would not hold if we used simple cycles instead of short cycles, because the problem whether a given vector is the effect of a simple cycle is NP-complete; NP-hardness follows, e.g., by a straightforward reduction of the Hamiltonian path problem.)
Lemma 2.5.
Let A = (Q, T) be a d-dimensional VASS. The set Inc(A) is computable in time polynomial in |Q| (with the degree of the polynomial depending on d), i.e., polynomial in |Q| assuming d is a fixed constant.
Proof.
The set Inc(A) is computable by the following standard algorithm: for all p, q ∈ Q and 1 ≤ k ≤ |Q|, let E_k(p, q) be the set of all effects of paths from p to q of length exactly k. Observe that

E_1(p, q) = { u : (p, u, q) ∈ T } for all p, q ∈ Q;

for every k ≥ 2, we have that E_k(p, q) = ⋃_{r ∈ Q} { u + v : u ∈ E_{k−1}(p, r), v ∈ E_1(r, q) }.
Obviously, Inc(A) = ⋃_{q ∈ Q} ⋃_{k=1}^{|Q|} E_k(q, q), and the sets E_k(p, q) are computable in time polynomial in |Q|, assuming d is a fixed constant (each E_k(p, q) is a subset of {−k, …, k}^d). ∎
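The dynamic programming above is a few dictionary operations; a direct transcription (containers, names, and the small two-state example are our own):

```python
def increments(states, transitions):
    """Effects of all short cycles: ek[(p, q)] holds the effects of all
    paths from p to q of length exactly k, for k up to |Q|."""
    e1 = {}
    for (p, u, q) in transitions:
        e1.setdefault((p, q), set()).add(tuple(u))
    inc = set()
    ek = e1
    for k in range(1, len(states) + 1):
        if k > 1:
            nxt = {}
            for (p, r), us in ek.items():
                for (r2, q), vs in e1.items():
                    if r2 != r:
                        continue
                    bucket = nxt.setdefault((p, q), set())
                    for u in us:
                        for v in vs:
                            bucket.add(tuple(a + b for a, b in zip(u, v)))
            ek = nxt
        for p in states:
            inc.update(ek.get((p, p), set()))
    return inc

T = [("p", (1, -1), "p"), ("q", (-1, 1), "q"),
     ("p", (0, -1), "q"), ("q", (0, -1), "p")]
print(sorted(increments(["p", "q"], T)))
# [(-2, 2), (-1, 1), (0, -2), (1, -1), (2, -2)]
```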
A strongly connected component (SCC) of A is a maximal C ⊆ Q such that for all p, q ∈ C there is a finite path from p to q. Given an SCC C of A, we define the VASS A_C by restricting the set of control states to C and the set of transitions to those transitions of T that connect states of C. We say that A is strongly connected if Q is an SCC of A.
3 Linear Termination Time
In this section, we give a complete and effective characterization of all VASS with linear termination complexity.
More precisely, we first provide a precise mathematical characterization of the VASS with linear complexity: we show that if A is a d-dimensional VASS, then L ∈ O(n) iff there is an open half-space H of R^d with a nonnegative normal such that Inc(A) ⊆ H.
Next we show that the mathematical characterization of VASS with linear complexity is equivalent to the existence of a ranking function of a special form for the VASS. We also show that the existence of such a function for a given VASS A can be decided (and the function, if it exists, synthesized) in time polynomial in the size of A. Hence, we obtain a sound and complete polynomial-time procedure for deciding whether a given VASS has linear termination complexity.
We start with the mathematical characterization. Due to the next lemma, we can safely restrict ourselves to strongly connected VASS. The proof is trivial.
Lemma 3.1.
Let d ≥ 1, and let A be a d-dimensional VASS. Then L_A ∈ O(n) iff L_{A_C} ∈ O(n) for every SCC C of A, where L_{A_C} is the termination complexity of A_C.
Now we show that if there is no open half-space H with a nonnegative normal such that Inc(A) ⊆ H, then there exist short cycles γ_1, …, γ_k and coefficients a_1, …, a_k such that the sum a_1·eff(γ_1) + … + a_k·eff(γ_k) is nonnegative. Note that this does not yet mean that A is nonterminating: it may happen that the cycles pass through disjoint subsets of control states and cannot be concatenated without including auxiliary finite paths decreasing the counters.
Lemma 3.2.
Let A be a d-dimensional VASS. If there is no open half-space H of R^d with a nonnegative normal such that Inc(A) ⊆ H, then there exist u_1, …, u_k ∈ Inc(A), where k ≥ 1, and a_1, …, a_k ∈ N+ such that a_1·u_1 + … + a_k·u_k ≥ 0.
Proof.
We distinguish two possibilities.

(a) There exists a closed half-space of R^d with a nonnegative normal that contains Inc(A).

(b) There is no closed half-space of R^d with a nonnegative normal that contains Inc(A).

Case (a). We show that there exist u_1, …, u_k ∈ Inc(A) and positive reals c_1, …, c_k such that z = c_1·u_1 + … + c_k·u_k ≥ 0. Note that this immediately implies the claim of our lemma: since all elements of Inc(A) are integer vectors, we can safely assume that c_i ∈ Q+ for all i. Let m be the least common multiple of the denominators of c_1, …, c_k. Then m·c_i ∈ N+ for every i, (m·c_1)·u_1 + … + (m·c_k)·u_k = m·z ≥ 0, and we are done.

It remains to prove the existence of z. Let us fix a nonnegative normal vector n whose closed half-space contains Inc(A) such that the set D(n) = { v ∈ Inc(A) : n·v < 0 } is maximal (i.e., there is no nonnegative normal n' whose closed half-space contains Inc(A) such that D(n) ⊊ D(n')). Further, we fix u ∈ Inc(A) such that n·u = 0. Note that such a u must exist, because otherwise Inc(A) would be contained in the open half-space determined by n, which contradicts the assumption of our lemma. We show that −u ∈ cone(Inc(A) ∪ {−e_1, …, −e_d}), where e_1, …, e_d are the vectors of the standard basis; then u + c_1·u_1 + … + c_k·u_k = λ_1·e_1 + … + λ_d·e_d ≥ 0 for suitable u_i ∈ Inc(A) and nonnegative reals c_i, λ_i, and the left-hand side is the desired z. Suppose the converse. Then by Farkas' lemma there exists a separating hyperplane for −u and cone(Inc(A) ∪ {−e_1, …, −e_d}) with a normal vector m, i.e., m·v ≤ 0 for all v ∈ Inc(A) ∪ {−e_1, …, −e_d} and m·(−u) > 0. Note that m·(−e_i) ≤ 0 for all i means that m is nonnegative. Since n·u = 0 and m·u < 0, we can fix a sufficiently small ε > 0 such that the following conditions are satisfied:

(n + ε·m)·u < 0,

(n + ε·m)·v < 0 for all v ∈ Inc(A) such that n·v < 0.

Let n' = n + ε·m. Then n' is a nonnegative normal, n'·v ≤ 0 for all v ∈ Inc(A), and D(n) ∪ {u} ⊆ D(n'). This contradicts the maximality of D(n).

Case (b). Let D = { y ∈ R^d : y ≥ 0 and y(1) + … + y(d) = 1 }. We prove that cone(Inc(A)) ∩ D ≠ ∅, which implies the claim of our lemma (there is z ∈ cone(Inc(A)) with z ≥ 0 and z ≠ 0, and we conclude as in Case (a)). Suppose the converse, i.e., cone(Inc(A)) ∩ D = ∅. Since both cone(Inc(A)) and D are closed and convex and D is also compact, we can apply the "strict" variant of the hyperplane separation theorem. Thus, we obtain a vector m and a constant b such that m·x < b for all x ∈ cone(Inc(A)) and m·y > b for all y ∈ D. Since 0 ∈ cone(Inc(A)), we have that b > 0. Further, m is nonnegative (to see this, realize that if m(i) < 0 for some i, then m·e_i < 0 < b, where e_i ∈ D is the i-th vector of the standard basis; since m·y > b for all y ∈ D, we have a contradiction). Now we show that m·v ≤ 0 for all v ∈ Inc(A), which means that the closed half-space determined by m contains Inc(A) and thus contradicts the assumption of Case (b). Suppose m·v > 0 for some v ∈ Inc(A). Then m·(c·v) > b for a sufficiently large c ∈ R+. Since c·v ∈ cone(Inc(A)), we have a contradiction. ∎
Now we give the promised characterization of all VASS with linear termination complexity. Our theorem also reveals that the VASS termination complexity is either linear or at least quadratic (for example, it cannot be the case that L(n) ∈ Θ(n·log n)).
Theorem 3.3.
Let A be a d-dimensional VASS. We have the following:

(a) If there is an open half-space H of R^d with a nonnegative normal such that Inc(A) ⊆ H, then L(n) ∈ O(n).

(b) If there is no open half-space H of R^d with a nonnegative normal such that Inc(A) ⊆ H, then L(n) ∈ Ω(n²).
Proof.
We start with (a). Let H be an open half-space of R^d determined by a nonnegative normal n such that Inc(A) ⊆ H, and let pv be a configuration of A. Let ε = min{ −n·u : u ∈ Inc(A) }. Note that ε > 0 because Inc(A) ⊆ H, and that ε does not depend on pv. Each short cycle decreases the scalar product of the normal n and the vector of counters by at least ε, while this scalar product is initially O(‖v‖) and stays nonnegative (the at most |Q| transitions outside the cycles of the decomposition change it only by a fixed constant). Therefore, for every zero-avoiding computation initiated in pv, the corresponding path α is such that decomp(α) contains O(‖v‖) elements, so the length of α is O(‖v‖).
Now suppose there is no open half-space H of R^d with a nonnegative normal such that Inc(A) ⊆ H. We show that L(n) ∈ Ω(n²), i.e., there exist a control state p and a constant c > 0 such that for every sufficiently large n there is a zero-avoiding computation initiated in the configuration with control state p and all counters equal to n whose length is at least c·n². Due to Lemma 3.1, we can safely assume that A is strongly connected. By Lemma 3.2, there are u_1, …, u_k ∈ Inc(A) and a_1, …, a_k ∈ N+ such that

a_1·u_1 + … + a_k·u_k ≥ 0.   (1)

As the individual short cycles with effects u_1, …, u_k may proceed through disjoint sets of states, they cannot be trivially concatenated into one large cycle with nonnegative effect. Instead, we fix a control state p and a cycle β initiated in p visiting all states of A (here we need that A is strongly connected). Further, for every 1 ≤ i ≤ k we fix a short cycle γ_i such that eff(γ_i) = u_i. For every ℓ ∈ N, let δ_ℓ be a cycle obtained from β by inserting precisely ℓ·a_i copies of every γ_i, where 1 ≤ i ≤ k (each γ_i is inserted at a visit of its initial state). Observe that the inequality (1) implies

eff(δ_ℓ) = eff(β) + ℓ·(a_1·u_1 + … + a_k·u_k) ≥ eff(β).   (2)

For every configuration pv, let ℓ(pv) be the largest ℓ such that δ_ℓ is executable in pv and results in a zero-avoiding computation. If such an ℓ does not exist, i.e., δ_ℓ is executable in pv for all ℓ ∈ N, then pv admits arbitrarily long zero-avoiding computations, hence A is nonterminating, and the proof is finished. Hence, we can assume that ℓ(pv) is well-defined for each pv. Since the cycles β and γ_i have fixed effects, there is a constant c_1 > 0 such that for all configurations pv where all components of v (and thus also their minimum) are above some sufficiently large threshold we have that ℓ(pv) ≥ c_1·min_i v(i), i.e., ℓ(pv) grows asymptotically at least linearly with the minimal component of v. Now, for every n ∈ N, consider a zero-avoiding computation α_n initiated in the configuration with control state p and all counters equal to n, defined inductively as follows: initially, α_n consists just of this configuration; if the prefix of α_n constructed so far ends in a configuration pw such that ℓ(pw) ≥ 1 (an event we call a successful hit), then the prefix is prolonged by executing the cycle δ_{ℓ(pw)} (otherwise, the construction of α_n stops). Thus, α_n is obtained by applying the inductive rule h times, where h is the number of successful hits before the construction of α_n stops. Denote by p w_j the configuration visited by α_n at the j-th successful hit. Now the inequality (2) implies that eff(δ_ℓ) ≥ eff(β), so there exists a constant c_2 such that w_{j+1} ≥ w_j − c_2·(1, …, 1). In particular, the decrease of all components of w_j is at most linear in j. This means that h ≥ c_3·n for all sufficiently large n, where c_3 > 0 is a suitable constant. But at the same time, upon each successful hit we have ℓ(p w_j) ≥ c_1·min_i w_j(i), so the length of the segment of α_n beginning with the j-th successful hit and ending with the (j+1)-th hit (or with the last configuration of α_n) is at least c_1·min_i w_j(i). Hence, the length of α_n is at least the sum of these bounds over the first Θ(n) successful hits, which is Ω(n²), i.e., quadratic. ∎
Example 3.4.
Consider the VASS in Figure 1(c). It consists of two strongly connected components. In each component, the set of increments is contained in an open half-space with a nonnegative normal (one checks the defining inequality n·u < 0 directly for each increment u). Hence, by Theorem 3.3 and Lemma 3.1, the VASS has linear termination complexity.
Now consider the VASS in Figure 1(a). It is strongly connected, and its set of increments contains two opposite vectors u and −u. But there cannot be an open 2-dimensional half-space (i.e., an open half-plane) containing two opposite vectors, because for any line going through the origin such that u does not lie on the line, the vector −u lies on the "other side" of the line than u. Hence, the VASS in Figure 1(a) has at least quadratic termination complexity. The same argument applies to the VASS in Figure 1(b).
A straightforward way of checking the condition of Theorem 3.3 is to construct the corresponding linear constraints and check their feasibility by linear programming. This would yield an algorithm polynomial in |Inc(A)|, i.e., polynomial in |Q| for every fixed dimension d. Now we show that the condition can actually be checked in time polynomial in the size of A. We do this by showing that the mathematical condition stated in Theorem 3.3 is equivalent to the existence of a ranking function of a special type for the given VASS. Formally, a weighted linear map for a VASS A = (Q, T) is defined by a vector of coefficients n ∈ R^d and by a set of weights { w_q : q ∈ Q }, one constant for each state of A. The weighted linear map defines a function assigning numbers to configurations as follows: the value assigned to a configuration qv is n·v + w_q. A weighted linear map is a weighted linear ranking (WLR) function for A if n is positive and there exists ε > 0 such that for each configuration pv and each transition (p, u, q) ∈ T it holds that n·(v + u) + w_q ≤ n·v + w_p − ε, which is equivalent, due to linearity, to

n·u + w_q − w_p ≤ −ε for every transition (p, u, q) ∈ T.   (3)
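Inequality (3) is one linear constraint per transition, so checking a candidate WLR function is immediate. A sketch with toy VASS of our own: the first has linear complexity and admits a WLR function, while the second contains two opposite loop effects, so no positive n can satisfy (3) on both.

```python
def is_wlr(n, w, transitions, eps):
    """Check that n is positive and inequality (3) holds:
    n . u + w[q] - w[p] <= -eps for every transition (p, u, q)."""
    if not all(c > 0 for c in n):
        return False
    return all(
        sum(a * b for a, b in zip(n, u)) + w[q] - w[p] <= -eps
        for (p, u, q) in transitions
    )

# A two-state cycle that decreases a counter on every transition:
T1 = [("p", (-1, 0), "q"), ("q", (0, -1), "p")]
print(is_wlr((1, 1), {"p": 0, "q": 0}, T1, 1))   # True

# Adding compensating self-loops with effects (1, -1) and (-1, 1) breaks (3)
# for every candidate, since n.(1,-1) and n.(-1,1) cannot both be negative:
T2 = T1 + [("p", (1, -1), "p"), ("p", (-1, 1), "p")]
print(is_wlr((1, 1), {"p": 0, "q": 0}, T2, 1))   # False
```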
We show that weighted linear ranking functions provide a sound and complete method for proving linear termination complexity of VASS.
Theorem 3.5.
Let d ≥ 1. The problem whether the termination complexity of a given d-dimensional VASS A is linear is solvable in time polynomial in the size of A. More precisely, the termination complexity of A is linear if and only if there exists a weighted linear ranking function for A. Moreover, the existence of a weighted linear ranking function for A can be decided in time polynomial in the size of A.
Proof Sketch.
In the course of the proof we describe a polynomial-time algorithm for deciding whether a given VASS A has linear termination complexity. Once the algorithm is described, we show that what it really does is check the existence of a weighted linear ranking function for A.
Let us start by sketching the underlying intuition. Our goal is to decide, in polynomial time, whether there is an open half-space H of R^d with a nonnegative normal such that Inc(A) ⊆ H. Note that this is equivalent to deciding whether there is an open half-space H of R^d with a positive normal such that Inc(A) ⊆ H (since we demand H to be open and the scalar product is continuous, a nonnegative normal can be slightly tilted, by adding a small positive constant to every component, to obtain a positive vector with the desired property).
Given a vector n and a configuration pv, we say that n·v is the value of pv. Observe that if there is an open half-space H with a positive normal n such that Inc(A) ⊆ H, then there is ε > 0 such that the effect of every short cycle decreases the value of a configuration by at least ε. As every path can be decomposed into short cycles, every path steadily decreases the value of the visited configurations. It follows that the mean change (per transition) of the value along an infinite path is bounded from above by a negative constant. On the other hand, if the maximum mean change of the value (over all infinite paths) is bounded from above by some negative constant, then every short cycle must decrease the value by at least this constant. So, it suffices to decide whether there is a positive n such that for all infinite paths the mean change of the value is negative. Thus, we reduce our problem to the classical problem of maximizing the mean payoff over a decision process with rewards. Using standard results (see, e.g., [60]), the latter problem polynomially reduces to the problem of solving a linear program that is (essentially) equivalent to the inequality (3). Finally, the linear program can be solved in polynomial time using, e.g., [51]. ∎
Remark 3.6.
The weighted linear ranking functions can be seen as a special case of the well-known linear ranking functions for linear-arithmetic programs [20, 59], in particular state-based linear ranking functions, where a linear function of the program variables is assigned to each state of the control-flow graph. WLR functions are indeed a special case, since the linear functions assigned to the various states are almost identical: they differ only in the constant coefficient w_q. Also, as the proof of the previous theorem shows, WLR functions for VASS can be computed directly by linear programming, without the need for any "supporting invariants," since the effect of a transition in a VASS is independent of the current values of the counters. Also, well-foundedness (i.e., the fact that the function is bounded from below) is guaranteed by the fact that n is positive and the counter values in a VASS are always nonnegative. It is common knowledge that the existence of a state-based linear ranking function for a linear-arithmetic program implies that the running time of the program is linear in the initial valuation of the program variables. Hence, our main result can be interpreted as proving that for VASS, state-based linear ranking functions are both sound and complete for proving linear termination complexity.
4 Polynomial termination time
In this section we concentrate on VASS with polynomial termination complexity. For simplicity, we restrict ourselves to strongly connected VASS. As we already indicated in Section 1, our analysis proceeds by considering the properties of normal vectors perpendicular to hyperplanes covering the set of increments Inc(A).
Definition 4.1.
Let A be a d-dimensional VASS. The set N_A consists of all n ∈ R^d such that n ≠ 0 and n·u ≤ 0 for all u ∈ Inc(A) (i.e., the closed half-space determined by n contains Inc(A)).

Let A be a strongly connected VASS. We distinguish four possibilities.

(A) N_A = ∅.

(B) N_A ≠ ∅ and all n ∈ N_A have a negative component.

(C) There exists n ∈ N_A such that n is positive.

(D) There exists n ∈ N_A such that n is nonnegative, and (C) does not hold.

Note that one can easily decide which of the four conditions holds by linear programming. Due to Lemma 2.5, the decision algorithm is polynomial in the number of control states of A (assuming d is a fixed constant).
We start by showing that a VASS satisfying (A) or (B) is nonterminating. A proof is given in Section 5.2.
Theorem 4.2.
Let A be a d-dimensional strongly connected VASS such that (A) or (B) holds. Then A is nonterminating.
4.1 VASS satisfying condition (C)
Assume A is a d-dimensional VASS satisfying (C). We prove that if A is terminating, then L(n) ∈ Θ(n^k) for some integer k with 1 ≤ k ≤ d. Further, there is a polynomial-time algorithm deciding whether A is terminating and computing the degree k if it exists (assuming d is a fixed constant).
A crucial tool for our analysis is a good normal, introduced in the next definition.
Definition 4.3.
Let A be a VASS. We say that a positive vector n ∈ R^d with n·u ≤ 0 for all u ∈ Inc(A) is a good normal if for every u ∈ Inc(A) we have that n·u = 0 iff −u ∈ cone(Inc(A)).
Example 4.4.
Consider the VASS of Fig. 1(a). Here, a good normal is, e.g., the vector n = (1, 1): the effects of both self-loops belong to the hyperplane defined by n. Note that these loops compensate each other's effects so long as we stay in the hyperplane (this is the defining property of the good normal). This allows us to zigzag in the plane without "paying" with decrements in the value except when we need to switch between the loops (recall that the value of a configuration pv is the product n·v). This produces a path of quadratic length, which is asymptotically the worst case.
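The zigzag bookkeeping can be replayed numerically: with respect to the good normal, both loop effects are "free" (scalar product zero) and only a switch pays a constant. The switch effect (0, −1) is our own toy choice, not taken from the figure.

```python
def dot(n, u):
    return sum(a * b for a, b in zip(n, u))

good = (1, 1)                         # the good normal of this example
loops = [(1, -1), (-1, 1)]            # effects of the two self-loops
switch = (0, -1)                      # effect of switching between the loops

print([dot(good, u) for u in loops])  # [0, 0]: loops do not change the value n.v
print(dot(good, switch))              # -1: each switch pays a constant
# Starting from a configuration with value 2m, we can afford about 2m switches,
# and between consecutive switches the free loops can run for about m steps,
# which yields the quadratic-length path described above.
```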
The next lemma says that a good normal always exists and it is computable efficiently. A proof can be found in Section 5.3.
Lemma 4.5.
Let A be a d-dimensional VASS satisfying (C). Then there exists a good normal for A, computable in time polynomial in |Q|, assuming d is a fixed constant.
The next theorem is the key result of this section. It allows us to reduce the analysis of the termination complexity of a given VASS to the analysis of several smaller instances of the problem, which can then be solved recursively.
Theorem 4.6.
Let A = (Q, T) be a VASS satisfying (C), and let n be a good normal. Consider the VASS A' = (Q, T'), where T' consists of all transitions (p, u, q) ∈ T with n·u = 0. Further, let A_1, …, A_m be the (sub-)VASS determined by all SCCs of A' with at least one transition. We have the following:

(1) If m = 0 (i.e., if there is no SCC of A' with at least one transition), then L_A(n) ∈ Θ(n).

(2) If m ≥ 1, all A_i are terminating, and the termination complexity of every A_i is in O(g), then A is terminating and L_A ∈ O(h), where h is the function defined by h(n) = n·g(n).
To get some intuition behind the proof of Theorem 4.6, consider the following example.
Example 4.7.
Consider the VASS of Fig. 1(a). As mentioned in Example 4.4, there is a good normal n, which determines the VASS A' of Theorem 4.6. Then Case (2) of Theorem 4.6 gives us two simpler VASS A_1 and A_2, where each of them has a single state and a single self-loop transition (one of the two original self-loops). Observe that A_1 and A_2 can now be considered individually, and both of them have linear complexity. Also, as mentioned in Example 4.4, the good normal makes sure that the effect of the worst-case behaviour in A_1 can be compensated by a path in A_2, and vice versa. Moreover, following the worst-case path in A_1 and its compensation in A_2 decreases the value of the final configuration only by a constant (caused by the switch between A_1 and A_2). So, we can follow such an "almost compensating" loop Θ(n) times, and obtain a path of quadratic length.
Note that in the general case the situation is more complicated, since the compensating path may need to be composed of paths in several of the VASS A_1, …, A_m. So, we need to be careful about the number of switches and about the geometry of the compensating path.
Proof sketch for Theorem 4.6.
Claim (1) follows easily. It suffices to realize that if there is no SCC of A' with at least one transition, then there is no u ∈ Inc(A) satisfying n·u = 0. Hence, n·u < 0 for all u ∈ Inc(A), and we can apply Theorem 3.3.
Now we prove Claim (2). Let α be a zero-avoiding computation of A initiated in a configuration pv. Since the last configuration qw of α satisfies w ≥ 0, we have that n·w = n·v + n·eff(α) ≥ 0. Hence, n·eff(α) ≥ −n·v.
Let decomp(α) be a decomposition of α into short cycles. For every short cycle γ of decomp(α) we have that n·eff(γ) ≤ 0. Since α can contain at most |Q| transitions which are not contained in any cycle of decomp(α), we have that n·eff(α) ≤ c, where c is some fixed constant. This means that n·w is O(‖v‖). Consequently, the same holds also for all intermediate configurations visited by α.
A short cycle γ of decomp(α) such that n·eff(γ) < 0 is called decreasing; otherwise it is neutral. Clearly, the total number of decreasing short cycles in