1 Introduction
In recent years, loop acceleration techniques have successfully been used to build static analyses for programs operating on integers underapprox15 ; ijcar16 ; fmcad19 ; journal ; iosif17 ; Bozga14 ; fast . Essentially, such techniques extract a quantifier-free first-order formula from a single-path loop, i.e., a loop without branching in its body, such that the formula under-approximates (or is equivalent to) the loop. More specifically, each model of the resulting formula corresponds to an execution of the loop (and vice versa). By integrating such techniques into a suitable program-analysis framework ijcar16 ; journal ; fmcad19 ; iosif12 ; iosif17 ; FlatFramework , whole programs can be transformed into first-order formulas, which can then be analyzed by off-the-shelf solvers. Applications include proving safety iosif12 or reachability iosif12 ; underapprox15 , deducing bounds on the runtime complexity ijcar16 ; journal , and proving (non-)termination fmcad19 ; Bozga14 .
However, existing acceleration techniques apply only if certain prerequisites are met. Hence, the power of static analyses built upon loop acceleration depends on the applicability of the underlying acceleration technique.
In this paper, we introduce a calculus which allows for combining several acceleration techniques modularly in order to accelerate a single loop. This not only allows for modular combinations of standalone techniques, but it also enables interactions between different acceleration techniques, allowing them to obtain better results together. Consequently, it can handle classes of loops where all standalone techniques fail. Moreover, we present two novel acceleration techniques and integrate them into our calculus.
One important application of loop acceleration is proving non-termination. As already observed in fmcad19 , certain properties of loops – in particular monotonicity of (parts of) the loop condition w.r.t. the loop body – are crucial for both loop acceleration and proving non-termination. In fmcad19 , this observation has been exploited to develop a technique for deducing invariants that are helpful to deal with non-terminating as well as terminating loops: For the former, they help to prove non-termination, and for the latter, they help to accelerate the loop.
In this paper, we take the next step by also unifying the actual techniques that are used for loop acceleration and for proving non-termination. To this end, we identify loop acceleration techniques that, if applied in isolation, give rise to non-termination proofs. Furthermore, we show that the combination of such non-termination techniques via our novel calculus for loop acceleration gives rise to non-termination proofs, too. In this way, we obtain a modular framework for combining several different non-termination techniques in order to prove non-termination of a single loop.
In the following, we introduce preliminaries in Sec. 2. Then, we discuss existing acceleration techniques in Sec. 3. In Sec. 4, we present our calculus to combine acceleration techniques and show how existing acceleration techniques can be integrated into our framework. Sec. 5 lifts existing acceleration techniques to conditional acceleration techniques, which provides additional power in the context of our framework by enabling interactions between different acceleration techniques. Next, we present two novel acceleration techniques and incorporate them into our calculus in Sec. 6. Then we adapt our calculus and certain acceleration techniques for proving non-termination in Sec. 7. After discussing related work in Sec. 8, we demonstrate the applicability of our approach via an empirical evaluation in Sec. 9 and conclude in Sec. 10.
A conference version of this paper was published in conference . The present paper provides the following additional contributions:

We present formal correctness proofs for all of our results, which were omitted in conference for reasons of space.

We prove an informal statement from conference on using arbitrary existing acceleration techniques in our setting, resulting in the novel Lem. 1.

The adaptation of our calculus and of certain acceleration techniques for proving non-termination (Sec. 7) is completely new.

We extend the empirical evaluation from conference with extensive experiments comparing the adaptation of our calculus for proving non-termination with other state-of-the-art tools for proving non-termination (Sec. 9.2).
2 Preliminaries
We use the notation $\vec{x}, \vec{y}, \ldots$ for vectors. Let $\mathcal{C}(\vec{x})$ be the set of closed-form expressions over the variables $\vec{x}$, containing, e.g., all arithmetic expressions built from $\vec{x}$, integer constants, addition, subtraction, multiplication, division, and exponentiation.^1 We consider loops of the form

    while $\varphi(\vec{x})$ do $\vec{x} \leftarrow \vec{a}(\vec{x})$

where $\vec{x}$ is a vector of pairwise different variables that range over the integers, the loop condition $\varphi$ (which we also call guard) is a finite quantifier-free first-order formula over polynomial atoms, and $\vec{a}$ is a vector of expressions from $\mathcal{C}(\vec{x})$ such that the function^2 $\vec{x} \mapsto \vec{a}(\vec{x})$ maps integers to integers. Loop denotes the set of all such loops.

^1 Note that there is no widely accepted definition of "closed forms", and the results of the current paper are independent of the precise definition of $\mathcal{C}(\vec{x})$.
^2 I.e., the (anonymous) function that maps $\vec{x}$ to $\vec{a}(\vec{x})$.
We identify a loop and the pair $\langle \varphi, \vec{a} \rangle$. Moreover, we identify $\vec{a}$ and the function $\vec{x} \mapsto \vec{a}(\vec{x})$, where we sometimes write $\vec{a}(\vec{x})$ to make the variables explicit. We use the same convention for other (vectors of) expressions. Similarly, we identify the formula $\varphi(\vec{x})$ (or just $\varphi$) with the predicate $\vec{x} \mapsto \varphi(\vec{x})$. We use the standard integer-arithmetic semantics for the symbols occurring in formulas.
Throughout this paper, let $n$ be a designated variable and let $\vec{a}^{n}$ denote the $n$-fold application of $\vec{a}$, i.e., $\vec{a}^{0}(\vec{x}) := \vec{x}$ and $\vec{a}^{n+1}(\vec{x}) := \vec{a}(\vec{a}^{n}(\vec{x}))$.
Intuitively, the variable $n$ represents the number of loop iterations, and $\vec{a}^{n}(\vec{x})$ corresponds to the values of the program variables after $n$ iterations.
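For linear updates, the values after $n$ iterations admit a closed form via matrix powers. The following sketch (the updates are assumed examples, not taken from the paper) checks two such closed forms against the $n$-fold application of the update, including the standard trick of homogenizing an affine update into a linear one:

```python
# Sanity check: for a linear update x <- A x, the values after n
# iterations are A^n x; an affine update x <- x + c can be handled by
# taking powers of the homogenized matrix [[1, c], [0, 1]].

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def mat_pow(M, n):
    # naive n-fold multiplication starting from the identity matrix
    R = [[int(i == j) for j in range(len(M))] for i in range(len(M))]
    for _ in range(n):
        R = mat_mul(R, M)
    return R

# Assumed update: (x1, x2) <- (2*x1, x1 + x2). Closed form:
# x1_n = 2^n * x1 and x2_n = x2 + (2^n - 1) * x1 (geometric sum).
A = [[2, 0], [1, 1]]
for x1 in range(-2, 3):
    for x2 in range(-2, 3):
        for n in range(0, 8):
            An = mat_pow(A, n)
            v = [An[0][0] * x1 + An[0][1] * x2,
                 An[1][0] * x1 + An[1][1] * x2]
            assert v == [2 ** n * x1, x2 + (2 ** n - 1) * x1]

# Assumed affine update: x <- x + 3 via the homogenized matrix.
M = [[1, 3], [0, 1]]
for x in range(-2, 3):
    for n in range(0, 8):
        Mn = mat_pow(M, n)
        assert Mn[0][0] * x + Mn[0][1] * 1 == x + 3 * n
print("matrix powers agree with closed forms")
```

In practice, one would compute such closed forms symbolically (e.g., via the Jordan normal form discussed in Sec. 3) instead of by repeated multiplication.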
2.1 Loop Acceleration
In Sec. 3 – Sec. 6, our goal is to accelerate a given loop $\langle \varphi, \vec{a} \rangle$, i.e., to find a formula $\psi$ over $\vec{x}$ and $n$ such that

    $\psi(\vec{x}, n) \iff \bigwedge_{0 \le i < n} \varphi(\vec{a}^{i}(\vec{x}))$  for all $n > 0$.    (equiv)
To see why we use closed-form expressions instead of, e.g., polynomials, consider a loop that doubles a variable $x_2$ in each iteration. Here, an acceleration technique synthesizes a formula containing the term $2^{n} \cdot x_2$, which is equivalent to the value of $x_2$ after $n$ iterations, together with an inequation which ensures that the loop can be executed at least $n$ times. Clearly, the growth of $2^{n} \cdot x_2$ cannot be captured by a polynomial, i.e., even the behavior of quite simple loops is beyond the expressiveness of polynomial arithmetic.
In practice, one can restrict our approach to weaker classes of expressions to ease automation, but the presented results are independent of such considerations.
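The point can be made executable. The following sketch uses a hypothetical doubling loop (the loop, its closed form, and the iteration bound are assumptions of this sketch, not taken from the paper) and checks by brute force that the closed form together with the iteration bound exactly characterizes $n$-fold executability:

```python
# Assumed loop: "while x1 > 0 do (x1, x2) <- (x1 - 1, 2 * x2)".
# Closed form after n iterations: (x1 - n, 2**n * x2); the loop runs
# at least n times iff x1 - n + 1 > 0. We verify this by brute force.

def run_loop(x1, x2, n):
    """Execute up to n iterations; return final state and iterations done."""
    done = 0
    while x1 > 0 and done < n:
        x1, x2 = x1 - 1, 2 * x2
        done += 1
    return (x1, x2), done

def check(x1, x2, n):
    (r1, r2), done = run_loop(x1, x2, n)
    can_run_n_times = done == n
    # accelerated characterization: iteration bound from the guard
    formula = x1 - n + 1 > 0
    assert formula == can_run_n_times
    if can_run_n_times:
        # the exponential closed form matches the executed values
        assert (r1, r2) == (x1 - n, 2 ** n * x2)

for x1 in range(-3, 8):
    for x2 in range(-3, 4):
        for n in range(1, 10):
            check(x1, x2, n)
print("closed form matches loop execution")
```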
Some acceleration techniques cannot guarantee (equiv), but the resulting formula is an under-approximation of the loop, i.e., we have

    $\psi(\vec{x}, n) \implies \bigwedge_{0 \le i < n} \varphi(\vec{a}^{i}(\vec{x}))$  for all $n > 0$.    (approx)

If (equiv) holds, then $\psi$ is equivalent to the loop. Similarly, if (approx) holds, then $\psi$ approximates the loop.
Definition 1 (Acceleration Technique).
An acceleration technique is a partial function
It is sound if the resulting formula approximates the given loop for all loops in its domain. It is exact if the resulting formula is equivalent to the given loop for all loops in its domain.
2.2 Non-Termination
In Sec. 7, we aim at proving non-termination.
Definition 2 ((Non-)Termination).
To this end, we search for a formula that characterizes a non-empty set of witnesses of non-termination, called a certificate of non-termination.
3 Existing Acceleration Techniques
We now recall several existing acceleration techniques. In Sec. 4 we will see how these techniques can be combined in a modular way. All of them first compute a closed form for the values of the program variables after $n$ iterations.
To find closed forms, one tries to solve the system of recurrence equations $\vec{a}^{n+1}(\vec{x}) = \vec{a}(\vec{a}^{n}(\vec{x}))$ with the initial condition $\vec{a}^{0}(\vec{x}) = \vec{x}$. In the sequel, we assume that we can represent $\vec{a}^{n}(\vec{x})$ in closed form. Note that one can always do so if $\vec{a}(\vec{x}) = A\vec{x} + \vec{b}$ with $A \in \mathbb{Z}^{d \times d}$ and $\vec{b} \in \mathbb{Z}^{d}$, i.e., if $\vec{a}$ is linear. To this end, one considers the homogenized matrix $B = \left(\begin{smallmatrix} A & \vec{b} \\ \vec{0} & 1 \end{smallmatrix}\right)$ and computes its Jordan normal form $B = T^{-1} J T$, where $T$ is invertible and $J$ is a block diagonal matrix (which has complex entries if $B$ has complex eigenvalues). Then the closed form for $J^{n}$ can be given directly (see, e.g., Ouaknine15 ), and $\vec{a}^{n}(\vec{x})$ is equal to the first $d$ components of $T^{-1} J^{n} T \cdot \left(\begin{smallmatrix} \vec{x} \\ 1 \end{smallmatrix}\right)$. Moreover, one can compute a closed form for certain classes of non-linear updates, where each component of $\vec{a}$ is a polynomial polyloopsLPAR20 ; polyloopsSAS20 .

3.1 Acceleration via Decrease or Increase
The first acceleration technique discussed in this section exploits the following observation: If $\varphi(\vec{a}(\vec{x}))$ implies $\varphi(\vec{x})$ and if $\varphi(\vec{a}^{n-1}(\vec{x}))$ holds, then the loop is applicable at least $n$ times. So in other words, it requires that the indicator function (or characteristic function) of $\varphi$, which is $1$ for valuations that satisfy $\varphi$ and $0$ otherwise, is monotonically decreasing w.r.t. the number of loop iterations.

Theorem 1 (Acceleration via Monotonic Decrease underapprox15 ).
If

    $\varphi(\vec{a}(\vec{x})) \implies \varphi(\vec{x}),$

then the following acceleration technique is exact:

    $\langle \varphi, \vec{a} \rangle \mapsto \varphi(\vec{a}^{n-1}(\vec{x}))$
So, for example, Thm. 1 accelerates the loop while $x > 0$ do $x \leftarrow x - 1$ to $x - n + 1 > 0$. However, the requirement $\varphi(\vec{a}(\vec{x})) \implies \varphi(\vec{x})$ is often violated in practice. To see this, consider a loop whose guard contains a conjunct that can become true after being false during the loop's execution. Such a loop cannot be accelerated with Thm. 1, as its guard is not monotonically decreasing.
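The premise of Thm. 1 can be checked by brute force on small examples. In the following sketch, both guards and updates are assumed for illustration: the first guard is monotonically decreasing and yields the exact accelerated formula, while the second violates the premise:

```python
# Brute-force check of the premise of Thm. 1: the guard may hold after
# the update only if it already held before (monotonic decrease).

def monotonically_decreasing(phi, update, states):
    # premise of Thm. 1, checked over a finite sample of states
    return all(phi(x) for x in states if phi(update(x)))

dec = lambda x: x - 1
# guard x > 0 with update x <- x - 1 satisfies the premise ...
assert monotonically_decreasing(lambda x: x > 0, dec, range(-10, 11))
# ... but guard x <= 5 does not: for x = 6 the guard is false, yet it
# becomes true after the update
assert not monotonically_decreasing(lambda x: x <= 5, dec, range(-10, 11))

def runs_at_least(phi, update, x, n):
    for _ in range(n):
        if not phi(x):
            return False
        x = update(x)
    return True

# the accelerated formula phi(a^(n-1)(x)), here x - (n-1) > 0, is exact
for x in range(-5, 15):
    for n in range(1, 10):
        accelerated = x - (n - 1) > 0
        assert accelerated == runs_at_least(lambda y: y > 0, dec, x, n)
print("monotonic decrease acceleration verified on samples")
```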
A dual acceleration technique to Thm. 1 is obtained by "reversing" the implication in the prerequisites of the theorem. Then the indicator function of $\varphi$ is monotonically increasing w.r.t. the number of loop iterations. So $\varphi$ is a loop invariant, and thus $\varphi$ is a recurrent set rupak08 (see also Sec. 8.2) of the loop.
Theorem 2 (Acceleration via Monotonic Increase).
If

    $\varphi(\vec{x}) \implies \varphi(\vec{a}(\vec{x})),$

then the following acceleration technique is exact:

    $\langle \varphi, \vec{a} \rangle \mapsto \varphi(\vec{x})$
Example 1.
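As an illustration (the loop is an assumed example, not necessarily the paper's Example 1), the following sketch checks the premise of Thm. 2 and the exactness of the resulting formula by brute force:

```python
# Assumed loop: "while x > 0 do x <- x + 1". The guard x > 0 is
# invariant under the update, so by Thm. 2 the accelerated formula is
# simply the guard itself, independent of n.

inc = lambda x: x + 1
phi = lambda x: x > 0

# premise of Thm. 2: phi(x) implies phi(a(x)), checked on samples
assert all(phi(inc(x)) for x in range(-10, 11) if phi(x))

def runs_at_least(x, n):
    for _ in range(n):
        if not phi(x):
            return False
        x = inc(x)
    return True

# exactness: for every n > 0, the loop runs >= n times iff the guard
# holds initially (this loop never terminates once the guard holds)
for x in range(-5, 6):
    for n in range(1, 20):
        assert runs_at_least(x, n) == phi(x)
print("monotonic increase acceleration verified on samples")
```

Note that the accelerated formula does not mention $n$ at all: once the invariant guard holds, the loop can be iterated arbitrarily often, which foreshadows the connection to non-termination in Sec. 7.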
3.2 Acceleration via Decrease and Increase
Both acceleration techniques presented so far have been generalized in fmcad19 .
Theorem 3 (Acceleration via Monotonicity fmcad19 ).
If
then the following acceleration technique is exact:
Proof. Immediate consequence of Thm. 5 and Remark 1, which will be proven in Sections 5 and 4. ∎
Here, the first two conjuncts of the loop condition are again invariants of the loop. Thus, as in Thm. 2 it suffices to require that they hold before entering the loop. On the other hand, the third conjunct needs to satisfy a similar condition as in Thm. 1, and thus it suffices to require that it holds before the last iteration. We also say that this conjunct is a converse invariant (w.r.t. the invariants). It is easy to see that Thm. 3 is equivalent to Thm. 1 if the invariant conjuncts are logical truth, and it is equivalent to Thm. 2 if the converse invariant is logical truth.
Example 2.
Thm. 3 naturally raises the question: Why do we need two invariants? To see this, consider a restriction of Thm. 3 with only a single invariant conjunct. It would fail for loops that can easily be handled by Thm. 3 itself. The problem is that the converse invariant is needed to prove invariance of one of the two invariant conjuncts. Similarly, a restriction of Thm. 3 without a converse invariant would fail for a variant of such a loop. Here, the problem is that one of the invariants is needed to prove converse invariance of the remaining conjunct.
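The interplay of invariants and converse invariants can be illustrated by brute force. The loop below and its accelerated formula are assumptions of this sketch: its guard consists of an invariant conjunct, which is kept as-is, and a converse-invariant conjunct, which is instantiated at the state before the last iteration:

```python
# Assumed loop: "while a > 0 and b > 0 do (a, b) <- (a - b, b)".
# Here b > 0 is an invariant (b never changes), and a > 0 is a converse
# invariant w.r.t. b > 0: if b > 0 and a - b > 0 hold after the update,
# then a > 0 held before. The accelerated formula (in the spirit of
# Thm. 3) is: b > 0 and a - (n-1)*b > 0.

def runs_at_least(a, b, n):
    for _ in range(n):
        if not (a > 0 and b > 0):
            return False
        a, b = a - b, b
    return True

for a in range(-5, 20):
    for b in range(-3, 6):
        for n in range(1, 12):
            # invariant kept as-is, converse invariant instantiated at
            # the state before the last of the n iterations
            psi = b > 0 and a - (n - 1) * b > 0
            assert psi == runs_at_least(a, b, n)
print("acceleration via monotonicity verified on samples")
```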
3.3 Acceleration via Metering Functions
Another approach for loop acceleration uses metering functions, a variation of classical ranking functions from termination and complexity analysis ijcar16 . While ranking functions give rise to upper bounds on the runtime of loops, metering functions provide lower runtime bounds, i.e., the definition of a metering function $f$ ensures that for each valuation $\vec{x}$ of the program variables, the loop under consideration can be applied at least $\lceil f(\vec{x}) \rceil$ times.
Definition 5 (Metering Function ijcar16 ).
We call a function $f$ from valuations of the program variables to the rationals a metering function if $f$ is non-positive whenever the loop condition is violated, i.e.,

    $\neg\varphi(\vec{x}) \implies f(\vec{x}) \le 0,$    (mf-bounded)

and $f$ decreases by at most $1$ in each loop iteration, i.e., $\varphi(\vec{x}) \implies f(\vec{x}) - f(\vec{a}(\vec{x})) \le 1$.
We can use metering functions to accelerate loops as follows:
We will prove the more general Thm. 9 in Sec. 5. In contrast to (journal, Thm. 3.8) and (ijcar16, Thm. 7), the acceleration technique from Thm. 4 does not conjoin the loop condition to the result, which turned out to be superfluous. The reason is that the resulting formula implies $\varphi(\vec{x})$ due to (mf-bounded).
Example 3.
However, synthesizing non-trivial (i.e., non-constant) metering functions is challenging. Moreover, unless the number of iterations of the loop equals $\lceil f(\vec{x}) \rceil$ for all valuations $\vec{x}$, acceleration via metering functions is not exact.
Linear metering functions can be synthesized via Farkas' Lemma and SMT solving ijcar16 . However, many loops do not have non-trivial linear metering functions. To see this, reconsider the loop that could not be handled by Thm. 1: there, no non-trivial linear function is a metering function, as the loop cannot always be iterated at least the corresponding number of times. Thus, journal proposes a refinement of ijcar16 based on metering functions that are linear on a part of the state space and $0$ elsewhere. With this improvement, a suitable metering function can be found for such loops.
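The defining conditions of a metering function can be validated numerically on a sample loop. The loop and the candidate metering function below are assumptions of this sketch:

```python
# Assumed loop: "while x > 0 do x <- x - 1"; candidate metering
# function f(x) = x. We check the two defining conditions (as stated in
# Def. 5) on samples, and the induced lower bound on the runtime.

phi = lambda x: x > 0
update = lambda x: x - 1
f = lambda x: x

states = range(-10, 11)
# (mf-bounded): outside the guard, f is non-positive
assert all(f(x) <= 0 for x in states if not phi(x))
# f decreases by at most 1 per iteration
assert all(f(x) - f(update(x)) <= 1 for x in states if phi(x))

def iterations(x):
    # actual number of iterations the loop performs from state x
    count = 0
    while phi(x):
        x = update(x)
        count += 1
    return count

# induced lower bound: the loop runs at least f(x) times
assert all(iterations(x) >= f(x) for x in states)
print("metering-function conditions hold on samples")
```

For this particular loop the bound is tight ($f(x)$ equals the number of iterations whenever $x > 0$), so acceleration via this metering function would even be exact.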
4 A Calculus for Modular Loop Acceleration
All acceleration techniques presented so far are monolithic: Either they accelerate a loop successfully, or they fail completely. In other words, we cannot combine several techniques to accelerate a single loop. To address this, we now present a calculus that repeatedly applies acceleration techniques to simplify an acceleration problem resulting from a loop until it is solved and hence gives rise to a suitable formula which approximates or is equivalent to the original loop.
Definition 6 (Acceleration Problem).
The first component of an acceleration problem is the partial result that has been computed so far. The second component corresponds to the part of the loop condition that has already been processed successfully. As our calculus preserves consistency, the partial result always approximates the already processed part of the loop. The third component is the part of the loop condition that remains to be processed, i.e., the corresponding part of the loop still needs to be accelerated. The goal of our calculus is to transform a canonical acceleration problem into a solved one.
More specifically, whenever we have simplified a canonical acceleration problem
to
then and
Then it suffices to find some such that
(1) 
The reason is that we have and thus
Note that the acceleration techniques presented so far would compute a formula satisfying
(2) 
which does not use the information that we have already accelerated part of the loop condition. In Sec. 5, we will adapt all acceleration techniques from Sec. 3 to search for a formula that satisfies (1) instead of (2), i.e., we will turn them into conditional acceleration techniques.
Definition 7 (Conditional Acceleration).
We call a partial function
a conditional acceleration technique. It is sound if
for all , , and . It is exact if additionally
for all , , and .
Note that every acceleration technique gives rise to a conditional acceleration technique in a straightforward way (by disregarding the second argument from Def. 7). Soundness and exactness can be lifted directly to the conditional setting:
Lemma 1 (Acceleration as Conditional Acceleration).
Let be an acceleration technique following Def. 1. Then for the conditional acceleration technique given by , the following holds:

is sound if and only if is sound

is exact if and only if is exact
Proof. For the “if” direction of 1., we need to show that
if is a sound acceleration technique. Thus:
(by definition of )  
(by soundness of ) 
The “only if” direction of 1. is trivial.
For the “if” direction of 2., soundness of follows from 1. We still need to show that
if the underlying technique is an exact acceleration technique. Thus:
(by exactness of )  
(by definition of ) 
The “only if” direction of 2. is trivial.
We are now ready to present our acceleration calculus, which combines loop acceleration techniques in a modular way. In the following, we assume w.l.o.g. that formulas are in CNF, and we identify a formula with its set of clauses.
Definition 8 (Acceleration Calculus).
The relation on acceleration problems is defined by the rule
where is a sound conditional acceleration technique. A step is exact (written ) if is exact.
So our calculus allows us to pick a subset of clauses from the yet unprocessed condition and "move" it to the part that has already been processed successfully. To this end, the chosen clauses need to be accelerated by a conditional acceleration technique, i.e., when accelerating them we may assume the already processed part of the condition.
With Lem. 1, our calculus allows for combining arbitrary existing acceleration techniques without adapting them. However, many acceleration techniques can easily be turned into more sophisticated conditional acceleration techniques (see Sec. 5), which increases the power of our approach.
Example 5.
We continue Ex. 4 and fix the clause to be processed next. Thus, we need to accelerate the corresponding loop to enable a step. The resulting derivation is shown in Fig. 1, where Thm. 2 was applied to the loop in the second step. Thus, we successfully constructed a formula which is equivalent to the original loop. ∎
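The calculus itself can be prototyped in a few lines. The following toy sketch (the loop is an assumed example, and validity of the side conditions is checked by brute force over a finite state space rather than with an SMT solver) moves clauses from the unprocessed to the processed part using two conditional techniques in the spirit of Thms. 1 and 2:

```python
# Toy calculus driver. Assumed loop: "while b > 0 and a > 0 do
# (a, b) <- (a - b, b)", with the guard split into two clauses.

STATES = [(a, b) for a in range(-4, 10) for b in range(-3, 5)]
update = lambda s: (s[0] - s[1], s[1])  # (a, b) <- (a - b, b)

def cond_increase(clause, processed):
    # conditional variant of Thm. 2: processed(x) and clause(x)
    # imply clause(a(x))
    ok = all(clause(update(s)) for s in STATES
             if all(c(s) for c in processed) and clause(s))
    return ("invariant", clause) if ok else None

def cond_decrease(clause, processed):
    # conditional variant of Thm. 1: processed(x) and clause(a(x))
    # imply clause(x)
    ok = all(clause(s) for s in STATES
             if all(c(s) for c in processed) and clause(update(s)))
    return ("converse", clause) if ok else None

def accelerate(clauses):
    """Move clauses to the processed part while some technique applies."""
    processed, result = [], []
    pending = list(clauses)
    progress = True
    while pending and progress:
        progress = False
        for clause in list(pending):
            for technique in (cond_increase, cond_decrease):
                res = technique(clause, processed)
                if res is not None:
                    result.append(res)
                    processed.append(clause)
                    pending.remove(clause)
                    progress = True
                    break
    return result if not pending else None  # None: problem not solved

guard = [lambda s: s[1] > 0, lambda s: s[0] > 0]
print(accelerate(guard))
```

Here `b > 0` is accepted as an invariant first, and only then can `a > 0` be accepted as a converse invariant under the assumption `b > 0`: an interaction that neither standalone technique achieves, which is exactly the point of the calculus.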
The crucial property of our calculus is the following.
Lemma 2.
The relation preserves consistency, and the relation preserves exactness.
Proof. For the first part of the lemma, assume
where is consistent and
We get
The first step holds since is consistent and the second step holds since is sound. This proves consistency of
For the second part of the lemma, assume
where is exact and . We get
(by exactness of )  
(by exactness of ) 
which, together with consistency, proves exactness of
Then the correctness of our calculus follows immediately. The reason is that
implies .
Termination of our calculus is trivial, as the size of the third component of the acceleration problem is decreasing.
Theorem 6 (Termination of ).
The relation terminates.
5 Conditional Acceleration Techniques
We now show how to turn the acceleration techniques from Sec. 3 into conditional acceleration techniques, starting with acceleration via monotonic decrease.
Theorem 7 (Conditional Acceleration via Monotonic Decrease).
If
(3) 
then the following conditional acceleration technique is exact:
Proof. For soundness, we need to prove
(4) 
for all . We use induction on . If , then
(as )  
(as ) 
In the induction step, we have
For exactness, we need to prove
for all , which is trivial.
So compared to Thm. 1, we just add the already processed part of the loop condition to the premise of the implication that needs to be checked in order to apply acceleration via monotonic decrease. Thm. 2 can be adapted analogously.
Theorem 8 (Conditional Acceleration via Monotonic Increase).
If
(5) 
then the following conditional acceleration technique is exact:
Proof. For soundness, we need to prove
(6) 
for all . We use induction on . If , then
(as ) 
In the induction step, we have