A Calculus for Modular Loop Acceleration and Non-Termination Proofs

Loop acceleration can be used to prove safety, reachability, runtime bounds, and (non-)termination of programs operating on integers. To this end, a variety of acceleration techniques has been proposed. However, all of them are monolithic: Either they accelerate a loop successfully, or they fail completely. In contrast, we present a calculus that allows for combining acceleration techniques in a modular way and we show how to integrate many existing acceleration techniques into our calculus. Moreover, we propose two novel acceleration techniques that can be incorporated into our calculus seamlessly. Some of these acceleration techniques apply only to non-terminating loops. Thus, combining them with our novel calculus results in a new, modular approach for proving non-termination. An empirical evaluation demonstrates the applicability of our approach, both for loop acceleration and for proving non-termination.


1 Introduction

In the last years, loop acceleration techniques have successfully been used to build static analyses for programs operating on integers underapprox15; ijcar16; fmcad19; journal; iosif17; Bozga14; fast. Essentially, such techniques extract a quantifier-free first-order formula ψ from a single-path loop T, i.e., a loop without branching in its body, such that ψ under-approximates (or is equivalent to) T. More specifically, each model of the resulting formula ψ corresponds to an execution of T (and vice versa). By integrating such techniques into a suitable program-analysis framework ijcar16; journal; fmcad19; iosif12; iosif17; FlatFramework, whole programs can be transformed into first-order formulas which can then be analyzed by off-the-shelf solvers. Applications include proving safety iosif12 or reachability iosif12; underapprox15, deducing bounds on the runtime complexity ijcar16; journal, and proving (non-)termination fmcad19; Bozga14.

However, existing acceleration techniques apply only if certain prerequisites are in place. So the power of static analyses built upon loop acceleration depends on the applicability of the underlying acceleration technique.

In this paper, we introduce a calculus which allows for combining several acceleration techniques modularly in order to accelerate a single loop. This not only allows for modular combinations of standalone techniques, but it also enables interactions between different acceleration techniques, allowing them to obtain better results together. Consequently, it can handle classes of loops where all standalone techniques fail. Moreover, we present two novel acceleration techniques and integrate them into our calculus.

One important application of loop acceleration is proving non-termination. As already observed in fmcad19 , certain properties of loops – in particular monotonicity of (parts of) the loop condition w.r.t. the loop body – are crucial for both loop acceleration and proving non-termination. In fmcad19 , this observation has been exploited to develop a technique for deducing invariants that are helpful to deal with non-terminating as well as terminating loops: For the former, they help to prove non-termination, and for the latter, they help to accelerate the loop.

In this paper, we take the next step by also unifying the actual techniques that are used for loop acceleration and for proving non-termination. To this end, we identify loop acceleration techniques that, if applied in isolation, give rise to non-termination proofs. Furthermore, we show that the combination of such non-termination techniques via our novel calculus for loop acceleration gives rise to non-termination proofs, too. In this way, we obtain a modular framework for combining several different non-termination techniques in order to prove non-termination of a single loop.

In the following, we introduce preliminaries in Sec. 2. Then, we discuss existing acceleration techniques in Sec. 3. In Sec. 4, we present our calculus to combine acceleration techniques and show how existing acceleration techniques can be integrated into our framework. Sec. 5 lifts existing acceleration techniques to conditional acceleration techniques, which provides additional power in the context of our framework by enabling interactions between different acceleration techniques. Next, we present two novel acceleration techniques and incorporate them into our calculus in Sec. 6. Then we adapt our calculus and certain acceleration techniques for proving non-termination in Sec. 7. After discussing related work in Sec. 8, we demonstrate the applicability of our approach via an empirical evaluation in Sec. 9 and conclude in Sec. 10.

A conference version of this paper was published in conference . The present paper provides the following additional contributions:

• We present formal correctness proofs for all of our results, which were omitted in conference for reasons of space.

• We present an improved version of the loop acceleration technique from (journal, Thm. 3.8) and (ijcar16, Thm. 7) that yields simpler formulas.

• We prove an informal statement from conference on using arbitrary existing acceleration techniques in our setting, resulting in the novel Lem. 1.

• The adaptation of our calculus and of certain acceleration techniques for proving non-termination (Sec. 7) is completely new.

• We extend the empirical evaluation from conference with extensive experiments comparing the adaptation of our calculus for proving non-termination with other state-of-the-art tools for proving non-termination (Sec. 9.2).

2 Preliminaries

We use the notation →x, →y, →z, … for vectors. Let C(→x) be the set of closed-form expressions over the variables →x containing, e.g., all arithmetic expressions built from →x, integer constants, addition, subtraction, multiplication, division, and exponentiation.¹ We consider loops of the form

¹ Note that there is no widely accepted definition of “closed forms”, and the results of the current paper are independent of the precise definition of C(→x).

 while φ do →x ← →a (Tloop)

where →x is a vector of pairwise different variables that range over the integers, the loop condition φ ∈ FOQF(C(→x)) (which we also call guard) is a finite quantifier-free first-order formula over the atoms {p > 0 ∣ p ∈ C(→x)}, and →a ∈ C(→x)^d such that the function² →x ↦ →a maps integers to integers. Loop denotes the set of all such loops.

² i.e., the (anonymous) function that maps →x to →a

We identify Tloop and the pair ⟨φ, →a⟩. Moreover, we identify →a and the function →x ↦ →a, where we sometimes write →a(→x) to make the variables →x explicit. We use the same convention for other (vectors of) expressions. Similarly, we identify the formula φ(→x) (or just φ) with the predicate →x ↦ φ(→x). We use the standard integer-arithmetic semantics for the symbols occurring in formulas.

Throughout this paper, let n be a designated variable and let:

 →y := →x ⊎ {n}

Intuitively, the variable n represents the number of loop iterations and →a^n(→x) corresponds to the values of the program variables after n iterations.

Tloop induces a relation ⟶_{Tloop} on ℤ^d:

 φ(→x) ∧ →x′ = →a(→x) ⟺ →x ⟶_{Tloop} →x′

2.1 Loop Acceleration

In Sec. 3 – Sec. 6, our goal is to accelerate Tloop, i.e., to find a formula ψ ∈ FOQF(C(→y)) such that

 ψ ⟺ →x ⟶^n_{Tloop} →x′  for all n > 0. (equiv)

To see why we use C(→y) instead of, e.g., polynomials, consider the loop

 while x1 > 0 do (x1, x2) ← (x1 − 1, 2·x2). (Texp)

Here, an acceleration technique synthesizes, e.g., the formula

 (x1′, x2′) = (x1 − n, 2^n·x2) ∧ x1 − n + 1 > 0, (ψexp)

where 2^n·x2 is equivalent to the value of x2 after n iterations, and the inequation x1 − n + 1 > 0 ensures that Texp can be executed at least n times. Clearly, the growth of x2 cannot be captured by a polynomial, i.e., even the behavior of quite simple loops is beyond the expressiveness of polynomial arithmetic.

In practice, one can restrict our approach to weaker classes of expressions to ease automation, but the presented results are independent of such considerations.
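As a sanity check (ours, not part of the paper's formal development), the following Python sketch compares ψexp against a brute-force simulation of Texp on a grid of samples. Since the closed form is functional, ψexp either determines the unique post-state for a given n or is unsatisfiable:

```python
# Sanity check: psi_exp should characterize exactly n iterations of T_exp.
def runs_n_times(x1, x2, n):
    """Simulate T_exp; return the final state if the guard holds n times, else None."""
    for _ in range(n):
        if not x1 > 0:
            return None
        x1, x2 = x1 - 1, 2 * x2
    return x1, x2

def psi_exp(x1, x2, n):
    """The accelerated formula: closed form plus the guard before the last step."""
    if x1 - n + 1 > 0:
        return x1 - n, (2 ** n) * x2
    return None

assert all(runs_n_times(x1, x2, n) == psi_exp(x1, x2, n)
           for x1 in range(-3, 8) for x2 in range(-3, 4) for n in range(1, 7))
print("psi_exp matches the n-step relation of T_exp on all samples")
```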

Some acceleration techniques cannot guarantee (equiv), but the resulting formula is an under-approximation of Tloop, i.e., we have

 ψ ⟹ →x ⟶^n_{Tloop} →x′  for all n > 0. (approx)

If (equiv) holds, then ψ is equivalent to Tloop. Similarly, if (approx) holds, then ψ approximates Tloop.

Definition 1 (Acceleration Technique).

An acceleration technique is a partial function

 accel:Loop⇀FOQF(C(→y)).

It is sound if the formula accel(T) approximates T for all T ∈ dom(accel). It is exact if the formula accel(T) is equivalent to T for all T ∈ dom(accel).

2.2 Non-Termination

In Sec. 7, our goal is to prove non-termination.

Definition 2 ((Non-)Termination).

We call a vector →x ∈ ℤ^d a witness of non-termination for Tloop (denoted →x ⟶^∞_{Tloop} ⊥) if

 ∀n ∈ ℕ. φ(→a^n(→x)).

Here, →a^n is the n-fold application of →a, i.e., →a^0(→x) = →x and →a^{n+1}(→x) = →a(→a^n(→x)). If there is such a witness, then Tloop is non-terminating. Otherwise, Tloop terminates.

To this end, we search for a formula η that characterizes a non-empty set of witnesses of non-termination, called a certificate of non-termination.

Definition 3 (Certificate of Non-Termination).

We call a formula η ∈ FOQF(C(→x)) a certificate of non-termination for Tloop if it is satisfiable and

 ∀→x ∈ ℤ^d. η(→x) ⟹ →x ⟶^∞_{Tloop} ⊥.

3 Existing Acceleration Techniques

We now recall several existing acceleration techniques. In Sec. 4 we will see how these techniques can be combined in a modular way. All of them first compute a closed form for the values of the program variables after n iterations.

Definition 4 (Closed Form).

We call →c ∈ C(→y)^d a closed form of Tloop if

 ∀→x ∈ ℤ^d, n ∈ ℕ. →c = →a^n(→x).

To find closed forms, one tries to solve the system of recurrence equations →c^(n) = →a(→c^(n−1)) with the initial condition →c^(0) = →x. In the sequel, we assume that we can represent →a^n(→x) in closed form. Note that one can always do so if →a = A·→x + →b with A ∈ ℤ^{d×d} and →b ∈ ℤ^d, i.e., if →a is linear. To this end, one considers the matrix B := (A →b; 0 ⋯ 0 1) ∈ ℤ^{(d+1)×(d+1)} and computes its Jordan normal form B = T⁻¹·J·T, where T is invertible and J is a block diagonal matrix (which has complex entries if B has complex eigenvalues). Then the closed form for J^n can be given directly (see, e.g., Ouaknine15), B^n = T⁻¹·J^n·T, and →a^n(→x) is equal to the first d components of B^n·(→x; 1). Moreover, one can compute a closed form if →a = (c1·x1 + p1, …, cd·xd + pd), where ci ∈ ℤ and each pi is a polynomial over x1, …, x_{i−1} polyloopsLPAR20; polyloopsSAS20.
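The homogenization step can be illustrated as follows (ours, not the paper's procedure). For the linear update (x1, x2) ← (x1 + x2, x2 − 1), the sketch computes B^n by repeated multiplication instead of deriving a symbolic closed form via the Jordan normal form, and checks that the first d components of B^n·(→x; 1) agree with →a^n(→x):

```python
# Homogenization of a linear update a(x) = A*x + b into one matrix B,
# so that a^n(x) equals the first d components of B^n * (x, 1).
def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A, b = [[1, 1], [0, 1]], [0, -1]    # update (x1, x2) <- (x1 + x2, x2 - 1)
B = [[1, 1, 0],                     # homogenized matrix (A | b ; 0 ... 0 1)
     [0, 1, -1],
     [0, 0, 1]]

def a(x):
    return [sum(A[i][j] * x[j] for j in range(2)) + b[i] for i in range(2)]

x, n = [3, 5], 6

Bn = B                              # B^n via repeated multiplication
for _ in range(n - 1):
    Bn = mat_mul(Bn, B)

closed = mat_vec(Bn, x + [1])[:2]   # first d components of B^n * (x, 1)

iterated = x
for _ in range(n):
    iterated = a(iterated)

assert closed == iterated
print("closed form and iterated update agree:", closed)
```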

3.1 Acceleration via Decrease or Increase

The first acceleration technique discussed in this section exploits the following observation: If φ(→a(→x)) implies φ(→x) and if φ(→a^{n−1}(→x)) holds, then Tloop is applicable at least n times. So in other words, it requires that the indicator function (or characteristic function) I_φ : ℤ^d → {0, 1} of φ with I_φ(→x) = 1 ⟺ φ(→x) is monotonically decreasing w.r.t. Tloop, i.e., I_φ(→x) ≥ I_φ(→a(→x)).

Theorem 1 (Acceleration via Monotonic Decrease underapprox15 ).

If

 φ(→a(→x)) ⟹ φ(→x),

then the following acceleration technique is exact:

 Tloop ↦ →x′ = →a^n(→x) ∧ φ(→a^{n−1}(→x))

We will prove the more general Thm. 7 in Sec. 5.

So for example, Thm. 1 accelerates Texp to ψexp. However, the requirement φ(→a(→x)) ⟹ φ(→x) is often violated in practice. To see this, consider the loop

 while x1 > 0 ∧ x2 > 0 do (x1, x2) ← (x1 − 1, x2 + 1). (Tnon-dec)

It cannot be accelerated with Thm. 1 as

 x1 − 1 > 0 ∧ x2 + 1 > 0 ⇏ x1 > 0 ∧ x2 > 0.

A dual acceleration technique to Thm. 1 is obtained by “reversing” the implication in the prerequisites of the theorem. Then I_φ is monotonically increasing w.r.t. Tloop. So φ is a loop invariant and thus φ is a recurrent set rupak08 (see also Sec. 8.2) of Tloop.

Theorem 2 (Acceleration via Monotonic Increase).

If

 φ(→x) ⟹ φ(→a(→x)),

then the following acceleration technique is exact:

 Tloop ↦ →x′ = →a^n(→x) ∧ φ(→x)

We will prove the more general Thm. 8 in Sec. 5.

Example 1.

As a minimal example, Thm. 2 accelerates

 while x > 0 do x ← x + 1 (Tinc)

to x′ = x + n ∧ x > 0. ∎
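The exactness claim of Example 1 can be cross-checked empirically (our sketch, not part of the paper). Since x′ = x + n is determined by x and n, it suffices to compare the applicability condition x > 0 with the n-step relation of Tinc:

```python
# Empirical check of acceleration via monotonic increase on T_inc.
def reachable(x, n):
    """Does T_inc admit n iterations starting from x?"""
    for _ in range(n):
        if not x > 0:
            return False
        x += 1
    return True

def psi(x, n):
    """Applicability part of the accelerated formula from Thm. 2."""
    return x > 0

assert all(psi(x, n) == reachable(x, n) for x in range(-5, 6) for n in range(1, 8))
print("psi matches the n-step relation of T_inc on all samples")
```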

3.2 Acceleration via Decrease and Increase

Both acceleration techniques presented so far have been generalized in fmcad19 .

Theorem 3 (Acceleration via Monotonicity fmcad19 ).

If

 φ(→x) ⟺ φ1(→x) ∧ φ2(→x) ∧ φ3(→x),
 φ1(→x) ⟹ φ1(→a(→x)),
 φ1(→x) ∧ φ2(→a(→x)) ⟹ φ2(→x), and
 φ1(→x) ∧ φ2(→x) ∧ φ3(→x) ⟹ φ3(→a(→x)),

then the following acceleration technique is exact:

 Tloop ↦ →x′ = →a^n(→x) ∧ φ1(→x) ∧ φ2(→a^{n−1}(→x)) ∧ φ3(→x)

Proof. Immediate consequence of Thm. 5 and Remark 1, which will be proven in Sections 5 and 4. ∎

Here, φ1 and φ3 are again invariants of the loop. Thus, as in Thm. 2 it suffices to require that they hold before entering the loop. On the other hand, φ2 needs to satisfy a similar condition as in Thm. 1, and thus it suffices to require that φ2 holds before the last iteration. We also say that φ2 is a converse invariant (w.r.t. φ1). It is easy to see that Thm. 3 is equivalent to Thm. 1 if φ1 = φ3 = ⊤ (where ⊤ denotes logical truth) and it is equivalent to Thm. 2 if φ2 = ⊤.

Example 2.

With Thm. 3, Tnon-dec can be accelerated to

 (x1′, x2′) = (x1 − n, x2 + n) ∧ x2 > 0 ∧ x1 − n + 1 > 0 (ψnon-dec)

by choosing φ1 := x2 > 0, φ2 := x1 > 0, and φ3 := ⊤. ∎
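Example 2 can also be cross-checked empirically (our sketch). As before, the closed form is functional, so only the applicability part of ψnon-dec needs to be compared with the n-step relation of Tnon-dec:

```python
# Empirical check of acceleration via monotonicity (Thm. 3) on T_non-dec.
def reachable(x1, x2, n):
    """Does T_non-dec admit n iterations starting from (x1, x2)?"""
    for _ in range(n):
        if not (x1 > 0 and x2 > 0):
            return False
        x1, x2 = x1 - 1, x2 + 1
    return True

def psi(x1, x2, n):
    """Applicability part of psi_non-dec: x2 > 0 and x1 - n + 1 > 0."""
    return x2 > 0 and x1 - n + 1 > 0

assert all(psi(x1, x2, n) == reachable(x1, x2, n)
           for x1 in range(-3, 7) for x2 in range(-3, 4) for n in range(1, 6))
print("psi_non-dec matches the n-step relation on all samples")
```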

Thm. 3 naturally raises the question: Why do we need two invariants? To see this, consider the restriction of Thm. 3 where φ3 := ⊤. It would fail for a loop like

 while x1 > 0 ∧ x2 > 0 do (x1, x2) ← (x1 + x2, x2 − 1) (T2-invs)

which can easily be handled by Thm. 3 (by choosing φ1 := ⊤, φ2 := x2 > 0, and φ3 := x1 > 0). The problem is that the converse invariant x2 > 0 is needed to prove invariance of x1 > 0. Similarly, the restriction of Thm. 3 where φ1 := ⊤ would fail for the following variant of T2-invs:

 while x1 > 0 ∧ x2 > 0 do (x1, x2) ← (x1 − x2, x2 + 1)

Here, the problem is that the invariant x2 > 0 is needed to prove converse invariance of x1 > 0.

3.3 Acceleration via Metering Functions

Another approach for loop acceleration uses metering functions, a variation of classical ranking functions from termination and complexity analysis ijcar16. While ranking functions give rise to upper bounds on the runtime of loops, metering functions provide lower runtime bounds, i.e., the definition of a metering function mf ensures that for each →x ∈ ℤ^d, the loop under consideration can be applied at least ⌈mf(→x)⌉ times.

Definition 5 (Metering Function ijcar16 ).

We call a function mf : ℤ^d → ℚ a metering function for Tloop if the following holds:

 φ(→x) ⟹ mf(→x) − mf(→a(→x)) ≤ 1, and
 ¬φ(→x) ⟹ mf(→x) ≤ 0 (mf-bounded)

We can use metering functions to accelerate loops as follows:

Theorem 4 (Acceleration via Metering Functions ijcar16 ; journal ).

Let mf be a metering function for Tloop. Then the following acceleration technique is sound:

 Tloop ↦ →x′ = →a^n(→x) ∧ n < mf(→x) + 1

We will prove the more general Thm. 9 in Sec. 5. In contrast to (journal, Thm. 3.8) and (ijcar16, Thm. 7), the acceleration technique from Thm. 4 does not conjoin the loop condition φ to the result, which turned out to be superfluous. The reason is that n < mf(→x) + 1 implies φ(→x) due to (mf-bounded).

Example 3.

Using the metering function (x1, x2) ↦ x1, Thm. 4 accelerates Texp to

 (x1′, x2′) = (x1 − n, 2^n·x2) ∧ n < x1 + 1. ∎

However, synthesizing non-trivial (i.e., non-constant) metering functions is challenging. Moreover, unless the number of iterations of Tloop equals ⌈mf(→x)⌉ for all →x ∈ ℤ^d, acceleration via metering functions is not exact.

Linear metering functions can be synthesized via Farkas’ Lemma and SMT solving ijcar16. However, many loops do not have non-trivial linear metering functions. To see this, reconsider Tnon-dec. Here, (x1, x2) ↦ x1 is not a metering function as Tnon-dec cannot be iterated at least x1 times if x2 ≤ 0. Thus, journal proposes a refinement of ijcar16 based on metering functions of the form →x ↦ I_ξ(→x) · mf(→x), where ξ ∈ FOQF(C(→x)) and mf is linear. With this improvement, the metering function

 (x1, x2) ↦ I_{x2>0}(x2) ⋅ x1

can be used to accelerate Tnon-dec to

 (x1′, x2′) = (x1 − n, x2 + n) ∧ x2 > 0 ∧ n < x1 + 1.
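The defining property of metering functions, i.e., that the loop admits at least mf(→x) iterations, can be sanity-checked for the refined metering function of Tnon-dec (a finite test of our own, not a proof):

```python
# Finite sanity check: T_non-dec runs at least mf(x1, x2) times, where
# mf(x1, x2) = x1 if x2 > 0 and mf(x1, x2) = 0 otherwise.
def iterations(x1, x2, limit=100):
    """Count how often T_non-dec can actually be applied (capped by limit)."""
    n = 0
    while x1 > 0 and x2 > 0 and n < limit:
        x1, x2, n = x1 - 1, x2 + 1, n + 1
    return n

def mf(x1, x2):
    """The refined metering function I_{x2>0}(x2) * x1."""
    return x1 if x2 > 0 else 0

assert all(iterations(x1, x2) >= mf(x1, x2)
           for x1 in range(-3, 10) for x2 in range(-3, 5))
print("mf is a lower bound on the number of iterations for all samples")
```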

4 A Calculus for Modular Loop Acceleration

All acceleration techniques presented so far are monolithic: Either they accelerate a loop successfully, or they fail completely. In other words, we cannot combine several techniques to accelerate a single loop. To this end, we now present a calculus that repeatedly applies acceleration techniques to simplify an acceleration problem resulting from a loop Tloop until it is solved and hence gives rise to a suitable ψ which approximates or is equivalent to Tloop.

Definition 6 (Acceleration Problem).

A tuple

 [ψ ∣ φ̌ ∣ φ̂]_→a,

where ψ ∈ FOQF(C(→y)), φ̌, φ̂ ∈ FOQF(C(→x)), and →a ∈ C(→x)^d, is an acceleration problem. It is consistent if ψ approximates ⟨φ̌, →a⟩, exact if ψ is equivalent to ⟨φ̌, →a⟩, and solved if it is consistent and φ̂ ≡ ⊤. The canonical acceleration problem of a loop Tloop is

 [⊤ ∣ ⊤ ∣ φ]_→a.

Example 4.

The canonical acceleration problem of Tnon-dec is

 [⊤ ∣ ⊤ ∣ x1 > 0 ∧ x2 > 0]_{(x1−1, x2+1)}.

The first component ψ of an acceleration problem is the partial result that has been computed so far. The second component φ̌ corresponds to the part of the loop condition that has already been processed successfully. As our calculus preserves consistency, ψ always approximates ⟨φ̌, →a⟩. The third component φ̂ is the part of the loop condition that remains to be processed, i.e., the loop ⟨φ̂, →a⟩ still needs to be accelerated. The goal of our calculus is to transform a canonical into a solved acceleration problem.

More specifically, whenever we have simplified a canonical acceleration problem

 [⊤ ∣ ⊤ ∣ φ]_→a

to

 [ψ1 ∣ φ̌ ∣ φ̂]_→a,

then φ ≡ φ̌ ∧ φ̂ and

 ψ1 implies →x ⟶^n_{⟨φ̌,→a⟩} →x′.

Then it suffices to find some ψ2 such that

 →x ⟶^n_{⟨φ̌,→a⟩} →x′ ∧ ψ2 implies →x ⟶^n_{⟨φ̂,→a⟩} →x′. (1)

The reason is that we have ⟶^n_{⟨φ̌,→a⟩} ∩ ⟶^n_{⟨φ̂,→a⟩} = ⟶^n_{⟨φ,→a⟩} and thus

 ψ1 ∧ ψ2 implies →x ⟶^n_{⟨φ,→a⟩} →x′,

i.e., ψ1 ∧ ψ2 approximates ⟨φ, →a⟩.

Note that the acceleration techniques presented so far would map ⟨φ̂, →a⟩ to some ψ2 such that

 ψ2 implies →x ⟶^n_{⟨φ̂,→a⟩} →x′, (2)

which does not use the information that we have already accelerated ⟨φ̌, →a⟩. In Sec. 5, we will adapt all acceleration techniques from Sec. 3 to search for some ψ2 that satisfies (1) instead of (2), i.e., we will turn them into conditional acceleration techniques.

Definition 7 (Conditional Acceleration).

We call a partial function

 accel : Loop × FOQF(C(→x)) ⇀ FOQF(C(→y))

a conditional acceleration technique. It is sound if

 →x ⟶^n_{⟨φ̌,→a⟩} →x′ ∧ accel(⟨χ, →a⟩, φ̌) implies →x ⟶^n_{⟨χ,→a⟩} →x′

for all ⟨χ, →a⟩ ∈ dom(accel), φ̌ ∈ FOQF(C(→x)), and n > 0. It is exact if additionally

 →x ⟶^n_{⟨χ∧φ̌,→a⟩} →x′ implies accel(⟨χ, →a⟩, φ̌)

for all ⟨χ, →a⟩ ∈ dom(accel), φ̌ ∈ FOQF(C(→x)), and n > 0.

Note that every acceleration technique gives rise to a conditional acceleration technique in a straightforward way (by disregarding the second argument of accel in Def. 7). Soundness and exactness can be lifted directly to the conditional setting:

Lemma 1 (Acceleration as Conditional Acceleration).

Let accel0 be an acceleration technique following Def. 1. Then for the conditional acceleration technique accel given by accel(⟨χ, →a⟩, φ̌) := accel0(⟨χ, →a⟩), the following holds:

1. accel is sound if and only if accel0 is sound

2. accel is exact if and only if accel0 is exact

Proof. For the “if” direction of 1., we need to show that

 →x ⟶^n_{⟨φ̌,→a⟩} →x′ ∧ accel(⟨χ, →a⟩, φ̌) implies →x ⟶^n_{⟨χ,→a⟩} →x′

if accel0 is a sound acceleration technique. Thus:

 →x ⟶^n_{⟨φ̌,→a⟩} →x′ ∧ accel(⟨χ, →a⟩, φ̌)
 ⟹ accel(⟨χ, →a⟩, φ̌)
 ⟺ accel0(⟨χ, →a⟩) (by definition of accel)
 ⟹ →x ⟶^n_{⟨χ,→a⟩} →x′ (by soundness of accel0)

The “only if” direction of 1. is trivial.

For the “if” direction of 2., soundness of accel follows from 1. We still need to show that

 →x ⟶^n_{⟨χ∧φ̌,→a⟩} →x′ implies accel(⟨χ, →a⟩, φ̌)

if accel0 is an exact acceleration technique. Thus:

 →x ⟶^n_{⟨χ∧φ̌,→a⟩} →x′
 ⟹ →x ⟶^n_{⟨χ,→a⟩} →x′
 ⟺ accel0(⟨χ, →a⟩) (by exactness of accel0)
 ⟺ accel(⟨χ, →a⟩, φ̌) (by definition of accel)

The “only if” direction of 2. is trivial. ∎

We are now ready to present our acceleration calculus, which combines loop acceleration techniques in a modular way. In the following, w.l.o.g. we assume that formulas are in CNF, and we identify the formula ⋀_{i=1}^k Ci with the set of clauses {Ci ∣ 1 ≤ i ≤ k}.

Definition 8 (Acceleration Calculus).

The relation ⇝ on acceleration problems is defined by the rule

 [ψ1 ∣ φ̌ ∣ φ̂ ⊎ χ]_→a ⇝ [ψ1 ∧ ψ2 ∣ φ̌ ∧ χ ∣ φ̂]_→a  if accel(⟨χ, →a⟩, φ̌) = ψ2,

where accel is a sound conditional acceleration technique. A ⇝-step is exact (written ⇝e) if accel is exact.

So our calculus allows us to pick a subset χ (of clauses) from the yet unprocessed condition φ̂ and “move” it to φ̌, which has already been processed successfully. To this end, ⟨χ, →a⟩ needs to be accelerated by a conditional acceleration technique, i.e., when accelerating ⟨χ, →a⟩ we may assume φ̌.

With Lem. 1, our calculus allows for combining arbitrary existing acceleration techniques without adapting them. However, many acceleration techniques can easily be turned into more sophisticated conditional acceleration techniques (see Sec. 5), which increases the power of our approach.

Example 5.

We continue Example 4 and fix χ := x1 > 0. Thus, we need to accelerate the loop ⟨x1 > 0, (x1 − 1, x2 + 1)⟩ to enable a ⇝-step. The resulting derivation is shown in Fig. 1, where Thm. 2 was applied to the loop ⟨x2 > 0, (x1 − 1, x2 + 1)⟩ in the second step. Thus, we successfully constructed the formula ψnon-dec, which is equivalent to Tnon-dec. ∎
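To illustrate how the calculus operates, the following self-contained Python toy (our own simplification, not the paper's implementation) replays the derivation for Tnon-dec. Each iteration of the while-loop performs one ⇝-step, moving a clause χ from the unprocessed to the processed part via conditional monotonic decrease (Thm. 7) or increase (Thm. 8). Implication checks are done by brute-force sampling, where a real implementation would use an SMT solver:

```python
# Toy acceleration calculus on T_non-dec; clauses contribute one conjunct each.
from itertools import product

a = lambda x: (x[0] - 1, x[1] + 1)           # body of T_non-dec

def a_n(x, n):                               # n-fold application of the body
    for _ in range(n):
        x = a(x)
    return x

SAMPLES = list(product(range(-4, 5), repeat=2))

def holds(pred):                             # "valid on all samples"
    return all(pred(x) for x in SAMPLES)

def try_decrease(chi, checked):
    # conditional monotonic decrease (Thm. 7): checked(x) and chi(a(x)) ==> chi(x)
    if holds(lambda x: not (checked(x) and chi(a(x))) or chi(x)):
        return lambda x, n: chi(a_n(x, n - 1))
    return None

def try_increase(chi, checked):
    # conditional monotonic increase (Thm. 8): checked(x) and chi(x) ==> chi(a(x))
    if holds(lambda x: not (checked(x) and chi(x)) or chi(a(x))):
        return lambda x, n: chi(x)
    return None

clauses = [lambda x: x[0] > 0, lambda x: x[1] > 0]   # x1 > 0 and x2 > 0
psi, checked = [], []

while clauses:                               # one ~>-step per iteration
    for chi in clauses:
        checked_pred = lambda x: all(c(x) for c in checked)
        conjunct = try_decrease(chi, checked_pred) or try_increase(chi, checked_pred)
        if conjunct:
            psi.append(conjunct)
            checked.append(chi)
            clauses.remove(chi)
            break
    else:
        raise RuntimeError("no applicable conditional acceleration technique")

def accelerated(x, n):                       # psi_1 /\ ... /\ psi_k (closed form omitted)
    return all(p(x, n) for p in psi)

def reachable(x, n):                         # the true n-step relation of T_non-dec
    for _ in range(n):
        if not (x[0] > 0 and x[1] > 0):
            return False
        x = a(x)
    return True

assert all(accelerated(x, n) == reachable(x, n) for x in SAMPLES for n in range(1, 6))
print("calculus result matches the n-step relation on all samples")
```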

The crucial property of our calculus is the following.

Lemma 2.

The relation ⇝ preserves consistency, and the relation ⇝e preserves exactness.

Proof. For the first part of the lemma, assume

 [ψ1 ∣ φ̌ ∣ φ̂ ⊎ χ]_→a ⇝ [ψ1 ∧ ψ2 ∣ φ̌ ∧ χ ∣ φ̂]_→a

where [ψ1 ∣ φ̌ ∣ φ̂ ⊎ χ]_→a is consistent and

 accel(⟨χ, →a⟩, φ̌) = ψ2.

We get

 ψ1 ∧ ψ2
 ⟹ →x ⟶^n_{⟨φ̌,→a⟩} →x′ ∧ ψ2
 ⟹ →x ⟶^n_{⟨φ̌,→a⟩} →x′ ∧ →x ⟶^n_{⟨χ,→a⟩} →x′
 ⟺ →x ⟶^n_{⟨φ̌∧χ,→a⟩} →x′.

The first step holds since [ψ1 ∣ φ̌ ∣ φ̂ ⊎ χ]_→a is consistent and the second step holds since accel is sound. This proves consistency of

 [ψ1 ∧ ψ2 ∣ φ̌ ∧ χ ∣ φ̂]_→a.

For the second part of the lemma, assume

 [ψ1 ∣ φ̌ ∣ φ̂ ⊎ χ]_→a ⇝e [ψ1 ∧ ψ2 ∣ φ̌ ∧ χ ∣ φ̂]_→a

where [ψ1 ∣ φ̌ ∣ φ̂ ⊎ χ]_→a is exact and accel(⟨χ, →a⟩, φ̌) = ψ2. We get

 →x ⟶^n_{⟨φ̌∧χ,→a⟩} →x′
 ⟺ →x ⟶^n_{⟨φ̌∧χ,→a⟩} →x′ ∧ ψ2 (by exactness of accel)
 ⟺ →x ⟶^n_{⟨φ̌,→a⟩} →x′ ∧ ψ2
 ⟺ ψ1 ∧ ψ2 (by exactness of ψ1)

which, together with consistency, proves exactness of

 [ψ1 ∧ ψ2 ∣ φ̌ ∧ χ ∣ φ̂]_→a. ∎

Then the correctness of our calculus follows immediately. The reason is that ⊤ implies →x ⟶^n_{⟨⊤,→a⟩} →x′, i.e., the canonical acceleration problem [⊤ ∣ ⊤ ∣ φ]_→a is exact.

Theorem 5 (Correctness of ⇝).

If

 [⊤ ∣ ⊤ ∣ φ]_→a ⇝* [ψ ∣ φ ∣ ⊤]_→a,

then ψ approximates ⟨φ, →a⟩. If

 [⊤ ∣ ⊤ ∣ φ]_→a ⇝e* [ψ ∣ φ ∣ ⊤]_→a,

then ψ is equivalent to ⟨φ, →a⟩.

Termination of our calculus is trivial, as the size of the third component of the acceleration problem is decreasing.

Theorem 6 (Termination of ⇝).

The relation terminates.

5 Conditional Acceleration Techniques

We now show how to turn the acceleration techniques from Sec. 3 into conditional acceleration techniques, starting with acceleration via monotonic decrease.

Theorem 7 (Conditional Acceleration via Monotonic Decrease).

If

 φ̌(→x) ∧ χ(→a(→x)) ⟹ χ(→x), (3)

then the following conditional acceleration technique is exact:

 (⟨χ, →a⟩, φ̌) ↦ →x′ = →a^n(→x) ∧ χ(→a^{n−1}(→x))

Proof. For soundness, we need to prove

 →x ⟶^m_{⟨φ̌,→a⟩} →a^m(→x) ∧ χ(→a^{m−1}(→x)) ⟹ →x ⟶^m_{⟨χ,→a⟩} →a^m(→x) (4)

for all m > 0. We use induction on m. If m = 1, then

 →x ⟶^m_{⟨φ̌,→a⟩} →a^m(→x) ∧ χ(→a^{m−1}(→x))
 ⟹ χ(→x) (as m = 1)
 ⟺ →x ⟶_{⟨χ,→a⟩} →a(→x)
 ⟺ →x ⟶^m_{⟨χ,→a⟩} →a^m(→x). (as m = 1)

In the induction step, we have

 →x ⟶^{m+1}_{⟨φ̌,→a⟩} →a^{m+1}(→x) ∧ χ(→a^m(→x))
 ⟹ →x ⟶^m_{⟨φ̌,→a⟩} →a^m(→x) ∧ χ(→a^m(→x))
 ⟺ →x ⟶^m_{⟨φ̌,→a⟩} →a^m(→x) ∧ φ̌(→a^{m−1}(→x)) ∧ χ(→a^m(→x)) (as m > 0)
 ⟹ →x ⟶^m_{⟨φ̌,→a⟩} →a^m(→x) ∧ χ(→a^{m−1}(→x)) ∧ χ(→a^m(→x)) (due to (3))
 ⟹ →x ⟶^m_{⟨χ,→a⟩} →a^m(→x) ∧ χ(→a^m(→x)) (by the induction hypothesis (4))
 ⟺ →x ⟶^{m+1}_{⟨χ,→a⟩} →a^{m+1}(→x).

For exactness, we need to prove

 →x ⟶^m_{⟨χ∧φ̌,→a⟩} →a^m(→x) ⟹ χ(→a^{m−1}(→x))

for all m > 0, which is trivial. ∎

So we just add φ̌ to the premise of the implication that needs to be checked to apply acceleration via monotonic decrease. Thm. 2 can be adapted analogously.

Theorem 8 (Conditional Acceleration via Monotonic Increase).

If

 φ̌(→x) ∧ χ(→x) ⟹ χ(→a(→x)), (5)

then the following conditional acceleration technique is exact:

 (⟨χ, →a⟩, φ̌) ↦ →x′ = →a^n(→x) ∧ χ(→x)

Proof. For soundness, we need to prove

 →x ⟶^m_{⟨φ̌,→a⟩} →a^m(→x) ∧ χ(→x) ⟹ →x ⟶^m_{⟨χ,→a⟩} →a^m(→x) (6)

for all m > 0. We use induction on m. If m = 1, then

 →x ⟶^m_{⟨φ̌,→a⟩} →a^m(→x) ∧ χ(→x)
 ⟹ →x ⟶_{⟨χ,→a⟩} →a(→x)
 ⟺ →x ⟶^m_{⟨χ,→a⟩} →a^m(→x). (as m = 1)

In the induction step, we have

 →x ⟶^{m+1}_{⟨φ̌,→a⟩} →a^{m+1}(→x) ∧ χ(→x)
 ⟹ →x ⟶^m_{⟨φ̌,→a⟩} →a^m(→x) ∧ χ(→x)
 ⟹ →x ⟶^m_{⟨φ̌,→a⟩} →a^m(→x) ∧ →x ⟶^m_{⟨χ,