# A New Proof Rule for Almost-Sure Termination

An important question for a probabilistic program is whether the probability mass of all its diverging runs is zero, that is that it terminates "almost surely". Proving that can be hard, and this paper presents a new method for doing so; it is expressed in a program logic, and so applies directly to source code. The programs may contain both probabilistic- and demonic choice, and the probabilistic choices may depend on the current state. As do other researchers, we use variant functions (a.k.a. "super-martingales") that are real-valued and probabilistically might decrease on each loop iteration; but our key innovation is that the amount as well as the probability of the decrease are parametric. We prove the soundness of the new rule, indicate where its applicability goes beyond existing rules, and explain its connection to classical results on denumerable (non-demonic) Markov chains.


## 1. Introduction

This paper concerns termination proofs for sequential, imperative probabilistic programs, i.e. those that, in addition to the usual constructs, include a binary operator for probabilistic choice. Writing “standard” to mean “non-probabilistic”, we recall that the standard technique for loop termination is to find an integer-valued function over the program’s state space, a “variant”, that satisfies the “progress” condition that each iteration is guaranteed to decrease the variant strictly and further that the loop guard and invariant imply that the variant is bounded below by a constant (typically zero). Thus it cannot continually decrease without eventually making the guard false; and so existence of such a variant implies the loop’s termination.
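The standard argument can be checked mechanically on a toy loop; the following sketch (ours, not from the paper; the loop body and variant are arbitrary illustrative choices) runs a deterministic loop while asserting the progress and lower-bound conditions on every iteration:

```python
# Standard (non-probabilistic) variant argument, checked at runtime.
# The loop body and the variant V(x) = x are illustrative choices of ours.

def run_with_variant(x, variant, max_steps=10_000):
    """Run the loop, asserting the variant conditions on every iteration."""
    steps = 0
    while x > 0:                              # loop guard
        v_before = variant(x)
        assert v_before >= 0                  # bounded below under the guard
        x = x - 2 if x % 2 == 0 else x - 1    # deterministic loop body
        assert variant(x) < v_before          # strict decrease, every time
        steps += 1
        assert steps <= max_steps             # termination follows
    return steps

steps_taken = run_with_variant(17, lambda x: x)
```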

For probabilistic programs, the definition of loop termination is often weakened to “almost-sure termination”, or “termination with probability one”, by which is meant that (only) the probability of the loop’s iterating forever is zero. For example if you flip a fair coin repeatedly until you get heads, it is almost sure that you will eventually stop — for the probability of flipping tails forever is lim_{n→∞} (½)ⁿ, i.e. zero. We will write AS for “almost sure” and AST for “almost-sure termination” or “almost-surely terminating”.
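A back-of-envelope check of that claim (our sketch, not the paper's): the probability of n tails in a row is (½)ⁿ, which vanishes as n grows, and a seeded simulation stops in every trial:

```python
import random

# P(no heads in the first n flips) = (1/2)**n, which tends to 0,
# so the probability of flipping tails forever is zero.
p_no_heads = lambda n: 0.5 ** n
assert p_no_heads(50) < 1e-15

# Seeded simulation: every trial stops after finitely many flips.
random.seed(0)

def flips_until_heads():
    n = 1
    while random.random() >= 0.5:   # tails with probability 1/2: flip again
        n += 1
    return n

trials = [flips_until_heads() for _ in range(10_000)]
# the mean number of flips should be close to the theoretical value 2
```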

But the standard variant rule we mentioned above is too weak for AST in general. Write p⊕ for choice of left, right operand with probability p, 1−p resp. and consider the AST program

 (1) while (x = 1) { x := 0  ½⊕  x := 1 } .

It has no standard variant, because that variant would have to be decreased strictly by both updates to x. Also the simple AST program

 (2) 1dSRW:  while (x ≠ 0) { x := x+1  ½⊕  x := x−1 } ,

the symmetric random walk over the integers, is beyond the reach of the standard rule.

Thus we need AST-rules for properly probabilistic programs, and indeed many exist already. One such, designed to be as close as possible to the standard rule, is that an integer-valued variant must be bounded above as well as below, and its strict decrease need only occur with non-zero probability on each iteration, i.e. not necessarily every time (McIver and Morgan, 2005, Lem. 2.7.1). (Over an infinite state space, the second condition becomes “with some probability bounded away from zero”.) That rule suffices for Program (1) above, with variant x and upper bound 2; but still it does not suffice for Program (2).

The 1dSRW is however an elementary Markov process, and it is frustrating that a simple termination rule like the above (and some others’ rules too) cannot deal with its AST. This (and other examples) has led to many variations in the design of AST-rules, a competition in which the rules’ assumptions are weakened as much as one dares, to increase their applicability beyond what one’s colleagues can do; and yet of course the assumptions must not be weakened so much that the rule becomes unsound. This is our first principal Theme (A) — the power of AST-rules.

A second Theme (B) in the design of AST-rules is their applicability at the source level (of program texts), i.e. whether they are expressible and provable in a (probabilistic) program logic without “descending into the model”. We discuss that practical issue in §2 and App. D.3 — it is important e.g. for enabling theorem proving.

Finally, a third Theme (C) is the characterisation of the kinds of iteration for which a given rule is guaranteed to work, i.e. a completeness result stating for which AST programs a variant is guaranteed to exist, even if it is hard to find. Typical characterisations are “over a finite state space” (Hart et al., 1983; McIver and Morgan, 2005, Lem. 7.6.1) or “with finite expected time to termination” (Ferrer Fioriti and Hermanns, 2015). (The difficult feature of the 1dSRW is that its expected time to termination is infinite.)

The contribution of this paper is to cover those three themes. We give a novel rule for AST, one that: (A) proves almost-sure termination in some cases that lie beyond what some other rules can do; (B) is applicable directly at the source level to probabilistic programs even if they include demonic choice, for which we give examples; and (C) is supported by mathematical results from pre-computer-science days that even give some limited completeness criteria. In particular, one of those classical works shows that our new rule must work for the two-dimensional random walk: a variant is guaranteed to exist, and to satisfy all our criteria. That guarantee notwithstanding, we have yet to find a 2dSRW-variant in closed form.

## 2. Overview

Expressed very informally, the new rule is this:

Find a non-negative real-valued variant function V of the state such that: (1) iteration cannot increase V’s expected value; (2) on each iteration the actual value of V must decrease by at least d(V) with probability at least p(V), for some fixed non-increasing strictly positive real-valued functions p, d; (As §8.2 explains, functions p, d must have those properties for all positive reals, not only the V’s that are reachable.) and (3) iteration must cease if V = 0.

The formal statement of the rule, and a more detailed but still informal explanation, is given in §4.2.

Section 3 gives notation, and a brief summary of the programming logic we use. Section 4.3 uses that logic to prove the new rule rigorously; thus we do not reason about transition systems directly in our proof. Instead we rely on the logic’s being valid for transition systems (e.g. valid for Markov decision processes), for the following two reasons:

Recall Theme (A) —:

The programming logic we use (the theorems to which we appeal) is valid even for programs that contain demonic choice. And so our result is valid for demonic choice as well. (In §8.1 and App. G we discuss the degree of demonic choice that is permitted.)

Recall Theme (B) —:

Expressing the termination rule in terms of a programming logic means that it can be applied to source code directly and that theorems can be (machine-) proved about it: there is no need to translate the program first into a transition system or any other formalism. The logic we use is a probabilistic generalisation of (standard) Hoare/Dijkstra logic (Dijkstra, 1976), due to Kozen (1985) and later extended by Morgan et al. (1996) and McIver and Morgan (2005) to (re-)incorporate demonic choice.

Section 5 carefully applies the rule to several small examples, illustrating its power and the logical manipulations it induces. Section 6 explores the classical literature on AST. Section 7 examines other contemporary AST rules. Section 8 treats some theoretical aspects and limitations.

## 3. Preliminaries

### 3.1. Programming Language and Semantics

pGCL is an imperative language based on Dijkstra’s guarded command language (1976) but with an additional operator of binary probabilistic choice introduced by Kozen (1985) and extended by Morgan et al. (1996) and McIver and Morgan (2005) to restore demonic choice: the combination of the two allows one easily to write “with probability no more than, or no less than, or between.” (Kozen’s ground-breaking work replaced demonic choice with probabilistic choice.) Its forward, operational model is functions from states to sets of discrete distributions on states, where any non-singleton sets represent demonic nondeterminism: this is essentially Markov decision processes, but also probabilistic/demonic transition systems. (In §8.1 we describe some of the conditions imposed on the “demonic” sets.) Its backwards, logical model is functions from so-called “post-expectations” to “pre-expectations”, non-negative real-valued functions on the state that generalise the postconditions and preconditions of Hoare/Dijkstra (Hoare, 1969) that are Boolean functions on the state: that innovation, and the original link between the forwards and backwards semantics, due to Kozen (1985) but using our terminology here, is that wp.Com.Post, for program Com and post-expectation Post, is a pre-expectation that gives for every initial state the expected value of Post in the final distribution reached by executing Com. The demonic generalisation of that (Morgan et al., 1996; McIver and Morgan, 2005) is that wp.Com.Post gives the infimum over all possible final distributions of Post’s expected value. Both of these generalise the “standard” Boolean interpretation exactly if false is interpreted as zero, true as one, implication as ≤ and therefore conjunction as infimum.
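To make the forward/backward correspondence concrete, here is a small sketch of ours (the names `step` and `wp` are illustrative, and the model is a three-state MDP) computing wp as the demonic minimum of expected values:

```python
# Forward model: a command maps each state to a *set* of distributions over
# final states (demonic nondeterminism).  Backward model: wp takes the minimum
# expected value of the post-expectation over that set.

States = [0, 1, 2]

def step(s):
    """Demonic choice between two probabilistic updates (a miniature MDP)."""
    return [
        {(s + 1) % 3: 0.5, s: 0.5},   # option A: advance with probability 1/2
        {0: 1.0},                     # option B: reset to 0
    ]

def wp(com, post):
    """Weakest pre-expectation: infimum over demonic options of E[post]."""
    return lambda s: min(sum(p * post(t) for t, p in d.items()) for d in com(s))

# Boolean embedding: the predicate s = 0 becomes the indicator expectation [s = 0].
is_zero = lambda s: 1.0 if s == 0 else 0.0
pre = wp(step, is_zero)
values = [pre(s) for s in States]
# from state 1, option A never reaches 0, so the demonic minimum there is 0
```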

pGCL’s weakest pre-expectation logic, like Dijkstra’s weakest precondition logic, is designed to be applied at the source-code level of programs, as the case studies in §5 illustrate. Its theorems etc. are also expressed at the source-code level, but apply of course to whatever semantics into which the logic is (validly) interpreted.

We now set out more precisely the framework in which we operate. Let Σ be the set of program states. We call a subset of Σ a predicate, equivalently a function from Σ to the Booleans. If Σ is the Cartesian product of named-variable types, we can describe functions on Σ as expressions in which those variables appear free, and predicates are then Boolean-valued expressions.

We use Iverson bracket notation [A] to denote the indicator function of a predicate A, that is with value 1 on those states where A holds and 0 otherwise.

An expectation is a random variable that maps program states to non-negative reals:

###### Definition 3.1 (Expectations (McIver and Morgan, 2005)).

The set of expectations on Σ, denoted by 𝔼, is defined as the set of all functions from Σ to the non-negative reals. We say that expectation f is bounded iff there exists a (non-negative) real b such that f(σ) ≤ b for all states σ. The natural complete partial order ≤ on 𝔼 is obtained by pointwise lifting, that is

 f ≤ g  ≜  f(σ) ≤ g(σ) for all σ in Σ .

Thus Iverson brackets map predicates to expectations, and predicate implication to the order ≤ on expectations similarly — that is, we have [A] ≤ [B] just when A ⇒ B.
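A two-line executable check of that correspondence (an illustrative sketch of ours, over a small window of integer states):

```python
# Iverson brackets send predicates to {0,1}-valued expectations, and the
# pointwise order on expectations then captures implication:
# [A] <= [B] everywhere exactly when A => B.
iverson = lambda pred: (lambda s: 1 if pred(s) else 0)

states = range(-5, 6)
A = iverson(lambda s: s > 2)   # [s > 2]
B = iverson(lambda s: s > 0)   # [s > 0]

le = all(A(s) <= B(s) for s in states)                    # [A] <= [B] pointwise
implies = all((not (s > 2)) or (s > 0) for s in states)   # A => B
```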

Following Kozen (1985), the language pGCL we use here is based on Dijkstra’s guarded-command language (Dijkstra, 1976), but it is extended with a probabilistic-choice operator p⊕ between program (fragments) that chooses its left operand with probability p (and its right complementarily). Beyond Kozen however, we use a language where demonic choice is retained (Morgan et al., 1996; McIver and Morgan, 2005) — i.e. pGCL contains both probabilistic- and demonic choice. The syntax of pGCL is given in Table 1, and its semantics of expectation transformers, the generalisation of predicate transformers, is defined as follows:

###### Definition 3.2 (The wp-Transformer (McIver and Morgan, 2005)).

The weakest pre-expectation transformer semantic function wp is defined in Table 1 by induction on all programs.

If Post is an expectation on the final state, then wp.Com.Post is an expectation on the initial state: thus wp.Com.Post.σ is the infimum, over all distributions of final states that Com can reach from σ, of the expected value of Post on each of them: there will be more than one just when Com contains demonic choice. In the special case where Post is [B] for predicate B, that value is thus the least guaranteed probability with which Com from σ will reach a final state satisfying B.

The natural connection between the standard world of predicate transformers (Dijkstra) and the probabilistic expectation transformers (Kozen/pGCL) is the indicator function: for example [false] is 0 and [true] is 1, (We will blur the distinction between Booleans and constant predicates, so that false is just as well the predicate that holds for no state. The same applies to reals and constant expectations.) and the predicate implication A ⇒ B is equivalent to the expectation inequality [A] ≤ [B]. The standard judgement A ⇒ wp.Com.B, using standard A, B and program Com (i.e. without probabilistic choice in Com), becomes [A] ≤ wp.Com.[B] when using the wp we adopt here. Finally, the idiom

 (3) p⋅[A]≤wp.Com.[B] ,

where “⋅” is real-valued multiplication (pointwise lifted if necessary), means “with probability at least p the program Com will take an initial state satisfying A to a final state satisfying B”, where p is a [0,1]-valued expression on (or equivalently a function of) the program state: in most cases however p is constant. (See App. D.1.) This is because if the initial state does not satisfy A, i.e. A is false there, then the lhs of (3) is zero so that the inequality is trivially true; and if it does satisfy A then the lhs is p and the rhs is the least guaranteed probability of reaching B, because the expected value of [B] over a distribution is the probability that distribution assigns to B. (The “least” is, again, because of possible demonic nondeterminism.)
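Idiom (3) can be checked on a toy one-step command (our hypothetical example, with no demonic choice, so wp is a plain expected value):

```python
# Idiom (3): p * [A] <= wp.Com.[B] reads "from any initial state satisfying A,
# Com reaches B with probability at least p".

def com(x):
    """Forward model of ours: x := 0 with probability 1/2, else x := x + 1."""
    return {0: 0.5, x + 1: 0.5}

def wp_com(post):
    return lambda x: sum(p * post(t) for t, p in com(x).items())

B = lambda x: 1.0 if x == 0 else 0.0   # post-expectation [x = 0]
A = lambda x: 1.0 if x == 1 else 0.0   # predicate [x = 1]
p = 0.5

# lhs is 0 outside A (trivially below the rhs); on A it is p = 1/2,
# which the command indeed guarantees for reaching x = 0.
holds = all(p * A(x) <= wp_com(B)(x) for x in range(5))
```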

There are many properties of pGCL’s probabilistic wp that are analogues of wp for standard programs; but one that is not an analogue is “scaling” (McIver and Morgan, 2005, Def. 1.6.2), an intrinsically numeric property whose justification rests ultimately on the distribution of multiplication through expected value from elementary probability theory. For us it is that for all commands Com, post-expectations Post and non-negative reals c we have

 (4) wp.Com.(c⋅Post)=c⋅(wp.Com.Post) .

We use it in the proof of Thm. 4.1 below. (See also App. D.2.)
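Scaling survives demonic choice because min distributes over multiplication by a non-negative constant; here is a quick numeric check on a toy demonic command (an illustration of ours, not from the paper):

```python
# Scaling (4): wp.Com.(c * Post) = c * (wp.Com.Post) for non-negative c.
# It holds even under demonic choice because min(c*e1, c*e2) = c * min(e1, e2)
# whenever c >= 0.

def com(x):
    return [{x: 0.25, x + 1: 0.75}, {x - 1: 1.0}]   # two demonic options

def wp(post):
    return lambda x: min(sum(p * post(t) for t, p in d.items()) for d in com(x))

post = lambda x: float(abs(x))   # an arbitrary post-expectation
for c in (0.0, 0.5, 2.0, 10.0):
    for x in range(-3, 4):
        assert abs(wp(lambda y: c * post(y))(x) - c * wp(post)(x)) < 1e-12
```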

### 3.2. Probabilistic Invariants, Variants, and Termination with Probability 1

With the above correspondence, the following probabilistic analogues of standard termination and invariants are natural.

###### Definition 3.3 (Probabilistic Invariants  (McIver and Morgan, 2005, p. 39, Definition 2.2.1)).

Let Guard be a predicate, a loop guard, and let Com be a program, a loop body. Then bounded expectation Inv is a probabilistic invariant of the loop  while(Guard){Com}  just when

 (5) [Guard]⋅Inv≤wp.Com.Inv .

In this case we say that Inv is preserved by each iteration of the loop. (If (real-valued) expectation Inv were equal to [A] for some predicate A, we’d have [Guard]⋅[A] ≤ wp.Com.[A], exactly the standard meaning of “Com preserves A”.)

When some predicate A is such that [A] is a probabilistic invariant, we can equivalently say that A itself is a standard invariant (predicate). (For any standard program Com, i.e. without probabilistic choice, Dijkstra’s judgement Guard ∧ A ⇒ wp.Com.A is equivalent to our judgement [Guard]⋅[A] ≤ wp.Com.[A] for any predicate A.)

In §1 we recalled that the standard method of proving (standard) loop termination is to find an integer-valued variant function on the state such that the loop’s guard (and the invariant, if one is given) imply that the variant is bounded below and that it strictly decreases on each iteration. A probabilistic analogue of loop termination is “terminates with probability one”, i.e. terminates almost-surely, and one (of many) probabilistic analogue(s) of the standard loop-termination rule is the following:

###### Theorem 3.4 (Variant rule for loops (existing: (McIver and Morgan, 2005, p. 55, Lemma 2.7.1))).

Let Inv, Guard be predicates; let VInt be an integer-valued function on the state space; let Low, High be fixed integers; let ε be a fixed strictly positive probability that bounds away from zero the probability that VInt decreases; and let Com be a program. Then the three conditions are

1. Inv is a standard invariant (equiv. [Inv] an invariant) of  while(Guard){Com} , and

2. Guard ∧ Inv ⇒ Low < VInt ≤ High , and (The original rule (McIver and Morgan, 2005, Lem. 2.7.1) had Low ≤ VInt. We make this inessential change for later neatness.)

3. for any constant integer N we have

 ε⋅[Guard ∧ Inv ∧ VInt = N] ≤ wp.Com.[VInt < N]

and, when taken all together, they imply  [Inv] ≤ wp.while(Guard){Com}.1 , that from any initial state satisfying Inv the loop terminates AS.

The “for any integer N” in (iii) above is the usual Hoare-logic technique for capturing an expression’s initial value (in this case VInt’s) for use in the postcondition: we can write “VInt < N” there for “the current value of VInt, here in the final state, is strictly less than the value it had in the initial state.” (In greater detail: if the universally quantified N is instantiated to anything other than VInt’s initial value then the left-hand side of (iii) is zero, satisfying the inequality trivially since the right-hand side is non-negative by definition of expectations.) Recalling (3), we see that assumption (iii) thus reads

On every iteration of the loop the variant is guaranteed to decrease strictly with probability at least some (fixed) strictly positive ε .

The probabilistic variant rule above differs from the standard rule in two essential respects: the probabilistic variant must be bounded above as well as below (which tends to make the rule weaker); and the decrease need not be certain, rather only bounded away from zero (which tends to make the rule stronger). Although this rule does have wide applicability (McIver and Morgan, 2005, Chp. 3), it nevertheless is not sufficient for example to show AST of the symmetric random walk, Program (2). (Any variant that works for (McIver and Morgan, 2005, p. 55, Lemma 2.7.1) must be bounded above and below, and integer-valued. And it must be able (with some non-zero probability) to decrease strictly on each step. If its bounds were say Low, High, then it must therefore be able to terminate from anywhere in no more than High − Low steps, a fixed and finite number. But (2) does not have that property.)
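The reasoning behind the bounded rule can be checked exactly on a small chain (an illustration of ours, not from the paper): a walk on states {0,…,4}, absorbed at 0 and held in place with probability ½ at the top, can reach 0 from anywhere within 4 steps with probability at least (½)⁴, so its survival probability tends to 0:

```python
# Bounded chain: states {0,...,4}, absorbed at 0, held in place (with
# probability 1/2) at the top.  From any state, 4 consecutive down-steps
# reach 0, so P(still running after k*4 steps) <= (1 - (1/2)**4)**k -> 0.
# Exact check by iterating u(s) = P(not yet absorbed after n steps):

N = 4

def survival_step(u):
    new = [0.0] * (N + 1)            # state 0 is absorbed: survival 0
    for s in range(1, N + 1):
        up = u[min(s + 1, N)]        # at the top, the "up" move stays put
        down = u[s - 1]
        new[s] = 0.5 * up + 0.5 * down
    return new

u = [0.0] + [1.0] * N                # after 0 steps: running iff s > 0
for _ in range(2000):
    u = survival_step(u)
# u[s] now approximates the probability of running forever, which is 0
```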

The advance incorporated in our new rule, as explained in the next section, is to strengthen Thm. 3.4 in three ways: (1) we remove the need for an upper bound on the variant; (2) we allow the probability of decrease to vary; and (3) we allow the variant to be real-valued. (Thm. 3.4 is itself used as a lemma in the proof of soundness of the new rule.)

We will need the following theorem, a probabilistic analogue of the standard technique that partial correctness plus termination gives total correctness, and with similar significance: proving “only” that a standard loop terminates certainly indeed does not necessarily give information about the loop’s efficiency; but the termination proof is still an essential prerequisite for other proofs about the loop’s functional correctness. The same applies in the probabilistic case.

###### Theorem 3.5 (Almost-sure termination for probabilistic loops  (existing:  (McIver and Morgan, 2005, p. 43, Lemma 2.4.1, Case 2.))).

Let predicate Term satisfy  [Term] ≤ wp.while(Guard){Com}.1 , that is that from any initial state satisfying Term the loop terminates AS (termination), and let bounded expectation Sub be preserved by Com whenever Guard holds, i.e. it is a probabilistic invariant of  while(Guard){Com}  (partial correctness). Then

 (total correctness) [Term]⋅Sub≤wp.while(Guard){Com}.([¬Guard]⋅Sub) .

The intuitive import of this theorem is that if bounded Sub is a probabilistic invariant preserved by each iteration of the loop body, then also the whole loop “preserves” Sub from any state where the loop’s termination is AS. This holds even if Com contains demonic choice.

Bounding Sub is required by (McIver and Morgan, 2005), where Thm. 3.5 is found, and it is necessary here (§8.4).

## 4. A New Proof Rule for Almost-Sure Termination

### 4.1. Martingales

Important for us in extending the AST rule is reasoning about “sub- and super-martingales”.

A martingale is a sequence of random variables for which the expected value of each random variable next in the sequence is equal to the current value (irrespective of any earlier values). A super-martingale is more general: the current value may be larger than the expected subsequent value; and a sub-martingale is the complementary generalisation. In probabilistic programs, as we treat them here, such a sequence of random variables is some expectation evaluated over the succession of program states as a loop executes, and an exact/super/sub -martingale is an expectation whose exact value at the beginning of an iteration (a single state) is equal-to/no-less-than/no-more-than its expected value at the end of that iteration.

A trivial example of a sub-martingale is the invariant predicate of a loop in standard programming, provided we interpret true as 1 and false as 0, for if the invariant is true at the beginning of the loop body it must be true at the end — provided the loop guard is true. More generally in Def. 3.3 above we defined a probabilistic invariant, and at (5) there we see that it is a sub-martingale, again provided the loop guard holds. (If the loop guard does not hold, then [Guard] is 0 and the inequality is trivial.) To take the loop guard into account, we say in that case that Inv is a sub-martingale on Guard.

### 4.2. Introduction, Informal Explanation and Example of the New Rule

The new rule is presented here, with an informal explanation; just below it we highlight the way in which it differs from the existing rule referred to in Thm. 3.4; then we give an overview of the new rule’s proof; and finally we give an informal example. The detailed proof follows in §4.3, and fully worked-out examples are given in §5. To distinguish material in this section from the earlier rules above, here we use single-letter identifiers for predicates and expectations.

We say that a function f is antitone just when f(x) ≥ f(y) for all x ≤ y.

###### Theorem 4.1 (New Variant Rule for Loops).

Let I, G be predicates; let V be a non-negative real-valued function on the state, not necessarily bounded; let p (for “probability”) be a fixed function of type ℝ>0 → (0,1]; let d (for “decrease”) be a fixed function of type ℝ>0 → ℝ>0, both of them antitone on strictly positive arguments; and let Com be a program.

Suppose the following four conditions hold:

1. I is a standard invariant of  while(G){Com} , and

2. G ∧ I ⇒ V > 0 , and

3. For any v ∈ ℝ>0 we have  p(v)⋅[G ∧ I ∧ V = v] ≤ wp.Com.[V ≤ v − d(v)] , and

4. V satisfies the “super-martingale” condition that for all H in ℝ>0

 [G]⋅(H ⊖ V) ≤ wp.Com.(H ⊖ V) ,

where x ⊖ y is defined as max(x − y, 0).

Then we have  [I] ≤ wp.while(G){Com}.1 , i.e. AST from any initial state satisfying I.

Note that our theorem is stated (and will be proved) in terms of H ⊖ V. Our justification however for calling (iv) a “super-martingale condition” on V is that decrease (in expectation) of V is equivalent to increase of H ⊖ V. (App. B gives more detail.) Further, in our coming appeal to Thm. 3.5 the expectation must be bounded — and V is not (necessarily). Thus we use H ⊖ V for arbitrary H instead, each instance of which is bounded by H; and H ⊖ V decreases when V increases.

The other reason for using the “inverted” formulation H ⊖ V is that wp interprets demonic choice by minimising over possible final distributions, and so the direction of the inequality in Thm. 3.5 means we must express the “super-martingale property” of V in this complementary way.

As in Thm. 3.4(iii), we have written V = v in the Hoare style in the pre-expectation at (iii) above to make V’s initial value available (as the real v) in the post-expectation. The overall effect is

If a predicate I is a standard invariant, and there is a non-negative real-valued variant function V, on the state, that is a super-martingale on G with the progress condition that every iteration of the loop decreases it by at least d of its initial value with probability at least p of its initial value, then the loop  while(G){Com}  terminates AS from any initial state satisfying I.

The differences from the earlier variant rule Thm. 3.4 are these:

1. The variant V is now real-valued, with no upper bound (but it is bounded below by zero). We call V a quasi-variant to distinguish it from traditional integer-valued variants.

2. Quasi-variants are not required to decrease by a fixed non-zero amount with a fixed non-zero probability. Instead there are two functions d, p that give for each variant-value how much Com must decrease it (at least) and with what probability (at least). The only restriction on those functions (aside from the obvious ones) is that they be antitone, i.e. that for larger arguments they must give equal-or-smaller (but never zero) values. The reason for requiring p and d to be antitone is to exclude Zeno-like behavior where the variant decreases less and less, and/or with less and less probability. Otherwise, each loop iteration could decrease the variant by a positive amount with positive probability (bringing it ever closer to zero) but never actually reaching the zero that implies negation of the guard, and thus termination.

3. Quasi-variants are required to be super-martingales: that from every state satisfying G the expected value of the quasi-variant after Com cannot increase.

Note that Thm. 3.4 did not have a super-martingale assumption: although the probability that VInt decreased by at least 1 was required there to be at least ε, the change in expected value of VInt was unconstrained. For example, if with the remaining probability it increased by a lot (but still not above High), then its expected value could actually increase as well.
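The Zeno-like behaviour excluded by the antitone requirement can be seen in a two-line computation (our numeric sketch): if the n-th iteration decreases the variant by only 2^−(n+2), the total decrease is bounded by ½, so a variant starting at 1 never reaches 0 despite strictly positive progress at every step.

```python
# If the n-th iteration decreases V by only 2**-(n+2), the decrease shrinks
# even while V stays above 1/2 -- which a strictly positive antitone d would
# forbid, since for V <= 1 it must permit a decrease of at least d(1) > 0.

v = 1.0
for n in range(60):
    v -= 2.0 ** -(n + 2)   # strictly positive progress, yet Zeno-like
# v has (numerically) converged to 1 - 1/2 = 1/2, far from 0
```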

A simple example of the power of Thm. 4.1 (Theme A in §1) is in fact the symmetric random walk mentioned earlier. Let the state-space be the integers x ≥ 0, and let each loop iteration when x ≠ 0 either decrease x by 1 or increase it by 1 with equal probability. AST is out of reach of the earlier rule Thm. 3.4 because x is not bounded above, and out of reach of some others’ rules too, because the expected time to termination is infinite (Ferrer Fioriti and Hermanns, 2015). Yet termination at x = 0 is shown immediately with Thm. 4.1 by taking V(x) := x, trivially an exact martingale when x ≠ 0, and p(h) := ½ and d(h) := 1.
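For this instantiation (V(x) = x with constant p ≡ ½ and d ≡ 1, our reading of the example), the super-martingale and progress conditions can be checked numerically over a window of guard-satisfying states (a sketch, not a proof):

```python
# 1dSRW with V(x) = x, p(h) = 1/2 and d(h) = 1 (constant, hence antitone):
# numeric check of the super-martingale and progress conditions.

def body_dist(x):
    return {x + 1: 0.5, x - 1: 0.5}   # x := x+1  (1/2)+  x := x-1

V = lambda x: x
p = lambda h: 0.5
d = lambda h: 1.0

for x in range(1, 100):               # states satisfying the guard x != 0
    dist = body_dist(x)
    exp_V = sum(pr * V(t) for t, pr in dist.items())
    assert exp_V <= V(x)              # super-martingale (here an exact martingale)
    pr_drop = sum(pr for t, pr in dist.items() if V(t) <= V(x) - d(V(x)))
    assert pr_drop >= p(V(x))         # drop of at least d with probability >= p
```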

### 4.3. Rigorous Proof of Thm. 4.1

We begin with an informal description of the strategy of the proof that follows.

1. We choose an arbitrary real value H and temporarily strengthen the loop’s guard by conjoining V ≤ H. From the antitone properties of p, d we know that each execution of Com with that strengthened guard decreases quasi-variant V by at least d(H) with probability at least p(H). Using that to “discretise” V, making it an integer bounded above and below, we can appeal to the earlier Thm. 3.4 to show that this guard-strengthened loop terminates AS for any H.

2. Using the super-martingale property of V, we argue that the probability of “bad” escape to V > H decreases to zero as H increases: for escape from the strengthened loop to V > H with some probability say δ implies a contribution of at least δ⋅H to V’s expected value at that point. But that expected value cannot exceed V’s original value, because V is a super-martingale. (For this we appeal to Thm. 3.5 after converting V into a sub-martingale H ⊖ V as required there.) Thus as H gets larger, δ must get smaller.

3. Since δ approaches 0 as H increases indefinitely, we argue finally that, wherever we start, we can make the probability of escape to V > H as small as we like by increasing H sufficiently; complementarily we are making the only remaining escape probability, i.e. of “good” escape to termination at ¬G, as close to 1 as we like. Thus it equals 1, since H was arbitrary. Because this last argument depends essentially on increasing H without bound, it means that p (and d) must be defined, non-zero and antitone on all positive reals, not only on those resulting from V(σ) on some state σ the program happens to reach. This is particularly important when V is bounded. (See §8.2.)

We now give the rigorous proof of Thm. 4.1, following the strategy explained just above.

###### Proof.

(of Thm. 4.1)
Let V be a quasi-variant for  while(G){Com} , satisfying progress for some p, d as defined in the statement of the theorem, and recall that I is a standard invariant for that loop.

#### A

For any H, the loop (6) below terminates AS from any initial state satisfying I.
Fix arbitrary H in ℝ>0, and strengthen the loop guard of  while(G){Com}  with the conjunct V ≤ H. We show that

 (6) [I]≤wp.while(G∧V≤H){Com}.1 ,

i.e. that standard invariant I describes a set of states from which the loop (6) terminates AS.

We apply Thm. 3.4 to (6), after using the ceiling function ⌈·⌉ to make an integer-valued variant VInt, and with other instantiations as follows:

 (7)  Inv := I    Guard := G ∧ V ≤ H    VInt := ⌈V/d(H)⌉    Low := 0    High := ⌈H/d(H)⌉    ε := p(H)

The VInt can be thought of as a discretised version of V — the original V moves between 0 and H with down-steps of at least d(H), while integer VInt moves between 0 and ⌈H/d(H)⌉ with down-steps of at least 1. In both cases, the down-steps occur with probability at least p(H).
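The discretisation can be property-tested numerically; the sketch below (with arbitrary sample values of ours for H and d(H)) checks that a drop of at least d(H) in V forces a strict drop in VInt = ⌈V/d(H)⌉, and that 0 ≤ V ≤ H keeps VInt between 0 and ⌈H/d(H)⌉:

```python
import math
import random

# Property test of VInt = ceil(V / d(H)): a drop of at least d(H) in V forces
# a strict integer drop, and 0 <= V <= H keeps VInt within 0 .. ceil(H / d(H)).

random.seed(1)
H, dH = 10.0, 0.3                     # sample bound H and step d(H) > 0
ceil_div = lambda v: math.ceil(v / dH)

for _ in range(10_000):
    v = random.uniform(0.001, H)                        # current variant value
    v2 = max(0.0, v - dH - random.uniform(0.0, 1.0))    # dropped by >= d(H)
    assert ceil_div(v2) < ceil_div(v)                   # strict drop in VInt
    assert 0 <= ceil_div(v2) <= ceil_div(v) <= math.ceil(H / dH)
```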

We now verify that our choices (7) satisfy the assumptions of Thm. 3.4:

1. Inv is a standard invariant of (6) because I is by assumption a standard invariant of the loop  while(G){Com} , and the only difference is that (6) has a stronger guard.

2. Now note that 0 < V implies 0 < ⌈V/x⌉ for any strictly positive x. Then

  Guard ∧ Inv
 ⟺ (G ∧ V ≤ H) ∧ I  (instantiations Guard, Inv)
 ⟹ 0 < V ≤ H  (G ∧ I ⇒ 0 < V, assumption (ii))
 ⟹ 0 < ⌈V/d(H)⌉ ≤ ⌈H/d(H)⌉  (d(H) > 0, ⌈·⌉ monotonic)
 ⟺ Low < VInt ≤ High  (instantiations Low, VInt, High)
3. In this final section of Step (A) we will write in an explicit style that relies less on Hoare-logic conventions and more on exposing clearly the types involved and the role of the initial- and final state. In this style, our assumption for appealing to Thm. 3.4 is that for all (initial) states σ we have

 (8)  p(H)⋅[G(σ) ∧ V(σ) ≤ H ∧ I(σ)]
 (9)   ≤  wp.Com.(λσ′. [VInt(σ′) < VInt(σ)])(σ)

Here both the lhs and rhs are real-valued expressions in which an arbitrary initial state σ appears free. On the left G, I are predicates on Σ, and V is a non-negative real-valued function on Σ, and H, p(H) are constants of type ℝ>0 and (0,1] respectively.

On the right  wp.Com.(λσ′.…)  is a (weakest pre-) expectation, a real-valued function on Σ; applying it to the initial state σ (the final “(σ)” in (9) at rhs) produces a non-negative real scalar.

The second argument of wp is a post-expectation, again a function of type Σ → ℝ≥0, but wp takes that function’s expected value over the final distribution(s) that Com reaches from σ — for mnemonic advantage, we bind its states with λσ′. And using a λ-expression also allows us to refer in it to the initial state as σ, not captured by the λσ′, so that we can compare the initial and final values of VInt as required.

What we have now is our assumption of progress for the original loop  while(G){Com} , which was

 (10) p(V(σ))⋅[G(σ)∧I(σ)]≤wp.Com.(λσ′.[V(σ′)≤V(σ)−d(V(σ))])(σ) ,

and we must use (10), together with the antitone properties of p and d, to show (9). We begin with (8) and reason

  p(H)⋅[G(σ) ∧ V(σ) ≤ H ∧ I(σ)]  ((8) above)
 = p(H)⋅[G(σ) ∧ 0 < V(σ) ≤ H ∧ I(σ)]  (G ∧ I ⇒ V > 0, by assumption Thm. 4.1(ii))
 ≤ p(V(σ))⋅[G(σ) ∧ I(σ)]  (p antitone and V(σ) ≤ H; dropping conjuncts only increases [·])
 ≤ wp.Com.(λσ′. [V(σ′) ≤ V(σ) − d(V(σ))])(σ)  (progress assumption (10))

Now continuing only within the λσ′ of the post-expectation we have (this reduces clutter; it is justified because in general Post ≤ Post′ implies wp.Com.Post ≤ wp.Com.Post′, i.e. wp.Com is itself monotonic for any Com)

  V(σ′) ≤ V(σ) − d(V(σ))
 ⟹ ⌈V(σ′)/d(H)⌉ ≤ ⌈V(σ)/d(H) − d(V(σ))/d(H)⌉  (d(H) > 0, ⌈·⌉ monotonic)
 ⟹ ⌈V(σ′)/d(H)⌉ ≤ ⌈V(σ)/d(H)⌉ − 1  (V(σ) ≤ H from lhs (8), d antitone, so d(V(σ))/d(H) ≥ 1)
 ⟹ ⌈V(σ′)/d(H)⌉ < ⌈V(σ)/d(H)⌉
 ⟺ VInt(σ′) < VInt(σ)  (definition VInt)

Placing the last line back within the λσ′ gives what was required at (9), and establishes (6) — that escape from the strengthened loop occurs AS from any initial state satisfying I.

#### B

Loop (6)’s probability of termination at tends to 1 as .
For the probabilistic invariant, i.e. the sub-martingale of Theorem 3.5, we choose H⊖V. Note that, as required by Thm. 3.5, the expectation H⊖V is bounded (by H). Let the predicate be I, which from (6) we know ensures AST of the modified loop. Thus the assumptions of Thm. 3.5 are satisfied: reasoning from its conclusion we have

   [I]⋅(H⊖V) ≤ wp.while(G∧V≤H){Com}.([¬(G∧V≤H)]⋅(H⊖V))
 ⟺ [I]⋅(H⊖V) ≤ wp.while(G∧V≤H){Com}.([¬G]⋅(H⊖V))        (V>H ⟹ H⊖V=0)
 ⟺ [I]⋅(1⊖V/H) ≤ wp.while(G∧V≤H){Com}.([¬G]⋅(1⊖V/H))    (scaling (4) by 1/H)
 ⟹ [I]⋅(1⊖V/H) ≤ wp.while(G∧V≤H){Com}.[¬G] ,            (monotonicity)

that is, recalling (3), that from any initial state satisfying I the loop (6) terminates in a state satisfying ¬G with probability at least 1⊖V/H. As required, that probability (for fixed initial state) tends to 1 as H tends to infinity.

#### C

The original loop terminates AS from any initial state satisfying I.
From App. A, instantiating the two guards to G∧V≤H and G, we have for any H that

 wp.while(G∧V≤H){Com}.[¬G] ≤ wp.while(G){Com}.[¬G]

and, referring to the last line in (B) just above, we conclude [I]⋅(1⊖V/H) ≤ wp.while(G){Com}.[¬G]. Since that holds for any H no matter how large, we have finally that

 [I] ≤ wp.while(G){Com}.[¬G] ≤ wp.while(G){Com}.1 ,

that is, that from any initial state satisfying I the loop while(G){Com} terminates AS. ∎

## 5. Case Studies

In this section, we examine a few (mostly) non-trivial examples to show the effectiveness of Thm. 4.1. For all examples we provide a quasi-variant V that proves AST; and we will always choose p and d so that they are strictly positive and antitone. We will not provide proofs of those properties, because they will be self-evident and are in any case "external" mathematical facts. We do however carefully set out any proofs that depend on the program text: that I is an invariant, that V indicates termination (i.e. G∧I⇒V>0), that V satisfies the super-martingale property, and that V, p, and d satisfy the progress condition.

For convenience in these examples, we define a derived expectation transformer awp, over terminating straight-line programs only (as our loop bodies are, in this section), that "factors out" the ⊖; it has the same definition as wp in Table 1 except that nondeterminism is interpreted angelically rather than demonically: that is, we define

 awp.({C1}□{C2}).f = max{awp.C1.f, awp.C2.f} ,

and otherwise awp is defined as wp is (except for loops, which we do not need here). A straightforward structural induction then shows for straight-line programs Com, constant H and any expectation V that

 (11)  H⊖awp.Com.V ≤ wp.Com.(H⊖V) .

And from there we have immediately that

 (12)  V ≥ awp.Com.V ⟹ H⊖V ≤ wp.Com.(H⊖V) ,

and finally therefore that

 (13)  V ≥ [G∧I]⋅awp.Com.V ⟹ [G∧I]⋅(H⊖V) ≤ wp.Com.(H⊖V) ,

since if G∧I holds then (13) reduces to (12) and, if it does not hold, both sides of (13) are trivially true. Thus when the loop body is a straight-line program, by establishing the lhs of (13) we establish also the rhs of (13), as required by Thm. 4.1(iv). We stress that awp is used here for concision and intuition only: applied only to finite, non-looping programs, it can always be replaced by direct calculation.

Thus the lhs of (13) expresses clearly and directly that V is a super-martingale when G∧I holds, and handles any nondeterminism correctly in that respect: because awp maximises rather than minimises over nondeterministic outcomes (the opposite of wp), the super-martingale inequality holds for every individual outcome, as required.
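A pointwise check of (11) on a small nondeterministic body may help build intuition. In the sketch below (our own encoding, using the demonic random-walk body of §5.2), wp resolves the demonic choice with min and awp with max, and H ⊖ awp.Com.V ≤ wp.Com.(H ⊖ V) is confirmed on sample states:

```python
def wp_body(post, x):
    # {x := x-1} [1/2]+ { {x := x+1} [] {skip} }: demonic choice -> min
    return 0.5 * post(x - 1) + 0.5 * min(post(x + 1), post(x))

def awp_body(post, x):
    # the same body, but with the nondeterminism resolved angelically -> max
    return 0.5 * post(x - 1) + 0.5 * max(post(x + 1), post(x))

def monus(a, b):
    # truncated subtraction a (-) b, never below zero
    return max(0.0, a - b)

H = 10.0
V = abs
for x in [-2.0, 0.0, 1.0, 3.5, 9.0, 12.0]:
    lhs = monus(H, awp_body(V, x))
    rhs = wp_body(lambda y: monus(H, V(y)), x)
    assert lhs <= rhs + 1e-9, x
print("H (-) awp.Com.V <= wp.Com.(H (-) V) held on all sampled states")
```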

In §8.3 we discuss the reasons for not using awp in Thm. 4.1 directly, i.e. not eliminating "⊖" at the very start: in short, it is because our principal reference (McIver and Morgan, 2005) does not support angelic choice.

### 5.1. The Negative-Binomial Loop

Our first example is also proved by other AST rules, so we do not need the extra power of Thm. 4.1 for it; but we begin with it, a familiar example, to illustrate Theme B: how Thm. 4.1 is used in formal reasoning over program texts.

#### Description of the loop.

Consider the following while-loop over the real-valued variable x:

 (14)  while(x≠0){ {x := x−1} [1/2]⊕ {skip} } .

An interpretation of this loop as a transition system is illustrated in Figure 1.

Intuitively, this loop keeps flipping a fair coin until it has flipped heads x times in total (not necessarily in a row); every time it flips tails, the loop continues without changing the program state.

We call it the negative binomial loop because its runtime is distributed according to a negative binomial distribution (with parameters x and 1/2), and thus the expected runtime is linear (on average 2x loop iterations) even though it allows for infinite executions, namely those runs of the program that flip heads fewer than x times and then keep flipping tails ad infinitum.
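That runtime claim is easy to check empirically; this Monte-Carlo sketch (ours) estimates the mean number of iterations of the loop for x = 20 and finds it close to 2·20 = 40:

```python
import random

def run(x):
    # one execution of: while(x != 0){ {x := x-1} [1/2]+ {skip} }
    steps = 0
    while x != 0:
        if random.random() < 0.5:
            x -= 1
        steps += 1
    return steps

random.seed(1)
n = 20_000
mean = sum(run(20) for _ in range(n)) / n
print(round(mean, 2))  # close to 40, the negative-binomial mean x/p
```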

A subtle intricacy is that this loop will not terminate at all if x is initially not a non-negative integer, because then the execution of the loop never reaches a state in which x=0. This is where we use Theorem 4.1's ability to incorporate an invariant into the AST proof, just as standard arguments over loop termination do.

#### Proof of almost-sure termination

The guard is given by G = (x≠0),
and the loop body Com by {x := x−1} [1/2]⊕ {skip}.
And with the standard invariant I = (x∈Z≥0),
we can now prove AST of the loop with an appropriate p, d and quasi-variant V:

 V = |x| ,  for  d = 1  and  p = 1/2 .

Notice that p and d are strictly speaking constant functions, mapping any positive real to 1/2 and 1 respectively. Intuitively, this choice of V, p, d, and I tells us that if x is a non-negative integer different from 0, then after one iteration of the loop body (a) x is still a non-negative integer (by invariance of I) and (b) the distance of x from 0 has decreased by at least 1 with probability at least 1/2 (implied by the progress condition).

We first check that I is indeed an invariant:

   [G]⋅[I]
 = [x≠0]⋅[x∈Z≥0]
 = [x∈Z>0]
 ≤ 1/2⋅([x∈Z>0]+[x∈Z≥0])
 = 1/2⋅([x−1∈Z≥0]+[x∈Z≥0])
 = wp.({x := x−1} [1/2]⊕ {skip}).[x∈Z≥0]
 = wp.Com.[I] .

Next, the second precondition of Theorem 4.1 is satisfied because

 G∧I ⟺ x≠0 ∧ x∈Z≥0 ⟹ x≠0 ⟹ |x|>0 ⟺ V>0 .

Furthermore, V satisfies the super-martingale property:

   [G∧I]⋅awp.Com.V
 = [x≠0∧x∈Z≥0]⋅awp.({x := x−1} [1/2]⊕ {skip}).|x|
 = [x∈Z>0]⋅1/2⋅(|x−1|+|x|)
 = [x∈Z>0]⋅(|x|−1/2)
 ≤ [x∈Z>0]⋅|x|
 ≤ |x| = V .

Lastly, V, p, and d satisfy the progress condition for all R>0:

   p(R)⋅[G∧I∧V=R] ≤ wp.Com.[V≤R−d(R)]
 ⟺ 1/2⋅[x≠0∧x∈Z≥0∧|x|=R] ≤ wp.({x := x−1} [1/2]⊕ {skip}).[|x|≤R−1]
 ⟺ 1/2⋅[x∈Z>0∧|x|=R] ≤ 1/2⋅([|x−1|≤R−1]+[|x|≤R−1])
 ⟺ [x∈Z>0∧|x|=R] ≤ ([|x−1|≤R−1]+[|x|≤R−1])
 ⟺ [x∈Z>0∧|x|=R] ≤ [x∈Z>0∧|x|=R]⋅([|x−1|≤R−1]+[|x|≤R−1])
 ⟺ [x∈Z>0∧|x|=R] ≤ [x∈Z>0∧|x|=R]⋅(1+0)
 ⟺ [x∈Z>0∧|x|=R] ≤ [x∈Z>0∧|x|=R]
 ⟺ true .
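Because the loop body is finite, the three checks above can also be discharged mechanically on sample states. The sketch below (our own encoding) evaluates wp of the body on indicator post-expectations and confirms the invariance, super-martingale, and progress inequalities pointwise; since this body has no demonic choice, awp coincides with wp on it:

```python
def wp_body(post, x):
    # wp.({x := x-1} [1/2]+ {skip}).post, evaluated at initial state x
    return 0.5 * post(x - 1) + 0.5 * post(x)

def ind_inv(x):
    # the indicator [I] = [x in Z>=0]
    return 1.0 if x == int(x) and x >= 0 else 0.0

for x in [0, 1, 2, 7, 2.5, -3]:
    G = 1.0 if x != 0 else 0.0
    V = abs(x)
    # invariance: [G].[I] <= wp.Com.[I]
    assert G * ind_inv(x) <= wp_body(ind_inv, x)
    # super-martingale: [G and I] . wp.Com.V <= V
    assert G * ind_inv(x) * wp_body(abs, x) <= V + 1e-9
    # progress with p = 1/2, d = 1, checked at R = V(x):
    R = V
    lhs = 0.5 * G * ind_inv(x)
    rhs = wp_body(lambda y: 1.0 if abs(y) <= R - 1 else 0.0, x)
    assert lhs <= rhs + 1e-9
print("invariance, super-martingale and progress hold on the sampled states")
```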

This shows that all preconditions of Theorem 4.1 are satisfied: thus we have [I] ≤ wp.while(G){Com}.[¬G], i.e. that the negative binomial loop terminates almost-surely from all initial states in which x is a non-negative integer.

### 5.2. The Demonically Fair Random Walk

Next, we consider a while-loop that contains both probabilistic and demonic choice.

#### Description of the loop.

Consider the following while-loop:

 while(x>0){ {x := x−1} [1/2]⊕ {{x := x+1} □ {skip}} }
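A demonic choice cannot be simulated directly, but fixing any single adversarial strategy gives an instructive approximation. The sketch below (ours) resolves the demon by always choosing x := x+1, which yields the classic symmetric random walk, and estimates how often it terminates within a step budget:

```python
import random

def run(x, budget):
    # loop body {x := x-1} [1/2]+ { {x := x+1} [] {skip} },
    # with the demon resolved to the hostile choice x := x+1
    for _ in range(budget):
        if x <= 0:
            return True           # loop guard x > 0 has become false
        if random.random() < 0.5:
            x -= 1
        else:
            x += 1                # the demon's choice
    return False                  # budget exhausted, not yet terminated

random.seed(2)
n = 200
done = sum(run(5, 10_000) for _ in range(n)) / n
print(done)  # the symmetric walk terminates AS, so this is close to 1
```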