1. Introduction
This paper concerns termination proofs for sequential, imperative probabilistic programs, i.e. those that, in addition to the usual constructs, include a binary operator for probabilistic choice. Writing “standard” to mean “non-probabilistic”, we recall that the standard technique for loop termination is to find an integer-valued function over the program’s state space, a “variant”, that satisfies the “progress” condition that each iteration is guaranteed to decrease the variant strictly, and further that the loop guard and invariant imply that the variant is bounded below by a constant (typically zero). Thus it cannot continually decrease without eventually making the guard false; and so the existence of such a variant implies the loop’s termination.
For probabilistic programs, the definition of loop termination is often weakened to “almost-sure termination”, or “termination with probability one”, by which is meant that (only) the probability of the loop’s iterating forever is zero. For example if you flip a fair coin repeatedly until you get heads, it is almost sure that you will eventually stop — for the probability of flipping tails forever is lim_{n→∞} (1/2)^n, i.e. zero. We will write AS for “almost sure” and AST for “almost-sure termination” or “almost-surely terminating”.
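As a quick sanity check of that claim, the coin-flip example can be simulated. The sketch below is illustrative only and not part of the formal development: it samples many runs and observes that every sampled run terminates, taking two flips on average (the cutoff `max_steps` is an artefact of the simulation, reached only with probability (1/2)^10000 per run).

```python
import random

def flips_until_heads(rng, max_steps=10_000):
    """Flip a fair coin until the first heads; return the number of flips,
    or None if the (astronomically unlikely) cutoff of max_steps tails is hit."""
    for n in range(1, max_steps + 1):
        if rng.random() < 0.5:  # heads
            return n
    return None

rng = random.Random(0)
runs = [flips_until_heads(rng) for _ in range(10_000)]
terminated = sum(r is not None for r in runs) / len(runs)
mean_flips = sum(runs) / len(runs)  # geometric distribution: mean 2
```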
But the standard variant rule we mentioned above is too weak for AST in general. Write C ⊕_p C′ for choice of C, C′ with probability p resp. 1−p, and consider the AST program

(1) while (x = 1) { x := 0 ⊕_{1/2} x := 2 }
It has no standard variant, because that variant would have to be decreased strictly by both updates to x. Also the simple AST program

(2) while (x > 0) { x := x−1 ⊕_{1/2} x := x+1 }

the symmetric random walk over the integers x ≥ 0, is beyond the reach of the standard rule.
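A small simulation makes the walk’s behaviour concrete (an illustrative sketch only; the step cap is an artefact of the simulation, not of the program): essentially every sampled run reaches 0, yet run lengths vary enormously, reflecting almost-sure termination with infinite expected time.

```python
import random

def walk_steps(x, rng, cap=100_000):
    """Run the symmetric random walk until x = 0; return the number of steps,
    or None if the simulation cap is hit first."""
    steps = 0
    while x > 0 and steps < cap:
        x += 1 if rng.random() < 0.5 else -1
        steps += 1
    return steps if x == 0 else None

rng = random.Random(1)
lengths = [walk_steps(1, rng) for _ in range(200)]
done = [n for n in lengths if n is not None]
frac_done = len(done) / len(lengths)  # close to 1 in practice
```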
Thus we need AST rules for properly probabilistic programs, and indeed many exist already. One such, designed to be as close as possible to the standard rule, is that an integer-valued variant must be bounded above as well as below, and its strict decrease need only occur with nonzero probability on each iteration, i.e. not necessarily every time (McIver and Morgan, 2005, Lem. 2.7.1). [Footnote 1: Over an infinite state space, the second condition becomes “with some probability bounded away from zero”.] That rule suffices for Program (1) above, with variant x and upper bound 2; but still it does not suffice for Program (2).
The one-dimensional symmetric random walk (1-d SRW) of Program (2) is however an elementary Markov process, and it is frustrating that a simple termination rule like the above (and some others’ rules too) cannot deal with its AST. This (and other examples) has led to many variations in the design of AST rules, a competition in which the rules’ assumptions are weakened as much as one dares, to increase their applicability beyond what one’s colleagues can do; and yet of course the assumptions must not be weakened so much that the rule becomes unsound. This is our first principal Theme (A) — the power of AST rules.
A second Theme (B) in the design of AST rules is their applicability at the source level (of program texts), i.e. whether they are expressible and provable in a (probabilistic) program logic without “descending into the model”. We discuss that practical issue in §2 and App. D.3 — it is important e.g. for enabling theorem proving.
Finally, a third Theme (C) is the characterisation of the kinds of iteration for which a given rule is guaranteed to work, i.e. a completeness result stating for which AST programs a variant is guaranteed to exist, even if it is hard to find. Typical characterisations are “over a finite state space” (Hart et al., 1983), (McIver and Morgan, 2005, Lem. 7.6.1) or “with finite expected time to termination” (Ferrer Fioriti and Hermanns, 2015). [Footnote 2: The difficult feature of the 1-d SRW is that its expected time to termination is infinite.]
The contribution of this paper is to cover those three themes. We give a novel rule for AST, one that: (A) proves almost-sure termination in some cases that lie beyond what some other rules can do; (B) is applicable directly at the source level to probabilistic programs even if they include demonic choice, for which we give examples; and (C) is supported by mathematical results from pre-computer-science days that even give some limited completeness criteria. In particular, one of those classical works shows that our new rule must work for the two-dimensional random walk: a variant is guaranteed to exist, and to satisfy all our criteria. That guarantee notwithstanding, we have yet to find a 2-d SRW variant in closed form.
2. Overview
Expressed very informally, the new rule is this:

Find a nonnegative real-valued variant function V of the state such that: (1) iteration cannot increase V’s expected value; (2) on each iteration the actual value v of V must decrease by at least d.v with probability at least p.v, for some fixed non-increasing strictly positive real-valued functions p, d; [Footnote 3: As §8.2 explains, functions p, d must have those properties for all positive reals, not only the values of V that are reachable.] and (3) iteration must cease if V = 0.
The formal statement of the rule, and a more detailed but still informal explanation, is given in §4.2.
Section 3 gives notation, and a brief summary of the programming logic we use. Section 4.3 uses that logic to prove the new rule rigorously; thus we do not reason about transition systems directly in our proof. Instead we rely on the logic’s being valid for transition systems (e.g. valid for Markov decision processes), for the following two reasons:

– Recall Theme (A).
– Recall Theme (B).
Expressing the termination rule in terms of a programming logic means that it can be applied to source code directly and that theorems can be (machine-)proved about it: there is no need to translate the program first into a transition system or any other formalism. The logic we use is a probabilistic generalisation of (standard) Hoare/Dijkstra logic (Dijkstra, 1976), due to Kozen (1985) and later extended by Morgan et al. (1996) and McIver and Morgan (2005) to (re)incorporate demonic choice.
3. Preliminaries
3.1. Programming Language and Semantics
pGCL is an imperative language based on Dijkstra’s guarded-command language (1976) but with an additional operator of binary probabilistic choice introduced by Kozen (1985) and extended by Morgan et al. (1996) and McIver and Morgan (2005) to restore demonic choice: the combination of the two allows one easily to write “with probability no more than, or no less than, or between.” [Footnote 4: Kozen’s groundbreaking work replaced demonic choice with probabilistic choice.] Its forward, operational model is functions from states to sets of discrete distributions on states, where any non-singleton sets represent demonic nondeterminism: this is essentially Markov decision processes, but also probabilistic/demonic transition systems. (In §8.1 we describe some of the conditions imposed on the “demonic” sets.) Its backwards, logical model is functions from so-called “post-expectations” to “pre-expectations”, nonnegative real-valued functions on the state that generalise the postconditions and preconditions of Hoare/Dijkstra (Hoare, 1969) that are Boolean functions on the state: that innovation, and the original link between the forwards and backwards semantics, due to Kozen (1985) but using our terminology here, is that preE = wp.Com.postE, for program Com and post-expectation postE, means that pre-expectation preE is a function that gives for every initial state the expected value of postE in the final distribution reached by executing Com. The demonic generalisation of that (Morgan et al., 1996; McIver and Morgan, 2005) is that wp.Com.postE gives the infimum over all possible final distributions of postE’s expected value. Both of these generalise the “standard” Boolean interpretation exactly if false is interpreted as zero, true as one, implication as ≤ and therefore conjunction as infimum.
pGCL’s weakest pre-expectation logic, like Dijkstra’s weakest-precondition logic, is designed to be applied at the source-code level of programs, as the case studies in §5 illustrate. Its theorems etc. are also expressed at the source-code level, but apply of course to whatever semantics into which the logic is (validly) interpreted.
We now set out more precisely the framework in which we operate. Let S be the set of program states. We call a subset of S a predicate, equivalently a function from S to the Booleans. If S is the Cartesian product of named-variable types, we can describe functions on S as expressions in which those variables appear free, and predicates are then Boolean-valued expressions.
We use Iverson bracket notation [pred] to denote the indicator function of a predicate pred, that is with value 1 on those states where pred holds and 0 otherwise.
An expectation is a random variable that maps program states to nonnegative reals:

Definition 3.1 (Expectations (McIver and Morgan, 2005)).
The set of expectations on S is the set of functions from S to the nonnegative reals. We say that expectation E is bounded iff there exists a (nonnegative) real b such that E.s ≤ b for all states s. The natural complete partial order ⊑ on expectations is obtained by pointwise lifting, that is E1 ⊑ E2 just when E1.s ≤ E2.s for all states s.

Thus Iverson brackets map predicates to expectations, and ⇒ to ⊑ similarly — that is, we have [pred1] ⊑ [pred2] just when pred1 ⇒ pred2.
Following Kozen (1985), here we are based on Dijkstra’s guarded-command language (Dijkstra, 1976), but it is extended with a probabilistic-choice operator ⊕_p between program (fragments) that chooses its left operand with probability p (and its right complementarily). Beyond Kozen however, we use pGCL, where demonic choice is retained (Morgan et al., 1996; McIver and Morgan, 2005) — i.e. pGCL contains both probabilistic and demonic choice. The syntax of pGCL is given in Table 1, and its semantics of expectation transformers, the generalisation of predicate transformers, is defined as follows:
In the table above Com is a program, and postE is an expectation. The notation f[x ↦ v] is function f overridden at argument x by the value v. A period “.” denotes (Curried) function application, so that for example wp.Com is semantic function wp applied to the syntax Com; the resulting transformer is then applied to the “post-expectation” postE. A centred dot “·” is multiplication, either of scalars or of an expectation by a scalar.

In the probabilistic choice ⊕_p the probability p can be an expression in the program variables (equivalently a [0,1]-valued function of S). Often however it is a constant.

The operator ⊓ is demonic choice.
Definition 3.2 (The Transformer wp (McIver and Morgan, 2005)).
The weakest pre-expectation transformer semantic function wp is defined in Table 1 by induction on all pGCL programs.

If postE is an expectation on the final state, then wp.Com.postE is an expectation on the initial state: thus (wp.Com.postE).s is the infimum, over all distributions of final states that Com can reach from s, of the expected value of postE on each of them: there will be more than one just when Com contains demonic choice. In the special case where postE is [post] for predicate post, that value is thus the least guaranteed probability with which Com from s will reach a final state satisfying post.
The natural connection between the standard world of predicate transformers (Dijkstra) and the probabilistic expectation transformers (Kozen/pGCL) is the indicator function: for example [false] is 0 and [true] is 1, [Footnote 5: We will blur the distinction between Booleans and constant predicates, so that false is just as well the predicate that holds for no state. The same applies to reals and constant expectations.] and the predicate implication pre ⇒ post is equivalent to the expectation inequality [pre] ⊑ [post]. The standard Hoare triple {pre} prog {post}, using standard pre, post and program prog (i.e. without probabilistic choice in prog), becomes [pre] ⊑ wp.prog.[post] when using the wp we adopt here. Finally, the idiom

(3) [pre] · p ⊑ wp.prog.[post]

where “·” is real-valued multiplication (pointwise lifted if necessary), means “with probability at least p the program prog will take an initial state satisfying pre to a final state satisfying post”, where p is a [0,1]-valued expression on (or equivalently a function of) the program state: in most cases however p is constant. (See App. D.1.) This is because if the initial state does not satisfy pre, i.e. [pre] is zero there, then the lhs of (3) is zero so that the inequality is trivially true; and if it does satisfy pre then the lhs is p and the rhs is the least guaranteed probability of reaching post, because the expected value of [post] over a distribution is the probability that distribution assigns to post. (The “least” is, again, because of possible demonic nondeterminism.)
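The idiom can be exercised with a toy wp calculator. The encoding below is our own illustrative sketch, not the paper’s formalism: states are just the value of x, expectations are Python functions, and transformers are built compositionally. It computes wp for the body x := x−1 ⊕_{1/2} x := x+1 against the post-expectation [x = 0], and shows demonic choice taking the infimum over its branches.

```python
def assign(update):                 # wp of an assignment: evaluate f in the updated state
    return lambda f: (lambda x: f(update(x)))

def pchoice(p, t1, t2):             # probabilistic choice: weighted average of branches
    return lambda f: (lambda x: p * t1(f)(x) + (1 - p) * t2(f)(x))

def dchoice(t1, t2):                # demonic choice: the infimum over the two branches
    return lambda f: (lambda x: min(t1(f)(x), t2(f)(x)))

down, up = assign(lambda x: x - 1), assign(lambda x: x + 1)
body = pchoice(0.5, down, up)
post = lambda x: 1.0 if x == 0 else 0.0   # the Iverson bracket [x = 0]

pre = body(post)        # pre(1) = 0.5: from x = 1 the body reaches x = 0 with probability 1/2
demon = dchoice(down, up)
worst = demon(post)     # worst(1) = 0.0: the demon can always avoid x = 0
```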
There are many properties of pGCL’s probabilistic wp that are analogues of those for standard programs; but one that is not an analogue is “scaling” (McIver and Morgan, 2005, Def. 1.6.2), an intrinsically numeric property whose justification rests ultimately on the distribution of multiplication through expected value from elementary probability theory. For us it is that for all commands Com, post-expectations postE and nonnegative reals c we have

(4) wp.Com.(c · postE) = c · wp.Com.postE

We use it in the proof of Thm. 4.1 below. (See also App. D.2.)
3.2. Probabilistic Invariants, Variants, and Termination with Probability 1
With the above correspondence, the following probabilistic analogues of standard termination and invariants are natural.
Definition 3.3 (Probabilistic Invariants (McIver and Morgan, 2005, p. 39, Definition 2.2.1)).
Let G be a predicate, a loop guard, and let Com be a program, a loop body. Then bounded expectation Inv is a probabilistic invariant of the loop while G do Com od just when

(5) [G] · Inv ⊑ wp.Com.Inv

In this case we say that Inv is preserved by each iteration of the loop. [Footnote 6: If (real-valued) expectation Inv were equal to [I] for some predicate I, we’d have [G ∧ I] ⊑ wp.Com.[I], exactly the standard meaning of “Com preserves I”.]

When some predicate I is such that [I] is a probabilistic invariant, we can equivalently say that I itself is a standard invariant (predicate). [Footnote 7: For any standard program Com, i.e. without probabilistic choice, Dijkstra’s judgement {G ∧ I} Com {I} is equivalent to our judgement [G ∧ I] ⊑ wp.Com.[I] for any predicate I.]
In §1 we recalled that the standard method of proving (standard) loop termination is to find an integer-valued variant function on the state such that the loop’s guard (and the invariant, if one is given) imply both that the variant is bounded below and that it strictly decreases on each iteration. A probabilistic analogue of loop termination is “terminates with probability one”, i.e. terminates almost-surely, and one (of many) probabilistic analogue(s) of the standard loop-termination rule is the following:
Theorem 3.4 (Variant rule for loops (existing: (McIver and Morgan, 2005, p. 55, Lemma 2.7.1))).
Let G, I be predicates; let VInt be an integer-valued function on the state space; let Low, High be fixed integers; let ε be a fixed strictly positive probability that bounds away from zero the probability that VInt decreases; and let Com be a program. Then the three conditions

(i) I is a standard invariant (equiv. [I] an invariant) of while G do Com od, and

(ii) [G ∧ I] implies Low < VInt ≤ High, and [Footnote 8: The original rule (McIver and Morgan, 2005, Lem. 2.7.1) stated the bounds slightly differently. We make this inessential change for later neatness.]

(iii) for any constant integer N we have [G ∧ I ∧ VInt = N] · ε ⊑ wp.Com.[VInt < N]

when taken all together, imply [I] ⊑ wp.(while G do Com od).[true], that is that from any initial state satisfying I the loop terminates AS.

The “for any integer N” in (iii) above is the usual Hoare-logic technique for capturing an expression’s initial value (in this case VInt’s) for use in the postcondition: we can write “VInt < N” there for “the current value of VInt, here in the final state, is strictly less than the value it had in the initial state.” [Footnote 9: In greater detail: if the universally quantified N is instantiated to anything other than VInt’s initial value then the left-hand side of (iii) is zero, satisfying the inequality trivially since the right-hand side is nonnegative by definition of expectations.] Recalling (3), we see that assumption (iii) thus reads

On every iteration of the loop the variant is guaranteed to decrease strictly with probability at least some (fixed) strictly positive ε.
The probabilistic variant rule above differs from the standard rule in two essential respects: the probabilistic variant must be bounded above as well as below (which tends to make the rule weaker); and the decrease need not be certain, rather only bounded away from zero (which tends to make the rule stronger). Although this rule does have wide applicability (McIver and Morgan, 2005, Chp. 3), it nevertheless is not sufficient for example to show AST of the symmetric random walk, Program (2). [Footnote 10: Any variant that works for (McIver and Morgan, 2005, p. 55, Lemma 2.7.1) must be bounded above and below, and integer-valued. And it must be able (with some nonzero probability) to decrease strictly on each step. If its bounds were say Low, High, then it must therefore be able to terminate from anywhere in no more than High − Low steps, a fixed and finite number. But (2) does not have that property.]
The advance incorporated in our new rule, as explained in the next section, is to strengthen Thm. 3.4 in three ways: (1) we remove the need for an upper bound on the variant; (2) we allow the probability to vary; and (3) we allow the variant to be real-valued. (Thm. 3.4 is itself used as a lemma in the proof of soundness of the new rule.)
We will need the following theorem, a probabilistic analogue of the standard technique that partial correctness plus termination gives total correctness, and with similar significance: proving “only” that a standard loop terminates does not necessarily give information about the loop’s efficiency; but the termination proof is still an essential prerequisite for other proofs about the loop’s functional correctness. The same applies in the probabilistic case.
Theorem 3.5 (Almost-sure termination for probabilistic loops (existing: (McIver and Morgan, 2005, p. 43, Lemma 2.4.1, Case 2.))).
Let predicate T satisfy [T] ⊑ wp.(while G do Com od).[true], that is that from any initial state satisfying T the loop terminates AS (termination), and let bounded expectation Inv be preserved by Com whenever G holds, i.e. it is a probabilistic invariant of the loop (partial correctness). Then

(total correctness) [T] · Inv ⊑ wp.(while G do Com od).Inv

The intuitive import of this theorem is that if bounded Inv is a probabilistic invariant preserved by each iteration of the loop body, then also the whole loop “preserves” Inv from any state where the loop’s termination is AS. This holds even if Com contains demonic choice.
4. A New Proof Rule for Almost-Sure Termination
4.1. Martingales
Important for us in extending the AST rule is reasoning about “sub- and supermartingales”.

A martingale is a sequence of random variables for which the expected value of each random variable next in the sequence is equal to the current value (irrespective of any earlier values). A supermartingale is more general: the current value may be larger than the expected subsequent value; and a submartingale is the complementary generalisation. In probabilistic programs, as we treat them here, such a sequence of random variables is some expectation evaluated over the succession of program states as a loop executes, and an exact/super/sub-martingale is an expectation whose exact value at the beginning of an iteration (a single state) is equal-to/no-less-than/no-more-than its expected value at the end of that iteration.
A trivial example of a submartingale is the invariant predicate of a loop in standard programming, provided we interpret false, true as 0, 1: for if the invariant is true at the beginning of the loop body it must be true at the end — provided the loop guard is true. More generally in Def. 3.3 above we defined a probabilistic invariant, and at (5) there we see that it is a submartingale, again provided the loop guard holds. (If the loop guard does not hold, then [G] is 0 and the inequality is trivial.) To take the loop guard into account, we say in that case that Inv is a submartingale on G.
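For a concrete instance, the expectation V = x is an exact martingale for the body of the symmetric random walk of Program (2). The sketch below (our own illustrative encoding, with the body’s wp computed pointwise) checks the equality at a few states where the guard x > 0 holds:

```python
def wp_body(f):
    """wp of (x := x-1 [1/2] x := x+1) applied to expectation f, pointwise."""
    return lambda x: 0.5 * f(x - 1) + 0.5 * f(x + 1)

V = lambda x: float(x)
# While the guard x > 0 holds, the expected next value of V equals V itself:
checks = [wp_body(V)(x) == V(x) for x in range(1, 6)]
```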
4.2. Introduction, Informal Explanation and Example of the New Rule
The new rule is presented here, with an informal explanation; just below it we highlight the way in which it differs from the existing rule referred to in Thm. 3.4; then we give an overview of the new rule’s proof; and finally we give an informal example. The detailed proof follows in §4.3, and fully worked-out examples are given in §5. To distinguish material in this section from the earlier rules above, here we use single-letter identifiers for predicates and expectations.

We say that a function f is antitone just when x ≤ y implies f.x ≥ f.y for all x, y.
Theorem 4.1 (New Variant Rule for Loops).
Let G, I be predicates; let V be a nonnegative real-valued function on the state, not necessarily bounded; let p (for “probability”) be a fixed function of type ℝ>0 → (0,1]; let d (for “decrease”) be a fixed function of type ℝ>0 → ℝ>0, both of them antitone on strictly positive arguments; and let Com be a program.

Suppose the following four conditions hold:

(i) I is a standard invariant of while G do Com od, and

(ii) [G ∧ I] implies V > 0, and

(iii) for any R ∈ ℝ>0 we have [G ∧ I ∧ V = R] · p.R ⊑ wp.Com.[V ≤ R − d.R], and

(iv) V satisfies the “supermartingale” condition that for any H

[G ∧ I] · (H ⊖ V) ⊑ wp.Com.(H ⊖ V)

where x ⊖ y is defined as max(x − y, 0).

Then we have [I] ⊑ wp.(while G do Com od).[true], i.e. AST from any initial state satisfying I.
Note that our theorem is stated (and will be proved) in terms of wp. Our justification however for calling (iv) a “supermartingale condition” on V is that decrease (in expectation) of V is equivalent to increase of H ⊖ V. (App. B gives more detail.) Further, in our coming appeal to Thm. 3.5 the expectation must be bounded — and V is not (necessarily). Thus we use H ⊖ V for arbitrary H instead, each instance of which is bounded by H; and H ⊖ V decreases when V increases.

The other reason for using the “inverted” formulation is that wp interprets demonic choice by minimising over possible final distributions, and so the direction of the inequality in Thm. 3.5 means we must express the “supermartingale property” of V in this complementary way.
As in Thm. 3.4(iii), we have written V = R in the Hoare style in the pre-expectation at (iii) above to make V’s initial value available (as the real R) in the post-expectation. The overall effect is

If a predicate I is a standard invariant, and there is a nonnegative real-valued variant function V, on the state, that is a supermartingale on G with the progress condition that every iteration of the loop decreases it by at least d of its initial value with probability at least p of its initial value, then the loop terminates AS from any initial state satisfying I.
The differences from the earlier variant rule Thm. 3.4 are these:

– The variant V is now real-valued, with no upper bound (but it is bounded below by zero). We call V a quasi-variant to distinguish it from traditional integer-valued variants.

– Quasi-variants are not required to decrease by a fixed nonzero amount with a fixed nonzero probability. Instead there are two functions d, p that give for each variant-value how much Com must decrease it (at least) and with what probability (at least). The only restriction on those functions (aside from the obvious ones) is that they be antitone, i.e. that for larger arguments they must give equal-or-smaller (but never zero) values. The reason for requiring p and d to be antitone is to exclude Zeno-like behaviour where the variant decreases less and less, and/or with less and less probability. Otherwise, each loop iteration could decrease the variant by a positive amount with positive probability, bringing it ever closer to zero but never actually reaching the zero that implies negation of the guard, and thus termination.

– Quasi-variants are required to be supermartingales: that from every state satisfying G ∧ I the expected value of the quasi-variant after Com cannot increase.

Note that Thm. 3.4 did not have a supermartingale assumption: although the probability that VInt decreased by at least 1 was required there to be at least ε, the change in expected value of VInt was unconstrained. For example, if with the remaining probability it increased by a lot (but still not above High), then its expected value could actually increase as well.
A simple example of the power of Thm. 4.1 (Theme A in §1) is in fact the symmetric random walk mentioned earlier. Let the state space be the integers x ≥ 0, and let each loop iteration when x > 0 either decrease x by 1 or increase it by 1 with equal probability. AST is out of reach of the earlier rule Thm. 3.4 because x is not bounded above, and out of reach of some others’ rules too, because the expected time to termination is infinite (Ferrer Fioriti and Hermanns, 2015). Yet termination at x = 0 is shown immediately with Thm. 4.1 by taking V := x, trivially an exact martingale when x > 0, and p := 1/2 and d := 1 (as constant functions).
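Both conditions of that instantiation can be spot-checked numerically: the supermartingale condition exactly, and the progress condition (decrease by at least d = 1 with probability at least p = 1/2) by Monte-Carlo sampling. The check below is an illustrative sketch, not part of the proof:

```python
import random

# Supermartingale: for V = x the body's expected value of V is exactly V.
exact = all(0.5 * (x - 1) + 0.5 * (x + 1) == x for x in range(1, 100))

# Progress: one iteration decreases V by at least d = 1 with probability >= 1/2.
rng = random.Random(1)
def step(x):
    return x - 1 if rng.random() < 0.5 else x + 1

fracs = []
for x0 in (1, 5, 50):
    trials = 20_000
    fracs.append(sum(step(x0) <= x0 - 1 for _ in range(trials)) / trials)
# each entry of fracs is close to 1/2, independently of the starting state
```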
4.3. Rigorous Proof of Thm. 4.1
We begin with an informal description of the strategy of the proof that follows.

– We choose an arbitrary real value H and temporarily strengthen the loop’s guard by conjoining V < H. From the antitone properties of p, d we know that each execution of Com with that strengthened guard decreases quasi-variant V by at least d.H with probability at least p.H. Using that to “discretise” V, making it an integer bounded above and below, we can appeal to the earlier Thm. 3.4 to show that this guard-strengthened loop terminates AS for any H.

– Using the supermartingale property of V, we argue that the probability of “bad” escape to V ≥ H decreases to zero as H increases: for escape from the strengthened loop to V ≥ H with some probability say δ implies a contribution of at least H·δ to V’s expected value at that point. But that expected value cannot exceed V’s original value, because V is a supermartingale. (For this we appeal to Thm. 3.5 after converting V into a submartingale as required there.) Thus as H gets larger δ must get smaller.

– Since δ approaches 0 as H increases indefinitely, we argue finally that, wherever we start, we can make the probability of escape to V ≥ H as small as we like by increasing H sufficiently; complementarily we are making the only remaining escape probability, i.e. of “good” escape to ¬G, as close to 1 as we like. Thus it equals 1, since H was arbitrary. Because this last argument depends essentially on increasing H without bound, it means that p, d must be defined, nonzero and antitone on all positive reals, not only on those resulting from V on some state the program happens to reach. This is particularly important when V is bounded. (See §8.2.)
We now give the rigorous proof of Thm. 4.1, following the strategy explained just above.

Proof.
(of Thm. 4.1)
Let V be a quasi-variant for while G do Com od, satisfying progress for some p, d as defined in the statement of the theorem, and recall that I is a standard invariant for that loop.

A: For any H > 0, the loop (6) below terminates AS from any initial state satisfying I.

Fix arbitrary H in ℝ>0, and strengthen the loop guard of while G do Com od with the conjunct V < H. We show that

(6) [I] ⊑ wp.(while G ∧ V < H do Com od).[true]

i.e. that standard invariant I describes a set of states from which the loop (6) terminates AS.

We apply Thm. 3.4 to (6), after using ceiling to make an integer-valued variant VInt from V, and with other instantiations as follows:

(7) VInt := ⌈V / d.H⌉, Low := 0, High := ⌈H / d.H⌉, ε := p.H

The VInt can be thought of as a discretised version of V — the original V moves between 0 and H with downsteps of at least d.H while integer VInt moves between 0 and ⌈H / d.H⌉ with downsteps of at least 1. In both cases, the downsteps occur with probability at least p.H.
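The arithmetic behind the discretisation is simply that a downstep of at least d in V forces a downstep of at least 1 in ⌈V/d⌉. A quick illustrative sketch (using dyadic values so that the floating-point arithmetic is exact):

```python
import math

d = 0.25                     # a sample value of d.H (exactly representable in binary)
violations = []
for k in range(5, 200):
    V = k / 16               # dyadic variant values, exact in binary floating point
    V_next = V - d           # a downstep of exactly d
    if math.ceil(V_next / d) > math.ceil(V / d) - 1:
        violations.append(V)
# violations stays empty: ceil(V/d) always drops by at least 1
```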
We now verify that our choices (7) satisfy the assumptions of Thm. 3.4:

In this final section of Step (A) we will write in an explicit style that relies less on Hoare-logic conventions and more on exposing clearly the types involved and the role of the initial and final state. In this style, our assumption for appealing to Thm. 3.4 is that for all (initial) states s we have

(8) (9) Here both the lhs and rhs are real-valued expressions in which an arbitrary initial state s appears free. On the left G, I are predicates on S, V is a nonnegative real-valued function on S, and N and ε are constants.

On the right is a (weakest pre-)expectation, a real-valued function on S; applying it to the initial state –the final s in (9) at rhs– produces a nonnegative real scalar.

The second argument of wp is a post-expectation, again a function on S, but wp takes that post-expectation’s expected value over the final distribution(s) that Com reaches from s — for mnemonic advantage, we bind its states with s′. And using s′ also allows us to refer in the post-expectation to the initial state as s, not captured by s′, so that we can compare the initial and final values of V as required.

What we have now is our assumption of progress for the original loop while G do Com od, which was

(10) and we must use (10), together with the antitone properties of p, d, to show (8). We begin with (8) and reason
B: Loop (6)’s probability of termination at ¬G tends to 1 as H → ∞.

For the probabilistic invariant, i.e. submartingale in Theorem 3.5, we choose H ⊖ V. Note that, as required by Thm. 3.5, expectation H ⊖ V is bounded (by H). Let predicate T be I, which from (6) we know ensures AST of the modified loop. Thus the assumptions of Thm. 3.5 are satisfied: reasoning from its conclusion we have
C: The original loop terminates AS from any initial state satisfying I.

From App. A, with the appropriate instantiations, we have for any H a lower bound on the original loop’s probability of termination; and, referring to the last line in (B) just above, we conclude that this bound tends to 1 as H increases. Since that holds for any H no matter how large, we have finally that

[I] ⊑ wp.(while G do Com od).[true]

that is that from any initial state satisfying I the loop terminates AS. ∎
5. Case Studies
In this section, we examine a few (mostly) non-trivial examples to show the effectiveness of Thm. 4.1. For all examples we provide a quasi-variant V that proves AST; and we will always choose p, d so that they are strictly positive and antitone. We will not provide proofs of those properties, because they will be self-evident and are in any case “external” mathematical facts. We do however carefully set out any proofs that depend on the program text: that I is a standard invariant, that V satisfies the supermartingale property, and that V, p, and d satisfy the progress condition.
For convenience in these examples, we define a derived expectation transformer awp, over terminating straight-line programs only (as our loop bodies are, in this section), that “factors out” the H ⊖; it has the same definition as that of wp in Table 1 except that nondeterminism is interpreted angelically rather than demonically: that is, we define

awp.(Com1 ⊓ Com2).E := awp.Com1.E ⊔ awp.Com2.E

and otherwise as for wp (except for loops, which we do not need here). A straightforward structural induction then shows that for straight-line programs Com, constant H and any expectation E that

(11) H ⊖ awp.Com.E ⊑ wp.Com.(H ⊖ E)

And from there we have immediately that

(12) awp.Com.E ⊑ E implies H ⊖ E ⊑ wp.Com.(H ⊖ E)

and finally therefore that

(13) [G ∧ I] · awp.Com.V ⊑ V implies [G ∧ I] · (H ⊖ V) ⊑ wp.Com.(H ⊖ V)

since if G ∧ I holds then (13) reduces to (12) and, if it does not hold, both sides of (13) are trivially true. Thus when the loop body is a straight-line program, by establishing lhs (13) we establish also rhs (13) as required by Thm. 4.1(iv). We stress that awp is used here for concision and intuition only: applied only to finite, non-looping programs, it can always be replaced by reasoning directly with wp.
Thus lhs (13) expresses clearly and directly that V is a supermartingale when the guard holds, and handles any nondeterminism correctly in that respect: because awp maximises rather than minimises over nondeterministic outcomes (the opposite of wp), the supermartingale inequality holds for every individual outcome, as required.

In §8.3 we discuss the reasons for not using awp in Thm. 4.1 directly, i.e. not eliminating “⊖” at the very start: in short, it is because our principal reference (McIver and Morgan, 2005) does not support awp.
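The role of the angelic transformer on demonic choice can be sketched concretely. The encoding below is our own illustration, not the paper’s formal definition: it gives the demon two probabilistic branches, one drifting down in expectation and one an exact martingale. Because awp takes the maximum over the branches, the supermartingale inequality for awp holds exactly when every branch satisfies it individually.

```python
def assign(update):
    return lambda f: (lambda x: f(update(x)))

def pchoice(p, t1, t2):
    return lambda f: (lambda x: p * t1(f)(x) + (1 - p) * t2(f)(x))

def awp_dchoice(t1, t2):        # angelic: maximise over the demon's branches
    return lambda f: (lambda x: max(t1(f)(x), t2(f)(x)))

down, up = assign(lambda x: x - 1), assign(lambda x: x + 1)
branch1 = pchoice(0.6, down, up)    # expected change -0.2: strict supermartingale
branch2 = pchoice(0.5, down, up)    # expected change 0: exact martingale
body = awp_dchoice(branch1, branch2)

V = lambda x: float(x)
# The awp-supermartingale inequality holds because *both* branches satisfy it:
ok = all(body(V)(x) <= V(x) for x in range(1, 10))
```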
5.1. The Negative-Binomial Loop
Our first example is also proved by other AST rules, so we do not need the extra power of Thm. 4.1 for it; but we begin with it to illustrate Theme B, showing with a familiar example how Thm. 4.1 is used in formal reasoning over program texts.
Description of the loop.
Consider the following while-loop over the real-valued variable x:

(14) while (x ≠ 0) { x := x − 1 ⊕_{1/2} skip }

An interpretation of this loop as a transition system is illustrated in Figure 1.

Intuitively, this loop keeps flipping a coin until it has flipped, say, heads n times (not necessarily in a row), where n is the initial value of x; every time it flips tails, the loop continues without changing the program state.

We call it the negative binomial loop because its runtime is distributed according to a negative binomial distribution (with parameters n and 1/2), and thus the expected runtime is linear (on average 2n loop iterations) even though it allows for infinite executions, namely those runs of the program that flip heads fewer than n times and then keep flipping tails ad infinitum. A subtle intricacy is that this loop will not terminate at all if x is initially not a nonnegative integer, because then the execution of the loop never reaches a state in which x = 0. This is where we use Theorem 4.1’s ability to incorporate an invariant into the AST proof, as standard arguments over loop termination do.
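Assuming the reading above of the loop body — decrement x on heads, skip on tails, with a fair coin — a simulation sketch (illustrative only) confirms the negative-binomial runtime: from x = 10 the average number of iterations is about 2·10 = 20.

```python
import random

rng = random.Random(2)

def neg_binomial_loop(x):
    """Run the loop: while x != 0, decrement x with probability 1/2, else skip.
    Assumes x starts as a nonnegative integer (the invariant of the example)."""
    steps = 0
    while x != 0:
        if rng.random() < 0.5:   # heads: make progress
            x -= 1
        steps += 1
    return steps

runs = [neg_binomial_loop(10) for _ in range(20_000)]
mean_steps = sum(runs) / len(runs)   # close to 2 * 10 = 20
```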
Proof of almost-sure termination.
The guard is given by G := (x ≠ 0), and the loop body Com by x := x − 1 ⊕_{1/2} skip. And with the standard invariant I := (x is a nonnegative integer), we can now prove AST of the loop with an appropriate p, d and quasi-variant V:

V := x, p := 1/2, d := 1

Notice that p, d are strictly speaking constant functions mapping any positive real to 1/2 and 1 respectively. Intuitively, this choice of I, V, p, and d tells us that if x is a nonnegative integer different from 0, then after one iteration of the loop body (a) x is still a nonnegative integer (by invariance of I) and (b) the distance of x from 0 has decreased by at least 1 with probability at least 1/2 (implied by the progress condition).
We first check that I is indeed an invariant:
Next, the second precondition of Theorem 4.1 is satisfied because of
Furthermore, V satisfies the supermartingale property:
Lastly, V, p, and d satisfy the progress condition for all R > 0:
This shows that all preconditions of Theorem 4.1 are satisfied: thus the negative binomial loop terminates almost-surely from all initial states in which x is a nonnegative integer.
5.2. The Demonically Fair Random Walk
Next, we consider a while-loop that contains both probabilistic and demonic choice.
Description of the loop.
Consider the following whileloop: