Compositional Analysis for Almost-Sure Termination of Probabilistic Programs

01/18/2019 · by Mingzhang Huang, et al. · Shanghai Jiao Tong University · Institute of Science and Technology Austria

In this work, we consider the almost-sure termination problem for probabilistic programs that asks whether a given probabilistic program terminates with probability 1. Scalable approaches for program analysis often rely on compositional analysis as their theoretical basis. In non-probabilistic programs, the classical variant rule (V-rule) of Floyd-Hoare logic is the foundation for compositional analysis. Extension of this rule to almost-sure termination of probabilistic programs is quite tricky, and a probabilistic variant was proposed in [15]. While the proposed probabilistic variant cautiously addresses the key issue of integrability, we show that the proposed compositional rule is still not sound for almost-sure termination of probabilistic programs. Besides establishing unsoundness of the previous rule, our contributions are as follows: First, we present a sound compositional rule for almost-sure termination of probabilistic programs. Our approach is based on a novel notion of descent supermartingales. Second, for algorithmic approaches, we consider descent supermartingales that are linear and show that they can be synthesized in polynomial time. Finally, we present experimental results on several natural examples that model various types of nested while loops in probabilistic programs and demonstrate that our approach is able to efficiently prove their almost-sure termination property.


I Introduction

Probabilistic programs.

Extending classical imperative programs with randomness, i.e. the generation of random values according to probability distributions, gives rise to probabilistic programs [21]. Such programs provide a flexible framework for many different applications, ranging from the analysis of network protocols [17, 41, 26], to machine learning applications [37, 20, 40, 11], and robot planning [42, 43]. The recent interest in probabilistic programs has led to many probabilistic programming languages (such as Church [18], Anglican [44] and WebPPL [19]) and their analysis is an active research area in formal methods and programming languages (see [5, 45, 35, 1, 9, 7, 13, 28, 27]).

Termination problems.

In program analysis, the most basic liveness problem is that of termination, which, given a program, asks whether the program always terminates. In the presence of probabilistic behavior, there are two natural extensions of the termination problem: first, the almost-sure termination problem, which asks whether the program terminates with probability 1; and second, the finite-termination problem, which asks whether the expected termination time is finite. While finite termination implies almost-sure termination, the converse is not true. Both problems have been widely studied for probabilistic programs, e.g. [27, 7, 28, 9].
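A standard example separating the two notions (not taken from this paper) is the symmetric random walk 'while x >= 1 do x := x + r od' with r uniform over {−1, +1}: it terminates almost-surely, yet its expected termination time is infinite, so it is not finitely terminating. The minimal Python sketch below estimates the termination frequency by simulation; the infinite expectation itself is a known fact and cannot be observed by simulation.

    import random

    def run_walk(x0, max_steps=100_000):
        # Simulate "while x >= 1 do x := x + r od" with r uniform on {-1, +1}.
        # Returns the number of steps if the walk terminates, or None if it is cut off.
        x, steps = x0, 0
        while x >= 1 and steps < max_steps:
            x += random.choice((-1, 1))
            steps += 1
        return steps if x < 1 else None

    trials = [run_walk(5) for _ in range(1000)]
    finished = [t for t in trials if t is not None]
    print("fraction terminated within cutoff:", len(finished) / len(trials))
    print("average steps among terminated runs:", sum(finished) / len(finished))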

Compositional approaches.

Scalable approaches for program analysis are often based on compositional analysis as their theoretical foundation. For non-probabilistic programs, the classical variant rule (V-rule) of Floyd-Hoare logic [16, 29] provides the necessary foundations for compositional analysis. Such compositional methods allow decomposition of the programs into smaller parts, reasoning about the parts, and then combining the results on the parts to deduce the desired result for the entire program. Thus, they are the key technique in many automated methods for large programs.

Compositional approaches for probabilistic programs.

The compositional approach for almost-sure termination of probabilistic programs was considered in [15]. First, it was shown that a direct extension of the V-rule of non-probabilistic programs is not sound for almost-sure termination of probabilistic programs, as there is a crucial issue regarding integrability. Then, a compositional rule, which cautiously addresses the integrability issue, was proposed as a sound rule for almost-sure termination of probabilistic programs. We refer to this rule as the FHV-rule.

Our contributions.

Our main contributions are as follows:

  1. First, we show that the FHV-rule of [15], which is the natural extension of the V-rule with integrability condition, is not sound for almost-sure termination of probabilistic programs.

  2. Second, we show that besides the issue of integrability, there is another crucial issue, regarding the non-negativity requirement in ranking supermartingales, that is not addressed by [15]. We present a sound compositional rule for almost-sure termination of probabilistic programs that addresses both crucial issues. Our approach is based on a novel notion called “descent supermartingales” (DSMs), which is an important technical contribution of our work.

  3. Third, while we present our compositional approach for general DSMs, for algorithmic approaches we focus on DSMs that are linear. We present an efficient polynomial-time algorithm for the synthesis of linear DSMs.

  4. Finally, we present an implementation of our synthesis algorithm for linear DSMs and demonstrate that our approach is applicable to probabilistic programs containing various types of nested while-loops and can efficiently prove that they terminate almost-surely.

II Preliminaries

Throughout the paper, we denote by $\mathbb{N}$, $\mathbb{N}_0$, $\mathbb{Z}$, and $\mathbb{R}$ the sets of positive integers, nonnegative integers, integers, and real numbers, respectively. We first review several useful concepts in probability theory and then present the syntax and semantics of our probabilistic programs.

II-A Stochastic Processes and Martingales

We provide a short review of some necessary concepts in probability theory. For a more detailed treatment, see [46].

Discrete Probability Distributions.

A discrete probability distribution over a countable set $U$ is a function $q: U \to [0,1]$ such that $\sum_{u \in U} q(u) = 1$. The support of $q$ is defined as $\mathrm{supp}(q) := \{u \in U \mid q(u) > 0\}$.

Probability Spaces.

A probability space is a triple $(\Omega, \mathcal{F}, \mathbb{P})$, where $\Omega$ is a non-empty set (called the sample space), $\mathcal{F}$ is a $\sigma$-algebra over $\Omega$ (i.e. a collection of subsets of $\Omega$ that contains the empty set and is closed under complementation and countable union), and $\mathbb{P}$ is a probability measure on $\mathcal{F}$, i.e. a function $\mathbb{P}: \mathcal{F} \to [0,1]$ such that (i) $\mathbb{P}(\Omega) = 1$ and (ii) for all set-sequences $A_1, A_2, \dots \in \mathcal{F}$ that are pairwise-disjoint (i.e. $A_i \cap A_j = \emptyset$ whenever $i \neq j$) it holds that $\mathbb{P}\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} \mathbb{P}(A_i)$. Elements of $\mathcal{F}$ are called events. An event $A$ holds almost-surely (a.s.) if $\mathbb{P}(A) = 1$.

Random Variables.

A random variable $X$ from a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ is an $\mathcal{F}$-measurable function $X: \Omega \to \mathbb{R} \cup \{+\infty\}$, i.e. a function such that for all $d \in \mathbb{R} \cup \{+\infty\}$, the set $\{\omega \in \Omega \mid X(\omega) < d\}$ belongs to $\mathcal{F}$.

Expectation.

The expected value of a random variable $X$ from a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, denoted by $\mathbb{E}[X]$, is defined as the Lebesgue integral of $X$ w.r.t. $\mathbb{P}$, i.e. $\mathbb{E}[X] = \int X \,\mathrm{d}\mathbb{P}$. The precise definition of the Lebesgue integral is somewhat technical and is omitted here (cf. [46, Chapter 5] for a formal definition). If the range of $X$ is a countable set $A$, then $\mathbb{E}[X] = \sum_{a \in A} a \cdot \mathbb{P}(X = a)$.
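As a minimal illustration of the countable case (our own toy example), consider a random variable taking the values 0, 1 and 4 with probabilities 0.25, 0.5 and 0.25:

    dist = {0: 0.25, 1: 0.5, 4: 0.25}                 # P(X = a) for each value a
    expectation = sum(a * p for a, p in dist.items())
    print(expectation)                                # 0*0.25 + 1*0.5 + 4*0.25 = 1.5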

Filtrations.

A filtration of a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ is an infinite sequence $\{\mathcal{F}_n\}_{n \in \mathbb{N}_0}$ of $\sigma$-algebras over $\Omega$ such that $\mathcal{F}_n \subseteq \mathcal{F}_{n+1} \subseteq \mathcal{F}$ for all $n \in \mathbb{N}_0$. Intuitively, a filtration models the information available at any given point of time.

Conditional Expectation.

Let $X$ be any random variable from a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ such that $\mathbb{E}[|X|] < \infty$. Then, given any $\sigma$-algebra $\mathcal{G} \subseteq \mathcal{F}$, there exists a random variable (from $(\Omega, \mathcal{F}, \mathbb{P})$), denoted by $\mathbb{E}[X \mid \mathcal{G}]$, such that

  • (E1) $\mathbb{E}[X \mid \mathcal{G}]$ is $\mathcal{G}$-measurable, and

  • (E2) $\mathbb{E}\big[\,|\mathbb{E}[X \mid \mathcal{G}]|\,\big] < \infty$, and

  • (E3) for all $A \in \mathcal{G}$, we have $\int_A \mathbb{E}[X \mid \mathcal{G}] \,\mathrm{d}\mathbb{P} = \int_A X \,\mathrm{d}\mathbb{P}$.

The random variable $\mathbb{E}[X \mid \mathcal{G}]$ is called the conditional expectation of $X$ given $\mathcal{G}$. It is a.s. unique in the sense that if $Y$ is another random variable satisfying (E1)–(E3), then $\mathbb{P}(Y = \mathbb{E}[X \mid \mathcal{G}]) = 1$. We refer to [46, Chapter 9] for details. Intuitively, $\mathbb{E}[X \mid \mathcal{G}]$ is the expectation of $X$ when assuming the information in $\mathcal{G}$.

Discrete-Time Stochastic Processes.

A discrete-time stochastic process is a sequence $\Gamma = \{X_n\}_{n \in \mathbb{N}_0}$ of random variables where the $X_n$'s are all from some probability space $(\Omega, \mathcal{F}, \mathbb{P})$. The process $\Gamma$ is adapted to a filtration $\{\mathcal{F}_n\}_{n \in \mathbb{N}_0}$ if for all $n \in \mathbb{N}_0$, $X_n$ is $\mathcal{F}_n$-measurable. Intuitively, the random variable $X_n$ models some value at the $n$-th step of the process.

Difference-Boundedness.

A discrete-time stochastic process $\{X_n\}_{n \in \mathbb{N}_0}$ adapted to a filtration $\{\mathcal{F}_n\}_{n \in \mathbb{N}_0}$ is difference-bounded if there exists a $c \in (0, \infty)$ such that $|X_{n+1} - X_n| \le c$ for all $n \in \mathbb{N}_0$ almost-surely.

Supermartingales.

A discrete-time stochastic process $\{X_n\}_{n \in \mathbb{N}_0}$ adapted to a filtration $\{\mathcal{F}_n\}_{n \in \mathbb{N}_0}$ is a supermartingale if for every $n \in \mathbb{N}_0$, $\mathbb{E}[|X_n|] < \infty$, and it holds a.s. that $\mathbb{E}[X_{n+1} \mid \mathcal{F}_n] \le X_n$. We refer to [46, Chapter 10] for a deeper treatment. Intuitively, a supermartingale is a discrete-time stochastic process in which, for an observer who has seen the values of $X_0, X_1, \dots, X_n$, the expected value at the next step, i.e. $\mathbb{E}[X_{n+1} \mid \mathcal{F}_n]$, is no more than the last observed value $X_n$.
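As a minimal sketch (ours, not from the paper), the following Python snippet builds a difference-bounded supermartingale step by step: the increment is −1, 0 or +1 with probabilities 0.4, 0.3 and 0.3, so its conditional expectation is −0.1 ≤ 0 and all differences lie in [−1, 1]. The empirical conditional means confirm the defining inequality.

    import random

    def step(x):
        # One step: add -1, 0 or +1 with probabilities 0.4, 0.3, 0.3,
        # so the expected increment is -0.1 <= 0 and |X_{n+1} - X_n| <= 1.
        return x + random.choices((-1, 0, 1), weights=(0.4, 0.3, 0.3))[0]

    # Empirically check E[X_{n+1} | X_n = x] <= x for a few observed values x.
    for x in (0, 5, 10):
        samples = [step(x) for _ in range(100_000)]
        print(x, sum(samples) / len(samples))   # roughly x - 0.1, never above x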

II-B Syntax

In the sequel, we fix two disjoint countable sets: the set of program variables and the set of sampling variables. Informally, program variables are directly related to the control flow of a program, while sampling variables represent random inputs sampled from distributions. We assume that every program variable is integer-valued, and every sampling variable is bound to a discrete probability distribution over integers. We first define several basic notions and then present the syntax.

Valuations.

A valuation over a finite set of variables is a function that assigns a value to each variable. The set of all valuations over is denoted by .

Arithmetic Expressions.

An arithmetic expression over a finite set of variables is an expression built from the variables in , integer constants, and arithmetic operations such as addition, subtraction, multiplication, exponentiation, etc. For our theoretical results we consider a general setting for arithmetic expressions in which the set of allowed arithmetic operations can be chosen arbitrarily.

Propositional Arithmetic Predicates.

A propositional arithmetic predicate over a finite set of variables $V$ is a propositional formula built from (i) atomic formulae of the form $e_1 \bowtie e_2$, where $e_1, e_2$ are arithmetic expressions over $V$ and $\bowtie$ is a comparison operator (such as $\le$ or $\ge$), and (ii) propositional connectives such as $\wedge$, $\vee$ and $\neg$. The satisfaction relation $\models$ between a valuation $\nu$ and a propositional arithmetic predicate is defined through evaluation and the standard semantics of propositional connectives, e.g. (i) $\nu \models e_1 \bowtie e_2$ iff $e_1 \bowtie e_2$ holds when the variables in $e_1, e_2$ are substituted by their values in $\nu$, (ii) $\nu \models \neg\phi$ iff $\nu \not\models \phi$, and (iii) $\nu \models \phi_1 \wedge \phi_2$ (resp. $\phi_1 \vee \phi_2$) iff $\nu \models \phi_1$ and (resp. or) $\nu \models \phi_2$.

The Syntax.

Our syntax is illustrated by the grammar in Figure 1. Below, we explain the grammar.

  • Variables. Expressions (resp. ) range over program (resp. sampling) variables.

  • Arithmetic Expressions. Expressions (resp. ) range over arithmetic expressions over all program and sampling variables (resp. all program variables).

  • Boolean Expressions. Expressions range over propositional arithmetic predicates over program variables.

  • Programs. A program can either be a single assignment statement (indicated by ':='), or 'skip', which is the special statement that does nothing, or a conditional branch (indicated by 'if'), or a non-deterministic branch (indicated by 'if ⋆'), or a probabilistic branch (indicated by 'if prob(p)', where p is the probability of executing the then-branch and 1 − p that of the else-branch), or a while-loop (indicated by the keyword 'while'), or a sequential composition of two sub-programs (indicated by a semicolon).

Program Counters.

We assign a program counter to each assignment statement, skip, if branch and while-loop. Intuitively, the counter specifies the current point in the execution of a program. We also refer to program counters as labels.

Fig. 1: The Syntax of Probabilistic Programs

II-C Semantics

To specify the semantics of our probabilistic programs, we follow previous approaches, such as [5, 9, 7], and use Control Flow Graphs (CFGs) and Markov Decision Processes (MDPs) (see [2, Chapter 10]). Informally, a CFG describes how the program counter and valuations over program variables change along an execution of a program. Then, based on the CFG, one can construct an MDP as the semantical model of the probabilistic program.

Definition 1 (Control Flow Graphs).

A Control Flow Graph (CFG) is a tuple

with the following components:

  • is a finite set of labels, which is partitioned into the set of conditional-branch labels, the set of assignment labels, the set of probabilistic labels and the set of nondeterministic-branch labels;

  • and are disjoint finite sets of program and sampling variables, respectively;

  • a transition relation in which every member (called a transition) is a tuple consisting of (i) a source label and a target label and (ii) additional information that is either a propositional arithmetic predicate (if the source is a conditional-branch label), an update function (if the source is an assignment label), a probability (if the source is a probabilistic label), or a marker for the chosen branch (if the source is a nondeterministic-branch label).

We always specify an initial label representing the starting point of the program, and a terminal label that represents termination and has no outgoing transitions.

Intuition for CFGs.

Informally, a control flow graph specifies how the program counter and the values of program variables change in a program. We have four types of labels, namely conditional-branch, assignment, probabilistic-branch and nondeterministic-branch labels. The initial label corresponds to the initial statement of the program. A conditional-branch label corresponds to a conditional-branching statement indicated by 'if' or 'while', and leads to the next label determined by the branching condition without changing the valuation. An assignment label corresponds to an assignment statement indicated by ':=' or 'skip', and leads to the label right after the statement together with an update to the value of the variable on the left-hand side of ':=' that is specified by its right-hand side. This update can be seen as a function that gives the next valuation over program variables based on the current valuation and the sampled values. The statement 'skip' is treated as an assignment statement that does not change values. A probabilistic-branch label corresponds to a probabilistic-branching statement indicated by 'if prob(p)', and leads to the label of the 'then' (resp. 'else') branch with probability p (resp. 1 − p). A nondeterministic-branch label corresponds to a nondeterministic choice statement indicated by 'if ⋆', and has transitions to the two labels corresponding to the 'then' and 'else' branches.

By standard constructions, one can transform any probabilistic program into an equivalent CFG. We refer to  [5, 9, 7] for details.

Example 1.

Consider the probabilistic program in Figure 2. Its CFG is given in Figure 3. The program uses several program variables and a sampling variable that observes a discrete probability distribution. The numbers 1 to 10 are the program counters (labels). In particular, 1 is the initial label and 10 is the terminal label. The arcs represent transitions in the CFG. For example, one of the arcs specifies a transition whose update function assigns to a program variable the value of an expression obtained by adding the variable's current value to a sampled value of the sampling variable.

    1:  while  do
    2:      ;
    3:      while  do
    4:          if  then
    5:              
                else
    6:              skip
                fi;
    7:          
            od;
    8:      ;
    9:      
        od
    10:
Fig. 2: A Probabilistic Program and its Labels
Fig. 3: The CFG of the Program in Figure 2

The Semantics.

Based on CFGs, we define the semantics of probabilistic programs through the standard notion of Markov decision processes. Below, we fix a probabilistic program with its CFG in the form (1). We define the notion of configurations: a configuration is a pair consisting of a label (representing the current program counter) and a valuation (representing the current valuation for program variables). We also fix a sampling function which assigns to every sampling variable a discrete probability distribution over $\mathbb{Z}$. Then, the joint discrete probability distribution over valuations of sampling variables is defined as the product of these distributions, i.e. the probability of a valuation is the product of the probabilities of the values it assigns to the individual sampling variables.

The semantics is described by a Markov decision process (MDP). Intuitively, the MDP models the stochastic transitions, i.e. how the current configuration jumps to the next configuration. The state space of the MDP is the set of all configurations. There are three actions: a default action corresponding to the absence of nondeterminism, and the actions th and el, corresponding to taking the then-branch and the else-branch of a nondeterministic-branch label, respectively. The MDP transition probabilities are determined by the current configuration, the action chosen for the configuration and the statement at the current configuration.

To resolve nondeterminism in MDPs, we use schedulers. A scheduler is a function which maps every history, i.e. all information up to the current execution point, to a probability distribution over the actions available at the current state. Informally, it resolves nondeterminism at nondeterministic-branch labels by discrete probability distributions over actions that specify the probability of taking each action.

From the MDP semantics, the behaviour of a probabilistic program with its CFG in the form (1) is described as follows: Consider an arbitrary scheduler. The program starts in an initial configuration consisting of the initial label and an initial valuation over the program variables. Then, in each step, given the current configuration, the next configuration is determined as follows:

  1. a valuation of the sampling variables is sampled according to the joint distribution;

  2. if the current label is an assignment label, then the next configuration consists of the target label of its transition and the valuation obtained by applying the corresponding update function to the current valuation and the sampled values;

  3. if the current label is a conditional-branch label with two outgoing transitions (one guarded by the branching condition and one by its negation), then the next configuration consists of the target label of the transition whose guard is satisfied by the current valuation, together with the unchanged valuation;

  4. if the current label is a nondeterministic-branch label with two outgoing transitions, then the next configuration consists of the target label chosen by the scheduler, together with the unchanged valuation;

  5. if the current label is a probabilistic-branch label with two outgoing transitions, then the next configuration consists of the target label of the then-branch with probability p and that of the else-branch with probability 1 − p, together with the unchanged valuation;

  6. if there is no transition emitting from the current label (i.e. the current label is the terminal label), then the next configuration is the current configuration itself.

For a detailed construction of the MDP, see Appendix -A.
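The following Python sketch (ours; the CFG encoding and field names are assumptions, not the paper's formal MDP construction in Appendix -A) renders the six cases above executable on a toy CFG for 'while x >= 1 do x := x + r od' with r uniform on {−1, 0}:

    import random

    def sample(dists):
        # Case 1: sample one value for every sampling variable from its distribution.
        return {r: random.choices(list(d.keys()), weights=list(d.values()))[0]
                for r, d in dists.items()}

    def step(cfg, label, val, dists, scheduler):
        # One step of the semantics: from configuration (label, val) to the next one.
        kind, data = cfg[label]
        mu = sample(dists)
        if kind == "terminal":                      # case 6: no outgoing transition
            return label, val
        if kind == "assign":                        # case 2: data = (update_function, target)
            f, target = data
            return target, f(val, mu)
        if kind == "cond":                          # case 3: data = (guard, then_label, else_label)
            guard, l_then, l_else = data
            return (l_then if guard(val) else l_else), val
        if kind == "nondet":                        # case 4: the scheduler picks the target
            l_then, l_else = data
            return scheduler(label, val, (l_then, l_else)), val
        if kind == "prob":                          # case 5: data = (p, then_label, else_label)
            p, l_then, l_else = data
            return (l_then if random.random() < p else l_else), val
        raise ValueError(kind)

    # Toy CFG for "while x >= 1 do x := x + r od", r uniform on {-1, 0}.
    cfg = {
        1: ("cond", (lambda v: v["x"] >= 1, 2, 3)),
        2: ("assign", (lambda v, mu: {"x": v["x"] + mu["r"]}, 1)),
        3: ("terminal", None),
    }
    dists = {"r": {-1: 0.5, 0: 0.5}}
    label, val = 1, {"x": 5}
    while label != 3:
        label, val = step(cfg, label, val, dists, scheduler=lambda lbl, v, targets: targets[0])
    print("terminated with", val)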

Runs and the Probability Space.

A run is an infinite sequence of configurations. Informally, a run specifies, for every step of a program execution, the configuration at that step, i.e. the program counter and the valuation for program variables at that step. By construction, with an initial configuration (as the initial state of the MDP) and a scheduler, the Markov decision process for a probabilistic program induces a unique probability space over the runs (see [2, Chapter 10] for details). In the rest of the paper, we write the probability measure and the corresponding expectation operator as parameterized by the initial configuration and the scheduler.

III Problem Statement

In this section, we define the compositional verification problem of almost-sure termination over probabilistic programs. Below, we fix a probabilistic program with its CFG in the form (1). We first define the notion of almost-sure termination. Informally, the property of almost-sure termination requires that a program terminates with probability 1. We follow the definitions in [5, 15, 9].

Definition 2 (Almost-sure Termination).

A run of a program is terminating if it reaches the terminal label at some step. We define the termination time as a random variable that, for a run, equals the smallest step index at which the terminal label is reached if such an index exists (this case corresponds to program termination), and equals $\infty$ otherwise (this corresponds to non-termination). The program is said to be almost-surely (a.s.) terminating under an initial configuration if, for all schedulers, the probability that the termination time is finite equals 1.

Lemma 1.

Let the program be the sequential (resp. conditional) composition of two other programs, and assume that both of those programs are a.s. terminating for any initial value. Then, the composed program is also a.s. terminating for any initial value. See Appendix -B for a detailed proof.

Remark 1.

The lemma above shows that a.s. termination is closed under branching and sequential composition. Hence, in this paper, we only consider the major problem of compositional verification for a.s. termination of nested while loops.

We now define the problem of compositional verification of a.s. termination.

Definition 3 (Compositional Properties).

We first describe the notion of compositionality in general. Consider a compositional operator op (e.g. sequential composition or loop nesting) over general objects. We say that a property is op-compositional under a side condition if the following assertion holds for all objects: if the side condition and the property hold for the constituent objects, then the property also holds on the (bigger) object composed from them via op.

Compositional Verification.

A compositional property can be proven by a natural divide-and-conquer approach: to prove the property for a composed object, we first prove the same property on the (smaller) constituent objects and then prove the side condition. Using compositional properties is an effective method for mitigating the state-space explosion problem that usually arises in real-world verification problems.

The Almost-sure Termination Property.

In this paper, we are concerned with a.s. termination of while-loops. Our aim is to prove this based on the assumption that the loop body is a.s. terminating for every initial value. We consider the target property to be that the probabilistic program is a.s. terminating for every initial value, and we consider the compositional operator to be the while-loop operator, i.e. given a probabilistic program (as the loop body) and a propositional arithmetic predicate (as the loop guard), the composed program is the while-loop with that guard and that body. Since the loop body might itself be another while-loop, our setting encompasses probabilistic nested loops of any depth.

We focus on the compositional verification of a.s. termination under the while-loop operator and solve the problem in the following steps: First, we establish a sufficient side condition so that the assertion

(1)  if the side condition holds and the loop body is a.s. terminating for every initial valuation, then the while-loop is a.s. terminating for every initial valuation

holds for all probabilistic programs (as loop bodies) and propositional arithmetic predicates (as loop guards).¹ Second, based on the proposed side conditions, we explore possible algorithmic approaches.

¹ Note that we do not define or consider any analogous assertion for the loop guard, because checking the guard condition always takes finite time.

IV Previous Approaches

In this section, we describe previous approaches for compositional verification of the (a.s.) termination property for (probabilistic) while-loops. We first present the variant rule from the Floyd-Hoare logic [16, 29] that is sound for non-probabilistic programs. Then we describe the probabilistic extension proposed in [15].

IV-A Classical Approach for Non-probabilistic Programs

Consider a non-probabilistic while-loop whose body is a sequential composition of programs that may contain nested while-loops and are each assumed to be terminating. The fundamental approach for compositional analysis is the following classical variant rule (V-rule) from the Floyd-Hoare logic [16, 29]:

In the V-rule above, the variant is an arithmetic expression over program variables that acts as a ranking function. The rule uses a relation that is well-founded when restricted to the loop guard, together with its "non-strict" version. The premise of the rule says that (i) for every program in the loop body, the value of the variant after the execution of that program does not increase in comparison with its initial value before the execution, and (ii) there is some program in the loop body whose execution leads to a strict decrease in the value of the variant. A program satisfying (i) is said to be unaffecting for the variant; similarly, a program satisfying (ii) is said to be ranking for the variant. Informally, the variant rule says that if all the programs in the loop body are unaffecting and there is at least one that is ranking, then the while-loop terminates.

The variant rule is sound for proving termination of non-probabilistic programs, because the value of the variant cannot be decremented infinitely many times, given that the underlying relation is well-founded when restricted to the loop guard.
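For concreteness, here is a schematic LaTeX rendering of the V-rule as described above; the notation (variant $a$ with initial value $a_0$, strict relation $\sqsubset$ and its non-strict version $\sqsubseteq$, guard $G$, body $P_1;\dots;P_n$) is ours and may differ from the paper's display.

    % Schematic variant rule (notation ours, not the paper's exact display):
    % premises over the programs of the loop body; conclusion: the loop terminates.
    \[
    \frac{\{G \wedge a = a_0\}\; P_i \;\{a \sqsubseteq a_0\}\ \text{for all } i
          \qquad
          \{G \wedge a = a_0\}\; P_j \;\{a \sqsubset a_0\}\ \text{for some } j}
         {\texttt{while } G \texttt{ do } P_1;\dots;P_n \texttt{ od}\ \text{terminates}}
    \]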

IV-B A Previous Approach for Probabilistic Programs

Fioriti and Hermanns’s approach in [15] can be viewed as an extension of the abstract V-rule, which is a proof system for a.s. terminating property. We call this abstract rule the FHV-rule:

Note that while the FHV-rule looks identical to the V-rule, semantics of the Hoare triple in the FHV-rule are different from that of the V-rule.

The FHV-rule is a direct probabilistic extension of the V-rule through the notion of ranking supermartingales (RSMs, see [5, 9, 7]). RSMs are discrete-time stochastic processes that satisfy the following conditions: (i) their values are always non-negative; and (ii) at each step of the process, the conditional expectation of the value decreases by at least a fixed positive constant. The decreasing and non-negative nature of RSMs ensures that, with probability 1 and within a finite expected number of steps, the value of any RSM hits zero. When embedded into programs through the notion of RSM-maps (see e.g. [5, 9]), RSMs serve as a sound approach for proving termination of probabilistic programs within finite expected time, which implies a.s. termination as well.

In [15], the variant in the FHV-rule is a propositionally linear expression that represents an RSM, the strict relation is the well-founded relation on non-negative real numbers in which one value is below another if it is smaller by at least a fixed positive constant, and the non-strict relation is interpreted simply as the usual $\le$. The unaffecting and ranking conditions are extended to the probabilistic setting through conditional expectation (see [15, Page 9]). Concretely, we say that (i) a sub-program is unaffecting if the expected value of the variant after its execution is no greater than the initial value before the execution; and (ii) a sub-program is ranking if the expected value of the variant after its execution decreases by at least the fixed positive constant compared with the initial value before the execution. Note that in [15], such a variant is also called a compositional RSM.

Crucial Issue 1 (Difference-boundedness and Integrability).

The authors of [15] accurately observed that simply extending the variant rule with expectation is not enough. They provided a counterexample in [15, Section 7.2] that is not a.s. terminating but has a compositional RSM. The problem is that random variables may not be integrable after the execution of a probabilistic while-loop. In order to resolve this integrability issue, they introduced the difference-boundedness condition (see Section II) for conditional expectation. Then, using the Optional Sampling/Stopping Theorem, they proved that, under the difference-boundedness condition, the random variables are integrable after the execution of while-loops. To ensure the difference-boundedness condition, they established sound inference rules (see [15, Table 2 and Theorem 7.6]). With the integrability issue resolved, [15] finally claims that compositional ranking supermartingales provide a sound approach for proving a.s. termination of probabilistic while-loops (see [15, Theorem 7.7]).

V A Counterexample to the FHV-rule

Although [15] takes care of the integrability issue, we show that, unfortunately, the FHV-rule is still not sound. We present an explicit counterexample on which the FHV-rule proves a.s. termination, while the program is actually not a.s. terminating.

Example 2 (The Counterexample).

Consider the probabilistic program in Figure 2 (Page 2). We will show that this program is not a.s. terminating. Recall from Example 1 its program variables and its sampling variable together with the associated distribution. Intuitively, one of the program variables models a random walk with an absorbing barrier in the inner loop, as indicated by the inner loop guard. By the structure of the outer loop, the program does not terminate only if, after a finite number of executions of the inner loop, the random walk stops at the absorbing barrier in every subsequent iteration of the inner loop. Note that after each execution of the inner loop, the value of the outer-loop variable does not increase in expectation. Furthermore, its value is decreased by one at the end of the loop body of the outer loop. Thus, its expected value decreases by at least one after each outer-loop iteration. Given the outer loop guard, this suggests that the program should be a.s. terminating. In contrast, we show that this program is not a.s. terminating (see Proposition 1 below).

Proposition 1.

The probabilistic program in Example 2 (Figure 2, Page 2) is not a.s. terminating. Specifically, it fails to terminate with positive probability when one program variable is given a suitable fixed initial value and another program variable is initialized to a sufficiently large value.

Proof.

The program does not terminate only if the value of the random-walk variable is 2 after every execution of the inner loop. The key point is to prove that, in the inner loop, this value ends up being 2 with higher and higher probability as the length of the inner loop increases. Consider the random walk in the inner loop and abstract its values into three states. From the structure of the program, if we start in the state corresponding to value 2, then after the inner loop the successor state may be any of the three states. If the successor state is one of the other two states, then the outer loop terminates immediately. However, there is a positive probability that the successor state is again the one corresponding to value 2, in which case the outer loop does not terminate in this iteration (as the value is set back to 2 at the beginning of the next iteration). This probability depends on the number of steps of the random walk in the inner loop (determined by the length of the inner loop), and we show that it grows as this length increases. Thus, after more and more executions of the outer loop, the length of the inner loop continues to increase exponentially, and with higher and higher probability the program does not terminate in the current execution of the loop body.

The detailed demonstration is as follows. W.l.o.g., we fix the initial value of the random-walk variable at every beginning of the inner loop. The values observed after the different executions of the inner loop are results of executions of the inner loop with the same initial value, hence they are mutually independent. We now temporarily fix the length of the inner loop at the beginning of the outer-loop body and consider the probability that the value of the random-walk variable after the inner loop is not 2. Suppose that the sampled values for the random-walk steps during the execution of the inner loop consist of $k$ instances of $+1$ and $k$ instances of $-1$. The paths that avoid being absorbed by the barrier are exactly those in which every prefix contains more steps of one sign than of the other, so their number is given by a Catalan number. Applying Stirling's approximation to the Catalan numbers yields, for every even inner-loop length, a bound on the probability of interest that decays polynomially in that length. Recall that the inner-loop length at the first arrival at the outer-loop head is sufficiently large and, from the program, it grows geometrically with each outer-loop iteration, so the resulting probabilities are summable. A well-known convergence criterion for infinite products states that $\prod_{n}(1 - a_n)$ with $0 \le a_n < 1$ converges to a non-zero number if and only if $\sum_{n} a_n$ converges. Hence the infinite product of the per-iteration probabilities of staying at 2 converges to a non-zero number, and the probability of non-termination is positive. ∎
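The following Python sketch (ours) numerically illustrates the two estimates used in the proof; it simulates the abstract random-walk event, not the exact program of Figure 2, whose expressions are not fully shown here.

    import math
    import random

    def stays_positive(n):
        # One symmetric +-1 walk of n steps; True iff every prefix sum stays > 0.
        s = 0
        for _ in range(n):
            s += random.choice((-1, 1))
            if s <= 0:
                return False
        return True

    # (1) The Catalan/Stirling estimate: the probability decays like c / sqrt(n),
    #     so freq * sqrt(n) should be roughly constant.
    for n in (16, 64, 256, 1024):
        freq = sum(stays_positive(n) for _ in range(5000)) / 5000
        print(n, round(freq, 4), round(freq * math.sqrt(n), 3))

    # (2) The infinite-product criterion: with inner-loop lengths n_k growing geometrically,
    #     failure probabilities p_k ~ c / sqrt(n_k) are summable, so prod_k (1 - p_k)
    #     stays bounded away from 0.
    prod = 1.0
    for k in range(60):
        n_k = 8 * 2**k
        p_k = 1 / math.sqrt(2 * math.pi * n_k)   # stand-in for the per-iteration failure probability
        prod *= 1 - p_k
    print("approximate limit of the product:", round(prod, 4))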

We now show that, using the FHV-rule proposed in [15], one can deduce that the probabilistic program in Example 2 is a.s. terminating.

Proposition 2.

The FHV-rule in [15] derives that the probabilistic program in Example 2 is a.s. terminating.

Proof.

To see that the FHV-rule derives a.s. termination on this example, we show that the expression consisting of the program variable in the outer-loop guard is a compositional RSM that satisfies the integrability and difference-boundedness conditions. First, we can show that this variable is integrable and difference-bounded at every label. For example, for the assignment statements at labels 2, 5, 7, 8 and 9 in Figure 2, the expression is integrable and difference-bounded after these statements simply because they either do not involve this variable on the left-hand side or change its value by one unit. Similarly, within the nested loop, the loop body (from label 4 to label 7) causes a bounded change to its value, so the expression is integrable after the inner loop (using the while-rule in [15, Table 2]). Second, it is easy to see that the expression is a compositional RSM, as from [15, Definition 7.1] we have the following:

  • The value of the variable does not increase after the assignment statements at labels 2 and 8;

  • In the loop body of the nested loop, the expected value of the variable does not increase, given that it does not increase in any of the conditional branches;

  • By definition, the expected value of the variable does not increase after the inner loop;

  • The value of the variable is decreased by one after the last assignment statement of the outer-loop body.

Thus, by applying [15]’s main theorem for compositionality ([15, Theorem 7.7]), we can conclude that the program should be a.s. terminating. ∎

From Proposition 1 and Proposition 2, we establish the main theorem of this section, i.e. that the FHV-rule is not sound. For a detailed explanation of the unsoundness of the FHV-rule, see Appendix -C.

Theorem 1.

The FHV-rule, i.e. the probabilistic extension of the V-rule as proposed in [15], is not sound for a.s. termination of probabilistic programs, even if we require the compositional RSM to be difference-bounded and integrable.

Note that integrability is a very natural requirement in probability theory. Hence, Theorem 1 states that a natural probabilistic extension of the variant rule is not sufficient for proving a.s. termination of probabilistic programs.

VI Our Compositional Approach

In the previous section, we showed that the FHV-rule is not sound for proving a.s. termination of probabilistic programs. In this section, we show how the FHV-rule can be strengthened to a sound approach.

Crucial Issue 2 (Non-negativity of RSMs).

The reason why the approach of [15] is not sound lies in the fact that ranking supermartingales (RSMs) are required to be non-negative stochastic processes (see e.g. [24, Example 3] for a counterexample showing that the non-negativity condition is necessary). In the classical V-rule for non-probabilistic programs, non-negativity is not required, given that negative values in a non-probabilistic setting simply mean that the actual value of the variant is negative. However, in the presence of probability, negative values only mean that the expected value of the expression is negative. Thus, it is possible that the expected value of the expression decreases and becomes arbitrarily negative, tending to $-\infty$, while simultaneously the actual value of the expression increases with higher and higher probability. In our counterexample (Example 2), the expected value of the outer-loop variable decreases after each outer-loop iteration; however, the probability that its value remains the same increases with the length of the inner loop. More specifically, the expected-value decrease results from the fact that, after the inner loop, the value of the variable may get arbitrarily negative, tending towards $-\infty$.

The general idea of our approach is to require the expected value of the expression in the variant rule to always decrease by at least a fixed positive amount. We call this the strict decrease condition. This condition is in contrast with the FHV-rule, which allows the value of the expression at certain statements to remain the same (in expectation). We show that after this strengthening, the resulting rule is sound for compositional verification of a.s. termination over probabilistic programs. Our main mathematical tools are concentration inequalities (e.g. [31]), which give tight upper bounds on the probability that a stochastic process deviates from its mean value.

Instead of following an inference-rule-based method, we present our approach using martingales. This is because martingale-based approaches often lead to completely automated methods (e.g. [5, 9, 7]), while rule-based approaches mostly result in semi-automatic methods that require the use of interactive theorem provers (e.g. [28, 36, 34]). To clarify that our approach is indeed a strengthening of the FHV-rule in [15], we first write the rule-based approach of [15] in an equivalent martingale-based format.

Below, we fix a probabilistic program (the loop body) and a loop guard, and consider the while-loop formed from them. For the purpose of compositional verification, we assume that the loop body is a.s. terminating. We recall the termination-time random variable (see Definition 2) and the joint discrete probability distribution for the sampling variables. We also use the standard notion of invariants, which are over-approximations of the set of reachable configurations at every label.

Invariants.

An invariant is a function that assigns to each label a set of valuations of program variables that at least contains all valuations for which the corresponding configuration can be visited in some run of the program. An invariant is linear if every assigned set is a finite union of polyhedra.

We can now describe the FHV-rule approach of [15] using V-rule supermartingale maps. A V-rule supermartingale map w.r.t. an invariant is a real-valued function over configurations satisfying the following conditions:

  • Non-increasing property. The value of the map does not increase in expectation after the execution of any of the statements in the outer-loop body. For example, for an assignment statement, the non-increasing condition requires that, for every valuation allowed by the invariant, the expected value of the map after applying the update function is at most its value before the assignment. This condition can be similarly derived for other types of labels.

  • Decrease property. There exists a statement that will definitely be executed in every loop iteration and will cause the map to decrease (in expectation). For example, the condition for strict decrease at an assignment statement says that, for every valuation allowed by the invariant, the expected value of the map after applying the update function is smaller than its value before the assignment by at least a fixed positive constant.

  • Well-foundedness. The values of the map should be bounded from below when restricted to the loop guard. Formally, this condition requires that the value of the map at every valuation satisfying the loop guard is at least a fixed constant.

  • Conditional difference-boundedness. The conditional expected change in the value of the map after the execution of each statement is bounded. For example, at an assignment statement, this condition says that there exists a fixed positive bound on the absolute value of the expected change of the map, over all valuations allowed by the invariant. The purpose of this condition is to ensure the integrability of the map (see [15, Lemma 7.4]).

We strengthen the FHV-rule of [15] in two ways. First, as the major strengthening, we require that the expression should strictly decrease in expectation at every statement, as opposed to [15], where the value of the expression is only required to decrease at some statement. Second, we slightly extend the conditional difference-boundedness condition and require that the difference caused in the value of the expression by the execution of each statement should always be bounded, i.e. we require difference-boundedness not only in expectation, but in every run of the program.

The core notion in our strengthened approach is that of descent supermartingale maps (DSM-maps). A DSM-map is a function, assigning a value to each configuration, that decreases (in expectation) at each step of the execution of the program.

Definition 4 (Descent Supermartingale Maps).

A descent supermartingale map (DSM-map) w.r.t real numbers , , a non-empty interval and an invariant is a function satisfying the following conditions:

  • For each with , it holds that

    • for all and ;

    • for all ;

  • For each and , it holds that for all such that ;

  • For each and , it holds that for all ;

  • For each with , it holds that

    • for all ,

    • for all ,

    • for all ;

  • For all such that (recall that is the loop guard), it holds that .

Informally, a function is a DSM-map if:

  • Its value decreases in expectation by at least a fixed positive amount after the execution of each statement (the strict decrease condition), and its change of value before and after each statement falls in a fixed bounded interval (the strengthened difference-boundedness condition);

  • Its value is bounded from below by a fixed constant at every entry into the loop body (the well-foundedness condition).

By the decreasing nature of DSM-maps, it is intuitively true that the existence of a DSM-map implies a.s. termination. However, this point is non-trivial as counterexamples will arise if we drop the difference-boundedness condition and only require the strict decrease condition (see e.g. [24, Example 3]). In the following, we use the difference-boundedness condition to derive a concentration property on the termination time (see [9]). Under this concentration property, we prove that DSM-maps are sound for proving a.s. termination.
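As a minimal sketch of what these informal conditions amount to on a single non-nested loop (our own toy example, not the synthesis algorithm of the later sections), consider 'while x >= 1 do x := x + r od', where r is −2 with probability 0.6 and +1 with probability 0.4, and the candidate linear map η(x) = x:

    R = {-2: 0.6, +1: 0.4}                     # assumed distribution of the sampling variable r
    def eta(x): return x                       # candidate linear DSM-map
    EPS, C1, C2, LOWER = 0.5, -2.0, 1.0, 1.0   # decrease amount, difference interval, lower bound

    def check_dsm(guard_lo=1, guard_hi=10_000):
        for x in range(guard_lo, guard_hi + 1):                     # valuations satisfying x >= 1
            exp_next = sum(p * eta(x + d) for d, p in R.items())
            assert exp_next <= eta(x) - EPS                         # strict decrease in expectation
            assert all(C1 <= eta(x + d) - eta(x) <= C2 for d in R)  # difference-boundedness
            assert eta(x) >= LOWER                                  # bounded below on the guard
        return True

    print(check_dsm())

For a linear map and a linear loop, such checks reduce to finitely many linear constraints, which is the observation behind synthesizing linear DSMs in polynomial time; the brute-force range check above is only for illustration.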

We first present a well-known concentration inequality called Hoeffding’s Inequality.

Theorem (Hoeffding’s Inequality [23, 9]).

Let $\{X_n\}_{n \in \mathbb{N}_0}$ be a supermartingale w.r.t. some filtration $\{\mathcal{F}_n\}_{n \in \mathbb{N}_0}$ and $\{[a_n, b_n]\}_{n \in \mathbb{N}}$ be a sequence of intervals with positive length in $\mathbb{R}$. If $X_0$ is a constant random variable and $X_n - X_{n-1} \in [a_n, b_n]$ a.s. for all $n \in \mathbb{N}$, then

$\mathbb{P}\left(X_n - X_0 \ge \lambda\right) \le \exp\left(-\frac{2\lambda^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right)$

for all $\lambda > 0$ and $n \in \mathbb{N}$.

Hoeffding’s Inequality states that for any difference-bounded supermartingale, it is unlikely that its value at the -th step exceeds its initial value by much (measured by ).

Using Hoeffding’s Inequality, we prove the following lemma.

Lemma 2.

Let $\{X_n\}_{n \in \mathbb{N}_0}$ be a supermartingale w.r.t. some filtration $\{\mathcal{F}_n\}_{n \in \mathbb{N}_0}$ and $[a, b]$ be an interval with positive length in $\mathbb{R}$. If $X_0$ is a constant random variable, it holds that $\mathbb{E}[X_{n+1} \mid \mathcal{F}_n] \le X_n - \epsilon$ for some $\epsilon > 0$, and $X_{n+1} - X_n \in [a, b]$ a.s. for all $n \in \mathbb{N}_0$, then for any $K \in \mathbb{R}$,

$\mathbb{P}(X_n \ge K) \le \exp\left(-\frac{2\,(n\epsilon + K - X_0)^2}{n\,(b - a)^2}\right)$

for all sufficiently large $n$.

Proof.

Let $Y_n := X_n + n\epsilon$; then $Y_0 = X_0$ and $Y_{n+1} - Y_n = X_{n+1} - X_n + \epsilon \in [a + \epsilon, b + \epsilon]$ a.s. Given that

$\mathbb{E}[Y_{n+1} \mid \mathcal{F}_n] = \mathbb{E}[X_{n+1} \mid \mathcal{F}_n] + (n+1)\epsilon \le X_n + n\epsilon = Y_n,$

we conclude that $\{Y_n\}_{n \in \mathbb{N}_0}$ is a supermartingale. Now we apply Hoeffding's Inequality to $\{Y_n\}_{n \in \mathbb{N}_0}$ with $\lambda = n\epsilon + K - X_0$, for all $n$ such that $n\epsilon + K - X_0 > 0$, and we get

$\mathbb{P}(X_n \ge K) = \mathbb{P}(Y_n - Y_0 \ge n\epsilon + K - X_0) \le \exp\left(-\frac{2\,(n\epsilon + K - X_0)^2}{n\,(b - a)^2}\right).$ ∎

Thus, we have the following corollary by calculation.

Corollary 1.

Let $\{X_n\}_{n \in \mathbb{N}_0}$ be a supermartingale satisfying the conditions of Lemma 2. Then, $\lim_{n \to \infty} \mathbb{P}(X_n \ge K) = 0$ for every $K \in \mathbb{R}$.

We are now ready to prove the soundness of DSM-maps.

Theorem 2 (Soundness of DSM-maps).

If there exists a DSM-map for the while-loop (whose loop body is assumed to be a.s. terminating), then for any initial valuation and for all schedulers, the while-loop terminates almost-surely.

Proof Sketch.

For a given program with a DSM-map as in Definition 4, we define the stochastic process whose $n$-th element is the pair of random variables representing the configuration at the $n$-th step of a run. We also define the stochastic process in which the $n$-th element is the number of steps in the execution until the $n$-th arrival at the initial label of the loop. Composing the two, we obtain the random variable representing the value of the DSM-map at the $n$-th arrival at the initial label. Recall that, by the well-foundedness condition in the definition of DSM-maps, the program stops once this value drops below the fixed lower bound when the loop guard is tested. We now prove the crucial property that, with probability 1, the random variable that measures the number of outer-loop iterations in a run is finite. To this end, we estimate the probability that a run performs at least $n$ outer-loop iterations, which is bounded by the probability that the value of the DSM-map at the $n$-th arrival is still at least the lower bound. The associated process satisfies the conditions of Lemma 2 (by the strict decrease and difference-boundedness conditions), so we can use Corollary 1 to bound this probability and let it tend to 0. Since the while-loop fails to terminate iff the number of outer-loop iterations is infinite (as the loop body is a.s. terminating), we obtain that the probability of non-termination is 0. For a more detailed proof, see Appendix -D. ∎

We illustrate an example application of Theorem 2.

Example 3.

Consider the following probabilistic while-loop.

 while  do
     
     while  do
        if  then
           if prob (6/13) then
               
           else
               
           fi
        else
           if prob (4/13) then
               
           else
               
           fi
        fi;
        
      od
    od

where the probability distribution for the sampling variable is uniform over a finite set of positive integers (the possible numbers of gambling rounds per iteration, see below).

The while-loop models a variant of the gambler's ruin based on the mini-roulette game with 13 slots [8]. Initially, the gambler has some units of money and he continues betting until he has no money. At the start of each outer-loop iteration, the number of gambling rounds for that iteration is chosen uniformly at random. Then, at each round, the gambler stakes one unit of money and either chooses an even-money bet, betting that the ball stops on an even number, which wins one unit of money with probability 6/13, or a 2-to-1 bet on selected slots, which wins two units of money with probability 4/13 (see the two branches of the nondeterministic choice in the loop body). During each outer-loop iteration, it is possible that the gambler runs out of money temporarily, but he is allowed to continue gambling in the current loop iteration; the program terminates only if he has depleted his money when the program is back at the start of the outer loop.
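The following is a simulation sketch (ours) of the loop above. Two details that are not visible in the excerpt are stated as assumptions: the number of rounds per outer-loop iteration is taken to be uniform on {1, ..., 6}, and the nondeterministic choice (the scheduler) always picks the even-money bet, with net payoffs +1 on a win and −1 on a loss (+2/−1 for the 2-to-1 bet).

    import random

    def round_outcome(bet):
        # Net change of money for one round under the assumed payoffs.
        if bet == "even-money":
            return +1 if random.random() < 6 / 13 else -1
        return +2 if random.random() < 4 / 13 else -1     # the 2-to-1 bet

    def run(x0=10, max_outer=10**6):
        x, outer = x0, 0
        while x >= 1 and outer < max_outer:               # outer loop: stop once the money is gone
            outer += 1
            rounds = random.randint(1, 6)                 # assumed distribution of the round count
            for _ in range(rounds):                       # inner loop: money may dip below zero
                x += round_outcome("even-money")          # assumed scheduler choice
        return outer if x < 1 else None

    results = [run() for _ in range(200)]
    finished = [n for n in results if n is not None]
    print("terminated within the cutoff:", len(finished), "of", len(results))

Under these assumed payoffs, both bets lose 1/13 of a unit in expectation per round, which is consistent with the loop being a.s. terminating.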

An invariant for the program is as follows: