Synthesizing Probabilistic Invariants via Doob's Decomposition

by Gilles Barthe et al.

When analyzing probabilistic computations, a powerful approach is to first find a martingale---an expression on the program variables whose expectation remains invariant---and then apply the optional stopping theorem in order to infer properties at termination time. One of the main challenges, then, is to systematically find martingales. We propose a novel procedure to synthesize martingale expressions from an arbitrary initial expression. Contrary to state-of-the-art approaches, we do not rely on constraint solving. Instead, we use a symbolic construction based on Doob's decomposition. This procedure can produce very complex martingales, expressed in terms of conditional expectations. We show how to automatically generate and simplify these martingales, as well as how to apply the optional stopping theorem to infer properties at termination time. This last step typically involves some simplification steps, and is usually done manually in current approaches. We implement our techniques in a prototype tool and demonstrate our process on several classical examples. Some of them go beyond the capability of current semi-automatic approaches.




1 Introduction

Probabilistic computations are a key tool in modern computer science. They are ubiquitous in machine learning, privacy-preserving data mining, cryptography, and many other fields. They are also a common and flexible tool to model a broad range of complex real-world systems. Not surprisingly, probabilistic computations have been extensively studied from a formal verification perspective. However, their verification is particularly challenging.

In order to understand the difficulty, consider the standard way to infer properties about the final state of a non-probabilistic program using a strong invariant (an assertion which is preserved throughout program execution) and a proof of termination. This proof principle is not easily adapted to the probabilistic case. First, probabilistic programs are interpreted as distribution transformers [25] rather than state transformers. Accordingly, assertions (including strong invariants) must be interpreted over distributions. Second, the notion of termination is different for probabilistic programs. We are usually not interested in proving that all executions are finite, but merely that the probability of termination is $1$, a slightly weaker notion. Under this notion, there may be no finite bound on the number of steps over all possible executions. So, we cannot use induction to transfer local properties to the end of the program—more complex limiting arguments are needed.

We can avoid some of these obstacles by looking at the average behavior of a program. That is, we can analyze numerical expressions (over program variables) whose average value is preserved. These expressions are known as martingales, and they have several technical advantages. First, martingales are easy to manipulate symbolically and can be checked locally. Second, the average value of a martingale is preserved at termination, even if the control flow of the program is probabilistic. This fact follows from the optional stopping theorem (OST), a powerful result in martingale theory.

While martingales are quite useful, they can be quite non-obvious. Accordingly, recent investigation has turned to automatically synthesizing martingales. State-of-the-art frameworks are based on constraint solving, and require the user to provide either a template expression [22, 6] or a limit on the search space [10, 7]. The main advantage of such approaches is that they are generally complete—they find all possible martingales in the search space. However, they have their drawbacks: a slightly wrong template can produce no invariant at all, and a large search space may need to be explored before arriving at the martingale.

We propose a framework that complements current approaches—we rely on purely symbolic methods instead of solving constraints or searching. We require the user to provide a “seed” expression, from which we always generate a martingale. Our approach uses Doob’s decomposition theorem, which gives a symbolic method to construct a martingale from any sequence of random values. Once we have the martingale, we can apply optional stopping to reason about the martingale at loop termination. While the martingale and final fact may be quite complex, we can use non-probabilistic invariants and symbolic manipulations to automatically simplify them.

We demonstrate our techniques in a prototype implementation of Doob's decomposition and the optional stopping theorem. Although these proof principles have long been known in probability theory, we are the first to incorporate them into an automated program analysis. Given basic invariants and hints, our prototype generates martingales and facts at termination for a selection of examples.

2 Mathematical preliminaries

We briefly introduce some definitions from probability theory required for our technical development. We lack the space to discuss the definitions in-depth, but we will explain informally what the various concepts mean in our setting. Interested readers can find a more detailed presentation in any standard probability theory textbook (e.g., Williams [33]).

First, we will need some basic concepts from probability theory.

Definition 1

Let $\Omega$ be the set of outcomes.

  • A sigma algebra $\mathcal{F}$ is a set of subsets of $\Omega$, closed under complements, countable unions, and countable intersections.

  • A probability measure is a countably additive mapping $\mathbb{P} : \mathcal{F} \to [0, 1]$ such that $\mathbb{P}(\Omega) = 1$.

  • A probability space is a triple $(\Omega, \mathcal{F}, \mathbb{P})$.

Next, we can formally define stochastic processes. These constructions are technical but completely standard.

Definition 2

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space.

  • A (real) random variable is a function $X : \Omega \to \mathbb{R}$. $X$ is $\mathcal{F}'$-measurable if $\{\omega \mid X(\omega) \le c\} \in \mathcal{F}'$ for every $c \in \mathbb{R}$.

  • A filtration is a sequence $\{\mathcal{F}_i\}_{i \in \mathbb{N}}$ of sigma algebras such that $\mathcal{F}_i \subseteq \mathcal{F}$ and $\mathcal{F}_i \subseteq \mathcal{F}_{i+1}$ for every $i \in \mathbb{N}$. When there is a fixed filtration on $(\Omega, \mathcal{F}, \mathbb{P})$, we will often abuse notation and write $\{\mathcal{F}_i\}$ for the filtration.

  • A stochastic process adapted to filtration $\{\mathcal{F}_i\}$ is a sequence of random variables $\{X_i\}_{i \in \mathbb{N}}$ such that each $X_i$ is $\mathcal{F}_i$-measurable.

Intuitively, we can think of $\Omega$ as a set where each element represents one possible outcome of the samples. In our setting, grouping samples according to the loop iteration gives a natural choice for the filtration: we can take $\mathcal{F}_i$ to be the set of events that are defined by samples in iteration $i$ or before. A stochastic process is adapted to this filtration if each $X_i$ is defined in terms of samples from iteration $i$ or before. Variables sampled at step $i + 1$ are independent of previous steps, so they are not $\mathcal{F}_i$-measurable.


To define martingales, we need to introduce expected values and conditional expectations. The expected value of a random variable $X$ is defined as $\mathbb{E}[X] = \int_\Omega X \, d\mathbb{P}$, where the integral is the Lebesgue integral [33]. We say that a random variable $X$ is integrable if $\mathbb{E}[|X|]$ is finite. Given an integrable random variable $X$ and a sigma algebra $\mathcal{F}' \subseteq \mathcal{F}$, a conditional expectation of $X$ with respect to $\mathcal{F}'$ is a random variable $Y$ such that $Y$ is $\mathcal{F}'$-measurable, and $\mathbb{E}[X \cdot \mathbf{1}_A] = \mathbb{E}[Y \cdot \mathbf{1}_A]$ for all events $A \in \mathcal{F}'$. (Recall that the indicator function $\mathbf{1}_A$ of an event $A$ maps elements of $A$ to $1$, and all other elements to $0$.) Since one can show that this random variable is essentially unique, we denote it by $\mathbb{E}[X \mid \mathcal{F}']$.
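On a finite probability space, a conditional expectation can be computed by averaging within the atoms of the conditioning sigma algebra. The following sketch (ours, for illustration only) does this for two fair coin flips, conditioning the number of heads on the first flip, and checks the defining property above:

```python
from fractions import Fraction

# Outcomes: two fair coin flips, each outcome has probability 1/4.
omega = [(a, b) for a in (0, 1) for b in (0, 1)]
prob = {w: Fraction(1, 4) for w in omega}

# X counts the total number of heads.
X = {w: w[0] + w[1] for w in omega}

# F' is generated by the first flip: its atoms are the blocks below.
blocks = [[w for w in omega if w[0] == v] for v in (0, 1)]

def cond_exp(X, blocks, prob):
    """Conditional expectation: constant on each block, equal to the
    probability-weighted block average of X."""
    Y = {}
    for block in blocks:
        mass = sum(prob[w] for w in block)
        avg = sum(prob[w] * X[w] for w in block) / mass
        for w in block:
            Y[w] = avg
    return Y

Y = cond_exp(X, blocks, prob)

# Defining property: E[X * 1_A] = E[Y * 1_A] for every event A in F'
# (it suffices to check the generating blocks).
for block in blocks:
    lhs = sum(prob[w] * X[w] for w in block)
    rhs = sum(prob[w] * Y[w] for w in block)
    assert lhs == rhs
```

Here `Y` is constant on each atom (e.g., it equals the average number of heads given the first flip), which is exactly the $\mathcal{F}'$-measurability requirement.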


Our method relies on computing higher-order moments. Suppose $X$ is a random variable with distribution $\mu$. If $X$ takes numeric values, the $k$th moment of $X$ is defined as $\mathbb{E}[X^k]$ for $k \in \mathbb{N}$. If $X$ ranges over tuples, the correlations of $X$ are defined as $\mathbb{E}[\pi_j(X) \cdot \pi_k(X)]$ for $j \neq k$, and similarly for products of three or more projections. Here, the projection $\pi_j$ for a tuple-valued random variable is the marginal distribution of the $j$th coordinate of the tuple.


A martingale is a stochastic process with a special property: the average value of the current step is equal to the value of the previous step.

Definition 3

Let $\{X_i\}$ be a stochastic process adapted to filtration $\{\mathcal{F}_i\}$. We say that $\{X_i\}$ is a martingale with respect to $\{\mathcal{F}_i\}$ if it satisfies the property $\mathbb{E}[X_{i+1} \mid \mathcal{F}_i] = X_i$ for all $i \in \mathbb{N}$.

For a simple example, consider a symmetric random walk on the integers. Let $X_i$ denote the position of the walk after $i$ steps. At each step, we flip a fair coin: if heads, we increase the position by $1$, otherwise we decrease the position by $1$. The sequence of positions forms a martingale, since the average position at time $i + 1$ is simply the position at time $i$: $\mathbb{E}[X_{i+1} \mid \mathcal{F}_i] = \tfrac{1}{2}(X_i + 1) + \tfrac{1}{2}(X_i - 1) = X_i.$
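This invariance can be checked mechanically. The small sketch below (ours, not part of the paper's tool) enumerates all equally likely coin sequences of a fixed length with exact rational weights and confirms that the walk's expected position stays at its initial value:

```python
from fractions import Fraction
from itertools import product

def expected_position(n):
    """Exact E[X_n] for the symmetric walk started at 0:
    enumerate all 2^n equally likely coin sequences."""
    total = Fraction(0)
    for flips in product((+1, -1), repeat=n):
        total += Fraction(sum(flips), 2 ** n)
    return total

# The martingale property implies E[X_n] = X_0 = 0 for every n.
assert all(expected_position(n) == 0 for n in range(1, 8))
```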

Doob’s decomposition.

One important result in martingale theory is Doob’s decomposition. Informally, it establishes that any integrable random process can be written uniquely as a sum of a martingale and a predictable process. For our purposes, it gives a constructive and purely symbolic method to extract a martingale from any arbitrary random process.

Theorem 1 (Doob’s decomposition)

Let $\{X_i\}$ be a stochastic process adapted to filtration $\{\mathcal{F}_i\}$ where each $X_i$ has finite expected value. Then, the following process is a martingale:

$Y_i = X_i - \sum_{j=1}^{i} \mathbb{E}[X_j - X_{j-1} \mid \mathcal{F}_{j-1}].$

If $\{X_i\}$ is already a martingale, then $Y_i = X_i$.

We will think of the stochastic process as a seed process which generates the martingale. While the definition of the martingale involves conditional expectations, we will soon see how to automatically simplify these expectations.
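To see the decomposition in action, consider a biased walk $X_{i+1} = X_i + z$, where $z$ is $+1$ with probability $p$ and $-1$ otherwise (our illustration, not an example from the paper). The predictable increment is $\mathbb{E}[X_j - X_{j-1} \mid \mathcal{F}_{j-1}] = 2p - 1$, so the extracted martingale is $Y_i = X_i - (2p - 1) \cdot i$. The sketch below verifies the martingale property exhaustively with exact arithmetic:

```python
from fractions import Fraction
from itertools import product

p = Fraction(1, 3)  # probability of stepping +1

def expected_Y(n):
    """Exact E[Y_n], where Y_i = X_i - (2p - 1) * i is the Doob
    martingale of the biased walk started at X_0 = 0."""
    total = Fraction(0)
    for flips in product((+1, -1), repeat=n):
        weight = Fraction(1)
        for s in flips:
            weight *= p if s == +1 else 1 - p
        x_n = sum(flips)
        total += weight * (x_n - (2 * p - 1) * n)
    return total

# A martingale's expectation is constant: E[Y_n] = Y_0 = 0.
assert all(expected_Y(n) == 0 for n in range(1, 7))
```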

Optional stopping theorem.

For any martingale $\{Y_i\}$, it is not hard to show that the expected value of $Y_i$ remains invariant at each time step. That is, for any fixed value $i \in \mathbb{N}$, we have $\mathbb{E}[Y_i] = \mathbb{E}[Y_0].$
The optional stopping theorem extends this equality to situations where the time itself may be random, possibly even a function of the martingale.

Definition 4

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space with filtration $\{\mathcal{F}_i\}$. A random variable $T : \Omega \to \mathbb{N}$ is a stopping time if the subset $\{\omega \in \Omega \mid T(\omega) \le i\}$ is a member of $\mathcal{F}_i$ for each $i \in \mathbb{N}$.

Returning to our random walk example, the first time that the position is farther than some fixed bound $k$ from the origin is a stopping time, since this time depends only on past samples. In contrast, the last time that the position is farther than $k$ from the origin is not a stopping time, since this time depends on future samples. More generally, the iteration count when we exit a probabilistic loop is a stopping time, since termination is a function of past samples only.

If we have a stopping time and a few mild conditions, we can apply the optional stopping theorem. (A basic version of the optional stopping theorem will suffice for our purposes, but there are alternative versions that do not require finite expected stopping time and bounded increments.)

Theorem 2 (Optional stopping)

Let $T$ be a stopping time, and let $\{Y_i\}$ be a martingale. If the expected value of $T$ is finite, and if $|Y_{i+1} - Y_i| \le C$ for all $i$ and some constant $C$, then $\mathbb{E}[Y_T] = \mathbb{E}[Y_0].$

To see this theorem in action, consider the random walk martingale $\{X_i\}$ and take the stopping time $T$ to be the first time that $|X_i| = k$. It is possible to show that $T$ has finite expected value, and clearly $|X_{i+1} - X_i| = 1$. So, the optional stopping theorem gives $\mathbb{E}[X_T] = \mathbb{E}[X_0] = 0.$

Since we know that the position is $\pm k$ at time $T$, this immediately shows that the probability of hitting $k$ is equal to the probability of hitting $-k$. This intuitive fact can be awkward to prove using standard probabilistic invariants, but falls out directly from a martingale analysis.
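The same conclusion can be cross-checked without martingales by solving the hitting equations of the walk directly. The following sketch (ours, not part of the paper's tool) solves $h(x) = \tfrac{1}{2}(h(x-1) + h(x+1))$ with boundaries $h(-k) = 0$ and $h(k) = 1$ using exact arithmetic; symmetry gives $h(0) = 1/2$, matching the optional-stopping argument:

```python
from fractions import Fraction

def hitting_probability(k, start=0):
    """P[walk from `start` reaches +k before -k], by Gaussian
    elimination on the equations h(x) = (h(x-1) + h(x+1)) / 2
    with boundary values h(-k) = 0 and h(+k) = 1."""
    states = list(range(-k + 1, k))        # interior states
    n = len(states)
    idx = {x: i for i, x in enumerate(states)}
    # Build the linear system A h = b.
    A = [[Fraction(0)] * n for _ in range(n)]
    b = [Fraction(0)] * n
    for x in states:
        i = idx[x]
        A[i][i] = Fraction(2)
        for nb in (x - 1, x + 1):
            if nb == k:
                b[i] += 1            # boundary h(+k) = 1
            elif nb == -k:
                pass                 # boundary h(-k) = 0
            else:
                A[i][idx[nb]] -= 1
    # Exact forward elimination (diagonally dominant, no pivoting needed).
    for i in range(n):
        piv = A[i][i]
        for j in range(i + 1, n):
            f = A[j][i] / piv
            for c in range(i, n):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    # Back substitution.
    h = [Fraction(0)] * n
    for i in reversed(range(n)):
        s = b[i] - sum(A[i][c] * h[c] for c in range(i + 1, n))
        h[i] = s / A[i][i]
    return h[idx[start]]

assert hitting_probability(5) == Fraction(1, 2)
```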

3 Overview of method

Now that we have seen the key technical ingredients of our approach, let us see how to combine these tools into an automated program analysis. We will take an imperative program specifying a stochastic process and a seed expression, and we will automatically synthesize a martingale and an assertion that holds at termination. We proceed in three stages: extracting a polynomial representing the stochastic process in the program, applying Doob’s decomposition to the polynomial representation, and applying optional stopping. We perform symbolic manipulations to simplify the martingale and final fact.


We consider programs consisting of an initialization block followed by a single loop, where the initialization block and the loop body contain sequences of deterministic assignments (of the form x := e), and the loop body additionally begins with a sequence of probabilistic samplings (of the form z ~~ d). (We focus on programs for which our method achieves full automation. For instance, we exclude conditional statements because it is difficult to fully automate the resulting simplifications. We note, however, that there are standard transformations for eliminating conditionals; one such transformation is if-conversion, a well-known compiler optimization [1].)

Note that we separate sample variables, which are the targets of random samplings, from process variables, which are the targets of deterministic assignments. This distinction will be important for our simplifications: we know the moments and correlations of sample variables, while we have less information for process variables. We require that programs assign to sample variables before assigning to process variables in each loop iteration; this restriction is essentially without loss of generality.

We take the primitive distributions to be a set of standard distributions over the integers, or over finite tuples of integers to model joint distributions. For instance, we often consider the distribution Bern(1/2, {-1, 1}) that returns $-1$ and $1$ with equal probability. We assume that all primitive distributions have bounded support; in particular, all their moments and correlations are finite. We will also assume that distributions do not depend on the program state.

The set of expressions is mostly standard, with a few notational conveniences for defining stochastic processes.

History variables are indexed by a positive integer $k$ and are used inside loops. The variable x[-k] refers to the value of x assigned $k$ iterations before the current iteration. If the loop has run for fewer than $k$ iterations, the value of x[-k] is specified by the initialization step of our programs.

Extracting the polynomial.

For programs in our fragment, each variable assigned in the loop determines a stochastic process: is the most recent value, is the previous value, etc. In the first stage of our analysis, we extract polynomial representations of each stochastic process from the input program.

We focus on the variables that are mutated in the loop body—each of these variables determines a stochastic process. To keep the notation light, we will explain our process for first-order stochastic processes: we only use history variables from the past iteration. We will also suppose that there is just one process variable and one sample variable, and only samples from the current iteration are used.

Since our expression language only has addition and multiplication as operators, we can represent the program variable x as a polynomial $P$ in the other program variables:

$x = P(x[-1], z). \quad (1)$


Next, we pass to a symbolic representation in terms of (mathematical) random variables. To the variable x, we associate the random variable $X_i$ modeling the corresponding stochastic process, and likewise $Z_i$ for the sample variable z. By convention, $X_0$ corresponds to the initialization step, and $X_i$ for $i \geq 1$ corresponds to the stochastic process during the loop. In other words, the loop assignment x := P(x[-1], z) desugars to x[i] := P(x[i-1], z[i]) in a language with arrays instead of history variables. Thus, the program variable x corresponds to the random variable $X_i$.

Then, Equation 1 and the initial conditions specified by the initialization command give an inductive definition for the stochastic process:

$X_{i+1} = P(X_i, Z_{i+1}), \quad (2)$

with $X_0$ given by the initialization block.


Applying Doob’s decomposition.

The second stage of our analysis performs Doob's decomposition on the symbolic representation of the process. We know that the seed expression must be a polynomial $Q$ over the program variables, so we can form the associated stochastic process by replacing program variables by their associated random variables:

$S_i = Q(X_i). \quad (3)$

(Recall that the initial conditions, which define $S_0$, are specified by the initialization portion of the program.)

Then, Doob's decomposition produces the martingale:

$Y_i = S_i - \sum_{j=1}^{i} \mathbb{E}[S_j - S_{j-1} \mid \mathcal{F}_{j-1}].$

To simplify the conditional expectations, we unfold $S_j$ via Equation 3 and unroll the processes by one step with Equation 2.




Figure 1: Selection of simplification rules

Now, we apply our simplification rules; we present a selection in Figure 1. The rules are divided into three groups (from top): linearity of expectation, conditional independence, and distribution information. The first two groups reflect basic facts about expectations and conditional expectations. The last group encodes the moments and correlations of the primitive distributions. We can pre-compute these quantities for each primitive distribution and store the results in a table.

By the form of Equation 3, the simplification removes all conditional expectations and we can give an explicit definition for the martingale:

$Y_i = R(X_i, X_{i-1}, \ldots, i), \quad (4)$

where $R$ is a polynomial.
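To illustrate the flavor of these rules (our sketch, not the paper's implementation), consider simplifying $\mathbb{E}[(x + z)^2 \mid \mathcal{F}_i]$ where $x$ is $\mathcal{F}_i$-measurable and $z$ is an independent sample from a primitive two-point distribution: linearity reduces the expectation to $x^2 + 2x \cdot \mathbb{E}[z] + \mathbb{E}[z^2]$, and the moment table supplies $\mathbb{E}[z]$ and $\mathbb{E}[z^2]$. The check below compares the simplified form against direct enumeration:

```python
from fractions import Fraction

# Two-point distribution Bern(p, {1, -1}): value -> probability.
p = Fraction(1, 3)
dist = {1: p, -1: 1 - p}

# Pre-computed moment table for the primitive distribution.
m1 = sum(prob * v for v, prob in dist.items())        # E[z]
m2 = sum(prob * v ** 2 for v, prob in dist.items())   # E[z^2]

def simplified(x):
    """E[(x + z)^2 | F_i] after applying linearity of expectation
    and the moment table: x is F_i-measurable, z is independent."""
    return x ** 2 + 2 * x * m1 + m2

def direct(x):
    """Direct enumeration of E[(x + z)^2] over the distribution."""
    return sum(prob * (x + v) ** 2 for v, prob in dist.items())

assert all(simplified(x) == direct(x) for x in range(-5, 6))
```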

Applying optional stopping.

With the martingale in hand, the last stage of our analysis applies the optional stopping theorem. To meet the technical conditions of the theorem, we need two properties of the loop:

  • The expected number of iterations must be finite.

  • The martingale must have bounded increments.

These side conditions are non-probabilistic assertions that can already be handled using existing techniques. For instance, the first condition follows from the existence of a bounded variant [19]: an integer expression $e$ such that

  • $0 \le e \le h$;

  • $e = 0$ implies the guard is false; and

  • the probability that $e$ decreases is strictly bigger than $\epsilon$

throughout the loop, for $h$ and $\epsilon$ positive constants. In general, however, finding a bounded variant may be difficult; proving finite expected stopping time is an open area of research which we do not address here.

The second condition is also easy to check. For one possible approach, one can replace probabilistic sampling by non-deterministic choice over the support of the distribution, and verify that the seed expression is bounded using standard techniques [13, 28, 14]. This suffices to show that the martingale has bounded increments. To see why, suppose that the seed expression is always bounded by a constant $C$, so that $|S_{i+1} - S_i| \le 2C$. By Doob's decomposition, we have

$|Y_{i+1} - Y_i| = \big|(S_{i+1} - S_i) - \mathbb{E}[S_{i+1} - S_i \mid \mathcal{F}_i]\big| \le 4C,$

so the martingale has bounded increments.

Thus, we can apply the optional stopping theorem to Equation 4 to conclude: $\mathbb{E}[Y_T] = \mathbb{E}[Y_0].$

Unlike the simplification step after applying Doob's decomposition, we may not be able to eliminate all expected values. For instance, there may be expected values of the process at times before $T$. However, if we have additional invariants about the loop, we can often simplify the fact with basic symbolic transformations.


Figure 2: Tool pipeline

We have implemented our process in a Python prototype using the sympy library for handling polynomial and summation manipulations [20]. Figure 2 shows the entire pipeline. There are three parts of the input: the program describing a stochastic process, the seed expression, and hint facts. The output is a probabilistic formula that holds at termination.

The most challenging part of our analysis is the last stage: applying the OST. First, we need to meet the side conditions of the optional stopping theorem: finite expected iteration count and bounded increments. Our prototype does not verify these side conditions automatically, since available termination tools are either not fully automatic [17] or can only synthesize linear ranking supermartingales [6, 9], which are insufficient for the majority of our case studies. (Although most of the ranking supermartingales needed in our case studies are non-linear, the bounded variants are always linear.) Furthermore, the final fact typically cannot be simplified without some basic information about the program state at loop termination. We include this information as hints. Hints are first-order formulae over the program variables and the loop counter (represented by a special variable), and are used as auxiliary facts during the final simplification step. Hints can be verified using standard program verification tools since they are non-probabilistic. In our examples, we manually translate the hints and the program into the input language of EasyCrypt [4], and perform the verifications there. Automating the translation poses no technical difficulty and is left for future work.

We note that the performance of the tool is perfectly reasonable for the examples considered in the next section. For instance, it handles the “ABRACADABRA” example in less than 2 seconds on a modern laptop.

4 Examples

Now, we demonstrate our approach on several classic examples of stochastic processes. In each case, we describe the code, the seed expression, and any hints needed for our tool to automatically derive the final simplified fact.

Geometric distribution.

As a first application, we consider a program that generates a draw from the geometric distribution by running a sequence of coin flips.

x[0] := 0;
while (z != 0) do
    z ~~ Bern(p, {1, 0});
    x := x[-1] + z;

Here, Bern(p, {1, 0}) is the distribution that returns $1$ with probability $p$, and $0$ otherwise. The program simply keeps drawing until we sample $0$, storing the number of times we sample $1$ in x.

We wish to apply our technique to the seed expression x. First, we can extract the polynomial equation: $X_{i+1} = X_i + Z_{i+1}.$
Applying Doob's decomposition, our tool constructs and reduces the martingale: $Y_i = X_i - p \cdot i.$
To apply optional stopping, we first need to check that the stopping time is integrable. This follows by taking $z$ as a bounded variant—it remains in $\{0, 1\}$ and decreases to $0$ with probability $1 - p$ at each iteration. Also, the martingale must have bounded increments: $|Y_{i+1} - Y_i|$ should be bounded by a constant. But this is clear since we can use a loop invariant to show that $0 \le z \le 1$, and the increment is $|Y_{i+1} - Y_i| = |Z_{i+1} - p| \le 1.$

So, optional stopping shows that $\mathbb{E}[X_T - p \cdot T] = \mathbb{E}[Y_0] = 0.$
With the hint $x = i - 1$—which holds at termination, since every sample but the last is $1$—our tool replaces $X_T$ by $T - 1$ and automatically derives the expected running time: $\mathbb{E}[T] = \frac{1}{1 - p}.$
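As a quick sanity check (ours, and it assumes the reading above: Bern(p, {1, 0}) returns 1 with probability p and the loop exits on the first 0, so the loop runs exactly t iterations with probability p^(t-1) (1-p)), we can approximate E[T] by truncating the series with exact rational arithmetic:

```python
from fractions import Fraction

def expected_iterations(p, cutoff=200):
    """E[T] truncated at `cutoff` iterations: the loop runs exactly
    t iterations when the first t-1 samples are 1 (prob p each) and
    the t-th sample is 0 (prob 1 - p)."""
    return sum(Fraction(t) * p ** (t - 1) * (1 - p)
               for t in range(1, cutoff + 1))

p = Fraction(1, 2)
approx = expected_iterations(p)
exact = 1 / (1 - p)                      # the derived closed form
# The truncated tail is astronomically small, so the two agree closely.
assert abs(approx - exact) < Fraction(1, 10 ** 20)
```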

Gambler’s ruin.

Our second example is the classic Gambler's ruin process. The program models a game where a player starts with $a$ dollars and keeps tossing a fair coin. The player wins one dollar for each head and loses one dollar for each tail. The game ends either when the player runs out of money, or reaches the target of $b$ dollars. We can encode this process as follows:

x[0] := a;
while (0 < x < b) do
    z ~~ Bern(1/2, {-1, 1});
    x := x + z;

We will synthesize two different martingales from this program, which will yield complementary information once we apply optional stopping. For our first martingale, we use x as the seed expression. Our tool synthesizes the martingale $Y_i = X_i.$

So in fact, $x$ is already a martingale.

To apply optional stopping, we first note that $x$ is a bounded variant: it remains in $[0, b]$ and decreases with probability $1/2$ at each iteration. Since the seed expression is bounded, the martingale has bounded increments. Thus, optional stopping yields $\mathbb{E}[X_T] = \mathbb{E}[X_0] = a.$
If we give the hint

$x = 0 \lor x = b \quad (5)$

at termination, our prototype automatically derives

$b \cdot \Pr[X_T = b] = a,$

so the probability of exiting at $b$ is exactly $a/b$.

Now, let us take a look at a different martingale generated by the seed expression $x^2$. Our prototype synthesizes the following martingale: $Y_i = X_i^2 - i.$

Again, we can apply optional stopping: $x$ is a bounded variant, and the seed expression remains bounded in $[0, b^2]$. So, we get $\mathbb{E}[X_T^2 - T] = \mathbb{E}[X_0^2] = a^2.$

By using the same hint (Equation 5), our prototype automatically derives $\mathbb{E}[T] = \mathbb{E}[X_T^2] - a^2 = b^2 \cdot \Pr[X_T = b] - a^2.$

Since we already know that $\Pr[X_T = b] = a/b$ from the first martingale, this implies that the expected running time of the Gambler's ruin process is $\mathbb{E}[T] = ab - a^2 = a(b - a).$
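Both conclusions are classical and can be cross-checked without martingales. The sketch below (our illustration, independent of the paper's tool) solves the loop's recurrence $E(x) = 1 + \tfrac{1}{2}(E(x-1) + E(x+1))$ with boundary conditions $E(0) = E(b) = 0$ exactly, confirming that the expected running time from $a$ is $a(b - a)$:

```python
from fractions import Fraction

def expected_time(a, b):
    """Expected iterations of the Gambler's ruin loop started at a,
    by exact Gaussian elimination on the tridiagonal system
    2 E(x) - E(x-1) - E(x+1) = 2, with E(0) = E(b) = 0."""
    n = b - 1                             # interior states 1 .. b-1
    A = [[Fraction(0)] * n for _ in range(n)]
    rhs = [Fraction(2)] * n
    for i in range(n):
        A[i][i] = Fraction(2)
        if i > 0:
            A[i][i - 1] = Fraction(-1)
        if i < n - 1:
            A[i][i + 1] = Fraction(-1)
    # Forward elimination.
    for i in range(n - 1):
        f = A[i + 1][i] / A[i][i]
        for c in range(i, n):
            A[i + 1][c] -= f * A[i][c]
        rhs[i + 1] -= f * rhs[i]
    # Back substitution.
    E = [Fraction(0)] * n
    for i in reversed(range(n)):
        s = rhs[i]
        if i < n - 1:
            s -= A[i][i + 1] * E[i + 1]
        E[i] = s / A[i][i]
    return E[a - 1]

assert expected_time(3, 10) == 3 * (10 - 3)
```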

Gambler’s ruin with momentum.

Our techniques extend naturally to stochastic processes that depend on variables beyond the previous iteration. To demonstrate, we consider a variant of the Gambler's ruin process with momentum: besides the coin flip, the gambler also gains profit equal to the difference between the previous two dollar amounts. Concretely, we consider the following process:

x[0] := a;
x[1] := a;
while (0 < x < b) do
  z ~~ Bern(p, {-1, 1});
  x := x[-1] + (x[-1] - x[-2]) + z;

Note that we must now provide the initial conditions for two steps, since the process is second-order recurrent. From the seed expression, our tool synthesizes a martingale that involves both the current and the previous value of the process.

Exactly as in the Gambler's ruin process, we can verify the side conditions and apply optional stopping, yielding

Unfolding and simplifying, our tool derives the fact

We are not aware of existing techniques that can prove this kind of fact—reasoning about the expected value of a variable in the iteration just prior to termination.


ABRACADABRA.

Our final example is a classic application of martingale reasoning. In this process, a monkey randomly selects a character at each time step, stopping when it has typed a special string, say "ABRACADABRA". We model this process as follows:

match_0[0] := 1;
match_1[0] := 0;

Here, UnifMatches is a distribution over tuples that represents a uniform draw from the letters, where the $k$th entry is $1$ if the draw extends the match to the first $k$ letters of the target word and $0$ if not. The variables match_k record whether the most recent $k$ letters match the first $k$ letters of the target word; match_0 is always $1$, since we always match zero letters.

Now, we will apply Doob's decomposition. Letting $K$ be the number of possible letters and taking the seed expression

$\sum_{k} K^k \cdot \mathrm{match}_k,$

our tool synthesizes the following martingale:

$Y_i = \sum_{k} K^k \cdot M_{k,i} - i,$

where $M_{k,i}$ is the stochastic process corresponding to match_k. The dependence on $K$ comes from the expectations of projections of UnifMatches, which are each $1/K$—the probability of a uniformly random letter matching any fixed letter.

To apply the optional stopping theorem, note that the seed expression is bounded in $[0, \sum_k K^k]$, and the distance to a full match serves as a bounded variant: take the highest index $k$ such that match_k $= 1$; there is probability $1/K$ that we extend the match to $k + 1$, decreasing the variant. So, we have

$\mathbb{E}\Big[\sum_{k} K^k \cdot M_{k,T}\Big] - \mathbb{E}[T] = 1.$
Our tool simplifies this fact using hints about the values of the match_k variables at termination. For the target string "ABRACADABRA", we use the hints

match_1 = match_4 = match_11 = 1, and match_k = 0 (for all other $k \ge 2$).

For example, if match_11 is set then the full string matches "ABRACADABRA", so the most recently seen four characters are "ABRA". This matches the first four letters of "ABRACADABRA", so match_4 is also set. The hints can be proved from a standard loop invariant.

Our tool derives the expected running time:

$\mathbb{E}[T] = K^{11} + K^4 + K.$
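This closed form is an instance of the classical pattern-overlap result: the expected waiting time is the sum of $K^{\ell}$ over every length $\ell$ at which the pattern's prefix equals its suffix (for "ABRACADABRA": lengths 1, 4, and 11). As an independent sanity check (ours, not part of the paper's tool), the sketch below solves the pattern-matching Markov chain exactly for a short pattern and compares it against the formula:

```python
from fractions import Fraction

def border_formula(pattern, alphabet):
    """Classical closed form: sum of K^l over lengths l where the
    prefix of length l equals the suffix of length l."""
    K = len(alphabet)
    return sum(Fraction(K) ** l for l in range(1, len(pattern) + 1)
               if pattern[:l] == pattern[-l:])

def markov_expected_time(pattern, alphabet):
    """Exact expected waiting time, by solving the chain whose
    states are lengths of the currently matched prefix."""
    m, K = len(pattern), len(alphabet)

    def next_state(state, ch):
        # Longest prefix of `pattern` that suffixes the matched text + ch.
        s = pattern[:state] + ch
        for l in range(min(len(s), m), -1, -1):
            if s[len(s) - l:] == pattern[:l]:
                return l
        return 0

    # E[state] = 1 + (1/K) * sum over letters of E[next state];
    # solve for states 0..m-1 (E[m] = 0) by exact elimination.
    n = m
    A = [[Fraction(0)] * n for _ in range(n)]
    b = [Fraction(1)] * n
    for st in range(n):
        A[st][st] += 1
        for ch in alphabet:
            t = next_state(st, ch)
            if t < m:
                A[st][t] -= Fraction(1, K)
    for i in range(n):
        piv = next(j for j in range(i, n) if A[j][i] != 0)
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for c in range(i, n):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    E = [Fraction(0)] * n
    for i in reversed(range(n)):
        acc = b[i] - sum(A[i][c] * E[c] for c in range(i + 1, n))
        E[i] = acc / A[i][i]
    return E[0]

# "ABA" over a 3-letter alphabet: borders "A" and "ABA" give 3 + 27 = 30.
assert markov_expected_time("ABA", "ABC") == border_formula("ABA", "ABC") == 30
```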


To give an idea of the efficiency of our tool, we present some benchmarks for our examples in Table 1. We measured timing on a recent laptop with a 2.4 GHz Intel Core processor and 8 GB of RAM. We did not optimize for performance; we expect that the running time could be greatly improved with some tuning.

Example Running time (s)
geom 0.14
gamble 0.11
gamble2 0.17
miniabra 0.87
fullabra 3.58
Table 1: Preliminary benchmarks.

The example miniabra is a smaller version of the ABRACADABRA example, with a smaller alphabet and a shorter target sequence; fullabra is the full ABRACADABRA example.

While there is a growing body of work related to martingale techniques for program analysis (see the following section), it is not obvious how to compare benchmarks. Existing work focuses on searching for martingale expressions within some search space; this is a rather different challenge than synthesizing a single martingale from a programmer-provided seed expression. In particular, if the seed expression happens to already be a martingale by some lucky guess, our tool will simply return the seed expression after checking that it is indeed a martingale.

5 Related work


Martingale theory is a central tool in fields like statistics, applied mathematics, control theory, and finance. When analyzing randomized algorithms, martingales can show tight bounds on tail events [30]. In the verification community, martingales are used as invariant arguments, and as variant arguments to prove almost-sure termination [5, 6, 18, 9]. Recently, martingale approaches have been extended to prove more complex properties. Chakarov et al. [8] propose proof rules for proving persistence and recurrence properties. Dimitrova et al. [16] develop a deductive proof system for PCTL, with proof rules based on martingales and supermartingales.

Probabilistic Hoare logic.

McIver and Morgan [27] propose a Hoare-like logic that is quite similar to our approach of using martingales and the OST. Their approach is based on weakest pre-expectations, which are an extension of Dijkstra's weakest preconditions [15] based on "backward" conditional expectations. Their probabilistic invariants are similar to submartingales, as the expected value of the invariant at the beginning of the execution lower-bounds the expected value of the invariant at termination. Their proof rule also requires an additional constraint to ensure soundness, but it requires a limiting argument that is more difficult to automate compared to our bounded-increment condition. We could relax our condition using a weaker version of the OST that generalizes their condition [33]. Another substantial difference with our approach is that their logic supports non-deterministic choices—ours does not. It is not obvious how to extend our synthesis approach to the non-deterministic case, as we heavily rely on the concept of filtration, which is not applicable in the presence of non-determinism.

Probabilistic model checking.

In the last twenty years, model checking technology for probabilistic models has made impressive strides [26, 11, 23] (Baier and Katoen [3] provide an overview). The main advantage of model checking is that it requires nothing from the user; our technique requires a seed expression. However, model checking techniques suffer from the state explosion problem—the time and memory consumption of the model checking algorithm depends on the number of reachable states of the program. Our approach can be used to verify infinite-state and parametric programs without any performance penalty, as we work purely symbolically. For example, a probabilistic model checker can find the expected running time of the gambler's ruin process for concrete values of $a$ and $b$, but it cannot deduce the solution for the general case, unlike our technique.

Invariant synthesis.

There are several approaches for synthesizing probabilistic invariants. Katoen et al. [22] propose the first complete method for the synthesis of McIver and Morgan's probabilistic linear invariants. It is an extension of the constraint-solving approach by Colón et al. [12] for the synthesis of (non-probabilistic) linear invariants. Chakarov and Sankaranarayanan [6] later extended this work to martingales and ranking supermartingales. Chakarov and Sankaranarayanan [7] propose a new notion of probabilistic invariants that generalizes the notion of supermartingales. They give a synthesis approach based on abstract interpretation, but it is not clear how their techniques can prove properties at termination time. Chen et al. [10] propose a synthesis tool for verifying Hoare triples in the McIver and Morgan logic, using a combination of Lagrange interpolation, sampling, and counterexample-guided search. One of the novelties is that they can synthesize non-linear invariants. The main disadvantages are that one must manually check the soundness condition, and one must provide a pre-expectation. For instance, we can apply the method of Chen et al. [10] to the gambler's ruin process only if we already know that the expected running time is $a(b - a)$. In contrast, we can deduce this value knowing only that the expected running time is finite.

Expected running time.

As the termination time of a probabilistic program is a random quantity, it is natural to measure its performance using the average running time. Rough bounds can be obtained from martingale-based termination proofs [18]. Recently, Chatterjee et al. [9] showed that arbitrarily precise approximations can be obtained from such proofs when the program is linear. They use Azuma's inequality to obtain a tail bound on the running time, and then model check a finite unrolling of the loop. Monniaux [29] proposes a similar approach that uses abstract interpretation to obtain the tail distribution of the running time. Kaminski et al. [21] extend Nielson's proof system [31] to bound the expected running time of probabilistic programs.

Recurrence analysis.

Our synthesis approach is similar to the use of recurrence relations for the synthesis of non-probabilistic invariants [2, 32, 24]. The main idea is to find syntactic or semantic recurrence relations, and then simplify them using known closed forms to obtain loop invariants. In essence, we apply algebraic identities to simplify the complex martingales produced by Doob's decomposition. One difference is that our simplifications are more involved, as we cannot always obtain a closed form, only a simpler summation. However, we obtain the same closed form when we apply Doob's decomposition to inductive variables. Another difference is that we rely on syntactic criteria to identify which values are predictable and which values are random.

6 Conclusion

We proposed a novel method for automatically synthesizing martingale expressions for stochastic processes. The basic idea is to transform any initial expression supplied by the user into a martingale using Doob's decomposition theorem. Our method complements state-of-the-art synthesis approaches based on constraint solving. On one hand, we always output a martingale expression, we are able to synthesize non-inductive martingales, and since we do not rely on quantifier elimination, we can synthesize polynomial expressions of very high degree. On the other hand, we do not provide any completeness result, and the shape of the martingale is difficult to predict.

We considered several classical case studies from the literature, combining our synthesis method with the optional stopping theorem and non-probabilistic invariants to infer properties at termination time in a fully automatic fashion.

Future work includes extending our approach to programs with arrays and improving the tool with automated procedures for checking side conditions. It would also be interesting to consider richer programs, say with distributions whose parameters depend on the program state. Another possible direction is improving the simplification procedures so that the tool produces simpler facts. Experimenting with more advanced computer-algebra systems and designing simplification heuristics specialized to the conditional expectations synthesized by Doob's decomposition are both promising directions. It would also be interesting to integrate our method as a specialized tool in systems for interactive reasoning about probabilistic computations.


We thank the anonymous reviewers for their helpful comments. This work was partially supported by NSF grants TWC-1513694 and CNS-1065060, and by a grant from the Simons Foundation (to Justin Hsu).