1 Introduction
The important roles of randomization in algorithms and software systems are nowadays well-recognized. In algorithms, randomization can bring remarkable speed gains at the expense of small probabilities of imprecision. In cryptography, many encryption algorithms are randomized in order to conceal the identity of plaintexts. In software systems, randomization is widely utilized for the purpose of fairness, security and privacy.
Embracing randomization in programming languages has therefore been an active research topic for a long time. Doing so not only offers a solid infrastructure that programmers and system designers can rely on, but also opens up the possibility of language-based, static analysis of properties of randomized algorithms and systems.
The current paper’s goal is to analyze imperative programs with randomization constructs—the latter come in two forms, namely probabilistic branching and assignment from a designated, possibly continuous, distribution. We shall refer to such programs as randomized programs.¹

¹ With the rise of statistical machine learning, probabilistic programs attract a lot of attention. Randomized programs can be thought of as a fragment of probabilistic programs without conditioning (or observation) constructs; in other words, the Bayesian aspect of probabilistic programs is absent in randomized programs.

Runtime and Termination Analysis of Randomized Programs
The runtime of a randomized program is often a problem of our interest; so is almost-sure termination, that is, whether the program terminates with probability 1. In the programming language community, these problems have been taken up by many researchers as a challenge of both practical importance and theoretical interest.
Most of the existing works on runtime and termination analysis follow either of the following two approaches.

Martingale-based methods, initiated with the notion of ranking supermartingale in [5] and extended in [1, 7, 12, 8, 15], have their origin in the theory of stochastic processes. They can also be seen as a probabilistic extension of ranking functions, a standard proof method for termination of (non-randomized) programs. Martingale-based methods have seen remarkable success in automated synthesis using templates and constraint solving (like LP or SDP).

Predicate-transformer methods [20, 3, 19] give a more deductive, syntax-oriented account: runtime and termination are witnessed by quantitative invariants in a weakest-precondition-style calculus (see §7).

The essential difference between the two approaches is not big: an invariant notion in the latter is easily seen to be an adaptation of a suitable notion of supermartingale. The work [33] presents a comprehensive account of the order-theoretic foundation behind these techniques.
These existing works are mostly focused on the following problems: deciding almost-sure termination, computing termination probabilities, and computing expected runtimes. (Here “computing” includes giving upper/lower bounds.) See [33] for a comparison of some of the existing martingale-based methods.
Our Problem: Tail Probabilities for Runtimes
In this paper we focus on the problem of tail probabilities, which has not been studied much so far.² We present a method for overapproximating tail probabilities; the problem we solve is the following.

Input: a randomized program, and a deadline.
Output: an upper bound of the tail probability that the runtime of the program exceeds the deadline.

² An exception is [6]; see §7 for a comparison with the current work.
Our target language is an imperative language that features randomization (probabilistic branching and random assignment). We also allow nondeterminism; this makes the program’s runtime depend on the choice of a scheduler (i.e. how nondeterminism is resolved). In this paper we study the longest, worst-case runtime (therefore our scheduler is demonic). In the technical sections, we present these programs as probabilistic control flow graphs (pCFGs), as is usual in the literature; see e.g. [1, 33].
An example of our target programs is in Fig. 1. It is an imperative program with randomization: in line 3, the value of a variable is sampled from the uniform distribution over an interval. The symbol in line 4 stands for a nondeterministic Boolean value; in our analysis, it is resolved so that the runtime becomes the longest. Given the program in Fig. 1 and a choice of a deadline, we can ask the question “what is the probability for the runtime of the program to exceed the deadline?” As we show in §6, our method gives a guaranteed upper bound on this probability; it implies that, within the given time budget, the program terminates with probability at least 93%.
Our Method: Concentration Inequalities, Higher Moments, and Vector-Valued Supermartingales
Towards the goal of computing tail probabilities, our approach is to use concentration inequalities, a technique from probability theory that is commonly used for overapproximating various tail probabilities. There are various concentration inequalities in the literature, each applicable in a different setting, such as a nonnegative random variable (Markov’s inequality), known mean and variance (Chebyshev’s inequality), a difference-bounded martingale (Azuma’s inequality), and so on. Some of them have been used for analyzing randomized programs [6] (see §7 for a comparison).

In this paper, we use a specific concentration inequality that uses higher moments of runtimes, up to a chosen maximum degree. The concentration inequality is taken from [4]; it generalizes Markov’s and Chebyshev’s inequalities. We observe that a higher moment yields a tighter bound on the tail probability as the deadline grows bigger. Therefore it makes sense to strive for computing higher moments.
For computing higher moments of runtimes, we systematically extend the existing theory of ranking supermartingales from the expected runtime (i.e. the first moment) to higher moments. The theory features vector-valued supermartingales, which not only generalize easily to arbitrary degrees, but also allow automated synthesis much like usual supermartingales.
We also claim that the soundness of these vector-valued supermartingales is proved in a mathematically clean manner. Following our previous work [33], our arguments are based on the order-theoretic foundation of fixed points (namely the Knaster–Tarski, Cousot–Cousot and Kleene theorems), and we give upper bounds of higher moments by suitable least fixed points.
Overall, our workflow is as shown in Fig. 2. We note that Step 2 in Fig. 2 is computationally much cheaper than Step 1: in fact, Step 2 yields a symbolic expression for an upper bound in which the deadline is a free variable. This makes it possible to draw graphs like the ones in Fig. 3. It is also easy to find a deadline for which the bound is below a given threshold.
We implemented a prototype that synthesizes vector-valued supermartingales using linear and polynomial templates. The resulting constraints are solved by LP and SDP solvers, respectively. Experiments show that our method can produce nontrivial upper bounds in reasonable computation time. We also experimentally confirm that higher moments are useful in producing tighter bounds.
Our Contributions
Summarizing, the contribution of this paper is as follows.

We extend the existing theory of ranking supermartingales from expected runtimes (i.e. the first moment) to higher moments. The extension has a solid foundation of order-theoretic fixed points. Moreover, its clean presentation by vector-valued supermartingales makes automated synthesis as easy as before. Our target randomized programs are rich, embracing nondeterminism and continuous distributions.

We study how these vector-valued supermartingales (and the resulting upper bounds of higher moments) can be used to yield upper bounds of tail probabilities of runtimes. We identify a concentration lemma that suits this purpose. We show that higher moments indeed yield tighter bounds.

Overall, we present a comprehensive language-based framework for overapproximating tail probabilities of runtimes of randomized programs (Fig. 2). It has been implemented, and our experiments suggest its practical use.
Organization
We give preliminaries in §2. In §3, we review the order-theoretic characterization of ordinary ranking supermartingales and present an extension to higher moments of runtimes. In §4, we discuss how to obtain an upper bound of the tail probability of runtimes. In §5, we explain an automated synthesis algorithm for our ranking supermartingales. In §6, we give experimental results. In §7, we discuss related work. We conclude and give future work in §8. Some proofs and details are deferred to the appendices.
2 Preliminaries
We present some preliminary materials, including the definition of pCFGs (we use them as a model of randomized programs) and the definition of runtime.
Given topological spaces and , let be the set of Borel sets on and be the set of Borel measurable functions . We assume that the set of reals, a finite set and the set are equipped with the usual topology, the discrete topology, and the order topology, respectively. We use the induced Borel structures for these spaces. Given a measurable space , let be the set of probability measures on . For any , let be the support of . We write for the expectation of a random variable .
Our use of pCFGs follows recent works including [1].
Definition 2.1 (pCFG)
A probabilistic control flow graph (pCFG) is a tuple that consists of the following.

A finite set of locations . It is the union of four mutually disjoint sets of deterministic, probabilistic, nondeterministic and assignment locations, respectively.

A finite set of program variables.

An initial location . An initial valuation

A transition relation which is total (i.e. ).

An update function for assignment.

A family of probability distributions for probabilistic locations. We require that implies .
A guard function such that for each and , there exists a unique location satisfying and .
The update function can be decomposed into three functions , and , under a suitable decomposition of assignment locations. The elements of , and represent deterministic, probabilistic and nondeterministic assignments, respectively.
An example of a pCFG is shown on the right. It models the program in Fig. 1. The node is a nondeterministic location. is the uniform distribution on the interval .
A configuration of a pCFG is a pair of a location and a valuation. We regard the set of configurations as a topological space by assuming that is equipped with the discrete topology and is equipped with the product topology. We say a configuration is a successor of , if and the following hold.

If , then and .

If , then .

If , then , where denotes the vector obtained by replacing the component of by . Here is such that , and is chosen as follows: 1) if ; 2) if ; and 3) if .
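The successor relation above can be sketched in code. The following is a minimal, hypothetical encoding (ours, not the paper's implementation) of a two-location pCFG for a program like "while x > 0: x := x − Uniform(0, 2)", with a deterministic guard location and an assignment location that samples from a continuous distribution:

```python
import random

# Hypothetical two-location pCFG for "while x > 0: x := x - Uniform(0, 2)".
# "l0" is a deterministic location whose guard inspects x; "l1" is an
# assignment location sampling from a continuous distribution.
def successor(config, rng):
    loc, x = config
    if loc == "l0":                        # deterministic: follow the guard
        return ("l1", x) if x > 0 else ("term", x)
    if loc == "l1":                        # assignment: sample and update x
        return ("l0", x - rng.uniform(0.0, 2.0))
    return ("term", x)                     # terminating configurations loop

def run_length(x0, rng):
    """Number of transitions until a terminating configuration is reached."""
    config, steps = ("l0", x0), 0
    while config[0] != "term":
        config, steps = successor(config, rng), steps + 1
    return steps
```

A run alternates between the guard and the assignment location; its length is the runtime, i.e. the reaching time to the set of terminating configurations.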
An invariant of a pCFG is a measurable set such that and is closed under taking successors (i.e. if and is a successor of then ). Use of invariants is a common technique in automated synthesis of supermartingales [1]: it restricts configuration spaces and thus makes the constraints on supermartingales weaker. A run of is an infinite sequence of configurations such that is the initial configuration and is a successor of for each . Let be the set of runs of .
A scheduler resolves nondeterminism: at a location in , it chooses a distribution of next configurations depending on the history of configurations visited so far. Given a pCFG and a scheduler of , a probability measure on is defined in the usual manner. See Appendix 0.B for details.
Definition 2.2 (reaching time )
Let be a pCFG and be a set of configurations called a destination. The reaching time to is a function defined by . Fixing a scheduler makes a random variable, since determines a probability measure on . It is denoted by .
Runtimes of pCFGs are a special case of reaching times, namely to the set of terminating configurations.
The following higher moments are central to our framework. Recall that we are interested in demonic schedulers, i.e. those which make runtimes longer.
Definition 2.3 ( and )
Assume the setting of Def. 2.2, and let and . We write for the th moment of the reaching time of from to under the scheduler , that is, , where is the pCFG obtained from by changing the initial configuration to . Their supremum over varying schedulers is denoted by .
3 Ranking Supermartingale for Higher Moments
We introduce one of the main contributions of the paper, a notion of ranking supermartingale that overapproximates higher moments. It is motivated by the following observation: martingale-based reasoning about the second moment must be combined with reasoning about the first moment. We conduct a systematic theoretical extension that features an order-theoretic foundation and vector-valued supermartingales. The theory accommodates nondeterminism and continuous distributions, too. We omit some details and proofs; they are found in Appendix 0.C.
The fully general theory for higher moments will be presented in §3.2; we first present its restriction to second moments in §3.1 for readability.

Before that, we review the existing theory of ranking supermartingales through the lens of order-theoretic fixed points, following [33].
Definition 3.1 (“next-time” operation (pre-expectation))
Given , let be the function defined as follows.

If and , then .

If , then .

If , then .

If , and ,

if , then ;

if , then ;

if , then .

Intuitively, is the expectation of after one transition. Nondeterminism is resolved by the maximal choice.
We define as follows.
The function space carries a complete lattice structure because does; moreover is easily seen to be monotone. It is not hard to see, either, that the expected reaching time to coincides with the least fixed point .
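As a concrete illustration (ours, not from the paper), the least fixed point can be approximated by Kleene iteration from the bottom element. Consider a biased random walk without nondeterminism that moves from state i to i−1 with probability 2/3 and to i+1 with probability 1/3, with state 0 as the target; the truncation at an artificial boundary N (a self-loop on the up-move there) is an assumption of this sketch to keep the state space finite. For the untruncated walk the exact expected reaching time from i is 3i.

```python
# Kleene iteration for the operator mapping f to (1 + expected value of f
# after one step); its least fixed point is the expected reaching time.
# Biased walk: i -> i-1 w.p. 2/3, i -> i+1 w.p. 1/3; state 0 is the target.
N = 60
f = [0.0] * (N + 1)                    # start from the bottom element
for _ in range(5000):
    f = [0.0] + [1 + (2/3) * f[i-1] + (1/3) * f[min(i+1, N)]
                 for i in range(1, N + 1)]
# The iterates converge from below to the least fixed point; far from the
# boundary this matches the exact value 3*i of the untruncated walk.
```

The iterates are monotonically increasing, as guaranteed for a monotone operator applied from the bottom element.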
The following theorem is fundamental in theoretical computer science.
Theorem 3.2 (Knaster–Tarski, see e.g. [35])
Let be a complete lattice and be a monotone function. The least fixed point is the least prefixed point, i.e. . ∎
The significance of the Knaster–Tarski theorem in verification lies in the induced proof rule: . Instantiating to the expected reaching time , it means that an arbitrary prefixed point of —which coincides with the notion of ranking supermartingale [5]—overapproximates the expected reaching time. This proves the soundness of ranking supermartingales.
3.1 Ranking Supermartingales for the Second Moments
We extend ranking supermartingales to the second moments. It paves the way to a fully general theory (up to the th moments) in §3.2.
The key in the martingale-based reasoning about expected reaching times (i.e. first moments) was that they are characterized as the least fixed point of a function . Here it is crucial that for an arbitrary random variable T, we have E[T+1] = E[T] + 1, and therefore we can calculate E[T+1] from E[T]. However, this is not the case for second moments: as (T+1)^2 = T^2 + 2T + 1, calculating E[(T+1)^2] requires not only E[T^2] but also E[T]. This leads us to define a vector-valued supermartingale.
Definition 3.3 (timeelapse function )
A function is defined by (x_1, x_2) ↦ (x_1 + 1, x_2 + 2x_1 + 1).
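As a sketch (ours), the map suggested by the identity (T+1)^2 = T^2 + 2T + 1 sends a pair of first and second moments of T to those of T + 1:

```python
# Time-elapse for second moments: maps (E[T], E[T^2]) to (E[T+1], E[(T+1)^2]),
# using E[(T+1)^2] = E[T^2] + 2*E[T] + 1.
def time_elapse_2(x1, x2):
    return (x1 + 1, x2 + 2 * x1 + 1)

# Check on a geometric runtime with success probability 1/2:
# E[T] = 2 and E[T^2] = 6, so T + 1 has moments 3 and 11.
m1, m2 = time_elapse_2(2, 6)
```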
Then, an extension of for second moments can be defined as a combination of the timeelapse function and the preexpectation .
Definition 3.4 ()
Let be an invariant and be a Borel set. We define by
Here is applied componentwise: .
We can extend the complete lattice structure of to the function space in a pointwise manner. It is routine to prove that is monotone with respect to this complete lattice structure; hence has the least fixed point. In fact, while was characterized as the least fixed point of , the tuple of first and second moments is not in general the least fixed point of (cf. Example 3.8 and Thm. 3.9). However, the least fixed point of overapproximates the tuple of moments.
Theorem 3.5
For any configuration , . ∎
Let . To prove the above theorem, we inductively prove for each and , and take the supremum. See Appendix 0.C for more details.
Like ranking supermartingales for first moments, ranking supermartingales for second moments are defined as prefixed points of , i.e. functions such that . However, we modify the definition for the sake of implementation.
Definition 3.6 (ranking supermartingale for second moments)
A ranking supermartingale for second moments is a function such that: i) for each ; and ii) for each .
Even though we only have an inequality in Thm. 3.5, we can prove the following desired property of our supermartingale notion.
Theorem 3.7
If is a supermartingale for second moments, then for each . ∎
The following example and theorem show that we cannot replace with in Thm. 3.5 in general, but it is possible in the absence of nondeterminism.
Example 3.8
The figure on the right shows a pCFG such that and all the other locations are in ; the initial location is and is a terminating location. For this pCFG, the left-hand side of the inequality in Thm. 3.5 is . In contrast, if a scheduler takes a transition from to with probability , . Hence the right-hand side is .
Theorem 3.9
If , . ∎
3.2 Ranking Supermartingales for the Higher Moments
We extend the results in §3.1 to moments higher than the second.
Firstly, the time-elapse function is generalized as follows.
Definition 3.10 (timeelapse function )
For and , a function is defined by (x_1, …, x_K) ↦ (y_1, …, y_K), where y_j = Σ_{i=0}^{j} C(j, i) x_i and x_0 = 1. Here C(j, i) is the binomial coefficient.
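Concretely, the binomial theorem gives E[(T+1)^j] = Σ_{i≤j} C(j, i) E[T^i]; a sketch (ours) of the resulting map on moment vectors:

```python
from math import comb

# Map (E[T], ..., E[T^K]) to (E[T+1], ..., E[(T+1)^K]) via the binomial
# theorem, using E[T^0] = 1.
def time_elapse_k(moments):
    ms = [1] + list(moments)                 # ms[i] = E[T^i]
    return tuple(sum(comb(j, i) * ms[i] for i in range(j + 1))
                 for j in range(1, len(ms)))

# Geometric runtime with p = 1/2: E[T] = 2, E[T^2] = 6, E[T^3] = 26.
out = time_elapse_k((2, 6, 26))
```

For K = 2 this specializes to the second-moment map of §3.1.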
Again, a monotone function is defined as a combination of the time-elapse function and the pre-expectation .
Definition 3.11 ()
Let be an invariant and be a Borel set. We define by , where is given by
As in Def. 3.6, we define a supermartingale as a prefixed point of .
Definition 3.12 (ranking supermartingale for th moments)
We define by . A ranking supermartingale for th moments is a function such that for each , i) for each ; and ii) for each .
For higher moments, we can prove an analogous result to Thm. 3.7.
Theorem 3.13
If is a supermartingale for th moments, then for each , . ∎
4 From Moments to Tail Probabilities via Concentration Inequalities
We discuss how to obtain upper bounds of tail probabilities of runtimes from upper bounds of higher moments of runtimes. Combined with the results in §3, this induces a martingale-based method for overapproximating tail probabilities.
We use a concentration inequality. There are many choices of concentration inequalities (see e.g. [4]); we use a variant of Markov’s inequality, and prove that it is not only sound but also complete in a sense.
Formally, our goal is to calculate an upper bound of for a given deadline , under the assumption that we know upper bounds of the moments . In other words, we want to overapproximate , where ranges over the set of probability measures on satisfying .
To answer the above problem, we make use of the following generalized form of Markov’s inequality.
Proposition 4.1 (see e.g. [4, §2.1])
Let be a real-valued random variable and be a nondecreasing and nonnegative function. For any with ,
∎ 
By letting in Prop. 4.1, we obtain the following inequality. It gives an upper bound of the tail probability that is “tight.”
Proposition 4.2
Let be a nonnegative random variable. Assume for each . Then, for any ,
(1) 
Moreover, this upper bound is tight: for any , there exists a probability measure such that (1) holds with equality.
Proof. The proof is deferred to the appendices. ∎
By combining Thm. 3.13 with Prop. 4.2, we obtain the following corollary. We can use it for overapproximating tail probabilities.
Corollary 4.3
Let be a ranking supermartingale for th moments. For each scheduler and a deadline ,
(2) 
Here are defined by and . ∎
For each there exists such that . Hence higher moments become useful in overapproximating tail probabilities as the deadline gets large. Later, in §6, we demonstrate this fact experimentally.
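A small numeric illustration (ours, not from the paper's benchmarks): take a geometric runtime with success probability 1/2, whose first three moments are 2, 6 and 26. The bound of Prop. 4.2 takes the minimum of m_k / d^k over the available degrees k, and the minimizing degree grows with the deadline d:

```python
# Tail bound min_k m[k] / d^k from upper bounds m[k] >= E[T^k], k = 1..K.
def tail_bound(moments, d):
    return min(m / d ** k for k, m in enumerate(moments, start=1))

def best_degree(moments, d):
    return min(range(1, len(moments) + 1),
               key=lambda k: moments[k - 1] / d ** k)

moments = (2.0, 6.0, 26.0)    # E[T], E[T^2], E[T^3] for Geometric(1/2)
b2, b10 = tail_bound(moments, 2), tail_bound(moments, 10)
k2, k10 = best_degree(moments, 2), best_degree(moments, 10)
```

For the small deadline d = 2 the first moment gives the best (here trivial) bound, while for d = 10 the third moment wins.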
5 TemplateBased Synthesis Algorithm
We discuss an automated synthesis algorithm that calculates an upper bound of the th moment of the runtime of a pCFG, using a supermartingale as in Def. 3.6 or Def. 3.12. It takes as input a pCFG , an invariant , a set of configurations and a natural number , and outputs an upper bound of the th moment.
Our algorithm is adapted from existing template-based algorithms for synthesizing ranking supermartingales (for first moments) [5, 8, 7]. It fixes a linear or polynomial template with unknown coefficients for a supermartingale and, using numerical methods like linear programming (LP) or semidefinite programming (SDP), calculates a valuation of the unknown coefficients so that the axioms of ranking supermartingales for th moments are satisfied. We briefly explain the algorithms here; see Appendix 0.D for details.
Linear Template
Our linear template-based algorithm is adapted from [5, 8]. We assume that , and are all “linear,” in the sense that the expressions appearing in are all linear and that and are represented by linear inequalities. To deal with assignments sampling from distributions, we also assume that the expected values of the distributions appearing in are known.
The algorithm first fixes a template for a supermartingale: for each location , it fixes a tuple of linear formulas, whose coefficients are unknown variables called parameters. The algorithm next collects conditions on the parameters under which the tuples constitute a ranking supermartingale for th moments. This results in a conjunction of formulas of the form . Here are linear formulas without parameters and is a linear formula in whose coefficients the parameters appear linearly. By Farkas’ lemma (see e.g. [29, Cor. 7.1h]) we can turn such formulas into linear inequalities over the parameters by adding new variables; the feasibility of the latter is efficiently solvable with an LP solver. We naturally wish to minimize the upper bound of the th moment, i.e. the last component of ; we do so by setting it as the objective function of the LP problem.
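To illustrate what the synthesized object looks like, here is a sketch (ours) that merely checks, pointwise on sample states, the ranking-supermartingale condition "1 + pre-expectation of eta ≤ eta" for a fixed linear candidate. A real synthesizer instead leaves the coefficients as parameters and discharges the universally quantified condition via Farkas' lemma and an LP solver; the program and the candidate below are hypothetical examples.

```python
# Program (hypothetical): while x > 0: x := x - 1 w.p. 2/3, else x := x + 1.
# Candidate linear supermartingale for the first moment: eta(x) = 3x.
def eta(x):
    return 3 * x

# Ranking-supermartingale condition on the invariant x > 0:
#   1 + (2/3)*eta(x-1) + (1/3)*eta(x+1) <= eta(x).
# Here the pre-expectation is 3x - 1, so the condition holds with equality.
ok = all(1 + (2/3) * eta(x - 1) + (1/3) * eta(x + 1) <= eta(x) + 1e-9
         for x in range(1, 1000))
```

By Thm. 3.7-style soundness, such a candidate certifies that the expected runtime from x is at most 3x.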
Polynomial Template
The polynomial template-based algorithm is based on [7]. This time, , and can be “polynomial.” To deal with assignments from distributions, we assume that the th moments of the distributions appearing in are easily calculated for each . The algorithm is otherwise similar to the linear template-based one.
It first fixes a polynomial template for a supermartingale, i.e. it assigns to each location a tuple of polynomial expressions with unknown coefficients. As in the linear case, the algorithm reduces the axioms of supermartingales for higher moments to a conjunction of formulas of the form . This time, each is a polynomial formula without parameters and is a polynomial formula whose coefficients are linear formulas over the parameters. In the polynomial case, a conjunction of such formulas is reduced to an SDP problem using a Positivstellensatz (we used the variant due to Schmüdgen [28]). We solve the resulting problem with an SDP solver, setting as the objective function.
6 Experiments
We implemented two programs in OCaml to synthesize supermartingales based on a) a linear template and b) a polynomial template. The programs translate a given randomized program to a pCFG and output an LP or SDP problem as described in §5. An invariant and a terminal configuration for the input program are specified manually; see e.g. [21] for automatic synthesis of invariants. For linear templates, we used GLPK (v4.65) [13] as an LP solver. For polynomial templates, we used SOSTOOLS (v3.03) [31] (a sum-of-squares optimization tool that internally uses an SDP solver) on Matlab (R2018b), with SDPT3 (v4.0) [30] as the SDP solver. The experiments were carried out on a Surface Pro 4 with an Intel Core i5-6300U (2.40 GHz) and 8 GB RAM. We tested our implementation on the following two programs and their variants, which were also used in the literature [20, 8]. Their code is in Appendix 0.E.
Coupon collector’s problem. A probabilistic model of collecting coupons enclosed in cereal boxes. There are several types of coupons, and one repeatedly buys cereal boxes until all the types of coupons are collected. We consider two cases, (1-1) and (1-2), differing in the number of coupon types. We tested the linear template program on them.
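For reference, the classical expected number of boxes for the coupon collector with n coupon types is n·H_n, where H_n is the n-th harmonic number (this is the textbook quantity; the runtime measure of the pCFG encoding used in the benchmarks may count steps differently):

```python
from fractions import Fraction

# Expected number of cereal boxes needed to collect all n coupon types:
# n * H_n, computed exactly with rational arithmetic.
def expected_boxes(n):
    return n * sum(Fraction(1, i) for i in range(1, n + 1))
```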
Random walk. We used three variants of 1-dimensional random walks: (2-1) an integer-valued one, (2-2) a real-valued one with assignments from continuous distributions, (2-3) one with adversarial nondeterminism; and two variants (2-4) and (2-5) of 2-dimensional random walks with assignments from continuous distributions and adversarial nondeterminism. We tested both the linear and the polynomial template programs on these examples.
Experimental results
We measured the execution times needed for Step 1 in Fig. 2. The results are in Table 1. Execution times are less than 0.2 seconds for the linear template program and several minutes for the polynomial template program. Upper bounds of tail probabilities obtained from Prop. 4.2 are in Fig. 3.
We can see that our method is applicable even with nondeterministic branching ((2-3), (2-4) and (2-5)) or assignments from continuous distributions ((2-2), (2-4) and (2-5)). We can use a linear template for bounding higher moments as long as there exists a supermartingale for higher moments representable by linear expressions ((1-1), (1-2) and (2-3)). In contrast, for (2-1), (2-2) and (2-4), only the polynomial template program found a supermartingale for second moments.
One would expect the polynomial template program to give better bounds than the linear one, because a polynomial template is more expressive. However, this did not hold for some test cases, probably because of numerical errors in the SDP solver. For example, (2-1) has a supermartingale for third moments that can be checked by hand calculation, but the SDP solver returned “infeasible” in the polynomial template program. It appears that our program fails when large numbers are involved (e.g. the third moments of (2-1), (2-2) and (2-3)). We also tested a variant of (2-1) where the initial position is multiplied by 10000: the SDP solver returned “infeasible” in the polynomial template program, while the linear template program returned a nontrivial bound. Hence numerical errors seem likely to occur in the polynomial template program when large numbers are involved.
Fig. 3 shows that the bigger the deadline is, the more useful higher moments become (cf. the remark just after Cor. 4.3). For example, in (1-2), the upper bound of the tail probability calculated from the upper bound of the first moment is , while from the upper bound of the fifth moment we obtain .
To show the merit of our method compared with sampling-based methods, we calculated a tail probability bound for a variant of (2-2) (shown in Fig. 4) with a large deadline. Because of its very long expected runtime, a sampling-based method would not work for it. In contrast, the linear template-based program gave an upper bound in almost the same execution time as for (2-2).
7 Related Work
Martingale-Based Analysis of Randomized Programs
Martingale-based methods are widely studied for the termination analysis of randomized programs. One of the first is ranking supermartingales, introduced in [5] for proving almost-sure termination. The theory of ranking supermartingales has since been extended actively: accommodating nondeterminism [1, 7, 12, 8], syntax-oriented composition of supermartingales [12], proving properties beyond termination/reachability [15], and so on. Automated template-based synthesis of supermartingales by constraint solving has been pursued, too [5, 8, 1, 7].
Other martingale-based methods that are fundamentally different from ranking supermartingales have been devised, too. They include: different notions of repulsing supermartingales for refuting termination (in [9, 33]; also studied in control theory [32]); and multiply-scaled submartingales for underapproximating reachability probabilities [37, 33]. See [33] for an overview.
In the literature on martingale-based methods, the one closest to this work is [6]. Among its contributions is the analysis of tail probabilities, done by either of the following combinations: 1) difference-bounded ranking supermartingales and the corresponding Azuma martingale concentration inequality; or 2) (not necessarily difference-bounded) ranking supermartingales and Markov’s concentration inequality. Compared with ours, the first method requires repeated martingale synthesis for different parameter values, which can pose a performance challenge; the second method corresponds to the restriction of our method to the first moment, and recall that we showed the advantage of using higher moments, both theoretically (§4) and experimentally (§6). See Appendix 0.F.1 for detailed discussions. An implementation is lacking in [6], too.
The work [1] is also close to ours in that their supermartingales are vector-valued. The difference is in the orders: [1] uses the lexicographic order between vectors, aiming to prove almost-sure termination, whereas we use the pointwise order between vectors, for overapproximating higher moments.
The Predicate-Transformer Approach to Runtime Analysis
In the runtime/termination analysis of randomized programs, another principal line of work uses predicate transformers [20, 3, 19], following earlier works on probabilistic predicate transformers such as [25, 22]. In fact, from a mathematical point of view, the main construct for witnessing runtime/termination in those predicate transformer calculi (called invariants, see e.g. [20]) is essentially the same thing as a ranking supermartingale. Therefore the difference between the martingale-based and predicate-transformer approaches is mostly a matter of presentation—the predicate-transformer approach is more closely tied to program syntax and has a stronger deductive flavor. It also seems that there is less work on automated synthesis in the predicate-transformer approach.
In the predicate-transformer approach, the work [19] is the closest to ours in that it studies the variance of runtimes of randomized programs. The main differences are as follows: 1) computing tail probabilities is not pursued in [19]; 2) their extension from expected runtimes to variance involves an additional variable, which poses a challenge to automated synthesis as well as to generalization to even higher moments; and 3) they do not pursue automated analysis. See Appendix 0.F.2 for further details.
Higher Moments of Runtimes
Computing and using higher moments of runtimes of probabilistic systems—generalizing randomized programs—has been pursued before. In [10], computing moments of runtimes of finite-state Markov chains is reduced to solving certain linear equations. In the study of randomized algorithms, the survey [11] collects a number of methods, among which are some tail probability bounds using higher moments. Unlike ours, none of these methods is a language-based static one; they do not allow automated analysis.
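As a tiny sketch (ours) of the kind of reduction used in [10]: for a chain with one transient state that self-loops with probability 1/2 (so the runtime is geometric), the k-th moment satisfies a linear equation obtained from the binomial expansion of (1+T)^k, solvable degree by degree.

```python
from math import comb

# Runtime T satisfies T = 1 + (0 w.p. 1/2, else an independent copy of T).
# Hence M_k = E[T^k] solves  M_k = 1/2 + 1/2 * sum_{i<=k} C(k,i) * M_i,
# a linear equation in M_k once M_0, ..., M_{k-1} are known (M_0 = 1).
p = 0.5
M = [1.0]
for k in range(1, 4):
    rhs = (1 - p) + p * sum(comb(k, i) * M[i] for i in range(k))
    M.append(rhs / (1 - p))   # move the C(k,k)*M_k term to the left-hand side
```

The recurrence mirrors the time-elapse function of §3; for several transient states the same idea yields a linear system per degree.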
Other Potential Approaches to Tail Probabilities
We discuss potential approaches to estimating tail probabilities, other than the martingalebased one.
Sampling is widely employed for approximating the behavior of probabilistic systems, especially so in the field of probabilistic programming languages, since exact symbolic reasoning is hard in the presence of conditioning; see e.g. [36]. We also used sampling to estimate tail probabilities in (2-2), Fig. 3. The main advantages of our current approach over sampling are threefold: 1) our upper bounds come with a mathematical guarantee, while bounds obtained by sampling always come with a chance of error; 2) it requires ingenuity to sample from programs with nondeterminism; and 3) programs whose execution can take millions of years can still be analyzed by our method in reasonable time, without executing them. The last advantage is shared by static, language-based analysis methods in general; see e.g. [3].
Another potential method is probabilistic model checking, e.g. with PRISM [23]. Its algorithms are usually applicable only to finite-state models, and thus not to randomized programs in general. Nevertheless, fixing a deadline can make the reachable part of the configuration space finite, opening up the possibility of using model checkers. It is an open question how to do so precisely, and the following challenges are foreseen: 1) if the program contains continuous distributions, the reachable part becomes infinite; 2) even if it is finite, one has to repeat (supposedly expensive) runs of a model checker for each choice of deadline. In contrast, in our method, an upper bound of the tail probability is symbolically expressed as a function of the deadline (Prop. 4.2); therefore estimating tail probabilities for varying deadlines is computationally cheap.
8 Conclusions and Future Work
We provided a technique for obtaining an upper bound of the tail probability of runtimes, given a randomized algorithm and a deadline. We first extended the ordinary ranking supermartingale notion, using its order-theoretic characterization, so that it can calculate upper bounds of higher moments of runtimes of randomized programs. Then, using a suitable concentration inequality, we introduced a method to calculate upper bounds of tail probabilities from the upper bounds of higher moments. The resulting method, a combination of our supermartingale and the concentration inequality, is not only sound but also complete in a certain sense. We also implemented an automated synthesis algorithm and demonstrated the applicability of our framework.
Future Work
Example 3.8 shows that our supermartingale is not complete: it sometimes fails to give a tight bound for higher moments. Studying and remedying this incompleteness is one possible direction of future work. For example, the following questions would be interesting: Can the bound given by our supermartingale be arbitrarily bad? Can completeness be recovered by restricting the type of nondeterminism? Can we define a supermartingale notion that is complete?
We are also interested in improving the implementation. The polynomial template program failed to give upper bounds for higher moments because of numerical errors (see §6). We wish to remedy this situation. There exist several studies on using numerical solvers for verification without being affected by numerical errors [16, 17, 18, 26, 27]. We might make use of these works for improvement.
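One simple ingredient of such approaches, shown here only as an illustration (it is not the actual technique of the cited works): a candidate solution returned by a floating-point solver can be checked a posteriori in exact rational arithmetic, so the final verdict is unaffected by rounding. The linear constraints below are made up for this sketch.

```python
# Sketch: exact a-posteriori feasibility check of a numerically computed
# candidate solution, using rational arithmetic.  The constraints
# A x <= b are hypothetical.
from fractions import Fraction

A = [[Fraction(1), Fraction(2)],
     [Fraction(-1), Fraction(1)]]
b = [Fraction(4), Fraction(1)]

def is_feasible(x):
    """Check A x <= b exactly; the float vector x is rounded to rationals."""
    xq = [Fraction(v).limit_denominator(10**6) for v in x]
    return all(
        sum(a_ij * x_j for a_ij, x_j in zip(row, xq)) <= b_i
        for row, b_i in zip(A, b)
    )

print(is_feasible([1.0, 1.5]))  # True:  1 + 3 <= 4 and -1 + 1.5 <= 1
print(is_feasible([2.0, 1.5]))  # False: 2 + 3 > 4
```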
References
 [1] Sheshansh Agrawal, Krishnendu Chatterjee, and Petr Novotný. Lexicographic ranking supermartingales: an efficient approach to termination of probabilistic programs. PACMPL, 2(POPL):34:1–34:32, 2018.
 [2] Robert B. Ash and Catherine A. Doleans-Dade. Probability and Measure Theory. Academic Press, second edition, 1999.

 [3] Kevin Batz, Benjamin Lucien Kaminski, Joost-Pieter Katoen, and Christoph Matheja. How long, O Bayesian network, will I sample thee? A program analysis perspective on expected sampling times. In Amal Ahmed, editor, Programming Languages and Systems – 27th European Symposium on Programming, ESOP 2018, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2018, Thessaloniki, Greece, April 14–20, 2018, Proceedings, volume 10801 of Lecture Notes in Computer Science, pages 186–213. Springer, 2018.
 [4] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
 [5] Aleksandar Chakarov and Sriram Sankaranarayanan. Probabilistic program analysis with martingales. In Natasha Sharygina and Helmut Veith, editors, Computer Aided Verification – 25th International Conference, CAV 2013, Saint Petersburg, Russia, July 13–19, 2013. Proceedings, volume 8044 of Lecture Notes in Computer Science, pages 511–526. Springer, 2013.
 [6] Krishnendu Chatterjee and Hongfei Fu. Termination of nondeterministic recursive probabilistic programs. CoRR, abs/1701.02944, 2017.
 [7] Krishnendu Chatterjee, Hongfei Fu, and Amir Kafshdar Goharshady. Termination analysis of probabilistic programs through Positivstellensatz's. In Swarat Chaudhuri and Azadeh Farzan, editors, Computer Aided Verification – 28th International Conference, CAV 2016, Toronto, ON, Canada, July 17–23, 2016, Proceedings, Part I, volume 9779 of Lecture Notes in Computer Science, pages 3–22. Springer, 2016.
 [8] Krishnendu Chatterjee, Hongfei Fu, Petr Novotný, and Rouzbeh Hasheminezhad. Algorithmic analysis of qualitative and quantitative termination problems for affine probabilistic programs. ACM Trans. Program. Lang. Syst., 40(2):7:1–7:45, 2018.
 [9] Krishnendu Chatterjee, Petr Novotný, and Dorde Zikelic. Stochastic invariants for probabilistic termination. In Giuseppe Castagna and Andrew D. Gordon, editors, Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2017, Paris, France, January 18–20, 2017, pages 145–160. ACM, 2017.
 [10] Tugrul Dayar and Nail Akar. Computing moments of first passage times to a subset of states in Markov chains. SIAM J. Matrix Analysis Applications, 27(2):396–412, 2005.
 [11] Benjamin Doerr. Probabilistic tools for the analysis of randomized optimization heuristics. CoRR, abs/1801.06733, 2018.
 [12] Luis María Ferrer Fioriti and Holger Hermanns. Probabilistic termination: Soundness, completeness, and compositionality. In Sriram K. Rajamani and David Walker, editors, Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2015, Mumbai, India, January 15–17, 2015, pages 489–501. ACM, 2015.
 [13] The GNU linear programming kit. https://www.gnu.org/software/glpk/.
 [14] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, New York, NY, USA, 2nd edition, 2012.
 [15] Pushpak Jagtap, Sadegh Soudjani, and Majid Zamani. Temporal logic verification of stochastic systems using barrier certificates. In Lahiri and Wang [24], pages 177–193.
 [16] Christian Jansson. Termination and verification for ill-posed semidefinite programming problems, 2005. http://www.optimization-online.org/DB_HTML/2005/06/1150.html.
 [17] Christian Jansson. VSDP: A MATLAB software package for verified semidefinite programming. NOLTA, pages 327–330, 2006.
 [18] Christian Jansson, Denis Chaykin, and Christian Keil. Rigorous error bounds for the optimal value in semidefinite programming. SIAM J. Numerical Analysis, 46(1):180–200, 2007.
 [19] Benjamin Lucien Kaminski, Joost-Pieter Katoen, and Christoph Matheja. Inferring covariances for probabilistic programs. In Gul Agha and Benny Van Houdt, editors, Quantitative Evaluation of Systems – 13th International Conference, QEST 2016, Quebec City, QC, Canada, August 23–25, 2016, Proceedings, volume 9826 of Lecture Notes in Computer Science, pages 191–206. Springer, 2016.
 [20] Benjamin Lucien Kaminski, Joost-Pieter Katoen, Christoph Matheja, and Federico Olmedo. Weakest precondition reasoning for expected runtimes of randomized algorithms. J. ACM, 65(5):30:1–30:68, 2018.
 [21] Joost-Pieter Katoen, Annabelle McIver, Larissa Meinicke, and Carroll C. Morgan. Linear-invariant generation for probabilistic programs: automated support for proof-based methods. In Static Analysis – 17th International Symposium, SAS 2010, pages 390–406, 2010.
 [22] Dexter Kozen. Semantics of probabilistic programs. J. Comput. Syst. Sci., 22(3):328–350, 1981.
 [23] Marta Z. Kwiatkowska, Gethin Norman, and David Parker. PRISM 4.0: Verification of probabilistic real-time systems. In Ganesh Gopalakrishnan and Shaz Qadeer, editors, Computer Aided Verification – 23rd International Conference, CAV 2011, Snowbird, UT, USA, July 14–20, 2011. Proceedings, volume 6806 of Lecture Notes in Computer Science, pages 585–591. Springer, 2011.
 [24] Shuvendu K. Lahiri and Chao Wang, editors. Automated Technology for Verification and Analysis – 16th International Symposium, ATVA 2018, Los Angeles, CA, USA, October 7–10, 2018, Proceedings, volume 11138 of Lecture Notes in Computer Science. Springer, 2018.
 [25] Carroll Morgan, Annabelle McIver, and Karen Seidel. Probabilistic predicate transformers. ACM Trans. Program. Lang. Syst., 18(3):325–353, 1996.
 [26] Pierre Roux, Mohamed Iguernlala, and Sylvain Conchon. A non-linear arithmetic procedure for control-command software verification. In Dirk Beyer and Marieke Huisman, editors, Tools and Algorithms for the Construction and Analysis of Systems – 24th International Conference, TACAS 2018, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2018, Thessaloniki, Greece, April 14–20, 2018, Proceedings, Part II, volume 10806 of Lecture Notes in Computer Science, pages 132–151. Springer, 2018.
 [27] Pierre Roux, YuenLam Voronin, and Sriram Sankaranarayanan. Validating numerical semidefinite programming solvers for polynomial invariants. Formal Methods in System Design, 53(2):286–312, 2018.
 [28] Konrad Schmüdgen. The K-moment problem for compact semi-algebraic sets. Mathematische Annalen, 289(1):203–206, Mar 1991.
 [29] Alexander Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons, Inc., New York, NY, USA, 1986.
 [30] SDPT3. http://www.math.nus.edu.sg/~mattohkc/SDPT3.html.
 [31] SOSTOOLS. http://sysos.eng.ox.ac.uk/sostools/.
 [32] Jacob Steinhardt and Russ Tedrake. Finite-time regional verification of stochastic nonlinear systems. I. J. Robotics Res., 31(7):901–923, 2012.
 [33] Toru Takisaka, Yuichiro Oyabu, Natsuki Urabe, and Ichiro Hasuo. Ranking and repulsing supermartingales for reachability in probabilistic programs. In Lahiri and Wang [24], pages 476–493.
 [34] Terence Tao. An Introduction to Measure Theory. American Mathematical Society, 2011.
 [35] Alfred Tarski. A lattice-theoretical fixpoint theorem and its applications. Pacific Journal of Mathematics, 5(2):285–309, 1955.
 [36] David Tolpin, Jan-Willem van de Meent, Hongseok Yang, and Frank D. Wood. Design and implementation of probabilistic programming language Anglican. In Tom Schrijvers, editor, Proceedings of the 28th Symposium on the Implementation and Application of Functional Programming Languages, IFL 2016, Leuven, Belgium, August 31 – September 2, 2016, pages 6:1–6:12. ACM, 2016.
 [37] Natsuki Urabe, Masaki Hara, and Ichiro Hasuo. Categorical liveness checking by corecursive algebras. In Proc. of LICS 2017, pages 1–12. IEEE Computer Society, 2017.
Appendix 0.A Preliminaries on Measure Theory
In this section, we review some results from measure theory that are needed in the rest of the paper. For more details, see e.g. [2, 34].
Definition 0.A.1
Let $f \colon X \to Y$ be a measurable function and $\mu$ be a probability measure on $X$. The pushforward measure $f_* \mu$ is the measure on $Y$ defined by $(f_* \mu)(A) = \mu(f^{-1}(A))$ for each measurable set $A \subseteq Y$.
Lemma 0.A.2
Let $f \colon X \to Y$ and $g \colon Y \to [0, \infty]$ be measurable functions and $\mu$ be a probability measure on $X$. Then
\begin{equation}
\int_Y g \,\mathrm{d}(f_* \mu) = \int_X (g \circ f) \,\mathrm{d}\mu \tag{3}
\end{equation}
where $g \circ f$ denotes the composite function of $f$ and $g$. ∎
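For a finite discrete measure, the change-of-variables lemma can be verified by direct computation. The following sketch (with made-up data) checks the equality exactly in rational arithmetic.

```python
# Sketch: checking the change-of-variables lemma on a finite discrete
# probability measure.  The measure and the maps f, g are made up.
from fractions import Fraction

mu = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}  # measure on X
f = lambda x: x % 2   # measurable map f : X -> Y
g = lambda y: y + 3   # measurable map g : Y -> [0, oo)

# Pushforward f_* mu on Y: (f_* mu)({y}) = mu(f^{-1}({y})).
pushforward = {}
for x, p in mu.items():
    pushforward[f(x)] = pushforward.get(f(x), Fraction(0)) + p

lhs = sum(g(y) * p for y, p in pushforward.items())  # integral of g d(f_* mu)
rhs = sum(g(f(x)) * p for x, p in mu.items())        # integral of (g o f) dmu
print(lhs == rhs)  # True
```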
Lemma 0.A.3
Let $(X, \mathcal{F}_X)$ and $(Y, \mathcal{F}_Y)$ be measurable spaces and $\mu_x$ be a probability measure on $Y$ for each $x \in X$. The following conditions are equivalent.

1. For each $A \in \mathcal{F}_Y$, the mapping $x \mapsto \mu_x(A)$ is measurable.

2. For each measurable function $f \colon X \times Y \to [0, \infty]$, the mapping $x \mapsto \int_Y f(x, y) \,\mathrm{d}\mu_x(y)$ is measurable.
Proof
(1 $\Rightarrow$ 2) We write $\mathcal{F}_X \otimes \mathcal{F}_Y$ for the product $\sigma$-algebra of $\mathcal{F}_X$ and $\mathcal{F}_Y$. By the monotone convergence theorem (see e.g. [2, Theorem 1.6.2]) and the linearity of integration, it suffices to prove that for each $A \in \mathcal{F}_X \otimes \mathcal{F}_Y$, the indicator function $\chi_A$ satisfies condition 2. Let $\mathcal{D}$ be the set of all $A \in \mathcal{F}_X \otimes \mathcal{F}_Y$ such that $\chi_A$ satisfies condition 2. By the monotone class theorem (see e.g. [2, Theorem 1.3.9]), to prove $\mathcal{D} = \mathcal{F}_X \otimes \mathcal{F}_Y$, it suffices to prove that $\mathcal{D}$ is a monotone class and contains a Boolean algebra that generates $\mathcal{F}_X \otimes \mathcal{F}_Y$. The rest of the proof is easy.

(2 $\Rightarrow$ 1) Given $A \in \mathcal{F}_Y$, consider $f(x, y) = \chi_A(y)$. ∎
For any function $f \colon X \to Y$ and any set $Z$, $f^* \colon Z^Y \to Z^X$ denotes the precomposition with $f$, i.e. $f^*(g) = g \circ f$. If $X \subseteq Y$, we write $g|_X$ for $\iota^*(g)$, where $\iota \colon X \to Y$ is the inclusion mapping.
Corollary 0.A.4
Let $(X, \mathcal{F})$ be a measurable space and $\mu_n$ be an inner regular probability measure on $X^{n+1}$ for each $n \in \mathbb{N}$. Assume $(\pi_n)_* \mu_{n+1} = \mu_n$ for each $n \in \mathbb{N}$, where $\pi_n \colon X^{n+2} \to X^{n+1}$ is the projection to the first $n+1$ components. There exists a unique probability measure $\mu$ on $X^{\mathbb{N}}$ such that $(\pi^{(n)})_* \mu = \mu_n$ for each $n \in \mathbb{N}$, where $\pi^{(n)} \colon X^{\mathbb{N}} \to X^{n+1}$ is the projection to the first $n+1$ components. ∎
Appendix 0.B $K$-th moments of runtimes and rewards
We define a probability measure on the set of runs of a pCFG under a given scheduler, and then define the $K$-th moment of runtimes. Here we slightly generalize the runtime model by considering a reward function, and redefine some of the notions to accommodate it. This generalization is not essential, however: the reader can safely assume that we are just counting the number of steps until termination (by taking the constant function 1 as the reward function).
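For intuition before the formal development, the $K$-th moment of the accumulated reward can be estimated by simulation. The toy program below (a loop guarded by a fair coin) and the constant reward function are invented for this sketch; with reward constantly 1, the accumulated reward is exactly the runtime.

```python
# Sketch: simulating K-th moments of accumulated rewards for a toy
# randomized program.  With the constant reward 1, this is the K-th
# moment of the runtime itself.
import random

def accumulated_reward(rng, reward=lambda step: 1.0):
    """Run the toy program once; return the total reward collected."""
    total, step = 0.0, 0
    while True:
        total += reward(step)    # collect this step's reward
        step += 1
        if rng.random() >= 0.5:  # terminate with probability 1/2
            return total

def kth_moment(k, n_samples=200_000, seed=0):
    rng = random.Random(seed)
    return sum(accumulated_reward(rng) ** k for _ in range(n_samples)) / n_samples

print(kth_moment(1))  # close to E[T]   = 2 for this toy program
print(kth_moment(2))  # close to E[T^2] = 6
```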
A reward function on a pCFG is a measurable function from the set of configurations to $[0, \infty)$. Recall that we regard the set of configurations as the product measurable space of the set of locations and the set of valuations. A scheduler of a pCFG resolves its two types of nondeterminism: nondeterministic transitions and nondeterministic assignments.
Definition 0.B.1 (scheduler)
A scheduler of consists of the following components.

A function such that

if and is the last location of , then implies , and

for each , the mapping is measurable.


A function such that

if , is the last location of and , then , and

for each , the mapping is measurable.

Note that if a pCFG has neither nondeterministic transitions nor nondeterministic assignments, then there exists only one scheduler, and it is trivial.
In the rest of the paper, the concatenation of two finite sequences $w$ and $w'$ is denoted by $w \cdot w'$ or by $w w'$.
Given a scheduler and a history of configurations, the probability distribution of the next configuration determined by the scheduler is defined as follows.
Definition 0.B.2
Let be a scheduler and . A probability measure on is defined as follows.

If and , .

If , .

If ,