
Verifying Relational Properties using Trace Logic

by Gilles Barthe, et al.

We present a logical framework for the verification of relational properties in imperative programs. Our work is motivated by relational properties which come from security applications and often require reasoning about formulas with quantifier alternations. Our framework reduces the verification of relational properties of imperative programs to a validity problem in trace logic, an expressive instance of first-order predicate logic. Trace logic draws its expressiveness from its syntax, which allows expressing properties over computation traces. Its axiomatization supports fine-grained reasoning about intermediate steps in program execution, notably loop iterations. We present an algorithm to encode the semantics of programs as well as their relational properties in trace logic, and then show how first-order theorem proving can be used to reason about the resulting trace logic formulas. Our work is implemented in the tool Rapid and evaluated with examples coming from the security field.




1 Introduction

Program verification generally focuses on proving that all executions of a program lie within a specified set of executions; that is, properties are seen as sets of traces. However, this approach is not general enough to capture various fundamental properties, such as non-interference [26] and robustness [12]. These notions are naturally modelled as relational properties, that is, as sets of pairs of traces. Relational properties are special instances of hyperproperties [15], which are formally defined as sets of sets of traces.

Verification of relational properties can be achieved in different ways. One approach is by reduction to standard program verification: given a program P and a hyperproperty F, construct a program P' and a property F', such that (i) P' satisfies F' and (ii) P' satisfying F' implies that P satisfies F. The main advantage of this approach is that (i) can be verified using standard verification tools, whereas (ii) is proved generically for the method used for constructing P', for instance self-composition [7, 20] and product programs [5, 13]. Another approach to verify relational properties is to use relational Hoare logic [10] or specialized logics that target specific properties [1]. While both approaches have been applied successfully in several use cases, they suffer from fundamental limitations: (i) they are typically not efficient enough to scale to large programs and (ii) they are only partly automated and tailored to specific properties.


In this paper, we develop a new approach based on reduction to first-order reasoning, with the intent of reconciling expressiveness and automation.

(1) We introduce and formally characterize trace logic L, an instance of many-sorted first-order logic with equality, which allows expressing properties over program locations, loop iterations, and computation traces (Section 4).

(2) We encode the semantics of programs as well as relational program properties in L (Section 4). Specifically, given a program P and a relational property F, we construct a first-order formula [[P]] in L, capturing the semantics of P, such that validity of the implication [[P]] → F entails that P satisfies F. Note that this semantic characterization stands in contrast with methods based on product programs, Hoare logics, and relational Hoare logics, where verification is syntax-directed.

(3) We show that relational properties, such as non-interference, can naturally be encoded in trace logic (Section 5).

(4) We implemented our approach in the Rapid tool, which relies on the first-order theorem prover Vampire [33]. We conducted experiments on security-relevant hyperproperties, such as non-interference and sensitivity. Our results show that Rapid is more expressive than state-of-the-art non-interference verification tools and that Vampire is better suited to the verification of security-relevant hyperproperties than state-of-the-art SMT solvers like Z3 and CVC4.

2 Motivating Example

 func main()
    const Int[] a;
    const Int alength;
    Int i = 0;
    Int hw = 0;
    while (i < alength)
       hw = hw + a[i];
       i = i + 1;
Figure 1: Motivating example.

We motivate our work with the simple program of Figure 1. This program iterates over an integer-valued array a and stores in the variable hw the sum of the array elements. If a is a bitstring, then this program leaks the so-called Hamming weight of a in the variable hw. Our aim is to prove the following relational property over two arbitrary computation traces t1 and t2 of Figure 1: if the elements of the array variable a in t1 are component-wise equal to the elements of a in t2 except for two consecutive positions p and p+1, for some p, and the elements of a in t1 at positions p, p+1 are swapped versions of the elements of a in t2 (that is, the p-th element of a in t1 is the (p+1)-th element of a in t2 and vice-versa), then the program variable hw is the same at the end of t1 and t2. We formalize this property as

∀p_I. ( ∀pos_I. ((pos ≄ p ∧ pos ≄ p+1) → a(pos, t1) ≃ a(pos, t2)) ∧ a(p, t1) ≃ a(p+1, t2) ∧ a(p+1, t1) ≃ a(p, t2) ) → hw(l_end, t1) ≃ hw(l_end, t2)    (1)


where p_I and pos_I respectively specify that p and pos are of sort integer. Further, a(pos, t) denotes the value of the element at position pos of a in trace t, whereas l_end refers to the last program location of Figure 1 (that is, the location after the loop).

Property (1) is challenging to verify, since it requires theory-specific reasoning over integers and it involves alternation of quantifiers, as the length of the array a is unbounded and the p-th position (corresponding to the swap) is arbitrary. To understand the difficulty in automating this kind of reasoning, let us first illustrate how humans would naturally prove property (1). First, split the iterations of the loop of Figure 1 into three intervals: (i) the interval from the first iteration of the loop to the iteration where i has value p, (ii) the interval from the iteration where i has value p to the iteration where i has value p+2, and (iii) the interval from the iteration where i has value p+2 to the last iteration of the loop. Next, for each of the intervals above, one proves that the equality of the value of hw in traces t1 and t2 is preserved; that is, if hw has the same value in t1 and t2 at the beginning of the interval, then hw also has the same value in t1 and t2 at the end of the interval. In particular, for the first and third intervals one uses inductive reasoning to conclude the preservation of the equality across the whole interval from the step-wise preservation, within the interval, of the equality of the value of hw in traces t1 and t2. Further, for the second interval, one uses commutativity of addition to prove that the equality of the value of hw in traces t1 and t2 is preserved. By combining the fact that the equality of the values of hw in traces t1 and t2 is preserved in each of the three intervals, one finally concludes that property (1) is valid.
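The three-interval argument can be sanity-checked on concrete inputs. The following Python sketch (illustrative only; `run_hw` and the variable names are ours, not the paper's) runs the program of Figure 1 on two arrays that differ by a swap of two adjacent elements and checks the step-wise equalities the proof relies on:

```python
# Sanity check (not the paper's proof): simulate the two traces of
# Figure 1 on concrete inputs and check property (1) together with
# the three-interval argument. All names here are illustrative.

def run_hw(a):
    """Run the program of Figure 1, recording hw at each loop test."""
    hw, history = 0, [0]              # hw before iteration 0, 1, ...
    for x in a:
        hw += x
        history.append(hw)
    return hw, history

a1 = [3, 1, 4, 1, 5, 9]
p = 2                                  # swap positions p and p+1
a2 = a1[:p] + [a1[p + 1], a1[p]] + a1[p + 2:]

hw1, h1 = run_hw(a1)                   # trace t1
hw2, h2 = run_hw(a2)                   # trace t2

# Interval (i): values of hw agree step-wise up to iteration p.
assert h1[: p + 1] == h2[: p + 1]
# Interval (ii): commutativity of addition restores equality at p+2.
assert h1[p + 2] == h2[p + 2]
# Interval (iii): equality is preserved to the last iteration,
# which is property (1): hw is the same at the end of both traces.
assert hw1 == hw2
```

Of course, such testing only checks single input pairs; the point of trace logic is to prove the property for all traces and all swap positions p at once.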

While the above proof might be natural for humans, it is challenging for automated reasoners for the following reasons: (i) one needs to express and relate different iterations in the execution of the loop in Figure 1 and use these iterations to split the reasoning about loop intervals; (ii) one needs to automatically synthesize the loop intervals whose boundaries depend on values of program variables; and (iii) one needs to combine theory-specific reasoning with induction for proving quantified properties, possibly with alternations of quantifiers. In our work we address these challenges: we introduce trace logic, allowing us to express and automatically prove relational properties, including property (1). The key advantages of trace logic are as follows.

(i) In trace logic, program variables are encoded as unary and binary functions over program execution timepoints. This way, we can precisely express the value of each program variable at any program execution timepoint, without introducing abstractions. For Figure 1, for example, we write hw(l_end, t) to denote the value of hw in trace t at timepoint l_end.

(ii) Trace logic further allows arbitrary quantification over iterations and values of program variables. In particular, we can express and reason about iterations that depend on (possibly non-ground) expressions involving program variables. We use superposition-based first-order reasoning to automate static analysis with trace logic and derive first-order properties about loop iterations, possibly with quantifier alternations. For Figure 1, we generate, for example, a property over the location l6 where the loop condition is tested and the first iteration n upon which the loop condition does not hold anymore.

(iii) We guide superposition reasoning in trace logic by using a set of lemmas statically inferred from the program semantics. These lemmas express inductive properties about the program behavior. To illustrate such lemmas, we first introduce the following notation. For an arbitrary program variable v, let Eq_v(it) denote that v has the same value in both traces at iteration it of the loop. That is, for every program variable v of Figure 1, we introduce the following definition:

Eq_v(it) := v(l6(it), t1) ≃ v(l6(it), t2)

In particular, for variable hw, we introduce:

Eq_hw(it) := hw(l6(it), t1) ≃ hw(l6(it), t2)

We then derive the following inductive lemma for each program variable v:

∀it'_N. ( (Eq_v(0) ∧ ∀it_N. ((it < it' ∧ Eq_v(it)) → Eq_v(s(it)))) → Eq_v(it') )    (2)

where it and it' denote iterations and s(it) denotes the successor of it. Lemma (2) asserts that if v has the same value in traces t1 and t2 at the beginning of the loop (that is, at iteration 0) and if the values of v are step-wise equal in traces t1 and t2 up to an arbitrary iteration it', then the values of v are equal in traces t1 and t2 at iteration it' (and hence the values of v are preserved in t1 and t2 for the entire interval up to it'). For Figure 1, we generate lemma (2) for hw as:

∀it'_N. ( (Eq_hw(0) ∧ ∀it_N. ((it < it' ∧ Eq_hw(it)) → Eq_hw(s(it)))) → Eq_hw(it') )    (3)


Note that lemma (2), and in particular its instance (3) for hw, is crucial for proving that the values of hw in traces t1 and t2 are the same up to the iteration corresponding to the swap position p, as considered in the relational property (1). With this lemma at hand, we automatically prove property (1) of Figure 1, using superposition reasoning in trace logic.

3 Preliminaries

This section fixes our terminology and programming model.

3.1 First-order logic

We consider standard many-sorted first-order logic with equality, where equality is denoted by ≃. We allow all standard boolean connectives and quantifiers in the language and write s ≄ t instead of ¬(s ≃ t), for two arbitrary first-order terms s and t. A signature is any finite set of symbols. We consider equality as part of the language; hence, ≃ is not a signature symbol. We write ⊨ F to denote that the formula F is a tautology, that is, that F is valid.

By a first-order theory, or simply a theory, we mean the set of all formulas valid on a class of first-order structures. When we discuss a theory, we call symbols occurring in the signature of the theory interpreted, and all other symbols uninterpreted. In our work, we consider the combination (union) of the theory ℕ of natural numbers and the theory ℤ of integers. The signature of ℕ consists of the standard symbols 0, s, p and <, respectively interpreted as zero, successor, predecessor and less. Note that ℕ does not contain interpreted symbols for (arbitrary) addition and multiplication. We use the theory ℕ to represent and reason about loop iterations (see Section 4). The signature of ℤ consists of the standard integer constants and integer operators. We use the theory ℤ to represent and reason about integer-valued program variables (see Section 4). Additionally, we use two uninterpreted sorts: (i) the sort Timepoint, for denoting (unique) timepoints in the execution of the program, and (ii) the sort Trace, for denoting computation traces of a program.

Given a logical variable x and sort S, we write x_S to denote that the sort of x is S. We use standard first-order interpretations/models modulo a theory T, for example modulo ℕ ∪ ℤ. We write T ⊨ F to denote that F holds in all models of T (and hence is valid modulo T). If M is a model of T, we write M ⊨ F if F holds in the interpretation M.

3.2 Programming Model

We consider programs written in a standard while-like programming language, denoted W, with mutable and constant integer- and integer-array variables. The language includes standard side-effect-free expressions over booleans and integers. Each program in W consists of a single top-level function main, with arbitrary nestings of if-then-else and while-statements. For simplicity, whenever we refer to loops, we mean while-loops. For each statement s, we refer to the while-statements in which s is nested as the enclosing loops of s. The semantics of W is formalized in Section 4.3, with further details in Appendix 0.A.1.

4 Trace Logic

We now introduce the concept of trace logic L for expressing both the semantics and (relational) properties of W-programs.

4.1 Locations and Timepoints

We consider a program in W as a set of locations, where each location intuitively corresponds to a point in the program at which an interpreter can stop. That is, for each program statement s, we introduce a program location l_s. We denote by l_end the location corresponding to the end of the program.

As program locations can be revisited during program executions, for example due to the presence of loops, we model locations as follows. For each location corresponding to a program statement s, we introduce a function symbol l_s with target sort Timepoint in our language, denoting the timepoint where the interpreter visits the location. For each enclosing loop of the statement s, the function symbol l_s has an argument of sort ℕ; this way, we distinguish between different iterations of the enclosing loops of s. When s is a loop, we additionally introduce a function symbol n_s with target sort ℕ, again with an argument of sort ℕ for each enclosing loop of s. This way, n_s denotes the iteration in which s terminates, for given iterations of the enclosing loops of s.

Example 1

Consider Figure 1. We abbreviate each statement s by the line number of the first line of s. We use l4 to refer to the timepoint corresponding to the first assignment of i in the program. We denote by l6(0) and l6(n_l6) the timepoints corresponding to evaluating the loop condition in the first and, respectively, last loop iteration. Further, we write l7(it) and l7(s(0)) for the timepoints corresponding to the beginning of the loop body in the it-th and, respectively, second iteration of the loop. Note that s(0) is a term algebra expression of ℕ.

For simplicity, let us define terms over the most commonly used timepoints. First, define it to be a function which returns for each while-statement s a unique variable it(s) of sort ℕ. Second, let s be a statement, let w1, …, wm be the enclosing loops of s and let iter be an arbitrary term of sort ℕ. We define:

tp_s := l_s(it(w1), …, it(wm))    if s is not a while-statement
tp_s(iter) := l_s(it(w1), …, it(wm), iter)    if s is a while-statement
lastIt_s := n_s(it(w1), …, it(wm))    if s is a while-statement

Third, let s be an arbitrary statement. We refer to the timepoint where the execution of s has started (parameterized by the enclosing iterators) by start_s, where start_s := tp_s if s is not a while-statement and start_s := tp_s(0) otherwise.

Fourth, for an arbitrary statement s, let end_s denote the timepoint which follows immediately after s has been evaluated completely (including the evaluation of all substatements of s).

4.2 Program Variables and Expressions

In our setting, we reason about program behavior by expressing properties over program variables v. To do so, we capture the value of a program variable v at timepoints (from Timepoint) in arbitrary program execution traces (from Trace). Hence, we model a program variable v as a function v(tp, tr), which gives the value of v at timepoint tp in trace tr. If the program variable v is an array, we add an additional argument of sort ℤ, corresponding to the position at which the array is accessed, and write v(tp, pos, tr). We denote by V the set of function symbols introduced for program variables. We finally model arithmetic constants and program expressions using integer functions.
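This modelling of variables as functions over timepoints and traces can be mimicked concretely. The following Python sketch (our illustration, not Rapid's implementation; the names `interpret`, `store`, and the timepoint encoding are assumptions) records the value of each variable of Figure 1 at every loop-condition timepoint:

```python
# Illustrative model: program variables as functions over
# (timepoint, trace). Timepoints are tuples like ("l6", it) for
# iteration it of the loop-condition location of Figure 1.

def interpret(a, trace_name, store):
    """Run Figure 1, filling store[(var, timepoint, trace)] = value."""
    i = hw = 0
    for it in range(len(a) + 1):
        store[("i", ("l6", it), trace_name)] = i
        store[("hw", ("l6", it), trace_name)] = hw
        if i < len(a):
            hw += a[i]    # loop body
            i += 1
    store[("hw", "l_end", trace_name)] = hw

store = {}
interpret([2, 7, 1], "t1", store)

# The value of hw in trace t1 at the second loop test (iteration 1):
assert store[("hw", ("l6", 1), "t1")] == 2
# ... and at the end of the program:
assert store[("hw", "l_end", "t1")] == 10
```

The dictionary lookup plays the role of the function application hw(l6(it), t1): every variable's value at every timepoint in every trace is available, without abstraction.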

Note that our setting can be simplified (i) for non-mutable variables, in which case we omit the timepoint argument in the function representation of the variable, and (ii) for non-relational properties about programs, in which case we only focus on one computation trace and hence the trace argument can be omitted.

Example 2

Consider again Figure 1. By i(l4, tr) we refer to the value of program variable i in trace tr at the moment before i is first assigned. We use alength(tr) to refer to the value of the constant variable alength in trace tr. As a is unchanged in the program, we write a(i(tp, tr), tr) for the value of array a in trace tr at position i(tp, tr), where i(tp, tr) is the value of i in trace tr at timepoint tp. In case a were changed during the loop, we would have written a(tp, i(tp, tr), tr) instead. We denote by [[i+1]](tp, tr) the value of the expression i+1 in trace tr at timepoint tp.

Consider now an arbitrary program expression e. We write [[e]](tp, tr) to denote the value of e at timepoint tp in trace tr. With these notations at hand, we introduce two definitions expressing properties about values of expressions at arbitrary timepoints and traces. Consider a function v ∈ V denoting a program variable, and let tp1, tp2 denote two timepoints. We define:

Eq(v, tp1, tp2) := v(tp1, tr) ≃ v(tp2, tr)    (4)

That is, (4) states that the program variable v has the same values at tp1 and tp2 (for array variables, the equality is additionally quantified over all positions). We also define:

EqAll(tp1, tp2) := ⋀_{v ∈ V} Eq(v, tp1, tp2)    (5)

asserting that all program variables have the same values at the timepoints tp1 and tp2.

4.3 Semantics of W

We now describe the semantics of W expressed in our trace logic L. To do so, we state trace axioms of L capturing the behavior of possible program computation traces and then define the semantics [[P]] of a program P.

In what follows, we consider an arbitrary but fixed program P in W, and give all definitions relative to P. Note that our semantics defines arbitrary executions, which are modeled by a free variable tr of sort Trace.


Let s1, …, sk be statements and let P be a program with top-level function func main(){ s1; …; sk }. The semantics [[P]] of P is defined as the conjunction of the semantics of the statements si of the top-level function and is the same for each trace. That is:

[[P]] := ⋀_{i=1,…,k} [[si]]    (6)
The semantics of each statement is then defined by structural induction, by asserting trace axioms for each program statement s, as follows.


Skip

Let s be the statement skip. The evaluation of s has no effect on the value of the program variables. Hence:

[[s]] := EqAll(end_s, start_s)    (7)


Integer assignments

Let s be an assignment v = e, where v is an integer program variable and e is an expression. We reason as follows. The assignment s is evaluated in one step. After the evaluation of s, the variable v has the value of e before the evaluation, and all other variables remain unchanged. Hence:

[[s]] := v(end_s, tr) ≃ [[e]](start_s, tr) ∧ ⋀_{w ∈ V, w ≄ v} Eq(w, end_s, start_s)    (8)


Array assignments

Let s be an assignment a[e1] = e2, where a is an array variable and e1, e2 are expressions. We consider that the assignment is evaluated in one step. After the evaluation of s, the array a has the same value as before the evaluation, except for the position corresponding to the value of e1 before the evaluation, where the array now has the value of e2 before the evaluation. All other program variables remain unchanged. Hence:

[[s]] := ∀pos_I. (pos ≄ [[e1]](start_s, tr) → a(end_s, pos, tr) ≃ a(start_s, pos, tr)) ∧ a(end_s, [[e1]](start_s, tr), tr) ≃ [[e2]](start_s, tr) ∧ ⋀_{w ∈ V, w ≄ a} Eq(w, end_s, start_s)    (9)
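The array-assignment axiom has a simple functional reading, which the following Python sketch illustrates (our illustration; the helper `assign` is not part of the paper): the array after the assignment agrees with the array before it at every position except the one denoted by e1, where it holds the value of e2.

```python
# A functional reading of the array-assignment axiom (illustrative):
# after a[e1] = e2, the array agrees with its previous state at every
# position other than the value of e1, where it now holds e2's value.

def assign(a_before, e1_val, e2_val):
    """Return the array state after a[e1] = e2."""
    return [e2_val if pos == e1_val else a_before[pos]
            for pos in range(len(a_before))]

before = [10, 20, 30]
after = assign(before, 1, 99)

assert after[1] == 99                    # the updated position
assert all(after[p] == before[p]         # every other position
           for p in range(len(before)) if p != 1)
```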


Conditional if-then-else Statements

Let s be the statement if(Cond){ s1; …; sk } else { s1'; …; sk' }. The semantics of s is defined by the following two properties: (i) entering the if-branch or the else-branch does not change the values of the variables, and (ii) the evaluation in the branches proceeds according to the semantics of the statements in each of the branches. Thus:

[[s]] := ([[Cond]](start_s, tr) → EqAll(start_{s1}, start_s) ∧ ⋀_{i=1,…,k} [[si]]) ∧ (¬[[Cond]](start_s, tr) → EqAll(start_{s1'}, start_s) ∧ ⋀_{i=1,…,k'} [[si']])    (10)



While-Statements

Let s be the while-statement while(Cond){ s1; …; sk }. We refer to Cond as the loop condition. We use the following four properties to define the semantics of s: (i) the iteration lastIt_s is the first iteration in which the loop condition does not hold, (ii) entering the loop body does not change the values of the variables, (iii) the evaluation in the body proceeds according to the semantics of the statements in the body, and (iv) the values of the variables at the end of evaluating s are the same as the variable values at the loop-condition location in iteration lastIt_s. We then have:

[[s]] := ∀it_N. (it < lastIt_s → [[Cond]](tp_s(it), tr)) ∧ ¬[[Cond]](tp_s(lastIt_s), tr) ∧ ∀it_N. (it < lastIt_s → EqAll(start_{s1}, tp_s(it))) ∧ ⋀_{i=1,…,k} [[si]] ∧ EqAll(end_s, tp_s(lastIt_s))    (11)
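The defining property of the last iteration can be simulated directly. The sketch below (illustrative; `loop_timepoints` is our helper, not the paper's) records the loop-condition test of Figure 1's loop at each iteration and computes the last iteration as the first one where the condition fails:

```python
# Toy simulation of the while-axiom's key ingredient: record the
# loop-condition test at each iteration timepoint and compute the
# last iteration n_s as the first iteration where the test fails.

def loop_timepoints(a):
    i, tests = 0, []
    while True:
        cond = i < len(a)          # loop condition, tested at l6(it)
        tests.append(cond)
        if not cond:
            break
        i += 1                      # loop body
    n_s = tests.index(False)        # first failing iteration
    return tests, n_s

tests, n_s = loop_timepoints([5, 5, 5])
assert n_s == 3                               # condition holds at 0..2
assert all(tests[it] for it in range(n_s))    # (i): holds before n_s
assert not tests[n_s]                         # ... and fails at n_s
```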


4.4 Trace Logic

We now have all ingredients to define our trace logic L, allowing us to reason about both relational and non-relational properties of programs.

Let t1, t2, … be nullary function symbols of sort Trace. Intuitively, these symbols denote traces and allow us to express relational properties. The signature of L contains the symbols of the theories ℕ and ℤ, together with the symbols introduced in Sections 4.1 and 4.2, that is, the symbols denoting timepoints, last iterations of loops, program variables, and traces.
Recall that the semantics of W is defined by the trace axioms (7)-(11). By extending standard small-step operational semantics with timepoints and traces, we obtain the small-step semantics of W (see our Appendix for details). For proving soundness of this semantics, we rely on the so-called execution-interpretation of a program execution e: such an interpretation is a model in which, for every (array) variable v, the term v(tp, tr) (resp. v(tp, pos, tr)) is interpreted as the value of v at the execution step in e corresponding to timepoint tp. We then refer to the soundness of the semantics of W as L-soundness, defined as:

Definition 1 (L-Soundness)

Let P be a program and let F be a trace logic property. We say that F is L-sound if for any execution-interpretation M of P we have M ⊨ F.

By using structural induction over program statements, we derive L-soundness of the semantics of W (see our Appendix for details). That is:

Theorem 4.1 (L-Soundness of the Semantics of W)

For a given terminating program P, the trace axioms (7)-(11) are L-sound.

As a consequence, the semantics [[P]] of any terminating program P expressed in L, as defined in (6), is L-sound.

4.5 Program Correctness in Trace Logic

Let P be a program and F a first-order property of P, with F expressed in L. We use L to express and prove that P “satisfies” F, that is, that P is partially correct w.r.t. F, as follows:

  1. We express the semantics [[P]] of P in L, as discussed in Section 4.3;

  2. We prove the partial correctness of P with respect to F; that is, we prove the validity of [[P]] → F.

In what follows, we first discuss (relational) properties expressed in L (Section 5) and then focus on proving partial correctness using L (Section 6).

5 Hyperproperties in Trace Logic

We exemplify the expressiveness of trace logic by encoding non-interference [38], a fundamental security property. We also showcase the generic lemmas, similar to lemma (2), introduced by our work to automate the verification of hyperproperties. The examples considered in this section are deemed insecure by existing syntax-driven non-interference verification techniques, such as [38, 27].

Non-interference [26] is a security property that prevents information flow from confidential data to public channels. It is a so-called 2-safety property expressing that, given two runs of a program containing high- and low-confidentiality variables, if the inputs of all low variables are the same in both runs, then the computation should result in the same values of the low variables in both traces, regardless of the initial values of the high variables. Intuitively, this means that no private input leaks to any public sink. In what follows, we let lo denote a low-confidentiality variable and hi a high-confidentiality variable.

We formalize non-interference in trace logic as follows. Let l_start denote the first timepoint of the execution and let EqTr(v, tp) denote that v has the same value(s) in both traces t1 and t2 at timepoint tp, that is:

EqTr(v, tp) := v(tp, t1) ≃ v(tp, t2)

(for array variables, the equality is additionally quantified over all positions). We then express non-interference as:

EqTr(lo, l_start) → EqTr(lo, l_end)    (12)
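In the spirit of property (12), non-interference can be probed (though never proved) by running a program on pairs of traces. The Python sketch below is our illustration; the program `prog` mirrors the branching-on-a-high-guard example discussed next, and all names are ours:

```python
# Concrete, necessarily incomplete check of non-interference: run a
# program on two traces that agree on the low input lo but differ on
# the high input hi, and compare lo at the end of the execution.

def prog(hi, lo):
    # Branch on the high guard, but update lo identically either way.
    if hi > 0:
        lo = lo + 1
    else:
        lo = lo + 1
    return lo

lo0 = 7
out_t1 = prog(hi=1, lo=lo0)     # trace t1
out_t2 = prog(hi=-5, lo=lo0)    # trace t2
assert out_t1 == out_t2         # EqTr(lo, l_end) holds for this pair
```

Testing covers only finitely many trace pairs; the trace logic encoding quantifies over all traces, which is what makes the verification sound.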

Example 3

Consider the program illustrated in Figure 2(a), which branches on a high guard. In the two branches, however, the variable lo is updated in the same way, thereby not leaking anything about the guard. The non-interference property for this program is a special instance of property (12).
By adjusting superposition reasoning to trace logic (see Section 6), we can automatically verify the property above. Traditional information-flow type systems [38] would, however, fail to prove this program secure, as they prevent any branching on high guards. More permissive static analysis techniques based on program dependency graphs, such as Joana [27], would also classify this program as insecure.

func main()
    const Int hi;
    Int lo;
    if(hi > 0)
         lo = lo + 1;
         lo = lo + 1;
(a) Branching on a high variable.
func main()
    const Int k;
    const Int lo;
    Int hi = lo;
    Int i = 0;
    Int[] output;
    while(hi < k)
         output[i] = hi;
         hi = hi + 1;
         i = i + 1;
(b) Explicit flow.

Figure 2: Examples with non-interference behaviour.
Let us now focus on another interesting security example. Figure 2(b) models an interactive program outputting on a public channel. The array variable output models the number and content of these outputs, which is determined by the loop. At a first glance, this program might look insecure because of the explicit flow at output[i] = hi. Furthermore, the number of outputs, as well as their content, could also leak information about the secret. Indeed, value-insensitive information-flow type systems [38] would consider this program to be insecure. In this specific case, however, the hi variable in the loop guard is reset with a low input, and the program satisfies non-interference. As our semantic reasoning in trace logic is value-sensitive, our work correctly validates Figure 2(b), proving it to be secure.
Specifically, we prove the following property, stating that if all low variables are equal at the beginning of the execution, then the values of the output array are equal after the execution:

(EqTr(k, l_start) ∧ EqTr(lo, l_start) ∧ EqTr(output, l_start)) → EqTr(output, l_end)
Our framework generates and relies upon a set of generic trace lemmas for hyperproperties, similar to lemma (2). We will now illustrate two further such lemmas.
Simultaneous loop termination. Our semantic formalization of while-statements in trace logic defines n_s to be the smallest iteration in which the loop condition does not hold in a given trace. Due to the well-founded ordering over the naturals, there can only be one iteration with this property. Thus, if we can conclude this property for any other trace, say t2, then it must be the case that n_s(t2) ≃ n_s(t1). In our work we therefore generate and use the following trace lemma in L (for simplicity, we omit the enclosing iterators):

(∀it. (it < n_s(t1) → [[Cond]](l_s(it), t2)) ∧ ¬[[Cond]](l_s(n_s(t1)), t2)) → n_s(t2) ≃ n_s(t1)
This lemma is essential to prove that the loops in both traces have the same last iteration, and therefore terminate after the same number of iterations.
Equality preservation for arrays. For an array variable a and loop location l, let Eq_a(it, pos) denote that a at position pos has the same value in both traces at iteration it of the loop:

Eq_a(it, pos) := a(l(it), pos, t1) ≃ a(l(it), pos, t2)

The following lemma over array variables is similar to lemma (2):

∀pos_I. ∀it'_N. ( (Eq_a(0, pos) ∧ ∀it_N. ((it < it' ∧ Eq_a(it, pos)) → Eq_a(s(it), pos))) → Eq_a(it', pos) )
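The induction scheme behind equality preservation can be sanity-checked by enumeration on bounded traces. The sketch below is our illustration (the helper `check_preservation` and its input encoding are assumptions, not part of the paper):

```python
# Bounded sanity check of the equality-preservation scheme: if an
# array position is equal in both traces at iteration 0 and equality
# is preserved step-wise, it is equal at every recorded iteration.

def check_preservation(values_t1, values_t2):
    """values_t*[it] is a(l(it), pos, t*) for one fixed position pos."""
    n = min(len(values_t1), len(values_t2))
    eq = [values_t1[it] == values_t2[it] for it in range(n)]
    base = eq[0]                                        # Eq_a(0, pos)
    step = all(eq[it + 1]                               # Eq_a(it) ->
               for it in range(n - 1) if eq[it])        # Eq_a(s(it))
    if base and step:
        # Lemma conclusion, checked by enumeration, not induction:
        assert all(eq)
    return base and step

assert check_preservation([0, 2, 9, 10], [0, 2, 9, 10])
```

The enumeration replaces the inductive argument; in trace logic, the lemma is instead added as a first-order axiom, so the prover need not perform induction itself.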
We conclude by emphasizing that trace lemmas, such as the two above, are expressed in trace logic L and automatically inferred by our approach.

6 Implementation and Experiments

This section describes our implementation and reports on our experiments for proving relational properties.

6.1 Implementation

We implemented our approach in the tool Rapid, which consists of nearly 13,000 lines of C++ code. Rapid takes as input a program written in W and a property expressed in trace logic L. It then generates axioms written in trace logic corresponding to the semantics of the program and outputs both the axioms and the property in the smt-lib syntax [4]. The produced smt-lib encoding is further passed within Rapid to the first-order theorem prover Vampire [33] for proving the validity of the property (i.e., partial correctness).

Inductive Reasoning.

Trace logic encodes loop iterations using counters of sort ℕ. Hence, there are consequences of the semantics which can only be derived using inductive reasoning. Automating induction is, however, challenging: state-of-the-art SMT solvers and theorem provers are not able to automatically infer and prove most (inductive) consequences needed by Rapid. In order to address this problem, we (i) identified some of the most important applications of induction that are useful for many programs and (ii) formulated the corresponding inductive properties in trace logic as trace lemmas. Some of these lemmas are described in Section 2 and Section 5. Rapid generates trace lemmas automatically and adds them as axioms to its smt-lib output.

Theory Reasoning.

Reasoning with theories in the presence of quantifiers is yet another challenge for automated reasoners, and hence for Vampire. Different theory encodings lead to very different results. In Rapid, we model integers using Vampire's built-in support for integers. We experimented with various sound but incomplete axiomatizations of integers: we used Vampire with all its built-in theory axioms (option -tha on, the default), as well as with a partial but most relevant set of theory axioms (option -tha some), which we extended with specific integer-theory axioms. Natural numbers are modeled in Rapid as a term algebra, for which efficient reasoning engines already exist [32]. To express the ordering of natural numbers, we manually add an ordering symbol together with an (incomplete) axiomatization. In Rapid, we also experimented with clause splitting by calling Vampire both with and without its Avatar framework [45] (options -av on/off, with on as default).
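Such an incomplete axiomatization of the ordering on the term algebra of naturals could look as follows. This is a sketch only; the symbol name Sub is our placeholder for the ordering symbol, and Rapid's actual axioms may differ:

```latex
% Naturals as a term algebra generated by zero and successor s,
% with a strict ordering Sub and an (incomplete) axiomatization:
\forall x_{\mathbb{N}}.\; Sub(x, s(x))
  % every number precedes its successor
\forall x_{\mathbb{N}}, y_{\mathbb{N}}, z_{\mathbb{N}}.\;
  \big( Sub(x, y) \land Sub(y, z) \rightarrow Sub(x, z) \big)
  % transitivity
\forall x_{\mathbb{N}}.\; \neg Sub(x, zero)
  % zero is minimal
```

Incompleteness is deliberate here: a small set of axioms that suffices for the generated proof obligations keeps the search space of the saturation prover manageable.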

6.2 Benchmarks and Experimental Results

To compensate for the lack of general benchmarks for first-order hyperproperties, we collected a set of 27 verification problems for evaluating our work in Rapid. Our benchmarks describe k-safety properties relevant in the security domain, such as non-interference and sensitivity.

Rapid produced the smt-lib encodings for each benchmark in less than a second. These encodings were passed to Vampire, as well as to the SMT solvers Z3 [21] and CVC4 [3] for comparison, to establish the correctness of the input property. We ran each prover with a 60-second time limit. All experiments were carried out on an Intel Core i5 3.1 GHz machine with 16 GB of RAM.

Our experimental results are summarized in Table 1. The first four columns report results of running Vampire on the Rapid output. The columns denoted S and F refer to the Vampire options for partial and full theory reasoning (option -tha some/on), respectively. A refers to the use of Avatar (option -av on) in conjunction with one of the theory options, hence columns S+A and F+A. The last two columns of Table 1 summarize our results of running Z3 and CVC4 on the Rapid output. The rows denoted Total Vampire and Unique Vampire sum up the total and unique numbers of examples proven with the setting of the corresponding column. The example 4-hw-swap-in-array in Table 1 is our running example from Figure 1, whereas the benchmarks 3-ni-high-guard-equal-branches and 9-ni-equal-output correspond to Figure 1(a) and Figure LABEL:fig:noninterference9, respectively.

Altogether, Vampire proved 25 of the 27 Rapid benchmark encodings. Table 1 shows that the option S+A is the most successful, with four uniquely proven benchmarks. While two of our benchmarks were not proven by Vampire with our current set of automatically generated Rapid trace lemmas, these problems could be proved by Vampire using only a subset of the trace lemmas, i.e., by manually removing unnecessary lemmas. Improving theory reasoning in Vampire, and in superposition proving in general, would further improve the efficiency of Rapid. In particular, designing better reasoning support for transitive relations, such as the orderings on integers and natural numbers, is an interesting further line of research.

Benchmarks (Table 1): 1-hw-equal-arrays; 2-hw-last-position-swapped; 3-hw-swap-and-two-arrays; 4-hw-swap-in-array-lemma; 4-hw-swap-in-array-full; 4-ni-branch-on-high-twice-prop2; 5-ni-temp-impl-flow; 6-ni-branch-assign-equal-val; 8-ni-explicit-flow-while; 9-ni-equal-output; 10-ni-rsa-exponentiation; 2-sens-equal-sums-two-arrays; 3-sens-abs-diff-up-to-k; 4-sens-abs-diff-up-to-k-two-arrays; 5-sens-two-arrays-equal-k; 6-sens-diff-up-to-explicit-k; 7-sens-diff-up-to-explicit-k-sum; 8-sens-explicit-swap; 9-sens-explicit-swap-prop2; 10-sens-equal-k; 11-sens-equal-k-twice; 12-sens-diff-up-to-forall-k.

Total Vampire:  15  18  17  19  (across the four Vampire configurations)
Unique Vampire:  1   4   0   0
Total:          Vampire 25, CVC4 14, Z3 13

Table 1: Rapid results with Vampire, Z3 and CVC4 (benchmark names and totals shown; per-benchmark entries omitted).

We also compared the performance of Vampire on the Rapid examples to that of Z3 and CVC4. Unlike Vampire, Z3 and CVC4 proved only 13 and 14 examples, respectively. Our results thus showcase that superposition reasoning, in particular as implemented in Vampire, is better suited for proving first-order hyperproperties, as many of these properties involve heavy use of quantifiers, including quantifier alternations (as in, for example, 4-hw-swap-in-array corresponding to Figure 1). Moreover, Rapid proved the security of examples that were classified as insecure by existing techniques [27, 38], such as 3-ni-high-guard-equal-branches and 9-ni-equal-output.

7 Related Work

Deductive verification. Most verification approaches use a state-based language to express programs and properties about them, and use invariants to establish program correctness [11]. Such invariants loosely correspond to a fragment of trace logic in which formulas only feature universal quantification over time, but no existential quantification. The lack of existential, and thus alternating, quantification makes these approaches suitable for automation via SMT solving [31, 30] and hence applicable to programs where full first-order logic is not needed, for instance programs involving mainly integer variables and function calls. For program properties expressed in full first-order logic, such as properties over unbounded arrays, existing methods are not yet able to verify program correctness automatically. We argue that missing expressiveness is the problem here: one typically needs to express arbitrary dependencies between timepoints and values whenever custom code iterates through an array or, more generally, through a data structure. Our trace logic supports this kind of first-order reasoning.
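As an illustrative sketch (in our notation, with a hypothetical loop location l and last iteration lastIt), the following property needs existential quantification over loop iterations and therefore does not fit the invariant fragment:

```latex
% "Every array position p below alength is visited by some loop
%  iteration it at which the counter i holds the value p."
\forall p_{\mathbb{Z}}.\;
  \Big( 0 \leq p \land p < alength \;\rightarrow\;
        \exists it_{\mathbb{N}}.\; \big( it < lastIt \land i(l(it)) = p \big) \Big)
```

The inner existential quantifier over the timepoint it is exactly the quantifier alternation that invariant-based, universally quantified encodings cannot express directly.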

Our approach to automate induction using trace lemmas is related to template-based invariant generation methods [16, 29]. Thanks to the expressiveness of trace logic, our trace lemmas are however more general than existing templates. Further, by using superposition-based reasoning over our trace lemmas, we automatically derive specialized lemmas similar to templates.

Program analysis by first-order reasoning is also studied in [24], where program semantics is expressed in extensions of Hoare logic with explicit timepoints. Unlike [24], we do not rely on an intermediate program (Hoare) logic, and we also reason about relational properties. While [24] can only handle simple loops, our work supports a standard while-language with explicit locations and arbitrary nestings of statements.

Relational verification. Verification of relational and hyperproperties is an active area of research, with applications in programming languages and compilers, security and privacy; see [9] for an overview. Various static analysis techniques have been proposed to analyze non-interference, such as type systems [38] and graph dependency analysis [27]. Type systems have also proved effective in the verification of privacy properties for cryptographic protocols [22, 8, 17, 18, 19]. Relational Hoare logic was introduced in [10] and further extended in [5, 6] by defining product programs that reduce relational verification to standard verification. All these works closely tie verification to the syntactic program structure, thus limiting their applicability and expressiveness. As already argued, our work allows proving security of examples that were so far classified as insecure by some of the aforementioned methods [27, 38]. Recently, [28] encoded relational properties through refinement types in F* [44]. While still being syntax-driven, [28] can potentially verify semantic properties by using SMT solving, although this typically requires the manual insertion and proof of program-dependent lemmas, which is not the case with our approach.

In [25], bounded model checking is proposed for program equivalence. Program equivalence is reduced in [23] to proving a set of Horn clauses, by combining a relational weakest-precondition calculus with SMT-based reasoning. However, when addressing programs with different control flow, as in [23], user guidance is required for proving program equivalence. Program equivalence is also studied in [46, 34] for proving information-flow properties. Unlike these works, we are not limited to SMT solving but automate the verification of relational properties expressed in full first-order theories, possibly with quantifier alternations.

Motivated by applications to translation validation, the work of [36] develops powerful techniques for proving correctness of loop transformations. Relational methods for reasoning about program versions and semantic differences are also introduced in [37, 35]. Going beyond relational properties, an SMT-based framework for verifying k-safety properties is introduced in [41] and further extended in [42] for proving correctness of 3-way merge. While these works focus on high-level languages, many others consider low-level languages; see [40, 43, 39, 2] for some exemplary approaches. Further afield, several authors have introduced logics for modelling hyperproperties. Unlike these works, trace logic allows expressing first-order relational properties and automates reasoning about such properties by first-order theorem proving, thus overcoming the SMT-based limitations of quantified reasoning.

Finally, Clarkson et al. [14] introduced HyperLTL and HyperCTL* to model temporal and relational properties in a uniform framework. However, these logics support only decidable fragments of first-order logic and thus cannot handle relational properties with non-constant function symbols. As such, security and privacy properties over unbounded data structures or uninterpreted functions cannot be encoded or verified.

8 Conclusion

We introduced trace logic for automating the verification of relational properties of imperative programs. We showed that program semantics as well as relational properties can naturally be encoded in trace logic as first-order properties over program locations, loop iterations and computation traces. We combined trace logic with superposition proving and implemented our work in the Rapid tool. While our current experiments demonstrate the efficiency and automation of our approach, outperforming SMT solvers, we are convinced that improving superposition reasoning with both theories and quantifiers would further strengthen the use of trace logic for relational verification. We leave this challenge as an interesting line of future work.


This work was funded by the ERC Starting Grant 2014 SYMCAR 639270, the Wallenberg Academy Fellowship 2014 TheProSE, the Austrian FWF research projects W1255-N23 and RiSE S11409-N23, the ERC Consolidator Grant 2018 BROWSEC 771527, by the Netidee projects EtherTrust 2158 and PROFET P31621, and by the FFG projects PR4DLT 13808694 and COMET K1 SBA.


  • [1] T. Amtoft, S. Bandhakavi, and A. Banerjee. A logic for information flow in object-oriented programs. In J. G. Morrisett and S. L. P. Jones, editors, Proceedings of the 33rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2006, Charleston, South Carolina, USA, January 11-13, 2006, pages 91–102. ACM, 2006.
  • [2] M. Balliu, M. Dam, and R. Guanciale. Automating information flow analysis of low level code. In G. Ahn, M. Yung, and N. Li, editors, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, Scottsdale, AZ, USA, November 3-7, 2014, pages 1080–1091. ACM, 2014.
  • [3] C. Barrett, C. L. Conway, M. Deters, L. Hadarean, D. Jovanović, T. King, A. Reynolds, and C. Tinelli. CVC4. In International Conference on Computer Aided Verification, pages 171–177. Springer, 2011.
  • [4] C. Barrett, P. Fontaine, and C. Tinelli. The SMT-LIB standard: Version 2.6. Technical report, Department of Computer Science, The University of Iowa, 2017.
  • [5] G. Barthe, J. M. Crespo, and C. Kunz. Relational verification using product programs. In M. J. Butler and W. Schulte, editors, FM 2011: Formal Methods - 17th International Symposium on Formal Methods, Limerick, Ireland, June 20-24, 2011. Proceedings, volume 6664 of Lecture Notes in Computer Science, pages 200–214. Springer, 2011.
  • [6] G. Barthe, J. M. Crespo, and C. Kunz. Beyond 2-safety: Asymmetric product programs for relational program verification. In S. N. Artëmov and A. Nerode, editors, Logical Foundations of Computer Science, International Symposium, LFCS 2013, San Diego, CA, USA, January 6-8, 2013. Proceedings, volume 7734 of Lecture Notes in Computer Science, pages 29–43. Springer, 2013.
  • [7] G. Barthe, P. R. D’Argenio, and T. Rezk. Secure information flow by self-composition. In 17th IEEE Computer Security Foundations Workshop, (CSFW-17 2004), 28-30 June 2004, Pacific Grove, CA, USA, pages 100–114. IEEE Computer Society, 2004.
  • [8] G. Barthe, C. Fournet, B. Grégoire, P.-Y. Strub, N. Swamy, and S. Zanella-Béguelin. Probabilistic relational verification for cryptographic implementations. In 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL ’14, pages 193–205. ACM, 2014.
  • [9] B. Beckert and M. Ulbrich. Trends in relational program verification. In P. Müller and I. Schaefer, editors, Principled Software Development - Essays Dedicated to Arnd Poetzsch-Heffter on the Occasion of his 60th Birthday, pages 41–58. Springer, 2018.
  • [10] N. Benton. Simple relational correctness proofs for static analyses and program transformations. In N. D. Jones and X. Leroy, editors, Proceedings of the 31st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2004, Venice, Italy, January 14-16, 2004, pages 14–25. ACM, 2004.
  • [11] N. Bjørner, A. Gurfinkel, K. McMillan, and A. Rybalchenko. Horn clause solvers for program verification. In Fields of Logic and Computation II, pages 24–51. Springer, 2015.
  • [12] S. Chaudhuri, S. Gulwani, and R. Lublinerman. Continuity analysis of programs. In M. V. Hermenegildo and J. Palsberg, editors, Proceedings of the 37th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2010, Madrid, Spain, January 17-23, 2010, pages 57–70. ACM, 2010.
  • [13] B. Churchill, O. Padon, R. Sharma, and A. Aiken. Semantic program alignment for equivalence checking. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’19. ACM, 2019.
  • [14] M. R. Clarkson, B. Finkbeiner, M. Koleini, K. K. Micinski, M. N. Rabe, and C. Sánchez. Temporal logics for hyperproperties. In M. Abadi and S. Kremer, editors, Principles of Security and Trust - Third International Conference, POST 2014, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2014, Grenoble, France, April 5-13, 2014, Proceedings, volume 8414 of Lecture Notes in Computer Science, pages 265–284. Springer, 2014.
  • [15] M. R. Clarkson and F. B. Schneider. Hyperproperties. In Proceedings of the 21st IEEE Computer Security Foundations Symposium, CSF 2008, Pittsburgh, Pennsylvania, USA, 23-25 June 2008, pages 51–65. IEEE Computer Society, 2008.
  • [16] M. A. Colón, S. Sankaranarayanan, and H. B. Sipma. Linear invariant generation using non-linear constraint solving. In International Conference on Computer Aided Verification, pages 420–432. Springer, 2003.
  • [17] V. Cortier, F. Eigner, S. Kremer, M. Maffei, and C. Wiedling. Type-based verification of electronic voting protocols. In 4th International Conference on Principles of Security and Trust - Volume 9036, pages 303–323. Springer-Verlag, 2015.
  • [18] V. Cortier, N. Grimm, J. Lallemand, and M. Maffei. A type system for privacy properties. In 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS ’17, pages 409–423. ACM, 2017.
  • [19] V. Cortier, N. Grimm, J. Lallemand, and M. Maffei. Equivalence properties by typing in cryptographic branching protocols. In Principles of Security and Trust (POST’18), pages 160–187. Springer, 2018.
  • [20] Á. Darvas, R. Hähnle, and D. Sands. A theorem proving approach to analysis of secure information flow. In D. Hutter and M. Ullmann, editors, Security in Pervasive Computing, Second International Conference, SPC 2005, Boppard, Germany, April 6-8, 2005, Proceedings, volume 3450 of Lecture Notes in Computer Science, pages 193–209. Springer, 2005.
  • [21] L. De Moura and N. Bjørner. Z3: An efficient SMT solver. In International conference on Tools and Algorithms for the Construction and Analysis of Systems, pages 337–340. Springer, 2008.
  • [22] F. Eigner and M. Maffei. Differential privacy by typing in security protocols. In 2013 IEEE 26th Computer Security Foundations Symposium, CSF ’13, pages 272–286. IEEE Computer Society, 2013.
  • [23] D. Felsing, S. Grebing, V. Klebanov, P. Rümmer, and M. Ulbrich. Automating regression verification. In I. Crnkovic, M. Chechik, and P. Grünbacher, editors, ACM/IEEE International Conference on Automated Software Engineering, ASE ’14, Vasteras, Sweden - September 15 - 19, 2014, pages 349–360. ACM, 2014.
  • [24] B. Gleiss, L. Kovács, and S. Robillard. Loop analysis by quantification over iterations. In G. Barthe, G. Sutcliffe, and M. Veanes, editors, LPAR-22, 22nd International Conference on Logic for Programming, Artificial Intelligence and Reasoning, volume 57 of EPiC Series in Computing, pages 381–399. EasyChair, 2018.
  • [25] B. Godlin and O. Strichman. Regression verification: proving the equivalence of similar programs. Softw. Test., Verif. Reliab., 23(3):241–258, 2013.
  • [26] J. A. Goguen and J. Meseguer. Security policies and security models. In 1982 IEEE Symposium on Security and Privacy, Oakland, CA, USA, April 26-28, 1982, pages 11–20. IEEE Computer Society, 1982.
  • [27] J. Graf, M. Hecker, and M. Mohr. Using joana for information flow control in java programs-a practical guide. Software Engineering 2013-Workshopband, 2013.
  • [28] N. Grimm, K. Maillard, C. Fournet, C. Hriţcu, M. Maffei, J. Protzenko, T. Ramananandro, A. Rastogi, N. Swamy, and S. Zanella-Béguelin. A monadic framework for relational verification: Applied to information security, program equivalence, and optimizations. In 7th ACM SIGPLAN International Conference on Certified Programs and Proofs, CPP 2018, pages 130–145. ACM, 2018.
  • [29] A. Gupta and A. Rybalchenko. InvGen: An efficient invariant generator. In International Conference on Computer Aided Verification, pages 634–640. Springer, 2009.
  • [30] A. Gurfinkel, S. Shoham, and Y. Meshman. Smt-based verification of parameterized systems. In FSE, pages 338–348, 2016.
  • [31] K. Hoder and N. Bjørner. Generalized property directed reachability. In International Conference on Theory and Applications of Satisfiability Testing, pages 157–171. Springer, 2012.
  • [32] L. Kovács, S. Robillard, and A. Voronkov. Coming to terms with quantified reasoning. In POPL, pages 260–270. ACM, 2017.
  • [33] L. Kovács and A. Voronkov. First-order theorem proving and vampire. In International Conference on Computer Aided Verification, pages 1–35. Springer, 2013.
  • [34] H. Kwon, W. Harris, and H. Esmaeilzadeh. Proving flow security of sequential logic via automatically-synthesized relational invariants. In 30th IEEE Computer Security Foundations Symposium, CSF 2017, Santa Barbara, CA, USA, August 21-25, 2017, pages 420–435. IEEE Computer Society, 2017.
  • [35] S. K. Lahiri, C. Hawblitzel, M. Kawaguchi, and H. Rebêlo. SYMDIFF: A language-agnostic semantic diff tool for imperative programs. In P. Madhusudan and S. A. Seshia, editors, Computer Aided Verification - 24th International Conference, CAV 2012, Berkeley, CA, USA, July 7-13, 2012 Proceedings, volume 7358 of Lecture Notes in Computer Science, pages 712–717. Springer, 2012.
  • [36] K. S. Namjoshi and N. Singhania. Loopy: Programmable and formally verified loop transformations. In X. Rival, editor, Static Analysis - 23rd International Symposium, SAS 2016, Edinburgh, UK, September 8-10, 2016, Proceedings, volume 9837 of Lecture Notes in Computer Science, pages 383–402. Springer, 2016.
  • [37] N. Partush and E. Yahav. Abstract semantic differencing via speculative correlation. In A. P. Black and T. D. Millstein, editors, Proceedings of the 2014 ACM International Conference on Object Oriented Programming Systems Languages & Applications, OOPSLA 2014, part of SPLASH 2014, Portland, OR, USA, October 20-24, 2014, pages 811–828. ACM, 2014.
  • [38] A. Sabelfeld and A. C. Myers. Language-based information-flow security. IEEE Journal on selected areas in communications, 21(1):5–19, 2003.
  • [39] R. Sharma, E. Schkufza, B. R. Churchill, and A. Aiken. Data-driven equivalence checking. In A. L. Hosking, P. T. Eugster, and C. V. Lopes, editors, Proceedings of the 2013 ACM SIGPLAN International Conference on Object Oriented Programming Systems Languages & Applications, OOPSLA 2013, part of SPLASH 2013, Indianapolis, IN, USA, October 26-31, 2013, pages 391–406. ACM, 2013.
  • [40] E. W. Smith and D. L. Dill. Automatic formal verification of block cipher implementations. In A. Cimatti and R. B. Jones, editors, Formal Methods in Computer-Aided Design, FMCAD 2008, Portland, Oregon, USA, 17-20 November 2008, pages 1–7. IEEE, 2008.
  • [41] M. Sousa and I. Dillig. Cartesian hoare logic for verifying k-safety properties. In C. Krintz and E. Berger, editors, Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2016, Santa Barbara, CA, USA, June 13-17, 2016, pages 57–69. ACM, 2016.
  • [42] M. Sousa, I. Dillig, and S. K. Lahiri. Verified three-way program merge. PACMPL, 2(OOPSLA):165:1–165:29, 2018.
  • [43] M. Stepp, R. Tate, and S. Lerner. Equality-based translation validator for LLVM. In G. Gopalakrishnan and S. Qadeer, editors, Computer Aided Verification - 23rd International Conference, CAV 2011, Snowbird, UT, USA, July 14-20, 2011. Proceedings, volume 6806 of Lecture Notes in Computer Science, pages 737–742. Springer, 2011.
  • [44] N. Swamy, C. Hritcu, C. Keller, A. Rastogi, A. Delignat-Lavaud, S. Forest, K. Bhargavan, C. Fournet, P. Strub, M. Kohlweiss, J. K. Zinzindohoue, and S. Z. Béguelin. Dependent types and multi-monadic effects in F*. In POPL, pages 256–270, 2016.
  • [45] A. Voronkov. Avatar: The architecture for first-order theorem provers. In Proceedings of the 16th International Conference on Computer Aided Verification-Volume 8559, pages 696–710. Springer-Verlag, 2014.
  • [46] Q. Zhou, D. Heath, and W. Harris. Completely automated equivalence proofs. CoRR, abs/1705.03110, 2017.

Appendix 0.A Appendix

0.a.1 Small-step operational semantics of the while-language

In this subsection, we recall standard definitions from small-step operational semantics.

Definition 2

Let p be a program. Then a state σ is a function which (i) maps each integer variable v of p to a concrete integer value and (ii) maps each array variable v of p and each integer index to an integer value.

Definition 3

A configuration is a pair ⟨p, σ⟩, where we refer to p as the continuation and σ is a state.

The execution of a single step of the program is defined by the rules of Figure 2. Our presentation is semantically equivalent to standard small-step operational semantics, but differs syntactically in three points, in order to simplify later definitions and theorems: (i) program expressions are evaluated on the fly, without introducing explicit steps; (ii) the relation between the state in the original configuration and the state in the resulting configuration is described explicitly using a formula (in contrast to using the same variable twice); and (iii) we annotate while-statements with counters to ensure the uniqueness of continuations during the execution, see Section 0.A.2.



[ite-true]   ⟨if (c) then { p₁ } else { p₂ }; p, σ⟩ ⟹ ⟨p₁; p, σ′⟩,   provided c holds in σ

[ite-false]  ⟨if (c) then { p₁ } else { p₂ }; p, σ⟩ ⟹ ⟨p₂; p, σ′⟩,   provided c does not hold in σ

[while]      ⟨while^i (c) do { p₁ }; p, σ⟩ ⟹ ⟨p₁; while^(i+1) (c) do { p₁ }; p, σ′⟩,   provided c holds in σ

In each rule, the relation between σ and σ′ is given explicitly as a formula; for the rules above, σ′ agrees with σ on all variables.

Figure 2: Small-step operational semantics of the while-language (if- and while-rules shown)

A program is executed by iteratively transforming the initial configuration according to the rules of Figure 2 until the continuation becomes empty. We annotate each while-statement in the initial configuration of the execution with counter 0:

Definition 4

Let p be a program, let p′ be the result of annotating each while-loop in p with counter 0, and let σ be an arbitrary state. Then ⟨p′, σ⟩ is called an initial configuration.

Definition 5

Let p be a program and let C, C′ be configurations. A partial execution from C to C′ is a derivation in the inference system of small-step operational semantics starting at C and ending in C′. An execution of p is a partial execution from an initial configuration of p to a configuration with empty continuation, for an arbitrary state. If there exists a partial execution starting at the initial configuration and ending in C, we say that C is reachable.
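The definitions above can be sketched as an executable interpreter. This is a minimal illustration, not the paper's implementation: the tuple encoding of statements and the helper names step and execute are our own, but the transition rules mirror Figure 2, including the while-counter annotation that makes continuations unique.

```python
# Statements: ("skip",), ("asg", var, expr), ("ite", cond, p1, p2),
# ("while", i, cond, body). A continuation is a list of statements;
# a configuration is a (continuation, state) pair.

def step(cont, state):
    """One small-step transition <cont, state> => <cont', state'>."""
    stmt, rest = cont[0], cont[1:]
    kind = stmt[0]
    if kind == "skip":
        return rest, state
    if kind == "asg":
        _, var, expr = stmt
        new_state = dict(state)
        new_state[var] = expr(state)  # expressions evaluated on the fly
        return rest, new_state
    if kind == "ite":
        _, cond, p1, p2 = stmt
        return (p1 if cond(state) else p2) + rest, state
    if kind == "while":
        _, i, cond, body = stmt
        if cond(state):
            # unfold one iteration and bump the loop counter i -> i+1,
            # so no continuation repeats within an execution
            return body + [("while", i + 1, cond, body)] + rest, state
        return rest, state
    raise ValueError(f"unknown statement kind: {kind}")

def execute(prog, state):
    """Run from the initial configuration until the continuation is empty."""
    cont = prog  # loops in prog are assumed annotated with counter 0
    while cont:
        cont, state = step(cont, state)
    return state

# Example: sum the naturals below n with a while-loop.
prog = [
    ("asg", "x", lambda s: 0),
    ("asg", "i", lambda s: 0),
    ("while", 0, lambda s: s["i"] < s["n"], [
        ("asg", "x", lambda s: s["x"] + s["i"]),
        ("asg", "i", lambda s: s["i"] + 1),
    ]),
]
final = execute(prog, {"n": 5})
# final["x"] == 0 + 1 + 2 + 3 + 4 == 10
```

Because each unfolding of a loop increments its counter, the continuation component alone distinguishes all configurations of an execution, which is exactly the uniqueness property established in Theorem 0.A.1.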

0.a.2 Separating subprograms and state

Our presentation of operational semantics features counters. We now show that, as a result, any two distinct configurations occurring in the same execution have different continuations. This implies that we do not need to know the state to distinguish different configurations, and it allows us to separate the continuation from the state.

Theorem 0.A.1 (Uniqueness)

Let p be a program and let C₁ and C₂ be distinct configurations occurring in an execution of p. Then the continuations of C₁ and C₂ are different.


Let p₁, p₂ be subprograms, let s be a single statement and let c be a condition. Consider the minimal relation satisfying the following conditions, and consider its transitive closure.

It is an easy exercise to establish that this transitive closure is a strict partial order on continuations. Next, each rule of the semantics strictly reduces the continuation according to one of the conditions above. In particular, every step of an execution strictly decreases the continuation in this ordering, which immediately implies the claim due to the irreflexivity of a strict partial order.

Having established uniqueness, we are now able to speak of the state at a given continuation. As a result, a configuration is fully described by its continuation; we therefore omit the state and denote a configuration simply by its continuation. Finally, we use the fact that there are finitely many program variables and split the state into one function per variable.

0.a.3 Mapping timepoints to continuations

Small-step operational semantics describes only the next step in an execution, whereas structural semantics, and trace logic semantics in particular, also describes the complete execution of each substatement.

Recall that the definitions from Section 4.1 describe the timepoints of the start and the end, respectively, of a partial execution of a statement. To connect the two worlds of operational and structural semantics, we provide a mapping from such timepoints to continuations:

Definition 6

The mapping from a timepoint to a continuation is defined by case distinction: one case applies if the corresponding statement is a non-loop statement, and one case applies if it is a loop.

We are now able to describe configurations using this mapping. In particular, we are able to instantiate each rule to a new rule whose configurations can be described using the mapping. The instantiated rules produce the same reachable configurations as the original rules, and are presented in Figure 3.

Let s be a skip-statement; instantiating the skip-rule with s yields the rule [skip]. Let s be an assignment v = e; instantiating the assignment rule with s yields the rule [asg].

Let and be s$_1;\ldots;_k$ resp. ss and let s be