
Behavioural Equivalence via Modalities for Algebraic Effects

The paper investigates behavioural equivalence between programs in a call-by-value functional language extended with a signature of (algebraic) effect-triggering operations. Two programs are considered behaviourally equivalent if they enjoy the same behavioural properties. To formulate this, we define a logic whose formulas specify behavioural properties. A crucial ingredient is a collection of modalities expressing effect-specific aspects of behaviour. We give a general theory of such modalities. If two conditions, openness and decomposability, are satisfied by the modalities then the logically specified behavioural equivalence coincides with a modality-defined notion of applicative bisimilarity, which can be proven to be a congruence by a generalisation of Howe's method. We show that the openness and decomposability conditions hold for several examples of algebraic effects: nondeterminism, probabilistic choice, global store and input/output.


1 Introduction

Many tasks in software development and analysis rely on abstracting away from program syntax to an appropriate notion of program behaviour. For example, the goal of specification is to specify (constraints on) the behaviour of a program. Similarly, verification concerns validating that a program indeed exhibits the behaviour specified. Closely associated with the general concept of behaviour is the related concept of behavioural equivalence, under which two programs are deemed equivalent if they exhibit the same behaviour.

Studying the interrelated concepts of behaviour and equivalence is important from both theoretical and practical perspectives. On the theoretical side, these are fundamental notions, whose understanding sheds light not just on any particular programming language under consideration, but more generally on the question of how to mathematically model the process of computation. On the practical side, the engineering tasks of program specification, verification and synthesis all depend on having a precise mathematical model of program behaviour; and notions of program equivalence play a key role in applications, such as compiler optimisation, that involve program transformation.

For applications such as those described above to be possible, it is crucial that the mathematical notion of behaviour is appropriately chosen. For example, in the case of compiler transformations, it is essential that aspects of computational behaviour, such as execution time, that one specifically does not want preserved, are ignored. Whereas, for other applications, for example ones in which resources need to be quantified, it may be important to have a notion of behaviour in which execution time is taken into account. In general, therefore, there is no single all-encompassing approach to defining behaviour and equivalence. Nonetheless, as general desirability criteria, one would like to have definitions that are, on the one hand, mathematically natural and convenient to work with and, on the other, suitable for practical applications.

In the present paper, we explore and relate two complementary methodologies for defining notions of behaviour and equivalence, within a particular programming context: call-by-value functional programming with effects. The methodologies themselves make sense, however, within a more-or-less arbitrary programming context, so we introduce and motivate them at this greater generality.

Behavioural logic

The first methodology is to specify program behaviour via a formal logic, which we call a behavioural logic, whose formulas are constructed using operators that express primitive properties of program behaviour. Mathematically, one defines a satisfaction relation P ⊨ φ, expressing that program P satisfies behavioural property φ. The idea is that the logic should be designed in such a way that its formulas are capable of expressing all properties of programs that are bona fide properties of program behaviour (as opposed to, for example, properties of program syntax).

Given such a program logic, one derives a corresponding notion of behavioural equivalence. Two programs P and Q are said to be logically equivalent if, for all formulas φ, it holds that P ⊨ φ iff Q ⊨ φ; that is, if they exhibit the same behaviour.

Bisimilarity

It is a remarkably general phenomenon that, in numerous computation contexts, program behaviour can be modelled as an interactive process, leading to a natural coinductive definition of program equivalence as bisimilarity: roughly speaking, the largest (equivalence) relation that relates interaction points only if their local behaviour is indistinguishable modulo the relation.

Having given such a mathematical definition of bisimilarity, one can derive an associated notion of behavioural property. Namely, a property of programs is behavioural if it respects bisimilarity; that is, whenever a program P satisfies the property, then so does any program Q that is bisimilar to P.

The above complementary approaches to defining behaviour and equivalence have been particularly prominent in concurrency theory. Indeed, the idea of bisimilarity as a notion of behavioural equivalence between systems was first introduced in that context, in the work of Milner and Park [Milner82, Park81]. The logical approach to defining behaviour emerged around the same time, with the characterisation, by Hennessy and Milner, of bisimilarity as the behavioural equivalence induced by an infinitary propositional modal logic, now known as Hennessy-Milner logic [Henessy85].

In the case of bisimilarity, Abramsky realised that a similar style of definition generalises to other programming contexts. In particular, he developed the notion of applicative bisimilarity for functional languages [Abramsky90]. Subsequently, numerous variant notions of bisimilarity have been given across a plethora of computational contexts (for example, [Sangiorgi_book, Sangiorgi:2011, Bisim_object, Lassen99]). To highlight one recent example, which is important for the present paper, Dal Lago, Gavazzo and Levy have provided a uniform generalisation of applicative bisimilarity to a functional programming language with effects [Relational].

A major goal of the present paper is to show that the logical approach to defining program behaviour can also be adapted very naturally to the context of functional programming languages with effects. In doing so, we establish that the corresponding behavioural equivalence coincides with effectful applicative bisimilarity in the style of Dal Lago et al. [Relational].

More precisely, we consider a typed call-by-value functional programming language with algebraic effects in the sense of Plotkin and Power [effect]. Broadly speaking, effects are those aspects of computation that involve a program interacting with its ‘environment’; for example: nondeterminism, probabilistic choice (in both cases, the choice is deferred to the environment); input/output; mutable store (the machine state is modified); control operations such as exceptions, jumps and handlers (which interact with the continuation in the evaluation process); etc. Such general effects collectively enjoy common properties identified in the work of Moggi on monads [monad]. Among them, algebraic effects play a special role. They can be included in a programming language by adding effect-triggering operations, whose ‘algebraic’ nature means that effects act independently of the continuation. From the aforementioned examples of effects, only jumps and handlers are non-algebraic. Thus the notion of algebraic effect covers a broad range of effectful computational behaviour. Call-by-value functional languages provide a natural context for exploring effectful programming. From a theoretical viewpoint, other programming paradigms are subsumed; for example, imperative programs can be recast as effectful functional ones. From a practical viewpoint, the combination of effects with call-by-value leads to the natural programming style supported by impure functional languages such as OCaml.

In order to focus on the main contributions of the paper (the behavioural logic and its induced behavioural equivalence), we instantiate “call-by-value functional language with algebraic effects” using a very simple language. Our language is a simply-typed λ-calculus with a base type of natural numbers, general recursion, call-by-value function evaluation, and algebraic effects. That is, it is a call-by-value version of PCF [PCF], extended with effects. A very similar language is used by Plotkin and Power [effect]; although, for technical convenience, we adopt an alternative (but equivalent) formulation using fine-grained call-by-value [CBV]. The language is defined precisely in Section 2, using an operational semantics that evaluates programs to effect trees [effect, op_meta].

Section 3 introduces the behavioural logic. In our impure functional setting, the evaluation of a program of type τ results in a computational process that may or may not invoke effects, and which may or may not terminate with a return value of type τ. The key ingredient in our logic is an effect-specific family of modalities, where each modality converts a property of values of some given type τ into a property of general programs (called computations) of type τ. The idea is that such modalities capture all relevant effect-specific behavioural properties of the effects under consideration. For example, in the context of probabilistic computation, we have a modality P_{>q} for every rational q ∈ [0, 1), where the formula P_{>q} φ is satisfied by a computation if the evaluation of the computation has probability greater than q of terminating with a return value satisfying φ.

A main contribution of the paper is to give a general framework for defining such effect modalities, applicable across a wide range of algebraic effects. The technical setting for this is that we have a signature of effect operations, which determines the programming language, and a collection of modalities, which determines the behavioural logic. In order to specify the semantics of the logic, we require each modality to be assigned a set of unit-type effect trees, which defines the meaning of the modality. For example, the modality P_{>q} is specified by the set of effect trees that, considered as Markov chains, have probability greater than q of terminating. Several further examples and a detailed general explanation are given in Section 3.
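To make the tree-set reading of the probabilistic modality concrete, the Markov-chain view can be sketched as follows. This is an informal illustration, not part of the formal development; the representation, the restriction to finite trees, and the names (`prob_sat`, `p_greater`) are ours.

```python
from dataclasses import dataclass
from fractions import Fraction
from typing import Callable, Union

@dataclass
class Leaf:
    value: int          # a terminating leaf carrying a return value

@dataclass
class Bot:
    pass                # the nontermination leaf

@dataclass
class POr:
    left: "Tree"        # a fair binary probabilistic choice node
    right: "Tree"

Tree = Union[Leaf, Bot, POr]

def prob_sat(phi: Callable[[int], bool], t: Tree) -> Fraction:
    """Probability that evaluation terminates with a value satisfying phi,
    reading each choice node as a fair coin flip (the Markov-chain view)."""
    if isinstance(t, Bot):
        return Fraction(0)
    if isinstance(t, Leaf):
        return Fraction(1) if phi(t.value) else Fraction(0)
    return (prob_sat(phi, t.left) + prob_sat(phi, t.right)) / 2

def p_greater(q: Fraction, phi: Callable[[int], bool], t: Tree) -> bool:
    """The modality P_{>q}: a tree satisfies P_{>q} phi iff its probability
    of terminating with a value satisfying phi strictly exceeds q."""
    return prob_sat(phi, t) > q

# Flip twice; three outcomes return a value, one diverges.
example = POr(POr(Leaf(0), Bot()), POr(Leaf(1), Leaf(2)))
```

Here `prob_sat(lambda n: True, example)` is 3/4, so the example tree satisfies P_{>1/2} of the trivially true value formula but not P_{>3/4} of it.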

In Section 4, we consider the relation of behavioural equivalence between programs determined by the logic. This equivalence directly relates to the notion of behaviour that is made explicit by the modalities. A fundamental well-behavedness property that any reasonable program equivalence should enjoy is that it should be a congruence with respect to the syntactic constructs of the programming language. (If an equivalence is not a congruence then there are aspects of program behaviour that are not distinguished by the equivalence but which can be separated within the programming language itself.) The major result of the paper about the logically induced behavioural equivalence is that it is indeed a congruence, as long as two conditions, openness and decomposability, hold of the collection of modalities (Theorem 1). These two conditions do indeed hold for the natural sets of modalities associated with the principal examples of algebraic effects.

As a second indication of the reasonableness of the logically defined behavioural equivalence, we establish that it has an alternative characterisation as an effect-sensitive version of Abramsky’s notion of applicative bisimilarity [Abramsky90]. This is achieved, in Section 5, by using the modalities in a logic-free context to define a relation of applicative 𝒪-bisimilarity, where 𝒪 is the given collection of modalities. The resulting equivalence relation is closely related to the effect-sensitive version of bisimilarity developed by Dal Lago et al. [Relational] for an untyped language with general algebraic effects. Our use of modalities as a uniform method for generating the bisimilarity relation is, however, novel. Theorem 2 shows that applicative 𝒪-bisimilarity coincides with the logically defined relation of behavioural equivalence.

In addition to its conceptual value, establishing the coincidence of the logically defined equivalence with applicative 𝒪-bisimilarity also serves an important technical purpose: this result is used crucially in the proof of Theorem 1. We adapt the well-known proof method of Howe [How89, How] to show that applicative 𝒪-bisimilarity is a congruence. By the coincidence theorem, this means that the logically defined equivalence is indeed a congruence, establishing Theorem 1. Our adaptation of Howe’s method is presented in Section 6. Although the argument is technically involved, a similar adaptation of Howe’s method to a language with algebraic effects has been previously given by Dal Lago et al. [Relational]. Accordingly, this proof is not a main contribution of the present paper, and we give only an outline argument in the main body of the paper, with details deferred to Appendix A.

In the discussion thus far we have ignored one nuance within the development of Sections 3–6. In addition to working with the full behavioural logic, we also identify a positive fragment of the logic, which omits negation. Whereas the full logic defines an equivalence relation that coincides with applicative bisimilarity, the positive logic defines a preorder between programs that coincides with the coarser relation of applicative similarity, from which it follows that the equivalence relation determined by the positive logic coincides with mutual applicative similarity. In Section 7 we compare the above equivalence relations and preorders, and we relate them to contextual preorder and equivalence, which can also be naturally defined in terms of the set of modalities. In general, applicative bisimilarity is contained in mutual applicative similarity which is contained in contextual equivalence. For some effects, however, these inclusions are strict. For example, as has been shown by Lassen [Lassen] and Ong [Ong], there are examples involving nondeterminism that separate the equivalences. For the sake of completeness, we recall such separating examples in Section 7, with the novelty that we use the logical characterisations of bisimilarity and mutual similarity as a tool for proving and disproving the equivalences in question.

In cases in which the equivalences differ, there is a question of which (if any) of the equivalences should be taken as being the most fundamental. In the literature, contextual equivalence is often taken as the equivalence of choice for applicative languages. From this viewpoint, bisimilarity and mutual similarity are considered important for providing sound (but incomplete) proof methods for reasoning about contextual equivalence, which is the relation of ultimate interest. We would like to argue instead that the logical viewpoint makes a strong case for considering bisimilarity as being the primary equivalence. Every formula in the logic in this paper states a meaningful property of program behaviour. It is a truism that, when two programs are not bisimilar, there is some property of behaviour, expressible in the logic, that distinguishes them. If one takes the logic seriously, then the desirable property that P ≡ Q and P ⊨ Φ together imply Q ⊨ Φ holds only in the case that the equivalence relation ≡ is bisimilarity, because of its characterisation as the logically induced equivalence.

The above (almost trivial) argument of course relies on accepting arbitrary formulas as expressing behaviourally meaningful properties. Readers can read Section 3 and make up their own minds about this. Alternatively, one might take a more pragmatic viewpoint. Another reason for accepting a logic for expressing properties of programs is if it provides the expressivity needed to formulate proof principles for reasoning about programs. In Section 9, we show that our logic does indeed support such principles. Furthermore, they arise in the form of compositional proof rules that allow properties of a compound program to be proved by establishing appropriate properties of its constituent subprograms. In order to achieve this, we provide, in Section 8, a reformulation of our behavioural logic, which is of interest in its own right. This reformulation enjoys the property that the syntax of the logic is independent of the syntax of the programming language, which is not true of the logic of Section 3. This property is appealing, as one would like to be able to specify behaviour without knowing the syntax in which programs are written. More practically, the reformulation is used to formulate, in Section 9, the compositional proof principles referred to above.

There is one significant qualification to add to this perspective. The infinitary propositional logics considered in the present paper are by no means suitable for serving as practical logics for specification and verification. Nevertheless, we view these logics as relevant to the development of more practical non-propositional but finitary logics, as they can potentially act as low-level ‘target’ logics into which high-level practical logics can be ‘compiled’. We elaborate further on this point in Section 9. However, the development of practical logics is left as a topic for future work.

Finally, in Section 10 we discuss other related and further work.

This paper is an extended and revised version of a conference paper [ESOP], presented at the 27th European Symposium on Programming (ESOP). One principal difference from the conference version is that full proofs are included. Other extensions include: normal form results for the logic (Propositions 4.5 and 4.6); a significantly expanded presentation of the crucial decomposability properties in Section 4; explicit descriptions of the relators determined by our running examples in Section 5; the comparison between bisimilarity, similarity and contextual equivalence in Section 7; and the discussion of compositional reasoning principles in Section 9.

2 A simple programming language

As motivated in the introduction, our chosen base language is a simply-typed call-by-value functional language with general recursion and a ground type of natural numbers, to which we add (algebraic) effect-triggering operations. This means that our language is a call-by-value variant of PCF [PCF], extended with algebraic effects, resulting in a language similar to the one considered in Plotkin and Power [effect]. In order to simplify the technical treatment of the language, we present it in the style of fine-grained call-by-value [CBV]. This means that we make a syntactic distinction between values and computations, separating the static and dynamic aspects of the language respectively. Furthermore, all sequencing of computations is performed using a single language construct, the let construct. The resulting language is straightforwardly intertranslatable with the more traditional call-by-value formulation. But the encapsulation of all sequencing within a single construct has the benefit of avoiding redundancy in proofs.

Our types are just the simple types obtained by iterating the function type construction over two base types: a type N of natural numbers, and a unit type 1.

Types:  τ, ρ ::= 1 | N | τ → ρ

Contexts:  Γ ::= ∅ | Γ, x : τ

As usual, term variables x are taken from a countably-infinite stock of such variables, and the context Γ, x : τ can only be formed if the variable x does not already appear in Γ.

As discussed above, program terms are separated into two mutually defined but disjoint categories: values and computations.

Values:  V, W ::= ⋆ | Z | S(V) | λx.M | x

Computations:  M, N ::= V W | return V | let x = M in N | fix(V) | case V of {Z ⇒ M; S(x) ⇒ N}

Here, ⋆ is the unique value of the unit type. The values of the type of natural numbers are the numerals, represented using zero Z and successor S. The values of function types are the λ-abstractions. And a variable can be considered a value, because, under the call-by-value evaluation strategy of the language, it can only be instantiated with a value.

The computations are: function application V W; the computation return V, which does nothing but return the value V; a let construct for sequencing; a fix construct for recursive definition (the fix term is a computation, because it needs to be evaluated); and a case construct that branches according to whether its natural-number argument is zero or positive. The computation ‘let x = M in N’ implements sequencing in the following sense. First the computation M is evaluated. Only in the case that the evaluation of M terminates, with return value V, does the thread of execution continue to N[V/x]. In this case, the computation N[V/x] is evaluated, and its return value (if any) is then returned as the result of the let computation.
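The value/computation split and the sequencing role of let can be illustrated with an informal sketch of the pure fragment. The rendering below is ours, not the paper's formal definition: it is untyped, represents binders by host-language functions (avoiding an explicit substitution operation), and uses meta-level recursion in place of the object-level fix construct.

```python
from dataclasses import dataclass
from typing import Callable, Union

# Values: unit, numerals built from zero and successor, lambda-abstractions.
@dataclass
class Star:
    pass

@dataclass
class Z:
    pass

@dataclass
class S:
    pred: "Val"

@dataclass
class Lam:
    body: Callable[["Val"], "Comp"]   # binder as a host-language function

Val = Union[Star, Z, S, Lam]

# Computations: application, return, let-sequencing, case analysis.
@dataclass
class App:
    fun: Val
    arg: Val

@dataclass
class Return:
    val: Val

@dataclass
class Let:
    comp: "Comp"
    cont: Callable[[Val], "Comp"]

@dataclass
class Case:
    scrut: Val
    zero: "Comp"
    succ: Callable[[Val], "Comp"]

Comp = Union[App, Return, Let, Case]

def ev(c: Comp) -> Val:
    """Evaluate a closed pure computation to a value (may fail to terminate)."""
    if isinstance(c, Return):
        return c.val
    if isinstance(c, App):
        assert isinstance(c.fun, Lam), "ill-typed application"
        return ev(c.fun.body(c.arg))
    if isinstance(c, Let):
        # let x = M in N: evaluate M first, then continue with its value
        return ev(c.cont(ev(c.comp)))
    if isinstance(c, Case):
        if isinstance(c.scrut, Z):
            return ev(c.zero)
        assert isinstance(c.scrut, S), "ill-typed case"
        return ev(c.succ(c.scrut.pred))
    raise TypeError("not a computation")

def plus_body(m: Val, n: Val) -> Comp:
    # meta-level recursion standing in for the object-level fix construct
    if isinstance(m, Z):
        return Return(n)
    return Let(plus_body(m.pred, n), lambda r: Return(S(r)))

plus = Lam(lambda m: Return(Lam(lambda n: plus_body(m, n))))

def from_int(k: int) -> Val:
    return Z() if k == 0 else S(from_int(k - 1))

def to_int(v: Val) -> int:
    return 0 if isinstance(v, Z) else 1 + to_int(v.pred)
```

Evaluating let f = plus 2 in f 3, i.e. `ev(Let(App(plus, from_int(2)), lambda f: App(f, from_int(3))))`, yields the numeral for 5; note that the `Let` constructor is the only point at which computations are sequenced.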

To the pure functional language described above, we add effect operations. The collection of effect operations is specified by a set Σ (the signature) of such operations, together with, for each op ∈ Σ, an associated arity, which takes one of the four forms below. (These four forms of arity suffice for the examples we consider; similar choices were made in [op_meta, Plotkin:2009]. Going beyond such discrete arities is an interesting direction for future research; see Section 10.)

α^n → α (n ∈ ℕ)      α^ℕ → α      ℕ × α^n → α (n ∈ ℕ)      ℕ × α^ℕ → α

The notation here is chosen to be suggestive of the way in which such arities are used in the typing rules of Fig. 1, viewing α as a type variable. Each of the four forms of arity has an associated term constructor, for building additional computation terms, with which we extend the above grammar for computation terms.

Computations:  M ::= ⋯ | op(M₁, …, Mₙ) | op(V) | op(V; M₁, …, Mₙ) | op(V; W)

(Here the applicable form of constructor is determined by the arity of op, with V a numeral parameter and W a function providing ℕ-many continuations; see Fig. 1.) Motivating examples of effect operations and their computation terms can be found in Examples 2–5 below.

The typing rules for the language are given in Figure 1. Note that the choice of typing rule for an effect operation depends on its declared arity.

Figure 1: Typing rules

The terms of type τ are the values and computations of that type generated by the constructors above; every term has a unique aspect as either a value or a computation. We write Val(τ) and Com(τ) respectively for the sets of closed values and closed computations of type τ, so the closed terms of type τ are Val(τ) ∪ Com(τ). For a natural number n, we write n̄ for the numeral Sⁿ(Z); thus n̄ ∈ Val(N).

We now consider some illustrative signatures of computationally interesting effect operations, which will be used as running examples throughout the paper. (We use the same examples as in Johann et al. [op_meta].)

Example 0 (Pure functional computation).

This is the trivial case (from an effect point of view) in which the signature of effect operations is empty. The resulting language is a fine-grained call-by-value variant of PCF [PCF].

Example 1 (Error).

We take a set E of error labels. For each e ∈ E there is a 0-ary effect operation raise_e which, when invoked, aborts evaluation and outputs ‘e’ as an error message.

Example 2 (Nondeterminism).

There is a binary choice operation or, which gives two options for continuing the computation. The choice of continuation is under the control of some external agent, which one may wish to model as being cooperative (angelic), antagonistic (demonic), or neutral.
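The angelic and demonic readings suggest two different ways of turning a property of values into a property of nondeterministic computations, a 'may' reading and a 'must' reading; modalities of this kind reappear in Section 3. The following informal sketch over finite choice trees is ours:

```python
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Leaf:
    value: int

@dataclass
class Bot:
    pass

@dataclass
class Or:
    left: "NTree"
    right: "NTree"

NTree = Union[Leaf, Bot, Or]

def may(phi: Callable[[int], bool], t: NTree) -> bool:
    """Angelic reading: some resolution of the choices terminates
    with a value satisfying phi."""
    if isinstance(t, Bot):
        return False
    if isinstance(t, Leaf):
        return phi(t.value)
    return may(phi, t.left) or may(phi, t.right)

def must(phi: Callable[[int], bool], t: NTree) -> bool:
    """Demonic reading: every resolution terminates with a value
    satisfying phi (a nontermination leaf refutes this)."""
    if isinstance(t, Bot):
        return False
    if isinstance(t, Leaf):
        return phi(t.value)
    return must(phi, t.left) and must(phi, t.right)

# One resolution returns 1, another returns 2, a third diverges.
example = Or(Leaf(1), Or(Leaf(2), Bot()))
```

Under the angelic reading the example may return an even number, while under the demonic reading it is not guaranteed to terminate at all, because of the divergent branch.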

Example 3 (Probabilistic choice).

Again there is a single binary choice operation, which gives two options for continuing the computation. In this case, the choice of continuation is probabilistic, with each option chosen with probability ½. Other weighted probabilistic choices can be programmed in terms of this fair choice operation.

Example 4 (Global store).

We take a set L of locations for storing natural numbers. For each l ∈ L we have operations lookup_l and update_l. The computation lookup_l(V) looks up the number at location l and passes it as an argument to the function V; the computation update_l(m; M) stores m at location l and then continues with the computation M.

Example 5 (Input/output).

Here we have two operations: read, which reads a number from an input channel and passes it as the argument to a function; and write, which outputs a number (its first argument) and then continues with the computation given as its second argument.

We next present an operational semantics for our language, under which a computation term evaluates to an effect tree: essentially, a coinductively generated term built from the operations in Σ, with values and ⊥ (nontermination) as the generators. This idea appears in Plotkin and Power [effect], and our technical treatment follows the approach of Johann et al. [op_meta], adapted to (fine-grained) call-by-value.

We define a single-step reduction relation between configurations consisting of a stack S and a computation M. The computation M is the term under current evaluation. The stack S represents a continuation computation awaiting the termination of M. First, we define a stack-independent reduction relation on computation terms that do not involve let at the top level.

The behaviour of let is implemented using a system of stacks, where:

Stacks:  S ::= id | S ∘ (let x = (−) in N)

We write S{M} for the computation term obtained by ‘applying’ the stack S to the computation M, defined by:

id{M} = M        (S ∘ (let x = (−) in N)){M} = S{let x = M in N}

We write Stack(τ, ρ) for the set of stacks S such that, for any M ∈ Com(τ), the term S{M} is a well-typed computation of type ρ. We define a reduction relation on pairs (S, M) (denoted (S, M) ↝ (S′, M′)) by:

(S, let x = M in N) ↝ (S ∘ (let x = (−) in N), M)

(S ∘ (let x = (−) in N), return V) ↝ (S, N[V/x])

(S, M) ↝ (S, M′)   if M → M′

We define the notion of effect tree for an arbitrary set X, where X is thought of as a set of abstract ‘values’.

Definition 2.1.

An effect tree (henceforth tree) over a set X, determined by a signature Σ of effect operations, is a labelled and possibly infinite tree whose nodes have the possible forms:

  1. A leaf node labelled ⊥ (the symbol for nontermination).

  2. A leaf node labelled x, where x ∈ X.

  3. A node labelled op with children t₁, …, tₙ, when op ∈ Σ has arity α^n → α.

  4. A node labelled op with children t₀, t₁, t₂, …, when op ∈ Σ has arity α^ℕ → α.

  5. A node labelled op(m), where m ∈ ℕ, with children t₁, …, tₙ, when op ∈ Σ has arity ℕ × α^n → α.

  6. A node labelled op(m), where m ∈ ℕ, with children t₀, t₁, t₂, …, when op ∈ Σ has arity ℕ × α^ℕ → α.

See Examples 2 and 4 later on in this section for examples of effect trees.

We write T(X) for the set of trees over X. We define a partial ordering ≤ on T(X), where t₁ ≤ t₂ if t₁ can be obtained from t₂ by pruning: removing a possibly infinite number of subtrees of t₂ and putting leaf nodes labelled ⊥ in their place. This forms an ω-complete partial order, meaning that every ascending sequence t₁ ≤ t₂ ≤ ⋯ has a least upper bound ⨆ᵢ tᵢ. Writing T(Val(τ)) for trees over the closed values of type τ, we will define an evaluation relation from computations to such trees of values.
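The pruning order and its ascending chains can be illustrated concretely. In the informal sketch below (finite trees only, names ours), `truncate(n, t)` is the depth-n approximant of t, and these approximants form an ascending chain whose least upper bound is t:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Bot:
    pass

@dataclass
class Leaf:
    value: object

@dataclass
class Node:
    op: str
    children: List["Tree"]

Tree = Union[Bot, Leaf, Node]

def leq(a: Tree, b: Tree) -> bool:
    """a <= b in the pruning order: a is obtained from b by replacing
    some subtrees of b with nontermination leaves."""
    if isinstance(a, Bot):
        return True
    if isinstance(a, Leaf):
        return a == b
    return (isinstance(b, Node) and a.op == b.op
            and len(a.children) == len(b.children)
            and all(leq(x, y) for x, y in zip(a.children, b.children)))

def truncate(n: int, t: Tree) -> Tree:
    """The depth-n approximant of t: prune everything below depth n."""
    if n == 0:
        return Bot()
    if isinstance(t, Node):
        return Node(t.op, [truncate(n - 1, c) for c in t.children])
    return t

t = Node("or", [Leaf(1), Node("or", [Leaf(2), Bot()])])
```

For this finite t the chain truncate(0, t) ≤ truncate(1, t) ≤ … reaches t itself at depth 3.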

Given a function f : X → Y and a tree t ∈ T(X), we write T(f)(t) (or simply f(t)) for the tree whose leaves x are renamed to f(x). We have a function μ : T(T(X)) → T(X), which takes a tree of trees t and flattens it to a tree μ(t), by taking the tree labelling each non-⊥ leaf of t to be the subtree rooted at the corresponding node in μ(t). The function μ is the multiplication associated with the monad structure of the T(−) operation. The unit of the monad is the map η : X → T(X) which takes an element x ∈ X and returns the leaf labelled x qua tree.
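The monad structure can be sketched informally as follows: `tmap` is the functorial renaming of leaves, `eta` the unit, `mu` the flattening, and `bind` the derived Kleisli extension. Finite lists of children only approximate the ℕ-ary case, and all names are ours.

```python
from dataclasses import dataclass
from typing import Callable, List, Union

@dataclass
class Bot:
    pass

@dataclass
class Leaf:
    value: object

@dataclass
class Node:
    op: str
    children: List["Tree"]

Tree = Union[Bot, Leaf, Node]

def tmap(f: Callable, t: Tree) -> Tree:
    """The functorial action T(f): rename the value leaves of t by f."""
    if isinstance(t, Bot):
        return t
    if isinstance(t, Leaf):
        return Leaf(f(t.value))
    return Node(t.op, [tmap(f, c) for c in t.children])

def eta(x) -> Tree:
    """The monad unit: a value qua one-leaf tree."""
    return Leaf(x)

def mu(t: Tree) -> Tree:
    """The monad multiplication: flatten a tree of trees by grafting the
    tree labelling each non-bottom leaf in place of that leaf."""
    if isinstance(t, Bot):
        return t
    if isinstance(t, Leaf):
        return t.value          # the leaf is labelled by a tree; graft it
    return Node(t.op, [mu(c) for c in t.children])

def bind(t: Tree, k: Callable) -> Tree:
    """The derived Kleisli extension: run t, then continue at each leaf."""
    return mu(tmap(k, t))

t1 = Node("or", [Leaf(1), Bot()])
```

Here `bind(t1, lambda n: Node("or", [Leaf(n), Leaf(n + 1)]))` grafts a further choice below the terminating leaf only; the bottom leaf is left untouched.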

The operational mapping from a computation M to an effect tree is defined intuitively as follows. Start evaluating M in the empty stack id, until the evaluation process (which is deterministic) terminates. If termination never happens, the tree is the single leaf ⊥. If the evaluation process terminates at a configuration of the form (id, return V), then the tree is the leaf V. Otherwise, the evaluation process can only terminate at a configuration whose computation has some effect operation op ∈ Σ at its head. In this case, create an internal node in the tree of the appropriate kind (depending on op) and continue generating each child tree of this node by repeating the above process, evaluating an appropriate continuation computation starting from a configuration with the current stack S.

The following (somewhat technical) definition formalises the idea outlined above in a mathematically concise way. We define a family of maps |−|ₙ from configurations to effect trees, indexed over n ∈ ℕ, with |S, M|ₙ giving the approximation to the tree obtained by running the configuration (S, M) for n steps. It follows that |S, M|ₙ ≤ |S, M|ₙ₊₁ in the given ordering on trees. We write |−| for the function defined by |S, M| = ⨆ₙ |S, M|ₙ. Using this, we give the operational interpretation of computation terms as effect trees by defining |M| = |id, M|.

We illustrate the above definitions with a couple of examples of effect computations and their corresponding effect trees.

Example 2 (Nondeterminism).

Nondeterministically generate a natural number:


Example 4 (Global store).

Save and load a value, returning its successor:


In the second example above, we see that the resulting tree exhibits redundancies with respect to the expected model of computation with global store. Since the update operation sets the value of location l, the ensuing lookup operation will retrieve exactly that value, and so execution will proceed down the corresponding branch of the tree, resulting in the successor of the stored value being returned. The other, infinitely many, leaves of the tree are redundant. The issue here is that the operational semantics of the language has been defined independently of any implementation model for the effect operations. An effect tree provides a normal form that records in its nodes all effect operations that may potentially be performed during execution, and the dependencies between them. But nothing is stated about which effect operations will actually be performed in practice, and what effect they have if invoked. It is precisely this lack of specificity that allows the operational semantics to be defined in a uniform way depending only on the signature of effect operations and their arities.
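To illustrate how an implementation model supplies the missing information, the sketch below interprets a global-store tree against a concrete store. It is an informal rendering with our own names; the ℕ-indexed children of a lookup node are represented by a host-language function, mirroring the arity α^ℕ → α, and the default value 0 for uninitialised locations is an assumption of the sketch.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional, Tuple, Union

@dataclass
class Bot:
    pass

@dataclass
class Leaf:
    value: int

@dataclass
class Lookup:
    loc: str
    cont: Callable[[int], "STree"]   # an N-indexed family of children

@dataclass
class Update:
    loc: str
    num: int
    cont: "STree"                    # a parameter and a single child

STree = Union[Bot, Leaf, Lookup, Update]

def run(store: Dict[str, int], t: STree) -> Optional[Tuple[int, Dict[str, int]]]:
    """Interpret a tree against a concrete store; None models divergence.
    Uninitialised locations read as 0 (an assumption of this sketch)."""
    if isinstance(t, Bot):
        return None
    if isinstance(t, Leaf):
        return (t.value, store)
    if isinstance(t, Lookup):
        return run(store, t.cont(store.get(t.loc, 0)))
    new_store = dict(store)
    new_store[t.loc] = t.num
    return run(new_store, t.cont)

# "Save then load": set location l to 3, look it up, return its successor.
example = Update("l", 3, Lookup("l", lambda n: Leaf(n + 1)))
```

Running `run({}, example)` visits only the child of the Lookup node indexed by the stored value 3 and returns the value 4; the infinitely many other children, represented by the function `cont`, are never inspected, which is exactly the redundancy discussed above.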

In order to be able to reason about programs with effects (for example, to establish properties of or equivalences between them), it is necessary to supply the missing information about how effect operations behave when executed. As motivated in the introduction, we now proceed to do this by introducing a behavioural logic for expressing behavioural properties of our language.

3 Behavioural logic and modalities

The goal of this section is to motivate and formulate a logic for expressing behavioural properties of programs. In our language, program means (well-typed) term, and we shall be interested both in properties of computations and in properties of values. Accordingly, we define a logic that contains both value formulas and computation formulas. We shall use lower-case Greek letters (φ, ψ, …) for the former, and upper-case Greek letters (Φ, Ψ, …) for the latter. Our logic will thus have two satisfaction relations,

V ⊨ φ        M ⊨ Φ,

which respectively assert that “value V enjoys the value property expressed by φ” and “computation M enjoys the computation property expressed by Φ”.

In order to motivate the detailed formulation of the logic, it is useful to identify criteria that will guide the design.

(C1)

The logic should express only ‘behaviourally meaningful’ properties of programs. This guides us to build the logic upon primitive notions that have a direct behavioural interpretation according to a natural understanding of program behaviour.

(C2)

The logic should be as expressive as possible within the constraints imposed by criterion (C1).

For every type τ, we define a collection of value formulas and a collection of computation formulas, as motivated above.

Since boolean logical connectives say nothing themselves about computational behaviour, it is a reasonable general principle that ‘behavioural properties’ should be closed under such connectives. Thus, in keeping with criterion (C2), which asks for maximal expressivity, we close each set of value formulas and each set of computation formulas under infinitary propositional logic.

In addition to being closed under infinitary propositional logic, each set of value formulas contains a collection of basic value formulas, from which compound formulas are constructed using (infinitary) propositional connectives. (We call such formulas basic rather than atomic because they include formulas, such as the function-type formulas (V ↦ Φ) discussed below, which are built from other formulas.) The choice of basic formulas depends on the type.

In the case of the natural numbers type, we include a basic value formula {n}, for every n ∈ ℕ. The semantics of this formula is given by: V ⊨ {n} if and only if V is the numeral n.

By the closure of value formulas under infinitary disjunctions, every subset of ℕ can be represented by some value formula. Moreover, since a general value formula at the natural numbers type is an infinitary boolean combination of basic formulas of the form {n}, every such value formula corresponds to a subset of ℕ.

For the unit type, we do not require any basic value formulas. The unit type has only one value, ⋆. The two subsets of this singleton set of values are defined by the formulas ⊥ (‘falsum’, given as an empty disjunction) and ⊤ (the truth constant, given as an empty conjunction).

For a function type σ → τ, we want each basic formula to express a fundamental behavioural constraint on values (i.e., λ-abstractions) of that type. In keeping with the applicative nature of functional programming, the only way in which a λ-abstraction W can be used to generate behaviour is to apply it to an argument of type σ, which, because we are in a call-by-value setting, must be a value V. The application of W to V results in a computation W V of type τ, whose properties can be probed using computation formulas. Based on this, for every value V of type σ and computation formula Φ, we include a basic value formula (V ↦ Φ) with the semantics: W ⊨ (V ↦ Φ) if and only if W V ⊨ Φ.

Using this simple construct, based on application to a single argument V, other natural mechanisms for expressing properties of λ-abstractions are definable using infinitary propositional logic. For example, given a value formula φ and a computation formula Φ, the definition

(φ ↦ Φ) := ⋀ { (V ↦ Φ) | V ⊨ φ }

(1)

defines a formula whose derived semantics is

W ⊨ (φ ↦ Φ) if and only if, for every value V with V ⊨ φ, we have W V ⊨ Φ.

(2)

In Section 8, we shall consider the possibility of changing the basic value formulas at function type from formulas (V ↦ Φ) to formulas (φ ↦ Φ).

It remains to explain how the basic computation formulas are formed. For this we require a given set of modalities, which depends on the algebraic effects present in the language. The basic computation formulas then have the form o φ, where o is one of the available modalities and φ is a value formula. Thus a modality o lifts properties of values of a given type to properties of computations of that type.

In order to give semantics to computation formulas o φ, we need a general theory of the kind of modality under consideration. This is one of the main contributions of the paper. Before presenting the general theory, we first consider motivating examples, using our running examples of algebraic effects.

Example 0 (Pure functional computation).

Define the set of modalities to be {↓}. Here the single modality ↓ is the termination modality: ↓φ asserts that a computation terminates with a return value satisfying φ. This is formalised using effect trees: a computation satisfies ↓φ just when its effect tree is a value leaf whose value satisfies φ.

Note that, in the case of pure functional computation, all trees are leaves: either value leaves ⟨V⟩, or nontermination leaves ⊥.
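To make the tree-based reading concrete, here is a minimal sketch (not code from the paper) in which effect trees are encoded as nested tuples; this encoding, and the function name, are our assumptions for illustration.

```python
# Hypothetical encoding of effect trees as nested tuples (not from the paper):
#   ("leaf", v)            -- value leaf <v>
#   ("bot",)               -- nontermination leaf
#   ("op", op, children)   -- effect-operation node with a list of subtrees

def terminates(t, phi):
    """Termination modality: t is a value leaf whose value satisfies phi."""
    return t[0] == "leaf" and phi(t[1])
```

For pure functional computation every tree is a leaf, so this single check exhausts the possible behaviours.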

Example 1 (Error).

Define the set of modalities to be {↓, 𝖤}. The semantics of the termination modality ↓ is defined as above. The error modality 𝖤 flags an error: 𝖤 φ asserts that the computation raises the error, i.e., that its effect tree is a raise node.

(Because raise is an operation of arity 0, a raise node in a tree has 0 children.) Note that the semantics of 𝖤 φ makes no reference to φ. Indeed, it would be natural to consider 𝖤 as a basic computation formula in its own right, which could be done by introducing a notion of 0-argument modality, and considering 𝖤 as such. In this paper, however, we keep the treatment uniform by always considering modalities as unary operations, with natural 0-argument modalities subsumed as unary modalities with a redundant argument.
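Continuing the tuple encoding of trees sketched for the previous example (the encoding is our assumption, not the paper's), the error modality inspects only the shape of the tree and ignores its formula argument:

```python
def raises_error(t, phi):
    """Error modality: t is a raise node (arity 0, so it has no children).
    The formula argument phi is redundant, as discussed in the text."""
    return t[0] == "op" and t[1] == "raise"
```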

Example 2 (Nondeterminism).

Define the set of modalities to be {◇, □}, where ◇φ asserts that some execution path terminates with a return value satisfying φ, and □φ asserts that every execution path terminates with a return value satisfying φ.

Including both modalities amounts to a neutral view of nondeterminism. In the case of angelic nondeterminism, one would include just the ◇ modality; in that of demonic nondeterminism, just the □ modality. Because of the way the semantic definitions interact with termination, the modalities ◇ and □ are not De Morgan duals: □φ requires every execution path to terminate, whereas the negation of ◇ applied to the negation of φ does not. Indeed, each of the three possibilities for the set of modalities leads to a logic with a different expressivity.
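A sketch of the two modalities over finite binary-choice trees (the tuple encoding of trees is our assumption) makes the failure of De Morgan duality visible: a nontermination leaf falsifies both.

```python
def may(t, phi):
    """Diamond: some execution path ends in a value leaf satisfying phi."""
    if t[0] == "leaf":
        return phi(t[1])
    if t[0] == "bot":
        return False
    return any(may(c, phi) for c in t[2])

def must(t, phi):
    """Box: every execution path terminates in a value leaf satisfying phi."""
    if t[0] == "leaf":
        return phi(t[1])
    if t[0] == "bot":
        return False
    return all(must(c, phi) for c in t[2])
```

On the tree or(⟨0⟩, ⊥), `may` holds for the property "value is 0" while `must` fails even for the always-true property, which is exactly why ◇ and □ are not interdefinable by negation.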

Example 3 (Probabilistic choice).

Define the set of modalities to be { P>q | q ∈ ℚ, 0 ≤ q < 1 }, where P>q φ asserts that the probability that the computation terminates with a return value satisfying φ is strictly greater than q; here the probability in question is the probability that a run through the effect tree, starting at the root, and making an independent fair probabilistic choice at each branching node, terminates at a value node with a value satisfying φ. We observe that the restriction to rational thresholds is immaterial, as, for any real r with 0 ≤ r < 1, we can define:

P>r φ := ⋁ { P>q φ | q ∈ ℚ, q ≥ r }.

Similarly, we can define non-strict threshold modalities, for 0 < q ≤ 1, by:

P≥q φ := ⋀ { P>q′ φ | q′ ∈ ℚ, q′ < q }.

Also, we can exploit negation to define modalities expressing strict and non-strict upper bounds on probabilities. Notwithstanding the definability of non-strict and upper-bound thresholds, we shall see later that it is important that we include only strict lower-bound modalities in our set of primitive modalities.
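The strict lower-bound modalities can be sketched over finite trees with fair binary choice nodes; the tuple encoding of trees and the use of exact rational arithmetic are our assumptions.

```python
from fractions import Fraction

def prob(t, phi):
    """Probability that a run through finite tree t, making an independent
    fair choice at each binary branching node, ends at a leaf satisfying phi."""
    if t[0] == "leaf":
        return Fraction(1) if phi(t[1]) else Fraction(0)
    if t[0] == "bot":
        return Fraction(0)
    left, right = t[2]
    return (prob(left, phi) + prob(right, phi)) / 2

def p_gt(q, t, phi):
    """The modality P>q: strict lower bound on the termination probability."""
    return prob(t, phi) > q
```

On the tree or(⟨0⟩, or(⟨1⟩, ⊥)) the termination probability is 3/4, so P>1/2 holds but P>3/4 does not, illustrating the strictness of the bound.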

Example 4 (Global store).

Given the set L of locations, we define the set of states by State = ℕ^L, the functions from locations to numbers. For each pair of states s, s′ ∈ State, the set of modalities includes a modality (s ⤳ s′), where informally (s ⤳ s′)φ asserts:

the execution of M, starting in state s, terminates in state s′ with a return value satisfying φ.

We make the above definition precise using the effect tree of M. Define

exec : Trees(X) × State ⇀ X × State,

for any set X, to be the least partial function satisfying:

exec(⟨x⟩, s) = (x, s)
exec(lookup_l(t₀, t₁, …), s) = exec(t_{s(l)}, s)
exec(update_{l,n}(t), s) = exec(t, s[l := n]),

where s[l := n] is the evident modification of state s. Intuitively, exec defines the result of “executing” the commands in effect tree t starting in state s, whenever this execution terminates. In terms of operational semantics, it can be viewed as defining a ‘big-step’ semantics for effect trees (in the signature of global store). We can now define the semantics of the modality formally: a computation M satisfies (s ⤳ s′)φ if and only if exec(t, s) = (V, s′), where t is the effect tree of M and V is a value satisfying φ.
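The exec function can be sketched directly over trees in the global-store signature. The tuple encoding, with lookup nodes carrying one child per possible stored number and update nodes a single child, is our assumption for illustration.

```python
def exec_tree(t, s):
    """Least-partial-function reading of exec: returns (value, final_state)
    when execution of tree t from state s terminates, and None otherwise."""
    if t[0] == "leaf":
        return (t[1], s)
    if t[0] == "bot":
        return None
    op = t[1]
    if op[0] == "lookup":                 # ("lookup", l): pick child s[l]
        return exec_tree(t[2][s[op[1]]], s)
    if op[0] == "update":                 # ("update", l, n): continue in s[l := n]
        return exec_tree(t[2][0], {**s, op[1]: op[2]})
```

A computation then satisfies the modality for the pair (s, s′) with formula φ precisely when exec on its tree from s returns a pair (V, s′) with V satisfying φ.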

In Section 9, we show an example of how to encode Hoare Logic in the above logic.

Example 5 (Input/output).

Define an i/o-trace to be a word over the alphabet

{ ?n | n ∈ ℕ } ∪ { !n | n ∈ ℕ }.

The idea is that such a word represents an input/output sequence, where ?n means the number n is given in response to an input prompt, and !n means that the program outputs n. Define the set of modalities to contain two modalities, ⟨w⟩↓ and ⟨w⟩…, for each i/o-trace w. The intuitive semantics of these modalities is as follows.

⟨w⟩↓ φ : w is a complete i/o-trace for the execution of M.
⟨w⟩… φ : w is an initial i/o-trace for the execution of M.

In order to define the semantics of these formulas precisely, we first define relations ⊨↓ and ⊨…, between i/o-traces and effect trees, by induction on words. (Note that we are overloading the ⊨ symbol.) In the following, we write ε for the empty word, and we use textual juxtaposition for concatenation of words.

ε ⊨↓ t iff t is a value leaf;
(?n)w ⊨↓ t iff t is a read node and w ⊨↓ tₙ, where tₙ is its n-th child;
(!n)w ⊨↓ t iff t is a write_n node and w ⊨↓ t′, where t′ is its child;
ε ⊨… t is true for every t;
(?n)w ⊨… t iff t is a read node and w ⊨… tₙ;
(!n)w ⊨… t iff t is a write_n node and w ⊨… t′.

The formal semantics of the modalities is now easily defined: M satisfies ⟨w⟩↓ φ iff w ⊨↓ t, and M satisfies ⟨w⟩… φ iff w ⊨… t, where t is the effect tree of M.

Note that, as in Example 1, the formula argument of the modality is redundant. Also, note that our modalities for input/output could naturally be formed by combining the termination modality ↓, which lifts value formulas to computation formulas, with sequences of atomic modalities ?n and !n acting directly on computation formulas. In this paper, we do not include such modalities, acting on computation formulas, in our general theory. But this is a natural avenue for future consideration.
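The two trace relations can be sketched over trees in the input/output signature. We encode a trace as a tuple of pairs ("?", n) and ("!", n), with read nodes having one child per possible input and write nodes a single child; this encoding is our assumption.

```python
def complete(w, t):
    """w is a complete i/o-trace of t: consuming w leads to a value leaf."""
    if t[0] == "leaf":
        return w == ()
    if t[0] == "bot" or w == ():
        return False
    (kind, n), rest = w[0], w[1:]
    if kind == "?" and t[1] == "read":          # input n selects child n
        return complete(rest, t[2][n])
    if kind == "!" and t[1] == ("write", n):    # output must match the node
        return complete(rest, t[2][0])
    return False

def partial(w, t):
    """w is an initial i/o-trace of t: the empty trace always qualifies."""
    if w == ():
        return True
    if t[0] != "op":
        return False
    (kind, n), rest = w[0], w[1:]
    if kind == "?" and t[1] == "read":
        return partial(rest, t[2][n])
    if kind == "!" and t[1] == ("write", n):
        return partial(rest, t[2][0])
    return False
```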

We now give a formal treatment of the logic and its semantics, in full generality. We assume a signature of effect operations, as in Section 2. And we assume given a set O, whose elements we call modalities.

We call our main behavioural logic 𝒱, where the letter is chosen as a reference to the fact that the basic formula (V ↦ Φ) at function type specifies function behaviour on individual value arguments V.

Definition 3.1 (The logic 𝒱).

The classes of value formulas and computation formulas, for each type, are mutually inductively defined by the rules in Fig. 2.

Figure 2: The logic

In this, the index set I can be instantiated to any set, allowing for arbitrary conjunctions and disjunctions. When I is ∅, we get the special formulas ⊤ (the empty conjunction) and ⊥ (the empty disjunction). The use of arbitrary index sets means that formulas, as defined, form a proper class. However, we shall see below that countable index sets suffice.

In order to specify the semantics of modal formulas, we require a connection between modalities and effect trees, which is given by an interpretation function

⟦−⟧ : O → 𝒫(Trees({⋆})).

That is, every modality o is mapped to a subset ⟦o⟧ of unit-type effect trees. Given a subset P of values (e.g. given by a formula) and a tree t, we can define a unit-type tree, written t[P], as the tree created by replacing the leaves of t that belong to P by ⋆ and the others by ⊥. In the case that P is the subset specified by a formula φ, we also write t[φ] for t[P].
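The passage from a tree and a set of values to a unit-type tree, and the resulting semantics of modal formulas, can be sketched as follows. The tuple encoding of trees is our assumption, and `has_unit_leaf` plays the role of a sample interpretation (that of the ◇ modality for nondeterminism).

```python
def relabel(t, phi):
    """Replace value leaves satisfying phi by the unit leaf, others by bot."""
    if t[0] == "leaf":
        return ("leaf", "*") if phi(t[1]) else ("bot",)
    if t[0] == "bot":
        return t
    return ("op", t[1], [relabel(c, phi) for c in t[2]])

def has_unit_leaf(u):
    """Sample interpretation: the unit-type tree contains a unit leaf."""
    if u[0] == "leaf":
        return True
    if u[0] == "bot":
        return False
    return any(has_unit_leaf(c) for c in u[2])

def sat_modal(t, interp, phi):
    """t satisfies the modal formula o(phi), where interp is the characteristic
    function of the interpretation of o on unit-type trees."""
    return interp(relabel(t, phi))
```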

We now define the two satisfaction relations V ⊨ φ and M ⊨ Φ, mutually inductively. For the basic formulas we have:

V ⊨ {n} iff V is the numeral n;
W ⊨ (V ↦ Φ) iff W V ⊨ Φ;
M ⊨ o φ iff |M|[φ] ∈ ⟦o⟧, where |M| is the effect tree of M;

and for the other formulas the propositional connectives (conjunction, disjunction, negation) are interpreted in the standard way.

We remark that all conjunctions and disjunctions are semantically equivalent to countable ones, because value and computation formulas are interpreted over the sets of closed values and closed computations, which are countable.

The lemma below is standard. It states that every formula in infinitary propositional logic can be written in infinitary disjunctive normal form. (It can also be written in infinitary conjunctive normal form.)

Lemma 3.2.

Each formula (value or computation) is equivalent to a formula of the form ⋁_{i ∈ I} ⋀_{j ∈ Jᵢ} φ_{i,j}, where each formula φ_{i,j} is either a basic formula or the negation of a basic formula.

We omit the proof, which is both routine and standard.

We end this section by revisiting our running examples, and observing, in each case, that the example modalities presented above are all specified by suitable interpretation functions .

Example 0 (Pure functional computation).

We have O = {↓}. Define: ⟦↓⟧ = { ⟨⋆⟩ }, the set containing just the value leaf.

Example 1 (Error).

We have O = {↓, 𝖤}. Define: ⟦↓⟧ = { ⟨⋆⟩ } and ⟦𝖤⟧ = { raise }, the set containing the tree consisting of a single raise node.

Example 2 (Nondeterminism).

We have O = {◇, □}. Define: ⟦◇⟧ to be the set of unit-type trees with at least one ⋆ leaf, and ⟦□⟧ to be the set of finite unit-type trees all of whose leaves are ⋆.

Example 3 (Probabilistic choice).

We have O = { P>q | q ∈ ℚ, 0 ≤ q < 1 }. Define: ⟦P>q⟧ = { t | ℙ(t) > q }, where ℙ(t) is the probability that a run through t, making an independent fair choice at each branching node, terminates at a ⋆ leaf.

Example 4 (Global store).

We have O = { (s ⤳ s′) | s, s′ ∈ State }. Define: ⟦(s ⤳ s′)⟧ = { t | exec(t, s) = (⋆, s′) }.

Example 5 (Input/output).

We have O = { ⟨w⟩↓, ⟨w⟩… | w an i/o-trace }. Define: ⟦⟨w⟩↓⟧ = { t | w ⊨↓ t } and ⟦⟨w⟩…⟧ = { t | w ⊨… t }.

In this section we have defined our logic expressing behavioural properties. We next proceed to derive the induced notion of behavioural equivalence between programs, as motivated in Section 1.

4 Behavioural equivalence

The goal of this section is to precisely formulate our main theorem: under suitable conditions, the behavioural equivalence determined by the logic 𝒱 of Section 3 is a congruence. In addition, we shall obtain a similar result for a coarser behavioural preorder determined by a natural positive fragment of 𝒱, which we call 𝒱⁺. Besides being natural in its own right, the preorder induced by the positive fragment turns out to be an indispensable technical tool for establishing properties of the behavioural equivalence induced by the full logic 𝒱.

Definition 4.1 (The logic 𝒱⁺).

The logic 𝒱⁺ is the fragment of 𝒱 consisting of those value and computation formulas that do not contain negation. It is inductively defined using rules 1-5, 7 and 8 from Fig. 2.

Whenever we have a logic ℒ whose value and computation formulas are given as subcollections of those of 𝒱, then ℒ determines a preorder (and hence also an equivalence relation) between terms of the same type and aspect.

Definition 4.2 (Logical preorder and equivalence).

Given a fragment ℒ of 𝒱, we define the logical preorder ⊑ℒ, between well-typed terms of the same type and aspect, by: T ⊑ℒ T′ if and only if every formula of ℒ satisfied by T is also satisfied by T′.

The logical equivalence ≡ℒ on terms is the equivalence relation induced by the preorder (the intersection of ⊑ℒ and its converse).

In the case that the formulas in ℒ are closed under negation, it is trivial that the preorder ⊑ℒ is already an equivalence relation, and hence coincides with ≡ℒ. Thus we shall only refer specifically to the preorder ⊑ℒ for fragments, such as 𝒱⁺, that are not closed under negation.

The two main relations of interest to us in this paper are the primary relations determined by 𝒱 and 𝒱⁺: full behavioural equivalence ≡𝒱, and the positive behavioural preorder ⊑𝒱⁺ (which induces positive behavioural equivalence ≡𝒱⁺). Since 𝒱⁺ is a subset of 𝒱, it is apparent that ≡𝒱 is finer than ≡𝒱⁺, as it considers more behavioural properties which could distinguish terms. For the same reason, ≡𝒱 is included in ⊑𝒱⁺.

Before formulating the required notions to prove congruence of the behavioural equivalences, we shall make some observations about the preorders and discuss a possible simplification of the logic (Proposition 4.5).

Lemma 4.3.

For any values V, V′ of the same type, we have V ⊑𝒱⁺ V′ if and only if every basic value formula satisfied by V is also satisfied by V′.

Lemma 4.4.

For any computations M, M′ of the same type, we have M ⊑𝒱⁺ M′ if and only if every basic computation formula o φ satisfied by M is also satisfied by M′.

Both these lemmas are a consequence of the fact that satisfaction of conjunctions and disjunctions is completely determined by satisfaction of the formulas over which the connectives are taken. As such, the logical preorder is completely determined by satisfaction of basic formulas. Similar characterisations, with ‘implies’ replaced by ‘if and only if’ and ⊑𝒱⁺ by ≡𝒱, hold for the behavioural equivalence ≡𝒱.

Proposition 4.5.

Consider the fragment of 𝒱 inductively defined by rules 1 to 6 from Fig. 2 (so computation formulas are not closed under propositional connectives); the induced logical equivalence is the same as ≡𝒱.

Proof.

We prove that any value formula of 𝒱 is equivalent to a value formula from the fragment. We do this by induction on types. For value formulas of natural numbers type, note that such formulas contain no computation subformulas, so the statement is trivially true by taking the formula itself.

For value formulas of function type, assume first that the formula is a basic formula (V ↦ Φ), where by Lemma 3.2 we may assume w.l.o.g. that Φ is a disjunction of conjunctions of formulas of the form o φ or ¬(o φ). We can now use the equivalences (V ↦ ⋁ᵢ Φᵢ) ≡ ⋁ᵢ (V ↦ Φᵢ) and (V ↦ ¬Φ) ≡ ¬(V ↦ Φ) to construct a formula equivalent to (V ↦ Φ), using the induction hypothesis to replace each occurrence of o φ with o φ′, where φ′ is a formula of the fragment equivalent to φ. In the case that the value formula is not a basic formula, we can do an induction on its structure to find the desired equivalent, where conjunctions, disjunctions, negations, and basic formulas are handled as above.

Thus every value formula has an equivalent value formula in the fragment, so the logical preorder on value terms remains unchanged. To see that the logical equivalence on computation terms also remains unchanged, simply use Lemma 4.4. ∎

Altering the proof slightly, we can derive a similar result for the positive logic.

Proposition 4.6.

Consider the fragment of 𝒱⁺ defined by rules 1 to 5 from Fig. 2 (so computation formulas are not closed under propositional connectives); the induced logical preorder is the same as ⊑𝒱⁺.

We next formulate the appropriate notion of (pre)congruence to apply to the relations