Differential Logical Relations, Part I: The Simply-Typed Case (Long Version)

04/27/2019 ∙ by Ugo Dal Lago, et al.

We introduce a new form of logical relation which, in the spirit of metric relations, allows us to assign to each pair of programs a quantity measuring their distance, rather than a boolean value standing for their being equivalent. The novelty of differential logical relations consists in measuring the distance between terms not (necessarily) by a numerical value, but by a mathematical object which somehow reflects the interactive complexity, i.e. the type, of the compared terms. We exemplify this concept in the simply-typed lambda-calculus, and show a form of soundness theorem. We also see how ordinary logical relations and metric relations can be seen as instances of differential logical relations. Finally, we show that differential logical relations can be organised in a cartesian closed category, contrary to metric relations, which are well known not to have such a structure, but only that of a monoidal closed category.


1 Introduction

Modern software systems tend to be heterogeneous and complex, and this is

Figure 1: Replacing with .

reflected in the analysis methodologies we use to tame their complexity. Indeed, in many cases the only way to go is to make use of compositional kinds of analysis, in which parts of a large system can be analysed in isolation, without having to care about the rest of the system, i.e., the environment. As an example, one could consider a component and replace it with another, e.g., more efficient, component without looking at the context in which and are supposed to operate, see Figure 1. Of course, for this program transformation to be safe, should be equivalent to or, at least, should be a refinement of .

Program equivalences and refinements, indeed, are the cruxes of program semantics, and have been investigated in many different programming paradigms. When programs have an interactive behaviour, like in concurrent or higher-order languages, even defining a notion of program equivalence is not trivial, while coming up with handy methodologies for proving concrete programs to be equivalent can be quite challenging, and has been one of the major research topics in programming language theory, stimulating the development of techniques like logical relations [21, 18], applicative bisimilarity [1], and to some extent denotational semantics [24, 25] itself.

Coming back to our example, can we say anything about the case in which and are not equivalent, although behaving very similarly? Is there anything classic program semantics can say about this situation? Actually, the answer is negative: the program transformation turning such an into cannot be justified, simply because there is no guarantee about the possible negative effects that turning into could have on the overall system formed by and . There are, however, many cases in which program transformations like the one we just described are indeed of interest, and thus desirable. Many examples can be, for instance, drawn from the field of approximate computing [19], in which equivalence-breaking program transformations are considered beneficial provided the overall behaviour of the program is not affected too much by the transformation, while its intensional behaviour, e.g. its performance, is significantly improved.

One partial solution to the problem above consists in considering program metrics rather than program equivalences. This way, any pair of programs is said to be at a certain numerical distance, rather than merely equivalent (or not). This, for example, can be useful in the context of differential privacy [22, 5, 30] and has also been studied in the realms of domain theory [11, 4, 12, 14, 3] (see also [26] for an introduction to the subject) and coinduction [28, 27, 13, 7]. The common denominator among all these approaches is that the notion of a congruence, crucial for compositional reasoning, is replaced by that of a Lipschitz-continuous map: any context should not amplify (too much) the distance between any pair of terms, when it is fed with either the former or the latter:

This enforces compositionality, and naturally leads us to consider metric spaces and Lipschitz functions as the underlying category. As is well known, this is not a cartesian closed category, and thus does not form a model of typed -calculi, unless one adopts linear type systems, or type systems in which the number of uses of each variable is kept track of, like  [22]. This somehow limits the compositionality of the metric approach [11, 15].

There are, however, program transformations which are intrinsically unjustifiable in the metric approach. Consider the following two programs of type

The two terms compute two very different functions on the real numbers, namely the sine trigonometric function and the identity on , respectively. The distance is unbounded when ranges over . As a consequence, the numerical distance between and , however defined, is infinite, and the program transformation turning into cannot be justified this way, for very good reasons. As highlighted by Westbrook and Chaudhuri [29], however, this is not the end of the story: at least when the environment in which and operate feeds either of them only with real numbers close to , can be substituted for without affecting the overall behaviour of the system too much.
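To make the point concrete, here is a quick numerical sketch (in Python, not part of the paper's formal development) of why no single number can justify this transformation: the pointwise gap between the sine function and the identity is unbounded over the whole real line, yet negligible near 0.

```python
import math

# The pointwise gap between the sine function and the identity.
def gap(x: float) -> float:
    return abs(math.sin(x) - x)

# Over the whole real line the gap is unbounded...
print(gap(100.0) > 99.0)        # True: sin is bounded, x is not
# ...but close to 0 it is negligible, since sin(x) = x - x^3/6 + O(x^5).
print(gap(0.01) < 1e-6)         # True
```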

The key insight by Westbrook and Chaudhuri is that justifying program transformations like the one above requires taking the difference between and not merely as a number, but as a more structured object. What they suggest is to take as yet another program, which however describes the difference between and :

This reflects the fact that the distance between and , namely the discrepancy between their output, depends not only on the discrepancy on the input, namely on , but also on the input itself, namely on . If both and are close to , is itself close to .
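As a hedged illustration (the encoding below is our own, not the paper's formal definition), such a structured difference can be rendered as a function of both the input and the input discrepancy; by the triangle inequality, the candidate below soundly overapproximates the output discrepancy between sine and the identity:

```python
import math

# For any y with |x - y| <= eps, the triangle inequality gives
#   |sin(x) - y| <= |sin(x) - x| + |x - y| <= |sin(x) - x| + eps,
# so the following function overapproximates the output discrepancy.
def diff_sin_id(x: float, eps: float) -> float:
    return abs(math.sin(x) - x) + eps

# Spot-check the bound on a small grid of perturbed inputs.
for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    for eps in [0.0, 0.1, 1.0]:
        for y in [x - eps, x, x + eps]:
            assert abs(math.sin(x) - y) <= diff_sin_id(x, eps) + 1e-12
```

Note how the bound collapses to a small quantity exactly when both the input and its discrepancy are close to 0, matching the intuition above.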

In this paper, we develop Westbrook and Chaudhuri’s ideas, and turn them into a framework of differential logical relations. We will do all this in a simply-typed -calculus with real numbers as the only base type. Starting from such a minimal calculus has at least two advantages: on the one hand one can talk about meaningful examples like the one above, and on the other hand the induced metatheory is simple enough to highlight the key concepts.

The contributions of this paper can be summarised as follows:

  • After introducing our calculus , we define differential logical relations inductively on types, as ternary relations between pairs of programs and differences. The latter are mere set theoretic entities here, and the nature of differences between terms depends on terms’ types.

  • We prove a soundness theorem for differential logical relations, which allows us to justify compositional reasoning about terms’ differences. We also prove a finite difference theorem, which stipulates that the distance between two simply-typed -terms is finite if mild conditions hold on the underlying set of function symbols.

  • We give embeddings of logical and metric relations into differential logical relations. This witnesses that the latter are a generalisation of the former two.

  • Finally, we show that generalised metric domains, the mathematical structure underlying differential logical relations, form a cartesian closed category, contrary to the category of metric spaces, which is well known not to have the same property.

Due to space constraints, many details have to be omitted, but can be found in an Extended Version of this work [10].

2 A Simply-Typed -Calculus with Real Numbers

In this section, we introduce a simply-typed -calculus in which the only base type is the one of real numbers, and constructs for iteration and conditional are natively available. The choice of this language as the reference calculus in this paper has been made for the sake of simplicity, allowing us to concentrate on the most crucial aspects while at the same time guaranteeing a minimal expressive power.

Terms and Types

is a typed -calculus, so its definition starts by giving the language of types, which is defined as follows:

The expression stands for . The set of terms is defined as follows:

where ranges over a set of variables, ranges over the set of real numbers, is a natural number and ranges over a set of total real functions of arity . We do not make any assumption on , apart from the predecessor being part of . The family, in particular, could in principle contain non-continuous functions. The expression is simply a shortcut for . All constructs are self-explanatory, except for the and operators, which are conditional and iterator combinators, respectively. An environment is a set of assignments of types to variables in where each variable occurs at most once. A type judgment has the form where is an environment, is a term, and is a type. Rules for deriving correct typing judgments are in Figure 2, and are standard.
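The grammar above can be transcribed as a small abstract syntax, sketched here in Python with class names of our own choosing (the paper itself fixes no implementation):

```python
import math
from dataclasses import dataclass
from typing import Callable, Tuple

# Types: the base type of real numbers and arrow types.
@dataclass(frozen=True)
class Real:
    pass

@dataclass(frozen=True)
class Arrow:
    source: object
    target: object

# Terms: variables, real-number constants, applied function symbols
# (drawn from a set of total real functions), abstractions, applications.
# The conditional and iteration combinators are omitted for brevity.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Const:
    value: float

@dataclass(frozen=True)
class FunSym:
    fn: Callable[..., float]   # a total real function of some arity n
    args: Tuple                # its n term arguments

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

# Example: the program computing the sine function, a closed value
# whose type would be Arrow(Real(), Real()).
SIN = Lam("x", FunSym(math.sin, (Var("x"),)))
print(isinstance(SIN, Lam))    # True
```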

Figure 2: Typing rules for .

The set of terms for which is derivable is indicated as .

Call-by-Value Operational Semantics

A static semantics is of course not enough to give meaning to a paradigmatic programming language, the dynamic aspects being captured only once an operational semantics is defined. The latter turns out to be very natural. Values are defined as follows:

The set of closed values of type is , and the evaluation of produces a value , as formalised by the rules in Figure 3, through the judgment .

Figure 3: Operational semantics for .
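The big-step, call-by-value evaluation judgment can be sketched as a recursive interpreter; the tuple encoding below is our own and covers only the core fragment (variables, constants, function symbols, abstraction, application), omitting the conditional and iteration combinators:

```python
import math

# A minimal call-by-value evaluator over a tuple-encoded syntax of our
# own devising: ("var", x), ("const", r), ("fun", f, args),
# ("lam", x, body), ("app", m, n).
def eval_cbv(term, env=None):
    env = env or {}
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "const":
        return term[1]
    if tag == "fun":                 # apply a function symbol to the
        args = [eval_cbv(a, env) for a in term[2]]  # values of its arguments
        return term[1](*args)
    if tag == "lam":                 # an abstraction is already a value
        return ("closure", term[1], term[2], dict(env))
    if tag == "app":                 # evaluate function, then argument
        _, x, body, cenv = eval_cbv(term[1], env)
        v = eval_cbv(term[2], env)
        return eval_cbv(body, {**cenv, x: v})
    raise ValueError(tag)

# (lambda x. sin x) 0 evaluates to sin 0 = 0.0
prog = ("app", ("lam", "x", ("fun", math.sin, (("var", "x"),))), ("const", 0.0))
print(eval_cbv(prog))   # 0.0
```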

We write if is derivable for some . The absence of full recursion has the nice consequence of guaranteeing a form of termination:

Theorem 1.

The calculus is terminating: if then .

We show the normalisation theorem using the standard reducibility candidate argument.

Definition 2.

We define as follows.

Then we show the two lemmas that prove Theorem 1 together:

Lemma 3.

If , then .

Proof.

The following strengthening of the statement can be proved by induction on the structure of : whenever and whenever it holds that

All inductive cases are standard. One that deserves a little bit of attention is the case of abstractions, in which one needs to deduce that iff . But this is of course easy to prove, because the former evaluates to any value iff the latter evaluates to . Another delicate case is the one of , which requires an induction. ∎

Please observe that, by definition, if , then . As a consequence, one easily gets termination from Lemma 3.

Corollary 4.

If then there exists a unique satisfying , which we indicate as .

Proof.

By Theorem 1 there exists a value satisfying . The only form of value of type is . Moreover, the fact that such a is unique is a consequence of the following, slightly more general result: if and , then is syntactically equal to . This can be proved by a straightforward induction on the structure of the proof that, e.g., . ∎

Context Equivalence

A context is nothing more than a term containing a single occurrence of a placeholder . Given a context , indicates the term one obtains by substituting for the occurrence of in . Typing rules in Figure 2 can be lifted to contexts by generalising judgments to the form , by which one captures that whenever it holds that . Two terms and such that are said to be context equivalent [20] when for every such that it holds that . Context equivalence is the largest adequate congruence, and is thus considered as the coarsest “reasonable” equivalence between terms. It can also be turned into a pseudometric [9, 8] — called context distance — by stipulating that

The obtained notion of distance, however, is bound to trivialise [9], given that is not affine. Trivialisation of context distance highlights an important limit of the metric approach to program difference which, ultimately, can be identified with the fact that program distances are sensitive to interactions with the environment. Our notion of a differential logical relation tackles such a problem from a different perspective, namely by refining the concept of a program difference: a difference is not just a number, but is now able to take into account interactions with the environment.

Set-Theoretic Semantics

Before introducing differential logical relations, it is useful to remark that we can give a standard set-theoretic semantics. To any type we associate the set , the latter being defined by induction on the structure of as follows:

This way, any closed term is interpreted as an element of in a natural way (see, e.g. [18]). Up to now, everything we have said about  is absolutely standard, and only serves to set the stage for the next sections.

3 Making Logical Relations Differential

Logical relations can be seen as one of the many ways of defining when two programs are to be considered equivalent. Their definition is type driven, i.e., they can be seen as a family of binary relations indexed by types such that . This section is devoted to showing how all this can be made into differential logical relations.

The first thing that needs to be discussed is how to define the space of differences between programs. These are just boolean values in logical relations, become real numbers in ordinary metrics, and are type-dependent here. A function that assigns a set to each type is defined as follows:

where . The set is said to be the difference space for the type and is meant to model the outcome of comparisons between closed programs of type . As an example, when is , we have that . This is the type of the function we used to compare the two programs described in the Introduction.
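The general shape of difference spaces can be illustrated as follows — a sketch under our own reading of the elided definition, in which base-type differences are non-negative extended reals and arrow-type differences are functions taking an input together with an input difference and returning an output difference:

```python
import math

# Base-type differences: non-negative extended real numbers.
def is_base_difference(d) -> bool:
    return isinstance(d, float) and d >= 0.0   # math.inf qualifies too

# An element of the difference space for an arrow type on reals:
# it maps an input x and an input error eps to an output error.
# Since sin is 1-Lipschitz, eps itself bounds its output discrepancy.
def sin_self_difference(x: float, eps: float) -> float:
    return eps

print(is_base_difference(math.inf))                        # True
print(is_base_difference(sin_self_difference(0.3, 0.1)))   # True
```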

Now, which structure could we endow with? First of all, we can define a partial order over for each type as follows:

This order has least upper bounds and greatest lower bounds, thanks to the nice structure of :

Proposition 5.

For each type , forms a complete lattice.

Proof.

We show that each has suprema by induction on types.

  • Case . Then is clearly complete.

  • Case . Given a subset , we define as:

    where the supremum on the right-hand side exists by induction hypothesis (on the type ). This serves as the supremum of because:

    • (Upper bound.) For any , by definition of supremum it holds that . Hence .

    • (Leastness.) Suppose that is an upper bound of , i.e. . By definition, this means that: Therefore is an upper bound of the set for each . Thus, by definition of supremum, for each . Hence holds by definition of .

  • Case . Given a subset , we define , where and are meta-level projections and the suprema on the right-hand side exist by induction hypothesis (on the types and ). One can verify that is the supremum of in a straightforward way:

    • (Upper bound.) For any , by definition of supremum and hold. Hence by definition of .

    • (Leastness.) Suppose that is an upper bound of , i.e. . By definition, this means that and . Therefore (resp. ) is an upper bound of the set (resp. ). Thus, by definition of supremum, (resp. ). Hence holds by definition of .

The fact that has a nice order-theoretic structure is not the end of the story. For every type , we define a binary operation as follows:

This is precisely what is needed to turn into a quantale [23]. (Recall that a quantale consists of a complete lattice and a monoid such that the lattice and monoid structures properly interact, meaning that monoid multiplication distributes over joins; we refer to [23, 16] for details.)

Proposition 6.

For each type , forms a commutative unital non-idempotent quantale. That is, the following holds for any :

  • for all ,

  • for all where is an arbitrary index set,

  • there exists an element satisfying for all ,

  • does not necessarily satisfy .

Proof.

By induction on .

  • Case . The multiplication is clearly commutative and satisfies for all . The unit is ; the multiplication is obviously non-idempotent.

  • Case . It holds that because

    and because

    The unit is the constant function: for all .

  • Case . Then

    and

    The unit is .

The fact that is a quantale means that it has, e.g., the right structure to be the codomain of generalised metrics [17, 16]. Actually, a more general structure is needed for our purposes, namely that of a generalised metric domain, which will be thoroughly discussed in Section 6 below. For the moment, let us concentrate our attention on programs:

Definition 7 (Differential Logical Relations).

We define a differential logical relation as a set of ternary relations indexed by types satisfying

An intuition behind the condition required for is that overapproximates both the “distance” between and and the one between and , whenever is within the error from .
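Under one natural reading of this condition (ours, since the formal clauses are elided in this version: at base type, two reals are related with error e when they lie within e of each other; at arrow type, related inputs must be sent to related outputs), the candidate difference between sine and the identity from the Introduction can be checked numerically:

```python
import math
import random

# Base-type relatedness: s lies within error e of r.
def related_base(r: float, e: float, s: float) -> bool:
    return abs(r - s) <= e

# Candidate difference between sin and the identity, justified by the
# triangle inequality: |sin(x) - y| <= |sin(x) - x| + eps when |x - y| <= eps.
def d(x: float, eps: float) -> float:
    return abs(math.sin(x) - x) + eps

# Related inputs are sent to related outputs (up to float rounding).
random.seed(0)
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    eps = random.uniform(0.0, 2.0)
    y = x + random.uniform(-eps, eps)      # any input within eps of x
    assert related_base(math.sin(x), d(x, eps) + 1e-9, y)
```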

Some basic facts about differential logical relations, which will be useful in the following, are now in order.

Lemma 8.

For every , it holds that if and only if .

Proof.

By induction on types. ∎

Lemma 9.

Let and . Assume that and implies . Then and implies .

Proof.

By induction on types. ∎

Lemma 10.

Let and . Assume that and implies . Then and implies .

Proof.

By induction on types. ∎

3.1 A Fundamental Lemma

Usually, the main result about any system of logical relations is the so-called Fundamental Lemma, which states that any typable term is in relation with itself. But what would the Fundamental Lemma look like here? Should any term be at a somehow minimal distance from itself, in the spirit of what happens, e.g., with metrics [22, 11]? Actually, there is no hope to prove anything like that for differential logical relations, as the following example shows.

Example 11.

Consider again the term , which can be given type in the empty context. Please recall that . Could we prove that , where is the constant- function? The answer is negative: given two real numbers and at distance , the terms and are themselves apart, thus at nonnull distance. The best one can say, then, is that , where .

As the previous example suggests, a term being at self-distance is a witness of its sensitivity to changes in the environment, according to . Indeed, the only terms which are at self-distance are the constant functions. This makes the underlying theory more general than that of logical or metric relations, although the latter can be proved to be captured by differential logical relations, as we will see in the next section.

Coming back to the question with which we opened the section, we can formulate a suitable fundamental lemma for differential logical relations by stating that for any closed term of type there exists such that . In order to prove such a result, however, we need to prove something stronger, namely the extension of the above statement to arbitrary (and thus possibly open) terms. Doing so requires extending differential logical relations to arbitrary sequents .

Let us begin by extending the maps and to environments.

Definition 12.

Given an environment , define:

An element of, e.g., is thus a family , meaning that for any , . The syntactic counterpart of such families is given by families . We refer to as a -family of values. Indeed, such a -family of values can naturally be seen as a substitution mapping each variable to . As is customary, for a term we write for the closed term of type obtained by applying the substitution to . We denote by the set of all -families of values.

We can now extend a differential logical relation to environments by stipulating that the family is a differential logical relation if

Next, we extend our framework to arbitrary sequents . First of all, we define:

It is then natural to extend to arbitrary sequents by stipulating that , where denotes the set of sequents typable within the sequent , is a differential logical relation if holds if and only if

This definition as it is, however, does not work well, as it does not take into account possible alternations of the substitutions and . To solve this issue we introduce the following notation. Given a boolean-valued map , and two substitutions , define the substitution as:

We can now extend differential logical relations to arbitrary terms.

Definition 13.

Given a differential logical relation , define by stipulating that holds if and only if

Before proving the desired strengthening of the fundamental lemma, it is useful to observe the following result.

Lemma 14.

For all terms , we have .

Proof.

First observe that if and (where is the obvious small-step semantics relation associated to ), then holds (a similar statement holds for ). By Theorem 1 we thus infer the right-to-left implication. For the other implication, we proceed by induction on , observing that . ∎

Lemma 15.

For any , there exists such that holds.

Proof.

The proof is by induction on .

  • Suppose , meaning that . We simply define as .

  • Suppose . Define as .

  • Suppose . For simplicity, we show the case for (the case for follows the same structure). We have to find such that implies . The latter means that for all values (for simplicity, we denote by both the numeral and the number ) and , implies . Define:

    where the diameter of a set is defined as . Notice that a set can have diameter . We conclude by observing that implies (recall that and are real numbers, and so is ), and thus . As a consequence, , and thus we are done.

  • Suppose that the application rule is the last one used, i.e. Γ ⊢ MN : τ is derived from Γ ⊢ M : ρ → τ and Γ ⊢ N : ρ. By induction hypothesis we have:

    1. There exists such that implies for any , for all , and for any :

    2. There exists such that implies for any :

    We define the desired as . We conclude by instantiating as , relying on Lemma 14.

  • Suppose that Γ ⊢ λx.M : τ → ρ is derived from Γ, x:τ ⊢ M : ρ. The thesis directly follows from the induction hypothesis.

As an immediate corollary we obtain the wished result.

Theorem 16 (Fundamental Lemma, Version I).

For every there is a such that .

But what do we gain from Theorem 16? In the classic theory of logical relations, the Fundamental Lemma has, as an easy corollary, that logical relations are compatible: it suffices to invoke the theorem with any context seen as a term , such that . Thus, ultimately, logical relations are proved to be a compositional methodology for program equivalence, in the following sense: if and are equivalent, then and are equivalent, too.

In the realm of differential logical relations, the Fundamental Lemma plays a similar role, although with a different, quantitative flavor: once has been proved sensitive to changes according to , and and are proved to be at distance , then, e.g., the impact of substituting with in can be measured by composing and (and ), i.e. by computing . Notice that the sensitivity analysis on and the relational analysis on and are decoupled. What the Fundamental Lemma tells us is that and can always be found.

3.2 Our Running Example, Revisited

It is now time to revisit the example we talked about in the Introduction. Consider the following two programs, both closed and of type :

Let us prove that , where . Consider any pair of real numbers such that , where . We have that: