An Introduction to Logical Relations

07/25/2019 ∙ by Lau Skorstengaard, et al. ∙ Aarhus Universitet

Logical relations (LRs) have been around for many years, and today they are used in many formal results. However, it can be difficult for LR beginners to find a good place to start learning. Papers often use highly specialized LRs that employ the latest advances of the technique, which makes it impossible to give a proper presentation within the page limit. This note is a good starting point for beginners who want to learn about LRs. Almost no prerequisite knowledge is assumed, and the note starts from the very basics. The note covers the following: LRs for proving normalization and type safety of the simply typed lambda calculus, relational substitutions for reasoning about universal and existential types, step-indexing for reasoning about recursive types, and worlds for reasoning about references.


1 Introduction

The term logical relations stems from Gordon Plotkin’s memorandum Lambda-definability and logical relations written in 1973. However, the spirit of the proof method can be traced back to William W. Tait, who used it to show strong normalization of System T in 1967.

Names are a curious thing. When I say “chair”, you immediately get a picture of a chair in your head. If I say “table”, then you picture a table. The reason you do this is that we denote a chair by “chair” and a table by “table”, but we might as well have said “giraffe” for chair and “Buddha” for table. If we encounter a new word composed of known words, it is natural to try to find its meaning by composing the meanings of the components of the name. Say we encounter the word “tablecloth” for the first time; if we know what “table” and “cloth” denote, we can guess that it is a piece of cloth for a table. However, this approach does not always work. For instance, a “skyscraper” is not a scraper you use to scrape the sky. Likewise for logical relations, it may be a fool’s quest to try to find meaning in the name. Logical relations are relations, so that part of the name makes sense. They are also defined in a way that bears a small resemblance to a logic, but trying to give meaning to logical relations only from the parts of the name will not help you understand them. A more telling name might be Type Indexed Inductive Relations. However, Logical Relations is a well-established name and easier to say, so we will stick with it (no one would accept “giraffe” for a chair).

The majority of this note is based on the lectures of Amal Ahmed at the Oregon Programming Languages Summer School, 2015. The videos of the lectures can be found at https://www.cs.uoregon.edu/research/summerschool/summer15/curriculum.html.

1.1 Simply Typed Lambda Calculus

The language we use to present logical predicates and relations is the simply typed lambda calculus (STLC). In the first section, it will be used in its basic form, and later it will be used as the base language when we study new constructs and features. We will later leave it implicit that it is STLC that we extend with new constructs. STLC is defined in Figure 1.

Figure 1: The simply typed lambda calculus. For the typing contexts, it is assumed that the binders (x) are distinct. That is, if x ∈ dom(Γ), then Γ, x : τ is not a legal context.

For readers unfamiliar with inference rules: a rule with premises A and B above the line and conclusion A ∧ B below it is read as “if A and B are the case, then we can conclude A ∧ B”. This means that the typing rule for application says that an application e₁ e₂ has the type τ₂ under the typing context Γ when e₁ has type τ₁ → τ₂ under Γ and e₂ has type τ₁, also under Γ.
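For concreteness, here is a reconstruction of the two rules just described in standard notation (the first is the generic conjunction example; the second is the usual STLC application rule):

```latex
\frac{A \qquad B}{A \wedge B}
\qquad\qquad
\frac{\Gamma \vdash e_1 : \tau_1 \to \tau_2 \qquad \Gamma \vdash e_2 : \tau_1}
     {\Gamma \vdash e_1 \; e_2 : \tau_2}\ \textsc{T-App}
```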

1.2 Logical Relations

A logical relation is a proof method that can be used to prove properties of programs written in a particular programming language. Proofs of properties of programming languages often go by induction on the typing or evaluation judgement. A logical relation adds a layer of indirection by constructing a collection of programs that all have the property we are interested in. We will see this in more detail later. As a motivation, here are a number of examples of properties that can be proven with a logical relation:

  • Termination (Strong normalization)

  • Type safety

  • Program equivalences

    • Correctness of programs

    • Representation independence

    • Parametricity and free theorems, e.g.

      A term of type ∀α. α → α cannot inspect its argument, as it has no idea which type the argument will have; therefore the function must be the identity function.

      A function with the type ∀α. ∀β. α → β cannot exist (the function would need to return something of type β, but it only has something of type α to work with, so it cannot possibly return a value of the proper type).

    • Security-Typed Languages (for Information Flow Control (IFC))
      Example: All types in the code snippet below are labelled with their security level. A type can be labelled with either L for low or H for high. We do not want any information flowing from variables with a high label to variables with a low label. The following is an example of an insecure program because it has an explicit flow of information from a high variable to a low variable:

        x : int^L
        y : int^H
        x = y    // This assignment is insecure.

      Information may also leak through a side channel. There are many varieties of side channels, and they vary from language to language depending on their features. One of the simplest side channels is the following: say we have two variables x and y. They are both of integer type, but the former is labelled with low and the latter with high. Now say the value of x depends on the value of y, e.g. x is 0 when y is positive and 1 otherwise. In this example, we may not learn the exact value of y from x, but we will have learned whether y is positive. The side channel we just sketched looks as follows:

        x : int^L
        y : int^H
        if y > 0 then x = 0 else x = 1

      Generally speaking, the property we want is non-interference:

      That is, for programs that generate low results, we want “low-equivalent” results. Low-equivalence means: if we execute the program twice with the same low value but two different high values, then the low results of the two executions should be equal. In other words, the execution cannot have depended on the high value, which means that no information was leaked to the low results.
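As a small illustration of this property (a sketch, not from the note; the two example programs and all names are invented), one can test non-interference by running a program twice with the same low input but different high inputs and comparing the low results:

```python
# Sketch: testing non-interference on tiny example "programs" modelled as
# Python functions from (low, high) inputs to a low output.

def leaky(low, high):
    # Side channel: the low result depends on the sign of the high input.
    return 0 if high > 0 else 1

def secure(low, high):
    # The low output ignores the high input entirely.
    return low + 1

def noninterferent(prog, low, high1, high2):
    """Low-equivalent inputs must give equal low outputs."""
    return prog(low, high1) == prog(low, high2)

print(noninterferent(secure, 3, 10, -10))  # True
print(noninterferent(leaky, 3, 10, -10))   # False: the high sign leaks
```

A real non-interference argument of course quantifies over all pairs of high inputs; the test above only witnesses a single violating pair.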

1.3 Categories of Logical Relations

We can split logical relations into two: logical predicates and logical relations. Logical predicates are unary and are usually used to show properties of programs. Logical relations are binary and are usually used to show equivalences:

Logical Predicates (Unary)        Logical Relations (Binary)
- One property                    - Program equivalence
- Strong normalization
- Type safety

There are some properties that we want logical predicates and relations to have in general. We describe these properties for logical predicates, as they easily generalize to logical relations. In general, we want the following things to hold true for a logical predicate that contains expressions e. (Note: these are rules of thumb. For instance, one exception to the rule is the proof of type safety, where the well-typedness condition is weakened to only require e to be closed.)

  1. The expression e is closed and well-typed, i.e. · ⊢ e : τ.

  2. The expression has the property we are interested in.

  3. The property of interest is preserved by eliminating forms.

2 Normalization of the Simply Typed Lambda Calculus

2.1 Strong Normalization of STLC

In this section, we prove strong normalization for the simply typed lambda calculus which means that every term is strongly normalizing. Normalization of a term is the process of reducing it to its normal form (where it can be reduced no further). If a term is strongly normalizing, then it always reduces to its normal form. In our case, the normal forms of the language are the values of the language.
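To make reduction to normal form concrete, here is a minimal sketch (not from the note) of a call-by-value evaluator for STLC with booleans; the tuple-based term encoding is invented for this sketch:

```python
# Term encoding (invented for this sketch):
#   ('true',), ('false',), ('var', x), ('lam', x, body), ('app', f, a),
#   ('if', c, t, f)

def is_value(t):
    return t[0] in ('true', 'false', 'lam')

def subst(t, x, v):
    """Substitution t[v/x]; v is assumed closed, so no capture can occur."""
    tag = t[0]
    if tag == 'var':
        return v if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    if tag == 'app':
        return ('app', subst(t[1], x, v), subst(t[2], x, v))
    if tag == 'if':
        return ('if',) + tuple(subst(s, x, v) for s in t[1:])
    return t  # true/false

def step(t):
    """One small step of call-by-value reduction, or None if t is irreducible."""
    tag = t[0]
    if tag == 'app':
        f, a = t[1], t[2]
        if not is_value(f):
            s = step(f)
            return None if s is None else ('app', s, a)
        if not is_value(a):
            s = step(a)
            return None if s is None else ('app', f, s)
        return subst(f[2], f[1], a) if f[0] == 'lam' else None
    if tag == 'if':
        c = t[1]
        if c[0] == 'true':
            return t[2]
        if c[0] == 'false':
            return t[3]
        s = step(c)
        return None if s is None else ('if', s, t[2], t[3])
    return None

def normalize(t, fuel=1000):
    """Step until irreducible."""
    while fuel > 0:
        s = step(t)
        if s is None:
            return t
        t, fuel = s, fuel - 1
    raise RuntimeError('fuel exhausted')

# (λx:bool. if x then false else true) true  evaluates to  false
not_fn = ('lam', 'x', ('if', ('var', 'x'), ('false',), ('true',)))
print(normalize(('app', not_fn, ('true',))))  # → ('false',)
```

The fuel parameter is only a safeguard for the sketch; strong normalization is exactly the claim that a well-typed term always reaches its normal form.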

A first attempt at proving strong normalization for STLC

We will first attempt a syntactic proof of the strong normalization property of STLC to demonstrate how it fails. However, first we need to state what we mean by strong normalization.

Definition 1.

For expression e and value v, define e ⇓ v ≜ e →* v, and e ⇓ ≜ ∃v. e ⇓ v.

Theorem 1 (Strong Normalization).

If · ⊢ e : τ, then e ⇓.

Proof.

¡This proof gets stuck and is not complete!
Induction on the structure of the typing derivation.
Case T-True: true has already terminated.
Case T-False: same as for true.
Case T-If: simple, but requires the use of canonical forms of bool. (See Pierce’s Types and Programming Languages [Pierce, 2002] for more about canonical forms.)
Case T-Abs: a lambda abstraction is a value already, so it has terminated.
Case T-App: here e = e₁ e₂ with · ⊢ e₁ : τ₂ → τ and · ⊢ e₂ : τ₂.
By the induction hypothesis, we get e₁ ⇓ and e₂ ⇓. By the type of e₁, we conclude e₁ ⇓ λx : τ₂. e′. What we need to show is e₁ e₂ ⇓. By the evaluation rules, we know e₁ e₂ takes the following steps:

e₁ e₂ →* (λx : τ₂. e′) e₂ →* (λx : τ₂. e′) v₂ → e′[v₂/x]

Here we run into an issue, as we know nothing about e′. As mentioned, we know from the induction hypothesis that e₁ evaluates to a lambda abstraction, which makes e₁ strongly normalizing. However, this says nothing about how the body of the lambda abstraction evaluates. Our induction hypothesis is simply not strong enough. ∎

The direct-style proof did not work in this case, and it is not clear how to make it work.

Proof of strong normalization using a logical predicate

Now that the direct proof has failed, we try using a logical predicate. First, we define the predicate SN_τ(e):

SN_bool(e)    ≜ · ⊢ e : bool ∧ e ⇓
SN_{τ₁→τ₂}(e) ≜ · ⊢ e : τ₁ → τ₂ ∧ e ⇓ ∧ (∀e′. SN_{τ₁}(e′) ⟹ SN_{τ₂}(e e′))

Now recall the three conditions from Section 1.3 that a logical predicate should satisfy. It is easy to verify that SN_τ only accepts closed, well-typed terms. Further, the predicate also requires terms to have the property we are interested in proving, namely e ⇓. Finally, it should satisfy that “the property of interest is preserved by eliminating forms”. In STLC, lambdas are eliminated by application, which means that application should preserve strong normalization when the argument is strongly normalizing.

The logical predicate is defined over the structure of τ, which has bool as a base type, so the definition is well-founded. (This may seem like a moot point as it is so obvious, but for some of the logical relations we see later it is not so, so we may as well start the habit of checking this now.) We are now ready to prove strong normalization using SN. To this end, we have the following lemmas:

Lemma 1.

If · ⊢ e : τ, then SN_τ(e).

Lemma 2.

If SN_τ(e), then e ⇓.

These two lemmas are common for proofs using a logical predicate (or relation). We first prove that all well-typed terms are in the predicate, and then we prove that all terms in the predicate have the property we want to show (in this case strong normalization).

The proof of Lemma 2 is by induction on τ. This proof is straightforward because strong normalization was baked into the predicate. It is generally a straightforward proof, as our rules of thumb guide us to bake the property of interest into the predicate.

To prove Lemma 1, we could try induction over the typing derivation, but we would fail to show the case for T-Abs. Instead, we prove a generalization of Lemma 1:

Theorem 2 (Lemma 1 generalized).

If Γ ⊢ e : τ and γ ⊨ Γ, then SN_τ(γ(e)).

This theorem uses a substitution γ to close off the expression e. In order for γ to close off e, it must map all the possible free variables of e to strongly normalizing closed terms. When we prove this lemma, we get a stronger induction hypothesis than we have when we try to prove Lemma 1 directly. In Lemma 1, the induction hypothesis can only be used with a closed term; but in this lemma, we can use an open term provided we have a substitution that closes it.

To be more specific, a substitution works as follows:

Definition 2.
(We do not formally define substitution, γ(e). We refer to Pierce [2002] for a formal definition.)

γ ⊨ Γ is read “the substitution γ satisfies the type environment Γ”, and it is defined as follows:

γ ⊨ Γ ≜ dom(γ) = dom(Γ) ∧ ∀x ∈ dom(Γ). SN_{Γ(x)}(γ(x))

To prove Theorem 2, we need two further lemmas:

Lemma 3 (Substitution Lemma).

If and , then

Proof.

Left as an exercise. ∎

Lemma 4 (SN preserved by forward/backward reduction).

Suppose · ⊢ e : τ and e → e′.

  1. if SN_τ(e), then SN_τ(e′)

  2. if SN_τ(e′), then SN_τ(e)

Proof.

Left as an exercise. ∎

Proof (Theorem 2, Lemma 1 generalized).

Proof by induction on .
Case T-True,
Assume:

We need to show:

There is no variable, so the substitution does nothing, and we just need to show SN_bool(true), which is true as true is a closed, well-typed value that has already terminated.
Case T-False, similar to the true case.
Case T-Var,
Assume:

We need to show:

This case follows from the definition of γ ⊨ Γ. We know that x is well-typed, so it is in the domain of Γ. From the definition of γ ⊨ Γ, we get SN_{Γ(x)}(γ(x)). From well-typedness of x, we have Γ(x) = τ, which then gives us what we needed to show.
Case T-If, left as an exercise.
Case T-App,
Assume:

We need to show:

which amounts to SN_τ(γ(e₁) γ(e₂)). By the induction hypothesis we have

(1) SN_{τ₂→τ}(γ(e₁))
(2) SN_{τ₂}(γ(e₂))

By the 3rd property of (1), instantiated with (2), we get SN_τ(γ(e₁) γ(e₂)), which is the result we need.

Note this was the case where we got stuck when we tried to do the direct proof. With the logical predicate, it is easily proven because we made sure to bake information about application into the predicate when we followed the rule of thumb: “The property of interest is preserved by eliminating forms”.
Case T-Abs,
Assume:

We need to show:

which amounts to . Our induction hypothesis in this case reads:

It suffices to show the following three things:

If we use the substitution lemma (Lemma 3) and push the γ in under the λ-abstraction, then we get 1. The lambda abstraction is a value, so 2 is true by definition.

It only remains to show 3. To do this, we want to somehow apply the induction hypothesis for which we need a such that . We already have and , so our

should probably have the form

for some of type . Let us move on and see if any good candidates for present themselves.

Let be given and assume . We then need to show . From , it follows that for some . is a good candidate for so let . From the forward part of the preservation lemma (Lemma 4), we can further conclude . We use this to conclude which we use with the assumption to instantiate the induction hypothesis and get .

Now consider the following evaluation:

We already concluded the first series of steps above. We can then do a β-reduction to take the next step, and finally we get something equivalent to the desired result. That is, we have the evaluation

From the premises, we have the required typings, and we already argued the evaluation above, so from the application typing rule we get that the application is well typed. We can use this with the above evaluation and the forward part of the preservation lemma (Lemma 4) to argue that every intermediate expression in the steps is closed and well typed.

If we use with and the fact that every intermediate step in the evaluation is closed and well typed, then we can use the backward reduction part of the preservation lemma to get which is the result we wanted. ∎

2.2 Exercises

  1. Prove SN preserved by forward/backward reduction (Lemma 4).

  2. Prove the substitution lemma (Lemma 3).

  3. Go through the cases of the proof for Theorem 2 by yourself.

  4. Prove the T-If case of Theorem 2.

  5. Extend the language with pairs and adjust the proofs.

    Specifically, how do you apply the rules of thumb for the case of pairs? Do you need to add anything for the third clause (preservation under eliminating forms), or does it work without doing anything, as it did for the case of booleans?


3 Type Safety for STLC

In this section, we prove type safety for simply typed lambda calculus using a logical predicate.

First, we need to consider what type safety is. The classical mantra for type safety is: “Well-typed programs do not go wrong.” It depends on the language and type system what go wrong actually means, but in our case a program has gone wrong when it is stuck (an expression is stuck if it is irreducible but not a value). In the case of language-based security and information flow control, the notion of going wrong would instead be an undesired flow of information.

3.1 Type safety - the classical treatment

Type safety for simply typed lambda calculus is stated as follows:

Theorem 3 (Type Safety for STLC).

If · ⊢ e : τ and e →* e′, then e′ is a value or ∃e″. e′ → e″.

Traditionally, the type safety proof uses two lemmas: progress and preservation.

Lemma 5 (Progress).

If · ⊢ e : τ, then e is a value or ∃e′. e → e′.

Progress is normally proved by induction on the typing derivation.

Lemma 6 (Preservation).

If · ⊢ e : τ and e → e′, then · ⊢ e′ : τ.

Preservation is normally proved by induction on the evaluation. Preservation is also known as subject reduction. Progress and preservation talk about a single step, so to prove type safety we have to do induction on the multi-step evaluation. Here we do not want to prove type safety the traditional way, but if you are unfamiliar with it and want to learn more, then we refer to Pierce’s Types and Programming Languages [Pierce, 2002].

We will use a logical predicate (as it is a unary property) to prove type safety.

3.2 Type safety - using logical predicate

We define the predicate in a slightly different way compared to Section 2. We define it in two parts: a value interpretation and an expression interpretation. The value interpretation V⟦·⟧ is a function from types to the power set of closed values.

The value interpretation is defined as:

V⟦bool⟧    = { true, false }
V⟦τ₁ → τ₂⟧ = { λx : τ₁. e | ∀v ∈ V⟦τ₁⟧. e[v/x] ∈ E⟦τ₂⟧ }

We define the expression interpretation as:

E⟦τ⟧ = { e | ∀e′. e →* e′ ∧ irred(e′) ⟹ e′ ∈ V⟦τ⟧ }

Notice that neither V⟦τ⟧ nor E⟦τ⟧ requires well-typedness. Normally, the logical predicate would require it, as our guidelines suggest. However, as the goal is to prove type safety, we do not want it as a part of the predicate. In fact, if we did include a well-typedness requirement, then we would end up having to prove preservation for some of the proofs to go through. We do, however, require the value interpretation to only contain closed values.

An expression is irreducible if it is unable to take any reduction steps according to the evaluation rules. The predicate irred(e) captures this:

irred(e) ≜ ¬∃e′. e → e′

The sets are defined on the structure of the types: V⟦τ₁ → τ₂⟧ refers to V⟦τ₁⟧ and E⟦τ₂⟧ at structurally smaller types, and E⟦τ⟧ uses V⟦τ⟧ directly, so the definition is structurally well-founded. To prove type safety, we first define a new predicate, safe(e):

safe(e) ≜ ∀e′. e →* e′ ⟹ e′ is a value ∨ ∃e″. e′ → e″

An expression is safe if it can take a number of steps and end up either as a value or as an expression that can take another step.
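The safe predicate can be sketched executably (not from the note; the tuple term encoding is invented, and the evaluator is repeated here so the sketch is self-contained): step the expression and check that every irreducible expression reached is a value.

```python
def is_value(t):
    return t[0] in ('true', 'false', 'lam')

def subst(t, x, v):
    tag = t[0]
    if tag == 'var':
        return v if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    if tag == 'app':
        return ('app', subst(t[1], x, v), subst(t[2], x, v))
    if tag == 'if':
        return ('if',) + tuple(subst(s, x, v) for s in t[1:])
    return t

def step(t):
    """One small step, or None if t is irreducible."""
    tag = t[0]
    if tag == 'app':
        f, a = t[1], t[2]
        if not is_value(f):
            s = step(f); return None if s is None else ('app', s, a)
        if not is_value(a):
            s = step(a); return None if s is None else ('app', f, s)
        return subst(f[2], f[1], a) if f[0] == 'lam' else None
    if tag == 'if':
        c = t[1]
        if c[0] == 'true': return t[2]
        if c[0] == 'false': return t[3]
        s = step(c); return None if s is None else ('if', s, t[2], t[3])
    return None

def safe(t, fuel=1000):
    """safe(e): every irreducible expression e reaches must be a value."""
    for _ in range(fuel):
        if is_value(t):
            return True
        s = step(t)
        if s is None:
            return False  # irreducible but not a value: stuck
        t = s
    raise RuntimeError('fuel exhausted')

print(safe(('if', ('true',), ('false',), ('true',))))  # True  (well-typed)
print(safe(('app', ('true',), ('false',))))            # False (stuck)
```

Because evaluation here is deterministic, checking the single reduction sequence suffices; in a nondeterministic language, safe would have to quantify over all reducts.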

We are now ready to prove type safety. Just like we did for strong normalization, we use two lemmas:

Lemma 7.

If · ⊢ e : τ, then e ∈ E⟦τ⟧.

Lemma 8.

If e ∈ E⟦τ⟧, then safe(e).

Rather than proving Lemma 7 directly, we prove a more general theorem and get Lemma 7 as a consequence. We are not yet in a position to state the generalization. First, we need to define the interpretation of environments:

G⟦∅⟧        = { ∅ }
G⟦Γ, x : τ⟧ = { γ[x ↦ v] | γ ∈ G⟦Γ⟧ ∧ v ∈ V⟦τ⟧ }

Further, we need to define semantic type safety:

Γ ⊨ e : τ ≜ ∀γ ∈ G⟦Γ⟧. γ(e) ∈ E⟦τ⟧

This definition should look familiar because we use the same trick as we did for strong normalization: Instead of only considering closed terms, we consider all terms but require a substitution that closes it.

We can now define our generalized version of Lemma 7:

Theorem 4 (Fundamental Property).

If Γ ⊢ e : τ, then Γ ⊨ e : τ.

A theorem like this would typically be the first thing you prove after defining a logical relation. In this case, the theorem says that syntactic type safety implies semantic type safety.

We also alter Lemma 8 to fit with Theorem 4:

Lemma 9.

If · ⊨ e : τ, then safe(e).

Proof.

Suppose e →* e′ for some e′; then we need to show that e′ is a value or ∃e″. e′ → e″. We proceed by cases on whether or not irred(e′):
Case ¬irred(e′): this case follows directly from the definition of irred: irred(e′) is defined as ¬∃e″. e′ → e″, and as the assumption is ¬irred(e′), we get ∃e″. e′ → e″.
Case irred(e′): by assumption, we have · ⊨ e : τ. As the typing context is empty, we choose the empty substitution and get e ∈ E⟦τ⟧. We now use the definition of E⟦τ⟧ with the two assumptions e →* e′ and irred(e′) to conclude e′ ∈ V⟦τ⟧. As e′ is in the value interpretation of τ, we can conclude that it is a value. ∎

To prove the Fundamental Property (Theorem 4), we need a substitution lemma:

Lemma 10 (Substitution).

Let e be a syntactically well-formed term, let v be a closed value, let γ be a substitution that maps term variables to closed values, and let x be a variable not in the domain of γ; then

γ(e[v/x]) = (γ[x ↦ v])(e)

Proof.

By induction on the size of e.
Case e = x: this case is immediate by the definition of substitution. That is, by definition we have γ(x[v/x]) = γ(v) = v = (γ[x ↦ v])(x), where γ(v) = v because v is closed.
Case e = λy : τ. e′: in this case our induction hypothesis is:

We wish to show

(3)
(4)
(5)
(6)

In the first step (3), we swap the two mappings. It is safe to do so, as both v and the values in the range of γ are closed, so we know that no variable capturing will occur. In the second step (4), we just use the definition of substitution (Definition 2). In the third step (5), we use the induction hypothesis. (The induction hypothesis actually has a number of premises; as an exercise, convince yourself that they are satisfied.) Finally, in the last step (6), we use the definition of substitution to get the binding out as an extension of γ. ∎

Proof. (Fundamental Property, Theorem 4).

Proof by induction on the typing judgement.
Case T-Abs,
assuming , we need to show . Suppose and show

Now suppose that the closed-off term steps to some e′ with irred(e′). We then need to show e′ ∈ V⟦τ₁ → τ₂⟧. A lambda abstraction is irreducible because it is a value, and we can conclude it takes no steps; in other words, e′ is the lambda abstraction itself. This means we need to show that the lambda abstraction is in V⟦τ₁ → τ₂⟧. Now suppose we are given a v ∈ V⟦τ₁⟧; then we need to show that the substituted body is in E⟦τ₂⟧.

Keep the above proof goal in mind and consider the induction hypothesis:

Instantiate this with . We have because of assumptions and . The instantiation gives us . The equivalence is justified by the substitution lemma we proved. This is exactly the proof goal we kept in mind.
Case T-App, show this case as an exercise.
The remaining cases are straightforward. ∎

Now consider what happens if we add pairs to the language (exercise 5 in exercise section 2.2). We need to add a clause to the value interpretation:

V⟦τ₁ × τ₂⟧ = { ⟨v₁, v₂⟩ | v₁ ∈ V⟦τ₁⟧ ∧ v₂ ∈ V⟦τ₂⟧ }

There is nothing surprising in this addition to the value relation, and it should not be a challenge to show the pair case of the proofs.

If we instead extend our language with sum types, τ₁ + τ₂, then we need to add the following clause to the value interpretation:

V⟦τ₁ + τ₂⟧ = { inl v | v ∈ V⟦τ₁⟧ } ∪ { inr v | v ∈ V⟦τ₂⟧ }

It turns out this clause is sufficient. One might think that it is necessary to also require the bodies of the match to be in the expression interpretation. This requirement would, however, give well-foundedness problems, as the result type of the match is not a structurally smaller type than τ₁ + τ₂. It may come as a surprise that we do not need to relate the match expressions, as the slogan for logical relations is: “Related inputs to related outputs.”

3.3 Exercises

  1. Prove the T-App case of the Fundamental Property (Theorem 4).

  2. Verify the remaining cases of Theorem 4 (T-True,T-False,T-Var, and T-If).

4 Universal Types and Relational Substitutions

In the previous sections, we considered the unary properties of safety and termination, but now we shift our focus to relational properties, specifically program equivalences. Generally speaking, we use logical relations rather than predicates for proving relational properties. A program equivalence is a relational property, so we are going to need a logical relation.

We will consider the language System F, which is STLC with universal types. Universal types allow us to write generic functions. For instance, say we have a function sortint that sorts integer lists:

sortint : list int → list int

The function takes a list of integers and returns the sorted list. Say we now want a function sortstring that sorts lists of strings. Instead of implementing a completely new sorting function for this, we could factor out the generic code responsible for sorting from sortint and make a generic function. The type signature of a generic sort function would be:

sort : ∀α. list α → (α → α → bool) → list α

The generic sort function takes a type, a list of elements of this type, and a comparison function that compares two elements of the given type. The result of the sorting function is a list sorted according to the comparison function. An example of an application of this function is an instantiation with the integer type, applied to an integer list and an integer comparison.

Whereas sort instantiated with the string type, but given an integer list, would not be a well-typed instantiation.

Here the application to the integer list is not well typed, but if we instead use a list of strings, then it type checks.
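The factoring can be sketched in an untyped setting as well (a sketch, not from the note; the names gsort and leq are invented): the sorting code is written once and is generic in the element type precisely because the comparison is passed in as an argument.

```python
def gsort(xs, leq):
    """Insertion sort, generic in the element type via the comparison leq."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and leq(out[i], x):
            i += 1
        out.insert(i, x)
    return out

# The same code sorts integers and strings; only the comparison "knows" the type.
print(gsort([3, 1, 2], lambda a, b: a <= b))    # [1, 2, 3]
print(gsort(["b", "a"], lambda a, b: a <= b))   # ['a', 'b']
```

In System F, the type ∀α in the signature is what guarantees statically that gsort cannot inspect the elements except through the supplied comparison.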

We want to extend the simply typed lambda calculus with functions that abstract over types in the same way lambda abstractions, λx : τ. e, abstract over terms. We do that by introducing a type abstraction:

Λα. e

This function abstracts over the type α, which allows e to depend on α.

4.1 System F (STLC with universal types)

We extend STLC as follows to get System F:

New evaluation rule: (Λα. e)[τ] → e[τ/α]. Type environment: Δ ::= ∅ | Δ, α

The type environment Δ consists of type variables, which are assumed to be distinct. That is, Δ, α is only a well-formed environment if α ∉ Δ. (We do not annotate α with a kind, as we only have one kind in this language.) With the addition of type environments, we update the form of the typing judgement as follows:

We now need a notion of well-formed types. If τ is well formed with respect to Δ, then we write Δ ⊢ τ.

We do not include the formal rules here, but they amount to ftv(τ) ⊆ Δ, where ftv(τ) is the set of free type variables of τ.

We further introduce a notion of well-formed typing contexts. A context Γ is well formed if all the types that appear in the range of Γ are well formed.

For any typing judgment, we have as an invariant that Γ is well formed with respect to Δ and that τ is well formed with respect to Δ. The old typing rules modified to use the new form of the typing judgment look like this:

Notice that the only thing that has changed is that the type environment has been added to the judgments. We further extend the typing rules with the following two rules to account for our new language constructs:

4.2 Properties of System-F: Free Theorems

In System F, certain types reveal the behavior of every function with that type. Let us consider terms with the type ∀α. α → α. Recall from Section 1.2 that such a term has to be the identity function. We can now phrase this as a theorem:

Theorem 5.

If all of the following hold

  • ,

then

This is a free theorem in this language. Another free theorem from Section 1.2 is for expressions whose type forces them to be constant functions. We can also phrase this as a theorem:

Theorem 6.

If all of the following hold

then

(We have not defined contextual equivalence yet. For now, it suffices to know that it equates programs that are behaviorally the same.)

We can even allow the type abstraction to be instantiated with different types:

Theorem 7.

If all of the following hold

then

We get these free theorems because the functions do not know the type of their argument, which means that they have no way to inspect it. The function can only treat its argument as an unknown “blob”, so it has no choice but to return the same value every time.

The question now is: how do we prove these free theorems? The last two theorems both talk about program equivalence, so the proof technique of choice is a logical relation. The first theorem does not mention a program equivalence, but it can also be proven with a logical relation.

4.3 Contextual Equivalence

To define a contextual equivalence, we first define the notion of a program context. A program context is a complete program with exactly one hole in it:

For instance, C = λx : τ. [·] is a context where the hole is the body of the lambda abstraction.
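Plugging a hole can be made concrete with a small sketch (not from the note; the tuple term encoding and the ('hole',) node are invented for the illustration):

```python
def plug(ctx, e):
    """Replace the unique hole in ctx by e.  Deliberately NOT capture-avoiding:
    binders in the context may capture free variables of e, which is exactly
    what context typing accounts for."""
    tag = ctx[0]
    if tag == 'hole':
        return e
    return (tag,) + tuple(plug(c, e) if isinstance(c, tuple) else c
                          for c in ctx[1:])

# C = λx:bool. [·]   plugged with   if x then false else true
C = ('lam', 'x', ('hole',))
print(plug(C, ('if', ('var', 'x'), ('false',), ('true',))))
# → ('lam', 'x', ('if', ('var', 'x'), ('false',), ('true',)))
```

Note how the plugged expression has x free, yet the result is closed because the context binds x around the hole.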

We need a notion of context typing. For simplicity, we just introduce it for the simply typed lambda calculus. The context typing is written as C : (Γ ⊢ τ) ⇒ (Γ′ ⊢ τ′).

This means that for any expression e of type τ under Γ, if we embed it into C, then the type of the embedding C[e] is τ′ under Γ′. For our example of a context, we would have

because the hole of the context can be plugged with any expression of the hole’s type with the variable x free in it. When the hole in the context is plugged with such an expression, the result is closed and well typed. For instance, if we plug the example context with such an expression, we have

Informally, we want contextual equivalence to express that two expressions give the same result no matter what program context they are plugged into. In other words, two expressions are contextually equivalent when no program context is able to observe any difference in behavior between the two expressions. For this reason, contextual equivalence is also known as observational equivalence. A hole has to be plugged with a term of the correct type, so we annotate the equivalence with the type of the hole, which means that the two contextually equivalent expressions must have that type.