Realizability Interpretation and Normalization of Typed Call-by-Need λ-calculus With Control

03/02/2018 · by Étienne Miquey and Hugo Herbelin · Inria

We define a variant of realizability where realizers are pairs of a term and a substitution. This variant allows us to prove the normalization of a simply-typed call-by-need λ-calculus with control due to Ariola et al. Indeed, in such a call-by-need calculus, substitutions have to be delayed until knowing whether an argument is really needed. In a second step, we extend the proof to a call-by-need λ-calculus equipped with a type system equivalent to classical second-order predicate logic, representing one step towards proving the normalization of the call-by-need classical second-order arithmetic introduced by the second author to provide a proof-as-program interpretation of the axiom of dependent choice.


Introduction

Realizability-based normalization

Normalization by realizability is a standard technique to prove the normalization of typed λ-calculi. Originally introduced by Tait [36] to prove the normalization of System T, it was extended by Girard to prove the normalization of System F [11]. This kind of technique, also called normalization by reducibility or normalization by logical relations, works by interpreting each type by a set of typed or untyped terms seen as realizers of the type, then showing that the way these sets of realizers are built preserves properties such as normalization. Over the years, this method has been used and generalized many times; for a more detailed account, we refer the reader to the work of Gallier [9].

Realizability techniques were adapted to the normalization of various calculi for classical logic (see e.g. [3, 32]). A specific framework tailored to the study of realizability for classical logic was designed by Krivine [19] on top of a λ-calculus with control whose reduction is defined in terms of an abstract machine. In this machinery, terms are evaluated in front of stacks, and control (thus classical logic) is made available through the possibility of saving and restoring stacks. During the last twenty years, Krivine's classical realizability turned out to be fruitful both from the point of view of logic, leading to the construction of new models of set theory and generalizing in particular the technique of Cohen's forcing [20, 21, 22]; and on its computational facet, providing alternative tools for the analysis of the computational content of classical programs (see for instance [27] about witness extraction or [12, 13] about specification problems).

Notably, Krivine realizability is one of the approaches contributing to advocating the motto that, through the Curry-Howard correspondence, with new programming instructions come new reasoning principles (for instance, one way to realize the axiom of dependent choice in classical realizability is by means of an extra instruction quote [18]). Our original motivation for the present work is actually in line with this idea, in the sense that our long-term purpose is to give a realizability interpretation to dPAω, a call-by-need calculus defined by the second author [15]. In this calculus, lazy evaluation is indeed a fundamental ingredient in order to obtain an executable proof term for the axiom of dependent choice.

Contributions of the paper

In order to address the normalization of the typed call-by-need λ-calculus, we design a variant of Krivine's classical realizability where the realizers are closures (a term with a substitution for its free variables). The call-by-need λ-calculus with control that we consider is the λ[lvτ⋆]-calculus. This calculus, which was defined by Ariola et al. [2], is syntactically described as an extension with explicit substitutions of the λ̄μμ̃-calculus [6, 14, 29]. The syntax of the λ̄μμ̃-calculus itself refines the syntax of the λ-calculus by syntactically distinguishing between terms and evaluation contexts. It also contains commands, which combine terms and evaluation contexts so that they can interact together. Thinking of evaluation contexts as stacks and of commands as states, the λ̄μμ̃-calculus can also be seen as a syntax for abstract machines. From a proof-as-program point of view, the λ̄μμ̃-calculus and its variants can be seen as a term syntax for proofs of Gentzen's sequent calculus. In particular, the λ̄μμ̃-calculus contains control operators, which give a computational interpretation to classical logic.

We give a proof of normalization first for the simply-typed λ[lvτ⋆]-calculus (even though it has not been done formally, the normalization of the λlv-calculus presented in [2] should also be derivable from Polonowski's proof of strong normalization of the non-deterministic λ̄μμ̃-calculus [35]: the λlv-calculus, of which the λ[lvτ⋆]-calculus is a reformulation with explicit environments, is indeed a particular evaluation strategy for the λ̄μμ̃-calculus, so that the strong normalization of the non-deterministic variant of the latter should imply the normalization of the former as a particular case), then for a type system with first-order and second-order quantification. While we only apply our technique to the normalization of the λ[lvτ⋆]-calculus, our interpretation incidentally suggests a way to adapt Krivine realizability to other call-by-need settings. This paves the way to the computational interpretation of classical proofs using lazy evaluation or shared memory cells, including the case of the call-by-need second-order arithmetic dPAω [15].

1 The λ[lvτ⋆]-calculus

1.1 The call-by-need evaluation strategy

The call-by-need evaluation strategy of the λ-calculus evaluates the arguments of functions only when needed and, when needed, shares their evaluations across all the places where the argument is required. Call-by-need evaluation is at the heart of functional programming languages such as Haskell. It has in common with the call-by-value evaluation strategy that all the places where a same argument is used share the same value. Nevertheless, it observationally behaves like the call-by-name evaluation strategy (for the pure λ-calculus), in the sense that a given computation eventually evaluates to a value if and only if it evaluates to the same value (up to inner reduction) along the call-by-name evaluation. In particular, in a setting with non-terminating computations, it is not observationally equivalent to the call-by-value evaluation: if the evaluation of a useless argument loops in the call-by-value evaluation, the whole computation loops, which is not the case with call-by-name and call-by-need evaluations.
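This difference can be replayed concretely in OCaml, whose Lazy module memoizes suspended computations and thus emulates call-by-need; the following snippet is our own illustration, not part of the paper's calculus.

  (* A diverging computation, passed below as an unused argument. *)
  let rec loop () : int = loop ()

  (* Call-by-value: arguments are evaluated before the call, so
     [fst_cbv 0 (loop ())] diverges even though [y] is never used. *)
  let fst_cbv (x : int) (y : int) : int = ignore y; x

  (* Call-by-need, emulated with a thunk: the argument is frozen,
     evaluated at most once, and only if actually forced. *)
  let fst_cbn (x : int) (y : int Lazy.t) : int = ignore y; x

  let () =
    assert (fst_cbn 0 (lazy (loop ())) = 0)   (* terminates *)
    (* whereas [fst_cbv 0 (loop ())] would loop forever *)

Note that forcing the same thunk twice evaluates it only once: this memoization is precisely what distinguishes call-by-need from call-by-name.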

These three evaluation strategies can be turned into equational theories. For call-by-name and call-by-value, this was done by Plotkin through continuation-passing-style (CPS) semantics characterizing these theories [34]. For the call-by-need evaluation strategy, a specific equational theory reflecting the intensional behavior of the strategy into a semantics was proposed independently by Ariola and Felleisen [1], and by Maraist, Odersky and Wadler [26]. A continuation-passing-style semantics was proposed in the 90s by Okasaki, Lee and Tarditi [30]. However, this semantics does not ensure normalization of simply-typed call-by-need evaluation, as shown in [2], thus failing to ensure a property which holds in the simply-typed call-by-name and call-by-value cases.

Continuation-passing-style semantics de facto gives a semantics to the extension of the λ-calculus with control operators (that is to say, operators such as Scheme's callcc, Felleisen's control operators [8], Parigot's μ and [·] operators [31], or Crolard's catch and throw operators [5]). In particular, even though call-by-name and call-by-need are observationally equivalent on the pure λ-calculus, their different intensional behaviors induce different CPS semantics, leading to different observational behaviors when control operators are considered. On the other hand, the semantics of calculi with control can also be reconstructed from an analysis of the duality between programs and their evaluation contexts, and of the duality between the let construct (which binds programs) and a control operator such as Parigot's μ (which binds evaluation contexts). Such an analysis can be done in the context of the λ̄μμ̃-calculus [6, 14].

In the call-by-name and call-by-value cases, the approach based on the λ̄μμ̃-calculus leads to continuation-passing-style semantics similar to the ones given by Plotkin or, in the call-by-name case, also to the one of Lafont, Reus and Streicher [23]. As for call-by-need, the λlv-calculus, a call-by-need version of the λ̄μμ̃-calculus, is defined in [2]. A continuation-passing-style semantics is then defined via a calculus with explicit environments called the λ[lvτ⋆]-calculus [2]. This semantics, which is different from Okasaki, Lee and Tarditi's one [30], is the object of study in this paper.

1.2 Explicit environments

While the results presented in this paper could be directly expressed using the λlv-calculus, the realizability interpretation naturally arises from the decomposition of this calculus into a different calculus with an explicit environment, the λ[lvτ⋆]-calculus [2]. Indeed, as we shall see in the sequel, this decomposition highlights different syntactic categories that are deeply involved in the type system and in the definition of the realizability interpretation.

The λ[lvτ⋆]-calculus is a reformulation of the λlv-calculus with explicit environments, called stores and denoted by τ. Stores consist of a list of bindings of the form [x := t], where x is a term variable and t a term, and of bindings of the form [α := e], where α is a context variable and e a context. For instance, in the closure cτ[x := t]τ′, the variable x is bound to t in c and τ′. Besides, the term t might be an unevaluated term (i.e. lazily stored), so that if x is eagerly demanded at some point during the execution of this closure, t will be reduced in order to obtain a value. If the evaluation of t indeed produces a value V, the store will be updated with the binding [x := V]. However, a binding of this form (with a value) is fixed for the rest of the execution. As such, our so-called stores somewhat behave like lazy explicit substitutions or mutable environments.

To draw the comparison between our structures and the usual notions of stores and environments, two things should be observed. First, the usual notion of store refers to a fully mutable list structure, in the sense that cells can be updated at any time and thus values might be replaced. Second, the usual notion of environment designates a structure in which variables are bound to closures made of a term and an environment. In particular, terms and environments are duplicated, i.e. sharing is not allowed. Such a structure resembles a tree whose nodes are decorated by terms, as opposed to a machinery allowing sharing (like ours), whose underlying structure is broadly a directed acyclic graph. See for instance [24] for a Krivine abstract machine with sharing.
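The comparison can be summed up by the following OCaml type sketches; they are ours and deliberately schematic, with terms left abstract.

  type term = Term of string   (* stand-in for actual terms *)

  (* 1. A fully mutable store: cells can be updated at any time,
     so values may be replaced during execution. *)
  type mutable_store = (string, term) Hashtbl.t

  (* 2. A classical environment: variables are bound to closures
     carrying their own copy of an environment, so terms and
     environments are duplicated (a tree-like structure, no sharing). *)
  type environment = (string * closure) list
  and closure = Closure of term * environment

  (* 3. The stores of this paper: a single shared list of bindings,
     where a frozen term is replaced (once) by its value, giving a
     DAG-like sharing structure. *)
  type shared_store = (string * term) list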

1.3 Syntax & reduction rules

The lazy evaluation of terms allows us to reduce a command ⟨t || μ̃x.c⟩ to the command c together with the binding [x := t]:

⟨t || μ̃x.c⟩τ → cτ[x := t]

In this case, the term t is left unevaluated ("frozen") in the store, until possibly reaching a command in which the variable x is needed. When evaluation reaches a command of the form ⟨x || F⟩τ[x := t]τ′, the binding [x := t] is opened and the term t is evaluated in front of the context μ̃[x].⟨x || F⟩τ′:

⟨x || F⟩τ[x := t]τ′ → ⟨t || μ̃[x].⟨x || F⟩τ′⟩τ

The reader can think of the previous rule as the "defrosting" operation of the frozen term t: this term is evaluated in the prefix of the store τ which predates it, in front of the context μ̃[x].⟨x || F⟩τ′, where the binder μ̃[x] is waiting for a value. This context keeps track of the part of the store τ′ that was originally located after the binding [x := t]. This way, if a value V is indeed furnished for the binder μ̃[x], the original command is evaluated in the updated full store:

⟨V || μ̃[x].⟨x || F⟩τ′⟩τ → ⟨V || F⟩τ[x := V]τ′

The brackets in μ̃[x].c are used to express the fact that the variable x is forced at top level (unlike contexts of the shape μ̃x.C[⟨x || F⟩] in the λlv-calculus). The reduction system resembles the one of an abstract machine. In particular, it allows us to keep the standard redex at the top of a command and avoids searching through the meta-context for work to be done.

Note that our approach slightly differs from [2], since we split values into two categories: strong values (v) and weak values (V). The strong values correspond to values strictly speaking. The weak values include the variables, which force the evaluation of the terms to which they refer into shared strong values. Their evaluation may require capturing a continuation. The syntax of the language, which includes constants k and co-constants κ, is given in Figure 1. As for the reduction →, we define it as the compatible reflexive transitive closure of the rules given in Figure 1.

Figure 1: Syntax and reduction rules of the λ[lvτ⋆]-calculus
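To make these categories and the store rules concrete, here is an OCaml sketch of the grammar and of the three store rules discussed above. It is our own reconstruction for illustration: constructor names are ours, and the remaining reduction rules of Figure 1 (beta, μ, and the rule for co-variables) are omitted.

  type strong_value =                        (* v *)
    | Lam of string * term                   (* λx.t *)
    | Const of string                        (* constant k *)
  and weak_value =                           (* V *)
    | Strong of strong_value
    | Var of string                          (* x *)
  and term =                                 (* t *)
    | Value of weak_value
    | Mu of string * command                 (* μα.c *)
  and forcing_ctx =                          (* F *)
    | CoConst of string                      (* co-constant κ *)
    | Arg of term * catchable_ctx            (* t · E *)
  and catchable_ctx =                        (* E *)
    | Forcing of forcing_ctx
    | CoVar of string                        (* α *)
    | MuTildeBr of string * forcing_ctx * store   (* μ̃[x].⟨x||F⟩τ′ *)
  and ctx =                                  (* e *)
    | Catchable of catchable_ctx
    | MuTilde of string * command            (* μ̃x.c *)
  and command = Cmd of term * ctx            (* ⟨t||e⟩ *)
  and binding =
    | BindTerm of string * term              (* [x := t] *)
    | BindCtx of string * catchable_ctx      (* [α := E] *)
  and store = binding list                   (* τ *)
  type closure = command * store             (* l = cτ *)

  (* Split a store as τ₀[x := t]τ₁ around the binding of x. *)
  let split (x : string) (tau : store) : (store * term * store) option =
    let rec go acc = function
      | [] -> None
      | BindTerm (y, t) :: rest when y = x -> Some (List.rev acc, t, rest)
      | b :: rest -> go (b :: acc) rest
    in
    go [] tau

  (* One reduction step, restricted to the three store rules. *)
  let step_store : closure -> closure option = function
    (* ⟨t || μ̃x.c⟩τ → cτ[x := t] : lazy storage *)
    | Cmd (t, MuTilde (x, c)), tau -> Some (c, tau @ [BindTerm (x, t)])
    (* ⟨x || F⟩τ₀[x := t]τ₁ → ⟨t || μ̃[x].⟨x||F⟩τ₁⟩τ₀ : defrosting *)
    | Cmd (Value (Var x), Catchable (Forcing f)), tau ->
        (match split x tau with
         | Some (tau0, t, tau1) ->
             Some (Cmd (t, Catchable (MuTildeBr (x, f, tau1))), tau0)
         | None -> None)
    (* ⟨V || μ̃[x].⟨x||F⟩τ₁⟩τ₀ → ⟨V || F⟩τ₀[x := V]τ₁ : update *)
    | Cmd (Value v, Catchable (MuTildeBr (x, f, tau1))), tau0 ->
        Some (Cmd (Value v, Catchable (Forcing f)),
              tau0 @ [BindTerm (x, Value v)] @ tau1)
    | _ -> None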

The different syntactic categories can be understood as the different levels of alternation in a context-free abstract machine (see [2]): the priority is first given to contexts at level e (lazy storage of terms), then to terms at level t, then back to contexts at level E, and so on down to the level v of strong values. These different categories are directly reflected in the definition of the abstract machine of [2], and will thus be involved in the definition of our realizability interpretation. We chose to highlight this by distinguishing different types of sequents already in the typing rules, which we shall now present.

1.4 A type system for the λ[lvτ⋆]-calculus.

Figure 2: Typing rules of the λ[lvτ⋆]-calculus

We have nine kinds of (one-sided) sequents, one for typing each of the nine syntactic categories. We write them with an annotation on the ⊢ sign, using one of the letters v, V, t, F, E, e, c, l, τ. Sequents typing values and terms assert a type, with the type written on the right of the ⊢ sign; sequents typing contexts expect a type A, with the type written A^⊥⊥; sequents typing commands and closures are black boxes neither asserting nor expecting a type; sequents typing stores instantiate a typing context. In other words, we have the following nine kinds of sequents:

Γ ⊢_v v : A    Γ ⊢_V V : A    Γ ⊢_t t : A
Γ ⊢_F F : A^⊥⊥    Γ ⊢_E E : A^⊥⊥    Γ ⊢_e e : A^⊥⊥
Γ ⊢_c c    Γ ⊢_l l    Γ ⊢_τ τ : Γ′

where types and typing contexts are defined by:

A, B ::= X | A → B        Γ ::= ε | Γ, x : A | Γ, α : A^⊥⊥

The typing rules are given in Figure 2, where we assume that a variable x (resp. co-variable α) occurs at most once in a typing context Γ (we implicitly assume the possibility of renaming variables by α-conversion). We also adopt the convention that constants k and co-constants κ come with a signature S which assigns them a type. This type system enjoys the property of subject reduction.

Theorem 1.1 (Subject reduction)

If Γ ⊢_l cτ and cτ → c′τ′, then Γ ⊢_l c′τ′.

Proof

By induction on typing derivations, see Appendix 0.A.∎

2 Normalization of the λ[lvτ⋆]-calculus

2.1 Normalization by realizability

The proof of normalization for the λ[lvτ⋆]-calculus that we present in this section is inspired by techniques of Krivine's classical realizability [19], whose notations we borrow. Actually, it is also very close to a proof by reducibility (see for instance the proof of normalization presented in [17, 3.2]). In a nutshell, to each type A is associated a set |A| of terms whose execution is guided by the structure of A. These terms are the ones usually called realizers in Krivine's classical realizability. Their definition is in fact indirect, and is done by orthogonality to a set of "correct" computations, called a pole. The choice of this set is central when studying the models induced by classical realizability for second-order logic, but in the present case we only pay attention to the particular pole of terminating computations. This is where one of the differences with usual proofs by reducibility lies: there, everything is done with respect to the set SN of normalizing terms, while our definitions are parametric in the pole (which is chosen to be the set of normalizing closures in the end). The adequacy lemma, which is the central piece, consists in proving that typed terms belong to the corresponding sets of realizers, and are thus normalizing.

In more detail, our proof can be sketched as follows. First, we generalize the usual notion of closed term to the notion of closed term-in-store. Intuitively, this is due to the fact that we are no longer interested in closed terms and in substitutions to close open terms, but rather in terms that are closed when considered in the current store. This is based on the simple observation that a store is nothing more than a shared substitution whose content might evolve along the execution. Second, we define the notion of pole ⊥⊥, a set of closures closed under anti-evaluation and store extension. In particular, the set of normalizing closures is a valid pole. This allows us to relate terms and contexts thanks to a notion of orthogonality with respect to the pole. We then define, for each formula A and each typing level σ, a set |A|_σ (resp. ‖A‖_σ) of terms (resp. contexts) in the corresponding syntactic category. These sets correspond to reducibility candidates, or to what is usually called truth values and falsity values in Krivine realizability. Finally, the core of the proof consists in the adequacy lemma, which shows that any closed term of type A at level σ is in the corresponding set |A|_σ. This guarantees that any typed closure is in any pole, and in particular in the pole of normalizing closures. Technically, the proof of adequacy evaluates in each case a state of an abstract machine (in our case a closure), so that the proof also proceeds by evaluation. A more detailed explanation of this observation, as well as a more introductory presentation of normalization proofs by classical realizability, is given in an article by Dagand and Scherer [7].

2.2 Realizability interpretation for the λ[lvτ⋆]-calculus

We begin by defining some key notions for stores that we shall need further in the proof.

Definition 1 (Closed store)

We extend the notion of free variable to stores:

FV(ε) ≜ ∅    FV(τ[x := t]) ≜ FV(τ) ∪ (FV(t) \ dom(τ))    FV(τ[α := E]) ≜ FV(τ) ∪ (FV(E) \ dom(τ))

so that we can define a closed store to be a store τ such that FV(τ) = ∅.

Definition 2 (Compatible stores)

We say that two stores τ and τ′ are independent, and write τ#τ′, when dom(τ) ∩ dom(τ′) = ∅. We say that they are compatible, and write τ ⋄ τ′, whenever for all variables x (resp. co-variables α) present in both stores (x ∈ dom(τ) ∩ dom(τ′)), the corresponding terms (resp. contexts) in τ and τ′ coincide. Finally, we say that τ′ is an extension of τ, and write τ ◁ τ′, whenever dom(τ) ⊆ dom(τ′) and τ ⋄ τ′.

We denote by τ∪τ′ the compatible union of the closed stores τ and τ′: it merges the bindings of the two lists, keeping a single copy of the bindings they share (which coincide by compatibility). A concrete sketch on association lists is given below.
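Assuming stores are represented as association lists, as in the sketches above, these notions can be rendered as follows; this is our simplification and glosses over the ordering of bindings, which the actual definitions treat with care.

  let dom (tau : (string * 'a) list) : string list = List.map fst tau

  (* Independent: disjoint domains. *)
  let independent tau tau' =
    List.for_all (fun x -> not (List.mem x (dom tau'))) (dom tau)

  (* Compatible: every variable present in both stores is bound to
     the same term (resp. context) in both. *)
  let compatible tau tau' =
    List.for_all
      (fun (x, t) ->
         match List.assoc_opt x tau' with
         | None -> true
         | Some t' -> t = t')
      tau

  (* tau' extends tau: compatible, and every binding of tau occurs
     in tau'. *)
  let extends tau tau' =
    compatible tau tau'
    && List.for_all (fun x -> List.mem x (dom tau')) (dom tau)

  (* Compatible union: the bindings of tau, followed by those of
     tau' that tau does not already contain. *)
  let union tau tau' =
    tau @ List.filter (fun (x, _) -> not (List.mem_assoc x tau)) tau'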

The following lemma (which follows easily from the previous definition) states the main property we will use about unions of compatible stores.

Lemma 1

If τ and τ′ are two compatible stores, then τ ◁ τ∪τ′ and τ′ ◁ τ∪τ′. Besides, if τ is of the form τ₀[x := t]τ₁, then τ∪τ′ is of the form τ₂[x := t]τ₃ with τ₀ ◁ τ₂ and τ₁ ◁ τ₃.

Proof

This follows easily from the previous definition.∎

As we explained in the introduction of this section, we will not consider closed terms in the usual sense. Indeed, while it is frequent in proofs of normalization (e.g. by realizability or reducibility) of a calculus to consider only closed terms and to perform substitutions to maintain the closure of terms, this only makes sense if it corresponds to the computational behavior of the calculus. For instance, to prove the normalization of a term t of type A in the typed call-by-name λ̄μμ̃-calculus, one would consider a substitution ρ that is suitable for t with respect to the typing context Γ, then a context e of type A^⊥⊥, and evaluate the command:

⟨tρ || e⟩

Then, when t is an abstraction λx.t′ and e is of the shape u · e′, we would observe that ⟨(λx.t′)ρ || u · e′⟩ reduces to ⟨t′ρ[x := u] || e′⟩ and deduce that ρ[x := u] is suitable for Γ, x : A, which would allow us to conclude by induction.

However, in the λ[lvτ⋆]-calculus we do not perform a global substitution when reducing a command, but rather add a new binding [x := t] to the store, as in the rule ⟨t || μ̃x.c⟩τ → cτ[x := t] above.

Therefore, the natural notion of closed term involves closure under a store, which might evolve during the rest of the execution (in contrast with a substitution).

Definition 3 (Term-in-store)

We call closed term-in-store (resp. closed context-in-store, closed closure) the combination of a term t (resp. context e, command c) with a closed store τ such that FV(t) ⊆ dom(τ). We use the notation (t|τ) (resp. (e|τ), (c|τ)) to denote such a pair.

We should note that, in particular, if t is a closed term, then (t|τ) is a term-in-store for any closed store τ. The notion of closed term-in-store is thus a generalization of the notion of closed term, and we will (ab)use this terminology in the sequel. We denote the set of closed closures by C₀, and will identify (c|τ) and the closure cτ when c is closed in τ. Observe that if cτ is a closure in C₀ and τ′ is a store extending τ, then cτ′ is also in C₀. We are now equipped to define the notion of pole, and to verify that the set of normalizing closures is indeed a valid pole.
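Schematically (our rendering, with the free-variable and domain functions supplied as parameters):

  (* A term-in-store (resp. context-, command-in-store) is a pair. *)
  type ('a, 'store) in_store = 'a * 'store    (* (t|τ), (e|τ), (c|τ) *)

  (* Closure of the pair: all free variables of the first component
     must be bound by the store. *)
  let closed_in_store
      (fv : 'a -> string list)                (* free variables *)
      (dom : 'store -> string list)           (* domain of the store *)
      ((t, tau) : ('a, 'store) in_store) : bool =
    List.for_all (fun x -> List.mem x (dom tau)) (fv t)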

Definition 4 (Pole)

A subset ⊥⊥ ⊆ C₀ is said to be saturated or closed by anti-reduction whenever, for all (c|τ), (c′|τ′) ∈ C₀, if c′τ′ ∈ ⊥⊥ and cτ → c′τ′, then cτ ∈ ⊥⊥. It is said to be closed by store extension if, whenever cτ ∈ ⊥⊥, then for any store τ′ extending τ (τ ◁ τ′), cτ′ ∈ ⊥⊥. A pole is defined as any subset of C₀ that is closed by anti-reduction and by store extension.
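Since our reduction is deterministic, the pole of normalizing closures can be described by the following OCaml functor (ours); membership is of course only semi-decidable, as the test loops on diverging closures.

  module Normalizing (M : sig
      type closure
      val step : closure -> closure option   (* one reduction step *)
    end) =
  struct
    (* [mem l] holds iff iterated reduction from [l] halts. *)
    let rec mem (l : M.closure) : bool =
      match M.step l with
      | None -> true
      | Some l' -> mem l'
  end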

The following proposition is the one supporting the claim that our realizability proof is almost a reducibility proof whose definitions have been generalized with respect to a pole instead of the fixed set SN.

Proposition 1

The set of normalizing closures is a pole.

Proof

As we only consider closures in C₀, both conditions (closure by anti-reduction and by store extension) are clearly satisfied:

  • if cτ → c′τ′ and c′τ′ normalizes, then cτ normalizes too;

  • if c is closed in τ and cτ normalizes, then for any τ′ such that τ ◁ τ′, cτ′ reduces as cτ does (since c is closed in τ, it can only use terms of τ′ that already were in τ) and thus normalizes.∎

Definition 5 (Orthogonality)

Given a pole ⊥⊥, we say that a term-in-store (t|τ) is orthogonal to a context-in-store (e|τ′), and write (t|τ) ⊥⊥ (e|τ′), if τ and τ′ are compatible and ⟨t || e⟩(τ∪τ′) ∈ ⊥⊥.
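Definition 5 can be rendered parametrically as follows (our sketch; the pole, the command former and the store operations are passed as arguments):

  let orthogonal
      (pole : 'cmd * 'store -> bool)           (* the pole ⊥⊥ *)
      (cmd : 'term -> 'ctx -> 'cmd)            (* t, e ↦ ⟨t||e⟩ *)
      (compatible : 'store -> 'store -> bool)
      (union : 'store -> 'store -> 'store)
      ((t, tau) : 'term * 'store)
      ((e, tau') : 'ctx * 'store) : bool =
    compatible tau tau' && pole (cmd t e, union tau tau')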

Remark 1

The reader familiar with Krivine's forcing machine [20] might recognize his definition of orthogonality between terms of the shape (t, p) and stacks of the shape (π, q), where p and q are forcing conditions (the meet of forcing conditions is indeed a refinement containing somewhat the "union" of the information contained in each, just like the union of two compatible stores):

(t, p) ⊥⊥ (π, q) whenever (t ⋆ π, p ∧ q) ∈ ⊥⊥

We can now relate closed terms and contexts by orthogonality with respect to a given pole. This allows us to define, for any formula A, the sets |A|_v, |A|_V, |A|_t (resp. ‖A‖_F, ‖A‖_E, ‖A‖_e) of realizers (or reducibility candidates) at levels v, V, t (resp. F, E, e) for the formula A. It is to be observed that realizers are here closed terms-in-store.

Definition 6 (Realizers)

Given a fixed pole ⊥⊥, we set:

|X|_v ≜ {(k|τ) : ⊢ k : X}
|A → B|_v ≜ {(λx.t|τ) : ∀u,τ′, (τ ⋄ τ′ and (u|τ′) ∈ |A|_t) ⇒ (t|τ∪τ′[x := u]) ∈ |B|_t}
‖A‖_F ≜ {(F|τ) : ∀(v|τ′) ∈ |A|_v, τ ⋄ τ′ ⇒ (v|τ′) ⊥⊥ (F|τ)}
|A|_V ≜ {(V|τ) : ∀(F|τ′) ∈ ‖A‖_F, τ ⋄ τ′ ⇒ (V|τ) ⊥⊥ (F|τ′)}
‖A‖_E ≜ {(E|τ) : ∀(V|τ′) ∈ |A|_V, τ ⋄ τ′ ⇒ (V|τ′) ⊥⊥ (E|τ)}
|A|_t ≜ {(t|τ) : ∀(E|τ′) ∈ ‖A‖_E, τ ⋄ τ′ ⇒ (t|τ) ⊥⊥ (E|τ′)}
‖A‖_e ≜ {(e|τ) : ∀(t|τ′) ∈ |A|_t, τ ⋄ τ′ ⇒ (t|τ′) ⊥⊥ (e|τ)}

In other words, each level is defined as the orthogonal of the previous one, starting from the truth value |A|_v of strong values, which is defined by induction on A.

Remark 2

We draw the reader's attention to the fact that we should actually annotate the sets |A|_v, ‖A‖_F, etc. with the pole ⊥⊥, because the corresponding definitions are parameterized by it. As is common in Krivine's classical realizability, we lighten the notations by removing the annotation ⊥⊥ whenever there is no ambiguity on the pole. Besides, it is worth noting that even if co-constants do not occur directly in the definitions, they may still appear in the realizers by means of the pole.

While the definition of the different sets might seem complex at first sight, we claim that it is quite natural in light of the methodology of Danvy's semantic artifacts presented in [2]. Indeed, having an abstract machine in context-free form (the last step in this methodology before deriving the CPS) allows us to have both the term and the context (in a command) behave independently of each other. Intuitively, a realizer at a given level is precisely a term which is going to behave well (be in the pole) in front of any opponent chosen in the previous level of the hierarchy. For instance, in a call-by-value setting, there are only three levels of definition (values, contexts and terms) in the interpretation, because the abstract machine in context-free form also has three. Here the ground level corresponds to strong values, and the other levels are each defined as the terms (or contexts) which behave well in front of any opponent in the previous one. The definition of the different sets |A|_v, ‖A‖_F, etc. directly stems from this intuition.

In comparison with the usual definition of Krivine's classical realizability, we have only considered orthogonal sets restricted to some syntactic subcategories. However, the definition still satisfies the usual monotonicity properties of bi-orthogonal sets:

Proposition 2

For any type A and any given pole ⊥⊥, we have:

  1. |A|_v ⊆ |A|_V ⊆ |A|_t;

  2. ‖A‖_F ⊆ ‖A‖_E ⊆ ‖A‖_e.

Proof

All the inclusions are proved in a similar way. We only give the proof of |A|_v ⊆ |A|_V. Let ⊥⊥ be a pole and (v|τ) be in |A|_v. We want to show that (v|τ) is in |A|_V, that is to say that v is in the syntactic category V (which is true) and that, for any (F|τ′) ∈ ‖A‖_F such that τ ⋄ τ′, (v|τ) ⊥⊥ (F|τ′). The latter holds by definition of ‖A‖_F, since (v|τ) ∈ |A|_v.∎

We now extend the notion of realizer to stores, by stating that a store realizes a typing context Γ if it binds all the variables x and co-variables α in Γ to a realizer of the corresponding formula.

Definition 7

Given a closed store τ and a fixed pole ⊥⊥, we say that τ realizes Γ, which we write τ ⊩ Γ (once again, we should formally write τ ⊩^⊥⊥ Γ, but we will omit the annotation by ⊥⊥ as often as possible), if:

  1. for any (x : A) ∈ Γ, τ is of the form τ₀[x := t]τ₁ and (t|τ₀) ∈ |A|_t;

  2. for any (α : A^⊥⊥) ∈ Γ, τ is of the form τ₀[α := E]τ₁ and (E|τ₀) ∈ ‖A‖_E.
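The first clause of this definition can be sketched as follows (ours, covering term bindings only; co-variable bindings are treated symmetrically):

  let realizes
      (realizer : 'formula -> 'term -> 'store -> bool)
        (* whether (t|τ₀) ∈ |A|_t, with τ₀ the prefix of the store *)
      (split : string -> 'store -> ('store * 'term * 'store) option)
        (* decompose τ as τ₀[x := t]τ₁ around the binding of x *)
      (gamma : (string * 'formula) list)       (* the context Γ *)
      (tau : 'store) : bool =
    List.for_all
      (fun (x, a) ->
         match split x tau with
         | Some (tau0, t, _) -> realizer a t tau0
         | None -> false)
      gamma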

In the same way that weakening rules for the typing context are admissible at each level σ of the type system (if Γ ⊢_σ o : A and Γ ⊆ Γ′, then Γ′ ⊢_σ o : A), the definition of realizers is compatible with a weakening of the store.

Lemma 2 (Store weakening)

Let τ and τ′ be two stores such that τ ◁ τ′, let Γ be a typing context and let ⊥⊥ be a pole. The following statements hold:

  1. τ∪τ′ = τ′.

  2. If (t|τ) ∈ |A|_t for some closed term t and type A, then (t|τ′) ∈ |A|_t. The same holds at each level of the typing rules.

  3. If τ ⊩ Γ, then τ′ ⊩ Γ.

Proof
  1. Straightforward from the definition of τ∪τ′.

  2. This essentially amounts to the following observations. First, one remarks that if t is closed in τ, then it is also closed in any store τ′ extending τ. Second, we observe that if we consider for instance a closed context-in-store (E|τ₀) ∈ ‖A‖_E compatible with τ′, then it is also compatible with τ, hence (t|τ) ⊥⊥ (E|τ₀), that is ⟨t || E⟩(τ∪τ₀) ∈ ⊥⊥. Since τ∪τ₀ ◁ τ′∪τ₀, we get ⟨t || E⟩(τ′∪τ₀) ∈ ⊥⊥ by closure of the pole under store extension, i.e. (t|τ′) ⊥⊥ (E|τ₀).

  3. By definition, for all (x : A) ∈ Γ, τ is of the form τ₀[x := t]τ₁ such that (t|τ₀) ∈ |A|_t. As τ and τ′ are compatible, we know by Lemma 1 that τ∪τ′ (which equals τ′ by the first point) is of the form τ₂[x := t]τ₃ with τ₂ an extension of τ₀, and using the second point we get that (t|τ₂) ∈ |A|_t.∎

Definition 8 (Adequacy)

Given a fixed pole ⊥⊥, we say that:

  • A typing judgment is adequate (w.r.t. the pole ⊥⊥) if, for all stores τ such that τ ⊩ Γ, its conclusion holds in the interpretation; for instance, Γ ⊢_t t : A is adequate if (t|τ) ∈ |A|_t for every store τ ⊩ Γ.

  • More generally, we say that an inference rule with premises J₁, …, Jₙ and conclusion J₀ is adequate (w.r.t. the pole ⊥⊥) if the adequacy of all the typing judgments J₁, …, Jₙ implies the adequacy of the typing judgment J₀.

Remark 3

From the latter definition, it is clear that a typing judgment that is derivable from a set of adequate inference rules is adequate too.

We will now show the main result of this section, namely that the typing rules of Figure 2 for the λ[lvτ⋆]-calculus without co-constants are adequate with any pole. Observe that this result requires considering the λ[lvτ⋆]-calculus without co-constants. Indeed, we consider co-constants as coming with their own typing rules, potentially giving them any type (whereas constants can only be given an atomic type). Thus, there is a priori no reason why their types should be adequate with any pole (think for instance of a co-constant of type A → B: there is no reason why it should be orthogonal to any function in |A → B|_v).

However, as observed in the previous remark, given a fixed pole it suffices to check whether the typing rules for a given co-constant are adequate with this pole. If they are, any judgment that is derivable using these rules will be adequate.

Theorem 2.1 (Adequacy)

If Γ is a typing context, ⊥⊥ is a pole and τ is a store such that τ ⊩ Γ, then the following holds in the λ[lvτ⋆]-calculus without co-constants:

  1. If v is a strong value such that Γ ⊢_v v : A, then (v|τ) ∈ |A|_v.

  2. If F is a forcing context such that Γ ⊢_F F : A^⊥⊥, then (F|τ) ∈ ‖A‖_F.

  3. If V is a weak value such that Γ ⊢_V V : A, then (V|τ) ∈ |A|_V.

  4. If E is a catchable context such that Γ ⊢_E E : A^⊥⊥, then (E|τ) ∈ ‖A‖_E.

  5. If t is a term such that Γ ⊢_t t : A, then (t|τ) ∈ |A|_t.

  6. If e is a context such that Γ ⊢_e e : A^⊥⊥, then (e|τ) ∈ ‖A‖_e.

  7. If c is a command such that Γ ⊢_c c, then cτ ∈ ⊥⊥.

  8. If τ′ is a store such that Γ ⊢_τ τ′ : Γ′, then ττ′ ⊩ Γ, Γ′.

Proof

The different statements are proved by mutual induction over typing derivations. We only give the most important cases here; the exhaustive induction is given in Appendix 0.B.

Rule (λ).

Assume that

and let ⊥⊥ be a pole and τ a store such that τ ⊩ Γ. Let (u|τ′) be a closed term in the set |A|_t such that τ ⋄ τ′; then we have:

By definition of the interpretation, this closure is in the pole, and we can conclude by anti-reduction.

Rule (x).

Assume that

and let ⊥⊥ be a pole and τ a store such that τ ⊩ Γ. As τ ⊩ Γ, we know that τ is of the form τ₀[x := t]τ₁ with (t|τ₀) ∈ |A|_t. Let (E|τ′) be in ‖A‖_E, with τ ⋄ τ′. By Lemma 1, we know that τ∪τ′ is of the form τ₂[x := t]τ₃. Hence we have:

and it suffices, by anti-reduction, to show that the last closure is in the pole ⊥⊥. By induction hypothesis, we know that (t|τ₂) ∈ |A|_t, thus we only need to show that it is put in front of a catchable context in ‖A‖_E. This corresponds exactly to the next case, which we shall prove now.

Rule (μ̃[x]).

Assume that

and let ⊥⊥ be a pole and τ a store such that τ ⊩ Γ. Let (t|τ′) be a closed term in |A|_t such that τ ⋄ τ′. We then have:

By induction hypothesis, we obtain that the premise is adequate. Up to α-conversion in F and τ′, so that the variables of τ′ are disjoint from those of τ, we have τ∪τ′ ⊩ Γ (by Lemma 2), and the induction hypothesis applies again: the frozen term is put in front of a catchable context in ‖A‖_E (this was precisely the assumption needed in the previous case). As τ ⋄ τ′, we finally get that the corresponding closure is in the pole, and conclude again by anti-reduction.∎

Corollary 1

If cτ is a closure such that ⊢_l cτ is derivable, then for any pole ⊥⊥ such that the typing rules for the co-constants used in the derivation are adequate with ⊥⊥, we have cτ ∈ ⊥⊥.

We can now put our focus back on the normalization of typed closures. As we already saw in Proposition 1, the set of normalizing closures is a valid pole, so it only remains to prove that any typing rule for co-constants is adequate with it.

Lemma 3

Any typing rule for co-constants is adequate with the pole of normalizing closures, i.e. if Γ is a typing context, τ is a store such that τ ⊩ Γ, and κ is a co-constant such that Γ ⊢_F κ : A^⊥⊥, then (κ|τ) ∈ ‖A‖_F.

Proof

This lemma directly stems from the observation that, for any store τ and any closed strong value (v|τ′) compatible with it, the closure ⟨v || κ⟩(τ∪τ′) does not reduce and thus belongs to the pole of normalizing closures.∎

As a consequence, we obtain the normalization of typed closures of the full calculus.

Theorem 2.2

If cτ is a closure of the λ[lvτ⋆]-calculus such that ⊢_l cτ is derivable, then cτ normalizes.

This is to be contrasted with Okasaki, Lee and Tarditi's semantics for the call-by-need λ-calculus [30], which is not normalizing in the simply-typed case, as shown by Ariola et al. [2].

2.3 Extension to second-order type systems

We focused in this article on simply-typed versions of the λlv- and λ[lvτ⋆]-calculi. But as is common in Krivine classical realizability, first- and second-order quantifications (in Curry style) come for free through the interpretation. This means that we can for instance extend the language of types to first- and second-order predicate logic:

e ::= x | f(e₁, …, eₖ)        A, B ::= X(e₁, …, eₖ) | A → B | ∀x.A | ∀X.A

We can then define the following introduction rules for universal quantifications:

if Γ ⊢_v v : A and x ∉ FV(Γ), then Γ ⊢_v v : ∀x.A        if Γ ⊢_v v : A and X ∉ FV(Γ), then Γ ⊢_v v : ∀X.A

Observe that these rules need to be restricted to the level of strong values, just as they are restricted to values in the case of call-by-value (for further explanation of the need for a value restriction in Krivine realizability, we refer the reader to [29] or [25]). As for the left rules, they can be defined at any level, say the most general one, e:

if Γ ⊢_e e : (A[n/x])^⊥⊥, then Γ ⊢_e e : (∀x.A)^⊥⊥        if Γ ⊢_e e : (A[B/X])^⊥⊥, then Γ ⊢_e e : (∀X.A)^⊥⊥

where n is any natural number and B any formula. The usual (call-by-value) interpretation of the quantification is defined as an intersection over all the possible instantiations of the variables within the model. We do not wish to enter into too many details on this topic here (once again, we advise the interested reader to refer to [29] or [25] for further details), but first-order variables are to be instantiated by integers, while second-order variables are to be instantiated by subsets of terms at the lower level, i.e. closed strong values-in-store (the set of which we write V₀):

|∀x.A|_v = ∩_{n∈ℕ} |A[n/x]|_v        |∀X.A|_v = ∩_{S : ℕᵏ → P(V₀)} |A[S/X]|_v

where the variable X is of arity k. It is then routine to check that the typing rules are adequate with respect to the realizability interpretation.
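The intersection pattern underlying this interpretation can be sketched as follows (ours; over a finite sample of instantiations, whereas the actual definition ranges over all of them):

  type 'a truth_value = 'a -> bool   (* a set of (strong-value) realizers *)

  let inter (tvs : 'a truth_value list) : 'a truth_value =
    fun v -> List.for_all (fun tv -> tv v) tvs

  (* |∀x.A|_v as the intersection of the |A[n/x]|_v, where [interp n]
     stands for the interpretation of A[n/x]. *)
  let forall_fo (interp : int -> 'a truth_value) (sample : int list)
    : 'a truth_value =
    inter (List.map interp sample)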

3 Conclusion and further work

In this paper, we presented a system of simple types for a call-by-need calculus with control, which we proved to be safe in the sense that it satisfies subject reduction (Theorem 1.1) and that typed terms are normalizing (Theorem 2.2). We proved normalization by means of a realizability-inspired interpretation of the λ[lvτ⋆]-calculus. Incidentally, this opens the door to the computational analysis (in the spirit of Krivine realizability) of classical proofs using control, laziness and shared memory.

In further work, we intend to present two extensions of the present paper. First, following the definition of the realizability interpretation, we managed to type the continuation-and-store-passing style translation for the λ[lvτ⋆]-calculus (see [2]). Interestingly, typing the translation emphasizes its computational content, and in particular the store-passing part is reflected in a Kripke forcing-like manner of typing the extensibility of the store [28, Chapter 6].

Second, on a different aspect, the realizability interpretation we introduced could be a first step towards new ways of realizing axioms. In particular, the first author used in his Ph.D. thesis [28, Chapter 8] the techniques presented in this paper to give a normalization proof for dPAω, a proof system developed by the second author [15]. Indeed, this proof system makes it possible to define a proof for the axiom of dependent choice thanks to the use of streams that are lazily evaluated, and was lacking a proper normalization proof.

Finally, to determine the range of our technique, it would be natural to investigate its relation to the many different presentations of call-by-need calculi (with or without control). Amongst other calculi, we could cite Chang and Felleisen's presentation of call-by-need [4], Garcia et al.'s lazy calculus with delimited control [10], or Kesner's recent paper on normalizing by-need terms characterized by an intersection type system [16]. To this end, we might rely on Pédrot and Saurin's classical by-need [33]. They indeed relate (classical) call-by-need with linear head reduction from a computational point of view, and draw connections with the presentations of Ariola et al. [2] and Chang-Felleisen [4]. Ariola et al.'s λlv-calculus being close to the λ[lvτ⋆]-calculus (see [2] for further details), our technique is likely to be adaptable to their framework, and thus to Pédrot and Saurin's system.

References

  • [1] Zena M. Ariola and Matthias Felleisen. The call-by-need lambda calculus. J. Funct. Program., 7(3):265–301, 1997.
  • [2] Zena M. Ariola, Paul Downen, Hugo Herbelin, Keiko Nakata, and Alexis Saurin. Classical call-by-need sequent calculi: The unity of semantic artifacts. In Tom Schrijvers and Peter Thiemann, editors, Proceedings of FLOPS'12, Kobe, Japan, May 23-25, 2012, LNCS, pages 32–46. Springer, 2012.
  • [3] Franco Barbanera and Stefano Berardi. A symmetric λ-calculus for classical program extraction. Information and Computation, 125(2):103–117, 1996.
  • [4] Stephen Chang and Matthias Felleisen. The call-by-need lambda calculus, revisited. In Programming Languages and Systems - 21st European Symposium on Programming, ESOP 2012, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2012, Tallinn, Estonia, March 24 - April 1, 2012. Proceedings, pages 128–147, 2012.
  • [5] Tristan Crolard. A confluent lambda-calculus with a catch/throw mechanism. J. Funct. Program., 9(6):625–647, 1999.
  • [6] Pierre-Louis Curien and Hugo Herbelin. The duality of computation. In Proceedings of ICFP 2000, SIGPLAN Notices 35(9), pages 233–243. ACM, 2000.
  • [7] Pierre-Évariste Dagand and Gabriel Scherer. Normalization by realizability also evaluates. In David Baelde and Jade Alglave, editors, Proceedings of JFLA’15, Le Val d’Ajol, France, January 2015.
  • [8] Matthias Felleisen, Daniel P. Friedman, Eugene E. Kohlbecker, and Bruce F. Duba. Reasoning with continuations. In Proceedings of LICS’86, Cambridge, Massachusetts, USA, June 16-18, 1986, pages 131–141. IEEE Computer Society, 1986.
  • [9] Jean Gallier. On Girard's "candidats de réductibilité". In Odifreddi, editor, Logic and Computer Science, pages 123–203. Academic Press, 1990.
  • [10] Ronald Garcia, Andrew Lumsdaine, and Amr Sabry. Lazy evaluation and delimited control. Logical Methods in Computer Science, Volume 6, Issue 3, July 2010.
  • [11] Jean-Yves Girard. Une extension de l'interprétation de Gödel à l'analyse, et son application à l'élimination des coupures dans l'analyse et la théorie des types. In J.E. Fenstad, editor, Proceedings of the Second Scandinavian Logic Symposium, volume 63 of Studies in Logic and the Foundations of Mathematics, pages 63–92. Elsevier, 1971.
  • [12] Mauricio Guillermo and Alexandre Miquel. Specifying peirce’s law in classical realizability. Mathematical Structures in Computer Science, 26(7):1269–1303, 2016.
  • [13] Mauricio Guillermo and Étienne Miquey. Classical realizability and arithmetical formulæ. Mathematical Structures in Computer Science, pages 1–40, 2016.
  • [14] Hugo Herbelin. C’est maintenant qu’on calcule: au cœur de la dualité. Habilitation thesis, University Paris 11, December 2005.
  • [15] Hugo Herbelin. A constructive proof of dependent choice, compatible with classical logic. In Proceedings of the 27th Annual IEEE Symposium on Logic in Computer Science, LICS 2012, Dubrovnik, Croatia, June 25-28, 2012, pages 365–374. IEEE Computer Society, 2012.
  • [16] Delia Kesner. Reasoning About Call-by-need by Means of Types, pages 424–441. Springer Berlin Heidelberg, Berlin, Heidelberg, 2016.
  • [17] Jean-Louis Krivine. Lambda-calculus, types and models. Ellis Horwood series in computers and their applications. Masson, 1993.
  • [18] Jean-Louis Krivine. Dependent choice, ‘quote’ and the clock. Th. Comp. Sc., 308:259–276, 2003.
  • [19] Jean-Louis Krivine. Realizability in classical logic. In Interactive models of computation and program behaviour. Panoramas et synthèses, 27, 2009.
  • [20] Jean-Louis Krivine. Realizability algebras: a program to well order ℝ. Logical Methods in Computer Science, 7(3), 2011.
  • [21] Jean-Louis Krivine. Realizability algebras II : new models of ZF + DC. Logical Methods in Computer Science, 8(1):10, February 2012. 28 p.
  • [22] Jean-Louis Krivine. On the structure of classical realizability models of ZF, 2014.
  • [23] Yves Lafont, Bernhard Reus, and Thomas Streicher. Continuations semantics or expressing implication by negation. Technical Report 9321, Ludwig-Maximilians-Universität, München, 1993.
  • [24] Frédéric Lang. Explaining the lazy Krivine machine using explicit substitution and addresses. Higher-Order and Symbolic Computation, 20(3):257–270, September 2007.
  • [25] Rodolphe Lepigre. A classical realizability model for a semantical value restriction. In Peter Thiemann, editor, Programming Languages and Systems - 25th European Symposium on Programming, ESOP 2016, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2016, Eindhoven, The Netherlands, April 2-8, 2016, Proceedings, volume 9632 of Lecture Notes in Computer Science, pages 476–502. Springer, 2016.
  • [26] John Maraist, Martin Odersky, and Philip Wadler. The call-by-need lambda calculus. J. Funct. Program., 8(3):275–317, 1998.
  • [27] Alexandre Miquel. Existential witness extraction in classical realizability and via a negative translation. Logical Methods in Computer Science, 7(2):188–202, 2011.
  • [28] Étienne Miquey. Classical realizability and side-effects. PhD thesis, Université Paris-Diderot, Universidad de la República (Uruguay), 2017.
  • [29] Guillaume Munch-Maccagnoni. Focalisation and Classical Realisability. In Erich Grädel and Reinhard Kahle, editors, Computer Science Logic ’09, volume 5771 of Lecture Notes in Computer Science, pages 409–423. Springer, Heidelberg, 2009.
  • [30] Chris Okasaki, Peter Lee, and David Tarditi. Call-by-need and continuation-passing style. Lisp and Symbolic Computation, 7(1):57–82, 1994.
  • [31] Michel Parigot. Free deduction: An analysis of ”computations” in classical logic. In Andrei Voronkov, editor, Proceedings of LPAR, volume 592 of LNCS, pages 361–380. Springer, 1991.
  • [32] Michel Parigot. Strong normalization of second order symmetric lambda-calculus. In Sanjiv Kapoor and Sanjiva Prasad, editors, Foundations of Software Technology and Theoretical Computer Science, 20th Conference, FST TCS 2000 New Delhi, India, December 13-15, 2000, Proceedings, volume 1974 of LNCS, pages 442–453. Springer, 2000.
  • [33] Pierre-Marie Pédrot and Alexis Saurin. Classical by-need. In Peter Thiemann, editor, Programming Languages and Systems: 25th European Symposium on Programming, ESOP 2016, Proceedings, pages 616–643. Springer Berlin Heidelberg, 2016.
  • [34] Gordon D. Plotkin. Call-by-name, call-by-value and the lambda-calculus. Theor. Comput. Sci., 1(2):125–159, 1975.
  • [35] Emmanuel Polonowski. Strong normalization of λμμ̃-calculus with explicit substitutions. In Igor Walukiewicz, editor, Foundations of Software Science and Computation Structures, 7th International Conference, FOSSACS 2004, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2004, Barcelona, Spain, March 29 - April 2, 2004, Proceedings, volume 2987 of Lecture Notes in Computer Science, pages 423–437. Springer, 2004.
  • [36] William W. Tait. Intensional interpretations of functionals of finite type I. Journal of Symbolic Logic, 32(2):198–212, 1967.

Appendix 0.A Subject reduction of the λ[lvτ⋆]-calculus

We present in this section the proof of subject reduction for the λ[lvτ⋆]-calculus (Section 1). The proof proceeds by induction over the typing derivation, and relies on the fact that the type system admits a weakening rule.

Lemma 4

The following weakening rule is admissible at any level σ of the hierarchy: if Γ ⊢_σ o : A and Γ ⊆ Γ′, then Γ′ ⊢_σ o : A.

Proof

Easy induction on typing derivations, using the typing rules given in Figure 2.∎

Theorem 1

If Γ ⊢_l cτ and cτ → c′τ′, then Γ ⊢_l c′τ′.

Proof

By induction over the reduction rules of the λ[lvτ⋆]-calculus (see Figure 1).

Case .

A typing derivation of the closure on the left-hand side is of the form:

hence we can derive:

Case .

A typing derivation of the closure on the left-hand side is of the form:

hence we can derive:

Case .

A typing derivation of the closure on the left-hand side is of the form:

where we cheated to compact the typing judgments for the store (corresponding to the types in the associated typing context) into a single one. Therefore, we can derive: