# Call-by-need, neededness and all that

We show that call-by-need is observationally equivalent to weak-head needed reduction. The proof of this result uses a semantical argument based on a (non-idempotent) intersection type system called V. Interestingly, system V also allows us to syntactically identify all the weak-head needed redexes of a term.

## 1 Introduction

One of the fundamental notions underlying this paper is that of needed reduction in the λ-calculus, which is used here to understand (lazy) evaluation of functional programs. Key notions are those of reducible and non-reducible programs: the former are programs (represented by λ-terms) containing non-evaluated subprograms, called reducible expressions (redexes), whereas the latter can be seen as definitive results of computations, called normal forms. It turns out that every reducible program contains a special kind of redex known as needed or, in other words, every λ-term not in normal form contains a needed redex. A redex r is said to be needed in a λ-term t if r has to be contracted (i.e. evaluated) sooner or later when reducing t to normal form, or, informally said, if there is no way of reaching a normal form while avoiding r.

The needed strategy, which always contracts a needed redex, is normalising [BarendregtKKS87], i.e. if a term can be reduced (in any way) to a normal form, then contraction of needed redexes necessarily terminates. This is an excellent starting point to design an evaluation strategy, but unfortunately, neededness of a redex is not decidable [BarendregtKKS87]. As a consequence, real implementations of functional languages cannot be directly based on this notion.

Our goal is, however, to establish a clear connection between the semantic notion of neededness and different implementations of lazy functional languages (e.g. Miranda or Haskell). Such implementations are based on call-by-need calculi, pioneered by Wadsworth [Wadsworth:thesis], and extensively studied e.g. in [AriolaFMOW95]. Indeed, call-by-need calculi fill the gap between the well-known operational semantics of the call-by-name λ-calculus and the actual implementations of lazy functional languages. While call-by-name re-evaluates an argument each time it is used –an operation which is quite expensive– call-by-need can be seen as a memoized version of call-by-name, where the value of an argument is stored the first time it is evaluated for subsequent uses. For example, given an application (λx.t)u where x occurs several times in t, call-by-name duplicates the argument u and evaluates each copy separately, while lazy languages first reduce u to a value so that further uses of this argument do not need to evaluate it again.
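The memoisation behaviour just described can be sketched operationally with explicit thunks. The following minimal Python illustration is ours, not the paper's; the names `Thunk`, `by_name`, `by_need` and the evaluation counter are purely illustrative:

```python
evals = {"name": 0, "need": 0}

def expensive(mode):
    # stands for an arbitrary costly argument; we count its evaluations
    evals[mode] += 1
    return 21

def by_name(arg):
    # call-by-name: the argument thunk is re-run at every use
    return arg() + arg()

class Thunk:
    """Memoising thunk: evaluate at most once, then cache (call-by-need)."""
    def __init__(self, f):
        self.f, self.done, self.val = f, False, None
    def __call__(self):
        if not self.done:
            self.val, self.done = self.f(), True
        return self.val

def by_need(arg):
    # same body as by_name: the sharing lives in the argument, not the code
    return arg() + arg()

print(by_name(lambda: expensive("name")), evals["name"])        # 42 2
print(by_need(Thunk(lambda: expensive("need"))), evals["need"]) # 42 1
```

Both strategies compute the same answer; only the number of evaluations of the argument differs.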

While the notion of needed reduction is defined with respect to (full, strong) normal forms, call-by-need calculi evaluate programs to special values called weak-head normal forms, which are either abstractions or arbitrary applications headed by a variable (i.e. terms of the form x t1 … tn where t1, …, tn are arbitrary terms). To overcome this mismatch, we first adapt the notion of needed redex to terms that are not going to be fully reduced to normal forms but only to weak-head normal forms. Thus, informally, a redex r is weak-head needed in a term t if r has to be contracted sooner or later when reducing t to a weak-head normal form. The derived strategy is called the weak-head needed strategy; it always contracts a weak-head needed redex.

This paper introduces two independent results about weak-head neededness, both obtained by means of (non-idempotent) intersection types [Gardner94, Carvalho:thesis] (a survey can be found in [BucciarelliKV17]). We consider, in particular, the typing system V [Kesner16] and show that it allows us to identify all the weak-head needed redexes of a weak-head normalising term. This is done by adapting the classical notion of principal type [Rocca88] and proving that a redex in a weak-head normalising term t is weak-head needed iff it is typed in a principally typed derivation for t in V.

Our second goal is to show observational equivalence between call-by-need and weak-head needed reduction. Two terms are observationally equivalent when all the empirically testable computations on them are identical. This means that a term t can be evaluated to a weak-head normal form using the call-by-need machinery if and only if the weak-head needed reduction normalises t.

By means of the system V mentioned so far, we use a technique to reason about observational equivalence that is flexible, general and easy to verify or even certify. Indeed, system V provides a semantic argument: we first show that a term t is typable in system V iff t is normalising for the weak-head needed strategy; then, resorting to some results in [Kesner16], we show that system V is complete for call-by-name, i.e. a term t is typable in system V iff t is normalising for call-by-name, and that t is normalising for call-by-name iff t is normalising for call-by-need. This completes the following chain of equivalences:

 t typable in V ⟺ t weak-head needed normalising ⟺ t call-by-name normalising ⟺ t call-by-need normalising

This leads to the observational equivalence between call-by-need, call-by-name and weak-head needed reduction.

Structure of the paper: Sec. 2 introduces preliminary concepts while Sec. 3 defines different notions of needed reduction. The type system V is studied in Sec. 4. Sec. 5 extends β-reduction to derivation trees. We show in Sec. 6 how system V identifies weak-head needed redexes, while Sec. 7 gives a characterisation of normalisation for the weak-head needed reduction. Sec. 8 is devoted to defining call-by-need. Finally, Sec. 9 presents the observational equivalence result.

## 2 Preliminaries

This section introduces some standard definitions and notions concerning the reduction strategies studied in this paper, that is, call-by-name, head and weak-head reduction, and neededness, this latter notion being based on the theory of residuals [Barendregt84].

### 2.1 The Call-By-Name Lambda-Calculus

Given a countably infinite set X of variables x, y, z, … we consider the following grammar:

 (Terms)          t, u ::= x ∈ X | t u | λx.t
 (Values)         v    ::= λx.t
 (Contexts)       C    ::= □ | C t | t C | λx.C
 (Name contexts)  E    ::= □ | E t

The set of λ-terms is denoted by T. We use I, K and Ω to denote the terms λx.x, λx.λy.x and (λx.x x)(λx.x x) respectively. We use C⟨t⟩ (resp. E⟨t⟩) for the term obtained by replacing the hole of C (resp. E) by t. The sets of free and bound variables of a term t, written respectively fv(t) and bv(t), are defined as usual [Barendregt84]. We work with the standard notion of α-conversion, i.e. renaming of bound variables for abstractions; thus for example λx.x = λy.y.

A term of the form (λx.t)u is called a β-redex (or just redex when β is clear from the context) and the abstraction λx.t is called the anchor of the redex. The one-step reduction relation →β (resp. →name) is given by the closure by contexts C (resp. E) of the rewriting rule (λx.t)u ↦ t{x/u}, where t{x/u} denotes the capture-free standard higher-order substitution. Thus, call-by-name forbids reduction inside arguments and under λ-abstractions. We write ↠β (resp. ↠name) for the reflexive-transitive closure of →β (resp. →name).
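The rule and the call-by-name restriction can be made concrete. The following sketch is ours, not the paper's; it encodes terms as nested tuples ('var', x), ('app', t, u), ('lam', x, body), and implements capture-free substitution together with reduction to weak-head normal form:

```python
import itertools

fresh = (f"v{i}" for i in itertools.count())   # supply of fresh names

def free_vars(t):
    if t[0] == "var": return {t[1]}
    if t[0] == "app": return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}            # ("lam", x, body)

def subst(t, x, u):
    """Capture-free substitution t{x/u}, α-renaming bound names when needed."""
    if t[0] == "var":
        return u if t[1] == x else t
    if t[0] == "app":
        return ("app", subst(t[1], x, u), subst(t[2], x, u))
    y, body = t[1], t[2]
    if y == x:
        return t                               # x is shadowed: stop here
    if y in free_vars(u):                      # α-rename to avoid capture
        z = next(fresh)
        body, y = subst(body, y, ("var", z)), z
    return ("lam", y, subst(body, x, u))

def whnf(t):
    """Contract weak-head redexes (call-by-name) until weak-head normal."""
    while t[0] == "app":
        f = whnf(t[1])
        if f[0] == "lam":
            t = subst(f[2], f[1], t[2])        # contract (λy.s)u ↦ s{y/u}
        else:
            return ("app", f, t[2])
    return t

I = ("lam", "x", ("var", "x"))
K = ("lam", "x", ("lam", "y", ("var", "x")))
D = ("lam", "x", ("app", ("var", "x"), ("var", "x")))
OMEGA = ("app", D, D)

# K I Ω reaches weak-head normal form without ever evaluating Ω:
print(whnf(("app", ("app", K, I), OMEGA)))     # ('lam', 'x', ('var', 'x'))
```

Note that the loop never enters arguments or abstraction bodies, which is exactly the restriction imposed by the name contexts E.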

In order to introduce different notions of reduction, we start by formalising the general mechanism of reduction, which consists in contracting a redex at some specific occurrence. Occurrences are finite words over the alphabet {0, 1}. We use ε to denote the empty word and the notation i⋅p for the concatenation of a letter i of the alphabet with a word p. The set occ(t) of occurrences of a given term t is defined by induction as follows: occ(x) = {ε}; occ(λx.t) = {ε} ∪ {0⋅p | p ∈ occ(t)}; occ(t u) = {ε} ∪ {0⋅p | p ∈ occ(t)} ∪ {1⋅p | p ∈ occ(u)}.

Given two occurrences p and q, we use the notation p ≤ q to mean that p is a prefix of q, i.e. there is q′ such that q = p⋅q′. We denote by t|p the subterm of t at occurrence p, defined as expected [BaaderN98]. The set of redex occurrences of t is defined by ro(t) = {p ∈ occ(t) | t|p is a redex}. We use the notation t →p t′ to mean that p ∈ ro(t) and t reduces to t′ by contracting the redex at occurrence p. This notion is extended to reduction sequences as expected, and noted t ↠ρ t′, where ρ is the list of all the redex occurrences contracted along the reduction sequence. We also write ε for the empty reduction sequence, so that t ↠ε t holds for every term t.
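The definitions of occ(t), t|p and ro(t) can be sketched directly on terms encoded as nested tuples ('var', x), ('app', t, u), ('lam', x, body); the encoding and function names are ours, purely for illustration:

```python
def occurrences(t):
    """occ(t): words over {0,1}; 0 goes left (or under a λ), 1 into arguments."""
    occs = {""}
    if t[0] == "app":
        occs |= {"0" + p for p in occurrences(t[1])}
        occs |= {"1" + p for p in occurrences(t[2])}
    elif t[0] == "lam":
        occs |= {"0" + p for p in occurrences(t[2])}
    return occs

def subterm_at(t, p):
    """t|p: the subterm of t at occurrence p."""
    for c in p:
        t = t[1] if (t[0] == "app" and c == "0") else t[2]
    return t

def redex_occurrences(t):
    """ro(t): occurrences where an abstraction is applied to an argument."""
    return {p for p in occurrences(t)
            if (s := subterm_at(t, p))[0] == "app" and s[1][0] == "lam"}

I = ("lam", "x", ("var", "x"))
t = ("app", I, ("app", I, ("var", "z")))       # I (I z)
print(sorted(redex_occurrences(t)))            # ['', '1']
```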

Any term t has exactly one of the following forms: λx1.…λxk.x t1 … tn or λx1.…λxk.(λy.s)u t1 … tn with k, n ≥ 0. In the latter case we say that (λy.s)u is the head redex of t, while in the former case there is no head redex. Moreover, if k = 0, we say that (λy.s)u is the weak-head redex of t. In terms of occurrences, the head redex of t is the minimal redex occurrence of the form 0^n with n ≥ 0. In particular, if t|_{0^i} is not an abstraction for every i < n, it is the weak-head redex of t. A reduction sequence contracting at each step the head redex (resp. weak-head redex) of the corresponding term is called the head reduction (resp. weak-head reduction).

Given two redex occurrences p, q ∈ ro(t), we say that p is to-the-left of q if the anchor of p occurs to the left of the anchor of q. Alternatively, the to-the-left relation can be understood as a dictionary order between redex occurrences, i.e. p is to-the-left of q if either q = p⋅q′ with q′ ≠ ε (i.e. p is a proper prefix of q); or p = w⋅0⋅p′ and q = w⋅1⋅q′ (i.e. they share a common prefix and p continues on the left-hand side of an application while q continues on the right-hand side). Notice that in any case this implies q ≰ p. Since this notion defines a total order on redexes, every term not in normal form has a unique leftmost redex. The term t leftmost reduces to t′ if t reduces to t′ and the reduction step contracts the leftmost redex of t. This notion extends to reduction sequences as expected.
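With occurrences encoded as strings over {0,1}, the dictionary order just described can be sketched as follows (illustrative code, ours; note that on such words it coincides with Python's plain string comparison):

```python
def to_the_left(p, q):
    """p is to-the-left of q: proper prefix, or first divergence is 0 vs 1."""
    if p == q or p.startswith(q):
        return False          # equal, or q lies strictly above p
    if q.startswith(p):
        return True           # p is a proper prefix of q
    i = next(i for i, (a, b) in enumerate(zip(p, q)) if a != b)
    return p[i] == "0" and q[i] == "1"

# On words over {0,1} this is exactly lexicographic string order,
# so the leftmost redex of a term is min(ro(t)) under plain string <.
pairs = [("", "0"), ("0", "01"), ("01", "1"), ("00", "01"), ("1", "0"), ("01", "0")]
for p, q in pairs:
    assert to_the_left(p, q) == (p < q)
print(to_the_left("0", "01"), to_the_left("1", "0"))   # True False
```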

## 3 Towards neededness

Needed reduction is based on two fundamental notions: that of residual, which describes how a given redex is traced all along a reduction sequence, and that of normal form, which gives the form of the expected result of the reduction sequence. This section extends the standard notion of needed reduction [BarendregtKKS87] to those of head and weak-head needed reductions.

### 3.1 Residuals

Given a term t, and p, r ∈ occ(t), the set of descendants of p after r in t, written p/r, is the set of occurrences defined as follows:

 p/r = ∅                     if p = r or p = r⋅0
 p/r = {p}                   if r ≰ p
 p/r = {r⋅q}                 if p = r⋅0⋅0⋅q
 p/r = {r⋅k⋅q | s|k = x}     if p = r⋅1⋅q, with t|r = (λx.s)u

For instance, given t = (λx.x x) y and r = ε, we have ε/r = ∅, 0/r = ∅, 000/r = {0}, 001/r = {1}, and 1/r = {0, 1}.

Notice that p/r ⊆ occ(t′) where t →r t′. Furthermore, if p is the occurrence of a redex in t (i.e. p ∈ ro(t)), then p/r ⊆ ro(t′), and each position in p/r is called a residual of p after reducing r. This notion is extended to sets of redex occurrences: the residuals of P ⊆ ro(t) after r in t are P/r = ⋃_{p∈P} p/r. In particular ∅/r = ∅. Given ρ = r1; …; rn and p ∈ occ(t), the residuals of p after the sequence ρ are: p/ε = {p} and p/(r1; ρ′) = (p/r1)/ρ′.
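The case analysis defining p/r translates directly into code. The following sketch is ours; for simplicity it counts every occurrence of the bound variable in the body, ignoring shadowing by inner abstractions of the same name:

```python
def occurrences(t):
    occs = {""}
    if t[0] == "app":
        occs |= {"0" + p for p in occurrences(t[1])}
        occs |= {"1" + p for p in occurrences(t[2])}
    elif t[0] == "lam":
        occs |= {"0" + p for p in occurrences(t[2])}
    return occs

def subterm_at(t, p):
    for c in p:
        t = t[1] if (t[0] == "app" and c == "0") else t[2]
    return t

def descendants(t, p, r):
    """p/r: descendants of occurrence p after contracting the redex at r."""
    _, (_, x, s), u = subterm_at(t, r)          # t|r = (λx.s) u
    if p == r or p == r + "0":
        return set()                            # the contracted redex vanishes
    if not p.startswith(r):
        return {p}                              # disjoint from r: untouched
    if p.startswith(r + "00"):
        return {r + p[len(r) + 2:]}             # p was inside the body s
    q = p[len(r) + 1:]                          # p was inside the argument u:
    xs = {k for k in occurrences(s) if subterm_at(s, k) == ("var", x)}
    return {r + k + q for k in xs}              # one copy per occurrence of x

t = ("app", ("lam", "x", ("app", ("var", "x"), ("var", "x"))), ("var", "y"))
print(sorted(descendants(t, "1", "")))          # ['0', '1'] — argument duplicated
print(descendants(t, "000", ""))                # {'0'}
```

The first call shows the characteristic duplication: in (λx.x x) y the argument occurrence 1 has two residuals, one per occurrence of x in the body.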

Stability of the to-the-left relation makes use of the notion of residual:

###### Lemma 1.

Given a term t, let p, q, r ∈ ro(t) such that p is to-the-left of both q and r, and t →r t′. Then, p/r = {p} and p is to-the-left of q′ for every q′ ∈ q/r.

###### Proof.

First, notice that p to-the-left of r implies r ≰ p, whence it is immediate to see that p/r = {p}. If q = r, then q/r = ∅ and the result holds immediately. Otherwise, the definition of descendants implies that every q′ ∈ q/r is either q itself or satisfies r ≤ q′. If q′ = q, then p is to-the-left of q′ by hypothesis. If r ≤ q′, we distinguish two cases from p to-the-left of r: either r = p⋅w with w ≠ ε, so that p is a proper prefix of q′ too; or p = w⋅0⋅p′ and r = w⋅1⋅r′, so that p and q′ diverge at the same point, with p on the left-hand side of the application. Hence, p is to-the-left of q′ for every q′ ∈ q/r. ∎

Notice that this result not only implies that the leftmost redex is preserved by reduction of other redexes, but also that the residual of the leftmost redex occurs at exactly the same occurrence as the original one.

###### Corollary 1.

Given a term t, p the leftmost redex of t, and t ↠ρ t′, if the reduction sequence ρ contracts neither p nor any of its residuals, then p is the leftmost redex of t′.

###### Proof.

By induction on the length of ρ using Lem. 1. ∎

### 3.2 Notions of Normal Form

The expected result of evaluating a program is specified by means of some appropriate notion of normal form. Given any reduction relation →R, a term t is said to be in R-normal form (written t ∈ NF(R)) iff there is no t′ such that t →R t′. A term t is R-normalising iff there exists t′ ∈ NF(R) such that t ↠R t′. Thus, given an R-normalising term t, we can define the set of R-normal forms of t as nf_R(t) = {t′ ∈ NF(R) | t ↠R t′}.

In particular, it turns out that a term in weak-head β-normal form is of the form λx.t or x t1 … tn (n ≥ 0), where t, t1, …, tn are arbitrary terms, i.e. it has no weak-head redex. The set of weak-head β-normal forms of t is written wnf(t).

Similarly, a term in head β-normal form turns out to be of the form λx1.…λxk.x t1 … tn (k, n ≥ 0), i.e. it has no head redex. The set of head β-normal forms of t is given by hnf(t).

Last, any term in β-normal form has the form λx1.…λxk.x t1 … tn (k, n ≥ 0) where t1, …, tn are themselves in β-normal form. It is well-known that the set nf(t) is a singleton, so we may use it either as a set or as its unique element.

It is worth noticing that every β-normal form is a head β-normal form, and every head β-normal form is a weak-head β-normal form. Indeed, the inclusions are strict: for instance λx.(λy.y)z is in weak-head but not in head β-normal form, while x((λy.y)z) is in head but not in β-normal form.
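These three notions of normal form can be sketched as predicates on terms encoded as nested tuples ('var', x), ('app', t, u), ('lam', x, body) (an illustrative encoding, ours):

```python
def is_whnf(t):
    """Weak-head normal: an abstraction, or a variable applied to arguments."""
    if t[0] == "lam":
        return True
    while t[0] == "app":
        t = t[1]
    return t[0] == "var"

def is_hnf(t):
    """Head normal λx1…xk.x t1…tn: strip λs, then demand a variable head."""
    while t[0] == "lam":
        t = t[2]
    while t[0] == "app":
        t = t[1]
    return t[0] == "var"

def is_nf(t):
    """Full β-normal form: no redex anywhere in the term."""
    if t[0] == "var":
        return True
    if t[0] == "lam":
        return is_nf(t[2])
    return t[1][0] != "lam" and is_nf(t[1]) and is_nf(t[2])

I = ("lam", "y", ("var", "y"))
t1 = ("lam", "x", ("app", I, ("var", "z")))              # λx.(λy.y)z
t2 = ("app", ("var", "x"), ("app", I, ("var", "z")))     # x ((λy.y)z)
print(is_whnf(t1), is_hnf(t1))   # True False
print(is_hnf(t2), is_nf(t2))     # True False
```

The two example terms witness exactly the strict inclusions above.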

### 3.3 Notions of Needed Reduction

The different notions of normal form considered in Sec. 3.2 suggest different notions of needed reduction, besides the standard one in the literature [BarendregtKKS87]. Indeed, consider a term t and r ∈ ro(t). We say that r is used in a reduction sequence ρ from t iff ρ contracts r or some residual of r. Then:

1. r is needed in t if every reduction sequence from t to β-normal form uses r;

2. r is head needed in t if every reduction sequence from t to head β-normal form uses r;

3. r is weak-head needed in t if every reduction sequence from t to weak-head β-normal form uses r.

Notice in particular that, if t is not β-normalising (resp. not head or weak-head β-normalising), then every redex in t is vacuously needed (resp. head needed or weak-head needed).

A one-step reduction is needed (resp. head needed or weak-head needed), noted →nd (resp. →hnd or →whnd), if the contracted redex is needed (resp. head needed or weak-head needed). A reduction sequence is needed (resp. head needed or weak-head needed), noted ↠nd (resp. ↠hnd or ↠whnd), if every reduction step in the sequence is needed (resp. head needed or weak-head needed).

For instance, consider the reduction sequence:

 (λy.λx.I x (I I)_{r1}) (I I)  →nd  (λy.λx.(I x)_{r2} I) (I I)  →nd  ((λy.λx.x I) (I I))_{r3}  →nd  λx.x I

which is needed but not head needed, since redex r1 need not be contracted to reach a head normal form:

 (λy.λx.(I x)_{r2} (I I)) (I I)  →hnd  ((λy.λx.x (I I)) (I I))_{r3}  →hnd  λx.x (I I)

Moreover, this second reduction sequence is head needed but not weak-head needed, since only redex r3 is needed to get a weak-head normal form:

 ((λy.λx.I x (I I)) (I I))_{r3}  →whnd  λx.I x (I I)

Notice that the following equalities hold: the →nd-normal forms are exactly the β-normal forms, the →hnd-normal forms are the head β-normal forms, and the →whnd-normal forms are the weak-head β-normal forms.

Leftmost redexes and reduction sequences are indeed needed:

###### Lemma 2.

The leftmost redex in any term not in normal form (resp. head or weak-head normal form) is needed (resp. head or weak-head needed).

###### Proof.

Since the existing proof [BarendregtKKS87] does not extend to weak-head normal forms, we give here an alternative argument which not only covers the standard case of needed reduction but also the new ones of head and weak-head reduction.

###### Theorem 3.1.

Let t be a term and ρ the leftmost reduction (resp. head reduction or weak-head reduction) starting at t such that t ↠ρ t′ with t′ in β-normal form (resp. head or weak-head β-normal form). Then, a redex r is needed (resp. head or weak-head needed) in t iff r is used in ρ.

###### Proof.

Let ρ = p1; …; pn with t = t0 →p1 t1 ⋯ →pn tn. By hypothesis each pi is the leftmost redex of its corresponding term ti−1. By Lem. 2, each pi is needed (resp. head or weak-head needed). Notice that, given a redex r not needed in t, it follows from the definition that no residual of r is needed either. Therefore, pi needed (resp. head or weak-head needed) implies r needed (resp. head or weak-head needed) as well, whenever pi is a residual of r. ∎

Notice that the weak-head reduction is a prefix of the head reduction, which is in turn a prefix of the leftmost reduction to normal form. As a consequence, it is immediate to see that every weak-head needed redex is in particular head needed, and every head needed redex is needed as well. For example, consider:

 (λy.λx.(I x)_{r2} ((I I)_{r3})) ((I I)_{r4})      where the whole term is redex r1

where r3 is a needed redex but not head needed nor weak-head needed. However, r2 is both needed and head needed, while r1 is the only weak-head needed redex in the term, and r4 is not needed at all.

## 4 The Type System V

In this section we recall the (non-idempotent) intersection type system V [Kesner16] –an extension of those in [Gardner94, Carvalho:thesis]– used here to characterise normalising terms w.r.t. the weak-head strategy. More precisely, we show that t is typable in system V if and only if t is normalising when only weak-head needed redexes are contracted. This characterisation is used in Sec. 9 to conclude that the weak-head needed strategy is observationally equivalent to the call-by-need calculus (to be introduced in Sec. 8).

Given a constant type a that denotes answers and a countably infinite set B of base type variables α, β, …, we define the following sets of types:

 (Types)           τ, σ ::= a | α ∈ B | M → τ
 (Multiset types)  M, N ::= {{τi}}_{i∈I}    where I is a finite set

The empty multiset is denoted by {{}}. We remark that types are strict [Bakel92], i.e. the right-hand sides of functional types are never multisets. Thus, the general form of a type is M1 → … → Mn → τ, with τ being the constant type a or a base type variable in B.

Typing contexts (or just contexts), written Γ, Δ, are functions from variables to multiset types, assigning the empty multiset to all but a finite set of variables. The domain of Γ is given by dom(Γ) = {x | Γ(x) ≠ {{}}}. The union of contexts, written Γ + Δ, is defined by (Γ + Δ)(x) = Γ(x) ⊎ Δ(x), where ⊎ denotes multiset union. An example is (x : {{σ}}; y : {{τ}}) + (x : {{σ}}) = (x : {{σ, σ}}; y : {{τ}}). This notion is extended to several contexts as expected, so that +_{i∈I} Γi denotes a finite union of contexts (when I = ∅ the notation is to be understood as the empty context). We write Γ; x : M for the context defined by (Γ; x : M)(x) = M and (Γ; x : M)(y) = Γ(y) if y ≠ x.
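The non-idempotent (multiset) nature of contexts can be illustrated with Python's `Counter`, whose `+` implements exactly the additive multiset union described above; the encoding of types as opaque strings is ours, purely for illustration:

```python
from collections import Counter

def ctx_union(*ctxs):
    """Pointwise multiset union of typing contexts (a Counter is a multiset)."""
    out = {}
    for ctx in ctxs:
        for x, m in ctx.items():
            out[x] = out.get(x, Counter()) + m
    return out

g1 = {"x": Counter({"a": 1}), "y": Counter({"M->a": 1})}
g2 = {"x": Counter({"a": 2})}
print(ctx_union(g1, g2)["x"])    # Counter({'a': 3}) — multiplicities add, not max
```

With idempotent intersection the union would instead take the set-theoretic maximum; adding multiplicities is what makes the quantitative (weighted) arguments of the next paragraphs possible.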

Type judgements have the form Γ ⊢ t : τ, where Γ is a typing context, t is a term and τ is a type. The intersection type system V for the λ-calculus is given in Fig. 1.

The constant type a in rule (val) is used to type values. The axiom (ax) is relevant (there is no weakening) and the rule (→e) is multiplicative. Note that the argument u of an application t u is typed |I| times by the premises of rule (→e). A particular case is when I = ∅: the subterm u occurring in the typed term t u turns out to be untyped.

A (type) derivation is a tree obtained by applying the (inductive) typing rules of system V. The notation Φ ▷ Γ ⊢ t : τ means there is a derivation Φ of the judgement Γ ⊢ t : τ in system V. The term t is typable in system V, or V-typable, iff t is the subject of some derivation, i.e. iff there are Γ and τ such that Γ ⊢ t : τ. We use the capital Greek letters Φ, Ψ, … to name type derivations, by writing for example Φ ▷ Γ ⊢ t : τ. For short, we usually denote with Φ_t a derivation with subject t for some type and context. The size of the derivation Φ, denoted by |Φ|, is defined as the number of nodes of the corresponding derivation tree. We write RULE(Φ) to access the last rule applied in the derivation Φ. Likewise, PREM(Φ) is the multiset of proper maximal subderivations of Φ. For instance, given

         Φ_t    (Φ_u^i)_{i∈I}
  Φ  =  ────────────────────── (→e)
            Γ ⊢ t u : τ

we have RULE(Φ) = (→e) and PREM(Φ) = {{Φ_t}} ⊎ {{Φ_u^i}}_{i∈I}. We also use functions CTXT, SUBJ and TYPE to access the context, subject and type of the judgement in the root of the derivation tree respectively. For short, we also use the notation Φ(x) to denote the type associated to the variable x in the typing environment of the conclusion of Φ (i.e. Φ(x) = CTXT(Φ)(x)).

Intersection type systems can usually be seen as models [CoppoD80], i.e. typing is stable by convertibility: if t is typable and t is convertible to t′, then t′ is typable too. This property splits into two different statements known as subject reduction and subject expansion respectively, the first one giving stability of typing by reduction, the second one by expansion. In the particular case of non-idempotent types, subject reduction refines to weighted subject reduction, stating not only that typability is stable by reduction, but also that the size of type derivations is non-increasing. Moreover, the decrease is strict when reduction is performed on special occurrences of redexes, called typed occurrences. We now introduce all these concepts.

Given a type derivation Φ with subject t, the set TOCC(Φ) of typed occurrences of Φ, which is a subset of occ(t), is defined by induction on the last rule of Φ.

• If RULE(Φ) ∈ {(ax), (val)}, then TOCC(Φ) = {ε}.

• If RULE(Φ) = (→i) with subject λx.t and premise Φ_t, then TOCC(Φ) = {ε} ∪ {0⋅p | p ∈ TOCC(Φ_t)}.

• If RULE(Φ) = (→e) with subject t u and premises Φ_t and (Φ_u^i)_{i∈I}, then TOCC(Φ) = {ε} ∪ {0⋅p | p ∈ TOCC(Φ_t)} ∪ {1⋅p | p ∈ ⋃_{i∈I} TOCC(Φ_u^i)}.

Remark that there are two kinds of untyped occurrences: those inside untyped arguments of applications, and those inside untyped bodies of abstractions. For instance, consider the following type derivations:

             x : {{a}} ⊢ x : a            (ax)
           x : {{a}} ⊢ λy.x : {{}} → a    (→i)
  Φ_K   ▷  ⊢ K : {{a}} → {{}} → a         (→i)

           Φ_K        ⊢ I : a   (val)
  Φ_KI  ▷  ⊢ K I : {{}} → a               (→e)

           Φ_KI
  Φ_KIΩ ▷  ⊢ K I Ω : a                    (→e)

Then, TOCC(Φ_KIΩ) = {ε, 0, 00, 000, 0000, 01}; in particular, the subterm Ω at occurrence 1 is untyped.

###### Remark 1.

The weak-head redex of a typed term is always a typed occurrence.

For convenience we introduce an alternative way to denote type derivations. We refer to AX(x, τ) as the result of applying rule (ax) with subject x and type τ:

  AX(x, τ)  ≝  x : {{τ}} ⊢ x : τ   (ax)

We denote with VAL(x, t) the result of applying rule (val) abstracting x in term t:

  VAL(x, t)  ≝  ⊢ λx.t : a   (val)

We refer to ABS(x, Φ_t) as the result of applying rule (→i) with premise Φ_t and abstracting variable x (where Γ ∖∖ x denotes the context Γ where the assignment of x is removed):

                            Φ_t
  ABS(x, Φ_t)  ≝  ──────────────────────────────────────────── (→i)
                  CTXT(Φ_t) ∖∖ x ⊢ λx.t : Φ_t(x) → TYPE(Φ_t)

Likewise, we write APP(Φ_t, u, (Φ_u^i)_{i∈I}) for the result of applying rule (→e) with premises Φ_t and (Φ_u^i)_{i∈I}, and argument u (the argument is untyped when I = ∅). Note that this application is valid provided that TYPE(Φ_t) = {{TYPE(Φ_u^i)}}_{i∈I} → τ and SUBJ(Φ_u^i) = u for every i ∈ I. Then:

                                      Φ_t    (Φ_u^i)_{i∈I}
  APP(Φ_t, u, (Φ_u^i)_{i∈I})  ≝  ──────────────────────────────────────── (→e)
                                 CTXT(Φ_t) +_{i∈I} CTXT(Φ_u^i) ⊢ t u : τ

Given Φ ▷ Γ ⊢ t : τ and p ∈ TOCC(Φ), the multiset Φ|_p of all the subderivations of Φ at occurrence p is inductively defined as follows:

• If p = ε, then Φ|_p = {{Φ}}.

• If p = 0⋅p′, i.e. t = λx.s with RULE(Φ) = (→i) and premise Φ_s, or t = s u with RULE(Φ) = (→e) and left premise Φ_s. Then, Φ|_p = Φ_s|_{p′}.

• If p = 1⋅p′, i.e. t = s u with RULE(Φ) = (→e) and premises Φ_s and (Φ_u^i)_{i∈I}. Then, Φ|_p = ⊎_{i∈I} Φ_u^i|_{p′} (recall ⊎ denotes multiset union).

Given type derivations Φ and (Ψ_i)_{i∈I}, and a position p ∈ TOCC(Φ) such that Φ|_p = {{Φ_i}}_{i∈I}, replacing the subderivations of Φ at occurrence p by (Ψ_i)_{i∈I}, written Φ[(Ψ_i)_{i∈I}]_p, is a type derivation inductively defined as follows, assuming TYPE(Ψ_i) = TYPE(Φ_i) and SUBJ(Ψ_i) = s for every i ∈ I (we call s the unique subject of all these derivations):

• If p = ε, then Φ[(Ψ_i)_{i∈I}]_p = Ψ with {{Ψ}} = {{Ψ_i}}_{i∈I} a singleton.

• If p = 0⋅p′, either:

• RULE(Φ) = (→i) with subject λx.t and premise Φ_t. Then, Φ[(Ψ_i)_{i∈I}]_p = ABS(x, Φ_t[(Ψ_i)_{i∈I}]_{p′}); or

• RULE(Φ) = (→e) with subject t u and premises Φ_t and (Φ_u^j)_{j∈J}. Then, Φ[(Ψ_i)_{i∈I}]_p = APP(Φ_t[(Ψ_i)_{i∈I}]_{p′}, u, (Φ_u^j)_{j∈J}).

• If p = 1⋅p′, i.e. RULE(Φ) = (→e) with subject t u, premises Φ_t and (Φ_u^j)_{j∈J}, and I = ⊎_{j∈J} I_j. Then,

  Φ[(Ψ_i)_{i∈I}]_p  ≝  APP(Φ_t, u[s]_{p′}, (Φ_u^j[(Ψ_i)_{i∈I_j}]_{p′})_{j∈J})

where s is the unique subject of all the derivations (Ψ_i)_{i∈I} and u[s]_{p′} denotes the replacement of the subterm of u at position p′ by s (variable capture is allowed). Remark that the decomposition of I into the sets (I_j)_{j∈J} is non-deterministic, thus replacement turns out to be a non-deterministic operation.

We can now state the two main properties of system V, whose proofs can be found in Sec. 7 of [BucciarelliKV17].

###### Theorem 4.1 (Weighted Subject Reduction).

Let Φ ▷ Γ ⊢ t : τ. If t →p t′, then there exists Φ′ s.t. Φ′ ▷ Γ ⊢ t′ : τ. Moreover,

1. If p ∈ TOCC(Φ), then |Φ| > |Φ′|.

2. If p ∉ TOCC(Φ), then |Φ| = |Φ′|.

###### Theorem 4.2 (Subject Expansion).

Let Φ′ ▷ Γ ⊢ t′ : τ. If t →β t′, then there exists Φ s.t. Φ ▷ Γ ⊢ t : τ.

Note that weighted subject reduction implies that reduction of typed redex occurrences is normalising.

## 5 Substitution and Reduction on Derivations

In order to relate typed redex occurrences of convertible terms, we now extend the notion of β-reduction to derivation trees, by making use of a natural and basic concept of typed substitution. In contrast to substitution and β-reduction on terms, these operations are now both non-deterministic on derivation trees (see [Vial:thesis] for discussions and examples). Given a variable x and type derivations Φ_t and Φ_u, the typed substitution of x by Φ_u in Φ_t, written Φ_t{x/Φ_u} by making an abuse of notation, is a type derivation inductively defined on Φ_t, only if