Tight Typings and Split Bounds

07/06/2018 ∙ by Beniamino Accattoli, et al. ∙ Inria

Multi types---aka non-idempotent intersection types---have been used to obtain quantitative bounds on higher-order programs, as pioneered by de Carvalho. Notably, they bound at the same time the number of evaluation steps and the size of the result. Recent results show that the number of steps can be taken as a reasonable time complexity measure. At the same time, however, these results suggest that multi types provide quite lax complexity bounds, because the size of the result can be exponentially bigger than the number of steps. Starting from this observation, we refine and generalise a technique introduced by Bernadet & Graham-Lengrand to provide exact bounds for the maximal strategy. Our typing judgements carry two counters, one measuring evaluation lengths and the other measuring result sizes. In order to emphasise the modularity of the approach, we provide exact bounds for four evaluation strategies, both in the lambda-calculus (head, leftmost-outermost, and maximal evaluation) and in the linear substitution calculus (linear head evaluation). Our work aims at both capturing the results in the literature and extending them with new outcomes. Concerning the literature, it unifies de Carvalho and Bernadet & Graham-Lengrand via a uniform technique and a complexity-based perspective. The two main novelties are exact split bounds for the leftmost strategy---the only known strategy that evaluates terms to full normal forms and provides a reasonable complexity measure---and the observation that the computing device hidden behind multi types is the notion of substitution at a distance, as implemented by the linear substitution calculus.


1. Introduction

Type systems enforce properties of programs, such as termination, deadlock-freedom, or productivity. This paper studies a class of type systems for the λ-calculus that refines termination by providing exact bounds for evaluation lengths and for the sizes of normal forms.

Intersection types and multi types.

One of the cornerstones of the theory of the λ-calculus is that intersection types characterise termination: not only do typed programs terminate, but all terminating programs are typable as well (Coppo and Dezani-Ciancaglini, 1978, 1980; Pottinger, 1980; Krivine, 1993). In fact, the λ-calculus comes with different notions of evaluation (e.g. call-by-name, call-by-value, call-by-need) leading to different notions of normal form (head/weak/full, etc.) and, accordingly, with different systems of intersection types.

Intersection types are a flexible tool and, even when one fixes a particular notion of evaluation and normal form, the type system can be formulated in various ways. A flavour that became quite convenient in the last 10 years is that of non-idempotent intersection types (Gardner, 1994; Kfoury, 2000; Neergaard and Mairson, 2004; de Carvalho, 2007) (a survey can be found in (Bucciarelli et al., 2017)), where the intersection A ∧ A is not equivalent to A. Non-idempotent intersection types are more informative than idempotent ones because they give rise to a quantitative approach that allows counting resource consumption.

Non-idempotent intersections can be seen as multi-sets, which is why, to ease the language, we prefer to call them multi types rather than non-idempotent intersection types. Multi types have two main features:

  1. Bounds on evaluation lengths: they go beyond simply qualitative characterisations of termination, as typing derivations provide quantitative bounds on the length of evaluation (i.e. on the number of β-steps). Therefore, they give intensional insights into programs, and seem to provide a tool to reason about the complexity of programs.

  2. Linear logic interpretation: multi types are deeply linked to linear logic. The relational model (Girard, 1988; Bucciarelli and Ehrhard, 2001) of linear logic (often considered as a sort of canonical model of linear logic) is based on multi-sets, and multi types can be seen as a syntactic presentation of the relational model of the λ-calculus induced by the interpretation into linear logic.

These two facts together have a potential, fascinating consequence: they suggest that denotational semantics may provide abstract tools for complexity analyses, that are theoretically solid, being grounded on linear logic.

Various works in the literature explore the bounding power of multi types. Often, the bounding power is used qualitatively, i.e. without explicitly counting the number of steps, to characterise termination and/or the properties of the induced relational model. Indeed, multi types provide combinatorial proofs of termination that are simpler than those developed for (idempotent) intersection types (e.g. reducibility candidates). Several papers explore this approach under the call-by-name (Bucciarelli et al., 2012; Kesner and Ventura, 2015; Kesner and Vial, 2017; Paolini et al., 2017; Ong, 2017) or the call-by-value (Ehrhard, 2012; Díaz-Caro et al., 2013; Carraro and Guerrieri, 2014) operational semantics, or both (Ehrhard and Guerrieri, 2016). Sometimes, precise quantitative bounds are provided instead, as in (de Carvalho, 2007; Bernadet and Graham-Lengrand, 2013b). Multi types can also be used to provide characterisations of complexity classes (Benedetti and Ronchi Della Rocca, 2016). Other qualitative (de Carvalho, 2016; Guerrieri et al., 2016) and quantitative (de Carvalho et al., 2011; de Carvalho and Tortora de Falco, 2016) studies are sometimes carried out in the more general context of linear logic, rather than in the λ-calculus.

Reasonable cost models.

Usually, the quantitative works define a measure for typing derivations and show that the measure provides a bound on the length of evaluation sequences for typed terms. A criticism that could be raised against these results is, or rather was, that the number of β-steps of the bounded evaluation strategies might not be a reasonable cost model, that is, it might not be a reliable complexity measure. This is because no reasonable cost models for the λ-calculus were known at the time. But the understanding of cost models for the λ-calculus has made significant progress in the last few years. Since the nineties, it has been known that the number of steps of weak strategies (i.e. not reducing under abstraction) is a reasonable cost model (Blelloch and Greiner, 1995), where reasonable means polynomially related to the cost model of Turing machines. It is only in 2014 that a solution for the general case was obtained: the length of leftmost evaluation to normal form was shown to be a reasonable cost model in (Accattoli and Dal Lago, 2016). In this work we essentially update the study of the bounding power of multi types with the insights coming from the study of reasonable cost models. In particular, we provide new answers to the question of whether denotational semantics can really be used as an accurate tool for complexity analyses.

Size explosion and lax bounds.

The study of cost models made clear that evaluation lengths are independent from the size of their results. The skepticism about taking the number of β-steps as a reliable complexity measure comes from the size explosion problem, that is, the fact that the size of terms can grow exponentially with respect to the number of β-steps. When λ-terms are used to encode decision procedures, the normal forms (encoding true or false) are of constant size, and therefore there is no size explosion issue. But when λ-terms are used to compute normal forms other than Boolean values, there are families of terms t_n where t_n has size linear in n, evaluates to normal form in n β-steps, and produces a result of size exponential in n. Moreover, the size explosion problem is extremely robust, as there are families for which the size explosion is independent of the evaluation strategy. The difficulty in proving that the length of a given strategy provides a reasonable cost model lies precisely in the fact that one needs a compact representation of normal forms, to avoid computing them fully (because they can be huge, and it would be too expensive). An accessible introduction to reasonable cost models and size explosion is (Accattoli, 2018). A concrete size-exploding family is sketched below.
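As an illustration (a sketch of ours, not an example taken from the paper), the following OCaml program builds the standard duplicator family c := λx.λy. y x x and t_n := c (c (… (c z))), and evaluates it rightmost-innermost: t_n has size linear in n, normalises in exactly n β-steps, and its normal form has size exponential in n.

```ocaml
(* Size explosion, concretely: t_n is linear in n, normalises in n
   beta-steps (rightmost-innermost), and its normal form is
   exponential in n.  Encoding and names are ours. *)
type term = Var of string | Lam of string * term | App of term * term

let c = Lam ("x", Lam ("y", App (App (Var "y", Var "x"), Var "x")))
let rec t_n n = if n = 0 then Var "z" else App (c, t_n (n - 1))

let rec size = function
  | Var _ -> 1
  | Lam (_, t) -> 1 + size t
  | App (t, u) -> 1 + size t + size u

(* Naive substitution: safe here, as no capture arises in this family. *)
let rec subst x u = function
  | Var y -> if x = y then u else Var y
  | Lam (y, t) -> if x = y then Lam (y, t) else Lam (y, subst x u t)
  | App (t, s) -> App (subst x u t, subst x u s)

let steps = ref 0

(* Weak rightmost-innermost evaluation: argument first, then the
   function, then the redex; this suffices here, since every
   abstraction produced along the way already has a normal body. *)
let rec eval = function
  | App (t, u) ->
      let u' = eval u in
      (match eval t with
       | Lam (x, body) -> incr steps; eval (subst x u' body)
       | t' -> App (t', u'))
  | t -> t

let () =
  List.iter
    (fun n ->
       steps := 0;
       let nf = eval (t_n n) in
       Printf.printf "n=%2d  |t_n|=%3d  steps=%2d  |nf|=%7d\n"
         n (size (t_n n)) !steps (size nf))
    [1; 5; 10; 20]
```

Running it shows, for instance, that t_20 has size 161 and normalises in 20 steps to a normal form of size 5·2^20 − 4 = 5242876.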

Now, multi typings do bound the number of β-steps of reasonable strategies, but these bounds are too generous, since they bound at the same time the length of evaluations and the size of the normal forms. Therefore, even a notion of minimal typing (in the sense of being the smallest derivation) provides a bound that in some cases is exponentially worse than the number of β-steps.

Our observation is that the typings themselves are in fact much bigger than evaluation lengths, and so the widespread point of view for which multi types—and so the relational model of linear logic—faithfully capture evaluation lengths, or even complexity, is misleading.

Contributions

The tightening technique.

Our starting point is a technique introduced in a technical report by (Bernadet and Graham-Lengrand, 2013a). They study the case of strong normalisation, and present a multi type system where typing derivations of terms provide an upper bound on the number of β-steps to normal form. More interestingly, they show that every strongly normalising term admits a typing derivation that is sufficiently tight, where the obtained bound is exactly the length of the longest β-reduction path. This improved on previous results, e.g. (Bernadet and Lengrand, 2011; Bernadet and Graham-Lengrand, 2013b), where multi types provided the exact measure of the longest evaluation path plus the size of the normal form which, as discussed above, can be exponentially bigger. Finally, they enrich the structure of base types so that, for those typing derivations providing the exact lengths, the type of a term gives the structure (and hence the size) of its normal form. This paper embraces this tightening technique, simplifying it with the use of tight constants for base types, and generalising it to a range of other evaluation strategies, described below.

It is fair to wonder how natural the tightening technique is—a malicious reader may indeed suspect that we are cooking up an ad-hoc way of measuring evaluation lengths, betraying the linear-logic-in-disguise spirit of multi types. To remove any doubt, we show that our tight typings are actually isomorphic to minimal multi typings without tight constants. Said differently, the tightening technique turns out to be a way of characterising minimal typings in the standard multi type framework (aka the relational model). Let us point out that, in the literature, there are characterisations of minimal typings (so-called principal typings) only for normal forms, and that they extend to non-normal terms only indirectly, that is, by subject expansion of those for normal forms. Our approach, instead, provides a direct description, for any typable term.

Modular approach.

We develop all our results by using a unique schema that applies modularly to different evaluation strategies. Our approach isolates the key concepts for the correctness and completeness of multi types, providing a powerful and modular technique with at least two by-products. First, it reveals the relevance of neutral terms and of their properties with respect to types. Second, the concrete instantiations of the schema on four different cases always require subtle definitions, stressing the key conceptual properties of each case study.

Head and leftmost evaluation.

Our first application of the tightening technique is to the head and leftmost evaluation strategies. The head case is the simplest possible one. The leftmost case is the natural iteration of the head one, and the only known strong strategy whose number of steps provides a reasonable cost model (Accattoli and Dal Lago, 2016). Multi types bounding the lengths of leftmost normalising terms have also been studied in (Kesner and Ventura, 2014), but the exact number of steps taken by the leftmost strategy had not been measured via multi types before—therefore, this is a new result, as we now explain.

The study of the head and the leftmost strategies, at first sight, seems to be a minor reformulation of de Carvalho’s results about measuring via multi types the length of executions of the Krivine abstract machine (shortened KAM)—implementing weak head evaluation—and of the iterated KAM—which implements leftmost evaluation (de Carvalho, 2009). The study of cost models is here enlightening: de Carvalho’s iterated KAM does implement leftmost evaluation, but the overhead of the machine (which is counted by de Carvalho’s measure) is exponential in the number of β-steps, while here we only measure the number of β-steps, thus providing a much more parsimonious (and yet reasonable) measure.

The work of de Carvalho, Pagani and Tortora de Falco (de Carvalho et al., 2011), using the relational model of linear logic to measure evaluation lengths in proof nets, is also closely related. They do not however split the bounds, that is, they do not have a way to measure separately the number of steps and the size of the normal form. Moreover, their notion of cut-elimination by levels does not correspond to leftmost evaluation.

Maximal evaluation.

We also apply the technique to the maximal strategy, which takes the maximum number of steps to normal form, if any, and diverges otherwise. The maximal strategy has been bounded in (Bernadet and Lengrand, 2011), and exactly measured in (Bernadet and Graham-Lengrand, 2013a) via the idea of tightening, as described above. With respect to (Bernadet and Graham-Lengrand, 2013a), our technical development is simpler. The differences are:

  1. Uniformity with other strategies: The typing system used in (Bernadet and Graham-Lengrand, 2013a) for the maximal strategy has a special rule for typing a λ-abstraction whose bound variable does not appear in the body. This special case is due to the fact that the empty multi type is forbidden in the grammar of function types. Here, we align the type grammar with that used for the other evaluation strategies, allowing the empty multi type, which in turn allows the typing rules for λ-abstractions to be the same as for head and leftmost evaluation. This is not only simpler, but it also contributes to making the whole approach more uniform across the different strategies that we treat in the paper. Following the head and leftmost evaluation cases, our completeness theorem for the maximal strategy bears quantitative information (about e.g. evaluation lengths), in contrast with (Bernadet and Graham-Lengrand, 2013a).

  2. Quantitative aspects of normal forms: Bernadet and Graham-Lengrand encode the shape of normal forms into base types. We simplify this by only using two tight constants for base types. On the other hand, we decompose the actual size of a typing derivation as the sum of two quantities: the first one is shown to match the maximal evaluation length of the typed term, and the second one is shown to match the size of its normal form together with the size of all terms that are erased by the evaluation process. Identifying what the second quantity captures is a new contribution.

  3. Neutral terms: we emphasise the key role of neutral terms in the technical development by describing their specificities with respect to typing. This is not explicitly broached in (Bernadet and Graham-Lengrand, 2013a).

Linear head evaluation.

Last, we apply the tightening technique to linear head evaluation (Mascari and Pedicini, 1994; Danos and Regnier, 2004) (lh for short), formulated in the linear substitution calculus (LSC) (Accattoli, 2012; Accattoli et al., 2014), a λ-calculus with explicit substitutions that is strongly related to linear logic proof nets, and also a minor variation over a calculus by Milner (Milner, 2007). The literature contains a characterisation of lh-normalisable terms (Kesner and Ventura, 2014). Moreover, (de Carvalho, 2007) measures the executions of the KAM, a result that can also be interpreted as a measure of lh-evaluation. What we show, however, is stronger, and somewhat unexpected.

To bound lh-evaluation, in fact, we can largely stand on the bounds obtained for head evaluation. More precisely, the result giving exact bounds for head evaluation takes into account only the number of abstraction and application typing rules. For linear head evaluation, instead, we simply need to also count the axioms, i.e. the rules typing variable occurrences, and nothing else. It turns out that the length of a linear head evaluation plus the size of the linear head normal form is exactly the size of the tight typing.

Said differently, multi typings simply encode evaluations in the LSC. In particular, we do not have to adapt multi types to the LSC, as for instance de Carvalho does to deal with the KAM. It is actually the other way around: as they are, multi typings naturally measure evaluations in the LSC. To measure evaluations in the λ-calculus, instead, one has to forget the role of the axioms. The best way to stress it, probably, is that the LSC is the computing device behind multi types.

Most proofs have been moved to the Appendix.

Other Related Works

Apart from the papers already cited, let us mention some other related works. A recent, general categorical framework to define intersection and multi type systems is in (Mazza et al., 2018).

While the inhabitation problem is undecidable for idempotent intersection types (Urzyczyn, 1999), the quantitative aspects provided by multi types make it decidable (Bucciarelli et al., 2014). Intersection types are also used in (Dudenhefner and Rehof, 2017) to give a bounded-dimensional description of λ-terms via a notion of norm, which is resource-aware and orthogonal to that of rank. It is proved that inhabitation in bounded dimension is decidable (EXPSPACE-complete) and subsumes decidability in bounded rank (Urzyczyn, 2009).

Other works propose a more practical perspective on resource-aware analyses for functional programs. In particular, type-based techniques for automatically inferring bounds on higher-order functions have been developed, based on sized types (Hughes et al., 1996; Portillo et al., 2002; Vasconcelos and Hammond, 2004; Avanzini and Lago, 2017) or amortized analysis (Hofmann and Jost, 2003; Hoffmann and Hofmann, 2010; Jost et al., 2017). This led to practical cost analysis tools like Resource-Aware ML (Hoffmann et al., 2012) (see raml.co). Intersection types have been used (Simões et al., 2007) to address the size aliasing problem of sized types, whereby cost analysis sometimes overapproximates cost to the point of losing all cost information (Portillo et al., 2002). How our multi types could further refine the integration of intersection types with sized types is a direction for future work, as is the more general combination of our method with the type-based cost analysis techniques mentioned above.

2. A Bird’s Eye View

Our study is based on a schema that is repeated for different evaluation strategies, making most notions parametric in the strategy under study. The following concepts constitute the main ingredients of our technique:

  1. Strategy, together with the normal, neutral, and abs predicates: there is a (deterministic) evaluation strategy whose normal forms are characterised via two related predicates, normal and neutral; the intended meaning of the second one is that the term is normal and can never behave as an abstraction (that is, it does not create a redex when applied to an argument). We further parametrise this last notion via a predicate abs identifying abstractions, because the definition of deterministic strategies requires some subterms not to be abstractions.

  2. Typing derivations: there is a multi type system with three features:

    • Tight constants: there are two new type constants, neutral and abs, and rules to introduce them. As their names suggest, the constants neutral and abs are used to type terms whose normal form is neutral or an abstraction, respectively.

    • Tight derivations: there is a notion of tight derivation that requires a special use of the constants.

    • Indices: typing judgements have the shape Γ ⊢^(b,r) t : τ, where b and r are indices meant to count, respectively, the number of steps to normal form and the size of the normal form.

  3. Sizes: there is a notion of size of terms that depends on the strategy, noted |t|. Moreover, there is a notion of size |Φ| of a typing derivation Φ that also depends on the strategy / type system, and that coincides with the sum b + r of the indices associated to the last judgement of Φ.

  4. Characterisation: we prove that there is a tight typing Φ ▷ Γ ⊢^(b,r) t : τ if and only if there exists a normal term p such that t evaluates to p, with b measuring the length of the evaluation and r = |p| the size of the normal form.

  5. Proof technique: the characterisation is always obtained through the same sequence of intermediate results. Correctness follows from the fact that all tight typings of normal forms precisely measure their size, together with a substitution lemma for typing derivations and subject reduction. Completeness follows from the fact that every normal form admits a tight typing, together with an anti-substitution lemma for typing derivations and subject expansion.

  6. Neutral terms: we stress the relevance of neutral terms in normalisation proofs from a typing perspective. In particular, correctness theorems always rely on a lemma about them. Neutral terms are a common concept in the study of the λ-calculus, playing a key role in, for instance, the reducibility candidates technique (Girard et al., 1989).

The proof schema is illustrated in the next section on two standard strategies, namely head and leftmost-outermost evaluation. It is then slightly adapted to deal with maximal evaluation in Sect. 5 and linear head evaluation in Sect. 6.

Evaluation systems.

Each case study treated in the paper relies on the same properties of the strategy →_S and of the related predicates normal_S, neutral_S, and abs_S, which we collect under the notion of evaluation system.

Definition 2.1 (Evaluation system).

Let T be a set of terms, →_S be a (deterministic) strategy, and normal_S, neutral_S, and abs_S be predicates on T. All together they form an evaluation system S if for all t, u, s ∈ T:

  1. Determinism of →_S: if t →_S u and t →_S s, then u = s.

  2. Characterisation of S-normal terms: t is →_S-normal if and only if normal_S(t).

  3. Characterisation of S-neutral terms: neutral_S(t) if and only if normal_S(t) and not abs_S(t).

Given a strategy →_S, we use →_S^k for its k-fold iteration and →_S^+ for its transitive closure.
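As a reading aid, Definition 2.1 can be packaged as an interface. The following OCaml module type is a sketch of ours (the names are not from the paper), where step implements the deterministic strategy and returns None exactly on normal terms.

```ocaml
(* Sketch: Definition 2.1 as an OCaml interface (names ours). *)
module type EVALUATION_SYSTEM = sig
  type term

  (* The deterministic strategy: at most one reduct (condition 1). *)
  val step : term -> term option

  (* Condition 2: normal t  iff  step t = None. *)
  val normal : term -> bool

  (* Condition 3: neutral t  iff  normal t && not (abs t). *)
  val neutral : term -> bool
  val abs : term -> bool
end
```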

3. Head and Leftmost-Outermost Evaluation

In this section we consider two evaluation systems at once. The two strategies are the famous head and leftmost-outermost evaluations. We treat the two cases together to stress the modularity of our technique. The set of λ-terms is given by the grammar of ordinary λ-terms:

  t, u ::= x | λx.t | t u

Normal, neutral, and abs predicates.

The predicates normal_h and normal_lo defining head and leftmost-outermost (shortened LO in the text and lo in mathematical symbols) normal terms are in Fig. 1, and they are based on two auxiliary predicates defining neutral terms: neutral_h and neutral_lo—note that neutral_lo(t) implies neutral_h(t). The predicates abs_h and abs_lo coincide for the systems h and lo, and they are true simply when the term is an abstraction.
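To fix intuitions while the figure is stated formally, here is a small OCaml rendering of the predicates; the datatype and function names are ours, not the paper's.

```ocaml
(* Sketch of the predicates of Fig. 1 (names are ours). *)
type term =
  | Var of string
  | Lam of string * term
  | App of term * term

(* Head-neutral: a variable applied to arbitrary arguments. *)
let rec neutral_h = function
  | Var _ -> true
  | App (t, _) -> neutral_h t      (* arguments are unconstrained *)
  | Lam _ -> false

(* Head-normal: abstractions over a head-neutral term. *)
let rec normal_h = function
  | Lam (_, t) -> normal_h t
  | t -> neutral_h t

(* LO-neutral additionally requires arguments to be LO-normal,
   so neutral_lo t implies neutral_h t. *)
let rec neutral_lo = function
  | Var _ -> true
  | App (t, u) -> neutral_lo t && normal_lo u
  | Lam _ -> false

and normal_lo = function
  | Lam (_, t) -> normal_lo t
  | t -> neutral_lo t

(* The abs predicate is the same for both systems. *)
let abs = function Lam _ -> true | _ -> false
```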

Small-step semantics.

The head and leftmost-outermost strategies →_h and →_lo are both defined in Fig. 2. Note that these definitions rely on the predicates defining neutral terms and abstractions.
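Continuing the sketch above, the two strategies can be rendered as partial step functions; substitution is naive (it assumes all bound names are distinct, so no capture can occur), which is enough for illustration.

```ocaml
(* Sketch of the strategies of Fig. 2 (names are ours). *)
type term =
  | Var of string
  | Lam of string * term
  | App of term * term

(* Naive substitution, assuming all bound names are distinct. *)
let rec subst x u = function
  | Var y -> if x = y then u else Var y
  | Lam (y, t) -> Lam (y, subst x u t)
  | App (t, s) -> App (subst x u t, subst x u s)

(* One head step: fire the head redex, also under abstractions. *)
let rec step_h = function
  | Lam (x, t) -> Option.map (fun t' -> Lam (x, t')) (step_h t)
  | App (Lam (x, t), u) -> Some (subst x u t)
  | App (t, u) -> Option.map (fun t' -> App (t', u)) (step_h t)
  | Var _ -> None

(* One LO step: as the head step but, when the left subterm of an
   application is normal (hence neutral), move into the argument. *)
let rec step_lo = function
  | Lam (x, t) -> Option.map (fun t' -> Lam (x, t')) (step_lo t)
  | App (Lam (x, t), u) -> Some (subst x u t)
  | App (t, u) ->
      (match step_lo t with
       | Some t' -> Some (App (t', u))
       | None -> Option.map (fun u' -> App (t, u')) (step_lo u))
  | Var _ -> None
```

By construction, step_h t = None exactly when normal_h t holds, and similarly for step_lo and normal_lo, matching condition 2 of Definition 2.1.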

Proposition 3.1 (Head and LO evaluation systems).

Let S ∈ {h, lo}. Then →_S, together with the predicates normal_S, neutral_S, and abs_S, is an evaluation system.

The proof is routine, and it is therefore omitted even from the Appendix.

Figure 1. Head and leftmost-outermost neutral and normal terms


Figure 2. Head and leftmost-outermost strategies

Sizes.

The notions of head size |t|_h and LO size |t|_lo of a term t are defined as follows—the difference is on applications:

  |x|_h := 0    |λx.t|_h := |t|_h + 1    |t u|_h := |t|_h + 1
  |x|_lo := 0    |λx.t|_lo := |t|_lo + 1    |t u|_lo := |t|_lo + |u|_lo + 1
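In OCaml, the two sizes read as follows (the base case |x| = 0 follows the definitions above; the term datatype is the one sketched earlier).

```ocaml
type term = Var of string | Lam of string * term | App of term * term

(* Head size: arguments of applications are not counted. *)
let rec size_h = function
  | Var _ -> 0
  | Lam (_, t) -> 1 + size_h t
  | App (t, _) -> 1 + size_h t

(* LO size: arguments are counted too. *)
let rec size_lo = function
  | Var _ -> 0
  | Lam (_, t) -> 1 + size_lo t
  | App (t, u) -> 1 + size_lo t + size_lo u
```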

Multi types.

We define the following notions about types.

Figure 3. Type system for head and LO evaluations
  • Multi types are defined by the following grammar:

      types        τ, σ ::= a | neutral | abs | M → τ
      multi-sets   M, N ::= [τ_1, ..., τ_n]  (n ≥ 0)

    where a ranges over a non-empty set of atomic types and [·] denotes the multi-set constructor.

  • Examples of multi-sets: [τ, τ, σ] is a multi-set containing two occurrences of τ and one occurrence of σ, and [] is the empty multi-set.

  • A typing context Γ is a map from variables to finite multi-sets of types such that only finitely many variables are not mapped to the empty multi-set []. We write dom(Γ) for the domain of Γ, i.e. the set {x | Γ(x) ≠ []}.

  • Tightness: we use the notation tight for the set {neutral, abs} of tight constants. Moreover, we write tight(τ) if τ is of the form neutral or abs, tight(M) if tight(τ) holds for all τ in M, and tight(Γ) if tight(Γ(x)) holds for all x ∈ dom(Γ), in which case we also say that Γ is tight.

  • The multi-set union ⊎ is extended to typing contexts point-wise, i.e. Γ ⊎ Δ maps each variable x to Γ(x) ⊎ Δ(x). This notion is extended to several contexts as expected, so that ⊎_{i∈I} Γ_i denotes a finite union of contexts (when I = ∅ the notation is to be understood as the empty context). We write Γ; x : M for Γ ⊎ (x ↦ M), which is defined only if x ∉ dom(Γ). More generally, we write Γ; Δ if the intersection between the domains of Γ and Δ is empty.

  • The restricted context Γ∖x with respect to the variable x is defined by (Γ∖x)(x) := [] and (Γ∖x)(y) := Γ(y) if y ≠ x.
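The following OCaml sketch summarises the grammar of types and the operations on typing contexts just listed; representing multi-sets as lists is an assumption of ours, adequate because the operations used here are insensitive to order.

```ocaml
(* Sketch of multi types and typing contexts (representation ours). *)
type ty =
  | Atom of string          (* atomic types *)
  | Neutral                 (* tight constant neutral *)
  | Abs                     (* tight constant abs *)
  | Arrow of multi * ty     (* M -> tau *)
and multi = ty list         (* multi-sets as (unordered) lists *)

let tight_ty = function Neutral | Abs -> true | _ -> false
let tight_multi (m : multi) = List.for_all tight_ty m

(* Typing contexts: finite maps from variables to multi types;
   absent variables are implicitly mapped to the empty multi-set. *)
module Ctx = Map.Make (String)
type ctx = multi Ctx.t

let tight_ctx (g : ctx) = Ctx.for_all (fun _ m -> tight_multi m) g

(* Point-wise multi-set union of two typing contexts. *)
let ctx_union : ctx -> ctx -> ctx =
  Ctx.union (fun _ m1 m2 -> Some (m1 @ m2))

(* Restricted context: map x back to the empty multi-set. *)
let restrict (x : string) (g : ctx) : ctx = Ctx.remove x g
```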

Typing systems.

There are two typing systems, one for head and one for LO evaluation. Their typing rules are presented in Fig. 3: the head system h contains all the rules except app_lo, while the LO system lo contains all the rules except app_hd.

Roughly, the intuitions behind the typing rules are as follows (please ignore the indices b and r for the time being):

  • Rules ax, fun_b, and app_b: these rules are essentially the traditional rules for multi types for head and LO evaluation (see e.g. (Bucciarelli et al., 2017)), modulo the presence of the indices.

  • Rule many: this is a structural rule allowing typing terms with a multi-set of types. In some presentations of multi types it is hardcoded in the right premise of the app_b rule (which requires a multi-set). For technical reasons, it is preferable to separate it from app_b. Morally, it corresponds to the !-promotion rule of linear logic.

  • Rule fun_r: the body t has already been tightly typed, and all the types associated to the abstracted variable x are also tight constants. Then λx.t receives the tight constant abs for abstractions. The consequence is that this abstraction can no longer be applied, because it does not have an arrow type, and there are no rules to apply terms of type abs. Therefore, the abstraction constructor cannot be consumed by evaluation, and it ends up in the normal form of the term, which then has the form λx.p.

  • Rule app_hd: t has already been tightly typed with neutral, and so morally it head normalises to a term having neutral form. The rule adds a further argument u that cannot be consumed by evaluation, because t will never become an abstraction. Therefore, u ends up in the head normal form of t u, which is still neutral—correctly, t u is then also typed with neutral. Note that there is no need to type u, because head evaluation never enters into arguments.

  • Rule app_lo: similar to rule app_hd, except that LO evaluation enters into arguments, and so the added argument u now also has to be typed, and with a tight constant. Note a key difference with app_b: in app_lo the argument is typed exactly once (that is, its type is not a multi-set)—correctly, because its LO normal form appears exactly once in the LO normal form of t u.

  • Tight constants and predicates: there is of course a correlation between the tight constants neutral and abs and the predicates neutral and abs. Namely, a term t is tightly typable with neutral if and only if the normal form of t verifies the predicate neutral, as we shall prove. For the tight constant abs and the predicate abs the situation is similar but weaker: if the normal form of t verifies abs then t is typable with abs, but not the other way around—for instance, a variable is typable with abs without being an abstraction.

  • The type systems are not syntax-directed: given an abstraction (resp. an application), it can be typed with rule fun_b or fun_r (resp. app_b or app_hd / app_lo), depending on whether the constructor typed by the rule ends up in the normal form or not. Thus, for example, given the term I I, where I is the identity function λz.z, the second occurrence of I can be typed with abs using rule fun_r, while the first one can be typed with [abs] → abs using rule fun_b.

Typing judgements are of the form Γ ⊢^(b,r) t : τ, where (b, r) is a pair of integers whose intended meaning is explained in the next paragraph. We write Φ ▷ Γ ⊢_S^(b,r) t : τ, with S being either h or lo, if Φ is a typing derivation in the system S ending in the judgement Γ ⊢^(b,r) t : τ.
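The content of Fig. 3 is not reproduced in this version of the text, so the following display is a reconstructed sketch of its rules, consistent with the intuitions above and with the index bookkeeping described in the next paragraph; take the rule names and exact side conditions as our notation rather than the figure's.

\[
\frac{}{x : [\tau] \vdash^{(0,0)} x : \tau}\;\mathsf{ax}
\qquad
\frac{\Gamma ; x : M \vdash^{(b,r)} t : \tau}
     {\Gamma \vdash^{(b+1,\,r)} \lambda x.t : M \to \tau}\;\mathsf{fun}_b
\qquad
\frac{\Gamma ; x : M \vdash^{(b,r)} t : \tau \quad \mathsf{tight}(M) \quad \mathsf{tight}(\tau)}
     {\Gamma \vdash^{(b,\,r+1)} \lambda x.t : \mathsf{abs}}\;\mathsf{fun}_r
\]
\[
\frac{(\Gamma_i \vdash^{(b_i,r_i)} t : \tau_i)_{i \in I}}
     {\biguplus_{i\in I}\Gamma_i \vdash^{(\sum_i b_i,\,\sum_i r_i)} t : [\tau_i]_{i\in I}}\;\mathsf{many}
\qquad
\frac{\Gamma \vdash^{(b,r)} t : M \to \tau \quad \Delta \vdash^{(b',r')} u : M}
     {\Gamma \uplus \Delta \vdash^{(b+b'+1,\,r+r')} t\,u : \tau}\;\mathsf{app}_b
\]
\[
\frac{\Gamma \vdash^{(b,r)} t : \mathsf{neutral}}
     {\Gamma \vdash^{(b,\,r+1)} t\,u : \mathsf{neutral}}\;\mathsf{app}_{hd}
\qquad
\frac{\Gamma \vdash^{(b,r)} t : \mathsf{neutral} \quad \Delta \vdash^{(b',r')} u : \tau' \quad \mathsf{tight}(\tau')}
     {\Gamma \uplus \Delta \vdash^{(b+b',\,r+r'+1)} t\,u : \mathsf{neutral}}\;\mathsf{app}_{lo}
\]

The head system uses app_hd but not app_lo; the LO system uses app_lo but not app_hd.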

Indices.

The roles of b and r can be described as follows:

  • b and β-steps: b counts the rules of the derivation that can be used to form β-redexes, i.e. the number of fun_b and app_b rules. Morally, b is at least twice the number of β-steps to normal form, because typing a β-redex requires two rules. For tight typing derivations (introduced below), we are going to prove that b is exactly the double of the length of the evaluation of the typed term to its normal form, according to the chosen evaluation strategy.

  • r and size of the result: r counts the rules typing constructors that cannot be consumed by β-reduction according to the chosen evaluation strategy, i.e. the number of fun_r and app_hd / app_lo rules. These rules type the result of the evaluation, according to the chosen strategy, and measure the size of the result. Both the notion of result and the way its size is measured depend on the evaluation strategy.

Typing size.

We define both the head size |Φ|_h and the LO size |Φ|_lo of a typing derivation Φ as the number of rules in Φ, not counting rules ax and many. The size of a derivation is reflected by the pair of indices on its final judgement: whenever Φ ▷ Γ ⊢_S^(b,r) t : τ, we have |Φ|_S = b + r. Note indeed that every rule (except ax and many) adds exactly 1 to this size.

For systems h and lo, the indices on typing judgements are not really needed, as b can be recovered as the number of fun_b and app_b rules, and r as the number of fun_r and app_hd / app_lo rules. We prefer to make them explicit because 1) we want to stress the separate counting, and 2) for linear head evaluation in Sect. 6 the counting shall be more involved, and the indices shall not be recoverable.

The fact that ax is not counted for |Φ|_h and |Φ|_lo shall change in Sect. 6, where we show that counting ax rules corresponds to measuring evaluations in the linear substitution calculus. The fact that many is not counted, instead, is due to the fact that it does not correspond to any constructor on terms. A further reason is that the many rule may be eliminated by absorbing it into the app_b rule, which is the only rule that uses multi-sets—it is however technically convenient to separate the two.

Subtleties and easy facts.

Let us overview some peculiarities and consequences of the definition of our type systems.

  1. Relevance: no weakening is allowed in axioms. An easy induction on typing derivations shows that a variable declaration appears explicitly in the typing context of a type derivation for t only if the variable occurs free in some typed subterm of t. In system lo, all subterms of t are typed, and so x appears in the typing context if and only if x ∈ fv(t). In system h, instead, arguments of applications might not be typed (because of rule app_hd), and so there may be variables x ∈ fv(t) that do not appear in the typing context.

  2. Vacuous abstractions: we rely on the convention that the two abstraction rules can always abstract a variable not explicitly occurring in the context. Indeed, in the fun_b rule, if x ∉ dom(Γ), then Γ; x : M is to be read with M = [] and the type of the abstraction is [] → τ, while in the fun_r rule, if x ∉ dom(Γ), then Γ(x) is [] and thus tight(Γ(x)) holds.

  3. Head typings and applications: note that the app_hd rule types an application t u without typing the right subterm u. This matches the fact that t u is a head normal form whenever t is a neutral one, independently of the status of u.

Tight derivations.

A given term may have many different typing derivations, indexed by different pairs (b, r). They always provide upper bounds on S-evaluation lengths and lower bounds on the S-size of S-normal forms, respectively. The interesting aspect of our type systems, however, is that there is a simple description of a class of typing derivations that provide exact bounds for these quantities, as we shall show. Their definition relies on the tight constants.

Definition 3.2 (Tight derivations).

Let S ∈ {h, lo}. A derivation Φ ▷ Γ ⊢_S^(b,r) t : τ is tight if tight(τ) and tight(Γ).

Let us stress that, remarkably, tightness is expressed as a property of the last judgement only. This is however not so unusual: characterisations of weakly normalising terms via intersection/multi types also rely on properties of the last judgement only, as discussed in Sect. 7.

In Sect. 7, in particular, we show that the size of a tight derivation for a term t is minimal among the derivations for t. Moreover, it is also the same size as that of the minimal derivations making no use of tight constants nor of the rules using them. Therefore, tight derivations may be thought of as a characterisation of minimal derivations.

Example.

Let t := (λx. x x) I, where I is the identity function λz.z. Let us first consider the head evaluation of t to normal form:

  (λx. x x) I →_h I I →_h I

The evaluation sequence has length 2. The head normal form I = λz.z has size |I|_h = 1. To give a tight typing for the term, let us write A for the multi type [[abs] → abs, abs]. Then,

  ⊢^(4,1) (λx. x x) I : abs

where λx. x x is typed with A → abs and the argument I with A. Indeed, the pair (4, 1) represents the 2 evaluation steps to normal form (b = 4 being twice the number of steps) and a head normal form of size 1.
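The judgements assembling this tight derivation can be tabulated as follows, with the rule names of the reconstructed Fig. 3:

\[
\begin{array}{ll}
\vdash^{(2,0)} \lambda x.\,x\,x \;:\; [[\mathsf{abs}]\to\mathsf{abs},\,\mathsf{abs}] \to \mathsf{abs}
  & \text{(by } \mathsf{ax}, \mathsf{many}, \mathsf{app}_b, \mathsf{fun}_b\text{)}\\[2pt]
\vdash^{(1,0)} \mathsf{I} \;:\; [\mathsf{abs}]\to\mathsf{abs}
  & \text{(by } \mathsf{ax}, \mathsf{fun}_b\text{)}\\[2pt]
\vdash^{(0,1)} \mathsf{I} \;:\; \mathsf{abs}
  & \text{(by } \mathsf{ax}, \mathsf{fun}_r\text{)}\\[2pt]
\vdash^{(1,1)} \mathsf{I} \;:\; [[\mathsf{abs}]\to\mathsf{abs},\,\mathsf{abs}]
  & \text{(by } \mathsf{many} \text{ on the two previous lines)}\\[2pt]
\vdash^{(4,1)} (\lambda x.\,x\,x)\,\mathsf{I} \;:\; \mathsf{abs}
  & \text{(by } \mathsf{app}_b \text{ on the first and fourth lines)}
\end{array}
\]

Note how the final indices are obtained: app_b adds 1 to the b indices of its premises (2 + 1 + 1 = 4), and the single fun_r contributes the r index 1.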

3.1. Tight Correctness

Correctness of tight typings is the fact that, whenever a term is tightly typable with indices (b, r), then b is exactly the double of the number of evaluation steps to normal form, while r is exactly the size of the normal form. Thus, tight typing in system h (resp. lo) gives information about head evaluation to head normal form (resp. LO evaluation to LO normal form). The correctness theorem is always obtained via three intermediate steps.

First step: tight typings of normal forms.

The first step is to show that, when a tightly typed term is a normal form, then the first index b of its type derivation is 0, so that it correctly captures the (double of the) number of steps, and the second index r coincides exactly with the size of the normal form.

Proposition 3.3 (Properties of typings, and of tight typings, for normal forms).

Let S ∈ {h, lo}, t be such that normal_S(t), and Φ ▷ Γ ⊢_S^(b,r) t : τ be a typing derivation.

  1. Size bound: |t|_S ≤ |Φ|_S.

  2. Tightness: if Φ is tight, then b = 0 and r = |t|_S.

  3. Neutrality: if τ = neutral, then neutral_S(t).

The proof is by induction on the typing derivation Φ. Let us stress three points:

  1. Minimality: the size of a typing of a normal form t always bounds the size of t (Proposition 3.3.1), and therefore tight typings, which provide an exact bound (Proposition 3.3.2), are typings of minimal size. For the sake of conciseness, in most of the paper we focus on tight typings only. In Sect. 7, however, we study in detail the relationship between arbitrary typings and tight typings, extending their minimality beyond normal forms.

  2. Size of tight typings: note that Proposition 3.3.2 indirectly shows that all tight typings have the same indices, and therefore the same size. The only way in which two tight typings can differ, in fact, is in whether the variables in the typing context are typed with neutral or abs, but the structure of different tight typings is necessarily the same (which is also the structure of the normal form itself).

  3. Unveiling of a key structural property: Proposition 3.3 relies on the following interesting lemma about neutral terms and tight typings.

    Lemma 3.4 (Tight spreading on neutral terms).

    Let S ∈ {h, lo}, t be such that neutral_S(t), and Φ ▷ Γ ⊢_S^(b,r) t : τ be a typing derivation such that tight(Γ). Then τ = neutral.

    The lemma expresses the fact that the tightness of the type of a neutral term only depends on its typing context. Morally, this fact is what makes tightness expressible as a property of the final judgement only. We shall see in Sect. 7 that a similar property is hidden in more traditional approaches to weak normalisation (see Lemma 7.6). Such a spreading property appears repeatedly in our study, and we believe that its isolation is one of the contributions of our work, induced by the modular and comparative study of various strategies.

Second step: substitution lemma.

Then one has to show that types, typings, and indices behave well with respect to substitution, which is essential, given that β-reduction is based on it.

Lemma 3.5 (Substitution and typings for h and lo).

The following rule is admissible in both systems h and lo:

\[
\frac{\Gamma ; x : M \vdash^{(b,r)} t : \tau \qquad \Delta \vdash^{(b',r')} u : M}
     {\Gamma \uplus \Delta \vdash^{(b+b',\, r+r')} t\{x := u\} : \tau}
\]

Moreover, if the derivations of the premises are tight, then so is the derivation of the conclusion.

The proof is by induction on the derivation of Γ; x : M ⊢^(b,r) t : τ.

Note that the lemma also holds for M = [], in which case Δ is necessarily empty. In system lo, it is also true that if M = [] then x ∉ fv(t) and t{x := u} = t, because all free variables of t have a non-empty type in the typing context. As already pointed out, in system h such a matching between free variables and typing contexts does not hold, and it can be that M = [] and yet x ∈ fv(t) and t{x := u} ≠ t.

Third step: quantitative subject reduction.

Finally, one needs to show a quantitative form of type preservation along evaluation. When the typing is tight, every evaluation step decreases the first index b by exactly 2 units, accounting for the application and abstraction constructors consumed by the firing of the redex.

Proposition 3.6 (Quantitative subject reduction for h and lo).

Let S ∈ {h, lo}. If Φ ▷ Γ ⊢_S^(b,r) t : τ is tight and t →_S t', then b ≥ 2 and there exists a tight typing Φ' ▷ Γ ⊢_S^(b−2,r) t' : τ.

The proof is by induction on the definition of →_S, and it relies on the substitution lemma (Lemma 3.5) for the base case of β-reduction at top level.

It is natural to wonder what happens when the typing is not tight. In the head case, the index b still decreases by exactly 2. In the LO case things are subtler—they are discussed in Sect. 7.

Summing up.

The tight correctness theorem is proved by a straightforward induction on the evaluation length relying on quantitative subject reduction (Proposition 3.6) for the inductive case, and the properties of tight typings for normal forms (Proposition 3.3) for the base case.

Theorem 3.7 (Tight correctness for h and lo).

Let S ∈ {h, lo} and Φ ▷ Γ ⊢_S^(b,r) t : τ be a tight derivation. Then there exists p such that normal_S(p), t →_S^(b/2) p, and |p|_S = r. Moreover, if τ = neutral, then neutral_S(p).

3.2. Tight Completeness

Completeness of tight typings in system S expresses the fact that every S-normalising term has a tight derivation in system S. As for correctness, the completeness theorem is always obtained via three intermediate steps, dual to those for correctness. Essentially, one shows that every normal form has a tight derivation, and then extends the result to S-normalising terms by pulling typability back through evaluation, using a subject expansion property.

First step: normal forms are tightly typable.

A simple induction on the structure of normal forms proves the following proposition.

Proposition 3.8 (Normal forms are tightly typable for h and lo).

Let S ∈ {h, lo} and t be such that normal_S(t). Then there exists a tight derivation Φ ▷ Γ ⊢_S^(0, |t|_S) t : τ. Moreover, if neutral_S(t) then τ = neutral, and if abs(t) then τ = abs.

In contrast to the proposition for normal forms of the correctness part (Proposition 3.3), here there are no auxiliary lemmas, so the property is simpler.

Second step: anti-substitution lemma.

In order to pull typability back along evaluation sequences, we first have to show that typability can also be pulled back along substitutions.

Lemma 3.9 (Anti-substitution and typings for h and lo).

Let S ∈ {h, lo} and Φ ▷ Γ ⊢_S^(b,r) t{x := u} : τ. Then there exist:

  • a multi type M;

  • a typing derivation Φ_t ▷ Γ_t; x : M ⊢_S^(b_t, r_t) t : τ; and

  • a typing derivation Φ_u ▷ Δ ⊢_S^(b_u, r_u) u : M

such that:

  • Typing context: Γ = Γ_t ⊎ Δ;

  • Indices: (b, r) = (b_t + b_u, r_t + r_u).

Moreover, if Φ is tight then so are Φ_t and Φ_u.

The proof is by induction on the derivation Φ.

Let us point out that the anti-substitution lemma also holds in the degenerate case in which x does not occur in t and u is not normalising: rule many can indeed be used to type any term with the empty multi-set [], by taking an empty family of premises and (0, 0) as indices. Note also that this is forced by the fact that t{x := u} = t, and so M is necessarily []. Finally, this fact does not contradict the correctness theorem, because here u is typed with a multi-set, while the theorem requires a type.

Third step: quantitative subject expansion.

This property guarantees that typability can be pulled back along evaluation sequences.

Proposition 3.10 (Quantitative subject expansion for h and lo).

Let S ∈ {h, lo} and Φ' ▷ Γ ⊢_S^(b,r) t' : τ be a tight derivation. If t →_S t', then there exists a (tight) typing Φ ▷ Γ ⊢_S^(b+2,r) t : τ.

The proof is a simple induction over the definition of →_S, using the anti-substitution lemma in the base case of evaluation at top level.

Summing up.

The tight completeness theorem is proved by a straightforward induction on the evaluation length relying on quantitative subject expansion (Proposition 3.10) for the inductive case, and the existence of tight typings for normal forms (Proposition 3.8) for the base case.

Theorem 3.11 (Tight completeness for h and lo).

Let S ∈ {h, lo} and t →_S^k p with normal_S(p). Then there exists a tight typing Φ ▷ Γ ⊢_S^(2k, |p|_S) t : τ. Moreover, if neutral_S(p) then τ = neutral, and if abs(p) then τ = abs.

4. Extensions and Deeper Analyses

In the rest of the paper we are going to further explore the properties of the tight approach to multi types along three independent axes:

  1. Maximal evaluation: we adapt the methodology to the case of maximal evaluation, which relates to strong normalisation in that the maximal evaluation strategy terminates only if the term being evaluated is strongly normalising. This case is a simplification of (Bernadet and Graham-Lengrand, 2013a) that can be directly related to the head and leftmost evaluation cases. It is in fact very close to leftmost evaluation but for the fact that, during evaluation, typing contexts are not necessarily preserved and the size of the terms being erased has to be taken into account. The statements of the properties in Sections 3.1 and 3.2 have to be adapted accordingly.

  2. Linear head evaluation: we reconsider head evaluation in the linear substitution calculus, obtaining exact bounds on the number of steps and on the size of normal forms. The surprise here is that the type system is essentially unchanged, and that it is enough to also count axiom rules (which are ignored for head evaluation in the λ-calculus) in order to exactly bound the number of linear substitution steps as well.

  3. LO evaluation and minimal typings: we explore the relationship between tight typings and traditional typings without tight constants. This study is done in the context of LO evaluation, which is the most relevant one with respect to cost models for the λ-calculus. We show in particular that tight typings are isomorphic to minimal traditional typings.

Let us stress that these three variations on a theme can be read independently.

5. Maximal Evaluation

In this section we consider the maximal strategy, which gives the longest evaluation sequence from any strongly normalising term to its normal form. The maximal evaluation strategy is perpetual, in the sense that if a term has a diverging evaluation path, then the maximal strategy diverges on it. Therefore, its termination subsumes the termination of any other strategy, which is why it is often used to reason about the strong normalisation property (van Raamsdonk et al., 1999).

Strong normalisation and erasing steps

It is well-known that in the framework of relevant (i.e. weakening-free) multi types it is technically harder to deal with strong normalisation (all evaluations terminate)—which is equivalent to the termination of the maximal strategy—than with weak normalisation (there is a terminating evaluation)—which is equivalent to the termination of the LO strategy. The reason is that one has to ensure that all subterms erased along any evaluation are themselves strongly normalising.

The simple proof technique that we used in the previous section does not scale up—in general—to strong normalisation (or to the maximal strategy), because subject reduction breaks for erasing steps, as they change the final typing judgement. Of course the same is true for subject expansion. There are at least three ways of circumventing this problem:

  1. Memory: add a memory constructor, as in Klop’s calculus (Klop, 1980), that records the erased terms and allows evaluation inside the memory, so that diverging subterms are preserved. Subject reduction is then recovered.

  2. Subsumption/weakening: add a simple form of sub-typing that allows stabilising the final typing judgement in the case of an erasing step or, more generally, add a strong form of weakening that essentially removes the empty multi type.

  3. Big-step subject reduction: abandon the preservation of the typing judgement in the erasing cases, and rely on a more involved big-step subject reduction property relating the term directly to its normal form, stating in particular that the normal form is typable, potentially with a different type.

Surprisingly, the tight characterisation of the maximal strategy that we are going to develop does not need any of these workarounds: in the case of tight typings subject reduction for the maximal strategy holds, and the simple proof technique used before adapts smoothly. To be precise, an evaluation step may still change the final typing judgement, but the key point is that the judgement stays tight. Morally, we are employing a form of subsumption of tight contexts, but an extremely light one, that in particular does not require a sub-typing relation. We believe that this is a remarkable feature of tight multi types.

Maximal evaluation and predicates

The maximal strategy shares with LO evaluation the predicates normal, neutral, and abs, and the notion of term size, which we write normal_mx, neutral_mx, abs_mx, and |·|_mx respectively. We actually define, in Fig. 4, a version →_mx^n of the maximal strategy that is indexed by an integer n representing the size of what is erased by the evaluation step. The transitive closure of →_mx is defined accordingly: steps are composed and their indices summed.

Figure 4. Deterministic maximal strategy
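Since the content of Fig. 4 is likewise not reproduced here, the following OCaml sketch gives one possible reading of the indexed strategy, under our assumptions: a β-redex that would erase its argument fires only once the argument is normal, and the step is then indexed by the size |u|_mx of the erased argument u; all other steps carry index 0. All names are ours.

```ocaml
(* Speculative sketch of the indexed maximal strategy of Fig. 4
   (side conditions are our reconstruction). *)
type term = Var of string | Lam of string * term | App of term * term

let rec fv = function
  | Var x -> [x]
  | Lam (x, t) -> List.filter (fun y -> y <> x) (fv t)
  | App (t, u) -> fv t @ fv u

(* The mx size coincides with the LO size. *)
let rec size = function
  | Var _ -> 0
  | Lam (_, t) -> 1 + size t
  | App (t, u) -> 1 + size t + size u

(* Naive substitution, assuming all bound names are distinct. *)
let rec subst x u = function
  | Var y -> if x = y then u else Var y
  | Lam (y, t) -> Lam (y, subst x u t)
  | App (t, s) -> App (subst x u t, subst x u s)

(* step_mx t = Some (t', n) when t ->mx^n t'. *)
let rec step_mx = function
  | Var _ -> None
  | Lam (x, t) ->
      Option.map (fun (t', n) -> (Lam (x, t'), n)) (step_mx t)
  | App (Lam (x, t), u) when List.mem x (fv t) ->
      Some (subst x u t, 0)                 (* non-erasing beta *)
  | App (Lam (x, t), u) ->
      (match step_mx u with                 (* perpetuality: *)
       | Some (u', n) -> Some (App (Lam (x, t), u'), n)
       | None -> Some (t, size u))          (* erase a normal u *)
  | App (t, u) ->
      (match step_mx t with
       | Some (t', n) -> Some (App (t', u), n)
       | None -> Option.map (fun (u', n) -> (App (t, u'), n)) (step_mx u))
```

The perpetuality of the strategy is visible in the erasing case: evaluation is forced inside the argument before it can be discarded, so a diverging argument makes the whole evaluation diverge.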

Figure 5. Type system for maximal evaluation
Proposition 5.1 (mx evaluation system).

→_mx, together with the predicates normal_mx, neutral_mx, and abs_mx, is an evaluation system.

Also in this case the proof is routine, and it is then omitted even from the Appendix.

Multi types

Multi types are defined exactly as in Section 3. The type system for mx-evaluation is defined in Fig. 5. Rules many and many₀, the latter being a special 0-ary version of the former, are used to prevent an argument in rule app_b from being untyped: either it is typed by means of rule many—and thus it is typed with at least one type—or it is typed by means of rule many₀—and thus it is typed with exactly one type: the type itself is then forgotten, but requiring the premise to have a type forces the term to be normalising. The fact that arguments are always typed, even those that are erased during reduction, is essential to guarantee strong normalisation: system mx can no longer type a term like (λx. y) Ω, where Ω := (λx. x x)(λx. x x) is the paradigmatic diverging term, erased by the head redex.

Similarly to the head and leftmost-outermost cases, we define the size |Φ|_mx of a typing derivation Φ as the number of rule applications in Φ, not counting rules ax, many, and many₀. And again, if Φ ▷ Γ ⊢_mx^(b,r) t : τ, then |Φ|_mx = b + r.

For maximal evaluation, we also need to refine the notion of tightness of typing derivations, which becomes a global condition, because it is no longer a property of the final judgment only:

Definition 5.2 (Mx-tight derivations).

A derivation Φ is garbage-tight if in every instance of rule many₀ in Φ the forgotten type is a tight constant. It is mx-tight if moreover Φ is tight, in the sense of Definition 3.2.

Similarly to the head and LO cases, the quantitative information in mx-tight derivations characterises evaluation lengths and sizes of normal forms, as captured by the correctness and completeness theorems.

5.1. Tight Correctness

The correctness theorem is proved following the same schema used for head and LO evaluations. Most proofs are similar, and are therefore omitted even from the Appendix.

We start with the properties of typed normal forms. As before, we need an auxiliary lemma about neutral terms, analogous to Lemma 3.4.

Lemma 5.3 (Tight spreading on neutral terms for mx).

If neutral_mx(t) and Φ ▷ Γ ⊢_mx^(b,r) t : τ is such that tight(Γ), then τ = neutral.

The general properties of typed normal forms hold as well.

Proposition 5.4 (Properties of mx-tight typings for normal forms).

Given Φ ▷ Γ ⊢_mx^(b,r) t : τ with normal_mx(t):

  1. Size bound: |t|_mx ≤ |Φ|_mx.

  2. Tightness: if Φ is mx-tight, then b = 0 and r = |t|_mx.

  3. Neutrality: if τ = neutral, then neutral_mx(t).

Then we can type substitutions:

Lemma 5.5 (Substitution and typings for mx).

The