The Curry-Howard isomorphism is the well-known relationship between programming languages and logical systems. While Curry first introduced the analogy between Hilbert-style deductions and combinatory logic, Howard highlighted the one between the simply typed λ-calculus and natural deduction. Both examples use intuitionistic logic. The extension of the Curry-Howard isomorphism to classical logic took more than two decades, until Griffin observed that Felleisen's 𝒞 operator can be typed with double-negation elimination. A major step in this field was taken by Parigot, who proposed the λμ-calculus as a simple term notation for classical natural deduction proofs. The λμ-calculus is an extension of the simply typed λ-calculus that encodes usual control operators, such as Felleisen's operator mentioned above. Other calculi have been proposed since then, for example Curien and Herbelin's λ̄μμ̃-calculus based on classical sequent calculus.
The Curry-Howard correspondence has already contributed to the
understanding of many aspects of programming languages by establishing
a rich connection between logic and computation. However, some crucial aspects of computation, like the use of resources (e.g. time and space), still need to be logically understood in the classical setting.
Understanding the logical foundations of resource consumption is nowadays a major challenge facing the programming language community. It would lead to a new generation of programming languages and proof assistants with a clean type-theoretic account of the use of resources.
From qualitative…. Several notions of type assignment systems for the λ-calculus have been defined since its creation, including among others simple types and polymorphic types. However, even if polymorphic types are powerful and convenient in programming practice, they have several drawbacks. For example, it is not possible to assign a type to certain terms that represent meaningful programs specified by terminating terms. Intersection types, pioneered by Coppo and Dezani [11, 12], introduce a new constructor ∩ for types, allowing the assignment of a type of the form σ ∩ τ to a term t. The intuition behind a term of type σ ∩ τ is that t has both types σ and τ. The symbol ∩ is to be understood as a mathematical intersection, so in principle, intersection type theory was developed by using idempotent (σ ∩ σ = σ), commutative and associative laws.
Intersection types have been used as a behavioural tool to
reason about several operational and semantical properties of
programming languages. For example, a λ-term/program t is strongly normalizing/terminating if and only if t can be assigned a type in an appropriate intersection type assignment system.
Similarly, intersection types are able to describe and analyse models of the λ-calculus, and to characterize solvability, head normalization, linear-head normalization, and weak normalization [35, 33], among other properties.
…to quantitative. Intersection types turn out to be a powerful tool to reason about qualitative properties of programs, but not about quantitative ones. Indeed, for example, there is a type system characterizing head normalization (i.e. t is typable in this system if and only if t is head normalizing), which simultaneously gives a proof that t is head-normalizing if and only if the head reduction strategy terminates on t. But the type system gives no information about the number of head-reduction steps that are necessary to obtain a head normal form. Here is where non-idempotent types come into play, making a clear distinction between σ ∩ σ and σ, because intuitively, using the resource σ twice or once is not the same from the quantitative point of view. This change of perspective can be related to the essential spirit of Linear Logic, which removes the contraction and weakening structural rules in order to provide explicit control over the use of logical resources, i.e. to give a full account of the number of times that a given proposition is used to derive a conclusion.
The case of the λ-calculus: Non-idempotent types were pioneered by Gardner and Kfoury. Relational models of λ-calculi based on non-idempotent types have been investigated in [16, 19]. D. de Carvalho established in his PhD thesis a relation between the size of a typing derivation in a non-idempotent intersection type system for the λ-calculus and the head/weak-normalization execution time of head/weak-normalizing λ-terms, respectively. Non-idempotency is used to reason about the longest reduction sequence of strongly normalizing terms, both in the λ-calculus [7, 15, 8] and in different λ-calculi with explicit substitutions [8, 27]. Non-idempotent types also appear in the linearisation of the λ-calculus, in type inference and inhabitation [31, 36, 9], in different characterisations of solvability, and in the verification of higher-order programs.
The case of the λμ-calculus:
It is essential to go beyond the λ-calculus to address the challenges posed by the advanced features of modern higher-order programming languages and proof assistants. We want in particular to associate quantitative information to languages able to express control operators, as they allow declarative programming languages to be enriched with imperative features.
The non-idempotent intersection and union types for the λμ-calculus that we present in this article can be seen as a quantitative refinement of Girard's translation of classical logic into linear logic.
Different qualitative and/or quantitative models for classical calculi were proposed in [43, 46, 48, 3], but they limit the characterization of operational properties to head-normalization. Intersection and union types were also studied in the framework of classical logic [34, 45, 32, 18], but no work addresses the problem from a quantitative perspective.
Type-theoretical characterizations of strong normalization for classical calculi were provided, notably for the λμ-calculus, but the (idempotent) typing systems do not allow the construction of decreasing measures for reduction, so a resource-aware semantics cannot be extracted from those interpretations. Combinatorial strong normalization proofs for the λμ-calculus were also proposed, but they do not provide any explicit decreasing measure, and their use of structural induction on simple types does not work anymore with intersection types, which are more powerful than simple types since they do not only ensure termination but also characterize it.
Different small-step semantics for classical calculi were developed in the framework of neededness [5, 41], without resorting to any resource-aware semantical argument.
Contributions: Our first contribution is the definition of a resource-aware type system for the λμ-calculus based on non-idempotent intersection and union types. The non-idempotent approach provides very simple combinatorial arguments, based only on a decreasing measure, to characterize head and strongly normalizing terms by means of typability. Indeed, we show that for every typable term t with type derivation Φ, if t reduces to t′, then t′ is typable with a type derivation Φ′ such that the measure of Φ is strictly greater than that of Φ′. In the well-known case of the λ-calculus, such a measure is simply based on the structure of type derivation trees: it is given by the number of their nodes, which strictly decreases along reduction. However, in the λμ-calculus, the creation of nested applications during μ-reduction may increase the number of nodes of the corresponding type derivations, so that such a simple definition of the measure is not decreasing anymore. We then need to also take into account the structure (multiplicity) of certain types appearing in the type derivations, thus ensuring an overall decrease of the measure during reduction. This first result has been previously presented in .
The second contribution of this paper is the definition of a new resource-aware operational semantics for the λμ-calculus, inspired by the substitution at a distance paradigm, which is compatible with the non-idempotent typing system defined above. We then extend the second typing system so that the extended reduction system preserves (and decreases the size of) typing derivations. We generalize the type-theoretical characterization of strong normalization to this explicit classical calculus, thus in particular simplifying existing proofs of strong normalization for small-step operational semantics of classical calculi.
2. The λμ-Calculus
This section gives the syntax (Section 2.1) and the operational semantics (Section 2.2) of the λμ-calculus. But before this we first introduce some preliminary general notions of rewriting that will be used all along the paper, and that are applicable to any system R. We denote by →_R the (one-step) reduction relation associated to system R. We write →*_R for the reflexive-transitive closure of →_R, and →^k_R for the composition of k steps of →_R, so that t →^k_R t′ denotes a finite R-reduction sequence of length k from t to t′. A term t is in R-normal form, written R-nf, if there is no t′ s.t. t →_R t′; and t has an R-normal form iff there is an R-nf u such that t →*_R u. A term t is said to be strongly R-normalizing, written t ∈ SN(R), iff there is no infinite R-reduction sequence starting at t.
2.1. Syntax
We consider a countably infinite set of variables (x, y, …) (resp. continuation names (α, β, …)). The sets of objects (o), terms (t) and commands (c) of the λμ-calculus are given by the following grammars:
o ::= t | c        (objects)
t ::= x | λx.t | t t | μα.c        (terms)
c ::= [α] t        (commands)
We write Λ for the set of λ-terms, which is a subset of the set of λμ-objects; we use standard abbreviations when clear from the context. The grammar extends λ-terms with two new constructors: commands [α]t and μ-abstractions μα.c. Free and bound variables of objects are defined as expected, in particular fv(λx.t) = fv(t) \ {x}. Free names of objects are defined as expected, in particular fn(μα.c) = fn(c) \ {α} and fn([α]t) = fn(t) ∪ {α}. Bound names are defined accordingly.
We work with the standard notion of α-conversion, i.e. renaming of bound variables and names, thus for example λx.x =α λy.y. Substitutions are (finite) functions from variables to terms specified by the notation {x1/u1, …, xn/un}. Application of a substitution to the object o may require α-conversion in order to avoid capture of free variables/names, and it is defined as expected. Replacements are (finite) functions from names to terms specified by the notation {u/α}. Intuitively, the operation {u/α} passes the term u as an argument to any command of the form [α]t. Formally, the application of the replacement {u/α} to the object o, written o{u/α}, may require α-conversion in order to avoid the capture of free variables/names, and is defined as follows:
x{u/α} = x
(λx.t){u/α} = λx.(t{u/α})
(t s){u/α} = (t{u/α}) (s{u/α})
(μβ.c){u/α} = μβ.(c{u/α})        (β ≠ α)
([α]t){u/α} = [α]((t{u/α}) u)
([β]t){u/α} = [β](t{u/α})        (β ≠ α)
For example, ([α]x){u/α} = [α](x u), while ([β]x){u/α} = [β]x.
2.2. Operational Semantics
We consider the following set of contexts, defined inductively by:
O ::= ☐ | λx.O | O t | t O | μα.O | [α]O
The hole ☐ can be replaced by a term: O⟨t⟩ denotes the replacement of the hole in the context O by the term t.
The λμ-calculus is given by the set of objects introduced in Section 2.1 and the reduction relation →λμ, sometimes simply written →, which is the closure by all contexts of the following rewriting rules:
(λx.t) u →β t{x/u}
(μα.c) u →μ μα.(c{u/α})
defined by means of the substitution and replacement application notions given in Section 2.1. A redex is a term of the form (λx.t)u or (μα.c)u.
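As an illustration, the two rewriting rules can be sketched in Python on a naive tuple encoding of λμ-objects; the encoding and the function names are ours, and bound variables/names are assumed fresh so that no α-conversion is performed:

```python
# A naive tuple encoding of lambda-mu objects (ours, for illustration):
# ("var", x)  ("lam", x, t)  ("app", t, u)  ("mu", a, c)  ("cmd", a, t)
# Bound variables/names are assumed fresh, so no alpha-conversion is needed.

def subst(t, x, s):
    """t{x/s}: substitute the term s for the free variable x in t."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "lam":
        return ("lam", t[1], subst(t[2], x, s))
    if tag == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    if tag == "mu":
        return ("mu", t[1], subst(t[2], x, s))
    return ("cmd", t[1], subst(t[2], x, s))

def repl(t, a, u):
    """t{u/a}: turn every command [a]v inside t into [a](v u)."""
    tag = t[0]
    if tag == "var":
        return t
    if tag == "lam":
        return ("lam", t[1], repl(t[2], a, u))
    if tag == "app":
        return ("app", repl(t[1], a, u), repl(t[2], a, u))
    if tag == "mu":
        # the name a is rebound here, so the replacement stops
        return t if t[1] == a else ("mu", t[1], repl(t[2], a, u))
    v = repl(t[2], a, u)                       # a command [b]t
    return ("cmd", t[1], ("app", v, u) if t[1] == a else v)

def root_step(t):
    """Contract a root redex: (lam x.t) u -> t{x/u}, (mu a.c) u -> mu a.(c{u/a})."""
    if t[0] == "app" and t[1][0] == "lam":
        return subst(t[1][2], t[1][1], t[2])
    if t[0] == "app" and t[1][0] == "mu":
        return ("mu", t[1][1], repl(t[1][2], t[1][1], t[2]))
    return None
```

For instance, root_step applied to the encoding of (μα.[α]x) y returns the encoding of μα.[α](x y), as prescribed by the μ-rule.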
An alternative specification of the μ-rule is given by (μα.c) u → μα′.(c{u/α→α′}), where {u/α→α′} denotes the fresh replacement meta-operation turning every command [α]t into [α′](t u) (thus changing the name of the command), in contrast to the replacement operation introduced in Section 2.1, which keeps the name α. We remark however that the resulting terms μα.(c{u/α}) and μα′.(c{u/α→α′}) are α-equivalent. We will come back to this alternative definition of μ-reduction in Section 7.
A typical example of expressivity in the λμ-calculus is the control operator call-cc, which gives rise to the following reduction sequence:
A β-reduction step (λx.t)u → t{x/u} is said to be erasing iff x ∉ fv(t), and a μ-reduction step (μα.c)u → μα.(c{u/α}) is erasing iff α ∉ fn(c). A reduction step which is not erasing is called non-erasing. Reduction is stable by substitution and replacement. More precisely, if o → o′, then o{x/u} → o′{x/u} and o{u/α} → o′{u/α}. These stability properties give, as a corollary, that reduction sequences are preserved under substitution and replacement.
A head-context is a context defined by the following grammar:
A head-normal form is an object of the form H⟨x⟩, where the variable x replaces the constant ☐ in the head-context H. An object o is said to be head-normalizing, written o ∈ HN, if o →* o′ for some head-normal form o′. Remark that head-normalization does not imply strong normalization, while the converse necessarily holds. These notions are restricted to the λ-calculus by requiring the term to be a λ-term and the reduction system to be restricted to the β-reduction rule.
A redex occurring in an object of the form H⟨r⟩, where the redex r fills the hole of the head-context H, is called the head-redex. The reduction step contracting the head-redex is called a head-reduction step. The reduction sequence composed of head-reduction steps until a head-normal form is reached is called the head-strategy. If the head-strategy starting at o terminates, then o is head-normalizing, while the converse direction is not straightforward (cf. Theorem 4.2).
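To make the head-strategy concrete, here is a minimal Python sketch for pure λ-terms, on a tuple encoding of our own; bound variables are assumed pairwise distinct and disjoint from free ones, so the naive substitution is capture-avoiding:

```python
# Head reduction for pure lambda-terms, encoded as tuples:
# ("var", x)  ("lam", x, t)  ("app", t, u)

def subst(t, x, s):
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "lam":
        return ("lam", t[1], subst(t[2], x, s))
    return ("app", subst(t[1], x, s), subst(t[2], x, s))

def head_step(t):
    """Contract the head redex, or return None when t is a head-normal form."""
    if t[0] == "lam":                         # go under head abstractions
        body = head_step(t[2])
        return None if body is None else ("lam", t[1], body)
    if t[0] == "app":
        if t[1][0] == "lam":                  # head redex (lam x. b) u
            return subst(t[1][2], t[1][1], t[2])
        fun = head_step(t[1])                 # otherwise look inside the function part
        return None if fun is None else ("app", fun, t[2])
    return None                               # a variable is head-normal

def head_strategy(t, fuel=1000):
    """Iterate head steps; returns (result, number of steps performed)."""
    steps = 0
    while steps < fuel:
        t2 = head_step(t)
        if t2 is None:
            return t, steps
        t, steps = t2, steps + 1
    return t, steps
```

The fuel parameter only guards against non-terminating inputs such as Ω; on head-normalizing terms the loop stops exactly at the head-normal form.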
3. Quantitative Type Systems for the λ-Calculus
As mentioned before, our results rely on typability of λμ-terms in suitable systems with non-idempotent types. Since the λμ-calculus embeds the λ-calculus, we start by recalling the well-known [20, 16, 9] quantitative type systems for the λ-calculus. We then reformulate them, using a different syntactical formulation, resulting in the typing systems that are the formalisms we adopt in Section 4 for the λμ-calculus.
We start by fixing a countable set of base types, and then introduce two different categories of types, specified by the following grammars:
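In the standard non-idempotent setting, the two grammars take the following shape (notation in the style of Gardner and de Carvalho, assumed here):

```latex
\sigma, \tau ::= a \mid \mathcal{I} \rightarrow \sigma
\qquad\qquad
\mathcal{I} ::= [\sigma_1, \ldots, \sigma_n] \quad (n \ge 0)
```

where $a$ ranges over the base types and $[\sigma_1, \ldots, \sigma_n]$ is a finite (possibly empty) multiset of types.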
An intersection type is a multiset [σ1, …, σn] that can be understood as a type σ1 ∩ … ∩ σn, where ∩ is associative and commutative, but non-idempotent. The non-deterministic choice operation on intersection types is defined as follows:
Variable assignments (written Γ) are functions from variables to intersection types. We may write ∅ to denote the variable assignment that associates the empty intersection type to every variable. The domain of Γ is the set of variables to which Γ assigns a non-empty intersection type. We write x : [σi]_{i∈I} for the assignment of domain {x} mapping x to [σi]_{i∈I}; when the index set is a singleton, this is simply written x : [σ]. We write Γ + Γ′ for the pointwise multiset union of two assignments, and Γ \ x for the assignment that coincides with Γ except on x, to which it assigns the empty intersection type.
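Non-idempotent intersection types and the pointwise union of variable assignments can be modelled directly with Python multisets (collections.Counter); the type names below are arbitrary placeholders:

```python
from collections import Counter

# An intersection type is a multiset of types: [a, a] is not the same as [a],
# since multiset union keeps multiplicities (non-idempotency).
A, ARROW = "a", "[a]->a"                   # placeholder base and arrow types
assert Counter([A]) + Counter([A]) == Counter({A: 2})   # [a] + [a] = [a, a]

def join(g1, g2):
    """Pointwise multiset union of two variable assignments."""
    out = dict(g1)
    for x, m in g2.items():
        out[x] = out.get(x, Counter()) + m
    return out

# x receives the multiset [a, a]; y receives the singleton [[a]->a]
g = join({"x": Counter([A])}, {"x": Counter([A]), "y": Counter([ARROW])})
```

The non-idempotency is exactly the fact that Counter addition, unlike set union, records how many copies of each type are used.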
To present/discuss different typing systems, we consider the following derivability notions. A type judgment is a triple Γ ⊢ t : σ, where Γ is a variable assignment, t a term and σ a type. A (type) derivation in a system S is a tree obtained by applying the (inductive) rules of the type system S. We write Φ ▷_S Γ ⊢ t : σ if Φ is a type derivation concluding with the type judgment Γ ⊢ t : σ, and just ▷_S Γ ⊢ t : σ if there exists such a Φ. A term t is S-typable iff there is a derivation in S typing t, i.e. if there are Γ and σ such that ▷_S Γ ⊢ t : σ. We may omit the index S if the name of the system is clear from the context.
3.1. Characterizing Head β-Normalizing λ-Terms
Notice that the rule typing applications allows an application t u to be typed without necessarily typing the subterm u, by assigning the empty intersection type to u.
The system characterizes head β-normalization:
Let t ∈ Λ. Then t is typable iff t is head β-normalizing iff the head-strategy terminates on t.
Moreover, the implication typability implies termination of the head-strategy can be shown by simple arithmetical arguments provided by the quantitative flavour of the typing system, in contrast to the classical reducibility arguments usually invoked in other cases [22, 33]. Actually, the arithmetical arguments give the following quantitative property:
If t is typable with tree derivation Φ, then the size (number of nodes) of Φ gives an upper bound to the length of the head-reduction strategy starting at t.
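The quantitative argument behind this bound can be summarized as follows, writing sz(Φ) for the number of nodes of Φ (a notation we adopt here for illustration):

```latex
\Phi \;\rhd\; \Gamma \vdash t : \sigma
\quad\text{and}\quad
t \rightarrow_{\mathit{head}} t'
\quad\Longrightarrow\quad
\exists\, \Phi' \;\rhd\; \Gamma' \vdash t' : \sigma
\ \text{ with }\ \mathit{sz}(\Phi') < \mathit{sz}(\Phi)
```

Iterating this weighted subject reduction step shows that the head-strategy performs at most sz(Φ) steps from t.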
To reformulate the system in a different way, we now distinguish two sorts of judgments: regular judgments, assigning types to terms, and auxiliary judgments, assigning intersection types to terms.
An equivalent formulation of the system is given in Figure 2 (where we always use the same name for the rule typing the application term, even if the rule is different from that of the first formulation). There are two inherited forms of type derivations: regular (resp. auxiliary) derivations are those that conclude with regular (resp. auxiliary) judgments. Notice that the rule for auxiliary judgments allows any term to be assigned the empty intersection type, so that applications with untyped arguments can also be derived in this system. Notice also that both systems are relevant, i.e. they lack weakening. Equivalence between the two formulations gives the following result: let t ∈ Λ; then t is typable iff t is head β-normalizing iff the head-strategy terminates on t. Auxiliary judgments turn out to substantially lighten the notation and to make the statements (and their proofs) more readable.
3.2. Characterizing Strong β-Normalizing λ-Terms
We now discuss typing systems able to characterize strongly β-normalizing λ-terms. We first consider the system in Figure 4, which appears in (slight variants appear in [15, 8, 27]). The application rule forces the erasable argument (the subterm u) to be typed, even if the type of u is not used in the conclusion of the judgment. Thus, in contrast to the previous system, every subterm of a typed term is now typed.
The system characterizes strong β-normalization: let t ∈ Λ; then t is typable iff t is strongly β-normalizing.
As before, the implication typability implies normalization can be shown by simple arithmetical arguments provided by the quantitative flavour of the typing system.
An equivalent formulation of this system is given in Figure 4. As before, we use regular as well as auxiliary judgments. Notice that assigning the empty intersection type in the auxiliary rule is still possible, but derivations representing untyped terms will never be used. The choice operation (defined at the beginning of Section 3) in the application rule is used to impose an arbitrary type for erasable terms: even when the type of the argument u is not used in the conclusion, u still needs to be typed with some arbitrary type, so the auxiliary judgment typing u on the right premise can never assign the empty intersection type to u. This should be understood as a sort of controlled weakening. Here is an example of a type derivation in this system:
Since the two formulations are equivalent, we also have: let t ∈ Λ; then t is typable iff t is strongly β-normalizing.
4. Quantitative Type Systems for the λμ-Calculus
We present in this section two quantitative systems for the λμ-calculus (Sections 4.2 and 4.3), characterizing, respectively, head and strongly λμ-normalizing objects. Since the λ-calculus is embedded in the λμ-calculus, the starting points to design these two systems are, respectively, the two systems introduced in Section 3.
We consider a countable set of base types and the following categories of types:
A distinguished constant is used to type commands, union types are used to type terms, and intersection types to type variables (thus the left-hand sides of arrows). Both intersection and union types can be seen as multisets, representing σ1 ∩ … ∩ σn and σ1 ∪ … ∪ σn respectively, where ∩ and ∪ are both associative and commutative, but non-idempotent. We may omit the indices in the simplest case, thus denoting singleton multisets. We define the union operator on intersection (resp. union) multiset types as multiset union. The non-deterministic choice operation is now defined on intersection and union types:
The choice operator for union types is defined so that (1) the empty union cannot be assigned to λ-abstractions, and (2) subject reduction is guaranteed for erasing steps. We present concrete examples on page 7 which illustrate the need for non-empty union types and blind types to guarantee subject reduction.
The arity of types and union multiset types is defined by induction: for a type of the form I → τ, the arity is 1 plus the arity of τ; otherwise, the arity is 0. The arity of a union multiset type is obtained from the arities of its elements. Thus, the arity of a type counts the number of its top-level arrows. The cardinality |M| of a multiset M is the number of its elements, counted with multiplicity.
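The arity computation is straightforward; a small Python sketch on a nested-tuple encoding of types (the encoding is ours) reads:

```python
def arity(t):
    """Number of top-level arrows: arity(I -> s) = 1 + arity(s);
    base types (plain strings here) have arity 0."""
    if isinstance(t, tuple) and t[0] == "arrow":   # ("arrow", I, s)
        return 1 + arity(t[2])
    return 0

# [a] -> ([b] -> o) has two top-level arrows
ty = ("arrow", ["a"], ("arrow", ["b"], "o"))
```

Note that the recursion only walks the right-hand sides of arrows, so arrows occurring inside the multisets on the left do not contribute to the arity.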
Variable assignments (written Γ) are, as before, functions from variables to intersection multiset types. Similarly, name assignments (written Δ) are functions from names to union multiset types. The domain of Δ is the set of names to which Δ assigns a non-empty union multiset. We may write ∅ to denote the name assignment that associates the empty union type to every name. We write α : U for the name assignment of domain {α} mapping α to U, and Δ + Δ′ for the pointwise multiset union of two name assignments. When the domains of Δ and Δ′ are disjoint we may write Δ; Δ′ instead of Δ + Δ′. We write Δ \ α, even when α is not in the domain of Δ, for the name assignment that coincides with Δ except on α, to which it assigns the empty union type. Similar concepts apply to variable assignments.
We now present our two typing systems, both having regular (resp. auxiliary) judgments, together with their respective notions of regular and auxiliary derivations. An important syntactical property they enjoy is that both are syntax directed, i.e. for each (regular/auxiliary) typing judgment there is a unique typing rule whose conclusion matches it. This makes our proofs much simpler than those arising with idempotent types, which are based on long generation lemmas (e.g. [8, 45]).
In this section we present a quantitative typing system for the λμ-calculus characterizing head λμ-normalization. It can be seen as a first intuitive step towards understanding the typing system introduced later in Section 4.3, which characterizes strong λμ-normalization. However, to avoid redundancy, the properties of the two systems are not described in the same way:
For the first system, we provide informal discussions to explain the main requirements needed to capture quantitative information in the presence of classical features (names, μ-redexes). We particularly focus on forbidding empty union types. We do not give the proofs of its properties, because they are simpler than those of the second system.
For the second system, we provide a more compact presentation, since the key technical choices made for the first system are still valid. However, full statements and proofs of its properties are detailed.
The (syntax directed) rules of the typing system are presented in Figure 5.
The rule typing commands is to be understood as a logically admissible rule, once union (resp. intersection) is interpreted as the ∨ (resp. ∧) logical connective. As in the simply typed λμ-calculus, the rule saves a type for the name α; however, in our system, the corresponding name assignment collects all the types that α has been assigned during the derivation. Notice that the μ-rule is not deterministic, since an arbitrary union type may be chosen when the saved union is empty, a technical requirement which is discussed at the end of the section.
In the simply typed λμ-calculus, call-cc would be typed with Peirce's Law ((A → B) → A) → A, so that the fact that A is used twice in the type derivation would not be explicitly materialized with simple types (the same comment applies to idempotent intersection/union types). This makes a strong contrast with the derivation in Figure 6, where call-cc is typed with a union type recording both uses.
This example suggests distinguishing two different uses of names:
The name α is saved twice by the saving rule, both times with the same type. After that, the μ-abstraction restores the union of the two types that were previously stored. A similar phenomenon occurs with λ-abstractions, which restore the types of the free occurrences of the abstracted variable in the body of the function.
The name α is not free in the body, so that a new union type is introduced to type the μ-abstraction. From a logical point of view this corresponds to a weakening on the right-hand side of the sequent. Consequently, μ- and λ-abstractions are not treated symmetrically: when x is not free in t, then λx.t will be typed with [ ] → σ (where σ is the type of t), and no new arbitrary intersection type is introduced for the abstracted variable x.
An interesting observation concerns the restriction of the system to the pure λ-calculus: union types, name assignments and the rules typing commands and μ-abstractions are no longer necessary, so that every union multiset takes the single form [σ], which can simply be identified with σ. Thus, the restricted typing system becomes the system of Figure 2.
Another observation concerns relevance: indeed, variable and name assignments contain only the minimal useful information. Formally (Relevance): if t is typable with variable assignment Γ and name assignment Δ, then the domain of Γ is included in fv(t) and the domain of Δ is included in fn(t).
By induction on the type derivation. ∎
We now define the notion of derivation size, a natural number representing the amount of information in a tree derivation. For any type derivation Φ, the size of Φ is inductively defined by the following rules, where we use an abbreviated notation for the premises.
The system behaves as expected; in particular, typing is stable by reduction (subject reduction) and anti-reduction (subject expansion):
[Weighted Subject Reduction] Let Φ be a type derivation for o. If o → o′, then there exists a derivation Φ′ for o′ with the same types, whose size is smaller than or equal to that of Φ. Moreover, if the reduced redex is typed, then the size of Φ′ is strictly smaller.
An important remark is that, if the arity of the types were not taken into account in the size of certain rules, then we would only have a non-strict (and not a strict) decrease for the μ-reduction steps. Intuitively, μ-reduction dispatches the rule typing the root of the μ-redex into several created instances of that rule in the reduct, but this alone ensures neither an increase nor a decrease of the measure. The solution to recover this key feature (i.e. the decrease) is suggested by the effect of μ-reduction on the rules typing commands (see Figure 7): indeed, μ-reduction replaces every named term [α]t by [α](t u), where u is the argument of the μ-redex, so that the saved types are smaller under the created rules than the ones in the original derivation.
As expected from an intersection (and union) type system, subject expansion holds, meaning that typing is stable under anti-reduction. Note that we do not state a weighted subject expansion property (although this would be possible) only because it is not necessary to prove the final characterization property of the system (cf. Theorem 4.2).
[Subject Expansion for ] Let . If