Canonicity and normalisation for Dependent Type Theory

October 22, 2018 · Thierry Coquand et al.

We show canonicity and normalization for dependent type theory with a cumulative sequence of universes and a type of booleans. The argument follows the usual notion of reducibility, going back to Gödel's Dialectica interpretation and the work of Tait. A key feature of our approach is the use of a proof-relevant notion of reducibility.


Introduction

We show canonicity and normalization for dependent type theory with a cumulative sequence of universes and with η-conversion. We give the argument in a constructive set theory CZF_u, designed by P. Aczel [2]. We provide a purely algebraic presentation of a canonicity proof, as a way to build new (algebraic) models of type theory. We then present a normalization proof, which is technically more involved but based on the same idea. We believe our argument to be a simplification of existing proofs [15, 16, 1, 7], in the sense that we never need to introduce a reduction relation, and the proof-theoretic strength of our meta theory is as close as possible to that of the object theory [2, 9].

Let us expand these two points. If we are only interested in canonicity, i.e. in proving that a closed Boolean is convertible to 0 or to 1, one argument for simple type theory (as presented e.g. in [19]) consists in defining a "reducibility" predicate by induction on the type. (The terminology for this notion seems to vary: in [12], where it was first introduced, it is called "Berechenbarkeit", which can be translated as "computability"; in [21] it is called "convertibility"; and in [19] it is called "reducibility".) For the type of booleans, reducibility means exactly being convertible to 0 or to 1, and for function types, it means sending a reducible argument to a reducible value. It is then possible to show by induction on the typing relation that any closed term is reducible. In particular, if this term is a Boolean, we obtain canonicity. The problem in extending this argument to a dependent type system with universes lies in the definition of the reducibility predicate for universes. It is natural to try an inductive-recursive definition; this was essentially the way it was done in [15], which is an early instance of an inductive-recursive definition. We define when an element of the universe is reducible and, by induction on this proof, what the associated reducibility predicate is for the type represented by this element. However, there is a difficulty in this approach: a priori, an element might well be convertible both to the type of booleans and, say, to a product type, and if this is the case the previous inductive-recursive definition is ambiguous.

In [15], this problem is solved by first considering a reduction relation, showing this reduction relation to be confluent, and defining convertibility as having a common reduct. This does not work, however, when conversion is defined as a judgement (as in [16, 1]). This is an essential difficulty, and a relatively subtle and complex argument is used in [1, 7] to solve it: one first defines an untyped reduction relation and a reducibility relation, which is used first to establish a confluence property.

The main point of this paper is that this essential difficulty can be solved, in a seemingly magical way, by considering proof-relevant reducibility, that is, reducibility defined as a structure and not only as a property. Such an approach is hinted at in [16], but [16] still introduces a reduction relation, and also presents a version of type theory with a restricted form of conversion (no conversion under abstraction, and no η-conversion; this restriction is motivated in [17]).

Even for the base type, reducibility is a structure: the reducibility structure of an element t of Boolean type contains 0 (if t and 0 are convertible) or 1 (if t and 1 are convertible), and a priori it might contain both 0 and 1. Another advantage of our approach, defining reducibility in a proof-relevant way, is that the required meta-language is weaker than the one used for a reducibility relation (where one has to do proofs by induction on this reducibility relation).

Yet another aspect that was not satisfactory in previous attempts [1, 7] is that they essentially involved a partial equivalence relation model. One expects this to be needed for a type theory with an extensional equality, but not for the present version of type theory. This issue disappears here: we only consider (proof-relevant) predicates.

A more minor contribution of this paper is its algebraic character. For both canonicity and decidability of conversion, one first considers a general model construction and then obtains the desired result by instantiating this general construction to the special instance of the initial (term) model, using in both cases only the abstract characteristic property of the initial model.

1 Informal presentation

We first give an informal presentation of the canonicity proof, by first making the rules of type theory explicit and then explaining the reducibility argument.

1.1 Type system

We present conversion as judgements [1]. Note that it is not a priori clear that subject reduction holds.

The conversion rules are the usual ones: conversion is reflexive, symmetric and transitive, and is a congruence with respect to all type and term formers.

We consider type theory with β- and η-conversion rules for dependent products.

Finally we add the type of booleans N_2, with rules for its two elements 0 and 1 and for its eliminator, together with computation rules stating that the eliminator applied to 0 (resp. 1) returns the corresponding branch; one possible rendering of these rules is given below.
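Concretely, a possible rendering of these rules (the eliminator name brec is ours, and the exact formulation may differ from the one intended here) is:

\[
\frac{}{\;\Gamma \vdash \mathsf{N}_2\;}\qquad
\frac{}{\;\Gamma \vdash 0 : \mathsf{N}_2\;}\qquad
\frac{}{\;\Gamma \vdash 1 : \mathsf{N}_2\;}\qquad
\frac{\Gamma, x{:}\mathsf{N}_2 \vdash C \quad\; \Gamma \vdash a_0 : C[0/x] \quad\; \Gamma \vdash a_1 : C[1/x] \quad\; \Gamma \vdash b : \mathsf{N}_2}
     {\Gamma \vdash \mathsf{brec}(C, a_0, a_1, b) : C[b/x]}
\]
\[
\Gamma \vdash \mathsf{brec}(C, a_0, a_1, 0) = a_0 : C[0/x] \qquad\qquad
\Gamma \vdash \mathsf{brec}(C, a_0, a_1, 1) = a_1 : C[1/x]
\]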

1.2 Reducibility proof

The informal reducibility proof consists in associating to each closed expression t of type theory (treating types and terms in the same way) an abstract object ⟦t⟧ which represents a "proof" that t is reducible. If A is a (closed) type, then ⟦A⟧ is a family of sets over the set of closed expressions of type A modulo conversion. If t is of type A then ⟦t⟧ is an element of the set ⟦A⟧(t).

The metatheory is a (constructive) set theory with a cumulative hierarchy of universes [2].

This is defined by structural induction on the expression as follows (the clause for dependent products is written out as a formula after the list)

  • ⟦t u⟧ is ⟦t⟧(u, ⟦u⟧)

  • ⟦λx.t⟧ is the function which takes as arguments a closed expression u of type A and an element p in ⟦A⟧(u) and produces ⟦t[u/x]⟧

  • ⟦Π(x:A)B⟧(w), for w a closed expression of type Π(x:A)B, is the set of functions mapping a closed expression u of type A and an element p in ⟦A⟧(u) to an element of ⟦B[u/x]⟧(w u)

  • ⟦N_2⟧(v) is the set consisting of 0 if v is convertible to 0 and of 1 if v is convertible to 1

  • ⟦U_n⟧(A) is the set of families of sets (at the n-th set-theoretic level) over the closed expressions of type A modulo conversion
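As a worked instance, the product clause can be written as a single formula (in our bracket notation ⟦·⟧, where u ranges over closed expressions of type A):

\[
\llbracket \Pi(x{:}A)B \rrbracket(w) \;=\; \prod_{u \,:\, A}\ \prod_{p \,\in\, \llbracket A \rrbracket(u)} \llbracket B[u/x] \rrbracket(w\,u)
\]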

It can then be shown (we prove this statement by induction on the derivation, considering a more general statement involving a context; we do not provide the details in this informal part since they are covered in the next section) that if ⊢ t : A then ⟦t⟧ is an element of ⟦A⟧(t), and furthermore that if ⊢ t = u : A then ⟦t⟧ = ⟦u⟧ in ⟦A⟧(t). In particular, if ⊢ t : N_2 then ⟦t⟧ is 0 or 1, and we get that t is convertible to 0 or to 1.

One feature of this argument is that the required meta theory, here constructive set theory, is known to be of strength similar to that of the corresponding type theory; if the term involves universes, the meta theory will need correspondingly many universes [9]. This is to be contrasted with the arguments in [15, 1, 7] involving induction-recursion, which is a much stronger principle.

We believe that the mathematically purest way to formulate this argument is as an algebraic argument, giving a (generalized) algebraic presentation of type theory. We then use only the fact that the term model is the initial model of type theory. This is what is done in the next section.

2 Model and syntax of dependent type theory with universes

2.1 Cumulative categories with families

We present a slight variation (for universes) of the notion of category with families [10] (as emphasized in this reference, these models should more exactly be thought of as generalized algebraic structures rather than categories; e.g. the initial model is defined up to isomorphism and not up to equivalence). This provides a generalized algebraic notion of model of type theory. A model is given first by a class of contexts. If Δ, Γ are two given contexts we have a set Hom(Δ, Γ) of substitutions from Δ to Γ. These collections of sets are equipped with operations that satisfy the laws of composition in a category: we have an identity substitution 1 in Hom(Γ, Γ) and a composition σδ in Hom(Θ, Γ) if σ is in Hom(Δ, Γ) and δ in Hom(Θ, Δ). Furthermore we should have 1σ = σ1 = σ and (σδ)ν = σ(δν) if ν is in Hom(Ξ, Θ).

We assume to have a "terminal" context (): for any other context Γ, there is a unique substitution, also written (), in Hom(Γ, ()). In particular we have ()σ = () in Hom(Δ, ()) if σ is in Hom(Δ, Γ).

We write |Γ| for the set of substitutions Hom((), Γ).

If Γ is a context we have a cumulative sequence of sets Type_n(Γ) of types over Γ at level n (where n is a natural number), with Type_n(Γ) contained in Type_{n+1}(Γ). If A is in Type_n(Γ) and σ in Hom(Δ, Γ) we should have Aσ in Type_n(Δ). Furthermore A1 = A and (Aσ)δ = A(σδ). If A is in Type_n(Γ) we also have a collection Elem(Γ, A) of elements of type A. If a is in Elem(Γ, A) and σ in Hom(Δ, Γ) we have aσ in Elem(Δ, Aσ). Furthermore a1 = a and (aσ)δ = a(σδ). If A is in Type_n(()) we write Elem(A) for the set Elem((), A).

We have a context extension operation: if A is in Type_n(Γ) then we can form a new context Γ.A. Furthermore there is a projection p in Hom(Γ.A, Γ) and a special element q in Elem(Γ.A, Ap). If σ is in Hom(Δ, Γ) and A is in Type_n(Γ) and a in Elem(Δ, Aσ) we have an extension operation (σ, a) in Hom(Δ, Γ.A). We should have p(σ, a) = σ and q(σ, a) = a and (σ, a)δ = (σδ, aδ) and (p, q) = 1.
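For reference, the equations above can be summarized as follows (in the notation Hom, Type_n, Elem, Γ.A, p, q, (σ, a) used here):

\begin{align*}
& 1\,\sigma = \sigma\,1 = \sigma, \qquad (\sigma\delta)\nu = \sigma(\delta\nu),\\
& A\,1 = A, \qquad (A\sigma)\delta = A(\sigma\delta), \qquad a\,1 = a, \qquad (a\sigma)\delta = a(\sigma\delta),\\
& \mathsf{p}\,(\sigma, a) = \sigma, \qquad \mathsf{q}\,(\sigma, a) = a, \qquad (\sigma, a)\,\delta = (\sigma\delta, a\delta), \qquad (\mathsf{p}, \mathsf{q}) = 1.
\end{align*}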

If a is in Elem(Γ, A) we write [a] = (1, a) in Hom(Γ, Γ.A). Thus if B is in Type_m(Γ.A) and a is in Elem(Γ, A) we have B[a] in Type_m(Γ). If furthermore b is in Elem(Γ.A, B) we have b[a] in Elem(Γ, B[a]).

A global type of level n is given by an element A in Type_n(()). We write simply A instead of A() in Type_n(Γ), where () is the unique substitution in Hom(Γ, ()). Given such a global type A, a global element of type A is given by an element a in Elem((), A). We then similarly write simply a instead of a() in Elem(Γ, A).

Models are sometimes presented by giving a class of special maps (fibrations), where a type is modelled by a fibration and its elements by sections of this fibration. In our case, the fibrations are the maps p in Hom(Γ.A, Γ), and the sections of these fibrations correspond exactly to the elements in Elem(Γ, A). Any element a defines a section (1, a), and any such section is of this form.

2.2 Dependent product types

A category with families has product types if we furthermore have an operation Π A B in Type_n(Γ) for A in Type_n(Γ) and B in Type_n(Γ.A). We should have (Π A B)σ = Π (Aσ) (Bσ⁺), where σ⁺ = (σp, q). We have an abstraction operation λb in Elem(Γ, Π A B) given b in Elem(Γ.A, B). We have an application operation such that app(w, u) is in Elem(Γ, B[u]) if w is in Elem(Γ, Π A B) and u is in Elem(Γ, A). These operations should satisfy the equations

app(λb, u) = b[u]    λ(app(wp, q)) = w    (λb)σ = λ(bσ⁺)    app(w, u)σ = app(wσ, uσ)

where we write, as above, σ⁺ = (σp, q).
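To relate this to the usual syntax (a sketch, using the abbreviation [u] = (1, u) introduced above), the ordinary β-rule (λx.b) u = b[u/x] and η-rule w = λx. w x correspond respectively to the first two equations:

\[
\mathsf{app}(\lambda b, u) \;=\; b[u] \;=\; b\,(1, u), \qquad\qquad \lambda(\mathsf{app}(w\,\mathsf{p}, \mathsf{q})) \;=\; w .
\]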

2.3 Cumulative universes

We assume to have global elements U_n in Type_{n+1}(()) such that Elem(Γ, U_n) = Type_n(Γ).
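Under this Russell-style reading (which we use here as one possible way to make the cumulativity explicit), we have:

\[
\mathrm{Type}_n(\Gamma) \subseteq \mathrm{Type}_{n+1}(\Gamma), \qquad
\mathrm{Elem}(\Gamma, U_n) = \mathrm{Type}_n(\Gamma), \qquad
U_n \in \mathrm{Elem}(\Gamma, U_{n+1}).
\]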

2.4 Booleans

Finally we add the global constant N_2 in Type_0(()) and global elements 0 and 1 in Elem((), N_2). Given C in Type_n(Γ.N_2), a_0 in Elem(Γ, C[0]) and a_1 in Elem(Γ, C[1]), we have an operation producing an element brec(C, a_0, a_1, b) in Elem(Γ, C[b]) for b in Elem(Γ, N_2), satisfying the equations brec(C, a_0, a_1, 0) = a_0 and brec(C, a_0, a_1, 1) = a_1.

Furthermore, these operations should be stable under substitution: brec(C, a_0, a_1, b)σ = brec(Cσ⁺, a_0σ, a_1σ, bσ).

3 Reducibility model

Given a model M of type theory as defined above, we describe how to build a new associated "reducibility" model M*. When applied to the initial/term model, this gives a proof of canonicity which can be seen as a direct generalization of the argument presented in [19] for Gödel's system T. As explained in the introduction, the main novelty here is that we consider a proof-relevant notion of reducibility.
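As an overview, the four levels of the new model M* are (writing |Γ| for Hom((), Γ), as above):

\begin{align*}
\text{contexts:} \quad & (\Gamma, \Gamma') && \Gamma'(\rho)\ \text{a set, for } \rho \in |\Gamma|\\
\text{substitutions:} \quad & (\sigma, \sigma') && \sigma \in \mathrm{Hom}(\Delta, \Gamma),\ \ \sigma'(\rho, \rho') \in \Gamma'(\sigma\rho)\\
\text{types:} \quad & (A, A') && A \in \mathrm{Type}_n(\Gamma),\ \ A'(\rho, \rho')\ \text{a family of sets over } \mathrm{Elem}((), A\rho)\\
\text{elements:} \quad & (a, a') && a \in \mathrm{Elem}(\Gamma, A),\ \ a'(\rho, \rho') \in A'(\rho, \rho')(a\rho)
\end{align*}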

A context of M* is given by a context Γ of the model M together with a family of sets Γ'(ρ) for ρ in |Γ|. A substitution from (Δ, Δ') to (Γ, Γ') is given by a pair (σ, σ') with σ in Hom(Δ, Γ) and σ'(ρ, ρ') in Γ'(σρ) for ρ in |Δ| and ρ' in Δ'(ρ).

The identity substitution is the pair (1, 1') with 1'(ρ, ρ') = ρ'.

Composition is defined by (σ, σ')(δ, δ') = (σδ, θ') with θ'(ρ, ρ') = σ'(δρ, δ'(ρ, ρ')).

The set Type_n((Γ, Γ')) is defined to be the set of pairs (A, A') where A is in Type_n(Γ) and A'(ρ, ρ') is, for ρ in |Γ| and ρ' in Γ'(ρ), a family of sets (in the n-th set-theoretic universe) indexed by Elem((), Aρ). We define then (A, A')(σ, σ') = (Aσ, B') with B'(ρ, ρ') = A'(σρ, σ'(ρ, ρ')).

We define Elem((Γ, Γ'), (A, A')) to be the set of pairs (a, a') where a is in Elem(Γ, A) and a'(ρ, ρ') is in A'(ρ, ρ')(aρ) for each ρ in |Γ| and ρ' in Γ'(ρ). We define then (a, a')(σ, σ') = (aσ, b') with b'(ρ, ρ') = a'(σρ, σ'(ρ, ρ')).

The extension operation is defined by (Γ, Γ').(A, A') = (Γ.A, E) where E(ρ, a) is the set of pairs (ρ', a') with ρ' in Γ'(ρ) and a' in A'(ρ, ρ')(a).

We define a substitution (p, p') from (Γ.A, E) to (Γ, Γ') by taking p'((ρ, a), (ρ', a')) = ρ'. We have then an element (q, q') of type (A, A')(p, p') defined by q'((ρ, a), (ρ', a')) = a'.

3.1 Dependent product

We define a new operation Π (A, A') (B, B') = (Π A B, C') where C'(ρ, ρ')(w) is the set of functions mapping a closed element u in Elem((), Aρ) and an element u' in A'(ρ, ρ')(u) to an element of B'((ρ, u), (ρ', u'))(app(w, u)).

If (b, b') is in Elem((Γ, Γ').(A, A'), (B, B')) then λ(b, b') = (λb, c'), where c' is defined by the equation

c'(ρ, ρ')(u, u') = b'((ρ, u), (ρ', u'))

which is in B'((ρ, u), (ρ', u'))(app((λb)ρ, u)) as required, since app((λb)ρ, u) = b(ρ, u).

We have an application operation app((w, w'), (u, u')) = (app(w, u), v') where v'(ρ, ρ') = w'(ρ, ρ')(uρ, u'(ρ, ρ')).
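For readability, the product clause can also be written as a single formula (in the notation introduced above):

\[
C'(\rho, \rho')(w) \;=\; \prod_{u \,\in\, \mathrm{Elem}((), A\rho)}\ \prod_{u' \,\in\, A'(\rho, \rho')(u)} B'\big((\rho, u), (\rho', u')\big)\big(\mathsf{app}(w, u)\big).
\]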

3.2 Universes

We define U_n'(A), for A in Elem((), U_n), to be the set of functions from Elem((), A) to the n-th set-theoretic universe. Thus an element of U_n'(A) is a family of sets X(a), in the n-th set-theoretic universe, for a in Elem((), A). The universe of M* is defined to be the pair (U_n, U_n') and we have Elem((Γ, Γ'), (U_n, U_n')) = Type_n((Γ, Γ')).

3.3 Booleans

We define N_2'(v), for v in Elem((), N_2), to be the set consisting of 0 if v = 0 and of 1 if v = 1. We have then 0 in N_2'(0) and 1 in N_2'(1). Note that N_2'(v) may not be a subsingleton if we have 0 = 1 in the model. We define brec(C, a_0, a_1, b)' to be a_0' if b' is 0 and to be a_1' if b' is 1.
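Written out as a set (one possible rendering in our notation), the boolean clause is

\[
\mathsf{N}_2'(v) \;=\; \{\, 0 \mid v = 0 \,\} \;\cup\; \{\, 1 \mid v = 1 \,\},
\]

so in a model where 0 = 1 holds this set contains both 0 and 1, which is why reducibility has to be treated as a structure and not merely as a property.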

3.4 Main result

Theorem 3.1.

The new collection of contexts, with the operations Π, λ, app, U_n and N_2 defined above, defines a new model of type theory.

The proof consists in checking that the required equalities hold for the operations we have defined. For instance, one checks the equations for dependent products (the β- and η-rules and stability under substitution) and the computation rules for N_2, which hold by unfolding the definitions. When checking the equalities, we only use β- and η-conversions at the metalevel.

There are of course strong similarities with the parametricity model presented in [4]. This model can also be seen as a constructive version of the glueing technique [14, 20]. Indeed, to give a family of sets over a set X is essentially the same as to give a set Y and a map Y → X, which is what happens in the glueing technique [14, 20].
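The correspondence invoked here is the usual one: a family of sets over X corresponds to a set Y with a map Y → X, via

\[
(A_x)_{x \in X} \;\longmapsto\; \big(\, Y = \textstyle\sum_{x \in X} A_x,\ \ \pi : Y \to X \,\big), \qquad\quad
(f : Y \to X) \;\longmapsto\; \big(\, f^{-1}(x) \,\big)_{x \in X},
\]

and the two passages are inverse to each other up to canonical isomorphism.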

4 The term model

There is a canonical notion of morphism between two models. For instance, the first projection defines a map of models M* → M. As for models of generalized algebraic theories [10], there is an initial model, unique up to isomorphism. We define the term model of type theory to be this initial model. As for equational theories, this model can be presented by first-order terms (corresponding to the various operations) modulo the equations/conversions that have to hold in any model.

Theorem 4.1.

In the initial model, given v in Elem((), N_2), we have v = 0 or v = 1. Furthermore we do not have 0 = 1 in the initial model.

Proof.

We have a unique map of models M → M*. The composition of the first projection with this map has to be the identity on M. If v is in Elem((), N_2), the image of v by the initial map hence has to be a pair of the form (v, v') with v' in N_2'(v). It follows that we have v = 0 if v' is 0 and v = 1 if v' is 1. Since the image of 0 has second component 0 and the image of 1 has second component 1, we cannot have 0 = 1 in the initial model. ∎

5 Presheaf model

We suppose given an arbitrary model M. We define from this the following category of "telescopes". An object of this category is a list X = (A_1, …, A_k) with A_1 in Type(()), A_2 in Type(().A_1), A_3 in Type(().A_1.A_2), and so on. To any such object we can associate a context of the model M, obtained from the terminal context by iterated context extension; we may write simply X for this context, and similarly use the same notation for the associated sets of types and elements. If X and Y are objects of this category, a map Y → X is given by a list of components, each an element of the appropriate (substituted) type over the context associated to Y. It is direct to define a composition operation for such maps, which gives a category structure on these objects.

We use freely the fact that we can interpret the language of dependent types (with universes) in any presheaf category [13]. A presheaf F is given by a family of sets F(X) indexed by the objects X of this base category, with restriction maps u ↦ uf from F(X) to F(Y) for f : Y → X, satisfying the equations u1 = u and (uf)g = u(fg) if g : Z → Y. A dependent presheaf G over F is a presheaf over the category of elements of F, so it is given by a family of sets G(X, u) for u in F(X), with restriction maps.

We write V_n for the cumulative sequence of presheaf universes, so that V_n(X) is the set of dependent presheaves, with values in the n-th set-theoretic universe, on the presheaf represented by X.

Type_n defines a presheaf over this category, with Type_n a subpresheaf of Type_{n+1}. We can see Elem as a dependent presheaf over Type_n, since it determines a collection of sets Elem(X, A) for A in Type_n(X) with restriction maps.

If A is in Type_n(X) we let NF(X, A) (resp. NE(X, A)) be the set of all expressions of type A that are in normal form (resp. neutral). As for Elem, we can see NF and NE as dependent types over Type_n, and we have NE(X, A) contained in NF(X, A).

We have an evaluation function k ↦ ⟨k⟩ from NF(X, A) to Elem(X, A) if A is in Type_n(X). If a is in Elem(X, A) then we let NF_a(X, A) (resp. NE_a(X, A)) be the subtypes of NF(X, A) (resp. NE(X, A)) of elements k such that ⟨k⟩ = a.

Each context Γ defines a presheaf by letting its value at X be the set of all substitutions from (the context associated to) X to Γ.

Any element of Elem(Γ, A) defines internally a dependent function over the presheaf defined by Γ.

We have a canonical isomorphism between the presheaf defined by the extended context Γ.A and the corresponding dependent sum of presheaves. We can then use this isomorphism to build, internally, the operations on normal and neutral expressions that are needed in the next section: an operation building a normal form for a dependent product type from normal forms for its components; an operation building a normal λ-abstraction from a normal form for its body; and, given a neutral expression of product type and a normal argument, an operation building a neutral application. Each of these operations is compatible with the evaluation function.

While equality might not be decidable in Type_n(X) (because we use arbitrary renamings as maps in the base category), the product operation is injective: if Π A B = Π A_0 B_0 in Type_n(X) then A = A_0 in Type_n(X) and B = B_0 in Type_n(X.A).

6 Normalization model

The normalization model is similar to the reducibility model, and we only explain the main operations.

As before, a context is a pair (Γ, Γ') where Γ is a context of M and Γ' is a dependent presheaf over the presheaf defined by Γ.

A type at level n over this context consists now of a pair (A, A') where A is in Type_n(Γ) and A'(ρ, ρ') is a reducibility structure for the type Aρ, for ρ in the presheaf defined by Γ and ρ' in Γ'(ρ). Such a reducibility structure for a type B consists in a 4-uple where the first element is a normal form for B, the second element is a proof-relevant predicate on the elements of B, the third element is a function sending an element together with a proof that it satisfies the predicate to a normal form of this element, and the fourth element is a function sending a neutral expression of type B to an element satisfying the predicate.

An element of this type is a pair (a, a') where a is in Elem(Γ, A) and a'(ρ, ρ') is an element of the predicate given by A'(ρ, ρ') at aρ.

The intuition behind this definition is that it is a "proof-relevant" way to express the method of reducibility used for proving normalization [11]: a reducibility predicate has to contain all neutral terms and to contain only normalizable terms. The third (resp. fourth) component is closely connected to the "reify" (resp. "reflect") function used in normalization by evaluation [5], but for a "glued" model.
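Schematically (with the names ↓ for "reify" and ↑ for "reflect" borrowed from the normalization-by-evaluation literature, not necessarily the notation used here), a reducibility structure over a closed type A has the shape

\[
\Big(\; N \in \mathrm{NF}, \qquad P : \mathrm{Elem}(A) \to \mathrm{Set}, \qquad
{\downarrow} : \textstyle\prod_{a}\big(P(a) \to \mathrm{NF}_a\big), \qquad
{\uparrow} : \textstyle\prod_{a}\big(\mathrm{NE}_a \to P(a)\big) \;\Big),
\]

so that every neutral expression yields a reducible element and every reducible element has a normal form.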

We redefine the reducibility set at N_2 for v to be the set of elements k in NF_v(X, N_2) such that k is 0 or k is 1 or k is neutral. We define 0' to be 0 and 1' to be 1.

We define the associated reify function to be the inclusion into normal forms and the reflect function to send a neutral expression k to k itself, where the normal form component is N_2 and the predicate is the one just described.

The set of substitutions between two such contexts is defined to be the set of pairs (σ, σ') where σ is in Hom(Δ, Γ) and σ' is a proof component over the presheaf defined by Δ, as in the reducibility model.

The extension operation is defined by (Γ, Γ').(A, A') = (Γ.A, E) where E(ρ, a) is the set of pairs (ρ', a') with ρ' in Γ'(ρ) and a' in the predicate given by A'(ρ, ρ') at a.

We define a new operation Π (A, A') (B, B') = (Π A B, C'), where C'(ρ, ρ') is the tuple consisting of a normal form for the product type (built from the normal forms given by A' and B'), the predicate stating that a function sends reducible arguments to reducible values, and

  • a reify component, which builds a normal λ-abstraction by reflecting a fresh variable, applying the given function to it, and reifying the result

  • a reflect component, which sends a neutral expression of product type to the function obtained by applying it, as a neutral, to reified arguments and reflecting the result