The Size-Change Principle for Mixed Inductive and Coinductive types

01/23/2019 · Pierre Hyvernat, Université Savoie Mont Blanc

This paper describes how to use Lee, Jones and Ben-Amram's size-change principle to check correctness of arbitrary recursive definitions in an ML / Haskell like programming language. The size-change principle is used to check not only termination, but also productivity for infinite objects; the resulting criterion is sound even in the presence of arbitrary nestings of inductive and coinductive types. A small prototype has been implemented and gives a practical argument in favor of this principle. This work relies on a characterization of least and greatest fixed points as sets of winning strategies for parity games that was developed by L. Santocanale in his work on circular proofs. Half of the paper is devoted to the proof of correctness of the criterion, which relies on an untyped extension of the language's denotational semantics to a domain of values extended with non-deterministic sums. All the syntactical constructions can be recast in this domain and checked to be semantically sound.


Introduction

Inductive types (also called algebraic datatypes) have been a cornerstone of typed functional programming: Haskell and Caml both rely heavily on them. One mismatch between the two languages is that Haskell is lazy while Caml is strict. A definition (the examples in the paper are all given using the syntax of chariot, which is briefly described in sections 1.2 and 1.5) like

    val nats : nat -> nat
      | nats n = n::(nats (n+1))
is useless (but valid) in Caml because the evaluation mechanism will try to evaluate it completely (call-by-value evaluation). In Haskell, because evaluation is lazy (call-by-need), such a definition can be used productively. Naively, it seems that types in Caml correspond to “least fixed points” while they correspond to “greatest fixed points” in Haskell.

The aim of this paper is to introduce a language called chariot (a prototype implementation in Caml is available from https://github.com/phyver/chariot) where the distinction between least and greatest fixed points makes sense and where one can define datatypes with an arbitrary nesting of polarities. To allow a familiar programming experience, definitions are not restricted: any (well-typed) recursive definition is allowed. In particular, it is possible to write badly behaved definitions like

    val f : nat -> nat
      | f 0 = 1
      | f (n+1) = f(f n)

To guarantee that a definition is correct, two steps are necessary:

  1. Hindley-Milner type-checking [Mil78] to guarantee that evaluation doesn’t provoke runtime errors,

  2. a totality test to check that the defined function respects the polarities of the fixed points involved in its type.

The second point generalizes a previous termination checker [Hyv14]: when no coinductive type is involved, totality amounts to termination. It is important to keep in mind that any definition that passes this test is guaranteed to be correct, but there are correct definitions that are rejected (the halting problem is, after all, undecidable [Tur36]).
In a programming context, the result of this second step can be ignored when the programmer (thinks he) knows better. In a proof-assistant context however, it cannot be ignored: non-total definitions can lead to inconsistencies. The most obvious example is the definition

    val undefined = undefined
which is non-terminating but belongs to all types, even the empty one! There are subtler examples of definitions that normalize to values but still lead to inconsistencies [AD12] (cf. the example in section 1.5).
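For instance, here is a minimal sketch in chariot syntax (assuming a data with no constructor is accepted, mirroring the codata unit with no destructor of section 1.2):

    data empty where           -- no constructor: this type has no total value

    val bottom : empty         -- accepted by type checking...
      | bottom = bottom        -- ...but its semantics is the non-total value ⊥

Read as a proposition, empty is falsity, and bottom would be a proof of it: this is why a proof assistant cannot ignore totality.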

In Coq [The04], the productivity condition for coinductive definitions is ensured by a very strict syntactic condition (guardedness [Coq93]) similar to the condition that inductive definitions need to have one structurally decreasing argument. In Agda [Nor08], the user can write arbitrary recursive definitions and the productivity condition is ensured by the termination checker. The implemented checker extends a published version [AA02] to deal with coinductive types, but while this is sound for simple types like streams, it is known to be unsound for nested coinductive and inductive types [AD12]. This paper provides a first step toward a solution for this problem.

Related Works

Circular proofs

The primary inspiration for this work comes from the ideas developed by L. Santocanale in his work on circular proofs [San02c, San02a, San02b]. Circular proofs are defined for a linear proof system and are interpreted in categories with products, coproducts and enough initial algebras / terminal coalgebras. In order to get a functional language, we need to add rules and interpret them in cartesian closed categories with coproducts and enough initial algebras / terminal coalgebras (like the category of sets and functions, or the category of domains).

What is described in this paper seems to amount to using a strong combinatorial principle (the size-change principle) to check a sanity condition on a circular “preproof”. This condition implies that the corresponding cut-free preproof (an infinite object) can be interpreted in a sufficiently well behaved category. This condition is strictly stronger than the one L. Santocanale and G. Fortier used in their work, which corresponded to the syntactical structurally-decreasing / guardedness conditions on recursive definitions.

Note however that while circular proofs were a primary inspiration, this work cannot be reduced to a circular proof system. The main problem is that all such proof systems are linear and do not enjoy a simple cut-elimination procedure. Cuts and exponentials are needed to interpret the full chariot language and, while cuts can be added to the original system of circular proofs [FS14, For14], adding exponentials looks extremely difficult and hasn't been done.

Note also that more recent works in circular proof theory replace L. Santocanale's criterion by a much stronger combinatorial condition. Without going into the details, it is equivalent to some infinite word being recognized by a parity automaton (which is decidable) [Dou17b, Dou17a]. The presence of parity automata points to a relation between that line of work and the present paper, but the different contexts make the connection all but obvious.

Size-change principle

The main tool used for checking totality is the size-change principle (SCP) from C. S. Lee, N. D. Jones and A. M. Ben-Amram [LJBA01]. The problem of totality is however subtler than termination of programs. While the principle used to check termination of ML-like recursive definitions [Hyv14] was inherently untyped, totality checking needs to be somewhat type aware. For example, in chariot, records are lazy and are used to define coinductive types. The following definition

  val inf = Node { Left = inf; Right = inf }
yields an infinite, lazy binary tree. Depending on the types of Node, Left and Right, the definition may be correct or incorrect (see section 2 for more details)!

Charity

The closest ancestor of chariot is the language charity [CF92, Coc96] (the name chariot was chosen as a reminder of this genealogy), developed by R. Cockett and T. Fukushima, which allows the user to define types with arbitrary nesting of induction and coinduction. Values in these types are defined using categorical principles.

  • Inductive types are initial algebras: defining a function from an inductive type amounts to defining an algebra for the corresponding operator.

  • Coinductive types are terminal coalgebras: defining a function to a coinductive type amounts to defining a coalgebra for the corresponding operator.

Concretely, it means the user can only define recursive functions that are “trivially” structurally decreasing on one argument, or “trivially” guarded. In particular, all functions terminate and the language is not Turing complete.

This is very different from the way one can write, for example, the Ackermann function with pattern matching:

    val ack 0 n = n+1
      | ack (m+1) 0 = ack m 1
      | ack (m+1) (n+1) = ack m (ack (m+1) n)

Guarded recursion

Another approach to checking correctness of recursive definitions is based on “guarded recursion”, initiated by H. Nakano [Nak00] and later extended in several directions [CBGB16, Gua18]. In this approach, a new modality “later” (usually written “▷”) is introduced in the type theory. The new type ▷T gives a syntactical way to talk about terms that “will later (after some computation) have type T”. This line of work is rather successful and has been extended to very expressive type systems. The drawback is that it requires a non-standard type theory with a not quite standard denotational semantics (the topos of trees). Moreover, it makes programming more difficult as it introduces new constructs in types and terms. Finally, these works only consider greatest fixed points (as in Haskell) and are thus of limited interest for systems such as Agda or Coq.
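As an illustration, here is a common presentation of the two primitives of guarded recursion (a sketch in standard notation, not chariot syntax; the names next and fix are conventional):

    next : T -> ▷T                  -- delay a value by one step
    fix  : (▷T -> T) -> T           -- guarded fixed point

With a stream constructor of type A -> ▷Str(A) -> Str(A), a recursive stream definition is accepted exactly when its recursive occurrences sit under the “later” modality, which is a typed counterpart of guardedness.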

Sized-types

This approach also extends type theory, with a notion of “size” for types. It has been successful and is implemented in Agda [Abe10, Abe12]. This makes it possible, for example, to specify that the map function on lists preserves sizes by giving it a type of the shape (A -> B) -> List^i(A) -> List^i(B), where List^i(A) is the type of lists with at most i elements of type A. These extra parameters gather information about recursive functions and make it easier to check termination. A drawback is that functions on sized types must take extra size parameters. This complexity is balanced by the fact that most of them can be inferred automatically and are thus mostly invisible to the casual user (the libraries' implementors still need to give the appropriate type to map, though). Note however that this approach still needs a way to check that definitions respect the sizes.

Fixed points in game semantics

An important tool for checking totality of definitions in this paper is the notion of parity game. P. Clairambault [Cla13] explored a notion of game (from a categorical, game-semantics point of view) enriched with winning conditions for infinite plays. The way the winning condition is defined for least and greatest fixed points is reminiscent of L. Santocanale's work on circular proofs, and the corresponding category is cartesian closed.

Because this work is done in a more complex setting (categories of games) and aims for generality, it seems difficult to extract a practical test for totality from it. The present paper aims for specificity and practicality by devising a totality test for the “intended” semantics (i.e. in plain sets and functions) of recursion.

SubML

C. Raffalli and R. Lepigre also used the size-change principle to check correctness of recursive definitions in the language SubML [LR18]. Their approach uses a powerful but non-standard type theory with many features: subtyping, polymorphism, sized types, control operators, some kind of dependent types, etc. On the downside, this makes their type theory more difficult to compare with other approaches. Note that, like Agda or chariot, they allow arbitrary definitions, which are checked by an incomplete totality checker. The similarity of approach isn't surprising considering previous collaborations between the authors. One interesting point of their work is that the size-change principle is only used to check that some object (a proof tree) is well-founded: even coinductive types are justified with well-founded proofs.

Nax

Another programming language with nested inductive / coinductive types is the Nax language [Ahn14], based on so-called “Mendler style recursion” [Men91]. One key difference is that Nax is very permissive in the definition of types (it is for example possible to define fixed points of non-positive type operators) and rather restrictive in the definition of values: they are defined using various combinators similar to (but stronger than) the way values are defined in charity. From the point of view of a Haskell / Caml programmer, the restriction on the way programs are written is difficult to accept. (No implementation of Nax is available, so it is difficult to experiment with it.)

Plan of the Paper

We start by introducing the language chariot and its denotational semantics in section 1, together with the notion of totality for functions. Briefly, totality generalizes termination in a way that accounts for inductive and coinductive types. An interesting point is that this notion is semantical rather than syntactical. We then describe, in section 2, a combinatorial approach to totality that comes from L. Santocanale's work on circular proofs. This reduces checking totality of a definition to checking that the definition gives a winning strategy in a parity game associated to the type of the definition. Section 3 describes how the size-change principle can be applied to this problem: a recursive definition gives a call-graph, and the size-change principle can be used to check a totality condition on all infinite paths in this call-graph. This section is written from the implementor's point of view, and most proofs are omitted: they are given in the following section. The last section is the longest and gives the proof of correctness. This works by showing that the call-graph and the operations defined on it have a sound semantics in domains.

1. The Language and its Semantics

1.1. Values

We are interested in a condition on the semantics of recursive definitions. What is interesting is that this doesn't mention the reduction strategy: everything takes place in the realm of values. The set of values with leaves in X₁, …, Xₙ, written 𝒱(X₁,…,Xₙ), is defined coinductively (it is natural to give an infinite semantics for coinductive types, and infinite values are thus allowed) by the grammar

    v ::= ⊥ | x | C v | { D₁ = v₁ ; … ; Dₖ = vₖ }

where

  • each x is in one of the Xᵢ,

  • each C belongs to a finite set of constructors,

  • each Dᵢ belongs to a finite set of destructors,

  • the order of fields inside records is irrelevant,

  • k can be 0.

Locally, only a finite number of constructors / destructors will be necessary: those appearing in the type definitions involved in the definitions we are checking. There is a natural ordering on finite values, which can be extended to infinite ones (cf. the remark about ideal completion below). If the Xᵢ are ordered sets, the order on 𝒱(X₁,…,Xₙ) is generated by

  1. ⊥ ≤ v for all values v,

  2. if x ≤ y in Xᵢ, then x ≤ y in 𝒱(X₁,…,Xₙ),

  3. if u ≤ v then Γ[u] ≤ Γ[v] for any context Γ.
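For instance, with the nat constructors of section 1.2, rules (1) and (3) combine as follows:

    ⊥  ≤  Succ ⊥  ≤  Succ Zero

The first inequality is rule (1); the second one follows from ⊥ ≤ Zero (rule (1) again) placed in the context Succ[·] (rule (3)).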

1.2. Type Definitions

The approach described in this paper is entirely first-order. We are only interested in the way values in datatypes are constructed and destructed. Higher-order parameters are allowed in the implementation but they are ignored by the totality checker. The examples in the paper will use such higher-order parameters but, for simplicity's sake, they are not formalized. Note that it is not possible to just ignore higher-order parameters as they can hide some recursive calls:

  val app f x = f x       -- non recursive
  val g x = app g x
In order to deal with that, the implementation first checks that all recursive functions are fully applied. If that is not the case, the checker aborts and gives a negative answer.

Just like in charity, types in chariot come in two flavors: those corresponding to sum types (i.e. colimits) and those corresponding to product types (i.e. limits). The syntax is itself similar to that of charity:

  • a data comes with a list of constructors whose codomain is the type being defined,

  • a codata comes with a list of destructors whose domain is the type being defined.

The syntax is

  data new_type where
      | C1 : T1 -> new_type
      ...
      | Ck : Tk -> new_type

  codata new_type where
      | D1 : new_type -> T1
      ...
      | Dk : new_type -> Tk

Each Ti is built from earlier types, the type parameters and new_type itself. Type parameters are written with a quote as in Caml, but the parameters of new_type cannot change in the definition. Mutually recursive types are possible, but they need to be of the same polarity (all data or all codata). We can always suppose that all the mutually defined types have the same parameters as otherwise, the definition could be split into several non-mutual definitions. Here are some examples:

    codata unit where           -- no destructor

    codata prod(’x,’y) where  Fst : prod(’x,’y) -> ’x
                            | Snd : prod(’x,’y) -> ’y

    data nat where  Zero : unit -> nat
                  | Succ : nat  -> nat

    data list(’x) where  Nil : unit                -> list(’x)
                       | Cons : prod(’x, list(’x)) -> list(’x)

    codata stream(’x) where  Head : stream(’x) -> ’x
                           | Tail : stream(’x) -> stream(’x)
The examples given in the paper (and the implementation) do not adhere strictly to this syntax: n-ary constructors are allowed, and Zero will have type nat (instead of unit -> nat) while Cons will be uncurried and have type ’x -> list(’x) -> list(’x) (instead of prod(’x, list(’x)) -> list(’x)).

Because destructors act as projections, it is useful to think about elements of a codatatype as records. This is reflected in the syntax of terms, and the following defines the stream with infinitely many Zeros.

    val zeros : stream(nat)
      | zeros = { Head = Zero ; Tail = zeros }
As the examples show, codata are going to be interpreted as coinductive types, while data are going to be inductive. The denotational semantics will reflect that, and in order to have an operational semantics that is sound, codata need to be lazy. The simplest way is to stop evaluation on records: evaluating “zeros” will give “{Head = ???; Tail = ???}” where the “???” are not evaluated. Surprisingly, the details are irrelevant to the rest of the paper.

We will use the following conventions:

  • outside of actual type definitions (given using chariot's syntax), type parameters will be written without a quote: x, y, …

  • an unknown datatype will be called d and an unknown codatatype will be called c,

  • an unknown type of unspecified polarity will be called t.

1.3. Semantics in Domains

Our notion of domain is the historical one: a domain is a

  • consistently complete (finite bounded sets have a least upper bound)

  • algebraic (with a basis of compact elements)

  • directed-complete partial order (DCPO: every directed set has a least upper bound).

While helpful in section 4, intimate knowledge of domain theory is not necessary to follow the description of the totality checker.

There is a natural interpretation of types in the category Dom of domains, where morphisms are continuous functions. (Note that morphisms are not required to preserve the least element.) The category-theoretic aspect is not important because all the types are in fact subdomains of 𝒱(X₁,…,Xₙ). The following can be proved directly but is also a direct consequence of a general fact about orders and their “ideal completion”: if the Xᵢ are domains, then 𝒱(X₁,…,Xₙ) is a domain.

Type expressions with parameters are generated by the grammar

    T ::= x | X | d(T₁, …, Tₖ) | c(T₁, …, Tₖ)

where x is a type variable, X is any domain (or set, depending on the context) called a parameter, d is the name of a datatype of arity k and c is the name of a codatatype of arity k. A type is closed if it doesn't contain variables. (It may contain parameters, that is, subdomains of the domain of values.)

The interpretation ⟦T⟧ of a closed type T with domain parameters is defined coinductively as the set of (possibly infinite) values well typed according to:

  1. ⊥ ∈ ⟦T⟧ for any type T,

  2. v ∈ ⟦X⟧ for any parameter X and any v ∈ X,

  3. C v ∈ ⟦d(T₁,…,Tₖ)⟧ whenever v ∈ ⟦U[σ]⟧, where C : U -> d(x₁,…,xₖ) is a constructor of d,

  4. { D₁ = v₁ ; … ; Dₙ = vₙ } ∈ ⟦c(T₁,…,Tₖ)⟧ whenever vᵢ ∈ ⟦Uᵢ[σ]⟧ for each i, where D₁ : c(x₁,…,xₖ) -> U₁, …, Dₙ : c(x₁,…,xₖ) -> Uₙ are all the destructors for type c.

In the third and fourth rules, σ denotes the substitution [x₁ := T₁, …, xₖ := Tₖ] and U[σ] denotes the type U where each variable xᵢ has been replaced by Tᵢ.

If T is a type with free variables x₁, …, xₙ, we write ⟦T⟧(X₁,…,Xₙ) for the interpretation of T[σ] where σ is the substitution [x₁ := X₁, …, xₙ := Xₙ]. All the ⊥ coming from the parameters are identified; there are thus several ways to prove that ⊥ belongs to the interpretation of a type: either with rule (1) or with rule (2). The following is proved by induction on the type expression T. Let X₁, …, Xₙ be domains; if T is a type then

  • with the order inherited from 𝒱(X₁,…,Xₙ), ⟦T⟧(X₁,…,Xₙ) is a domain,

  • (X₁,…,Xₙ) ↦ ⟦T⟧(X₁,…,Xₙ) gives rise to a functor from Domⁿ to Dom,

  • if d is a datatype with constructors C₁ : U₁ -> d(…), …, Cₖ : Uₖ -> d(…), we have ⟦d(…)⟧ = (C₁ ⟦U₁[σ]⟧ ⊎ ⋯ ⊎ Cₖ ⟦Uₖ[σ]⟧)⊥,

  • if c is a codatatype with destructors D₁, …, Dₖ, we have ⟦c(…)⟧ = (⟦U₁[σ]⟧ × ⋯ × ⟦Uₖ[σ]⟧)⊥.

The operations ⊎ and × are the set-theoretic operations (disjoint union and cartesian product), and S⊥ is the usual notation for S ∪ {⊥}. This shows that the semantics of types are fixed points of natural operators. For example, ⟦nat⟧ is the domain of “lazy natural numbers”, containing ⊥, Zero and all the values built from Succ above them; the following are different elements of ⟦nat⟧:

  • ⊥, Zero, Succ ⊥ and Succ Zero,

  • the infinite value Succ (Succ (Succ …)).

1.4. Semantics in Domains with Totality

At this stage, there is no distinction between greatest and least fixed points: the functors defined by types are algebraically compact [Bar92], i.e. their initial algebras and terminal coalgebras are isomorphic. For example, the infinite value Succ (Succ (Succ …)) is an element of ⟦nat⟧ as the limit of the chain ⊥ ≤ Succ ⊥ ≤ Succ (Succ ⊥) ≤ ⋯. In order to distinguish between inductive and coinductive types, we add a notion of totality to the domains (intrinsic notions of totality exist [Ber93] but are seemingly unrelated to what is considered below).

  1. A domain with totality (D, |D|) is a domain D together with a subset |D| ⊆ D.

  2. An element of D is total when it belongs to |D|.

  3. A function from (D, |D|) to (E, |E|) is a function f from D to E. It is total if f(|D|) ⊆ |E|, i.e. if it sends total elements to total elements.

  4. Domains with totality as objects, together with total continuous functions as morphisms, form a category.

To interpret (co)datatypes inside this category, it is enough to describe the associated totality predicate. The following definition corresponds to the “natural” interpretation of inductive / coinductive types in the category of sets. If T is a type whose parameters are domains with totality, we define |⟦T⟧| by induction:

  • if T = X is a parameter, then |⟦X⟧| = |X|,

  • if T = d(T₁,…,Tₖ) is a datatype, then |⟦T⟧| is the least fixed point of the operator Φ below,

  • if T = c(T₁,…,Tₖ) is a codatatype, then |⟦T⟧| is the greatest fixed point of the operator Φ below,

where

  1. if d is a datatype with constructors C₁ : U₁ -> d, …, Cₙ : Uₙ -> d, Φ is the operator sending a set S to the set of all Cᵢ v with v ∈ |⟦Uᵢ[σ]⟧|, where S is used as the totality predicate for the recursive occurrences of d,

  2. if c is a codatatype with destructors D₁ : c -> U₁, …, Dₙ : c -> Uₙ, Φ is the operator sending a set S to the set of all records {D₁ = v₁; …; Dₙ = vₙ} with each vᵢ ∈ |⟦Uᵢ[σ]⟧|, where S is used as the totality predicate for the recursive occurrences of c.

In both cases, σ is the substitution [x₁ := T₁, …, xₖ := Tₖ]. The least and greatest fixed points exist by the Knaster-Tarski theorem: the corresponding operators are monotonic on subsets of the set of all values. It is not difficult to see that each element of |⟦T⟧| is in ⟦T⟧ and doesn't contain ⊥, i.e. is a maximal element of the domain ⟦T⟧: if T is a type with domain parameters, (⟦T⟧, |⟦T⟧|) is a domain with totality. Moreover, each element of |⟦T⟧| is maximal in ⟦T⟧.
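Concretely, for the types of section 1.2, this definition yields (a sketch of the resulting predicates):

    |⟦nat⟧|          =  { Zero, Succ Zero, Succ (Succ Zero), … }                       (least fixed point)
    |⟦stream(nat)⟧|  =  infinite records { Head = v ; Tail = … } with all v ∈ |⟦nat⟧|  (greatest fixed point)

In particular, the infinite value Succ (Succ (Succ …)) belongs to ⟦nat⟧ but not to |⟦nat⟧|, while the stream zeros is total.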

1.5. Recursive Definitions

Like in Haskell, recursive definitions are given by lists of clauses. The Ackermann function was given in the section on charity above, and here is the map function on streams (this definition isn't strictly speaking first order as it takes a function as argument; we will ignore such arguments, which can be seen as free parameters):

    val  map : (’a -> ’b) -> stream(’a) -> stream(’b)
       | map f { Head = x ; Tail = s } = { Head = f x ; Tail = map f s }
Formally, a (recursive) definition is introduced by the keyword val and consists of several clauses of the form
    f p1 ... pn = u
where

  • f is a function name,

  • each pᵢ is a finite pattern

        p ::= x | C p | { D₁ = p₁ ; … ; Dₖ = pₖ }

    where each x is a variable name,

  • and u is a finite term

        u ::= x | C u | { D₁ = u₁ ; … ; Dₖ = uₖ } | f u

    where each x is a variable name and each f is a function name (possibly one of the functions being defined).

Note that it is not possible to directly project a record on one of its fields in the syntax of terms. This makes the theory somewhat simpler and doesn't change the expressivity of the language. It is always possible to

  • remove a projection on a variable by extending the pattern on the left,

  • replace a projection on the result of a recursively defined function by several mutually recursive functions for each of the fields,

  • replace a projection on a previously defined function by another previously defined function.

Of course, the implementation doesn’t enforce this restriction and the theory can be extended accordingly.

There can be many clauses and many different functions defined mutually. The system

  1. checks some syntactical constraints (linearity of pattern variables, …),

  2. performs Hindley-Milner type checking (or type inference if no type annotation was given),

  3. performs an exhaustivity check ensuring that the patterns cover all the possibilities and that records have all their fields.

Those steps are well-known [PJ87] and not described here. Hindley-Milner type checking guarantees that each list of clauses for functions f₁ : T₁, …, fₖ : Tₖ (each Tᵢ is an arrow type) gives rise to an operator

    Φ : ⟦T₁⟧ × ⋯ × ⟦Tₖ⟧  ->  ⟦T₁⟧ × ⋯ × ⟦Tₖ⟧

where the semantics of types is extended with ⟦A -> B⟧ = the domain of continuous functions from ⟦A⟧ to ⟦B⟧. The semantics of f₁, …, fₖ is then defined as the least fixed point of the operator Φ, which exists by Kleene's theorem.
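As a small worked instance (a sketch, where Φ is the operator associated to the single clause of zeros from section 1.2):

    Φ(s)     =  { Head = Zero ; Tail = s }
    ⟦zeros⟧  =  ⊔ₙ Φⁿ(⊥)
             =  ⊔ { ⊥,  {Head = Zero; Tail = ⊥},  {Head = Zero; Tail = {Head = Zero; Tail = ⊥}},  … }

The limit is the infinite stream of Zeros, as expected.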

Typing ensures that the definition is well behaved from an operational point of view: the “⊥” that appear in the result correspond only to non-termination, not to failures of the evaluation mechanism (projecting on a non-existing field or similar problems). For the definition to be correct from a denotational point of view, we need to check more: that it is total with respect to its type. For example, the definition

    val all_nats : nat -> list(nat)
      | all_nats n = Cons n (all_nats (Succ n))
is well typed and sends elements of the domain ⟦nat⟧ to the domain ⟦list(nat)⟧, but its result on Zero contains all the natural numbers. This definition is not total because totality for list(nat) contains only the finite lists (it is an inductive type). Similarly, the definition
    val last_stream : stream(nat) -> nat
      | last_stream {Head=_; Tail=s} = last_stream s
sends any stream to ⊥, which is not total.

A subtle example

Here is a surprising example due to T. Altenkirch and N. A. Danielsson [AD12]: we define the inductive type

    data stree where Node : stream(stree) -> stree
where the type stream was defined in section 1.2. This type is similar to the usual type of “Rose trees”, but with streams instead of lists. Because streams cannot be empty, there is no way to build such a tree inductively: this type has no total value. Consider however the following definitions:
    val s : stream(stree)
      | s = { Head = Node s ; Tail = s }
    val t : stree
      | t = Node s
This is well typed but, because evaluation is lazy, the evaluation of t or of any of its subterms terminates: the semantics of t doesn't contain ⊥. Unfolding the definition, we obtain the infinite value

    t  =  Node { Head = Node {…} ; Tail = { Head = Node {…} ; Tail = … } }

in which every branch alternates Node constructors and {Head = _; Tail = _} records without ever reaching a leaf. Such a term leads to inconsistencies and shows that a simple termination checker isn't enough.

The rest of the paper describes a partial totality test on recursive definitions: some definitions are tagged “total” while the others are tagged “unsafe”, either because they are indeed not total, or because the argument for totality is too complex.

2. Combinatorial Description of Totality

The set of total values for a given type can be rather complex when datatypes and codatatypes are interleaved. Consider the definition

    val inf = Node { Left = inf; Right = inf }
It is not total with respect to the type definitions
    codata pair(’x,’y) where  Left : pair(’x,’y) -> ’x
                            | Right : pair(’x,’y) -> ’y
    data tree where Node : pair(tree, tree) -> tree
but it is total with respect to the type definitions
    data box(’x) where Node : ’x -> box(’x)
    codata tree2 where Left : tree2 -> box(tree2)
                     | Right : tree2 -> box(tree2)
In this case, the value inf is of type box(tree2). Analysing totality requires a combinatorial understanding of the least and greatest fixed points involved. Fortunately, there is a close relationship between set theoretic least and greatest fixed points and winning strategies for parity games.

2.1. Parity Games

Parity games are two-player games played on a finite transition system where each node is labeled by a priority (a natural number). The height of such a game is the maximum priority of its nodes. By extension, the priority of a transition is the priority of its target node. When the current node has odd priority, Marie (or “player”) is required to play; when it has even priority, Nicole (or “opponent”) is required to play. A move is simply a choice of a transition from the current node, and the game continues from the new node. When Nicole (or Marie) cannot move because there is no outgoing transition from the current node, she loses. In case of infinite play, the winning condition is:

  1. if the maximal priority visited infinitely often is even, Marie wins,

  2. if the maximal priority visited infinitely often is odd, Nicole wins.

Equivalently, the condition could be stated using the priorities of the transitions taken during the infinite play. We will call a priority principal if “it is maximal among the priorities appearing infinitely often”. The winning condition can thus be rephrased as “Marie wins an infinite play if and only if the principal priority of the play is even”.

In order to analyse types with parameters, we add special nodes called parameters. Those nodes have no outgoing transition and have odd priority (parameter nodes do not count when defining the height of a parity game); each of them has an associated set X. On reaching such a node, Marie is required to choose an element of X to finish the game (the priority being odd, it is her move). She wins if she can do it and loses if she cannot (when the set is empty). Here are three examples of such parity games: the games for stream(nat), list(nat) and rtree(x) described in section 2.2 below.

Each position v in a parity game G with parameters X₁, …, Xₙ defines a set |v|(X₁,…,Xₙ) depending on X₁, …, Xₙ [San02c]. This set-valued function is defined by induction on the height of G and the number of positions having maximum priority:

  • if all the positions are parameters, each position is interpreted by the corresponding parameter Xᵢ;

  • otherwise, take v₀ to be one of the positions of maximal priority and construct G′ with parameters X₀, X₁, …, Xₙ as follows: it is identical to G, except that position v₀ is replaced by parameter X₀ and all its outgoing transitions are removed (this game is called the predecessor of G [San02c]). Compute recursively the interpretations |w|′, depending on X₀, X₁, …, and:

    • if v₀ had an odd priority, define

        |v₀| = μX₀. |w₁|′ + ⋯ + |wₖ|′

      where v₀ → w₁, …, v₀ → wₖ are all the transitions out of v₀,

    • if v₀ had an even priority, define

        |v₀| = νX₀. |w₁|′ × ⋯ × |wₖ|′

      where v₀ → w₁, …, v₀ → wₖ are all the transitions out of v₀.

An important result is the following [L. Santocanale, [San02c]]:

  • for each position v of G, the operation (X₁,…,Xₙ) ↦ |v|(X₁,…,Xₙ) is a functor from Setⁿ to Set,

  • there is a natural isomorphism |v|(X₁,…,Xₙ) ≅ Wᵥ(G), where Wᵥ(G) is the set of winning strategies for Marie in game G from position v.

In order to analyse totality, we now construct from a type expression T a parity game G(T), in such a way that ⟦T⟧ ≅ |T| where T is seen as a distinguished position in G(T).

2.2. Parity Games from Types

If T is a type expression (possibly with parameters), the graph of T is defined as the subgraph reachable from T in the following (infinite) transition system:

  • nodes are type expressions (possibly with parameters),

  • transitions are labeled by constructors and destructors: a transition T₁ → T₂ labeled with a destructor D corresponds to D : T₁ -> T₂, and a transition T₁ → T₂ labeled with a constructor C corresponds to C : T₂ -> T₁ (note the reversal).

Here is, for example, the graph of list(nat):

    list(nat)  --Nil-->   unit
    list(nat)  --Cons-->  prod(nat, list(nat))
    prod(nat, list(nat))  --Fst-->  nat
    prod(nat, list(nat))  --Snd-->  list(nat)
    nat  --Zero-->  unit
    nat  --Succ-->  nat

The orientation of transitions means that

  • on data nodes, a transition is a choice of constructor for the origin type,

  • on codata nodes, a transition is a choice of field for a record for the origin type.

Because of that, a value of type T can be seen as a strategy for a game on the graph of T, where Marie (the player) chooses constructors and Nicole (the opponent) chooses destructors.
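For instance, on the graph of list(nat) above, the one-element list Cons {Fst = Zero; Snd = Nil} can be read as the strategy that plays Cons at the root and then answers Nicole's choice of field: a maximal play following it is

    list(nat) --Cons--> prod(nat, list(nat)) --Snd--> list(nat) --Nil--> unit

where Marie played the constructors Cons and Nil and Nicole chose the destructor Snd (had she chosen Fst, Marie would have answered Zero).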

The graph of T is finite.

Proof of Lemma 2.2.

We write T ⊑ T′ if T appears in T′. More precisely:

  • T ⊑ T,

  • T ⊑ t(T₁,…,Tₖ) if and only if T = t(T₁,…,Tₖ) or T ⊑ T₁ or … or T ⊑ Tₖ.

To each datatype / codatatype definition, we associate its “definition order”, an integer giving its index in the list of all the type definitions. A (co)datatype may only use parameters and “earlier” type names in its definition, and two types of the same order are part of the same mutual definition. The order of a type is the order of its head type constructor.

Suppose that the graph of some type T is infinite, and that T is minimal in the sense that it is of order n while the graphs of types of order less than n are finite. Since the graph of T has bounded out-degree, by König's lemma it contains an infinite simple (without repeated vertex) path π. For any i, there is some j ≥ i such that the j-th type along π is of order n: otherwise, some suffix of π would be an infinite simple path in the graph of a type of order less than n, contradicting the minimality of T.

All transitions in the graph of T are of the form t(T₁,…,Tₖ) → T′, where T′ is built using the types in T₁, …, Tₖ and possibly the type names with the same order as t. There are three cases:

  1. In transitions t(T₁,…,Tₖ) → Tᵢ, i.e., transitions to a parameter, the target is a subexpression of the origin. This is the only way the order of a type may strictly increase along a transition. This is the case of prod(nat, list(nat)) → nat in the example above.

  2. In transitions to a type in the same mutual definition, the order remains constant. An example is nat → nat along Succ.

  3. In all other cases, the transition is of the form t(T₁,…,Tₖ) → t′(…) where t′ is strictly earlier than t. In this case however, because of the way t′(…) is built, its subexpressions with order greater than that of t′ are necessarily subexpressions of the origin. This is for example the case of list(nat) → prod(nat, list(nat)) along Cons (recall that the transition goes in the opposite direction of the constructor).

The only types of order at least n reachable from T (of order n) are thus:

  • subexpressions of T (obtained with transitions of the forms (1) and (3)),

  • or variants of the above where some types of order n have been replaced by types of the same order (obtained with transitions of the form (2)).

Since there are only finitely many of those, the infinite simple path π necessarily contains a cycle! This is a contradiction. ∎

If T is a type expression (possibly with parameters), a parity game for T is a parity game G on the graph of T satisfying

  1. each parameter of T is a parameter of G,

  2. if T′ is a datatype in the graph of T, its priority is odd,

  3. if T′ is a codatatype in the graph of T, its priority is even,

  4. if T₁ ⊑ T₂, then the priority of T₁ is greater than the priority of T₂.

Each type has a parity game.

Proof.

The relation ⊑, restricted to distinct types, is a strict order and doesn't contain cycles. Its restriction to the graph of T can be linearized. This gives the relative priorities of the nodes and ensures condition (4) from the definition. Starting from the least priorities (i.e. the larger types), we can now choose odd / even priorities compatible with this linearization. (Note that we don't actually need to linearize the graph and can instead choose a normalized parity game, i.e. one that minimizes gaps in priorities.) ∎

Here are the first two parity games from section 2.1, seen as parity games for stream(nat) and list(nat) (the priorities are written as exponents): the game for stream(nat) has positions stream(nat)^0, nat^1 and unit^0, while the game for list(nat) has positions list(nat)^1, prod(nat, list(nat))^0, nat^1 and unit^0, the transitions being those of the graphs above.

The last example from section 2.1 corresponds to a coinductive version of Rose trees:

    codata rtree(’x) where
        | Root : rtree(’x) -> ’x
        | Subtrees : rtree(’x) -> list(rtree(’x))
with parity game

    rtree(x)^2  --Root-->      x
    rtree(x)^2  --Subtrees-->  list(rtree(x))^1
    list(rtree(x))^1  --Nil-->   unit^0
    list(rtree(x))^1  --Cons-->  prod(rtree(x), list(rtree(x)))^0
    prod(rtree(x), list(rtree(x)))^0  --Fst-->  rtree(x)^2
    prod(rtree(x), list(rtree(x)))^0  --Snd-->  list(rtree(x))^1

As those examples show, the priority of a type can be anything from minimal (stream(nat)^0 above) to maximal (rtree(x)^2 above) in its parity game.

For any type T, if G is a parity game for T and if T′ is a node of G, we have a natural isomorphism ⟦T′⟧ ≅ |T′|.

Proof.

The proof follows from the following simple fact by a simple induction: if G is a parity game for T and v is one of its nodes of maximal priority, then the predecessor game of G is a parity game for the graph where v is seen as a parameter. ∎

If T is a type and G a parity game for T, we have |⟦T⟧| ≅ W_T(G). In particular, v ∈ ⟦T⟧ is total iff every branch of v has an even principal priority. (Since any v ∈ ⟦T⟧ gives a strategy in the game of T, the priorities along a branch can be looked up in the game of T.)

2.3. Forgetting Types

A consequence of the previous section is that checking totality doesn't really need types: it needs priorities. We thus annotate each occurrence of a constructor / destructor in a definition with its priority (taken from the type's parity game). The refined notion of value is given by

    v ::= ⊥ | x | C^p v | { D₁^p₁ = v₁ ; … ; Dₖ^pₖ = vₖ }

where

  • each x is in one of the Xᵢ,

  • each priority p belongs to a finite set of natural numbers,

  • each C belongs to a finite set of constructors, and their priorities are odd,

  • each D belongs to a finite set of destructors, and their priorities are even,

  • k can be 0.

The whole domain of values 𝒱 now has a notion of totality: v is total if and only if every branch of v has an even principal priority.

Priorities are an artefact used for checking totality. They play no role in definitions or evaluation and are inferred internally:

  1. each instance of a constructor / destructor is annotated by its type during type checking,

  2. all the types appearing in the definitions are gathered (and completed) into a parity game,

  3. each constructor / destructor is then associated with the priority of its type (and the type itself can be dropped).

Because none of this has any impact on evaluation, checking if a definition is total amounts to checking that the annotated definition is total, which can be done without types.
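For example, here is what the zeros definition of section 1.2 looks like after this inference phase (a sketch: the exponents are the priorities read off a parity game where stream is even and nat is odd, as in section 2.2):

    val zeros : stream^0(nat^1)
      | zeros = { Head^0 = Zero^1 ; Tail^0 = zeros }

Every infinite branch of the unfolded value crosses Tail^0 infinitely often and Zero^1 only finitely often, so its principal priority is 0 (even): the value is total.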

3. Call-Graph and Totality

3.1. Introduction

A function f between domains with totality is total if it sends total elements to total elements. Equivalently, it is total if whenever y = f(x), either x is non-total, or y is total. As a result, checking that a recursive definition of f is total requires looking at the arguments that f “consumes” and at the result that f “constructs”. The two are not independent, as shown by the following function:

    val sums : stream(list(nat)) -> stream(nat)
      | sums { Head = [] ; Tail = s } = { Head = 0 ; Tail = sums s }
      | sums { Head = [n] ; Tail = s } = { Head = n ; Tail = sums s }
      | sums { Head = n::m::l ; Tail = s }
              = sums { Head = (add n m)::l ; Tail = s }
where we use the following abbreviations:

  • [] for Nil,

  • [a] for Cons { Fst = a ; Snd = Nil },

  • a::l for Cons { Fst = a ; Snd = l }.

This function maps a stream of lists of natural numbers into a stream of natural numbers by computing the sums of all the lists. It does so by accumulating the partial sums of a given list in its first element. This function is productive (it constructs something) because the third clause cannot occur infinitely many times consecutively (it consumes something).

As we saw in the previous section, a total value of type T is a winning strategy for a parity game of T. The totality checker thus needs to check something like:

for all pairs (x, f(x)) in the graph of the recursive function f,

  • either all the infinite branches of f(x) have an even principal priority,

  • or x contains an infinite branch whose principal priority is odd.

The analysis is local: it only looks at one mutually recursive definition. The previously defined functions are assumed to be total, but nothing more is known about them. For that reason, we only look for infinite branches that come from the current definition.

The first step of the analysis is to extract some information from the clauses of the definition. The resulting structure is called the call-graph (section 3.3). The analysis then looks at infinite paths in the call-graph. In order to use the size-change principle [LJBA01, Hyv14], care must be taken to restrict the information kept along calls to a finite set. This is done by introducing a notion of approximation and by collapsing calls to bounded information.

Formalizing this condition and proving that it is correct will be done in the last section (section 4). We will for the moment only give a high-level description of how one can implement the test, using both the concepts developed in the previous section and the size-change principle.

Simplifying assumptions

In order to reduce the notational overhead, we assume all the functions have a single argument. As far as expressivity is concerned, this is not a real restriction: we can introduce ad-hoc codata (product types) to uncurry all functions, as sketched below. This restriction is of course not enforced in the implementation. Dealing with multiple arguments would require using substitutions instead of terms [Hyv14].
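For instance, here is a sketch of an uncurried Ackermann function using the prod codata of section 1.2 and the same numeral sugar as before (the name ack' and this packaging are illustrative, not taken from the implementation):

    val ack' : prod(nat, nat) -> nat
      | ack' { Fst = 0   ; Snd = n }   = n+1
      | ack' { Fst = m+1 ; Snd = 0 }   = ack' { Fst = m ; Snd = 1 }
      | ack' { Fst = m+1 ; Snd = n+1 } = ack' { Fst = m ; Snd = ack' { Fst = m+1 ; Snd = n } }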

3.2. Interpreting Calls

A single clause may contain several recursive calls. For example, the hypothetical clause

      | f (C1 { D1 = x ; D2 = C2 y })  =  C3 (f (C2 (f (C1 y))))
contains two recursive calls. It is clear that the final result starts with C3, constructed above the leftmost recursive call. It is also clear that the rightmost recursive call uses part of the initial argument. It is however unclear whether the rightmost call contributes anything to the final result, or whether the leftmost call uses part of the initial argument. For each recursive call in a recursive definition, we keep some information about

  • the output branch above this recursive call,

  • the way the argument of the call is constructed from the initial argument.

The information about the argument of the recursive call uses the same technology that was used for checking termination [Hyv14]. The information about the output is simpler: we only record the number of constructors above the recursive call. A recursive call is guarded simply when it occurs under at least one constructor / record. (Refer to the paragraph “Rewrite rules vs pattern matching” for an idea of what could be done without this simplification.) Each call will thus be interpreted by a rule of the form

    f x  →  w  f v

where w is a weight and v is a generalized pattern with free variable x. For example, the interpretation of the previous clause will consist of two calls:

  • f x → ⟨-1⟩ f (C2 ⟨∞⟩ x) for the leftmost call:

    • ⟨-1⟩: this call is guarded by one constructor (C3),

    • C2 ⟨∞⟩ x: the argument starts with constructor C2, and nothing is known about what lies below it (the result of the inner recursive call).

  • f x → ⟨∞⟩ f (C1 C2⁻ .D2 C1⁻ x) for the rightmost call:

    • ⟨∞⟩: we don't know what this call contributes to the result,

    • C1 C2⁻ .D2 C1⁻ x: the argument starts with C1, and “C2⁻ .D2 C1⁻ x” represents the y from the definition: it is obtained from the argument x by removing C1 (written C1⁻), projecting on field D2 (written .D2) and removing C2 (written C2⁻).

Since we want to check totality using parity games, counting the constructors is not enough: we also need to remember their priorities. The weights are thus more complex than plain integers.

Define:

  1. ℤ∞ = ℤ ∪ {∞}, with the obvious addition and order,

  2. 𝕎, the set of weights, is generated by

       w ::= 0 | ⟨z⟩_p | w + w

     where z ∈ ℤ∞ and each p comes from a finite set of natural numbers called priorities. This set is quotiented by

     • associativity and commutativity of +,

     • 0 + w = w,

     • ⟨z⟩_p + ⟨z′⟩_p = ⟨z + z′⟩_p,

     • the equivalence generated from the order given below.

  3. The order ≤ on weights is generated from

     • compatibility with +,

     • 0 ≤ ⟨0⟩_p,

     • ⟨z⟩_p ≤ ⟨z′⟩_p whenever z ≤ z′ in ℤ∞,

     • ⟨z⟩_p ≤ ⟨∞⟩. (In particular, every weight is below ⟨∞⟩.)

“⟨∞⟩” is a synonym for ⟨∞⟩_q where q is the maximal priority involved locally, and we usually write w for an arbitrary weight. The symbols ⟨ and ⟩ are also loosely used as grouping for weights. The intuition is that ⟨1⟩_p represents anything that adds at most one constructor of priority p. Similarly, ⟨-1⟩_p represents anything that removes at least one constructor of priority p. Note that ⟨0⟩_p is different from 0: the former could add one constructor of priority p and remove another, while the latter does nothing. The weight ⟨∞⟩_p is not very different from the weight ⟨∞⟩ and can be identified with it in the implementation.

Generalized patterns are given by the grammar

    v ::= x | 𝟘 | f v | ⟨w⟩ v | C^p v | { D₁^p₁ = v₁ ; … ; Dₖ^pₖ = vₖ } | .D^p v | C^p⁻ v | v·v

where w is a weight, x is a formal parameter and each f belongs to a finite set of function names. As previously, C and D come from a finite set of constructor and destructor names, and their priorities come from a finite set of natural numbers. They are respectively odd and even. The product is implicitly commutative, idempotent and associative. Here are some points worth remembering.

  • x is the parameter (unique in our case) of the definition.

  • Priorities are associated to instances of constructors: the list constructor may appear in the parity game with different priorities (for example when dealing with lists of streams of lists)! We write C^p to give a name to the corresponding priority, and just C when the priority is not important.

  • In ML style, the term C^p⁻ v is similar to the partial “match v with C y -> y” with a single pattern. This is used to deconstruct a value.

  • 𝟘 represents runtime errors, which we can ignore in our analysis because typing forbids them. It propagates through values.

  • The product is used to approximate records: {Fst=Succ x; Snd=Succ x} can for example be approximated by a product of approximations of its branches. All the branches of an element approximated by a product must be approximated by one of the factors.

A call from g to f is of the form “g x → w f v”, where w is a weight and v is a generalized pattern. For symmetry reasons, output weights are counted negatively: adding one constructor on the output will be represented as ⟨-1⟩. Just like removing some constructor in a recursive argument is “good” (think structural recursion), adding a constructor on the result is “good” (think guardedness). The two are thus counted similarly.

The next few pages recall, without proofs, the notions and results that are useful for implementing the totality criterion. The proofs will be given in the next section.

We define a reduction relation on generalized patterns, with rules organized in four groups. Group (1) of reductions corresponds to the operational semantics of the language, group (3) deals with approximations in a way that is compatible with group (1), and groups (0) and (2) deal with errors. In particular, in group (2):

  • the first 3 reductions are forbidden by type checking (we cannot project a constructor, or pattern match on a record),

  • the last reduction is forbidden by the operational semantics (if the argument starts with a different constructor, evaluation doesn't take the C clause).
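As an illustration, the two reductions of group (1) that mirror actual evaluation can be written as follows (a sketch, using the C^p⁻ and .D^p notation introduced above):

    C^p⁻ (C^p v)                  ⇝  v        -- deconstructing a constructor
    .Dᵢ^p { … ; Dᵢ^p = vᵢ ; … }   ⇝  vᵢ       -- projecting a record on a field

A typical group (2) reduction sends the mismatch C^p⁻ (C′^q v), with C ≠ C′, to the error 𝟘.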

Looking at group (3) of reductions, it is clear that weights absorb all constructors on their right and all destructors on their left. As a result, non-𝟘 normal forms have a very specific shape:

  • a tree of constructors (akin to a value),

  • followed by (linear) branches made of weights and destructors (⟨w⟩, .D or C⁻) and ending with the parameter x.

  1. This reduction is confluent and strongly normalizing. We write nf(v) for the normal form of v.

  2. The normal forms are either 𝟘, or generated by the grammar

       t ::= b | C^p t | { D₁^p₁ = t₁ ; … ; Dₖ^pₖ = tₖ }
       b ::= x | ⟨w⟩ b | .D^p b | C^p⁻ b

There is a notion of approximation: for example, C2⁻ .D2 C1⁻ x can be approximated by ⟨-2⟩ x, where ⟨-2⟩ means that “at least 2 constructors were removed”. The relation ⊑ (read “u ⊑ v” as “v approximates u”) is defined with:

  • ⊑ is contextual: if u ⊑ v then Γ[u] ⊑ Γ[v],

  • ⊑ is compatible with reduction: if u reduces to u′ then u and u′ have the same approximations, and in particular, u ⊑ v iff nf(u) ⊑ v,

  • ⟨∞⟩ x is a greatest element,

  • for all weights w ≤ w′, we have ⟨w⟩ u ⊑ ⟨w′⟩ u.

This order is extended to calls: “g x → w f u” ⊑ “g x → w′ f u′” whenever w ≤ w′ and u ⊑ u′.

3.3. Call-Graph and Composition

The next definition is very verbose, but the example following it should make it clearer. From a recursive definition, we construct its (oriented) call-graph with:

  • vertices are the names of the functions mutually defined,

  • arcs from f to g are given by all the calls “f x → w g v” extracted from the clauses f p = u of the definition, where w and v are defined with:

    1. Given a pattern p, define the substitution σ_p, sending each variable of p to a generalized pattern in the formal parameter x:

      • σ_x = [x := x],

      • σ_{C p} = σ_p ∘ [x := C⁻ x] and σ_{{D₁=p₁; …; Dₖ=pₖ}} = (σ_{p₁} ∘ [x := .D₁ x]) ∪ … ∪ (σ_{pₖ} ∘ [x := .Dₖ x]),

      where ∘ represents composition of substitutions.

    2. From each right-hand side u of a clause f p = u, every occurrence of a recursively defined g applied to an argument t gives a call f x → w g t′[σ_p], where

      1. the weight w records, negatively, the constructors and records above this occurrence of g in u: one ⟨-1⟩_p for each such constructor / record of priority p,

      2. t′ is obtained from t by replacing each function application h s (recursive or otherwise) by ⟨∞⟩ s,

      3. the name g could also be a parameter coming from the left pattern.

An example is probably more informative. Consider the definition of sums from section 3.1, with explicit priorities:

    val sums : stream^0(list^1(nat^1)) -> stream^0(nat^1)
      | sums { Head^0 = []^1 ; Tail^0 = s } = { Head^0 = 0 ; Tail^0 = sums s }
      | sums { Head^0 = [n]^1 ; Tail^0 = s } = { Head^0 = n ; Tail^0 = sums s }
      | sums { Head^0 = n ::^1 m ::^1 l ; Tail^0 = s }
              = sums { Head^0 = (add n m) ::^1 l ; Tail^0 = s }

The three associated calls will be sums x → ⟨-1⟩_0 sums (.Tail^0 x) (twice) and

    sums x → 0 sums { Head^0 = Cons^1 { Fst = ⟨∞⟩ x ; Snd = .Snd Cons^1⁻ .Snd Cons^1⁻ .Head^0 x } ; Tail^0 = .Tail^0 x }

(Recall that “::” is an abbreviation for “Cons {Fst = …; Snd = …}”.)

A call gives some information about one recursive call: depth of the recursive call, and part of the shape of the original argument. We can compose them:

    val length : list(x) -> nat
      | length Nil = Zero
      | length (Cons{Fst=x; Snd=l}) = Succ (length l)
has a single call: “length l → ⟨-1⟩ length (.Snd Cons⁻ l)”. Composing this call with itself will give length l → ⟨-2⟩ length (.Snd Cons⁻ .Snd Cons⁻ l). More generally, the composition of the calls g x → w f u and h x → w′ g v is defined as h x → w + w′ f nf(u[x := v]).

Some compositions are automatically ignored: the call “f x → 0 f (C2 C1⁻ x)”, arising from

    val f (C1 x) = f (C2 x)
      | ...

gives 𝟘 when composed with itself. This is because “C1⁻” doesn't match “C2”.

Totality can sometimes be checked on the call-graph. Since we haven't yet formalized infinite compositions, this condition is for the moment expressed in a very informal way.

[Informal Totality Condition] A call-graph is total if, along all infinite paths,

  1. either the output weight obviously has an even principal priority,

  2. or one of the branches in the argument obviously has an odd principal priority.

Two such examples are the call-graphs with a single arc:

  1. f x → ⟨-1⟩_2 + ⟨-1⟩_1 f x: the prefixes of the only infinite path give the compositions

    • f x → ⟨-2⟩_2 + ⟨-2⟩_1 f x

    • f x → ⟨-3⟩_2 + ⟨-3⟩_1 f x, …

    The recursive calls are guarded by an increasing number of constructors of priority 2 (coinductive). Some inductive constructors are also added (recall that constructors are counted negatively for the output), but those have smaller priority. The limit will construct a term with infinitely many constructors of priority 2 and infinitely many constructors of priority 1: this is a total value.

  2. f x → ⟨∞⟩ f (Succ⁻ x): prefixes of the only infinite path in this graph give

    • f x → ⟨∞⟩ f (Succ⁻ Succ⁻ x)

    • f x → ⟨∞⟩ f (Succ⁻ Succ⁻ Succ⁻ x), …

    Such terms only apply to arguments having enough Succ constructors (of priority 1) and no other constructors. The limit thus only applies to arguments having infinitely many constructors of priority 1 (and no others): that is not possible for total values!

[informal] If a call-graph is total, then the original recursive definition defines total functions.

3.4. Collapsing

The totality condition on call-graphs involves infinite paths. It is natural to try using the size-change principle to get a decidable approximation of this condition. For that, we need a finite call-graph that is closed under composition (its transitive closure). This is impossible in general: the call of length above can be composed with itself many times to get

    length l → ⟨-n⟩ length (.Snd Cons⁻ … .Snd Cons⁻ l)

Both the output weight and the recursive argument of the call can grow arbitrarily. To prevent that, we collapse the calls to bound their depth and weights [Hyv14].

Given a strictly positive bound B, the weight collapsing function acts on terms by replacing each weight ⟨z⟩_p by ⟨⌈z⌉_B⟩_p, where ⌈z⌉_B is z itself when -B < z < B, is -B when z ≤ -B, and is ∞ when z ≥ B.

Ensuring a bounded depth is more complex: we introduce some “⟨0⟩” weights below D constructors and above D destructors. Because of the reduction, those new weights will absorb the constructors with depth greater than D and the destructors with “inverse depth” greater than D. For example, collapsing C1 (C2 (C3 x)) at depth 2 gives C1 C2 ⟨w⟩ x, where the weight ⟨w⟩ accounts for the absorbed constructor C3.

Given a positive bound D, the height collapsing function acts on terms by integrating constructors of depth greater than D, and destructors of inverse depth greater than D, into weights. Two remarks:

  • the clauses of its definition are not disjoint and only the first appropriate one is used,

  • a normal form is computed along the way to ensure that the clauses cover all cases (since weights absorb constructors on their right, what remains on the right of a weight doesn't contain constructors).

We can extend collapsing to calls: if σ is a call f x → w g v, we put ⌈σ⌉ = f x → ⌈w⌉ g ⌈v⌉, collapsing both the weights and the depth. We then have [Hyv14]: for any call σ,

  • σ ⊑ ⌈σ⌉,

  • ⌈⌈σ⌉⌉ = ⌈σ⌉.

Given some bounds B and D, collapsed composition is defined as the collapse of the plain composition. Since the bounds are fixed, we usually write it simply σ ∘ τ. Unfortunately, this composition is not associative (except when the bounds are infinite). For example, with depth bound 2, there are calls σ and τ for which σ ∘ (τ ∘ τ) and (σ ∘ τ) ∘ τ differ. The next property can be seen as a kind of weak associativity. If σ, τ and υ are calls, and if ρ₁ and ρ₂ are the results of computing σ ∘ τ ∘ υ in two different ways, then ρ₁ and ρ₂ are compatible, written ρ₁ ≍ ρ₂. This means that there is some ρ such that ρ ⊑ ρ₁ and ρ ⊑ ρ₂.

Proof.

Take ρ to be the collapse of the composition computed without intermediate collapsing. ∎

3.5. Size-Change Principle

The initial call-graph 𝒢 of a definition is finite, and collapsed composition ensures that there exists a finite transitive closure of this initial call-graph: starting with 𝒢₀ = 𝒢, we define the new edges of 𝒢ₙ₊₁ with:

if σ and τ are edges from f to g and from g to h in 𝒢ₙ, then τ ∘ σ is a new edge from f to h in 𝒢ₙ₊₁.

Finiteness of the set of bounded terms guarantees that this sequence stabilizes on some graph, written 𝒢*.
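In equations (a sketch; 𝒢ₙ and 𝒢* are the notations introduced just above, and ∘ is the collapsed composition of section 3.4):

    𝒢₀    =  𝒢
    𝒢ₙ₊₁  =  𝒢ₙ  ∪  { τ ∘ σ  :  σ an edge from f to g and τ an edge from g to h in 𝒢ₙ }
    𝒢*    =  𝒢ₙ   for n large enough

Since the edges are calls collapsed at bounds B and D, there are only finitely many possible edges between two given function names, so the increasing sequence must stabilize.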

To simplify the statement of the size-change totality principle, we first define the p-norm of a finite branch inside a generalized pattern: it counts the constructors and destructors of priority p. Given a priority p and a branch β in a generalized pattern, the p-norm of β, written ‖β‖_p, is defined with:

  • ‖x‖_p = 0 and ‖𝟘‖_p = 0,

  • ‖C^q β‖_p = 1 + ‖β‖_p if q = p, and ‖C^q β‖_p = ‖β‖_p if q ≠ p,

  • ‖.D^q β‖_p = 1 + ‖β‖_p if q = p, and ‖.D^q β‖_p = ‖β‖_p if q ≠ p,

  • ‖C^q⁻ β‖_p = ‖β‖_p - 1 if q = p, and ‖C^q⁻ β‖_p = ‖β‖_p if q ≠ p,

  • ‖⟨w⟩ β‖_p = w_p + ‖β‖_p, where w_p is the component of priority p in w.