A Type and Scope Safe Universe of Syntaxes with Binding: Their Semantics and Proofs

29 January 2020 · Guillaume Allais et al.

Almost every programming language's syntax includes a notion of binder and corresponding bound occurrences, along with the accompanying notions of α-equivalence, capture-avoiding substitution, typing contexts, runtime environments, and so on. In the past, implementing and reasoning about programming languages required careful handling to maintain the correct behaviour of bound variables. Modern programming languages include features that enable constraints like scope safety to be expressed in types. Nevertheless, the programmer is still forced to write the same boilerplate over again for each new implementation of a scope safe operation (e.g., renaming, substitution, desugaring, printing, etc.), and then again for correctness proofs. We present an expressive universe of syntaxes with binding and demonstrate how to (1) implement scope safe traversals once and for all by generic programming; and (2) derive properties of these traversals by generic proving. Our universe description, generic traversals and proofs, and our examples have all been formalised in Agda and are available in the accompanying material online at https://github.com/gallais/generic-syntax.







1 Introduction

In modern typed programming languages, programmers writing embedded DSLs (hudak1996building) and researchers formalising them can now use the host language’s type system to help them. Using Generalised Algebraic Data Types (GADTs) or the more general indexed families of Type Theory (dybjer1994inductive) to represent syntax, programmers can statically enforce some of the invariants in their languages. For example, managing variable scope is a popular use case in LEGO, Idris, Coq, Agda and Haskell (altenkirch1999monadic; DBLP:conf/gpce/BradyH06; DBLP:journals/jar/HirschowitzM12; DBLP:conf/icfp/KeuchelJ12; BachPoulsen; plfa2018; eisenbergsticth18) as directly manipulating raw de Bruijn indices is notoriously error-prone. Solutions have been proposed that range from enforcing well scopedness of variables to ensuring full type correctness. In short, these techniques use the host languages’ types to ensure that “illegal states are unrepresentable”, where illegal states correspond to ill scoped or ill typed terms in the object language.

Despite the large body of knowledge in how to use types to define well formed syntax (see the related work in Section 10), it is still necessary for the working DSL designer or formaliser to redefine essential functions like renaming and substitution for each new syntax, and then to reprove essential lemmas about those functions. To reduce the burden of such repeated work and boilerplate, in this paper we apply the methodology of datatype-genericity to programming and proving with syntaxes with binding.

To motivate our approach, let us look at the formalisation of an apparently straightforward program transformation: the inlining of let-bound variables by substitution, together with a soundness lemma proving that reductions in the source language can be simulated by reductions in the target one. There are two languages: the source (S), which has let-bindings, and the target (T), which only differs in that it does not:

Breaking the task down, an implementer needs to define an operational semantics for each language, define the program transformation itself, and prove a correctness lemma that states each step in the source language is simulated by zero or more steps of the transformed terms in the target language. In the course of doing this, they will discover that there is actually a large amount of work:

  1. To define the operational semantics, one needs to define substitution, and hence renaming. This needs to be done separately for both the source and target languages, even though they are very similar;

  2. In the course of proving the correctness lemma, one needs to prove eight lemmas about the interactions of renaming, substitution, and transformation that are all remarkably similar, but must be stated and proved separately (e.g., as observed by Benton, Hur, Kennedy and McBride (benton2012strongly)).

Even after doing all of this work, they have only a result for a single pair of source and target languages. If they were to change their languages S or T, they would have to repeat the same work all over again (or at least do a lot of cutting, pasting, and editing).

The main contribution of this paper is this: using the universe of syntaxes with binding we present in this paper, we are able to solve this repetition problem once and for all.

Content and Contributions.

To introduce the basic ideas that this paper builds on, we start with primers on scoped and sorted terms (Section 2), scope and sort safe programs acting on them (Section 3), and programmable descriptions of data types (Section 4). These introductory sections help us build an understanding of the problem at hand as well as a toolkit that leads us to the novel content of this paper: a universe of scope safe syntaxes with binding (Section 5) together with a notion of scope safe semantics for these syntaxes (Section 6). This gives us the opportunity to write generic implementations of renaming and substitution (Section 6.2), a generic let-binding removal transformation (generalising the problem stated above) (Section 7.5), and normalisation by evaluation (Section 7.7). Further, we show how to construct generic proofs by formally describing what it means for one semantics to simulate another (Section 9.2), or for two semantics to be fusible (Section 9.3). This allows us to prove the lemmas required above for renaming, substitution, and desugaring of let binders generically, for every syntax in our universe.

Our implementation language is Agda (norell2009dependently). However, our techniques are language independent: any dependently typed language at least as powerful as Martin-Löf Type Theory (martin1982constructive) equipped with inductive families (dybjer1994inductive) such as Coq (Coq:manual), Lean (DBLP:conf/cade/MouraKADR15) or Idris (brady2013idris) ought to do.

Changes with respect to the ICFP 2018 version

This paper is a revised and expanded version of a paper of the same title that appeared at ICFP 2018. This extended version of the paper includes many more examples of the use of our universe of syntax with binding for writing generic programs in Section 7: pretty printing with human readable names (Section 7.1), scope checking (Section 7.2), type checking (Section 7.3), elaboration (Section 7.4), inlining of single use let-bound expressions (shrinking reductions) (Section 7.6), and normalisation by evaluation (Section 7.7). We have also included a discussion of how to define generic programs for deciding equality of terms. Additionally, we have elaborated our descriptions and examples throughout, and expanded our discussion of related work in Section 10.

2 A Primer on Scope And Sort Safe Terms

From Inductive Types to Inductive Families for Abstract Syntax

A reasonable way to represent the abstract syntax of the untyped λ-calculus in a typed functional programming language is to use an inductive type:

We have used de Bruijn (de1972lambda) indices to represent variables by the number of binders one has to pass through to reach the binding occurrence. The de Bruijn representation has the advantage that terms are automatically represented up to α-equivalence. If the index goes beyond the number of binders enclosing it, then we assume that it is referring to some context, left implicit in this representation.
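The elided inductive type can be sketched in Haskell (the paper's own code is in Agda); the names `Lam`, `V`, `A`, `L` are ours, chosen for the sketch:

```haskell
-- Raw de Bruijn terms for the untyped λ-calculus: nothing stops us
-- from building ill scoped terms such as (V 42) in an empty context.
data Lam = V Int        -- variable, as a de Bruijn index
         | A Lam Lam    -- application
         | L Lam        -- λ-abstraction, binding index 0 in its body
  deriving (Show, Eq)

-- The identity function λx.x and the self-application λx.x x
identity, delta :: Lam
identity = L (V 0)
delta    = L (A (V 0) (V 0))
```

Nothing in the type prevents dangling indices: the vigilance the next paragraph describes is entirely the programmer's burden.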

This representation works well enough for writing programs, but the programmer must constantly be vigilant to guard against the accidental construction of ill scoped terms. The implicit context that accompanies each represented term is prone to being forgotten or muddled with another, leading to confusing behaviour when variables either have dangling pointers or point to the wrong thing.

To improve on this situation, previous authors have proposed to use the host language's type system to make the implicit context explicit, and to enforce well scopedness of variables. Scope safe terms follow the discipline that every variable is either bound by some binder or is explicitly accounted for in a context. Bellegarde and Hook (BELLEGARDE1994287), Bird and Paterson (bird_paterson_1999), and Altenkirch and Reus (altenkirch1999monadic) introduced the classic presentation of scope safety using inductive families (dybjer1994inductive) instead of plain inductive types to represent abstract syntax. Indeed, using a family indexed by a Set, we can track scoping information at the type level. The empty Set represents the empty scope. The Maybe type constructor extends the running scope with an extra variable.

Implicit generalisation of variables in Agda

The careful reader may have noticed that we use a seemingly out-of-scope variable X of type Set. The latest version of Agda allows us to declare variables that the system should implicitly quantify over if it happens to find them used in types. This allows us to lighten the presentation by omitting a large number of prenex quantifiers. The reader will hopefully be familiar enough with ML-style polymorphic types that this will seem natural to them.

The Lam type is now a family of types, indexed by the set of variables in scope. Thus, the context for each represented term has been made visible to the type system, and the types enforce that only variables that have been explicitly declared can be referenced in the ‘var constructor. We have made illegal terms unrepresentable.

Since Lam is defined to be a function Set → Set, it makes sense to ask whether it is also a functor and a monad. Indeed it is, as Altenkirch and Reus showed. The functorial action corresponds to renaming, the monadic ‘return’ corresponds to the use of variables (the ‘var constructor), and the monadic ‘bind’ corresponds to substitution. The functor and monad laws correspond to well known properties from the equational theories of renaming and substitution. We will revisit these properties, for our whole universe of syntax with binding, in Section 9.3.
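A Haskell rendering of this nested-datatype encoding makes the functor/monad reading concrete; this is a sketch in the style of Bird and Paterson, with names of our own choosing:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Scope-safe untyped terms: the parameter is the type of free variables,
-- and Abs extends it with one extra inhabitant (Nothing) for the bound one.
data Lam a = Var a | App (Lam a) (Lam a) | Abs (Lam (Maybe a))
  deriving (Show, Eq, Functor)

-- fmap is renaming; the monadic bind is substitution.
bind :: Lam a -> (a -> Lam b) -> Lam b
bind (Var x)   f = f x
bind (App t u) f = App (bind t f) (bind u f)
bind (Abs b)   f = Abs (bind b f')
  where f' Nothing  = Var Nothing       -- the bound variable is untouched
        f' (Just x) = fmap Just (f x)   -- free variables get weakened
```

The `Abs` case shows the pattern that recurs throughout the paper: going under a binder forces us to weaken (here, `fmap Just`) the values being substituted in.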

A Typed Variant of Altenkirch and Reus’ Calculus

There is no reason to restrict this technique to inductive families indexed by Set. The more general case of inductive families over an arbitrary index type can be endowed with similar functorial and monadic operations by using Altenkirch, Chapman and Uustalu’s relative monads (Altenkirch2010; JFR4389).

We pick as our index type the category whose objects are inhabitants of List I (I is a parameter of the construction) and whose morphisms are thinnings (permutations that may forget elements, see Section 7). Values of type List I are intended to represent the list of the sorts (or kinds, or types, depending on the application) of the de Bruijn variables in scope. We can recover an unsorted approach by picking I to be the unit type. Given this sorted setting, our functors take an extra argument corresponding to the sort of the expression being built. This is captured by the large type I ─Scoped, defined as I ─Scoped = I → List I → Set. We use Agda’s mixfix operator notation where underscores denote argument positions.

To lighten the presentation, we exploit the observation that the current scope is either passed unchanged to subterms (e.g. in the application case) or extended (e.g. in the λ-abstraction case) by introducing combinators to build indexed types. We conform to the convention (see e.g. martin1982constructive) of mentioning only context extensions when presenting judgements. That is to say that we aim to write sequents with an implicit ambient context. Concretely: we would rather use the rule appᵢ than appₑ as the inference rule for application in STLC.

    f : σ → τ    t : σ
    ──────────────────── appᵢ
         f t : τ

    Γ ⊢ f : σ → τ    Γ ⊢ t : σ
    ──────────────────────────── appₑ
         Γ ⊢ f t : τ

In this discipline, the turnstile is used in rules which are binding fresh variables. It separates the extension applied to the ambient context on its left and the judgment that lives in the thus extended context on its right. Concretely: we would rather use the rule lamᵢ than lamₑ as the inference rule for λ-abstraction in STLC.

    x : σ ⊢ b : τ
    ──────────────── lamᵢ
    λx.b : σ → τ

    Γ, x : σ ⊢ b : τ
    ──────────────────── lamₑ
    Γ ⊢ λx.b : σ → τ

This observation that an ambient context is either passed around as is or extended for subterms is critical to our whole approach to syntax with binding, and will arise again in our generic formulation of syntax traversals in Section 6.

Figure 1: Combinators to build indexed Sets

We lift the function space pointwise with _⇒_, silently threading the underlying scope. The _⊢_ makes explicit the adjustment made to the index by a function, a generalisation of the idea of extension. We write f ⊢ T where f is the adjustment and T the indexed Set it operates on. Although it may seem surprising at first to define binary infix operators as having arity three, they are meant to be used partially applied, surrounded by ∀[_] which turns an indexed Set into a Set by implicitly quantifying over the index. Lastly, const is the constant combinator, which ignores the index.

We make _⇒_ associate to the right as one would expect and give it the highest precedence level as it is the most used combinator. These combinators lead to more readable type declarations. For instance, the compact expression ∀[ (const P ⇒ s ⊢ Q) ⇒ R ] desugars to the more verbose type ∀ {i} → (P → Q (s i)) → R i.

As the context argument comes second in the definition of _─Scoped, we can readily use these combinators to thread, modify, or quantify over the scope when defining such families, as for example in Figure 2.


Figure 2: Scope and Kind Aware de Bruijn Indices

The inductive family Var represents well scoped and well sorted de Bruijn indices. Its z (for zero) constructor refers to the nearest binder in a non-empty scope. The s (for successor) constructor lifts a variable in a given scope to the extended scope where an extra variable has been bound. Both of the constructors’ types have been written using the combinators defined above. They respectively normalise to:

z : ∀ {σ Γ} → Var σ (σ :: Γ)   s : ∀ {σ τ Γ} → Var σ Γ → Var σ (τ :: Γ)
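The Var family translates directly to a Haskell GADT over a type-level context; a sketch with our own names (`Z`, `S`, `toInt`), not the paper's code:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, PolyKinds, TypeOperators #-}

-- Well scoped and well sorted de Bruijn indices into a type-level
-- context (a list of sorts): the analogue of the Agda Var family.
data Var (s :: k) (ctx :: [k]) where
  Z :: Var s (s ': ctx)               -- the nearest binder
  S :: Var s ctx -> Var s (t ': ctx)  -- skip past one binder

-- Forgetting the indices recovers the underlying raw de Bruijn index.
toInt :: Var s ctx -> Int
toInt Z     = 0
toInt (S v) = 1 + toInt v
```

Only indices that point at an actual entry of the context type-check, which is exactly the scope safety the family is meant to enforce.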

We will reuse the Var family to represent variables in all the syntaxes defined in this paper.

Figure 3: Simple Types and Intrinsically Typed definition of STLC

The Type ─Scoped family Lam is Altenkirch and Reus’ intrinsically typed representation of the simply typed λ-calculus, where Type is the Agda type of simple types. We can readily write well scoped-and-typed terms such as apply, a closed term of type ((σ ‘→ τ) ‘→ (σ ‘→ τ)) ({- and -} delimit comments meant to help the reader see which binder the de Bruijn indices are referring to):
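The intrinsically typed calculus can be sketched in Haskell with GADTs and promoted types; `Ty`, `Tm`, and the `size` helper are our own illustrative names:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, PolyKinds, TypeOperators #-}

-- Simple types, promoted to the kind level.
data Ty = Base | Ty :-> Ty

data Var (s :: Ty) (ctx :: [Ty]) where
  Z :: Var s (s ': ctx)
  S :: Var s ctx -> Var s (t ': ctx)

-- Intrinsically typed terms: only well scoped, well typed syntax exists.
data Tm (ctx :: [Ty]) (s :: Ty) where
  V :: Var s ctx -> Tm ctx s
  A :: Tm ctx (s ':-> t) -> Tm ctx s -> Tm ctx t
  L :: Tm (s ': ctx) t -> Tm ctx (s ':-> t)

-- apply: a closed term of type (σ → τ) → (σ → τ)
apply :: Tm '[] ((s ':-> t) ':-> (s ':-> t))
apply = L {- f -} (L {- x -} (A (V (S Z)) (V Z)))

size :: Tm ctx s -> Int
size (V _)   = 1
size (A f t) = 1 + size f + size t
size (L b)   = 1 + size b
```

Attempting to apply `V Z` to itself, say, is simply a type error: illegal terms are unrepresentable.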


3 A Primer on Type and Scope Safe Programs

The scope- and type- safe representation described in the previous section is naturally only a start. Once the programmer has access to a good representation of the language they are interested in, they will want to write programs manipulating terms. Renaming and substitution are the two typical examples that are required for almost all syntaxes. Now that well typedness and well scopedness are enforced statically, all of these traversals have to be implemented in a type and scope safe manner. These constraints show up in the types of renaming and substitution defined in Figure 4.

Figure 4: Type and Scope Preserving Renaming and Substitution

We have intentionally hidden technical details behind some auxiliary definitions left abstract here: var and extend. Their implementations are distinct for ren and sub but they serve the same purpose: var is used to turn a value looked up in the evaluation environment into a term and extend is used to alter the environment when going under a binder. This presentation highlights the common structure between ren and sub which we will exploit later in this section, particularly in Section 3.2 where we define an abstract notion of semantics and the corresponding generic traversal.
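For the nested-datatype Lam of Section 2, this common structure can be captured in Haskell by a small record in the style of McBride's syntactic "kits" (a sketch under our own names, not the paper's code): a notion of value `v`, with `kVar` and `kTrm` playing the role of var and `kWk` the role of extend:

```haskell
{-# LANGUAGE DeriveFunctor, RankNTypes #-}

data Lam a = Var a | App (Lam a) (Lam a) | Abs (Lam (Maybe a))
  deriving (Show, Eq, Functor)

-- What renaming and substitution have in common: making a value out of
-- a variable, turning a value into a term, and weakening under a binder.
data Kit v = Kit
  { kVar :: forall a. a -> v a
  , kTrm :: forall a. v a -> Lam a
  , kWk  :: forall a. v a -> v (Maybe a)
  }

trav :: Kit v -> (a -> v b) -> Lam a -> Lam b
trav k f (Var x)   = kTrm k (f x)
trav k f (App t u) = App (trav k f t) (trav k f u)
trav k f (Abs b)   = Abs (trav k f' b)
  where f' Nothing  = kVar k Nothing
        f' (Just x) = kWk k (f x)

newtype I a = I a   -- identity wrapper: values are bare variables

rename :: (a -> b) -> Lam a -> Lam b
rename f = trav (Kit I (\(I x) -> Var x) (\(I x) -> I (Just x))) (I . f)

subst :: (a -> Lam b) -> Lam a -> Lam b
subst = trav (Kit Var id (fmap Just))
```

One traversal, two one-line instances: this is the refactoring that Section 3.2 generalises from terms-as-results to arbitrary computations.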

3.1 A Generic Notion of Environments

Both renaming and substitution are defined in terms of environments. We call a Γ-environment an environment that associates a value to each variable in Γ. This informs our notation choice: we write ((Γ ─Env) 𝓥 Δ) for an environment that associates to every entry in Γ a value of type family 𝓥 (variables for renaming, terms for substitution) well scoped and typed in Δ. Formally, we have the following record structure (using a record helps Agda’s type inference reconstruct the type family 𝓥 of values for us):


Figure 5: Well Typed and Scoped Environments of Values
Record syntax in Agda

As with (all) other record structures defined in this paper, we are able to profit from Agda’s copattern syntax, as introduced in (abel2013copatterns) and showcased in (thibodeau2016case). That is, when defining an environment ρ, we may either use the constructor pack, packaging a function r as an environment ρ = pack r, or else define ρ in terms of the underlying function obtained from it by projecting out the (in this case, unique) lookup field, as lookup ρ = r. Examples of definitions in this style are given in Figure 6 below, and throughout the rest of the paper. A value of a record type with more than one field requires each of its fields to be given, either by a named constructor (or else Agda’s default record syntax), or in copattern style. By analogy with record/object syntax in other languages, Agda further supports ‘dot’ notation, so that an equivalent definition here could be expressed as ρ .lookup = r.

We can readily define some basic building blocks for environments in Figure 6. The empty environment (ε) is implemented by remarking that there can be no variable of type (Var σ []) and to correspondingly dismiss the case with the impossible pattern (). The function _∙_ extends an existing Γ-environment with a new value of type σ thus returning a (σ ∷ Γ)-environment. We also include the definition of _<$>_, which lifts in a pointwise manner a function acting on values into a function acting on environments of such values.

Figure 6: Combinators to Build Environments
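These combinators can be sketched in Haskell, packaging an environment as a function out of well scoped variables; `Env`, `empty`, `extend`, and `mapEnv` are our names for ε, _∙_, and _<$>_:

```haskell
{-# LANGUAGE DataKinds, GADTs, RankNTypes, PolyKinds, TypeOperators, EmptyCase #-}

data Var (s :: k) (ctx :: [k]) where
  Z :: Var s (s ': ctx)
  S :: Var s ctx -> Var s (t ': ctx)

-- (g ─Env) v d: a value (v s d) for every variable (Var s g).
newtype Env v g d = Env { lookupEnv :: forall s. Var s g -> v s d }

-- ε: there are no variables in the empty scope to account for.
empty :: Env v '[] d
empty = Env (\v -> case v of {})

-- _∙_: extend a g-environment with a value for one more variable.
extend :: Env v g d -> v s d -> Env v (s ': g) d
extend r x = Env $ \v -> case v of
  Z   -> x
  S w -> lookupEnv r w

-- _<$>_: map a function over all the values of an environment.
mapEnv :: (forall s. v s d -> w s e) -> Env v g d -> Env w g e
mapEnv f r = Env (f . lookupEnv r)
```

Instantiating `v` to `Var` gives renaming environments; instantiating it to a term family gives substitution environments, exactly as in the paper.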

As we have already observed, the definitions of renaming and substitution have very similar structure. Abstracting away this shared structure would allow for these definitions to be refactored, and their common properties to be proved in one swift move.

Previous efforts in dependently typed programming (benton2012strongly; allais2017type) have achieved this goal and refactored renaming and substitution, but also normalisation by evaluation, printing with names or CPS conversion as various instances of a more general traversal. As we will show in Section 7.3, typechecking in the style of Atkey (atkey2015algebraic) also fits in that framework. To make sense of this body of work, we need to introduce three new notions: Thinning, a generalisation of renaming; Thinnables, which are types that permit thinning; and the □ functor, which freely adds Thinnability to any indexed type. We use □, and our compact notation for the indexed function space between indexed types, to crisply encapsulate the additional quantification over environment extensions which is typical of Kripke semantics.


Figure 7: Thinnings: A Special Case of Environments
The Special Case of Thinnings

Thinnings subsume more structured notions such as the Category of Weakenings (altenkirch1995categorical) or Order Preserving Embeddings (chapman2009type), cf. Figure 8 for some examples of combinators. In particular, they do not prevent the user from defining arbitrary permutations or from introducing contractions although we will not use such instances. However, such extra flexibility will not get in our way, and permits a representation as a function space which grants us monoid laws “for free” as per Jeffrey’s observation (jeffrey2011assoc).

Figure 8: Identity Thinning, context extension, and (generalised) transitivity

The □ combinator turns any (List I)-indexed Set into one that can absorb thinnings. This is accomplished by abstracting over all possible thinnings from the current scope, akin to an S4-style necessity modality. The axioms of S4 modal logic lead us to observe that the functor □ is a comonad: extract applies the identity Thinning to its argument, and duplicate is obtained by composing the two Thinnings we are given. The expected laws hold trivially thanks to Jeffrey’s trick mentioned above.

The notion of Thinnable is the property of being stable under thinnings; in other words Thinnables are the coalgebras of □. It is a crucial property for values to have if one wants to be able to push them under binders. From the comonadic structure we get that the □ combinator freely turns any (List I)-indexed Set into a Thinnable one.

Figure 9: The □ comonad, Thinnable, and the cofree Thinnable.
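A Haskell sketch of □ and its comonadic structure, representing thinnings as plain function spaces on variables (Jeffrey's trick: identity and composition, hence the monoid laws, come for free from functions); `Box`, `extract`, `duplicate`, `thBox` are our names:

```haskell
{-# LANGUAGE DataKinds, GADTs, RankNTypes, PolyKinds, TypeOperators #-}

data Var (s :: k) (ctx :: [k]) where
  Z :: Var s (s ': ctx)
  S :: Var s ctx -> Var s (t ': ctx)

-- Thinnings as function spaces between variable families.
type Thinning g d = forall s. Var s g -> Var s d

-- □ t: a t that can absorb any thinning of its scope.
newtype Box t g = Box { runBox :: forall d. Thinning g d -> t d }

-- extract applies the identity thinning …
extract :: Box t g -> t g
extract b = runBox b id

-- … and duplicate composes the two thinnings it is given.
duplicate :: Box t g -> Box (Box t) g
duplicate b = Box (\r -> Box (\r' -> runBox b (r' . r)))

-- □ freely makes any family Thinnable (the cofree Thinnable).
thBox :: Thinning g d -> Box t g -> Box t d
thBox r b = Box (\r' -> runBox b (r' . r))
```

The comonad laws reduce to associativity and identity of function composition, which is precisely why this representation is convenient.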

3.2 A Generic Notion of Semantics

As we showed in our previous work, ACMM (allais2017type), equipped with these new notions we can define an abstract concept of semantics for our scope- and type- safe language. Provided that a set of constraints on two (Type ─Scoped) families 𝓥 and 𝓒 is satisfied, we will obtain a traversal of the following type:


Broadly speaking, a semantics turns our deeply embedded abstract syntax trees into the shallow embedding of the corresponding parametrised higher order abstract syntax term. We get a choice of useful scope- and type- safe traversals by using different ‘host languages’ for this shallow embedding.

A semantics is specified by a record Semantics packaging a choice of values 𝓥 and computations 𝓒 together with the constraints these two notions must satisfy.

In the following paragraphs, we interleave the definition of the record of constraints Semantics with explanations of our choices. It is important to understand that all of the indented Agda snippets are part of the record’s definition. Some correspond to record fields (highlighted in pink) while others are mere auxiliary definitions (highlighted in blue) as permitted by Agda. First of all, values 𝓥 should be Thinnable so that semantics may push the environment under binders. We call this constraint th^𝓥, using a caret to generate a mnemonic name: th refers to thinnable and 𝓥 clarifies the family which is proven to be thinnable. (We use this convention consistently throughout the paper, using names such as vl^Tm for the proof that terms are VarLike.)

This constraint allows us to define extend, the generalisation of the two auxiliary definitions we used in Figure 4, in terms of the building blocks introduced in Figure 6. It takes a context extension from Δ to Θ in the form of a thinning, an existing evaluation environment mapping Γ variables to Δ values and a value living in the extended context Θ and returns an evaluation environment mapping (σ ∷ Γ) variables to Θ values.

Second, the set of computations needs to be closed under various combinators which are the semantical counterparts of the language’s constructors. For instance in the variable case we obtain a value from the evaluation environment but we need to return a computation. This means that values should embed into computations.

The semantical counterpart of application is an operation that takes a representation of a function and a representation of an argument and produces a representation of the result.

The interpretation of the λ-abstraction is of particular interest: it is a variant on the Kripke function space one can find in normalisation by evaluation (berger1991inverse; berger1993program; CoqDybSK; coquand2002formalised). In all possible thinnings of the scope at hand, it promises to deliver a computation whenever it is provided with a value for its newly bound variable. This is concisely expressed by the constraint’s type:


Agda allows us to package the definition of the generic traversal function semantics together with the fields of the record Semantics. This causes the definition to be specialised and brought into scope for any instance of Semantics the user will define. We thus realise the promise made earlier, namely that any given Semantics 𝓥 𝓒 induces a function which, given a value in 𝓥 for each variable in scope, transforms a Lam term into a computation 𝓒.


Figure 10: Fundamental Lemma of Semantics for Lam, relative to a given Semantics 𝓥 𝓒
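The whole Semantics record and its fundamental lemma can be approximated in Haskell for the nested-datatype Lam, with the simplification that thinnings degenerate to arbitrary renaming functions `a -> b`; all names are ours, and this is a sketch, not the paper's Agda development:

```haskell
{-# LANGUAGE DeriveFunctor, RankNTypes #-}

data Lam a = V a | A (Lam a) (Lam a) | L (Lam (Maybe a))
  deriving (Show, Eq, Functor)

-- A Semantics packages values v and computations c together with the
-- semantical counterpart of each constructor. In lam, the body is a
-- Kripke function space: for any thinning of the scope (here a plain
-- renaming) and a value for the bound variable, a computation.
data Semantics v c = Semantics
  { th  :: forall a b. (a -> b) -> v a -> v b   -- th^𝓥
  , var :: forall a. v a -> c a                 -- values embed in computations
  , app :: forall a. c a -> c a -> c a
  , lam :: forall a. (forall b. (a -> b) -> v b -> c b) -> c a
  }

-- Fundamental lemma: any Semantics induces an evaluator.
sem :: Semantics v c -> (g -> v d) -> Lam g -> c d
sem s rho (V x)   = var s (rho x)
sem s rho (A f t) = app s (sem s rho f) (sem s rho t)
sem s rho (L b)   = lam s (\r v -> sem s (ext r v) b)
  where ext r v Nothing  = v                 -- the newly bound variable
        ext r v (Just x) = th s r (rho x)    -- thin the old environment

-- Substitution as an instance: values and computations are both terms.
newtype T a = T { unT :: Lam a }

substSem :: Semantics T T
substSem = Semantics
  { th  = \r (T t) -> T (fmap r t)
  , var = id
  , app = \(T f) (T t) -> T (A f t)
  , lam = \k -> T (L (unT (k Just (T (V Nothing)))))
  }

subst :: (a -> Lam b) -> Lam a -> Lam b
subst f t = unT (sem substSem (T . f) t)
```

Swapping in other choices of `v` and `c` yields renaming, printing, and so on, mirroring the instances discussed next.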

3.3 Instances of Semantics

Recall that each Semantics is parametrised by two families: 𝓥 and 𝓒. During the evaluation of a term, variables are replaced by values of type 𝓥 and the overall result is a computation of type 𝓒. Coming back to renaming and substitution, we see that they both fit in the Semantics framework. The family 𝓥 of values is respectively the family of variables for renaming, and the family of λ-terms for substitution. In both cases 𝓒 is the family of λ-terms because the result of the operation will be a term. We notice that the definition of substitution depends on the definition of renaming: to be able to push terms under binders, we need to have already proven that they are thinnable.

Figure 11: Renaming and Substitution as Instances of Semantics

In both cases we use extend, defined in Figure 8 as (pack s) (where pack is the constructor for environments and s, defined in Section 2, is the function lifting an existing de Bruijn variable into an extended scope), as the definition of the thinning embedding Γ into (σ ∷ Γ).

We also include the definition of a basic printer relying on a name supply to highlight the fact that computations can very well be effectful. The ability to generate fresh names is given to us by a monad that here we decide to call Fresh. Concretely, Fresh is implemented as an instance of the State monad where the state is a stream of distinct strings. The Printing semantics is defined by using Names (i.e. Strings) as values and Printers (i.e. monadic actions in Fresh returning a String) as computations. We use a Wrapper with a type and a context as phantom types in order to help Agda’s inference propagate the appropriate constraints. We define a function fresh that fetches a name from the name supply and makes sure it is not available anymore.

Figure 12: Wrapper and fresh name generation

The wrapper Wrap does not depend on the scope Γ so it is automatically a thinnable functor, that is to say that we have the (used but not shown here) definitions map^Wrap witnessing the functoriality of Wrap and th^Wrap witnessing its thinnability. We jump straight to the definition of the printer.

To print a variable, we are handed the Name associated to it by the environment and return it immediately.


To print an application, we produce a string representation, f, of the term in function position, then one, t, of its argument and combine them by putting the argument between parentheses.


To print a λ-abstraction, we start by generating a fresh name, x, for the newly-bound variable, use that name to generate a string b representing the body of the function to which we prepend a “λ” binding the name x.


Putting all of these pieces together, we get the Printing semantics shown in Figure 13.


Figure 13: Printing as an instance of Semantics

We show how one can use this newly-defined semantics to implement print, a printer for closed terms assuming that we have already defined names, a stream of distinct strings used as our name supply. We show the result of running print on the term apply (first introduced in Figure 3).
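The printing semantics can be sketched end to end in Haskell for the nested-datatype Lam: a hand-rolled state monad over a stream of names plays the role of Fresh, and strings play the role of both Names and Printers. The names `Fresh`, `pretty`, `printLam`, and the output format are all our own choices:

```haskell
data Lam a = Var a | App (Lam a) (Lam a) | Abs (Lam (Maybe a))

-- A tiny State monad whose state is a supply of names.
newtype Fresh a = Fresh { runFresh :: [String] -> (a, [String]) }

instance Functor Fresh where
  fmap f (Fresh g) = Fresh (\s -> let (a, s') = g s in (f a, s'))
instance Applicative Fresh where
  pure a = Fresh (\s -> (a, s))
  Fresh f <*> Fresh g =
    Fresh (\s -> let (h, s') = f s; (a, s'') = g s' in (h a, s''))
instance Monad Fresh where
  Fresh g >>= k = Fresh (\s -> let (a, s') = g s in runFresh (k a) s')

-- Fetch a name and make sure it is not available anymore.
fresh :: Fresh String
fresh = Fresh (\(n : ns) -> (n, ns))

-- Values are Names (Strings); computations are monadic actions in Fresh.
pretty :: (a -> String) -> Lam a -> Fresh String
pretty env (Var x)   = pure (env x)
pretty env (App f t) = do
  pf <- pretty env f
  pt <- pretty env t
  pure (pf ++ " (" ++ pt ++ ")")
pretty env (Abs b)   = do
  x  <- fresh                          -- generate a fresh name …
  pb <- pretty (maybe x env) b         -- … and bind it in the environment
  pure ("λ" ++ x ++ ". " ++ pb)

names :: [String]
names = map pure ['a' .. 'z']          -- a crude, finite name supply

printLam :: Lam String -> String
printLam t = fst (runFresh (pretty id t) names)
```

Running `printLam` on the analogue of apply, `Abs (Abs (App (Var (Just Nothing)) (Var Nothing)))`, threads the supply through the two binders and prints the body with the generated names.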



Both printing and renaming highlight the importance of distinguishing values and computations: the type of values in their respective environments are distinct from their type of computations.

All of these examples are already described at length by ACMM (allais2017type) so we will not spend any more time on them. In ACMM we have also obtained the simulation and fusion theorems demonstrating that these traversals are well behaved as corollaries of more general results expressed in terms of semantics. We will come back to this in Section 9.2.

One important observation to make is the tight connection between the constraints described in Semantics and the definition of Lam: the semantical counterparts of the Lam constructors are obtained by replacing the recursive occurrences of the inductive family with either a computation or a Kripke function space depending on whether an extra variable was bound. This suggests that it ought to be possible to compute the definition of Semantics from the syntax description. Before doing this in Section 5, we need to look at generic descriptions of datatypes.

4 A Primer on Universes of Data Types

Chapman, Dagand, McBride and Morris (CDMM) (Chapman:2010:GAL:1863543.1863547) defined a universe of data types inspired by Dybjer and Setzer’s finite axiomatisation of inductive-recursive definitions (Dybjer1999) and Benke, Dybjer and Jansson’s universes for generic programs and proofs (benke-ugpp). This explicit definition of codes for data types empowers the user to write generic programs tackling all of the data types one can obtain this way. In this section we recall the main aspects of this construction we are interested in to build up our generic representation of syntaxes with binding.

The first component of the definition of CDMM’s universe (Figure 14) is an inductive type of Descriptions of strictly positive functors from J-indexed families of Sets to I-indexed ones. These functors correspond to I-indexed containers of J-indexed payloads. Keeping these index types distinct prevents mistaking one for the other when constructing the interpretation of descriptions. Later of course we can use these containers as the nodes of recursive datastructures by interpreting some payload sorts as requests for subnodes (DBLP:journals/jfp/AltenkirchGHMM15).

The inductive type of descriptions has three constructors: ‘σ to store data (the rest of the description can depend upon this stored value), ‘X to attach a recursive substructure indexed by J, and ‘∎ to stop with a particular index value.

The recursive function ⟦_⟧ makes the interpretation of the descriptions formal. Interpreting a description gives rise to right-nested tuples terminated by equality constraints.

Figure 14: Datatype Descriptions and their Meaning as Functors

These constructors give the programmer the ability to build up the data types they are used to. For instance, the functor corresponding to lists of elements in A stores a Boolean which stands for whether the current node is the empty list or not. Depending on its value, the rest of the description is either the “stop” token or a pair of an element in A and a recursive substructure, i.e. the tail of the list. The List type is unindexed; we represent the lack of an index with the unit type whose unique inhabitant is tt.


Figure 15: The Description of the base functor for List A

Indices can be used to enforce invariants, as in the type Vec A n of length-indexed lists. Its description has the same structure as the definition of listD. We start with a Boolean distinguishing the two constructors: either the empty list (in which case the branch’s index is enforced to be zero) or a non-empty one, in which case we store a natural number n, the head of type A, and a tail of size n (and the branch’s index is enforced to be suc n).
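The index discipline can also be modelled dynamically. In this hypothetical Python sketch (ours), stop_at records the index constraint of a branch and check verifies that an encoded vector really has the length its index claims:

```python
# Extending the sketch with dynamically checked index constraints (ours):
# 'stop_at(i)' asserts the node's index, 'X(j, rest)' asks for a
# substructure at index j.
def sigma(k):   return ("sigma", k)
def X(j, rest): return ("X", j, rest)
def stop_at(i): return ("stop", i)

# vecD: nil forces index 0; cons stores n, a head, and a tail at index n,
# and forces the overall index to be n + 1 ('suc n').
vecD = sigma(lambda isCons:
             sigma(lambda n: sigma(lambda hd: X(n, stop_at(n + 1))))
             if isCons else stop_at(0))

def check(d, layer, i, top=None):
    """Check that 'layer' inhabits d at index i."""
    top = top or d
    if d[0] == "sigma":
        hd, rest = layer
        return check(d[1](hd), rest, i, top)
    if d[0] == "X":
        sub, rest = layer
        return check(top, sub, d[1], top) and check(d[2], rest, i, top)
    return d[1] == i  # "stop": the index constraint must hold

vnil = (False, ())
def vcons(n, hd, tl): return (True, (n, (hd, (tl, ()))))
```

A vector of length two checks at index 2 and is rejected at any other index.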


Figure 16: The Description of the base functor for Vec A n

The payoff for encoding our datatypes as descriptions is that we can define generic programs for whole classes of data types. The decoding function ⟦_⟧ gave the action of these functors on objects; we now define the function fmap by recursion over a code d. It describes the action of the functor corresponding to d on morphisms. This is the first example of generic programming over all the functors one can obtain as the meaning of a description.


Figure 17: Action on Morphisms of the Functor corresponding to a Description

All the functors obtained as meanings of Descriptions are strictly positive. So we can build the least fixpoint of the ones that are endofunctors (i.e. the ones for which I equals J). This fixpoint is called μ and its iterator is given by the definition of fold d. (NB: in Figure 18 the Size (DBLP:journals/corr/abs-1012-4896) index added to the inductive definition of μ plays a crucial role in getting the termination checker to see that fold is a total function.)
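In the same dynamically-typed Python sketch (ours, with listD encoded as tagged tuples as before), fmap applies a function to every recursive position of one interpreted layer, and fold tears down a whole tree by repeatedly applying an algebra:

```python
# Generic fmap and fold over the tagged-tuple sketch of descriptions (ours).
def sigma(k): return ("sigma", k)
def X(rest):  return ("X", rest)
stop = ("stop",)

def fmap(d, f, layer):
    # apply f to every recursive position of one interpreted layer of d
    if d[0] == "sigma":
        hd, rest = layer
        return (hd, fmap(d[1](hd), f, rest))
    if d[0] == "X":
        sub, rest = layer
        return (f(sub), fmap(d[1], f, rest))
    return layer  # "stop": nothing to map over

def fold(d, alg, t):
    # evaluate the subtrees first, then apply the algebra to the layer
    return alg(fmap(d, lambda sub: fold(d, alg, sub), t))

# listD as before, with an algebra turning encoded lists into Python lists.
listD = sigma(lambda isCons: sigma(lambda hd: X(stop)) if isCons else stop)
nil = (False, ())
def cons(hd, tl): return (True, (hd, (tl, ())))

def to_list(layer):
    isCons, rest = layer
    return [rest[0]] + rest[1][0] if isCons else []
```

Instantiating fold with to_list recovers the usual right fold on lists.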


Figure 18: Least Fixpoint of an Endofunctor and Corresponding Generic Fold

We can see in Figure 19 that we can recover the types we are used to thanks to this least fixpoint. Pattern synonyms let us hide away the encoding: users can use them to pattern-match on lists and Agda conveniently resugars them when displaying a goal. Finally, we can get our hands on the types’ eliminators by instantiating the generic fold.

Figure 19: List, its constructors, and eliminator

The CDMM approach therefore allows us to generically define iteration principles for all data types that can be described. These are exactly the features we desire for a universe of data types with binding, so in the next section we will see how to extend CDMM’s approach to include binding.

The functor underlying any well scoped and sorted syntax can be coded as some Desc (I × List I) (I × List I), with the free monad construction from CDMM uniformly adding the variable case. Whilst a good start, Desc treats its index types as unstructured, so this construction is blind to what makes the List I index a scope. The resulting ‘bind’ operator demands a function which maps variables in any sort and scope to terms in the same sort and scope. However, the behaviour we need is to preserve sort while mapping between specific source and target scopes which may differ. We need to account for the fact that scopes change only by extension, and hence that our specifically scoped operations can be pushed under binders by weakening.

5 A Universe of Scope Safe and Well Kinded Syntaxes

Our universe of scope safe and well kinded syntaxes (defined in Figures 20–21) follows the same principle as CDMM’s universe of datatypes, except that we are not building endofunctors on Set^I any more but rather on I ─Scoped. We now think of the index type I as the sorts used to distinguish terms in our embedded language. The ‘σ and ‘∎ constructors are as in the CDMM Desc type, and are used to represent data and index constraints respectively. What distinguishes this new universe Desc from that of Section 4 is that the ‘X constructor is now augmented with an additional List I argument that describes the new binders that are brought into scope at this recursive position. This list of the kinds of the newly-bound variables will play a crucial role when defining the description’s semantics as a binding structure in Figures 21, 22 and 23.


Figure 20: Syntax Descriptions

The meaning function ⟦_⟧ we associate to a description follows closely its CDMM equivalent. It only departs from it in the ‘X case and in the fact that it is not an endofunctor on I ─Scoped; it is more general than that. The function takes an X of type List I → I ─Scoped to interpret ‘X Δ j (i.e. substructures of sort j with newly-bound variables in Δ) in an ambient scope Γ as X Δ j Γ.


Figure 21: Descriptions’ Meanings

The astute reader may have noticed that ⟦_⟧ is uniform in X and Γ; however refactoring ⟦_⟧ to take the partially applied X following this observation would lead to a definition harder to use with the combinators for indexed sets described in Section 2, which make our types much more readable.

If we pre-compose the meaning function ⟦_⟧ with a notion of ‘de Bruijn scopes’ (denoted Scope here) which turns any I ─Scoped family into a function of type List I → I ─Scoped by appending the two List indices, we recover a meaning function producing an endofunctor on I ─Scoped. So far we have only shown the action of the functor on objects; its action on morphisms is given by a function fmap defined by induction over the description just as in Section 4.


Figure 22: De Bruijn Scopes

The endofunctors thus defined are strictly positive and we can take their fixpoints. As we want to define the terms of a language with variables, instead of considering the initial algebra, this time we opt for the free relative monad (JFR4389) (with respect to the functor Var): the ‘var constructor corresponds to return, and we will define bind (also known as the parallel substitution sub) in the next section.


Figure 23: Term Trees: The Free Var-Relative Monads on Descriptions

Coming back to our original examples, we now have the ability to give codes for the well scoped untyped λ-calculus and, just as well, the intrinsically typed simply typed λ-calculus. We add a third example to showcase the whole spectrum of syntaxes: a well scoped and well sorted but not well typed bidirectional language. In all examples, the variable case will be added by the free monad construction so we only have to describe the other constructors.

Un(i)typed λ-calculus

For the untyped case, the lack of types translates into picking the unit type (⊤) as our notion of sort. We have two possible constructors: application, where we have two substructures which do not bind any extra argument, and λ-abstraction, which has exactly one substructure with precisely one extra bound variable. A single Boolean is enough to distinguish the two constructors.
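In our running Python sketch (ours, with sorts erased since there is only one), ‘X now carries the number of newly bound variables, and the ‘var constructor is the one contributed by the free monad construction:

```python
# Scoped descriptions and the untyped λ-calculus, in the tagged-tuple
# sketch (ours, not the paper's Agda; the single sort is erased).
def sigma(k):       return ("sigma", k)
def X(delta, rest): return ("X", delta, rest)  # delta: number of new binders
stop = ("stop",)

# A Boolean distinguishes application (two substructures, no new binders)
# from λ-abstraction (one substructure, one new de Bruijn variable).
LCD = sigma(lambda isApp: X(0, X(0, stop)) if isApp else X(1, stop))

# Terms: variables are de Bruijn indices; 'con' wraps one layer.
def var(i):    return ("var", i)
def app(f, a): return ("con", (True, (f, (a, ()))))
def lam(b):    return ("con", (False, (b, ())))

identity = lam(var(0))               # λx. x
self_app = lam(app(var(0), var(0)))  # λx. x x
```

The smart constructors play the role the pattern synonyms play in the Agda development.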


Figure 24: Description for the Untyped λ-calculus
Bidirectional STLC

Our second example is a bidirectional (pierce2000local) language, hence the introduction of a notion of Mode: each term is either part of the Infer or Check fraction of the language. This language has four constructors which we list in the ad-hoc ‘Bidi type of constructor tags; its decoding Bidi is defined by a pattern-matching λ-expression in Agda. Application and λ-abstraction behave as expected, with the important observation that λ-abstraction binds an Inferrable term. The two remaining constructors correspond to changes of direction: one can freely Embed inferrable terms as checkable ones whereas we require a type annotation when forming a Cut (we reuse the notion of Type introduced in Figure 3).

Figure 25: Description for the bidirectional STLC
Intrinsically typed STLC

In the typed case (for the same notion of Type defined in Figure 3), we are back to two constructors: the terms are fully annotated and therefore it is not necessary to distinguish between Modes anymore. We need our tags to carry extra information about the types involved so we use once more an ad-hoc datatype ‘STLC, and define its decoding STLC by a pattern-matching λ-expression.

Figure 26: Description for the intrinsically typed STLC

For convenience we use Agda’s pattern synonyms corresponding to the original constructors in Section 2. These synonyms can be used when pattern-matching on a term and Agda resugars them when displaying a goal. This means that the end user can seamlessly work with encoded terms without dealing with the gnarly details of the encoding. These pattern definitions can omit some arguments by using “_”, in which case they will be filled in by unification just like any other implicit argument: there is no extra cost to using an encoding! The only downside is that the language currently does not allow the user to specify type annotations for pattern synonyms. We only include examples of pattern synonyms for the two extreme examples; the definitions for Bidi are similar.

Figure 27: Respective Pattern Synonyms for UTLC and STLC.

As a usage example of these pattern synonyms, we define the identity function in all three languages in Figure 28, using the same caret-based naming convention we introduced earlier. The code is virtually the same except for Bidi which explicitly records the change of direction from Check to Infer.

Figure 28: Identity function in all three languages

This is the third time (the first and second being the definitions of listD and vecD in Figures 15 and 16) that we have used a Bool to distinguish between two constructors. In order to avoid re-encoding the same logic, the next section introduces combinators demonstrating that descriptions are closed under finite sums.

Common Combinators and Their Properties.

As seen previously, we can use a dependent pair whose first component is a Boolean to take the coproduct of two descriptions: depending on the value of the first component, we will return one or the other. We can abstract this common pattern as a combinator _‘+_ together with an appropriate eliminator case which, given two continuations, picks the one corresponding to the chosen branch.
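In the Python sketch, the sum combinator and its eliminator are one-liners (hypothetical code mirroring _‘+_ and case):

```python
# Description sums in the tagged-tuple sketch (ours): a sum is a sigma
# over a Boolean choosing the left or the right summand.
def sigma(k): return ("sigma", k)

def dsum(d, e):             # mirrors _'+_
    return sigma(lambda isLeft: d if isLeft else e)

def case(f, g, layer):      # eliminator: run the continuation for the tag
    isLeft, rest = layer
    return f(rest) if isLeft else g(rest)
```

Given two continuations, case dispatches on the stored Boolean exactly as the paper’s eliminator does.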

Figure 29: Descriptions are closed under Sum

A concrete use case for this combinator will be given in Section 7.5, where we explain how to seamlessly enrich an existing syntax with let-bindings and how to use the Semantics framework to elaborate them away.

6 Generic Scope Safe and Well Kinded Programs for Syntaxes

Based on the Semantics type we defined for the specific example of the simply typed λ-calculus in Section 3, we can define a generic notion of semantics for all syntax descriptions. It is once more parametrised by two I─Scoped families 𝓥 and 𝓒 corresponding respectively to values associated to bound variables and computations delivered by evaluating terms. These two families have to abide by three constraints:

  • th^𝓥 Values should be thinnable so that we can push the evaluation environment under binders;

  • var Values should embed into computations for us to be able to return the value associated to a variable as the result of its evaluation;

  • alg We should have an algebra turning a term whose substructures have been replaced with computations (possibly under some binders, represented semantically by the Kripke type-valued function defined below) into computations.


Figure 30: A Generic Notion of Semantics

Here we crucially use the fact that the meaning of a description is defined in terms of a function interpreting substructures which has the type List I → I─Scoped, i.e. that gets access to the current scope but also the exact list of the kinds of the newly bound variables. We define a function Kripke by case analysis on the number of newly bound variables. It is essentially a subcomputation waiting for a value associated to each one of the fresh variables.

  • If the list of newly bound variables is empty, we expect the substructure to be a computation corresponding to the result of the evaluation function’s recursive call;

  • But if there are newly bound variables then we expect to have a function space. In any context extension, it will take an environment of values for the newly-bound variables and produce a computation corresponding to the evaluation of the body of the binder.


Figure 31: Substructures as either Computations or Kripke Function Spaces

It is once more the case that the abstract notion of Semantics comes with a fundamental lemma: all I ─Scoped families 𝓥 and 𝓒 satisfying the three criteria we have put forward give rise to an evaluation function. We introduce a notion of computation _─Comp analogous to that of environments: instead of associating values to variables, it associates computations to terms.


6.1 Fundamental Lemma of Semantics

We can now define the type of the fundamental lemma (called semantics) which takes a semantics and returns a function from environments to computations. It is defined mutually with a function body turning syntactic binders into semantic binders: to each de Bruijn Scope (i.e. a substructure in a potentially extended context) it associates a Kripke (i.e. a subcomputation expecting a value for each newly bound variable).


Figure 32: Statement of the Fundamental Lemma of Semantics

The proof of semantics is straightforward now that we have clearly identified the problem structure and the constraints we need to enforce. If the term considered is a variable, we lookup the associated value in the evaluation environment and turn it into a computation using var. If it is a non-variable constructor, then we call fmap to evaluate the substructures using body and then call the algebra to combine these results.


Figure 33: Proof of the Fundamental Lemma of Semantics – semantics

The auxiliary lemma body distinguishes two cases. If no new variable has been bound in the recursive substructure, it is a matter of calling semantics recursively. Otherwise we are provided with a Thinning and some additional values, and we evaluate the substructure in the thinned and extended evaluation environment (thanks to an auxiliary function _>>_ which, given two environments (Γ ─Env) 𝓥 Θ and (Δ ─Env) 𝓥 Θ, produces an environment ((Γ ++ Δ) ─Env) 𝓥 Θ).


Figure 34: Proof of the Fundamental Lemma of Semantics – body

Given that fmap introduces one level of indirection between the recursive calls and the subterms they are acting upon, the fact that our terms are indexed by a Size is once more crucial in getting the termination checker to see that our proof is indeed well founded.

We immediately introduce closed, a corollary of the fundamental lemma of semantics for the special case of closed terms, in Figure 35. Given a Semantics with value type 𝓥 and computation type 𝓒, we can evaluate a closed term of type σ and obtain a computation of type (𝓒 σ []) by kickstarting the evaluation with an empty environment.


Figure 35: Corollary: evaluation of closed terms

6.2 Our First Generic Programs: Renaming and Substitution

Similarly to ACMM (allais2017type) renaming can be defined generically for all syntax descriptions as a semantics with Var as values and Tm as computations. The first two constraints on Var described earlier are trivially satisfied. Observing that renaming strictly respects the structure of the term it goes through, it makes sense for the algebra to be implemented using fmap. When dealing with the body of a binder, we ‘reify’ the Kripke function by evaluating it in an extended context and feeding it placeholder values corresponding to the extra variables introduced by that context. This is reminiscent both of what we did in Section 3 and the definition of reification in the setting of normalisation by evaluation (see e.g. Catarina Coquand’s formal development (coquand2002formalised)).

Substitution is defined in a similar manner with Tm as both values and computations. Of the two constraints applying to terms as values, the first one corresponds to renaming and the second one is trivial. The algebra is once more defined by using fmap and reifying the bodies of binders.
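The whole pipeline — generic semantics, Kripke function spaces, reification with placeholder variables — can be replayed in our Python sketch, specialised here to renaming for the untyped λ-calculus. The modelling choices are ours: thinnings become index shifts and a Kripke space becomes a closure waiting for the values of the newly bound variables.

```python
# Generic semantics and renaming over the tagged-tuple sketch (ours).
def sigma(k):       return ("sigma", k)
def X(delta, rest): return ("X", delta, rest)
stop = ("stop",)
LCD = sigma(lambda isApp: X(0, X(0, stop)) if isApp else X(1, stop))
def var(i):    return ("var", i)
def app(f, a): return ("con", (True, (f, (a, ()))))
def lam(b):    return ("con", (False, (b, ())))

def semantics(d, S, env, t):
    # S: 'thin' thins a value past delta new binders, 'var' embeds values
    # into computations, 'alg' turns one evaluated layer into a computation.
    if t[0] == "var":
        return S["var"](env[t[1]])
    return S["alg"](body(d, d, S, env, t[1]))

def body(top, d, S, env, layer):
    if d[0] == "sigma":
        hd, rest = layer
        return (hd, body(top, d[1](hd), S, env, rest))
    if d[0] == "X":
        delta, sub, rest = d[1], layer[0], layer[1]
        if delta == 0:
            ev = semantics(top, S, env, sub)
        else:  # Kripke: wait for values for the delta new variables
            ev = (delta, lambda vs, sub=sub, env=list(env):
                  semantics(top, S, vs + [S["thin"](v, delta) for v in env], sub))
        return (ev, body(top, d[2], S, env, rest))
    return layer  # "stop"

def reify(d, layer):
    # feed placeholder variables (fresh de Bruijn indices) to Kripke spaces
    if d[0] == "sigma":
        return (layer[0], reify(d[1](layer[0]), layer[1]))
    if d[0] == "X":
        ev = layer[0] if d[1] == 0 else layer[0][1](list(range(d[1])))
        return (ev, reify(d[2], layer[1]))
    return layer

Renaming = {"thin": lambda i, delta: i + delta,  # shift past new binders
            "var":  var,                          # variables are terms
            "alg":  lambda layer: ("con", reify(LCD, layer))}

def rename(env, t): return semantics(LCD, Renaming, env, t)
```

Renaming λ.(0 1) with an environment sending the free variable 0 to 1 yields λ.(0 2): the bound variable is untouched while the free one is thinned past the binder.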

Figure 36: Generic Renaming and Substitution for All Scope Safe Syntaxes with Binding

The reification process mentioned in the definition of renaming and substitution can be implemented generically for Semantics families which have VarLike values, i.e. values which are Thinnable and such that we can craft placeholder values in non-empty contexts. It is almost immediate that both Var and Tm are VarLike (with proofs vl^Var and vl^Tm, respectively).


Figure 37: VarLike: Thinnable and with placeholder values

Given a proof that 𝓥 is VarLike, we can manufacture several useful environments of values 𝓥. We provide users with base of type (Γ ─Env) 𝓥 Γ, fresh^r of type (Γ ─Env) 𝓥 (Δ ++ Γ) and fresh^l of type (Γ ─Env) 𝓥 (Γ ++ Δ) by combining the use of placeholder values and thinnings. In the Var case these very general definitions respectively specialise to the identity renaming for a context Γ and the injection of Γ fresh variables to the right or the left of an ambient context Δ. Similarly, in the Tm case, we can show base vl^Tm extensionally equal to the identity environment id^Tm given by lookup id^Tm = ‘var, which associates each variable to itself (seen as a term). Using these definitions, we can then implement reify as in Figure 38.


Figure 38: Generic Reification thanks to VarLike Values

7 A Catalogue of Generic Programs for Syntax with Binding

In this section we explore a large part of the spectrum of traversals a compiler writer may need when implementing their own language. In Section 7.1 we look at the production of human-readable representations of internal syntax; in Section 7.2 we write a generic scope checker thus bridging the gap between raw data fresh out of a parser to well scoped syntax; we then demonstrate how to write a type checker in Section 7.3 and even an elaboration function turning well scoped into well scoped and typed syntax in Section 7.4. We then study type and scope respecting transformations on internal syntax: desugaring in Section 7.5 and size preserving inlining in Section 7.6. We conclude with an unsafe but generic evaluator defined using normalisation by evaluation in Section 7.7.

7.1 Printing with Names

We have seen in Section 3.3 that printing with names is an instance of ACMM’s notion of Semantics. We will now show that this observation can be generalised to arbitrary syntaxes with binding. Unlike renaming or substitution, this generic program will require user guidance: there is no way for us to guess how an encoded term should be printed. We can however take care of the name generation (using the monad Fresh introduced in Figure 12), deal with variable binding, and implement the traversal generically. Our printer takes a Display, which explains how to print one ‘layer’ of term provided that we are handed the Pieces corresponding to the printed subterms and names for the bound variables. Reusing the notion of Name introduced in Section 3.3, we can make Pieces formal: a subterm has already been printed if we have a string representation of it together with an environment of the Names we have attached to the newly-bound variables this structure contains. The key observation that will help us define a generic printer is that Fresh composed with Name is VarLike. Indeed, as the composition of a functor and a trivially thinnable Wrapper, Fresh is Thinnable, and fresh (defined in Figure 12) is the proof that we can generate placeholder values thanks to the name supply.


This VarLike instance empowers us to reify in an effectful manner a Kripke function space taking Names and returning a Printer to a set of Pieces.


In case there are no newly bound variables, the Kripke function space collapses to a mere Printer which is precisely the wrapped version of the type we expect.


Otherwise we proceed in a manner reminiscent of the pure reification function defined in Figure 38. We start by generating an environment of names for the newly-bound variables by using the fact that Fresh composed with Name is VarLike together with the fact that environments are Traversable (mcbride_paterson_2008), and thus admit the standard Haskell-like mapA and sequenceA traversals. We then run the Kripke function on these names to obtain the string representation of the subterm. We finally return the names we used together with this string.


We can put all of these pieces together to obtain the Printing semantics presented in Figure 39. The first two constraints can be trivially discharged. When defining the algebra we start by reifying the subterms, then use the fact that one “layer” of term of our syntaxes with binding is always traversable to combine all of these results into a value we can apply our display function to.


Figure 39: Printing with Names as a Semantics

This allows us to write a printer for open terms as demonstrated in Figure 40. We start by using base (defined in Section 6.2) to generate an environment of Names for the free variables, then use our semantics to get a printer which we can run using names, a stream of distinct strings, as our name supply.


Figure 40: Generic Printer for Open Terms
Untyped λ-calculus

Defining a printer for the untyped λ-calculus is now very easy: we define a Display by case analysis. In the application case, we combine the string representation of the function, wrap its argument’s representation between parentheses and concatenate the two together. In the lambda abstraction case, we are handed the name the bound variable was assigned together with the body’s representation; it is once more a matter of putting the Pieces together.
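As a sketch of the name-generation side, here is a Python printer for the encoded untyped λ-calculus (ours; it recurses over terms by hand rather than instantiating the generic Semantics, but the name supply plays the role of the Fresh monad):

```python
# A direct printer for the tagged-tuple encoding of the untyped
# λ-calculus (ours); a generator of distinct strings models Fresh.
import itertools
import string

def names():  # a, b, ..., z, a1, b1, ...
    for n in itertools.count():
        yield string.ascii_lowercase[n % 26] + ("" if n < 26 else str(n // 26))

def var(i):    return ("var", i)
def app(f, a): return ("con", (True, (f, (a, ()))))
def lam(b):    return ("con", (False, (b, ())))

def show(t, env, supply):
    if t[0] == "var":
        return env[t[1]]             # look up the name assigned earlier
    isApp, rest = t[1]
    if isApp:                        # wrap the argument in parentheses
        f, (a, _) = rest
        return show(f, env, supply) + " (" + show(a, env, supply) + ")"
    b, _ = rest
    x = next(supply)                 # fresh name for the bound variable
    return "λ" + x + ". " + show(b, [x] + env, supply)
```

Printing the identity function in an empty context yields "λa. a", matching the test described below for the Agda development.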


As always, these functions are readily executable and we can check their behaviour by writing tests. First, we print the identity function defined in Figure 28 in an empty context and verify that we do obtain the string "λa. a". Next, we print an open term in a context of size two and can immediately observe that names are generated for the free variables first, and then the expression itself is printed.



7.2 Writing a Generic Scope Checker

Converting terms in the internal syntax to strings which can in turn be displayed in a terminal or an editor window is only part of a compiler’s interaction loop. The other direction takes strings as inputs and attempts to produce terms in the internal syntax. The first step is to parse the input strings into structured data, the second is to perform scope checking, and the third step consists of type checking.

Parsing is currently out of scope for our library; users can write safe ad-hoc parsers for their object language by either using a library of total parser combinators (DBLP:conf/icfp/Danielsson10; allais2018agdarsec) or invoking a parser generator oracle whose target is a total language (Stump:2016:VFP:2841316). As we will see shortly, we can write a generic scope checker transforming terms in a raw syntax where variables are represented as strings into a well scoped syntax. We will come back to typechecking with a concrete example in section 7.3 and then discuss related future work in the conclusion.

Our scope checker will be a function taking two explicit arguments: a name for each variable in scope Γ and a raw term for a syntax description d. It will either fail (the Monad Fail granting us the ability to fail is made explicit in Figure 43) or return a well scoped and sorted term for that description.



We can obtain Names, the datastructure associating to each variable in scope its raw name as a string, by reusing the standard library’s All. The inductive family All is a predicate transformer making sure a predicate holds of all the elements of a list. It is defined in a style common in Agda: because All’s constructors are in one-to-one correspondence with those of its index type (List A), the same names are reused: [] is the name of the proof that P trivially holds of all the elements in the empty list []; similarly _∷_ is the proof that, provided that P holds of the element a on the one hand and of the elements of the list as on the other, then it holds of all the elements of the list (a ∷ as).

Figure 41: Associating a raw string to each variable in scope
Raw terms

The definition of WithNames is analogous to Pieces in the previous section: we expect Names for the newly bound variables. Terms in the raw syntax then leverage these definitions. They are either a variable or another “layer” of raw terms. Variables ’var carry a String and potentially some extra information E (typically a position in a file). The other constructor ’con carries a layer of raw terms where subterms are raw terms equipped with names for any newly-bound variables.


Figure 42: Names and Raw Terms
Error Handling

Various things can go wrong during scope checking: evidently a name can be out of scope, but it is also possible that it may be associated to a variable of the wrong sort. We define an enumeration type covering these two cases. The scope checker will return a computation in the Monad Fail, thus allowing us to fail and return an error, the string that caused the failure, and the extra data of type E that accompanied it.

Figure 43: Error Type and Scope Checking Monad

Equipped with these notions, we can write down the type of toVar which tackles the core of the problem: variable resolution. The function takes a string and a sort as well as the names and sorts of the variables in the ambient scope. Provided that we have a function _≟I_ to decide equality on sorts, we can check whether the string corresponds to an existing variable and whether that binding is of the right sort. Thus we either fail or return a well scoped and well sorted Var.

If the ambient scope is empty then we can only fail with an OutOfScope error. Alternatively, if the variable’s name corresponds to that of the first one in scope we check that the sorts match up and either return z or fail with a WrongSort error. Otherwise we look for the variable further down the scope and use s to lift the result to the full scope.
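The algorithm just described can be transcribed into a few lines of Python (ours; the Fail monad becomes a tagged return value, and de Bruijn indices stand in for z and s):

```python
# Variable resolution, modelling Fail with tagged tuples (ours).
def toVar(name, sort, scope):
    """scope: list of (raw name, sort) pairs, innermost binding first.
    Returns ("ok", de Bruijn index) or ("error", kind, offending name)."""
    for index, (nm, srt) in enumerate(scope):
        if nm == name:
            # first name match: the sorts must agree, otherwise we fail
            return ("ok", index) if srt == sort else ("error", "WrongSort", name)
    return ("error", "OutOfScope", name)
```

As in the Agda version, a name match with a mismatched sort fails immediately rather than searching deeper in the scope.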


Figure 44: Variable Resolution

Scope checking an entire term then amounts to lifting this action on variables to an action on terms. The error Monad Fail is by definition an Applicative and by design our terms are Traversable (bird_paterson_1999; DBLP:journals/jfp/GibbonsO09). The action on terms is defined mutually with the action on scopes. As we can see in the second equation for toScope, thanks to the definition of WithNames, concrete names arrive just in time to check the subterm with newly-bound variables.


Figure 45: Generic Scope Checking for Terms and Scopes

7.3 An Algebraic Approach to Typechecking

Following Atkey (atkey2015algebraic), we can consider type checking and type inference as a possible semantics for a bidirectional (pierce2000local) language. We reuse the syntax introduced in Section 5 and the types introduced in Figure 3; it gives us a simply typed bidirectional calculus as a bisorted language using a notion of Mode to distinguish between terms for which we will be able to Infer the type and the ones for which we will have to Check a type candidate.

The values stored in the environment of the typechecking function attach Type information to bound variables whose Mode is Infer, guaranteeing no variable ever uses the Check mode. In contrast, the generated computations will, depending on the mode, either take a type candidate and Check it is valid or Infer a type for their argument. These computations are always potentially failing so we use the Maybe monad. In an actual compiler pipeline we would naturally use a different error monad and generate helpful error messages pointing out where the type error occurred. The interested reader can see a fine-grained analysis of type errors in the extended example of a typechecker in (DBLP:journals/jfp/McBrideM04).

Figure 46: Var- and Type- Relations indexed by Mode

A change of direction from inferring to checking will require being able to check that two types agree so we introduce the function _=?_. Similarly we will sometimes expect a function type but may be handed anything so we will have to check with isArrow that our candidate’s head constructor is indeed an arrow, and collect the domain and codomain.

Figure 47: Tests for Type values

We can now define typechecking as a Semantics. We describe the algorithm constructor by constructor; in the Semantics definition (omitted here) the algebra will simply perform a dispatch and pick the relevant auxiliary lemma. Note that in the following code, _<$_ is, following classic Haskell notations, the function which takes an A and a Maybe B and returns a Maybe A which has the same structure as its second argument.


When facing an application: infer the type of the function, make sure it is an arrow type, check the argument at the domain’s type, and return the codomain.


For a λ-abstraction: check that the input type arr is an arrow type and check the body b at the codomain type in the extended environment (using bind), where the newly-bound variable is of mode Infer and has the domain’s type.

Embedding of Infer into Check

The change of direction from Inferrable to Checkable is successful when the inferred type is equal to the expected one.

Cut: A Check in an Infer position

So far, our bidirectional syntax only permits the construction of STLC terms in canonical form (Pfenning:04; Dunfield:2004:TT:964001.964025). In order to construct non-normal (redex) terms, whose semantics is given logically by the ‘cut’ rule, we need to reverse direction. Our final semantic operation, cut, always comes with a type candidate against which to check the term and to be returned in case of success.

We have defined a bidirectional typechecker for this simple language by leveraging the Semantics framework. We can readily run it on closed terms using the closed corollary defined in Figure 35 and (defining β to be (α ‘→ α)) infer the type of the expression (λx. x : β → β) (λx. x).
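The whole algorithm fits in a few lines of Python (ours; it is a direct transcription of the rules above rather than an instance of the generic Semantics, with failure modelled by None and types encoded as "α" or ("→", dom, cod)):

```python
# A bidirectional typechecker for STLC, transcribed into Python (ours).
def isArrow(ty):
    # view: return (dom, cod) when ty is an arrow type, None otherwise
    return (ty[1], ty[2]) if isinstance(ty, tuple) and ty[0] == "→" else None

def infer(env, t):
    tag = t[0]
    if tag == "var":                 # look the type up in the environment
        return env[t[1]]
    if tag == "app":                 # infer the function, check the argument
        arrow = isArrow(infer(env, t[1]))
        if arrow is None:
            return None
        dom, cod = arrow
        return cod if check(env, t[2], dom) else None
    if tag == "cut":                 # annotated term: check against the type
        return t[2] if check(env, t[1], t[2]) else None
    return None

def check(env, t, ty):
    if t[0] == "lam":                # the candidate must be an arrow type
        arrow = isArrow(ty)
        if arrow is None:
            return False
        dom, cod = arrow
        return check([dom] + env, t[1], cod)
    if t[0] == "emb":                # inferred and expected types must agree
        return infer(env, t[1]) == ty
    return False
```

With β = α → α, inferring the type of (λx. x : β → β) (λx. x) returns β, reproducing the paper’s example.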

Figure 48: Type- Inference / Checking as a Semantics

The output of this function is not very informative. As we will see shortly, there is nothing stopping us from moving away from a simple computation returning a (Maybe Type) to an evidence-producing function elaborating a term in Bidi to a well scoped and typed term in STLC.

7.4 An Algebraic Approach to Elaboration

Instead of generating a type or checking that a candidate will do, we can use our language of Descriptions to define not only an untyped source language but also an intrinsically typed internal language. During typechecking we simultaneously generate an expression’s type and a well scoped and well typed term of that type. We use STLC (defined in Section 5) as our internal language.

Before we can jump right in, we need to set the stage: a Semantics for a Bidi term will involve (Mode ─Scoped) notions of values and computations but an STLC term is (Type ─Scoped). We first introduce a Typing associating types to each of the modes in scope, together with an erasure function ⌞_⌟ extracting the context of types implicitly defined by such a Typing. We will systematically distinguish contexts of modes (typically named ms) and their associated typings (typically named Γ).

[Generic/Semantics/Elaboration/Typed.tex]typing [Generic/Semantics/Elaboration/Typed.tex]fromtyping
Figure 49: Typing: From Contexts of Modes to Contexts of Types

We can then explain what it means for an elaboration process of type σ in a context of modes ms to produce a term of the (Type ─Scoped) family T: for any typing Γ of this context of modes, we should get a value of type (T σ ⌞ Γ ⌟).


Figure 50: Elaboration of a Scoped Family

Our first example of an elaboration process is our notion of environment values. To each variable in scope of mode Infer we associate an elaboration function targeting Var. In other words: our values are all in scope, i.e., provided any typing of the scope of modes, we can assuredly return a type together with a variable of that type.


Figure 51: Values as Elaboration Functions for Variables

We can for instance prove that we have such an inference function for a newly-bound variable of mode Infer: given that the context has been extended with a variable of mode Infer, the Typing must also have been extended with a type σ. We can return that type paired with the variable z.


Figure 52: Inference Function for the 0-th Variable

The computations are a bit more tricky. On the one hand, if we are in checking mode then we expect that for any typing of the scope of modes and any type candidate we can Maybe return a term at that type in the induced context. On the other hand, in the inference mode we expect that given any typing of the scope, we can Maybe return a type together with a term at that type in the induced context.


Figure 53: Computations as Mode-indexed Elaboration Functions

Because we are now writing a typechecker which returns evidence of its claims, we need more informative variants of the equality and isArrow checks. In the equality checking case we want to get a proof of propositional equality, but we only care about the successful path and will happily return nothing when failing. Agda’s support for (dependent!) do-notation makes writing the check really easy. For the arrow type, we introduce a family Arrow constraining the shape of its index to be an arrow type and redefine isArrow as a view targeting this inductive family (DBLP:conf/popl/Wadler87; DBLP:journals/jfp/McBrideM04). We deliberately overload the constructor of the Arrow family by calling it _‘→_. This means that the proof that a given type has the shape (σ ‘→ τ) is literally written (σ ‘→ τ). This allows us to specify in the type whether we want to work with the full set of values in Type or only the subset corresponding to function types, and to then proceed to write the same programs a Haskell programmer would, with the added confidence that ours are guaranteed to be total.

[Generic/Semantics/Elaboration/Typed.tex]equal [Generic/Semantics/Elaboration/Typed.tex]arrow
Figure 54: Informative Equality Check and Arrow View
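In a language without inductive views, these informative checks degenerate to partial projections. A hypothetical Python rendering (our own names; the Agda versions additionally return a propositional-equality proof and a view constructor, which Python can only approximate) might read:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TBase: pass                      # the base type α
@dataclass(frozen=True)
class TArr:                            # function types σ → τ
    dom: object
    cod: object

def is_arrow(ty):
    """View ty as an arrow type: expose (domain, codomain), or fail."""
    return (ty.dom, ty.cod) if isinstance(ty, TArr) else None

def equal(s, t):
    """Informative equality check: return the common type on success."""
    return s if s == t else None
```

Only the successful path carries information; failure is an uninformative None, just as the Agda checks happily return nothing when failing.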

We now have all the basic pieces and can start writing elaboration code. We will use lowercase letters for terms in Bidi and uppercase ones for their elaborated counterparts in STLC. We once more start by dealing with each constructor in isolation before putting everything together to get a Semantics. These steps are very similar to the ones in the previous section.


In the application case, we start by elaborating the function and we get its type together with its internal representation. We then check that the inferred type is indeed an Arrow and elaborate the argument using the corresponding domain. We conclude by returning the codomain together with the internal function applied to the internal argument. [Generic/Semantics/Elaboration/Typed.tex]app


For the λ-abstraction case, we start by checking that the type candidate arr is an Arrow. We can then elaborate the body b of the lambda in a context of modes extended with one Infer variable, and the corresponding Typing extended with the function’s domain. From this we get an internal term B corresponding to the body of the λ-abstraction and conclude by returning it wrapped in a ‘lam constructor. [Generic/Semantics/Elaboration/Typed.tex]lam

Cut: A Check in an Infer position

For cut, we start by elaborating the term with the type annotation provided and return them paired together. [Generic/Semantics/Elaboration/Typed.tex]cut

Embedding of Infer into Check

For the change of direction Emb we not only want to check that the inferred type and the type candidate are equal: we need to cast the internal term labelled with the inferred type to match the type candidate. Luckily, Agda’s dependent do-notation makes our job easy once again: when we make the pattern refl explicit, the equality holds in the rest of the block. [Generic/Semantics/Elaboration/Typed.tex]emb
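Put together, the cases above can be sketched as a hypothetical Python elaborator (our own reconstruction, not the Agda code of Figures 49–55): inferable terms elaborate to a pair of a type and an annotation-free internal term, checkable terms to the internal term alone.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TBase: pass
@dataclass(frozen=True)
class TArr:
    dom: object
    cod: object

# Source (bidirectional) terms
@dataclass(frozen=True)
class Var: idx: int
@dataclass(frozen=True)
class App: fun: object; arg: object
@dataclass(frozen=True)
class Cut: tm: object; ty: object
@dataclass(frozen=True)
class Lam: body: object
@dataclass(frozen=True)
class Emb: tm: object

# Internal terms: no annotations left
@dataclass(frozen=True)
class IVar: idx: int
@dataclass(frozen=True)
class ILam: body: object
@dataclass(frozen=True)
class IApp: fun: object; arg: object

def elab_infer(tys, t):
    """Elaborate an inferable term: return (type, internal term) or None."""
    if isinstance(t, Var):
        return (tys[t.idx], IVar(t.idx)) if t.idx < len(tys) else None
    if isinstance(t, App):                 # infer fun, check arg at its domain
        rf = elab_infer(tys, t.fun)
        if rf is None or not isinstance(rf[0], TArr):
            return None
        ra = elab_check(tys, t.arg, rf[0].dom)
        return (rf[0].cod, IApp(rf[1], ra)) if ra is not None else None
    if isinstance(t, Cut):                 # check against the annotation
        r = elab_check(tys, t.tm, t.ty)
        return (t.ty, r) if r is not None else None
    return None

def elab_check(tys, t, ty):
    """Elaborate a checkable term at type ty: return internal term or None."""
    if isinstance(t, Lam):
        if not isinstance(ty, TArr):
            return None
        b = elab_check([ty.dom] + tys, t.body, ty.cod)
        return ILam(b) if b is not None else None
    if isinstance(t, Emb):                 # the equality check is the cast
        r = elab_infer(tys, t.tm)
        return r[1] if r is not None and r[0] == ty else None
    return None
```

Elaborating the running example (λx. x : β → β) (λx. x) yields β together with IApp(ILam(IVar(0)), ILam(IVar(0))): the Cut annotation has disappeared from the internal term.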

We have almost everything we need to define elaboration as a semantics. Discharging the th^𝓥 constraint is a bit laborious and the proof doesn’t yield any additional insight so we leave it out here. The semantical counterpart of variables (var) is fairly straightforward: provided a Typing, we run the inference and touch it up to return a term rather than a mere variable. Finally we define the algebra (alg) by pattern-matching on the constructor and using our previous combinators.


Figure 55: Elaborate, the elaboration semantics

We can once more define a specialised version of the traversal induced by this Semantics for closed terms: not only can we give a (trivial) initial environment (using the closed corollary defined in Figure 35) but we can also give a (trivial) initial Typing. This leads to the definitions in Figure 56.

[Generic/Semantics/Elaboration/Typed.tex]typemode [Generic/Semantics/Elaboration/Typed.tex]type-
Figure 56: Evidence-producing Type (Checking / Inference) Function

Revisiting the example introduced in Section 7.3, we can check that elaborating the expression (λx. x : β → β) (λx. x) yields the type β together with the term (λx. x) (λx. x) in internal syntax. Type annotations have disappeared in the internal syntax as all the type invariants are enforced intrinsically.


7.5 Sugar and Desugaring as a Semantics

One of the advantages of having a universe of programming language descriptions is the ability to concisely define an extension of an existing language by using Description transformers grafting extra constructors à la Swierstra (swierstra_2008). This is made extremely simple by the disjoint sum combinator _‘+_ which we defined in Figure 29. An example of such an extension is the addition of let-bindings to an existing language.

Let bindings allow the user to avoid repeating themselves by naming sub-expressions and then using these names to refer to the associated terms. Preprocessors adding these types of mechanisms to existing languages (from C to CSS) are rather popular. In Figure 57, we introduce a description Let which can be used to extend any language description d to a language with let-bindings (d ‘+ Let).

[Generic/Syntax/LetBinder.tex]letcode [Generic/Syntax/LetBinder.tex]letpattern
Figure 57: Description of a single let binding, associated pattern synonyms

This description states that a let-binding node stores a pair of types σ and τ and two subterms. First comes the let-bound expression of type σ, and second comes the body of the let which has type τ in a context extended with a fresh variable of type σ. This defines a term of type τ.

In a dependently typed language, a type may depend on a value which in the presence of let bindings may be a variable standing for an expression. The user naturally does not want it to make any difference whether they used a variable referring to a let-bound expression or the expression itself. Various typechecking strategies can accommodate this expectation: in Coq (Coq:manual) let bindings are primitive constructs of the language and have their own typing and reduction rules whereas in Agda they are elaborated away to the core language by inlining.

This latter approach to extending a language d with let bindings by inlining them before typechecking can be implemented generically as a semantics over (d ‘+ Let). For this semantics, values in the environment and computations are both let-free terms. The algebra of the semantics can be defined by parts thanks to case, the eliminator for _‘+_ defined in Figure 29: the old constructors are kept the same by interpreting them using the generic substitution algebra (Sub), whilst the let-binder precisely provides the extra value to be added to the environment.


Figure 58: Desugaring as a Semantics

The process of removing let binders is then kickstarted with the placeholder environment id^Tm = pack ‘var of type (Γ ─Env) (Tm d ∞) Γ.


Figure 59: Specialising semantics with an environment of placeholder values
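For a concrete (if naive) analogue, here is a hypothetical Python sketch that inlines every let of a tiny untyped λ-calculus by capture-avoiding substitution on de Bruijn indices. The generic Agda version instead reuses the Sub algebra; the names here (shift, subst, unlet) are our own.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class V: i: int                         # de Bruijn variable
@dataclass(frozen=True)
class L: b: object                      # λ-abstraction (binds one variable)
@dataclass(frozen=True)
class A: f: object; a: object           # application
@dataclass(frozen=True)
class Let: e: object; b: object         # let e in b (b binds one variable)

def shift(t, d, c=0):
    """Add d to every variable of t that is >= the cutoff c."""
    if isinstance(t, V):   return V(t.i + d) if t.i >= c else t
    if isinstance(t, L):   return L(shift(t.b, d, c + 1))
    if isinstance(t, A):   return A(shift(t.f, d, c), shift(t.a, d, c))
    return Let(shift(t.e, d, c), shift(t.b, d, c + 1))

def subst(t, s, j=0):
    """Substitute s for variable j in t, lowering the variables above j."""
    if isinstance(t, V):
        if t.i == j:  return shift(s, j)
        return V(t.i - 1) if t.i > j else t
    if isinstance(t, L):   return L(subst(t.b, s, j + 1))
    if isinstance(t, A):   return A(subst(t.f, s, j), subst(t.a, s, j))
    return Let(subst(t.e, s, j), subst(t.b, s, j + 1))

def unlet(t):
    """Desugar (d '+ Let) terms to let-free ones by inlining every let."""
    if isinstance(t, V):   return t
    if isinstance(t, L):   return L(unlet(t.b))
    if isinstance(t, A):   return A(unlet(t.f), unlet(t.a))
    return subst(unlet(t.b), unlet(t.e))   # plug the value into the body
```

For instance, `unlet(Let(L(V(0)), A(V(0), V(0))))` (let x = λy.y in x x) yields `A(L(V(0)), L(V(0)))`.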

In less than 10 lines of code we have defined a generic extension of syntaxes with binding together with a semantics which corresponds to an elaborator translating away this new construct. In previous work (allais2017type), we focused on STLC only and showed that it is similarly possible to implement a Continuation Passing Style transformation as the composition of two semantics à la Hatcliff and Danvy (hatcliff1994generic). The first semantics embeds STLC into Moggi’s Meta-Language (DBLP:journals/iandc/Moggi91) and thus fixes an evaluation order. The second one translates Moggi’s ML back into STLC in terms of explicit continuations with a fixed return type.

We have demonstrated how easily one can define extensions and combine them on top of a base language without having to reimplement common traversals for each one of the intermediate representations. Moreover, it is possible to define generic transformations elaborating these added features in terms of lower-level ones. This suggests that this setup could be a good candidate to implement generic compilation passes and could deal with a framework using a wealth of slightly different intermediate languages à la Nanopass (Keep:2013:NFC:2544174.2500618).

7.6 Reference Counting and Inlining as a Semantics

Although useful in its own right, desugaring all let bindings can lead to an exponential blow-up in code size. Compiler passes typically try to maintain sharing by only inlining let-bound expressions that are used at most once. Unused expressions are eliminated as dead code whilst expressions used exactly once can be inlined: this transformation is size preserving and opens up opportunities for additional optimisations.

As we will see shortly, we can implement reference counting and size-respecting let-inlining as a generic transformation over all syntaxes with binding equipped with let binders. This simple two-pass transformation runs in linear time, which may seem surprising given the results due to Appel and Jim (DBLP:journals/jfp/AppelJ97). However, our optimisation only inlines let-bound variables whereas theirs also encompasses the reduction of static β-redexes of (potentially) recursive functions. While we can easily count how often a variable is used in the body of a let binder, the interaction between inlining and β-reduction in theirs creates cascading simplification opportunities, making the problem much harder.

But first, we need to look at an example demonstrating that this is a slightly subtle matter. Assuming that expensive takes a long time to evaluate, inlining all of the lets in the first expression is a really good idea whilst we only want to inline the one binding y in the second one to avoid duplicating work. That is to say that the contribution of the expression bound to y in the overall count depends directly on whether y itself appears free in the body of the let which binds it.

[Generic/Syntax/LetCounter.tex]cheap [Generic/Syntax/LetCounter.tex]expensive

Our transformation will consist of two passes: the first one will annotate the tree with accurate count information precisely recording whether let-bound variables are used zero, one, or many times. The second one will inline precisely the let-binders whose variable is used at most once.

During the counting phase we need to be particularly careful not to overestimate the contribution of a let-bound expression. If the let-bound variable is not used then we can safely ignore the associated count. But if it is used many times then we know we will not inline this let-binding and the count should therefore only contribute once to the running total. We define the control combinator in Figure 64 precisely to handle this subtle case explicitly.

The first step is to introduce the Counter additive monoid (cf. Figure 60). Addition will allow us to combine counts coming from different subterms: if either of the two counters is zero then we return the other, otherwise we know we have many occurrences.

[Generic/Syntax/LetCounter.tex]counter [Generic/Syntax/LetCounter.tex]addition
Figure 60: The (Counter, zero, _+_) additive monoid
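Rendered in Python (an illustrative reconstruction of Figure 60, with our own names):

```python
from enum import Enum

class Counter(Enum):
    """How often a let-bound variable is used: zero, one, or many times."""
    ZERO = 0
    ONE = 1
    MANY = 2

def add(m, n):
    """Monoid addition: ZERO is the unit; two non-zero counts give MANY."""
    if m is Counter.ZERO:
        return n
    if n is Counter.ZERO:
        return m
    return Counter.MANY
```

The operation is associative with unit ZERO: a sum is ZERO when no summand is non-zero, equal to the unique non-zero summand when there is exactly one, and MANY otherwise.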

The syntax extension CLet defined in Figure 61 is a variation on the Let syntax extension of Section 7.5, attaching a Counter to each Let node. The annotation process can then be described as a function computing a (d ‘+ CLet) term from a (d ‘+ Let) one.


Figure 61: Counted Lets

We keep a tally of the usage information for the variables in scope. This allows us to know which Counter to attach to each Let node. Following the same strategy as in Section 7.2, we use the standard library’s All to represent this mapping. We say that a scoped value has been Counted if it is paired with a Count.

[Generic/Syntax/LetCounter.tex]count [Generic/Semantics/Elaboration/LetCounter.tex]counted
Figure 62: Counting i.e. Associating a Counter to each Var in scope.

The two most basic counts are described in Figure 63: the empty one is zero everywhere and the one corresponding to a single use of a single variable v which is zero everywhere except for v where it’s one.

[Generic/Syntax/LetCounter.tex]zeros [Generic/Syntax/LetCounter.tex]fromVar
Figure 63: Zero Count and Count of One for a Specific Variable

When we collect usage information from different subterms, we need to put the various counts together. The combinators in Figure 64 allow us to easily do so: merge adds up two counts in a pointwise manner while control uses one Counter to decide whether to erase an existing Count. This is particularly convenient when computing the contribution of a let-bound expression to the total tally: the contribution of the let-bound expression will only matter if the corresponding variable is actually used.

[Generic/Syntax/LetCounter.tex]merge [Generic/Syntax/LetCounter.tex]control
Figure 64: Combinators to Compute Counts

We can now focus on the core of the annotation phase. We define a Semantics whose values are variables themselves and whose computations are the pairing of a term in (d ‘+ CLet) together with a Count. The variable case is trivial: provided a variable v, we return (‘var v) together with the count (fromVar v).

The non-let case is purely structural: we reify the Kripke function space and obtain a scope together with the corresponding Count. We unceremoniously drop the Counters associated to the variables bound in this subterm and return the scope together with the tally for the ambient context.


Figure 65: Purely Structural Case

The Let-to-CLet case in Figure 66 is the most interesting one. We start by reifying the body of the let binder which gives us a tally cx for the bound variable and ct for the body’s contribution to the ambient environment’s Count. We annotate the node with cx and use it as a control to decide whether we are going to merge any of the let-bound’s expression contribution ce to form the overall tally.


Figure 66: Annotating Let Binders

Putting all of these things together we obtain the Semantics Annotate. We promptly specialise it using an environment of placeholder values to obtain the traversal annotate elaborating raw let-binders into counted ones.


Figure 67: Specialising semantics to obtain an annotation function

Using techniques similar to the ones described in Section 7.5, we can write an Inline semantics working on (d ‘+ CLet) terms and producing (d ‘+ Let) ones. We make sure to preserve all the let-binders annotated with many and to inline all the other ones. By composing Annotate with Inline we obtain a size-preserving generic optimisation pass.
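As a concrete illustration, the whole two-pass pipeline can be sketched in Python on a tiny λ-calculus with lets. This is a hypothetical reconstruction with our own names: annotate counts each let-bound variable's uses (including the control logic of Figure 64), and inline substitutes away precisely the lets whose counter is not MANY.

```python
from dataclasses import dataclass
from enum import Enum

@dataclass(frozen=True)
class V: i: int                              # de Bruijn variable
@dataclass(frozen=True)
class L: b: object                           # λ (binds one variable)
@dataclass(frozen=True)
class A: f: object; a: object                # application
@dataclass(frozen=True)
class Let: e: object; b: object              # plain let
@dataclass(frozen=True)
class CLet: c: object; e: object; b: object  # counted let

class Counter(Enum):
    ZERO = 0
    ONE = 1
    MANY = 2

def add(m, n):
    if m is Counter.ZERO: return n
    if n is Counter.ZERO: return m
    return Counter.MANY

def merge(c1, c2):
    """Pointwise addition of two usage counts (index -> Counter)."""
    return {i: add(c1.get(i, Counter.ZERO), c2.get(i, Counter.ZERO))
            for i in c1.keys() | c2.keys()}

def pop(count):
    """Leave a binder's scope: split off index 0, shift the others down."""
    return (count.get(0, Counter.ZERO),
            {i - 1: c for i, c in count.items() if i > 0})

def annotate(t):
    """First pass: return (counted term, usage count of the free variables)."""
    if isinstance(t, V):
        return t, {t.i: Counter.ONE}
    if isinstance(t, L):
        b, cb = annotate(t.b)
        return L(b), pop(cb)[1]
    if isinstance(t, A):
        f, cf = annotate(t.f)
        a, ca = annotate(t.a)
        return A(f, a), merge(cf, ca)
    e, ce = annotate(t.e)
    b, cb = annotate(t.b)
    cx, outer = pop(cb)
    # control: e only contributes (once) if the bound variable is used
    total = merge(outer, ce) if cx is not Counter.ZERO else outer
    return CLet(cx, e, b), total

def shift(t, d, c=0):
    if isinstance(t, V): return V(t.i + d) if t.i >= c else t
    if isinstance(t, L): return L(shift(t.b, d, c + 1))
    if isinstance(t, A): return A(shift(t.f, d, c), shift(t.a, d, c))
    return Let(shift(t.e, d, c), shift(t.b, d, c + 1))

def subst(t, s, j=0):
    if isinstance(t, V):
        if t.i == j: return shift(s, j)
        return V(t.i - 1) if t.i > j else t
    if isinstance(t, L): return L(subst(t.b, s, j + 1))
    if isinstance(t, A): return A(subst(t.f, s, j), subst(t.a, s, j))
    return Let(subst(t.e, s, j), subst(t.b, s, j + 1))

def inline(t):
    """Second pass: keep MANY-annotated lets, substitute away the others."""
    if isinstance(t, V): return t
    if isinstance(t, L): return L(inline(t.b))
    if isinstance(t, A): return A(inline(t.f), inline(t.a))
    e, b = inline(t.e), inline(t.b)
    return Let(e, b) if t.c is Counter.MANY else subst(b, e)

def optimise(t):
    return inline(annotate(t)[0])
```

On (let x = λy.y in x x) the let is kept; on (let x = λy.y in x) it is inlined; an unused let is eliminated as dead code.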

7.7 (Unsafe) Normalisation by Evaluation

A key type of traversal we have not studied yet is a language’s evaluator. Our universe of syntaxes with binding does not impose any typing discipline on the user-defined languages and as such cannot guarantee their totality. This is embodied by one of our running examples: the untyped λ-calculus. As a consequence there is no hope for a safe generic framework to define normalisation functions.

The clear connection between the Kripke functional space characteristic of our semantics and the one that shows up in normalisation by evaluation suggests we ought to manage to give an unsafe generic framework for normalisation by evaluation. By temporarily disabling Agda’s positivity checker, we can define a generic reflexive domain Dm (cf. Figure 68) in which to interpret our syntaxes. It has three constructors corresponding respectively to a free variable, a constructor’s counterpart where scopes have become Kripke functional spaces on Dm and an error token because the evaluation of untyped programs may go wrong.


Figure 68: Generic Reflexive Domain

This datatype definition is utterly unsafe. The more conservative user will happily restrict themselves to particular syntaxes where the typed setting allows the domain to be defined as a logical predicate, or opt instead for a step-indexed approach.

But this domain does make it possible to define a generic nbe semantics which, given a term, produces a value in the reflexive domain. Thanks to the fact we have picked a universe of finitary syntaxes, we can traverse (mcbride_paterson_2008; DBLP:journals/jfp/GibbonsO09) the functor to define a (potentially failing) reification function turning elements of the reflexive domain into terms. By composing them, we obtain the normalisation function which gives its name to normalisation by evaluation.

The user still has to explicitly pass an interpretation of the various constructors because there is no way for us to know what the binders are supposed to represent: they may stand for λ-abstractions, Π-types, fixpoints, or anything else.


Figure 69: Generic Normalisation by Evaluation Framework

Using this setup, we can write a normaliser for the untyped λ-calculus by providing an algebra. The key observation that allows us to implement this algebra is that we can turn a Kripke function, f, mapping values of type σ to computations of type τ into an Agda function from values of type σ to computations of type τ. This is witnessed by the application function (_$$_) defined in Figure 70: we first use extract (defined in Figure 9) to obtain a function taking environments of values to computations. We then use the combinators defined in Figure 6 to manufacture the singleton environment (ε ∙ t) containing the value t of type σ.


Figure 70: Applying a Kripke Function to an argument

We now define two patterns for semantical values: one for application and the other for lambda abstraction. This should make the case of interest of our algebra (a function applied to an argument) fairly readable.


Figure 71: Pattern synonyms for UTLC-specific Dm values

We finally define the algebra by case analysis: if the node at hand is an application and its first component evaluates to a lambda, we can apply the function to its argument using _$$_. Otherwise we have either a stuck application or a lambda, in other words we already have a value and can simply return it using C.


Figure 72: Normalisation by Evaluation for the Untyped λ-Calculus

We have not used the ⊥ constructor so if the evaluation terminates (by disabling totality checking we have lost all guarantees of the sort) we know we will get a term in normal form. See for instance in Figure 73 the evaluation of an untyped yet normalising term: (λx. x) ((λx. x) (λx. x)) normalises to (λx. x).


Figure 73: Example of a normalising untyped term
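The same recipe can be transcribed into a hypothetical Python sketch (our own names throughout), where the reflexive domain becomes host-language closures. Being untyped, it is exactly as unsafe as the Agda version with the positivity checker off: it may loop on non-normalising terms.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var: i: int                        # de Bruijn index
@dataclass(frozen=True)
class Lam: b: object
@dataclass(frozen=True)
class App: f: object; a: object

class VLam:
    """A semantic λ: a host-language closure (the Kripke space collapsed)."""
    def __init__(self, f): self.f = f

class VNe:
    """A stuck term: a free variable (as a de Bruijn level) and its spine."""
    def __init__(self, lvl, spine=()): self.lvl, self.spine = lvl, spine

def eval_(t, env):
    if isinstance(t, Var): return env[t.i]
    if isinstance(t, Lam): return VLam(lambda v: eval_(t.b, [v] + env))
    return apply_(eval_(t.f, env), eval_(t.a, env))

def apply_(f, a):                        # the analogue of _$$_
    return f.f(a) if isinstance(f, VLam) else VNe(f.lvl, f.spine + (a,))

def reify(v, depth):
    """Read a semantic value back into a term in normal form."""
    if isinstance(v, VLam):              # apply to a fresh variable
        return Lam(reify(v.f(VNe(depth)), depth + 1))
    t = Var(depth - 1 - v.lvl)           # convert level back to index
    for a in v.spine:
        t = App(t, reify(a, depth))
    return t

def norm(t):
    """Normalisation by evaluation; may diverge on non-normalising input."""
    return reify(eval_(t, []), 0)
```

Running norm on the term of Figure 73, (λx. x) ((λx. x) (λx. x)), yields Lam(Var(0)), i.e. λx. x; normalisation also goes under binders, e.g. λy. (λx. x) y normalises to λy. y.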

8 Other Opportunities for Generic Programming

Some generic programs of interest do not fit in the Semantics framework. They can still be implemented once and for all, and even benefit from the Semantics-based definitions.

We will first explore existing work on representing cyclic structures using a syntax with binding: a binder is a tree node declaring a pointer giving subtrees the ability to point back to it, thus forming a cycle. Substitution will naturally play a central role in giving these finite terms a semantics as their potentially infinite unfolding.

We will then see that many of the standard traversals produced by the ‘deriving’ machinery familiar to Haskell programmers can be implemented on syntaxes too, sometimes with more informative types.

8.1 Binding as Self-Reference: Representing Cyclic Structures

Ghani, Hamana, Uustalu and Vene (ghani2006representing) have demonstrated how Altenkirch and Reus’ type-level de Bruijn indices (altenkirch1999monadic) can be used to represent potentially cyclic structures by a finite object. In their representation each bound variable is a pointer to the node that introduced it. Given that we are, at the top-level, only interested in structures with no “dangling pointers”, we introduce the notation TM d to mean closed terms (i.e. terms of type Tm d ∞ []).

A basic example of such a structure is a potentially cyclic list which offers a choice of two constructors: [] which ends the list and _::_ which combines a head and a tail but also acts as a binder for a self-reference. These backpointers are written with the var constructor, which we have renamed ↶ (pronounced “backpointer”) to match its domain-specific meaning. We can see this approach in action in the examples [0, 1] and 01↺ (pronounced “0-1-cycle”) which describe respectively a finite list containing 0 followed by 1, and a cyclic list starting with 0, then 1, and then repeating the whole list again by referring to the first cons cell, represented here by the de Bruijn variable 1 (i.e. s z).

[Generic/Examples/Colist.tex]clistD [Generic/Examples/Colist.tex]clistpat [Generic/Examples/Colist.tex]zeroones
Figure 74: Potentially Cyclic Lists: Description, Pattern Synonyms and Examples

These finite representations are interesting in their own right and we can use the generic semantics framework defined earlier to manipulate them. A basic building block is the unroll function which takes a closed tree, exposes its top node and unrolls any cycle which has it as its starting point. We can decompose it using the plug function which, given a closed and an open term, closes the latter by plugging the former at each free ‘var leaf. Noticing that plug’s fundamental nature is that of substituting a term for each leaf, it makes sense to implement it by re-using the Substitution semantics we already have.

[Generic/Cofinite.tex]plug [Generic/Cofinite.tex]unroll
Figure 75: Plug and Unroll: Exposing a Cyclic Tree’s Top Layer

However, one thing still out of our reach with our current tools is the underlying co-finite trees these finite objects are meant to represent. We start by defining the coinductive type corresponding to them as the greatest fixpoint of a notion of layer. One layer of a co-finite tree is precisely given by the meaning of its description where we completely ignore the binding structure. We show with 01⋯ the infinite list that corresponds to the unfolding of the example 01↺ given above in Figure 74.

[Generic/Cofinite.tex]cotm [Generic/Examples/Colist.tex]zeroones2
Figure 76: Co-finite Trees: Definition and Example

We can then make the connection between potentially cyclic structures and the co-finite trees formal by giving an unfold function which, given a closed term, produces its unfolding. The definition proceeds by unrolling the term’s top layer and co-recursively unfolding all the subterms.


Figure 77: Generic Unfold of Potentially Cyclic Structures
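A hypothetical Python sketch (our own names) of plug, unroll and unfold on the cyclic lists of Figure 74, with backpointers as de Bruijn indices and the potentially infinite unfolding as a generator:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Nil: pass                       # [] : end of the list
@dataclass(frozen=True)
class Cons: hd: int; tl: object       # _::_ also binds a pointer to itself
@dataclass(frozen=True)
class Ptr: i: int                     # ↶ : backpointer (de Bruijn index)

def plug(closed, t, depth=0):
    """Replace every free backpointer leaf of t by the closed term."""
    if isinstance(t, Ptr):
        return closed if t.i >= depth else t
    if isinstance(t, Cons):                     # a cons binds one pointer
        return Cons(t.hd, plug(closed, t.tl, depth + 1))
    return t

def unroll(t):
    """Expose the top node of a closed list, closing its tail over t."""
    return None if isinstance(t, Nil) else (t.hd, plug(t, t.tl))

def unfold(t):
    """Produce the (possibly infinite) stream of elements."""
    while (step := unroll(t)) is not None:
        yield step[0]
        t = step[1]

def take(n, t):
    """First n elements of the unfolding (fewer if the list ends)."""
    out = []
    for x in unfold(t):
        out.append(x)
        if len(out) == n:
            break
    return out
```

With zero_one = Cons(0, Cons(1, Nil())) and cycle = Cons(0, Cons(1, Ptr(1))), take(5, zero_one) returns [0, 1] while take(5, cycle) returns [0, 1, 0, 1, 0], matching the unfolding 01⋯ of Figure 76.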

Even if the powerful notion of semantics described in Section 6 cannot encompass all the traversals we may be interested in, it provides us with reusable building blocks: the definition of unfold was made very simple by reusing the generic program fmap and the Substitution semantics whilst the definition of ∞Tm was made easy by reusing ⟦_⟧.

8.2 Generic Decidable Equality for Terms

Haskell programmers are used to receiving help from the ‘deriving’ mechanism (DBLP:journals/entcs/HinzeJ00; DBLP:conf/haskell/MagalhaesDJL10) to automatically generate common traversals for every inductive type they define. Recalling that generic programming is normal programming over a universe in a dependently typed language (DBLP:conf/ifip2-1/AltenkirchM02), we ought to be able to deliver similar functionalities for syntaxes with binding.

We will focus in this section on the definition of an equality test. The techniques used in this concrete example are general enough that they also apply to the definition of an ordering test, a Show instance, etc. In type theory we can do better than an uninformative boolean function claiming that two terms are equal: we can implement a decision procedure for propositional equality (DBLP:conf/icfp/LohM11) which either returns a proof that its two inputs are equal or a proof that they cannot possibly be.

The notion of decidability can be neatly formalised by an inductive family with two constructors: a Set P is decidable if we can either say yes and return a proof of P or no and provide a proof of the negation of P (here, a proof that P implies the empty type ⊥).

[Stdlib.tex]bottom [Stdlib.tex]dec
Figure 78: Empty Type and Decidability as an Inductive Family

To get acquainted with these new notions we can start by proving that equality of variables is decidable.

8.2.1 Deciding Variable Equality

The type of the decision procedure for equality of variables is as follows: given any two variables (of the same type, in the same context), the set of equality proofs between them is Decidable.


We can easily dismiss two trivial cases: if the two variables have distinct head constructors then they cannot possibly be equal. Agda allows us to dismiss the impossible premise of the function stored in the no constructor by using an absurd pattern ().


Otherwise if the two head constructors agree we can be in one of two situations. If they are both z then we can conclude that the two variables are indeed equal to each other.


Finally if the two variables are (s v) and (s w) respectively then we need to check recursively whether v is equal to w. If it is the case we can conclude by invoking the congruence rule for s. If v and w are not equal then a proof that (s v) and (s w) are will lead to a direct contradiction by injectivity of the constructor s.
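Python cannot express the propositional content of Dec, but a hypothetical sketch (our own names; the proofs and refutations are mere tokens) can mirror the exact case analysis:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Z: pass                 # the 0-th variable
@dataclass(frozen=True)
class S: v: object            # weakening: one more binder crossed

@dataclass
class Yes: proof: object      # (token) evidence that the variables are equal
@dataclass
class No: refute: object      # maps any alleged proof to a contradiction

def eq_var(v, w):
    """Decide whether two de Bruijn variables are equal."""
    if isinstance(v, Z) and isinstance(w, Z):
        return Yes("refl")                       # both z: yes refl
    if isinstance(v, S) and isinstance(w, S):
        d = eq_var(v.v, w.v)
        if isinstance(d, Yes):
            return Yes("refl")                   # congruence of s
        # injectivity of s: a proof of (s v ≡ s w) would yield (v ≡ w)
        return No(lambda p: d.refute(p))
    # distinct head constructors: any alleged proof is absurd
    return No(lambda p: (_ for _ in ()).throw(AssertionError(p)))
```

The three branches correspond exactly to the three cases above: distinct heads, both z, and both s with a recursive call.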


8.2.2 Deciding Term Equality

The constructor ‘σ for descriptions gives us the ability to store values of any Set in terms. For some of these Sets (e.g. (ℕ → ℕ)), equality is not decidable. As a consequence our decision procedure will be conditional on the satisfaction of a certain set of Constraints which we can compute from the Desc itself, as shown in Figure 79. We demand that we are able to decide equality for all of the Sets mentioned in a description.


Figure 79: Constraints Necessary for Decidable Equality

Remembering that our descriptions are given a semantics as a big right-nested product terminated by an equality constraint, we realise that proving decidable equality will entail proving equality between proofs of equality. We are happy to assume Streicher’s axiom K (DBLP:conf/lics/HofmannS94) to easily dismiss this case. A more conservative approach would be to demand that equality is decidable on the index type I and to then use the classic Hedberg construction (DBLP:journals/jfp/Hedberg98) to recover uniqueness of identity proofs for I.

Assuming that the constraints computed by (Constraints d) are satisfied, we define the decision procedure for equality of terms together with its equivalent for bodies. The function eq^Tm is a straightforward case analysis dismissing trivially impossible cases where terms have distinct head constructors (‘var vs. ‘con) and using either eq^Var or eq^⟦⟧ otherwise. The latter is defined by induction over e. The somewhat verbose definitions are not enlightening so we leave them out here.


Figure 80: Type of Decidable Equality for Terms and Bodies

We now have an informative decision procedure for equality between terms provided that the syntax they belong to satisfies a set of constraints. Other generic functions and decision procedures can be defined following the same approach: implement a similar function for variables first, compute a set of constraints, and demonstrate that they are sufficient to handle any input term.

9 Building Generic Proofs about Generic Programs

In ACMM (allais2017type) we have already shown that, for the simply typed λ-calculus, introducing an abstract notion of Semantics not only reveals the shared structure of common traversals, it also allows us to give abstract proof frameworks for simulation or fusion lemmas. This idea naturally extends to our generic presentation of semantics for all syntaxes.

9.1 Relations and Relation Transformers

In our exploration of generic proofs about the behaviour of various Semantics, we are going to need to manipulate relations between distinct notions of values or computations. In this section, we introduce the notion of relation we are going to use as well as the key relation transformers built on top of it.

In Section 3.1 we introduced a generic notion of well typed and scoped environment as a function from variables to values. Its formal definition is given in Figure 5 as a record type. This record wrapper helps Agda’s type inference reconstruct the type family of values whenever it is passed an environment.

For the same reason, we will use a record wrapper for the concrete implementation of our notion of relation over (I ─Scoped) families. A Relation between two such families T and U is a function which to any σ and Γ associates a relation between (T σ Γ) and (U σ Γ). Our first example of such a relation is Eqᴿ the equality relation between an (I─Scoped) family T and itself.

[Data/Relation.tex]rel [Data/Relation.tex]eqR
Figure 81: Relation Between I─Scoped Families and Equality Example

Once we know what relations are, we are going to have to lift relations on values and computations to relations on environments, Kripke function spaces or on d-shaped terms whose subterms have been evaluated already. This is what the rest of this section focuses on.

Environment relator

Provided a relation 𝓥ᴿ for notions of values 𝓥ᴬ and 𝓥ᴮ, by pointwise lifting we can define a relation (All 𝓥ᴿ Γ) on Γ-environments of values 𝓥ᴬ and 𝓥ᴮ respectively. We once more use a record wrapper simply to facilitate Agda’s job when reconstructing implicit arguments.


Figure 82: Relating Γ-Environments in a Pointwise Manner

The first example of two environments being related is reflᴿ which, to any environment ρ, associates a trivial proof of the statement (All Eqᴿ Γ ρ ρ). The combinators we introduced in Figure 6 to build environments (ε, _∙_, etc.) have natural relational counterparts. We reuse the same names for them, simply appending an ᴿ suffix.

Kripke relator

We assume that we have two types of values 𝓥ᴬ and 𝓥ᴮ as well as a relation 𝓥ᴿ for pairs of such values, and two types of computations 𝓒ᴬ and 𝓒ᴮ whose notion of relatedness is given by 𝓒ᴿ. We can define Kripkeᴿ relating Kripke functions of type (Kripke 𝓥ᴬ 𝓒ᴬ) and (Kripke 𝓥ᴮ 𝓒ᴮ) respectively by stating that they send related inputs to related outputs. We use the relation transformer All defined in the previous paragraph.


Figure 83: Relational Kripke Function Spaces: From Related Inputs to Related Outputs
Desc relator

The relator (⟦ d ⟧ᴿ) is a relation transformer which characterises structurally equal layers such that their substructures are themselves related by the relation it is passed as an argument. It inherits a lot of its relational arguments’ properties: whenever R is reflexive (respectively symmetric or transitive) so is (⟦ d ⟧ᴿ R).

It is defined by induction on the description and case analysis on the two layers which are meant to be equal:

  • In the stop token case ‘∎ i, the two layers are considered to be trivially equal (i.e. the constraint generated is the unit type)

  • When facing a recursive position ‘X j d, we demand that the two substructures are related by R j and that the rest of the layers are related by (⟦ d ⟧ᴿ R)

  • Two nodes of type ‘σ A d will be related if they both carry the same payload a of type A and if the rest of the layers are related by (⟦ d a ⟧ᴿ R)


Figure 84: Relator: Characterising Structurally Equal Values with Related Substructures

If we were to take a fixpoint of ⟦_⟧ᴿ, we could obtain a structural notion of equality for terms which we could prove equivalent to propositional equality. Although that is interesting in its own right, this section focuses on more advanced use-cases.
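To make the inductive definition concrete, here is an untyped Python sketch (our encoding, covering only a first-order fragment of descriptions with no binding): descriptions are tagged tuples and a layer is a tuple mirroring the description's shape.

```python
# layer_rel plays the role of the relator: it walks a description d and
# two layers x, y in lockstep, producing the conjunction of constraints.
#   ('stop',)          -- '∎: trivially related
#   ('X', d)           -- 'X: head substructures related by r, rest by d
#   ('sigma', d_of_a)  -- 'σ: equal payloads, rest related by d_of_a(a)
def layer_rel(d, r, x, y):
    tag = d[0]
    if tag == 'stop':
        return True
    if tag == 'X':
        return r(x[0], y[0]) and layer_rel(d[1], r, x[1:], y[1:])
    if tag == 'sigma':
        return x[0] == y[0] and layer_rel(d[1](x[0]), r, x[1:], y[1:])
    raise ValueError(tag)
```

A cons-cell description `('sigma', lambda a: ('X', ('stop',)))` relates two layers exactly when their payloads are equal and their tails are related by `r`.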

9.2 Simulation Lemma

A constraint mentioning all three relation transformers appears naturally when we want to say that a semantics can simulate another one. For instance, renaming is simulated by substitution: we simply have to restrict ourselves to environments mapping variables to terms which happen to be variables. More generally, given a semantics 𝓢ᴬ with values 𝓥ᴬ and computations 𝓒ᴬ and a semantics 𝓢ᴮ with values 𝓥ᴮ and computations 𝓒ᴮ, we want to establish the constraints under which these two semantics yield related computations provided they were called with environments of related values.

These constraints are packaged in a record type called Simulation and parametrised over the semantics as well as the notion of relatedness used for values (given by a relation 𝓥ᴿ) and computations (given by a relation 𝓒ᴿ).


The first two constraints are self-explanatory: the operations th^𝓥 and var defined by each semantics should be compatible with the notions of relatedness used for values and computations.

[Generic/Simulation.tex]thR [Generic/Simulation.tex]varR

The third constraint is similarly simple: the algebras (alg) should take related recursively evaluated subterms of respective types ⟦ d ⟧ (Kripke 𝓥ᴬ 𝓒ᴬ) and ⟦ d ⟧ (Kripke 𝓥ᴮ 𝓒ᴮ) to related computations. The difficulty is in defining an appropriate notion of relatedness bodyᴿ for these recursively evaluated subterms.


We can combine ⟦_⟧ᴿ and Kripkeᴿ to express the idea that two recursively evaluated subterms are related whenever they have an equal shape (which means their Kripke functions can be grouped in pairs) and that all the pairs of Kripke function spaces take related inputs to related outputs.


The fundamental lemma of simulations is a generic theorem showing that for each pair of Semantics respecting the Simulation constraint, we get related computations given environments of related input values. In Figure 85, this theorem is once more mutually proven with a statement about Scopes, and Sizes play a crucial role in ensuring that the function is indeed total.


Figure 85: Fundamental Lemma of Simulations

Instantiating this generic simulation lemma, we can for instance prove that renaming is a special case of substitution, or that renaming and substitution are extensional, i.e. that given environments which are equal in a pointwise manner they produce syntactically equal terms. Of course these results are not new but having them generically over all syntaxes with binding is convenient. We experienced this first hand when tackling the POPLMark Reloaded challenge (poplmarkreloaded) where rensub (defined in Figure 86) was actually needed.

[Generic/Simulation/Syntactic.tex]rensub [Generic/Simulation/Syntactic.tex]rensubfun

Figure 86: Renaming as a Substitution via Simulation
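The flavour of rensub can be illustrated on a toy untyped de Bruijn λ-calculus (our own encoding, unrelated to the formalisation's code): renaming along ρ agrees with substituting the environment of variables ρ points at.

```python
# Toy untyped de Bruijn λ-calculus: terms are tagged tuples, renamings
# are functions on indices, substitutions are functions to terms.
def rename(t, rho):
    tag = t[0]
    if tag == 'var':
        return ('var', rho(t[1]))
    if tag == 'app':
        return ('app', rename(t[1], rho), rename(t[2], rho))
    # 'lam': keep the bound variable, shift the images of free ones
    return ('lam', rename(t[1], lambda i: 0 if i == 0 else rho(i - 1) + 1))

def subst(t, sigma):
    tag = t[0]
    if tag == 'var':
        return sigma(t[1])
    if tag == 'app':
        return ('app', subst(t[1], sigma), subst(t[2], sigma))
    # 'lam': lift sigma, weakening the terms it produces
    return ('lam', subst(t[1], lambda i: ('var', 0) if i == 0
                         else rename(sigma(i - 1), lambda j: j + 1)))

def rensub(t, rho):
    # renaming as a substitution: wrap each index rho points at in 'var'
    return subst(t, lambda i: ('var', rho(i)))
```

Both sides produce the same term, including under binders where the environments get lifted.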

When studying specific languages, new opportunities to deploy the fundamental lemma of simulations arise. Our solution to the POPLMark Reloaded challenge for instance describes the fact that (sub ρ t) reduces to (sub ρ′ t) whenever for all v, ρ(v) reduces to ρ′(v) as a Simulation. The main theorem (strong normalisation of STLC via a logical relation) is itself an instance of (the unary version of) the simulation lemma.

The Simulation proof framework is the simplest example of the abstract proof frameworks introduced in ACMM (allais2017type), where we also explain how a similar framework can be defined for fusion lemmas and deploy it for the renaming-substitution interactions but also their respective interactions with normalisation by evaluation. Now that we are familiar with the techniques at hand, we can tackle this more complex example for all syntaxes definable in our framework.

9.3 Fusion Lemma

Results that can be reformulated as the ability to fuse two traversals obtained as Semantics into one abound. When claiming that Tm is a Functor, we have to prove that two successive renamings can be fused into a single renaming where the Thinnings have been composed. Similarly, demonstrating that Tm is a relative Monad (JFR4389) implies proving that two consecutive substitutions can be merged into a single one whose environment is obtained by applying the second substitution pointwise to the first. The Substitution Lemma central to most model constructions (see for instance (mitchell1991kripke)) states that a syntactic substitution followed by the evaluation of the resulting term into the model is equivalent to the evaluation of the original term with an environment corresponding to the evaluated substitution.

A direct application of these results is our (to be published) entry to the POPLMark Reloaded challenge (poplmarkreloaded). By using a Desc-based representation of intrinsically well typed and well scoped terms we directly inherit not only renaming and substitution but also all four fusion lemmas as corollaries of our generic results. This allows us to remove the usual boilerplate and go straight to the point. As all of these statements have precisely the same structure, we can once more devise a framework which will, provided that its constraints are satisfied, prove a generic fusion lemma.

Fusion is more involved than simulation; we will once more step through each one of the constraints individually, trying to give the reader an intuition for why they are shaped the way they are.

9.3.1 The Fusion Constraints

The notion of fusion is defined for a triple of Semantics; each 𝓢ⁱ being defined for values in 𝓥ⁱ and computations in 𝓒ⁱ. The fundamental lemma associated to such a set of constraints will state that running 𝓢ᴮ after 𝓢ᴬ is equivalent to running 𝓢ᴬᴮ only.

The definition of fusion is parametrised by three relations: 𝓔ᴿ relates triples of environments of values in (Γ ─Env) 𝓥ᴬ Δ, (Δ ─Env) 𝓥ᴮ Θ and (Γ ─Env) 𝓥ᴬᴮ Θ respectively; 𝓥ᴿ relates pairs of values 𝓥ᴮ and 𝓥ᴬᴮ; and 𝓒ᴿ, our notion of equivalence for evaluation results, relates pairs of computations in 𝓒ᴮ and 𝓒ᴬᴮ.

[Generic/Fusion.tex]fusionrec The first obstacle we face is the formal definition of “running 𝓢ᴮ after 𝓢ᴬ”: for this statement to make sense, the result of running 𝓢ᴬ ought to be a term. Or rather, we ought to be able to extract a term from a 𝓒ᴬ. Hence the first constraint: the existence of a reifyᴬ function, which we supply as a field of the record Fusion. When dealing with syntactic semantics such as renaming or substitution this function will be the identity. Nothing prevents proofs, such as the idempotence of NbE, which use a bona fide reification function that extracts terms from model values.

[Generic/Fusion.tex]reify Then, we have to think about what happens when going under a binder: 𝓢ᴬ will produce a Kripke function space where a syntactic value is required. Provided that 𝓥ᴬ is VarLike, we can make use of reify to get a Scope back. Hence the second constraint.

[Generic/Fusion.tex]vlV Still thinking about going under binders: if three evaluation environments ρᴬ in (Γ ─Env) 𝓥ᴬ Δ, ρᴮ in (Δ ─Env) 𝓥ᴮ Θ, and ρᴬᴮ in (Γ ─Env) 𝓥ᴬᴮ Θ are related by 𝓔ᴿ and we are given a thinning σ from Θ to Ω then ρᴬ, the thinned ρᴮ and the thinned ρᴬᴮ should still be related.

[Generic/Fusion.tex]thV Remembering that _>>_ is used in the definition of body (Figure 34) to combine two disjoint environments (Γ ─Env) 𝓥 Θ and (Δ ─Env) 𝓥 Θ into one of type ((Γ ++ Δ) ─Env) 𝓥 Θ, we mechanically need a constraint stating that _>>_ is compatible with 𝓔ᴿ. We demand as an extra precondition that the values with which ρᴮ and ρᴬᴮ are extended are related according to 𝓥ᴿ. Lastly, for all the types to match up, ρᴬ has to be extended with placeholder variables, which is possible because we have already insisted on 𝓥ᴬ being VarLike.

[Generic/Fusion.tex]appendR We finally arrive at the constraints focusing on the semantical counterparts of the terms’ constructors. Each constraint essentially states that evaluating a term with 𝓢ᴬ, reifying the result and running 𝓢ᴮ is equivalent to using 𝓢ᴬᴮ straight away. This can be made formal by defining the following relation 𝓡.

[Generic/Fusion.tex]crel When evaluating a variable, on the one hand 𝓢ᴬ will look up its meaning in the evaluation environment, turn the resulting value into a computation which will get reified and then the result will be evaluated with 𝓢ᴮ. Provided that all three evaluation environments are related by 𝓔ᴿ this should be equivalent to looking up the value in 𝓢ᴬᴮ’s environment and turning it into a computation. Hence the constraint varᴿ:

[Generic/Fusion.tex]varR The case of the algebra follows a similar idea albeit being more complex: a term gets evaluated using 𝓢ᴬ and, to be able to run 𝓢ᴮ afterwards, we need to recover a piece of syntax. This is possible if the Kripke function spaces are reified by being fed placeholder 𝓥ᴬ arguments (which can be manufactured thanks to the vl^𝓥ᴬ we mentioned before) and then quoted. Provided that the result of running 𝓢ᴮ on that term is related via ⟦ d ⟧ᴿ (Kripkeᴿ 𝓥ᴿ 𝓒ᴿ) to the result of running 𝓢ᴬᴮ on the original term, the algᴿ constraint states that the two evaluations yield related computations.


9.3.2 The Fundamental Lemma of Fusion

This set of constraints is enough to prove a fundamental lemma of Fusion stating that from a triple of related environments, one gets a pair of related computations: the composition of 𝓢ᴬ and 𝓢ᴮ on one hand and 𝓢ᴬᴮ on the other. This lemma is once again proven mutually with its counterpart for Semantics’s body’s action on Scopes.


Figure 87: Fundamental Lemma of Fusion

9.3.3 Instances of Fusion

A direct consequence of this result is the set of four lemmas collectively stating that any pair of renamings and / or substitutions can be fused together to produce either a renaming (in the renaming-renaming interaction case) or a substitution (in all the other cases). One such example is the fusion of a substitution followed by a renaming into a single substitution where the renaming has been applied to the environment.


Figure 88: A Corollary: Substitution-Renaming Fusion
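On a toy untyped de Bruijn λ-calculus (again our own encoding, not the formalisation's code), the statement of Figure 88 can be checked concretely: substituting with σ and then renaming with ρ agrees with a single substitution whose environment is σ renamed pointwise.

```python
# Toy untyped de Bruijn λ-calculus: terms are tagged tuples, renamings
# are functions on indices, substitutions are functions to terms.
def rename(t, rho):
    tag = t[0]
    if tag == 'var':
        return ('var', rho(t[1]))
    if tag == 'app':
        return ('app', rename(t[1], rho), rename(t[2], rho))
    # 'lam': the bound variable 0 is fixed, free indices are shifted
    return ('lam', rename(t[1], lambda i: 0 if i == 0 else rho(i - 1) + 1))

def subst(t, sigma):
    tag = t[0]
    if tag == 'var':
        return sigma(t[1])
    if tag == 'app':
        return ('app', subst(t[1], sigma), subst(t[2], sigma))
    # 'lam': lift sigma, weakening the terms it produces
    return ('lam', subst(t[1], lambda i: ('var', 0) if i == 0
                         else rename(sigma(i - 1), lambda j: j + 1)))

def fused(t, sigma, rho):
    # substitution-renaming fusion: one pass with a renamed environment
    return subst(t, lambda i: rename(sigma(i), rho))
```

The two-pass and one-pass traversals coincide even under binders, where both environments get lifted.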

Another corollary of the fundamental lemma of fusion is the observation that Kaiser, Schäfer, and Stark (Kaiser-wsdebr) make: assuming functional extensionality, all the ACMM (allais2017type) traversals are compatible with variable renaming. We reproduced this result generically for all syntaxes (see accompanying code). The need for functional extensionality arises in the proof when dealing with subterms which have extra bound variables. These terms are interpreted as Kripke functional spaces in the host language and we can only prove that they take equal inputs to equal outputs. An intensional notion of equality will simply not do here. As a consequence, we refrain from using the generic result in practice when an axiom-free alternative is provable. Kaiser, Schäfer and Stark’s observation naturally raises the question of whether the same semantics are also stable under substitution. Our semantics implementing printing with names is a clear counter-example.

9.4 Definition of Bisimilarity for Co-finite Objects

Although we were able to use propositional equality when studying syntactic traversals working on terms, it is not the appropriate notion of equality for co-finite trees. What we want is a generic coinductive notion of bisimilarity for all co-finite tree types obtained as the unfolding of a description. Two trees are bisimilar if their top layers have the same shape and their substructures are themselves bisimilar. This is precisely the type of relation ⟦_⟧ᴿ was defined to express. Hence the following coinductive relation.


Figure 89: Generic Notion of Bisimilarity for Co-finite Trees

We can then prove by coinduction that this generic definition always gives rise to an equivalence relation by using the relator’s stability properties (if R is reflexive / symmetric / transitive then so is (⟦ d ⟧ᴿ R)) mentioned in Section 9.1.


This definition can be readily deployed to prove e.g. that the unfolding of 01↺ defined in Section 8.1 is indeed bisimilar to 01⋯ which was defined in direct style. The proof is straightforward due to the simplicity of this example: the first refl witnesses the fact that both definitions pick the same constructor (a cons cell), the second that they carry the same natural number, and we can conclude by an appeal to the coinduction hypothesis.
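As a finitary illustration of the 01↺ / 01⋯ example (a Python sketch with our encoding; a real proof is coinductive rather than a prefix check):

```python
# The cyclic term 01↺ unfolds to the infinite stream 0,1,0,1,...;
# 01⋯ is the same stream written in direct style.
def unfold_cycle(values):
    # the unfolding of the cyclic term: repeat the finite cycle forever
    while True:
        yield from values

def alternating():
    # the direct-style definition of 0,1,0,1,...
    bit = 0
    while True:
        yield bit
        bit = 1 - bit

def bisimilar_upto(s, t, n):
    # agreement on the first n elements: a finite proxy for bisimilarity
    return all(next(s) == next(t) for _ in range(n))
```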


10 Related Work

10.1 Variable Binding

The representation of variable binding in formal systems has been a hot topic for decades. Part of the purpose of the first POPLMark challenge (poplmark) was to explore and compare various methods.

Having based our work on a de Bruijn encoding of variables, and thus a canonical treatment of α-equivalence classes, our work has no direct comparison with permutation-based treatments such as those of Pitts’ and Gabbay’s nominal syntax (gabbay:newaas-jv).

Our generic universe of syntax is based on scoped and typed de Bruijn indices (de1972lambda) but it is not a necessity. It is for instance possible to give an interpretation of Descriptions corresponding to Chlipala’s Parametric Higher-Order Abstract Syntax (DBLP:conf/icfp/Chlipala08) and we would be interested to see what the appropriate notion of Semantics is for this representation.

10.2 Alternative Binding Structures

The binding structure we present here is based on a flat, lexical scoping strategy. There are other strategies and it would be interesting to see whether our approach could be reused in these cases.

Weirich, Yorgey, and Sheard’s work (DBLP:conf/icfp/WeirichYS11) encompassing a large array of patterns (nested, recursive, telescopic, and n-ary) can inform our design. They do not enforce scoping invariants internally which forces them to introduce separate constructors for a simple binder, a recursive one, or a telescopic pattern. They recover guarantees by giving their syntaxes a nominal semantics thus bolting down the precise meaning of each combinator and then proving that users may only generate well formed terms.

Bach Poulsen, Rouvoet, Tolmach, Krebbers and Visser (BachPoulsen) introduce notions of scope graphs and frames to scale the techniques typical of well scoped and typed deep embeddings to imperative languages. They showcase the core ideas of their work using STLC extended with references and then demonstrate that they can already handle a large subset of Middleweight Java. We have demonstrated that our framework could be used to define effectful semantics by choosing an appropriate monad stack (DBLP:journals/iandc/Moggi91). This suggests we should be able to model STLC+Ref. It is however clear that the scoping structures handled by scope graphs and frames are, in their full generality, out of reach for our framework. In contrast, our work shines by its generality: we define an entire universe of syntaxes and provide users with traversals and lemmas implemented once and for all.

Many other opportunities to enrich the notion of binder in our library are highlighted by Cheney (DBLP:conf/icfp/Cheney05a). As we have demonstrated in Sections 7.5 and 7.6 we can already handle let-bindings generically for all syntaxes. We are currently considering the modification of our system to handle deeply-nested patterns by removing the constraint that the binders’ and variables’ sorts are identical. A notion of binding corresponding to hierarchical namespaces would be an exciting addition.

We have demonstrated how to write generic programs over the potentially cyclic structures of Ghani, Hamana, Uustalu and Vene (ghani2006representing). Further work by Hamana (Hamana2009) yielded a different presentation of cyclic structures which preserves sharing: pointers can not only refer to nodes above them but also across from them in the cyclic tree. Capturing this class of inductive types as a set of syntaxes with binding and writing generic programs over them is still an open problem.

10.3 Semantics of Syntaxes with Binding

An early foundational study of a general semantic framework for signatures with binding, algebras for such signatures, and initiality of the term algebra, giving rise to a categorical ‘program’ for substitution and proofs of its properties, was given by Fiore, Plotkin and Turi (FiorePlotkinTuri99). They worked in the category of presheaves over renamings, (a skeleton of) the category of finite sets. The presheaf condition corresponds to our notion of being Thinnable. Exhibiting algebras based on both de Bruijn level and index encodings, their approach isolates the usual (abstract) arithmetic required of such encodings.

By contrast, we are working in an implemented type theory where the encoding can be understood as its own foundation without appeal to an external mathematical semantics. We are able to go further in developing such machine-checked implementations and proofs, themselves generic with respect to an abstract syntax Desc of syntaxes-with-binding. Moreover, the usual source of implementation anxiety, namely concrete arithmetic on de Bruijn indices, has been successfully encapsulated via the □ coalgebra structure. It is perhaps noteworthy that our type-theoretic constructions, by contrast with their categorical ones, appear to make fewer commitments as to functoriality, thinnability, etc. in our specification of semantics, with such properties typically being provable as a further instance of our framework.

10.4 Meta-Theory Automation via Tactics and Code Generation

The tediousness of repeatedly proving similar statements has unsurprisingly led to various attempts at automating the pain away via either code generation or the definition of tactics. These solutions can be seen as untrusted oracles driving the interactive theorem prover.

Polonowski’s DBGen (polonowski:db) takes as input a raw syntax with comments annotating binding sites. It generates a module defining lifting, substitution as well as a raw syntax using names and a validation function transforming named terms into de Bruijn ones; we refrain from calling it a scopechecker as terms are not statically proven to be well scoped.

Kaiser, Schäfer, and Stark (Kaiser-wsdebr) build on our previous paper to draft possible theoretical foundations for Autosubst, a so-far untrusted set of tactics. The paper is based on a specific syntax: well scoped call-by-value System F. In contrast, our effort has been here to carve out a precise universe of syntaxes with binding and give a systematic account of these syntaxes’ semantics and proofs.

Keuchel, Weirich, and Schrijvers’ Needle (needleandknot) is a code generator written in Haskell producing syntax-specific Coq modules implementing common traversals and lemmas about them.

10.5 Universes of Syntaxes with Binding

Keeping in mind Altenkirch and McBride’s observation that generic programming is everyday programming in dependently-typed languages (DBLP:conf/ifip2-1/AltenkirchM02), we can naturally expect generic, provably sound, treatments of these notions in tools such as Agda or Coq.

Keuchel (Keuchel:Thesis:2011) together with Jeuring (DBLP:conf/icfp/KeuchelJ12) define a universe of syntaxes with binding with a rich notion of binding patterns closed under products but also sums as long as the disjoint patterns bind the same variables. They give their universe two distinct semantics: a first one based on well scoped de Bruijn indices and a second one based on Parametric Higher-Order Abstract Syntax (PHOAS) (DBLP:conf/icfp/Chlipala08) together with a generic conversion function from the de Bruijn syntax to the PHOAS one. Following McBride (mcbride2005type), they implement both renaming and substitution in one fell swoop. They leave other opportunities for generic programming and proving to future work.

Keuchel, Weirich, and Schrijvers’ Knot (needleandknot) implements as a set of generic programs the traversals and lemmas generated in specialised forms by their Needle program. They see Needle as a pragmatic choice: working directly with the free monadic terms over finitary containers would be too cumbersome. In our experience solving the POPLMark Reloaded challenge, Agda’s pattern synonyms make working with an encoded definition almost seamless.

The GMeta generic framework (gmeta) provides a universe of syntaxes and offers various binding conventions (locally nameless (Chargueraud2012) or de Bruijn indices). It also generically implements common traversals (e.g. computing the sets of free variables, shifting de Bruijn indices or substituting terms for parameters) as well as common predicates (e.g. being a closed term) and provides generic lemmas proving that they are well behaved. It does not offer a generic framework for defining new well scoped-and-typed semantics and proving their properties.

Érdi (gergodraft) defines a universe inspired by a first draft of this paper and gives three different interpretations (raw, scoped and typed syntax) related via erasure. He provides scope- and type-preserving renaming and substitution as well as various generic proofs that they are well behaved but offers neither a generic notion of semantics, nor generic proof frameworks.

Copello (copello2017) works with named binders and defines nominal techniques (e.g. name swapping) and ultimately α-equivalence over a universe of regular trees with binders inspired by Morris’ (morris-regulartt).

10.6 Fusion of Successive Traversals

The careful characterisation of the successive recursive traversals which can be fused together into a single pass in a semantics-preserving way is not new. This transformation is a much needed optimisation principle in a high-level functional language.

Through the careful study of the recursion operator associated to each strictly positive datatype, Malcolm (DBLP:journals/scp/Malcolm90) defined optimising fusion proof principles. Other optimisations such as deforestation (DBLP:journals/tcs/Wadler90) or the compilation of a recursive definition into an equivalent abstract machine-based tail-recursive program (DBLP:conf/icfp/CortinasS18) rely on similar generic proofs that these transformations are meaning-preserving.

11 Conclusion and Future Work

Recalling our earlier work (allais2017type), we have started from an example of a scope- and type-safe language (the simply typed λ-calculus), have studied common invariant-preserving traversals and noticed their similarity. After introducing a notion of semantics and refactoring these traversals as instances of the same fundamental lemma, we have observed the tight connection between the abstract definition of semantics and the shape of the language.

By extending a universe of datatype descriptions to support a notion of binding, we have given a generic presentation of syntaxes with binding. We then described a large class of scope- and type-safe generic programs acting on all of them. We started with syntactic traversals such as renaming and substitution. We then demonstrated how to write a small compiler pipeline: scope checking, type checking and elaboration to a core language, desugaring of new constructors added by a language transformer, dead code elimination and inlining, partial evaluation, and printing with names.

We have seen how to construct generic proofs about these generic programs. We first introduced a Simulation relation showing what it means for two semantics to yield related outputs whenever they are fed related input environments. We then built on our experience to tackle a more involved case: identifying a set of constraints guaranteeing that two semantics run consecutively can be subsumed by a single pass of a third one.

We have put all of these results into practice by using them to solve the (to be published) POPLMark Reloaded challenge which consists of formalising strong normalisation for the simply typed λ-calculus via a logical-relation argument. This also gave us the opportunity to try our framework on larger languages by tackling the challenge’s extensions to sum types and Gödel’s System T.

Finally, we have demonstrated that this formalisation can be re-used in other domains by seeing our syntaxes with binding as potentially cyclic terms. Their unfolding is a non-standard semantics and we provide the user with a generic notion of bisimilarity to reason about them.

11.1 Limitations of the Current Framework

Although quite versatile already, our current framework has some limitations which suggest avenues for future work. We list these limitations from easiest to hardest to resolve. Remember that each modification to the universe of syntaxes needs to be given an appropriate semantics.

Closure under Products

Our current universe of descriptions is closed under sums as demonstrated in Section 5. It is however not closed under products: two arbitrary right-nested products conforming to a description may disagree on the sort of the term they are constructing. An approach where the sort is an input from which the description of allowed constructors is computed (à la Dagand (DBLP:phd/ethos/Dagand13) where, for instance, the ‘lam constructor is only offered if the input sort is a function type) would not suffer from this limitation.

Unrestricted Variables

Our current notion of variable can be used to form a term of any kind. We remarked in Sections 7.3 and 7.4 that in some languages we want to restrict this ability to one kind in particular. In that case, we wanted users to only be able to use variables at the kind Infer of our bidirectional language. For the time being we made do by restricting the environment values our Semantics use to a subset of the kinds: terms with variables of the wrong kind will not be given a semantics.

Flat Binding Structure

Our current setup limits us to flat binding structures: variables and binders share the same kinds. This prevents us from representing languages with binding patterns, for instance pattern-matching let-binders which can have arbitrarily nested patterns taking pairs apart.

Closure under Derivation

One-hole contexts play a major role in the theory of programming languages. Just like the one-hole context of a datatype is a datatype (DBLP:journals/fuin/AbbottAMG05), we would like our universe to be closed under derivatives so that the formalisation of e.g. evaluation contexts could benefit directly from the existing machinery.

Closure under Closures

Jander’s work on formalising and certifying continuation passing style transformations (Jander:Thesis:2019) highlighted the need for a notion of syntaxes with closures. Recalling that our notion of Semantics is always compatible with precomposition with a renaming (Kaiser-wsdebr) but not necessarily precomposition with a substitution (printing is for instance not stable under substitution), accommodating terms with suspended substitutions is a real challenge. Preliminary experiments show that a drastic modification of the type of the fundamental lemma of Semantics makes dealing with such closures possible. Whether the resulting traversal has good properties that can be proven generically is still an open problem.

11.2 Future work

The diverse influences leading to this work suggest many opportunities for future research.

  • Our example of the elaboration of an enriched language to a core one, ACMM’s implementation of a Continuation Passing Style conversion function, and Jander’s work (Jander:Thesis:2019) on the certification of an intrinsically typed CPS transformation raise the question of how many such common compilation passes can be implemented generically.

  • Our universe only includes syntaxes that allow unrestricted variable use. Variables may be used multiple times or never, with no restriction. We are interested in representing syntaxes that only allow single use of variables, such as term calculi for linear logic (DBLP:conf/tlca/BentonBPH93; barber96dual; context-constrained), or that annotate variables with usage information (BrunelGMZ14; GhicaS14; PetricekOM14), or arrange variables into non-list like structures such as bunches (DBLP:journals/jfp/OHearn03), or arbitrary algebraic structures (DBLP:conf/rta/LicataSR17), and in investigating what form a generic semantics for these syntaxes takes.

  • An extension of McBride’s theory of ornaments (mcbride2010ornamental) could provide an appropriate framework to formalise and mechanise the connection between various languages, some being seen as refinements of others. This is particularly evident when considering the informative typechecker (see the accompanying code) which given a scoped term produces a scoped-and-typed term by type-checking or type-inference.

  • Our work on the POPLMark Reloaded challenge highlights a need for generic notions of congruence closure which would come with guarantees (if the original relation is stable under renaming and substitution then so is its closure). Similarly, the “evaluation contexts” corresponding to a syntax could be derived automatically by building on the work of Huet (huet_1997) and Abbott, Altenkirch, McBride and Ghani (DBLP:journals/fuin/AbbottAMG05), allowing us to revisit previous work based on concrete instances of ACMM such as McLaughlin, McKinna and Stark (craig2018triangle).

We now know how to generically describe syntaxes and their well behaved semantics. We can now start asking what it means to define well behaved judgments. Why stop at helping the user write their specific language’s meta-theory when we could study meta-meta-theory?