Elegant elaboration with function invocation

05/31/2021 ∙ by Tesla Zhang, et al. ∙ Penn State University

We present an elegant design of the core language of a dependently typed lambda calculus with δ-reduction, together with an elaboration algorithm.


1. Introduction

Throughout this paper, we will use boxes in the following two cases:

  • to clarify the precedence of symbols when a formula becomes too large;

  • to distinguish type theory terms from natural language text, e.g. we combine a term with a term to get a term.

In the context of practical functional programming languages, functions can be either primitive (like arithmetic operations on primitive numbers, or cubical primitives [TR-X, §3.2, §3.4, §5.1] such as transp, hcomp, Glue, etc.) or defined (having user-definable reduction rules, e.g. via pattern matching). The reduction of applications of defined functions is known as δ-reduction [ACM-Delta], and the to-be-reduced terms are called δ-redexes (redex is short for reducible expression).

In general, function definitions may reduce only when fully applied, yet functions are also curried, allowing flexible partial application. So, from the type-theoretical perspective, functions are always unary and function application is a binary operation, while in the operational semantics, function application is n-ary and δ-reduction fires only when enough arguments are supplied. We may use elaboration, a process that transforms a user-friendly concrete syntax into a compact, type-theoretical core language using type information, to deal with this mismatch between the intuitive concrete syntax and the optimal internal representation.

For defined functions, even though they are sometimes elaborated into a combination of lambdas and case trees [DepPM] or eliminators [Goguen06] (so that the function syntax can be regarded as syntactic sugar over lambdas), the elaborated terms reflect implementation details of the programming language, which we tend to hide from users.

To ensure that functions are fully applied, we could check whether sufficient arguments are supplied every time we want to reduce an application of a function. Elaboration is also helpful here: we could choose an efficient representation of function application in the core language and elaborate the user-friendly concrete syntax into it. Consider a term max 1 2, where max is a function taking two natural numbers and returning the larger one; we present two styles of function application in the core language.

  • Binary application. Each application node carries exactly one argument. The above term will be structured as (max 1) 2.

  • Spine-normal form. Each application node pairs a head with a list of arguments (the argument list is called a spine), so arguments are collected as a list in application terms. The above term will be structured as-is.
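To make the contrast concrete, here is a small Python sketch (the constructor and function names are hypothetical, not the paper's code) of both application styles, together with the argument-counting traversal that the binary style forces:

```python
# A sketch contrasting the two application styles for the term `max 1 2`.
from dataclasses import dataclass

# Binary application: one argument per node.
@dataclass
class BFn:
    name: str

@dataclass
class BLit:
    value: int

@dataclass
class BApp:
    fn: object
    arg: object

# Spine-normal form: the head carries the whole argument list.
@dataclass
class SApp:
    head: object
    args: list

def count_args(t):
    """Binary style: counting supplied arguments walks the chain, O(n)."""
    n = 0
    while isinstance(t, BApp):
        n += 1
        t = t.fn
    return n

def enough_args(t, arity):
    """Spine style: a single length comparison, O(1)."""
    return isinstance(t, SApp) and len(t.args) >= arity

binary = BApp(BApp(BFn("max"), BLit(1)), BLit(2))
spine = SApp(BFn("max"), [BLit(1), BLit(2)])  # leaf constructors reused for brevity
```

Here count_args must walk two application nodes to report 2, while the spine version answers the same question with one list-length comparison.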

If we choose the binary representation, checking the number of supplied arguments requires a traversal of the term, which is an O(n) process, where n is the number of parameters of the applied function (accessing the applied function from an application term is also O(n)).

This is not a problem in the spine-normal form representation, where we only need to check whether the size of the argument list is at least the number of parameters during δ-reduction. Size comparison is an O(1) process for memory-efficient arrays. However, inserting extra arguments into the spine is a list-mutation operation, which requires a list reconstruction in a purely functional setting: the reconstruction creates a new list with a larger size and copies the old list with the new argument appended to the end, which is again an O(n) process.

Similar problems also exist for indexed types [SIT], where we need the types to be fully applied to determine the available constructors.

1.1. Contributions

We discuss an even more elegant design of application-term representation in which neither a traversal of terms during reduction nor a mutation of the argument list during application is needed. We use both binary application and spine-normal form to guarantee that the number of supplied arguments always fits the requirement, so we can always assume that function applications have sufficient arguments supplied.

We present the syntax (in sec:syntax) and a bidirectional-style elaboration algorithm (in sec:elab) which can be adapted to the implementation of any programming language with δ-reduction.

2. Syntax

We will assume the existence of well-typed function definitions (discussed further in sec:elab) and focus on lambda calculus terms. Inductive types and pattern matching are likewise assumed and omitted.

2.1. Core language

The syntax is defined in fig:term.

Core terms are assumed to have several properties: they are well-typed, well-scoped, and exactly-applied, i.e. the number of arguments matches exactly the number of parameters.

  • variable names
  • exactly-applied function
  • reference
  • binary application
  • Π-type
  • lambda abstraction
  • substitution object
Figure 1. Syntax of terms

We will assume the substitution operation on core terms, written as M[v/x].

For a function f that has two parameters and returns a function with two parameters, the application f a b c d is structured as ((f(a, b)) c) d. The innermost subterm f(a, b) is an exactly-applied function and the outer terms are binary applications.
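As an illustration, the core syntax can be modeled so that exactly-applied function calls and binary applications are distinct constructors. This is a hypothetical Python rendering, not the paper's implementation; all names are assumptions:

```python
# A sketch of the core term syntax of Figure 1.
from dataclasses import dataclass

@dataclass
class FnCall:            # exactly-applied function: arity-many arguments
    name: str
    args: list

@dataclass
class Ref:               # reference
    name: str

@dataclass
class App:               # binary application
    fn: object
    arg: object

@dataclass
class Pi:                # Pi-type, binding a variable in the codomain
    var: str
    dom: object
    cod: object

@dataclass
class Lam:               # lambda abstraction
    var: str
    body: object

# `f a b c d` for a two-parameter f returning a two-parameter function:
# the innermost node is exactly-applied; the outer nodes are binary.
example = App(App(FnCall("f", [Ref("a"), Ref("b")]), Ref("c")), Ref("d"))
```

Because the exactly-applied call is its own constructor, δ-reduction never needs to count arguments: a FnCall node is reducible by construction.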

2.2. Concrete syntax

The concrete syntax is defined in fig:expr. We assume names to be already resolved in the concrete syntax, so terms are well-scoped and local references are distinguished from function references.

The symbols for local references and function references are overloaded because they do not differ between the concrete and core syntax.

  • function reference
  • local reference
  • application
  • lambda abstraction
Figure 2. Concrete syntax tree

In the concrete syntax, applications are always binary.

We do not have Π-types in the concrete syntax because they are unrelated to the application syntax, and their type checking involves the universe type, which is quite complicated.

3. Elaboration

In this section, we describe the process that type checks concrete terms against core-language types and translates them into well-typed, exactly-applied core terms.

We define the following operation, apply, to eliminate obviously reducible core terms:

apply(λx. M, v) = M[v/x]
apply(u, v) = u v    (otherwise)
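A minimal Python sketch of this operation, using a naive, capture-unsafe substitution for brevity (all names here are illustrative assumptions, not the paper's code):

```python
from dataclasses import dataclass

@dataclass
class Ref:
    name: str

@dataclass
class App:
    fn: object
    arg: object

@dataclass
class Lam:
    var: str
    body: object

def subst(x, v, t):
    """t[v/x]; does not rename bound variables (enough for a sketch)."""
    if isinstance(t, Ref):
        return v if t.name == x else t
    if isinstance(t, App):
        return App(subst(x, v, t.fn), subst(x, v, t.arg))
    if isinstance(t, Lam):
        return t if t.var == x else Lam(t.var, subst(x, v, t.body))
    return t

def apply(u, v):
    """Eliminate the obvious redex: beta-reduce a lambda head on the
    spot; otherwise build a binary application node."""
    if isinstance(u, Lam):
        return subst(u.var, v, u.body)
    return App(u, v)
```

So apply(Lam("x", Ref("x")), Ref("y")) yields Ref("y") directly, while apply(Ref("f"), Ref("y")) remains a binary application.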

We will use a bidirectional elaboration algorithm [BidirTyck, MiniTT]. It uses the following typing judgments, parameterized by a context Γ:

  • Γ ⊢ M : A ⇐ u, normally called a checking or an inherit rule.

  • Γ ⊢ u ⇒ M : A, normally called an inferring or a synthesis rule.

  • x : A ∈ Γ, stating that the context contains a local binding x : A.

Checking judgments are for introduction rules, while synthesis judgments are for formation and elimination rules. The direction of the arrows is inspired by [MiniTT, §6.6.1]. The arrows (⇐ and ⇒) separate the inputs and outputs of the corresponding elaboration procedure: checking judgments take as input a term in concrete form and its expected type in core form, and produce as output an elaborated version of the term in core form; synthesis judgments take as input a term in concrete form and emit its elaborated form and the inferred type as output.

The context Γ contains a list of local bindings (pairs of local variables and types, written as x : A) and a list of functions.

A function is defined with a name and a signature (and a body that we do not care about in this paper), where the signature tells us about its parameters and the return type.

3.1. Abstraction and application

First of all, we have the basic elaboration rules for type conversion and local references:

Γ ⊢ u ⇒ M : A    A =_βη B
―――――――――――――――――――――――――― Conv
Γ ⊢ M : B ⇐ u

x : A ∈ Γ
――――――――――――――― Var
Γ ⊢ x ⇒ x : A

The rules for lambda abstraction and application are quite straightforward:

Γ, x : A ⊢ M : B[x/y] ⇐ u
――――――――――――――――――――――――――――――― Lam
Γ ⊢ λx. M : Π(y : A). B ⇐ λx. u

Γ ⊢ u ⇒ M : Π(x : A). B    Γ ⊢ N : A ⇐ v
――――――――――――――――――――――――――――――――――――――――― App
Γ ⊢ u v ⇒ apply(M, N) : B[N/x]
3.2. Functions

Before continuing to function references, we need to define the notation for function signatures. We define a parameter list Δ to be a list of local bindings:

Δ ::= x₁ : A₁, …, xₙ : Aₙ

Then, we assume two operations for every function symbol f:

  • paramOf(Γ, f), which returns a Δ that represents the parameters of f in Γ.

  • typeOf(Γ, f), which returns a term that represents the type (combining the parameters and the return type) of f in Γ.

We will need a few more operations before introducing the elaboration rules for functions.

vars(Δ): extracts the variables (as a list of terms) from Δ:

vars(x₁ : A₁, …, xₙ : Aₙ) = x₁, …, xₙ

lambda(t, Δ): generates a lambda abstraction by induction on Δ:

lambda(t, (x₁ : A₁, …, xₙ : Aₙ)) = λx₁. ⋯ λxₙ. t
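In Python, the two helpers are straightforward folds over the parameter list. This sketch uses invented names (vars_of, lam) and continues the earlier hypothetical core-syntax encoding:

```python
# Δ is modeled as a list of (variable, type) pairs.
from dataclasses import dataclass

@dataclass
class Ref:
    name: str

@dataclass
class Lam:
    var: str
    body: object

@dataclass
class FnCall:
    name: str
    args: list

def vars_of(delta):
    """Extract the bound variables of Δ as a list of terms."""
    return [Ref(x) for x, _ in delta]

def lam(body, delta):
    """Wrap the body in one lambda per parameter, outermost first."""
    for x, _ in reversed(delta):
        body = Lam(x, body)
    return body
```

Given delta = [("a", Ref("Nat")), ("b", Ref("Nat"))], the term lam(FnCall("f", vars_of(delta)), delta) is the eta-expansion λa. λb. f(a, b), in which the function call itself is exactly-applied.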

With these two operations, we can define the elaboration rule for functions. Let Δ = paramOf(Γ, f); we have the synthesis rule for function references as in eqn:fun.

f ∈ Γ
―――――――――――――――――――――――――――――――――――――――――――― Fun
Γ ⊢ f ⇒ lambda(f(vars(Δ)), Δ) : typeOf(Γ, f)

Figure 3. Elaboration of functions

Here are some examples. Consider the function max discussed in sec:intro; we have the following facts:

paramOf(Γ, max) = (a : Nat, b : Nat)
typeOf(Γ, max) = Π(a : Nat). Π(b : Nat). Nat

Example (ex:max). Concrete term max. By the rule in eqn:fun, max elaborates to lambda(max(vars(Δ)), Δ) = λa. λb. max(a, b). Observe that max(a, b) is exactly-applied. The result type is typeOf(Γ, max), which equals Π(a : Nat). Π(b : Nat). Nat.

Example (ex:nest). Concrete term max 1, assuming 1 ⇒ 1 : Nat. We already know the elaboration result of max in ex:max. By the App rule, since apply(λa. λb. max(a, b), 1) = λb. max(1, b), the elaborated version of max 1 is λb. max(1, b), typed Π(b : Nat). Nat. Observe that max(1, b) is still exactly-applied.
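Continuing the hypothetical Python sketch (invented names, naive capture-unsafe substitution), the two examples can be replayed mechanically: the rule in eqn:fun eta-expands the function reference, and apply contracts the redex created by the extra argument:

```python
from dataclasses import dataclass

@dataclass
class Ref:
    name: str

@dataclass
class Lit:
    value: int

@dataclass
class Lam:
    var: str
    body: object

@dataclass
class App:
    fn: object
    arg: object

@dataclass
class FnCall:
    name: str
    args: list

def subst(x, v, t):
    """Naive t[v/x] without capture avoidance (a sketch)."""
    if isinstance(t, Ref):
        return v if t.name == x else t
    if isinstance(t, Lam):
        return t if t.var == x else Lam(t.var, subst(x, v, t.body))
    if isinstance(t, App):
        return App(subst(x, v, t.fn), subst(x, v, t.arg))
    if isinstance(t, FnCall):
        return FnCall(t.name, [subst(x, v, a) for a in t.args])
    return t  # literals

def apply(u, v):
    return subst(u.var, v, u.body) if isinstance(u, Lam) else App(u, v)

# ex:max — the bare reference `max` eta-expands into a lambda
# whose body is an exactly-applied call:
elab_max = Lam("a", Lam("b", FnCall("max", [Ref("a"), Ref("b")])))

# ex:nest — applying one argument contracts the redex, leaving the
# call exactly-applied under the remaining lambda:
elab_max_1 = apply(elab_max, Lit(1))
```

Here elab_max_1 evaluates to Lam("b", FnCall("max", [Lit(1), Ref("b")])), the core-language rendering of λb. max(1, b).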

Theorem. Functions are always exactly-applied in the core language, as promised in sub:contrib.

Proof.

We only generate exactly-applied functions in eqn:fun, and we do not take arguments away or insert new arguments to function application terms in any other rules or operations. ∎

4. Conclusion

We have discussed an elegant core-language design with δ-redexes, together with an elaboration algorithm. With this design, anywhere in the compiler, we can assume any given δ-redex to be exactly-applied. Appending extra arguments to an application term results in a binary application term whose head is not a function reference (as in ex:nest).

4.1. Implementation

The discussed core language design is implemented in two proof assistants:

  • Arend [Arend]. There is an abstract class DefCallExpression that generalizes all sorts of definition invocations, including functions, data types, constructors, etc. These expressions are always exactly-applied. The source code is hosted on GitHub (see https://github.com/JetBrains/Arend).

  • Aya [Aya]. Similar to Arend, there is an interface CallTerm in Aya. The source code is also hosted on GitHub (see https://github.com/aya-prover/aya-dev).

In the implementations, there is one extra complication: invocations to constructors of inductive types have access to the parameters of the inductive type.

4.2. Related work

The notion of δ-reduction was discussed in [ACM-Delta], but the δ-redexes there are represented in binary application form. Exactly-applied δ-redexes are discussed in [DeltaRed, DepPM], but these works do not discuss elaboration. The idea of separating spine-normal function application from binary application also appeared in an early work on LISP [LISP, §6], where spines are referred to as rails and binary applications as pairs, but it likewise does not discuss elaboration.

In the elaboration of Lean 2 [Lean2], functions are transformed into lambdas and recursors (the corresponding redexes are referred to as β-redexes and ι-redexes [Lean2, §3.3], respectively). This design is not friendly to primitive functions that only work when fully applied.

4.3. Acknowledgments

We would like to thank Marisa Kirisame, Ende Jin, Qiantan Hong, and Zenghao Gao for their comments and suggestions on the draft versions of this paper.