1. Combining safe coercions with dependent types
A newtype in Haskell (in this paper, we use "Haskell" to refer to the language implemented by the Glasgow Haskell Compiler (GHC), version 8.6) is a user-defined algebraic datatype with exactly one constructor; that constructor takes exactly one argument. Here is an example:
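The declaration itself is elided in our copy; reconstructed from the surrounding discussion (which names HTML, MkHTML, and String), it is:

```haskell
-- One constructor (MkHTML), one argument (a String).
newtype HTML = MkHTML String
```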
This declaration creates a generative abstraction; the HTML type is new in the sense that it is not equal to any existing type. We call the argument type (String) the representation type. Because a newtype is isomorphic to its representation type, the Haskell compiler uses the same in-memory format for values of these types. Thus, creating a value of a newtype (i.e., calling MkHTML) is free at runtime, as is unpacking it (i.e., using a pattern match).
However, the Haskell type checker treats the newtype and its representation type as wholly distinct, meaning programmers cannot accidentally confuse HTML objects with String objects. We thus call newtypes a zero-cost abstraction: a convenient compile-time distinction with no cost to runtime efficiency. A newtype exported from a module without its constructor is an abstract datatype; clients do not have access to its representation.
Inside the defining module, a newtype is a translucent abstraction: you can see through it with effort. The safe coercions (Breitner et al., 2016) extension to the Glasgow Haskell Compiler (GHC) reduces this effort through the availability of the coerce primitive. For example, as HTML and String are represented by the same bits in memory, so are the lists [HTML] and [String]. Therefore, we can define a no-cost operation that converts the former to the latter by coercing between representationally equal types.
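For instance, a minimal sketch of such a no-cost conversion (the function name htmlsToStrings is ours; coerce, HTML, and String are as in the text):

```haskell
import Data.Coerce (coerce)

newtype HTML = MkHTML String

-- [HTML] and [String] share a run-time representation, so this
-- conversion compiles to the identity function: no list traversal.
htmlsToStrings :: [HTML] -> [String]
htmlsToStrings = coerce
```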
However, coerce must be used with care. Not every structure is appropriate for conversion. For example, converting a Map HTML Int to a Map String Int would be disastrous if the ordering relation used for keys differs between HTML and String. Even worse, allowing coerce on types that use the type family feature (Chakravarty et al., 2005) leads to unsoundness. Haskell thus includes role annotations for type constructors that indicate whether it is appropriate to lift newtype-coercions through abstract structures, such as Map.
The key idea of safe coercions is that there are two different notions of type equality at play—the usual definition of type equality, called nominal equality, that distinguishes between the types HTML and String, and representational equality that identifies types suitable for coercion. Some type constructor arguments are not congruent with respect to representational equivalence, so role annotations prohibit the derivation of these undesired equalities.
1.1. Extending GHC with Dependent Types
Recent work has laid out a design for Haskell extended with dependent types (Weirich et al., 2013; Gundry, 2013; Eisenberg, 2016; Weirich et al., 2017), and there is ongoing work dedicated to implementing this theory (Xie and Eisenberg, 2018; see also https://github.com/ghc-proposals/ghc-proposals/pull/102). Dependent types are desirable for Haskell because they increase the ability to create abstractions. Indexing types by terms allows datatypes to maintain application-specific invariants, increasing program reliability and expressiveness (Oury and Swierstra, 2008; Weirich, 2014, 2017).
However, even though dependent type theories are fundamentally based on a rich definition of type equality, it is not clear how best to incorporate roles and safe coercions with these systems. In the context of GHC, this omission has interfered with the incorporation of dependent types. To make progress we must reconcile the practical efficiency of safe, zero-cost coercions with the power of dependent types. We need to know how a use of coerce interacts with type equality, and we must resolve how roles can be assigned in the presence of type dependency.
The contribution of this work is System DR, a language that safely integrates dependent types and roles. The starting point for our design is System D, the core language for Dependent Haskell from Weirich et al. (2017). Though this language has full-spectrum dependent types, it is not a standard dependent type theory: it admits logical inconsistency and the Type-in-Type axiom, along with support for equality assumptions and type erasure. Our integration of roles is meant to be realizable in GHC and is thus based on the existing design of Breitner et al. (2016).
Integrating roles with Dependent Haskell’s type system is not straightforward. Unpacking the point above, our paper describes the following aspects of our contribution:
To model the two different notions of type equality, we index the type system’s definition of equality by roles, using the declared roles of abstract constructors to control the sorts of equivalences that may be derived. In Section 3 we describe how roles and newtype axioms interact with a minimal dependent type system. In particular, type equality is based on computation, so we also update the operational semantics to be role-sensitive. Newtypes evaluate to their representations only at the representational role; at the nominal role, they are values. In contrast, type family axioms step to their definitions at all roles.
Supporting GHC’s type families requires an operation for intensional type-analysis as type families branch on the head constructors of types. Therefore, in Section 4.3 we add a case expression to model this behavior. Because our language is dependently-typed, this expression supports both compile-time type analysis (as in type families) and run-time type analysis (i.e. typecase).
Our type equality includes nth projections, a way to decompose equalities between type constants. We describe the rules that support these projections and how they interact with roles in Section 4.4.
Our mechanized proof in Coq, available online at https://github.com/sweirich/corespec, is presented in Section 5. Proving safety is important because the combination of coerce and type families, without roles, is known to violate type safety (originally reported as GHC ticket #1496, http://ghc.haskell.org/trac/ghc/ticket/1496). This work provides the first mechanically checked proof of the safety of this combination.
Our work solves a longstanding issue in GHC, known as the Constraint vs. Type problem. In Section 6.1 we describe this problem and how defining Constraint as a newkind resolves this tension.
Our work sheds new light on the semantics of safe coercions. Prior work (Breitner et al., 2016) includes a phantom role, in addition to the nominal and representational roles. This role allows free conversion between the parameters of type constructors that do not use their arguments. In Section 6.2 we show that this role need not be made primitive, but instead can be encoded using irrelevant arguments.
We also observe that although our work integrates roles and dependent types at the level of GHC's core intermediate language, we lack a direct specification of source Haskell augmented with the coerce primitive. The problem, which we describe in detail in Section 6.3, is that it is difficult to give an operational semantics of coerce: reducing it away would violate type preservation, but it quite literally has no runtime behavior. Instead, in Section 6.4 we argue that our core language can provide a (type-sound) specification through elaboration.
Although our work is tailored to our goal of adding dependent types to Haskell, an existing language with safe coercions, we also view it as a blueprint for adding safe coercions to dependently-typed languages. Many dependently-typed languages include features related to the ones discussed here. For example, some support semi-opaque definitions, such as Coq's Opaque and Transparent commands. Such definitions often guide type-class resolution (Sozeau and Oury, 2008; Devriese and Piessens, 2011; Brady, 2013), so precise control over their unfolding is important. Cedille includes zero-cost coercions (Diehl et al., 2018) and Idris has recently added experimental support for typecase (https://gist.github.com/edwinb/25cd0449aab932bdf49456d426960fed). Because the design considerations of these languages differ from those of GHC, we compare our treatment of roles to modal dependent type theory in Section 8.
Our Coq development is a significant extension of prior work (Weirich et al., 2017). Our work is fully integrated—the same source is used to generate the LaTeX inference rules, Coq definitions, and variable binding infrastructure. We use the tools Ott (Sewell et al., 2010) and LNgen (Aydemir and Weirich, 2010) to represent the binding structure using a locally nameless representation (Aydemir et al., 2008). Our code includes over 21k nonblank, noncomment lines of Coq definitions and proofs. This total includes 1.8k lines generated directly from our Ott definitions, and 7k lines generated by LNgen.
In the next section, we review existing mechanisms for newtypes and safe coercions in GHC in more detail and lay out the considerations that govern their design. We present our new system starting in Section 3.
2. Newtypes and safe coercions in Haskell
2.1. Newtypes provide zero-cost abstractions
We first flesh out the HTML example of the introduction, by considering this Haskell module:
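The module itself is not reproduced in our copy; the following sketch is consistent with the surrounding description (the escape helper is our assumption about how text enforces its invariant):

```haskell
-- In the real module the export list would be:
--   module Html ( HTML, text ) where
-- MkHTML is deliberately not exported, keeping HTML abstract.

newtype HTML = MkHTML String

-- The only way for clients to build an HTML value; it establishes the
-- type's invariant by escaping its input.
text :: String -> HTML
text s = MkHTML (escape s)
  where
    escape = concatMap esc
    esc '<' = "&lt;"
    esc '>' = "&gt;"
    esc c   = [c]
```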
As above, HTML is a newtype; its representation type is String. This means that HTMLs and Strings are represented identically at runtime and that the MkHTML constructor compiles into the identity function. However, the type system keeps HTML and String separate: a function that expects an HTML will not accept something of type String.
Even in this small example, the Haskell programmer benefits from the newtype abstraction. The module exports the type HTML but not its data constructor MkHTML. Accordingly, outside the module, the only way to construct a value of this type is to use the text function, which enforces the invariant of the data structure. By exporting the type HTML without its data constructor, the module ensures that the type is abstract—clients cannot make arbitrary strings into HTML—and thereby prevents, for instance, cross-site scripting attacks.
Naturally, the author of this module will want to reuse functions that work with Strings to also work with values of type HTML—even if those functions actually work with, say, lists of Strings. To support this reuse, certain types, including functions ((->)) and lists, allow us to lift coercions between String and HTML. For example, suppose we wish to break up chunks of HTML into their constituent lines. We define
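The definition, reconstructed from the description in the next paragraph, is a single coercion of the standard lines function:

```haskell
import Data.Coerce (coerce)

newtype HTML = MkHTML String

-- lines :: String -> [String], lifted to HTML at zero cost.
linesH :: HTML -> [HTML]
linesH = coerce lines
```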
Using Haskell’s standard lines :: String -> [String] function, we have now, with minimal effort, lifted its action to work over HTML. Critically, the use of coerce above is allowed only when the MkHTML constructor is in scope; in other words, linesH can be written in the Html module but not outside it. In this way, the author of HTML has a chance to ensure that any invariants of the HTML type are maintained if an HTML chunk is split into lines.
2.2. Newtypes guide type-directed programming
Newtypes allow more than just abstraction; they may also be used to guide type-directed programming. For example, the sorting function in the base library has the following type.
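The type in question, from Data.List in the base library, is:

```haskell
sort :: Ord a => [a] -> [a]
```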
The Ord type class constraint means that sorting requires a comparison function. When this function is called, the standard comparison function for the element type will be used. In other words, the type of the list determines how it is sorted.
Suppose our application sometimes must work with sorted lists of HTML chunks. For efficiency reasons, we wish to partition our sorted lists into a region where all chunks start with a tag (that is, the ’<’ character) and a region where no chunk starts with a tag. To that end, we define a custom Ord instance that will sort all HTML chunks that begin with < before all those that do not.
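One possible instance matching this description (the lexicographic tie-breaking is our choice; the text specifies only that chunks beginning with '<' sort first):

```haskell
newtype HTML = MkHTML String deriving Eq

-- Chunks beginning with '<' compare as smaller than chunks that do
-- not, so sorting places all tagged chunks first; ties fall back to
-- the underlying String ordering.
instance Ord HTML where
  compare (MkHTML a) (MkHTML b) =
    compare (startsWithTag b) (startsWithTag a) <> compare a b
    where
      startsWithTag s = take 1 s == "<"
```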
Now, when we sort a list of chunks, we can be confident that the sorting algorithm will use our custom comparison operation. The validity of this approach vitally depends on the generative nature of newtypes: if the type-checker could confuse HTML with String, we could not be sure whether type inference would select our custom ordering or the default lexicographic ordering.
Newtypes can also be used to locally override the behavior of the sorting operation. For example, the newtype Down a, defined in the Haskell base library, is isomorphic to its representation a, but reverses the comparison in its instance for Ord. Therefore, to sort a list in reverse order, use coerce to change the element type from a to Down a, thus modifying the comparison operation used by sort.
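A minimal sketch (the name sortDesc is ours; Down and coerce are as described in the text):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import Data.Coerce (coerce)
import Data.List (sort)
import Data.Ord (Down (..))

-- Sorting at type [Down a] uses Down's reversed Ord instance; the
-- coercions are free because Down a and a share a representation.
sortDesc :: forall a. Ord a => [a] -> [a]
sortDesc = coerce (sort :: [Down a] -> [Down a])
```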
More generally, GHC’s recent DerivingVia extension (Blöndal et al., 2018), based on coerce, uses newtypes and their zero-cost coercions to extend this idea. This extension allows programmers to effectively write templates for instances; individual types need not write their own class instances but can select among the templates, each one embodied in a newtype.
2.3. The problem with unfettered coerce
We have shown that functions and lists support lifting coercions, but doing so is not safe for all types. Consider this (contrived) example:
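The example is elided in our copy; the following reconstruction is consistent with the discussion of Discern and D that follows:

```haskell
{-# LANGUAGE TypeFamilies #-}

newtype HTML = MkHTML String

-- A closed type family: a function from types to types.
type family Discern a where
  Discern String = Bool
  Discern HTML   = Char

-- D String wraps a Bool; D HTML wraps a Char. Coercing between the
-- two would reinterpret a Bool's bits as a Char.
data D a = MkD (Discern a)
```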
The Discern type family (Chakravarty et al., 2005; Eisenberg et al., 2014) behaves like a function, where Discern String is Bool and Discern HTML is Char. Thus, a D String wraps a Bool, while a D HTML wraps a Char. Being able to use coerce to go between D String and D HTML would be disastrous: those two types have different runtime representations (in contrast to [String] and [HTML]). The goal of roles is to permit safe liftings (like for lists) and rule out unsafe ones (like for D).
Therefore, to control the use of coerce, all datatype and newtype parameters are assigned one of two roles: nominal and representational. (The implementation in GHC supports a third role, phantom, which behaves somewhat differently from the other two; we ignore it for the bulk of this paper, returning to it in Section 6.2.) In a nominally-roled parameter, the name of the type provided is important to the definition, not just its representation. The one parameter of D is assigned a nominal role because the definition of D distinguishes between the names String and HTML. We cannot safely coerce between D String and D HTML, because these two types have different representations. In contrast, the type parameter of list has a representational role; coercing between [String] and [HTML] is indeed safe.
Roles are assigned either by user annotation or by role inference (Breitner et al., 2016, Section 4.6). The safety of user-provided role annotations is ensured by the compiler; the user would be unable to assign a representational role to the parameter of D.
2.4. Representational equality
The full type of coerce is Coercible a b => a -> b. That is, we can safely coerce between two types if those types are Coercible. The pseudo-class Coercible (it has custom solving rules and is thus not a proper type class) is an equivalence relation; we call it representational equality. We can thus coerce between any two representationally equal types. Representational equality is coarser than Haskell’s standard type equality (also called nominal equality): not only does it relate all pairs of types that are traditionally understood to be equal, it also relates newtypes to their representation types.
Crucially, representational equality relates applications of a datatype according to the roles of that datatype's parameters. Thus, because the list type's parameter has a representational role, [ty1] is representationally equal to [ty2] iff ty1 is representationally equal to ty2. And because D's parameter has a nominal role, D ty1 is representationally equal to D ty2 iff ty1 is nominally equal to ty2.
In addition to the lifting rules sketched above, representational equality also relates a newtype to its representation type, but with this caveat: this relationship holds only when the newtype’s constructor is in scope. This caveat is added to allow the Html module to enforce its abstraction barrier. If a newtype were always representationally equal to its representation, then any client of Html could use coerce in place of the unavailable constructor MkHTML, defeating the goal of abstraction.
2.5. Design considerations
The system for safe-coercions laid out in Breitner et al. (2016) is subject to design constraints that arise from the context of integration with the Haskell language. In particular, safe coercions are considered an advanced feature and should have minimal interaction with the rest of the language. In other words, Haskell programmers should not need to think about roles if they never use coerce.
This separation between types and roles was not present in the first design of a role system for Haskell (Weirich et al., 2011). Due to its complexity, that first system was never integrated into GHC. Instead, by keeping roles separate from types, Breitner et al. (2016) simplified both the implementation of coerce (i.e., it was easier to extend the compiler) and the language specification, as programmers who do not use coerce need not understand roles.
Keeping types and roles separate imposes two major constraints on the design of System DR.
First, the type checking judgment should not also check roles. In the system that we present in the next section, the type checking judgment does not depend on the role-checking judgment; the only interaction between the two systems is confined to checking the role annotations on top-level axioms. (In contrast, in the first version of the system, type and role checking occurred together in a single judgment.)
Second, the syntax of types and kinds should not include roles. In the first version of the system, the kinds of type constructors included role information for parameters. However, this means that all users of higher-order polymorphism must understand (and choose) these roles. Instead, Breitner et al. (2016) does not modify the syntax of kinds, safely approximating role information with the nominal role when necessary.
In practice, the loss of expressiveness due to this simplification has not been significant and roles have proven to be a popular extension in GHC. However, we return to this discussion in Section 8, when we compare our design with the framework provided by modal dependent type theory.
3. A calculus with dependent types and roles
We now introduce System DR, a dependently-typed calculus with role-indexed equality. To make our work more approachable, we present this calculus incrementally, starting with the core ideas. In this section, we start with a minimal calculus that contains only dependent functions, constants and axioms. In Section 4, we extend the discussion to full System DR, including case analysis, irrelevant arguments, coercion abstraction, and decomposition rules.
System DR is intended to model an extension of FC (Sulzmann et al., 2007), the explicitly-typed intermediate language of GHC. As an intermediate language, it does not need to specify Haskell’s type inference algorithm or include features, like type classes, that exist only in Haskell’s source language.
Furthermore, the goal of our design of System DR is to describe what terms type check and how they evaluate. Like System D from prior work (Weirich et al., 2017), this calculus need not have decidable type checking for this purpose. Instead, once we have determined the language that we want, we can then figure out how to annotate it in the implementation with enough information to make type checking simple and syntax directed. The connection between System D and System DC in prior work provides a roadmap for this (fairly mechanical) process. Furthermore, this process is also constrained by implementation details of GHC that are beyond the scope of this paper, so we do not include an annotated system here.
Therefore, the core calculus that we start with is a Curry-style dependently-typed lambda-calculus with a single sort. The syntax is shown in Figure 1. As in pure type systems (Barendregt, 1991), we have a shared syntax for terms and types, but, as we do not require decidable type checking, there are no typing annotations on function binders. This syntax has been decorated with role information in two places: applications are marked by flags, and declarations of data type and newtype constants in the signature include role annotations. Application flags are not needed for source Haskell—they are easily added via elaboration and their presence here is a mere technical device to make role information easily accessible.
Roles are drawn from a lattice, with Haskell's nominal role as the bottom element and the representational role as the top element. We use the lattice ordering to compare roles; the minimum operation calculates the greatest lower bound of two roles. For concreteness, this paper fixes that lattice to the two-element lattice, which is all that is needed for GHC. However, treating this lattice generally allows us to define the type system more uniformly.
The rules for typing and definitional equality for this fragment are shown in Figure 2. (For purposes of presentation, the rules in this section are simplified versions of the rules that we use in our proofs; the complete listing of rules is available in the appendix.) These rules are implicitly parameterized over a global signature of type constant declarations.
Most rules of the typing relation are standard for dependently-typed languages. Because Haskell includes nontermination, we do not need to include a universe hierarchy, instead using the Type-in-Type axiom (Cardelli, 1986). The novel rules are the application rules (AE-App, AE-TApp), the conversion rule (AE-Conv), and the rules for constants and axioms (AE-Const, AE-Fam), all discussed below.
Role-indexed type equality.
In System DR, the equality relation is indexed by a role that determines whether the equality is nominal or representational. The role-indexed judgment defines when two terms of the same type are equal at a given role. The rules for this judgment appear in the middle of Figure 2. This relation is defined as essentially the language's small-step operational semantics, closed over reflexivity, symmetry, transitivity, and congruence. (Note that because System DR allows nontermination, the equality judgment is not a decidable relation.)
The role sensitivity of the equality relation derives from the fact that System DR's small-step operational semantics is itself indexed by a role. Specifically, the role in the small-step relation determines whether top-level definitions can unfold to their right-hand sides or are kept abstract (see rule ABeta-Axiom in Figure 3).
For example, a newtype such as HTML steps to its representation type String at the representational role, but at the nominal role the expression HTML is treated as a value. This step is reflected into the equality relation via AE-Beta.
Dependently-typed languages use definitional equality for conversion: allowing the types of terms to be implicitly replaced with equal types. In source Haskell, conversion is available for all types that are nominally equal. The coerce primitive is required to convert between types that are representationally equal. This primitive ensures that newtype distinctions are maintained by default but are erasable when desired.
However, System DR is intended to define GHC’s intermediate language, so we can assume that the source language type checker has already made sure that users do not confuse HTML and String. Instead, the optimizer is free to conflate these types, for great benefit.
Therefore System DR does not include a coerce primitive. Instead, the conversion rule, AE-Conv, allows conversion using the coarsest relation, representational equality. This choice simplifies the design because all uses of coercion are implicit; there are no special rules in the equality relation or operational semantics. The downside of this design is that System DR is not a definition of source Haskell, an issue that we return to in Section 6.4.
3.1. Role annotations and application congruence
Haskell allows data type and newtype constants to be optionally ascribed with role annotations for their parameters. (Definitions without role ascriptions get their roles inferred (Breitner et al., 2016, Section 4.6).) These role annotations control what equalities can be derived about these constants. For example, Maybe has a representational parameter so Maybe HTML is representationally equal to Maybe String. However, the type Set HTML can be prevented from being coercible to Set String by annotating its parameter with the nominal role.
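In GHC source syntax such an annotation looks as follows (a sketch; the list representation of Set is a stand-in, as the real container is irrelevant to the role annotation):

```haskell
{-# LANGUAGE RoleAnnotations #-}

-- Force the parameter of Set to be nominal: even though HTML and
-- String are representationally equal, GHC will now reject
--   coerce :: Set HTML -> Set String
type role Set nominal

data Set a = Set [a] -- stand-in representation
```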
In System DR, a constant, such as Maybe or Set, is introduced by an opaque declaration in the signature. This declaration specifies the type of the constant, as well as a list of roles for its parameters. (We assume that role inference has already happened, so all constants include role annotations.) For example, the signature might declare the constants Maybe and Set with their usual types and roles.
The key idea from Breitner et al. (2016) is that the equality rule for applications headed by constants uses these declared role annotations to determine how to compare their arguments. We adapt this idea in this context using application flags marking arguments in applications. Consistent usage of flags is checked by the typing judgment using AE-TApp. If the head of the application is a constant, then this rule ensures that the flag must be the one calculated by a (partial) function shown at the bottom of Figure 2. However, role annotations are optional and can be replaced by the plain application flag +, in which case AE-App is used to check the term.
The equality rule AE-TAppCong defines when two role-annotated applications are equal at some (other) role. This rule is most interesting when that role is representational—it explains why Maybe HTML and Maybe String are representationally equal but Set String and Set HTML are not. Applications such as Maybe String use role annotations on their arguments to enable this rule; in the case of Maybe, the argument carries a representational annotation because Maybe is declared with a representational parameter.
The first two premises of the rule specify that the corresponding components of the application must be equal. Importantly, the role used for equality between the arguments of the application is the minimum of the current role and the declared role for the argument. For example, the declared role for Set's argument is nominal, so the arguments must be nominally equal; for Maybe they may be representationally equal.
The use of the minimum role in this rule forces the nominal equality judgment (i.e., when the current role is nominal) to compare all subterms using nominal equality, while allowing representational equality when both the context and the argument are representational.
The last premise of the rule is a subtle aspect of the combination of roles and dependent types: it ensures that definitional equality is homogeneous. In the conclusion of the rule, we want to ensure that both terms have the same type, even when the type may be dependent. We know that the arguments are equal at the minimum role, but this does not necessarily imply that the types of the two applications are representationally equal: given a newtype constructor with a dependent type, we can construct two applications that are equal at the representational role even though their types are not. (In a system that annotates parameter roles on function types, we could check that the parameter is used consistently with the role annotation on its type; this would allow us to drop this premise from the application rule.) Therefore, the rule ensures that both sides have the same type by fiat.
In comparison, the (non-roled) application congruence AE-AppCong always uses nominal equality for arguments, following Breitner et al. (2016). Lacking any other source of role information about the parameters (such as roles annotating the function types), this rule defaults to requiring that they be equal using the finest equality, i.e., nominal equality.
Whether definitional equality uses AE-TAppCong or AE-AppCong depends on the application flag annotating the syntax of the term. The typing rules (AE-App and AE-TApp) ensure that the application flag is appropriate. Some arguments have a choice of application flag: they can either use the one specified by the roles in the signature, or they can use the plain flag + (which defaults to nominal equality for congruence). Mostly, however, application flags are a technical device for our proofs, as they duplicate information that is already available in the abstract syntax tree.
3.2. Type families and newtypes via axioms
This calculus uses axioms to model type families and newtypes in GHC. An axiom declaration appears in the top-level signature; it introduces a constructor with its type and a list of parameter roles, and declares that a pattern (which must be headed by that constructor) can be equated, at a specified role, to a right-hand side term.
Patterns come from a subgrammar of terms: a pattern is a constant applied to a sequence of role-annotated variables. Each variable in the pattern is bound in the right-hand side of the axiom, and the roles annotating the pattern variables are repeated in the axiom's role list for convenience.
For example, compare the axiom for the type family definition F below with the one for the newtype declaration T. In each case, the pattern is headed by the corresponding constant and binds the variable a.
type family F a where
F a = Maybe a
newtype T a =
MkT (F a)
The important distinction is the role marking the equation in the axiom declarations: it is nominal for type families and representational for newtypes. This role determines whether a definition should be treated opaquely or transparently by definitional equality. For example, at the nominal role, F Int is equal to Maybe Int but distinct from T Int. The representational role equates all of these types.
These equalities are derived via the operational semantics. The types F Int and Maybe Int are equal because the former reduces to the latter. In other words, the operational semantics matches an application headed by a constant against the pattern in that constant's axiom declaration, producing a substitution that is applied to the right-hand side of the axiom.
More generally, this behavior is specified by rule ABeta-Axiom. This rule uses an auxiliary matching relation to determine whether the scrutinee matches the axiom's pattern and, if so, substitutes for the pattern variables in the right-hand side. The rule is applicable only when the role of the axiom is less than or equal to the role that is used for evaluation. For example, if the axiom's role is representational (as for a newtype) and evaluation is at the nominal role, then this rule does not apply (and the application is an opaque value). Alternatively, when the axiom's role is nominal, as for a type family, this rule will replace the application headed by the constant with its definition.
Axiom constructors must be saturated in order to reduce to their right hand side. In other words, a constructor must be applied to as many arguments as specified by the pattern in the axiom. Non-saturated constructor applications are treated as values by the semantics, no matter their role.
Axioms extend the notion of definitions from Weirich et al. (2017), which were always transparent and consequently had no need of role annotations. As before, signatures are unordered and definitions may be recursive—each right hand side may refer to any name in the entire signature. As a result, axioms may be used to define a fixed point operator or other functions and types that use general recursion.
The role checking judgment is used by rule SSig-ConsAx, which checks the well-formedness of axioms. This rule uses an auxiliary function to determine the context and type used to type check the right-hand side of the axiom; this function also determines the context to use when role-checking the right-hand side. A further operation converts the role-checking context into the list of roles that is used when checking application flags.
Unlike those of opaque constants (which are inert), role annotations for the parameters to an axiom must be checked by the system. In other words, if a newtype axiom declares that it has a representational parameter, then there are restrictions on how that parameter may be used. We check role annotations using the role-checking judgment shown at the bottom of Figure 3.
The role-checking context assigns roles to variables. When we check the well-formedness of axioms, the role-checking context is derived from the annotations in pattern . Note, the roles declared in the pattern need not be the most permissive roles for . Even if the term would check at role , the pattern may specify role instead.
The rules of the role checking judgment appear at the bottom of Figure 3. Rule Srole-a-Var specifies that the role of a variable must be greater than or equal to its role in the context. In Srole-a-TApp, the role marking an annotated, relevant argument determines how it will be checked. If the role annotation is not present, then arguments must be checked at role , as in Srole-a-App. Analogously, when role-checking an abstraction, the bound variable enters the context at role , as this is the most conservative choice.
4. Full System DR
(Figure 5 includes, among the judgment forms, definitional equality for terms and definitional equality for propositions.)
The previous section presented the complete details of a minimal calculus to provide a solid basis for understanding the interaction between roles and dependent types in System DR. In this section, we zoom out and complete the story at a higher level, providing an overview of the remaining features of the language. The syntax of the full language appears in Figure 4 and the major judgment forms are summarized in Figure 5. For reference, the full specification of System DR is available in the appendix.
4.1. Coercion abstraction
An essential feature of internal languages capable of compiling Haskell is coercion abstraction, which is used to generalize over equality propositions (Sulzmann et al., 2007). Coercion abstraction is the basis for the implementation of generalized algebraic datatypes (Xi et al., 2003; Peyton Jones et al., 2006) in GHC. For example, a datatype definition, such as
can be encoded by supplying MkT with a constraint about its parameter.
Pattern matching an argument of type T b brings the equality constraint b ~ Int into scope.
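As a concrete sketch (our reconstruction of the elided display), the GADT and its use might be written in source Haskell as follows; matching on MkT makes the constraint b ~ Int available in the branch:

```haskell
{-# LANGUAGE GADTs #-}

-- A GADT whose sole constructor fixes its parameter to Int:
data T a where
  MkT :: T Int

-- Matching on MkT brings b ~ Int into scope, so the branch
-- may use x at type Int even though its declared type is b.
addOne :: T b -> b -> b
addOne MkT x = x + 1
```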
In System DR, definitional equality is indexed by a role , so we also allow equality propositions, written , to include roles. When the role is nominal, this proposition corresponds to Haskell’s equality constraint, such as above. When the role is representational, it corresponds to the Coercible (§2.4) constraint.
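In source-Haskell terms, the two roled propositions correspond to the two kinds of equality constraints below (a small sketch; the function names are ours):

```haskell
{-# LANGUAGE GADTs #-}
import Data.Coerce (Coercible, coerce)

newtype HTML = MkHTML String

-- A nominal equality constraint, written a ~ Int:
toInt :: a ~ Int => a -> Int
toInt x = x

-- A representational equality constraint, written Coercible a String;
-- it is satisfied by HTML inside the defining module.
unwrap :: Coercible a String => a -> String
unwrap = coerce
```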
Coercion abstraction brings equality constraints into the context and coercion application discharges those assumptions when the equality can be satisfied. As in extensional type theory (Martin-Löf, 1971), equality propositions can be used implicitly by definitional equality. If an equality assumption between two types is available in the context, then those two types are defined to be equal.
For technical reasons, discussed in Weirich et al. (2017), the full judgment for definitional equality in System DR is written . The additional component is a set that tracks which coercions are actually available, used in the rule above. This set need not concern us here; feel free simply to assume that equals the domain of , written .
The extension of the System D rules with roled equality constraints is straightforward, though care must be taken to ensure that the roles are used consistently. Note that when role-checking, all variables that appear in a nominal equality constraint must have role . This corresponds to the requirement in GHC that the constrained parameters to GADTs have nominal roles.
4.2. Irrelevant arguments
A dependently-typed intermediate language for GHC must include support for irrelevant arguments as well as relevant arguments (Miquel, 2001) in order to implement the type-erasure aspect of parametric polymorphism. In Haskell, polymorphic functions cannot dispatch on types, so these may be erased prior to runtime. In (Curry-style) System DR, irrelevant arguments are therefore elided from the abstract syntax. We extend the calculus by adding a new application flag “” to indicate that an argument is irrelevant. Furthermore, we add a flag to function types to indicate whether the argument to the function is relevant or irrelevant.
The typing rule for the introduction of an irrelevant abstraction requires that the bound variable not actually appear in the body of the term. When an irrelevant function is used in an application, the argument must be the trivial term, . Note that the argument is elided only from the term; it is still substituted in the result type. (In System DC, the annotated version of the language with syntax-directed type checking, the argument does appear in the term but is eliminated via an erasure operation, following Barras and Bernardo (2008).)
Role annotations may only apply to relevant arguments, even though constants and newtypes may have both relevant and irrelevant parameters. Irrelevant arguments have their own congruence rule for applications. Because irrelevant arguments never appear in the syntax of terms, an equality between two irrelevant applications need only compare the function components—the arguments are always trivially equal.
Overall, there is little interaction between irrelevant arguments and roles. However, there is one important benefit of having both capabilities in the same system. We can use irrelevant quantification to model the phantom role from prior work; details are in Section 6.2.
4.3. Case expressions
The soundness issue described in Section 2.3 arises through the use of the Discern type family, which returns different results based on whether its argument is String or HTML. To ensure that System DR is not susceptible to a similar issue, we include a pattern matching term of the following form. (Unlike source Haskell, patterns in System DR axiom definitions may only include variables and thus may not dispatch on their arguments.)
Operationally, the pattern matching term reduces the scrutinee to a value and then compares it against the pattern specified by . If there is a match, the expression steps to . In all other cases, the expression steps to . Pattern matching is not nested—only the head constructor can be observed. In Haskell, type families do both axiom unfolding and discrimination. We separate these features in System DR for orthogonality and eventual unification of pattern matching with Haskell’s existing case expression. (The semantics of this expression is not exactly the same as that of Haskell’s case; more details are in Section 7.)
In this syntax, the scrutinee must match the pattern of arguments specified by , where is a list of application flags. Note that in the full language, these application flags can include roles, +, -, or indicate a coercion argument. (This nameless form of pattern-matching helps with our formalization.) The typing rule for case requires the branch to start with a sequence of abstractions that matches the form specified by the list of flags . We specify the behavior of with the rules shown in Figure 6. In the first rule, the judgment holds when the scrutinee matches the pattern; i.e., when the scrutinee is an application headed by with arguments specified by . The constructor must be a constant at role ; it cannot be a type family axiom. If this judgment holds, the second premise passes those arguments to the branch . In the conclusion of the rule, is further applied to an elided coercion ; this coercion witnesses the equality between the head of and the pattern, implementing dependent pattern matching.
The second rule, Beta-PatternFalse, triggers when the scrutinee is a value, yet the comparison does not hold. It steps directly to .
Dependent case analysis means that when the scrutinee matches the head constructor, not only does the expression step to the first branch, but the branch is type checked under the assumption of an equality proposition between the scrutinee and the pattern.
Simple examples of this behavior are possible in source Haskell today, using the TypeInType extension.
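One way to approximate this behavior in today's Haskell is with a singleton GADT and a type family (our own sketch, using GADTs and type families rather than the full TypeInType machinery): matching on the singleton refines the type index, so each branch type checks under the corresponding equality assumption.

```haskell
{-# LANGUAGE GADTs, DataKinds, TypeFamilies #-}

-- A Bool singleton: each constructor pins down the type index.
data SBool (b :: Bool) where
  STrue  :: SBool 'True
  SFalse :: SBool 'False

type family If (b :: Bool) t f where
  If 'True  t f = t
  If 'False t f = f

-- Matching refines b: the first branch type checks under
-- b ~ 'True, the second under b ~ 'False.
choose :: SBool b -> t -> f -> If b t f
choose STrue  t _ = t
choose SFalse _ f = f
```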
Because System DR is dependently typed and full-spectrum, the pattern matching term described in this section can also be used for run-time type analysis, as well as dispatch during type checking. We view this as a key benefit of our complete design: we retain the ability to erase (most) types during compilation by abstracting them via irrelevant quantification, but can support run-time dispatch on types when desired.
Case analysis is nominal
One part of our design that we found surprising is the fact that case analysis must use the nominal role to evaluate the scrutinee, as we see in the following rule. (This rule belongs to a different judgment than the rules in Figure 6: we separate our primitive -reduction rules from the congruence rules for stepping in our semantics. Only the -reduction rules are used in our equality relation, relying on the equality relation’s congruence rules to correspond to the stepping relation’s congruence rules.)
Indeed, our original draft of the system also allowed a form of “representational” case analysis, which first evaluated the scrutinee to a representational value before pattern matching. This case analysis could “see through” newtype definitions and would match on the underlying definition.
For example, with representational case analysis, the term
would evaluate to True.
Unfortunately, we found that representational case analysis is unsound in our system. Consider the following term, which uses a representational analysis to first match the outer structure of its argument, and then uses an inner, nominal analysis for the parameter. System DR always assigns nominal roles to variables bound in a case-match, so this axiom would role-check with a representational argument.
With this definition, we would be able to show F [HTML] representationally equal to F [String] because F’s parameter is representational. However, these two expressions evaluate to different results. Disaster!
Extending the system to include a safe version of representational case analysis requires a way to rule out the nominal case analysis of y above. This means that the type system must record y’s role as representational (as it is the argument to the list constructor) and furthermore use the role-checking judgment to ensure that y does not appear in a nominal context (such as in the scrutinee of a nominal case analysis). We want to keep role checking completely separate from type checking (cf. Section 2.5), so we have not pursued this extension.
4.4. Constructor Injectivity
System DR is a syntactic type theory. As a result, it supports equality rules for injectivity. If two types are equal, then corresponding subterms of those types should also be equal. In prior work (Weirich et al., 2017), the injectivity of function types was witnessed by rules that allowed an equality between two function types to be decomposed.
This work augments those rules with the correct role components. For example, in E-PiSnd, shown below, when we pull out an equality between the co-domain types of a function type, we must provide an equality between the arguments at role . This is because we have no knowledge about how the parameter is used inside the types and and so we must be conservative. (If the function type were annotated with a role, we would not be limited to in this rule.)
This work also extends the reasoning about injectivity to abstract types. An equality of the form can be decomposed into equalities between the corresponding arguments at the roles specified for . For example, rule E-Right, shown below, shows that we can extract an equality between and when we have an equality between and .
The first two premises of this rule require that the equation is between two applications, headed by the same constructor , which cannot be matched to an axiom at role . The next four premises describe the types of the components of the applications. These premises ensure that the equality relation is homogeneous, i.e. that only terms of equal types are related.
The key part of this rule is that the equality role of the conclusion is determined by both the original role of the equality and the annotated role of the application . This is the dual of AE-TAppCong, the congruence rule for applications. In that rule, we can use the fact that is representationally equivalent to to show that is representationally equal to . Here, we can invert that reasoning.
5. Properties of System DR
The main result of our Coq development is the proof of type soundness for full System DR. Given the size of the language, the delicate interactions between its features, and the number of iterations we have gone through in its development, we could not have done it without mechanical assistance.
This type soundness proof follows from the usual preservation and progress lemmas. Both of these lemmas are useful properties for an intermediate language. The preservation property holds even for open terms. Therefore, it implies that simple, reduction-based optimizations, such as inlining, do not produce ill-typed terms. Our proof of the progress lemma relies on showing that a particular reduction relation is confluent, which itself provides a simplification process for terms and a (semi-decidable) algorithm for showing them equivalent.
From the original design of FC (Sulzmann et al., 2007), we inherit a separation between the proofs of the preservation and progress lemmas that is unusual for dependently-typed calculi. In this system it is possible to prove preservation without relying on the consistency of the system. This means that preservation holds in any context, including ones with contradictory assumptions (such as Int ~ Bool). As a result, GHC can apply, e.g., inlining regardless of context.
In this section, we provide an overview of the main results of our Coq development. However, we omit many details. Excluding automatically generated proofs, our scripts include over 700 lemmas and 250 auxiliary definitions.
5.1. Values and reduction
We define the values of this language using the role-indexed relation , shown in Figure 7. Whether a term is a value depends on the role: a newtype HTML is a value at role but reduces at role . This relation depends on the auxiliary judgment (not shown, available in the appendix) which holds when is a path headed by the constant that cannot reduce at role . (This may be because is a constant, or if the role on ’s definition is greater than , or if has not been applied to enough arguments.)
Note that the value relation is contravariant with respect to roles. If a term is a value at some role, it is a value at all smaller roles.
Lemma 5.1 (SubRole-Value, ext_red_one.v:nsub_Value).
If and then .
Alternatively, if a term steps at some evaluation role, and we make some of the definitions more transparent, then it will continue to step, but it could step to a different term. This discrepancy is due to the fact that -reduction only applies when functions are values. Irrelevant functions are values only when their bodies are values—so changing to a larger role could allow an abstraction to step further.
Lemma 5.2 (SubRole-Step, ext_red_one.v:sub_red_one).
If and then .
That said, the operational semantics is deterministic at a fixed role.
Lemma 5.3 (Deterministic, ext_red_one.v:reduction_in_one_deterministic).
If and then .
The role-checking judgment satisfies a number of important properties. For example, we can always role-check at a larger role.
Lemma 5.4 (SubRole-ing, ett_roleing.v:roleing_sub).
If and then .
Furthermore, the following property says that users may always downgrade the roles of the parameters to their abstract types.
Lemma 5.5 (Role assignment narrowing, ett_roleing.v:roleing_ctx_weakening).
If and then .
Finally, well-typed terms are always well-roled at , when all free variables have role .
Lemma 5.6 (Typing/Roleing, ett_roleing.v:Typing_roleing).
If then , where is the role-checking context that assigns to every term variable in the domain of .
5.3. Structural properties
Lemma 5.7 (Typing Regularity, ext_invert.v:Typing_regularity).
If then .
Definition 5.8 (Context equality).
Define with the following inductive relation:
Lemma 5.9 (Context Conversion, ext_invert.v:context_DefEq_typing).
If and then .
Lemma 5.10 (DefEq Regularity, ext_invert.v:DefEq_regularity).
If then and .
We prove the preservation lemma simultaneously with the property that one-step reduction is contained within definitional equality. (This property is not trivial because definitional equality only includes the primitive reductions directly, and relies on congruence rules for the rest.) The reason that we need to show these results simultaneously is due to our typing rule for dependent pattern matching.
Lemma 5.11 (Preservation, ext_red.v:reduction_preservation).
If and then and .
We prove progress by extending the proof in prior work (Weirich et al., 2017) with new rules for axiom reduction and case analysis. This proof, based on a technique of Tait and Martin-Löf, proceeds first by developing a confluent, role-indexed, parallel reduction relation for terms and then showing that equal terms must be joinable under parallel reduction (Barendregt, 1984). Furthermore, this relation also tracks the roles of free variables using a role-checking context .
We need this role checking context because of the following substitution lemma, necessary to show the confluence lemma below.
Lemma 5.12 (Parallel reduction substitution, ett_par.v:subst1).
If and then
We know that some term reduces and we want to show that we can reconstruct that reduction after that term has been substituted into some other term . However, the variable could appear anywhere in , perhaps as the argument to a function. As a result, the role that we use to reduce may not be the same role as the one that we use for .
The parallel reduction relation must be consistent with the role-checking relation. Although our definition of parallel reduction is not typed (it is independent of the type system), it maintains a strong connection to the role-checking judgment.
Lemma 5.13 (Parallel Reduction Role-Checking, ett_par.v:Par_roleing_tm_fst and ett_par.v:Par_roleing_tm_snd).
If then and .
This property explains why SSig-ConsAx, which checks axiom declarations, uses the role on the declaration to role check the right-hand side of the axiom. In other words, type families must role check at , whereas newtypes must role check at . We could imagine trying to weaken this requirement and role-check all axioms at the most permissive role . However, then the above property would not hold. We need to know that when an axiom unfolds, the term remains well-formed at that role.
Lemma 5.14 (Confluence, ext_consist.v:confluence).
If and then there exists some such that and .
The confluence proof allows us to show the usual canonical forms lemmas, which are the key to showing the progress lemma.
Lemma 5.15 (Progress, ext_consist.v:progress).
If then either or there exists some such that .
6.1. Constraint vs. Type
Haskell differentiates the kind Constraint from the kind Type; the former classifies constraints that appear to the left of a => in Haskell (thus, we have Eq a :: Constraint), while the latter classifies ordinary types, like Int. This separation between Constraint and Type is necessary for at least two reasons: we want to prevent users from confusing the two, rejecting types such as Int => Int and Bool -> Eq Char; and types in kind Constraint have special rules in Haskell (allowing definition only via classes and instances) to keep them coherent.
However, we do not want to distinguish Constraint from Type in the core language. Inhabitants of types of both kinds can be passed to and from functions freely, and we also want to allow (homogeneous) equalities between elements of Constraint and elements of Type. These equalities come up when the user defines a class with exactly one member, such as
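The class in question might look as follows (a sketch reconstructing the elided declaration from the deflt :: a member mentioned below; the Int instance is ours, for illustration):

```haskell
-- A class with exactly one member; GHC represents its dictionary
-- as a newtype around the lone deflt method.
class HasDefault a where
  deflt :: a

-- A sample instance:
instance HasDefault Int where
  deflt = 0
```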
Given that the evidence for a HasDefault a instance consists only of the deflt :: a member, GHC compiles this class declaration into a newtype definition, producing an axiom equating HasDefault a with a, at the representational role. Some packages (notably, the reflection package by Edward Kmett) rely on this encoding, and it would be disruptive to the Haskell ecosystem to alter this arrangement.
We are thus left with a conundrum: how can we keep Constraint distinct from Type in Haskell but identify them in the internal language? This situation clearly has parallels with the need for newtypes: a newtype is distinct from its representation in Haskell but is convertible with its representation in the internal language. We find that we can connect Constraint with Type by following the same approach, but in kinds instead of in types.
This would mean defining Constraint along with an axiom stating that Constraint is representationally equal to Type. That solves the problem: the Haskell type-checker already knows to keep representationally equal types distinct, and all of the internal language functionality over Types would now work over Constraints, too. Because the internal language—System DR—allows conversion using representational equality, an axiom relating, say, HasDefault a :: Constraint to a :: Type would be homogeneous, as required. The implementors of GHC are eager for System DR in part because it solves this thorny problem. (See https://ghc.haskell.org/trac/ghc/ticket/11715#comment:64 and https://github.com/ghc-proposals/ghc-proposals/pull/32#issuecomment-315082072.)
6.2. The Phantom Role
Prior work (Breitner et al., 2016) includes a third role, the phantom role. Consider the following newtype definition, which does not make use of its argument.
All values of type F a are representationally equal, for any a. By giving this newtype the phantom role for its parameter, Haskell programmers can show that F Int is representationally equal to F Bool even when the MkF constructor is not available.
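In source Haskell, such a declaration might be written as follows (our sketch, with an explicit role annotation; GHC would infer the phantom role for the unused parameter anyway):

```haskell
{-# LANGUAGE RoleAnnotations #-}
import Data.Coerce (coerce)

-- The parameter a is unused, so it may be assigned the phantom role.
newtype F a = MkF Int
type role F phantom

-- With a phantom parameter, F Int and F Bool are representationally
-- equal, so coerce converts between them without using MkF.
convert :: F Int -> F Bool
convert = coerce
```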
It is attractive to think about the phantom role when thinking of roles as indexing a set of equivalence relations, but that does not work out for System DR. In that interpretation, the phantom role is the coarsest relation that identifies all terms of the same type, so it should be placed at the top of the role lattice (above ). However, with this addition, we do not get the desired semantics for the phantom role.
First, we would need a special definition for evaluating at the phantom role. The difference between nominal and representational evaluation is determined solely by whether axioms are transparent or opaque. However, we cannot use this mechanism to define what it means to evaluate at the phantom role. We would need something else entirely.
Second, arguments with phantom roles require special treatment in the congruence rule. Logically, phantom would be above representational in the role hierarchy, as the corresponding equivalence is coarser. However, AE-TAppCong uses to equate arguments. But the minimum of and phantom is , not phantom, so we would need a different congruence rule for this case.
However, the most compelling reason why we do not include phantom as a role is that it is already derivable using irrelevant arguments. In the example above, we can implement the desired behavior via two levels of newtype definition. First, we define a constant, say , with an irrelevant argument; this is the representation of the newtype above.
(Note that lacks a role annotation. Only relevant arguments are annotated with roles.) We make this newtype abstract by not exporting this axiom.
Then, we define the phantom type by wrapping the irrelevant argument with a relevant one, which is ignored.
When we use in a nominal role, we will not be able to show that is equal to , as is the case in Haskell. However, at the representational role, we can unfold the definition of in both sides to , equating the two types. Furthermore, the actual definition of the type can stay hidden, just as in the example above.
6.3. An explicit coerce term
One simplifying idea we use in System DR, taken from Breitner et al. (2016), is the separation between the role-checking and type-checking judgments. This design, overall, leads to a simpler system because it limits the interactions between the type system and roles. Furthermore, it is also compatible with the current implementation of role checking in GHC.
However, one might hope for a more expressive system by combining the role-checking and type-checking judgments together, as was done in the system of Weirich et al. (2011). In fact, this was our first approach to this work, primarily because we wanted to explore a design that factored conversion into implicit and explicit parts.
In the conversion rule on the left, the role on the typing judgment (indexing the typing judgment by a role is new here) determines the equality that can be used. If this role is , then only nominal equality is permitted and coercing between representationally equal types requires an explicit use of coercion, via the rule on the right. Alternatively, if the role is then all type conversions are allowed (and using the primitive is unnecessary).
This system is attractive because it resembles the design of source Haskell. In contrast, in the current System DR, if an expression has type HTML, then it also has type String—precisely the situation newtypes were meant to avoid. We return to the question of what coerce means for source Haskell in the next subsection.
However, after struggling with various designs of the system for some time, we ultimately abandoned this approach. In particular, we were unhappy with a number of aspects of the design.
How should coerce reduce at role ? It cannot reduce to : that would violate type preservation. The solution to this problem is “push” rules, as in System FC (Sulzmann et al., 2007). These complicate the semantics by moving uses of coerce when they block normal reduction. For example, if we have (coerce (\x -> x)) 5, we cannot use our normal rule for -reduction, as the coerce intervenes between the -expression and its argument. Instead, a push rule is required to reduce the term to (\x -> coerce x) (coerce 5), allowing -reduction to proceed. However, these push rules are complex, and the complexity increases with each feature added to the language; see Weirich et al. (2013, Section 5) for a telling example of how bad they can be.
Push rules prevent coerce from creating stuck terms, but they are not the only evaluation rules for coerce that we could want. In particular, we would like the operational semantics to eliminate degenerate coercions, which step to in the case when the coercion does not actually change the type of the coerced term. However, this sort of reduction rule would be type-directed: it would apply only when the two types involved are definitionally equal. Such an operational behavior is at odds with our Curry-style approach and would complicate our treatment of irrelevance.
Because we are in a dependent setting, we must also consider the impact of coerce on the equality relation. For example, what is the relationship between and ? Are they nominally equal? Are they representationally equal? In our explorations of the possibilities, none worked out well. (See also Eisenberg (2015, Section 5).)
We also hoped that working with the combined role-/type-checking system would lead to greater expressiveness in other parts of the language. In particular, the current treatment of roles in GHC was believed to be incompatible with putting the function join :: Monad m => m (m a) -> m a in the Monad type class. (See https://ghc.haskell.org/trac/ghc/ticket/9123, originally titled “Need for higher kinded roles”.) However, the combined role-/type-checking does not help with this problem. Fortunately, the new QuantifiedConstraints extension (Bottu et al., 2017), available in GHC 8.6, provides a new solution (https://ryanglscott.github.io/2018/03/04/how-quantifiedconstraints-can-let-us-put-join-back-in-monad/), resolving the problem in a much less invasive way.
6.4. What is Source Haskell?
As described above, System DR fails to give a direct semantics for the coerce primitive in Haskell. (This is not an issue specific to System DR; no prior work does this (Breitner et al., 2016; Weirich et al., 2011).) However, all is not lost. We propose instead that it is better to understand the coerce term in the Haskell source language through an elaboration semantics.
More concretely, we can imagine a specification for source Haskell where source terms can automatically convert types related by nominal equalities, and coerce is needed for representational equalities.
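Concretely, such an elaboration matches how GHC already behaves at the source level (a small sketch; the function name is ours):

```haskell
import Data.Coerce (coerce)

newtype HTML = MkHTML String

-- Nominal equalities convert silently, but crossing a
-- representational equality requires an explicit coerce;
-- writing "toText h = h" here would be a type error.
toText :: HTML -> String
toText = coerce
```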