Dynamic Type Inference for Gradual Hindley--Milner Typing

10/30/2018 ∙ by Yusuke Miyazaki, et al. ∙ Kyoto University

Garcia and Cimini study a type inference problem for the ITGL, an implicitly and gradually typed language with let-polymorphism, and develop a sound and complete inference algorithm for it. Soundness and completeness mean that, if the algorithm succeeds, the input term can be translated to a well-typed term of an explicitly typed blame calculus by cast insertion and vice versa. However, in general, there are many possible translations depending on how type variables that were left undecided by static type inference are instantiated with concrete static types. Worse, the translated terms may behave differently---some evaluate to values but others raise blame. In this paper, we propose and formalize a new blame calculus λ^DTI_B that avoids such divergence as an intermediate language for the ITGL. A main idea is to allow a term to contain type variables (that have not been instantiated during static type inference) and defer instantiation of these type variables to run time. We introduce dynamic type inference (DTI) into the semantics of λ^DTI_B so that type variables are instantiated along reduction. The DTI-based semantics not only avoids the divergence described above but also is sound and complete with respect to the semantics of fully instantiated terms in the following sense: if the evaluation of a term succeeds (i.e., terminates with a value) in the DTI-based semantics, then there is a fully instantiated version of the term that also succeeds in the explicitly typed blame calculus and vice versa. Finally, we prove the gradual guarantee, which is an important correctness criterion of a gradually typed language, for the ITGL.


1. Introduction

1.1. Gradual Typing

Statically and dynamically typed languages have complementary strengths. On the one hand, static typing provides early detection of bugs, but the enforced programming style can be too constraining, especially when the type system is not very expressive. On the other hand, dynamic typing is better suited for rapid prototyping and fast adaptation to changing requirements, but error detection is deferred to run time.

There has been much work to integrate static and dynamic typing in a single programming language (Abadi et al., 1991; Cartwright and Fagan, 1991; Thatte, 1990; Bracha and Griswold, 1993; Flanagan and Felleisen, 1999; Siek and Taha, 2006; Tobin-Hochstadt and Felleisen, 2008). Gradual typing (Siek and Taha, 2006) is a framework to enable seamless code evolution from a fully dynamically typed program to a fully statically typed one within a single language. The notion of gradual typing has been applied to various language features such as objects (Siek and Taha, 2007), generics (Ina and Igarashi, 2011), effects (Bañados Schwerter et al., 2014, 2016), ownership (Sergey and Clarke, 2012), parametric polymorphism (Ahmed et al., 2011; Igarashi et al., 2017; Xie et al., 2018) and so on. More recently, even methodologies to “gradualize” existing statically typed languages systematically, i.e., to generate gradually typed languages, are also studied (Garcia et al., 2016; Cimini and Siek, 2016, 2017).

The key notions in gradual typing are the dynamic type and type consistency. The dynamic type, denoted by ⋆, is the type for dynamically typed code. For instance, a function that accepts an argument of type ⋆ can use it in any way, and the function can be applied to any value; both an application to an integer and an application to a Boolean are well-typed programs in a gradually typed language. To formalize such loose static typechecking, a type consistency relation, denoted by ∼, on types replaces some uses of type equality. In the typing rule for applications, the function's argument type and the type of an actual argument are required not to be equal but to be consistent; both int ∼ ⋆ and bool ∼ ⋆ hold, making the two applications above well typed.

The semantics of a gradually typed language is usually defined by a “cast-inserting” translation into an intermediate language with explicit casts, which perform run-time typechecking. For example, the two examples above can be translated into terms of the blame calculus (Wadler and Findler, 2009) as follows.

Here, the translation relation denotes cast-inserting translation. A cast expression converts a term from one type to another and appears where typechecking was loosened due to type consistency. (The symbol annotating a cast is called a blame label and is used to identify the cast. Following Siek et al. (2015a), we use one notation for the intermediate language and save another for the surface language ITGL.) In these examples, the actual arguments (an integer and true) are cast to ⋆, and a value of type ⋆ is cast to int before being passed on to where an integer is expected. In what follows, a sequence of casts is often abbreviated.

The former term evaluates to a value, whereas the latter evaluates to an uncatchable exception called blame (Findler and Felleisen, 2002), indicating that the cast on true fails:

The tagged terms above are values in the blame calculus and can be understood as an integer and a Boolean, respectively, each tagged with its type. Being values, they are passed to the function as they are. The cast from ⋆ to int checks whether the tag of the target is int; if it is, the tag is removed and the untagged integer is passed on; otherwise, blame is raised with information on which cast has failed.
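The injection/projection behavior described above can be sketched as follows. This is a minimal illustration with invented names (`inject`, `project`) and a simplified tag representation, not the paper's formal definitions.

```python
# A value injected into the dynamic type carries its ground type as a tag;
# a projection back out of the dynamic type checks that tag.

class Blame(Exception):
    """Stands in for blame carrying a blame label."""

def inject(v):
    """Tag a value with its ground type ('int' or 'bool')."""
    tag = "bool" if isinstance(v, bool) else "int"
    return (tag, v)

def project(ground, tagged):
    """Projection from the dynamic type to a ground type: if the tags
    agree, remove the tag and return the raw value; otherwise raise
    blame identifying the failed cast."""
    tag, v = tagged
    if tag == ground:
        return v
    raise Blame("p")
```

For example, `project("int", inject(2))` returns the untagged integer, while projecting a tagged Boolean to `"int"` raises blame.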

1.2. Type Inference for Gradual Typing

Type inference (a.k.a. type reconstruction) for languages with the dynamic type has been studied. Siek and Vachharajani (2008) proposed a unification-based type inference algorithm for a gradually typed language with simple types. Garcia and Cimini (2015) later proposed a type inference algorithm with a principal type property for the Implicitly Typed Gradual Language (ITGL) with and without let-polymorphism. More recently, Xie et al. (2018) studied an extension of the Odersky–Läufer type system for higher-rank polymorphism (Odersky and Läufer, 1996) with the dynamic type and bidirectional algorithmic typing for it. Also, Henglein and Rehof (1995) studied a very close problem of translation from an untyped functional language to an ML-like language with coercions (Henglein, 1994), using a constraint-based type inference.

The key idea in Garcia and Cimini’s work (inherited by Xie et al.) is to infer only static types—that is, types not containing the dynamic type—for the places where type annotations are omitted. For example, for

the type inference algorithm outputs the following fully annotated term:

The idea of inferring only static types is significant for the principal type property because it excludes terms that are well typed in the gradual type system but cannot be obtained by applying a type substitution to the inferred term. Based on this idea, they showed that the ITGL enjoys the principal type property, which means that if there are type annotations that make a given term well typed, then type inference succeeds and its output subsumes all other type annotations that make the term well typed—in the sense that they are obtained by applying some type substitution.

1.3. Incoherence Problem

Unlike ordinary typed λ-calculi, however, the behavior of a term depends on the concrete types chosen for missing type annotations. For example, for the following term

it is appropriate to recover any static type for the omitted annotation if we are interested only in obtaining a well-typed term, but the evaluation of the resulting term differs significantly depending on the choice. To see this, let us translate the term. It is translated to

regardless of the chosen type. If the type is int, then the term reduces to a value as follows:

but, if a different static type is chosen, it reduces to blame as follows:

(the failing subterm is the source of blame, and the negated blame label means that a functional cast has failed due to a type mismatch on the argument, not on the return value). Xie et al. (2018) face the same problem in a slightly different setting of a higher-rank polymorphic type system with the dynamic type and point out that their type system is not coherent (Breazu-Tannen et al., 1991) in the sense that the run-time behavior of the same source program depends on the particular choice of types.

Garcia and Cimini (2015) do not clearly discuss how to deal with this problem. Given the term above, their type reconstruction algorithm outputs a term containing a type variable that is left undecided; they suggest that such undecided variables be replaced by type parameters, but the semantics of those parameters is left unclear. One possibility would be to understand type parameters as distinguished base types (without constants), but that would also make the execution fail, because a cast between int and such a base type fails. The problem here is that the only choice that makes execution successful is int, but it is hard to see this statically.

An alternative, which is close to what Henglein and Rehof (1995) do and also what Xie et al. (2018) suggest, is to substitute the dynamic type for these undecided type variables. If we do so in the example above, we will get

As far as this example is concerned, substitution of the dynamic type sounds like a good idea: if there is a static-type substitution that makes execution successful (i.e., terminate at a value), substitution of the dynamic type is expected to make execution successful too—this is part of the gradual guarantee property (Siek et al., 2015a).

However, substitution of the dynamic type can make execution successful even when there is no static type that makes execution successful. For example, consider the ITGL term

where the application requires both arguments to have the same type and the remaining subterm is a (statically typed) Boolean term that refers to neither of them. For this term, the type inference algorithm outputs a term containing an undecided type variable and, if we substitute the dynamic type for it, then it executes as follows:

but, if we substitute a static type for the variable, then it results in blame due to a failure of one of the casts:

Substitution of the dynamic type not only goes against the main idea that only static types are inferred for missing type annotations; it is also undesirable from the viewpoint of program evolution and early bug detection. Substituting the dynamic type inadvertently hides the fact that there is no longer any hope for a given term to evolve into a well-typed term by replacing occurrences of the dynamic type with static types. This concealment of potential errors is disappointing for languages that aim to detect, as early as possible, programs that the underlying static type system would not accept after evolution.

1.4. Our Work: Dynamic Type Inference

In this work, we propose and formalize a new blame calculus, λ^DTI_B, that avoids both blame caused by a wrong choice of static types and the problem caused by substituting the dynamic type. A main idea is to allow a term to contain type variables (representing those left undecided during static—namely, usual compile-time—type inference) and to defer instantiation of these type variables to run time. Specifically, we introduce dynamic type inference (DTI) into the semantics of λ^DTI_B so that type variables are instantiated along reduction. For example, the term

(obtained from the example above) reduces to

and then, instead of raising blame, it reduces to

by instantiating the type variable with int. In general, when a value tagged with a base type meets a cast to a type variable, the cast succeeds and the type variable is instantiated with that base type. Similarly, if a tagged function value is cast to a type variable, two fresh type variables are generated and the variable is instantiated with the function type over them, expecting further instantiation later.

Unlike the semantics based on substitution of the dynamic type, the DTI-based semantics raises blame if a type variable appears in conflicting contexts. For example, the term

(obtained from the second example above) reduces in a few steps to

which corresponds to the first application in the source term. The cast succeeds and this term reduces by instantiating the type variable with int. Then, in the next step, reduction reaches the application of the function to true (in the source term):

However, the cast on true fails. In short, the type variable is required to be both int and bool at the same time, which is impossible. As this example shows, DTI is not as permissive as the semantics based on substitution of the dynamic type and detects a type error early.

DTI is sound and complete. Intuitively, soundness means two things: if a program evaluates to a value under the DTI semantics, then the program obtained by applying—in advance—the type instantiation that DTI found results in the same value; and if a program results in blame under the DTI semantics, then all type substitutions also make the program result in blame. Completeness means that, if some type substitution makes the program evaluate to a value, then execution with DTI also results in a related value. Soundness also means that the semantics is not too permissive: it is not the case that a program evaluates to a value under the DTI semantics although no type substitution makes it evaluate to a value. The semantics based on substituting the dynamic type is complete but not sound; the semantics based on “undecided type variables as base types” as in Garcia and Cimini (2015) is neither sound nor complete (because it raises blame too often).

We equip λ^DTI_B with ML-style let-polymorphism (Milner, 1978). Actually, Garcia and Cimini have already proposed two ways to implement the ITGL with let-polymorphism: by translating it to the Polymorphic Blame Calculus (Ahmed et al., 2011, 2017) or by expanding let before translating it to the (monomorphic) blame calculus. However, they have left a detailed comparison of the two to future work. Our semantics is very close to the latter, although we do not statically expand definitions by let. Perhaps surprisingly, the semantics is not quite parametric; we argue that translation to the Polymorphic Blame Calculus, which dynamically enforces parametricity (Reynolds, 1983; Ahmed et al., 2017), has an undesirable consequence and that our semantics (which is close to the one based on expanding let) is better suited for languages in which type abstraction and application are implicit.

Other than soundness and completeness of DTI, we also study the gradual guarantee property (Siek et al., 2015a) for the ITGL. The gradual guarantee formalizes the informal expectation for gradual typing that adding more static types to a program only exposes type errors—whether static or dynamic—but does not otherwise change the behavior of the program. To deal with the ITGL, where bound variables come with optional type annotations, we extend the notion of “more static types” (formalized as precision relations over types and terms) so that an omitted type annotation is more precise than an annotation with the dynamic type but less precise than an annotation with a static type.

Intuitively, omitted type annotations are considered (fresh) type variables, which are less specific than concrete types but more precise than the dynamic type because they range only over static types. We prove the gradual guarantee for the ITGL. To our knowledge, this is the first proof of the gradual guarantee for a language with let-polymorphism.

Finally, we have implemented an interpreter of the ITGL, including an implementation of Garcia and Cimini’s type inference algorithm, a translator to and an evaluator of λ^DTI_B, in OCaml. It supports integer, Boolean, and unit types as base types, standard arithmetic, comparison, and Boolean operators, conditional expressions, and recursive definitions. The source code is available at https://github.com/ymyzk/lambda-dti/.

Contributions.

Our contributions are summarized as follows:

  • We propose DTI as a basis for a new semantics of (an intermediate language for) the ITGL, an implicitly typed language with a gradual type system and Hindley–Milner polymorphism;

  • We define the blame calculus λ^DTI_B with its syntax, type system, and operational semantics with DTI;

  • We prove properties of λ^DTI_B, including type safety, soundness and completeness of DTI, and the gradual guarantee;

  • We also prove the gradual guarantee for the ITGL; and

  • We have implemented an interpreter of the ITGL.

The organization of the paper.

We define λ^DTI_B in Section 2 and state its basic properties, including type safety and conservative extension, in Section 3. Then, we show soundness and completeness of DTI in Section 4 and the gradual guarantee in Section 5. Finally, we discuss related work in Section 6 and conclude in Section 7. Proofs of the stated properties are given in the Appendix.

2. λ^DTI_B: A Blame Calculus with Dynamic Type Inference

In this section, we develop λ^DTI_B, a new blame calculus with dynamic type inference (DTI). We start with a simply typed fragment and add let-polymorphism in Section 2.3. The core of the calculus is based on the calculus of Siek et al. (2015a), which is a simplified version of the blame calculus of Wadler and Findler (2009) without refinement types. We augment its type system with type variables in a fairly straightforward manner and extend its operational semantics as described in the previous section.

Our blame calculus is designed to be used as an intermediate language to give semantics to the ITGL by Garcia and Cimini (2015). We will discuss the ITGL and how ITGL programs are translated to λ^DTI_B programs in Section 5 in more detail, but as far as the simply typed fragment is concerned, the translation is very similar to previous work (Siek et al., 2015a; Garcia and Cimini, 2015).

2.1. Static Semantics

We show the syntax of λ^DTI_B in Figure 1. The syntax of the calculus extends that of the simply typed lambda calculus with the dynamic type, casts, and type variables.

Figure 1. Syntax of λ^DTI_B.

Gradual types consist of base types (such as int and bool), type variables, the dynamic type, and function types. Static types are the subset of gradual types without the dynamic type. Ground types, which are used as tags to inject values into the dynamic type, contain base types and the function type ⋆ → ⋆. We emphasize that type variables are not ground types, because no value inhabits them (as we show in the canonical forms lemma (Lemma 2)) and so we never need a type variable as a tag to inject values.

Terms consist of variables, constants, primitive operations, lambda abstractions, applications, casts, and blame. Since this is an intermediate language, variables in abstractions are explicitly typed. A cast from one type to another is inserted when translating a term of the ITGL and is used for checking at run time whether a value of the source type can behave as the target type. Casts are also annotated with blame labels to indicate which cast has failed; blame denotes a run-time failure of a cast with a given label. Blame labels have polarity to indicate which side of a cast is to be blamed (Findler and Felleisen, 2002): for each blame label there is a negated blame label, denoting the opposite side, and negation is involutive. As we did in the introduction, we often abbreviate a sequence of casts.

Values consist of constants, lambda abstractions, wrapped functions, and injections. A wrapped function is a function value enclosed in a cast between function types. Results are values and blame. Evaluation contexts are standard and specify that a term is evaluated from left to right, under call-by-value.

We show the static semantics of λ^DTI_B in Figure 2. It consists of type consistency and typing.

Type consistency rules define the type consistency relation, a key notion in gradual typing, over gradual types. Intuitively, two types are consistent when it is possible for a cast between them to succeed. The rules C_Base and C_TyVar mean that a base type and a type variable, respectively, are consistent with themselves. The rules C_DynL and C_DynR mean that all gradual types are consistent with the dynamic type. The rule C_Arrow means that two function types are consistent if their domain types are consistent and so are their range types. The type consistency relation is reflexive and symmetric but not transitive.
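The consistency relation can be sketched as a small recursive function. This is an illustration, not the paper's formal definition; types are encoded here as `'int'`, `'bool'`, `'dyn'`, `('var', name)`, or `('fun', dom, cod)`.

```python
def consistent(t, u):
    # C_DynL / C_DynR: every type is consistent with the dynamic type.
    if t == "dyn" or u == "dyn":
        return True
    # C_Arrow: function types are consistent componentwise.
    if isinstance(t, tuple) and isinstance(u, tuple) \
            and t[0] == "fun" and u[0] == "fun":
        return consistent(t[1], u[1]) and consistent(t[2], u[2])
    # C_Base / C_TyVar: a base type or a type variable is consistent
    # only with itself.
    return t == u
```

The non-transitivity mentioned above is visible directly: `consistent("int", "dyn")` and `consistent("dyn", "bool")` both hold, yet `consistent("int", "bool")` does not.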

The typing rules of λ^DTI_B extend those of the simply typed lambda calculus. The rules T_Var, T_Const, T_Op, T_Abs, and T_App are standard. The function used in T_Const assigns a base type to each constant, and the one used in T_Op assigns a first-order static type without type variables to each operator. The rule T_Cast allows a term to be cast to any consistent type.

(The type consistency and typing rules are not reproduced here.)

Figure 2. Static semantics of λ^DTI_B.

A type substitution is a finite mapping from type variables to static types; the empty substitution and the composition of two type substitutions are written as usual. Application of a type substitution to a term or a type is defined in the usual way. Note that the codomain of a type substitution is static types, following Garcia and Cimini (2015), in which a type variable represents a placeholder for a static type.
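Application and composition of type substitutions can be sketched as follows, using the same illustrative type encoding as before (`'int'`, `'bool'`, `'dyn'`, `('var', name)`, `('fun', dom, cod)`); a substitution is a dict from type-variable names to types. These names and encodings are ours, not the paper's.

```python
def apply_subst(s, t):
    """Apply substitution s to type t; types have no binders in this
    sketch, so no variable capture can occur."""
    if isinstance(t, tuple) and t[0] == "var":
        return s.get(t[1], t)
    if isinstance(t, tuple) and t[0] == "fun":
        return ("fun", apply_subst(s, t[1]), apply_subst(s, t[2]))
    return t  # base types and the dynamic type are unchanged

def compose(s2, s1):
    """Composition (s2 . s1): applying the result equals applying s1
    first and then s2."""
    out = {a: apply_subst(s2, t) for a, t in s1.items()}
    for a, t in s2.items():
        out.setdefault(a, t)
    return out
```

For instance, applying `{"X": "int"}` to `('fun', ('var', 'X'), 'bool')` yields `('fun', 'int', 'bool')`.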

As expected, type substitution preserves consistency and typing:

Lemma (Type Substitution Preserves Consistency and Typing).
  1. If two types are consistent, then so are their images under any type substitution.

  2. If a term is well typed, then it remains well typed under any type substitution (applied to the type environment, the term, and the type).

2.2. Dynamic Semantics

We show the dynamic semantics of λ^DTI_B in Figure 3. It is given in a small-step style by using two relations over terms. One is the reduction relation, which represents a basic computation step, including dynamic checking by casts and DTI. The other is the evaluation relation, which represents top-level execution. Both relations are annotated with a type substitution generated by DTI.

(The reduction and evaluation rules are not reproduced here.)
Figure 3. Dynamic semantics of λ^DTI_B.

2.2.1. Basic Reduction Rules

We first explain the rules inherited from the basic blame calculus, where the type substitutions are empty. The rule R_Op is for primitive operations; the meta-function that interprets a primitive operation is assumed to return a value of the right type, i.e., the return type of the operator. The rule R_Beta performs standard β-reduction; term substitution is defined in a capture-avoiding manner as usual. The rules R_IdBase and R_IdStar discard identity casts on a base type and the dynamic type, respectively. The rules R_Succeed and R_Fail check two casts where an injection (a cast from a ground type to the dynamic type) meets a projection (a cast from the dynamic type to a ground type). If both ground types are equal, then the projection succeeds and the casts are discarded; otherwise, the casts fail and reduce to blame, aborting execution of the program with the blame label on the projection. The rule R_AppCast reduces an application of a wrapped function by breaking the cast into two: one cast on the argument and one on the return value. We negate the blame label for the cast on the argument because the direction of that cast is swapped from that of the function cast (Findler and Felleisen, 2002). The rules R_Ground and R_Expand decompose a cast between a non-ground type and the dynamic type into two casts. These rules cannot be used when the non-ground side is a type variable, because type variables are never consistent with any ground type. The ground type in the middle of the resulting two casts is uniquely determined:

Lemma (Ground Types).
  1. If a type is neither a type variable nor the dynamic type, then there exists a unique ground type consistent with it.

  2. Two ground types are consistent if and only if they are equal.
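The first item of the lemma corresponds to a simple total function on non-variable, non-dynamic types, sketched below with the illustrative encoding used earlier (`'int'`, `'bool'`, `'dyn'`, `('var', name)`, `('fun', dom, cod)`): a base type is its own ground type, and every function type shares the single ground function type.

```python
def ground_of(t):
    """Return the unique ground type consistent with t, or None when t
    is a type variable or the dynamic type (which have none)."""
    if t in ("int", "bool"):
        return t
    if isinstance(t, tuple) and t[0] == "fun":
        return ("fun", "dyn", "dyn")  # the ground function type dyn -> dyn
    return None
```

This is the function implicitly used by R_Ground and R_Expand to pick the intermediate ground type.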

2.2.2. Reduction Rules for Dynamic Type Inference

As discussed in the introduction, our idea is to infer the “value” of a type variable when a value is projected from the dynamic type to it. A value typed at the dynamic type is always tagged with a ground type. If the ground type is a base type, then the type variable will be instantiated with it. We revisit the example used in Section 1 and show evaluation steps below.

The subterm on the third line is where DTI is performed; this reduction step is annotated with a type substitution, which roughly means “the type variable must be int for the execution of the program to proceed further without blame.” As we will explain shortly, the type substitution is applied to the whole term, since the type variable may appear elsewhere in the term. As a result, the occurrences of the type variable in the type annotation and the last cast are also instantiated, and so the term evaluates to its final value in one step.

Now, we explain the formal reduction rules for DTI. The rule R_InstBase instantiates a type variable with a base type and generates the corresponding type substitution. The rule R_InstArrow instantiates a type variable with a function type over two fresh type variables. (We use the term “fresh” here to mean that the two variables occur nowhere in the whole program before reduction.) At this point, we know that the value is a (possibly wrapped) function, but the domain and range types are still unknown. We defer the decision about these types by generating fresh type variables, which will be instantiated later in evaluation.
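The two DTI rules can be sketched as a function from a type variable and the ground-type tag it meets to the generated substitution. The encoding and the fresh-variable supply are illustrative devices of ours, not the paper's definitions.

```python
_counter = 0

def fresh():
    """Generate a type variable not used before (an invented supply)."""
    global _counter
    _counter += 1
    return ("var", f"a{_counter}")

def dti(a, g):
    """Substitution generated when a value tagged with ground type g is
    projected to type variable a."""
    if g in ("int", "bool"):
        return {a: g}                          # R_InstBase: a := g
    if g == ("fun", "dyn", "dyn"):
        return {a: ("fun", fresh(), fresh())}  # R_InstArrow: a := a1 -> a2
    return None                                # g is not a ground type
```

For a base-type tag the variable is fixed immediately; for the function tag the decision about domain and range is deferred to the two fresh variables, mirroring the explanation above.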

Finally, we explain the evaluation rules. The rule E_Step reduces the subterm in an evaluation context and then applies the generated type substitution. The substitution is applied to the whole term so that all other occurrences of the same type variables are replaced at once. The rule E_Abort aborts execution of a program that raises blame.

We write the multi-step evaluation relation for a sequence of evaluation steps, composing the type substitutions generated along the way; multi-step reduction is defined similarly.

One may wonder whether the rule R_InstArrow is redundant, because the term it produces always reduces further in the next step. Indeed, we could instead define a rule that performs the instantiation and the subsequent reduction in a single step, in either of two ways. Using such combined rules does not change the semantics of the language, but we choose R_InstArrow for ease of proofs.

We show how the rule R_InstArrow works using the following somewhat contrived example. (In fact, this term is not in the image of the cast-inserting translation; an example in the image would be much more complicated.)

The type variable is first instantiated with a function type over two fresh type variables when the value tagged with the function ground type is projected to it. Then, the two fresh variables are instantiated with int by R_InstBase, as we have already explained, by the time the term evaluates to its final value.

Perhaps surprisingly, our reduction rules to instantiate type variables are not symmetric: there are no rules for casts in the opposite direction, even though such cast expressions do appear during reduction. This is because no value is typed at a type variable, as we will show in the canonical forms lemma (Lemma 2), so we do not need rules to reduce these terms; the type variable will be instantiated during evaluation of the surrounding term.

Before closing this subsection, we revisit an example from the introduction. It raises blame because one type variable is used in contexts that expect different types:

As in the first example, at the third step, the projection subterm reduces by R_InstBase and a substitution instantiating the type variable with int is yielded. Then, by application of E_Step, the occurrences of that type variable in the evaluation context also get replaced with int. This term eventually evaluates to blame because the cast on true fails. After all, the function cannot be used at two different types at the same time. It is important that each type variable is instantiated at most once.

2.3. Let-Polymorphism

In this section, we extend λ^DTI_B to let-polymorphism (Milner, 1978) by introducing type schemes and explicit type abstraction and application, as in Core-XML (Harper and Mitchell, 1993). (Here, we abbreviate sequences by using vector notations.) Explicit type abstraction/application is needed because we need type information at run time.

Our formulation is fairly standard, but there are a few twists motivated by our working hypothesis that a let-binding in the surface language should behave, both statically and dynamically, the same as the program obtained by substituting the bound expression for the variable—especially in languages where type abstractions and applications are implicit. Consider the following expression in the surface language (extended with pairs):

where the annotation stands for ascription, which is translated to a cast. By making casts, type abstraction, and type application explicit, one would obtain something like

in λ^DTI_B. Note that, due to the use of ascription, different type variables are assigned to the two components, and so the let-bound variable is bound to a two-argument type abstraction. Now, what should the type arguments be at the two uses of the variable? It should be obvious that the first type argument at each use has to be int and bool, respectively, from the arguments passed. At first, it may seem that the second type arguments should also be int and bool, respectively, in order to avoid blame. However, it is hard for a type system to see this—in fact, the corresponding type variable is bound but not referenced in the type scheme, so there is no clue. We assign a special symbol ν to such type arguments; each occurrence of ν is replaced with a fresh type variable when a (polymorphic) value is substituted for the variable during reduction. These fresh type variables are expected to be instantiated by DTI—to int and bool in this example—as reduction proceeds.

We do not assign fresh type variables when type abstractions/applications are made explicit during cast insertion. This is because an expression including type applications may be duplicated during reduction, and sharing a type variable among the duplicates might cause unwanted blame.

(The extended syntax, typing rules, reduction rule, and substitution are not reproduced here.)

Figure 4. λ^DTI_B with polymorphic let.

Figure 4 shows the definition of the extension. A type scheme is a gradual type abstracted over a (possibly empty) finite sequence of type variables. A type environment is changed so that it maps variables to type schemes, instead of gradual types. Terms are extended with let-expressions, which bind a variable to a value (a monomorphic definition, whose body is not necessarily a value, can be expressed as usual), and variables, which represent type application if let-bound, are now annotated with a sequence of type arguments (a variable introduced by a lambda abstraction is always monomorphic, so the sequence is empty for such a variable). A type argument is either a static type or the special symbol ν. Type substitution is defined in a capture-avoiding manner.

We adopt the value restriction (Wright, 1995)—i.e., the body of a type abstraction has to be a syntactic value—to avoid a subtle issue in the dynamic semantics. If we did not adopt the value restriction, we would have to deal with cast applications whose target type is a bound type variable. For example, consider a term (actually ill formed in our language because the value restriction is violated) that has a cast with a bound type variable as its target type. The question is how the cast is evaluated. We should not apply R_InstBase here because the type of the whole term would change. It appears that there is no reasonable way to reduce this cast further—after all, the semantics of the cast depends on what type is substituted for the bound variable, which may be instantiated in many ways. The value restriction resolves this issue (making let call-by-name (Leroy, 1993) may be another option): a cast whose target type is a bound type variable is executed only after the type variable is instantiated.

The typing rules of λ^DTI_B are also updated. We replace the rule for variables with the rule T_VarP and add the rule T_LetP. The rule T_LetP is standard; it allows generalization over type variables that do not appear free in the type environment. Note that it allows abstraction over a type variable even when that variable does not appear in the type of the body; as we have already seen, such abstraction can be significant. The second premise of the rule T_VarP, which represents type application, means that extra type variables that do not appear in the type have to be instantiated with ν (and the other type variables with static types). The type expression involved is notational abuse, but the result will not contain ν because the corresponding type variables do not appear in the type.

The rule R_LetP is an additional rule to reduce let-expressions. Roughly speaking, a let-expression reduces to its body with the value substituted for the variable as usual, but, due to explicit type abstraction/application, the definition of substitution is slightly peculiar: when a variable is replaced with a type-abstracted value, the type arguments are also substituted, after fresh type variable generation, for the bound type variables in the value. We formalize this idea as the substitution shown in the lower half of Figure 4. This nonstandard substitution makes the correspondence to the usual reduction for let easier to see. Other reduction and evaluation rules, including R_InstBase and R_InstArrow, remain unchanged.
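The refreshing of type arguments described above can be sketched as follows. The placeholder name and helper functions are invented for illustration; the paper's special symbol is written ν.

```python
PLACEHOLDER = "nu"  # stands for the special symbol assigned to
                    # type arguments with no static clue

_counter = 0

def fresh():
    """Generate a brand-new type variable (an invented supply)."""
    global _counter
    _counter += 1
    return ("var", f"b{_counter}")

def refresh(type_args):
    """When a let-bound variable annotated with type arguments is
    replaced by its type-abstracted value, each placeholder argument is
    replaced with a distinct fresh type variable, to be instantiated
    later by DTI; static-type arguments are kept as they are."""
    return [fresh() if t == PLACEHOLDER else t for t in type_args]
```

Distinctness of the generated variables matters: duplicated uses must not share a variable, for the reason given in Section 2.3.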

2.4. Discussion of the Semantics of Let-Polymorphism

Before proceeding further, we give a brief comparison with Garcia and Cimini (2015), who have suggested that the Polymorphic Blame Calculus (PBC) (Ahmed et al., 2011, 2017) can be used to give the semantics of let in the ITGL. Using the PBC-style semantics for type abstraction/application would, however, raise blame more often—because the PBC enforces parametricity at run time—even when we just give subterms names by let.

The difference between our approach and the PBC-based one is exemplified by the following program in the ITGL:

This program is translated into λ^DTI_B and evaluated as follows: