Formalising Mathematics In Simple Type Theory

04/20/2018
by Lawrence C. Paulson, et al.
University of Cambridge

Despite the considerable interest in new dependent type theories, simple type theory (which dates from 1940) is sufficient to formalise serious topics in mathematics. This point is seen by examining formal proofs of a theorem about stereographic projections. A formalisation using the HOL Light proof assistant is contrasted with one using Isabelle/HOL. Harrison's technique for formalising Euclidean spaces is contrasted with an approach using Isabelle/HOL's axiomatic type classes. However, every formal system can be outgrown, and mathematics should be formalised with a view that it will eventually migrate to a new formalism.



1 Introduction

Let’s begin with Dana Scott:

No matter how much wishful thinking we do, the theory of types is here to stay. There is no other way to make sense of the foundations of mathematics. Russell (with the help of Ramsey) had the right idea, and Curry and Quine are very lucky that their unmotivated formalistic systems are not inconsistent. [Italics in original] (scott93, p.413)

The foundations of mathematics is commonly understood as referring to philosophical conceptions such as logicism (mathematics reduced to logic), formalism (mathematics as “a combinatorial game played with the primitive symbols”) (neumann-foundations, p.62), Platonism (“mathematics describes a non-sensual reality, which exists independently … of the human mind”) (goedel-basic-foundations, p.323) and intuitionism (mathematics as “a production of the human mind”) (heyting-foundations, p.52). Some of these conceptions, such as logicism and formalism, naturally lend themselves to the idea of doing mathematics in a formal deductive system. Whitehead and Russell’s magnum opus, Principia Mathematica principia, is the quintessential example of this. Other conceptions are hostile to formalisation. However, a tremendous amount of mathematics has been formalised in recent years, and this work is largely indifferent to those philosophical debates.

This article is chiefly concerned with the great body of analysis and topology formalised by John Harrison, using higher-order logic as implemented in his HOL Light proof assistant hol-light-tutorial. The original motive for this work was to verify implementations of computer arithmetic, such as the calculation of the exponential function harrison-exp, prompted by the 1994 floating-point division bug that forced Intel to recall millions of Pentium chips at a cost of $475 million nicely-pentium-fdiv. Another great body of mathematics was formalised by Georges Gonthier using Coq: the four colour theorem gonthier-4ct and, later, the odd order theorem gonthier-oot. Here the motive was to increase confidence in the proofs: the first four colour proof involved thousands of cases checked by a computer program, while the proof of the odd order theorem originally appeared as a 255-page journal article. Finally there was the Flyspeck project, to formalise Thomas Hales’s proof of the Kepler conjecture, another gigantic case analysis; this formalisation task was carried out by many collaborators using HOL Light and Isabelle/HOL, so again higher-order logic.

Higher-order logic is based on the work of Church church40 , which can be seen as a simplified version of the type theory of Whitehead and Russell. But while they were exponents of logicism, today’s HOL Light and Isabelle/HOL users clearly aren’t, or at least, keep their views secret.

Superficially, Coq users are indeed exponents of intuitionism: they regularly refer to constructive proofs and stress their rejection of the excluded middle. However, this sort of discussion is not always convincing. For example, the abstract announcing the Coq proof of the odd order theorem declares “the formalized proof is constructive” (gonthier-oot, p.163). This theorem states that every finite group of odd order is solvable, and therefore a constructive proof should provide, for a given group G of odd order, evidence that G is solvable. However, the solvability of a finite group can be checked in finite time, so no evidence is required. So does the constructive nature of the proof embody anything significant? It turns out that some results in the theory of group modules could only be proved in double-negation form (gonthier-oot, p.174).

Analysis changes everything. Constructive analysis looks utterly different from classical analysis. As formulated by Bishop bishop-bridges, we may not assume that a real number x satisfies x = 0 ∨ x ≠ 0, and x ≠ 0 does not guarantee that xy = 1 for some real y. In their Coquelicot analysis library, Boldo et al. assume these classical principles, while resisting the temptation to embrace classical logic in full (boldo-coquelicot, §3.2).

The sort of constructivism just described therefore seems to lack an overarching philosophical basis or justification. In contrast, Martin-Löf’s type theory was intended from the start to support Bishop-style constructive analysis martin-lof-itt-predicative ; this formal calculus directly embodies Heyting’s intuitionistic interpretation of the logical constants martin-lof-meanings . It is implemented as the Agda bove-agda programming language and proof assistant.

It’s worth remarking that the very idea of fixing a formalism as the foundation of intuitionistic mathematics represents a sharp deviation from its original conception. As Heyting wrote,

The intuitionistic mathematician … uses language, both natural and formalised, only for communicating thoughts, i.e., to get others or himself to follow his own mathematical ideas. Such a linguistic accompaniment is not a representation of mathematics; still less is it mathematics itself. (heyting-foundations, pp.52–3)

Constructive logic is well supported on the computer. However, the choice of proof assistant is frequently dictated by other considerations, including institutional expectations, the availability of local expertise and the need for specific libraries. The popularity of Coq in France is no reason to imagine that intuitionism is the dominant philosophy there.

Someone wishing to formalise mathematics today has three main options:

  • Higher-order logic (also known as simple type theory), where types are built inductively from certain base types, and variables have fixed types. Generalising this system through polymorphism adds considerable additional expressiveness.

  • Dependent type theories, where types are parameterised by terms, embodying the propositions-as-types principle. This approach was first realised in NG de Bruijn’s AUTOMATH debruijn-survey . Such systems are frequently but not necessarily constructive: AUTOMATH was mainly used to formalise classical mathematics.

  • Set theories can be extremely expressive. The Mizar system has demonstrated that set theory can be a foundation for mathematics in practice as well as in theory bancerek-lattices . Recent work by Zhan zhan-fundamental confirms this point independently, with a high degree of automation.

All three options have merits. While this paper focuses on higher-order logic, I make no claim that this formalism is the best foundation for mathematics. It is certainly less expressive than the other two. And a mathematician can burst free of any formalism as quickly as you can say “the category of all sets”. I would prefer to see a situation where formalised mathematics could be made portable: where proofs could be migrated from one formal system to another through a translation process that respects the structure of the proof.

2 Higher-Order Logic on the Computer

A succinct way to describe higher-order logic is as a predicate calculus with simple types, including functions and sets, the latter seen as truth-valued functions.
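The slogan “sets as truth-valued functions” can be made concrete in a few lines. The following Python sketch (the names `member` and `union` are my own, purely illustrative) represents a set of naturals as a function from numbers to booleans:

```python
from typing import Callable

# A "set" in higher-order logic is just a truth-valued function.
NatSet = Callable[[int], bool]

evens: NatSet = lambda n: n % 2 == 0

def member(n: int, s: NatSet) -> bool:
    """Membership is function application: n IN s means s(n)."""
    return s(n)

def union(a: NatSet, b: NatSet) -> NatSet:
    """Union is pointwise disjunction of the two predicates."""
    return lambda n: a(n) or b(n)
```

Every set-theoretic operation becomes a logical operation on predicates; this is exactly why higher-order logic needs no separate machinery for sets.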

Logical types evolved rapidly during the 20th century. For Whitehead and Russell, types were a device to forestall the paradoxes, in particular by enforcing the distinction between sets and individuals. But they had no notation for types and never wrote them in formulas. They even proved (the modern equivalent of) Cls ∈ Cls, concealing the type symbols that prevent Russell’s paradox here feferman-typical-ambiguity. Their omission of type symbols, which they termed typical ambiguity, was a precursor to today’s polymorphism. It seems that they preferred to keep types out of sight.

Church church40 provided a type notation including a type ι of individuals and a separate type o of truth values, with which one could express sets of individuals (having type (oι)), sets of sets of individuals (type (o(oι))) etc., analogously to the cumulative hierarchy of sets, but only to finite levels. Church assigned all individuals the same type.

Other people wanted to give types a much more prominent role. The mathematician NG de Bruijn devoted much of his later career, starting in the 1960s, to developing type theories for mathematics:

I believe that thinking in terms of types and typed sets is much more natural than appealing to untyped set theory. … In our mathematical culture we have learned to keep things apart. If we have a rational number and a set of points in the Euclidean plane, we cannot even imagine what it means to form the intersection. The idea that both might have been coded in ZF with a coding so crazy that the intersection is not empty seems to be ridiculous. If we think of a set of objects, we usually think of collecting things of a certain type, and set-theoretical operations are to be carried out inside that type. Some types might be considered as subtypes of some other types, but in other cases two different types have nothing to do with each other. That does not mean that their intersection is empty, but that it would be insane to even talk about the intersection. [Italics in original] (debruijn-types, p.31)

De Bruijn also made the case for polymorphism:

Is there the drawback that working with typed sets is much less economic than with untyped ones? If things have been said for sets of apples, and if these same things hold, mutatis mutandis, for sets of pears, does one have to repeat all what had been said before? No. One just takes a type variable, ξ say, and expresses all those generalities for sets of things of type ξ. Later one can apply all this by means of a single instantiation, replacing ξ either by apple or by pear. (debruijn-types, p.31)

His work included the first computer implementations of dependent type theories. However, his view that apples and pears should have different types, using type variables to prevent duplication, is universally accepted even with simple type theory.

2.1 Why Simple Type Theory?

What is the point of choosing simple type theory when powerful dependent type theories exist? One reason is that so much can be done with so little. HOL Light “sets a very exacting standard of correctness” and “compared with other HOL systems, … uses a much simpler logical core” (http://www.cl.cam.ac.uk/~jrh13/hol-light/). Thanks to this simplicity, fully verified implementations now appear to be in reach kumar-self-formalisation. Isabelle/HOL’s logical core is larger, but nevertheless, concepts such as quotient constructions kaliszyk-quotients, inductive and coinductive definitions blanchette-datatypes; paulson-coind, recursion, pattern-matching and termination checking krauss-partial-recursive are derived from Church’s original HOL axioms; with dependent type theories, such features are generally provided by extending the calculus itself gimenez-recursive.

The other reason concerns automation. Derivations in formal calculi are extremely long. Whitehead and Russell needed hundreds of pages to prove 1+1=2 (principia, p.360); in fact the relevant proposition, ∗54·43, is a statement about sets. Many of the propositions laboriously worked out there are elementary identities that are trivial to prove with modern automation. Proof assistants must be capable of performing lengthy deductions automatically. But more expressive formalisms are more difficult to automate. Even at the most basic level, technical features of constructive type theories interfere with automation. Term rewriting refers to the use of a set of identities to perform algebraic simplification. It has been a staple of automated theorem proving since the 1970s bm79. Isabelle/HOL has over 2800 rewrite rules pre-installed, and the full battery can be applied with the single word auto. The rewriting tactics of Coq (coq-refman, §8.6) — the most advanced implementation of dependent types — apply a single, explicitly named rewrite rule. Only recent versions of the Lean proof assistant (which implements the same calculus as Coq) provide strong simplification avigad-lean.
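To make the idea of term rewriting concrete, here is a toy Python sketch, not how Isabelle's simplifier is implemented, merely the idea it automates: a set of identities is applied bottom-up until no rule fires. The names `simp`, `add_zero` and `mul_one` are invented for illustration.

```python
def simp(term, rules):
    """Bottom-up algebraic simplification: rewrite subterms first,
    then try each rule, repeating until a fixed point is reached."""
    if not isinstance(term, tuple):
        return term                       # a variable or a constant
    op, *args = term
    term = (op, *[simp(a, rules) for a in args])
    for rule in rules:
        new = rule(term)
        if new != term:
            return simp(new, rules)       # a rule fired: simplify again
    return term

# Two sample identities: x + 0 = x and x * 1 = x.
def add_zero(t):
    return t[1] if t[0] == "+" and t[2] == 0 else t

def mul_one(t):
    return t[1] if t[0] == "*" and t[2] == 1 else t
```

For example, `simp(("+", ("*", "x", 1), 0), [add_zero, mul_one])` reduces the term (x · 1) + 0 all the way to the variable `"x"`; an Isabelle `auto` call does the same kind of work with thousands of rules.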

It is also striking to consider the extent to which the Ssreflect proof language and library has superseded the standard Coq libraries. Gonthier and Mahboubi write

Small-scale reflection is a formal proof methodology based on the pervasive use of computation with symbolic representations. … The statements of many top-level lemmas, and of most proof subgoals, explicitly contain symbolic representations; translation between logical and symbolic representations is performed under the explicit, fine-grained control of the proof script. The efficiency of small-scale reflection hinges on the fact that fixing a particular symbolic representation strongly directs the behaviour of a theorem-prover. (gonthier-ssreflect, p.96)

So Ssreflect appears to sacrifice a degree of mathematical abstraction, though nobody can deny its success gonthier-4ct ; gonthier-oot . The Coquelicot analysis library similarly shies away from the full type system:

The Coq system comes with an axiomatization of standard real numbers and a library of theorems on real analysis. Unfortunately, … the definitions of integrals and derivatives are based on dependent types, which make them especially cumbersome to use in practice. (boldo-coquelicot, p.41)

In the sequel, we shall be concerned with two questions:

  • whether simple type theory is sufficient for doing significant mathematics, and

  • whether we can avoid getting locked into any one formalism.

The latter, because it would be absurd to claim that any one formalism is all that we could ever need.

2.2 Simple Type Theory

Higher-order logic as implemented in proof assistants such as HOL Light hol-light-tutorial and Isabelle/HOL isa-tutorial borrows the syntax of types in the programming language ML paulson-ml2 . It provides

  • atomic types, in particular bool, the type of truth values, and nat, the type of natural numbers.

  • function types, denoted by ⇒.

  • compound types, such as ’a list for lists whose elements have type ’a, and similarly ’a set for typed sets. (Note the postfix notation.)

  • type variables, denoted by ’a, ’b etc. They give rise to polymorphic types like ’a ⇒ ’a, the type of the identity function.

Type variables and polymorphism, implicit in Church and argued for above by de Bruijn, must be included in the formalism implemented by a proof assistant. Already when we consider an elementary operation such as the union of two sets, the type of the sets’ elements is clearly a parameter, and we obviously expect to have a single definition of union. Polymorphism makes that possible.

The terms of higher-order logic are precisely those of the typed λ-calculus: identifiers (which could be variables or constants), λ-abstractions and function applications. On this foundation a full predicate calculus is built, including equality. Note that while first-order logic regards terms and formulas as distinct syntactic categories, higher-order logic distinguishes between terms and formulas only in that the latter have type bool.
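A miniature type checker makes this concrete: terms are identifiers, λ-abstractions and applications, and a “formula” is simply a term of type bool. The Python representation below (tuples for terms and types) is my own illustrative encoding, not any prover's internals:

```python
def type_of(term, env=None):
    """Infer the simple type of a term.
    Types are "bool", "nat" or ("fun", argty, resty);
    terms are ("var", x), ("lam", x, ty, body) or ("app", f, a)."""
    env = env or {}
    tag = term[0]
    if tag == "var":                     # identifiers have fixed types
        return env[term[1]]
    if tag == "lam":                     # (λx:T. body) : T ⇒ typeof(body)
        _, x, ty, body = term
        return ("fun", ty, type_of(body, {**env, x: ty}))
    if tag == "app":                     # argument type must match exactly
        f = type_of(term[1], env)
        a = type_of(term[2], env)
        if f[0] != "fun" or f[1] != a:
            raise TypeError("ill-typed application")
        return f[2]
    raise ValueError(f"unknown term: {term!r}")
```

The identity function on naturals, `("lam", "x", "nat", ("var", "x"))`, gets type nat ⇒ nat; applying it to a boolean-typed argument is rejected, which is precisely the paradox-blocking discipline of simple types.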

Overloading is the idea of using type information to disambiguate expressions. In a mathematical text, the expression x × y could stand for any number of things: it might be the Cartesian product of two sets, the direct product of two groups or the arithmetic product of two natural numbers. Most proof assistants make it possible to assign an operator such as × multiple meanings, according to the types of its operands. In view of the huge ambiguity found in mathematical notation, the possibility of overloading is a strong argument in favour of a typed formalism.
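Resolving an operator's meaning from its operands can be sketched with Python's standard `functools.singledispatch`, which chooses an implementation from the (runtime) type of the first argument. The function name `prod` is purely illustrative; provers resolve the same ambiguity statically, from inferred types:

```python
from functools import singledispatch

@singledispatch
def prod(x, y):
    raise TypeError(f"no meaning of prod for {type(x).__name__}")

@prod.register
def _(x: int, y):
    # arithmetic product of numbers
    return x * y

@prod.register
def _(x: frozenset, y):
    # Cartesian product of sets
    return frozenset((a, b) for a in x for b in y)
```

The single name `prod` now denotes two unrelated operations, disambiguated by type, which is exactly the convenience overloading buys in a typed formalism.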

2.3 Higher-Order Logic as a Basis for Mathematics

The formal deductive systems in HOL Light and Isabelle/HOL closely follow Church church40 . However, no significant applications can be tackled from this primitive starting point. It is first necessary to develop, at least, elementary theories of the natural numbers and lists (finite sequences). General principles of recursive/inductive definition of types, functions and sets are derived, by elaborate constructions, from the axioms. Even in the minimalistic HOL Light, this requires more than 10,000 lines of machine proofs; it requires much more in Isabelle, deriving exceptionally powerful recursion principles blanchette-datatypes . This foundation is already sufficient for studying many problems in functional programming and hardware verification, even without negative integers.

To formalise analysis requires immensely more effort. It is necessary to develop the real numbers (as Cauchy sequences, for example), but that is just the beginning. Basic topology including limits, continuity, derivatives, power series and the familiar transcendental functions must also be formalised. And all that is barely a foundation for university-level mathematics. In addition to the sheer bulk of material that must be absorbed, there is the question of duplication. The process of formalisation gives rise to several number systems: natural numbers, integers, rationals, reals and complex numbers. This results in great duplication, with laws such as x + y = y + x existing in five distinct forms. Overloading, by itself, doesn’t solve this problem.

The need to reason about n-dimensional spaces threatens to introduce infinite duplication. Simple type theory does not allow dependent types, and yet the parameter n (the dimension) is surely a natural number. The theory of Euclidean spaces concerns ℝ^n for any n, and it might appear that such theorems cannot even be stated in higher-order logic. John Harrison found an ingenious solution harrison-euclidean: to represent the dimension by a type of the required cardinality. It is easy to define types in higher-order logic having any specified finite number of elements. Then ℝ^n can be represented by the type real^N, where the dimension N is a type. Through polymorphism, N can be a type variable, and the existence of sum and product operations on types even allows basic arithmetic to be performed on dimensions. It must be admitted that things start to get ugly at this point. Other drawbacks include the need to write ℝ as real^1 in order to access topological results in the one-dimensional case. Nevertheless, this technique is flexible enough to support the rapidly expanding HOL Light multivariate analysis library, which at the moment covers much complex analysis and algebraic topology, including the Cauchy integral theorem, the prime number theorem, the Riemann mapping theorem, the Jordan curve theorem and much more. It is remarkable what can be accomplished with such a simple foundation.
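Harrison's trick of turning the dimension into a type can be loosely caricatured in Python: each “dimension” is a class enumerating its indices, and passing the same dimension object to every operation plays the role that the type parameter N plays in real^N. This is only an analogy under invented names (Python checks at runtime what HOL checks statically):

```python
class Two:                      # a "type" with exactly two elements
    indices = (0, 1)

class Three:                    # ... and one with three
    indices = (0, 1, 2)

def vec(dim, *coords):
    """A vector in R^dim: a mapping from the index type to floats."""
    assert len(coords) == len(dim.indices), "wrong dimension"
    return dict(zip(dim.indices, coords))

def dot(dim, u, v):
    # The shared `dim` argument stands in for the type parameter N,
    # ruling out dot products of vectors of different dimensions.
    return sum(u[i] * v[i] for i in dim.indices)
```

In HOL Light the mismatch `dot(Two, u3, v3)` would be a type error caught before any proof begins, which is Harrison's point about the type system doing useful work.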

It’s important to recognise that John Harrison’s approach is not the only one. An obvious alternative is to use polymorphism and explicit constraints (in the form of sets or predicates) to identify domains of interest. Harrison rejects this because

it seems disappointing that the type system then makes little useful contribution, for example in ‘automatically’ ensuring that one does not take dot products of vectors of different lengths or wire together words of different sizes. All the interesting work is done by set constraints, just as if we were using an untyped system like set theory.

(harrison-euclidean, p.115)

Isabelle/HOL provides a solution to this dilemma through an extension to higher-order logic: axiomatic type classes wenzel-type. This builds on the idea of polymorphism, which in its elementary form is merely a mechanism for type schemes: a definition or theorem involving type variables stands for all possible instances where types are substituted for the type variables. Polymorphism can be refined by introducing classes of types, allowing a type variable to be constrained by one or more type classes, and allowing a type to be substituted for a type variable only if it belongs to the appropriate classes. A type class is defined by specifying a suite of operations together with laws that they must satisfy: for example, a partial ordering with the operation ≤ satisfying reflexivity, antisymmetry and transitivity, or a ring with the operations 0, 1, +, −, × satisfying the usual axioms. The type class mechanism can express a wide variety of constraints using types themselves, addressing Harrison’s objection quoted above. Type classes can also be extended and combined with great flexibility to create specification hierarchies: partial orderings, but also linear and well-founded orderings; rings, but also groups, integral domains, fields, as well as linearly ordered fields, et cetera. Type classes work equally well at specifying concepts from analysis such as topological spaces of various kinds, metric spaces and Euclidean spaces hoelzl-filters.

Type classes also address the issue of duplication of laws such as x + y = y + x. That property is an axiom for the type class of abelian groups, which is inherited by rings, fields, etc. As a new type is introduced (for example, the rationals), operations can be defined and proved to satisfy the axioms of some type class; that being done, the new type will be accepted as a member of that type class (for example, fields). This step can be repeated for other type classes (for example, linear orderings). At this point, it is possible to forget the explicit definitions (for example, addition of rational numbers) and refer to the axioms of the type classes, such as x + y = y + x. Type classes also allow operators such as + to be overloaded in a principled manner, because all of those definitions satisfy similar properties. Recall that overloading means assigning an operator multiple meanings, but when this is done through type classes, the multiple meanings will enjoy the same axiomatic properties, and a single type class axiom can replace many theorems paulson-numerical.

An intriguing aspect of type classes is the possibility of recursive definitions over the structure of types. For example, the lexicographic ordering ≤ on type ’a list is defined in terms of the ordering ≤ on type ’a. But this introduces the question of circular definitions. More generally, it becomes clear that the introduction of type classes goes well beyond the naïve semantic foundation of simple type theory as a notation for a fragment of set theory. Recently, Kunčar and Popescu kuncar-consistent-foundation have published an analysis including sufficient conditions for overloaded constant definitions to be sound, along with a new semantics for higher-order logic with type classes. It remains much simpler than the semantics of any dependent type theory.
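The recursion over type structure just mentioned, defining the list ordering from the element ordering, can be sketched directly. Here `lex_le` (an invented name) lifts a reflexive ordering `le` on elements to the lexicographic ordering on lists:

```python
def lex_le(le):
    """Lift a (reflexive, total) ordering `le` on elements to the
    lexicographic ordering on lists of those elements."""
    def lists_le(xs, ys):
        if not xs:
            return True                   # [] precedes everything
        if not ys:
            return False
        x, y = xs[0], ys[0]
        if le(x, y) and not le(y, x):     # strictly smaller head
            return True
        if le(x, y) and le(y, x):         # equal heads: recurse on tails
            return lists_le(xs[1:], ys[1:])
        return False
    return lists_le
```

Note how the ordering for type ’a list is manufactured from the ordering for type ’a; in Isabelle this construction happens once at the class level, and soundness of such type-structural recursion is exactly what the Kunčar–Popescu analysis addresses.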

2.4 A Personal Perspective

In the spring of 1977, as a mathematics undergraduate at Caltech, I had the immense privilege of attending a lecture series on AUTOMATH given by de Bruijn himself, and of meeting him privately to discuss it. I studied much of the AUTOMATH literature, including Jutting’s famous thesis jutting77 on the formalisation of Landau’s Foundations of Analysis.

In the early 1980s, I encountered Martin-Löf’s type theory through the group at Chalmers University in Sweden. Again I was impressed with the possibilities of this theory, and devoted much of my early career to it. I worked on the derivation of well-founded recursion in Martin-Löf’s type theory paulson-cons, and created Isabelle originally as an implementation of this theory paulson-natural. Traces of this are still evident in everything from Isabelle’s representation of syntax to the rules for the Σ and Π constructions in Isabelle/ZF. The logic CTT (constructive type theory) is still distributed with Isabelle (http://isabelle.in.tum.de), including an automatic type checker and simplifier.

My personal disenchantment with dependent type theories coincided with the decision to shift from extensional to intensional equality nordstrom90. This meant, for example, that 0 + n = n and n + 0 = n would henceforth be regarded as fundamentally different assertions, one an identity holding by definition and the other a mere equality proved by induction. Of course I was personally upset to see several years of work, along with Constable’s Nuprl project constable86, suddenly put beyond the pale. But I also had the feeling that this decision had been imposed on the community rather than arising from a rational discussion. And I see the entire homotopy type theory effort as an attempt to make equality reasonable again.

3 Example: Stereographic Projections

An example will serve to demonstrate how mathematics can be formalised using the techniques described in §2.3 above. We shall compare two formalisations of a theorem: the HOL Light original and the new version after translation to Isabelle/HOL using type classes.

The theorem concerns stereographic projections, including the well-known special case of mapping a punctured sphere (punctured means that one point is removed) onto a plane (Fig.1). In fact, it holds under rather general conditions. In the two-dimensional case, a punctured circle is flattened onto a line. The line or plane is infinite, and points close to the puncture are mapped “out towards infinity”. The theorem holds in higher dimensions, with the sphere generalised to the surface of an n-dimensional convex bounded set and the plane generalised to an affine set of dimension n − 1. The mappings are continuous bijections between the two sets: the sets are homeomorphic.
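For the classical two-dimensional case the projection and its inverse can be written down explicitly. The Python sketch below (function names are my own) maps the unit circle, punctured at the north pole (0, 1), onto the x-axis, and illustrates numerically that points near the puncture are sent far out:

```python
from math import isclose

def project(x, y):
    """Stereographic projection of the punctured unit circle,
    from the north pole (0, 1), onto the x-axis."""
    return x / (1 - y)

def unproject(t):
    """Inverse map; project and unproject form a continuous bijection
    between the punctured circle and the whole line."""
    return 2 * t / (t * t + 1), (t * t - 1) / (t * t + 1)
```

A quick check: `unproject(t)` always lands on the unit circle, `project(unproject(t)) = t`, and a point such as (0.001, √(1 − 0.000001)), very close to the pole, projects to a coordinate in the thousands.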

Figure 1: 3D illustration of a stereographic projection from the north pole onto a plane below the sphere

The theorem we shall examine is the generalisation of the case for the sphere to the case for a bounded convex set. The proof of this theorem is formalised in HOL Light (file https://github.com/jrh13/hol-light/blob/master/Multivariate/paths.ml) as shown in Fig.3. At 51 lines, it is rather short for such proofs, which can be thousands of lines long.


let HOMEOMORPHIC_PUNCTURED_SPHERE_AFFINE_GEN = prove
 (‘!s:real^N->bool t:real^M->bool a.
        convex s /\ bounded s /\ a IN relative_frontier s /\
        affine t /\ aff_dim s = aff_dim t + &1
        ==> (relative_frontier s DELETE a) homeomorphic t‘,
  REPEAT GEN_TAC THEN ASM_CASES_TAC ‘s:real^N->bool = {}‘ THEN
  ASM_SIMP_TAC[AFF_DIM_EMPTY; AFF_DIM_GE; INT_ARITH
   ‘--(&1):int <= s ==> ~(--(&1) = s + &1)‘] THEN
  MP_TAC(ISPECL [‘(:real^N)‘; ‘aff_dim(s:real^N->bool)‘]
    CHOOSE_AFFINE_SUBSET) THEN REWRITE_TAC[SUBSET_UNIV] THEN
  REWRITE_TAC[AFF_DIM_GE; AFF_DIM_LE_UNIV; AFF_DIM_UNIV; AFFINE_UNIV] THEN
  DISCH_THEN(X_CHOOSE_THEN ‘t:real^N->bool‘ STRIP_ASSUME_TAC) THEN
  SUBGOAL_THEN ‘~(t:real^N->bool = {})‘ MP_TAC THENL
   [ASM_MESON_TAC[AFF_DIM_EQ_MINUS1]; ALL_TAC] THEN
  GEN_REWRITE_TAC LAND_CONV [GSYM MEMBER_NOT_EMPTY] THEN
  DISCH_THEN(X_CHOOSE_TAC ‘z:real^N‘) THEN STRIP_TAC THEN
  MP_TAC(ISPECL
   [‘s:real^N->bool‘; ‘ball(z:real^N,&1) INTER t‘]
        HOMEOMORPHIC_RELATIVE_FRONTIERS_CONVEX_BOUNDED_SETS) THEN
  MP_TAC(ISPECL [‘t:real^N->bool‘; ‘ball(z:real^N,&1)‘]
        (ONCE_REWRITE_RULE[INTER_COMM] AFF_DIM_CONVEX_INTER_OPEN)) THEN
  MP_TAC(ISPECL [‘ball(z:real^N,&1)‘; ‘t:real^N->bool‘]
        RELATIVE_FRONTIER_CONVEX_INTER_AFFINE) THEN
  ASM_SIMP_TAC[CONVEX_INTER; BOUNDED_INTER; BOUNDED_BALL; CONVEX_BALL;
               AFFINE_IMP_CONVEX; INTERIOR_OPEN; OPEN_BALL;
               FRONTIER_BALL; REAL_LT_01] THEN
  SUBGOAL_THEN ‘~(ball(z:real^N,&1) INTER t = {})‘ ASSUME_TAC THENL
   [REWRITE_TAC[GSYM MEMBER_NOT_EMPTY; IN_INTER] THEN
    EXISTS_TAC ‘z:real^N‘ THEN ASM_REWRITE_TAC[CENTRE_IN_BALL; REAL_LT_01];
    ASM_REWRITE_TAC[] THEN REPEAT(DISCH_THEN SUBST1_TAC) THEN SIMP_TAC[]] THEN
  REWRITE_TAC[homeomorphic; LEFT_IMP_EXISTS_THM] THEN
  MAP_EVERY X_GEN_TAC [‘h:real^N->real^N‘; ‘k:real^N->real^N‘] THEN
  STRIP_TAC THEN REWRITE_TAC[GSYM homeomorphic] THEN
  TRANS_TAC HOMEOMORPHIC_TRANS
    ‘(sphere(z,&1) INTER t) DELETE (h:real^N->real^N) a‘ THEN
  CONJ_TAC THENL
   [REWRITE_TAC[homeomorphic] THEN
    MAP_EVERY EXISTS_TAC [‘h:real^N->real^N‘; ‘k:real^N->real^N‘] THEN
    FIRST_X_ASSUM(MP_TAC o GEN_REWRITE_RULE I [HOMEOMORPHISM]) THEN
    REWRITE_TAC[HOMEOMORPHISM] THEN STRIP_TAC THEN REPEAT CONJ_TAC THENL
     [ASM_MESON_TAC[CONTINUOUS_ON_SUBSET; DELETE_SUBSET];
      ASM SET_TAC[];
      ASM_MESON_TAC[CONTINUOUS_ON_SUBSET; DELETE_SUBSET];
      ASM SET_TAC[];
      ASM SET_TAC[];
      ASM SET_TAC[]];
    MATCH_MP_TAC HOMEOMORPHIC_PUNCTURED_AFFINE_SPHERE_AFFINE THEN
    ASM_REWRITE_TAC[REAL_LT_01; GSYM IN_INTER] THEN
    FIRST_X_ASSUM(MP_TAC o GEN_REWRITE_RULE I [HOMEOMORPHISM]) THEN
    ASM SET_TAC[]]);;

The HOL Light proof begins with the statement of the desired theorem. We see logical syntax coded as ASCII characters: ! for ∀ and /\ for ∧. Moreover, the DELETE operator refers to the removal of a set element. Words such as convex and bounded denote predicates defined elsewhere. Infix syntax is available, as in the symbol homeomorphic. We see John Harrison’s representation of subsets of ℝ^N in the type real^N->bool, and in particular, !s:real^N->bool abbreviates “for all s ⊆ ℝ^N”. Note that the constraint on the dimensions is expressed through the concept of affine dimension rather than some constraint on M and N. This statement is legible enough, and yet the notation leaves much to be desired, for example in the necessity to write &1 (the ampersand converting the natural number 1 into an integer).

!s:real^N->bool t:real^M->bool a.
     convex s /\ bounded s /\ a IN relative_frontier s /\
     affine t /\ aff_dim s = aff_dim t + &1
     ==> (relative_frontier s DELETE a) homeomorphic t

We have to admit that the proof itself is unintelligible. Even a HOL Light user can only spot small clues in the proof text, such as the case analysis on whether the set is empty or not, which we see in the first line, or the references to previous lemmas. If we look carefully, we might notice intermediate statements being proved, such as

~(t:real^N->bool = {})

or

~(ball(z:real^N,&1) INTER t = {})

though in the latter case it is unclear what z is. The formal proof consists of program code, written in a general-purpose programming language (OCaml) equipped with a library of proof procedures and supporting functions, for that is what HOL Light is. A HOL Light proof is constructed by calling its proof primitives at the OCaml command line, but one could type in any desired OCaml code. Users sometimes write such code in order to extend the functionality of HOL Light. Even if their code is incorrect, they cannot cause HOL Light to generate false theorems. (Malicious code is another matter: in HOL Light, one can use OCaml’s String.set primitive to replace T (true) by F. Given the variety of loopholes in programming languages and systems, not to mention notational trickery, we must be content with defences against mere incompetence.) All LCF-style proof assistants employ a similar kernel architecture.
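The LCF kernel architecture can be sketched in a few lines of Python, with invented names: only trusted inference rules can mint `Theorem` values, so buggy user code may fail but cannot forge a theorem. (This is a caricature; real kernels rely on ML's abstract types, and Python's privacy is merely advisory, echoing the caveat above about malicious code.)

```python
class Theorem:
    """Kernel-protected theorems: a private token stops user code
    from constructing a Theorem directly."""
    def __init__(self, concl, _token=None):
        if _token is not _KERNEL:
            raise ValueError("theorems may only be built by inference rules")
        self.concl = concl

_KERNEL = object()

# Trusted inference rules: the entire 'logical core'.

def refl(t):
    """Axiom scheme: |- t = t."""
    return Theorem(("eq", t, t), _token=_KERNEL)

def sym(th):
    """Rule: from |- a = b infer |- b = a."""
    op, a, b = th.concl
    assert op == "eq"
    return Theorem(("eq", b, a), _token=_KERNEL)
```

Everything a user proves, however elaborate the tactics on top, ultimately bottoms out in calls to such primitive rules; that is why incorrect user extensions cannot yield false theorems.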

Figure 2: The stereographic projection theorem in Isabelle/HOL

In recent years, I have embarked on a project to translate the most fundamental results of the HOL Light multivariate analysis library into Isabelle. The original motivation was to obtain the Cauchy integral theorem harrison-complex , which is the gateway to the prime number theorem harrison-pnt among many other results. I was in a unique position to carry out this work as a developer of both Isabelle and HOL. The HOL family of provers descends from my early work on LCF paulson87 , and in particular the proof tactic language, which is perfectly preserved in HOL Light. The 51 lines of HOL Light presented above are among the several tens of thousands that I have translated into Isabelle/HOL. Figure 2 presents my version of the HOL Light proof above, as shown in a running Isabelle session. Proof documents can also be typeset with the help of LaTeX, but here we have colour to distinguish the various syntactic elements of the proof: keywords, local variables, global variables, constants, etc.

The theorem statement resembles the HOL Light one but uses the Isabelle fixes / assumes / shows keywords to declare the premises and conclusion. (It is typical Isabelle usage to minimise the use of explicit logical connectives in theorem statements.) Harrison’s construction real^N isn’t used here; instead the variable S is declared to belong to some arbitrary Euclidean space. An advantage of this approach is that types such as real and complex can be proved to be Euclidean spaces despite not having the explicit form real^N.

The proof is written in the Isar structured language, and much of it is legible. An affine set U is somehow obtained, with the same dimension as S, which we note to be nonempty, therefore obtaining some element z. Then we obtain a homeomorphism between rel_frontier S and sphere z 1 ∩ U, using a previous result. (Because the HOL Light libraries were ported en masse, corresponding theorems generally have similar names and forms.) Then an element is removed from both sides, yielding a new homeomorphism, which is chained with the homeomorphism theorem for the sphere to yield the final result. And thus we get an idea of how the special case for a punctured sphere intersected with an affine set can be generalised to the present result.

The Isar proof language wenzel-isabelle/isar , inspired by that of the Mizar system trybulec-features , encourages the explicit statement of intermediate results and obtained quantities. The notation also benefits from Isabelle’s use of mathematical symbols, and a further benefit of type classes is that a number like 1 belongs to all numeric types without explicit conversion between them.
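Isabelle’s type classes have no direct OCaml counterpart, but a rough sketch of the idea (an illustration of mine, not how Isabelle is implemented) can be given with ML-style modules and functors: a constant such as one, declared once in a signature, becomes available at every instance.

```ocaml
(* An illustrative emulation of type-class-style overloading using
   OCaml modules.  This is a sketch of the idea only; Isabelle's type
   classes are resolved automatically, with no explicit instantiation. *)
module type NUMERIC = sig
  type t
  val one : t
  val add : t -> t -> t
end

module IntNum : NUMERIC with type t = int = struct
  type t = int
  let one = 1
  let add = ( + )
end

module RealNum : NUMERIC with type t = float = struct
  type t = float
  let one = 1.0
  let add = ( +. )
end

(* A definition made once over the signature is available at every
   instance, like a type-class constant in Isabelle. *)
module Two (N : NUMERIC) = struct
  let two = N.add N.one N.one
end

module IntTwo = Two (IntNum)
module RealTwo = Two (RealNum)

let () = assert (IntTwo.two = 2 && RealTwo.two = 2.0)
```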

4 Discussion and Conclusions

The HOL Light and Isabelle proofs illustrate how mathematical reasoning is done in simple type theory. They also show what mathematics looks like in these systems. The Isabelle proof demonstrates that simple type theory can deliver a degree of legibility, though the syntax is a far cry from normal mathematics. The greater expressiveness of dependent type theories has not given them any advantage in the domain of analysis: the leading development boldo-coquelicot is not constructive and downgrades the role of dependent types.

As I have remarked elsewhere paulson-computational-logic , every formal calculus is ultimately a prison. It will do some things well, other things badly and many other things not at all. Mathematicians write their proofs using a combination of prose and beautiful but highly ambiguous notation. Formal proofs are code and look like it, even if they are allowed to contain special symbols and Greek letters. The various interpretations of anomalous expressions such as x/0 are also foundational, and each formalism must adopt a clear position when one might prefer a little ambiguity. (Both HOL Light and Isabelle define x/0 = 0, which some people find shocking.) Develop our proof tools as we may, such issues will never go away. But if past generations of mathematicians could get used to REDUCE and FORTRAN, they can get used to this.

The importance of legibility can hardly be overstated. A legible proof is more likely to convince a sceptical mathematician: somebody who doesn’t trust a complex software system, especially if it says that x/0 = 0. While much research has gone into the verification of proof procedures kumar-self-formalisation ; schlichtkrull-resolution , all such work requires trusting similar software. But a mathematician may believe a specific formal proof if it can be inspected directly, breaking this vicious cycle. Ideally, the mathematician would then gain the confidence to construct new formal proofs, possibly reusing parts of other proofs. Legibility is crucial for this.

These examples, and the great proof translation effort from which they were taken, have much to say about the process of porting mathematics from one system to another. Users of one system frequently envy the libraries of a rival system. There has been much progress on translating proofs automatically kaliszyk-scalable-lcf ; obua-importing-hol , but such techniques are seldom used. Automatic translation typically works via a proof kernel that has been modified to generate a trace, so it starts with an extremely low-level proof. Such an approach can never deliver legible proofs, only a set of mechanically verified assertions. Manual translation, while immensely more laborious, yields real proofs and allows the statements of theorems to be generalised to take advantage of Isabelle/HOL’s type classes.

All existing proof translation techniques work by emulating one calculus within another at the level of primitive inferences. Could proofs instead be translated at the level of a mathematical argument? I was able to port many proofs that I did not understand: despite the huge differences between the two proof languages, it was usually possible to guess what had to be proved from the HOL Light text, along with many key reasoning steps. Isabelle’s automation was generally able to fill the gaps. This suggests that in the future, if we start with structured proofs, they could be translated to similarly structured proofs for a new system. If the new system supports strong automation (and it must!), the translation process could be driven by the text alone, even if the old system was no longer available. The main difficulty would be to translate statements from the old system so that they look natural in the new one.

The huge labour involved in creating a library of formalised mathematics is not in vain if the library can easily be moved to a new system. The question “is simple type theory the right foundation for mathematics?” then becomes irrelevant. Let’s give Gödel the last word (italics his):

Thus we are led to conclude that, although everything mathematical is formalisable, it is nevertheless impossible to formalise all of mathematics in a single formal system, a fact that intuitionism has asserted all along. (goedel35b, p. 389)

Acknowledgements.
Dedicated to Michael J C Gordon FRS, 1948–2017. The development of HOL and Isabelle has been supported by numerous EPSRC grants. The ERC project ALEXANDRIA supports continued work on the topic of this paper. Many thanks to Jeremy Avigad, Johannes Hölzl, Andrew Pitts and the anonymous referee for their comments.

Bibliography

  • [1] J. Avigad, L. de Moura, and S. Kong. Theorem proving in Lean. Online at https://leanprover.github.io/theorem_proving_in_lean/theorem_proving_in_lean.pdf, Nov. 2017. Release 3.3.0.
  • [2] G. Bancerek and P. Rudnicki. A compendium of continuous lattices in Mizar. Journal of Automated Reasoning, 29(3-4):189–224, 2002.
  • [3] P. Benacerraf and H. Putnam, editors. Philosophy of Mathematics: Selected Readings. Cambridge University Press, 2nd edition, 1983.
  • [4] E. Bishop and D. Bridges. Constructive Analysis. Springer, 1985.
  • [5] J. C. Blanchette, J. Hölzl, A. Lochbihler, L. Panny, A. Popescu, and D. Traytel. Truly modular (co)datatypes for Isabelle/HOL. In G. Klein and R. Gamboa, editors, Interactive Theorem Proving — 5th International Conference, ITP 2014, LNCS 8558, pages 93–110. Springer, 2014.
  • [6] S. Blazy, C. Paulin-Mohring, and D. Pichardie, editors. Interactive Theorem Proving — 4th International Conference, LNCS 7998. Springer, 2013.
  • [7] S. Boldo, C. Lelay, and G. Melquiond. Coquelicot: A user-friendly library of real analysis for Coq. Mathematics in Computer Science, 9(1):41–62, 2015.
  • [8] A. Bove, P. Dybjer, and U. Norell. A brief overview of Agda — a functional language with dependent types. In S. Berghofer, T. Nipkow, C. Urban, and M. Wenzel, editors, TPHOLs, LNCS 5674, pages 73–78. Springer-Verlag, 2009.
  • [9] R. S. Boyer and J. S. Moore. A Computational Logic. Academic Press, 1979.
  • [10] A. Church. A formulation of the simple theory of types. Journal of Symbolic Logic, 5:56–68, 1940.
  • [11] R. L. Constable et al. Implementing Mathematics with the Nuprl Proof Development System. Prentice-Hall, 1986.
  • [12] N. G. de Bruijn. A survey of the project AUTOMATH. In J. Seldin and J. Hindley, editors, To H.B. Curry: Essays in Combinatory Logic, Lambda Calculus and Formalism, pages 579–606. Academic Press, 1980.
  • [13] N. G. de Bruijn. On the roles of types in mathematics. In P. de Groote, editor, The Curry-Howard isomorphism, pages 27–54. Academia, 1995.
  • [14] S. Feferman. Typical ambiguity: Trying to have your cake and eat it too. In G. Link, editor, 100 years of Russell’s Paradox, pages 131–151. Walter de Gruyter, 2004.
  • [15] E. Giménez. Codifying guarded definitions with recursive schemes. In P. Dybjer, B. Nordström, and J. Smith, editors, Types for Proofs and Programs: International Workshop TYPES ’94, pages 39–59. Springer, 1995.
  • [16] K. Gödel. Review of Carnap 1934: The antinomies and the incompleteness of mathematics. In S. Feferman, editor, Kurt Gödel: Collected Works, volume I, page 389. Oxford University Press, 1986.
  • [17] K. Gödel. Some basic theorems on the foundations of mathematics and their implications. In S. Feferman, editor, Kurt Gödel: Collected Works, volume III, pages 304–323. Oxford University Press, 1995. Originally published in 1951.
  • [18] G. Gonthier. The four colour theorem: Engineering of a formal proof. In D. Kapur, editor, Computer Mathematics, LNCS 5081, pages 333–333. Springer, 2008.
  • [19] G. Gonthier, A. Asperti, J. Avigad, Y. Bertot, C. Cohen, F. Garillot, S. Le Roux, A. Mahboubi, R. O’Connor, S. Ould Biha, I. Pasca, L. Rideau, A. Solovyev, E. Tassi, and L. Théry. A machine-checked proof of the odd order theorem. In Blazy et al. [6], pages 163–179.
  • [20] G. Gonthier and A. Mahboubi. An introduction to small scale reflection in Coq. Journal of Formalized Reasoning, 3(2), 2010.
  • [21] J. Harrison. HOL Light: A tutorial introduction. In M. K. Srivas and A. J. Camilleri, editors, Formal Methods in Computer-Aided Design: FMCAD ’96, LNCS 1166, pages 265–269. Springer, 1996.
  • [22] J. Harrison. Floating point verification in HOL Light: the exponential function. Formal Methods in System Design, 16:271–305, 2000.
  • [23] J. Harrison. A HOL theory of Euclidean space. In J. Hurd and T. Melham, editors, Theorem Proving in Higher Order Logics: TPHOLs 2005, LNCS 3603, pages 114–129. Springer, 2005.
  • [24] J. Harrison. Formalizing basic complex analysis. In R. Matuszewski and A. Zalewska, editors, From Insight to Proof: Festschrift in Honour of Andrzej Trybulec, volume 10(23) of Studies in Logic, Grammar and Rhetoric, pages 151–165. University of Białystok, 2007.
  • [25] J. Harrison. Formalizing an analytic proof of the prime number theorem. Journal of Automated Reasoning, 43(3):243–261, 2009.
  • [26] A. Heyting. The intuitionist foundations of mathematics. In Benacerraf and Putnam [3], pages 52–61. First published in 1944.
  • [27] J. Hölzl, F. Immler, and B. Huffman. Type classes and filters for mathematical analysis in Isabelle/HOL. In Blazy et al. [6], pages 279–294.
  • [28] L. Jutting. Checking Landau’s “Grundlagen” in the AUTOMATH System. PhD thesis, Eindhoven University of Technology, 1977.
  • [29] C. Kaliszyk and A. Krauss. Scalable LCF-style proof translation. In Blazy et al. [6], pages 51–66.
  • [30] C. Kaliszyk and C. Urban. Quotients revisited for Isabelle/HOL. In W. C. Chu, W. E. Wong, M. J. Palakal, and C.-C. Hung, editors, SAC ’11: Proceedings of the 2011 ACM Symposium on Applied Computing, pages 1639–1644. ACM, 2011.
  • [31] A. Krauss. Partial and nested recursive function definitions in higher-order logic. Journal of Automated Reasoning, 44(4):303–336, 2010.
  • [32] R. Kumar, R. Arthan, M. O. Myreen, and S. Owens. Self-formalisation of higher-order logic: Semantics, soundness, and a verified implementation. J. Autom. Reasoning, 56(3):221–259, 2016.
  • [33] O. Kunčar and A. Popescu. A consistent foundation for Isabelle/HOL. In C. Urban and X. Zhang, editors, Interactive Theorem Proving — 6th International Conference, ITP 2015, LNCS 9236, pages 234–252. Springer, 2015.
  • [34] P. Martin-Löf. An intuitionistic theory of types: Predicative part. In H. Rose and J. Shepherdson, editors, Logic Colloquium ’73, Studies in Logic and the Foundations of Mathematics 80, pages 73–118. North-Holland, 1975.
  • [35] P. Martin-Löf. On the meanings of the logical constants and the justifications of the logical laws on the meanings of the logical constants and the justifications of the logical laws. Nordic Journal of Philosophical Logic, 1(1):11–60, 1996.
  • [36] T. R. Nicely. Pentium FDIV flaw, 2011. FAQ page online at http://www.trnicely.net/pentbug/pentbug.html.
  • [37] T. Nipkow, L. C. Paulson, and M. Wenzel. Isabelle/HOL: A Proof Assistant for Higher-Order Logic. Springer, 2002. Online at http://isabelle.in.tum.de/dist/Isabelle/doc/tutorial.pdf.
  • [38] B. Nordström, K. Petersson, and J. Smith. Programming in Martin-Löf’s Type Theory. An Introduction. Oxford University Press, 1990.
  • [39] S. Obua and S. Skalberg. Importing HOL into Isabelle/HOL. In U. Furbach and N. Shankar, editors, Automated Reasoning: Third International Joint Conference, IJCAR 2006, Seattle, WA, USA, August 17-20, 2006. Proceedings, LNAI 4130, pages 298–302. Springer, 2006.
  • [40] L. C. Paulson. Constructing recursion operators in intuitionistic type theory. Journal of Symbolic Computation, 2:325–355, 1986.
  • [41] L. C. Paulson. Natural deduction as higher-order resolution. Journal of Logic Programming, 3:237–258, 1986.
  • [42] L. C. Paulson. Logic and Computation: Interactive proof with Cambridge LCF. Cambridge University Press, 1987.
  • [43] L. C. Paulson. ML for the Working Programmer. Cambridge University Press, 2nd edition, 1996.
  • [44] L. C. Paulson. Mechanizing coinduction and corecursion in higher-order logic. Journal of Logic and Computation, 7(2):175–204, Mar. 1997.
  • [45] L. C. Paulson. Organizing numerical theories using axiomatic type classes. Journal of Automated Reasoning, 33(1):29–49, 2004.
  • [46] L. C. Paulson. Computational logic: Its origins and applications. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 474(2210), 2018.
  • [47] A. Schlichtkrull. Formalization of the resolution calculus for first-order logic. In J. C. Blanchette and S. Merz, editors, Interactive Theorem Proving: 7th International Conference, ITP 2016, Nancy, France, August 22-25, 2016, Proceedings, LNCS 9807, pages 341–357. Springer, 2016.
  • [48] D. S. Scott. A type-theoretical alternative to ISWIM, CUCH, OWHY. Theoretical Comput. Sci., 121:411–440, 1993. Annotated version of the 1969 manuscript.
  • [49] The Coq Development Team. The Coq Proof Assistant Reference Manual. Inria, 2016. Online at https://coq.inria.fr/refman/.
  • [50] A. Trybulec. Some features of the Mizar language. http://mizar.org/project/trybulec93.pdf/, 1993.
  • [51] J. von Neumann. The formalist foundations of mathematics. In Benacerraf and Putnam [3], pages 61–65. First published in 1944.
  • [52] M. Wenzel. Type classes and overloading in higher-order logic. In E. L. Gunter and A. Felty, editors, Theorem Proving in Higher Order Logics: TPHOLs ’97, LNCS 1275, pages 307–322. Springer, 1997.
  • [53] M. Wenzel. Isabelle/Isar — a generic framework for human-readable proof documents. Studies in Logic, Grammar, and Rhetoric, 10(23):277–297, 2007. From Insight to Proof — Festschrift in Honour of Andrzej Trybulec.
  • [54] A. N. Whitehead and B. Russell. Principia Mathematica. Cambridge University Press, 1962. Paperback edition to *56, abridged from the 2nd edition (1927).
  • [55] B. Zhan. Formalization of the fundamental group in untyped set theory using auto2. In M. Ayala-Rincón and C. A. Muñoz, editors, Interactive Theorem Proving — 8th International Conference, ITP 2017, pages 514–530. Springer, 2017.