1 Introduction
Cut-elimination is perhaps the most fundamental operation in proof theory, first introduced by Gentzen in [16]. Its importance is underlined by the wide variety of its applications; one application in particular motivates our interest in cut-elimination: cut-free proofs directly contain Herbrand disjunctions.
Herbrand’s theorem [21, 8] captures the insight that the validity of a quantified formula is characterized by the existence of a tautological finite set of quantifier-free instances. In its simplest case, the validity of a purely existential formula is characterized by the existence of a tautological disjunction of instances, a Herbrand disjunction. Expansion proofs generalize this result to higher-order logic in the form of elementary type theory [31].
A computational implementation of Herbrand’s theorem as provided by cut-elimination lies at the foundation of many applications in computational proof theory: if we can compress the Herbrand disjunction extracted from a proof using a special kind of tree grammar, then we can introduce a cut into the proof which reduces the number of quantifier inferences—in practice this method finds interesting non-analytic lemmas [26, 24, 23, 11]. A similar approach can be used for automated inductive theorem proving, where the tree grammar generalizes a finite sequence of Herbrand disjunctions [10]. By comparing the Herbrand disjunctions of proofs, we obtain a notion of proof equality that identifies proofs which use the same quantifier instances [4]. Automated theorem provers typically use Skolem functions; expansion proofs admit a particularly elegant transformation that eliminates these Skolem functions and turns a proof of a Skolemized formula into a proof of the original statement in linear time [5]. Herbrand disjunctions directly contain witnesses for the existential quantifiers and hence capture a certain computational interpretation of classical proofs. Furthermore, Luckhardt used Herbrand disjunctions to give a polynomial bound on the number of solutions in Roth’s theorem [30] in the area of Diophantine approximation.
Our GAPT system for proof transformations contains implementations of many of these Herbrand-based algorithms [13], as well as various proofs formalized in the sequent calculus LK and several cut-elimination procedures. However, in practice we have proofs where none of these procedures are successful, for several reasons: the performance may be insufficient, higher-order cuts cannot be treated, induction cannot be unfolded, or special-purpose inferences such as proof links are not supported.
The normalization procedure described in this paper has all of these features: it is fast, supports higher-order cuts, can unfold induction inferences, and does not fail in the presence of special-purpose inference rules. This procedure is based on a term calculus for LK described by Urban and Bierman [35]. It is perhaps unsurprising that proof normalization can be implemented more efficiently by using the Curry–Howard correspondence to compute with proof terms instead of trees of sequents, as this significantly reduces the bureaucracy required during reduction. We also considered other calculi, such as those of [32] and [7]. In the end we decided on the present calculus because of its close similarity to LK, as it allows us to straightforwardly integrate special-purpose inferences.
In Section 2 we present the syntax and typing rules for the calculus LKt as implemented in GAPT. We then briefly describe the implementation of the normalization procedure in Section 3. Its performance is then empirically evaluated on both artificial and real-world proofs in Section 4. Finally, potential future work is discussed in Section 5.
One of the proofs on which we evaluate this normalization procedure in Section 4 is Furstenberg’s famous proof of the infinitude of primes [14]. Cut-elimination was also used by Girard [19, annex 7.E] to analyze another proof by Furstenberg, which shows van der Waerden’s theorem using ergodic theory [15].
2 The Calculus LKt
The proof system is modeled closely after the calculus described in the paper by Urban and Bierman [35]. Since that paper does not give a name to the introduced calculus, we call our variant LKt, as an abbreviation for “LK with terms”. Proofs in LKt operate on hypotheses (called names and conames in [35]), which name formula occurrences in the current sequent. We found it useful to have a single type that combines both the names and conames of [35], since it reduces code duplication. Each formula in a sequent is labelled by a hypothesis.
Expressions in the object language are lambda expressions with simple types: an expression is either a variable, a constant, a lambda abstraction, or a function application. Connectives and quantifiers are represented as constants of appropriate type. Formulas are expressions of the type of Booleans. We identify expressions up to the usual notion of equality for lambda expressions. A substitution is a type-preserving map from variables to expressions; the (capture-avoiding) application of a substitution to an expression is defined as usual.
This language can express impredicative quantification over types of arbitrary rank, such as predicates on predicates on functions. For example, one can quantify over predicates of such types and state a form of extensionality for them; this extensionality formula is not provable in the calculus presented here.
The proof terms are almost untyped: in contrast to [35], we include the cut formula in the proof term for the cut inference in order to perform typechecking without higher-order unification. A typing judgement then tells us what sequent a proof proves. Figure 1 shows the syntax for the proof terms. Hypothesis arguments that are not bound name the main formulas of an inference.
We use named variables as a binding strategy for the hypotheses, in consistency with the implementation of the lambda expressions (as opposed to de Bruijn indices or a locally nameless representation). Hypotheses are stored as machine integers: a negative hypothesis refers to a formula in the antecedent, and a positive hypothesis refers to a formula in the succedent of the sequent. Bound hypotheses are indicated by a binding notation in the style of the abstract binding trees of [20]. This encoding of LK is also very similar to the encoding commonly used in logical frameworks such as LF; see [33] for a description of such an approach.
Notably, there are no terms for weakening and contraction. These are implicit: we can use the same hypothesis zero or multiple times. The proof terms only contain new information that is not contained in the end-sequent; only cut formulas, weak quantifier instance terms, and eigenvariables are stored. We do not repeat the formulas or atoms of the end-sequent.
Let us now define the typing judgment. A local context is a finite map from hypotheses to formulas: negative hypotheses are mapped to antecedent formulas and positive hypotheses to succedent formulas, so a local context corresponds to a sequent. When a context is extended, outer occurrences overwrite inner ones; that is, rebinding a hypothesis shadows its previous binding.
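The shadowing behaviour of local contexts can be mimicked by a plain finite map, as in this sketch (Python for illustration; `extend` is a hypothetical name, not part of GAPT):

```python
# A local context as a finite map from hypotheses to formulas.
# Extending the context overwrites any earlier binding of the same
# hypothesis: outer occurrences shadow inner ones.

def extend(ctx, hyp, formula):
    new = dict(ctx)       # contexts are persistent; do not mutate the old one
    new[hyp] = formula    # overwrites a previous binding of hyp, if any
    return new

ctx = extend(extend({}, -1, "P"), -1, "Q")
assert ctx == {-1: "Q"}   # the outer binding for -1 shadows the inner one
```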
An (expression) substitution can be applied to a proof term in the natural way. The typing judgment states that a proof term, under a given substitution, is a valid proof in a local context, that is, that it proves the sequent corresponding to that context. We may omit the substitution if it is the identity; in this special case the judgment corresponds to the notation used in [35].
The reason for parameterizing the typing judgement by a substitution is twofold: due to our use of named variables, we may need to rename bound eigenvariables (in the strong quantifier and induction inferences) when traversing a term. However, we do not want to apply this renaming substitution to the proof term itself just to ensure that the eigenvariable is fresh: this would be costly, and it would also introduce an unnecessary dependency on the local context in operations that otherwise require no typing information.
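The benefit of carrying the substitution as a parameter can be illustrated by an operation that inspects the instance of a term without ever constructing it. The following is an illustrative Python sketch on simple expression trees (the function name is hypothetical; GAPT's typing judgment threads the substitution analogously):

```python
# Compute the free variables of the instance of a term under a
# substitution sigma, without ever building the substituted term.

def free_vars(term, sigma):
    if term[0] == "var":
        image = sigma.get(term[1])
        # a substituted variable contributes the free variables of its image
        return free_vars(image, {}) if image is not None else {term[1]}
    if term[0] == "app":
        return free_vars(term[1], sigma) | free_vars(term[2], sigma)
    return set()  # constants have no free variables

t = ("app", ("var", "x"), ("var", "y"))
sigma = {"x": ("app", ("var", "u"), ("var", "v"))}
assert free_vars(t, sigma) == {"u", "v", "y"}
```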
The proof terms and corresponding typing rules are chosen in such a way that they correspond as closely as possible to the already implemented sequent calculus LK; see [12, Appendix B.1] for a detailed description of that calculus. The implementation also contains further inferences for special applications, such as proof links for schematic proofs [9], definition rules [3], and Skolem inferences to represent Skolemized proofs in higher-order logic [25]. The implemented inference rule for induction is also more general than the one shown here: it supports structural induction over types other than natural numbers.
Equational reasoning is implemented using an inference for reflexivity and an inference that rewrites with an equation in arbitrary contexts and on both sides of the sequent. The third argument of the rewrite inference indicates whether we rewrite from left to right or from right to left. Syntactically, we support equations between terms of arbitrary type; however, cut-elimination can fail with equations between functions or Booleans, as quantified cuts can remain.
In our version of higher-order logic, several connectives are primitive rather than defined. By a slight abuse of notation, we simply reuse the same proof terms for different connectives. This representation causes no confusion, since the intended connective is always clear from the polarities of the hypotheses, and many operations are defined identically for the different connectives. The corresponding typing rules are derived in the natural way, for example for proving an implication on the right side.
3 Cut-normalization
Normalization is performed in a big-step evaluation style using three mutually recursive functions, called normalize, evalCut, and ProofSubst in the implementation. All of these functions return fully normalized proof terms: we do not create temporary terms, and all produced terms are irreducible (for example, because they are “stuck” on an induction or equational inference). Figure 3 shows the definition of the three functions. Note that since contraction is implicit, the cut rule behaves more like Gentzen’s mix rule [16].

The function normalize takes a proof term as input and returns a normal form.

If the two subproofs of a cut are already in normal form, then evalCut computes a normal form of the cut.

Given two subproofs in normal form, ProofSubst performs a proof substitution, which corresponds to the rank-reduction step of cut-elimination in LK. The function takes one side of the cut and directly moves it to all inferences in the other side where the cut formula occurs as the main formula. This operation is symmetric in the sides of the cut, and hence only needs to be implemented once.
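On a toy fragment with only axiom links and cuts, the mutual recursion of the three functions can be sketched as follows (illustrative Python, not GAPT's Scala; here a simple hypothesis renaming plays the role of ProofSubst, and stuck cuts are simply returned):

```python
# Toy proof terms: ("ax", a, b) links hypothesis a (antecedent side) to
# b (succedent side); ("cut", p, h, q) cuts the formula named h, which
# is used positively in p and negatively in q.

def normalize(t):
    if t[0] == "ax":
        return t
    _, p, h, q = t
    return eval_cut(normalize(p), h, normalize(q))

def eval_cut(p, h, q):
    """p and q are in normal form; compute a normal form of the cut."""
    if p[0] == "ax" and p[2] == h:
        return rename(q, h, p[1])      # axiom on the left: rename h in q
    if q[0] == "ax" and q[1] == h:
        return rename(p, h, q[2])      # axiom on the right: rename h in p
    return ("cut", p, h, q)            # stuck cut: returned as-is

def rename(t, old, new):
    """Stands in for ProofSubst in this toy fragment."""
    if t[0] == "ax":
        return ("ax", new if t[1] == old else t[1],
                      new if t[2] == old else t[2])
    _, p, h, q = t
    return ("cut", rename(p, old, new), h, rename(q, old, new))

chain = ("cut", ("cut", ("ax", "a", "x"), "x", ("ax", "x", "y")),
         "y", ("ax", "y", "b"))
assert normalize(chain) == ("ax", "a", "b")   # a chain of cuts collapses
```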
We say that normalize terminates on a given input term if its computation on that term reaches a result; similarly for evalCut and ProofSubst.
Lemma 1 (Subject reduction). If normalize, evalCut, or ProofSubst terminates on well-typed input, then its result is a well-typed proof of the same sequent.
Proof.
Routine induction on the length of the computation of normalize, evalCut, and ProofSubst, respectively. ∎
We expect that normalization terminates on all well-typed proofs, including those with higher-order quantifier inferences. Urban and Bierman showed strong normalization for their first-order calculus without equality using reducibility methods [35]. LKt is more general, as it is higher-order, and its first-order fragment is slightly different due to the use of the skipping constructors, which skip unnecessary inferences.
Conjecture 2 (Termination).
For every well-typed proof term, normalize terminates.
Note that for our applications it is often not necessary to have completely cut-free proofs. Cuts on quantifier-free formulas, for example, are unproblematic for the extraction of Herbrand disjunctions.
Lemma 3 (Cut-elimination).
Let a well-typed proof term be given on which normalize terminates. If the term contains neither induction, equational, nor other special-purpose inferences, then its normal form is cut-free.
Proof.
Cuts in the output are only produced when an inference is stuck, and a case analysis shows that this does not happen for this class of proofs. ∎
We perform a few noteworthy optimizations:

Every term stores the set of its free hypotheses and free (expression) variables. These are fields in the Scala classes implementing the proof terms. We can hence efficiently (in logarithmic time) check whether a given hypothesis or variable is free in a proof term.

Due to this extra data, we can effectively skip many calls of the normalization procedure. We do not need to substitute or evaluate cuts if the hypothesis for the cut formula is not free in the subterm; in this case we can immediately return the subterm.

When producing the resulting proof terms, we check whether we can skip any inferences: for example, a binding inference can be returned unchanged, without the binder, if the bound hypothesis is not free in its subterm. In Fig. 3 we denote these “skipping” constructors with a superscript. This optimization is extremely important from a practical point of view, since it effectively prevents a common blow-up in proof size.
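Combining the cached free-hypothesis sets with the skipping constructors yields a "smart constructor" pattern, sketched here in Python (illustrative only; the class and function names are hypothetical, GAPT implements this in Scala):

```python
# A "skipping" smart constructor with cached free-hypothesis sets.
# Every node stores its free hypotheses, so checking whether a
# hypothesis is free is a set lookup rather than a term traversal;
# a binder that binds nothing is skipped entirely.

class Proof:
    def __init__(self, label, children=(), free=frozenset()):
        self.label = label
        self.children = children
        self.free = free               # cached free hypotheses of this subterm

def bind(h, body):
    """Only build the binding inference if h actually occurs in body."""
    if h not in body.free:
        return body                    # skip the unnecessary inference
    return Proof(("bind", h), (body,), body.free - {h})

leaf = Proof("ax", free=frozenset({"a", "b"}))
assert bind("c", leaf) is leaf                      # nothing bound: skipped
assert bind("a", leaf).free == frozenset({"b"})     # "a" is now bound
```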
The cut-normalization in [35] is presented as a single-step reduction relation. The strong normalization of that relation depends on the fact that all cuts can be eliminated in their calculus. In LKt, however, cuts can be irreducible, for example because they are stuck on an induction or on an equational inference. This has the unfortunate consequence that the natural single-step reduction relation for LKt is not strongly normalizing. Since multiple cuts can be stuck on the same inference, we have the traditional counterexample of two commuting cuts.
3.1 Induction unfolding
We typically consider proofs with induction of sequents whose conclusion is quantifier-free (or perhaps existentially quantified), and whose antecedent contains recursive definitions for all function symbols occurring in the sequent. If the conclusion contains free variables or strong quantifiers, then we can in general not eliminate all inductions; however, the quantifier instances of a normalized proof may still provide valuable insights. In particular we are interested in the quantifier instances of formulas in the antecedent, as their structure plays an important role in our approach to inductive theorem proving [10]. The language always contains the constructors 0 and s for the natural numbers. Injectivity of these constructors is included as an explicit formula in the antecedent when necessary. We consider arbitrary recursively defined functions, also on other data types such as lists.
Elimination of induction inferences is handled in a similar way to Gentzen’s proof of the consistency of Peano Arithmetic [17]: induction inferences whose terms are constructor applications are unfolded.
The full induction-elimination procedure then alternates between cut-normalization and full induction unfolding until we can no longer unfold any induction inferences. We also rewrite the term in an induction inference using the universally quantified equations representing the recursive definitions, in order to bring the term into constructor form; the proof generated by this rewriting is then added via a cut on the induction formula instantiated with the simplified term.
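Unfolding an induction inference over a numeral in constructor form can be sketched as a simple recursion (illustrative Python; here base and step stand for the two subproofs of the induction, and the gluing cut is only recorded symbolically):

```python
# Unfold an induction over a numeral in constructor form:
#   "zero"       -> the base-case proof
#   ("succ", t)  -> the step-case proof, cut against the proof for t

def unfold_induction(term, base, step):
    """base proves P(0); step(ih) proves P(s(n)) from a proof ih of P(n)."""
    if term == "zero":
        return base
    assert term[0] == "succ"
    ih = unfold_induction(term[1], base, step)   # proof of P(n)
    return step(ih)                              # proof of P(s(n))

base = "P(0)"
step = lambda ih: f"cut({ih}; step)"             # record the gluing cut
assert unfold_induction(("succ", ("succ", "zero")), base, step) == \
    "cut(cut(P(0); step); step)"
```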
We can perform this induction reduction even if the problem contains function symbols that are not recursively defined; in this case, inductions can remain in the output. We conjecture that the full induction-elimination procedure (alternating induction unfolding and cut-normalization) always terminates.
3.2 Equational reduction
As noted in Section 3, cuts on equational inferences are stuck: for example, a cut whose cut formula is the main formula of an equational inference on both sides cannot be reduced further.
This is clearly a problem, since we cannot obtain Herbrand disjunctions from proofs with such quantified cuts. On the other hand, cuts on atoms would pose no problem, since we can still obtain Herbrand disjunctions by examining the weak quantifier inferences. We hence reduce quantified equational inferences to atomic equational inferences; then only atomic cuts can be stuck.
Concretely, we define a function that, for any formula and equation, simulates a rewrite inference on that formula using only rewrite inferences on atoms. This function is straightforwardly defined by recursion on the formula; in the case of a conjunction, for example, the two conjuncts are rewritten separately.
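The recursion on the formula can be sketched as follows (Python for illustration; only the conjunction case from the text is shown, and we merely compute the atoms at which the atomic rewrite inferences are needed):

```python
# Reduce one rewrite inference on a compound formula to rewrite
# inferences on its atoms, by recursion on the formula.

def atomic_rewrites(formula):
    if formula[0] == "and":
        # a conjunction is rewritten by rewriting both conjuncts
        return atomic_rewrites(formula[1]) + atomic_rewrites(formula[2])
    return [formula]          # an atom: a single atomic rewrite inference

f = ("and", ("and", "P(t)", "Q(t)"), "R(t)")
assert atomic_rewrites(f) == ["P(t)", "Q(t)", "R(t)"]
```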
We then replace rewrite inferences on non-atoms using this translation. Note that the translation depends on the typing derivation (to obtain the term for the cut formula) and can fail if we have equations between predicates.
4 Empirical evaluation
4.1 Artificial examples
The calculus and normalization procedure presented in this paper have been implemented in the open-source GAPT system for proof transformations [13], version 2.10 (available at https://logic.at/gapt). We now compare the performance of several cut-normalization procedures implemented in GAPT on benchmarks used in [29].

LK: Gentzen-style reductive cut-elimination in LK. Proofs in LK are tree-like data structures where every node carries a (formula) sequent. The output is again a proof in LK; atomic cuts can appear directly below equational inferences.

CERES (LK): Cut-elimination by resolution [6] reduces the problem of cut-elimination in LK to finding a resolution refutation of a first-order clause set. The output is a proof in LK with at most atomic cuts.

CERES (expansion): a variant of CERES that takes proofs with cuts in LK and directly produces expansion proofs [29]. This uses the same first-order clause sets as CERES (LK).

semantic: by “semantic cut-elimination” we refer to the procedure that throws away the input proof and generates a cut-free proof from scratch. GAPT contains interfaces to several resolution provers, including the built-in Escargot prover. Here we used Escargot to obtain a cut-free expansion proof of the end-sequent of the input proof.

expansion proof: the expansion proofs implemented in GAPT support cuts; such cuts correspond to cuts in LK and are simply expansions of a tautology built from the cut formula. First-order cuts in expansion proofs can be eliminated using a procedure described in [27], which operates just on the quantifier instances of the proof and is similar to the proofs of the epsilon theorems [28]. Both the input and output formats are expansion proofs; the resulting expansion proof is cut-free.

LKt: the normalization procedure shown in Section 3.

LKt (until atomic): the same as LKt, but we do not reduce atomic cuts. The resulting proof may still contain cuts on atoms, but this is sufficient for the extraction of Herbrand disjunctions: we can directly extract Herbrand disjunctions from proofs as long as all cut formulas are propositional.

LKt (until quant.free): the same as LKt, but we do not reduce quantifier-free cuts.
The graphs in Fig. 4 show the runtime of each of these procedures on several artificial example proofs. The runtime is measured in seconds of wall-clock time; we use a logarithmic scale for the time, since the performance of the procedures differs by several orders of magnitude: in one case, LKt (until quant.free) is a million times faster than LK. All of the example proofs are parameterized by a natural number n (the x-axis of the plot), and the size of the input proofs is polynomially bounded in n.
Linear example after cut-introduction (ci_linear)
The name “linear example” refers to a sequence of (proofs of) a simple sequent parameterized by n. We take natural cut-free proofs of this sequent and then use an automated method that introduces universally quantified cuts [11] to obtain a proof with cut. In GAPT, these proofs with universally quantified cuts are produced with CutIntroduction(LinearExampleProof(n)).get.
In this example, all of the normalization procedures are faster than the CERES variants by a factor of about 100; even semantic cut-elimination is faster. LKt normalization is also faster than expansion proof cut-elimination by a factor of about 10. We also see that not eliminating atomic cuts is a bit faster than full cut-elimination, and not eliminating quantifier-free cuts is faster still.
Linear example proof with manual cuts (linear)
Cut-introduction often produces unnecessarily complicated lemmas, resulting in irregularity when used in proof sequences; it is also limited to small proofs. To produce a more regular sequence and obtain larger proofs, we manually formalized natural proofs of the linear example using universally quantified cut formulas. These proofs can be obtained with LinearCutExampleProof(n). (Note that this sequence of proofs produces exponentially larger cut-free proofs than the other sequences.)
The results are similar to the proofs obtained with cut-introduction, although we observe new phenomena at both ends of the sequence: for the smallest instances, the proofs consist of a single axiom, and here the LKt-based procedures produce a cut-free proof in about 15 nanoseconds. At the other end of the sequence, we finally see CERES becoming slightly faster than semantic cut-elimination.
Linear example proof with atomic cuts (linearacnf)
To complete the discussion of the linear example, we also consider a proof sequence in atomic-cut normal form (ACNF). In these proofs, the quantifier and propositional inferences are at the top of the proof, and the bottom part consists only of atomic cuts, very much like a ground resolution refutation. Interestingly, atomic cut-elimination is surprisingly cheap in this example: the LKt-based normalization only takes about 10 microseconds. On the other hand, the CERES-based methods require as much time as they do for the proofs with universally quantified cuts: they refute a clause set whose size is linear in n.
Square diagonal proof after cut-introduction (ci_sqdiag)
Just as in the linear example, we take cut-free proofs of a simple parameterized sequent and then automatically introduce universally quantified cuts. LKt normalization until quantifier-free cuts is an order of magnitude faster than expansion proof cut-elimination, and two orders of magnitude faster than CERES.
Linear equality example proof after cut-introduction (ci_lineareq)
These proofs are generated using CutIntroduction(LinearEqExampleProof(n)). Note that we replaced the equality predicate by a binary relation E to prevent the accidental introduction of equational inferences. Again, LKt normalization until propositional cuts is 10 times faster than expansion proof cut-elimination, which in turn is 10 times faster than CERES.
The astute reader will have noticed the spikes in the runtime of the reductive cut-elimination procedures at certain values of n. These spikes are due to convoluted cut formulas produced by cut-introduction: at these instances, the proof of the cut formula is almost as complicated on the propositional level as the cut-free proof, even though it has a lower quantifier complexity.
4.2 Mathematical proofs
GAPT contains a small library of formalized proofs for testing. These are mostly basic properties of natural numbers and lists. The biggest formalized result is the fundamental theorem of arithmetic, showing the existence and uniqueness of the prime decomposition of natural numbers. We evaluated the performance of LKt as well as the other procedures (see Section 4.1) on several of these proofs. Figure 5 shows the runtime of induction- and cut-elimination. We tested instances of proofs of the following statements:
add0l, mul1, filterrev, divmodgtot, and primedecex.
The proofs contain the primitive recursive definitions in the antecedent; for example, add0l is a proof with induction of a sequent stating that 0 is a left identity for addition. As before, induction-elimination in LK is several orders of magnitude slower than in LKt. Semantic cut-elimination is surprisingly fast: it is as fast as LKt for small instances of filterrev.
4.3 Furstenberg proof
Furstenberg’s well-known proof of the infinitude of primes [14] equips the integers with a topology generated by arithmetic progressions, and uses this machinery to show that there are infinitely many primes. For every natural number k we hence get a second-order proof showing that there are more than k prime numbers. Cut-elimination of this proof then extracts the computational content of Furstenberg’s argument: we get a new prime number as a witness.
CERES was used to perform this extraction manually [4]. The key step in cut-elimination using CERES consists of the refutation of a so-called characteristic clause set. In the case of Furstenberg’s proof, automated theorem provers could only refute this first-order clause set for k = 1, that is, to show that there is more than one prime number. The authors hence manually constructed a sequence of refutations, taking Euclid’s proof of the infinitude of primes as a guideline, to obtain a prime divisor of the successor of the product of the known primes as a witness. The authors also present another refutation for k = 2 which yields more than one witness: one of three candidate numbers contains the third prime as a divisor.
Using LKt, GAPT can now perform the cut-elimination and extract the witness term automatically. Figure 6 shows the performance of the normalization on instances of Furstenberg’s proof. The concrete formalization closely resembles the one described in [4]; however, there have been minor changes to account for subtle differences in the LK calculus currently implemented in GAPT. Now that we could cut-eliminate this particular formalization for the first time, we were excited to find an interesting feature. We expected that the cut-elimination of Furstenberg’s proof would compute the same witness as Euclid’s proof: a prime divisor of the successor of the product of the known primes. However, the witness we obtained contains an additional constant factor.
This constant factor seems to depend on the concrete way in which we formalize the lemma that non-empty open sets are infinite (a lemma also used in [4]). With a slightly different quantifier instance there, we can also obtain a different constant factor.
5 Future work
As our focus here lies in the practical applications of cut-elimination, termination of the normalization procedure is only of secondary concern. For an actual implementation there is little difference between an algorithm that does not terminate and one that terminates after a thousand years, as long as it quickly terminates on the instances we apply it to. For some classes of proofs it is straightforward to see that normalization indeed always terminates. Due to the direct correspondence with the traditional presentation of LK, we can reuse termination arguments: whenever we observe non-termination in the normalization, we get a corresponding non-terminating reduction sequence in LK with an uppermost-first strategy. We believe that induction unfolding can be shown to terminate via an argument similar to the one used in [34] for the proof of the consistency of Peano Arithmetic. It remains open whether normalization terminates for proofs with higher-order (or even just second-order) quantifier inferences.
The current handling of equational and induction inferences as described in Sections 3.1 and 3.2 is unsatisfactory, as they are not integrated into the main normalization function but require a separate pass over the proof. Furthermore, the normalized proof may still contain equational inferences on atoms. We are not aware of any terminating procedure using local rewrite rules that eliminates unary equational inferences such as the ones used in LKt.
Renaming hypotheses and applying expression substitutions incurs a significant cost in the benchmarks. An obvious solution is to introduce an explicit substitution inference in order to implement these operations without traversing the proof term. In fact, one of the motivations behind the substitution parameter in the typing judgment was to support such explicit substitution inferences.
As a cheap optimization, we could grade-reduce blocks of quantifier inferences in a single substitution. This should speed up the common case of eliminating lemmas with many universal quantifiers. Another possible optimization is caching: all of the normalization functions are pure, making it easy to cache their results. However, in practice caching seems to degrade performance: simply caching the results of evalCut causes a 10–20% increase in runtime on the benchmarks of Section 4.1. Normalization problems in LKt do not seem to repeat often enough to warrant a cache.
We used named variables as the binding strategy since this is what is traditionally used in GAPT. As expected, this choice has resulted in a number of binding-related bugs, which were difficult to debug. However, with named variables we can often avoid renaming when traversing and substituting proofs, where other approaches such as de Bruijn indices or the locally nameless representation would always require renaming, or instantiation and abstraction, respectively. Since every term in LKt contains (multiple) binders, it seems prudent to avoid renaming in the common case. It may be possible to implement an efficient binding strategy using de Bruijn indices or a locally nameless representation by adding explicit renaming inferences.
Proof assistants such as Lean, Coq, or Minlog also provide functions to normalize proofs. It would be interesting to compare their performance to the approaches implemented in GAPT.
6 Conclusion
Term assignments for proofs provide an elegant implementation technique for the efficient computation and transformation of proofs. We have obtained a speedup of several orders of magnitude just by switching the representation from trees of sequents to untyped proof terms. The normalization procedure implemented in this paradigm and described in this paper is fast, supports higher-order cuts, can unfold induction inferences, and can normalize cuts in the presence of all inference rules supported by GAPT. As shown in Section 4.3, we can now practically cut-eliminate proofs that were out of reach before.
However, our ultimate interest lies in the quantifier structure of (cut-free) proofs as captured by Herbrand disjunctions or, in general, expansion proofs. From this point of view, we are not restricted to cut-elimination in LK or inessential variations of it such as LKt. Another option, radically different from what we have considered so far, is to use the functional interpretation to compute expansion proofs, as described by Gerhardy and Kohlenbach in [18].
For proofs with only universally quantified first-order cuts, a certain type of tree grammar describes the quantifier inferences [22], and the language generated by such a grammar then directly corresponds to a Herbrand sequent. We plan to develop and implement extensions of this grammar-based approach to general first-order cuts for an efficient extraction of Herbrand disjunctions; see [2] for grammars describing general prenex cuts.
References
 [2] Bahareh Afshari, Stefan Hetzl & Graham Leigh (2018): Herbrand’s Theorem as Higher Order Recursion. doi:10.14760/OWP201801.
 [3] Matthias Baaz, Stefan Hetzl, Alexander Leitsch, Clemens Richter & Hendrik Spohr (2006): Proof Transformation by CERES. In Jonathan M. Borwein & William M. Farmer, editors: 5th International Conference on Mathematical Knowledge Management, MKM, Lecture Notes in Computer Science 4108, Springer, pp. 82–93, doi:10.1007/118122898.
 [4] Matthias Baaz, Stefan Hetzl, Alexander Leitsch, Clemens Richter & Hendrik Spohr (2008): CERES: An analysis of Fürstenberg’s proof of the infinity of primes. Theoretical Computer Science 403(2–3), pp. 160–175, doi:10.1016/j.tcs.2008.02.043.
 [5] Matthias Baaz, Stefan Hetzl & Daniel Weller (2012): On the complexity of proof deskolemization. Journal of Symbolic Logic 77(2), pp. 669–686, doi:10.2178/jsl/1333566645.
 [6] Matthias Baaz & Alexander Leitsch (2000): Cutelimination and Redundancyelimination by Resolution. Journal of Symbolic Computation 29(2), pp. 149–177, doi:10.1006/jsco.1999.0359.
 [7] Franco Barbanera & Stefano Berardi (1996): A Symmetric Lambda Calculus for Classical Program Extraction. Information and Computation 125(2), pp. 103–117, doi:10.1006/inco.1996.0025.
 [8] Samuel R. Buss (1995): On Herbrand’s Theorem. In: Logic and Computational Complexity, Lecture Notes in Computer Science 960, Springer, pp. 195–209, doi:10.1007/354060178385.
 [9] David Cerna & Anela Lolic (2018): System Description: GAPT for schematic proofs. RISC Report Series. Available at http://www.risc.jku.at/publications/download/risc_5591/schematicGapt.pdf.
 [10] Sebastian Eberhard & Stefan Hetzl (2015): Inductive theorem proving based on tree grammars. Annals of Pure and Applied Logic 166(6), pp. 665–700, doi:10.1016/j.apal.2015.01.002.

 [11] Gabriel Ebner, Stefan Hetzl, Alexander Leitsch, Giselle Reis & Daniel Weller (2018): On the generation of quantified lemmas. Journal of Automated Reasoning, pp. 1–32, doi:10.1007/s1081701894628.
 [12] Gabriel Ebner, Stefan Hetzl, Bernhard Mallinger, Giselle Reis, Martin Riener, Marielle Louise Rietdijk, Matthias Schlaipfer, Christoph Spörk, Janos Tapolczai, Jannik Vierling, Daniel Weller, Simon Wolfsteiner & Sebastian Zivota (2018): GAPT user manual, version 2.10. Available at https://logic.at/gapt/downloads/gaptusermanual.pdf.
 [13] Gabriel Ebner, Stefan Hetzl, Giselle Reis, Martin Riener, Simon Wolfsteiner & Sebastian Zivota (2016): System Description: GAPT 2.0. In Nicola Olivetti & Ashish Tiwari, editors: International Joint Conference on Automated Reasoning (IJCAR), Lecture Notes in Computer Science 9706, Springer, pp. 293–301, doi:10.1007/978-3-319-40229-1_20.
 [14] Harry Furstenberg (1955): On the infinitude of primes. The American Mathematical Monthly 62(5), p. 353, doi:10.2307/2307043.
 [15] Harry Furstenberg (1981): Recurrence in ergodic theory and combinatorial number theory. Princeton University Press, doi:10.1515/9781400855162.
 [16] Gerhard Gentzen (1935): Untersuchungen über das logische Schließen I. Mathematische Zeitschrift 39(1), pp. 176–210, doi:10.1007/BF01201353.
 [17] Gerhard Gentzen (1936): Die Widerspruchsfreiheit der reinen Zahlentheorie. Mathematische Annalen 112, pp. 493–565, doi:10.1007/BF01565428.
 [18] Philipp Gerhardy & Ulrich Kohlenbach (2005): Extracting Herbrand disjunctions by functional interpretation. Archive for Mathematical Logic 44(5), pp. 633–644, doi:10.1007/s00153-005-0275-1.
 [19] Jean-Yves Girard (1987): Proof theory and logical complexity. Vol. 1, Bibliopolis.
 [20] Robert Harper (2016): Practical foundations for programming languages. Cambridge University Press, doi:10.1017/CBO9781316576892.
 [21] Jacques Herbrand (1930): Recherches sur la théorie de la démonstration. Ph.D. thesis, Université de Paris.
 [22] Stefan Hetzl (2012): Applying Tree Languages in Proof Theory. In Adrian-Horia Dediu & Carlos Martín-Vide, editors: Language and Automata Theory and Applications, Lecture Notes in Computer Science 7183, Springer, pp. 301–312, doi:10.1007/978-3-642-28332-1_26.
 [23] Stefan Hetzl, Alexander Leitsch, Giselle Reis, Janos Tapolczai & Daniel Weller (2014): Introducing Quantified Cuts in Logic with Equality. In Stéphane Demri, Deepak Kapur & Christoph Weidenbach, editors: 7th International Joint Conference on Automated Reasoning, IJCAR, Lecture Notes in Computer Science 8562, Springer, pp. 240–254, doi:10.1007/978-3-319-08587-6_17.
 [24] Stefan Hetzl, Alexander Leitsch, Giselle Reis & Daniel Weller (2014): Algorithmic introduction of quantified cuts. Theoretical Computer Science 549, pp. 1–16, doi:10.1016/j.tcs.2014.05.018.
 [25] Stefan Hetzl, Alexander Leitsch & Daniel Weller (2011): CERES in higherorder logic. Annals of Pure and Applied Logic 162(12), pp. 1001–1034, doi:10.1016/j.apal.2011.06.005.
 [26] Stefan Hetzl, Alexander Leitsch & Daniel Weller (2012): Towards Algorithmic Cut-Introduction. In: Logic for Programming, Artificial Intelligence and Reasoning (LPAR-18), Lecture Notes in Computer Science 7180, Springer, pp. 228–242, doi:10.1007/978-3-642-28717-6_19.
 [27] Stefan Hetzl & Daniel Weller (2013): Expansion Trees with Cut. CoRR abs/1308.0428. Available at https://arxiv.org/abs/1308.0428.
 [28] David Hilbert & Paul Bernays (1939): Grundlagen der Mathematik II. Springer.
 [29] Alexander Leitsch & Anela Lolic (2018): Extraction of Expansion Trees. Journal of Automated Reasoning, pp. 1–38, doi:10.1007/s10817-018-9453-9.
 [30] Horst Luckhardt (1989): Herbrand-Analysen zweier Beweise des Satzes von Roth: Polynomiale Anzahlschranken. Journal of Symbolic Logic 54(1), pp. 234–263, doi:10.2307/2275028.
 [31] Dale A. Miller (1987): A compact representation of proofs. Studia Logica 46(4), pp. 347–370, doi:10.1007/BF00370646.
 [32] Michel Parigot (1992): λμ-Calculus: An Algorithmic Interpretation of Classical Natural Deduction. In Andrei Voronkov, editor: Logic for Programming, Artificial Intelligence and Reasoning (LPAR), Lecture Notes in Computer Science 624, Springer, pp. 190–201, doi:10.1007/BFb0013061.
 [33] Frank Pfenning (1995): Structural Cut Elimination. In: Logic in Computer Science, IEEE Computer Society, pp. 156–166, doi:10.1109/LICS.1995.523253.
 [34] Gaisi Takeuti (1987): Proof theory, second edition. Studies in Logic and the Foundations of Mathematics 81, NorthHolland Publishing Co., Amsterdam.
 [35] Christian Urban & Gavin M. Bierman (2001): Strong Normalisation of Cut-Elimination in Classical Logic. Fundamenta Informaticae 45(1-2), pp. 123–155, doi:10.1007/3-540-48959-2_26.