Cut-elimination is perhaps the most fundamental operation in proof theory, first introduced by Gentzen. Its importance is underlined by the wide variety of its applications; one application in particular motivates our interest in cut-elimination: cut-free proofs directly contain Herbrand disjunctions.
Herbrand’s theorem [21, 8] captures the insight that the validity of a quantified formula is characterized by the existence of a tautological finite set of quantifier-free instances. In its simplest case, the validity of a purely existential formula is characterized by the existence of a tautological disjunction of instances, a Herbrand disjunction. Expansion proofs generalize this result to higher-order logic in the form of elementary type theory.
A computational implementation of Herbrand’s theorem as provided by cut-elimination lies at the foundation of many applications in computational proof theory: if we can compress the Herbrand disjunction extracted from a proof using a special kind of tree grammar, then we can introduce a cut into the proof which reduces the number of quantifier inferences—in practice this method finds interesting non-analytic lemmas [26, 24, 23, 11]. A similar approach can be used for automated inductive theorem proving, where the tree grammar generalizes a finite sequence of Herbrand disjunctions. By comparing the Herbrand disjunctions of proofs, we obtain a notion of proof equality that identifies proofs which use the same quantifier instances. Automated theorem provers typically use Skolem functions; expansion proofs admit a particularly elegant transformation that eliminates these Skolem functions and turns a proof of a Skolemized formula into a proof of the original statement in linear time. Herbrand disjunctions directly contain witnesses for the existential quantifiers and hence capture a certain computational interpretation of classical proofs. Furthermore, Luckhardt used Herbrand disjunctions to give a polynomial bound on the number of solutions in Roth’s theorem in the area of Diophantine approximation.
Our GAPT system for proof transformations contains implementations of many of these Herbrand-based algorithms, as well as various proofs formalized in the sequent calculus LK and several cut-elimination procedures. However, in practice we have proofs where none of these procedures are successful, for several reasons: the performance may be insufficient, higher-order cuts cannot be treated, induction cannot be unfolded, or special-purpose inferences such as proof links are not supported.
The normalization procedure described in this paper has all of these features: it is fast, supports higher-order cuts, can unfold induction inferences, and does not fail in the presence of special-purpose inference rules. This procedure is based on a term calculus for LK described by Urban and Bierman. Proof normalization can be implemented much more efficiently by using the Curry-Howard correspondence to compute with proof terms instead of trees of sequents, as this significantly reduces the bureaucracy required during reduction. We also considered other calculi, such as the symmetric λ-calculus or the λμ-calculus. In the end we decided on the present calculus because of its close similarity to LK, which allows us to straightforwardly integrate special-purpose inferences.
In Section 2 we present the syntax and typing rules for the calculus as implemented in GAPT. We then briefly describe the implementation of the normalization procedure in Section 3. Its performance is then empirically evaluated on both artificial and real-world proofs in Section 4. Finally, potential future work is discussed in Section 5.
One of the proofs on which we evaluate this normalization procedure in Section 4 is Furstenberg’s famous proof of the infinitude of primes. Cut-elimination was also used by Girard [19, annex 7.E] to analyze another proof of Furstenberg that shows van der Waerden’s theorem using ergodic theory.
The proof system is modeled closely after the calculus described in the paper by Urban and Bierman. Since that paper does not give a name to the introduced calculus, we call our variant LKt, as an abbreviation for “LK with terms”. Proofs in LKt operate on hypotheses (called names and co-names by Urban and Bierman), which name formula occurrences in the current sequent. We found it useful to have a single type that combines both names and co-names, since this reduces code duplication. Each formula in a sequent is labelled by a hypothesis:
Expressions in the object language are lambda expressions with simple types: an expression is either a variable, a constant, a lambda abstraction, or a function application. Connectives and quantifiers such as ∧ and ∀ are represented as constants of the appropriate type. Formulas are expressions of type o, the type of Booleans. We identify expressions up to αβη-equality. A substitution is a type-preserving map from variables to expressions. Given an expression e and a substitution σ, we write eσ for the (capture-avoiding) application of σ to e.
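To make the representation concrete, the following is an illustrative Python sketch of such lambda expressions together with capture-avoiding substitution. The names `Var`, `Const`, `App`, `Abs`, and `subst` are hypothetical, type annotations on constants are omitted for brevity, and GAPT itself implements this in Scala:

```python
from dataclasses import dataclass

# Hypothetical sketch of the object language (the real implementation
# is in Scala and carries simple types on variables and constants).

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Const:
    name: str

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

@dataclass(frozen=True)
class Abs:
    var: str
    body: object

def free_vars(e):
    if isinstance(e, Var):
        return {e.name}
    if isinstance(e, Const):
        return set()
    if isinstance(e, App):
        return free_vars(e.fn) | free_vars(e.arg)
    return free_vars(e.body) - {e.var}

def subst(e, s):
    """Capture-avoiding application of a substitution s (a dict from
    variable names to expressions) to the expression e."""
    if isinstance(e, Var):
        return s.get(e.name, e)
    if isinstance(e, Const):
        return e
    if isinstance(e, App):
        return App(subst(e.fn, s), subst(e.arg, s))
    # lambda abstraction: drop the shadowed binding, then rename the
    # bound variable if it would capture a free variable of the range
    s2 = {x: t for x, t in s.items() if x != e.var}
    clash = set().union(*(free_vars(t) for t in s2.values())) if s2 else set()
    v, body = e.var, e.body
    if v in clash:
        avoid = clash | free_vars(body)
        v2 = v
        while v2 in avoid:
            v2 += "'"
        body = subst(body, {v: Var(v2)})
        v = v2
    return Abs(v, subst(body, s2))
```

For example, substituting x for y in λx. x y renames the bound x to avoid capture.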
This language can express impredicative quantification over types of arbitrary rank, such as predicates on predicates on functions. For example, we can state a form of extensionality for such predicates; this formula is not provable in LKt.
The proof terms are almost untyped: in contrast to Urban and Bierman, we include the cut formula in the proof term for the cut inference, in order to perform type-checking without higher-order unification. A typing judgement then tells us what sequent a proof proves. Figure 1 shows the syntax for the proof terms. Hypothesis arguments that are not bound are called main formulas.
We use named variables as a binding strategy for the hypotheses, in consistency with the implementation of the lambda expressions (as opposed to de Bruijn indices or a locally nameless representation). Hypotheses are stored as machine integers: a negative hypothesis refers to a formula in the antecedent, and a positive hypothesis refers to a formula in the succedent of the sequent. We write x.t to indicate that the hypothesis x is bound in the term t, cf. the notation of abstract binding trees in Harper’s book. This encoding of LK is also very similar to the encoding commonly used in logical frameworks (LF).
Notably, there are no terms for weakening and contraction. These are implicit: we can use the same hypothesis zero or multiple times. The proof terms only contain new information that is not contained in the end-sequent; only cut formulas, weak quantifier instance terms, and eigenvariables are stored. We do not repeat the formulas or atoms of the end-sequent.
Let us now define the typing judgment. A local context is a finite map from hypotheses to formulas; we write it as a sequent in which each formula is labelled by its hypothesis, where the antecedent labels are negative and the succedent labels are positive. Outer occurrences overwrite inner ones: extending a context with a hypothesis that is already present replaces the formula assigned to it.
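As an illustration of this convention (a hypothetical Python sketch, not GAPT's actual Scala code), a local context can be read back as a sequent by splitting on the sign of the hypothesis, and extending the context is just a map update that overwrites earlier bindings:

```python
def to_sequent(ctx):
    """Read a local context (a dict from integer hypotheses to formulas)
    back as a sequent: negative hypotheses label antecedent formulas,
    positive hypotheses label succedent formulas."""
    antecedent = [ctx[h] for h in sorted(ctx) if h < 0]
    succedent = [ctx[h] for h in sorted(ctx) if h > 0]
    return antecedent, succedent
```

Overwriting of inner occurrences then comes for free from the map update, e.g. `{**ctx, -1: "P'"}` replaces whatever formula hypothesis -1 was previously assigned.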
Given an (expression) substitution σ, we can apply it to a proof term π in the natural way to obtain πσ. The typing judgment relates a substitution σ, a local context Γ, and a proof term π: it means that π is a valid proof in the local context Γσ, that is, π proves that the sequent corresponding to Γσ is valid. We may omit σ if it is the identity substitution; in this special case the judgment corresponds to the notation used by Urban and Bierman.
The reason for parameterizing the typing judgement by a substitution is twofold: due to our use of named variables, we may need to rename bound eigenvariables (in strong quantifier and induction inferences) when traversing a term, to ensure that the eigenvariable is fresh. However, we do not want to apply such a renaming substitution to the proof term eagerly: this would be both costly and would introduce an unnecessary dependency on the local context in operations that otherwise do not require any typing information.
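The effect of carrying the substitution alongside the judgment, rather than applying it, can be sketched as follows (illustrative Python with hypothetical names): the pending map is consulted at variables and extended at binders, and the traversed term itself is never rewritten.

```python
def traverse(term, pending, fresh):
    """Toy terms: ('var', x) or ('bind', x, body). Instead of eagerly
    substituting into `body` when a bound eigenvariable must be renamed,
    we extend the pending map and keep traversing."""
    if term[0] == "var":
        # consult the pending substitution at the leaf
        return ("var", pending.get(term[1], term[1]))
    _, x, body = term
    x2 = fresh(x)  # pick a fresh eigenvariable name
    return ("bind", x2, traverse(body, {**pending, x: x2}, fresh))
```

The renaming of x to x2 costs a single map update here, instead of a full pass over the body.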
The proof terms and corresponding typing rules are chosen in such a way that they correspond as much as possible to the already implemented sequent calculus LK; see [12, Appendix B.1] for a detailed description of that calculus. The implementation also contains further inferences for special applications, such as proof links for schematic proofs, definition rules, and Skolem inferences to represent Skolemized proofs in higher-order logic. The implemented inference rule for induction is also more general than the one shown here: it supports structural induction over types other than natural numbers.
Equational reasoning is implemented using an inference for reflexivity and an inference that rewrites with an equation in arbitrary contexts and on both sides of the sequent. The third argument of the rewrite inference indicates whether we rewrite from left to right or from right to left. Syntactically, we support equations between terms of arbitrary type; however, cut-elimination can fail with equations between functions or Booleans, as quantified cuts can remain.
In our version of higher-order logic, additional connectives are primitive as well. By a heavy abuse of notation, we simply reuse the existing proof terms for them. This representation causes no confusion, since the intended connective is always clear from the polarities of the hypotheses, and many operations are defined identically for the different connectives. The corresponding typing rules are derived in the natural way; as an example, we show the case where such a reused proof term proves an implication on the right side:
Normalization is performed in a big-step evaluation approach using three mutually recursive functions, called normalize, evalCut, and proofSubst in the implementation. All of these functions return fully normalized proof terms: we do not create temporary terms, and all produced terms are irreducible (for example because they are “stuck” on an induction or equational inference). Figure 3 shows the definition of the three functions. Note that since contraction is implicit, the cut rule behaves more like Gentzen’s mix rule.
The function normalize takes a proof term as input and returns a normal form of it.
If the two subproofs of a cut are already in normal form, then evalCut computes a normal form of the cut.
If both sides of the cut are again in normal form, then proofSubst performs a proof substitution, which corresponds to the rank-reduction step of cut-elimination in LK. The function takes one side of the cut and directly moves it to all inferences in the other side where the cut formula occurs as the main formula. This operation is symmetric in the side of the cut, and only needs to be implemented once.
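The interplay of the three functions can be illustrated on a tiny fragment with only axioms and cuts. This is a hypothetical Python sketch reusing the implementation's function names; the real functions cover all LKt constructors, and proofSubst does more than rename a hypothesis:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ax:
    ant: str  # hypothesis naming the antecedent formula occurrence
    suc: str  # hypothesis naming the succedent formula occurrence

@dataclass(frozen=True)
class Cut:
    x: str        # hypothesis binding the cut formula in `left`
    left: object
    y: str        # hypothesis binding the cut formula in `right`
    right: object

def free_hyps(p):
    if isinstance(p, Ax):
        return {p.ant, p.suc}
    return (free_hyps(p.left) - {p.x}) | (free_hyps(p.right) - {p.y})

def normalize(p):
    """Return a normal form of p (big-step)."""
    if isinstance(p, Ax):
        return p
    return eval_cut(p.x, normalize(p.left), p.y, normalize(p.right))

def eval_cut(x, p, y, q):
    """p and q are already normal; compute a normal form of the cut."""
    # implicit weakening: skip the cut if a cut hypothesis is unused
    if x not in free_hyps(p):
        return p
    if y not in free_hyps(q):
        return q
    # axiom reduction: move the other side to where the cut formula
    # is used, here simply by renaming the hypothesis
    if isinstance(p, Ax) and p.suc == x:
        return proof_subst(q, y, p.ant)
    if isinstance(q, Ax) and q.ant == y:
        return proof_subst(p, x, q.suc)
    return Cut(x, p, y, q)  # stuck in this toy fragment

def proof_subst(p, old, new):
    """Rename hypothesis `old` to `new` in the normal proof p
    (assuming bound hypothesis names are distinct from old and new)."""
    if isinstance(p, Ax):
        return Ax(new if p.ant == old else p.ant,
                  new if p.suc == old else p.suc)
    return Cut(p.x, proof_subst(p.left, old, new),
               p.y, proof_subst(p.right, old, new))
```

For instance, a cut between the axioms a ⊢ x and y ⊢ b on the hypotheses x and y normalizes to the axiom a ⊢ b.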
Given a term t, we write normalize(t)↓ if normalize terminates on the input t; similarly for evalCut and proofSubst.
Lemma 1 (Subject reduction).
If a proof term is well-typed and normalization terminates on it, then the resulting normal form is well-typed with the same end-sequent.
Routine induction on the length of the computation of normalize, evalCut, and proofSubst, respectively. ∎
We expect that normalization terminates on all well-typed proofs, including those with higher-order quantifier inferences. Urban and Bierman showed strong normalization for their first-order calculus without equality using reducibility methods. LKt is more general as it is higher-order, and its first-order fragment is slightly different due to the use of the skipping constructors, which skip unnecessary inferences.
Conjecture 2 (Termination).
Normalization terminates on every well-typed proof term.
Note that for our applications it is often not necessary to have completely cut-free proofs. Cuts on quantifier-free formulas are for example unproblematic for the extraction of Herbrand disjunctions.
Lemma 3 (Cut-elimination).
Let π be a well-typed proof term on which normalization terminates. If π does not contain induction, equational, or other special-purpose inferences, then its normal form is cut-free.
Cuts in the output are only produced by stuck reductions, and by case analysis this does not happen for this class of proofs. ∎
We perform a few noteworthy optimizations:
Every term stores the set of its free hypotheses and free (expression) variables. These are fields in the Scala classes implementing the proof terms. We can hence efficiently (in logarithmic time) check whether a given hypothesis or variable is free in a proof term.
Due to this extra data, we can effectively skip many calls of the normalization procedure: we do not need to substitute into a subterm, or evaluate a cut on it, if the hypothesis for the cut formula is not free in it; in this case we can immediately return the subterm.
When producing the resulting proof terms, we check whether we can skip any inferences. For example, instead of constructing a cut, we can directly return one of its subproofs if the cut hypothesis is not free in that subproof. In Fig. 3 we denote these “skipping” constructors with a superscript. This optimization is extremely important from a practical point of view, since it effectively prevents a common blow-up in proof size.
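Both optimizations can be combined in a few lines. This is an illustrative Python sketch with hypothetical names; the Scala classes in GAPT store the free-hypothesis set as a field in the same spirit:

```python
class Node:
    """A proof-term node that stores its free hypotheses at
    construction time, so freeness checks need no traversal."""
    __slots__ = ("kind", "children", "free")

    def __init__(self, kind, children, free):
        self.kind, self.children, self.free = kind, children, frozenset(free)

def axiom(a, b):
    return Node("ax", (), {a, b})

def cut_s(x, left, y, right):
    """A "skipping" cut constructor: if a cut hypothesis is not free
    in its subproof, the cut is an implicit weakening and we simply
    return that subproof instead of building a redundant inference."""
    if x not in left.free:
        return left
    if y not in right.free:
        return right
    return Node("cut", (left, right), (left.free - {x}) | (right.free - {y}))
```

The skipped case returns the existing subproof unchanged, so no new nodes are allocated at all.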
The cut-normalization in Urban and Bierman’s work is presented as a single-step reduction relation. The strong normalization of that relation depends on the fact that all cuts can be eliminated in their calculus. In LKt, however, cuts can be irreducible, for example because they are stuck on an induction or an equational inference. This has the unfortunate consequence that the natural single-step reduction relation for LKt is not strongly normalizing. Since multiple cuts can be stuck on the same inference, we have the traditional counterexample of two commuting cuts:
3.1 Induction unfolding
We typically consider proofs with induction of sequents whose succedent formula is quantifier-free (or perhaps existentially quantified), and whose antecedent contains recursive definitions for all function symbols occurring in it, such as (but not limited to) addition and multiplication. If the succedent contains free variables or strong quantifiers, then we can in general not eliminate all inductions; however, the quantifier instances of a normalized proof may still provide valuable insights. In particular we are interested in the quantifier instances of formulas in the antecedent, as their structure plays an important role in our approach to inductive theorem proving. The language always contains the constructors for the data type, for natural numbers zero and successor. Injectivity of these constructors is included as an explicit formula in the antecedent when necessary. We consider arbitrary recursively defined functions, also on other data types such as lists.
Elimination of induction inferences is handled in a similar way to Gentzen’s proof of the consistency of Peano Arithmetic. Induction inferences whose terms are constructor applications are unfolded:
The full induction-elimination procedure then alternates between cut-normalization and full induction unfolding until no more induction inferences can be unfolded. We also rewrite the term in the induction inference using the universally quantified equations representing the recursive definitions, in order to bring the term into constructor form. The resulting proof is then added via a cut, where the induction is now performed on the simplified term in constructor form:
We can perform this induction reduction even if the problem contains function symbols that are not recursively defined. In this case inductions can remain in the output. We conjecture that the full induction-elimination procedure (alternating induction-unfolding and cut-normalization) always terminates.
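The basic unfolding step for natural numbers can be sketched as follows (hypothetical Python; `base` and `step` stand in for the two subproofs of the induction inference):

```python
def unfold_induction(base, step, t):
    """Unfold an induction whose term t is in constructor form:
    t is either "0" or ("s", t'). `base` is a proof of P(0), and
    step(k, prev) builds a proof of P(s(k)) from a proof prev of P(k).
    The induction inference is replaced by nested step applications."""
    if t == "0":
        return base
    head, pred = t
    assert head == "s"
    return step(pred, unfold_induction(base, step, pred))
```

For a numeral s(s(0)) this produces two nested applications of the step proof around the base proof; terms not in constructor form must first be rewritten using the recursive definitions, as described above.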
3.2 Equational reduction
As noted in Section 3, cuts on equational inferences are stuck. Consider for example the following term, which cannot be reduced further:
This is clearly a problem since we cannot obtain Herbrand disjunctions from proofs with such quantified cuts. On the other hand, cuts on atoms would pose no problem since we can still obtain Herbrand disjunctions by examining the weak quantifier inferences. We hence reduce quantified equational inferences to atomic equational inferences—then only atomic cuts can be stuck.
Concretely, we define a function that, for any pair of terms, produces a proof of the corresponding rewrite using only equational inferences on atoms. This function hence simulates rewrite inferences on complex formulas using only inferences on atoms, and is straightforwardly defined by recursion on the formula. We only show the case for conjunction as an example:
We then replace equational inferences on non-atoms using this translation. Note that the translation depends on the typing derivation (to obtain the term for the cut formula) and can fail if we have equations between predicates.
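The recursion underlying this translation can be sketched like this (illustrative Python on a toy formula fragment with only atoms and conjunction, matching the case shown above):

```python
def atomic_rewrites(formula, s, t):
    """Compute the atomic rewrite steps that together simulate
    rewriting s to t inside `formula`. Formulas in this toy fragment
    are ('atom', name) or ('and', a, b); the conjunction case simply
    recurses into both conjuncts."""
    if formula[0] == "atom":
        return [("rw", formula[1], s, t)]
    _, a, b = formula
    return atomic_rewrites(a, s, t) + atomic_rewrites(b, s, t)
```

A rewrite in P ∧ (Q ∧ R) thus decomposes into three rewrites, one in each atom; the other connectives and the quantifier cases follow the same pattern.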
4 Empirical evaluation
4.1 Artificial examples
The calculus and normalization procedure presented in this paper have been implemented in the open-source GAPT system for proof transformations (version 2.10, available at https://logic.at/gapt). We now compare the performance of several cut-normalization procedures implemented in GAPT on benchmarks used in previous work.
LK: Gentzen-style reductive cut-elimination in LK. Proofs in LK are tree-like data structures where every node carries a (formula) sequent. The output is again a proof in LK; atomic cuts can appear directly below equational inferences.
CERES (LK): Cut-elimination by resolution  reduces the problem of cut-elimination in LK to finding a resolution refutation of a first-order clause set. The output is a proof in LK with at most atomic cuts.
CERES (expansion): a variant of CERES that takes proofs with cuts in LK, and directly produces expansion proofs . This uses the same first-order clause sets as CERES (LK).
semantic: by “semantic cut-elimination”, we refer to the procedure that throws away the input proofs, and generates a cut-free proof from scratch. GAPT contains interfaces to several resolution provers, including the built-in Escargot prover. Here we used Escargot to obtain a cut-free expansion proof of the end-sequent of the input proof.
expansion proof: the expansion proofs implemented in GAPT support cuts; such cuts correspond to cuts in LK and are simply expansions of the formula A → A, where A is the cut formula. First-order cuts in expansion proofs can be eliminated using a procedure that operates just on the quantifier instances of the proof and is similar to the proofs of the epsilon theorems. Both the input and output formats are expansion proofs; the resulting expansion proof is cut-free.
LKt: the normalization procedure shown in Section 3.
LKt (until atomic): same as LKt, but we do not reduce atomic cuts. The resulting proof may still contain cuts on atoms, but this is sufficient for the extraction of Herbrand disjunctions. We can directly extract Herbrand disjunctions from proofs as long as all cut formulas are propositional.
LKt (until quant.-free): same as LKt, but we do not reduce quantifier-free cuts.
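The reason the “until atomic” and “until quant.-free” variants suffice is that Herbrand instances can be read directly off the weak quantifier inferences. A toy sketch of this extraction (hypothetical Python encoding of proof terms):

```python
def herbrand_instances(proof):
    """Collect weak-quantifier instance terms from a proof term.
    Toy encoding: ('inst', term, subproof) marks a weak quantifier
    inference with instance `term`; ('node', *subproofs) stands for
    any other inference."""
    if proof[0] == "inst":
        return [proof[1]] + herbrand_instances(proof[2])
    return [t for sub in proof[1:] for t in herbrand_instances(sub)]
```

Remaining propositional cuts do not disturb this traversal, since they contribute no quantifier instances of their own.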
The graphs in Fig. 4 show the runtime for each of these procedures on several artificial example proofs. The runtime is measured in seconds of wall clock time; we used a logarithmic scale for the time axis since the performance of the procedures differs by several orders of magnitude. In one case, LKt (until quant.-free) is a million times faster than LK. All of the example proofs are parameterized by a natural number n (the x-axis of the plot), and the size of the input proofs is polynomially bounded in n.
Linear example after cut-introduction (ci_linear)
The name “linear example” refers to the sequence of (proofs of) sequents stating that P(s^n(0)) follows from P(0) and ∀x (P(x) → P(s(x))). We take natural cut-free proofs of this sequent and then use an automated method that introduces universally quantified cuts to obtain a proof with cut. In GAPT, these proofs with universally quantified cuts are produced with CutIntroduction(LinearExampleProof(n)).
In this example, all of the LKt-based normalization procedures are faster than the CERES variants by a factor of about 100. Even semantic cut-elimination is faster. LKt normalization is also faster than expansion proof cut-elimination by a factor of about 10. We also see that not eliminating atomic cuts is a bit faster than full cut-elimination, and not eliminating quantifier-free cuts is faster still.
Linear example proof with manual cuts (linear)
Cut-introduction often produces unnecessarily complicated lemmas, resulting in irregularity when used in proof sequences. It is also limited to small proofs. To produce a more regular sequence and obtain larger proofs, we manually formalized natural proofs of the linear example using universally quantified cut formulas. These proofs can be produced with LinearCutExampleProof(n). (Note that this sequence of proofs produces exponentially larger cut-free proofs than the other examples.)
The results are similar to the proofs obtained with cut-introduction, although we observe new phenomena at both ends of the sequence: for the smallest instances, the proofs consist of a single axiom, and here the LKt-based procedures produce a cut-free proof in about 15 nanoseconds. On the other end, for the largest instances, we finally see CERES becoming slightly faster than semantic cut-elimination.
Linear example proof with atomic cuts (linearacnf)
To complete the discussion of the linear example, we also consider a proof sequence in atomic cut-normal form (ACNF). In these proofs, the quantifier and propositional inferences are at the top of the proof, and the bottom part consists only of atomic cuts—very much like a ground resolution refutation. Interestingly, atomic cut-elimination is surprisingly cheap in this example: the LKt-based normalization only takes about 10 microseconds. On the other hand, the CERES-based methods require as much time as they do for the proofs with universally quantified cuts: they refute a clause set whose size is linear in n.
Square diagonal proof after cut-introduction (ci_sqdiag)
Just as in the linear example, we take cut-free proofs of the square diagonal sequent and then automatically introduce universally quantified cuts. LKt normalization until quantifier-free cuts is an order of magnitude faster than expansion proof cut-elimination, and two orders of magnitude faster than CERES.
Linear equality example proof after cut-introduction (ci_lineareq)
These proofs are generated using CutIntroduction(LinearEqExampleProof(n)). Note that we replaced the equality predicate by a binary relation E to prevent accidental introduction of equational inferences. Again, LKt normalization until propositional cuts is 10 times faster than expansion proof cut-elimination, which is in turn 10 times faster than CERES.
The astute reader will have noticed the spikes in the runtime of the reductive cut-elimination procedures at certain values of n. These spikes are due to convoluted cut formulas produced by cut-introduction: at these instances, the proof built around the introduced cut formula is almost as complicated on the propositional level as the cut-free proof, even though it has a lower quantifier complexity.
4.2 Mathematical proofs
GAPT contains a small library of formalized proofs for testing. These are mostly basic properties of natural numbers and lists. The biggest formalized result is the fundamental theorem of arithmetic, showing the existence and uniqueness of prime decompositions for natural numbers. We evaluated the performance of LKt as well as other procedures (see Section 4.1) on several of these proofs. Figure 5 shows the runtime of the induction- and cut-elimination. We tested instances of proofs of the following statements:
The proofs contain the primitive recursive definitions in the antecedent. For example, add0l is a proof with induction of a sequent expressing that zero is a left identity for addition. As before, induction-elimination in LK is several orders of magnitude slower than in LKt. Semantic cut-elimination is surprisingly fast: it is as fast as LKt for small instances of filterrev.
4.3 Furstenberg proof
Furstenberg’s well-known proof of the infinitude of primes equips the integers with a topology generated by arithmetic progressions, and uses this machinery to show that there are infinitely many primes. For every natural number n we hence get a second-order proof showing that there are more than n prime numbers. Cut-elimination of this proof then extracts the computational content of Furstenberg’s argument: we get a new prime number as a witness.
CERES was used to perform this extraction manually. The key step in cut-elimination using CERES consists of the refutation of a so-called characteristic clause set. In the case of Furstenberg’s proof, automated theorem provers could only refute this first-order clause set for n = 1, that is, to show that there is more than one prime number. The authors hence manually constructed a sequence of refutations, taking Euclid’s proof of the infinitude of primes as a guideline to obtain a prime divisor of p_1 ⋯ p_n + 1 as a witness. The authors also present another refutation for n = 2, which yields more than one witness: one of three candidate numbers contains the third prime as a divisor.
Using LKt, GAPT can now perform the cut-elimination and extract the witness term automatically. Figure 6 shows the performance of the normalization on instances of Furstenberg’s proof. The concrete formalization closely resembles the one analyzed with CERES; however, there have been minor changes to account for subtle differences in the LK calculus currently implemented in GAPT. Now that we could cut-eliminate this particular formalization for the first time, we were excited to find an interesting feature. We expected that the cut-elimination of Furstenberg’s proof would compute the same witness as Euclid’s proof: a prime divisor of p_1 ⋯ p_n + 1. However, we got the following witness instead, which contains an additional constant factor:
This constant factor seems to depend on the concrete way in which we formalize the lemma that nonempty open sets are infinite. With a slightly different quantifier instance there, we obtain a different constant factor.
5 Future work
As our focus here lies in the practical applications of cut-elimination, termination of the normalization procedure is only of secondary concern. For an actual implementation, there is little difference between an algorithm that does not terminate and one that terminates after a thousand years, as long as it quickly terminates on the instances we apply it to. For some classes of proofs, it is straightforward to see that normalization indeed always terminates. Due to the direct correspondence with the traditional presentation of LK, we can reuse termination arguments: whenever we observe non-termination in the normalization, we get a corresponding non-terminating reduction sequence in LK with an uppermost-first strategy. We believe that induction unfolding can be shown to terminate via an argument similar to the one used in Gentzen’s proof of the consistency of Peano Arithmetic. It remains open whether normalization terminates for proofs with higher-order (or even just second-order) quantifier inferences.
The current handling of equational and induction inferences as described in Sections 3.1 and 3.2 is unsatisfactory, as they are not integrated into the main normalization function but require a separate pass over the proof. Furthermore, the normalized proof may still contain equational inferences on atoms. We are not aware of any terminating procedure using local rewrite rules that eliminates unary equational inferences such as the ones used here.
Renaming hypotheses and applying expression substitutions incurs a significant cost in the benchmarks. An obvious solution is to introduce an explicit substitution inference to implement these operations without the need to traverse the proof term. In fact, one of the motivations behind the substitution parameter in the typing judgment was the support for explicit substitution inferences.
As a cheap optimization, we could grade-reduce blocks of quantifier inferences in a single substitution. This should speed up the common case of eliminating lemmas with many universal quantifiers. Another possible optimization is the use of caching: the functions normalize, evalCut, and proofSubst are all pure, making it easy to cache their results. However, in practice caching seems to degrade performance: simply caching the results of these functions causes a 10-20% increase in runtime on the benchmarks of Section 4.1. Normalization problems in LKt do not seem to repeat often enough to warrant a cache.
We used named variables as a binding strategy since this is traditionally done in GAPT. As expected, this choice has resulted in a number of overbinding-related bugs, which were difficult to debug. However, with named variables we can often avoid renaming when traversing and substituting proofs, where other approaches such as de Bruijn indices or a locally nameless representation would always require renaming, or instantiation and abstraction, respectively. Since every term in LKt contains (multiple) binders, it seems prudent to avoid renaming in the common case. It may be possible to implement an efficient binding strategy using de Bruijn indices or a locally nameless representation by adding explicit renaming inferences.
Proof assistants such as Lean, Coq, or Minlog also provide functions to normalize proofs. It would be interesting to compare their performance to the approaches implemented in GAPT.
Term assignments to proofs provide an elegant implementation technique for the efficient computation and transformation of proofs. We have obtained a speed-up of several orders of magnitude just by switching the representation from trees of sequents to untyped proof terms. The normalization procedure implemented in this paradigm and described in this paper is fast, supports higher-order cuts, can unfold induction inferences, and can normalize cuts in the presence of all inference rules supported by GAPT. As shown in Section 4.3, we can now practically cut-eliminate proofs which were out of reach before.
However, our ultimate interest lies in the quantifier structure of (cut-free) proofs as captured by Herbrand disjunctions or (in general) expansion proofs. From this point of view, we are not restricted to cut-elimination in LK or inessential variations like LKt. Another option that is radically different from what we have considered so far is to use the functional interpretation to compute expansion proofs, as described by Gerhardy and Kohlenbach.
For proofs with only universally quantified first-order cuts, a certain type of tree grammar describes the quantifier inferences, and the language generated by such a grammar then directly corresponds to a Herbrand sequent. We plan to develop and implement extensions of this grammar-based approach to general first-order cuts for an efficient extraction of Herbrand disjunctions; grammars describing general prenex cuts have already been studied.
-  Bahareh Afshari, Stefan Hetzl & Graham Leigh (2018): Herbrand’s Theorem as Higher Order Recursion. doi:10.14760/OWP-2018-01.
-  Matthias Baaz, Stefan Hetzl, Alexander Leitsch, Clemens Richter & Hendrik Spohr (2006): Proof Transformation by CERES. In Jonathan M. Borwein & William M. Farmer, editors: 5th International Conference on Mathematical Knowledge Management, MKM, Lecture Notes in Computer Science 4108, Springer, pp. 82–93, doi:10.1007/11812289_8.
-  Matthias Baaz, Stefan Hetzl, Alexander Leitsch, Clemens Richter & Hendrik Spohr (2008): CERES: An analysis of Fürstenberg’s proof of the infinity of primes. Theoretical Computer Science 403(2-3), pp. 160–175, doi:10.1016/j.tcs.2008.02.043.
-  Matthias Baaz, Stefan Hetzl & Daniel Weller (2012): On the complexity of proof deskolemization. Journal of Symbolic Logic 77(2), pp. 669–686, doi:10.2178/jsl/1333566645.
-  Matthias Baaz & Alexander Leitsch (2000): Cut-elimination and Redundancy-elimination by Resolution. Journal of Symbolic Computation 29(2), pp. 149–177, doi:10.1006/jsco.1999.0359.
-  Franco Barbanera & Stefano Berardi (1996): A Symmetric Lambda Calculus for Classical Program Extraction. Information and Computation 125(2), pp. 103–117, doi:10.1006/inco.1996.0025.
-  Samuel R. Buss (1995): On Herbrand’s Theorem. In: Logic and Computational Complexity, Lecture Notes in Computer Science 960, Springer, pp. 195–209, doi:10.1007/3-540-60178-3_85.
-  David Cerna & Anela Lolic (2018): System Description: GAPT for schematic proofs. RISC Report Series. Available at http://www.risc.jku.at/publications/download/risc_5591/schematicGapt.pdf.
-  Sebastian Eberhard & Stefan Hetzl (2015): Inductive theorem proving based on tree grammars. Annals of Pure and Applied Logic 166(6), pp. 665–700, doi:10.1016/j.apal.2015.01.002.
-  Gabriel Ebner, Stefan Hetzl, Alexander Leitsch, Giselle Reis & Daniel Weller (2018): On the Generation of Quantified Lemmas. Journal of Automated Reasoning, pp. 1–32, doi:10.1007/s10817-018-9462-8.
-  Gabriel Ebner, Stefan Hetzl, Bernhard Mallinger, Giselle Reis, Martin Riener, Marielle Louise Rietdijk, Matthias Schlaipfer, Christoph Spörk, Janos Tapolczai, Jannik Vierling, Daniel Weller, Simon Wolfsteiner & Sebastian Zivota (2018): GAPT user manual, version 2.10. Available at https://logic.at/gapt/downloads/gapt-user-manual.pdf.
-  Gabriel Ebner, Stefan Hetzl, Giselle Reis, Martin Riener, Simon Wolfsteiner & Sebastian Zivota (2016): System Description: GAPT 2.0. In Nicola Olivetti & Ashish Tiwari, editors: International Joint Conference on Automated Reasoning (IJCAR), Lecture Notes in Computer Science 9706, Springer, pp. 293–301, doi:10.1007/978-3-319-40229-1_20.
-  Harry Furstenberg (1955): On the infinitude of primes. The American Mathematical Monthly 62(5), p. 353, doi:10.2307/2307043.
-  Harry Furstenberg (1981): Recurrence in ergodic theory and combinatorial number theory. Princeton University Press, doi:10.1515/9781400855162.
-  Gerhard Gentzen (1935): Untersuchungen über das logische Schließen I. Mathematische Zeitschrift 39(1), pp. 176–210, doi:10.1007/BF01201353.
-  Gerhard Gentzen (1936): Die Widerspruchsfreiheit der reinen Zahlentheorie. Mathematische Annalen 112, pp. 493–565, doi:10.1007/BF01565428.
-  Philipp Gerhardy & Ulrich Kohlenbach (2005): Extracting Herbrand disjunctions by functional interpretation. Archive for Mathematical Logic 44(5), pp. 633–644, doi:10.1007/s00153-005-0275-1.
-  Jean-Yves Girard (1987): Proof theory and logical complexity. Vol. 1, Bibliopolis.
-  Robert Harper (2016): Practical foundations for programming languages. Cambridge University Press, doi:10.1017/CBO9781316576892.
-  Jacques Herbrand (1930): Recherches sur la théorie de la démonstration. Ph.D. thesis, Université de Paris.
-  Stefan Hetzl (2012): Applying Tree Languages in Proof Theory. In Adrian-Horia Dediu & Carlos Martín-Vide, editors: Language and Automata Theory and Applications, Lecture Notes in Computer Science 7183, Springer, pp. 301–312, doi:10.1007/978-3-642-28332-1_26.
-  Stefan Hetzl, Alexander Leitsch, Giselle Reis, Janos Tapolczai & Daniel Weller (2014): Introducing Quantified Cuts in Logic with Equality. In Stéphane Demri, Deepak Kapur & Christoph Weidenbach, editors: 7th International Joint Conference on Automated Reasoning, IJCAR, Lecture Notes in Computer Science 8562, Springer, pp. 240–254, doi:10.1007/978-3-319-08587-6_17.
-  Stefan Hetzl, Alexander Leitsch, Giselle Reis & Daniel Weller (2014): Algorithmic introduction of quantified cuts. Theoretical Computer Science 549, pp. 1–16, doi:10.1016/j.tcs.2014.05.018.
-  Stefan Hetzl, Alexander Leitsch & Daniel Weller (2011): CERES in higher-order logic. Annals of Pure and Applied Logic 162(12), pp. 1001–1034, doi:10.1016/j.apal.2011.06.005.
-  Stefan Hetzl, Alexander Leitsch & Daniel Weller (2012): Towards Algorithmic Cut-Introduction. In: Logic for Programming, Artificial Intelligence and Reasoning (LPAR-18), Lecture Notes in Computer Science 7180, Springer, pp. 228–242, doi:10.1007/978-3-642-28717-6_19.
-  Stefan Hetzl & Daniel Weller (2013): Expansion Trees with Cut. CoRR abs/1308.0428. Available at https://arxiv.org/abs/1308.0428.
-  David Hilbert & Paul Bernays (1939): Grundlagen der Mathematik II. Springer.
-  Alexander Leitsch & Anela Lolic (2018): Extraction of Expansion Trees. Journal of Automated Reasoning, pp. 1–38, doi:10.1007/s10817-018-9453-9.
-  Horst Luckhardt (1989): Herbrand-Analysen zweier Beweise des Satzes von Roth: Polynomiale Anzahlschranken. Journal of Symbolic Logic 54(1), pp. 234–263, doi:10.2307/2275028.
-  Dale A. Miller (1987): A compact representation of proofs. Studia Logica 46(4), pp. 347–370, doi:10.1007/BF00370646.
-  Michel Parigot (1992): λμ-Calculus: An Algorithmic Interpretation of Classical Natural Deduction. In Andrei Voronkov, editor: Logic for Programming, Artificial Intelligence and Reasoning (LPAR), Lecture Notes in Computer Science 624, Springer, pp. 190–201, doi:10.1007/BFb0013061.
-  Frank Pfenning (1995): Structural Cut Elimination. In: Logic in Computer Science, IEEE Computer Society, pp. 156–166, doi:10.1109/LICS.1995.523253.
-  Gaisi Takeuti (1987): Proof theory, second edition. Studies in Logic and the Foundations of Mathematics 81, North-Holland Publishing Co., Amsterdam.
-  Christian Urban & Gavin M. Bierman (2001): Strong Normalisation of Cut-Elimination in Classical Logic. Fundamenta Informaticae 45(1-2), pp. 123–155, doi:10.1007/3-540-48959-2_26.