1 Introduction
A core technique in mathematical reasoning is that of induction. This is especially true in computer science, where it plays a central role in reasoning about recursive data and computations. Formal systems for mathematical reasoning usually capture the notion of inductive reasoning via one or more inference rules that express the general induction schemes, or principles, that hold for the elements being reasoned over.
Increasingly, we are concerned not only with being able to formalise as much mathematical reasoning as possible, but also with doing so in an effective way. In other words, we seek to be able to automate such reasoning. Transitive closure (TC) logic has been identified as a potential candidate for a minimal, ‘most general’ system for inductive reasoning which is also very suitable for automation [AvronTC03, Cohen2014AL, cohen2015middle]. TC logic adds to first-order logic a single operator for forming binary relations: specifically, the transitive closures of arbitrary formulas.^{1} (^{1}In this work, for simplicity, we use a reflexive form of the operator, denoted $RTC$.) This modest addition affords enormous expressive power: namely, it provides a uniform way of capturing inductive principles. If an induction scheme is expressed by a formula $\varphi$, then the elements of the inductive collection it defines are those ‘reachable’ from the base elements via iteration of the induction scheme; that is, those $t$ for which the pair of a base element and $t$ is in the transitive closure of (the relation defined by) $\varphi$. Thus, bespoke induction principles do not need to be added to, or embedded within, the logic; instead, all induction schemes are available within a single, unified language.
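As a simple illustration (assuming a signature with a constant $0$ and a unary function symbol $s$, which are not fixed by anything above), the natural-number predicate generated from $0$ by successor steps can be written using the reflexive transitive closure operator:

```latex
% N(t) holds of exactly the elements reachable from 0 by successor steps:
N(t) \;:=\; \big(RTC_{x,y}\; y = s(x)\big)(0, t)
% The usual induction principle for N is then an instance of the single
% RTC rule scheme: from \psi(0) and \forall x\,(\psi(x) \to \psi(s(x)))
% one obtains \forall t\,(N(t) \to \psi(t)).
```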
Although the expressiveness of TC logic renders any effective proof system for it incomplete for the standard semantics, a natural, effective proof system which is sound for TC logic was shown to be complete with respect to a generalized form of Henkin semantics [Cohen2017Henkin]. In this paper, following similar developments in other formalizations of fixed-point logics and inductive reasoning (see e.g. [Brotherston07, BrotherstonBC08, brotherston2010sequent, Santocanale2002, Sprenger2003]), we present an infinitary proof theory for TC logic which, as far as we know, is the first system that is (cut-free) complete with respect to the standard semantics. The soundness of such infinitary proof theories is underpinned by the principle of infinite descent: proofs are permitted to be infinite (i.e. non-wellfounded) trees, but subject to the restriction that every infinite path in the proof admits some infinite descent. The descent is witnessed by tracing terms or formulas that can be put in correspondence with elements of a well-founded set. In the context of formalised induction, we can trace formulas interpreted by the elements of inductive collections. For this reason, such theories are considered systems of implicit induction, as opposed to those which employ explicit rules for applying induction principles. While a full infinitary proof theory is clearly not effective in the aforementioned sense, an effective system can be obtained by restricting consideration to only the regular infinite proofs. These are precisely those proofs that can be finitely represented as (possibly cyclic) graphs.
These infinitary proof theories generally subsume systems of explicit induction in expressive power, and also offer a number of advantages. Most notably, they can ameliorate the primary challenge of inductive reasoning: finding an induction invariant. In explicit induction systems, this invariant must be provided a priori, and is often much stronger than the goal one is ultimately interested in proving. However, in implicit systems the inductive arguments and hypotheses are encoded in the cycles of a proof, so cyclic proof systems seem better suited to automation. The cyclic approach has also been used to provide an optimal cut-free complete proof system for Kleene algebra [DasP17], providing further evidence of its utility for automation.
In the setting of TC logic, we observe some further benefits over more traditional formal systems of inductive definitions and their infinitary proof theories (cf. [brotherston2010sequent, MartinLof71]). As previously mentioned, TC logic (with a pairing function) has all inductive definitions immediately ‘available’ within the language of the logic: as with inductive hypotheses, one does not need to ‘know’ in advance which induction schemes will be required. Moreover, the use of a single transitive closure operator provides a uniform treatment of all induction schemes. That is, instead of a proof system parameterized by a set of inductive predicates and rules for them (as is the case for LKID [brotherston2010sequent]), TC logic offers a single proof system with a single rule scheme for induction. This has immediate benefits for the metatheory: the proofs of completeness w.r.t. standard semantics and of adequacy (i.e. subsumption of explicit induction) for the infinitary system presented in this paper are simpler and more straightforward. Furthermore, it allows a simple syntactic criterion (which we call normality) to define a cyclic subsystem that is complete for Henkin semantics. This suggests the possibility of more focussed proof-search strategies, further enhancing the potential for automation. TC logic seems more expressive in other ways, too: the transitive closure operator may be applied to any formula, so we are not restricted to induction principles corresponding only to monotone generation schemes (as in, e.g., [Brotherston07, brotherston2010sequent]).
We show that the explicit and cyclic systems are equivalent under arithmetic, as is the case for LKID [Berardi2017b, Simpson2017]. However, there are cases in which the cyclic system for LKID is strictly more expressive than its explicit induction system [Berardi2017]. In attempting to obtain a similar result for TC logic, the fact that all induction schemes are available poses a serious challenge. For one, the counterexample used in [Berardi2017] does not serve to show that this result holds for TC logic. If the strong inequivalence indeed holds also for TC logic, it must be witnessed by a more subtle and complex counterexample. Conversely, it may be that the explicit and cyclic systems do coincide for TC logic. In either case, this points towards fundamental aspects that require further investigation.
The rest of the paper is organised as follows. In Section 2 we reprise the definition of transitive closure logic and both its standard and Henkin-style semantics. Section 3 presents the existing explicit induction proof system for TC logic, and also our new infinitary proof system. We prove the latter sound and complete for the standard semantics, and also derive cut-admissibility. In Section 4 we compare the expressive power of the infinitary system (and its cyclic subsystem) with that of the explicit system. Section 5 concludes and examines the remaining open questions for our system as well as future work.
Due to lack of space, proofs are omitted but can be found in the extended version of the paper (available online at http://kar.kent.ac.uk/65886/).
2 Transitive Closure Logic and its Semantics
In this section we review the language of transitive closure logic, as well as two possible semantics for it: a standard one and a Henkin-style one.
Definition 1 (The language $\mathcal{L}_{RTC}$)
Let $\sigma$ be a first-order signature, and let $\mathcal{L}$ be the corresponding first-order language. The language $\mathcal{L}_{RTC}$ is obtained from $\mathcal{L}$ by the addition of the reflexive transitive closure operator ($RTC$), together with the following clause in the definition of a formula:

$(RTC_{x,y}\,\varphi)(s,t)$ is a formula in $\mathcal{L}_{RTC}$ for any formula $\varphi$ in $\mathcal{L}_{RTC}$, distinct variables $x$, $y$, and terms $s$, $t$. (The free occurrences of $x$ and $y$ in $\varphi$ become bound in this formula.)
Definition 2 (Standard Semantics)
Let $M$ be a structure for $\mathcal{L}_{RTC}$ with domain $D$, and $v$ an assignment in $M$. The (standard) semantics of $\mathcal{L}_{RTC}$ is defined as for classical first-order logic, with the following additional clause for the satisfaction relation:

the pair $\langle M, v\rangle$ is said to satisfy the formula $(RTC_{x,y}\,\varphi)(s,t)$, denoted $M, v \models (RTC_{x,y}\,\varphi)(s,t)$, if $v(s) = v(t)$, or there exist $a_0, \dots, a_n \in D$ ($n > 0$) s.t. $v(s) = a_0$, $v(t) = a_n$, and $M, v[x := a_i, y := a_{i+1}] \models \varphi$ for every $0 \le i < n$.
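For finite structures, the satisfaction clause above is just reachability by $\varphi$-steps, which can be checked mechanically. The following sketch is our own illustrative code (the finite domain and the step relation are chosen purely for the example), iterating $\varphi$-steps from the interpretation of $s$:

```python
def rtc_holds(domain, phi, a, b):
    """Standard semantics of (RTC_{x,y} phi)(s,t) over a finite domain:
    true iff a = b (the reflexive case), or there is a finite chain
    a0, ..., an (n > 0) with a0 = a, an = b and phi(ai, a(i+1)) for all i."""
    if a == b:                        # reflexive case
        return True
    reached = {a}                     # elements reachable from a by phi-steps
    frontier = {a}
    while frontier:
        frontier = {y for x in frontier for y in domain
                    if phi(x, y) and y not in reached}
        reached |= frontier
        if b in reached:
            return True
    return False

# Example: phi(x, y) := y = x + 1 on the domain {0, ..., 5}
dom = range(6)
succ = lambda x, y: y == x + 1
print(rtc_holds(dom, succ, 0, 4))     # True:  0 -> 1 -> 2 -> 3 -> 4
print(rtc_holds(dom, succ, 4, 2))     # False: successor steps never descend
```

The iteration computes a least fixed point of the one-step image, mirroring the existence of a finite chain in the definition.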
We next recall the concepts of frames and Henkin structures (see, e.g. [Henkin50Completeness]). A frame is a relational structure together with some subset of the powerset of its domain (called its set of admissible subsets).
Definition 3 (Frames)
Let $\sigma$ be a first-order signature. A $\sigma$-frame is a triple $M = \langle D, I, \mathcal{D}\rangle$, where $D$ is a non-empty domain, $I$ is an interpretation function for $\sigma$ on $D$, and $\mathcal{D} \subseteq \mathcal{P}(D)$ is the set of admissible subsets.
Note that if $\mathcal{D} = \mathcal{P}(D)$, the frame is identified with a standard structure.
Definition 4 (Frame Semantics)
Let $\mathcal{L}_{RTC}$ be the language based on the first-order signature $\sigma$. $\mathcal{L}_{RTC}$ formulas are interpreted in frames as in standard structures, except for the following clause concerning the $RTC$ operator:

$M, v \models (RTC_{x,y}\,\varphi)(s,t)$ if for every $A \in \mathcal{D}$: if $v(s) \in A$ and, for every $a, b \in D$, $a \in A$ and $M, v[x := a, y := b] \models \varphi$ implies $b \in A$, then $v(t) \in A$.
We now restrict our set of structures to Henkin structures, which are frames whose set of admissible subsets satisfies some closure conditions.
Definition 5 (Henkin structures)
Let $\mathcal{L}_{RTC}$ be the language based on $\sigma$. A Henkin structure is a $\sigma$-frame $M$ closed under parametric definability, i.e., for each $\mathcal{L}_{RTC}$ formula $\varphi$ and assignment $v$ in $M$, $\{a \in D \mid M, v[x := a] \models \varphi\} \in \mathcal{D}$.
We refer to the logical validity induced by considering only Henkin structures as the Henkin semantics. Note that under both the standard semantics and the Henkin semantics for the $RTC$ operator, equality is definable as follows:

(1) $s = t \;:=\; (RTC_{x,y}\,\mathit{false})(s,t)$

Thus, we do not include it explicitly in our logical languages.
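Under the standard semantics this definability claim can be checked mechanically on a finite domain: with $\mathit{false}$ as the iterated formula, no chain of positive length exists, so only the reflexive case remains. A small self-contained sketch (domain and helper names are our own illustrative choices):

```python
def rtc_holds(domain, phi, a, b):
    """Finite-domain reading of (RTC_{x,y} phi)(s,t): reflexivity,
    or a finite non-empty chain of phi-steps from a to b."""
    reached, frontier = {a}, {a}
    while frontier:
        frontier = {y for x in frontier for y in domain
                    if phi(x, y) and y not in reached}
        reached |= frontier
    return a == b or b in reached

# With phi := false no step is ever possible, so the closure
# relates a to b exactly when a = b:
false_rel = lambda x, y: False
dom = range(4)
print(all(rtc_holds(dom, false_rel, a, b) == (a == b)
          for a in dom for b in dom))          # True
```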
To obtain the full inductive expressivity we must allow the formation of the transitive closure not only of binary relations, but of relations of any arity. In [AvronTC03] it was shown that taking such an $RTC^n$ operator for every $n$ (instead of just for $n = 1$) results in a more expressive logic, namely one that captures all finitary inductive definitions and relations. Since having just one $RTC$ operator is more convenient from a proof-theoretical point of view, we here instead incorporate the notion of ordered pairs and use it to encode such operators. For example, writing $\langle s, t\rangle$ for the application of the pairing function to $s$ and $t$, the formula $(RTC^{2}_{x_1,x_2,y_1,y_2}\,\varphi)(s_1, s_2, t_1, t_2)$ can be encoded by:

$(RTC_{x,y}\;\exists x_1, x_2, y_1, y_2.\; x = \langle x_1, x_2\rangle \wedge y = \langle y_1, y_2\rangle \wedge \varphi)(\langle s_1, s_2\rangle, \langle t_1, t_2\rangle)$

Accordingly, we assume languages that explicitly contain a pairing function, and restrict to structures that interpret it as such (i.e. the admissible structures).
Definition 6 (Admissible Structures)
Let $\sigma$ be a signature that contains some constant $c$ and a binary function symbol, denoted by $\langle \cdot, \cdot \rangle$. Let $\mathcal{L}_{RTC}$ be a language based on $\sigma$. A structure $M$ with domain $D$ for $\mathcal{L}_{RTC}$ is called admissible if:

(2) $\langle a_1, a_2 \rangle = \langle b_1, b_2 \rangle$ implies $a_1 = b_1$ and $a_2 = b_2$, for all $a_1, a_2, b_1, b_2 \in D$

(3) $\langle a_1, a_2 \rangle \neq c$ for all $a_1, a_2 \in D$
For such languages we consider two induced semantics: admissible standard semantics and admissible Henkin semantics, obtained by restricting the (firstorder part of the) structures to be admissible.
3 Proof Systems for $\mathcal{L}_{RTC}$
In this section, we define two proof systems for $\mathcal{L}_{RTC}$. The first is a finitary proof system with an explicit induction rule for $RTC$ formulas. The second is an infinitary proof system, in which $RTC$ formulas are simply unfolded, and inductive arguments are represented via infinite-descent-style constructions. We show the soundness and completeness of these proof systems, and also compare their provability relations.
The systems for $\mathcal{L}_{RTC}$ defined below are extensions of $LK$, the sequent calculus for classical first-order logic (see, e.g., [Gentzen1935, takeuti2013proof]). A sequent is an expression of the form $\Gamma \Rightarrow \Delta$, where we here take $\Gamma$ and $\Delta$ to be finite sets of formulas. (In [Cohen2017Henkin] they are multisets, whereas in [brotherston2010sequent] they are sets.) Here we also take $LK$ to include the substitution rule, but we do not include the following standard rules for equality since, as we show below, they are admissible in our systems:

$\dfrac{}{\Gamma \Rightarrow \Delta,\, t = t}\;(=\!R) \qquad \dfrac{\Gamma[t/u],\, s = t \Rightarrow \Delta[t/u]}{\Gamma[s/u],\, s = t \Rightarrow \Delta[s/u]}\;(=\!L)$
3.1 The Finitary Proof System
We here present the finitary proof system $RTC_G$ for $\mathcal{L}_{RTC}$. For more details see [Cohen2014AL, cohen2015middle].
Definition 7
The proof system $RTC_G$ for $\mathcal{L}_{RTC}$ is defined by adding to $LK$ the following inference rules:

(4) $\dfrac{}{\Gamma \Rightarrow \Delta,\,(RTC_{x,y}\,\varphi)(s,s)}$

(5) $\dfrac{\Gamma \Rightarrow \Delta,\,(RTC_{x,y}\,\varphi)(s,r) \qquad \Gamma \Rightarrow \Delta,\,\varphi[r/x,\,t/y]}{\Gamma \Rightarrow \Delta,\,(RTC_{x,y}\,\varphi)(s,t)}$

(6) $\dfrac{\Gamma,\,\psi,\,\varphi \Rightarrow \Delta,\,\psi[y/x]}{\Gamma,\,\psi[s/x],\,(RTC_{x,y}\,\varphi)(s,t) \Rightarrow \Delta,\,\psi[t/x]}$

In all the rules we assume that the terms which are substituted are free for substitution, and that no forbidden capture occurs. In Rule (6), $x$ should not occur free in $\Gamma$ or $\Delta$, and $y$ should not occur free in $\Gamma$, $\Delta$, or $\psi$.
3.2 Infinitary Proof Systems
Definition 8
The infinitary proof system $RTC^{\omega}_G$ for $\mathcal{L}_{RTC}$ is defined like $RTC_G$, but replacing Rule (6) by:

(7) $\dfrac{\Gamma,\, s = t \Rightarrow \Delta \qquad \Gamma,\,(RTC_{x,y}\,\varphi)(s,z),\,\varphi[z/x,\,t/y] \Rightarrow \Delta}{\Gamma,\,(RTC_{x,y}\,\varphi)(s,t) \Rightarrow \Delta}$

where $z$ is fresh, i.e. does not occur free in $\Gamma$, $\Delta$, $\varphi$, $s$, or $t$. We call the formula $(RTC_{x,y}\,\varphi)(s,z)$ in the right-hand premise the ancestor of the principal formula, $(RTC_{x,y}\,\varphi)(s,t)$, in the conclusion.
There is an asymmetry between Rule (5), in which the intermediary stage is witnessed by an arbitrary term $r$, and Rule (7), where we use a fresh variable $z$. This is necessary to obtain the soundness of the cyclic proof system. It is used to show that when there is a countermodel for the conclusion of a rule, then there is also a countermodel for one of its premises that is, in a sense that we make precise below, ‘smaller’. Using a fresh $z$ allows us to pick from all countermodels of the premise, whereas if we allowed an arbitrary term instead, this might restrict the countermodels we can choose from, i.e. it might only leave ones larger than the smallest one for the conclusion. See Lemma 1 below for more details.
As for the finitary system, the rules for equality are admissible. In particular, the following formulation of the paramodulation rule (used in, e.g., [Brotherston07, BrotherstonBC08])

$(=\!L)\;\dfrac{\Gamma[t/u],\, s = t \Rightarrow \Delta[t/u]}{\Gamma[s/u],\, s = t \Rightarrow \Delta[s/u]}$

is subsumed by Rule (7), of which it can easily be seen to be a special case.
Proofs in this system are possibly infinite derivation trees. However, not all infinite derivations are proofs: only those that admit an infinite descent argument. Thus we use the terminology ‘pre-proof’ for derivations.
Definition 9 (Pre-proofs)
An $RTC^{\omega}_G$ pre-proof is a possibly infinite (i.e. non-wellfounded) derivation tree formed using the $RTC^{\omega}_G$ inference rules. A path in a pre-proof is a possibly infinite sequence of sequents $S_0, S_1, \dots$ such that $S_0$ is the root sequent of the proof, and $S_{i+1}$ is a premise of $S_i$ for each $i \ge 0$.
The following definitions tell us how to track formulas through a pre-proof, and allow us to formalize inductive arguments via infinite descent.
Definition 10 (Trace Pairs)
Let $\tau$ and $\tau'$ be formulas occurring in the left-hand sides of the conclusion $S$ and a premise $S'$, respectively, of (an instance of) an inference rule. $(\tau, \tau')$ is said to be a trace pair for $(S, S')$ if the rule is:

the (Subst) rule, and $\tau = \tau'[\theta]$, where $\theta$ is the substitution associated with the rule instance;

Rule (7), and either:

a) $\tau$ is the principal formula of the rule instance and $\tau'$ is the ancestor of $\tau$, in which case we say that the trace pair is progressing;

b) $S'$ is the left-hand premise and $\tau = \tau'$; or

c) $S'$ is the right-hand premise and $\tau = \tau'$;

any other rule, and $\tau = \tau'$.
Definition 11 (Traces)
A trace is a (possibly infinite) sequence of formulas $\tau_1, \tau_2, \dots$. We say that a trace follows a path $S_1, S_2, \dots$ in a pre-proof if, for some $k \ge 0$, each consecutive pair of formulas $(\tau_i, \tau_{i+1})$ is a trace pair for $(S_{i+k}, S_{i+k+1})$. If $(\tau_i, \tau_{i+1})$ is a progressing pair then we say that the trace progresses at $i$, and we say that the trace is infinitely progressing if it progresses at infinitely many points.
Proofs, then, are pre-proofs which satisfy a global trace condition.
Definition 12 (Infinite Proofs)
An $RTC^{\omega}_G$ proof is a pre-proof in which every infinite path is followed by some infinitely progressing trace.
Clearly, we cannot reason effectively about such infinite proofs in general. In order to do so we need to restrict our attention to those proof trees which are finitely representable. These are the regular infinite proof trees, which contain only finitely many distinct subtrees. They can be finitely represented as systems of recursive equations or, alternatively, as cyclic graphs [Courcelle83]. Note that a given regular infinite proof may have many different graph representations. One possible way of formalizing such proof graphs is as standard proof trees containing open nodes (called buds), to each of which is assigned a syntactically equal internal node of the proof (called a companion). Due to space limitation, we elide a formal definition of cyclic proof graphs and rely on the reader’s basic intuitions.
Definition 13 (Cyclic Proofs)
The cyclic proof system $CRTC^{\omega}_G$ for $\mathcal{L}_{RTC}$ is the subsystem of $RTC^{\omega}_G$ comprising all and only the finite and regular infinite proofs (i.e. those proofs that can be represented as finite, possibly cyclic, graphs).
Note that it is decidable whether a cyclic pre-proof satisfies the global trace condition, using a construction involving Büchi automata (see, e.g., [Brotherston07, Simpson2017]).
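To convey the idea behind such a decision procedure, the following sketch is a deliberate simplification (not the Büchi-automaton construction itself): it assumes a single trace follows each edge of the cyclic proof graph, so the global trace condition reduces to requiring that every cycle contains a progressing edge.

```python
def satisfies_trace_condition(edges):
    """Simplified global trace condition on a cyclic pre-proof graph:
    every cycle must contain a progressing edge.  `edges` maps a node
    to a list of (successor, progresses) pairs.  (The full check uses
    a Buchi-automaton inclusion test; here one trace follows each edge.)"""
    # Delete progressing edges; a surviving cycle would yield an infinite
    # path with no progress point, so the remainder must be acyclic.
    plain = {u: [v for v, prog in succs if not prog]
             for u, succs in edges.items()}
    WHITE, GREY, BLACK = 0, 1, 2          # DFS colours for cycle detection
    colour = dict.fromkeys(plain, WHITE)
    def acyclic_from(u):
        colour[u] = GREY
        for v in plain.get(u, []):
            if colour.get(v, WHITE) == GREY:
                return False              # back edge: non-progressing cycle
            if colour.get(v, WHITE) == WHITE and not acyclic_from(v):
                return False
        colour[u] = BLACK
        return True
    return all(colour[u] != WHITE or acyclic_from(u) for u in plain)

# A one-cycle pre-proof: only the back edge from bud to companion progresses.
ok  = {"root": [("bud", False)], "bud": [("root", True)]}
bad = {"root": [("bud", False)], "bud": [("root", False)]}
print(satisfies_trace_condition(ok), satisfies_trace_condition(bad))
# True False
```

Removing the progressing edges and testing acyclicity is sound for this simplified condition because any remaining cycle is exactly a cycle with no progress point.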
3.3 Soundness and Completeness
The rich expressiveness of TC logic entails that the effective system $RTC_G$, which is sound w.r.t. the standard semantics, cannot be complete for it (much like the case for LKID). It is, however, both sound and complete w.r.t. Henkin semantics.
Theorem 3.1 (Soundness and Completeness of $RTC_G$ [Cohen2017Henkin])
$RTC_G$ is sound for the standard semantics, and also sound and complete for Henkin semantics. We note that the corresponding soundness proof for LKID is rather complex, since it must handle the different types of mutual dependency between the inductive predicates. For $RTC_G$ the proof is much simpler, due to the uniform rules for the $RTC$ operator.
The infinitary system $RTC^{\omega}_G$, in contrast to the finitary system $RTC_G$, is both sound and complete w.r.t. the standard semantics. To prove soundness, we make use of the following notion of measure for $RTC$ formulas.
Definition 14 (Degree of Formulas)
For a standard model $M$ and valuation $v$, we define $\delta(M, v) = 0$ if $v(s) = v(t)$, and $\delta(M, v) = n$ if $v(s) \neq v(t)$ and $a_0, \dots, a_n$ is a minimal-length sequence of elements in the semantic domain such that $v(s) = a_0$, $v(t) = a_n$, and $M, v[x := a_i, y := a_{i+1}] \models \varphi$ for $0 \le i < n$. We call $\delta(M, v)$ the degree of the formula $(RTC_{x,y}\,\varphi)(s,t)$ with respect to the model $M$ and valuation $v$.
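In a finite model the degree is simply the length of a shortest $\varphi$-chain, so it can be computed by breadth-first search. The following sketch makes the measure concrete (the domain and step relation are our own illustrative choices):

```python
from collections import deque

def degree(domain, phi, a, b):
    """Degree of (RTC_{x,y} phi)(s,t) in a finite model: 0 if v(s) = v(t),
    otherwise the length n of a minimal phi-chain a0 = a, ..., an = b
    (None if the RTC formula is not satisfied at all)."""
    if a == b:
        return 0
    dist = {a: 0}                     # BFS distances = chain lengths
    queue = deque([a])
    while queue:
        x = queue.popleft()
        for y in domain:
            if phi(x, y) and y not in dist:
                dist[y] = dist[x] + 1
                if y == b:
                    return dist[y]    # BFS finds a minimal-length chain
                queue.append(y)
    return None

dom = range(8)
step = lambda x, y: y == x + 2        # phi(x, y) := y = x + 2
print(degree(dom, step, 0, 6))        # 3: chain 0 -> 2 -> 4 -> 6
print(degree(dom, step, 1, 6))        # None: parity mismatch, no chain
```

Lemma 1 below can then be read as saying that countermodels can always be chosen so that this measure never increases along a trace, and strictly decreases at progress points.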
Soundness then follows from the following fundamental lemma.
Lemma 1 (Descending Countermodels)
If there exists a standard model $M$ and valuation $v$ that invalidate the conclusion $S$ of (an instance of) an $RTC^{\omega}_G$ inference rule, then there exists a standard model $M'$ and valuation $v'$ that invalidate some premise $S'$ of the rule; and if $(\tau, \tau')$ is a trace pair for $(S, S')$ then the degree of $\tau'$ w.r.t. $\langle M', v'\rangle$ is no greater than the degree of $\tau$ w.r.t. $\langle M, v\rangle$. Moreover, if $(\tau, \tau')$ is a progressing trace pair then it is strictly smaller.