# Action Logic is Undecidable

Action logic is the algebraic logic (inequational theory) of residuated Kleene lattices. This logic involves the Kleene star, axiomatized by an induction scheme. For a stronger system, which uses an ω-rule instead (infinitary action logic), Buszkowski and Palka (2007) proved Π₁⁰-completeness (and thus undecidability). Decidability of action logic itself was an open question, raised by D. Kozen in 1994. In this article, we show that it is undecidable, and more precisely Σ₁⁰-complete. We also prove the same complexity results for all recursively enumerable logics between action logic and infinitary action logic; for fragments of these logics with only one of the two lattice (additive) connectives; and for action logic extended with the law of distributivity.


## 1 Introduction

### 1.1 Action Lattices and Their Logics

Residuated Kleene lattices (RKLs), or action lattices, are lattice structures extended simultaneously with residuals (division operations w.r.t. a pre-order) and iteration (Kleene star). Residuals originate in abstract algebra [30, 47]; then they were introduced to logic as the central component of the Lambek calculus [35] for syntactic analysis of natural language. Nowadays residuals are viewed as a natural algebraic interpretation of implication in substructural logics [38, 18, 14, 1].

The story of iteration comes from the seminal work of S. C. Kleene [21], whence its second name, “Kleene star.” The Kleene star is one of the most interesting algebraic operations in theoretical computer science. Being of an inductive nature, it extends a purely propositional, algebraic logic setting with features usually found in more expressive systems, such as arithmetic or higher-order type theories.

The notion of residuated Kleene algebra (RKA), or action algebra, was introduced by V. Pratt [43]. Action algebras lack one of the lattice connectives (meet); it was added by D. Kozen [24], who thus gave the definition of action lattices.

The formal definition is as follows.¹

¹We use the following notation: ∨ and ∧ for lattice operations, ⋅ for product, and ∖ and / for residuals. In the literature, notation varies: for example, ∨ can be replaced by + (as in regular expressions), and residuals can be written as directed implications (→ and ←), etc. The Kleene star, however, is always denoted by ∗.

###### Definition 1.

An action lattice is a structure ⟨A; ⪯, ∨, ∧, ⋅, 1, 0, ∖, /, ∗⟩, where:

1. ⟨A; ⪯, ∨, ∧⟩ is a lattice, and 0 is its minimal element (0 ⪯ a for any a ∈ A);

2. ⟨A; ⋅, 1⟩ is a monoid;

3. ∖ and / are residuals of ⋅ w.r.t. ⪯, i.e.,

 a ⪯ c/b ⟺ a⋅b ⪯ c ⟺ b ⪯ a∖c;

4. a∗ is the least element b such that 1 ∨ a⋅b ⪯ b (in other words: 1 ⪯ a∗ and a⋅a∗ ⪯ a∗; if 1 ⪯ b and a⋅b ⪯ b, then a∗ ⪯ b).

The presence of residuals makes many desired properties of our algebras automatically true, so we do not need to postulate them explicitly. These include:

• monotonicity of ⋅ w.r.t. ⪯ (shown by J. Lambek [35]); residuals are monotone in one argument and anti-monotone in the other;

• despite the asymmetry of the condition for the Kleene star, the dual one also holds: a∗ is also the least b such that 1 ∨ b⋅a ⪯ b (shown by Pratt [43]); without residuals, there exist distinct left and right Kleene algebras [22];

• the zero element is the annihilator w.r.t. ⋅: a⋅0 = 0⋅a = 0 for any a.

Kleene [21] informally interpreted elements of a Kleene algebra as types of events. This interpretation gives an intuition for the Kleene algebra operations: a⋅b means event a followed by event b; a∨b means an event which is either a or b; a∗ is a repeated several times (maybe zero times²); a ⪯ b means that a is a more specific type of event than b. (²Wishing to avoid the empty event, “nothing happened,” Kleene considered a compound connective a∗b, meaning “a several times, followed by b.”) Residuals also fit this paradigm. Namely, a∖b (resp., b/a) could be interpreted as follows: this is an event which, if preceded (resp., followed) by an event of type a, becomes an event of type b.

The original setting of Kleene algebras included only three operations: ∨, ⋅, and ∗. Adding residuals and meet was motivated by the fact that the classes of algebras in the extended setting happen to have better properties than the original ones. Namely, residuated Kleene algebras form a finitely based variety [43], while Kleene algebras without residuals do not [44, 10]. For RKLs, the algebra of matrices over such a lattice is also an RKL, while this does not hold for RKAs (without meet) [24].

In computer science, the usage of Kleene algebras and their extensions is connected to reasoning about program correctness. Remarkable examples include Kleene algebras with tests [26], concurrent Kleene algebras [16], and nominal Kleene algebras [13, 6, 28]. Residuated Kleene algebras or lattices could also, in principle, have such applications; however, there are undecidability results which make this problematic. One such negative result is presented in this article.

Standard examples of action lattices include the algebra of formal languages over an alphabet and the algebra of binary relations on a set (in the latter, ∗ is reflexive-transitive closure). Action lattices of these two classes are *-continuous in the sense of the following definition:
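The second of these standard models can be experimented with directly. The following Python sketch (ours, not from the paper; all function names are assumptions) implements binary relations on a small finite set and checks the residuation law of Definition 1 mechanically.

```python
# Binary relations on {0, ..., N-1} form an action lattice: join/meet are
# union/intersection, the product is relational composition, 1 is the
# identity relation, 0 the empty relation, and the Kleene star is
# reflexive-transitive closure.
from itertools import product

N = 3
PAIRS = list(product(range(N), repeat=2))

def compose(a, b):
    """Relational composition a⋅b."""
    return frozenset((x, z) for (x, y1) in a for (y2, z) in b if y1 == y2)

def star(a):
    """Reflexive-transitive closure of a (the Kleene star in this model)."""
    r = frozenset((x, x) for x in range(N))  # start from the identity
    while True:
        nxt = r | compose(r, a)
        if nxt == r:
            return r
        r = nxt

def under(a, c):
    """a ∖ c: the largest b with compose(a, b) ⊆ c."""
    return frozenset((x, y) for (x, y) in PAIRS
                     if all((z, y) in c for z in range(N) if (z, x) in a))

def over(c, b):
    """c / b: the largest a with compose(a, b) ⊆ c."""
    return frozenset((x, y) for (x, y) in PAIRS
                     if all((x, z) in c for z in range(N) if (y, z) in b))

# Check the residuation law  a ⪯ c/b  ⟺  a⋅b ⪯ c  ⟺  b ⪯ a∖c
# on concrete relations (⪯ is inclusion, i.e. <= on frozensets).
a = frozenset({(0, 1), (1, 2)})
b = frozenset({(1, 0)})
c = frozenset({(0, 0), (1, 1), (0, 2)})
lhs = a <= over(c, b)
mid = compose(a, b) <= c
rhs = b <= under(a, c)
assert lhs == mid == rhs
```

Exhaustive checks over all triples of relations would work the same way; here only one instance is tested, which already illustrates how residuals are computed pointwise from composition and inclusion.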

###### Definition 2.

An action lattice is *-continuous, if, for any a ∈ A, a∗ = sup{aⁿ ∣ n ∈ ℕ} (where ℕ denotes the set of all natural numbers, including 0).

In the presence of residuals, we do not need the context in the definition of *-continuity (for Kleene algebras without residuals, the condition is as follows: b⋅a∗⋅c = sup{b⋅aⁿ⋅c ∣ n ∈ ℕ}); *-continuity makes the other conditions on the Kleene star (item 4 in Definition 1) redundant. Non-*-continuous action lattices also exist; concrete examples are given in [32].
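The reason the context can be dropped is the standard adjunction argument (spelled out here for completeness, not taken from the text): for fixed b and c, the map x ↦ b⋅x⋅c is a left adjoint by residuation, hence it preserves all existing suprema:

 b⋅(sup{aⁿ ∣ n ∈ ℕ})⋅c ⪯ y ⟺ sup{aⁿ ∣ n ∈ ℕ} ⪯ b∖y/c ⟺ (∀n) aⁿ ⪯ b∖y/c ⟺ (∀n) b⋅aⁿ⋅c ⪯ y,

so a∗ = sup{aⁿ ∣ n ∈ ℕ} already implies b⋅a∗⋅c = sup{b⋅aⁿ⋅c ∣ n ∈ ℕ}.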

We are interested in (in)equational theories, or algebraic logics, of action lattices. Statements of such theories are of the form α ⪯ β, where α and β are terms (formulae) constructed from variables and constants 0 and 1, using the operations of action lattices: ∨, ∧, ⋅, ∖, /, ∗. Statements which are true under all interpretations of variables over arbitrary action lattices form action logic, denoted by ACT. If we consider only *-continuous action lattices, we get infinitary action logic ACTω, an extension of ACT. Logics for weaker structures, which lack some of the operations, are obtained naturally as fragments of ACT or ACTω.

The motivation for considering only inequational theories is as follows. If one tries to raise the expressive power a little bit and considers Horn theories, which operate with statements of the form (α₁ ⪯ β₁ and … and αₙ ⪯ βₙ) ⇒ (α ⪯ β), complexity immediately rises to the highest possible level. Even for the language of Kleene algebras (∨, ⋅, ∗), the Horn theory is Π₁¹-complete in the *-continuous case and Σ₁⁰-complete in the general case [27]. On the other side, in the language of residuated semigroups (⋅, ∖, /), without the Kleene star and even without lattice operations, the Horn theory also happens to be Σ₁⁰-complete [7]. Thus, only for inequational theories could we expect interesting complexity results. As mentioned above, we consider these inequational theories as substructural propositional logics, see [14], sound and complete w.r.t. the given algebraic semantics.

For the *-continuous case, Buszkowski [9] and Palka [40] proved undecidability and established the exact complexity of the inequational theory:

###### Theorem 1 (W. Buszkowski, E. Palka, 2007).

ACTω is Π₁⁰-complete.

Here the lower bound is due to Buszkowski and the upper one is due to Palka; recently A. Das and D. Pous [11] gave another proof of the upper bound for ACTω, based on non-well-founded proofs.

Notice that only the combination of residuals and iteration gives this undecidability effect. The logic of residuated lattices without iteration (that is, the multiplicative-additive Lambek calculus) is decidable and PSPACE-complete [19, 20]; without ∧ and ∨ it is NP-complete [41]. For Kleene algebras (in the language of ∨, ⋅, ∗), the logic of *-continuous Kleene algebras coincides with the logic of all Kleene algebras (but the classes of algebras do not), and this logic is also PSPACE-complete [23, 29]. For Kleene lattices (∨, ∧, ⋅, ∗), complexity is, to the best of the author’s knowledge, an open problem. However, there are decidability results on more specific classes of Kleene lattices [3, 5, 37, 12] (lattices in all these classes are distributive, which is not generally true for Kleene lattices), which makes it plausible that the logic of Kleene lattices is also decidable. In contrast, lattice operations are not crucial for undecidability: the logic of *-continuous residuated monoids with iteration (⋅, ∖, /, ∗) is also Π₁⁰-complete [33].

The question of decidability of ACT, the logic of the whole class of action lattices, remained open; it was first raised by D. Kozen in 1994 [24]. We give a negative answer: ACT is undecidable.

This undecidability result for ACT was presented at LICS 2019 and published in its proceedings [34]. This article features, besides undecidability, the following new results:

1. Σ₁⁰-completeness for ACT and for all recursively enumerable logics in the range between ACT and ACTω;

2. analogous results for fragments without ∨ and for fragments without ∧;

3. analogous results for distributive versions of ACT and its extensions, up to the distributive version of ACTω.

### 1.2 Calculi: MALC, ACTω, and ACT

Let us start with axiomatizing the logics introduced semantically in the previous subsection. Both ACT and ACTω are extensions of the multiplicative-additive Lambek calculus (MALC), which is the logic of residuated lattices without the Kleene star [38].

We present MALC in the form of a Gentzen-style sequent calculus. Sequents of MALC are expressions of the form Π ⊢ γ, where γ is a formula (built from variables and constants 0 and 1 using residuated lattice operations) and Π is a finite, possibly empty, sequence of formulae. The empty sequence is denoted by Λ. As usual, Π is called the antecedent and γ the succedent of the sequent. A sequent α₁, …, αₙ ⊢ β is interpreted as α₁⋅…⋅αₙ ⪯ β; Λ ⊢ β means 1 ⪯ β. Axioms and inference rules of MALC are as follows.

 \infer[(ax)]{α ⊢ α}{} \qquad \infer[(0⊢)]{Γ, 0, Δ ⊢ γ}{} \qquad \infer[(1⊢)]{Γ, 1, Δ ⊢ γ}{Γ, Δ ⊢ γ} \qquad \infer[(⊢1)]{Λ ⊢ 1}{}

 \infer[(∖⊢)]{Γ, Π, α∖β, Δ ⊢ γ}{Π ⊢ α \quad Γ, β, Δ ⊢ γ} \qquad \infer[(⊢∖)]{Π ⊢ α∖β}{α, Π ⊢ β}

 \infer[(/⊢)]{Γ, β/α, Π, Δ ⊢ γ}{Π ⊢ α \quad Γ, β, Δ ⊢ γ} \qquad \infer[(⊢/)]{Π ⊢ β/α}{Π, α ⊢ β}

 \infer[(⋅⊢)]{Γ, α⋅β, Δ ⊢ γ}{Γ, α, β, Δ ⊢ γ} \qquad \infer[(⊢⋅)]{Π, Δ ⊢ α⋅β}{Π ⊢ α \quad Δ ⊢ β}

 \infer[(∧⊢), i=1,2]{Γ, α₁∧α₂, Δ ⊢ γ}{Γ, αᵢ, Δ ⊢ γ} \qquad \infer[(⊢∧)]{Π ⊢ α₁∧α₂}{Π ⊢ α₁ \quad Π ⊢ α₂}

 \infer[(∨⊢)]{Γ, α₁∨α₂, Δ ⊢ γ}{Γ, α₁, Δ ⊢ γ \quad Γ, α₂, Δ ⊢ γ} \qquad \infer[(⊢∨), i=1,2]{Π ⊢ α₁∨α₂}{Π ⊢ αᵢ}
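Read bottom-up, every cut-free rule removes one connective, so exhaustive backward proof search over all context splits terminates. As an illustration (ours, not from the paper; the encoding of formulas and all names are assumptions), here is a naive decision procedure for the constant-free fragment of MALC. It is exponential-time, while, as noted above, the logic itself is PSPACE-complete.

```python
# Formulas: ('v', name) for variables, (op, left, right) for compound
# formulas, with op in {'\\', '/', '*', '&', '|'}.
# Here '*' stands for the product (not the Kleene star, absent from MALC).
from functools import lru_cache

def provable(gamma, goal):
    """Is the sequent gamma ⊢ goal derivable in constant-free MALC?"""
    return _prove(tuple(gamma), goal)

@lru_cache(maxsize=None)
def _prove(gamma, goal):
    if len(gamma) == 1 and gamma[0] == goal:        # (ax)
        return True
    # right rules
    if goal[0] == '\\' and _prove((goal[1],) + gamma, goal[2]):
        return True                                 # (⊢\)
    if goal[0] == '/' and _prove(gamma + (goal[2],), goal[1]):
        return True                                 # (⊢/)
    if goal[0] == '*':                              # (⊢·): try all splits
        for i in range(len(gamma) + 1):
            if _prove(gamma[:i], goal[1]) and _prove(gamma[i:], goal[2]):
                return True
    if goal[0] == '&' and _prove(gamma, goal[1]) and _prove(gamma, goal[2]):
        return True                                 # (⊢∧)
    if goal[0] == '|' and (_prove(gamma, goal[1]) or _prove(gamma, goal[2])):
        return True                                 # (⊢∨)
    # left rules: pick an occurrence of a compound formula
    for k, f in enumerate(gamma):
        rest = lambda x: gamma[:k] + x + gamma[k+1:]
        if f[0] == '*' and _prove(rest((f[1], f[2])), goal):
            return True                             # (·⊢)
        if f[0] == '&' and any(_prove(rest((f[i],)), goal) for i in (1, 2)):
            return True                             # (∧⊢)
        if f[0] == '|' and all(_prove(rest((f[i],)), goal) for i in (1, 2)):
            return True                             # (∨⊢)
        if f[0] == '\\':                            # (\⊢): Γ, Π, α\β, Δ ⊢ γ
            for i in range(k + 1):
                if _prove(gamma[i:k], f[1]) and \
                   _prove(gamma[:i] + (f[2],) + gamma[k+1:], goal):
                    return True
        if f[0] == '/':                             # (/⊢): Γ, β/α, Π, Δ ⊢ γ
            for j in range(k + 1, len(gamma) + 1):
                if _prove(gamma[k+1:j], f[2]) and \
                   _prove(gamma[:k] + (f[1],) + gamma[j:], goal):
                    return True
    return False

a, b = ('v', 'a'), ('v', 'b')
assert provable([a, ('\\', a, b)], b)          # a, a\b ⊢ b
assert provable([a], ('/', b, ('\\', a, b)))   # a ⊢ b/(a\b) (type raising)
assert not provable([a], b)
```

Memoization via `lru_cache` is sound here because derivability of a sequent does not depend on the search path; termination follows from the connective count strictly decreasing at every recursive call.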

The logic for action lattices, ACT (action logic), is obtained from MALC by adding rules for the Kleene star, axiomatizing it via an induction scheme.

The logic for *-continuous action lattices, ACTω (infinitary action logic), is an extension of MALC with the following rules:

 \infer[(∗⊢)ω]{Γ, α∗, Δ ⊢ γ}{(Γ, αⁿ, Δ ⊢ γ)_{n∈ℕ}} \qquad \infer[(⊢∗), n ≥ 0]{Π₁, …, Πₙ ⊢ α∗}{Π₁ ⊢ α \quad ⋯ \quad Πₙ ⊢ α}

All these systems are sound and complete w.r.t. the corresponding classes of algebras, via the Lindenbaum–Tarski construction.

In ACTω, the left rule for the Kleene star is an ω-rule, with countably many premises. The set of sequents derivable in ACTω is defined as the smallest set including axioms and closed under rule applications. Derivations in ACTω are possibly infinite, but well-founded trees (infinite branches are forbidden).

Notice that we include cut as an official rule of the system only in ACT. Indeed, in ACTω, as shown by Palka [40], cut is eliminable, while for ACT no cut-free system is known. Attempts to construct such a system were made by P. Jipsen [17] and M. Pentus [42]. Buszkowski [9] showed that in Jipsen’s system cut is not eliminable; neither is it in Pentus’ systems. Constructing a cut-free system for ACT is an open problem. Due to the lack of cut elimination, we also do not know how to axiomatize elementary fragments of ACT (in restricted sublanguages) in order to guarantee conservativity.

### 1.3 Some Inspiration: Circular Proofs for ACT

Before going further to proving undecidability of ACT, let us reveal some of the intuitions behind this proof. These intuitions are rooted in non-well-founded and circular proof systems for ACTω and ACT. These systems were introduced by A. Das and D. Pous [11]; for the identity-free versions of the calculi, where empty antecedents are forbidden and the Kleene star is replaced by positive iteration (the Kleene plus), they independently appear in [31].

Let us first define the non-well-founded system for ACTω, denoted by ACT∞. This system arises from MALC by adding the following rules for the Kleene star:

 \infer[(∗⊢)]{Γ, α∗, Δ ⊢ γ}{Γ, Δ ⊢ γ \quad Γ, α, α∗, Δ ⊢ γ} \qquad \infer[(⊢∗)]{Λ ⊢ α∗}{} \qquad \infer[(⊢∗)]{Π, Δ ⊢ α∗}{Π ⊢ α \quad Δ ⊢ α∗}

The cut rule is also a priori present. All these rules are finitary. As a trade-off, we now allow non-well-founded derivations (derivations with infinite branches). The derivations should satisfy the following correctness condition: on each infinite branch of the proof, there eventually starts and continues a thread of a formula of the form α∗ in the antecedent, which undergoes the left rule for the Kleene star infinitely many times.

As shown by Das and Pous [11], ACT∞ enjoys cut elimination and is equivalent (that is, derives the same set of sequents) to ACTω. Moreover, the circular fragment of ACT∞ happens to be equivalent to ACT. The definition of the circular fragment is as follows: a derivation in ACT∞ (obeying the correctness condition) is called regular, if it contains only a finite number of non-isomorphic subtrees. The term “circular” comes from the following interpretation of regularity: once in an infinite derivation tree we come across a subtree which is isomorphic to a tree it contains, we can replace the inner subtree by a backlink to the root of the bigger tree (which is the same). Thus, a regular proof gets represented as a finite object, which, however, is now a graph with cycles, not a tree.

Using cycles in derivations may seem philosophically weird, reminiscent of circuli vitiosi, but the correctness condition guarantees that such proofs are sound. Unlike ACT∞, its circular fragment does not enjoy cut elimination: if one applies the cut elimination procedure, a regular proof could become irregular. The circular fragment with cut, however, is equivalent to ACT [11].

This circular system is not formally used in this article: we rather use the traditional formulation of ACT as presented in the previous subsection. However, it provides an inspiration for our undecidability proofs. Buszkowski’s proof of Π₁⁰-hardness of ACTω is based on encoding the totality problem for context-free grammars, which, in its turn, allows encoding of non-halting of Turing machines. Thus, for a Turing machine M and its input word x, one can construct a sequent which is derivable in ACTω if and only if M does not halt on x. Informally, one can say that “ACTω can prove non-halting of M on x.” Being a weaker system, ACT cannot prove non-halting of M on x in all cases where it is true: otherwise, ACT would also be Π₁⁰-hard, which is not the case (it is recursively enumerable). However, in some easy cases proving non-halting in ACT is possible. We can formulate this as the following motto:

circular proofs for circular behaviour.

This roughly means that if M goes into a cycle on input x (a very specific kind of non-halting), then the proof of non-halting on x also becomes circular and thus can be carried out in ACT. Since circular behaviour of M on x is undecidable, this leads to undecidability of ACT.

We shall implement this general strategy with the following important modifications.

1. Instead of considering cycling in general, we restrict ourselves to trivial cycling, where the machine just gets stuck: once it reaches a specific state, the rules prescribe it to stay in this state forever, neither moving nor altering the data on the tape.

2. We have no good tools for analysing proofs in ACT (neither a cut-free system, nor reasonable semantics). Therefore, while we can establish the implication from circular behaviour of M on x to derivability of the corresponding sequent in ACT, proving the “backwards implication,” from derivability to circular behaviour, becomes problematic. We overcome this issue by using an indirect technique for proving undecidability and complexity, based on the notions of recursive inseparability (Subsection 2.4) and effective inseparability (Section 3).

To conclude the introductory part, let us discuss one issue with the circular system. As one can notice, the rules for the Kleene star in this system are asymmetric: they unfold α∗ as α⋅α∗. On the other hand, as mentioned in Subsection 1.1, every left RKL is necessarily also a right one. Thus, it looks plausible that adding the “right” versions of these rules, which unfold α∗ as α∗⋅α, would not alter the set of derivable sequents.

This is indeed true for ACT∞, but not for its circular fragment!

In the circular fragment, replacing the “left” rules for the Kleene star with the “right” ones yields the same logic, ACT. However, the circular calculus including both the “left” and the “right” rules derives some sequents which are not derivable in ACT. An example of such a sequent is given in [32]. Thus, the circular system with both “left” and “right” rules is a natural example of an intermediate system strictly between ACT and ACTω. Indeed, it does not coincide with ACT due to an explicit counterexample, and it does not coincide with ACTω, because the latter is Π₁⁰-hard, while circular systems are recursively enumerable. In what follows, we refer to this system, with two sets of rules for the Kleene star, as the extended circular system.

## 2 Undecidability of ACT

Buszkowski [9] proves Π₁⁰-hardness (and thus undecidability) of the derivability problem for ACTω by encoding the non-halting problem for deterministic Turing machines. In this section we extend Buszkowski’s result and prove undecidability for a range of logics.

We consider logics in the language of ACT and ACTω in a broad sense, just as arbitrary sets of sequents. Such a logic will be denoted by L. The words “a sequent is derivable in L” mean that the sequent belongs to L.

###### Theorem 2.

If ACT ⊆ L ⊆ ACTω, then L is undecidable.

In particular, we get undecidability for the extended circular system introduced in the previous section, which is strictly between ACT and ACTω, and, most importantly, for ACT itself:

ACT is undecidable.

### 2.1 Encoding I: Behaviour of Turing Machines

The proof of Theorem 2 is based on encoding the behaviour of deterministic Turing machines via the totality property of context-free grammars. Usually, in undecidability proofs, one cares about halting vs. non-halting of a Turing machine on a given input. This is the way Buszkowski’s [9] proof goes. In contrast, we distinguish three possible kinds of behaviour of a Turing machine M on input x:

1. M halts on x;

2. M trivially cycles on x (we define this notion below);

3. M, when running on x, does not halt for some other reason.

In what follows, we consider only deterministic, single-tape, single-head Turing machines. For a Turing machine M, let A denote its internal alphabet (the input word is given in the external alphabet, which is a subset of A). Let Q be the set of states, with a designated initial state q₀. A configuration of M includes the following information: (1) the word, over alphabet A, written in the internal memory; (2) the current state of M, and (3) which letter of this word is currently being observed. We encode such configurations by words over A ∪ Q: if the word on the tape is uav (u, v ∈ A∗, a ∈ A), the machine is in state q and observes the letter a, then this configuration is encoded as uqav.

The Turing machine is controlled by a finite set of rules of the form qa → q′a′d, where q, q′ ∈ Q, a, a′ ∈ A, and d ∈ {L, R, N}. Such a rule is applied when M is in state q observing letter a. The rule commands M to replace a with a′, change the state to q′, and perform a move according to d: if d = L, move one cell left; if d = R, move one cell right; if d = N, no move is performed. For technical reasons, we consider Turing machines with the tape growing only to the right. The left end is fixed, and if the machine tries to go left (d = L) when it is already observing the leftmost cell, it halts. In contrast, if the machine is at the rightmost cell and applies a rule with d = R, then the tape is extended by one cell, which is filled with a designated blank symbol ␣. As M is deterministic, for each pair (q, a) there exists at most one rule of the form qa → q′a′d.
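The conventions just fixed (tape growing only to the right, halting at the left end or when no rule applies, a cycling state on which nothing changes) can be made concrete in a short simulator. This is our own illustrative sketch; the toy machine, state names, and the step bound are assumptions, not taken from the paper.

```python
# A minimal simulator for the Turing machines of this section.
BLANK = '_'

def step(rules, state, tape, pos):
    """One move; returns (state, tape, pos), or None if the machine halts."""
    key = (state, tape[pos])
    if key not in rules:
        return None                               # no applicable rule: halt
    new_state, new_letter, move = rules[key]      # move in {'L', 'R', 'N'}
    tape = tape[:pos] + [new_letter] + tape[pos+1:]
    if move == 'L':
        if pos == 0:
            return None                           # falling off the left end: halt
        pos -= 1
    elif move == 'R':
        pos += 1
        if pos == len(tape):
            tape = tape + [BLANK]                 # extend the tape with a blank
    return new_state, tape, pos

def run(rules, x, q0='q0', qc='qc', fuel=1000):
    """Run on input x; report 'halts', 'trivially cycles', or 'runs on'."""
    state, tape, pos = q0, list(x) if x else [BLANK], 0
    for _ in range(fuel):
        if state == qc:
            return 'trivially cycles'
        nxt = step(rules, state, tape, pos)
        if nxt is None:
            return 'halts'
        state, tape, pos = nxt
    return 'runs on'

# A machine that scans its input to the right and then enters the cycling
# state qc, on which the rules keep everything unchanged, as in the text.
rules = {('q0', 'a'): ('q0', 'a', 'R'),
         ('q0', BLANK): ('qc', BLANK, 'N'),
         ('qc', 'a'): ('qc', 'a', 'N'),
         ('qc', BLANK): ('qc', BLANK, 'N')}
assert run(rules, 'aaa') == 'trivially cycles'
assert run({}, 'aaa') == 'halts'
```

The `fuel` bound is only there to keep the sketch total; genuine non-halting behaviour other than trivial cycling is, of course, not decidable by any such bound.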

###### Definition 3.

A Turing machine M halts on input word x, if it reaches a configuration from which there is no next move (thus, we do not distinguish “successful” computations from those which halt by error).

The notion of trivial cycling is defined as follows. Let us suppose that every Turing machine includes a special cycling state q_c, with rules q_c a → q_c a N for any a ∈ A: once M reaches q_c, it gets stuck and never changes the configuration. This requirement does not restrict the capabilities of Turing machines, since one can just make q_c unreachable.

###### Definition 4.

A Turing machine M trivially cycles on input word x, if M reaches the cycling state q_c while running on x.

The notion of trivial cycling is essentially equivalent to reachability of the designated state q_c. For our exposition, however, it is more convenient to consider the case of trivial cycling as a subcase of non-halting. Therefore, we force the Turing machine to get stuck in q_c forever and thus forbid halting after reaching q_c.

There is also a more general notion of cycling on a given input, when M returns to the same configuration (and therefore runs infinitely long). For our purposes, the more restrictive notion of trivial cycling is more appropriate.

Consider the united alphabet Σ = A ∪ Q ∪ {#} (we suppose that A ∩ Q = ∅ and # ∉ A ∪ Q).

###### Definition 5.

A protocol (computation history) of the execution of M on input x is the word #K₀#K₁#…#Kₙ# over Σ, where K₀ is the (code of the) initial configuration, and each Kᵢ₊₁ is the successor configuration of Kᵢ, that is, Kᵢ₊₁ is obtained from Kᵢ by applying the appropriate rule of M. The protocol is a halting one, if Kₙ has no successor. Otherwise, the protocol is incomplete.
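As an illustration of this definition, the following sketch (ours; the toy machine and all names are assumptions) builds the protocol word for a machine that makes one move and then halts, yielding a halting protocol.

```python
# Constructing the protocol word of Definition 5: configurations are
# encoded as u q a v (the state written just before the observed letter),
# and consecutive configurations are separated by '#'.
def protocol(rules, x, q0='q0', steps=10):
    tape, pos, state = list(x), 0, q0
    configs = []
    for _ in range(steps):
        configs.append(''.join(tape[:pos]) + state + ''.join(tape[pos:]))
        key = (state, tape[pos])
        if key not in rules:
            break                        # halting protocol: K_n has no successor
        state, tape[pos], move = rules[key]
        if move == 'R':
            pos += 1
            if pos == len(tape):
                tape.append('_')         # blank symbol
        elif move == 'L':
            pos -= 1                     # (left-edge halting not needed here)
    return '#' + '#'.join(configs) + '#'

# q0 a -> q1 b N, and no rule for (q1, b): the machine halts after one move.
rules = {('q0', 'a'): ('q1', 'b', 'N')}
assert protocol(rules, 'a') == '#q0a#q1b#'
```

The `steps` cap merely truncates the word for non-halting machines, producing an incomplete protocol prefix in that case.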

Some encodings, in order to simplify proofs a bit, make configurations in a protocol alternatingly reversed; however, in Kozen’s textbook [25] one can find an encoding without reversions.

Let us fix M and its input word x. Our aim is to describe all the words except the halting protocol of M on x by a context-free grammar G_{M,x}. Moreover, we shall provide an algorithm for constructing G_{M,x} from M and x.

We consider the following three classes of words which are not the halting protocol.

1. Words beginning with # which cannot even be a prefix of a halting protocol. These include the following four subclasses:

• words which include the cycling state q_c;

• words which include a block between #’s which is not a code of a configuration (that is, the block includes zero or more than one letters from Q, or its only letter from Q is the rightmost one, standing immediately before #);

• words which include a block of the form #K#K′#, where K′ is not the successor of K;

• words which start with #K#, where K is not the initial configuration (that is, K ≠ q₀x if x is non-empty and K ≠ q₀␣ if x is empty).

2. Possibly incomplete protocols and prefixes of protocols, also beginning with #. These include:

• words whose last symbol is not #;

• words of the form w#K#, where K is a configuration which has a successor and w is arbitrary.

3. Words not beginning with #.

Now we are ready to construct G_{M,x}, which is going to be a context-free grammar in Greibach [15] normal form. First we postulate rules for a non-terminal symbol U which will generate just all non-empty words:

 U ⇒ aU, U ⇒ a,

for all a ∈ Σ.

Next, construct a context-free grammar, in Greibach normal form, for all words of class 1, with the leftmost # removed. The most interesting case here is subclass 1.3. Words of this subclass can be recognized by a non-deterministic pushdown automaton, see [25, Lecture 35], and it is well known that any language recognized by a non-deterministic pushdown automaton is context-free. Removing the leftmost symbol from all words in this language does not affect context-freeness. Subclasses 1.1, 1.2, and 1.4 clearly form regular languages, and therefore are indeed context-free. Let our context-free grammar for words of class 1, with the leftmost # removed, be in Greibach normal form and have starting symbol Y.

Class 2 also forms a regular language: for subclass 2.1 it is obvious, and for subclass 2.2 one just builds a finite automaton which checks whether K is a correct configuration and a rule of M is applicable. Thus, there is a context-free grammar for words of class 2, with the leftmost # removed. Let this grammar also be in Greibach normal form, with non-terminals disjoint from the ones used for class 1 (and from U). Denote the starting symbol of this new grammar by Z.

Finally, we put all things together and construct G_{M,x} by adding the following rules:

 S ⇒ a for all a ∈ Σ,
 S ⇒ aU for all a ∈ Σ − {#} (this handles class 3),
 S ⇒ #YU,
 S ⇒ #Z,
 S ⇒ ##.

Notice that U appears in the production rule with Y, but not in the one with Z. As mentioned above, any word which has a prefix from class 1 is necessarily not the halting protocol. For class 2, this is not always the case.

The rule S ⇒ ## is necessary because Z handles only words of length greater than or equal to 3. Other words of length 2 are handled by the remaining rules for S.

By construction, G_{M,x} generates all non-empty words if and only if there exists no halting protocol, that is, if and only if M does not halt on x.

### 2.2 Some Derivable Rules

It will be convenient for us to consider the Kleene plus (positive iteration), defined as follows: α⁺ = α⋅α∗. In ACTω, the Kleene plus obeys the following rules:

 \infer[(⁺⊢)ω]{Γ, α⁺, Δ ⊢ γ}{(Γ, αⁿ, Δ ⊢ γ)_{n≥1}} \qquad \infer[(⊢⁺), n ≥ 1]{Π₁, …, Πₙ ⊢ α⁺}{Π₁ ⊢ α \quad ⋯ \quad Πₙ ⊢ α}

(The left rule is a combination of (⋅⊢) and the ω-rule for the Kleene star, and the right one combines (⊢⋅) and the right rule for the star.)
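In the finite relational model from Subsection 1.1, the Kleene plus is the transitive closure, and the defining equation α⁺ = α⋅α∗ can be checked directly. The following is our own sanity-check sketch, not part of the paper.

```python
# Binary relations on {0, 1, 2}: the star is reflexive-transitive closure,
# and composing with it once more yields the transitive closure (the plus).
def compose(a, b):
    return frozenset((x, z) for (x, y1) in a for (y2, z) in b if y1 == y2)

def star(a, n=3):
    r = frozenset((x, x) for x in range(n))   # identity relation
    while True:
        nxt = r | compose(r, a)
        if nxt == r:
            return r
        r = nxt

def plus(a):
    return compose(a, star(a))                # α⁺ = α⋅α∗

a = frozenset({(0, 1), (1, 2)})
# α⁺ contains α, α², ..., but not the identity pairs that α∗ adds.
assert plus(a) == frozenset({(0, 1), (1, 2), (0, 2)})
assert (0, 0) in star(a) and (0, 0) not in plus(a)
```

This also illustrates why the plus avoids the “empty event”: the identity pairs contributed by α∗ alone disappear after one more composition with α.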

We shall also consider conjunctions and disjunctions of finite sets of formulae. For a finite set Ξ = {ξ₁, …, ξₘ}, let ⋀Ξ = ξ₁ ∧ … ∧ ξₘ and ⋁Ξ = ξ₁ ∨ … ∨ ξₘ (the order of the ξᵢ’s does not matter due to associativity and commutativity of ∧ and ∨). We can generalize the rules for ∧ and ∨ in order to handle these “big” conjunctions and disjunctions:

 \infer[(⋀⊢), ξ∈Ξ]{Γ, ⋀Ξ, Δ ⊢ γ}{Γ, ξ, Δ ⊢ γ} \qquad \infer[(⊢⋀)]{Π ⊢ ⋀Ξ}{(Π ⊢ ξ)_{ξ∈Ξ}}

 \infer[(⋁⊢)]{Γ, ⋁Ξ, Δ ⊢ γ}{(Γ, ξ, Δ ⊢ γ)_{ξ∈Ξ}} \qquad \infer[(⊢⋁), ξ∈Ξ]{Π ⊢ ⋁Ξ}{Π ⊢ ξ}

These new “big” rules are obtained by applying the original “small” (binary) ones several times.

In order to facilitate the construction of derivations in ACT and ACTω, we introduce several auxiliary rules. These rules are going to be derivable using the rules of ACT (including cut), and thus valid in ACT and all its extensions (including ACTω). We start with inverting some of the rules.

###### Lemma 1.

The following rules are derivable in ACT:

 \infer[(⊢∖)inv]{α, Π ⊢ β}{Π ⊢ α∖β} \qquad \infer[(⊢/)inv]{Π, α ⊢ β}{Π ⊢ β/α} \qquad \infer[(⋅⊢)inv]{Γ, α, β, Δ ⊢ γ}{Γ, α⋅β, Δ ⊢ γ}

 \infer[(⊢∧)inv, i=1,2]{Π ⊢ αᵢ}{Π ⊢ α₁∧α₂} \qquad \infer[(∨⊢)inv, i=1,2]{Γ, αᵢ, Δ ⊢ γ}{Γ, α₁∨α₂, Δ ⊢ γ}

 \infer[(∗⊢)inv, n≥0]{Γ, αⁿ, Δ ⊢ γ}{Γ, α∗, Δ ⊢ γ} \qquad \infer[(+⊢)inv, n≥1]{Γ, αⁿ, Δ ⊢ γ}{Γ, α⁺, Δ ⊢ γ}
###### Proof.

All these rules are established by cut with the following sequents (respectively), which are derivable in ACT:

 α, α∖β ⊢ β;  β/α, α ⊢ β;  α, β ⊢ α⋅β;  α₁∧α₂ ⊢ αᵢ (i = 1, 2);  αᵢ ⊢ α₁∨α₂ (i = 1, 2);  αⁿ ⊢ α∗ (n ≥ 0);  αⁿ ⊢ α⁺ (n ≥ 1).

Notice that (∗⊢)inv and (+⊢)inv, being inversions of the ω-rules (for the Kleene star and the Kleene plus respectively), are derivable already in ACT.

Consecutive applications of (⊢∧)inv yield invertibility of the corresponding “big” rule, (⊢⋀); consecutive applications of (∨⊢)inv do the same for (⋁⊢):

 \infer[(⊢⋀)inv, ξ∈Ξ]{Π ⊢ ξ}{Π ⊢ ⋀Ξ} \qquad \infer[(⋁⊢)inv, ξ∈Ξ]{Γ, ξ, Δ ⊢ γ}{Γ, ⋁Ξ, Δ ⊢ γ}

Next, we present a fixpoint-style rule for the Kleene plus:

###### Lemma 2.

The following rule is derivable in ACT:

###### Proof.

The derivation is as follows:

Finally, we establish derivability of the “long rule” for Kleene plus.

###### Lemma 3.

For any natural n, the following “long rule” is admissible in ACT:

###### Proof.

Induction on n. The base case is trivial (the conclusion coincides with the only premise). For the induction step, we start the derivation by applying the “long rule” with a smaller parameter:

The first premises are given. The last one is derived as follows:

Here and are given, and is generally true in Kleene algebra, thus derivable in ACT.³ The derivation is as follows:

Derivations of and are obvious.

Notice that disjunction (∨) is not needed for formulating the “long rule,” but is essentially used when establishing its admissibility.

### 2.3 Encoding II: from Grammars to Sequents

Let us now translate the context-free grammar G_{M,x} into the Lambek calculus. The construction essentially resembles the translation of context-free grammars into basic categorial grammars by Gaifman [4]. Let the non-terminals of G_{M,x} be variables in our logics. For each letter a ∈ Σ, let

 Ξ_a = {A/(B₁⋅…⋅B_ℓ) ∣ (A ⇒ aB₁…B_ℓ) is a production rule of G_{M,x}}

(in particular, for a production rule of the form A ⇒ a we have ℓ = 0, and A/(B₁⋅…⋅B_ℓ) then means just A),

 φ_a = ⋀Ξ_a,

and

 ψ_{M,x} = ⋁{φ_a ∣ a ∈ Σ}.

Further on, we shall write just ψ for ψ_{M,x}, if it does not lead to confusion.

We shall need the following technical lemma about derivability in MALC:

###### Lemma 4.

Let Ξ₁, …, Ξₙ be finite sets of formulae built using only ⋅, ∖, and /, and let the formula β also be built using only ⋅, ∖, /. Then ⋀Ξ₁, …, ⋀Ξₙ ⊢ β is derivable in MALC if and only if there exist ξ₁ ∈ Ξ₁, …, ξₙ ∈ Ξₙ such that ξ₁, …, ξₙ ⊢ β is derivable in MALC.

###### Proof.

The “if” part is just an application of (⋀⊢). The interesting direction is “only if.” Consider a cut-free derivation of ⋀Ξ₁, …, ⋀Ξₙ ⊢ β and trace the occurrences of each conjunction ⋀Ξᵢ upwards from the goal sequent. After each rule application the conjunction either remains intact (if it is not the active formula in this rule) or loses some of its conjuncts (actually, it either gets reduced to the rightmost conjunct, or loses this conjunct). The crucial observation here is that the trace does not branch. This is due to the fact that our derivation does not include (⊢∧) and (∨⊢). Finally, each ⋀Ξᵢ gets reduced to one formula, ξᵢ ∈ Ξᵢ. Then we just replace all the formulae on the trace by ξᵢ, resulting in a valid derivation of ξ₁, …, ξₙ ⊢ β in MALC. ∎

The next three lemmas are due to Buszkowski [9] and form the base for Buszkowski’s proof of Π₁⁰-hardness of ACTω.

###### Lemma 5.

A word a₁…aₙ is generated from a non-terminal A in G_{M,x} if and only if the sequent φ_{a₁}, …, φ_{aₙ} ⊢ A is derivable in MALC [4, 9].

###### Proof.

By Lemma 4, φ_{a₁}, …, φ_{aₙ} ⊢ A is derivable if and only if there exist ξ₁ ∈ Ξ_{a₁}, …, ξₙ ∈ Ξ_{aₙ} such that ξ₁, …, ξₙ ⊢ A is derivable.

In order to proceed by induction, we formulate the following more general statement. Let be a word in the extended alphabet , including both terminals and non-terminals. Then we claim that is derivable from in if and only if there exist such that:

1. for each , if , then ;

2. for each , if , then ;

3. the sequent is derivable.

Both implications here are proved by induction on derivation. For the “only if” direction, the base case is trivial ( is an axiom), and for the induction step let be the first rule applied. Then let , and we enjoy the following derivation:

The induction hypothesis, applied to subderivations starting from , …, , yields such that the left premise is derivable by .

For the “if” part, we first notice that the only rules which can be applied in a cut-free derivation of are and . We claim that if is derivable, then and for (this is a small “focusing lemma”). This is proved by an easy induction on derivation. Indeed, if the lowermost rule is , we have , where and are derivable. Applying the induction hypothesis to the latter, we get , with derivable (). If the lowermost rule is , then , and and are derivable. By induction hypothesis, , , and the following sequents are derivable: and for . Applying to the former, we get , which is the needed sequent.

Now we proceed by induction on the total number of connectives in the sequent. This sequent is cut-free derivable, and the lowermost rule in its derivation could be only for :

As shown above, derivability of yields derivability of

 ξi+1,…,ξk1⊢B1; ξk1+1,…,ξk2⊢B2; … ξkℓ−1+1,…,ξj⊢Bℓ.

Each of these sequents has less connectives than the original one, thus we can apply the induction hypothesis and get the following derivabilities in :

 ei+1,…,ek1 is derivable from B1; ek1+1,…,ek2 is derivable from B2; … ekℓ−1+1,…,ej is derivable from Bℓ.

Moreover, applying the induction hypothesis to , we get derivability of from in . Finally, since is not a variable, it is a terminal symbol, and since , is a production rule of G_{M,x}. Applying this rule and the derivabilities from B₁, …, B_ℓ established above, we get derivability of from . ∎

###### Lemma 6.

The grammar G_{M,x} generates all words of length n from the starting symbol S if and only if the sequent ψ, …, ψ (n times) ⊢ S is derivable in MALC.

###### Proof.

Immediately from Lemma 5, by (⋁⊢) and (⋁⊢)inv. ∎

###### Lemma 7.

Turing machine M does not halt on input x if and only if ψ⁺ ⊢ S is derivable in ACTω.

###### Proof.

Immediately from Lemma 6, by the left ω-rule for the Kleene plus and (+⊢)inv. ∎

This lemma yields undecidability of ACTω, since the (non-)halting problem is undecidable. We go further and study derivability of the same sequent in ACT. Our new key lemma is as follows:

###### Lemma 8.

If trivially cycles on , then is derivable in .

###### Proof.

Let us first show derivability of in , that is, establish that is capable of proving the fact that indeed generates all non-empty words. Due to production rules and , we have and for any . Thus, by we have and , and by we get and . Now is derived as follows:

Suppose that trivially cycles on and consider the execution of on input up to the moment when enters the cycling state . Such an execution is unique, because is deterministic. Let the (incomplete) protocol of the execution of on up to the configuration with be of length ( is the number of letters in the protocol, not the number of configurations!). Notice that , since this protocol includes at least and the ’s surrounding the initial configuration.
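On counting letters rather than configurations: in the standard encoding used in such arguments, a protocol is the word obtained by writing the successive configurations separated (and surrounded) by the marker #. A toy sketch, assuming this encoding (the helper name and configuration format are ours):

```python
def protocol(configs):
    """Encode a run as a word: the configurations, separated and surrounded
    by the marker '#'.  Toy sketch of the standard protocol encoding; the
    paper's exact alphabet and configuration format may differ."""
    return "#" + "#".join(configs) + "#"

# the length of the protocol counts letters, not configurations:
p = protocol(["q0ab", "aq1b"])   # two configurations of four letters each
assert len(p) == 4 + 4 + 3       # letters of the configurations plus three '#'s
```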

Now we derive using the “long rule” (Lemma 3):

All its premises, except the last one, are of the form and are derivable by Lemma 6. In order to derive the last premise, we first apply all instances of in . Now we have to derive for any word over .

Apply cut as follows:

The left premise, , is derivable. For the right one, notice that belongs to class 1 or class 3 (see Subsection 2.1). Indeed, even if this word is a correct prefix of the (infinite) protocol of on , then, having length , it must include . This makes it belong to class 1. In the other cases, depending on whether or not, this word again belongs either to class 1 or to class 3.

If belongs to class 1, then and is derivable in from . The conjunction includes , thanks to the production rule. Thus, can be derived as follows:

The sequent is derivable by Lemma 5, since is derivable in from .

If belongs to class 3, then and we use the production rule. Now and (thanks to ). Then the desired sequent can be derived, by , from . The latter is derivable in the Lambek calculus, and thus in . ∎

Notice that the rules with non-terminal symbol are used only for deriving , . For longer words beginning with , we use only .

### 2.4 Undecidability via Inseparability

Now we are ready to prove Theorem 2. Notice that Lemma 8 gives only a one-way encoding: from cycling behaviour of on input to derivability of in . If the inverse implication were also true, we would immediately have undecidability of , since cycling behaviour (that is, reachability of ) is undecidable. However, we do not have this inverse implication, and therefore use a trickier argument.

Let us introduce some notations:

 C = {⟨M,x⟩ ∣ M trivially cycles on x},
 H = {⟨M,x⟩ ∣ M halts on x},
 H̄ = {⟨M,x⟩ ∣ M does not halt on x}.

Evidently, , and we shall use a folklore fact that and are recursively inseparable:

###### Proposition 1.

There exists no decidable class of pairs such that .

We omit the proof of Proposition 1, since it is rather standard and, moreover, follows from a stronger Proposition 3 below.

Next, for an arbitrary logic in our language let

 K(L) = {⟨M,x⟩ ∣ ψ⁺_{M,x} ⊢ S is derivable in L}.

If , then of course . Now Buszkowski’s Lemma 7 and our Lemma 8 can be expressed in the following way: if , then

 C ⊆ K(ACT) ⊆ K(L) ⊆ K(ACT_ω) = H̄.

By Proposition 1, is undecidable, and thus so is itself. This finishes the proof of Theorem 2.
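The shape of this argument can be checked on a toy finite example: a total decision procedure for K(L) would carve out a decidable set D with C ⊆ D and D ∩ H = ∅, i.e., a recursive separator of C from H, which Proposition 1 forbids. A minimal sanity check of the separation property (all sets and names below are invented for illustration):

```python
# toy stand-ins: C = trivially cycling pairs, H = halting pairs,
# K_L = pairs whose sequent is derivable in L (names illustrative only)
C = {"p1", "p2"}
H = {"p5", "p6"}
K_L = {"p1", "p2", "p3"}

# if K_L were decidable, it would recursively separate C from H:
assert C <= K_L and not (K_L & H)
```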

## 3 Σ_1^0-completeness of ACT

In this section we show that our construction actually yields more than just undecidability. Namely, we prove -completeness for any recursively enumerable such that —in particular, for and . The infinitary system is, dually, -complete: the lower bound was proved by Buszkowski [9], by ; for the upper bound, there exist two proofs: by Palka [40] via her *-eliminating technique, and by Das and Pous [11] via non-well-founded proofs.

We follow a general route, observed by Speranski [46], for obtaining -completeness results from inseparability. The idea is to use effective inseparability instead of the usual one. The methods come from the classics of recursive function theory. Here we give only the definitions and results necessary for our purposes; for a broader scope we refer to Rogers’ book [45].

The theory of effective inseparability is usually developed for sets of natural numbers. Thus, we suppose that pairs (a Turing machine and its input) are encoded by natural numbers in an injective and computable way.

First, we recall the definition of a recursively enumerable (r.e.) set as the domain of a partial recursive function. By we denote the domain of the partial recursive function whose program is coded by natural number — informally speaking, “the -th r.e. set.”
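The phrase "domain of a partial recursive function" can be made concrete by dovetailing: run the program on every input with ever-growing step budgets, emitting each input on which it halts. A toy sketch, modelling programs as Python generator functions in which each yield is one computation step (all names are ours, not standard notation):

```python
def run_bounded(f, x, steps):
    """Run the 'program' f on input x for at most `steps` steps and report
    whether it halted within the budget.  Programs are modelled as Python
    generator functions: each `yield` counts as one computation step."""
    g = f(x)
    try:
        for _ in range(steps):
            next(g)
    except StopIteration:
        return True        # the program finished within the budget
    return False           # budget exhausted

def enumerate_domain(f, rounds):
    """Dovetailing: in round t, run f on each input 0..t for t steps.
    Yields elements of the domain of f; letting rounds grow without bound
    enumerates the whole domain, i.e. the r.e. set the program defines."""
    seen = set()
    for t in range(rounds):
        for x in range(t + 1):
            if x not in seen and run_bounded(f, x, t):
                seen.add(x)
                yield x

def halts_on_evens(x):
    """A partial 'program' whose domain is exactly the even numbers."""
    while x % 2 != 0:
        yield              # loop forever on odd inputs

assert sorted(enumerate_domain(halts_on_evens, 10)) == [0, 2, 4, 6, 8]
```

The point of dovetailing is that no single non-halting computation can block the enumeration: every halting input is eventually reached with a sufficient budget.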

###### Definition 6.

Two sets are called effectively inseparable if and there exists a partial recursive function of two arguments such that if , and , then is defined and .

The notion of effective inseparability is closely related to the notion of creativity.

###### Definition 7.

A set is called creative if is r.e. and there exists a partial recursive function such that if , then is defined and .

###### Proposition 2.

If and are effectively inseparable and are both r.e., then both and are creative.

###### Proof.

Since and are r.e., we have and for some and . Define as follows. For any let . Notice that is computable from and . Next, let . Let be an r.e. set disjoint from . Then, since is also disjoint from , so is . By definition of , since , , and , we see that is defined and . Thus, (because ). Therefore, is creative. Reasoning for is symmetric. ∎

For creative sets, the following theorem of Myhill establishes their -completeness.

###### Theorem 3 (J. Myhill 1955).

If is creative, then any r.e. set is m-reducible to . In other words, any creative set is -complete. [36, Theorem 10]
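The theorem speaks of m-reducibility: A is m-reducible to B if there is a total computable f with x ∈ A iff f(x) ∈ B. A deliberately trivial instance with decidable sets, just to illustrate the defining property (our own example; the theorem itself concerns creative, hence undecidable, sets):

```python
def reduce_even_to_mult4(x):
    """Total computable map witnessing that the even numbers are
    m-reducible to the multiples of 4: x is even iff 2*x ≡ 0 (mod 4)."""
    return 2 * x

# the defining property of an m-reduction, checked on an initial segment
assert all((x % 2 == 0) == (reduce_even_to_mult4(x) % 4 == 0)
           for x in range(1000))
```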

###### Corollary 2.

If and are effectively inseparable and are both r.e., then both and are -complete. [45, Exercise 11-14]

Thus, while recursive inseparability allows proving undecidability, effective inseparability is a tool for proving -completeness. In order to apply this technique to proving -completeness of , we strengthen Proposition 1 and establish effective inseparability of and .

###### Proposition 3.

The sets and are effectively inseparable.

(This is also probably a folklore fact, cf. [45, Exercise 7-55d].)

###### Proof.

Recall that here we consider sets of pairs of Turing machines and their inputs, and silently suppose that these pairs are encoded as natural numbers. Let and be two such sets, which are both r.e., disjoint, and , .

The proof is a diagonalization procedure. Construct a Turing machine . Given an input , operates as follows:

• if is not a code of a Turing machine, then halt;

• if is a code of a Turing machine , start enumerating and in parallel, waiting for to appear (since , it could appear only in one enumeration); next,

• if