
State Complexity Characterizations of Parameterized Degree-Bounded Graph Connectivity, Sub-Linear Space Computation, and the Linear Space Hypothesis

11/15/2018
by Tomoyuki Yamakami, et al.

The linear space hypothesis is a practical working hypothesis, which originally states the insolvability of a restricted 2CNF Boolean formula satisfiability problem parameterized by the number of Boolean variables. From this hypothesis, it follows that the degree-3 directed graph connectivity problem (3DSTCON), parameterized by the number of vertices in a given graph, cannot belong to PsubLIN, the class of decision problems computable by polynomial-time, sub-linear-space deterministic Turing machines. This hypothesis immediately implies L ≠ NL, and it has been used as a solid foundation for obtaining new lower bounds on the computational complexity of various NL search and NL optimization problems. The state complexity of transformation refers to the cost of converting one type of finite automata to another type, where the cost is measured by the increase in the number of inner states that the conversion incurs. We relate the linear space hypothesis to the state complexity of transforming restricted 2-way nondeterministic finite automata to computationally equivalent 2-way alternating finite automata having narrow computation graphs. For this purpose, we present state complexity characterizations of 3DSTCON and PsubLIN. We further characterize a non-uniform version of the linear space hypothesis in terms of the state complexity of transformation.


1 Background and a Quick Overview

1.1 Parameterized Problems and the Linear Space Hypothesis

The nondeterministic logarithmic-space complexity class NL has been discussed since the early days of computational complexity theory. Typical NL decision problems include the 2CNF Boolean formula satisfiability problem (2SAT) as well as the directed s-t connectivity problem (DSTCON) of determining whether there exists a path from a given vertex s to another vertex t in a given directed graph G. (This problem is also known as the graph accessibility problem and the graph reachability problem.) These problems are also known to be NL-complete under log-space many-one reductions. The NL-completeness is so robust that even if we restrict our interest to graphs whose vertices are limited to be of degree at most 3, the corresponding decision problem, 3DSTCON, remains NL-complete.
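
To make the problem concrete, here is a minimal sketch of DSTCON decided by breadth-first search; the function name `dstcon` and the adjacency-list encoding are our own illustration, not notation from the paper. The point of the sketch is that the obvious algorithm, while polynomial-time, marks every visited vertex and so uses linear space, which is precisely the regime the linear space hypothesis concerns.

```python
from collections import deque

def dstcon(edges, s, t, n):
    """Decide directed s-t connectivity by BFS.

    edges: adjacency list {u: [v, ...]}; vertices are 0..n-1.
    This straightforward search visits each vertex at most once, so it
    runs in polynomial time but needs a visited bit per vertex, i.e.,
    Omega(n) space -- the space usage the hypothesis asks to reduce.
    """
    seen = [False] * n
    seen[s] = True
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in edges.get(u, []):
            if not seen[v]:
                seen[v] = True
                queue.append(v)
    return False
```

Restricting every vertex to out-degree at most 3 (as in 3DSTCON) does not change this algorithm, only the instances it receives.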

When we discuss the computational complexity of given problems, in practice we tend to be more concerned with parameterizations of the problems, and we measure the complexity with respect to the value of an associated size parameter. There may be multiple ways to choose natural size parameters for each given problem. For example, given an instance (G, s, t) to DSTCON, where G is a directed graph (or a digraph) and s, t are vertices, we often use as a size parameter the number of vertices in G or the number of edges in G. We treat the size of specific "input objects" given to the problem as a "practical" size parameter and use it to measure how many resources are needed for algorithms to solve those problems. To emphasize the choice of such a size parameter m for a decision problem L over an alphabet Σ, it is convenient to write (L, m), which gives rise to a parameterized decision problem. Since we deal only with such parameterized problems in the rest of this paper, we occasionally drop the adjective "parameterized" as long as it is clear from the context.

Any instance (G, s, t) to DSTCON is usually parameterized by the numbers |V| of vertices and |E| of edges in the graph G. It was shown in [2] that DSTCON with n vertices and m edges can be solved in polynomially many steps using only n/2^{c√(log n)} space for a suitable constant c > 0. However, it is unknown whether we can reduce this space usage down to O(n^ε) for a certain fixed constant ε ∈ (0,1). Such a bound is informally called "sub-linear" in a strong sense. It has been conjectured that, for every constant ε ∈ (0,1), no polynomial-time O(n^ε)-space algorithm solves DSTCON with n vertices (see references in, e.g., [1, 5]). For convenience, we denote by PsubLIN the collection of all parameterized decision problems (L, m) solvable deterministically in time polynomial in |x| using space at most c·m(x)^ε ℓ(|x|) for certain constants c > 0 and ε ∈ (0,1) and certain polylogarithmic (or polylog, in short) functions ℓ [15].

The linear space hypothesis (LSH), proposed in [15], is a practical working hypothesis, which originally asserts the insolvability of a restricted form of 2SAT, denoted 2SAT₃, together with the size parameter m_vbl(φ) indicating the number of variables in each given Boolean formula φ, in polynomial time using sub-linear space. As noted in [15], it is unlikely that 2SAT replaces 2SAT₃ in this statement. From this hypothesis, nevertheless, we immediately obtain the separation L ≠ NL, which many researchers believe to hold. It was also shown in [15] that (2SAT₃, m_vbl) can be replaced by (3DSTCON, m_ver), where m_ver(G) refers to the number of vertices in G. LSH has acted as a reasonable foundation for obtaining new lower bounds for several NL-search and NL-optimization problems [15, 16]. To find more applications of this hypothesis, we need to translate the hypothesis into other fields. In this paper, we look for a logically equivalent statement in automata theory, in the hope of finding more applications of LSH in this theory.

1.2 Families of Finite Automata and Families of Languages

The purpose of this work is to look for an automata-theoretic statement that is logically equivalent to the linear space hypothesis; in particular, we seek a new characterization of the relationship between 3DSTCON and PsubLIN in terms of the state complexity of transforming a certain type of finite automata to another type, with no direct reference to 3DSTCON or PsubLIN.

It is often cited from [3] (re-proven in [11, Section 3]) that, if L = NL, then every n-state two-way nondeterministic finite automaton (or 2nfa, for short) can be converted into a two-way deterministic finite automaton (or 2dfa) with polynomially many states that agrees with it on all inputs of length at most n. Conventionally, we call unary finite automata those automata working only on unary inputs (i.e., inputs over a one-letter alphabet). Geffert and Pighizzini [8] strengthened the aforementioned result by proving that the assumption L = NL leads to the following: for any n-state unary 2nfa, there is a unary 2dfa with polynomially many states agreeing with it on all strings. Within a few years, Kapoutsis [11] gave a similar characterization using L/poly, a non-uniform version of L: NL ⊆ L/poly if and only if (iff) there is a polynomial p such that any n-state 2nfa has a 2dfa of at most p(n) states agreeing with the 2nfa on strings of length at most n. Another incomparable characterization was given by Kapoutsis and Pighizzini [12]: NL ⊆ L/poly iff there is a polynomial p satisfying that any n-state unary 2nfa has an equivalent unary 2dfa of at most p(n) states. For the linear space hypothesis, we wish to seek a similar automata characterization.

Sakoda and Sipser [14] laid out a complexity-theoretic framework to discuss state complexity by giving formal definitions to non-uniform state complexity classes (such as 1D, 1N, 2D, and 2N), each of which is generally composed of non-uniform families of "promise decision problems" (or partial decision problems) recognized by finite automata of specified types and input sizes. Such complexity-theoretic treatments of families of finite automata were also considered by Kapoutsis [10, 11] and Kapoutsis and Pighizzini [12] to establish relationships between non-uniform state complexity classes and non-uniform (space-bounded) complexity classes. For those non-uniform state complexity classes, it was proven in [11, 12] that NL ⊆ L/poly iff the corresponding inclusion holds between 2N and a polynomial-size version of 2D, both in the general and in the unary case.

We discover that a family of promise decision problems is more closely related to a parameterized decision problem than to any standard decision problem (whose complexity is measured by the binary encoding size of inputs). Given a parameterized decision problem (L, m), we naturally identify it with a family {(L_n^(+), L_n^(-))}_{n∈ℕ} of promise decision problems defined by L_n^(+) = { x ∈ L : m(x) = n } and L_n^(-) = { x ∉ L : m(x) = n } for each index n ∈ ℕ, where ℕ is the set of natural numbers. Conversely, given a family {(A_n, B_n)}_{n∈ℕ} over an alphabet Σ satisfying ⋃_{n∈ℕ}(A_n ∪ B_n) = Σ*, if we define L = ⋃_{n∈ℕ} A_n and set m(x) = n for the index n for which x ∈ A_n ∪ B_n, then (L, m) is a parameterized decision problem. This identification leads us to establish a characterization of LSH, which will be discussed in Section 1.3.
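The slicing in the first direction can be sketched in a few lines; `induced_family` and the use of a finite `universe` standing in for Σ* are our own illustrative choices, not the paper's notation.

```python
def induced_family(L, m, universe, n):
    """Slice a parameterized decision problem (L, m) into the n-th
    promise problem (positive, negative) of its induced family.

    L: set of accepted strings; m: size parameter (str -> int);
    universe: a finite set of strings standing in for Sigma*.
    """
    positive = {x for x in universe if x in L and m(x) == n}
    negative = {x for x in universe if x not in L and m(x) == n}
    return positive, negative
```

Strings whose size parameter differs from n fall outside both sets, which is exactly the "promise": an automaton for the n-th slice owes no answer on them.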

1.3 Main Contributions

As the main contribution of this paper, we firstly provide two characterizations of 3DSTCON and PsubLIN in terms of 2nfa's and two-way alternating finite automata (or 2afa's, for short), each of which takes universal states and existential states alternately, producing alternating ∀-levels and ∃-levels in its (directed) computation tree made up of (surface) configurations. Secondly, we give a characterization of LSH in terms of the state complexity of transforming a restricted form of 2nfa's to another restricted form of 2afa's. The significance of our characterization includes the fact that LSH can be expressed completely by the state complexity of finite automata of certain types with no clear reference to 3DSTCON, PsubLIN, or even Turing machines; therefore, this characterization may help us apply LSH to a wider range of NL-complete problems.

To describe our result precisely, we further need to explain our terminology. A simple 2nfa is a 2nfa having a "circular" input tape (in which both endmarkers are located next to each other) whose tape head "sweeps" the tape (i.e., it moves only to the right), and which makes nondeterministic choices only at the right endmarker. (A 2nfa with a tape head that sweeps a circular tape is called "rotating" in, e.g., [9, 12]; making nondeterministic choices only at the endmarker is also known as outer nondeterminism [12].) For a positive integer k, a k-branching 2nfa makes at most k nondeterministic choices at every step, and a family of 2nfa's is called constant-branching if there is a constant k for which every 2nfa in the family is k-branching. An f-narrow 2afa is a 2afa such that, in each of its computation graphs, the width (i.e., the number of distinct vertices at a given level) at every ∃-level is bounded by f(n). A family {M_n}_{n∈ℕ} of 2afa's is said to run in expected polynomial time if, for a certain polynomial p, each M_n runs on average in time polynomial in |x| and n, where x is any input.

For convenience, we say that a finite automaton M₁ is equivalent (in computational power) to another finite automaton M₂ over the same input alphabet if M₁ agrees with M₂ on all inputs. Here, we use a straightforward binary encoding of an n-state finite automaton using O(n log n) bits. A family {M_n}_{n∈ℕ} is said to be L-uniform if a deterministic Turing machine (or a DTM) produces from 1^n an encoding of the finite automaton M_n using space logarithmic in n.

Proposition 1.1

Every L-uniform family of constant-branching O(n)-state simple 2nfa's can be converted into another L-uniform family of equivalent O(n^ε)-narrow 2afa's with polynomially many states running in expected polynomial time, for a certain constant ε ∈ (0,1).

Theorem 1.2

The following three statements are logically equivalent.

  1. The linear space hypothesis fails.

  2. For any constant ε ∈ (0,1), there exists a constant c > 0 such that every L-uniform family of constant-branching simple 2nfa's with at most O(n) states can be converted into another L-uniform family of equivalent O(n^ε)-narrow 2afa's with at most n^c states running in polynomial time.

  3. For any constant ε ∈ (0,1), there exist a constant c > 0 and a log-space computable function that, on every input of an encoding of a k-branching simple n-state 2nfa, produces another encoding of an equivalent O(n^ε)-narrow 2afa with at most n^c states running in polynomial time.

Our proof of Theorem 1.2 is based on two explicit characterizations, given in Section 3, of 3DSTCON and PsubLIN in terms of the state complexity of restricted 2nfa's and of restricted 2afa's, respectively.

In addition to the original linear space hypothesis, it is possible to discuss its non-uniform version, which asserts that (2SAT₃, m_vbl) does not belong to a non-uniform version of PsubLIN, succinctly denoted PsubLIN/poly.

The first non-uniform state complexity class we use consists of all families of promise decision problems, each of which is recognized by a certain k-branching simple 2nfa with at most cn states on all inputs, for appropriate constants k and c. The second class is composed of families of promise decision problems recognized by O(n^ε)-narrow 2afa's of at most p(n) states running in polynomial time on all inputs, for a certain constant ε ∈ (0,1) and a certain fixed polynomial p.

Theorem 1.3

The following three statements are logically equivalent.

  1. The non-uniform linear space hypothesis fails.

  2. For any constants ε ∈ (0,1) and k ≥ 2, there exists a constant c > 0 such that every k-branching simple n-state 2nfa can be converted into an equivalent O(n^ε)-narrow 2afa with at most n^c states running in polynomial time.

  3. The first non-uniform state complexity class described above is included in the second.

Unfortunately, it is still open whether the 2afa class in Theorem 1.3(3) can be replaced by a more restrictive one. This is related to the question of whether we can replace 2SAT₃ in the definition of LSH by a less restricted problem [15]; if we can answer the question positively, then LSH is simply rephrased as a non-inclusion relation between state complexity classes.

So far, we have worked mostly with non-unary input alphabets. In contrast, if we turn our attention to "unary" finite automata, then we obtain only a slightly weaker implication toward the failure of LSH.

Theorem 1.4

Each of the following statements implies the failure of the linear space hypothesis.

  1. For any constants ε ∈ (0,1) and k ≥ 2, there exists a constant c > 0 such that every L-uniform family of k-branching simple unary 2nfa's with at most O(n) states can be converted into an L-uniform family of equivalent O(n^ε)-narrow unary 2afa's with at most n^c states running in polynomial time.

  2. For any constants ε ∈ (0,1) and k ≥ 2, there exist a constant c > 0 and a log-space computable function that, on every input of an encoding of a k-branching simple unary 2nfa with at most O(n) states, produces another encoding of an equivalent O(n^ε)-narrow unary 2afa having at most n^c states running in polynomial time.

We cannot confirm that the converse of the implication of Theorem 1.4 holds. Hence, we still have no precise characterization.

Theorems 1.2 and 1.3 will be proven in Section 4 after we establish basic properties of 3DSTCON and PsubLIN in Section 3. Theorem 1.4 will be shown in Section 5.

2 Fundamental Terminology

Let us explain the fundamental terminology used in Section 1 that is needed to read through the subsequent sections.

2.1 Numbers, Languages, and Size Parameters

We denote by ℕ the set of all natural numbers (i.e., nonnegative integers) and set ℕ⁺ = ℕ − {0}. For two integers m, n with m ≤ n, an integer interval [m, n]_ℤ is the set {m, m+1, ..., n}. Given a set S, P(S) expresses the power set of S; that is, the set of all subsets of S. We assume that all polynomials have integer coefficients and all logarithms are to base 2. A function f: ℕ → ℕ is polynomially bounded if there exists a polynomial p satisfying f(n) ≤ p(n) for all n ∈ ℕ.

An alphabet Σ is a finite nonempty set of "symbols" and a string is a finite sequence of such symbols. The length of a string x, denoted by |x|, is the total number of symbols in x. We use λ to express the empty string of length 0. Given an alphabet Σ, the notation Σ^{≤n} (resp., Σ^n) indicates the set of all strings of length at most n (resp., exactly n) over Σ. A language over alphabet Σ is a set of strings over Σ. The complement of L is Σ* − L and is succinctly denoted by L̄ as long as Σ is clear from the context. For convenience, we abuse the notation L to indicate its characteristic function as well; that is, L(x) = 1 for all x ∈ L, and L(x) = 0 for all x ∈ L̄. A function f: Σ* → ℕ is polynomially bounded if there is a polynomial p such that f(x) ≤ p(|x|) for all x ∈ Σ*.

A size parameter is a function m mapping Σ* to ℕ, which serves as a basic unit in our analysis. We call such a size parameter ideal if there are constants c > 0 and k ∈ ℕ such that m(x) ≤ c|x| + k for all x with |x| ≥ 1.

Given a number n ∈ ℕ⁺, bin(n) denotes the binary representation of n; for example, bin(1) = 1, bin(2) = 10, and bin(3) = 11. For a length-n string x over an alphabet of k symbols (where k ≥ 2), we define its binary encoding by replacing each symbol of x with a fixed-length block of bits, so that the encoding has length O(n log k).
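The paper's exact encoding was lost in extraction; the following fixed-width scheme is a stand-in of our own (the name `encode` and the symbol-to-index table are assumptions) that shares the key property of using O(log k) bits per symbol.

```python
from math import ceil, log2

def encode(x, alphabet):
    """Encode string x over `alphabet` as a bit string, giving every
    symbol a fixed-width binary code. A stand-in for the paper's own
    encoding: length is |x| * ceil(log2 |alphabet|) bits."""
    width = max(1, ceil(log2(len(alphabet))))
    index = {a: i for i, a in enumerate(alphabet)}
    return "".join(format(index[c], f"0{width}b") for c in x)
```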

A promise decision problem over an alphabet Σ is a pair (A, B) of disjoint subsets of Σ*. We interpret A and B as the sets of accepted (or positive) instances and of rejected (or negative) instances, respectively. In this paper, we consider families {(A_n, B_n)}_{n∈ℕ} of promise decision problems over the same alphabet. (In certain pieces of literature, families of promise decision problems are considered in which the alphabet used to describe each set may vary; nonetheless, for the purpose of this paper, we keep the alphabet fixed.)

As noted in Section 1.2, there is a direct translation between parameterized decision problems and families of promise decision problems. Given a parameterized decision problem (L, m) over alphabet Σ, a family {(L_n^(+), L_n^(-))}_{n∈ℕ} is said to be induced from (L, m) if, for each index n ∈ ℕ, L_n^(+) = L ∩ Σ_n and L_n^(-) = L̄ ∩ Σ_n, where Σ_n = { x ∈ Σ* : m(x) = n }. In the rest of this paper, we identify those parameterized problems with their induced families of promise problems.

2.2 Turing Machine Models

To discuss space-bounded computation, we consider only the following form of 2-tape Turing machines. A nondeterministic Turing machine (or an NTM, for short) is a tuple (Q, Σ, {▹, ◃}, Γ, δ, q₀, Q_acc, Q_rej) with a read-only input tape (over the extended alphabet Σ̌ = Σ ∪ {▹, ◃}) and a rewritable work tape (over the tape alphabet Γ). The transition function δ of such a machine is a map from (Q − Q_halt) × Σ̌ × Γ to the power set of Q × Γ × D × D, with Q_halt = Q_acc ∪ Q_rej and D = {−1, 0, +1}. We assume that Γ contains a distinguished blank symbol B. In contrast, when δ maps (Q − Q_halt) × Σ̌ × Γ to Q × Γ × D × D, the machine is called a deterministic Turing machine (or a DTM). Each input string x is written between ▹ (left endmarker) and ◃ (right endmarker) on the input tape, and all cells of the input tape are indexed incrementally from 0 to |x| + 1, where ▹ and ◃ are respectively placed at cell 0 and cell |x| + 1. The work tape is a semi-infinite tape, stretching to the right, and the leftmost cell of the tape is indexed 0. All other cells are consecutively numbered 1, 2, 3, ... to the right.

Given a Turing machine M and an input x, a surface configuration is a tuple (q, l₁, l₂, w), where q ∈ Q, l₁ ∈ [0, |x| + 1]_ℤ, l₂ ∈ ℕ, and w ∈ Γ*, which represents the situation in which M is in state q, scanning a symbol at cell l₁ of the input tape and a symbol at cell l₂ of the work tape containing string w. For each input x, an NTM produces a computation tree, in which each node is labeled by a surface configuration of the machine on x. An NTM accepts input x if it starts in state q₀ scanning ▹ and, along a certain computation path, it enters an accepting state and halts. Otherwise, the NTM rejects x.

In this paper, we generally use Turing machines to solve parameterized decision problems. However, we also use Turing machines to compute functions. For this purpose, we need to append an extra write-only output tape to each DTM, where a tape is write-only if its tape head must move to the right whenever it writes any non-blank symbol onto the tape.

A function f: ℕ → ℕ is log-space computable if there exists a DTM that, for each given length n, takes 1^n as its input and produces 1^{f(n)} on a write-only output tape using O(log n) work space. In contrast, a size parameter m is called a log-space size parameter if there exists a DTM that, on any input x, produces 1^{m(x)} (in unary) on its output tape using only O(log |x|) work space [15]. Concerning space constructibility, we here take the following simplified definition. A function s is t-time space constructible if there exists a DTM that, for each given length n, takes 1^n as an input written on the input tape, produces 1^{s(n)} on its output tape, and halts within t(n) steps using no more than s(n) cells. A function t is log-space time constructible if there is a DTM that, for any n, starts with 1^n as an input and halts in exactly t(n) steps using O(log n) space.

2.3 Sub-Linear-Space Computability and Advice

Let t: ℕ → ℕ and ℓ: ℕ × ℕ → ℕ be two bounding functions, and let m denote any size parameter. The notation TIME,SPACE(t, ℓ) denotes the collection of all parameterized decision problems (L, m) recognized by DTMs (each of which is equipped with a read-only input tape and a semi-infinite rewritable work tape) within time O(t(|x|)) using space at most O(ℓ(m(x), |x|)) on every input x, where the constants hidden in the O-notation are absolute. The parameterized complexity class PTIME,SPACE(ℓ), defined in [15], is the union of all classes TIME,SPACE(p, ℓ) for any positive polynomial p.

Karp and Lipton [13] supplemented extra information, represented by advice strings, to underlying Turing machines to enhance the computational power of the machines. More precisely, we equip our underlying machine with an additional read-only advice tape, to which we provide exactly one advice string, surrounded by the two endmarkers ▹ and ◃, of pre-determined length for all instances of every length n.

Let h: ℕ → ℕ be an arbitrary function. A non-uniform complexity class C/h is obtained from a given class C by providing to the underlying Turing machines an advice string of length h(n) for all instances of every length n. Similarly, the advised variant PsubLIN/poly can be defined by supplementing advice of polynomial size.

Definition 2.1

The class PsubLIN is then defined to be the union of all classes PTIME,SPACE(c·m(x)^ε ℓ(|x|)) for any log-space size parameter m, any constants c > 0 and ε ∈ (0,1), and any polylog function ℓ. Moreover, we can define PsubLIN/poly as a non-uniform version of PsubLIN.

2.4 Models of Two-Way Finite Automata

We consider two-way finite automata, equipped with a read-only input tape and a tape head that moves along the input tape in both directions (to the left and to the right). To clarify the use of the two endmarkers ▹ and ◃, we explicitly include them in the description of finite automata.

Let us start by defining two-way nondeterministic finite automata (or 2nfa's). A 2nfa M is formally a tuple (Q, Σ, {▹, ◃}, δ, q₀, Q_acc, Q_rej), where Q is a finite set of inner states, Σ is an input alphabet with Σ̌ = Σ ∪ {▹, ◃}, q₀ ∈ Q is the initial state, Q_acc and Q_rej are respectively sets of accepting and rejecting states with Q_acc, Q_rej ⊆ Q and Q_acc ∩ Q_rej = ∅, and δ is a transition function from (Q − Q_halt) × Σ̌ to P(Q × D) with Q_halt = Q_acc ∪ Q_rej and D = {+1, −1}. We always assume that no transitions leave halting states. The 2nfa M behaves as follows. Assume that M is in state q scanning symbol σ. If a transition has the form (p, d) ∈ δ(q, σ), then M changes its inner state to p and moves its tape head in direction d (where d = +1 means "to the right" and d = −1 means "to the left").

A two-way deterministic finite automaton (or a 2dfa) is defined as a tuple of the same form, which is similar to a 2nfa except that its transition function δ is a map from (Q − Q_halt) × Σ̌ to Q × D.

An input tape is called circular if the right of cell |x| + 1 is cell 0 and the left of cell 0 is cell |x| + 1. Hence, on this circular tape, when a tape head moves off the right of ◃ (resp., the left of ▹), it instantly reaches ▹ (resp., ◃). A circular-tape finite automaton is sweeping if the tape head always moves to the right. A circular-tape 2nfa is said to be end-nondeterministic if it makes nondeterministic choices only at the cell containing ◃. A simple 2nfa is a 2nfa that has a circular tape, is sweeping, and is end-nondeterministic.

For a fixed constant k ∈ ℕ⁺, a 2nfa is said to be k-branching if, for any state q and tape symbol σ, there are at most k next moves (i.e., |δ(q, σ)| ≤ k). Note that all 2dfa's are 1-branching. We say that a family of 2nfa's is constant-branching if every 2nfa in the family is k-branching for an absolute constant k ∈ ℕ⁺.

A surface configuration of a finite automaton on an input x is a tuple (q, i) with q ∈ Q and i ∈ [0, |x| + 1]_ℤ. Since, for each input size n, the head position i ranges only over the integer interval [0, n + 1]_ℤ, the total number of surface configurations of a k-state finite automaton working on inputs of length n is k(n + 2).

We use the following acceptance criterion: M accepts input x if there is a finite accepting computation path of M on x; otherwise, M is said to reject x. We say that M accepts in time t(n) if, for any length n and any input x of length n, whenever M accepts x, there exists an accepting path of length at most t(n). Let L(M) express the set of all strings accepted by M. We say that two finite automata M₁ and M₂ are (recognition) equivalent if L(M₁) = L(M₂).
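Because a two-way automaton has only O(|Q| · |x|) surface configurations, acceptance can be decided by plain reachability search over them; the sketch below is our own illustrative encoding (the function name `accepts_2nfa` and the transition-dictionary format are assumptions, not the paper's).

```python
def accepts_2nfa(delta, q0, accept, x, left=">", right="<"):
    """Decide whether a 2nfa accepts x by exploring its surface
    configurations (state, head position): there are only
    O(|Q| * |x|) of them, so the search runs in polynomial time.
    delta: (state, symbol) -> set of (state, move), move in {+1, -1};
    accept: set of accepting states."""
    tape = left + x + right       # endmarkers at cells 0 and |x|+1
    start = (q0, 0)
    seen = {start}
    stack = [start]
    while stack:
        q, i = stack.pop()
        if q in accept:
            return True
        for q2, d in delta.get((q, tape[i]), ()):
            j = i + d
            if 0 <= j < len(tape) and (q2, j) not in seen:
                seen.add((q2, j))
                stack.append((q2, j))
    return False
```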

To characterize polynomial-time sub-linear-space computation, we further look into a model of two-way alternating finite automata (or 2afa's) whose computation trees are particularly "narrow." Formally, a 2afa is expressed as a tuple (Q, Σ, {▹, ◃}, δ, q₀, Q_acc, Q_rej), where Q is partitioned into a set Q_∀ of universal states (or ∀-states) and a set Q_∃ of existential states (or ∃-states). On each input, similarly to a 2nfa, a 2afa branches out according to the value δ(q, σ) after scanning symbol σ in state q, and it generates a computation tree whose nodes are labeled by configurations of the machine. A ∀/∃-label of a node is defined as follows. A node has a ∀-label (resp., an ∃-label) if its associated configuration has a universal state (resp., an existential state). A computation tree of a 2afa on an input x is said to be ∀/∃-leveled if all nodes of the same depth from the root node have the same label (either ∀ or ∃).

A 2afa M accepts an input x if there is an accepting computation subtree T of M on x, in which (i) T contains exactly one branch from every node labeled by an existential state, (ii) T contains all branches from each node having a ∀-label, and (iii) all leaves of T have accepting states. Otherwise, we say that M rejects x. Abusing the terminology, we say that a family {M_n}_{n∈ℕ} of 2afa's runs in polynomial time if there exists a polynomial p such that, for any n ∈ ℕ and for any input x accepted by M_n, the height of a certain accepting computation subtree of M_n on x is bounded from above by p(|x|, n).
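The accepting-subtree condition is the usual AND/OR evaluation over the computation graph, which the following sketch makes explicit; the node-labeling scheme (`kind`, `succ`) is a hypothetical encoding of ours, and it assumes an acyclic graph.

```python
def accepted(node, kind, succ, memo=None):
    """Evaluate alternating acceptance over a finite acyclic
    computation graph: an existential node accepts iff SOME successor
    accepts; a universal node accepts iff ALL successors accept;
    a leaf accepts iff it carries an accepting state.
    kind: node -> "exists" | "forall" | "acc" | "rej";
    succ: node -> list of successor nodes."""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    k = kind[node]
    if k == "acc":
        result = True
    elif k == "rej":
        result = False
    elif k == "exists":
        result = any(accepted(v, kind, succ, memo) for v in succ[node])
    else:  # "forall"
        result = all(accepted(v, kind, succ, memo) for v in succ[node])
    memo[node] = result
    return result
```

Narrowness bounds how many distinct nodes may occur at each ∃-level of this graph, which is what ties 2afa's to space-bounded deterministic simulation later in the paper.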

Let f be any function on ℕ. An f-narrow 2afa is a 2afa that, on each input x, produces a ∀/∃-leveled computation tree whose width at every ∃-level is at most f(|x|). A t-time 2afa is a 2afa that halts within time t(|x|) on all computation paths.

We say that two machines M₁ and M₂ are equivalent if M₁ agrees with M₂ on all inputs.

2.5 Non-Uniform State Complexity and State Complexity Classes

Given a finite automaton M, the state complexity of M refers to the number of M's inner states.

The state complexity of transforming 2nfa's to 2dfa's refers to the minimal cost of converting any given 2nfa into an equivalent 2dfa. More precisely, if f(n) is the minimal number such that any given n-state 2nfa can be transformed into an equivalent f(n)-state 2dfa, then f is called the state complexity of transforming 2nfa's to 2dfa's.
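The textbook instance of such a transformation cost is the one-way subset construction, where an n-state NFA can require up to 2^n DFA states; the sketch below (our own illustration, not from the paper) counts the reachable subset states for a given one-way NFA.

```python
def nfa_to_dfa_states(delta, starts, alphabet):
    """Count the reachable states produced by the classic subset
    construction for a one-way NFA, the standard example behind
    'state complexity of transformation'.
    delta: (state, symbol) -> set of states; starts: initial states."""
    start = frozenset(starts)
    seen = {start}
    work = [start]
    while work:
        S = work.pop()
        for a in alphabet:
            T = frozenset(q2 for q in S for q2 in delta.get((q, a), ()))
            if T not in seen:
                seen.add(T)
                work.append(T)
    return len(seen)
```

For the NFA recognizing "the second-to-last symbol is 1" with 3 states, the construction reaches 4 subset states, matching the known 2^{n-1} blow-up for this language family.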

Instead of considering each finite automaton separately, here we are concerned with a family or a collection {M_n}_{n∈ℕ} of finite automata, each of which has a certain number of states, depending only on the parameterized sizes n and the input lengths. We say that a family {M_n}_{n∈ℕ} of machines solves (or recognizes) a family {(A_n, B_n)}_{n∈ℕ} of promise decision problems if, for every n ∈ ℕ, (1) for any x ∈ A_n, M_n accepts x and (2) for any x ∈ B_n, M_n rejects x. Notice that there is no requirement for any string outside of A_n ∪ B_n. Kapoutsis [11] and Kapoutsis and Pighizzini [12] presented a characterization of NL ⊆ L/poly in terms of non-uniform state complexity classes in the style of Sakoda and Sipser, such as 2D and 2N. These classes are described as collections of promise decision problems (or partial decision problems).

Definition 2.2
  1. The first non-uniform state complexity class we need is the collection of non-uniform families of promise decision problems {(A_n, B_n)}_{n∈ℕ} satisfying the following: there exist constants c, k > 0 and a non-uniform family {M_n}_{n∈ℕ} of k-branching 2nfa's such that, for each index n ∈ ℕ, M_n has at most cn states and recognizes (A_n, B_n).

  2. Given a function f on ℕ, we define the second class to be the collection of non-uniform families of promise decision problems, each of which is recognized by a certain O(f(n))-state 2afa whose computation trees are f(n)-narrow.

3 Two Fundamental Automata Characterizations

Since Theorems 1.2 and 1.3 are concerned with 3DSTCON and PsubLIN, before proving the theorems, we want to look into their basic properties in depth. In what follows, we will present two automata characterizations: one of the parameterized complexity class PsubLIN and one of the problem 3DSTCON.

3.1 Automata Characterizations of PsubLIN

Let us present a precise characterization of PsubLIN in terms of narrow 2afa's. The narrowness of 2afa's directly corresponds to the space usage of DTMs. What we intend to prove in Proposition 3.1 is, in fact, far more general than what we actually need for proving Theorems 1.2 and 1.3. We expect that such a general characterization could find other useful applications as well.

Firstly, let us recall the parameterized complexity class PTIME,SPACE(ℓ) from Section 2.3. Our proof of Proposition 3.1 requires a fine-grained analysis of the well-known transformation of alternating Turing machines (or ATMs) to DTMs and vice versa. In what follows, we freely identify a language with its characteristic function.

Proposition 3.1

Let t be log-space time constructible and let s be t-time space constructible. Consider a language L and a log-space size parameter m.

  1. If (L, m) ∈ PTIME,SPACE(s), then there are constants c₁, c₂ > 0 and an L-uniform family {M_n}_{n∈ℕ} of c₁s(n)-narrow 2afa's having at most c₂·s(n)-bounded numbers of states such that each M_n computes L in time polynomial in |x| on all inputs x with m(x) = n.

  2. If there are constants c₁, c₂ > 0 and an L-uniform family {M_n}_{n∈ℕ} of c₁s(n)-narrow 2afa's having at most c₂·s(n)-bounded numbers of states such that each M_n computes L in time polynomial in |x| on all inputs x with m(x) = n, then (L, m) belongs to PTIME,SPACE(s).

Hereafter, we proceed to the proof of Proposition 3.1. Our proof is different from the well-known proof in [4], which shows a simulation between space-bounded DTMs and alternating Turing machines (ATMs). For example, the simulation of an ATM by an equivalent space-bounded DTM in [4] uses a depth-first traversal of a computation tree, whereas we use a breadth-first traversal because of the narrowness of 2afa's.

Proof of Proposition 3.1.   Take a parameterized problem (L, m) with a log-space size parameter m, a log-space time-constructible function t, and a t-time space-constructible function s. Consider the family {(L_n^(+), L_n^(-))}_{n∈ℕ} of promise decision problems induced from (L, m), as described in Section 2.1.

(1) Assume that (L, m) is in PTIME,SPACE(s). Let us consider a DTM M that solves L in time at most c₁t(|x|) using space at most c₂s(m(x)) on all inputs x, where c₁, c₂ are appropriate positive constants. In our setting, M has a read-only input tape and a semi-infinite rewritable work tape. Let B denote the unique blank symbol of M's work alphabet Γ.

We first modify M so that it halts with its input-tape head scanning ▹ on the input tape and its work-tape head in the start cell (i.e., cell 0) of the work tape. We also force M to halt after making all work-tape cells blank. Since t is log-space time constructible, by modifying M appropriately, we can make it halt in exactly c₃t(|x|) steps for an appropriate constant c₃ > 0. Moreover, we make M have a unique accepting state. This last modification can be done by changing all accepting states into non-accepting ones, adding an extra state as the new unique accepting state, and inserting new transitions from the originally accepting states to this new state. In addition, by adding one more extra state, we can force the machine to enter the unique accepting state, say q_acc, just after scanning ▹. For readability, we hereafter denote the modified machine simply by M.

In what follows, we wish to simulate M by an L-uniform family of appropriate 2afa's specified by the proposition. As a preparation, let Γ̂ = Γ ∪ { σ̂ : σ ∈ Γ }, where σ̂ is a distinguished symbol indicating that the work-tape head is scanning symbol σ. The use of σ̂ eliminates the inclusion of any extra information on the location of the work-tape head.

Let x be any instance to L and let n = m(x). Let us consider surface configurations (q, l₁, l₂, w) of M on x, each of which indicates that M is in state q, scanning both the l₁-th cell of the input tape and the l₂-th cell of the work tape composed of w. We want to trace down these surface configurations using an alternating series of universal states and existential states of the simulating 2afa. Writing s for the space bound on x, the number of all surface configurations is at most |Q|(|x| + 2)(s + 1)|Γ|^s since all surface configurations belong to Q × [0, |x| + 1]_ℤ × [0, s]_ℤ × Γ^{≤s}.

Since each move of M affects at most three consecutive cells of the work tape, it suffices to focus our attention on these local cells. Our idea is to define the 2afa's surface configurations so that each of them represents M's surface configuration at time i in such a way that it indicates either the j-th cell content or the content of its neighboring cells. Furthermore, when the work-tape head is located at the j-th cell, the configuration also carries this extra information (by changing tape symbol σ to σ̂).

Formally, let us define the desired 2afa family {N_n}_{n∈ℕ} that computes L. We make the extended work alphabet composed of all triples of symbols in Γ̂ satisfying that at most one of the three symbols is of the marked form σ̂. Consider any input x and set n = m(x). An inner state is of the form (q, i, j, u), with a state q of M, a time step i, a work-tape cell index j, and such a triple u of symbols. The number of such inner states is thus at most c·t·s for an appropriate constant c > 0. A surface configuration of N_n is a tuple ((q, i, j, u), l), where (q, i, j, u) is an inner state and l ∈ [0, |x| + 1]_ℤ. This tuple indicates that, at time i, M is in state q, and M's input-tape head is scanning cell l. If u contains no marked symbol, then M's work tape contains the symbols of u in the cells indexed by j − 1, j, and j + 1, and its tape head is not scanning these cells. In contrast, when a symbol of u is marked, M's work-tape head is scanning the corresponding cell. The remaining cases can be similarly treated. The initial surface configuration of N_n on x corresponds to the final accepting surface configuration of M on x; that is, N_n traces M's computation backward in time.

Hereafter, we will describe how to simulate M's computation on x by tracing down surface configurations of M on x using a series of universal and existential states of N_n. Starting with the initial surface configuration, we inductively generate the next surface configuration roughly in the following way. In an existential state, N_n guesses (i.e., nondeterministically chooses) the content of the relevant consecutive cells in the current configuration of M on x. In a universal state, N_n checks whether the guessed content is indeed correct by branching out computation paths, each of which selects, using an existential state, one of the chosen cells. The narrowness comes from the space bound of M.

Let us return to the formal description. We first introduce a notation ⊢. For two surface configurations of M, we write the first ⊢ the second if the second follows from the first in a single step of M; this is determined locally, by constants and a symbol satisfying two conditions (i)-(ii) on the head positions and the contents of the affected cells. Given a surface configuration, we define its predecessor set to be the set of all inner states consistent with some surface configuration standing in the relation ⊢ to it. Note that the size of this set is bounded by a constant since only the parameters in a constant-size window may vary.

(a) Assume that the current surface configuration of N_n is ((q, i, j, u), l) with an existential state. We nondeterministically choose one element from its predecessor set. We assign ACCEPT to this configuration if there is a surface configuration in the predecessor set whose label is ACCEPT.

(b) Assume that the current surface configuration is ((q, i, j, u), l) with a universal state. We universally generate three inner states, corresponding to the cells j − 1, j, and j + 1, without moving N_n's tape head. We assign ACCEPT to this configuration if the above three states are all labeled with ACCEPT.

(c) Leaves are of the form ((q, 0, j, u), l), describing time 0. We assign ACCEPT to a leaf if it is consistent with the initial configuration of M on x; otherwise, we assign REJECT.

Hereafter, we claim the following statement (*). Let x_i denote the i-th symbol of x; in particular, x₀ = ▹.

(*) For any input x with n = m(x), a surface configuration of M on x lies on an accepting computation path at step i iff all of the corresponding surface configurations of N_n on x at step i are labeled by ACCEPT.

This statement implies that there exists an accepting computation subtree of N_n on x iff M has an accepting computation path on x. The height of the shortest accepting computation subtree is bounded from above by a polynomial in |x| and n. We set