
Between SC and LOGDCFL: Families of Languages Accepted by Logarithmic-Space Deterministic Auxiliary Depth-k Storage Automata

03/18/2022
by   Tomoyuki Yamakami, et al.

The closure of deterministic context-free languages under logarithmic-space many-one reductions (L-m-reductions), known as LOGDCFL, has been studied in depth from an aspect of parallel computability because it is nicely situated between L and AC^1 ∩ SC^2. By changing a memory device from pushdown stacks to access-controlled storage tapes, we introduce a computational model of one-way deterministic depth-k storage automata (k-sda's) whose tape cells can be freely modified during the first k accesses and then become blank forever. These k-sda's naturally induce the language family kSDA. Similarly to LOGDCFL, we study the closure LOGkSDA of all languages in kSDA under L-m-reductions. We first demonstrate that DCFL ⊆ kSDA ⊆ SC^k by significantly extending Cook's early result (1979) of DCFL ⊆ SC^2. The entire hierarchy of LOGkSDA for all k ≥ 1 therefore lies between LOGDCFL and SC. As an immediate consequence, we obtain the same simulation bounds for Hibbard's limited automata. We further characterize the closure LOGkSDA in terms of a new machine model, called logarithmic-space deterministic auxiliary depth-k storage automata that run in polynomial time. These machines are as powerful as polynomial-time two-way multi-head deterministic depth-k storage automata. We also provide "generic" LOGkSDA-complete languages under L-m-reductions by constructing a universal simulator working for all two-way k-sda's.


1 DCFL, LOGDCFL, and Beyond

In the literature, numerous computational models and associated language families have been proposed to capture various aspects of parallel computation. Of those language families, we wish to pay special attention to the family known as LOGDCFL, which is obtained from DCFL, the family of all deterministic context-free (dcf) languages, by taking the closure under logarithmic-space many-one reductions (or L-m-reductions, for short) [3, 23]. These dcf languages were first defined in 1966 by Ginsburg and Greibach [8] and their fundamental properties have been studied extensively since then. It is well known that DCFL is a proper subfamily of CFL, the family of all context-free languages, because the context-free language {ww^R | w ∈ {0,1}*} (where w^R means the reverse of w), for instance, does not belong to DCFL. The dcf languages in general behave quite differently from the context-free languages. As an example of such differences, DCFL is closed under complementation, while CFL is not. This fact structurally distinguishes DCFL from CFL. Moreover, dcf languages require only polynomial time and O(log^2 n) space simultaneously [4]; however, we do not know whether the same statement holds for context-free languages. Although dcf languages are accepted by one-way deterministic pushdown automata (or 1dpda's), these languages have a close connection to highly parallelized computation, and thus LOGDCFL has played a key role in discussing parallel complexity issues because of the nice inclusions L ⊆ LOGDCFL ⊆ AC^1 ∩ SC^2.

It is known that LOGDCFL can be characterized without using L-m-reductions by several other intriguing machine models, which include: Cook's polynomial-time logarithmic-space deterministic auxiliary pushdown automata [3], two-way multi-head deterministic pushdown automata running in polynomial time, logarithmic-time CROW-PRAMs with polynomially many processors [5], and circuits made up of polynomially many multiplex select gates having logarithmic depth [6] or polynomial proof-tree size [18]. Such a variety of characterizations proves LOGDCFL to be a robust and fully applicable notion in computer science.

Another important feature of LOGDCFL (as well as its underlying DCFL) is the existence of "complete" languages, which are practically the most difficult languages in LOGDCFL to recognize. Notice that a language A is said to be L-m-complete for a family C of languages (or C-complete, for short) if A belongs to C and every language in C is L-m-reducible to A. Sudborough [23] first constructed such a language, which possesses the highest complexity (he called it "tape hardest") among all dcf languages under L-m-reductions; therefore, it is L-m-complete for DCFL and also for LOGDCFL. Using Sudborough's hardest languages, Lohrey [17] later presented another LOGDCFL-complete problem based on semi-Thue systems. Nonetheless, only a few languages are known today to be complete for DCFL as well as LOGDCFL.

A large void seems to lie between DCFL and LOGDCFL. This void has been filled with, for example, the union hierarchy and the intersection hierarchy over DCFL, whose k-th levels are composed of all unions (resp., intersections) of k dcf languages. They truly form distinctive infinite hierarchies [14, 26]. Taking a quite different approach, Hibbard [12] devised specific rewriting systems, known as deterministic scan limited automata. Those rewriting systems were later remodeled in [20, 21] as single input/storage-tape two-way deterministic linear-bounded automata that can modify the contents of tape cells whenever the associated tape heads access them (when a tape head makes a turn, however, we count it twice); such modifications are limited to only the first k accesses. We call those machines deterministic k-limited automata (or k-lda's, for short). Numerous followup studies, including Pighizzini and Prigioniero [22], Kutrib and Wendlandt [16], and Yamakami [25], have recently revitalized the study of k-lda's. It is possible to make k-lda's satisfy the so-called blank-skipping property [25], by which each tape cell becomes blank after the first k accesses and inner states cannot be changed while reading any blank symbol. A drawback of Hibbard's model is that the use of a single tape prohibits us from accessing memory and input simultaneously.

It seems quite natural to seek out a reasonable extension of LOGDCFL by generalizing its underlying machines in a simple way. A basis of LOGDCFL is of course the 1dpda, each of which is equipped with a read-once input tape (a read-only tape is called read-once if, whenever it reads a tape symbol, except for ε-moves, if any, it must move to the next unread cell) together with a storage device called a stack. Each stack allows two major operations: a pop operation is a deletion of the topmost stack symbol and a push operation is a writing of a new symbol onto the topmost stack cell. The stack usage of pushdown storage seems too restrictive in practice, and various extensions of such pushdown automata have been sought in the past literature. For instance, a stack automaton of Ginsburg, Greibach, and Harrison [9, 10] is capable of freely traversing the inside of its stack to access each stored item but is disallowed to modify those items unless the scanning stack head eventually comes back to the top of the stack. Thus, each cell of the stack could be accessed a number of times. Meduna's deep pushdown automata [19] also allow stack heads to move deeper into the contents of the stacks and to replace some symbols by appropriate strings. Other extensions of pushdown automata include [2, 13]. To argue parallel computations, we intend to seek a reasonable restriction of stack automata by introducing an access-controlled storage device. Each cell's content of such a device is modified by its own tape head, which moves sequentially back and forth along the storage tape. This special tape and its tape head naturally allow the underlying machines to make more flexible memory manipulations.

In real-life circumstances, it seems reasonable to limit the number of times that data sets stored in a storage device can be accessed. For instance, rewriting data items in blocks of a memory device, such as an external hard drive or a rewritable DVD, is usually costly and may be restricted during each execution of a computer program. We thus demand that every memory cell on this device can be modified only during the first few accesses and, once the intended access limit is exceeded, the storage cell turns unusable and no more rewriting is possible. We refer to the number of times that the content of a storage cell can be modified as its "depth". The aforementioned blank-skipping property of k-lda's, for instance, embodies exactly this kind of access restriction. While scanning such unusable data sets, reading more input symbols may or may not be restricted. We leave a further discussion on this restriction to Section 2.2.

To understand the role of the depth limit for an underlying machine, let us consider how to recognize the non-context-free language L_abc = {a^n b^n c^n | n ≥ 1} under an additional requirement that new input symbols are read only while storage cells are non-blank. Given an input of the form a^n b^m c^l, we first write a^n into the first n cells of the storage device, check whether m = n by simultaneously reading b^m and traversing the storage device backward by changing each a to a', and then check whether l = n by simultaneously reading c^l together with moving the device's scanning head back and forth by changing each a' to a'' and then to B (blank). This procedure requires depth 4.
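The four sweeps described above can be traced concretely. The following Python sketch is our own illustration, not part of the formal model: the array-based tape and the marker symbols a', a'' stand in for the storage alphabets, and each sweep charges one access per cell.

```python
BLANK = "B"
DEPTH = 4  # each cell may be rewritten only during its first 4 accesses

def accepts(w: str) -> bool:
    """Recognize {a^n b^n c^n : n >= 1} with a depth-4 storage tape:
    write a^n, sweep backward matching b's, sweep forward matching c's,
    then rewind once more, blanking every cell (the 4th access)."""
    n = 0
    i = 0
    while i < len(w) and w[i] == "a":      # access 1: write 'a' per cell
        n += 1
        i += 1
    if n == 0:
        return False
    tape = ["a"] * n
    for pos in range(n - 1, -1, -1):       # access 2: a -> a', match one b
        if i >= len(w) or w[i] != "b":
            return False
        tape[pos] = "a'"
        i += 1
    for pos in range(n):                   # access 3: a' -> a'', match one c
        if i >= len(w) or w[i] != "c":
            return False
        tape[pos] = "a''"
        i += 1
    for pos in range(n - 1, -1, -1):       # access 4: a'' -> blank forever
        tape[pos] = BLANK
    return i == len(w)
```

The sketch also respects the side condition stated above: input symbols are consumed only during sweeps over non-blank cells, while the final blanking sweep reads no input at all.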

A storage device whose cells have depth at most k is called a depth-k storage tape in this exposition and its scanning head is hereafter cited as a depth-k storage-tape head for convenience. The machines equipped with those devices are succinctly called one-way deterministic depth-k storage automata (or k-sda's, for short). Our k-sda's naturally expand Hibbard's k-lda's (this claim comes from the fact that Hibbard's rewriting systems can be forced to satisfy the blank-skipping property without compromising their computational power [25]). Each storage cell is initially empty and turns blank after exceeding its depth limit. This latter requirement is imperative because, without it, the machines become as powerful as polynomial-time Turing machines. This follows from the fact that non-erasing stack automata can recognize the circuit value problem, which is a P-complete problem.

In simultaneous accesses to input and storage tape cells, the behaviors of k-sda's are influenced by whether an access to new input symbols is "immune" or "susceptible" to the depth of the currently scanned storage-tape cells.

For convenience, we introduce the notation kSDA for each index k ≥ 1 to express the family of all languages recognized by those k-sda's (for a more precise definition, see Section 2.2). As the aforementioned example shows, kSDA contains even non-context-free languages. With the use of L-m-reductions analogously to LOGDCFL, for any index k ≥ 1, we consider the closure of kSDA under L-m-reductions, denoted by LOGkSDA. It follows from the definitions that kSDA ⊆ LOGkSDA. Among many intriguing questions, we wish to raise the following three simple questions regarding our new language families as well as their L-m-closures.

(1) What is the computational complexity of the language families kSDA as well as their closures LOGkSDA?

(2) Is there any natural machine model that can precisely characterize LOGkSDA in order to avoid the use of L-m-reductions?

(3) Is there any language that is L-m-complete for LOGkSDA?

The sole purpose of this exposition is to partially answer these questions through Sections 3–5 after a formal introduction of k-sda's in Section 2.

2 Introduction of Storage Automata

We formally define a new computational model, dubbed deterministic storage automata, and show their basic properties.

2.1 Numbers, Sets, Languages, and Turing Machines

We begin with fundamental notions and notation necessary to introduce a new computation model of storage automata.

The two notations Z and N represent the set of all integers and that of all natural numbers (i.e., nonnegative integers), respectively. Given two numbers m, n ∈ Z with m ≤ n, [m, n]_Z denotes the integer interval {m, m+1, ..., n}. In particular, when n ≥ 1, we abbreviate [1, n]_Z as [n]. Given a set A, P(A) denotes the power set of A, namely, the set of all subsets of A.

An alphabet is a finite nonempty set of "symbols" or "letters." Let Σ denote any alphabet. A string over Σ is a finite sequence of symbols in Σ. The length of a string x is the total number of symbols in x and is denoted by |x|. The special notation ε is used to express the empty string of length 0. Given a string x = x_1 x_2 ... x_n, the reverse x_n x_{n-1} ... x_1 of x is denoted x^R. For two strings x and y over the same alphabet, x is said to be a prefix of y if there exists a string z for which y = xz. In this case, z is called a suffix of y. Given a language L over Σ, Pref(L) denotes the set of all prefixes of strings in L, namely, Pref(L) = {x | exists z such that xz ∈ L}. The notation Σ* denotes the set of all strings over Σ. A language over Σ is simply a subset of Σ*. As is customary, we freely identify a decision problem with its corresponding language. We use the binary representations of natural numbers. For such a representation s, the notation (s)_2 denotes the corresponding natural number of s. For instance, we obtain (0)_2 = 0, (1)_2 = 1, (10)_2 = 2, (11)_2 = 3, (100)_2 = 4, etc.
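As a quick sanity check of the notation (s)_2, a throwaway Python helper (ours, not part of the exposition) evaluates a binary string most-significant-bit first:

```python
def bin_to_nat(s: str) -> int:
    """Value (s)_2 of a binary string s, most significant bit first."""
    n = 0
    for bit in s:
        assert bit in "01"   # only binary representations are allowed
        n = 2 * n + int(bit)
    return n
```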

We assume the reader's familiarity with multi-tape Turing machines and we abbreviate deterministic Turing machines as DTMs. To manage a sub-linear-space DTM, we assume that its input tape is read-only and that an additional rewritable index tape is used to access a tape cell whose address is specified by the content of the index tape. Since the index tape requires only O(log n) bits to specify each cell of an input of length n, the space usage of an underlying machine is measured only on its work tapes.

For each constant k ≥ 1, Steve's class SC^k is the family of all languages recognized by DTMs in polynomial time using O(log^k n) space [3]. Let SC denote the union of all SC^k for k ≥ 1. It follows that SC^1 = L and SC^k ⊆ SC^{k+1} ⊆ SC for any k ≥ 1.

In order to compute a function, we further provide a DTM with an extra write-once output tape so that the machine produces output strings, where a tape is write-once if its tape head never moves to the left and, whenever its tape head writes a nonempty symbol, it must move to the right. All (total) functions computable by such DTMs with output tapes in polynomial time using only logarithmic space form the function class known as FL.

Given two languages A over alphabet Σ_1 and B over Σ_2, we say that A is L-m-reducible to B (denoted by A ≤ B via L-m-reductions) if there exists a function f computed by an appropriate polynomial-time DTM using only logarithmic space such that, for any x ∈ Σ_1*, x ∈ A iff f(x) ∈ B. We say that A is inter-reducible to B via L-m-reductions if both A is L-m-reducible to B and B is L-m-reducible to A.

We use DCFL for the collection of all deterministic context-free (dcf) languages. Those languages are recognized by one-way deterministic pushdown automata (or 1dpda's, for short). The notation LOGDCFL expresses the L-m-closure of DCFL.

In relation to our new machine model, introduced in Section 2.2, we briefly explain Hibbard's then-called "scan-limited automata" [12], which were later reformulated by Pighizzini and Pisoni [20, 21] using a single-tape Turing machine model. This exposition follows their formulation of deterministic k-limited automata (or k-lda's, for short) for any positive integer k. A k-lda is a single-tape linear-bounded automaton with two endmarkers. Initially, its input/work tape holds an input string, and the tape head may modify a tape symbol whenever the tape head reads it during the first k visits (when the tape head makes a turn, we double-count the visit).

2.2 Storage Tapes and Storage Automata

We expand the standard model of pushdown automata by substituting a more flexible storage device, called a storage tape, for the stack. Formally, a storage tape is a semi-infinite rewritable tape whose cells are initially empty (filled with a distinguished initial symbol) and are accessed sequentially by a tape head that can move back and forth along the tape, changing tape symbols as it passes through.

In what follows, we fix a constant k ≥ 1. A one-way deterministic depth-k storage automaton (or a k-sda, for short) is a 2-tape DTM (equipped only with a read-only input tape and a rewritable storage tape) of the form M = (Q, Σ, {▷, ◁}, {Γ^(i)}_{i∈[0,k]}, δ, q_0, Q_acc, Q_rej) with a finite set Q of inner states, an input alphabet Σ, storage alphabets Γ^(i) for indices i with 0 ≤ i ≤ k, a transition function δ from Q × Σ̌ × Γ to Q × Γ × D_1 × D_2 with Σ̌ = Σ ∪ {▷, ◁}, Γ = Γ^(0) ∪ ... ∪ Γ^(k), D_1 = {0, +1}, and D_2 = {-1, 0, +1}, an initial state q_0 in Q, and sets Q_acc and Q_rej of accepting states and rejecting states, respectively, with Q_acc, Q_rej ⊆ Q and Q_acc ∩ Q_rej = ∅, provided that Γ^(0) = {⊥} (where ⊥ is a distinguished initial symbol), Γ^(k) = {B} (where B is a unique blank symbol), and Γ^(i) ∩ Γ^(j) = ∅ for any distinct pair i, j. The two sets D_1 and D_2 indicate the direction of the input-tape head and that of the storage-tape head, respectively. The choice D_1 = {0, +1} forces the input tape to be read once. We say that the input tape is read once if its tape head either moves to the right or stays still scanning no input symbol. A single move (or step) of M is dictated by δ. If M is in inner state q, scanning σ on the input tape and γ on the storage tape, a transition δ(q, σ, γ) = (p, τ, d_1, d_2) forces M to change q to p, overwrite γ by τ, and move the input-tape head in direction d_1 and the storage-tape head in direction d_2.

Instead of making ε-moves (i.e., a tape head neither moves nor reads any tape symbol), we allow a tape head to make a stationary move (the use of stationary moves is made in this exposition only for convenience's sake; it is also possible to define k-sda's using ε-moves in place of stationary moves), by which the tape head stays still and the scanned symbol is unaltered. The tape-head direction "0" indicates such a stationary move. For stationary moves, δ must satisfy the following stationary-move requirement: assuming that δ(q, σ, γ) = (p, τ, d_1, d_2), (i) if d_2 = 0, then γ = τ, and (ii) either d_1 ≠ 0 or d_2 ≠ 0. Thus, whenever the storage-tape head moves to a neighboring cell, it must change the tape symbol.

All tape cells are indexed by natural numbers from left to right. The leftmost tape cell is the start cell, indexed 0. An input tape has two endmarkers ▷ and ◁, and a storage tape has only the left endmarker ▷. When an input string x is given to the input tape, it is surrounded by the two endmarkers as ▷x◁ so that ▷ is located at the start cell and ◁ is at the cell indexed |x|+1. For any index i ∈ [0, |x|+1]_Z, x(i) denotes the tape symbol written in the i-th input-tape cell, provided that x(0) = ▷ (left endmarker) and x(|x|+1) = ◁ (right endmarker). Similarly, when w represents the non-blank portion of the content of a storage tape, the notation w(j) expresses the symbol in the j-th tape cell. Note that w(0) = ▷.

For the storage tape, we request the following rewriting restriction, called the depth-k requirement, to be satisfied. Whenever the storage-tape head passes through a tape cell containing a symbol in Γ^(i) with i < k, the machine must replace it by a symbol in Γ^(i+1) except for the case of the following "turns." We distinguish two types of turns. A left turn at step t refers to M's step at which, after M's tape head moves to the right at step t-1, it moves to the left at step t. Similarly, we say that M makes a right turn at step t if M's tape head moves from the left at step t-1 and changes its direction to the right at step t. Whenever a tape head makes a turn, we treat this case as "double accesses." More formally, at a turn, any symbol in Γ^(i) with i ≤ k-2 must be changed to another symbol in Γ^(i+2). No symbol in Γ^(k) can be modified at any time. Although we use various storage alphabets Γ^(i), we can easily discern from which direction the tape head arrived simply by scanning the storage tape symbol written in the current tape cell.

Assuming that M modifies γ ∈ Γ^(i) on a storage tape with i < k to τ and moves its storage-tape head in direction d, if the move is a turn, then τ must belong to Γ^(i+2); if d = 0, then both γ and τ must be the same; otherwise, τ must be in Γ^(i+1). A storage tape that satisfies the depth-k requirement is succinctly called a depth-k storage tape.
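The depth-k requirement can be pictured as a per-cell access budget, where a turn is charged as two accesses and the final allowed access blanks the cell forever. The following Python sketch of that bookkeeping is our own illustration (class and method names are ours; the formal model tracks depth through the alphabets Γ^(i) rather than explicit counters):

```python
class DepthKTape:
    """Storage tape whose cells admit at most k rewriting accesses;
    a head turn on a cell is charged as two accesses."""
    BLANK = "B"

    def __init__(self, k: int):
        self.k = k
        self.cells = []   # current symbol of each cell
        self.count = []   # accesses already consumed per cell

    def _ensure(self, pos: int):
        while len(self.cells) <= pos:
            self.cells.append(self.BLANK)
            self.count.append(0)

    def visit(self, pos: int, new_symbol: str, turn: bool = False):
        """Rewrite cell `pos`; raise once its depth budget is exhausted."""
        self._ensure(pos)
        charge = 2 if turn else 1
        if self.count[pos] + charge > self.k:
            raise RuntimeError(f"cell {pos} exceeded depth {self.k}")
        self.count[pos] += charge
        # On reaching the k-th access, the cell turns blank forever.
        self.cells[pos] = self.BLANK if self.count[pos] == self.k else new_symbol
```

For k = 2 this reproduces the stack-like behavior used in Lemma 2.1: the first visit writes a symbol and the second visit necessarily blanks it.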

Let us consider two different models, whose input-tape head is either depth-susceptible or depth-immune to the content of a storage-tape cell. A tape head (other than the storage-tape head) is called depth-susceptible if, while the currently scanned symbol on a storage-tape cell is in Γ^(k), the tape head must make a stationary move; namely, if δ(q, σ, γ) = (p, τ, d_1, d_2) with γ ∈ Γ^(k), then d_1 = 0 follows. The tape head is called depth-immune if there is no such restriction on the scanned symbol γ.

A surface configuration of M on input x is of the form (q, i, w, j) with q ∈ Q, i ∈ [0, |x|+1]_Z, w ∈ (Γ - {B})*, and j ∈ [0, |w|]_Z, which indicates the situation where M is in state q, the storage tape contains w (apart from the blank portion), and the two tape heads scan the i-th cell of the input tape and the j-th cell of the storage tape. For readability, we drop the word "surface" altogether in the subsequent sections.

The initial (surface) configuration has the form (q_0, 0, ε, 0), and δ describes how to reach the next surface configuration in a single step. For convenience, we define the depth value of a surface configuration to be the number i satisfying that the currently scanned storage symbol is in Γ^(i). An accepting configuration (resp., a rejecting configuration) is a configuration whose state belongs to Q_acc (resp., Q_rej). A halting configuration means either an accepting configuration or a rejecting configuration. A computation of M starts with the initial configuration and ends with a halting configuration. The k-sda M accepts (resp., rejects) x if M starts with the initial configuration on the input x and reaches an accepting configuration (resp., a rejecting configuration).

For a language L over Σ, we say that M recognizes (accepts or solves) L if, for any input string x ∈ Σ*, (i) if x ∈ L, then M accepts x, and (ii) if x ∉ L, then M rejects x. For two k-sda's M_1 and M_2 over the same input alphabet Σ, we say that M_1 is (computationally) equivalent to M_2 if, for any input x, M_1 accepts (resp., rejects) x iff M_2 accepts (resp., rejects) x.

For notational convenience, we write kSDA_s for the collection of all languages recognized by depth-susceptible k-sda's and LOGkSDA_s for the collection of languages that are L-m-reducible to certain languages in kSDA_s. Moreover, we set SDA_s to be the union of all kSDA_s for k ≥ 1. With this notation, the non-context-free language L_abc, discussed in Section 1, belongs to SDA_s. Thus, SDA_s ⊈ CFL follows instantly. For the depth-immune model of k-sda, we similarly define kSDA_i, LOGkSDA_i, and SDA_i.

As a special case of k = 2, the following holds. This demonstrates the fact that k-sda's truly expand 1dpda's.

Lemma 2.1

DCFL = 2SDA_s.

Proof.

It was shown that DCFL is precisely characterized by deterministic 2-limited automata (or 2-lda's, for short) [12, 21]. Since any 2-lda can be transformed into another 2-lda with the blank-skipping property [25], depth-susceptible 2-sda's can simulate 2-lda's; thus, we immediately conclude that DCFL ⊆ 2SDA_s.

For the converse, it suffices to simulate depth-susceptible 2-sda's by appropriate 1dpda's. Given a depth-susceptible 2-sda M, we design a one-way deterministic pushdown automaton (or a 1dpda) N that works as follows. We want to treat the storage tape of M as a stack by ignoring all blank symbols except for the left endmarker ▷. When M modifies the initial symbol ⊥ on its storage tape to a new symbol γ in Γ^(1), N pushes γ onto its stack. In the case where M modifies a storage symbol γ to a blank, N pops the same symbol γ. Note that, since M is depth-susceptible, it cannot move the input-tape head at that moment. As for the behavior of M's input-tape head, if M reads an input symbol σ and moves to the right, then N does the same. On a tape cell containing σ, if M's input-tape head makes a series of stationary moves, then N reads σ as the first move, remembers σ, and makes ε-moves afterwards until M ends its stationary moves. Obviously, the resulting machine N is a 1dpda and it simulates M. ∎
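The stack view used in this simulation — the first access to a fresh cell acts as a push, the second (blanking) access as a pop of that same cell — can be summarized in a few lines of Python (names ours, purely illustrative):

```python
class StackViewOfDepth2Tape:
    """View a depth-2 storage tape as a pushdown stack: the first
    access to a fresh cell is a push, and the second access, which
    blanks the cell, is a pop of that same (necessarily topmost) cell."""

    def __init__(self):
        self.stack = []

    def first_access(self, symbol: str):
        self.stack.append(symbol)   # cell written for the first time

    def second_access(self) -> str:
        return self.stack.pop()     # cell turns blank; only the top cell
                                    # can be at depth 1 when the head returns
```

The correspondence is faithful because a one-way sweep of the depth-2 head can only blank cells in last-written-first-blanked order.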

Remark. Remember that tape cells of k-sda's become blank after the first k accesses. If we allowed such tape cells to keep the last written symbols instead of erasing them, then the resulting machines would gain enough power to recognize languages to which even P-complete problems are L-m-reducible. In this exposition, we do not delve further into this topic.

3 Two Machine Models that Characterize LOGkSDA

We begin with a structural study of LOGkSDA, which is the closure of kSDA under L-m-reductions, defined in Section 2.2. We intend to seek different characterizations of LOGkSDA with no use of L-m-reductions. The idea of eliminating such reductions is attributed to Sudborough [23], who characterized LOGDCFL using two machine models: polynomial-time log-space auxiliary deterministic pushdown automata and polynomial-time multi-head deterministic pushdown automata. Our goal here is to expand these machine models to fit into the framework of depth-k storage automata.

3.1 Deterministic Auxiliary Depth-k Storage Automata

Let us expand deterministic auxiliary pushdown automata to deterministic auxiliary depth-k storage automata, each of which is equipped with a two-way read-only depth-susceptible input tape, an auxiliary rewritable tape, and a depth-k storage tape.

Let us formally formulate deterministic auxiliary depth-k storage automata. For the description of such a machine, we first prepare a two-way read-only input tape and a depth-k storage tape, and secondly we supply a new space-bounded auxiliary rewritable work tape whose cells are freely modified by a two-way tape head. Notice that the storage-tape head is allowed to make stationary moves (that is, the storage-tape head neither changes any symbol nor moves to any adjacent cell). A deterministic auxiliary depth-k storage automaton (or an aux-k-sda, for short) is formally a 3-tape DTM M = (Q, Σ, {▷, ◁}, Θ, {Γ^(i)}_{i∈[0,k]}, δ, q_0, Q_acc, Q_rej) with a read-only input tape, an auxiliary rewritable (work) tape with an alphabet Θ, and a depth-k storage tape. Initially, the input tape is filled with ▷x◁, the auxiliary tape is blank, and the depth-k storage tape holds only designated initial symbols except for the left endmarker ▷. We set Σ̌ = Σ ∪ {▷, ◁}, Γ = Γ^(0) ∪ ... ∪ Γ^(k), and D = {-1, 0, +1}, provided that Γ^(i) ∩ Γ^(j) = ∅ for any distinct pair i, j. The transition function δ of M maps Q × Σ̌ × Θ × Γ to Q × Θ × Γ × D × D × D. A transition δ(q, σ, θ, γ) = (p, θ', τ, d_1, d_2, d_3) indicates that, on reading input symbol σ, auxiliary tape symbol θ, and storage tape symbol γ, M changes its inner state q to p by moving the input-tape head in direction d_1, changes θ to θ' by moving the auxiliary-tape head in direction d_2, and changes γ to τ by moving the storage-tape head in direction d_3. A string x is accepted (resp., rejected) by M if M enters an inner state in Q_acc (resp., Q_rej).

When excluding the auxiliary tape from the definition of M, the resulting automaton must fulfill the depth-k requirement of k-sda's given in Section 2.2. The depth-susceptibility condition is stated as follows: if δ(q, σ, θ, γ) = (p, θ', τ, d_1, d_2, d_3) with γ ∈ Γ^(k), then d_1 = 0 must hold. To implement any stationary move of the tape heads, δ should satisfy a requirement similar to that of the underlying k-sda's; namely, assuming that δ(q, σ, θ, γ) = (p, θ', τ, d_1, d_2, d_3), (i) if d_2 = 0, then θ = θ', (ii) if d_3 = 0, then γ = τ, and (iii) at least one of d_1, d_2, and d_3 is in {-1, +1}.

Given an aux-k-sda M, take a positive integer c and a polynomial p so that M's auxiliary tape uses at most c log n tape cells within p(n) steps on any input of length n. We reduce this space bound down to log n by introducing a larger auxiliary tape alphabet as follows. Since we can express each element of Θ as a binary string of fixed length (by adding 0s as dummy bits if necessary), we can split the auxiliary tape into tracks, each holding one bit of the element of Θ. Without loss of generality, our machine can be assumed to have an auxiliary tape composed of c tracks holding binary symbols for an appropriately chosen constant c ≥ 1 and to use at most log n tape cells on this auxiliary tape. We also assume that M initially writes 0s on every track of the auxiliary tape.
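The track-splitting step is nothing more than alphabet enlargement: c binary tracks over one cell collapse into a single symbol of the product alphabet {0,1}^c. A small Python illustration (function names ours, purely expository):

```python
def pack_tracks(tracks):
    """Combine c binary tracks of equal length into one tape over the
    product alphabet {0,1}^c, represented as one tuple per cell."""
    length = len(tracks[0])
    assert all(len(t) == length for t in tracks)
    return [tuple(t[i] for t in tracks) for i in range(length)]

def unpack_tracks(tape, c):
    """Recover the c individual binary tracks from the packed tape."""
    return ["".join(cell[j] for cell in tape) for j in range(c)]
```

Packing c tracks of log n cells into one tape of log n cells is exactly how the c log n space bound is compressed to log n at the cost of a larger tape alphabet.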

3.2 Multi-Head Deterministic Depth-k Storage Automata

We further introduce another useful machine model by expanding multi-head pushdown automata to (two-way) multi-head deterministic depth-k storage automata, each of which uses two-way multiple tape heads to read a given input besides a tape head that modifies symbols on a depth-k storage tape.

For each fixed number m ≥ 1, we define an m-head deterministic depth-k storage automaton as a 2-tape DTM with m read-only depth-susceptible tape heads scanning over an input tape and a single read/write tape head over a depth-k storage tape. For convenience, we call such a machine a k-sda(m)_2, where the subscript "2" emphasizes that all subordinate tape heads move in both directions (except for the case of stationary moves). Notice that each k-sda(m)_2 actually has m+1 tape heads, including the one tape head working along the storage tape.

More formally, a k-sda(m)_2 is a tuple (Q, Σ, {▷, ◁}, {Γ^(i)}_{i∈[0,k]}, δ, q_0, Q_acc, Q_rej) with a transition function δ mapping Q × Σ̌^m × Γ to Q × Γ × D^m × D, where Σ̌ = Σ ∪ {▷, ◁}, Γ = Γ^(0) ∪ ... ∪ Γ^(k), and D = {-1, 0, +1}, provided that Γ^(i) ∩ Γ^(j) = ∅ for any distinct pair i, j. A transition δ(q, σ_1, ..., σ_m, γ) = (p, τ, d_1, ..., d_m, d) means that, if M is in inner state q, scanning the tuple (σ_1, ..., σ_m) of symbols on the input tape by the m read-only tape heads as well as symbol γ on the depth-k storage tape by the rewritable tape head, then M enters inner state p and writes τ on the depth-k storage tape by moving the i-th input-tape head in direction d_i for every index i ∈ [m] and the storage-tape head in direction d. Remember that a k-sda(m)_2 and a depth-susceptible k-sda are similar in their machine structures, but the former can move its input-tape heads to the left.

The treatment of the acceptance/rejection criteria is the same as for the underlying k-sda's. A stationary move requires that (i) d = 0 implies γ = τ and (ii) at least one of d_1, ..., d_m, d is not 0. The depth-susceptibility condition of M says that, for any transition δ(q, σ_1, ..., σ_m, γ) = (p, τ, d_1, ..., d_m, d) of M, if γ ∈ Γ^(k), then d_1 = ... = d_m = 0 follows.

3.3 Characterization Theorem

We intend to demonstrate that the two new machine models introduced in Sections 3.1–3.2 precisely characterize LOGkSDA. This result can be seen as a natural extension of Sudborough's machine characterization of LOGDCFL to LOGkSDA.

Theorem 3.1

Let k ≥ 1. Let L be any language. The following three statements are logically equivalent.

  1. L is in LOGkSDA.

  2. There exists an aux-k-sda that recognizes L in polynomial time using logarithmic space.

  3. There exist a number m ≥ 1 and a k-sda(m)_2 that recognizes L in polynomial time.

In the rest of this section, we intend to prove Theorem 3.1. Note that Sudborough's proof [23, Lemmas 3–6] for LOGDCFL relies on the heavy use of stack operations, which are applied only to the topmost symbol of the stack while the other symbols in the stack stay intact. In our case, however, we need to deal with the operations of a storage-tape head, which can move back and forth along a storage tape, modifying each cell's content as many as k times. Sudborough's characterization utilizes a simulation procedure of Hartmanis [11, pp. 338–339] and a proof argument of Galil [7, Lemma 4.3]; however, we cannot directly use them, and thus a new idea is definitely needed to establish Theorem 3.1. The proof of the theorem therefore requires technically challenging simulations among FL-functions and the other machine models of aux-k-sda's and k-sda(m)_2's.

Lemma 3.2

Given a function f in FL for certain alphabets Σ_1 and Σ_2 and a depth-susceptible k-sda M working over Σ_2 in polynomial time, there exists a log-space aux-k-sda that recognizes {x ∈ Σ_1* | M accepts f(x)} in polynomial time.

Proof.

Let f be any function in FL. We take a DTM M_f, equipped with a read-only input tape, a logarithmically space-bounded rewritable work tape, and a write-once output tape, and assume that M_f computes f in polynomial time. A given depth-susceptible k-sda running in polynomial time is denoted M. We set L = {x ∈ Σ_1* | M accepts f(x)}.

We design the desired aux-k-sda N for L as follows. The input tape of M holds the string f(x) for any given x. In the following construction of N, we treat this input tape of M as an imaginary tape. Given any input x in Σ_1*, we repeat the following process until M enters a certain halting state. Using the auxiliary tape of N, we keep track of the content of M_f's work tape as well as the two head positions on M_f's input tape and M's imaginary input tape.

Assume that N's input-tape head is scanning the same tape cell as M_f's input-tape head. Assume that M's head on the imaginary input tape is located at cell j.
(1) If M makes an input-stationary move, then we simply simulate one step of the behavior of M's storage-tape head, since we can reuse the last produced output symbol of M_f.
(2) Assume that the current step is not an input-stationary move and that M moves to cell j+1 on the imaginary input tape and scans this cell. We remember the position of N's input-tape head, return to the last position of M_f's input-tape head, and resume the simulation of M_f, with the use of N's auxiliary tape as a work tape, by a series of stationary moves of the storage-tape head, using both the input tape and the auxiliary tape of N, until M_f produces the (j+1)-th output symbol, say, σ. This is possible because the output tape of M_f is write-once. Once σ is obtained, N remembers the positions of M_f's tape heads and moves its input-tape head to restore its location. We then simulate a single step of M on σ by reading the current cell content of the storage tape together with a stationary move of both the input-tape head and the auxiliary-tape head. Note that the move of the storage-tape head requires only the symbol σ but does not need any movement of the other tape heads. We then update the tape-head positions.

It is not difficult to show that eventually reaches the same type of halting states as does in polynomially many steps. Notice that the storage-tape head and the auxiliary-tape head do not work simultaneously. The depth-susceptibility of comes from that of since is simulated only when ’s storage-tape head reads a symbol not in . Thus, is indeed an aux--sda. ∎

We then transform an aux--sda to a -sda, which mimics the behavior of the aux--sda.

Lemma 3.3

Let . Let denote a polynomial-time log-space aux--sda. There exist a constant and a -sda that simulates in polynomial time.

Proof.

Let and let denote any aux--sda that runs in polynomial time using logarithmic space on all inputs of length . We use ’s depth-susceptible input tape as the principal tape head of . We introduce additional tape heads to simulate the behavior of an auxiliary-tape head of as follows.

As noted in Section 3.1, we assume that the auxiliary tape of is split into tracks for a certain constant and that uses at most cells on the auxiliary tape, where denotes the input length. We want to construct a polynomial-time -sda for which coincides with . Since each track of the auxiliary tape uses the binary alphabet , we can view the content of each track as the binary representation of a natural number. In what follows, we fix one such track. If the track contains a string of length , we treat it as the binary number . We use two tape heads to remember the positions of the input-tape head and the auxiliary-tape head of . To remember the number , we need additional tape heads (other than the storage-tape head). Since there are tracks, we need heads for our intended simulation.
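As an informal illustration (not part of the formal proof), the following Python sketch models the track encoding above: the content of one track, read as a binary number, records a head position, and flipping one bit of the track changes the value by a power of two, which is exactly the distance that the position-recording head must be moved. All function names are hypothetical.

```python
def track_value(bits: str) -> int:
    # read the track content b_1 ... b_m as a binary number
    return int(bits, 2)

def head_shift(old_bits: str, new_bits: str) -> int:
    # flipping the bit in position j changes the value by 2**j,
    # i.e., the number of cells the position-recording head must move
    return track_value(new_bits) - track_value(old_bits)
```

For instance, rewriting the track from "0100" to "0110" flips one bit and moves the recorded position by two cells.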

Head 1 keeps track of the auxiliary-tape head position. Whenever moves the auxiliary-tape head, moves head 1 as well. Head 2 moves backward to measure the distance of the auxiliary-tape head from the left end of the auxiliary tape. Using this information, we move head 3 as follows. If the tape symbol changes to (resp., to ) on this target track, then we need to move the head cells to the right (resp., to the left). How can we move one head to cell from ? Following [11], we use three tape heads to achieve this goal as follows. We move head 4 one cell to the right. As head 4 takes one step on the way back to , we move head 5 two cells to the right. We then switch the roles of heads 4 and 5. As head 5 takes one step back to , we move head 4 two cells to the right. If we repeat this process times, one of the heads indeed reaches cell . Hence, for the th run, we should move head 3 to cell . This process requires 3 tape heads. Thus, a total of tape heads is sufficient to simulate the operation on the track content of the auxiliary tape.
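The back-and-forth doubling trick can be illustrated by a small Python sketch (a toy model, not the automaton itself; all names are hypothetical): two "heads" repeatedly trade roles, one walking back to the start cell while the other advances two cells per step, so that after j rounds of this exchange a head stands at cell 2**j.

```python
def bounce_to_power(j: int) -> int:
    # toy model of the two bouncing heads from the proof
    pos = 1                 # the first head moves one cell to the right
    for _ in range(j):
        other = 0
        while pos > 0:      # one head walks back to the start cell...
            pos -= 1
            other += 2      # ...while the other advances two cells per step
        pos = other         # switch the roles of the two heads
    return pos              # after j rounds, a head stands at cell 2**j
```

Each round doubles the reached cell index, mirroring how the two subordinate heads let the machine translate a flipped track bit into a head displacement by the corresponding power of two.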

By the above behaviors, the additional tape heads are depth-susceptible. ∎

Figure 1: Storage-tape head movement. (1) After moving from to , changes to . The storage-tape head of moves as depicted in this figure. (2) After moving from to , changes to . The storage-tape head of moves as shown in this figure.

We reduce the number of input-tape heads from to for any by implementing a “counter head” that records the movement of one input-tape head. A counter head is a two-way depth-susceptible tape head along an input tape such that, once it is activated, it starts moving from to the right and comes back to without stopping or reading any input symbol on its way.

Lemma 3.4

Let and . Given any -sda() with a counter head over input alphabet running in polynomial time, there exists a polynomial-time -sda() with a counter head that recognizes the language , where and are tape symbols not in and with for .

Proof.

Let . Let denote any -sda() with a counter head running in polynomial time. Among all read-only tape heads, we call the principal tape head head 1 and choose two subordinate tape heads (except for the counter head) as head 2 and head 3. We want to simulate these three tape heads by two tape heads, eliminating one subordinate tape head. We leave all the remaining tape heads unmodified so that they stay working over . This simulation can be carried out on an appropriate polynomial-time -sda(), say, . With the use of and , let . The associated input to is . Initially, heads are stationed at cell . We move heads 2 and 3 to the leftmost of . Assume that head is originally located at cell and head is at cell . Such a location pair is expressed as . We call each block of block for any . For convenience, and are respectively called block 0 and block . We want to express the pair by stationing a single tape head at the th symbol of the th block of ’s input tape. We assume that the associated input-tape head of , say, head is located at the th symbol of the th block. We force to hold two input symbols, say, written at the th and the th cells of ’s input tape.

We assume that, in a single step, head and head of move to a new location pair for . To simulate this single step of , needs to move head to a new location and read two symbols . If , then moves head in direction . By contrast, if , then moves head in direction , reads its input symbol , and remembers it using its inner states. As moves its tape head leftward to the nearest , moves the counter head to the right (from the start cell) to remember the value . Next, moves head to the symbol in block without moving the counter head. Finally, moves head to the right until the counter head comes back to the start cell. After the counter head arrives at the start cell, head reaches the th cell in block . Next, reads an input symbol and remembers it using its inner states. Note that the tape head on the storage tape never moves during the above process. ∎
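The pairing idea underlying this proof can be sketched informally in Python (a toy model outside the formal construction; all names are hypothetical): a single head position on the padded block-structured input encodes the pair (i, j) of the two original head positions, and the pair is recovered uniquely by division with remainder.

```python
def pack(i: int, j: int, n: int) -> int:
    # station one head at the i-th symbol of the j-th block;
    # each block has n + 2 cells (a copy of the input plus separators)
    return j * (n + 2) + i

def unpack(p: int, n: int) -> tuple:
    # the single position p determines both original head positions
    j, i = divmod(p, n + 2)
    return i, j
```

Since pack is a bijection between pairs and single positions, simulating two subordinate heads by one head loses no information, which is exactly why one head can be eliminated per application of the lemma.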

Notice that a -sda owns only a one-way input-tape head whereas a -sda() uses a two-way input-tape head. We thus need to restrict the two-way head moves of a -sda() to one-way head moves. For this purpose, we utilize a counter again together with the reverse of an input.

Lemma 3.5

Given a -sda() with a counter head running in polynomial time, there exists another -sda() with a counter head such that (i) ’s input-tape head never moves to the left and (ii) recognizes the language in polynomial time, where .

Proof.

Let be any polynomial-time -sda() with a counter head. We simulate the two-way movement of ’s input-tape head by an appropriate one-way tape head in the following way. Assume that represents the position of ’s input-tape head. Let and denote the tape symbols at cells and . For simplicity, the input-tape head is called head 1. If ’s input-tape head moves to the right or makes a stationary move, then we exactly simulate ’s step. In what follows, we consider the case where ’s input-tape head moves to the left; that is, the new position of the input-tape head is . By the depth-susceptibility condition of , the current storage-tape cell contains no symbol in . In this case, we move head 1 and the counter head simultaneously to the right until head 1 reaches the first encounter of . We continue moving head 1 to the right while we move the counter head back to the left endmarker. We finally make head 1 shift one more cell to the right. Head 1 then reaches . ∎

Next, we show how to eliminate a counter head using the fact that the counter head is depth-stopping.

Lemma 3.6

Let denote any -sda() with a one-way input-tape head and a counter head running in polynomial time. There exists a depth-susceptible -sda such that (1) ’s input tape is also one-way and (2) recognizes in polynomial time, where for .

Sudborough made a similar claim, whose proof relies on Galil’s argument [7], which uses a stack to store and remove specific symbols to remember the distance of a tape head from a particular input-tape cell. However, since tape cells on the storage tape cannot be modified more than times, we need to develop a different strategy to prove Lemma 3.6.

For this purpose, we use an additional string of the form to make enough blank space on the depth- storage tape for future recording of the movement.

Proof of Lemma 3.6.   Take any -sda() with a counter head. Since the counter head is depth-susceptible, it suffices to show how to simulate the behaviors of the counter head using a storage tape while the current storage-tape cell holds no symbol from . In what follows, we describe the simulation procedure. Let denote any input and set .

Whenever the counter head is activated, it starts at , moves to the right for a certain number of steps, say, , and moves back to the start cell to complete a “counting” process. To simulate this process on a -sda , there are three cases to consider separately. Note that, even in the case of , if the counter head is not activated, then we simply move an input-tape head as does. Using its inner states, can remember (a) which direction the storage-tape head comes from and (b) the contents of the currently scanned cell and its left and right adjacent tape cells (if any). Assume that is the content of three neighboring cells, the middle of which is being scanned by ’s storage-tape head.

We partition the storage tape into a number of “regions”. A region consists of blocks, each of which contains cells. Each region is used to simulate one run of the counter head and basically holds the information on one storage symbol. Two regions are separated by one separator block of cells. We call a tape cell a representative if it holds the information on the tape symbol stored in a storage-tape cell of . A block is called active if it contains a representative, and all other blocks are called passive. In particular, we call a block consumed if it contains but no representative.

In a run of the procedure described below, we maintain the situation that there is at most one active block in a region. Moreover, we maintain the following condition as well.

(*) Between and (resp., between and ), all symbols appearing in this tape region of are in (resp., ), where for any .

We use a new storage alphabet consisting of symbols of the form and for , where the parameter (resp., ) indicates that there are consumed blocks in the area of the region left (resp., right) to the currently scanning cell.

Assume that is scanning .

(1) Assume that ’s storage-tape head comes from the left, and assume that the counter head is not activated but writes and moves its storage-tape head in direction (). In this case, overwrites with and moves its storage-tape head in direction . Using as well as the value , moves to the right and finds a border to the neighboring region. If finds a representative in this neighboring region, then stops. If reaches the right border of the neighboring region without finding any representative, then this region must represent , and returns and stops at the center of the neighboring region. If ’s storage-tape head comes from the right, we use in place of .

(2) Consider the case where ’s storage-tape head sits at the cell that has never been visited before. Note that this cell is blank and all cells located at its right area are also blank.

(a) If writes and moves its storage-tape head to the left, then we need to secure enough space for future executions of (3)–(5) because the storage-tape head must move around to mark certain cells. Assume that is scanning . To simulate the counter head moves, writes as a marker, moves its storage-tape head for steps, comes back to , moves to the right for steps, writes , and then moves to the left to search for a representative in the left neighboring region as in (1).

(b) In contrast, if moves to the left, then we further need to produce a new region. For this purpose, further moves to the right for cells to find a new border, continues moving for cells to find the center of this new region, and finally stops.

(3) Consider the case where ’s storage-tape head reads a non-blank tape symbol with , writes a tape symbol over it, and moves in direction .

(a) Assume that had moved to from the left. See Fig. 1(1). Notice that . At present, is assumed to be scanning . In this case, remembers in its inner states, modifies it to (), and moves its tape head for steps to the right as the counter head does, by changing every encountered symbol of the form with to on its way. When makes steps, it makes a left turn, returns to , and writes . This mimics the back-and-forth movement of the counter head. The tape head again starts moving rightwards by changing for to for exactly steps, and it finally writes . This is possible because the input-tape head reads and (*) is satisfied. Finally, if , then moves to the right looking for a representative of the right region. On the contrary, when , moves to the left looking for a representative of the left region.

(b) Assume that had moved to from the right. This case is treated symmetrically to (a). See Fig. 1(2).

(4) Consider the case where ’s storage-tape head reads a tape symbol and moves in direction . In this case, behaves similarly to (1) except that its tape head writes a new marker since .

(5) Consider the case where ’s storage-tape head reads the symbol in . We then move the storage-tape head in direction . In this case, ’s storage-tape head is at the center of the current region. Since we do not need to simulate the counter head, writes (whenever ) and follows the procedure of moving to the neighboring region as in (1). ∎

Finally, we combine all the lemmas (Lemmas 3.2–3.6) and verify Theorem 3.1.

Proof of Theorem 3.1.   Let . The implication (1)(2) is shown as follows. Take any language in over alphabet . There exist a function in for an appropriate alphabet and a depth-susceptible -sda working over such that, for any string , if , then accepts ; otherwise, rejects . By Lemma 3.2, we can obtain a log-space aux--sda that recognizes in polynomial time.

Lemma 3.3 obviously leads to the implication (2)(3). Finally, we want to show that (3) implies (1). Given a language , we assume that there is a polynomial-time -sda recognizing for a certain number . We transform this -sda() to another -sda() by providing a (dummy) counter head. We repeatedly apply Lemma 3.4 to reduce the number of input-tape heads down to . Lemma 3.5 then implies the existence of a -sda() with a one-way input-tape head that correctly recognizes in polynomial time. By Lemma 3.6, we further obtain a depth-susceptible -sda that can recognize in polynomial time. Since consists of strings of the form , it suffices to set . By the definition of , we can compute this function using log space. Therefore, belongs to .

4 Universal Simulators and the Space Hardest Languages

As a significant feature, we intend to prove the existence of -m-complete languages in for each . For this purpose, we first construct a universal simulator that can simulate all -sda’s by properly encoding -sda’s and their inputs. We further force this universal simulator to be a -sda.

4.1 LogSDA-Complete Languages

Sudborough [23] earlier proposed, for every number , the special language , which is -m-complete for and thus for because is closed under -m-reductions. Sudborough discovered a “tape-hardest” language, which literally encodes transitions of deterministic pushdown automata. Sudborough’s success comes from the fact that the use of one-way and two-way deterministic pushdown automata makes no difference in formulating . For , we propose the following decision problem (or a language) . Recall that a decision problem is identified with its associated language.

Membership SDA Problem (MEMB):