1 DCFL, LOGDCFL, and Beyond
In the literature, numerous computational models and associated language families have been proposed to capture various aspects of parallel computation. Among those language families, we wish to pay special attention to the family known as LOGDCFL, which is obtained from DCFL, the family of all deterministic context-free (dcf) languages, by taking the closure under logarithmic-space many-one reductions (or m-reductions, for short) [3, 23]. Dcf languages were first defined in 1966 by Ginsburg and Greibach [8], and their fundamental properties have been studied extensively since then. It is well known that DCFL is a proper subfamily of CFL, the family of all context-free languages, because the context-free language {ww^R : w in {0,1}*} (where w^R means the reverse of w), for instance, does not belong to DCFL. Dcf languages in general behave quite differently from context-free languages. As an example of such differences, DCFL is closed under complementation, while CFL is not. This fact structurally distinguishes DCFL from CFL. Moreover, dcf languages require only polynomial time and O(log^2 n) space simultaneously [4]; however, we do not know whether the same statement holds for context-free languages. Although dcf languages are accepted by one-way deterministic pushdown automata (or 1dpda's), these languages have a close connection to highly parallelized computation, and LOGDCFL has thus played a key role in discussions of parallel complexity because of the nice inclusions L ⊆ LOGDCFL and LOGDCFL ⊆ AC^1 ⊆ NC^2.
It is known that LOGDCFL can be characterized without using m-reductions by several other intriguing machine models, including: Cook's polynomial-time logarithmic-space deterministic auxiliary pushdown automata [3], two-way multi-head deterministic pushdown automata running in polynomial time, logarithmic-time CROW-PRAMs with polynomially many processors [5], and circuits made up of polynomially many multiplex select gates having logarithmic depth [6] or polynomial proof-tree size [18]. Such a variety of characterizations proves LOGDCFL to be a robust and widely applicable notion in computer science.
Another important feature of LOGDCFL (as well as its underlying DCFL) is the existence of "complete" languages, which are practically the most difficult languages in LOGDCFL to recognize. Recall that a language L is said to be m-complete for a family C of languages (or C-complete, for short) if L belongs to C and every language in C is m-reducible to L. Sudborough [23] first constructed such a language, which possesses the highest complexity (which he described as "tape hardest") among all dcf languages under m-reductions; it is therefore m-complete for DCFL and also for LOGDCFL. Using Sudborough's hardest language, Lohrey [17] later presented another complete problem based on semi-Thue systems. Nonetheless, only a few languages are known today to be complete for DCFL as well as LOGDCFL.
A large void seems to lie between DCFL and LOGDCFL (as well as CFL). This void has been filled with, for example, the union hierarchy and the intersection hierarchy over DCFL, whose k-th levels are composed of all unions (resp., all intersections) of k dcf languages. These truly form distinct infinite hierarchies [14, 26]. Taking a quite different approach, Hibbard [12] devised specific rewriting systems, known as deterministic k-scan limited automata. Those rewriting systems were later remodeled in [20, 21] as single input/storage-tape two-way deterministic linear-bounded automata that can modify the contents of tape cells whenever the associated tape heads access them (when a tape head makes a turn, however, we count the access twice); such modifications, however, are limited to only the first k accesses. We call those machines deterministic k-limited automata (or k-lda's, for short). Numerous follow-up studies, including Pighizzini and Prigioniero [22], Kutrib and Wendlandt [16], and Yamakami [25], have lately revitalized the old study of k-lda's. It is possible to make k-lda's satisfy the so-called blank-skipping property [25], by which each tape cell becomes blank after the first k accesses and inner states cannot be changed while reading any blank symbol. A drawback of Hibbard's model is that the use of a single tape prevents us from accessing memory and input simultaneously.
It seems quite natural to seek a reasonable extension of DCFL by generalizing its underlying machines in a simple way. The basis of DCFL is of course the 1dpda, which is equipped with a read-once input tape together with a storage device called a stack. (A read-only tape is called read-once if, whenever it reads a tape symbol, except for stationary moves, if any, it must move to the next unread cell.) A stack allows two major operations: a pop operation deletes a symbol, and a push operation rewrites the symbol in the topmost stack cell. The stack usage of pushdown storage seems too restrictive in practice, and various extensions of pushdown automata have been sought in the literature. For instance, a stack automaton of Ginsburg, Greibach, and Harrison [9, 10] is capable of freely traversing the inside of its stack to access each stored item, but it is disallowed to modify stored items unless the scanning stack head eventually comes to the top of the stack. Thus, each cell of the stack may be accessed a number of times. Meduna's deep pushdown automata [19] also allow stack heads to move deeper into the stack content and to replace some symbols with appropriate strings. Other extensions of pushdown automata include [2, 13]. To argue parallel computation, we instead seek a reasonable restriction of stack automata by introducing an access-controlled storage device. Each cell of such a device is modified by its own tape head, which moves sequentially back and forth along the storage tape. This special tape and its tape head naturally allow the underlying machines to perform more flexible memory manipulations.
In real-life circumstances, it seems reasonable to limit the number of times the data stored in a storage device may be accessed. For instance, rewriting data items in blocks of a memory device, such as an external hard drive or a rewritable DVD, is usually costly, and it may be restricted during each execution of a computer program. We thus demand that every memory cell of the device can be modified only during the first few accesses; once the intended access limit is exceeded, the storage cell turns unusable and no more rewriting is possible. We refer to the number of times that the content of a storage cell may be modified as its "depth". The aforementioned blank-skipping property of k-lda's, for instance, realizes a restriction of exactly this kind. While scanning such inaccessible data, reading more input symbols may or may not be restricted. We leave a further discussion of this restriction to Section 2.2.
To understand the role of the depth limit for an underlying machine, let us consider how to recognize the non-context-free language {a^n b^n c^n : n ≥ 0} under the additional requirement that new input symbols are read only while storage cells are non-blank. Given an input of the form a^n b^n c^n, we first write a^n into the first n cells of the storage device, check that the numbers of a's and b's coincide by reading the b's while traversing the storage device backward and changing each a to a marked symbol a', and then check that the numbers of b's and c's coincide by reading the c's while moving the device's scanning head forward again and changing each a' to B (blank). This procedure requires depth 3.
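The depth-3 procedure above can be sketched in software. The following Python snippet is our own illustration, not part of the formal model: it tracks, for each storage cell, how many times the cell has been rewritten and confirms that three rewrites per cell suffice. The marker symbol a' and the blank B are naming assumptions.

```python
def accepts(s: str) -> bool:
    """Recognize {a^n b^n c^n : n >= 0} with a sequential storage tape,
    rewriting no cell more than three times (depth 3)."""
    tape = []                      # cells: [symbol, rewrite_count]
    i = 0
    # Phase 1: copy the a-block onto the tape (first write per cell).
    while i < len(s) and s[i] == "a":
        tape.append(["a", 1])
        i += 1
    # Phase 2: read b's while sweeping left, rewriting a -> a' (second write).
    pos = len(tape) - 1
    while i < len(s) and s[i] == "b":
        if pos < 0:
            return False           # more b's than a's
        tape[pos][0], tape[pos][1] = "a'", tape[pos][1] + 1
        pos -= 1
        i += 1
    if pos >= 0 and tape:
        return False               # fewer b's than a's
    # Phase 3: read c's while sweeping right, rewriting a' -> B (third write).
    pos = 0
    while i < len(s) and s[i] == "c":
        if pos >= len(tape):
            return False           # more c's than b's
        tape[pos][0], tape[pos][1] = "B", tape[pos][1] + 1
        pos += 1
        i += 1
    if pos < len(tape) or i < len(s):
        return False               # fewer c's, or trailing symbols
    assert all(cnt <= 3 for _, cnt in tape)
    return True
```

Each cell is written once in Phase 1 and rewritten once in each of Phases 2 and 3, so the rewrite count never exceeds 3.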
A storage device whose cells have depth at most k is called a depth-k storage tape in this exposition, and its scanning head is hereafter cited as a depth-k storage-tape head for convenience. Machines equipped with those devices are succinctly called one-way deterministic depth-k storage automata (or k-sda's, for short). Our k-sda's naturally expand Hibbard's k-lda's. (This claim comes from the fact that Hibbard's rewriting systems can be forced to satisfy the blank-skipping property without compromising their computational power [25].) Each storage cell is initially empty and turns blank after exceeding its depth limit. This latter requirement is imperative because, without it, the machines become as powerful as polynomial-time Turing machines. This follows from the fact that non-erasing stack automata can recognize the circuit value problem, which is a P-complete problem. Since input and storage-tape cells are accessed simultaneously, the behavior of a k-sda is influenced by whether an access to new input symbols is "immune" or "susceptible" to the depth of storage-tape cells.
For convenience, we introduce the notation SDA_k for each index k ≥ 1 to express the family of all languages recognized by k-sda's (for a more precise definition, see Section 2.2). As the aforementioned example shows, SDA_3 contains even non-context-free languages. With the use of m-reductions, analogously to LOGDCFL, for any index k ≥ 1 we consider the closure of SDA_k under m-reductions, denoted LOGSDA_k. It follows from the definitions that SDA_k ⊆ LOGSDA_k. Among many intriguing questions, we wish to raise the following three simple questions regarding our new language families as well as their m-closures.
(1) What is the computational complexity of the language families SDA_k as well as their m-closures LOGSDA_k?
(2) Is there any natural machine model that precisely characterizes LOGSDA_k, so that the use of m-reductions can be avoided?
(3) Is there any language that is m-complete for LOGSDA_k?
2 Introduction of Storage Automata
We formally define a new computational model, dubbed deterministic storage automata, and show their basic properties.
2.1 Numbers, Sets, Languages, and Turing Machines
We begin with the fundamental notions and notation necessary to introduce our new computational model of storage automata.
The two notations Z and N represent the set of all integers and the set of all natural numbers (i.e., nonnegative integers), respectively. Given two numbers m, n in Z with m ≤ n, [m, n]_Z denotes the integer interval {m, m+1, ..., n}. In particular, when m = 1, we abbreviate [1, n]_Z as [n]. Given a set A, P(A) denotes the power set of A, namely, the set of all subsets of A.
An alphabet is a finite nonempty set of "symbols" or "letters." Let Σ denote an alphabet. A string over Σ is a finite sequence of symbols in Σ. The length of a string x, denoted |x|, is the total number of symbols in x. The special notation λ is used to express the empty string of length 0. Given a string x = x_1 x_2 ... x_n, the reverse of x is x_n ... x_2 x_1 and is denoted x^R. For two strings x and y over the same alphabet, x is said to be a prefix of y if there exists a string z for which y = xz. In this case, z is called a suffix of y. Given a language L over Σ, pref(L) denotes the set of all prefixes of strings in L, namely, pref(L) = {x : there is a z with xz in L}. The notation Σ* denotes the set of all strings over Σ. A language over Σ is simply a subset of Σ*. As is customary, we freely identify a decision problem with its corresponding language. We use binary representations of natural numbers; a binary string, read with its most significant bit first, denotes the corresponding natural number. For instance, 0, 1, 10, and 11 represent the numbers 0, 1, 2, and 3, respectively.
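As a quick illustration of the notation just introduced, the following Python snippet (our own, purely illustrative; the function names are assumptions) computes reverses, the prefix set of a finite language, and binary values:

```python
def reverse(x: str) -> str:
    """x^R: the reverse of x."""
    return x[::-1]

def prefixes(L):
    """pref(L) for a finite language L: all prefixes of all strings in L."""
    return {x[:i] for x in L for i in range(len(x) + 1)}

def bin_value(s: str) -> int:
    """The natural number represented by a binary string (MSB first)."""
    return int(s, 2)
```

For example, prefixes({"ab"}) yields the set containing the empty string, "a", and "ab".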
We assume the reader’s familiarity with multi-tape Turing machines, and we abbreviate deterministic Turing machines as DTMs. To manage a sublinear-space DTM, we assume that its input tape is read-only and that an additional rewritable index tape is used to access the tape cell whose address is specified by the content of the index tape. Since the index tape requires O(log |x|) bits to specify each symbol of an input x, the space usage of an underlying machine is measured only on its work tapes.
For each constant k ≥ 1, Steve’s class SC^k is the family of all languages recognized by DTMs in polynomial time using O(log^k n) space [3]. Let SC denote the union of SC^k over all k ≥ 1. It follows that LOGDCFL ⊆ SC^2.
In order to compute a function, we further provide a DTM with an extra write-once output tape so that the machine can produce output strings, where a tape is write-once if its tape head never moves to the left and, whenever it writes a nonempty symbol, it must move to the right. All (total) functions computable by such DTMs with output tapes in polynomial time using only logarithmic space form the function class known as FL.
Given two languages A over alphabet Σ_1 and B over Σ_2, we say that A is m-reducible to B (denoted A ≤_m B) if there exists a function f computed by an appropriate polynomial-time DTM using only O(log n) space such that, for any x in Σ_1*, x is in A iff f(x) is in B. We say that A is inter-reducible to B via m-reductions (denoted A ≡_m B) if both A ≤_m B and B ≤_m A hold.
We use DCFL for the collection of all deterministic context-free (dcf) languages. Those languages are recognized by one-way deterministic pushdown automata (or 1dpda's, for short). The notation LOGDCFL expresses the m-closure of DCFL.
In relation to our new machine model, introduced in Section 2.2, we briefly explain Hibbard’s then-called "scan limited automata" [12], which were later reformulated by Pighizzini and Pisoni [20, 21] using a single-tape Turing machine model. This exposition follows their formulation of deterministic k-limited automata (or k-lda's, for short) for any positive integer k. A k-lda is a single-tape linear-bounded automaton with two endmarkers. Initially, its input/work tape holds an input string, and the tape head may modify a tape symbol only during the first k visits to its cell (when the tape head makes a turn, we double-count the visit).
2.2 Storage Tapes and Storage Automata
We expand the standard model of pushdown automata by replacing its stack with a more flexible storage device, called a storage tape. Formally, a storage tape is a semi-infinite rewritable tape whose cells are initially blank (filled with a distinguished initial symbol) and are accessed sequentially by a tape head that can move back and forth along the tape, changing tape symbols as it passes through them.
In what follows, we fix a constant k ≥ 1. A one-way deterministic depth-k storage automaton (or a k-sda, for short) is a 2-tape DTM (equipped only with a read-only input tape and a rewritable work tape) of the form M = (Q, Σ, {Γ_i}_i, δ, q_0, Q_acc, Q_rej) with a finite set Q of inner states, an input alphabet Σ, storage alphabets Γ_i for indices i with 0 ≤ i ≤ k, a transition function δ from Q × (Σ ∪ {¢, $}) × Γ to Q × Γ × D_1 × D_2 with Γ = Γ_0 ∪ Γ_1 ∪ ... ∪ Γ_k, D_1 = {0, +1}, and D_2 = {-1, 0, +1}, an initial state q_0 in Q, and sets Q_acc and Q_rej of accepting states and rejecting states, respectively, with Q_acc ∪ Q_rej ⊆ Q and Q_acc ∩ Q_rej = ∅, provided that Γ_0 = {□} (where □ is a distinguished initial symbol), Γ_k = {B} (where B is a unique blank symbol), and Γ_i ∩ Γ_j = ∅ for any distinct pair i, j. The two sets D_1 and D_2 indicate the directions of the input-tape head and of the storage-tape head, respectively. The choice of D_1 = {0, +1} forces the input tape to be read only once. We say that the input tape is read-once if its tape head either moves to the right or stays still while scanning no input symbol. A single move (or step) of M is dictated by δ. If M is in inner state q, scanning σ on the input tape and τ on the storage tape, a transition δ(q, σ, τ) = (p, ξ, d_1, d_2) forces M to change q to p, overwrite τ by ξ, and move the input-tape head in direction d_1 and the storage-tape head in direction d_2.
Instead of making λ-moves (i.e., moves in which a tape head neither moves nor reads any tape symbol), we allow a tape head to make a stationary move, by which the tape head stays still and the scanned symbol is unaltered. (The use of stationary moves is made in this exposition only for the sake of convenience; it is also possible to define k-sda's using λ-moves in place of stationary moves.) The tape-head direction "0" indicates such a stationary move. For stationary moves, δ must satisfy the following stationary-move requirement: assuming that δ(q, σ, τ) = (p, ξ, d_1, d_2), (i) if d_2 = 0, then ξ = τ, and (ii) either d_1 ≠ 0 or d_2 ≠ 0. Thus, whenever the storage-tape head moves to a neighboring cell, it must change the tape symbol.
All tape cells are indexed by natural numbers from left to right. The leftmost tape cell is the start cell, indexed 0. The input tape has two endmarkers ¢ and $, whereas the storage tape has only the left endmarker ¢. When an input string x is given to the input tape, it is surrounded by the two endmarkers as ¢x$ so that ¢ is located in the start cell and $ in the cell indexed |x|+1. For any index i in [0, |x|+1], x_(i) denotes the tape symbol written in the i-th input-tape cell, provided that x_(0) = ¢ (left endmarker) and x_(|x|+1) = $ (right endmarker). Similarly, when w represents the non-blank portion of the content of a storage tape, the notation w_(j) expresses the symbol in the j-th tape cell. Note that w_(0) = ¢.
For the storage tape, we request the following rewriting restriction, called the depth requirement, to be satisfied. Whenever the storage-tape head passes through a tape cell containing a symbol in Γ_i with i < k, the machine must replace it by a symbol in Γ_{i+1}, except in the case of the following "turns." We distinguish two types of turns. A left turn at step t refers to a step at which, after M's tape head moves to the right at step t-1, it moves to the left at step t. Similarly, we say that M makes a right turn at step t if M's tape head arrives from the left at step t-1 and changes its direction to the right at step t. Whenever a tape head makes a turn, we treat this case as a "double access." More formally, at a turn, any symbol in Γ_i with i ≤ k-2 must be changed to a symbol in Γ_{i+2}. No symbol in Γ_k can be modified at any time. Although we use various storage alphabets Γ_0, Γ_1, ..., Γ_k, since they are pairwise disjoint, we can easily discern from which direction the tape head has arrived simply by scanning the storage-tape symbol written in the current tape cell.
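The turn-counting rule can be made concrete with a small checker. The sketch below is an illustration we add, not part of the formal definition: it records each rewriting visit of the storage-tape head and verifies that no cell is charged more than k accesses, with a turn counted as a double access.

```python
def respects_depth(trace, k):
    """Check a storage-head trace against the depth-k requirement.

    `trace` is a list of (cell, is_turn) pairs, one per rewriting visit;
    an ordinary pass charges one access, a turn on the cell charges two.
    """
    charges = {}
    for cell, is_turn in trace:
        charges[cell] = charges.get(cell, 0) + (2 if is_turn else 1)
    return all(cnt <= k for cnt in charges.values())
```

For instance, a head that turns twice on the same cell has already charged four accesses to that cell, which violates the depth-3 requirement.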
Assume that M modifies a symbol τ in Γ_i with i < k on the storage tape to ξ and moves its storage-tape head in direction d. If the move constitutes a turn, then ξ must belong to Γ_{i+2}; if d = 0, then τ and ξ must be the same; otherwise, ξ must be in Γ_{i+1}. A storage tape that satisfies the depth requirement is succinctly called a depth-k storage tape.
Let us consider two different models, whose input-tape head is either depth-susceptible or depth-immune to the content of a storage-tape cell. A tape head (other than the storage-tape head) is called depth-susceptible if, while the currently scanned symbol of a storage-tape cell is in Γ_k (i.e., blank), the tape head must make a stationary move; namely, if δ(q, σ, τ) = (p, ξ, d_1, d_2) with τ in Γ_k, then d_1 = 0 follows. The tape head is called depth-immune if there is no such restriction on the scanned symbol τ.
A surface configuration of M on input x is of the form (q, i, w, j) with q in Q, i in [0, |x|+1], w in Γ*, and j in N, which indicates the situation where M is in state q, the storage tape contains w (except for the blank portion beyond w), and the two tape heads scan the i-th cell of the input tape and the j-th cell of the storage tape. For readability, we drop the word "surface" altogether in the subsequent sections.
The initial (surface) configuration has the form (q_0, 0, λ, 0), and δ describes how to reach the next surface configuration in a single step. For convenience, we define the depth value of a surface configuration (q, i, w, j) to be the number d satisfying w_(j) in Γ_d. An accepting configuration (resp., a rejecting configuration) is a configuration (q, i, w, j) with q in Q_acc (resp., q in Q_rej). A halting configuration means either an accepting configuration or a rejecting configuration. A computation of M starts with the initial configuration and ends with a halting configuration. The k-sda M accepts (resp., rejects) x if M starts with the initial configuration on the input x and eventually reaches an accepting configuration (resp., a rejecting configuration).
For a language L over Σ, we say that M recognizes (accepts or solves) L if, for any input string x in Σ*, (i) if x is in L, then M accepts x, and (ii) if x is not in L, then M rejects x. For two k-sda's M_1 and M_2 over the same input alphabet Σ, we say that M_1 is (computationally) equivalent to M_2 if, for any input x, M_1 accepts (resp., rejects) x iff M_2 accepts (resp., rejects) x.
For notational convenience, we write SDA_k for the collection of all languages recognized by depth-susceptible k-sda's and LOGSDA_k for the collection of languages that are m-reducible to certain languages in SDA_k. Moreover, we set SDA to be the union of SDA_k over all k ≥ 1. With this notation, the non-context-free language {a^n b^n c^n : n ≥ 0} discussed in Section 1 belongs to SDA_3. Thus, SDA_3 ⊈ CFL follows instantly. For the depth-immune model of k-sda's, we similarly define the three analogous families.
As a special case of k = 2, the following holds. This demonstrates that k-sda's truly expand 1dpda's.
Lemma 2.1
DCFL coincides with the family of all languages recognized by depth-susceptible 2-sda's; that is, DCFL = SDA_2.
Proof.
It was shown that DCFL is precisely characterized by deterministic 2-limited automata (or 2-lda's, for short) [12, 21]. Since any 2-lda can be transformed into an equivalent 2-lda with the blank-skipping property [25], depth-susceptible 2-sda's can simulate 2-lda's; thus, we immediately conclude that DCFL ⊆ SDA_2.
For the converse, it suffices to simulate depth-susceptible 2-sda's by appropriate 1dpda's. Given a depth-susceptible 2-sda M, we design a one-way deterministic pushdown automaton (or a 1dpda) N that works as follows. We treat the storage tape of M as a stack by ignoring all blank symbols except for the left endmarker ¢. When M modifies the initial symbol of a storage-tape cell to a new symbol a, N pushes a onto its stack. In the case where M modifies a storage symbol a to a blank, N pops the same symbol a. Note that, since M is depth-susceptible, it cannot move the input-tape head while scanning a blank cell. As for the behavior of M's input-tape head, if M reads an input symbol σ and moves to the right, then N does the same. On a tape cell containing σ, if M's input-tape head makes a series of stationary moves, then N reads σ at the first move, remembers σ, and makes λ-moves afterwards until M ends its stationary moves. Obviously, the resulting machine N is a 1dpda and it simulates M. ∎
Remark. Remember that tape cells of k-sda's become blank after the first k accesses. If we allowed such tape cells to keep the last written symbols instead of erasing them, then the resulting machines would gain enough power to recognize languages to which even P-complete problems are m-reducible. In this exposition, we do not delve further into this topic.
3 Two Machine Models that Characterize LOGSDA_k
We begin with a structural study of LOGSDA_k, the closure of SDA_k under m-reductions, defined in Section 2.2. We intend to seek different characterizations of LOGSDA_k with no use of m-reductions. The idea of eliminating such reductions is attributed to Sudborough [23], who characterized LOGDCFL using two machine models: polynomial-time logspace auxiliary deterministic pushdown automata and polynomial-time multi-head deterministic pushdown automata. Our goal here is to expand these machine models to fit into the framework of depth-k storage automata.
3.1 Deterministic Auxiliary Depth Storage Automata
Let us expand deterministic auxiliary pushdown automata to deterministic auxiliary depth-k storage automata, each of which is equipped with a two-way read-only depth-susceptible input tape, an auxiliary rewritable tape, and a depth-k storage tape.
Let us formally formulate deterministic auxiliary depth-k storage automata. For the description of such a machine, we first prepare a two-way read-only input tape and a depth-k storage tape, and secondly we supply a new space-bounded auxiliary rewritable work tape whose cells are freely modified by a two-way tape head. Notice that the storage-tape head is allowed to make stationary moves (that is, the storage-tape head neither rewrites its symbol nor moves to an adjacent cell). A deterministic auxiliary depth-k storage automaton (or an aux-k-sda, for short) is formally a 3-tape DTM with a read-only input tape, an auxiliary rewritable (work) tape with an alphabet Θ, and a depth-k storage tape. Initially, the input tape is filled with ¢x$, the auxiliary tape is blank, and the depth-k storage tape holds only designated blank symbols except for the left endmarker ¢. We use the same sets Q, Q_acc, Q_rej, Σ, and Γ = Γ_0 ∪ ... ∪ Γ_k as before, provided that Γ_i ∩ Γ_j = ∅ for any distinct pair i, j. The transition function δ of M maps Q × (Σ ∪ {¢, $}) × Θ × Γ to Q × Θ × Γ × D × D × D, where D = {-1, 0, +1}. A transition δ(q, σ, θ, τ) = (p, θ', ξ, d_1, d_2, d_3) indicates that, on reading input symbol σ, auxiliary-tape symbol θ, and storage-tape symbol τ, M changes its inner state to p, moves the input-tape head in direction d_1, changes θ to θ' by moving the auxiliary-tape head in direction d_2, and changes τ to ξ by moving the storage-tape head in direction d_3. A string x is accepted (resp., rejected) by M if M enters an inner state in Q_acc (resp., Q_rej).
When the auxiliary tape is excluded from the definition of M, the resulting automaton must fulfill the depth requirement of k-sda's given in Section 2.2. The depth-susceptibility condition is stated as follows: if δ(q, σ, θ, τ) = (p, θ', ξ, d_1, d_2, d_3) with τ in Γ_k, then d_1 = 0 must hold. To implement stationary moves of the tape heads, δ should satisfy a requirement similar to that of the underlying k-sda's; namely, assuming that δ(q, σ, θ, τ) = (p, θ', ξ, d_1, d_2, d_3), (i) if d_2 = 0, then θ' = θ, (ii) if d_3 = 0, then ξ = τ, and (iii) at least one of d_1, d_2, and d_3 is not 0.
Given an aux-k-sda M, take a positive integer c and a polynomial p so that M's auxiliary tape uses at most c log n tape cells within p(n) steps on any input of length n. We reduce this space bound down to log n by introducing a larger auxiliary tape alphabet as follows. Since we can express each element of Θ as a binary string of a fixed length (by adding 0s as dummy bits if necessary), we can split the auxiliary tape into parallel tracks, each holding one bit of the encoding of an element of Θ. Without loss of generality, our machine can be assumed to have an auxiliary tape composed of c tracks holding binary symbols for an appropriately chosen constant c and to use at most log n tape cells on this auxiliary tape. We also assume that M initially writes 0s on every track of the auxiliary tape.
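The track-splitting step amounts to re-encoding each work-tape symbol as a fixed-width binary column. A minimal sketch of this re-encoding, with names of our own choosing:

```python
import math

def pack_tracks(content: str, alphabet):
    """Split a work-tape string over `alphabet` into c binary tracks,
    where c = ceil(log2 |alphabet|); track t holds bit t of each symbol."""
    idx = {a: i for i, a in enumerate(sorted(alphabet))}
    c = max(1, math.ceil(math.log2(len(alphabet))))
    # One fixed-width binary column per tape cell (0s pad short codes).
    cols = [format(idx[sym], "0{}b".format(c)) for sym in content]
    # Track t collects the t-th bit of every column.
    return ["".join(col[t] for col in cols) for t in range(c)]
```

A tape of m cells over a 2^c-letter alphabet thus becomes c tracks of m bits each, which is exactly the constant-factor space compression used above.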
3.2 MultiHead Deterministic Depth Storage Automata
We further introduce another useful machine model by expanding multi-head pushdown automata to (two-way) multi-head deterministic depth-k storage automata, each of which uses two-way multiple tape heads to read a given input besides a tape head that modifies symbols on a depth-k storage tape.
For each fixed number t ≥ 1, we define a t-head deterministic depth-k storage automaton as a 2-tape DTM with t read-only depth-susceptible tape heads scanning an input tape and a single read/write tape head over a depth-k storage tape. For convenience, we call such a machine a two-way k-sda(t), where the qualifier "two-way" emphasizes that all subordinate tape heads move in both directions (except for the case of stationary moves). Notice that each k-sda(t) actually has t+1 tape heads, including the one working along the storage tape.
More formally, a k-sda(t) is a tuple (Q, Σ, {Γ_i}_i, δ, q_0, Q_acc, Q_rej) with a transition function δ mapping Q × (Σ ∪ {¢, $})^t × Γ to Q × Γ × D^t × D, where D = {-1, 0, +1} and Γ = Γ_0 ∪ ... ∪ Γ_k, provided that Γ_i ∩ Γ_j = ∅ for any distinct pair i, j. A transition δ(q, (σ_1, ..., σ_t), τ) = (p, ξ, (d_1, ..., d_t), d) means that, if M is in inner state q, scanning the tuple (σ_1, ..., σ_t) of symbols on the input tape with the t read-only tape heads as well as the symbol τ on the depth-k storage tape with the rewritable tape head, then M enters inner state p and writes ξ on the depth-k storage tape by moving the i-th input-tape head in direction d_i for every index i in [t] and the storage-tape head in direction d. Remember that a k-sda(1) and a depth-susceptible k-sda are similar in their machine structures, but the former can move its input-tape head to the left.
The treatment of the acceptance/rejection criteria is the same as for the underlying k-sda's. A stationary move requires that (i) d = 0 implies ξ = τ and (ii) at least one of d_1, ..., d_t, d is not 0. The depth-susceptibility condition of M says that, for any transition δ(q, (σ_1, ..., σ_t), τ) = (p, ξ, (d_1, ..., d_t), d) of M, if τ is in Γ_k, then d_1 = ... = d_t = 0 follows.
3.3 Characterization Theorem
We intend to demonstrate that the two new machine models introduced in Sections 3.1–3.2 precisely characterize LOGSDA_k. This result can be seen as a natural extension of Sudborough's machine characterization of LOGDCFL to LOGSDA_k.
Theorem 3.1
Let k ≥ 1 and let L be any language. The following three statements are logically equivalent.
(1) L is in LOGSDA_k.
(2) There exists an aux-k-sda that recognizes L in polynomial time using logarithmic space.
(3) There exist a number t ≥ 1 and a k-sda(t) that recognizes L in polynomial time.
In the rest of this section, we intend to prove Theorem 3.1. Note that Sudborough's proof [23, Lemmas 3–6] for LOGDCFL relies on the heavy use of stack operations, which are applied only to the topmost symbol of the stack while the other symbols in the stack stay intact. In our case, however, we need to deal with the operations of a storage-tape head, which can move back and forth along a storage tape, modifying each cell's content as many as k times. Sudborough's characterization utilizes a simulation procedure of Hartmanis [11, pp. 338–339] and a proof argument of Galil [7, Lemma 4.3]; however, we cannot use them directly, and thus a new idea is definitely needed to establish Theorem 3.1. The proof of the theorem therefore requires technically challenging simulations among FL functions and the machine models of aux-k-sda's and k-sda(t)'s.
Lemma 3.2
Given a function f in FL for certain alphabets Σ_1 and Σ_2 and a depth-susceptible k-sda M working over Σ_2 in polynomial time, there exists a logspace aux-k-sda that recognizes the language {x in Σ_1* : M accepts f(x)} in polynomial time.
Proof.
Let f be any function in FL. We take a DTM M_f, equipped with a read-only input tape, a logarithmically space-bounded rewritable work tape, and a write-once output tape, and assume that M_f computes f in polynomial time. The given depth-susceptible k-sda running in polynomial time is denoted M. We set L = {x in Σ_1* : M accepts f(x)}.
We design the desired aux-k-sda N for L as follows. The input tape of N holds the string ¢x$ for any given x. In the following construction of N, we treat the input tape of M as an imaginary tape. Given any input x, we repeat the following process until N enters a certain halting state. Using the auxiliary tape of N, we keep track of the content of M_f's work tape and the two head positions of M_f's input tape and M's input tape.
Assume that N's input-tape head is scanning the same tape cell as M_f's input-tape head and that M's head on the imaginary input tape is located at cell j.
(1) If M makes an input-stationary move, then we simply simulate one step of the behavior of M's storage-tape head, since we can reuse the last produced output symbol of M_f.
(2) Assume that the current step is not an input-stationary move and that M moves to cell j+1 of the imaginary input tape and scans this cell. We remember the position of N's input-tape head, return to the last position of M_f's input-tape head, and resume the simulation of M_f, with the use of N's auxiliary tape as a work tape, by a series of stationary moves of the input-tape head and the storage-tape head, using both the input tape and the auxiliary tape of N, until M_f produces the (j+1)-th output symbol, say, σ. This is possible because the output tape of M_f is write-once. Once σ is obtained, N remembers the positions of M_f's tape heads and moves its input-tape head to restore its location. We then simulate a single step of M on σ by reading the current cell content of the storage tape together with a stationary move of both the input-tape head and the auxiliary-tape head. Note that the movement of the storage-tape head requires only the symbol σ and does not need any movement of the other tape heads. We then update the tape-head positions.
It is not difficult to show that N eventually reaches the same type of halting state as M does, in polynomially many steps. Notice that the storage-tape head and the auxiliary-tape head do not work simultaneously. The depth-susceptibility of N comes from that of M since M_f is simulated only when M's storage-tape head reads a symbol not in Γ_k. Thus, N is indeed an aux-k-sda. ∎
We next transform an aux-k-sda into a multi-head k-sda that mimics the behavior of the aux-k-sda.
Lemma 3.3
Let k ≥ 1. For any polynomial-time logspace aux-k-sda M, there are a constant t ≥ 1 and a k-sda(t) that simulates M in polynomial time.
Proof.
Let k ≥ 1 and let M denote any aux-k-sda that runs in polynomial time using logarithmic space on all inputs of length n. We use M's depth-susceptible input-tape head as the principal tape head of the simulating machine N. We introduce additional tape heads to simulate the behavior of the auxiliary-tape head of M as follows.
As noted in Section 3.1, we assume that the auxiliary tape of M is split into c tracks for a certain constant c ≥ 1 and that M uses at most log n cells on the auxiliary tape, where n indicates the input length. We want to construct a polynomial-time k-sda(t) N for which L(N) coincides with L(M). Since each track of the auxiliary tape uses the binary alphabet {0, 1}, we can view the content of each track as the binary representation of a natural number. In what follows, we fix one such track. If the track contains a string of length log n, we treat it as a binary number m. We use two tape heads to remember the positions of the input-tape head and the auxiliary-tape head of M. To remember the number m, we need additional tape heads (other than the storage-tape head). Since there are c tracks, a constant number of tape heads suffices overall for our intended simulation.
Head 1 keeps the auxiliary-tape-head position: whenever M moves the auxiliary-tape head, N moves head 1 as well. Head 2 moves backward to measure the distance of the auxiliary-tape head from the left end of the auxiliary tape. Using this information, we move head 3, whose position encodes the track value m, as follows. If the tape head changes 0 to 1 (resp., 1 to 0) at the i-th bit of the target track, then we need to move head 3 by 2^i cells to the right (resp., to the left). How can we move one head over a distance of 2^i cells? Following [11], we use two further tape heads to achieve this goal. We move head 4 one cell to the right. As head 4 takes one step on the way back to ¢, we move head 5 two cells to the right. We then switch the roles of heads 4 and 5: as head 5 takes one step back to ¢, we move head 4 two cells to the right. If we repeat this process i times, one of the heads indeed reaches cell 2^i; hence, during the i-th run, we can shift head 3 by 2^i cells. This process requires three tape heads per track. Thus, a constant total of tape heads is sufficient to simulate the operations on the track contents of the auxiliary tape.
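The two-head doubling routine can be phrased as follows. This Python sketch is our own illustration of the bounce between heads 4 and 5; head indices and function names are assumptions:

```python
def double_with_two_heads(i: int) -> int:
    """Return the cell reached after i rounds of the bounce, namely 2**i.

    heads[src] walks back to the left endmarker (cell 0); for each of
    its steps, heads[dst] advances two cells, doubling the distance.
    """
    heads = [1, 0]            # head 4 first moves one cell to the right
    src, dst = 0, 1
    for _ in range(i):
        while heads[src] > 0:
            heads[src] -= 1   # one step back toward the endmarker
            heads[dst] += 2   # two steps to the right
        src, dst = dst, src   # switch the roles of the two heads
    return max(heads)
```

Each round exactly doubles the distance of the advancing head from the endmarker, so i rounds reach cell 2^i in O(2^i) head moves.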
By the above behaviors, the additional tape heads are all depth-susceptible. ∎
We reduce the number of input-tape heads from to for any by implementing a “counter head” that records the movement of one input-tape head. A counter head is a two-way depth-susceptible tape head along an input tape such that, once this tape head is activated, it starts moving from to the right and comes back to without stopping or reading any input symbol on its way.
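As a toy illustration of a single run of a counter head, the generator below (a hypothetical helper, not taken from the paper) lists the cells visited when the head is activated at the left endmarker (cell 0), sweeps d cells to the right, and returns to cell 0 without stopping.

```python
def counter_run(d):
    """Cells visited by one activation of a counter head that
    sweeps d cells to the right and returns to cell 0."""
    for pos in range(1, d + 1):       # rightward sweep
        yield pos
    for pos in range(d - 1, -1, -1):  # return sweep back to cell 0
        yield pos
```

For example, `list(counter_run(3))` yields the positions `[1, 2, 3, 2, 1, 0]`; the head reads nothing on the way, so the run carries only the distance d.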
Lemma 3.4
Let and . Given any sda() with a counter head over input alphabet running in polynomial time, there exists a polynomial-time sda() with a counter head that recognizes the language , where and are tape symbols not in and where with for .
Proof.
Let . Let denote any sda() with a counter head running in polynomial time. Among all read-only tape heads, we designate the principal tape head as head 1 and choose two subordinate tape heads (other than the counter head) as heads 2 and 3. We want to simulate these three tape heads by two tape heads, eliminating one subordinate tape head. We leave all the remaining tape heads unmodified so that they keep working over . This simulation can be carried out on an appropriate polynomial-time sda(), say, . With the use of and , let . The associated input to is . Initially, heads – are stationed at cell . We move heads 2 and 3 to the leftmost of . Assume that head is originally located at cell and head is at cell . Such a location pair is expressed as . We call each block of block for any . For convenience, and are respectively called block 0 and block . We want to express the pair by stationing a single tape head at the th symbol of the th block of ’s input tape. We assume that the associated input-tape head of , say, head is located at the th symbol of the th block. We force to hold two input symbols, say, written at the th and th cells of ’s input tape.
We assume that, in a single step, head and head of move to a new location pair for . To simulate this single step of , needs to move head to a new location and read the two symbols . If , then moves head in direction . By contrast, if , then moves head in direction , reads its input symbol , and remembers it using the inner states. As moves its tape head leftward to the nearest , moves the counter head to the right (from the start cell) to remember the value . Next, moves head to the symbol in block without moving the counter head. Finally, moves head to the right until the counter head comes back to the start cell. After the counter head arrives at the start cell, head reaches the th cell in block . Next, reads an input symbol and remembers it using its inner states. Note that the tape head on the storage tape never moves during the above process. ∎
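The pairing of two head positions into one can be sketched with index arithmetic. Assume, as a simplification, that the padded input consists of consecutive copies of an input of length n, each block occupying n symbols plus a one-cell separator; the helper names `encode` and `move` below are hypothetical, and in the actual construction the (n+1)-cell jump between blocks is realized by the counter head rather than by arithmetic.

```python
def encode(i, j, n):
    """Single-head position representing the pair (i, j):
    the j-th symbol of the i-th block, where each block
    spans n symbols plus one separator cell."""
    return i * (n + 1) + j

def move(i, j, di, dj, n):
    """One simulated step: the block-index head moves by di and
    the within-block head moves by dj. A change of i costs a
    jump of n+1 cells of the single head; a change of j costs
    one cell."""
    return encode(i + di, j + dj, n)
```

For n = 5, the pair (2, 3) is represented by cell 15, and moving the block-index head one block to the right shifts the single head by n+1 = 6 cells to cell 21.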
Notice that a sda has only a one-way input-tape head, whereas a sda() uses a two-way input-tape head. We thus need to restrict the two-way head moves of a sda() to one-way head moves. For this purpose, we again utilize a counter together with the reverse of the input.
Lemma 3.5
Given a sda() with a counter head running in polynomial time, there exists another sda() with a counter head such that (i) ’s input-tape head never moves to the left and (ii) recognizes the language in polynomial time, where .
Proof.
Let be any polynomial-time sda() with a counter head. We simulate the two-way movement of ’s input-tape head by an appropriate one-way tape head in the following way. Assume that represents the position of ’s input-tape head. Let and denote the tape symbols at cells and . For simplicity, the input-tape head is called head 1. If ’s input-tape head moves to the right or makes a stationary move, then we simulate ’s step exactly. In what follows, we consider the case where ’s input-tape head moves to the left; that is, the new position of the input-tape head is . By the depth-susceptibility condition of , the current storage-tape cell contains no symbol in . In this case, we move head 1 and the counter head simultaneously to the right until head 1 first encounters . We continue moving head 1 to the right while we move the counter head back to the left endmarker. We finally shift head 1 one cell to the right. Head 1 then reaches . ∎
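The role of the reversed copy can be illustrated with simple index arithmetic. Assume, as a simplification, that the padded input has the form w#w^R with n = |w|; then cell p of w and its mirror image in w^R hold the same symbol, so a leftward move on w can be traded for a rightward jump to the mirrored cell. The helper `mirror` is hypothetical, and in the actual simulation the mirrored cell is located with the counter head rather than by arithmetic.

```python
def mirror(p, n):
    """Index in the padded string w#w^R (with n = len(w)) of the
    cell of w^R holding the same symbol as cell p of w.
    Cell p of w is cell n-1-p of w^R, which sits at absolute
    offset (n+1) + (n-1-p) = 2n - p."""
    return 2 * n - p

w = "abc"
padded = w + "#" + w[::-1]   # "abc#cba"
# every cell of w agrees with its mirrored cell in w^R
assert all(padded[p] == padded[mirror(p, len(w))] for p in range(len(w)))
```

Since the mirrored cell always lies to the right of the current one, the simulating head never needs to move left.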
Next, we show how to eliminate a counter head, using the fact that the counter head is depth-stopping.
Lemma 3.6
Let denote any sda() with a one-way input-tape head and a counter head running in polynomial time. There exists a depth-susceptible sda such that (1) ’s input tape is also one-way and (2) recognizes in polynomial time, where for .
Sudborough made a similar claim, whose proof relies on Galil’s argument [7], which uses a stack to store and remove specific symbols in order to remember the distance of a tape head from a particular input-tape cell. However, since tape cells on the storage tape are not allowed to be modified more than times, we need to develop a different strategy to prove Lemma 3.6.
For this purpose, we use an additional string of the form to make enough blank space on the depth storage tape for the future recording of head movements.
Proof of Lemma 3.6. Take any sda() with a counter head. Since the counter head is depth-susceptible, it suffices to show how to simulate the behaviors of the counter head using a storage tape while the current storage-tape cell holds no symbol from . In what follows, we describe the simulation procedure. Let denote any input and set .
Whenever the counter head is activated, it starts at , moves to the right for a certain number of steps, say, , and then moves back to the start cell to complete one “counting” process. To simulate this process on a sda , there are three cases to consider separately. Note that, even in the case of , if the counter head is not activated, then we simply move the input-tape head as does. Using its inner states, can remember (a) which direction the storage-tape head comes from and (b) the contents of the currently scanned cell and its left and right adjacent tape cells (if any). Assume that is the content of the three neighboring cells, the middle of which is being scanned by ’s storage-tape head.
We partition the storage tape into a number of “regions”. A region consists of blocks, each of which contains cells. Each region is used to simulate one run of the counter head and basically holds the information on one storage symbol. Two regions are separated by one separator block of cells. We call a tape cell a representative if it holds the information on the tape symbol stored in a storage-tape cell of . A block is called active if it contains a representative; all other blocks are called passive. In particular, we call a block consumed if it contains but no representative.
In a run of the procedure described below, we maintain the invariant that there is at most one active block in each region. Moreover, we maintain the following condition as well.
(*) Between and (resp., between and ), all symbols appearing in this tape region of are in (resp., ), where for any .
We use a new storage alphabet consisting of symbols of the form and for , where the parameter (resp., ) indicates that there are consumed blocks in the area of the region to the left (resp., right) of the currently scanned cell.
Assume that is scanning .
(1) Assume that ’s storage-tape head comes from the left, and that the counter head is not activated but writes and moves its storage-tape head in direction (). In this case, overwrites with and moves its storage-tape head in direction . Using as well as the value , moves to the right and finds a border of the neighboring region. If finds a representative in this neighboring region, then stops. If reaches the right border of the neighboring region without finding any representative, then this region must represent , so returns and stops at the center of the neighboring region. If ’s storage-tape head comes from the right, we use in place of .
(2) Consider the case where ’s storage-tape head sits at a cell that has never been visited before. Note that this cell is blank and all cells in the area to its right are also blank.
(a) If writes and moves its storage-tape head to the left, then we need to secure enough space for future executions of (3)–(5) because the storage-tape head must move around to mark certain cells. Assume that is scanning . To simulate the counter-head moves, writes as a marker, moves its storage-tape head for steps, comes back to , moves to the right for steps, writes , and then moves to the left to search for a representative in the left neighboring region as in (1).
(b) In contrast, if moves to the left, then we further need to produce a new region. For this purpose, further moves to the right for cells to find a new border, continues moving for cells to find the center of this new region, and finally stops.
(3) Consider the case where ’s storage-tape head reads a non-blank tape symbol with , writes a tape symbol over it, and moves in direction .
(a) Assume that had moved to from the left. See Fig. 1(1). Notice that . At present, is assumed to be scanning . In this case, remembers in its inner states, modifies it to (), and moves its tape head for steps to the right, as the counter head does, changing every encountered symbol of the form with to on its way. When makes steps, it makes a left turn, returns to , and writes . This mimics the back-and-forth movement of the counter head. The tape head again starts moving rightward, changing for to for exactly steps, and it finally writes . This is possible because the input-tape head reads and (*) is satisfied. Finally, if , then moves to the right looking for a representative of the right region. On the contrary, when , moves to the left looking for a representative of the left region.
(b) Assume that had moved to from the right. This case is treated symmetrically to (a). See Fig. 1(2).
(4) Consider the case where ’s storage-tape head reads a tape symbol and moves in direction . In this case, behaves similarly to (1) except that its tape head writes a new marker, since .
(5) Consider the case where ’s storage-tape head reads the symbol in . We then move the storage-tape head in direction . In this case, ’s storage-tape head is at the center of the current region. Since we do not need to simulate the counter head, we write (whenever ) and follow the procedure of moving to the neighboring region as in (1). ∎
Proof of Theorem 3.1. Let . The implication (1) ⇒ (2) is shown as follows. Take any language in over alphabet . There exist a function in for an appropriate alphabet and a depth-susceptible sda working over such that, for any string , if , then accepts ; otherwise, rejects . By Lemma 3.2, we can obtain a logspace auxsda that recognizes in polynomial time.
Lemma 3.3 obviously leads to the implication (2) ⇒ (3). Finally, we want to show that (3) implies (1). Given a language , we assume that there is a polynomial-time sda recognizing for a certain number . We transform this sda() into another sda() by providing a (dummy) counter head. We repeatedly apply Lemma 3.4 to reduce the number of input-tape heads down to . Lemma 3.5 then implies the existence of a sda() with a one-way input-tape head that correctly recognizes in polynomial time. By Lemma 3.6, we further obtain a depth-susceptible sda that recognizes in polynomial time. Since consists of strings of the form , it suffices to set . By the definition of , we can compute this function using log space. We conclude that belongs to . ∎
4 Universal Simulators and the Space Hardest Languages
As a significant feature, we intend to prove the existence of m-complete languages in for each . For this purpose, we first construct a universal simulator that can simulate all sda’s by properly encoding sda’s and their inputs. We further force this universal simulator to be a sda.
4.1 LogSDA-Complete Languages
Sudborough [23] earlier proposed, for every number , the special language , which is m-complete for and thus for because is closed under m-reductions. Sudborough thereby discovered a “tape-hardest” language, which literally encodes transitions of deterministic pushdown automata. Sudborough’s success comes from the fact that the use of one-way and two-way deterministic pushdown automata makes no difference in formulating . For , we propose the following decision problem (or language) . Recall that a decision problem is identified with its associated language.
Membership SDA Problem (MEMB):