
Behavioral Strengths and Weaknesses of Various Models of Limited Automata

11/09/2021
by   Tomoyuki Yamakami, et al.

We examine the behaviors of various models of k-limited automata, which naturally extend Hibbard's [Inf. Control, vol. 11, pp. 196–238, 1967] scan limited automata, each of which is a single-tape linear-bounded automaton satisfying the k-limitedness requirement that the content of each tape cell should be modified only during the first k visits of a tape head. One central computation model is a probabilistic k-limited automaton (abbreviated as a k-lpa), which accepts an input exactly when its accepting states are reachable from its initial state with probability more than 1/2 within expected polynomial time. We also study the behaviors of one-sided-error and bounded-error variants of such k-lpa's as well as the deterministic, nondeterministic, and unambiguous models of k-limited automata, which can be viewed as natural restrictions of k-lpa's. We discuss fundamental properties of these machine models and obtain inclusions and separations among language families induced by them. In due course, we study special features – the blank skipping property and the closure under reversal – which are keys to the robustness of k-lpa's.


1 Historical Background, Motivations, and Challenges

1.1 Limited Automata

Over the past six decades, automata theory has made remarkable progress in unearthing hidden structures and properties of various types of finite-state-controlled machines, including the fundamental computation models of finite(-state) automata and one-way pushdown automata.

In an early period of the development of automata theory, Hibbard [6] introduced a then-novel rewriting system of so-called scan limited automata in the hope of characterizing context-free and deterministic context-free languages by direct simulations of their underlying one-way pushdown automata. Unfortunately, Hibbard's model seems to have received little attention until Pighizzini and Pisoni [16, 17] reformulated the model from a modern-machinery perspective and reproved a characterization theorem of Hibbard in a more sophisticated manner. A k-limited automaton, for each fixed index k ≥ 1, is roughly a one-tape (or single-tape) Turing machine whose tape head is allowed to rewrite each tape cell between two endmarkers only during the first k scans or visits of the cell (except that, whenever the tape head makes a "turn," we count this move as "double" visits). (Hibbard's original formulation of a "k-limited automaton" is equipped with a semi-infinite tape that stretches only to the right with no endmarker but is filled with blank symbols outside an input string; our definition in this paper differs from Hibbard's and is rather similar to Pighizzini and Pisoni's [16, 17]. Single-tape Turing machine models were also discussed in the past literature, including [5, 21].) After the first k visits to a tape cell, the last symbol in the tape cell becomes unrewritable and frozen forever. Although these automata can be viewed as a special case of linear-bounded automata, the restriction on the number of times they may rewrite tape symbols has quite distinctive effects on the computational power of the underlying automata, different from other restrictions such as upper bounds on the number of nondeterministic choices or on the number of tape-head turns. Hibbard conducted an intensive study of the deterministic and nondeterministic behaviors of k-limited automata. In this study, he discovered that nondeterministic k-limited automata (abbreviated as k-lna's) for k ≥ 2 are exactly as powerful as 1npda's, whereas 1-lna's are equivalent in power to 2-way deterministic finite automata (or 2dfa's) [22]. This gives natural characterizations of 1npda's and 2dfa's in terms of such access-controlled Turing machines.

Another close relationship was proven by Pighizzini and Pisoni [17] between deterministic k-limited automata (or k-lda's) and one-way deterministic pushdown automata (or 1dpda's). In fact, they proved that 2-lda's embody exactly the power of 1dpda's. This equivalence in computational power contrasts with Hibbard's observation that, for each index k ≥ 2, (k+1)-lda's in general possess more recognition power than k-lda's. These phenomena exhibit a clear structural difference between determinism and nondeterminism in the model of "limited automata," and such a difference naturally raises the important question of whether other variants of limited automata can match their corresponding models of one-way pushdown automata in computational power.

1.2 Extension to Probabilistic and Unambiguous Computations

Lately, the computation model of one-way probabilistic pushdown automata (or 1ppda's) has been discussed extensively in [8, 10, 15, 29] to demonstrate its computational strengths as well as weaknesses.

While nondeterministic computation is purely a theoretical notion, probabilistic computation could be implemented in real life by installing a mechanism for generating (or sampling) random bits (e.g., by flipping fair or biased coins). From a generic viewpoint, deterministic and nondeterministic computations can be seen merely as restricted variants of probabilistic computation, obtained by appropriately defining the criteria for the "error probability" of a computation. A bounded-error probabilistic machine keeps its error probability bounded away from 1/2, whereas an unbounded-error probabilistic machine allows its error probability to come arbitrarily close to 1/2.

In many cases, a probabilistic approach helps us solve a target mathematical problem algorithmically faster, and probabilistic (or randomized) computation often exhibits superiority over its deterministic counterpart even on simple machine models. For example, as Freivalds [2] demonstrated, 2-way probabilistic finite automata (or 2pfa's) running in expected exponential time can recognize certain non-regular languages with bounded-error probability. By contrast, when restricted to expected subexponential running time, bounded-error 2pfa's recognize only regular languages [1, 9]. As this example shows, the expected runtime bounds of probabilistic machines largely affect their computational power, and thus probabilistic behaviors differ significantly from deterministic behaviors. In many studies, the runtime of machines is limited to expected polynomial time. Probabilistic variants of pushdown automata were discussed intensively by Hromkovič and Schnitger [8] as well as Yamakami [29], who demonstrated clear differences in computational power between the two pushdown models, 1npda's and 1ppda's.

The aforementioned usefulness of probabilistic algorithms motivates us to take a probabilistic approach toward Hibbard's original model of k-limited automata. When k-limited automata are allowed to err, these machines are naturally expected to exhibit significantly better performance in computation. This paper in fact aims at introducing a novel model of probabilistic k-limited automata (or k-lpa's) and their natural variants, including one-sided-error, bounded-error, and unbounded-error models restricted to expected polynomial running time, and at exploring their fundamental properties so as to determine the strengths and weaknesses of the families of languages recognized by these machine models. Since k-lda's and k-lna's can be viewed as special cases of k-lpa's, many properties of k-lda's and k-lna's can be discussed within the wider framework of k-lpa's.

Unambiguity has received special attention in formal language and automata theory. For finite automata, unambiguity means that there is at most one accepting computation path on each input. Stearns and Hunt III [20], for instance, demonstrated that, in converting a nondeterministic finite automaton into an equivalent machine, an unambiguous finite automaton performs no better than a deterministic finite automaton. For unambiguous context-free languages, efficient algorithms are known for parsing given words of the languages. Herzog [4] further generalized this notion to pushdown automata and discussed the amount of ambiguity. As Reinhardt and Allender [19] demonstrated, with the use of polynomial-size Karp-Lipton advice, the nondeterministic computation of logarithmic-space auxiliary pushdown automata can be made unambiguous. In this paper, we also discuss unambiguous k-limited automata (abbreviated as k-lua's) as an unambiguous model of k-lna's. We ask what relationships hold between pushdown automata and k-limited automata of the same machine types.

Let us introduce notation for the language families induced by the aforementioned models of k-limited automata. One family consists of all languages recognized by expected-polynomial-time k-lpa's with unbounded-error probability; the one-sided-error and bounded-error models of k-lpa's induce their own families analogously. Further families are obtained by replacing k-lpa's with k-lda's, k-lna's, and k-lua's, respectively. Containment relations among these language families are summarized in Fig. 1.

Figure 1: Containment relations among the language families discussed in this paper. Each upper family contains its lower family, with one indicated exception.

Organization of This Work

In Section 3.1, we will give a formal definition of k-lpa's, following the existing model of 1ppda's explained in detail in Section 2.2. Section 2.3 discusses how to convert any 1ppda into its pop-controlled form, called an "ideal shape". We will present a basic "blank skipping" property of k-lpa's, which, together with the ideal shape, is useful in proving Theorem 3.5 in Section 3.3. The collapses of language families induced by k-lpa's and by k-lua's stated in Theorem 3.11 will be proven in Section 3.5. We will discuss closure/non-closure properties of k-lda's in Section 4.1 and of k-lpa's in Section 4.2. Numerous problems left unanswered throughout this paper will be listed in Section 5 as a guiding compass for the future study of k-limited automata.

2 Fundamental Notions and Notation

Let us formally introduce various computational models of limited automata, which can rewrite the content of each tape cell only during the first k scans or visits of the cell. For comparison, we also describe probabilistic pushdown automata.

2.1 Numbers, Alphabets, and Languages

Let ℕ denote the set of all natural numbers, which are the nonnegative integers, and set ℕ⁺ = ℕ − {0}. We denote by [m, n]_Z the set {m, m+1, …, n} for any two integers m and n with m ≤ n. This set is conveniently called an integer interval, in contrast to a real interval. In addition, we abbreviate the integer interval [1, n]_Z as [n] for any integer n ≥ 1.
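As a small illustration of the interval notation just defined, the following sketch models integer intervals as plain Python sets; the helper names (`int_interval`, `bracket`) are ours, not the paper's.

```python
def int_interval(m: int, n: int) -> set[int]:
    """The integer interval [m, n]_Z = {m, m+1, ..., n}, assuming m <= n."""
    return set(range(m, n + 1))

def bracket(n: int) -> set[int]:
    """The abbreviation [n] for [1, n]_Z."""
    return int_interval(1, n)

print(int_interval(2, 5))  # {2, 3, 4, 5}
print(bracket(3))          # {1, 2, 3}
```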

A nonempty finite set of "symbols" or "letters" is called an alphabet. A string over an alphabet Σ is a finite sequence x of symbols taken from Σ, and its length |x| expresses the total number of symbols in x. The empty string is the unique string of length 0 and is always denoted λ. The notation Σ* denotes the set of all strings over Σ, and any subset of Σ* is called a language over Σ. Given a number n ∈ ℕ, Σ^n (resp., Σ^{≤n}) expresses the set of all strings of length exactly n (resp., at most n) over Σ. Obviously, Σ* coincides with the union of all Σ^n over n ∈ ℕ.

Given two alphabets Σ₁ and Σ₂, we construct a new alphabet consisting of the paired symbols [σ/τ] for σ ∈ Σ₁ and τ ∈ Σ₂, using the track notation of [21]. A string over this alphabet is of the form [σ₁/τ₁][σ₂/τ₂]⋯[σₙ/τₙ], which is further abbreviated as [x/y] for x = σ₁σ₂⋯σₙ and y = τ₁τ₂⋯τₙ. To provide such a string on an input tape, we split the tape into two tracks so that the upper track holds x and the lower track holds y.
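The track notation can be pictured as zipping two equal-length strings into one string over the product alphabet; the list-of-pairs representation below is our own choice for illustration.

```python
def tracks(x: str, y: str) -> list[tuple[str, str]]:
    """Combine x (upper track) and y (lower track) into a single string
    over the product alphabet; both tracks must have the same length."""
    if len(x) != len(y):
        raise ValueError("track strings must have equal length")
    return list(zip(x, y))

print(tracks("ab", "cd"))  # [('a', 'c'), ('b', 'd')]
```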

Given a string x of the form σ₁σ₂⋯σₙ over an alphabet Σ with σᵢ ∈ Σ for all i ∈ [n], the reverse of x is σₙσₙ₋₁⋯σ₁ and is denoted x^R. Given a language L, the notation L^R denotes the reverse language {x^R : x ∈ L}. For a family F of languages, F^R expresses the collection of L^R for all languages L in F.
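The three levels of reversal (string, language, family) can be sketched directly; the function names are illustrative only.

```python
def rev(x: str) -> str:
    """Reverse of a string: sigma_1...sigma_n becomes sigma_n...sigma_1."""
    return x[::-1]

def rev_lang(L: set[str]) -> set[str]:
    """Reverse language L^R = { x^R : x in L }."""
    return {rev(x) for x in L}

def rev_family(F: list[set[str]]) -> list[set[str]]:
    """Reversal applied language-wise to a family of languages."""
    return [rev_lang(L) for L in F]

print(rev("abc"))                    # cba
print(rev_lang({"ab", "ba", "c"}))   # {'ba', 'ab', 'c'} (closed under reversal)
```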

For any two languages A and B, the concatenation A·B (more succinctly, AB) denotes {xy : x ∈ A, y ∈ B}. In particular, when A is a singleton {x}, we write xB instead of {x}B. Similarly, when B = {y}, we use the succinct notation Ay.

Any function h from Σ₁ to Σ₂* for two alphabets Σ₁ and Σ₂ is called a homomorphism. Such a homomorphism is called λ-free if h(σ) ≠ λ holds for any σ ∈ Σ₁. We naturally expand h to a map from Σ₁* to Σ₂* by setting h(σ₁σ₂⋯σₙ) = h(σ₁)h(σ₂)⋯h(σₙ) for any symbols σ₁, σ₂, …, σₙ ∈ Σ₁.
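The symbol-wise-to-string extension of a homomorphism can be sketched as follows; representing h as a dict is our own convention.

```python
def extend(h: dict[str, str]):
    """Extend a symbol-wise homomorphism h: Sigma1 -> Sigma2* to strings
    by concatenating the images of the individual symbols."""
    def h_star(x: str) -> str:
        return "".join(h[s] for s in x)
    return h_star

# h is not lambda-free here, since h(b) is the empty string.
h = extend({"a": "01", "b": ""})
print(h("aba"))  # 0101
```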

2.2 One-Way Probabilistic Pushdown Automata and Their Variants

As a fundamental computation model, we begin with one-way probabilistic pushdown automata (or 1ppda's, for short), which serve as a basis for the new model of probabilistic k-limited automata introduced in Section 3.1. One-way deterministic and nondeterministic pushdown automata (abbreviated as 1dpda's and 1npda's, respectively) can be viewed as special cases of 1ppda's. We also obtain one-way unambiguous pushdown automata (or 1upda's) as a restriction of 1npda's.

An input string is initially placed on an input tape, surrounded by two endmarkers ¢ (left endmarker) and $ (right endmarker). For various models of pushdown automata, the use of such endmarkers is nonessential (see, e.g., [31]). To clarify their use, however, we explicitly include them in the description of a 1ppda.

Formally, a 1ppda is a tuple M = (Q, Σ, {¢, $}, Γ, Θ_Γ, δ, q₀, ⊥, Q_acc, Q_rej), in which Q is a finite set of (inner) states, Σ is an input alphabet, Γ is a stack alphabet, Θ_Γ is a finite subset of Γ* with λ ∈ Θ_Γ, δ is a probabilistic transition function from Q × (Σ̌ ∪ {λ}) × Γ × Q × Θ_Γ to the unit real interval [0, 1], q₀ (∈ Q) is an initial state, ⊥ (∈ Γ) is a bottom marker, Q_acc (⊆ Q) is a set of accepting states, and Q_rej (⊆ Q) is a set of rejecting states with Q_acc ∩ Q_rej = ∅. Let Σ̌ = Σ ∪ {¢, $}. For a given set S of symbols, an S-symbol refers to any symbol in S. The push size of a 1ppda is the maximum length of any string pushed into the stack by a single move, and thus the push size equals max{|w| : w ∈ Θ_Γ}.

For clarity, we express δ(q, σ, a, p, w) as δ(q, σ, a | p, w) with the use of a special separator "|". This value expresses the probability that, when M scans σ on the input tape in inner state q with a as the topmost stack symbol, M changes q to p and updates a to w. In particular, when we always demand δ(q, σ, a | p, w) ∈ {0, 1} (instead of the unit real interval [0, 1]) for all tuples (q, σ, a, p, w), we obtain a one-way deterministic pushdown automaton (or a 1dpda). In contrast, when each value of δ is required to be either 0 or a fixed positive constant, we obtain a one-way nondeterministic pushdown automaton (or a 1npda).
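The way the deterministic case arises as a restriction of the probabilistic transition function can be sketched with simple table checks. The dictionary encoding of δ (outcomes grouped per (state, symbol, stack-top) triple) and the helper names are our own illustration, not the paper's formalism.

```python
# delta: (state, symbol, stack_top) -> {(next_state, push_string): probability}

def is_probabilistic(delta) -> bool:
    """Each outcome distribution must sum to 1 (the normalization condition)."""
    return all(abs(sum(d.values()) - 1.0) < 1e-9 for d in delta.values())

def is_deterministic(delta) -> bool:
    """With all probabilities in {0, 1} and normalization, exactly one
    transition is available at each step, i.e., a 1dpda-style table."""
    return is_probabilistic(delta) and all(
        p in (0.0, 1.0) for d in delta.values() for p in d.values())

# A fair coin flip: push 'A' or leave the stack untouched, each w.p. 1/2.
coin = {("q0", "a", "Z"): {("q0", "AZ"): 0.5, ("q1", "Z"): 0.5}}
print(is_probabilistic(coin), is_deterministic(coin))  # True False
```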

For any q ∈ Q, σ ∈ Σ̌ ∪ {λ}, and a ∈ Γ, we set δ[q, σ, a] = Σ_{(p,w)} δ(q, σ, a | p, w). In the case of σ = λ, we specifically call the transition a λ-move (or a λ-transition), during which the tape head must stay still. At any point, M can probabilistically select either a λ-move or a non-λ-move. This is formally stated as δ[q, σ, a] + δ[q, λ, a] = 1 for any given triplet (q, σ, a) ∈ Q × Σ̌ × Γ.

Whenever M reads a nonempty input symbol, its tape head must move to the right. During λ-moves, by contrast, the tape head stays still. After reading $, the tape head is considered to move off the input region, which consists of ¢x$ for an input x. All cells on the input tape are indexed by natural numbers from left to right: cell 0 is the start cell containing ¢, and, for an input x of length n, cells 1 through n contain the symbols of x and cell n + 1 contains $.

Throughout this paper, we express a stack content, which is a string stored in the stack in sequential order from the topmost symbol z₁ to the bottom symbol zₘ (= ⊥), as z₁z₂⋯zₘ.

A (surface) configuration of M on input x is a triplet (q, i, w), which indicates that M is in inner state q, its tape head scans the ith cell, and its stack contains w. The initial configuration is (q₀, 0, ⊥). An accepting (resp., a rejecting) configuration is a configuration with an accepting (resp., a rejecting) state, and a halting configuration is either an accepting or a rejecting configuration. We say that a configuration C₂ follows a configuration C₁ with probability p if a single application of δ transforms C₁ into C₂ with probability p. A computation path of length t on input x is a series of t + 1 configurations describing a history of consecutive transitions (or moves) made by M on x, starting at the initial configuration, proceeding so that the (i+1)st configuration follows the ith configuration with a positive probability, and ending at a halting configuration. The probability of such a computation path is the product of all transition probabilities chosen along the path. It is important to note that, after reading $, M is still allowed to make a finite series of λ-moves until it enters a halting state. A computation path is called accepting (resp., rejecting) if the path ends with an accepting configuration (resp., a rejecting configuration).

Generally, a 1ppda may produce an extremely long computation path or even an infinite computation path. Following the earlier discussion in Section 1.2 on the expected runtime of probabilistic machines, it is desirable to restrict our attention to 1ppda's whose computation paths have polynomial length on average; that is, there is a polynomial p for which the expected length of the terminating computation paths on every input x is bounded from above by p(|x|). A standard definition of 1dpda's and 1npda's imposes no such runtime bound, because those machines can easily be converted into ones that halt in linear time (e.g., [7]). Throughout this paper, we implicitly assume that all 1ppda's satisfy this expected polynomial termination requirement. This makes it possible for us to concentrate mostly on polynomial-length computation paths.

Given an arbitrary string x, the acceptance probability of M on x is the sum of the probabilities of all accepting computation paths of M starting with ¢x$ written on the input tape; we denote this acceptance probability by p_acc(M, x). Similarly, we define p_rej(M, x) to be the rejection probability of M on x. We say that M accepts (resp., rejects) x with probability α if p_acc(M, x) = α (resp., p_rej(M, x) = α). If M is clear from the context, we often omit "M" and write, e.g., p_acc(x) instead of p_acc(M, x). We say that M accepts x if p_acc(x) > 1/2 and rejects x if p_rej(x) ≥ 1/2. Given a language L, in general, we say that M recognizes L if, for any x ∈ L, M accepts x and, for any x ∉ L, M rejects x. The notation L(M) stands for the set of all strings accepted by M; that is, L(M) = {x : M accepts x}. The error probability of M on x for L refers to the probability that M's outcome differs from the membership of x in L. We further say that M makes bounded error if there exists a constant ε ∈ [0, 1/2) (called an error bound) such that, for every input x, either p_acc(x) ≥ 1 − ε or p_rej(x) ≥ 1 − ε. With or without this condition, M is said to make unbounded error. Moreover, we say that M makes one-sided error if, for all strings x, either p_acc(x) ≥ 1 − ε or p_rej(x) = 1 holds.
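The path-summation view of acceptance probability (each path weighted by the product of its transition probabilities) can be sketched on a stackless probabilistic finite automaton; the same summation applies to 1ppda's with the stack added. The machine, its encoding, and the threshold check below are purely illustrative assumptions.

```python
def p_acc(delta, q0, F, x):
    """Acceptance probability: delta maps (state, symbol) to a list of
    (probability, next_state) pairs; summing the forward distribution is
    equivalent to summing products of probabilities over all paths."""
    dist = {q0: 1.0}  # probability distribution over current states
    for sym in x:
        nxt = {}
        for q, pr in dist.items():
            for step_pr, p in delta.get((q, sym), []):
                nxt[p] = nxt.get(p, 0.0) + pr * step_pr
        dist = nxt
    return sum(pr for q, pr in dist.items() if q in F)

# On each 'a', state 0 stays accepting w.p. 0.9, otherwise falls into sink 1.
delta = {(0, "a"): [(0.9, 0), (0.1, 1)], (1, "a"): [(1.0, 1)]}
prob = p_acc(delta, 0, {0}, "aa")
print(prob > 0.5)  # True: the machine "accepts" aa, with p_acc near 0.81
```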

We require every 1ppda to run in expected polynomial time. Two 1ppda's M₁ and M₂ are (recognition) equivalent if L(M₁) = L(M₂). More strongly, we say that two 1ppda's are error-equivalent if they are recognition equivalent and their error probabilities coincide on all inputs.

For any error bound ε ∈ [0, 1/2), we consider the families of all languages recognized by (expected-polynomial-time) ε-error 1ppda's and by (expected-polynomial-time) ε-error 2pfa's, together with their one-sided-error variants, in which the error probability on one side is at most ε. The strengths and weaknesses of the probabilistic finite-automata families were discussed earlier by Macarie and Ogihara [15], and those of the probabilistic pushdown families were studied by Hromkovič and Schnitger [8] and Yamakami [29].

In comparison, REG denotes the family of all regular languages, which are recognized by one-way deterministic finite automata. Similarly, CFL and DCFL denote the families of all languages recognized by 1npda's and by 1dpda's, respectively. It follows that REG ⊆ DCFL ⊆ CFL. A language exhibited in [8] is recognizable with bounded error by a 1ppda and witnesses that bounded-error 1ppda's are not subsumed by 1npda's. Furthermore, unambiguous computation refers to nondeterministic computation that produces at most one accepting computation path; the unambiguous family is defined by restricting the 1npda's used for CFL to unambiguous computation (see, e.g., [26]).

To describe the behaviors of a stack, we use the basic terminology of [24, 27, 28]. A stack content formally means a series z₁z₂⋯zₘ of stack symbols, stored in the stack sequentially from z₁ at the top of the stack to zₘ (= ⊥) at the bottom of the stack. We refer to the stack content obtained just after the tape head scans and moves off the ith tape cell as the stack content at the ith position. A stack transition means the change of a stack content caused by an application of a single move.

For two language families F₁ and F₂, the notation F₁ ∨ F₂ (resp., F₁ ∧ F₂) denotes the 2-disjunctive closure {A ∪ B : A ∈ F₁, B ∈ F₂} (resp., the 2-conjunctive closure {A ∩ B : A ∈ F₁, B ∈ F₂}). For any index k ≥ 1, define CFL(1) = CFL and CFL(k+1) = CFL(k) ∧ CFL. Notice that CFL(k) ≠ CFL(k+1) for any k ≥ 1 [14] (reproven in [30] by a different argument).

2.3 Ideal Shape Lemma for Pushdown Automata

We start by restricting the behaviors of 1ppda's without compromising their language-recognition power. A 1ppda whose behavior is restricted in the following way is said to be in an "ideal shape" [31].

We want to show how to convert any 1ppda into a "push-pop-controlled" form, in which (i) pop operations always take place by first reading an input symbol and then making a series (one or more) of pop operations without reading any further input symbol, and (ii) push operations add single symbols without altering any existing stack content. In other words, a 1ppda in an ideal shape is restricted to the following actions. (1) Scanning an input symbol, preserve the topmost stack symbol (a stationary operation). (2) Scanning an input symbol, push a new symbol without changing any symbol already stored in the stack. (3) Scanning an input symbol, pop the topmost stack symbol. (4) Without scanning an input symbol (i.e., in a λ-move), pop the topmost stack symbol. (5) The stack operation (4) comes only directly after either (3) or (4). These five conditions can also be stated formally in terms of the transition function δ; we refer the reader to [31] for the precise formulation.
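The action discipline (1)-(5) can be pictured as a validity check on sequences of stack actions; the string encoding of the actions below is our own illustrative convention, not part of the paper's formalism.

```python
def ideal_shape_ok(actions: list[str]) -> bool:
    """Check conditions (1)-(5): reading moves may be 'stationary', 'push'
    (one symbol), or 'pop'; a 'lambda-pop' must directly follow a 'pop'
    or another 'lambda-pop' (condition (5))."""
    allowed = ("stationary", "push", "pop", "lambda-pop")
    prev = None
    for act in actions:
        if act not in allowed:
            return False
        if act == "lambda-pop" and prev not in ("pop", "lambda-pop"):
            return False  # condition (5) violated
        prev = act
    return True

print(ideal_shape_ok(["push", "pop", "lambda-pop", "lambda-pop"]))  # True
print(ideal_shape_ok(["push", "lambda-pop"]))                       # False
```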

Lemma 2.1 states that any 1ppda can be converted into an "equivalent" 1ppda in an ideal shape. The lemma was first stated in [31] for 1ppda's equipped with the endmarkers as well as for the model of 1ppda's without endmarkers. Note that a 1ppda with no endmarkers is obtained from the definition of a 1ppda given in Section 2.2 simply by removing ¢ and $. The acceptance and rejection of such a no-endmarker 1ppda are determined by whether the 1ppda is in an accepting or a non-accepting state just after reading the entire input string.

Lemma 2.1

[Ideal Shape Lemma, [31]] Any 1ppda can be converted into another error-equivalent 1ppda in an ideal shape whose numbers of states and stack symbols are bounded by polynomials in the original machine's number of states, stack alphabet size, and push size. The same statement is also true for the model with no endmarkers.

Since the error probability can differ from input to input, by setting it appropriately, we can obtain the ideal shape lemma also for 1dpda's and 1npda's. The proof of Lemma 2.1 given in [31, Section 4.2] is lengthy, consisting of a series of transformations of automata, and utilizes in part basic ideas of Hopcroft and Ullman [7, Chapter 10] and of Pighizzini and Pisoni [17, Section 5]. For completeness of the paper, we describe a rough sketch of the proof given in [31, Section 4.2].

Proof Sketch of Lemma 2.1.   Let us begin the proof sketch by fixing a 1ppda M arbitrarily and letting e denote the push size of M. Starting with this machine M, we perform a series of conversions of the machine so as to satisfy the conditions of an ideal-shape 1ppda. To make our description simpler, we introduce succinct notation for the probability of the event that, starting in state q with stack content az (for an arbitrary string z), M makes a (possible) series of consecutive λ-moves without accessing any symbol in z and eventually enters state p with stack content wz.

(1) We first convert the original 1ppda M into an error-equivalent 1ppda whose λ-moves are restricted to pop operations; namely, every λ-move pops the topmost stack symbol. For this purpose, we need to remove any λ-move by which M changes a topmost stack symbol into a nonempty string, as well as any transition that otherwise violates this pop-only requirement. Notice that, once the resulting machine reads $, it makes only λ-moves and eventually empties its stack.

(2) We next convert the machine obtained in Step (1) into an error-equivalent 1ppda that conducts only the following types of moves: (a) it pushes one symbol without changing the existing stack content, (b) it replaces the topmost stack symbol by a (possibly different) single symbol, and (c) it pops the topmost stack symbol. We also require that all λ-moves be limited to either type (b) or type (c).

(3) We further convert the machine so that it still satisfies (2) and, moreover, performs no stationary operation that replaces a topmost stack symbol with a different single symbol. This is done by remembering the topmost stack symbol in the inner state without writing it into the stack. For this purpose, we use new symbols of the form [q, a] (for an inner state q and a stack symbol a) to indicate that the machine is in state q while a is the topmost stack symbol.

(4) We then convert the machine so that it satisfies (3) and, in addition, makes λ-moves of pop operations only directly after a (possible) series of pop operations. The probabilistic transition function is constructed as follows: the basic idea of the construction is that, whenever the machine makes a pop operation right after a certain non-pop operation, we combine the two operations into a single move.

(5) Finally, the machine obtained in Step (4) is the desired ideal-shape 1ppda.

The ideal shape lemma is useful for simplifying certain proofs associated with 1ppda’s. One such example was exhibited in [31].

Lemma 2.2

[31] The language family induced by the 1ppda's of Section 2.2 is closed under reversal; that is, the family coincides with the collection of the reversals of its languages.

2.4 Nondeterministic Finite Automata with Output Tapes

As done in [25, 26, 27], we equip each 1nfa with a write-once output tape. (An output tape is write-once if its cells are initially blank, its tape head never moves to the left, and the tape head must move to the right whenever it writes a non-blank symbol.) We use such a 1nfa as a nondeterministic variant of a Mealy machine, that is, a machine that produces a single output symbol whenever it processes a single input symbol. Since the machine cannot erase written output strings, we allow the machine to invalidate any produced string on the output tape by later entering a rejecting inner state. For brevity, any 1nfa that behaves in this specific way is called real-time. For arbitrary alphabets Σ and Θ, we consider the class of all multi-valued partial functions from Σ* to Θ* whose output values are produced on write-once tapes along only the accepting computation paths of real-time 1nfa's, where each accepting path must end in an accepting configuration in which, after scanning the right endmarker $, the machine enters a designated unique accepting state. This last requirement ensures that there is exactly one accepting configuration on each input. We further consider the subclass of all total functions in this class.

We define the "reversal" f^R of a function f by setting f^R(x) = {y^R : y ∈ f(x^R)} for any string x, and we apply the reversal operation to a class of functions member-wise. The following equality holds for the functional composition "∘" and will be used in Section 3.5.

Lemma 2.3

The function class of Section 2.4 is closed under functional composition combined with reversal; that is, composing two functions of the class (one of them applied in reverse) yields a function of the same class.

Proof.

Let h denote any multi-valued total function expressible as the composition in question. Take two functions f and g in the class such that h(x) = g(f(x)) holds for all x, and take two real-time 1nfa's M_f and M_g with output tapes computing f and g, respectively. Consider a machine that first runs M_f and then runs M_g by moving its tape head backward over M_f's output. This machine correctly computes h, but it is not a real-time 1nfa. We therefore construct another real-time 1nfa N that reads the input symbol by symbol from left to right and simulates M_f and M_g simultaneously.

(i) At scanning ¢, N guesses the information needed to start both simulations, stores it in its internal memory, and moves its tape head to the right.

(ii) Assume that the current cell contains input symbol σ and N's internal memory holds the pair of currently simulated states. N guesses one step of M_f on σ together with a matching step of M_g on the output symbol that M_f produces, updates its memory accordingly, and writes the corresponding output symbol of M_g onto its own output tape.

(iii) At scanning $, N guesses the remaining steps and checks whether both simulated machines reach their unique accepting configurations consistently with the guesses made at (i). If not, N enters a rejecting state; otherwise, N halts in an accepting state.

If there is any discrepancy in the above simulation, then our guess must be wrong, and thus N immediately enters a rejecting state.

By the above construction, the number of accepting computation paths of N matches that of the combined simulation of M_f and M_g on any input. Therefore, h belongs to the class.

Since the opposite inclusion is clear, the lemma is true. ∎

3 Behaviors of Various Limited Automata

We will formally introduce the probabilistic model of limited automata as a foundation and explain how to obtain its variants, such as the deterministic, nondeterministic, and unambiguous models. We will then explore their fundamental properties through structural analyses of their behaviors.

Our first goal is to provide, in the setting of probabilistic computation, a complete characterization of finite and pushdown automata in terms of limited automata. All probabilistic machines in this paper are assumed to run in expected polynomial time.

3.1 Formal Definitions of Limited Automata

In a way similar to Section 2.2, we begin with an introduction of a probabilistic model of limited automata and then define other variants by modifying this basic model.

A probabilistic k-limited automaton (or a k-lpa, for short) is formally defined as a tuple M = (Q, Σ, {¢, $}, {Γ_i}_{i∈[k]}, δ, q₀, Q_acc, Q_rej), which accesses only the tape area between the two endmarkers (the endmarkers are readable but not changeable), where Q is a finite set of (inner) states, Q_acc (⊆ Q) is a set of accepting states, Q_rej (⊆ Q) is a set of rejecting states, Σ is an input alphabet, {Γ_i}_{i∈[k]} is a collection of mutually disjoint finite sets of tape symbols, q₀ is an initial state in Q, and δ is a probabilistic transition function from Q × Γ × Q × Γ × {−1, +1} to the real unit interval [0, 1]. We implicitly assume that Γ_0 = Σ and write Γ for the whole tape alphabet {¢, $} ∪ ⋃_{i∈[0,k]} Γ_i. The k-lpa M has a rewritable tape, on which an input string is initially placed, surrounded by the two endmarkers ¢ and $. In our formulation of k-lpa's, unlike 1ppda's, the tape head always moves either to the right or to the left without staying still; in other words, M makes no λ-moves. We also remark that M is not required to halt immediately after reading $.

At any step, M probabilistically chooses one of the possible transitions given by δ. For convenience, we express δ(q, a, p, b, d) as δ(q, a | p, b, d), which denotes the probability that, when M scans a on the tape in inner state q, M changes its inner state to p, overwrites b onto a, and moves its tape head in direction d. We set δ[q, a] = Σ_{(p,b,d)} δ(q, a | p, b, d). The function δ must satisfy δ[q, a] = 1 for every pair (q, a).

We say that the tape head (or sometimes its underlying machine M) makes a left turn at cell i if the tape head moves to the right into cell i and then moves back to the left at the next step. Similarly, we define a right turn. For convenience, the tape head is said to make a turn if it makes either a left turn or a right turn. See Fig. 3 for an illustration of a tape-head movement.

The k-lpa M must satisfy the following k-limitedness requirement. During the first k scans of each tape cell, at the ith scan with i ∈ [k], if M reads the content of a cell holding a symbol in Γ_{i−1}, then M updates the cell content to a symbol in Γ_i. After the kth scan, the cell becomes unchangeable (or frozen); that is, M still reads the symbol in the cell but no longer alters it. For this rule, there is one exception: whenever the tape head makes a turn at a tape cell, we count this move as "double scans" (or "double visits"). To keep the endmarkers special, we further assume that no symbol in any Γ_i replaces the endmarkers. The above requirement is stated semi-formally as follows.

The k-limitedness requirement: for any transition δ(q, σ | p, τ, d) > 0 with q, p ∈ Q, d ∈ D, σ ∈ Γ^(i), and τ ∈ Γ^(j) for i, j ∈ {0, 1, …, k}, (1) if σ ∈ {▹, ◃}, then τ = σ and the head moves away from the endmarker, (2) if i < k and i is even, then τ ∈ Γ^(i+1), and (3) if i < k and i is odd, then τ ∈ Γ^(i+1) ∪ Γ^(i+2).
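The double-counting convention for turns can be pictured with a minimal Python sketch (our own modeling, not from the paper) of a tape cell that counts visits, counts a turn as two visits, and becomes frozen once the count exceeds the limit k:

```python
class Cell:
    """One tape cell of a k-limited tape (illustrative model; names are ours)."""
    def __init__(self, symbol, k):
        self.symbol = symbol
        self.visits = 0
        self.k = k

    def visit(self, new_symbol, turned=False):
        self.visits += 2 if turned else 1   # a turn counts as "double visits"
        if self.visits <= self.k:
            self.symbol = new_symbol        # still within the first k visits
        # once visits exceed k, the cell is frozen:
        # reads still succeed, but writes are silently ignored
```

For example, with k = 2, a first ordinary visit rewrites the cell, but a subsequent visit that turns at the cell pushes the counter past 2, so its attempted write has no effect.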

We assume that all tape cells are indexed by natural numbers: the cell containing the left endmarker ▹ is indexed 0 (such a cell is called the start cell), and the cell containing the right endmarker ◃ is indexed n + 1 if an input is of length n. Most notions and notations used for 1ppda's in Section 2.2 are applied to k-lpa's with slight modifications if necessary. A (surface) configuration of M on input x is a triplet of the form (q, i, w), which indicates that M is in state q scanning the i-th cell of the tape whose content is w. Similarly to the case of 1ppda's, a computation path of M on x is a series of configurations generated by applying δ repeatedly, and we associate such a computation path with the probability of generating it. A computation of M on x relates to a computation graph whose vertices are distinct configurations of M on x and whose (directed) edges represent single transitions of M between two configurations. Similarly to 1ppda's, we also define the notions of acceptance/rejection probability, one-sided error, bounded error, and unbounded error for k-lpa's, as well as the associated notations, such as the acceptance probability p_acc(x) and the rejection probability p_rej(x). Concerning the running time of k-lpa's, similarly to 1ppda's in Section 2.2, we implicitly assume that all k-lpa's in this work run in expected polynomial time.

When a string w is written on a tape, the w-region of the tape refers to the series of consecutive cells that hold the symbols of w, provided that w can be identified uniquely on the tape from the context. Even after some symbols of w are altered, we may use the same term “w-region” as long as the referred cells in the original region are easily discernible from the context. Moreover, a blank region is a series of consecutive cells containing only the blank symbol B whose ends are both adjacent to non-blank cells. A fringe of a blank region is a non-blank cell adjacent to one of the ends of the blank region. Since the two endmarkers cannot be changed, each blank region always has two fringes. We say that M enters the w-region in direction d in inner state q if, at a certain step, M moves into the d-side of the w-region from the outside of it while changing its inner state to q, where the “d-side” means the left side if d = +1 and the right side if d = −1. Moreover, M leaves the w-region in direction d in inner state q if, at a certain step, M moves out of the w-region from the inside across its boundary in direction d while changing its inner state to q.
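The notions of blank region and fringe can be pictured with a short Python helper (a sketch under our own tape encoding; the endmarkers ">" and "<" and the blank symbol "B" are stand-ins):

```python
B = "B"  # the blank symbol (illustrative encoding)

def blank_regions(tape):
    """tape is a list like ['>', 'a', 'B', 'B', 'c', '<'].
    Returns (start, end, left_fringe, right_fringe) tuples, end exclusive:
    maximal runs of B strictly between the endmarkers, plus the non-blank
    cells adjacent to each end of the run."""
    regions, i = [], 1
    while i < len(tape) - 1:
        if tape[i] == B:
            j = i
            while tape[j] == B:
                j += 1
            # endmarkers are never blank, so both fringes always exist
            regions.append((i, j, tape[i - 1], tape[j]))
            i = j
        else:
            i += 1
    return regions
```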

Given an index k ≥ 1 and a constant ε ∈ [0, 1/2), the basic notation k-BPLA_ε refers to the family of all languages recognized by (expected-polynomial-time) k-lpa's with error probability at most ε. In the bounded-error model, ε is bounded away from 1/2, and thus the union ⋃_{ε∈[0,1/2)} k-BPLA_ε is abbreviated as k-BPLA. In the case of the unbounded-error model, by contrast, we write k-PLA (occasionally, we write k-PLA_{<1/2} for clarity). Similarly, k-RLA_ε is defined by (expected-polynomial-time) one-sided ε-error k-lpa's. Let k-RLA = ⋃_{ε∈[0,1)} k-RLA_ε.

Furthermore, by requiring a k-lna to produce only unambiguous computations on every input, we can introduce the machine model of unambiguous k-limited automata (or k-lua's, for short). Using k-lda's, k-lna's, and k-lua's as underlying machines, the notations k-LDA, k-LNA, and k-LUA are used to express the families of all languages recognized by k-lda's, k-lna's, and k-lua's, respectively.

Among all the aforementioned language families, it follows from the above definitions that, for each index k ≥ 1, k-LDA ⊆ k-LUA ⊆ k-LNA, k-LDA ⊆ k-BPLA_ε ⊆ k-PLA, and k-RLA_ε ⊆ k-RLA_{ε'} for any constants ε and ε' with 0 ≤ ε ≤ ε' < 1. By amplifying the success probability of k-lra's, it is possible to show the further inclusion k-RLA ⊆ k-BPLA for every index k ≥ 1. This inclusion is not obvious from the definition because a k-lra can make error probability greater than 1/2, which is not a bounded-error probability.

Lemma 3.1

For any k ≥ 1, k-RLA ⊆ k-BPLA.

Proof.

Take any k-lra N for a language L whose error probability is at most a constant ε ∈ [0, 1), and assume without loss of generality that ε > 0. Choose a constant γ satisfying max{0, 1 − 1/(2ε)} < γ < 1/2. We define a new k-lpa M as follows. Given an input x, we first run N on x. Whenever N enters a rejecting state, we accept x with probability γ and reject x with probability 1 − γ. On the contrary, when N enters an accepting state, we accept x with probability 1. For any input x ∈ L, the total acceptance probability becomes at least (1 − ε) + εγ = 1 − ε(1 − γ) > 1/2. For the other input x ∉ L, the total rejection probability is 1 − γ > 1/2. Hence, L belongs to k-BPLA. ∎
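The bounds in this amplification step can be sanity-checked numerically. The sketch below is our reconstruction of the argument (the names eps and gamma and the two bound formulas are ours): for any one-sided error bound ε < 1, any γ strictly between max{0, 1 − 1/(2ε)} and 1/2 pushes both the acceptance and rejection probabilities above 1/2.

```python
# A one-sided-error machine errs with probability at most eps only on
# inputs in L; mixing in acceptance probability gamma on rejecting runs
# yields two-sided error bounded away from 1/2.

def amplified_bounds(eps, gamma):
    accept_if_in_L = (1 - eps) + eps * gamma   # accepting runs stay accepting
    reject_if_not_in_L = 1 - gamma             # rejections flip with prob. gamma
    return accept_if_in_L, reject_if_not_in_L

def valid_gamma(eps):
    lo = max(0.0, 1 - 1 / (2 * eps))           # lower limit forced by eps
    return (lo + 0.5) / 2                      # midpoint of the interval (lo, 1/2)
```

Note that as ε approaches 1, the admissible interval for γ shrinks toward {1/2}, so the resulting error bound approaches (but stays below) 1/2; this is why the lemma yields bounded error but no fixed error constant.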

We further define BPLA to be the union ⋃_{k≥1} k-BPLA. Similarly, we can define PLA, RLA, LDA, and LNA. It then follows from Lemma 3.1 that RLA ⊆ BPLA.

3.2 Blank Skipping Property for Limited Automata

Hibbard [6] proved that k-LNA = CFL for every k ≥ 2, and Pighizzini and Pisoni [17] demonstrated that 2-LDA coincides with DCFL. It is also possible to show that 2-PLA ⊆ PCFL and 2-BPLA ⊆ BPCFL using the ideal-shape property of 1ppda's (see Lemma 3.8); however, the opposite inclusions are not known to hold. Therefore, our goal of exactly characterizing 2-PLA and 2-BPLA requires a specific property of k-lpa's, called blank skipping, whereby a k-lpa writes only a unique blank symbol, say B, during the k-th visit and makes only the same deterministic moves while reading B, in such a way that it changes neither its inner state nor its head direction (either to the right or to the left); in other words, it behaves in exactly the same way while reading consecutive blank symbols. When a k-lpa passes a cell for the k-th time, it must make the cell blank (i.e., the cell holds B) and the cell becomes frozen afterward. This property plays an essential role in simulating various limited automata on pushdown automata. In what follows, we define this property for various limited automata.

Definition 3.2

Let k ≥ 1. A k-limited automaton M is said to be blank skipping if (1) Γ^(k) = {B}, where B is a unique blank symbol, and (2) while reading a consecutive series of B-symbols, the machine must maintain the same inner state and the same head direction in a deterministic fashion. More formally, condition (2) states that there are two disjoint subsets Q^(+1) and Q^(−1) of Q for which δ(q, B | q, B, d) = 1 for any direction d ∈ D and any inner state q ∈ Q^(d). See Fig. 2.
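Condition (2) can be phrased as a simple check on the transition table. The following Python sketch (the encoding of δ and all names are ours) verifies that, on the blank symbol B, every designated state deterministically keeps its state, its direction, and the blank:

```python
B = "B"  # the unique blank symbol (illustrative encoding)

def is_blank_skipping_on_blanks(delta, right_states, left_states):
    """delta maps (state, symbol) to a list of (prob, (state', symbol', d)).
    right_states/left_states are the two disjoint state sets usable inside
    blank runs, moving right (+1) and left (-1) respectively."""
    checks = [(q, +1) for q in right_states] + [(q, -1) for q in left_states]
    # each such state must have exactly one probability-1 move on B that
    # keeps the state, keeps the direction, and rewrites B by B itself
    return all(delta.get((q, B)) == [(1.0, (q, B, d))] for q, d in checks)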

To emphasize the use of the blank skipping property, we append the prefix “bs-”, as in bs-k-LDA and bs-k-LNA.

Figure 2: A tape-head movement of a blank-skipping machine.

Let us now prove that nondeterministic limited automata can always be converted into blank-skipping form.

Lemma 3.3

[Blank Skipping Lemma] Let k be any integer with k ≥ 2. Given any k-lna M, there exists another k-lna N such that (1) N is blank skipping, (2) N is recognition-equivalent to M, and (3) the number of accepting computation paths of N matches that of M on every input.

In the case of -lda’s, as shown in Proposition 3.4, we can transform limited automata into their blank skipping form and this is, in fact, a main reason that equals (due to Theorem 3.5(2) with setting and using ). From Lemma 3.3, the proposition follows immediately because the inclusions , , and are obvious and Lemma 3.3 yields the opposite inclusions as well.

Proposition 3.4

For each index k ≥ 2, k-LDA = bs-k-LDA, k-LNA = bs-k-LNA, and k-LUA = bs-k-LUA.

In what follows, we present the proof of Lemma 3.3.

Proof of Lemma 3.3.   The following argument uses in part a basic idea of Pighizzini and Pisoni [17]. We first describe the proof of the lemma for k-lda's and then explain how to amend it for k-lna's (and thus for k-lua's).

Let k ≥ 2 be any integer and let M = (Q, Σ, {Γ^(i)}_{i∈[k]}, δ, q_0, Q_acc, Q_rej) be any k-lda. Let Γ = Σ ∪ {▹, ◃} ∪ ⋃_{i∈[k]} Γ^(i) with Γ^(0) = Σ. Note that, as long as the currently scanned symbol belongs to some Γ^(i) with i ∈ [k], we can uniquely determine the tape-head direction from the symbol alone. Let D = {−1, +1} and set [k] = {1, 2, …, k}.

Firstly, we modify M so that it remembers the tape-head direction at the current step. This can be done by defining a new machine M̃ with the following items. Let Q̃ = Q × D, q̃_0 = (q_0, +1), Q̃_acc = Q_acc × D, Q̃_rej = Q_rej × D, and δ̃((q, d′), σ | (p, d), τ, d) = δ(q, σ | p, τ, d) whenever δ(q, σ | p, τ, d) > 0. For simplicity, hereafter, we assume that M has these items Q̃, δ̃, q̃_0, Q̃_acc, and Q̃_rej, but we intentionally drop “˜” (tilde) and write them as Q, δ, q_0, Q_acc, and Q_rej, respectively.

Let us introduce two notations g_w and A_w. For each string w, let g_w((q, d), (p, d′)) equal 1 if M enters the w-region in direction d in inner state q, stays in the inside of the w-region, and eventually leaves the w-region in direction d′ in inner state p, and let g_w((q, d), (p, d′)) be 0 otherwise. We also set A_w to be a |Q × D| × |Q × D| matrix such that, for any index pair (q, d) and (p, d′) in Q × D, the ((q, d), (p, d′))-entry of A_w equals g_w((q, d), (p, d′)). Since the total number of distinct matrices A_w over all strings w is at most 2^{(2|Q|)^2}, it is possible to use A_w as a part of inner states of the new machine N. Moreover, let S_w denote the set of all pairs (p, d) such that, when d = +1 (resp., d = −1), M enters the w-region in direction +1 (resp., −1), stays in the w-region, and eventually leaves the w-region in direction d in state p. To compute the matrix A_{wv} of a concatenated string wv, we need to use the two matrices A_w and A_v, but we do not need to remember w and v themselves.
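The claim that the crossing behavior of a concatenated region is computable from the behaviors of its two parts alone can be illustrated by composing two crossing-behavior tables. The sketch below uses our own encoding for the deterministic case: behavior[(q, d)] gives the exit (state, direction) of a head entering in state q moving in direction d (+1 meaning entry from the left), with None modeling a head that never leaves. The composition simply simulates the head bouncing between the two subregions until it exits or repeats a situation:

```python
def compose(left, right):
    """Crossing behavior of a concatenated region from its parts' behaviors."""
    def run(q, d):
        side = "L" if d == +1 else "R"   # which subregion the head enters first
        seen = set()
        while True:
            if (q, d, side) in seen:
                return None              # head loops forever inside the region
            seen.add((q, d, side))
            table = left if side == "L" else right
            res = table.get((q, d))
            if res is None:
                return None
            q, d = res
            if side == "L" and d == -1:
                return (q, -1)           # exited through the left end
            if side == "R" and d == +1:
                return (q, +1)           # exited through the right end
            side = "R" if side == "L" else "L"   # crossed into the other part
    return {(q, d): run(q, d) for (q, d) in set(left) | set(right)}
```

Since the result depends only on the two input tables, a machine carrying such tables in its finite control can maintain them without remembering the underlying strings, which is the point of the construction above.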

In what follows, we wish to consider only the case of even k because the other case is similar in principle. We construct the desired machine N from M. The desired k-lda N works as follows. Here, we try to meet the following requirements during the construction of N. The new machine N uses the same tape symbols as M does, but it also writes symbols of the form ⟨σ⟩, ⟨σ, A⟩, or ⟨σ, A, d⟩ to mark a fringe of a blank region, where A is a crossing matrix of the form A_w and d ∈ D. When N writes B over a non-blank symbol, it enters inner states of the form (q, d) or (q, d, A) with q ∈ Q and d ∈ D. While N stays in a blank region, however, it keeps the same inner state of the form (q, d). We first deal with the case where the tape head moves from the left to the current cell.

1. In the case of the first at most k − 1 visits to the current cell, N simulates M precisely.

2. At the k-th visit to the current cell containing symbol σ, if M changes its current inner state, say, to q (where q ∈ Q) and writes τ over the symbol σ, then N writes B instead and changes its inner state to (q, +1) as a new inner state of N.

3. At the