Constant-Space, Constant-Randomness Verifiers with Arbitrarily Small Error

06/22/2020 ∙ by M. Utkan Gezer, et al. ∙ Boğaziçi University

We study the capabilities of probabilistic finite-state machines that act as verifiers for certificates of language membership for input strings, in the regime where the verifiers are restricted to toss some fixed nonzero number of coins regardless of the input size. Say and Yakaryılmaz showed that the class of languages that could be verified by these machines within an error bound strictly less than 1/2 is precisely NL, but their construction yields verifiers with error bounds that are very close to 1/2 for most languages in that class. We characterize a subset of NL for which verification with arbitrarily low error is possible by these extremely weak machines. It turns out that, for any ε>0, one can construct a constant-coin, constant-space verifier operating within error ε for every language that is recognizable by a linear-time multi-head finite automaton (2nfa(k)). We discuss why it is difficult to generalize this method to all of NL, and give a reasonably tight way to relate the power of linear-time 2nfa(k)'s to simultaneous time-space complexity classes defined in terms of Turing machines.




1 Introduction

The classification of languages in terms of the resources required for verifying proofs (“certificates”) of membership in them is a main concern of computational complexity theory. Major results in this area have demonstrated important tradeoffs among different types of resources such as time, space, and randomness: The power of deterministic polynomial-time, polynomial-space bounded verifiers, characterized by the class NP, has, for instance, been shown to be identical to that of probabilistic bounded-error polynomial-time logarithmic-space verifiers that toss only logarithmically many coins in terms of the input size cl95 . More recently, Say and Yakaryılmaz initiated the study of the power of finite-state verifiers that are restricted to toss some fixed nonzero number of coins regardless of the input size, and proved sayyakaryilmaz that the class of languages that could be verified by these machines within an error bound strictly less than 1/2 is precisely NL, i.e. the class of languages with deterministic logarithmic-space verifiers.

The construction given in sayyakaryilmaz could exhibit a constant-randomness verifier operating within error ε for some ε < 1/2 for any language in NL; however, it did not provide a method for reducing this error to more desirable smaller values. Indeed, for many languages in NL, the constructed verifier’s error bound is uncomfortably close to 1/2, raising the question of whether the class of languages for which it is possible to obtain verifiers with arbitrarily small positive error bounds is a proper subset of NL or not.

In this paper, we characterize a subset of NL for which verification with arbitrarily low error is possible by these extremely weak machines. It turns out that, for any ε > 0, one can construct a constant-coin, constant-space verifier operating within error ε for every language that is recognizable by a linear-time multi-head finite automaton (2nfa(k)). We discuss why it is difficult to generalize this method to all of NL, and give a reasonably tight way to relate the power of linear-time 2nfa(k)’s to simultaneous time-space complexity classes defined in terms of Turing machines. We conclude with a list of open questions.

2 Preliminaries

The reader is assumed to be familiar with the standard concepts of automata theory, Turing machines, and basic complexity classes sipser .

The following notation will be used throughout this paper:

  • sᵢ is the i-th element of the sequence s

  • s · t is the sequences s and t concatenated

  • ⟨O⟩ is the encoding of the objects O in the alphabet of the context

  • A ⊔ B is the union of the sets A and B, also asserting that A and B are disjoint

2.1 Multihead finite automata

A k-head nondeterministic finite automaton, denoted 2nfa(k) (the 2 indicates that these machines can move their heads in both directions), is a 6-tuple consisting of

  1. a finite set of states Q,

  2. an input alphabet Σ,

  3. a transition function δ: Q × Γᵏ → P(Q × Δᵏ), where:

    • Γ = Σ ∪ {⊢, ⊣} is the tape alphabet, where ⊢ and ⊣ are respectively the left and right end markers, and

    • Δ = {−1, 0, +1} is the set of head movements, where −1 and +1 respectively indicate moving left and right, and 0 indicates staying put,

  4. an initial state q₀ ∈ Q,

  5. an accept state q_acc ∈ Q, and

  6. a reject state q_rej ∈ Q.

A 2nfa(k) M initially starts from q₀, with ⊢w⊣ written on its single read-only tape, where w is the input string. All k tape heads are initially on the ⊢ symbol. The function δ maps the current state and the k symbols under the tape heads to a set of alternative steps M can take. By picking an alternative (q, d₁, …, d_k), M transitions into the state q, and moves its i-th head by dᵢ.

The configuration of a 2nfa(k) M at a step of its execution is the (k+1)-tuple consisting of its state and its k head positions at that moment. The initial configuration of M is (q₀, 0, …, 0).

Starting from its initial configuration, and following different alternatives offered by δ, a 2nfa(k) M may have several computational paths on the same string. A computational path of M halts if it reaches q_acc or q_rej, or if δ does not offer any steps for M to follow. M accepts an input string w if there is a computational path of M running on w that halts on q_acc. M rejects an input string w if M running on w halts on a state other than q_acc on every computational path. The language recognized by M is the set of all strings accepted by M.

Given an input string w, M may have computational paths that never halt. In the special case that, given any input string, M halts on every computational path, M is said to be an always halting 2nfa(k).
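Since a 2nfa(k) has only finitely many configurations on a fixed input, acceptance amounts to reachability in its configuration graph. The following is a minimal sketch of that view (the function name `accepts`, the dictionary encoding of δ, and the use of `<` and `>` for the end markers are our own illustrative choices, not part of the model's definition):

```python
def accepts(delta, q0, q_acc, q_rej, w, k):
    """Decide acceptance of a 2nfa(k) by depth-first search over its
    finite configuration graph.  A configuration is (state, h1, ..., hk);
    there are at most |Q| * (n+2)^k of them, so plain reachability of an
    accepting configuration suffices, and looping paths are harmless."""
    tape = "<" + w + ">"        # '<' and '>' stand in for the end markers
    n = len(tape)
    start = (q0,) + (0,) * k
    seen, frontier = {start}, [start]
    while frontier:
        conf = frontier.pop()
        q, heads = conf[0], conf[1:]
        if q == q_acc:
            return True
        if q == q_rej:
            continue
        symbols = tuple(tape[h] for h in heads)
        for nq, moves in delta.get((q, symbols), set()):
            # clamp heads to the tape; a faithful model simply offers
            # no moves past the end markers
            nheads = tuple(min(max(h + d, 0), n - 1)
                           for h, d in zip(heads, moves))
            nconf = (nq,) + nheads
            if nconf not in seen:
                seen.add(nconf)
                frontier.append(nconf)
    return False
```

Nondeterminism is handled by exploring all alternatives; a path that loops merely revisits configurations already in `seen` and is not re-expanded.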

A k-head deterministic finite automaton, denoted 2dfa(k), differs from a 2nfa(k) in its transition function, which is defined as δ: Q × Γᵏ → Q × Δᵏ. 2dfa(1)’s and 2nfa(1)’s are simply called finite automata, and are denoted as 2dfa’s and 2nfa’s for the deterministic and nondeterministic counterparts, respectively.

For a particular k, let 2NFA(k) denote the class of languages recognized by 2nfa(k)’s. For when k is unspecified, let 2NFA(∗) denote the class of languages that have a 2nfa(k) recognizing them for some k. In other words,

2NFA(∗) = ⋃_{k ≥ 1} 2NFA(k).

2NFA(1) is the class of regular languages.

For any growth function f, NSPACE(f(n)) denotes the class of languages recognized by nondeterministic Turing machines which are allowed to use O(f(n)) space for inputs of length n. The class NSPACE(log n) is commonly denoted as NL.

Lemma 1.

Nondeterministic multi-head finite automata are equivalent to nondeterministic logarithmic-space TMs in terms of language recognition power hartmanis . Put formally, NL = 2NFA(∗).

Lemma 2.

The languages in NL are organized in a hierarchy based on the number of heads of the nondeterministic automata recognizing them monien_two-way_1980 . Formally, the following is true for any k ≥ 1: 2NFA(k) ⊊ 2NFA(k+1).

For any given k, let LIN-2NFA(k) denote the class of languages that are recognized by a 2nfa(k) running for O(n) steps on every alternative computational path on any input of length n. Clearly, those machines are also always halting. Let LIN-2NFA(∗) denote the class of languages that are recognized by a linear-time nondeterministic multi-head finite automaton with any number of heads. (We use the designation LIN instead of an explicit O(n) time bound in these class names.)

Lemma 3.

The following is true for any k ≥ 1: every language in 2NFA(k) is recognized by an always halting 2nfa(2k).


Let M be any 2nfa(k) recognizing L, with Q as its set of states. Running on an input string of length n, M can have |Q|(n+2)ᵏ different configurations. If M executes for more than |Q|(n+2)ᵏ steps, then it must have repeated a configuration, and be in a loop. Therefore, for every input string in L, M should have an accepting computation path of at most |Q|(n+2)ᵏ steps.

With the help of k additional counter heads, a 2nfa(2k) M′ can simulate M while imposing on it a runtime limit of |Q|(n+2)ᵏ steps. Machine M′ can count up to |Q|(n+2)ᵏ as follows: Let c₁, …, c_k denote the counter heads. Head c₁ moves right on every |Q|-th step of M’s simulation. For i > 1, whenever the head c_{i−1} reaches the right end marker, it rewinds back to the left end, and head c_i moves once to the right. If c_k attempts to move past the right end, M′ rejects.
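The counter heads thus behave like an odometer over the n+2 tape cells. As a sanity check on the counting argument, that behaviour can be sketched as follows (a simplification that ignores the |Q| factor, which the construction keeps in the finite-state control; the function name is ours):

```python
def count_with_heads(n, k):
    """Illustrative odometer of k counter heads on a tape of n+2 cells
    (input plus end markers): the first head advances on every tick;
    when a head falls off the right end it rewinds and carries into the
    next head.  Returns the number of ticks before the last head would
    overflow, namely (n+2)**k."""
    positions = [0] * k            # head i sits on cell positions[i]
    ticks = 0
    while True:
        ticks += 1
        i = 0
        positions[0] += 1
        while positions[i] == n + 2:   # fell off the right end
            positions[i] = 0           # rewind to the left end
            if i + 1 == k:
                return ticks           # last head overflowed: reject
            positions[i + 1] += 1      # carry into the next head
            i += 1
```

With k heads the machine can therefore time out any simulation after (n+2)ᵏ (times |Q|, via the states) steps.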

The 2nfa(2k) M′ recognizes the same language as M, but within the time limit of |Q|(n+2)ᵏ steps. ∎

Lemmas 3 and 2 can be combined into the following useful fact.

Corollary 4.

For every L ∈ NL, there is a minimum number k_L, such that there exists an always halting 2nfa(k_L) recognizing L, but not an always halting 2nfa(k) recognizing L with k < k_L.


By Lemma 2, L ∈ 2NFA(k) for some k, but L ∉ 2NFA(k′) for some k′ < k. The existence of an always halting 2nfa recognizing L for a minimum number of heads between k′ and 2k is guaranteed by Lemmas 3 and 2, respectively. ∎

Lemma 5.

Whether a given 2nfa(k) is always halting is decidable.

Proof (we thank Neal E. Young, who introduced us to the algorithm for this proof).

The two-way alternating finite automaton, denoted 2afa, is a generalization of the 2nfa model. The state set of a 2afa is partitioned into universal and existential states. A 2afa accepts a string , if and only if starting from the initial state, every alternative transition from the universal states, and at least one of the alternative transitions from the existential states leads to acceptance. Thus, a 2nfa is a 2afa with only existential states.

A one-way finite automaton, denoted 1dfa, is a 2dfa that cannot move its head to the left. A 1nfa is a nondeterministic 1dfa.

Consider the following algorithm to decide whether a given 2nfa is always halting:

  1. “On input ⟨M⟩, where M is a 2nfa(k) with the input alphabet Σ:

    1. Construct a 2afa M′ by modifying M to accept whenever it halts and designating every state as universal.

    2. Convert M′ to an equivalent 1dfa D.

    3. Check whether D recognizes Σ*. If it does, accept. Otherwise, reject.”

By its construction, M′ recognizes Σ* if and only if M halts on every computation path, running on every possible input string, i.e. it is always halting. Stage 2 can be implemented by the algorithms given in geffert and sipser . The final check in stage 3, also known as the universality problem, has a well-known algorithm. So the algorithm decides whether a given 2nfa is always halting. ∎

2.2 Probabilistic Turing machines and finite automata

A probabilistic Turing machine (PTM) is a Turing machine equipped with a randomization device. In its designated coin-tossing states, a PTM obtains a random bit using the device, and proceeds by its value. The language of a PTM is the set of strings that it accepts with a probability greater than 1/2.
A probabilistic finite automaton (2pfa) is a restricted PTM with a single read-only tape. This model can also be viewed as an extension of a 2dfa with designated coin-tossing states. A 2pfa tosses a hypothetical coin whenever it is in one of those states, and proceeds by its random outcome. Formally, a 2pfa consists of the following:

  1. a finite set of states Q = Q_d ⊔ Q_r, where:

    • Q_d is the set of deterministic states, and

    • Q_r is the set of coin-tossing states (the letters d and r stand for deterministic and random, respectively),

  2. an input alphabet Σ,

  3. a transition function δ overloaded as a deterministic δ_d and a coin-tossing δ_r:

    • δ_d: Q_d × Γ → Q × Δ, where Γ and Δ are as defined for the 2nfa’s, and

    • δ_r: Q_r × Γ × {0, 1} → Q × Δ, where the last argument is a random bit provided by a “coin toss”,

  4. an initial state q₀ ∈ Q,

  5. an accept state q_acc ∈ Q, and

  6. a reject state q_rej ∈ Q.

The language of a 2pfa is similarly the set of strings which are accepted with a probability greater than 1/2.

Due to its probabilistic nature, a PTM may occasionally err, and disagree with its language. In this paper, we will be concerned with the following types of error:

  1. Failing to accept – rejecting or looping indefinitely given a member input

  2. Failing to reject – accepting or looping indefinitely given a non-member input

2.3 Interactive proof systems

Our definitions of interactive proof systems (IPSes) are based on dworkstock . We will only be interested in a single variant, namely, the private-coin one-way IPS.

An IPS consists of a verifier and a prover. The verifier is a PTM vested with the task of recognizing an input string’s membership, and the prover is a function providing the purported proof of membership.

In a private-coin one-way IPS, the prover can be viewed as a certificate function c: Σ* → Γ_c^∞ that maps input strings to infinitely long certificates, where Σ and Γ_c are respectively the input and certificate alphabets. The verifier V, in turn, can be thought of as having an additional certificate tape (with a head that cannot move left) to read from. Given an input string w, V executes on it as usual, with c(w) written on its certificate tape.

In this paper, the term “PTM verifier in a private-coin one-way IPS” will be abbreviated as “PTM verifier”.

The language of a PTM verifier V is the set of strings that V, paired with some certificate, accepts with a probability greater than 1/2. The error bound of V (our definition of the error bound corresponds to the “strong” version of the IPS definition in dworkstock ), denoted ε, is then defined as the minimum value satisfying both of the following, given any input string w:

  • If w is a member of the language, V, paired with some certificate, accepts w with a probability at least 1 − ε.

  • If w is not a member of the language, V, paired with any certificate, rejects w with a probability at least 1 − ε.

Let IP_ε(s(n), c(n), t(n)) be the class of languages that have verifiers with an error at most ε (ε < 1/2) using O(s(n)) space and c(n) coins in the worst case, and with an expected runtime in O(t(n)), where n denotes the length of the input string. Instead of a function of n, and when appropriate, we write simply con, log, poly, or exp to describe a constant, logarithmic, polynomial, or exponential limit, respectively, in terms of the input length. We write 0 and ∗ to describe that a resource is unavailable and unlimited, respectively. Furthermore, let

IP(s(n), c(n), t(n)) = ⋃_{ε < 1/2} IP_ε(s(n), c(n), t(n)).

We note the following known results: Con93 ; cl95 ; sayyakaryilmaz

For polynomial-time verifiers with the ability to use at least logarithmic space, the class IP_ε is identical to the corresponding class IP for every positive ε, since such an amount of memory can be used to time one’s own execution and reject computations that exceed the time limit, enabling the verifier to run through several consecutively appended copies of certificates for the same string, and to decide according to the majority of the results of the individual checks. For constant-space verifiers, this procedure is not possible, and the question of whether the corresponding classes IP_ε and IP are equal is nontrivial, as we will examine in the following sections.
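The majority-voting error reduction described above can be quantified with a standard binomial tail computation. A minimal sketch, with our own illustrative function name (odd round counts avoid ties):

```python
from math import comb

def majority_error(eps, rounds):
    """Probability that the majority of `rounds` independent checks,
    each erring with probability `eps` < 1/2, yields the wrong verdict:
    the wrong side must win more than half of the rounds."""
    return sum(comb(rounds, i) * eps**i * (1 - eps)**(rounds - i)
               for i in range(rounds // 2 + 1, rounds + 1))
```

For instance, a per-check error of 0.4 still drops below any desired bound once enough consecutive certificate copies are checked, which is why the error bound is immaterial for verifiers that can afford the repetition.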

3 Linear-time 2nfa(k)’s and verification with small error

In sayyakaryilmaz , Say and Yakaryılmaz showed that membership in any language in NL may be checked by a 2pfa verifier using some constant number of random bits. They also showed how the weak error of the verifier can be made arbitrarily small. (In contrast to the (strong) error definition we use in this paper, the weak error definition (also by dworkstock ) does not regard the verifier looping forever on an input which is not a member of the language as an error.) We will now describe their approach, which forms the basis of our own work.

Their method for producing a constant-randomness 2pfa verifier V, given any language L ∈ NL, takes an always halting 2nfa(k) M recognizing L (for some k), which exists by Lemmas 1 and 3, as its starting point. The constructed verifier V will attempt to simulate M, relying on the certificate and its private coins to compensate for the fact that it has fewer input heads than M. Given any input string w, V expects the certificate to provide the following information for each transition of M en route to purported acceptance: the k symbols read by the heads, and the nondeterministic branch taken. V tracks the described computational path of M according to δ, until either the path reaches a halting state, or V catches a “lie” in the certificate, in which case it rejects. If a nondeterministic branch that the certificate reports turns out to be unavailable with the given readings, or the simulation arrives at the reject state, V rejects. At the beginning of M’s simulation, V chooses a head at random using its coins. Throughout the simulation, V mimics the movements of this chosen head, verifying the certificate’s claims about what is being scanned by that head at any step, while leaving the claims about the remaining heads unverified. If this simulation can be repeated for R rounds, all of which end with the described computational path of M reaching acceptance without any lies being caught, V finally accepts.

For any language in NL which can be recognized by an always halting 2nfa(k) M, the verifier V simulating M for R rounds uses a total of R⌈log k⌉ coins, which is a constant with respect to the input length.

Paired with the proper certificate, V accepts all strings in L with probability 1. As mentioned earlier, the “weak error” of V therefore depends only on its worst-case probability of accepting some w ∉ L.

For w ∉ L, there does not exist an accepting computation of M on w. Still, a certificate may describe a fictional computational path of M to acceptance, by reporting inaccurate values for the symbols read by at least one of the heads. Since V cannot check many of the actual readings, it may fail to notice those inaccuracies. However, since V chooses the head to verify at random, there is a non-zero chance that V detects any such lie.

The likelihood that V chooses the same head to verify as the one the certificate is inaccurate about is at least 1/k. (The error in the approximation used in this analysis does not affect the end result, and simplifies the explanation.) Therefore, the weak error of V is at most (1 − 1/k)^R. This upper bound for the weak error can be made as close to 0 as one desires by increasing R, the number of rounds to simulate.
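Under this analysis, each round catches a lying certificate with probability at least 1/k, so reaching a target weak error ε is a matter of solving (1 − 1/k)^R ≤ ε for the number of rounds R. A small sketch (the function name is ours):

```python
from math import ceil, log

def rounds_needed(k, eps):
    """Smallest round count R with (1 - 1/k)**R <= eps: each of the R
    independent rounds misses a lie with probability at most 1 - 1/k,
    so all R rounds miss it with probability at most (1 - 1/k)**R."""
    return ceil(log(eps) / log(1 - 1 / k))
```

The number of heads k enters only through the base of the exponent, so the required R, and with it the total number of coins, stays constant in the input length.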

Although the underlying 2nfa(k) M recognizing L is an always halting machine, the verifier V can still be wound up in an infinite loop by some certificate: M might be relying on the joint effort of its many heads to ensure that it always halts. Since V validates only a single head’s readings, inaccuracies on what the others read may tamper with this joint effort, and lead the simulation into a loop. A malicious certificate might lead V into a loop by being inaccurate about one head alone. This might happen during the first round, and then there would not be any more rounds for V, as it would be stuck in a loop. The (strong) error of V is therefore at most 1 − 1/k. This upper bound cannot be reduced to less than 1 − 1/k_L, where k_L is the minimum number of heads required in an always halting machine to recognize L, by Corollary 4.

Say and Yakaryılmaz also propose a slightly modified version of this method that produces verifiers with errors less than 1/2, albeit barely so. Let L ∈ NL, and M be an always halting 2nfa(k) recognizing L, for some k. Regardless of the input string, the modified verifier rejects at the very beginning with a probability slightly less than 1/2, using its coins. Then, it continues just like the original verifier. The resulting error bounds lie below, but very close to, 1/2.

3.1 Safe and risky heads

How much of NL may yet fit into the class of languages verifiable with arbitrarily small error? The method described above was our starting point in working towards a lower bound for that class.

Let M be the 2nfa(k) that the verifier V uses to verify L. The cause of V’s high strong error turns out to be a decidable characteristic of M’s heads. We call such undependable heads risky heads.

[Safe and risky heads] Let M be a 2nfa(k) with the transition function δ. For i between 1 and k, let M_i be a 2nfa(1) with the transition function δ_i defined as follows:

δ_i(q, σ) = { (q′, d_i) : (q′, (d₁, …, d_k)) ∈ δ(q, (σ₁, …, σ_k)) for some σ₁, …, σ_k ∈ Γ with σ_i = σ }

If M_i is always halting, then the i-th head of M is a safe head. Otherwise, it is a risky head.

The execution of each 2nfa(1) M_i in subsection 3.1 is designed to correspond to the i-th-head-only simulation of the 2nfa(k) M by the verifier V. Just like V, M_i can make any of the transitions that M’s transition function allows, chooses one by the certificate, but makes sure that the i-th symbol fed to M’s transition function is the same as the symbol it is reading itself. Crucially, if a certificate can wind the verifier into a loop during the one-headed simulation of M, then the 2nfa(1) M_i has a branch of computation that loops with an analogous certificate. The converse is also true. Therefore, the verifier can be wound up in a loop during a round of verification, if and only if it has chosen a risky head to verify.

Lemma 6.

Being safe or risky is a decidable property of a 2nfa(k)’s heads.


To decide whether the i-th head of a 2nfa(k) is safe, an algorithm can construct the 2nfa(1) M_i described in subsection 3.1, and then test whether it is always halting by the algorithm in Lemma 5. ∎

Consider a language L that is recognized by a 2nfa(k) M that always halts, and has safe heads only. The verifier simulating M cannot choose a risky head, and therefore can never loop. Thus, it verifies L with an error of at most (1 − 1/k)^R.

3.2 2nfa(k)’s with a safe head and small-error verification

The distinction between safe and risky heads has been the key to our improvement of the method above. Our method, to be introduced in the proof of the following lemma, is able to produce verifiers with an error bound equaling any desired non-zero constant, for a subset of the languages in NL.

Lemma 7.

Let L ∈ NL. If there exists an always halting 2nfa(k) with at least one safe head recognizing L, then for every ε > 0, L has a constant-space, constant-randomness verifier operating within error ε.

Proof idea

The method in the proof will construct verifiers similar to those of the previous subsection, except for a key difference. Given a language L recognized by an always halting 2nfa(k) M that has at least one safe head, every head of M has essentially the same probability of getting chosen by the earlier verifiers. In contrast, our verifier V will be more likely to choose safe heads than risky heads. Since V cannot loop while tracking a safe head, one can reduce the probability of looping in any round to any non-zero constant by increasing its bias towards the safe heads.

The (redeemable) disadvantage V has for its bias towards the safe heads is that it will be less likely to choose any particular risky head. So a certificate’s lies about the risky heads will be less likely to get detected. However, the probability of choosing any given head is still non-zero, as long as the bias is not absolute. Thus, the chance of repeatedly missing lies for R rounds will be at most (1 − p_min)^R, where p_min is the probability of choosing the least likely head, and this can also be lowered to any non-zero value by increasing R.


Let L ∈ NL, and M be an always halting 2nfa(k) recognizing L with at least one safe head.

Let K = {1, …, k}. Regarding the risky heads, let K_r ⊆ K be the set of their indices, k_r be their count, and if k_r > 0, let f_r: {0, 1}^{⌈log k_r⌉} → K_r be any total function, such that each i ∈ K_r is mapped to by exactly ⌊2^{⌈log k_r⌉}/k_r⌋ or ⌈2^{⌈log k_r⌉}/k_r⌉ many times. Regarding the safe heads, let K_s, k_s, and f_s be defined analogously.

The following parameters will be controlling the error of the verifier V:

  • R, the number of rounds to simulate

  • p_r, the probability that the selected head is a risky head, which must be finitely representable in binary, and 0 if and only if k_r is zero

Let t be the minimum number of fractional digits to represent p_r in binary. Then, the algorithm for V is as follows:

  1. “On input w:

    1. Repeat R times:

    2. Move the tape head to its original position.

    3. Choose a head index i from K randomly with bias, as follows:

    4. Flip t coins for a uniformly random binary probability value v with t fractional digits.

    5. Flip max(⌈log k_r⌉, ⌈log k_s⌉) more coins. Let b be the outcomes.

    6. Choose i as f_r(b) if v < p_r, and as f_s(b) otherwise.

    7. Let q = q₀. Repeat the following until q = q_acc:

    8. Read (σ₁, …, σ_k) from the certificate. If σ_i differs from the symbol under the tape head, reject.

    9. Read (q′, (d₁, …, d_k)) from the certificate. If (q′, (d₁, …, d_k)) ∉ δ(q, (σ₁, …, σ_k)), or q′ = q_rej, reject.

    10. Set q = q′. Move the tape head by d_i.

    11. Accept.”
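The biased head selection of stages 3–6 can be sketched as follows (a simplification: we use rejection sampling to pick a head within a group, where the construction uses fixed mappings to stay within a constant number of coins per round; all names here are illustrative):

```python
import random

def choose_head(p_r, t, risky, safe, rng=random):
    """Pick a head index using only fair coin flips: with probability
    `p_r` (which must have at most `t` binary fractional digits) the
    head comes from `risky`, otherwise from `safe`."""
    # t flips give a uniform value v with t binary fractional digits;
    # v < p_r holds with probability exactly p_r.
    v = sum(rng.getrandbits(1) / 2 ** (j + 1) for j in range(t))
    group = risky if (risky and v < p_r) else safe
    # More flips select a head within the chosen group.  Rejection
    # sampling stands in for the fixed, constant-coin mappings.
    while True:
        bits = rng.getrandbits(max(1, (len(group) - 1).bit_length()))
        if bits < len(group):
            return group[bits]
```

With p_r small, lies about a risky head are caught less often, but every head retains a non-zero selection probability, which is all the error analysis below needs.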

An iteration of stage 1 is called a round. The string of symbols read from the certificate during a round is called a round of certificate. Running on a non-member input string, V falsely accepts for a round, when that round ends without rejecting. Similarly, V loops on a round, when that round does not end.

Verifier V keeps track of M’s state, starting from q₀, and advancing it by δ and the reports of the certificate. At any given round, a certificate can either lead V to the state q_acc and pass, to the state q_rej and a rejection, to follow a loop of transitions allowed by δ and run indefinitely, or to a verification failure and again a rejection. Since these are events of distinct premises, a certificate may not lead V to any combination of those at the same time, regardless of V’s random choice of the head to verify.

Verifier V running on an input string w ∈ L always accepts, if paired with a proper certificate that provides R rounds of certificate, each logging an accepting execution path of M.

Given an input w ∉ L, every execution path of the always halting 2nfa(k) M recognizing L rejects eventually. For V to accept w, or loop on it, a certificate c must be reporting an execution path that is possible by δ, however impossible for M running on w. The weak point of V’s verification is the fact that it overlooks k − 1 of the reported symbols in stage 8. Hence, c must lie about those overlooked symbols. Since, however, V chooses the head to verify randomly and in private, lies about a head have just as much chance of being detected as how often that head gets selected.

Let p_min be the probability of V choosing its least likely head. By the restrictions on p_r, and the definitions of f_r and f_s, every head of M has a non-zero chance of being chosen, and therefore p_min > 0. If c has an inaccuracy, then p_min is also the minimum probability of it being detected.

Falsely accepting a string w is possible for V, only if w is not a member of L, c is an inaccurate certificate with at least R rounds, and V fails to detect the inaccuracies in each round. The probability of this event is at most

(1 − p_min)^R.   (1)

Looping on a string w is possible for V, only if c is an inaccurate certificate with r ≤ R rounds, V fails to detect the inaccuracies in the first r − 1 rounds, and chooses a risky head on the final and infinite round. The probability of this event is at most

(1 − p_min)^{r−1} · p_r ≤ p_r.   (2)

The probability that V falsely accepts (Equation 1) can be reduced to any non-zero value by increasing R. The probability that it loops on a non-member input (Equation 2) can also be reduced to any positive value by reducing p_r if k_r > 0, and is necessarily 0 otherwise.

Verifier V uses R · (t + max(⌈log k_r⌉, ⌈log k_s⌉)) coins in its execution; a constant amount that does not depend on the input string. ∎

In summary, given any language L that can be recognized by an always halting 2nfa with at least one safe head, and for any error bound ε > 0, this method can verify membership in L within that bound. The number of coins it uses depends only on ε, and is constant with respect to the input string.

3.3 Linear-time 2nfa(k)’s and safe heads

Lemma 8.

Given a language L, the following statements are equivalent:

  1. L is recognized by a linear-time 2nfa.

  2. L is recognized by a 2nfa with at least one safe head.

The proof of Lemma 8 will be in two parts.

Proof of 1 ⇒ 2.

Given a language L recognized by a linear-time 2nfa(k) for some k, there exists a 2nfa(k) M recognizing L together with a constant c, such that given any input string w, M halts in at most c·|w| steps. Consider the 2nfa(k+1) M′, which operates its first k heads by M’s algorithm, and uses its last head as a timer that moves to the next cell on the input tape every c-th step of the execution. Head k + 1 times out when it reaches the end of the string, and M′ rejects in that case.

Note that M′ recognizes indeed the same language as M, given that M, as well as M′, runs for at most c·|w| steps for any given input string w, and the timer head therefore cannot ever reach the end of ⊢w⊣ on a faithful simulation of M. Moreover, head k + 1 in M′ is a safe head. ∎

Proof of 2 ⇒ 1.

Let M be a 2nfa(k) recognizing L, such that its i-th head is a safe head. Let δ be the transition function of M. Let M_i be the 2nfa(1) with the transition function δ_i as defined in subsection 3.1.

Note the relationship between the computational paths (sequences of configurations) of M and M_i running on the same input string. These machines have the same state set, but M_i is running a program which has been obtained from the program of M by removing all constraints provided by all the other heads. If one looks at any possible computational path of M through “filters” that only show the current state and the present position of the i-th head, and hide the rest of the information in M’s configurations, one will only see legitimate computational paths of M_i.

Since the i-th head is safe, M_i is always halting, and δ_i does not allow M_i to ever repeat a configuration in a computation. But this means that M is also unable to loop forever, since the two components of its configuration visible through the filter (the state and the position of its i-th head) can never be in the same combination of values at two different steps. So M cannot run for more than |Q|·(n + 2) steps, where n is the length of the input string. ∎

We have proven the following theorem.

Theorem 9.

For every ε > 0, every language that is recognizable by a linear-time 2nfa has a constant-space verifier that uses a constant number of coins and operates within error ε.
Note that the following nonregular languages, among others, have linear-time 2nfa’s, and can therefore be verified with arbitrarily small error by constant-randomness, constant-space verifiers:


There are 2dfa’s without risky heads recognizing the languages EQ1 and PAL. We have not been able to find 2nfa’s without risky heads that recognize the languages EQ2 and CERT.

4 Discussion and conclusion

Having determined that the class of languages verifiable by our weak machines with arbitrarily small error contains the linear-time 2nfa languages and is contained in NL, it is natural to ask if any one of these subset relationships can be replaced by equalities. Let us review the evidence we have at hand in this matter.

One approach to prove the claim that constant-space, constant-randomness verifiers can be constructed for every desired positive error bound for all of NL would be to show that the linear-time 2nfa languages make up all of NL, i.e. that any 2nfa(k) has a linear-time counterpart recognizing the same language. This, however, is a difficult open question sayyakaryilmaz . As a matter of fact, there are several examples of famous languages in NL, e.g.

for which we have not been able to construct 2nfa’s with a safe head, and we conjecture that the containment of the linear-time 2nfa languages in NL is proper.

We will now show that the class of linear-time 2nfa languages is contained in a subset of NL corresponding to a tighter simultaneous time restriction of O(n²/log n) on the underlying nondeterministic Turing machine. (Recall that logarithmic-space TMs require nearly quadratic time for recognizing the palindromes language cobham ; melkebeek ; durisgalil , which is easily recognized by a linear-time 2dfa.) NTISP(t(n), s(n)) denotes the class of languages that can be recognized by a nondeterministic TM that uses O(t(n)) time and O(s(n)) space, simultaneously.

Theorem 10.

Every language recognized by a linear-time 2nfa is in NTISP(n²/log n, log n).

Proof idea (we thank Martin Kutrib for providing us with an outline of this proof).

Given a 2nfa(k) M that runs in linear time, an NTM T can simulate it in O(n²/log n) steps. One such T uses k counters for keeping the head positions of M, and k caches for a faster access to the symbols in the vicinity of each head, on a tape with 2k + 1 tracks. T initializes its caches with a ⊢ symbol, followed by the first ⌈log n⌉ symbols of the input, and puts a mark on k symbols to indicate the position of each simulated head. Counters are initialized as 0 for yet another indication of the head positions.

To mimic M reading its tape, T reads the marked symbols on its caches. To move the simulated heads, T both moves the marks on the caches, and adjusts the counters. If a mark reaches the end of its cache, T re-caches by copying the ⌈log n⌉ symbols centered around the corresponding head from the input to that cache. Counters provide the means for T to locate these symbols on the input.

As the analysis will show, the algorithm described for T runs within the promised time and space bounds. In the following proof, T will have an additional track that has a mark on its ⌈log n / 2⌉-th cell to indicate the middle of the caches.


Let M be a 2nfa(k) that runs in linear time. An NTM T can simulate M, by using 2k + 1 tracks on its tape to have:

  • k many ⌈log n⌉-digit binary counters, c₁, …, c_k, with their least significant digit on their left end,

  • k many caches of input excerpts of length ⌈log n⌉, ρ₁, …, ρ_k, and

  • a mark on the ⌈log n / 2⌉-th cell to indicate the middle.

The work tape alphabet of T allows it to encode this information, where:

  • {0, 1} is used to represent each c_i, and

  • Γ′ ∪ Γ̄′ ∪ {#} is used to represent each cache, where:

    • Γ′ = Γ ∪ {␣}, the tape alphabet extended with the blank symbol, and

    • Γ̄′ is a clone of Γ′, containing “marked” versions of all of Γ′’s symbols.

Initially, all cells of the work tape contain the blank symbol. The algorithm of T is as follows:

  1. “On input w of length n:

    1. Write 0 to each c_i.

    2. Write ⊢ followed by the first ⌈log n⌉ symbols of w, delimited by # symbols, on to each ρ_i, with the ⊢ symbol marked.

    3. Write the middle marker to the ⌈log n / 2⌉-th cell of the last track.

    4. Let q = q₀. Repeat the following until q = q_acc:

    5. Scan the caches. Note (this is done using the states of T, and does not use the work tape) the marked symbol in ρ_i as σ_i.

    6. Guess a (q′, (d₁, …, d_k)) ∈ δ(q, (σ₁, …, σ_k)). Reject if the set is empty, or q′ = q_rej.

    7. For all i, adjust c_i by d_i, and move the mark on ρ_i by d_i.

    8. Re-cache each ρ_i that has a marked # symbol, as follows:

    9. Clear the mark on the # of ρ_i.

    10. Go to the c_i-th cell on the input.

    11. Go to the middle of ρ_i on the work tape.

    12. Move both tape heads left, until the left end of ρ_i is reached.

    13. Copy ⌈log n⌉ symbols from the input to between the # symbols of ρ_i.

    14. Move both tape heads left, until the middle of ρ_i is reached.

    15. Mark the middle symbol on ρ_i.

    16. Set c_i to the input head’s position index.

    17. Update q as q′.

    18. Accept.”

$N$ should carefully prepend/append the left/right end marker to a cache when copying the beginning/end of the input in stage 13, respectively. $N$ should also skip stage 7 for a head $i$ if the corresponding movement is attempted beyond an end marker while reading it. These details have been omitted from the algorithm to reduce clutter.
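The caching mechanism above can be illustrated in miniature. The following Python sketch (with hypothetical names; it tracks a single simulated head and represents the cache as a list rather than a tape track) follows a sequence of head moves through a logarithmic-length cache and counts how often a re-cache is triggered:

```python
import math

def recache(tape, pos, half):
    """Copy the excerpt of `tape` centered at `pos`; pad with '#' beyond the ends."""
    return [tape[i] if 0 <= i < len(tape) else '#'
            for i in range(pos - half, pos + half + 1)]

def simulate_head(tape, moves):
    """Follow a precomputed sequence of head moves (+1/-1) over `tape`,
    reading only through a cache of half-width ~log n, and count re-caches.
    `tape` is assumed to already carry its end markers."""
    n = len(tape)
    half = max(1, math.ceil(math.log2(n)))  # half-width of the cache window
    pos = 0            # counter c: the true head position on the input
    mark = 0           # offset of the mark from the cache's center
    cache = recache(tape, pos, half)
    recaches = 0
    reads = []
    for d in moves:
        reads.append(cache[half + mark])   # read the marked cache symbol
        pos += d
        mark += d
        if abs(mark) > half:               # mark fell off the cache: re-cache
            cache = recache(tape, pos, half)
            mark = 0
            recaches += 1
    return reads, recaches
```

Reading through the cache returns the same symbols as reading the tape directly; a re-cache occurs only after the mark has drifted a full half-window away from the previous center, which is what makes the amortized argument below work.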

Counting up to $n$ in binary is a common task across this algorithm, and takes linear time, by a standard result of amortized analysis. Only the stages that take a constant number of steps are omitted from the following analysis.
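The amortized claim can be checked concretely. A minimal Python sketch (the function name is ours) tallies the total number of bit writes incurred while incrementing a binary counter from 0 to $n$; the total stays below $2n$, so the amortized cost per increment is constant:

```python
def count_flips(n):
    """Total bit writes when incrementing a binary counter from 0 to n.
    Each increment flips the trailing run of 1s to 0s, then writes one 1."""
    flips = 0
    counter = []  # least significant digit first, as in the algorithm
    for _ in range(n):
        i = 0
        while i < len(counter) and counter[i] == 1:
            counter[i] = 0       # carry: clear a trailing 1
            i += 1
            flips += 1
        if i == len(counter):
            counter.append(1)    # counter grows by one digit
        else:
            counter[i] = 1
        flips += 1
    return flips
```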

Stage 2 takes $O(n)$ time, as it involves counting up to $n$ in binary to find the cache length and to mark the middle cell on the caches. After putting $\#$ on both ends, copying the excerpts in takes $O(\log n)$ more steps. Stage 3 can be performed in $O(\log n)$ steps, by copying the $\#$ symbols over from a cache and moving them towards the center one by one until they meet.

Given that $M$ runs in linear time, the loop of stage 4 is repeated at most $O(n)$ times. Stages 6 and 7 take logarithmic time.

The re-caching in stage 8 shifts the window of the input held on a cache so that the mark becomes centered on that cache. Stages 10 and 16 are the most time-consuming sub-stages of a re-cache, involving decrementing $c_i$ down to 1, and setting it back to its original value, respectively. They both take $O(n)$ time, since they count down from, or up to, a value that is at most $n$. Every other sub-stage of a re-cache takes $O(\log n)$ time. As a result, each re-cache takes $O(n)$ time.

Re-caches are prohibitively slow. Luckily, since the head marker is shifted to the middle with each re-cache, a subsequent re-cache will not happen on the same cache for at least another $\Omega(\log n)$ steps of the simulation. Moreover, since the number of steps that $M$ runs is in $O(n)$, the number of times a cache can be re-cached is in $O(n / \log n)$ for the entire simulation. Hence, stage 8's total time cost to $N$ is $O(n^2 / \log n)$.
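This accounting can be condensed into a single estimate (assuming, as above, caches of half-width $\Theta(\log n)$ and a simulation of $O(n)$ steps):

```latex
\[
  \underbrace{O\!\left(\frac{n}{\log n}\right)}_{\text{number of re-caches}}
  \cdot
  \underbrace{O(n)\vphantom{\frac{n}{\log n}}}_{\text{cost per re-cache}}
  \;=\;
  O\!\left(\frac{n^2}{\log n}\right).
\]
```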

Caches and counters occupy $O(\log n)$ cells on $N$'s tape. Since every stage of $N$ runs in $O(n^2 / \log n)$ time, so does $N$ as a whole. ∎

It is not known whether $\mathsf{NL}$ contains any languages that are not recognizable by linear-time 2nfa($k$)'s.

If the class of languages recognizable by linear-time 2nfa($k$)'s is indeed a proper subset of $\mathsf{NL}$, studying the effects of imposing an additional time-related bound on the verifier may be worthwhile in the search for a characterization. We conclude by noting the following relationship between runtime, the amount of randomness used, and the probability of being fooled by a certificate to run forever in our setup:

Lemma 11.

Let $V$ be a 2pfa verifier flipping at most $c$ coins in a private-coin one-way IPS, and recognizing the language $L$. If the execution of $V$ on some nonmember string $w$ of length $n$, paired with some certificate, always takes more than $(|Q|(n+2))^{2^c}$ steps, where $Q$ is the state set of $V$, then its error is at least $2^{-c}$.


Let $V$ be a 2pfa as described above, recognizing $L$. By an idea introduced in sayyakaryilmaz , we will construct a verifier $V'$ equivalent to $V$. For each $r \in \{0,1\}^c$, let $V_r$ be the 2dfa verifier that is based on $V$, but hard-wired to assume that its $i$th “coin flip” has the outcome $r_i$. Construct the 2pfa verifier $V'$ that flips $c$ coins at the beginning of its execution, and obtains the $c$-bit random string $r$. Then, $V'$ passes the execution to $V_r$.

Verifiers $V$ and $V'$ exhibit the same behavior whenever their random bits are the same. Therefore, they are equivalent.
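The hard-wiring idea can be seen in a toy computation. The sketch below is entirely hypothetical (the “verifier” is just a function of its coin string): it enumerates all $2^c$ hard-wired variants and confirms that the uniform mixture over them reproduces the acceptance probability of the coin-flipping original.

```python
import itertools

def run_verifier(coins):
    """Toy stand-in for a verifier consulting c = 2 coins: it 'accepts'
    iff the two flips agree (a made-up decision rule for illustration)."""
    return coins[0] == coins[1]

c = 2
# Hard-wiring each possible c-bit string r yields 2**c deterministic
# verifiers V_r; choosing r uniformly and running V_r is equivalent
# to flipping the coins on demand.
outcomes = {r: run_verifier(r) for r in itertools.product([0, 1], repeat=c)}
accept_prob = sum(outcomes.values()) / 2 ** c
```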

Each $V_r$ has $|Q|(n+2)$ different configurations, where $n$ denotes the length of the input string. Similarly, any collection of $2^c$ distinct $V_r$'s has $(|Q|(n+2))^{2^c}$ different collective configurations. Let $\mathcal{V}$ be the collection of all $2^c$ of them.

Let $w$ and $\pi$ be a nonmember string and its certificate satisfying the premise of the statement. Then, each $V_r$ paired with $\pi$ also runs on $w$ for more than $(|Q|(n+2))^{2^c}$ steps. The collection $\mathcal{V}$, in that many steps, necessarily repeats a collective configuration.

Consider the prefix $\pi_1$ of $\pi$ consumed by $\mathcal{V}$ until the first time a collective configuration of $\mathcal{V}$ is repeated. Also consider the portion $\pi_2$ of $\pi$ consumed by $\mathcal{V}$ since the first occurrence of the repeated collective configuration. Then, $V'$ paired with the certificate $\pi_1 \pi_2 \pi_2 \pi_2 \cdots$ repeats its configurations indefinitely, whichever of the $V_r$ it chooses to pass the execution to.
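The certificate-pumping step can be illustrated with a toy deterministic certificate-consumer, in which a “configuration” is just the control state (unlike the proof, which also tracks input head positions across the whole collection); all names below are hypothetical:

```python
def split_certificate(delta, start, cert):
    """Consume `cert` symbol by symbol; return (prefix, cycle), split at the
    point where a configuration (here: the control state) first repeats.
    `delta` maps (state, symbol) -> state."""
    seen = {start: 0}   # state -> number of symbols consumed on first visit
    state = start
    for i, sym in enumerate(cert):
        state = delta[(state, sym)]
        if state in seen:
            j = seen[state]
            return cert[:j], cert[j:i + 1]
        seen[state] = i + 1
    return cert, ""      # no repeat within this certificate

def run(delta, start, cert):
    """Consume the whole certificate and return the final state."""
    state = start
    for sym in cert:
        state = delta[(state, sym)]
    return state

# A 3-state consumer over the certificate alphabet {a, b}:
delta = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 2,
         (1, 'b'): 0, (2, 'a'): 0, (2, 'b'): 1}
prefix, cycle = split_certificate(delta, 0, "abbaab")
```

Once the repeated configuration is found, appending copies of the cycle portion keeps the machine revisiting the same configuration forever, which is exactly how a certificate can fool a verifier into running indefinitely.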

Both $V$ and $V'$ paired with $\pi_1 \pi_2 \pi_2 \cdots$ loop on $w$ with a probability of at least $2^{-c}$. Consequently, their errors are at least $2^{-c}$. ∎


We are grateful to Martin Kutrib, Neal E. Young, Ryan O’Donnell, and Ryan Williams for their helpful answers to our questions. We also thank the anonymous referees of gezer for their constructive comments.


  • (1) M. U. Gezer, Windable heads and recognizing NL with constant randomness, in: A. Leporati, C. Martín-Vide, D. Shapira, C. Zandron (Eds.), Language and Automata Theory and Applications, Springer International Publishing, Cham, 2020, pp. 184–195.
  • (2) A. Condon, R. Ladner, Interactive proof systems with polynomially bounded strategies, Journal of Computer and System Sciences 50 (3) (1995) 506–518.
  • (3) A. C. C. Say, A. Yakaryılmaz, Finite state verifiers with constant randomness, Logical Methods in Computer Science 10 (3) (Aug. 2014).
  • (4) M. Sipser, Introduction to the Theory of Computation, Cengage Learning, 2012.

  • (5) J. Hartmanis, On non-determinancy in simple computing devices, Acta Informatica 1 (4) (1972) 336–344.
  • (6) B. Monien, Two-way multihead automata over a one-letter alphabet, RAIRO. Inform. théor. 14 (1) (1980) 67–82.
  • (7) V. Geffert, A. Okhotin, Transforming two-way alternating finite automata to one-way nondeterministic automata, in: International Symposium on Mathematical Foundations of Computer Science, Springer, 2014, pp. 291–302.
  • (8) C. Dwork, L. Stockmeyer, Finite state verifiers I: The power of interaction, J. ACM 39 (4) (1992) 800–828.
  • (9) A. Condon, The complexity of the max word problem and the power of one-way interactive proof systems, computational complexity 3 (3) (1993) 292–305.
  • (10) A. Cobham, Time and memory capacity bounds for machines which recognize squares or palindromes, IBM Res. Rep. RC-1621 (1966).
  • (11) D. van Melkebeek, Time-space lower bounds for NP-complete problems, in: G. Păun, G. Rozenberg, A. Salomaa (Eds.), Current Trends in Theoretical Computer Science, World Scientific, 2004, pp. 265–291.
  • (12) P. Ďuriš, Z. Galil, A time-space tradeoff for language recognition, Mathematical Systems Theory 17 (1) (1984) 3–12.