# Probabilistic verification of all languages

We present three protocols for verifying all languages: (i) For any unary (binary) language, there is a log-space (linear-space) interactive proof system (IPS); (ii) for any language, there is a constant-space weak-IPS (the non-members may not be rejected with high probability); and, (iii) for any language, there is a constant-space IPS with two provers where the verifier reads the input once. Additionally, we show that uncountably many binary (unary) languages can be verified in constant space and in linear (quadratic) expected time.

12/03/2019

## 1 Introduction

When using arbitrary real-valued transitions, quantum and probabilistic machines can recognize uncountably many languages with bounded error [1, 6, 7, 13], because it is possible to encode an infinite sequence as a transition value and then determine its i-th bit by using some probabilistic or quantum experiments.

For a given alphabet Σ, all strings can be lexicographically ordered. Then, for a language L defined on Σ, the membership information of all strings with respect to L can be stored as a single probability, say pL, such that the membership value of the i-th string is stored at a corresponding digit of pL. Thus, we can determine whether a given string is in L or not (with high probability) by learning its corresponding digit in pL. For this purpose, we can toss a coin landing on head with probability pL exponentially many times (in the length of the input) and count the total number of heads to guess the corresponding digit with high probability. By using this method, we can easily show that any unary (binary) language can be recognized in linear (exponential) space.

In this paper, we present better space bounds by using interactive proof systems (IPSs). By using the fingerprinting method [10], we exponentially reduce the above space bounds with the help of a single prover. Then, by adapting the one-way protocol (the verifier always sends the same symbol, and so the only useful communication comes from the prover) given in [4] for every recursively enumerable language, we present a constant-space weak-IPS with two-way communication for every language. Here “weak” means that the non-members may not be rejected with high probability. In [9], it was shown that one-way probabilistic finite automata can simulate the computation of a Turing machine by communicating with two provers. We also modify this protocol and extend the same result to every language.

We leave open whether there is a constant-space IPS for every language. (Remark that the answer is positive for Arthur-Merlin games with constant-space quantum verifiers [13].) However, we obtain some other strong results for constant-space IPSs: the binary (unary) languages having constant-space linear-time (quadratic-time) IPSs form an uncountable set. Remark that it is also open whether constant-space probabilistic machines can recognize uncountably many languages [6].

In the next section, we present the notations and definitions needed to follow the rest of the paper. Then, we present our results in two sections. Section 3 is dedicated to the verification of all languages, while in Section 4 we present our results for uncountably many languages. The latter section is divided into two subsections. In Section 4.1, we present our constant-space protocols for two unary and one binary languages that are used in Section 4.2. In Section 4.2, we present our constant-space protocols verifying uncountably many languages.

## 2 Background

We assume the reader is familiar with the basics of complexity theory and automata theory. We denote the left and the right end-markers as ¢ and $, and the blank symbol as #. The input alphabet not containing the symbols ¢ and $ is denoted by Σ, ~Σ denotes the set Σ ∪ {¢, $}, the work tape alphabet not containing the symbol # is denoted by Γ, ~Γ denotes the set Γ ∪ {#}, and the communication alphabet is denoted by Υ. Σ* denotes the set of all strings (including the empty string ε) defined over Σ. We order the elements of Σ* lexicographically and then represent the i-th element by Σ*(i), where Σ*(1) = ε. For any natural number x, bin(x) denotes its unique binary representation and rbin(x) denotes the reverse binary representation. For any given string w, |w| is its length, w[i] is its i-th symbol from the left (1 ≤ i ≤ |w|), and lex(w) denotes the lexicographical number of w, such that Σ*(lex(w)) = w for any w ∈ Σ*. Any given input, say w, is always placed on the input tape as ¢w$.

An interactive proof system (IPS) [11, 2] is composed of a prover (P) and a (probabilistic) verifier (V), denoted as the pair (P, V), who can communicate with each other. The aim of the verifier is to make a decision on a given input, and the aim of the prover (assumed to have unlimited computational power) is to convince the verifier to make a positive decision. Thus, the verifier should be able to verify the correctness of the information (proof) provided by the prover, since the prover may cheat when the decision should be negative.

In this paper, we focus on memory-bounded verifiers [8], and our verifiers are space-bounded probabilistic Turing machines. The verifier has two tapes: the read-only input tape and the read/write work tape. The communication between the prover and the verifier is done via a communication cell holding a single symbol. The prover can see only the given input and the symbols written on the communication cell by the verifier. The prover may know the program of the verifier but may not know which probabilistic choices are made by the verifier. Since the outcomes of the probabilistic choices are hidden from the prover, such an IPS is called private-coin. If the probabilistic outcomes are not hidden (they are sent via the communication channel), then it is called public-coin, i.e., the prover can have complete information about the verifier during the computation. Public-coin IPSs are also known as Arthur-Merlin games [2].

A space-bounded probabilistic Turing machine (PTM) verifier is a space-bounded PTM extended with a communication cell. Formally, a PTM verifier is an 8-tuple

 (S,Σ,Γ,Υ,δ,s1,sa,sr),

where S is the set of states, consisting of three disjoint sets: Sr is the set of reading states, Sc is the set of communicating states, and Sh is the set of halting states; s1 is the initial state, and sa and sr (∈ Sh) are the accepting and rejecting states, respectively; δ is the transition function, composed of δc and δr, which are responsible for the transitions when in a communicating state and in a reading state, respectively. There is no transition from sa or sr, since when V enters a halting state (sa or sr), the computation is terminated.

When in a communicating state, say s, V writes the symbol determined by δc and s on the communication cell, and the prover writes back a symbol, say υ. Then, V switches to the state determined by δc, s, and υ.

When in a reading state, V behaves as an ordinary PTM:

 δr:Sr×~Σ×~Γ×S×~Γ×{←,↓,→}×{←,↓,→}→[0,1].

That is, when V is in reading state s, reads symbol σ on the input tape, and reads symbol γ on the work tape, it enters state s′, writes γ′ on the cell under the work tape head, and then the input tape head is updated with respect to d and the work tape head is updated with respect to d′, all with probability

 δ(s,σ,γ,s′,γ′,d,d′),

where “←” (“↓” and “→”) means the head is moved one cell to the left (respectively, the head does not move, and the head is moved one cell to the right). To be a well-formed PTM, the following condition must be satisfied for each triple (s, σ, γ):

 ∑s′∈S,γ′∈~Γ,d∈{←,↓,→},d′∈{←,↓,→}δ(s,σ,γ,s′,γ′,d,d′)=1.

In other words, all outgoing transitions for the triple (s, σ, γ) have total probability 1.

The space used by V on a given input is the number of all cells visited on the work tape during the computation with some non-zero probability. The verifier is called an s(n) space-bounded machine if it always uses at most s(n) space on any input with length n. The language L is verifiable by V with error bound ε if

1. there exists an honest prover P such that any w ∈ L is accepted by V with probability at least 1 − ε when communicating with P, and,

2. any w ∉ L is rejected by V with probability at least 1 − ε when communicating with any possible prover P*.

The first property is known as completeness and the second one as soundness. Generally speaking, completeness means that there is a proof for a true statement, and soundness means that no proof works for a false statement. The case where every member is accepted with probability 1 is also called perfect completeness.

It is also said that there is an IPS (P, V) with error bound ε for language L. Remark that all the time and memory bounds are defined for the verifier, since we are interested in the verification power of machines with limited resources.

We also consider the so-called weak IPS, which is obtained by replacing condition 2 above with the following condition.

• Any w ∉ L is accepted by V with probability at most ε when communicating with any possible prover P*.

Therefore, in weak IPSs, the computation may not halt with high probability on non-members; in other words, a non-member may not be rejected with high probability.

Due to the communication with the prover, the program of the verifier together with the possible communications is called a protocol. A private-coin protocol is called one-way if the verifier always sends the same symbol to the prover. In this case, we can assume that the prover provides a single (possibly infinite) string and that this string is consumed in every probabilistic branch. It is also possible that this string (certificate) is placed on a separate one-way read-only tape (certificate tape) at the beginning of the computation, and the verifier reads the certificate from this tape.

We also consider interactive proof systems with two provers [3, 14]. An IPS with two provers is composed of two provers (P1, P2) and a probabilistic verifier (V), denoted as (P1, P2, V). The verifier has a communication channel with each prover, and one prover does not see the communication with the other prover. The verifier in such an IPS has a different communicating transition function. When in a communicating state, say s, V writes one symbol on the communication cell of the first prover (P1) and one on the communication cell of the second prover (P2), and the provers P1 and P2 write back symbols, say υ1 and υ2, respectively. Then, V switches to the state determined by s, υ1, and υ2.

There are different models of IPS with two provers. In the Multi-Prover model of [3], both provers collaborate: either both of them are honest, or both of them are cheating. In the Noisy-Oracle model of [14], the provers oppose each other: at least one of them is honest, and the other may be cheating. The latter model can also be formalized as a debate system (see e.g. [5]), where the second prover is called a refuter. In this paper, we consider IPSs with two provers that work equally well in both models.

Without communication, a verifier is simply a PTM. For a PTM, we remove the components related to communication from the formal definition (and so a PTM does not implement any communicating transition). When there is no communication, we use the term recognition instead of verification.

A PTM without a work tape is a two-way probabilistic finite state automaton (2PFA). Any verifier without a work tape is a constant-space verifier or a 2PFA verifier.

A 2PFA can be extended with integer counters; such a model is called a two-way probabilistic automaton with counters (2PCA). At each step of the computation, a 2PCA can check whether the value of each counter is zero or not, and then, as a part of a transition, it can update the value of each counter by a value from {−1, 0, +1}.

A two-way model is called sweeping if the direction of the input head can be changed only on the end-markers. If the input head is not allowed to move to the left, then the model is called “one-way”.

We denote the set of integers by Z and the set of positive integers by Z+. The set of all subsets of positive integers is an uncountable set (its cardinality is 2^{ℵ0}) like the set of real numbers (R). The cardinality of Z or Z+ is ℵ0 (countably many).

The membership of each positive integer in any set I ⊆ Z+ can be represented as a binary probability value:

 pI=0.x101x201x301⋯xi01⋯,    xi=1↔i∈I.

Similarly, the membership of each string in a language L ⊆ Σ* is represented as a binary probability value:

 pL=0.x101x201x301⋯xi01⋯,    xi=1↔Σ∗(i)∈L.
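As a small illustration (a sketch in Python; the helper `p_prefix` and its names are ours, not part of the formal model), the following computes a finite prefix of such a probability value from a list of membership bits, using the 01-padded encoding above:

```python
from fractions import Fraction

def p_prefix(membership_bits, as_fraction=True):
    """Binary value 0.x1 01 x2 01 x3 01 ... truncated to the given bits.

    membership_bits[i-1] is x_i, i.e., 1 iff i is in I (or iff the i-th
    string of the lexicographic order is in L, for the language version).
    """
    digits = "".join(f"{x}01" for x in membership_bits)
    value = Fraction(int(digits, 2), 2 ** len(digits))
    return value if as_fraction else digits

# I = {1, 3}: x1 = 1, x2 = 0, x3 = 1  ->  0.101001101 in binary
print(p_prefix([1, 0, 1], as_fraction=False))  # -> 101001101
```

The "01" padding between membership bits keeps neighboring digits from interfering when the digit is later recovered from a head count.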

The coin landing on head with probability pI (resp., pL) is named coinI (resp., coinL).

## 3 Verification of all languages

We start with a basic fact presented in our previous paper [6].

###### Fact 1

[6] Let x = x1x2x3⋯ be an infinite binary sequence. If a biased coin lands on head with probability p = 0.x101x201x301⋯, then the value xk is determined correctly with probability at least 3/4 after 64^k coin tosses, where xk is guessed as the corresponding digit of the binary number representing the total number of heads after all the coin tosses.
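The digit-extraction idea can be sketched as follows. This is our illustration, assuming the 01-padded encoding p = 0.x101x201x301⋯ used throughout the paper, under which xk is the (3k−2)-th fractional bit of p and therefore appears at bit position 3k+2 (0-indexed from the least significant bit) of the expected head count after 64^k tosses:

```python
import math
from fractions import Fraction

x = [1, 0, 1, 1]                        # membership bits x_1..x_4
digits = "".join(f"{b}01" for b in x)   # 01-padded encoding: "101001101101"
p = Fraction(int(digits, 2), 2 ** len(digits))

for k in range(1, len(x) + 1):
    heads = math.floor(p * 64 ** k)     # expected number of heads
    guess = (heads >> (3 * k + 2)) & 1  # digit carrying x_k
    assert guess == x[k - 1]
# A real machine tosses the coin 64^k times; by Chebyshev's inequality the
# empirical head count stays close enough to the expectation for the guess
# to be correct with probability at least 3/4.
```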

Then, the following results can be obtained straightforwardly.

###### Theorem 3.1

Any unary language is recognized by a linear-space PTM with bounded error.

###### Proof

Let Σ = {a} be our alphabet. For any unary language L, we can design a PTM for L, say M, that uses coinL.

Let w = a^n be the given input for M. The PTM implements the procedure described in Fact 1 in a straightforward way and gives its decision accordingly, which will be correct with probability not less than 3/4. The machine only uses linear-size binary counters to implement the coin tosses and to count the number of heads (for unary w, lex(w) = n + 1). By repeating the same procedure several times (a number not depending on n), the success probability can be increased arbitrarily close to 1. ∎

###### Remark 1

Let L be a k-ary language for k ≥ 2, where k = |Σ|. For any given k-ary string w, let x represent its membership bit in pL. Then, by using exactly the same algorithm given in the above proof, we can determine x correctly with high probability. However, lex(w) is exponential in |w|, and so the PTM uses exponential space.

###### Corollary 1

Any k-ary (k ≥ 2) language is recognized by an exponential-space PTM with bounded error.

When interacting with a prover, we can obtain exponentially better space bounds. For this purpose, we use the probabilistic fingerprint method: for comparing two numbers, say u and v, we can randomly pick a small prime number p and compare u mod p with v mod p. It is known [10] that if u = v, then clearly u mod p = v mod p, and, if u ≠ v, then u mod p ≠ v mod p with high probability depending on the size of p, as specified below.
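A minimal sketch of this comparison (our illustration; the 16-bit prime size here is arbitrary, while the rigorous choice of the prime length follows from Fact 2):

```python
import random

def probably_equal(u: int, v: int, prime_bits: int = 16, trials: int = 1) -> bool:
    """Fingerprint comparison: u == v is tested via u mod p == v mod p
    for random primes p of roughly the given bit length."""
    def is_prime(m: int) -> bool:
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True

    for _ in range(trials):
        # random odd-sized candidate with the top bit set, bumped to a prime
        p = random.getrandbits(prime_bits) | (1 << (prime_bits - 1))
        while not is_prime(p):
            p += 1
        if u % p != v % p:
            return False          # certainly u != v
    return True                   # equal, or unlucky with every prime

assert probably_equal(2**200, 2**200)   # equal numbers always pass
```

A false positive occurs only when every sampled prime happens to divide the difference u − v, which is what Fact 2 bounds.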

###### Fact 2

[10] Let π(x) be the number of prime numbers not greater than x, let π(x, u) be the number of prime numbers not greater than x and dividing u, and let r(c, k) be the maximum of π(2^{ck}, u)/π(2^{ck}) over all u with 0 < u < 2^{2^k}. Then, for any ε > 0, there is a natural number c such that r(c, k) < ε for every k.

###### Theorem 3.2

Any unary language is verified by a log-space PTM with bounded error.

###### Proof

The protocol is two-way. Let w = a^n be the given input with n > 0. (The decision on the empty string is given deterministically.) Remember that the membership bit of w is x_{n+1} in pL. Let k = n + 1. We pick a value of c (see Fact 2) satisfying the error bound ε.

The protocol has three phases. In the first phase, there is no communication. The verifier picks two random ck-bit prime numbers, say p1 and p2, and then it calculates and stores 64^k mod p1 in binary on the work tape. The verifier also prepares two binary counters, C1 and C2, for storing the total number of coin tosses modulo p1 and the total number of heads modulo p2, respectively, and one “halting” counter h.

In the second phase, the verifier asks the prover to send a^{64^k} b. Once the verifier receives the symbol b, the communication in this phase is ended. Therefore, we assume that the prover sends either the finite string a^m b for some m ≥ 0 or an infinite sequence of a’s.

For each a received from the prover, the verifier reads the whole input and adds one to h with probability 1/64^k. We call this a halting walk. If h reaches 8, the verifier terminates the computation and rejects the input. If the computation is not terminated, the verifier tosses coinL, sends the result to the prover, and increases C1 by 1. If the result is head, the verifier also increases C2 by 1. When the second phase is ended, the verifier checks whether the previously calculated value 64^k mod p1 is equal to the number of coin tosses modulo p1, stored on C1. If they are not equal, then the input is rejected. In other words, if the prover does not send a^{64^k} b, then the verifier detects this with probability at least 1 − ε.

Let v be the binary value stored on C2 and let H be the total number of heads obtained in the second phase. In the third phase, the verifier asks the prover to send rbin(H) (the least significant bits come first). By using the input head, the verifier easily reads the digit of rbin(H) corresponding to xk (see Fact 1), say x̃, and also checks that the length of rbin(H) does not exceed the length of bin(16⋅64^k). Meanwhile, the verifier also calculates the received value modulo p2. If the length bound is exceeded or the received value modulo p2 differs from v, then the input is rejected. In other words, if the prover does not send rbin(H), then the verifier can catch it with probability 1 − ε. At the end of the third phase, the verifier accepts the input if x̃ is 1, and rejects it if x̃ is 0.

With respect to Chebyshev’s inequality, the value of h fails to reach 8 after more than 16⋅64^k a’s with probability

 Pr[|X−E[X]| ≥ 9] ≤ (1/64^k)⋅(1 − 1/64^k)⋅16⋅64^k/9² < 16/81,

where E[X] is the expected value of X, the number of halting-walk increments. This bound is important, since, for more than 16⋅64^k coin tosses, Fact 2 cannot guarantee the error bound ε. (Remember that the prime numbers p1 and p2 do not exceed 2^{ck}.)

If w is not a member, then the accepting probability is bounded as follows.

• If the prover sends more than 16⋅64^k a’s (or an infinite sequence of a’s), then the input is rejected with probability at least 1 − 16/81, and so the accepting probability cannot be greater than 16/81.

• Assume that the prover sends a^m b with m ≤ 16⋅64^k. If the prover does not send a^{64^k} b, or does not send rbin(H) in the third phase, then the input is rejected with probability at least 1 − ε, and so the accepting probability cannot be greater than ε.

• Assume that the prover sends a^{64^k} b and rbin(H); then the input is accepted with probability at most 1/4.

The expected running time for non-members is exponential in n due to the halting walks.

If w is a member, then the honest prover sends all the information correctly, and the verifier guesses x̃ = xk correctly with probability at least 3/4 if the computation is not terminated by the halting walks. The probability of halting the computation (and rejecting the input) in the second phase is

 Pr[|X−E[X]| ≥ 7] ≤ (1/64^k)⋅(1 − 1/64^k)⋅64^k/7² < 1/49.

Therefore, the verifier accepts with probability at least 3/4 − 1/49. The expected running time for members is also exponential in n.

The verifier uses logarithmic space, and the success probability can be increased by repeating the same algorithm. ∎

Due to Remark 1, the same result follows for k-ary languages with an exponential increase in time and space.

###### Corollary 2

Any k-ary (k ≥ 2) language is verified by a linear-space PTM with bounded error.

Currently, we do not know any better space bound for (strong) IPSs. On the other hand, by using more powerful proof systems, we can reduce the space complexity to constant. We first present a 1P4CA (one-way probabilistic automaton with four counters) algorithm for any language.

###### Theorem 3.3

Any k-ary (k ≥ 1) language is recognized by a 1P4CA with bounded error.

###### Proof

Let Σ = {a1, …, ak} be the input alphabet with k symbols, where lex(ai) = i + 1 for each i. Let w = w[1]w[2]⋯w[n] be the given input string. If w is empty (n = 0), then lex(w) = 1. If n > 0, then lex(w) is calculated as follows:

 lex(w) = 1 + ∑_{i=1}^{n} k^{i−1} + ∑_{i=1}^{n} (lex(w[i]) − 2)⋅k^{n−i},

since there are ∑_{i=1}^{n} k^{i−1} strings with length less than n, and there are ∑_{i=1}^{n} (lex(w[i]) − 2)⋅k^{n−i} strings with length n that come before w in the lexicographic order.

If w is the empty string, then the decision is given deterministically. In the following part, we assume that n > 0. Let Ci (1 ≤ i ≤ 4) represent the value of the i-th counter. At the beginning of the computation, Ci = 0. Firstly, M reads w and sets C1 = lex(w) as follows. M reads w[1] and sets C1 = lex(w[1]). After that, for each i > 1, M reads w[i], multiplies the value of C1 by k, and increases it by lex(w[i]) − k with the help of the other counters.

We claim that after reading w[1]⋯w[m], C1 = lex(w[1]⋯w[m]). We prove this claim by induction on m. The basis, when m = 1, is trivial, since C1 = lex(w[1]).

Suppose that the claim holds for some m ≥ 1:

 C1 = lex(w[1]w[2]⋯w[m]) = 1 + ∑_{i=1}^{m} k^{i−1} + ∑_{i=1}^{m} (lex(w[i]) − 2)⋅k^{m−i}.

Then M reads w[m+1], and C1 is updated to

 C1 = (1 + ∑_{i=1}^{m} k^{i−1} + ∑_{i=1}^{m} (lex(w[i]) − 2)⋅k^{m−i})⋅k + (2 − k) + (lex(w[m+1]) − 2)
    = k + ∑_{i=1}^{m} k^{i} + ∑_{i=1}^{m} (lex(w[i]) − 2)⋅k^{m+1−i} + (2 − k) + (lex(w[m+1]) − 2)
    = 2 + ∑_{i=2}^{m+1} k^{i−1} + ∑_{i=1}^{m} (lex(w[i]) − 2)⋅k^{m+1−i} + (lex(w[m+1]) − 2)
    = 1 + ∑_{i=1}^{m+1} k^{i−1} + ∑_{i=1}^{m+1} (lex(w[i]) − 2)⋅k^{m+1−i}.

Thus, C1 = lex(w[1]⋯w[m+1]), and so the claim is proven.
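The incremental computation of the lexicographic index can be cross-checked with a short script (our sketch; the function names are illustrative). Each symbol triggers the update C1 ← C1⋅k + (2 − k) + (lex(w[i]) − 2), matching the derivation above:

```python
from itertools import product

def lex(w: str, alphabet: str) -> int:
    """Lexicographic index of w (the empty string has index 1), computed
    with the per-symbol counter update from the proof."""
    k = len(alphabet)
    sym = {a: i + 2 for i, a in enumerate(alphabet)}  # lex of 1-symbol strings
    c1 = 1                                            # lex of the empty string
    for a in w:
        c1 = c1 * k + (2 - k) + (sym[a] - 2)
    return c1

# Cross-check against brute-force shortlex enumeration over a 2-symbol alphabet.
words = [""]
for n in range(1, 4):
    words += ["".join(t) for t in product("ab", repeat=n)]
for i, v in enumerate(words, start=1):
    assert lex(v, "ab") == i
```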

After reading w, M stays on the right end-marker and does the rest of the computation without moving the input head. Let t = lex(w). M decreases C1 by 1, and sets C2 = 64 and C3 = 8. Then, until C1 hits zero, M decreases C1 by 1, and then multiplies C2 by 64 and C3 by 8 with the help of the 4th counter. When C1 = 0, we have C2 = 64^t, C3 = 8^t, and C4 = 0.

After that, M tosses coinL 64^t times and guesses the value of xt in pL by using the total number of heads. Remember that w ∈ L if and only if xt = 1. Let h be the total number of heads in binary. Due to Fact 1, the digit of h corresponding to xt is correct with probability at least 3/4. Remark that, when counting the total number of heads, this digit of h is flipped after each block of 4⋅8^t heads. Thus, by switching the values of the 3rd and the 4th counters, M determines each such block of heads. By using its internal states, M keeps a candidate value for xt, say x̃. M sets x̃ = 0 before the coin tosses and updates it to 1 − x̃ after each block of heads.

At the end of the coin tosses, if x̃ = 0 (resp., x̃ = 1), the input is rejected (resp., accepted). The decision will be correct with probability at least 3/4. ∎

It is a well-known fact that any recursive language is recognized by a deterministic automaton with two counters [12]. Based on this fact, Condon and Lipton [4] proved that for any recursively enumerable language, there is a one-way weak-IPS with a 2PFA verifier, by presenting a protocol that simulates the computation of a 2P2CA on a given input. We extend this protocol to the verification of any language by also using two-way communication.

###### Theorem 3.4

There is a weak-IPS (P, V) for any language L, where V is a sweeping PFA.

###### Proof

For any language L, V simulates the algorithm of the 1P4CA M described in the proof of Theorem 3.3 on the given input and asks the prover to store the contents of the four counters.

For each step of M, V interacts with the prover. First, V asks the prover for the values of the four counters as a^{s1} b^{s2} c^{s3} d^{s4} e, where sj is the value of the j-th counter (1 ≤ j ≤ 4). Second, V implements the transition of M based on the current state, the symbol under the input head, the probabilistic outcome, and the status of the counters. Then, V updates the state and head position by itself and sends the updates on the values of the counters to the prover as f1 f2 f3 f4. Here fj ∈ {−1, 0, +1}, and it means that the verifier asks the prover to add fj to the value of the j-th counter (1 ≤ j ≤ 4).

The communication between the verifier and the prover is two-way, since V may toss coinL during its computation. Without loss of generality, we pick an arbitrary computation path of V. For this path, let

 cV = f_{1,1}f_{1,2}f_{1,3}f_{1,4} f_{2,1}f_{2,2}f_{2,3}f_{2,4} ⋯ f_{i,1}f_{i,2}f_{i,3}f_{i,4} ⋯

be the string representing the messages sent by the verifier and let

 cP = a^{s_{1,1}} b^{s_{1,2}} c^{s_{1,3}} d^{s_{1,4}} e a^{s_{2,1}} b^{s_{2,2}} c^{s_{2,3}} d^{s_{2,4}} e ⋯ e a^{s_{i,1}} b^{s_{i,2}} c^{s_{i,3}} d^{s_{i,4}} e ⋯

be the string representing the messages sent by the prover, where the i-th block gives the counter values before the i-th step. The verifier can check the validity of cP as described below. For each i, V can compare s_{i+1,1}, s_{i+1,2}, s_{i+1,3}, and s_{i+1,4} with s_{i,1} + f_{i,1}, s_{i,2} + f_{i,2}, s_{i,3} + f_{i,3}, and s_{i,4} + f_{i,4}, respectively, by using the values f_{i,j}.

Let y ∈ (0, 1) be the parameter that determines the error bound in the following checks. For the validity check of cP, V makes four comparisons in parallel, such that one comparison is responsible for one counter. During the j-th comparison (1 ≤ j ≤ 4), V creates two paths with equal probabilities. In the first path, V says “A” with probability, denoted by Pr[Aj],

 Pr[Aj] = ∏_{i=1}^{g−1} y^{2s_{i,j} + 2(s_{i+1,j} − f_{i,j})},

and in the second path, it says “R” with probability, denoted by Pr[Rj],

 Pr[Rj] = ∏_{i=1}^{g−1} (y^{4s_{i,j}} + y^{4(s_{i+1,j} − f_{i,j})})/2,

where g is the total number of computational steps of V. Here each comparison executes two parallel procedures, such that the first procedure produces the probabilities for odd i’s and the second procedure produces the probabilities for even i’s.

Once the simulation of M is finished, V says only “A”s in all comparisons with probability

 Pr[A]=Pr[A1]⋅Pr[A2]⋅Pr[A3]⋅Pr[A4]

and says only “R”s in all comparisons with probability

 Pr[R]=Pr[R1]⋅Pr[R2]⋅Pr[R3]⋅Pr[R4].

It is easy to see that if s_{i+1,j} = s_{i,j} + f_{i,j} for each i and j, then Pr[R] = y⋅Pr[A]. On the other hand, if s_{i+1,j} ≠ s_{i,j} + f_{i,j} for some i and j, then

 Pr[R]/Pr[A] ≥ (y^{2s_{i,j} − 2(s_{i+1,j} − f_{i,j})} + y^{2(s_{i+1,j} − f_{i,j}) − 2s_{i,j}})/2 > 1/(2y²),

since either 2s_{i,j} − 2(s_{i+1,j} − f_{i,j}) or 2(s_{i+1,j} − f_{i,j}) − 2s_{i,j} is a negative even integer.

If cP is finite (a cheating prover may provide an infinite cP), then V gives a positive decision for cP with probability Pr[A] and a negative decision for cP with probability Pr[R]. Hence, if cP is valid, then the probability of a positive decision is 1/y times the probability of a negative decision. If cP is not valid, then the probability of a negative decision is at least 1/(2y) times the probability of a positive decision.

When the computation is finished, if V does not give a decision on cP, V moves the input head onto the left end-marker and restarts the process of computation and interaction. If V gives a negative decision on cP, the input is rejected. If V gives a positive decision on cP, the input is accepted if x̃ is computed as 1, and the input is rejected if x̃ is computed as 0.

If the prover is honest, then Pr[R] = y⋅Pr[A]. If w ∈ L, the probability to accept the input in one round is at least (3/4)⋅Pr[A] and the probability to reject it is at most (1/4)⋅Pr[A] + y⋅Pr[A]. Therefore, the total probability to accept the input is at least

 (Pr[A]⋅(3/4)) / (Pr[A]⋅(3/4) + Pr[A]⋅(1/4) + y⋅Pr[A]) = 3/(4⋅(1+y)),

which can be arbitrarily close to 3/4 by picking a sufficiently small value of y. If w ∉ L and the prover is honest, then after a positive decision the input is accepted with probability at most 1/4 and rejected with probability at least 3/4. Therefore, the total probability to reject the input is at least

 (Pr[A]⋅(3/4) + y⋅Pr[A]) / (Pr[A]⋅(3/4) + y⋅Pr[A] + Pr[A]⋅(1/4)) = (3 + 4y)/(4 + 4y) > 3/4.

If the prover is cheating and provides a finite cP, the probability to reject the input is at least 1/(2y) times the probability to accept the input. Therefore, the total probability to reject the input is at least

 (1/(2y)) / (1 + 1/(2y)) = 1/(2y + 1),

which can be arbitrarily close to 1 by picking a sufficiently small value of y. A cheating prover may provide an infinite cP, in which case V does not stop.

V can execute the algorithm calculating the value x̃ multiple times in each interaction and, in case of a positive decision for cP, choose the most frequent outcome of x̃ as the recognition decision, thus increasing the probability of a correct decision arbitrarily close to 1. ∎

Now we switch our focus to IPSs with two provers. It is known that a 1PFA verifier can simulate the work tape of a TM reliably with high probability by interacting with two provers [9]. Since we use this fact later, we present its explicit proof here.

###### Fact 3

[9] A 1PFA verifier can simulate the work tape of a TM reliably with high probability by interacting with two provers P1 and P2.

###### Proof

Let γ1γ2γ3⋯ denote the contents of the work tape (infinite to the right), including the position of the head, where for each i, γi denotes the i-th symbol from the left on the work tape. To store the position of the head, one of the γi’s is marked. We accomplish this by doubling the work tape alphabet, so one symbol can simultaneously store the value and a marker for the presence of the head.

At the beginning of the computation, V secretly picks random values a, b, and c, each of them between 0 and p − 1, where p is a predetermined prime number greater than the size of the work tape alphabet. The verifier interacts with the provers P1 and P2 and asks them to store the tape contents in the following way: for each odd (resp., even) i, V asks P1 (resp., P2) to store γi, ri, and σi, where ri is picked by V randomly and σi is a signature computed from γi, ri, and the secret values. Therefore, each prover stores a sequence of triples (γi, ri, σi) for its i’s in ascending order.

To read the contents of the work tape, V scans it from the left to the right, i.e., for each i, V requests from the provers the triples (γi, ri, σi). While V scans, it checks the correctness of the signatures. If V finds a defect, it rejects the input and also identifies the prover giving the incorrect signature as a cheater.

In order to update the contents of the work tape, V scans the tape: for each triple (γi, ri, σi), it generates a new ri, recalculates σi, and asks the provers to replace the values. If γi is changed for some i, then the values of its triple are changed accordingly. Note that one update may include the change of one γi and the change of the position of the work head by one cell, in which case at most 2 sequential γi’s are updated. In this case, V asks the provers to update the contents of the corresponding triples.

The provers cannot learn the secret values from the information provided by V, since, for each i, V sends γi, which is derived from the computation, the random value ri, and the signature σi, which has a one-to-one correspondence with the randomly chosen ri. Note that the secret values are also randomly chosen and kept secret.

Let (γi, ri, σi) be the triple provided by V to one of the provers, say P, for some i, and let (γ′, r′, σ′) be the values provided back by P, where at least one of the values is different from the ones sent by V. Assume that σ′ corresponds to γ′ and r′ correctly. Then the secret values must satisfy a linear relation modulo p determined by the two triples. Since p is a prime number, exactly p pairs of secret values satisfy this relation, and there are p² different pairs in total. Thus, since P does not know the secret values, the probability that P can provide valid values is 1/p.

If both provers are honest, then they send the correct stored values to V. This implies that all signature checks will pass successfully and V accepts the interactions. If a prover changes at least one symbol of the contents that V entrusted it to store, the signature check will fail for the triple of the changed symbol with probability at least 1 − 1/p, in which case V rejects the interaction and identifies the cheating prover. Note that the described protocol works equally well in both IPS models with two provers, because any case of cheating is recognized individually. ∎
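The signature mechanism can be sketched as a one-time linear MAC over Z_P (our illustration; the exact signature scheme of [9] may differ, and the coefficients `a`, `b` and the prime `P` below are assumptions of the sketch):

```python
import random

P = 101  # predetermined prime, larger than the (doubled) work tape alphabet

# Verifier's secret coefficients, hidden from both provers.
a = random.randrange(P)
b = random.randrange(P)

def sign(gamma: int, r: int) -> int:
    """Signature for a work-tape symbol gamma stored with nonce r."""
    return (a * gamma + b * r) % P

# Verifier entrusts a symbol to a prover.
gamma, r = 7, random.randrange(P)
triple = (gamma, r, sign(gamma, r))

# An honest prover returns the triple unchanged: the check always passes.
g2, r2, s2 = triple
assert sign(g2, r2) == s2

# A cheating prover altering gamma must guess a valid signature: without
# knowing the secret coefficients, only one of the P candidate signatures
# is valid, so the forgery succeeds with probability 1/P.
forgeries = sum(1 for s in range(P) if sign(gamma + 1, r) == s)
assert forgeries == 1  # exactly one valid signature among P candidates
```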

Now we can present our next result about the verification of every language.

###### Theorem 3.5

There is an IPS with two provers (P1, P2, V) for any language L, where V is a 1PFA.

###### Proof

A 2PFA verifier, say V, can execute the algorithm of Corollary 1 by interacting with two provers as described in the proof of Fact 3, such that the provers reliably store the contents of the counters for the coin tosses and for processing the number of heads. If the content of the work tape is changed by a prover, V can catch this with probability at least 1 − 1/p, and so rejects the input with the same probability.

Let w be the given input and let x represent its membership bit in pL. If both provers are honest, then V correctly performs the tosses of coinL and then processes the number of heads. If the bit x is guessed as 1, the input is accepted, and, if x is guessed as 0, then the input is rejected. Therefore, the input is verified correctly with probability at least 3/4. If at least one of the provers is not honest, then any change of the contents stored by the prover is detected with probability at least 1 − 1/p, and the input is rejected. This is true whether one or both provers are cheating.

We can modify V and obtain a 1PFA verifier V′. At the beginning of the computation, V′ reads the input from left to right and writes it on the work tape (i.e., asks the provers to store it). Then, V′ implements the rest of the protocol by staying on the right end-marker.

After writing the input on the work tape, like in Theorem 3.4, V′ can execute the algorithm calculating the value x multiple times and increase the probability of a correct recognition decision arbitrarily close to 1. ∎

## 4 Verification of uncountably many languages

### 4.1 Constant-space verification of nonregular languages

In this subsection, we present two nonregular unary languages and one nonregular binary language that can be verified by 2PFAs in quadratic and linear time, respectively. The protocols presented here will be also used in the next subsection.

###### Theorem 4.1

The unary language $\{\, a^n \mid n \text{ is a perfect square} \,\}$ is verifiable by a 2PFA in quadratic expected time with bounded error.

###### Proof

The protocol is one-way, and for the members of the language the verifier expects from the prover a string of the form

 $y_n = (a^m b)^m\, b,$

where $m = \sqrt{n}$.

Let $a^n$ be the given input (the decisions on the shorter strings are given deterministically) and let $y$ be the string provided by the prover. The verifier deterministically checks whether $y$ is of the form

 $y = a^{m_1} b\, a^{m_2} b \cdots b\, a^{m_i} b \cdots \quad \text{or} \quad y = a^{m_1} b\, a^{m_2} b \cdots b\, a^{m_t} b\, b$

for some $t \geq 1$. If the verifier sees a defect in $y$, then the input is rejected.

In the remaining part, we assume that $y$ is in one of these forms. At the beginning of the computation, the verifier places the input head on the left end-marker and splits the computation into four paths with equal probabilities.

In the first path, the verifier reads the input and $y$ in parallel and checks whether $y$ is finite, i.e.,

 $y = a^{m_1} b\, a^{m_2} b \cdots b\, a^{m_t} b\, b,$

and whether it satisfies the equality $n = \sum_{j=1}^{t} m_j$. If one of the checks fails, the input is rejected. Otherwise, it is accepted.

The second path is very similar to the first path, and the following equality is checked:

 $n = \sum_{j=2}^{t} m_j + \sum_{j=1}^{t} 1,$

i.e., the verifier skips $a^{m_1}$ in $y$ and counts the $b$'s instead. If the equality is satisfied, the input is accepted. Otherwise, it is rejected.
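As an illustrative sanity check (the helper functions below are ours, not part of the protocol), both counting equalities hold for the honest proof string exactly when the input length is a perfect square:

```python
# Sanity check (illustrative): for the honest proof string y = (a^m b)^m b,
# both counting equalities hold with input length n = m^2.

def honest_proof(m):
    """Build the proof string (a^m b)^m b expected from an honest prover."""
    return ("a" * m + "b") * m + "b"

def block_lengths(y):
    """Return [m_1, ..., m_t] for y = a^{m_1} b ... b a^{m_t} b b."""
    return [len(block) for block in y.split("b")[:-2]]

for m in range(1, 20):
    ms = block_lengths(honest_proof(m))
    t = len(ms)
    n = m * m
    assert n == sum(ms)              # first-path equality
    assert n == sum(ms[1:]) + t      # second-path equality
```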

The computation in the first and second paths is deterministic (a single decision is given in each) and both paths terminate in linear time.

In the third path, the verifier tries to make the following consecutive comparisons:

 $m_1 = m_2,\; m_3 = m_4,\; \ldots,\; m_{2j-1} = m_{2j},\; \ldots.$

For each pair, the verifier can easily determine whether $m_{2j-1} = m_{2j}$ by attempting to move the input head to the right by $m_{2j-1}$ squares and then to the left by $m_{2j}$ squares. If the right end-marker is visited, or the left end-marker is visited earlier than expected ($m_{2j} > m_{2j-1}$), or it is not visited at the expected moment ($m_{2j} < m_{2j-1}$), then the comparison is not successful and so the input is rejected. Otherwise, the comparison is successful and the verifier continues with a random walk (described below) before the next comparison, except that if the last comparison is successful, then the input is accepted without making the random walk.
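A minimal sketch of one such comparison; the exact head choreography (starting each comparison on the left end-marker, with the right end-marker at position $n+1$) is an assumption of this sketch:

```python
# Sketch (head choreography is an assumption): starting on the left
# end-marker (position 0) of an input of length n, the head moves right
# p squares while the prover streams a^p, then left q squares while the
# prover streams a^q.  The comparison succeeds iff the head lands back
# exactly on the left end-marker.

def compare_blocks(p, q, n):
    pos = 0
    for _ in range(p):
        pos += 1
        if pos == n + 1:                    # right end-marker visited
            return "reject"
    for step in range(q):
        pos -= 1
        if pos == 0 and step < q - 1:       # left end-marker visited too early: q > p
            return "reject"
        if pos < 0:
            return "reject"
    return "success" if pos == 0 else "reject"   # pos > 0 means q < p
```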

The aim of the random walk is to determine whether the prover sends a finite string or not, i.e., the prover may cheat by sending the infinite string $(a^m b)^{\infty}$ for some $m$, which successfully passes all the comparison tests described above.

The random walk starts by placing the input head on the first symbol of the input and terminates after hitting one of the end-markers. During the random walk, the verifier pauses reading the string $y$. It is a well-known fact that this walk terminates after a linear expected number of steps, and the probability of ending on the right (resp., the left) end-marker is $\frac{1}{n}$ (resp., $1-\frac{1}{n}$).
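The hitting probability and the linear expected walk length can be observed in a seeded Monte Carlo sketch; the exact barrier placement (absorption at positions $0$ and $n$, start at position $1$) is an assumption of this sketch:

```python
import random

# Monte Carlo sketch (barrier placement is an assumption): an unbiased walk
# starts on the first input symbol and is absorbed at the left end-marker
# (position 0) or the right end-marker (position n).  Theory predicts the
# walk ends on the right with probability 1/n and takes O(n) expected steps.

def random_walk(n, rng):
    pos, steps = 1, 0
    while 0 < pos < n:
        pos += rng.choice((-1, 1))
        steps += 1
    return pos == n, steps

rng = random.Random(2024)
n, trials = 16, 20_000
hits = total_steps = 0
for _ in range(trials):
    ended_right, steps = random_walk(n, rng)
    hits += ended_right
    total_steps += steps

print(hits / trials)          # close to 1/n = 0.0625
print(total_steps / trials)   # close to n - 1 = 15
```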

If the walk ends on the left end-marker, then the verifier continues with the next comparison. If the walk ends on the right end-marker, the verifier checks whether the number of $b$'s in the remaining part of $y$ is less than $n$ or not by reading the whole input from right to left. If it is less than $n$, then the input is accepted. Otherwise ($y$ contains more than $n$ $b$'s), the input is rejected. In any case, the computation is terminated with probability $\frac{1}{n}$ after the walk.

The fourth path is identical to the third path with the comparison pairs shifted: the verifier tries to make the following consecutive comparisons:

 $m_2 = m_3,\; m_4 = m_5,\; \ldots,\; m_{2j} = m_{2j+1},\; \ldots.$

Now, we can analyze the overall protocol. If $n = m^2$ for some $m$, then the prover provides $y_n = (a^m b)^m\, b$ and the input is accepted in every path, and so the overall accepting probability is 1. Thus, every member is accepted with probability 1. Moreover, there will be at most $O(\sqrt{n})$ random walks, and so the overall running time is $O(n\sqrt{n})$.

If the input is not a member, then it is rejected in at least one of the paths. If it is rejected in the first or the second path, then the overall rejecting probability is at least $\frac{1}{4}$. If it is rejected in the third or the fourth path, then the overall rejecting probability cannot be less than $\frac{3}{16}$, as explained below.

We assume that the input is not rejected in the first and second paths. Then, we know that $y$ is finite, the number of $a$'s in $y$ is $n$, and $y$ is composed of $t$ blocks. Since $n$ is not a perfect square, there is at least one pair of consecutive blocks that have different numbers of $a$'s. Hence, at least one of the comparisons will not be successful, and the input will be rejected in one of these paths. Let $l$ be the minimum index such that the comparison of the $l$-th pair is not successful (in the third or the fourth path). Then, $l \leq \frac{\sqrt{n}}{2}$. (If not, $y$ contains at least $\sqrt{n}$ blocks and each of these blocks contains more than $\sqrt{n}$ $a$'s, and this implies that $y$ contains more than $n$ $a$'s.) Then, the maximum accepting probability in the corresponding path is bounded from above by

 $\displaystyle\sum_{i=1}^{l} \frac{1}{n}\left(1-\frac{1}{n}\right)^{i-1} = 1-\left(1-\frac{1}{n}\right)^{l} \leq 1-\left(1-\frac{1}{n}\right)^{\frac{\sqrt{n}}{2}} \leq \frac{1}{4}.$

(Remember that the decisions on the shorter strings are given deterministically, so $n \geq 4$.) Therefore, the rejecting probability in the third or the fourth path is at least $\frac{3}{4}$, and so the overall rejecting probability cannot be less than $\frac{3}{16}$.
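The error bound above can be checked numerically; a quick sketch (the range of $n$ tested is arbitrary, and $n = 4$ is the equality case $1-(3/4)^1 = 1/4$):

```python
import math

# Numeric check of the error bound: for every n >= 4 and every
# l <= sqrt(n)/2, the accepting probability 1 - (1 - 1/n)^l is at most 1/4.
# It suffices to check l = sqrt(n)/2, where the expression is largest.

for n in range(4, 2001):
    bound = 1 - (1 - 1 / n) ** (math.sqrt(n) / 2)
    assert bound <= 0.25 + 1e-12, n
```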

The maximum expected running time occurs when the prover sends the infinite string $(a^m b)^{\infty}$ for some $m$. In this case, the protocol is terminated in the third and fourth paths with probability 1 after a linear expected number of random walks, and so the expected running time is quadratic in $n$.

By repeating the protocol above many times, say $k$ times, we obtain a new protocol such that any non-member is rejected with probability arbitrarily close to 1 as $k$ grows. ∎

###### Theorem 4.2

The unary language $\{\, a^n \mid n \text{ is a power of } 64 \,\}$ is verifiable by a 2PFA with bounded error in quadratic expected time.

###### Proof

The proof is very similar to the previous one. The protocol is one-way, and the verifier expects to receive the string

 $y_n = a\, b\, a^{64}\, b \cdots b\, a^{64^{k-2}}\, b\, a^{64^{k-1}}\, b\, b$

from the prover for some $k$.

Let $a^n$ be the given input. (The decisions on the shorter strings are given deterministically.) Let $y$ be the string sent by the prover. The verifier deterministically checks whether $y$ is of the form

 $y = a^{m_1} b\, a^{m_2} b \cdots b\, a^{m_i} b \cdots \quad \text{or} \quad y = a^{m_1} b\, a^{m_2} b \cdots b\, a^{m_t} b\, b$

for some $t \geq 1$. If the verifier sees a defect in $y$, the input is rejected. So, we assume that $y$ is in one of these forms in the remaining part.

The verifier splits the computation into three paths with equal probabilities at the beginning of the computation. The first path checks whether $y$ is finite and whether

 $n = 1 + 63 \cdot \sum_{j=1}^{t} m_j.$

If one of these checks fails, the input is rejected. Otherwise, the input is accepted.
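As a quick illustrative check, the first-path equality recovers exactly the powers of 64 on the honest proof string with geometric block lengths:

```python
# Check of the first-path identity for the honest proof string
# y_n = a b a^64 b ... b a^{64^(k-1)} b b: with block lengths
# m_j = 64^(j-1), the equality n = 1 + 63 * sum(m_j) gives n = 64^k.

for k in range(1, 12):
    ms = [64 ** (j - 1) for j in range(1, k + 1)]
    assert 1 + 63 * sum(ms) == 64 ** k
```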

The second and third paths are very similar to the third and fourth paths in the previous proof. In the second path, the verifier checks

 $64 \cdot m_{2j-1} = m_{2j}$

for each $j$. The random walk part is implemented in the same way. The third path is the same except for the comparison pairs: the verifier checks

 $64 \cdot m_{2j} = m_{2j+1}$

for each $j$.
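The honest geometric blocks pass both families of shifted comparisons; a minimal check (the zero-based index bookkeeping is ours):

```python
# Check (illustrative): the geometric blocks m_j = 64^(j-1) satisfy both
# families of shifted comparisons, 64*m_{2j-1} = m_{2j} and
# 64*m_{2j} = m_{2j+1}, used in the second and third paths.

k = 10
ms = [64 ** (j - 1) for j in range(1, k + 1)]   # m_1, ..., m_k
for j in range(1, k // 2 + 1):
    assert 64 * ms[2 * j - 2] == ms[2 * j - 1]   # 64*m_{2j-1} = m_{2j}
    if 2 * j < k:
        assert 64 * ms[2 * j - 1] == ms[2 * j]   # 64*m_{2j} = m_{2j+1}
```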

If $n = 64^k$ for some $k$, then the honest prover sends

 yn=ab<