1 Introduction
Postselection is the ability to make a decision by assuming that the computation terminates with predetermined outcome(s) and discarding the rest of the outcomes. In [1], Aaronson introduced bounded-error postselecting quantum polynomial time and proved that it is identical to unbounded-error probabilistic polynomial time. Later, postselecting quantum and probabilistic finite automata models were investigated in [14, 15, 17, 18]. It was proven that postselecting realtime finite automata are equivalent to a restricted variant of two-way finite automata, called restarting realtime automata [16]. Later, it was also shown that these two automata models are equivalent to realtime automata that have the ability to send a classical bit through CTCs (closed timelike curves) [11, 12].
In this paper, we focus on bounded-error postselecting realtime probabilistic finite automata (PostPFAs) and present many algorithms and protocols using rational-valued and real-valued transitions. Even though a PostPFA is a restricted variant of a two-way probabilistic finite automaton (2PFA), our results may be seen as new evidence that PostPFAs can be as powerful as 2PFAs.
We show that PostPFAs with rational-valued transitions can recognize different variants of the “equality” language. Then, based on these results, we show that they can verify certain unary non-regular languages. Remark that bounded-error 2PFAs cannot recognize unary non-regular languages [9].
When using real-valued transitions (so-called magic coins), probabilistic and quantum models can recognize uncountably many languages by using very small space, and in polynomial time in some cases [13, 5, 6, 7]. In the same direction, we examine PostPFAs using real-valued transitions and show that they can recognize uncountably many binary languages by using an extra counter. When interacting with a prover, we obtain a stronger result: PostPFAs can verify uncountably many unary languages. We also present some corollaries for probabilistic counter automata.
In the next section, we provide the notation and definitions used in the paper. Then, we present our results on PostPFAs using rational-valued transitions in Section 3 and on PostPFAs using real-valued transitions in Section 4. In each section, recognition and verification results are separated into two subsections.
As a related work, we recently presented similar verification results for 2PFAs that run in polynomial expected time [7]. Even though we obtain stronger results here in some cases (as a PostPFA is a restricted version of a 2PFA), a physical implementation of the PostPFA algorithms and protocols presented in this paper would have exponential expected running time.
2 Background
We assume that the reader is familiar with the basics of fundamental computational models and automata theory.
For any alphabet Σ, Σ* is the set of all finite strings defined on Σ, including the empty string ε, and Σ^∞ is the set of all infinite strings defined on Σ. We fix symbols ¢ and $ as the left and the right endmarkers. The input alphabet, which does not contain ¢ and $, is denoted Σ, and the set Σ ∪ {¢, $} is denoted Σ̃. For any given string w, |w| is its length, w[i] is its i-th symbol (1 ≤ i ≤ |w|), and w̃ = ¢w$. For any natural number m, bin(m) denotes its unique binary representation.
Our realtime models operate in strict mode: any given input, say w, is read as w̃ = ¢w$ from left to right, symbol by symbol, without any pause on any symbol.
Formally, a postselecting realtime probabilistic finite state automaton (PostPFA) P is a 6-tuple

P = (S, Σ, δ, s_1, s_a, s_r),

where

S is the finite set of states,

δ is the transition function described below,

s_1 ∈ S is the starting state, and,

s_a ∈ S and s_r ∈ S are the postselecting accepting and rejecting states (s_a ≠ s_r), respectively.

We call any state other than s_a or s_r non-postselecting.
When P is in state s and reads symbol σ, it switches to state s′ with probability δ(s, σ, s′). To be a well-formed machine, the transition function must satisfy that, for every state s and symbol σ, the values δ(s, σ, s′) sum to 1 over all states s′.

Let w be the given input. The automaton starts its computation in state s_1. Then, it reads the input and behaves with respect to the transition function. After reading the whole input, P is in a probability distribution, which can be represented as a stochastic vector, say v_f. Each entry of v_f represents the probability of being in the corresponding state.

Due to postselection, we assume that the computation ends either in s_a or s_r. We denote the probabilities of being in s_a and s_r as p_a(w) and p_r(w), respectively. It must be guaranteed that p_a(w) + p_r(w) > 0. (Otherwise, postselection cannot be done.) Then, the decision is given by normalizing these two values: w is accepted and rejected with probabilities

p_a(w) / (p_a(w) + p_r(w))   and   p_r(w) / (p_a(w) + p_r(w)),

respectively. We also note that the automaton ends its computation in non-postselecting state(s) (if there is any) with probability 1 − (p_a(w) + p_r(w)), but the ability of making postselection discards this probability (if it is nonzero).
By making a simple modification to a PostPFA, we can obtain a restarting realtime PFA (restartPFA) [16]:

each non-postselecting state is called a restarting state,

the postselecting accepting and rejecting states are called accepting and rejecting states, respectively, and,

if the automaton ends in a restarting state, the whole computation is restarted from the initial configuration (state).
The analysis of accepting and rejecting probabilities for the input remains the same and so both models have the same accepting (and rejecting) probabilities on every input.
Moreover, if p_a(w) + p_r(w) = 1 for every input w, then the automaton is simply a probabilistic finite automaton (PFA), since the postselection or restarting mechanism has no effect on the computation or the decision.
Language L is said to be recognized by a PostPFA P with error bound ε < 1/2 if

any member of L is accepted by P with probability at least 1 − ε, and,

any non-member is rejected by P with probability at least 1 − ε.

We can also say that L is recognized by P with bounded error, or recognized by the bounded-error PostPFA P.
In this paper, we also focus on one-way private-coin interactive proof systems (IPS) [2], where the verifier always sends the same symbol to the prover. Since the protocol is one-way, the prover's responses can be viewed as a single infinite string, called the (membership) certificate. Since the prover always sends a symbol when requested, certificates are assumed to be infinite. The automaton reads the provided certificate in one-way mode, and so it can pause on some symbols of the certificate.
Formally, a PostPFA verifier V is a 7-tuple
where, different from a PostPFA, Γ is the certificate alphabet, and the transition function is extended as follows: when V is in state s, reads input symbol σ, and reads certificate symbol γ, it switches to state s′ and makes action a on the certificate with the corresponding probability, where the next (resp., the same) symbol of the certificate is selected for the next step if a = 1 (resp., a = 0).
To be a well-formed machine, the transition function must satisfy that the outgoing transition probabilities sum to 1 for each combination of state, input symbol, and certificate symbol.
Let w be the given input. For a given certificate, say c, V starts in state s_1 and reads the input and the certificate in realtime and one-way modes, respectively. After finishing the input, it gives its decision like a standard PostPFA.
Language L is said to be verified by a PostPFA V with error bound ε if the following two conditions (called completeness and soundness) are satisfied:

For any member of L, there exists a certificate such that V accepts the input with probability at least 1 − ε.

For any non-member and for any certificate, V rejects the input with probability at least 1 − ε.

We can also say that L is verified by V with bounded error. If every member is accepted with probability 1, then it is also said that L is verified by V with perfect completeness.
A two-way probabilistic finite automaton (2PFA) [10] is a generalization of a PFA that can read the input more than once. For this purpose, the input is written on a tape between two endmarkers and each symbol is accessed by the read-only head of the tape. The head can either stay on the same symbol or move one square to the left or to the right, and it is guaranteed never to leave the endmarkers. The transition function is extended to determine the head movement after a transition. A 2PFA is called a sweeping PFA if the direction of the head changes only on the endmarkers: the input is read from left to right, then right to left, and so on.
A 2PFA can also be extended with an integer counter or a working tape; such a model is called a two-way probabilistic counter automaton (2PCA) or a probabilistic Turing machine (PTM), respectively.
A 2PCA reads a single bit of information from the counter as part of a transition, i.e., whether its value is zero or not; after the transition, it increases or decreases the value of the counter by 1 or leaves it unchanged.
The working tape contains only blank symbols at the beginning of the computation and has a two-way read/write head. As part of a transition, a PTM reads the symbol under the head of the work tape; after the transition, it overwrites that symbol and moves the head by at most one square.
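The counter discipline described above (test for zero, then update by at most one) can be sketched as a tiny interface; this is an illustrative sketch with names of our choosing, not code from the paper:

```python
class Counter:
    """Integer counter as used by a probabilistic counter automaton:
    a transition may only test whether the value is zero and then
    update it by -1, 0, or +1."""

    def __init__(self) -> None:
        self.value = 0

    def is_zero(self) -> bool:
        # the single bit of information read as part of a transition
        return self.value == 0

    def update(self, d: int) -> None:
        # after the transition, the value changes by at most 1
        assert d in (-1, 0, 1)
        self.value += d

c = Counter()
for _ in range(3):
    c.update(+1)   # increase three times
c.update(-1)       # decrease once
print(c.is_zero(), c.value)  # False 2
```

The space used by such a machine on an input is measured by the maximum absolute value the counter attains, as defined below.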
Sweeping or realtime (postselecting) variants of these models are defined similarly.
For non-realtime models, the computation is terminated after entering an accepting or rejecting state. Additionally, for non-realtime postselecting models, there is another halting state for non-postselecting outcomes.
A language is recognized by a bounded-error PTM (or any other variant of PTM) in space s(n) if, for any input of length n, the maximum number of cells visited on the work tape with nonzero probability is at most s(n). If we replace the PTM with a counter automaton, then we instead take the maximum absolute value of the counter.
We denote the set of integers by ℤ and the set of positive integers by ℤ⁺. The set of all subsets of the positive integers is uncountable (its cardinality is 2^ℵ₀), like the set of real numbers ℝ. The cardinality of ℤ or ℤ⁺ is ℵ₀ (countably many).
For , the membership of each positive integer is represented as a binary probability value:
The coin landing on head with probability is named .
3 Rationalvalued postselecting models
In this section, our recognizers and verifiers use only rationalvalued transition probabilities.
3.1 PostPFA algorithms
Here we mainly adopt and also simplify the techniques presented in [8, 3, 16]. We start with a simple language: {0^m 1 0^m | m > 0}. It is known that this language is recognized by PostPFAs with bounded error [16, 18], but we still present an explicit proof, which will be used in the other proofs.
Fact 1
For any , is recognized by a PostPFA with error bound .
Proof
Let 0^m 1 0^n be the given input for some m, n > 0. Any other input is rejected deterministically.
At the beginning of the computation, the automaton splits the computation into two paths with equal probabilities. In the first path, it says “equal” with probability 2^(-(m+n)), and, in the second path, it says “not equal” with probability (4^(-m) + 4^(-n))/2.
In the first path, the automaton starts in a state, say e_1. Then, for each symbol 0, it stays in e_1 with probability 1/2 and quits with the remaining probability. Thus, when started in e_1, the probability of being in e_1 upon reaching the right endmarker is 2^(-(m+n)).
In the second path, we assume that the automaton starts in a state, say n_1, and then immediately switches to two different states, say n_2 and n_3, with equal probabilities. For each 0 until the symbol 1, it stays in n_2 with probability 1/4 and quits with the remaining probability. After reading symbol 1, it switches from n_2 to n_2′ and stays there until the right endmarker. Thus, when started in n_2, the probability of being in n_2′ upon reaching the right endmarker is 4^(-m).
When in n_3, the automaton stays in n_3 on the first block of 0s. After reading symbol 1, it switches from n_3 to n_3′, and then, for each 0, it stays in n_3′ with probability 1/4 and quits with the remaining probability. Thus, when started in n_3, the probability of being in n_3′ upon reaching the right endmarker is 4^(-n). Therefore, when started in state n_1, the probability of being in n_2′ or n_3′ upon reaching the right endmarker is (4^(-m) + 4^(-n))/2.
It is easy to see that if m = n, then (4^(-m) + 4^(-n))/2 = 2^(-(m+n)). On the other hand, if m ≠ n, then (4^(-m) + 4^(-n))/2 ≥ (5/4) · 2^(-(m+n)),
since (4^(-m) + 4^(-n))/2 = 2^(-(m+n)) · (2^(m-n) + 2^(n-m))/2 and 2^(m-n) + 2^(n-m) ≥ 2 + 1/2 whenever m ≠ n.
On the right endmarker, enters and with probabilities and , respectively. Hence, if is a member, then is times of , and so, is accepted with probability
If is not a member, then is at least times of , and so, is rejected with probability at least
Thus, the error bound is , i.e.
which is less than when . (Remark that when .)
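The core inequality behind this proof can be checked numerically. The sketch below assumes the standard two-path construction on input 0^m 1 0^n, in which the “equal” event has probability 2^(-(m+n)) and the “not equal” event has probability (4^(-m) + 4^(-n))/2; these values are our reading of the (partially lost) formulas above rather than a verbatim quote:

```python
from fractions import Fraction

def path_probabilities(m: int, n: int):
    """Probabilities of the 'equal' and 'not equal' events on input 0^m 1 0^n,
    under the assumed construction: the first path succeeds with 2^-(m+n),
    the second with (4^-m + 4^-n) / 2."""
    p_eq = Fraction(1, 2 ** (m + n))
    p_neq = (Fraction(1, 4 ** m) + Fraction(1, 4 ** n)) / 2
    return p_eq, p_neq

# By the AM-GM inequality, p_neq >= p_eq with equality iff m == n,
# and the ratio p_neq / p_eq is at least 5/4 whenever m != n.
for m, n in [(3, 3), (4, 3), (2, 5)]:
    p_eq, p_neq = path_probabilities(m, n)
    print(m, n, p_neq / p_eq)
```

The constant multiplicative gap between members and non-members is exactly what the postselection (normalization) step amplifies into a bounded-error decision.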
We continue with language ,
Theorem 3.1
For any , is recognized by a PostPFA with error bound .
Proof
Let be the given input for some , where for each both and are positive integers. Any other input is rejected deterministically.
Similar to the previous proof, after reading the whole input, the automaton says “equal” with probability
and says “not equal” with probability
Here, the automaton can easily implement both probabilistic events with the help of its internal states. As analyzed in the previous proof, for each , either or is at least times greater than . Thus, if the input is a member, then , and, if it is not a member, then
On the right endmarker, enters and with probabilities and , respectively. Hence, we obtain the same error bound as given in the previous proof.
Let be the linear mapping for some non-negative integers and , and let be a new language.
Theorem 3.2
For any , is recognized by a PostPFA with error bound .
Proof
Let be the given input for some , where for each both and are positive integers. Any other input is rejected deterministically.
In the above proofs, the described automata make transitions with probabilities 1/2 or 1/4 when reading a symbol 0. Here, the automaton also makes some additional transitions:

Before starting to read a block of 0’s, makes a transition with probability or .

After reading a symbol 0, makes a transition with probability or .
Thus, after reading a block of 0’s, can be designed to be in a specific event with probability or , where .
Therefore, the automaton is constructed such that, after reading the whole input, it says “equal” with probability
and says “not equal” with probability
Then, for each , if , , and, if ,
As in the above algorithms, on the right endmarker, enters and with probabilities and , respectively. Hence, we obtain the same error bound as given in the previous proofs.
As an application of the last result, we present a PostPFA algorithm for language
which was also shown to be recognized by 2PFAs [8].
Theorem 3.3
For any , is recognized by a PostPFA with error bound .
Proof
Let be the given input for , where . The decision on any other input is given deterministically.
After reading the whole input, the automaton says “equal” with probability
and says “not equal” with probability
In the previous languages, the blocks are nicely separated, but for this language the blocks overlap. Therefore, we modify the previous methods. As described in the first algorithm, the automaton splits the computation into two paths with equal probabilities at the beginning of the computation. In the first path, the corresponding event is implemented by executing two parallel procedures: the first procedure produces the probabilities for the odd-indexed blocks, and the second produces the probabilities for the even-indexed blocks. Similarly, in the second path, the event is implemented by also executing two parallel procedures. Thus, the previous algorithm also applies here by using this solution for overlapping blocks.

Fact 2
[8] If a binary language is recognized by a bounded-error PTM in space , then the binary language is recognized by a bounded-error PTM in space , where
Similarly, we can easily obtain the following two corollaries.
Corollary 1
If a binary language is recognized by a bounded-error PostPTM in space , then the binary language is recognized by a bounded-error PostPTM in space .
Corollary 2
If a binary language is recognized by a bounded-error PostPCA in space , then the binary language is recognized by a bounded-error PostPCA in space .
3.2 PostPFA protocols
In this section, we present PostPFA protocols for the following two non-regular unary languages: and . These languages are known to be verified by 2PFA verifiers [7] and by private alternating realtime automata [4]. Here, we use similar protocols but with certain modifications for PostPFAs.
Theorem 3.4
is verified by a PostPFA with perfect completeness, where .
Proof
Let be the th shortest member of () and let be the given string for . (If the input is the empty string or 0, then it is rejected deterministically.)
The verifier expects the certificate to be composed of block(s) followed by symbol , where each block has the form except the last one, which is 1. The verifier also never checks a new symbol on the certificate after reading a symbol. Let be the given certificate in this format:
where for each , , and . Any other certificate is detected deterministically, and then, the input is rejected. Let and .
The verifier checks that (1) is twice for each , (2) each block except the last one contains at least one 0 symbol, (3) the last block is 1, and (4) . Remark that these conditions are satisfied only for members: the expected certificate for is
and the total length of all blocks plus the single symbol is . In other words, , , …, .
At the beginning of the computation, the verifier splits the computation into two paths with equal probabilities, called the accepting path and the main path. In the accepting path, the computation ends in with probability and in some non-postselecting state with the remaining probability. Since there are blocks, it is easy to obtain this probability. This is the only path in which the verifier enters . Therefore, (the accepting path is selected with probability ).
While reading the input and the certificate, the main path checks (1) whether , (2) whether each block of the certificate except the last one contains at least one 0 symbol, and (3) whether the last block is 1. If one of these checks fails, the computation ends in state . The main path also creates subpaths for checking whether , , …, . After the main path starts to read a block beginning with a 0 symbol, it creates a subpath with half probability and stays in the main path with the remaining probability. Thus, the main path reaches the right endmarker with probability . On the other hand, the th subpath is created with probability , where .
The first subpath tries to read symbols from the input. If there are exactly symbols, i.e. , then the test is successful and the computation is terminated in a non-postselecting state. Otherwise, the test fails and the computation is terminated in state .
The second subpath is created after reading symbols from the input. Then, it also tries to read symbols from the input. If there are exactly symbols, i.e. , then the test is successful and the computation is terminated in a non-postselecting state. Otherwise, the test fails and the computation is terminated in state .
The other subpaths behave in exactly the same way. The last subpath checks whether . If all previous tests are successful, then .
It is clear that if the input is a member, say , and the verifier reads and , then . On the other hand, neither the main path nor any subpath enters state with nonzero probability. Therefore, any member is accepted with probability 1.
If the input is not a member, then one of the checks done by the main path or the subpaths fails, and so the verifier enters with nonzero probability. The probability of being in at the end, i.e. , is at least . Thus,
Therefore, any non-member is rejected with probability at least .
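Assuming the language of this theorem is the set of strings 0^n with n a power of two and that the certificate blocks halve in length as described, the verifier's structural checks on block lengths can be mimicked as follows (an illustrative sketch; the function name and return convention are ours):

```python
def expected_certificate_blocks(n: int):
    """Block lengths of the expected certificate for input 0^n, assuming
    the halving scheme sketched above: b_1 = n/2, b_{i+1} = b_i/2, and the
    last block has length 1. Returns None when the checks fail, i.e. when
    n is not a power of two (with n >= 2)."""
    blocks = []
    b = n // 2
    while b >= 1:
        blocks.append(b)
        b //= 2
    # the checks succeed iff every block is exactly twice the next one and
    # the block lengths plus the single end symbol cover the whole input
    halving_ok = all(blocks[i] == 2 * blocks[i + 1] for i in range(len(blocks) - 1))
    if blocks and halving_ok and sum(blocks) + 1 == n:
        return blocks
    return None

print(expected_certificate_blocks(16))  # [8, 4, 2, 1]
print(expected_certificate_blocks(12))  # None
```

In the protocol itself, these length comparisons are done probabilistically by the subpaths, not by counting, since the verifier has no counter.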
In the above proof, the verifier can also deterministically check whether the number of blocks is a multiple of for some . Thus, we can easily conclude the following result.
Corollary 3
is verified by a PostPFA with perfect completeness.
Theorem 3.5
is verified by a PostPFA with perfect completeness, where .
Proof
The proof is very similar to the previous one. Let be the th shortest member of (). Let be the given input for . (The decisions on shorter strings are given deterministically.) The verifier expects to obtain a certificate composed of blocks:
where is () if is odd (even). Let . The verifier never reads a new symbol after reading on the certificate.
The verifier checks the following equalities:
and
If we substitute with in the above equalities, then we obtain that and so .
At the beginning of the computation, splits into the accepting path and the main path with equal probabilities, and, as a result of the accepting path, it always enters with probability .
In the following paths, if the comparison is successful, then the computation is terminated in a non-postselecting state, and, if it is not, then the computation is terminated in state . The main path checks the equality .
For each , the main path also creates a subpath with probability and remains in the main path with the remaining probability. The th subpath checks the equality
where is added twice.
If all comparisons in the subpaths are successful, then we have
for some . Additionally, if the comparison in the main path is successful, then we obtain that . Thus, . Therefore, any member is accepted with probability 1 with the help of a certificate composed of blocks, where the length of each block is .
If the input is not a member, then one of the comparisons will not be successful. (If all were successful, then, as described above, the certificate would have blocks of length and the input would have length .) The minimum value of is at least and so . Therefore, any non-member is rejected with probability at least .
4 Postselecting models using magic coins
In this section, we allow recognizers and verifiers to use real-valued transition probabilities. We use a fact presented in our previous paper [5].
Fact 3
[5] Let be an infinite binary sequence. If a biased coin lands on heads with probability , then the value is determined correctly with probability at least after coin tosses, where it is guessed as the th digit of the binary number representing the total number of heads after all the coin tosses.
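The digit-extraction idea of Fact 3 can be illustrated deterministically by working with the expected number of heads instead of a random count. The encoding p = Σ_i x_i 2^(-i) and the toss count 2^t below are our simplifying assumptions for illustration; the paper's coin uses its own encoding and a larger number of tosses to guarantee concentration:

```python
def bias_from_bits(bits):
    """Encode a finite prefix x_1 x_2 ... of the binary sequence as the
    coin bias p = sum_i x_i * 2^-i (an illustrative encoding)."""
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

def guess_bit(heads: int, tosses: int, k: int) -> int:
    """Guess x_k from the total number of heads after `tosses` coin flips.
    With tosses = 2^t, the expected count is 2^t * p, whose binary digit
    at position t - k is exactly x_k under this encoding."""
    t = tosses.bit_length() - 1  # tosses is assumed to be a power of two
    return (heads >> (t - k)) & 1

bits = [1, 0, 1, 1, 0, 1]
p = bias_from_bits(bits)
tosses = 2 ** 20
expected_heads = round(tosses * p)  # a real run concentrates near this value
print([guess_bit(expected_heads, tosses, k + 1) for k in range(6)])
# [1, 0, 1, 1, 0, 1]
```

With actual coin tosses, the count deviates from its expectation, which is why Fact 3 needs many tosses per digit to make the error probability small.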
4.1 Algorithms using magic coins
Previously, we obtained the following result.
Fact 4
[5] Bounded-error linear-space sweeping PCAs can recognize uncountably many languages in subquadratic time.
For the language recognized by a sweeping PCA, we can easily design a sweeping PCA that recognizes by using the same idea given for PostPFAs. Since PostPFAs are equivalent to restartPFAs, and restartPFAs can also be implemented by sweeping PFAs, we can reduce the linear space in the above result to logarithmic space at the cost of an exponential slowdown, i.e., the padding part of the input can be recognized by a restartPFA in exponential expected time.
Corollary 4
Bounded-error logspace sweeping PCAs can recognize uncountably many languages in exponential expected time.
We can iteratively apply this idea and obtain new languages with better and better space bounds. We can define as for , and then it follows that can be recognized by a bounded-error sweeping PCA that uses space on the counter.
Corollary 5
The class of languages recognized by bounded-error sweeping PCAs with arbitrarily small non-constant space bounds is uncountable.
Now, we show how to obtain the same results by restricting the sweeping reading mode to the restarting realtime reading mode, or to the realtime reading mode with postselection. We start with the recognition of the following non-regular binary language, a modified version of the one given in [5]:
Theorem 4.1
For any , is recognized by a linear-space PostPCA with error bound .
Proof
Let be the given input of the form
where , and are positive integers, is divisible by 6, and for and . (Otherwise, the input is rejected deterministically.)
splits the computation into four paths with equal probabilities. In the first path, with the help of the counter, it makes the following comparisons:

for each , whether ,

for each , whether .
In the second path, with the help of the counter, makes the following comparisons:

for each , whether ,

whether (this also helps to set the counter to 0 for the upcoming comparisons),

for each , whether .
In the third path, it checks whether . In the fourth path, it checks whether .
It is easy to see that all comparisons are successful if and only if .
If every comparison in a path is successful, then the path enters with probability . If not, then it enters with probability 1. Therefore, if , then the input is accepted with probability 1 since . If , then the maximum accepting probability is obtained when only one of the paths enters . That is, . Thus, the input is rejected with probability at least . The error bound is .
Theorem 4.2
Linear-space PostPCAs can recognize uncountably many languages with error bound .
Proof
Let be the th shortest member of for . For any , we define the following language:
We prove our result by presenting a PostPCA, say , to recognize , where . Let be the given input of the form
where , and are positive integers, is divisible by 6, and for and . (Otherwise, the input is rejected deterministically.)
At the beginning of the computation, the automaton splits into two paths with equal probabilities. In the first path, it executes the PostPCA for described in the proof above with the following modification: in each path of , if every comparison is successful, then it enters state with probability ( enters the path with probability , and then enters state with probability ), and, if not, then it enters state with probability 1.
In the second path, the automaton sets the value of the counter to by reading the part of the input. Remark that if , then is for some . Then, it attempts to toss the coin times. After each coin toss, if the result is heads (resp., tails), then it moves two symbols (resp., one symbol) on the input. If is the total number of heads, then it reads symbols. During the attempt to read symbols, if the input is finished, then the computation ends in state with probability 1 in this path. Otherwise, it guesses the value with probability at least (described in detail at the end of the proof) and gives a parallel decision with probability , i.e., if the guess is 1 (resp., 0), then it enters state (resp., ) with probability .
If , then the probability of entering state is in the first path and at least in the second path. The probability of entering in the second path is at most . Thus, is accepted with probability at least .
If , then we have two cases. Case 1: . In this case, the probability of entering state is in the first path and at most in the second path. The probability of entering in the second path is at least . Thus, is rejected with probability .
Case 2: . In this case, the probability of entering state is at least in the first path and this is at least 4 times of the total probability of entering state , which can be at most
for . Then, the input is rejected with probability greater than .
As can be seen from the above analysis, when , guessing the correct value of is insignificant. Therefore, in the following part, we assume that when explaining how to guess correctly. Thus, we assume that :
for . In the second path, the automaton tosses the coin times and so it can read symbols from the input. In other words, it reads symbols from the part . Here we use an analysis similar to the one presented in [7]. We can write as
where ,