1 Introduction
Postselection.
Postselection is the power of discarding all runs of a computation in which an undesirable event occurs. This concept was introduced to the field of quantum complexity theory by Aaronson [1]. While unrealistic, postselection turned out to be an extremely useful tool for obtaining new and simpler proofs of major results about classical computation, and also for proving new results about quantum complexity classes. The most celebrated result is arguably the identity proved by Aaronson [1], which shows that the class of problems that can be solved by a bounded-error polynomial-time quantum algorithm with postselection (PostBQP) is equal to the class of problems that can be solved by unbounded-error polynomial-time classical algorithms (PP), and thus makes it possible to bridge quantum complexity classes and classical complexity classes.
Space-bounded quantum complexity classes.
The study of space-bounded quantum Turing machines was initiated by Watrous
[16]. Watrous showed in particular that any quantum Turing machine running in space s(n) can be simulated by an unbounded-error probabilistic Turing machine running in space O(s(n)). This result implies the identity PQL = PL, where PQL denotes the class of problems that can be solved by unbounded-error logarithmic-space quantum Turing machines, and PL denotes the class of problems that can be solved by unbounded-error logarithmic-space classical Turing machines. The main open question of the field is whether bounded-error quantum Turing machines can be simulated space-efficiently by bounded-error classical Turing machines. A major step towards establishing the superiority of space-bounded quantum Turing machines over space-bounded classical (bounded-error) Turing machines has been the construction by Ta-Shma [14] of logarithmic-space quantum algorithms for inverting well-conditioned matrices (it is unknown how to perform the same task classically in logarithmic space). While Ta-Shma's quantum algorithm used intermediate measurements, a version of this quantum algorithm without measurements was later constructed by Fefferman and Lin [6] (see also [5] for a related result on space-efficient error reduction for unitary quantum computation). Very recent works [7, 8] have further shown that many other problems from linear algebra involving well-conditioned matrices can be solved in logarithmic space by quantum algorithms as well, and additionally that intermediate measurements can be removed from any space-bounded quantum computation.
Our results.
In view of the impact of the concept of postselection on quantum complexity theory, and in view of the surge of recent activity on space-bounded quantum complexity classes, a natural question is to investigate the power of postselection for space-bounded quantum complexity classes. To our knowledge, this question has not been investigated so far in the literature (while the notion of postselection was previously studied in quantum automata theory [21]). In this paper, we tackle this question and obtain the following result (here PostBQL denotes the class of problems that can be solved by a bounded-error polynomial-time logarithmic-space quantum Turing machine that uses postselection — see Section 2 for a formal definition):
Main Theorem: PostBQL = PL.
This result thus gives a space-bounded version of the identity PostBQP = PP mentioned above for polynomial-time complexity classes. It enables us to bridge quantum and classical complexity classes for space-bounded computation as well, and thus suggests that postselection may become a useful tool for analyzing space-bounded (quantum and classical) computation. Actually, as a byproduct of our main result, we also obtain that PL coincides with the class of problems that can be solved by bounded-error logarithmic-space quantum algorithms that have no time bound (namely, under no time restriction, bounded-error logarithmic-space quantum algorithms are as computationally powerful as unbounded-error ones).
We additionally present several results about logarithmic-space quantum computation with postselection in Section 4.
Overview of our techniques.
As for the result proved by Aaronson [1], the nontrivial part of the proof of our main theorem is the simulation of a probabilistic machine by a postselecting quantum machine. The simulation technique given in [1]
requires a polynomial number of qubits, and thus cannot be used in our setting since we are limited to a logarithmic number of qubits. Therefore, we propose a different simulation, which is composed of three parts. First, we show how to simulate the computation of a logarithmic-space probabilistic Turing machine P by a logarithmic-width probabilistic circuit
(Section 3.1). Note that the computation process of P is represented by a mixture {(p_i, c_i)}, which means that the configuration is c_i with probability p_i. (It can be written as Σ_i p_i |c_i⟩⟨c_i| when the mixed state formalism [11] is used.) Here, we can assume that there are unique accepting and rejecting configurations c_acc and c_rej. Thus, the final mixture of P can be represented in the form {(p_acc, c_acc), (p_rej, c_rej)}, where p_acc > 1/2 if the input is a yes-instance, and p_acc < 1/2 if it is a no-instance. Second, we give a simulation of the probabilistic circuit by a logarithmic-space quantum Turing machine with postselection (Section 3.2). Note that this simulation is done in a coherent manner. Namely, if the mixture of P at some step is {(p_i, c_i)}, the quantum state of the simulating machine at the corresponding simulation step should be the normalized version of Σ_i p_i |c_i⟩. Thus, the simulation produces the normalized version of p_acc |c_acc⟩ + p_rej |c_rej⟩ as the final outcome. In fact, we use the power of postselection for this simulation, and the final outcome is obtained after postselecting on an event of exponentially small probability. The third part is fairly similar to the approach used in [1]: using a polynomial number of states constructed from the same number of copies of this final outcome, we use repetition and postselection to increase the success probability of the simulation (Section 3.3).

2 Preliminaries
2.1 Space-bounded probabilistic Turing machines
A classical space-bounded Turing machine has an input tape and a work tape. Both tapes are infinite and their cells are indexed by integers; each cell contains the blank symbol unless it is overwritten with a different symbol. The input tape has a read-only head and the work tape has a read/write head. Each head can access a single cell in each time step and, after each transition, it can stay on the same cell, move one cell to the right, or move one cell to the left.
The input alphabet is denoted Σ and the work tape alphabet is denoted Γ, neither of which contains the blank symbol #. Moreover, we set Σ̃ = Σ ∪ {#} and Γ̃ = Γ ∪ {#}. For a given string x, |x| represents the length of x.
Formally, a (space-bounded) probabilistic Turing machine (PTM) is a 7-tuple
P = (S, Σ, Γ, δ, s_I, s_a, s_r),
where S is the set of (internal) states, s_I ∈ S is the initial state, s_a and s_r (s_a ≠ s_r) are the accepting and rejecting states, respectively, and δ is the transition function described below.
At the beginning of the computation, the given input, say x, is placed on the input tape between the first cell and the |x|-th cell, the input tape head and the work tape head are placed on the cells indexed by 0, and the state is set to s_I. In each step, P evolves with respect to the transition function, and the computation is terminated after entering s_a or s_r. In the former (latter) case, the decision of "acceptance" ("rejection") is made. It must be guaranteed that the input tape head never leaves the region formed by the input and the two cells bordering it. The formal definition of δ is as follows:
Suppose that P is in state s ∈ S and reads σ ∈ Σ̃ and γ ∈ Γ̃ on the input and work tapes, respectively. Then, in one step, the new state is set to s′ ∈ S, the symbol γ′ ∈ Γ̃ is written on the cell under the work tape head, and the positions of the input and work tape heads are respectively updated with respect to d_i ∈ {−1, 0, 1} and d_w ∈ {−1, 0, 1}, with probability
δ(s, σ, γ → s′, γ′, d_i, d_w),
where the input (work) tape head moves one cell to the left if d_i = −1 (d_w = −1) and one cell to the right if d_i = 1 (d_w = 1). Remark that any transition with zero probability is never implemented. To be a well-formed PTM, for each triple (s, σ, γ), the outgoing transition probabilities must sum to 1:
Σ_{s′, γ′, d_i, d_w} δ(s, σ, γ → s′, γ′, d_i, d_w) = 1.
For a given input x, P can follow more than one computation path. A computation path either halts with a decision or runs forever. A halting path is called accepting (rejecting) if the decision of "acceptance" ("rejection") is made on this path. The accepting (rejecting) probability of P on x is the cumulative probability of all accepting (rejecting) paths.
A language L is said to be recognized by a PTM with unbounded error if and only if any x ∈ L is accepted with probability more than 1/2 and any x ∉ L is accepted with probability less than 1/2. A language L is said to be recognized by a PTM with error bound ε if and only if any x ∈ L is accepted with probability at least 1 − ε and any x ∉ L is rejected with probability at least 1 − ε. When ε < 1/2 is a constant (independent of the input), it is said that L is recognized with bounded error. As a special case, if all nonmembers of L are accepted with probability 0, the error is called one-sided. A PTM making only deterministic transitions (i.e., such that the range of the transition function is {0, 1}) is a deterministic Turing machine (DTM).
The range of the transition function can also be defined as the set of rational numbers in [0, 1]; the resulting machine, called a rational-valued PTM, can then make more than one transition with rational-valued probabilities in each step. Remark that all results presented in this paper also hold for rational-valued PTMs. A nondeterministic Turing machine (NTM) can be defined as a rational-valued PTM, and a language is said to be recognized by an NTM if and only if for any member there is at least one accepting path and for any nonmember there is no accepting path (or, equivalently, any member is accepted with nonzero probability and any nonmember is accepted with zero probability).
A language is recognized by a machine in (expected) time t(n) and space s(n) if the machine, on any given input x, runs no more than t(|x|) (expected) steps and visits no more than s(|x|) different cells on its work tape with nonzero probability.
The class PL (respectively L and NL) is the set of languages recognized by unbounded-error PTMs (respectively DTMs and NTMs) in logarithmic space (with no time restriction). It is known that each of these classes coincides with the subclass obtained by additionally requiring the running time of the corresponding machines to be polynomially bounded (note that the proof is nontrivial for PL [10]).
The class BPL (respectively RL) is the set of languages recognized by bounded-error PTMs (respectively one-sided bounded-error PTMs) in polynomial time and logarithmic space. In contrast to the three classes above, it is unknown whether these two classes coincide with their counterparts defined with no time restriction on the underlying machines.
A language L is in PL [2] if and only if there exists a polynomial-time logarithmic-space PTM P such that any x ∈ L is accepted by P with probability at least 1/2 and any x ∉ L is accepted by P with probability less than 1/2.
2.2 Turing machines with postselection
A postselecting PTM (PostPTM) has the ability to discard some predetermined outcomes and then make its decision based on the remaining outcomes, which are guaranteed to occur with nonzero probability (see [1, 21]). Formally, a PostPTM is a modified PTM with three halting states. A PTM has the accepting state s_a and the rejecting state s_r as its halting states. A PostPTM has an additional halting state s_n, called the non-postselecting halting state. In this paper, we require that a PostPTM halts absolutely, i.e., there is no infinite loop.
For a given input x, let p_a(x), p_r(x), and p_n(x) be the probabilities of the PostPTM M ending in s_a, s_r, and s_n, respectively. Since M halts absolutely, we know that
p_a(x) + p_r(x) + p_n(x) = 1.
Due to postselection, we discard the probability p_n(x) and then normalize p_a(x) and p_r(x) for the final decision. Thus, the input x is accepted (rejected) by M with probability
p_a(x) / (p_a(x) + p_r(x))   (respectively p_r(x) / (p_a(x) + p_r(x))).
The postselecting counterparts of BPL and RL are PostBPL and PostRL, respectively. (For instance, L is in PostBPL if and only if there are a polynomial-time logarithmic-space PostPTM M and a constant ε < 1/2 such that p_a(x)/(p_a(x)+p_r(x)) is at least 1 − ε when x is in L, and p_r(x)/(p_a(x)+p_r(x)) is at least 1 − ε when x is not in L.) We also consider the class of languages recognized with no error (or exactly) by polynomial-time logarithmic-space PostPTMs (i.e., L is in this class if and only if there is a polynomial-time logarithmic-space PostPTM such that p_a(x) > 0 and p_r(x) = 0 when x is in L, and p_r(x) > 0 and p_a(x) = 0 when x is not in L).
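The normalization step above is simple enough to state as code. The following sketch (ours, not from the paper) computes the postselected acceptance and rejection probabilities of a hypothetical PostPTM from its three halting probabilities, using exact rational arithmetic:

```python
from fractions import Fraction

def postselect(p_acc, p_rej):
    """Normalize the accepting/rejecting probabilities after discarding
    the non-postselecting outcome (requires p_acc + p_rej > 0)."""
    total = p_acc + p_rej
    if total == 0:
        raise ValueError("postselected event has probability zero")
    return p_acc / total, p_rej / total

# Hypothetical machine: accepts w.p. 3/16, rejects w.p. 1/16,
# and halts in the non-postselecting state w.p. 12/16.
acc, rej = postselect(Fraction(3, 16), Fraction(1, 16))
```

Note that the `total == 0` guard mirrors the requirement that some postselecting outcome occurs with nonzero probability.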
2.3 Space-bounded quantum Turing machines
The initial quantum Turing machine (QTM) models (e.g., [4, 3, 16]) were defined as fully quantum. While quantum circuits have become the more widely used model in the literature, QTMs are still the main computational model for investigating space-bounded complexity classes. However, their definitions have been modified since the 1990s (e.g., [17, 15, 14, 6]). The main modifications are that the computation is governed classically and that the quantum part behaves like a quantum circuit. This paper follows these modifications. To be more precise, our QTM is a PTM augmented with a quantum tape. Here, the quantum tape is designed like a quantum circuit, i.e., it contains a qubit (or qudit) in each tape cell and it can have more than one tape head so that a quantum gate can be applied to a few qubits at the same time.
We remark that the results given in this paper can also be obtained with the other space-bounded QTM models defined in the literature [18, 17, 20, 15, 14], where algebraic numbers are used as transition values. The main advantage of the aforementioned modifications to QTMs is to simplify the proofs and the descriptions of quantum algorithms.
Formally, a (space-bounded) QTM is a 9-tuple
M = (S, Σ, Γ, Ω, δ_q, δ_c, s_I, s_a, s_r),
where, differently from PTMs, the transition function is composed of two functions δ_q and δ_c that are responsible for the transitions of the quantum and classical parts, respectively, and Ω is the set of contents of a classical register storing quantum measurement outcomes. (Similarly to PTMs, S is the set of internal states, Σ is the input alphabet, Γ is the work tape alphabet, and s_I, s_a, and s_r are respectively the initial state, the accepting state, and the rejecting state.) As additional physical structure, M has a quantum tape with k heads, and the classical register storing a value in Ω = {1, …, ω}, where k and ω are constants (independent of the input given to M). The quantum tape heads are numbered from 1 to k. For simplicity, we assume that the quantum tape contains only qubits (with basis states |0⟩ and |1⟩) in its cells. Each cell is set to |0⟩ at the beginning of the computation. For a given input x, the classical part is initialized as described for PTMs. The tape heads on the quantum tape are placed on the qubits numbered from 0 to k − 1.
The overall computation of M is governed classically. Each transition of M has two phases, quantum and classical, which alternate. The transition functions δ_q and δ_c are defined differently from the transition functions of PTMs. Suppose that M is in state s ∈ S and reads σ and γ, respectively. For each triple (s, σ, γ), δ_q(s, σ, γ) can be the identity operator, a projective measurement (in the computational basis), or a unitary operator. If it is the identity operator, the quantum phase is skipped by setting the value of the classical register to 1 (in Ω). If the quantum operator is unitary, then the corresponding unitary operator is applied to the qubits under the heads on the quantum tape, and the value of the classical register is set to 1 (in Ω). If it is a measurement operator, then the corresponding projective measurement is performed on the qubits under the heads on the quantum tape, and the measurement outcome, represented by an integer between 1 and m (in Ω), is written into the classical register, where m is the total number of possible outcomes of the measurement operator.
After the quantum phase, the classical phase is implemented. For each quadruple (s, σ, γ, τ), δ_c(s, σ, γ, τ) returns the new state, the symbol written on the work tape, and the updates of all heads, where τ ∈ Ω is the current content of the classical register.
The termination of the computation of M is the same as for PTMs, i.e., it is done by entering the accepting state s_a or the rejecting state s_r. One time step corresponds to a single transition. We add the number of qubits visited with nonzero probability during the computation (as well as the number of cells visited on the classical work tape) to the space usage.
Remark that any QTM using superoperators can be simulated by a QTM using unitary operators and measurements with negligible memory and time overheads, i.e., by using extra quantum and classical states, any superoperator can be implemented by unitary operators and measurements in a constant number of steps (e.g., [11, 13]).
Since the computation of the QTM defined above is controlled classically, a postselecting QTM (PostQTM) can be defined similarly to PostPTMs: the PostQTM has an additional classical halting state s_n, and any computation that ends in s_n is discarded when calculating the overall accepting and rejecting probabilities on the given input.
The quantum counterparts of the classes above are defined analogously, with QTMs that use algebraic numbers as transition amplitudes: BQL and RQL are the quantum counterparts of BPL and RL, PostBQL and PostRQL those of PostBPL and PostRL, and so on. (Note that the quantum counterpart of NL is here defined by the criterion of nonzero accepting probability of the underlying machine, not as the certificate-based counterpart.)
3 Main Result
In this section, our main theorem (PostBQL = PL) is proved. We start with the easy inclusion.
Theorem 1: PostBQL ⊆ PL.
Proof: Any polynomial-time logarithmic-space PostQTM M can easily be converted into a polynomial-time logarithmic-space QTM M′ such that M′ enters the accepting and rejecting states with equal probability whenever M enters the non-postselecting halting state. The balance between the accepting and rejecting probabilities is thus preserved, so the language recognized by M with bounded error under postselection is recognized by M′ with unbounded error; since unbounded-error polynomial-time logarithmic-space quantum Turing machines recognize exactly the languages in PL, the claim follows.
In the rest of this section, we give the proof of the following inclusion.
Theorem 2: PL ⊆ PostBQL.
As described in Section 1, the proof of Theorem 2 consists of three parts, each of which is given in one of the next three subsections. We start with an overview of the first part. Let L be a language in PL. Then there exists a PTM P recognizing L with unbounded error such that P on input x halts within T = n^c steps using at most d·log n space, for some fixed positive integers c and d, where n = |x|.
Without loss of generality, we can assume that P splits into exactly two paths in every step, that the work tape alphabet of P has only two symbols 0 and 1, and that P halts only when the work tape contains only blanks and both tape heads are placed on the cells indexed by 0, i.e., there exist a single accepting configuration and a single rejecting configuration. Let |S| be the number of internal states.
We fix x as the given input, with length n. Any configuration of P is represented by a 4-tuple of binary strings
c = (s, i, w, j),
where s is (the binary encoding of) the internal state, i is the position of the input tape head, w is the content of the work tape, and j is the position of the work tape head. (We also assume that w is always a binary string, which does not contain any blank symbol.) The set of all configurations is denoted by C; note that |C| is bounded by some polynomial in n. The length of any configuration is O(log n).
Based on P and x, we define a stochastic matrix A, called the configuration matrix, whose columns and rows are indexed by configurations and whose (c′, c)-th entry represents the probability of going from c to c′ in one step. Then, the whole computation of P on x can be traced by a |C|-dimensional column vector v_t, called the
configuration vector: v_t = A v_{t−1} = A^t v_0, where v_t
represents the probability distribution of the configurations after the
t-th step. Here, v_0 is the initial configuration vector, having a single nonzero entry, equal to 1, corresponding to the initial configuration, and v_T is the final configuration vector, having at most two nonzero entries that keep the overall accepting and rejecting probabilities. Since the computation splits into two paths with equal probability in each step, the overall accepting probability p_acc and rejecting probability p_rej are respectively of the forms
p_acc = α / 2^T  and  p_rej = β / 2^T,
where α and β are nonnegative integers and α + β = 2^T.
We present a simulation of the above matrix-vector multiplication in logarithmic space. It is clear that keeping all entries of a single configuration vector separately requires polynomial space in n. On the other hand, a single configuration can be kept in logarithmic space. Therefore, we keep a mixture of configurations as a single summation for any time step. In other words, we can keep v_t as
Σ_i p_i c_i,
where each coefficient p_i represents the probability of being in the corresponding configuration c_i. The transition from v_t to v_{t+1} can be obtained in a single step by applying A. However, in our simulation, we do this in |C| substeps. The idea is as follows: in the i-th substep, we check whether our mixture contains c_i or not. If it does, then c_i is evolved to c_i^h and c_i^t, the configurations obtained from c_i in a single step when the outcome of the coin is respectively heads or tails. In this way, from the mixture corresponding to v_t, we obtain the next mixture:
Σ_i p_i ((1/2) c_i^h + (1/2) c_i^t).
Then, the final mixture is
p_acc c_acc + p_rej c_rej,
where c_acc and c_rej are the accepting and rejecting configurations, respectively.
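As an illustration of tracing the computation through configuration vectors, the following toy sketch (our example, not the machine P of the proof) iterates v_t = A v_{t−1} for a three-configuration chain in which each coin flip either moves the start configuration to the accepting configuration or leaves it in place:

```python
from fractions import Fraction

half = Fraction(1, 2)
# Toy configuration set: c0 (start), c_acc, c_rej; the halting
# configurations are absorbing. Columns of A are indexed by the
# current configuration, rows by the next one.
A = [
    [half, Fraction(0), Fraction(0)],  # to c0
    [half, Fraction(1), Fraction(0)],  # to c_acc
    [Fraction(0), Fraction(0), Fraction(1)],  # to c_rej
]

def step(A, v):
    """One application of the configuration matrix: v' = A v."""
    n = len(v)
    return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

v = [Fraction(1), Fraction(0), Fraction(0)]  # initial configuration c0
T = 3
for _ in range(T):
    v = step(A, v)
# v[1] and v[2] now hold the overall accepting/rejecting probabilities.
```

Here v[1] = 1 − (1/2)^T = 7/8 plays the role of p_acc = α/2^T in the final configuration vector above.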
We present the details of this simulation in the following subsection.
3.1 Probabilistic circuit
In this subsection, it is shown that we can construct, in deterministic logarithmic space, a logarithmic-width and polynomial-depth probabilistic circuit C_x that simulates P on x.
Note that a logarithmic-space DTM can easily output each element of C. Moreover, for any c_i ∈ C, it can also easily output the two possible next configurations c_i^h and c_i^t such that P switches from c_i to c_i^h if the result of the coin flip is heads and from c_i to c_i^t if the result of the coin flip is tails.
A logarithmic-space DTM D described below can output the desired probabilistic circuit with width O(log n), where (i) the first bit is named the random bit and is used for the coin flip, (ii) the second and third bits are named the block control bit and the configuration control bit and are used to control the transitions between configurations in each time step, and (iii) the remaining bits hold a configuration of P on x.
The circuit consists of T blocks, output one after the other by D, where each block corresponds to a single time step of P on x:
B_1 B_2 ⋯ B_T,
where each block is identical, i.e., each block implements the transition matrix A operating on the configurations. Remark that, after B_t, we have the mixture representing v_t.
Before each block, the random bit is set to 0 or 1 with equal probability and the block control bit is set to 1. As long as the block control bit is 1, the configurations are checked one by one in the block. Once it is set to 0, the remaining configurations are skipped.
Any block is composed of |C| parts, where each part corresponds to a single configuration:
P_1 P_2 ⋯ P_|C|.
Here, P_i implements the transitions from c_i in a single step. In P_i, we do the following:
1. If the block control bit is 0, then SKIP the remaining items. Otherwise, CONTINUE.
2. SET the configuration control bit to 1 (here we tentatively assume that the current configuration is c_i).
3. Check whether the current configuration is indeed c_i:
   - If it is not c_i, SET the configuration control bit to 0 and SKIP the remaining items. (Remark that the block control bit is still 1 in this case, and thus the next configuration will be checked in P_{i+1}.)
   - Otherwise (i.e., if the current configuration is indeed c_i), CONTINUE.
4. SWITCH from c_i to c_i^h if the random bit is 0 and SWITCH from c_i to c_i^t if the random bit is 1.
5. SET the block control bit to 0.
After all blocks, D outputs a last block called the decision block. In this block, it is checked whether the final configuration is c_acc or c_rej. If it is c_acc (resp. c_rej), then the first bit of the decision block is set to 1 (resp. 0).
For the above operations, we can use gates operating on no more than four bits: the first three bits and one bit from the rest at each time. With a sequence of such gates, we can determine whether we are in c_i or not. Similarly, with another sequence of gates, we can implement the transition from c_i to c_i^h and, with yet another sequence, the transition from c_i to c_i^t. Using sequential gates allows us to keep the size of every gate at no more than 4 bits, as shown in Fig. 1 (where the gates of each sequence are applied one after the other).
When physically implementing the above circuit C_x, before each block the circuit will be in a single configuration; during the execution of the block, only the part corresponding to this configuration will be active, and thus the circuit will switch to one of the two possible next configurations. After the decision block, we will observe the first bit as 1 and 0 with probabilities p_acc and p_rej, respectively.
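A minimal sketch of this block-by-block execution (ours; the configurations and transitions are hypothetical) follows the control logic literally for every sequence of random bits and averages over them to recover the exact acceptance probability:

```python
from fractions import Fraction
from itertools import product

def run_blocks(coins, config, configs, heads, tails):
    """Execute the blocks of the circuit on one fixed sequence of
    random bits, scanning the parts P_1 ... P_|C| of each block."""
    for random_bit in coins:          # one fresh random bit per block
        block_control = 1
        for c_i in configs:           # parts of the block, in order
            if block_control == 0:
                continue              # remaining parts are skipped
            if config != c_i:
                continue              # not the active configuration
            # active part: branch on the random bit, then close the block
            config = heads[c_i] if random_bit == 0 else tails[c_i]
            block_control = 0
    return config

configs = ["c0", "c_acc", "c_rej"]
heads = {"c0": "c_acc", "c_acc": "c_acc", "c_rej": "c_rej"}
tails = {"c0": "c0", "c_acc": "c_acc", "c_rej": "c_rej"}

T = 3
# Average over all 2^T coin sequences for the exact acceptance probability.
p_acc = Fraction(sum(run_blocks(coins, "c0", configs, heads, tails) == "c_acc"
                     for coins in product((0, 1), repeat=T)), 2 ** T)
```

Only one part per block is ever active, mirroring the fact that the physical circuit is always in a single configuration between blocks.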
Remark that the set of all possible gates which can be used in the above circuit is finite and independent of the input x. The only probabilistic gate is a single-bit operator implementing a fair coin toss. The rest of the gates are deterministic; they are essentially controlled operators of dimension at most 16 × 16.
Before continuing with the quantum part, we make further simplifications on C_x. As 2-bit AND and OR gates (which we assume are represented by matrices) and the 1-bit NOT gate form a universal gate set for classical circuits, each deterministic gate (operating on at most 4 bits) can be replaced by some finite number of NOT, AND, OR, and 1-bit resetting gates with the help of a few extra auxiliary bits used for intermediate calculations, which are appended to the bottom part of the circuit. Let G be the new set of our gates, where the coin gate implements the fair coin by outputting the values 0 and 1 with equal probability, and these values are used by the deterministic gates whenever needed.
We denote the simplified circuit by C_x, or shortly C. Let w be the width of C (note that w = O(log n)). Thus, we have a circuit C such that the probability of observing 1 (resp. 0) on the first bit is p_acc (resp. p_rej).
3.2 QTM part
In this subsection, we give a logarithmic-space postselecting QTM that simulates the computation of C in a coherent manner, as described in Section 1.
A logarithmic-space (postselecting) QTM can trace the computation of C on its quantum tape with the help of its classical part. Since the circuit C is deterministic logarithmic-space constructible, the classical part of the QTM helps to create the parts of C on the quantum tape whenever needed. Moreover, any mixture of the configurations of C is kept in a pure state of O(log n) qubits (described below).
The QTM uses w + 2 active qubits on the quantum tape for tracing C on the input. The last two qubits are auxiliary, and the first w qubits are used to keep the probabilistic state of C. We consider the quantum tape as a logarithmic-width quantum circuit simulating C.
For each gate of C, say G, we apply a unitary gate (operator) U_G operating on at most 4 qubits. Therefore, we use 4 tape heads on the quantum tape.
During the simulation, the first w qubits are always kept in a superposition, and after each unitary operator the last qubit or the last two qubits are measured. If the outcome is |0⟩ or |00⟩, then the computation continues. Otherwise, the computation is terminated in the non-postselecting state.
In the probabilistic circuit C, the coin gate is applied to the first bit. For each coin gate, we apply a unitary operator U_coin
on the first and the last qubits, measure the last qubit, and continue if |0⟩ is observed: conditioned on this outcome, U_coin maps (up to normalization) the content of the first qubit to (1/2)(|0⟩ + |1⟩), the remaining amplitude being moved onto the ancilla outcome |1⟩.
Thus, the coin-flipping operator can easily be implemented.
For the other operators (including the ones given below), we use the techniques given in [12]. For any G ∈ G other than the coin gate, we apply a unitary operator U_G acting on four qubits. Before applying U_G, the quantum part is in a state of the form
(Σ_i α_i |c_i⟩) |00⟩,
since the last two qubits are measured beforehand and any outcome other than |00⟩ is discarded by entering the non-postselecting state. Thus, only 16 entries of U_G affect the above quantum state. We construct U_G step by step as follows. These 16 entries are set to the corresponding values of G. Thus, the probabilistic state, which is kept in the pure state, can be traced exactly up to some normalization factor.
Without loss of generality, we assume that (by reordering the basis states) these 16 values are placed in the top-left corner. Then, U_G is of the form
U_G = (1/ℓ) ( G  B ; C  D ),
where ℓ is the normalization factor and B, C, and D are matrices to be determined.
The entries of B are set in order to make the first four rows pairwise orthogonal:
the values are set column by column, where the values of the first, second, and third columns of B are chosen so that the first row becomes orthogonal to the second, the third, and the fourth ones, respectively, and similarly for the remaining pairs of rows. Since G is composed of integers, B is also composed of integers.
The entries of D are set in order to make the first four rows of equal length ℓ, where ℓ² is an integer:
the diagonal entries of D are picked as the square roots of suitable integers. Remark that the entries of D do not change the pairwise orthogonality of the first four rows. Moreover, at this point, the first four rows become pairwise orthonormal (due to the normalization factor
1/ℓ). One can easily fill up the rest of the matrix with arbitrary algebraic numbers in order to obtain a complete unitary matrix.
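The construction above completes an integer gate block into a unitary by hand. As a sanity check of the underlying linear algebra, the sketch below uses the standard unitary-dilation formula instead (our shortcut for diagonal blocks, not the paper's integer orthogonalization): any diagonal matrix, scaled by its largest entry ℓ, embeds as the top-left block of a unitary of twice the dimension.

```python
from math import sqrt

def dilate_diag(diag):
    """Unitary dilation of a diagonal matrix M: scale by the largest
    entry l, then embed K = M/l as the top-left block of the unitary
    [[K, D], [D, -K]] with defect block D = sqrt(I - K^2).
    (Sanity-check sketch; the paper instead orthogonalizes the rows
    of an arbitrary integer block directly.)"""
    l = max(abs(d) for d in diag)
    n = len(diag)
    U = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i, d in enumerate(diag):
        k = d / l
        defect = sqrt(1 - k * k)
        U[i][i], U[i][n + i] = k, defect
        U[n + i][i], U[n + i][n + i] = defect, -k
    return U, l

def gram(U):
    """Compute U^T U to check unitarity."""
    n = len(U)
    return [[sum(U[k][i] * U[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

U, l = dilate_diag([1, 2])   # a hypothetical gate block diag(1, 2)
G = gram(U)                  # should be the 4 x 4 identity
```

Applying U to a state padded with ancilla |0⟩'s and postselecting the ancillas back to |0⟩ implements M/ℓ, which is exactly the role played by U_G together with the measurement of the auxiliary qubits.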
Since the set of operators U_G depends only on the transitions of the PTM P (and not on the input), each U_G can be kept in the description of the QTM.
By using the above quantum operators, we can simulate C with exponentially small probability. Note only that, due to the normalization factors, the computation is terminated in the non-postselecting state with some probability after each application of a unitary gate.
At the end of the simulation of C, we separate the first qubit from the rest of the qubits, each of which is set to |0⟩. Then, we have this unnormalized quantum state in the first qubit:
p_rej |0⟩ + p_acc |1⟩.
The operator (1/2)(|0⟩⟨0| + |0⟩⟨1| + |1⟩⟨0| − |1⟩⟨1|) maps the above quantum state to
(1/2)(|0⟩ + (p_rej − p_acc)|1⟩).
Since this operator can also be implemented with postselection by using an extra qubit, the new unnormalized quantum state is set to |ψ⟩ = |0⟩ + (1 − 2a)|1⟩, where a = p_acc.
If a = 1/2, then the quantum state |ψ̃⟩, that is, the normalized version of |ψ⟩, is identical to |0⟩. If a < 1/2, then |ψ̃⟩ lies between |0⟩ and |+⟩, and thus it is closer to |+⟩ compared to |−⟩. If a > 1/2, then |ψ̃⟩ lies between |0⟩ and |−⟩, and thus it is closer to |−⟩ compared to |+⟩.
The measurement in the {|+⟩, |−⟩} basis means that we rotate the quantum state by angle π/4 in the counterclockwise direction and then make a measurement in the computational basis. Thus, observing |−⟩ (|+⟩) in the former case is equivalent to observing |0⟩ (resp. |1⟩) in the latter case.
After making a measurement in the {|+⟩, |−⟩} basis, we can easily distinguish the case where a is close to 0 from the case where a is close to 1, with bounded error. In the case where a is close to 1/2, however, the probabilities of observing these two basis states can be very close to each other. In Section 3.3, we use a modified version of the trick used by Aaronson [1] to increase the success probability. Actually, we will need to run the above QTM polynomially many times sequentially in logarithmic space.
3.3 Executing a series of QTMs
Let k be our integer parameter from the set {0, 1, …, T}, where T is the running time of the PTM P. For each k, we consider a QTM M_k as follows. First, we execute the above QTM of Section 3.2, and then transform |ψ⟩ = |0⟩ + (1 − 2a)|1⟩ to
|ψ_k⟩ = |0⟩ + 2^k (1 − 2a)|1⟩
in k iterations. In each iteration, we combine the first qubit with another qubit in state |0⟩, apply a two-qubit quantum operator whose restriction to second-qubit outcome |0⟩ realizes the (subnormalized) map |0⟩ ↦ (1/2)|0⟩ and |1⟩ ↦ |1⟩ on the first qubit — i.e., it doubles the amplitude of |1⟩ relative to that of |0⟩ —
and then the second qubit is measured. If the measurement outcome is |0⟩, then the computation continues. Otherwise, the computation is terminated by entering the non-postselecting state. (By induction, we can easily see that |ψ_j⟩ = |0⟩ + 2^j (1 − 2a)|1⟩ up to an overall positive factor.) Note that, for each k, the QTM M_k runs in logarithmic space, as the QTM described in Section 3.2 runs in O(log n) space and the counter for the iterations creating |ψ_k⟩ needs O(log n) space as well.
By substituting 1 − 2a = (β − α)/2^T, the quantum state |ψ_k⟩ can be rewritten as
|ψ_k⟩ = |0⟩ − (2^k (α − β)/2^T) |1⟩.
It is easy to see that
- when a < 1/2 (i.e., α < β), the normalized state of |ψ_k⟩ lies in the first quadrant, and thus it is closer to |+⟩ than to |−⟩, and
- when a > 1/2 (i.e., α > β), it lies in the fourth quadrant, and thus it is closer to |−⟩ than to |+⟩.
Case a > 1/2: As a = α/2^T (recall that a is the accepting probability of the PTM P on input x, which halts in T steps, and that α and β = 2^T − α are the numbers of accepting and rejecting paths), we have
2a − 1 = (α − β)/2^T ≥ 1/2^T.
Thus, there exists a value of k, say k_0 ∈ {0, 1, …, T}, such that
2^{k_0}(2a − 1) ∈ [1/2, 1].
Then, the quantum state |ψ_{k_0}⟩ = |0⟩ − 2^{k_0}(2a − 1)|1⟩ lies between |0⟩ − (1/2)|1⟩ and |0⟩ − |1⟩ (see Fig. 2). Thus, the probability of observing |−⟩ after measuring |ψ_{k_0}⟩ in the {|+⟩, |−⟩} basis is at least
9/10,
since this probability equals (1 + c)² / (2(1 + c²)) ≥ 9/10 for c = 2^{k_0}(2a − 1) ∈ [1/2, 1].
Case a < 1/2: This case is similar to the previous one. There exists a value of k, say k_0, such that
2^{k_0}(1 − 2a) ∈ [1/2, 1].
Then, the quantum state |ψ_{k_0}⟩ lies between |0⟩ + (1/2)|1⟩ and |0⟩ + |1⟩ (see Fig. 2). Thus, the probability of observing |+⟩ when measuring |ψ_{k_0}⟩ in the {|+⟩, |−⟩} basis is at least 9/10.
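The case analysis can be checked numerically. In the sketch below (hypothetical T and α; the constants follow the parameterization above), doubling the |1⟩-amplitude of |ψ⟩ = |0⟩ + (1 − 2a)|1⟩ eventually lands its magnitude in [1/2, 1], at which point the {|+⟩, |−⟩} measurement succeeds with probability at least 9/10:

```python
from fractions import Fraction

def prob_minus(c):
    """P(|->) when measuring the unnormalized state |0> + c|1>
    in the {|+>, |->} basis: |<-|psi>|^2 / <psi|psi>."""
    return (1 - c) ** 2 / (2 * (1 + c * c))

T = 10
alpha = 2 ** (T - 1) + 1               # a yes-instance: a = alpha/2^T > 1/2
a = Fraction(alpha, 2 ** T)
c0 = 1 - 2 * a                         # |1>-amplitude of |psi_0>

# M_k works with amplitude 2^k * (1 - 2a); find k0 with magnitude in [1/2, 1]
half = Fraction(1, 2)
k0 = next(k for k in range(T + 1) if half <= abs(2 ** k * c0) <= 1)
p_best = prob_minus(2 ** k0 * c0)

# every M_k favors |->, and M_{k0} does so with probability >= 9/10
always_biased = all(prob_minus(2 ** k * c0) > half for k in range(T + 1))
```

With the smallest possible gap (α = 2^{T−1} + 1), the doubling has to run almost all the way to k = T before the bias becomes constant.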
Now the overall quantum algorithm is as follows:

Prepare counter to . For each , the following steps are implemented.

We execute the above QTM , and make the measurement at the end in basis. (Note that the execution can be discarded by entering the nonpostselecting state in the procedure of Section 3.2.)

If the measurement result corresponds to , then we reset the quantum register to all (note that this is possible using the classical control since all the non qubits are induced only by postselection, and thus we know what states they are in), and add to .

If the measurement result corresponds to , then we reset the quantum register to all , and add to .


If (namely, we observe in all executions), then the input is rejected.

If (namely, we observe in all executions), then the input is accepted.

Otherwise (namely, if we observe the outcomes and at least once in some executions), the computation is terminated in the nonpostselecting state.
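To see why keeping only the all-agreeing executions works, here is a toy classical model in Python of the postselected repetition: a single execution yields the first outcome with probability p, runs in which the k executions disagree are discarded, and the per-execution bias is amplified exponentially. The values of p and k below are placeholders; the actual bias and number of repetitions are determined by the construction above.

```python
def postselected_accept_prob(p, k):
    """Probability that all k executions give the first outcome,
    conditioned on all k executions agreeing (all other runs are
    discarded by postselection)."""
    all_first = p ** k
    all_second = (1 - p) ** k
    return all_first / (all_first + all_second)

# A slight per-execution bias becomes an overwhelming majority
# after postselecting on k agreeing executions:
p = 0.55          # placeholder per-execution bias
assert postselected_accept_prob(p, 1) == p
assert postselected_accept_prob(p, 50) > 0.99
```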
Note that the overall quantum algorithm is implemented in logarithmic space, since the counter is clearly implemented in space, is also implemented in space, and each iteration of step 1 is done by reusing the classical and quantum registers.
The analysis of the algorithm is as follows:

When , the probability of observing is always greater than in each execution and at least once it is times more. Thus, if , the probability of observing all ’s is at least times more than the probability of observing all ’s after all executions.

When , the probability of observing is always greater than in each execution and at least once it is times more. Thus, if , the probability of observing all ’s is at least times more than the probability of observing all ’s after all executions.
Therefore, after normalizing the final accepting and rejecting postselecting probabilities, it follows that is recognized by a polynomialtime logarithmicspace postselecting QTM with error bound . This completes the proof of Theorem 2. (The error bound can easily be decreased by using the standard probability amplification techniques.)
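The amplification mentioned in the last sentence is the standard majority-vote trick; the following sketch illustrates how the error bound drops under repetition (the single-run error 1/4 and the repetition count 31 are placeholder values, not those of the construction).

```python
from math import comb

def majority_error(eps, t):
    """Error probability of a majority vote over t independent runs,
    each of which errs independently with probability eps (t odd):
    the vote is wrong iff more than half of the runs err."""
    return sum(comb(t, i) * eps**i * (1 - eps)**(t - i)
               for i in range(t // 2 + 1, t + 1))

# A single run with error 1/4 becomes almost surely correct after 31 runs:
assert majority_error(0.25, 1) == 0.25
assert majority_error(0.25, 31) < 0.01
```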
3.4 Additional result
Additionally, we can show that is contained in the class of languages recognized by logarithmicspace boundederror QTMs that halt in expected exponential time.
Theorem 3: and , where () is the class of languages recognized by logarithmicspace boundederror PTMs (QTMs) that halt in expected exponential time.
Proof.
Let be a polynomialtime logarithmicspace PostPTM. By restarting the whole computation from the beginning instead of entering the nonpostselecting state, we can obtain a logarithmicspace exponentialtime PTM from : (i) the restarting mechanism does not require any extra space, and (ii) since produces at least exponentially small halting probability in polynomial time, halts with probability 1 in expected exponential time. Both machines recognize the same language with the same error bound, since the restarting and postselecting mechanisms can be used interchangeably [19, 21], i.e., the accepting and rejecting probabilities of and are the same on every input. Thus, we can conclude that . In the same way, we can obtain that . ∎
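The interchangeability of restarting and postselecting used in the proof can be illustrated by a small classical simulation: restarting recovers exactly the postselected conditional distribution, at the cost of an expected number of runs inversely proportional to the postselecting probability (exponential in the proof). The probabilities below are placeholders, chosen only so that the postselecting outcomes are rare.

```python
import random

def run_with_restarts(p_acc, p_rej, rng):
    """Simulate the restarting PTM: instead of entering the
    nonpostselecting state, restart the whole computation until a
    run ends in accept or reject."""
    runs = 0
    while True:
        runs += 1
        u = rng.random()
        if u < p_acc:
            return "accept", runs
        if u < p_acc + p_rej:
            return "reject", runs
        # otherwise: the run would be nonpostselecting; restart

rng = random.Random(0)
p_acc, p_rej = 0.02, 0.01      # placeholder (rare) postselecting outcomes
results = [run_with_restarts(p_acc, p_rej, rng) for _ in range(20000)]
acc_frac = sum(r == "accept" for r, _ in results) / len(results)
mean_runs = sum(n for _, n in results) / len(results)

# The conditional (postselected) acceptance probability is recovered:
assert abs(acc_frac - p_acc / (p_acc + p_rej)) < 0.02
# Expected number of runs is 1/(p_acc + p_rej), large when the
# postselecting probability is small:
assert abs(mean_runs - 1 / (p_acc + p_rej)) < 2.0
```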
As by definition and Watrous showed [17], our main result () leads to the following equivalence among , and .
Corollary 1: .
We leave open whether is contained in .
4 Related Results
In this section, we provide several results on logarithmicspace complexity classes with postselection. The first result is a characterization of by logarithmicspace complexity classes.
Theorem 4: .
Proof.
We start with the first equality . Let . Since nondeterministic logarithmic space is closed under complementation [9], is also in . Then, there exist polynomialtime logarithmicspace NTMs and recognizing and . Based on and , we can construct a polynomialtime logarithmicspace PostPTM such that executes and with equal probability on the given input. Then, accepts the input if accepts, and rejects the input if accepts. Any other outcome is discarded by . Therefore, (i) any is accepted with nonzero probability and rejected with zero probability by , and (ii) any is accepted with zero probability and rejected with nonzero probability by . Thus, is recognized by with no error, and hence .
Let . Then, there exists a polynomialtime logarithmicspace PostPTM recognizing with no error. Based on , we can construct a polynomialtime logarithmicspace NTM such that executes on the given input and switches to the rejecting state if ends in the nonpostselecting halting state. Thus, accepts all and only strings in . Therefore, .
We now turn to the equality . It is trivial that . To complete the proof, it is enough to show that . If a language is recognized by a polynomialtime logarithmicspace PostPTM with onesided bounded error, then it is also recognized by a polynomialtime logarithmicspace NTM , where is modified from such that whenever enters the nonpostselecting state, enters the rejecting state. ∎
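A toy sketch of the zero-error construction in the first part of the proof: the PostPTM runs one of the two nondeterministic machines with equal probability, and every run other than a successful accept of the first machine or a successful accept of the second is discarded. The language, the success probability 1/4, and the machines below are all placeholders.

```python
import random

def post_ptm_run(x, in_L, rng):
    """One run of the PostPTM built from an NTM N1 for L and an NTM N2
    for the complement of L. Toy model: a nondeterministic guess
    'succeeds' with probability 1/4 whenever the corresponding machine
    has an accepting path on x, and never succeeds otherwise."""
    if rng.random() < 0.5:                       # run N1 (for L)
        if in_L(x) and rng.random() < 0.25:
            return "accept"
    else:                                        # run N2 (for complement)
        if (not in_L(x)) and rng.random() < 0.25:
            return "reject"
    return "discard"                             # nonpostselecting state

rng = random.Random(1)
in_L = lambda x: x % 2 == 0                      # placeholder language

member_runs = [post_ptm_run(4, in_L, rng) for _ in range(5000)]
nonmember_runs = [post_ptm_run(3, in_L, rng) for _ in range(5000)]

# Members: accepted with nonzero probability, never rejected (no error).
assert "accept" in member_runs and "reject" not in member_runs
# Nonmembers: rejected with nonzero probability, never accepted.
assert "reject" in nonmember_runs and "accept" not in nonmember_runs
```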
By using the same argument, we can also obtain the following result on the quantum class (note that the first equality comes from [16, 7]).
Theorem 5: .
As will be seen below, the relation between and seems different from the relation between their classical counterparts, since and may be different classes. Note that it is also open whether is a proper subset of .
By using the quantum simulation given in Section 3, we can obtain the following result.
Theorem 6: .
Proof.
It is easy to see that . Let be a language in and let be a polynomialtime logarithmicspace PostQTM recognizing with onesided bounded error. By redirecting the transitions of into the nonpostselecting state to the rejecting state, we can obtain a polynomialtime logarithmicspace NQTM recognizing , and thus . Since [16, 7], we obtain .
Now we prove the other direction. Let be in . Then there exists a polynomialtime logarithmicspace PTM that accepts every nonmember of with probability and every member with probability different from . Let be a given input of length .
We use the simulation given in Section 3. We make the same assumptions on the PTM except that accepts some string with probability and never accepts any string with probability in the following interval
for some fixed integer . This condition is trivial if the running time never exceeds , i.e., the total number of probabilistic branches never exceeds .
Then, we construct a polynomialtime logarithmicspace PostQTM as described in Section 3 with the following unnormalized final quantum state:
where is the accepting probability of . We measure this qubit and accept (reject) the input if we observe (). All other outcomes are discarded by entering the nonpostselecting state.
It is clear that for any nonmember of , is always equal to , and thus the QTM accepts the input with zero probability and rejects the input with some nonzero probability. Therefore, any nonmember of is rejected with probability 1.
On the other hand, for any member, the amplitude of is at least twice the amplitude of , and thus the accepting probability is at least four times the rejecting probability. Thus, any member is accepted with probability at least 4/5. The success probability can be increased by using standard probability amplification techniques. ∎
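The final bound follows from normalizing the two postselected outcomes, which turns the amplitude gap into a probability gap. A minimal numeric check (the factor-two gap is from the proof; the concrete amplitudes are placeholders):

```python
def accept_prob(amp_acc, amp_rej):
    """Acceptance probability after postselecting on the accept/reject
    outcomes: |amp_acc|^2 / (|amp_acc|^2 + |amp_rej|^2)."""
    a2, r2 = amp_acc ** 2, amp_rej ** 2
    return a2 / (a2 + r2)

# An accepting amplitude at least twice the rejecting one gives a
# postselected acceptance probability of at least 4/5:
assert accept_prob(2.0, 1.0) == 0.8
assert accept_prob(4.0, 1.0) > 0.8
```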
Acknowledgements
Part of this research was done while Yakaryılmaz was visiting Kyoto University in November 2016 and March 2017. Yakaryılmaz was partially supported by the ERC Advanced Grant MQC and the ERDF project Nr. 1.1.1.5/19/A/005 “Quantum computers with constant memory”. Le Gall was supported by JSPS KAKENHI grants Nos. JP19H04066, JP20H05966, JP20H00579, JP20H04139, and JP21H04879, and by the MEXT Quantum Leap Flagship Program (MEXT QLEAP) grants Nos. JPMXS0118067394 and JPMXS0120319794. Nishimura was supported by JSPS KAKENHI grants Nos. JP19H04066, JP20H05966, and JP21H04879, and by the MEXT QLEAP grant No. JPMXS0120319794.
References
 [1] Scott Aaronson. Quantum computing, postselection, and probabilistic polynomialtime. Proceedings of the Royal Society A, 461(2063):3473–3482, 2005.
 [2] Eric Allender and Mitsunori Ogihara. Relationships among PL, #L, and the determinant. RAIRO Theoretical Informatics and Applications, 30(1):1–21, 1996.
 [3] Ethan Bernstein and Umesh Vazirani. Quantum complexity theory. SIAM Journal on Computing, 26(5):1411–1473, 1997.
 [4] David Deutsch. Quantum theory, the ChurchTuring principle and the universal quantum computer. Proceedings of the Royal Society of London A, 400:97–117, 1985.
 [5] Bill Fefferman, Hirotada Kobayashi, Cedric YenYu Lin, Tomoyuki Morimae, and Harumichi Nishimura. Spaceefficient error reduction for unitary quantum computations. In Proceedings of the 43rd International Colloquium on Automata, Languages, and Programming, volume 55 of LIPIcs, pages 14:1–14:14, 2016.
 [6] Bill Fefferman and Cedric YenYu Lin. A complete characterization of unitary quantum space. In Proceedings of the 9th Innovations in Theoretical Computer Science Conference, volume 94 of LIPIcs, pages 4:1–4:21, 2018.
 [7] Bill Fefferman and Zachary Remscrim. Eliminating intermediate measurements in spacebounded quantum computation. In Proceedings of the 53rd Annual ACM Symposium on Theory of Computing, to appear, 2021. Also available at arXiv:2006.03530.
 [8] Uma Girish, Ran Raz, and Wei Zhan. Quantum logspace algorithm for powering matrices with bounded norm. In Proceedings of the 48th International Colloquium on Automata, Languages, and Programming, to appear, 2021. Also available at arXiv:2006.04880.
 [9] Neil Immerman. Nondeterministic space is closed under complementation. SIAM Journal on Computing, 17(5):935–938, 1988.
 [10] Hermann Jung. On probabilistic time and space. In Automata, Languages and Programming, volume 194 of LNCS, pages 310–317. Springer, 1985.
 [11] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
 [12] A. C. Cem Say and Abuzer Yakaryılmaz. Computation with narrow CTCs. In Unconventional Computation, volume 6714 of LNCS, pages 201–211. Springer, 2011.
 [13] A. C. Cem Say and Abuzer Yakaryılmaz. Quantum finite automata: A modern introduction. In Computing with New Resources, volume 8808 of LNCS, pages 208–222. Springer, 2014.

 [14] Amnon TaShma. Inverting well conditioned matrices in quantum logspace. In Symposium on Theory of Computing Conference, pages 881–890. ACM, 2013.
 [15] Dieter van Melkebeek and Thomas Watson. Timespace efficient simulations of quantum computations. Theory of Computing, 8(1):1–51, 2012.
 [16] John Watrous. Spacebounded quantum complexity. Journal of Computer and System Sciences, 59(2):281–326, 1999.
 [17] John Watrous. On the complexity of simulating spacebounded quantum computations. Computational Complexity, 12(1–2):48–84, 2003.
 [18] John Watrous. Quantum computational complexity. In Encyclopedia of Complexity and System Science. Springer, 2009. Also available at arXiv:0804.3401.
 [19] Abuzer Yakaryılmaz and A. C. Cem Say. Succinctness of twoway probabilistic and quantum finite automata. Discrete Mathematics and Theoretical Computer Science, 12(2):19–40, 2010.
 [20] Abuzer Yakaryılmaz and A. C. Cem Say. Unboundederror quantum computation with small space bounds. Information and Computation, 209(6):873–892, 2011.
 [21] Abuzer Yakaryılmaz and A. C. Cem Say. Proving the power of postselection. Fundamenta Informaticae, 123(1):107–134, 2013.