
# Quantum Logarithmic Space and Post-selection

Post-selection, the power of discarding all runs of a computation in which an undesirable event occurs, is an influential concept introduced to the field of quantum complexity theory by Aaronson (Proceedings of the Royal Society A, 2005). In the present paper, we initiate the study of post-selection for space-bounded quantum complexity classes. Our main result shows the identity PostBQL=PL, i.e., the class of problems that can be solved by a bounded-error (polynomial-time) logarithmic-space quantum algorithm with post-selection (PostBQL) is equal to the class of problems that can be solved by unbounded-error logarithmic-space classical algorithms (PL). This result gives a space-bounded version of the well-known result PostBQP=PP proved by Aaronson for polynomial-time quantum computation. As a by-product, we also show that PL coincides with the class of problems that can be solved by bounded-error logarithmic-space quantum algorithms that have no time bound.


## 1 Introduction

#### Post-selection.

Post-selection is the power of discarding all runs of a computation in which an undesirable event occurs. This concept was introduced to the field of quantum complexity theory by Aaronson [1]. While unrealistic, post-selection has turned out to be an extremely useful tool for obtaining new and simpler proofs of major results about classical computation, and also for proving new results about quantum complexity classes. The most celebrated result is arguably the identity PostBQP = PP proved by Aaronson [1], which shows that the class of problems that can be solved by a bounded-error polynomial-time quantum algorithm with post-selection (PostBQP) is equal to the class of problems that can be solved by unbounded-error polynomial-time classical algorithms (PP), and thus makes it possible to bridge quantum complexity classes and classical complexity classes.

#### Space-bounded quantum complexity classes.

The study of space-bounded quantum Turing machines was initiated by Watrous [16], who showed in particular that any quantum Turing machine running in space s(n) can be simulated by an unbounded-error probabilistic Turing machine running in space O(s(n)). This result implies the identity PQL = PL, where PQL denotes the class of problems that can be solved by unbounded-error logarithmic-space quantum Turing machines, and PL denotes the class of problems that can be solved by unbounded-error logarithmic-space classical Turing machines. The main open question of the field is whether bounded-error quantum Turing machines can be simulated space-efficiently by bounded-error classical Turing machines.

A major step towards establishing the superiority of space-bounded quantum Turing machines over space-bounded classical (bounded-error) Turing machines has been the construction by Ta-Shma [14] of logarithmic-space quantum algorithms for inverting well-conditioned matrices (it is unknown how to perform the same task classically in logarithmic space). While Ta-Shma’s quantum algorithm used intermediate measurements, a version of this quantum algorithm without measurements was later constructed by Fefferman and Lin [6] (see also [5] for a related result on space-efficient error reduction for unitary quantum computation). Very recent works [7, 8] have further shown that many other problems from linear algebra involving well-conditioned matrices can be solved in logarithmic space by quantum algorithms as well, and additionally that intermediate measurements can be removed from any space-bounded quantum computation.

#### Our results.

In view of the impact of the concept of post-selection on quantum complexity theory, and in view of the surge of recent activity on space-bounded quantum complexity classes, a natural question is to investigate the power of post-selection for space-bounded quantum complexity classes. To our knowledge, this question has not been investigated so far in the literature (though the notion of post-selection was previously studied in quantum automata theory [21]). In this paper, we tackle this question and obtain the following result (here PostBQL denotes the class of problems that can be solved by a bounded-error polynomial-time logarithmic-space quantum Turing machine that uses post-selection — see Section 2 for a formal definition):

Main Theorem: PostBQL = PL.

This result thus gives a space-bounded version of the result PostBQP = PP mentioned above for polynomial-time complexity classes. This enables us to bridge quantum complexity classes and classical complexity classes for space-bounded computation as well, and thus suggests that post-selection may become a useful tool to analyze space-bounded (quantum and classical) computation as well. Actually, as a by-product of our main result, we also obtain the fact that PL coincides with the class of problems that can be solved by bounded-error logarithmic-space quantum algorithms that have no time bound (namely, under no time restriction, bounded-error logarithmic-space quantum algorithms are as computationally powerful as unbounded-error ones).

We additionally present several results about logarithmic-space quantum computation with post-selection in Section 4.

#### Overview of our techniques.

As for the result proved by Aaronson [1], the nontrivial part of the proof of our main theorem is the simulation of an unbounded-error probabilistic machine by a post-selecting quantum machine. The simulation technique given in [1] requires a polynomial number of qubits, and thus cannot be used in our setting since we are limited to a logarithmic number of qubits. Therefore, we propose a different simulation, which is composed of three parts. First, we show how to simulate the computation of a logarithmic-space probabilistic Turing machine M by a logarithmic-width probabilistic circuit C (Section 3.1). Note that the computation process of C is represented by a mixture p_1 C_1 + ⋯ + p_N C_N, which means that the configuration is C_j with probability p_j. (It can be written as ∑_j p_j |C_j⟩⟨C_j| when the mixed-state formalism [11] is used.) Here, we can assume that there are unique accepting and rejecting configurations C_a and C_r. Thus, the final mixture of C can be represented in the form A C_a + R C_r, where A > R if the input is a yes-instance, and A < R if it is a no-instance. Second, we give a simulation of the probabilistic circuit C by a logarithmic-space quantum Turing machine M′ with post-selection (Section 3.2). Note that this simulation is done in a coherent manner. Namely, if the mixture of C at some step is p_1 C_1 + ⋯ + p_N C_N, the quantum state of M′ at the corresponding simulation step should be the normalized state of p_1 |C_1⟩ + ⋯ + p_N |C_N⟩. Thus, M′ produces the normalized state of A |C_a⟩ + R |C_r⟩ as the final outcome. In fact, we use the power of post-selection for this simulation, and the final outcome can be obtained after post-selection with an exponentially small probability. The third part is then fairly similar to the approach used in [1]: using a polynomial number of quantum states constructed from the same number of copies of M′, we use repetition and post-selection to increase the success probability of the simulation (Section 3.3).

## 2 Preliminaries

### 2.1 Space-bounded probabilistic Turing machines

A classical space-bounded Turing machine has an input tape and a work tape. Both tapes are infinite and their cells are indexed by integers, each of which contains the blank symbol unless it is overwritten with a different symbol. The input tape has a read-only head and the work tape has a read/write head. Each head can access a single cell in each time step and, after each transition, it can stay on the same cell, move one cell to the right, or move one cell to the left.

The input alphabet is denoted Σ and the work tape alphabet is denoted Γ, neither of which contains the blank symbol #. Moreover, we set ~Σ = Σ ∪ {#} and ~Γ = Γ ∪ {#}. For a given string x, |x| represents the length of x.

Formally, a (space-bounded) probabilistic Turing machine (PTM) is a 7-tuple

 M = (S, Σ, Γ, δ, s_i, s_a, s_r),

where S is the set of (internal) states, s_i ∈ S is the initial state, s_a ∈ S and s_r ∈ S (s_a ≠ s_r) are the accepting and rejecting states, respectively, and δ is the transition function described below.

At the beginning of the computation, the given input, say x, is placed on the input tape between the first cell and the |x|-th cell, the input tape head and the work tape head are placed on the cells indexed by 0, and the state is set to s_i. In each step, M evolves with respect to the transition function, and the computation is terminated after entering s_a or s_r. In the former (latter) case, the decision of “acceptance” (“rejection”) is made. It must be guaranteed that the input tape head never visits the cells indexed by −1 and |x|+1. The formal definition of δ is as follows:

 δ : S × ~Σ × ~Γ × S × ~Γ × {−1, 0, 1} × {−1, 0, 1} → {0, 1/2, 1}.

Suppose that M is in state s and reads σ and γ on the input and work tapes, respectively. Then, in one step, the new state is set to s′, the symbol γ′ is written on the cell under the work tape head, and the positions of the input and work tape heads are respectively updated with respect to d_i and d_w, with probability

 δ(s, σ, γ, s′, γ′, d_i, d_w),

where the input (work) tape head moves one cell to the left if d_i = −1 (d_w = −1) and one cell to the right if d_i = 1 (d_w = 1). Remark that any transition with zero probability is never implemented. To be a well-formed PTM, for each triple (s, σ, γ),

 ∑_{s′∈S, γ′∈~Γ, d_i∈{−1,0,1}, d_w∈{−1,0,1}} δ(s, σ, γ, s′, γ′, d_i, d_w) = 1.
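As a quick illustration, the well-formedness condition can be checked mechanically. The toy transition function below is made up for illustration (it is not a machine from the paper); probabilities are taken from {0, 1/2, 1} as in the definition:

```python
from fractions import Fraction

# Toy transition function for a PTM with states {"s", "sa"}, input symbol "a",
# work symbol "#": delta maps (s, sigma, gamma) to a dict of
# (s', gamma', d_i, d_w) -> probability in {0, 1/2, 1}.
delta = {
    ("s", "a", "#"): {
        ("s",  "#", 1, 0): Fraction(1, 2),   # move the input head right
        ("sa", "#", 0, 0): Fraction(1, 2),   # halt and accept
    },
    ("sa", "a", "#"): {
        ("sa", "#", 0, 0): Fraction(1),      # absorbing accepting state
    },
}

# Well-formedness: for every (s, sigma, gamma), the outgoing probabilities sum to 1.
for triple, moves in delta.items():
    assert sum(moves.values()) == 1, triple
```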

For a given input x, M can follow more than one computation path. A computation path either halts with a decision or runs forever. A halting path is called accepting (rejecting) if the decision of “acceptance” (“rejection”) is made on this path. The accepting (rejecting) probability of M on x is the cumulative sum of the probabilities of all accepting (rejecting) paths.

A language L is said to be recognized by a PTM M with unbounded error if and only if any x ∈ L is accepted by M with probability more than 1/2 and any x ∉ L is accepted with probability less than 1/2. A language L is said to be recognized by a PTM M with error bound ε if and only if any x ∈ L is accepted by M with probability at least 1 − ε and any x ∉ L is rejected with probability at least 1 − ε. When ε < 1/2 is a constant (independent of the input), it is said that L is recognized by M with bounded error. As a special case, if all non-members of L are accepted with probability 0, then the recognition is called one-sided bounded-error. A PTM making only deterministic transitions (i.e., such that the range of the transition function is {0, 1}) is a deterministic Turing machine (DTM).

The range of the transition function can also be defined as the set of rational numbers in [0, 1]; the resulting PTM, called a rational-valued PTM, can then make more than one transition with rational-valued probabilities in each step. Remark that all results presented in this paper also hold for rational-valued PTMs. A nondeterministic Turing machine (NTM) can be defined as a rational-valued PTM, and a language is said to be recognized by an NTM if and only if for any member there is at least one accepting path and for any non-member there is no accepting path (or, equivalently, any member is accepted with nonzero probability and any non-member is accepted with zero probability).

A language is recognized by a machine in (expected) time t(n) and space s(n) if the machine, on any given input x, runs no more than t(|x|) (expected) time steps and visits no more than s(|x|) different cells on its work tape with non-zero probability.

The class PL (resp. L and NL) is the set of languages recognized by unbounded-error PTMs (resp. DTMs and NTMs) in logarithmic space (with no time restriction). It is known that each of these classes coincides with its subclass in which the running time of the corresponding machines is polynomially bounded (note that the proof is nontrivial for PL [10]).

The class BPL (resp. RL) is the set of languages recognized by bounded-error PTMs (resp. one-sided bounded-error PTMs) in polynomial time and logarithmic space. Contrary to the above three classes, it is unknown whether these two classes coincide with their counterparts in which the underlying machines have no time restriction.

Any language L is in PL [2] if and only if there exists a polynomial-time logarithmic-space PTM M such that any x ∈ L is accepted by M with probability more than 1/2 and any x ∉ L is accepted by M with probability less than 1/2.

### 2.2 Turing machines with post-selection

A postselecting PTM (PostPTM) has the ability to discard some predetermined outcomes and then make its decision with the remaining outcomes, which are guaranteed to occur with non-zero probability (see [1, 21]). Formally, a PostPTM is a modified PTM with three halting states. A PTM has the accepting state s_a and the rejecting state s_r as its halting states; a PostPTM has an additional halting state s_n, called the non-postselecting halting state. In this paper, we require that a PostPTM halts its computation absolutely, i.e., there is no infinite loop.

For a given input x, let p_{acc,M}(x) (resp. p_{rej,M}(x) and p_{npost,M}(x)) be the probability of PostPTM M ending in s_a (resp. s_r and s_n). Since M halts absolutely, we know that

 p_{acc,M}(x) + p_{rej,M}(x) + p_{npost,M}(x) = 1.

Due to post-selection, we discard the probability p_{npost,M}(x) and then normalize p_{acc,M}(x) and p_{rej,M}(x) for the final decision. Thus, the input x is accepted (rejected) by M with probability

 ~p_{acc,M}(x) := p_{acc,M}(x) / (p_{acc,M}(x) + p_{rej,M}(x))   (resp. ~p_{rej,M}(x) := p_{rej,M}(x) / (p_{acc,M}(x) + p_{rej,M}(x))).

The postselecting counterparts of BPL and RL are PostBPL and PostRL, respectively. (For instance, L is in PostBPL if and only if there are a polynomial-time logarithmic-space PostPTM M and a constant ε < 1/2 such that ~p_{acc,M}(x) is at least 1 − ε when x is in L, and ~p_{rej,M}(x) is at least 1 − ε when x is not in L.) We also consider the class of languages recognized with no error (or exactly) by polynomial-time logarithmic-space PostPTMs: a language L belongs to this class if and only if there is a polynomial-time logarithmic-space PostPTM M such that ~p_{acc,M}(x) = 1 and ~p_{rej,M}(x) = 0 when x is in L, and ~p_{rej,M}(x) = 1 and ~p_{acc,M}(x) = 0 when x is not in L.
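For concreteness, the renormalization performed by post-selection can be sketched as follows; the probability values are made up for illustration:

```python
from fractions import Fraction

def postselect(p_acc, p_rej, p_npost):
    """Normalized accept/reject probabilities after discarding the
    non-postselecting runs (requires p_acc + p_rej > 0)."""
    assert p_acc + p_rej + p_npost == 1
    total = p_acc + p_rej
    assert total > 0
    return p_acc / total, p_rej / total

# A run reaching s_a with probability 3/8, s_r with 1/8, and s_n with 1/2:
acc, rej = postselect(Fraction(3, 8), Fraction(1, 8), Fraction(1, 2))
# After post-selection, the machine accepts with probability 3/4.
assert (acc, rej) == (Fraction(3, 4), Fraction(1, 4))
```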

### 2.3 Space-bounded quantum Turing machines

The initial quantum Turing machine (QTM) models (e.g., [4, 3, 16]) were defined fully quantumly. While quantum circuits have been used more widely in the literature, QTMs are still the main computational model for investigating space-bounded complexity classes. However, their definitions have been modified since the 1990s (e.g., [17, 15, 14, 6]). The main modifications are that the computation is governed classically and that the quantum part can be seen as a quantum circuit. This paper follows these modifications. To be more precise, our QTM is a PTM augmented with a quantum tape. Here, the quantum tape is designed like a quantum circuit, i.e., it contains a qubit (or qudit) in each tape cell, and it can have more than one tape head so that a quantum gate can be applied to a few qubits at the same time.

We remark that the results given in this paper can also be obtained with the other space-bounded QTM models defined in the literature [18, 17, 20, 15, 14], where algebraic numbers are used as transition values. The main advantage of the aforementioned modifications of QTMs is to simplify the proofs and the descriptions of quantum algorithms.

Formally, a (space-bounded) QTM is a 9-tuple

 M = (S, Σ, Γ, δ_q, δ_c, s_i, s_a, s_r, Ω),

where, differently from the PTMs, the transition function is composed of two functions δ_q and δ_c that are responsible for the transitions of the quantum and classical parts, respectively, and Ω is the set of possible contents of a classical register storing quantum measurement outcomes. (Similarly to the PTMs, S is the set of internal states, Σ is the input alphabet, Γ is the work tape alphabet, and s_i, s_a, and s_r are respectively the initial state, the accepting state, and the rejecting state.) As physical structure, M additionally has a quantum tape with k heads, and the classical register storing a value in Ω = {1, …, t}, where k and t are constants (independent of the input given to M). The quantum tape heads are numbered from 1 to k. For simplicity, we assume that the quantum tape contains only qubits (with basis states |0⟩ and |1⟩) in its cells. Each cell is set to |0⟩ at the beginning of the computation. For a given input x, the classical part is initialized as described for PTMs. The heads on the quantum tape are placed on the qubits numbered from 0 to k − 1.

The overall computation of M is governed classically. Each transition of M has two phases, quantum and classical, which alternate. The transition functions δ_q and δ_c are defined differently from the transition function of a PTM. Suppose that M is in state s and reads σ and γ, respectively. For each triple (s, σ, γ), δ_q(s, σ, γ) can be the identity operator, a projective measurement (in the computational basis), or a unitary operator. If it is the identity operator, the quantum phase is skipped by setting the value of the classical register to 1 (in Ω). If it is a unitary operator, then the corresponding unitary operator is applied to the qubits under the heads on the quantum tape, and the value in the classical register is set to 1 (in Ω). If it is a measurement operator, then the corresponding projective measurement is performed on the qubits under the heads on the quantum tape, and the measurement outcome, represented by an integer between 1 and t′ (in Ω), is written to the classical register, where t′ ≤ t is the total number of possible measurement outcomes of the measurement operator.

After the quantum phase, the classical phase is implemented: for each quadruple (s, σ, γ, ω), δ_c(s, σ, γ, ω) returns the new state, the symbol written on the work tape, and the updates of all heads, where ω ∈ Ω is the current content of the classical register.

The termination of the computation of M is the same as for the PTMs, i.e., it is done by entering the accepting state s_a or the rejecting state s_r. One time step corresponds to a single transition. We add the number of qubits visited with non-zero probability during the computation (as well as the number of cells visited on the classical work tape) to the space usage.

Remark that any QTM using superoperators can be simulated by a QTM using unitary operators and measurements with negligible memory and time overheads, i.e., by using extra quantum and classical states, any superoperator can be implemented by unitary operators and measurements in constant steps (e.g. [11, 13]).

Since the computation of the QTM defined above is controlled classically, a postselecting QTM (PostQTM) can be defined similarly to PostPTMs: the PostQTM has an additional classical halting state s_n, and any computation path that ends in s_n is discarded when calculating the overall accepting and rejecting probabilities on the given input.

The quantum counterparts of PL, BPL, NL, PostBPL, and PostRL are PQL, BQL, NQL, PostBQL, and PostRQL, respectively, where the QTMs use algebraic numbers as transition amplitudes. (Note that NQL is the quantum counterpart of NL based on the criterion of the accepting probabilities of the underlying machine, not a certificate-based counterpart.)

The following relations on logarithmic space quantum and classical complexity classes are already known [9, 16, 17, 7]:

 L ⊆ NL = coNL ⊆ coC=L = NQL ⊆ PL = PQL,
 L ⊆ BPL ⊆ BQL ⊆ PL = PQL.

## 3 Main Result

In this section, our main theorem (PostBQL = PL) is proved. We start with the easy inclusion.

Theorem 1: PostBQL ⊆ PL.

Proof: Any polynomial-time logarithmic-space PostQTM M can easily be converted into a polynomial-time logarithmic-space QTM M′ such that M′ enters the accepting and rejecting states with equal probability whenever M enters the non-postselecting halting state. The balance between the accepting and rejecting probabilities is thus preserved, and hence the language recognized by M with bounded error is recognized by M′ with unbounded error. Since PQL = PL, the claim follows.

In the rest of this section, we give the proof of the following inclusion.

Theorem 2: PL ⊆ PostBQL.

As described in Section 1, the proof of Theorem 2 consists of three parts, each of which is given in one of the next three subsections. We start by giving an overview of the first part. Let L be a language in PL. Then there exists a PTM M recognizing L with unbounded error such that M on input x halts within n^k steps using at most d log n space, where n = |x|, for some fixed positive integers k and d.

Without loss of generality, we can assume that M always splits into two paths in every step, that the work tape alphabet of M has only the two symbols 0 and 1, and that M halts only when the work tape contains only blanks and both tape heads are placed on the 0-th cells, i.e., there exist a single accepting configuration and a single rejecting configuration. Let m be the number of internal states.

We fix x as the given input, with length n. Any configuration of M is represented by a 4-tuple

 (s, h_in, w, h_wk),

where s is the internal state, h_in is the position of the input tape head, w is the content of the work tape, and h_wk is the position of the work tape head. (We also assume that w is always a binary string, which does not contain any blank symbol.) The set of all configurations is denoted by C = {C_1, …, C_N}, where N = |C| is polynomial in n. The length of (the binary encoding of) any configuration is

 l = ⌈log m⌉ + ⌈log n⌉ + ⌈d log n⌉ + ⌈log(d log n)⌉ ∈ O(log n).

Based on C, we define a stochastic matrix P_x, called the configuration matrix, whose columns and rows are indexed by configurations and whose (j, i)-th entry represents the probability of going from C_i to C_j. Then, the whole computation of M on x can be traced by N-dimensional column vectors, called configuration vectors:

 v_{i+1} = P_x v_i,

where v_i represents the probability distribution of the configurations after the i-th step. Here, v_0 is the initial configuration vector, having a single nonzero entry, equal to 1, corresponding to the initial configuration, and v_{n^k} is the final configuration vector, having at most two nonzero entries that keep the overall accepting and rejecting probabilities:

 v_{n^k} = P_x^{n^k} v_0.

Since the computation splits into two paths with equal probability in each step, the overall accepting and rejecting probabilities are respectively of the forms

 A′/2^{n^k} and R′/2^{n^k},

where A′ and R′ are nonnegative integers, A′ + R′ ≤ 2^{n^k}, and A′ ≠ R′.

We present a simulation of the above matrix-vector multiplication in logarithmic space. It is clear that keeping all entries of a single configuration vector separately requires polynomial space in n. On the other hand, a single configuration can be kept in logarithmic space. Therefore, we keep a mixture of configurations as a single summation for any time step. In other words, we can keep v_i as

 v_i[1] C_1 + v_i[2] C_2 + ⋯ + v_i[N] C_N,

where each coefficient v_i[j] represents the probability of being in the corresponding configuration C_j. The transition from v_i to v_{i+1} can be obtained in a single step by applying P_x. However, in our simulation, we do this in N sub-steps. The idea is as follows: in the j-th sub-step, we check whether our mixture has C_j or not. If it does, then C_j is evolved to C_j^0 and C_j^1, the configurations obtained from C_j in a single step when the outcome of the coin is respectively heads or tails. In this way, from the mixture corresponding to v_i, we obtain the next mixture:

 v_{i+1}[1] C_1 + v_{i+1}[2] C_2 + ⋯ + v_{i+1}[N] C_N.

Then, the final mixture is

 A C_a + R C_r,

where C_a and C_r are the accepting and rejecting configurations, respectively.
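The sub-step update can be sketched on a toy configuration graph; the configurations and transitions below are made up for illustration and are not from the paper:

```python
from fractions import Fraction

# Hypothetical 5-configuration machine: C0 branches on the coin to C1 or C2,
# which lead deterministically to the absorbing configurations Ca and Cr.
succ = {                      # succ[c] = (next config on heads, next config on tails)
    "C0": ("C1", "C2"),
    "C1": ("Ca", "Ca"),
    "C2": ("Cr", "Cr"),
    "Ca": ("Ca", "Ca"),
    "Cr": ("Cr", "Cr"),
}
configs = list(succ)          # a fixed enumeration C_1, ..., C_N

def step(v):
    """One time step of M, done in N sub-steps: the j-th sub-step moves the
    probability mass sitting on configuration C_j to its two successors."""
    w = {c: Fraction(0) for c in configs}
    for c in configs:                         # sub-step for C_j
        heads, tails = succ[c]
        w[heads] += v[c] / 2                  # coin outcome heads
        w[tails] += v[c] / 2                  # coin outcome tails
    return w

v = {c: Fraction(0) for c in configs}
v["C0"] = Fraction(1)                         # initial configuration vector v_0
for _ in range(3):                            # three steps reach the final mixture
    v = step(v)
assert v["Ca"] == Fraction(1, 2) and v["Cr"] == Fraction(1, 2)
```

Only one configuration dictionary entry per sub-step is ever active in the actual construction, which is what keeps the space logarithmic.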

We present the details of this simulation in the following subsection.

### 3.1 Probabilistic circuit

In this subsection, it is shown that we can construct, in deterministic logarithmic space, a logarithmic-width and polynomial-depth probabilistic circuit that simulates M on x.

Note that a logarithmic-space DTM can easily output each element of C. Moreover, for any C_j ∈ C, it can also easily output the two possible next configurations C_j^0 and C_j^1 such that M switches from C_j to C_j^0 if the result of the coin flip is heads and it switches from C_j to C_j^1 if the result of the coin flip is tails.

A logarithmic-space DTM D described below can output the desired probabilistic circuit with width l + 3, where (i) the first bit, named the random bit, is used for the coin flips, (ii) the second and third bits, named the block control bit and the configuration control bit, are used to control the transitions between the configurations in each time step, and (iii) the remaining l bits hold a configuration of M on x.

The circuit consists of n^k blocks, and D outputs the blocks one by one. Each block corresponds to a single time step of M on x:

 block_1, block_2, …, block_{n^k},

where all blocks are identical, i.e., each block implements the transition matrix P_x operating on configurations. Remark that, after block_i, we have the mixture representing v_i.

Before each block, the random bit is set to 0 or 1 with equal probability and the block control bit is set to 1. As long as the block control bit is 1, the configurations are checked one by one in the block. Once it is set to 0, the remaining configurations are skipped.

Any block is composed of N parts, where each part corresponds to a single configuration:

 part_1, part_2, …, part_N.

Here, part_j implements the transitions from C_j in a single step. In part_j, we do the following items:

1. If the block control bit is 0, then SKIP the remaining items. Otherwise, CONTINUE.

2. SET the configuration control bit to 1 (here we assume that M is in C_j).

3. It checks whether M is in C_j.

• If M is not in C_j, SET the configuration control bit to 0 and SKIP the remaining items. (Remark that the block control bit is still 1 in this case, and thus the next configuration will be checked in part_{j+1}.)

• Otherwise (i.e., if M indeed is in C_j), CONTINUE.

4. SWITCH from C_j to C_j^0 if the random bit is 0 and SWITCH from C_j to C_j^1 if the random bit is 1.

5. SET the block control bit to 0.

After all blocks, D outputs one last block, called the decision block. In this block, it is checked whether the final configuration is C_a or C_r. If it is C_a (resp. C_r), then the first bit of the decision block is set to 1 (resp. 0).

For the above operations, we can use gates operating on no more than four bits: the first three bits and one bit from the rest at each time. With l sequential gates, we can determine whether we are in C_j or not. Similarly, with l sequential gates, we can implement the transition from C_j to C_j^0 and, with another l sequential gates, we can implement the transition from C_j to C_j^1. Here, using sequential gates allows us to keep the size of any gate at no more than 4 bits, as shown in Fig. 1 (where the boxes denote the sequential gates).

When physically implementing the above circuit, before each block the circuit will be in a single configuration; during the execution of the block, only the part corresponding to this configuration will be active, and thus the circuit will switch to one of the two possible next configurations. After the decision block, we will observe the first bit as 1 and as 0 with probabilities A and R, respectively.

Remark that the set of all possible gates that can be used in the above circuit is finite and independent of the input x. The only probabilistic gate is a single-bit operator implementing a fair coin toss. The rest of the gates are deterministic; they are basically controlled operators of dimension at most 2^4 × 2^4.

Before continuing with the quantum part, we make further simplifications on the circuit. As 2-bit AND and OR gates (we assume that these gates are represented by matrices) and the 1-bit NOT gate form a universal gate set for classical circuits, each deterministic gate (operating on at most 4 bits) can be replaced by finitely many NOT, AND, OR, and 1-bit resetting gates with the help of a few extra auxiliary bits used for intermediate calculations, which are appended to the bottom part of the circuit. Let G be the new set of our gates, which also contains the probabilistic gate implementing the fair coin by outputting the values 0 and 1 with equal probability; these values are used by the deterministic gates whenever needed.

We denote the simplified circuit as C_{M,x}, or shortly as C. Let w be the width of C (note that w ∈ O(log n)). Thus, we have a circuit C such that the probability of observing 1 (resp. 0) on the first bit is A (resp. R).

### 3.2 QTM part

In this subsection, we give a logarithmic-space postselecting QTM M′ that simulates the computation of C in a coherent manner, as described in Section 1.

A logarithmic-space (postselecting) QTM can trace the computation of C on its quantum tape with the help of its classical part. Since the circuit C is deterministic-logarithmic-space constructible, the classical part of the QTM helps to create the parts of C on the quantum tape whenever they are needed. Moreover, any mixture of the configurations of C is kept as a pure state of O(log n) qubits (described below).

The QTM uses w + 2 active qubits on the quantum tape for tracing C on the input. The last two qubits are auxiliary, and the first w qubits are used to keep the probabilistic state of C. We consider the quantum tape as a logarithmic-width quantum circuit simulating C.

For each gate of C, say g_j, we apply a unitary gate (operator) operating on at most 4 qubits, say U_j. Therefore, we use 4 tape heads on the quantum tape.

During the simulation, the first w qubits are always kept in a superposition, and after each unitary operator the last qubit or the last two qubits are measured. If the outcome is |0⟩ or |00⟩, then the computation continues. Otherwise, the computation is terminated in the non-postselecting state.

In the probabilistic circuit C, the coin gate is applied on the first bit. For each coin flip, we apply

 U_0 = (1/2) ×
 [ 1  1  1  1 ]
 [ 1  1 −1 −1 ]
 [ 1 −1  1 −1 ]
 [ 1 −1 −1  1 ]

on the first and the last qubits, measure the last qubit, and continue if |0⟩ is observed:

 U_0|00⟩ = (1/2)(|00⟩ + |01⟩ + |10⟩ + |11⟩)  —(post-selection)→  (1/√2)|0⟩ + (1/√2)|1⟩,
 U_0|10⟩ = (1/2)(|00⟩ − |01⟩ + |10⟩ − |11⟩)  —(post-selection)→  (1/√2)|0⟩ + (1/√2)|1⟩.

Thus, the coin-flipping operator can easily be implemented.
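This post-selected implementation of the coin flip can be verified numerically. The sketch below uses NumPy, with the basis ordering |first qubit, last qubit⟩ as our convention:

```python
import numpy as np

# U_0 from the text: applied to (first qubit, fresh last qubit), then the last
# qubit is measured and the run survives only on outcome |0>.
U0 = 0.5 * np.array([[1,  1,  1,  1],
                     [1,  1, -1, -1],
                     [1, -1,  1, -1],
                     [1, -1, -1,  1]])
assert np.allclose(U0 @ U0.T, np.eye(4))      # U_0 is unitary (real orthogonal)

def coin_via_postselection(first_qubit):
    """Apply U_0 to |first_qubit>|0> and keep only the last-qubit-is-0 part."""
    state = np.kron(first_qubit, np.array([1.0, 0.0]))   # basis |q 0>
    out = U0 @ state
    kept = out[[0, 2]]                        # amplitudes of |00> and |10>
    return kept / np.linalg.norm(kept)        # renormalize after post-selection

# Both |0> and |1> are mapped to the uniform superposition, as in the text.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
assert np.allclose(coin_via_postselection(np.array([1.0, 0.0])), plus)
assert np.allclose(coin_via_postselection(np.array([0.0, 1.0])), plus)
```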

For the other operators (including the ones given below), we use the techniques given in [12]. For any g_j (j ≥ 1), we apply a unitary operator U_j acting on four qubits. Before applying U_j, the quantum part is in

 ∑_{a,b∈{0,1}} α_{a,b} |ab00⟩,

since the last two qubits are measured beforehand and any outcome other than |00⟩ is discarded by entering the non-postselecting state. Thus, only 16 entries of U_j affect the above quantum state. We construct U_j step by step as follows. These 16 entries are set to the corresponding values of the matrix G_j representing g_j. Thus, the probabilistic state, which is kept in the pure state, can be traced exactly up to some normalization factor.

Without loss of generality, we assume (by reordering the basis states) that these 16 values are placed in the top left corner. Then, U_j is of the form

 U_j = (1/e) [ G_j  G′_j  G″_j  0 ]
             [  ∗    ∗    ∗    ∗ ],

where 1/e is the normalization factor and G_j, G′_j, and G″_j are 4 × 4 matrices.

The entries of G′_j are set in order to make the first four rows pairwise orthogonal:

 G′_j =
 [ 1        0        0        0 ]
 [ γ_{1,2}  1        0        0 ]
 [ γ_{1,3}  γ_{2,3}  1        0 ]
 [ γ_{1,4}  γ_{2,4}  γ_{3,4}  0 ],

where the values are set column by column. The values of γ_{1,2}, γ_{1,3}, and γ_{1,4} are set to the appropriate values such that the first row becomes orthogonal to the second, the third, and the fourth ones, respectively. Similarly, we set the values of the second and third columns. Since G_j is composed of integers, G′_j is also composed of integers.

The entries of G″_j are set in order to make the first four rows have the same squared length, say e², the square of an integer:

 G″_j =
 [ γ_1  0    0    0  ]
 [ 0    γ_2  0    0  ]
 [ 0    0    γ_3  0  ]
 [ 0    0    0    γ_4 ],

where the diagonal entries are picked as the square roots of some integers. Remark that the entries of G″_j do not change the pairwise orthogonality of the first four rows. Moreover, at this point, the first four rows become pairwise orthonormal (due to the normalization factor 1/e). One can easily fill up the rest of the matrix with appropriate algebraic numbers in order to obtain a complete unitary matrix.
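Under our reading of this construction, the completion of the first four rows can be sketched as follows. The integer block G below is made up for illustration, and e is simply taken as the maximum squared row length rather than forced to be a perfect square:

```python
import numpy as np

# G stands for the 4x4 integer block of one classical gate (illustrative values).
G = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 0, 0, 1]], dtype=float)

# G': lower-triangular integer block making the four rows pairwise orthogonal,
# filled column by column (gamma_{j,i} sits in row i, column j).
Gp = np.zeros((4, 4))
for j in range(3):
    Gp[j, j] = 1.0
    for i in range(j + 1, 4):
        Gp[i, j] = -(G[j] @ G[i] + Gp[j, :j] @ Gp[i, :j])

rows = np.hstack([G, Gp])
gram = rows @ rows.T
assert np.allclose(gram, np.diag(np.diag(gram)))      # pairwise orthogonal

# G'': diagonal block of square roots of integers padding every row up to the
# same squared length e.
norms2 = np.diag(gram)
e = norms2.max()
Gpp = np.diag(np.sqrt(e - norms2))
rows = np.hstack([rows, Gpp]) / np.sqrt(e)
assert np.allclose(rows @ rows.T, np.eye(4))          # four orthonormal rows
```

The remaining rows of the 16 × 16 unitary can then be completed by any orthonormal basis of the orthogonal complement.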

Since the set of possible gates depends only on the transitions of the PTM M (and not on the input), each U_j can be kept in the description of the QTM.

By using the above quantum operators, we can simulate C with exponentially small probability. Note only that, due to the normalization factors, the computation is terminated in the non-postselecting state with some probability after each application of a unitary gate.

At the end of the simulation of , we separate the first qubit from the rest of the qubits, each of which is set to . Then, we have the following unnormalized quantum state in the first qubit:

 (1-A)|0\rangle + A|1\rangle = \begin{pmatrix} 1-A \\ A \end{pmatrix}.

The operator maps the above quantum state to

 |\tilde{u}\rangle = \begin{pmatrix} \frac{1}{2}+A \\[2pt] \frac{1}{2}-A \end{pmatrix}.

Since this operator can also be implemented with post-selection by using an extra qubit, the new unnormalized quantum state is set to .

If , then the quantum state , that is, the normalized version of , is identical to . If , then the quantum state  lies between  and , and thus it is closer to  than to . If , then  lies between  and , and thus it is closer to  than to .

Measuring in the  basis means that we rotate the quantum state by angle  in the counter-clockwise direction and then measure in the  basis. Thus, observing  () in the former case is equivalent to observing  (resp., ) in the latter case.

After measuring in the  basis, we can easily distinguish with bounded error whether  is close to 0 or close to 1. When  is close to , however, the probabilities of observing these basis states can be very close to each other. In Section 3.3, we use a modified version of the trick used by Aaronson [1] to increase the success probability. In fact, we will need to run the above QTM  times sequentially in logarithmic space.

### 3.3 Executing a series of QTMs

Let be our integer parameter from the set . For each , we consider a QTM as follows. First, we execute the above QTM in Section 3.2, and then transform to

 |\tilde{u}_p\rangle = \begin{pmatrix} \frac{1}{2}+A \\[2pt] 2^{nk-p}\left(\frac{1}{2}-A\right) \end{pmatrix}

in () iterations. In each iteration, we combine the first qubit with another qubit in state , apply the quantum operator

 \frac{1}{2}\begin{pmatrix} 1 & \sqrt{3} & 0 & 0 \\ \sqrt{3} & -1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \end{pmatrix},

and then measure the second qubit. If the measurement outcome is , the computation continues. Otherwise, the computation is terminated by entering the non-postselecting state. (By induction, we can easily see that .) Note that, for each , the QTM  runs in logarithmic space, since the QTM described in Section 3.2 uses  space and the counter for the iterations creating  needs  space as well.
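As a sanity check, the effect of this gadget can be simulated in a few lines of numpy. This is only a sketch: the qubit ordering |q1 q2⟩ and post-selecting on outcome 0 of the second qubit are our assumptions about the layout. Each round halves the first amplitude while preserving the second, i.e., it doubles the second amplitude relative to the first (up to the global factor lost to post-selection).

```python
import numpy as np

# The two-qubit post-selection gadget from the text (a real unitary matrix).
U = 0.5 * np.array([[1.0, np.sqrt(3), 0.0, 0.0],
                    [np.sqrt(3), -1.0, 0.0, 0.0],
                    [0.0, 0.0, 2.0, 0.0],
                    [0.0, 0.0, 0.0, 2.0]])

def iterate(a, b, iterations):
    """Unnormalized first-qubit amplitudes after the given number of
    combine / apply / post-select rounds (basis ordering |q1 q2>)."""
    for _ in range(iterations):
        v = np.array([a, 0.0, b, 0.0])   # attach a fresh ancilla in |0>
        w = U @ v
        a, b = w[0], w[2]                # post-select ancilla outcome 0
    return a, b
```

Starting from the amplitudes (1/2 + A, 1/2 − A), running the loop for the prescribed number of rounds yields amplitudes proportional to the state |ũ_p⟩ above.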

By substituting , the quantum state can be rewritten as

 |\tilde{u}_p\rangle = \begin{pmatrix} \frac{1}{2}+\frac{A'}{2^{nk}} \\[4pt] 2^{nk-p}\left(\frac{1}{2}-\frac{A'}{2^{nk}}\right) \end{pmatrix} = \begin{pmatrix} \frac{1}{2}+\frac{A'}{2^{nk}} \\[4pt] 2^{nk-p}\cdot\frac{2^{nk}-2A'}{2^{nk+1}} \end{pmatrix} = \begin{pmatrix} \frac{1}{2}+\frac{A'}{2^{nk}} \\[4pt] \frac{2^{nk}-2A'}{2^{p+1}} \end{pmatrix}.

It is easy to see that

• when , the normalized state of lies in the first quadrant, and thus it is closer to , and

• when , lies in the fourth quadrant, and thus it is closer to .

Case : As (recall that is the accepting probability of the PTM on input that halts in steps),

 \frac{2^{nk}-2A'}{2^{p+1}} \ge \frac{2}{2^{p+1}}.

Thus, there exists a value of , say , such that

 \frac{2^{nk}-2A'}{2^{p'+1}} \in [1,2].

Then, since and , the quantum state lies between and , and since and , lies between and (see Fig. 2). Thus, the probability of observing after measuring in basis is always greater than

 \frac{25}{34} > \frac{7}{10}

since .

Case : The case is similar to the previous case. There exists a value of , say , such that

 \frac{2^{nk}-2A'}{2^{p''+1}} \in [-2,-1].

Then, the quantum state lies between and and lies between and (see Fig. 2). Thus the probability of observing when measuring in basis is always greater than

 \frac{25}{34} > \frac{7}{10}.

Now the overall quantum algorithm is as follows:

1. Prepare the counter  and set it to . For each , the following steps are implemented:

   1. We execute the above QTM , and make the measurement at the end in the  basis. (Note that the execution can be discarded by entering the non-postselecting state in the procedure of Section 3.2.)

   2. If the measurement result corresponds to , then we reset the quantum register to all  (note that this is possible using classical control, since all the non- qubits are induced only by post-selection, and thus we know what states they are in), and add  to .

   3. If the measurement result corresponds to , then we reset the quantum register to all , and add  to .

2. If  (namely, we observe  in all executions), then the input is rejected.

3. If  (namely, we observe  in all executions), then the input is accepted.

4. Otherwise (namely, if we observe both outcomes  and  at least once among the executions), the computation is terminated in the non-postselecting state.
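The classical control of the loop above can be sketched as follows. This is a simplification: `run_qtm` stands for one execution of the QTM followed by the basis measurement, and the outcome labels 0/1 are illustrative stand-ins for the two basis states (which side corresponds to acceptance is fixed by the analysis above, not by this sketch).

```python
def overall(run_qtm, n):
    """Run the QTM n times; keep the run only when all outcomes agree.
    Returns 'accept', 'reject', or 'discard' (the non-postselecting state)."""
    c0 = c1 = 0
    for _ in range(n):
        if run_qtm() == 1:
            c1 += 1
        else:
            c0 += 1
    if c1 == 0:
        return 'reject'      # the reject-side outcome in all executions
    if c0 == 0:
        return 'accept'      # the accept-side outcome in all executions
    return 'discard'         # mixed outcomes: enter the non-postselecting state
```

Only the two counters and the reused registers are needed, which is why the whole loop fits in logarithmic space.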

Note that the overall quantum algorithm is implemented in logarithmic space: the counter  is clearly implemented in  space,  is also implemented in  space, and each iteration of step 1 reuses the classical and quantum registers.

The analysis of the algorithm is as follows:

• When , the probability of observing  is always greater than  in each execution, and in at least one execution it is  times larger. Thus, if , the probability of observing all 's is at least  times larger than the probability of observing all 's after all  executions.

• When , the probability of observing  is always greater than  in each execution, and in at least one execution it is  times larger. Thus, if , the probability of observing all 's is at least  times larger than the probability of observing all 's after all  executions.
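One way to see how the normalization works out is the following sketch, using the figures $25/34$ and its complement $9/34$ from the case analysis above and assuming the $n$ executions are independent with per-run correct-outcome probability $p_i$ above $1/2$:

```latex
\frac{\prod_i p_i}{\prod_i (1-p_i)} \;\ge\; \frac{25/34}{9/34} \;=\; \frac{25}{9},
\qquad\text{so}\qquad
\Pr[\text{error}] \;\le\; \frac{1}{1 + 25/9} \;=\; \frac{9}{34} \;<\; \frac{1}{3}.
```

That is, a single execution with correctness probability at least $25/34$ already makes the all-correct branch dominate the all-wrong branch after normalization.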

Therefore, after normalizing the final accepting and rejecting postselecting probabilities, it follows that is recognized by a polynomial-time logarithmic-space postselecting QTM with error bound . This completes the proof of Theorem 2. (The error bound can easily be decreased by using the standard probability amplification techniques.)

Additionally, we can show that  is contained in the class of languages recognized by logarithmic-space bounded-error QTMs that halt in expected exponential time.

Theorem 3:  and , where  () is the class of languages recognized by logarithmic-space bounded-error PTMs (QTMs) that halt in expected exponential time.

###### Proof.

Let  be a polynomial-time logarithmic-space PostPTM. By restarting the whole computation from the beginning instead of entering the non-postselecting state, we obtain a logarithmic-space exponential-time PTM  from : (i) the restarting mechanism does not require any extra space, and (ii) since  halts with at least exponentially small probability in polynomial time,  halts with probability 1 in expected exponential time. Both machines recognize the same language with the same error bound, since the restarting and postselecting mechanisms can be used interchangeably [19, 21], i.e., the accepting and rejecting probabilities of  and  are the same on every input. Thus, we can conclude that . In the same way, we obtain . ∎
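The restarting transformation in the proof above is purely a control-flow change, which a minimal sketch makes plain (here `run_once` is a stand-in for one complete attempt of the PostPTM, returning `'restart'` where the original machine would have entered the non-postselecting state):

```python
def restarting_run(run_once):
    """Replace the non-postselecting state by a restart: loop until a
    single attempt halts in 'accept' or 'reject'.  No extra space is
    needed beyond one attempt's configuration."""
    while True:
        outcome = run_once()
        if outcome != 'restart':
            return outcome
```

If each attempt halts with probability at least $2^{-\mathrm{poly}(n)}$, the expected number of attempts is at most $2^{\mathrm{poly}(n)}$, which is where the expected exponential running time comes from.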

As by definition and Watrous showed  [17], our main result () leads to the following equivalence among , and .

Corollary 1: .

We leave open whether is contained in .

## 4 Related Results

In this section, we provide several results on logarithmic-space complexity classes with post-selection. The first result is a characterization of by logarithmic-space complexity classes.

Theorem 4: .

###### Proof.

We start with the first equality . Let . Since  [9], is also in . Then, there exist polynomial-time logarithmic-space NTMs and recognizing and . Based on and , we can construct a polynomial-time logarithmic-space PostPTM such that executes and with equal probability on the given input. Then, accepts the input if accepts and rejects the input if accepts. Any other outcome is discarded by . Therefore, (i) any is accepted with nonzero probability and rejected with zero probability by , and, (ii) any is accepted with zero probability and rejected with nonzero probability by . Thus, is recognized by with no error, and thus .

Let . Then, there exists a polynomial-time logarithmic-space PostPTM recognizing with no error. Based on , we can construct a polynomial-time logarithmic-space NTM such that executes on the given input and switches to the rejecting state if ends in the non-postselecting halting state. Thus, accepts all and only strings in . Therefore, .

Now we are done with the equality . It is trivial that . To complete the proof, it is enough to show that . If a language  is recognized by a polynomial-time logarithmic-space PostPTM  with one-sided bounded error, then it is also recognized by a polynomial-time logarithmic-space NTM , where  is modified from  such that if  enters the non-postselecting state, then  enters the rejecting state. ∎

By using the same argument, we can also obtain the following result on the quantum class  (note that the first equality comes from [16, 7]).

Theorem 5: .

As will be seen below, the relation between  and  seems different from the relation between their classical counterparts, since  and  may be different classes. Remark that it is also open whether  is a proper subset of .

By using the quantum simulation given in Section 3, we can obtain the following result.

Theorem 6: .

###### Proof.

It is easy to see that . Let be a language in and be a polynomial-time logarithmic-space PostQTM recognizing with one-sided bounded-error. By changing the transitions to the non-postselecting state of to the rejecting state, we can obtain a polynomial-time logarithmic-space NQTM recognizing , and thus . Since [16, 7], we obtain .

Now we prove the other direction. Let  be in . Then there exists a polynomial-time logarithmic-space PTM  that accepts any non-member of  with probability  and any member with probability different from . Let  be a given input of length .

We use the simulation given in Section 3. We make the same assumptions on the PTM except that accepts some string with probability and never accepts any string with probability in the following interval

 \left(\frac{1}{2}-\frac{1}{2^{nk}},\; \frac{1}{2}+\frac{1}{2^{nk}}\right)

for some fixed integer . This condition is trivial if the running time never exceeds , i.e., the total number of probabilistic branches never exceeds .

Then, we construct a polynomial-time logarithmic-space PostQTM as described in Section 3 with the following unnormalized final quantum state:

 \begin{pmatrix} 2A-1 \\[2pt] 2^{-nk} \end{pmatrix},

where  is the accepting probability of . We measure this qubit and accept (resp., reject) the input if we observe  (resp., ). All other outcomes are discarded by entering the non-postselecting state.

It is clear that, for any non-member of ,  is always equal to , and thus the QTM accepts the input with probability zero and rejects it with some nonzero probability. Therefore, any non-member of  is rejected with probability 1.

On the other hand, for any member, the amplitude of  is at least twice the amplitude of , and thus the accepting probability is at least four times the rejecting probability. Thus, any member is accepted with probability at least . The success probability can be increased further by standard probability amplification techniques. ∎
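The member-side bound at the end of the proof can be made explicit with a short calculation. This is a sketch under the assumption, as in Section 3, that the accepting probability $A$ has denominator $2^{nk}$, so that for a member $|2A-1| \ge 2\cdot 2^{-nk}$; measuring the unnormalized state then gives

```latex
\Pr[\text{accept}] \;=\; \frac{(2A-1)^2}{(2A-1)^2 + \left(2^{-nk}\right)^2}
\;\ge\; \frac{4\cdot 2^{-2nk}}{4\cdot 2^{-2nk} + 2^{-2nk}} \;=\; \frac{4}{5},
```

consistent with the "at least four times" ratio between the accepting and rejecting probabilities stated above.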

## Acknowledgements

Part of this research was done while Yakaryılmaz was visiting Kyoto University in November 2016 and March 2017. Yakaryılmaz was partially supported by ERC Advanced Grant MQC and ERDF project Nr. 1.1.1.5/19/A/005 “Quantum computers with constant memory”. Le Gall was supported by JSPS KAKENHI grants Nos. JP19H04066, JP20H05966, JP20H00579, JP20H04139, JP21H04879 and by the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) grants No. JPMXS0118067394 and JPMXS0120319794. Nishimura was supported by JSPS KAKENHI grants Nos. JP19H04066, JP20H05966, JP21H04879 and by the MEXT Q-LEAP grants No. JPMXS0120319794.

## References

• [1] Scott Aaronson. Quantum computing, postselection, and probabilistic polynomial-time. Proceedings of the Royal Society A, 461(2063):3473–3482, 2005.
• [2] Eric Allender and Mitsunori Ogihara. Relationships among PL, #L, and the determinant. RAIRO Theoretical Informatics and Applications, 30(1):1–21, 1996.
• [3] Ethan Bernstein and Umesh Vazirani. Quantum complexity theory. SIAM Journal on Computing, 26(5):1411–1473, 1997.
• [4] David Deutsch. Quantum theory, the Church-Turing principle and the universal quantum computer. Proceedings of the Royal Society of London A, 400:97–117, 1985.
• [5] Bill Fefferman, Hirotada Kobayashi, Cedric Yen-Yu Lin, Tomoyuki Morimae, and Harumichi Nishimura. Space-efficient error reduction for unitary quantum computations. In Proceedings of the 43rd International Colloquium on Automata, Languages, and Programming, volume 55 of LIPIcs, pages 14:1–14:14, 2016.
• [6] Bill Fefferman and Cedric Yen-Yu Lin. A complete characterization of unitary quantum space. In Proceedings of the 9th Innovations in Theoretical Computer Science Conference, volume 94 of LIPIcs, pages 4:1–4:21, 2018.
• [7] Bill Fefferman and Zachary Remscrim. Eliminating intermediate measurements in space-bounded quantum computation. In Proceedings of the 53rd Annual ACM Symposium on Theory of Computing, to appear, 2021. Also available at arXiv:2006.03530.
• [8] Uma Girish, Ran Raz, and Wei Zhan. Quantum logspace algorithm for powering matrices with bounded norm. In Proceedings of the 48th International Colloquium on Automata, Languages, and Programming, to appear, 2021. Also available at arXiv:2006.04880.
• [9] Neil Immerman. Nondeterministic space is closed under complementation. SIAM Journal on Computing, 17(5):935–938, 1988.
• [10] Hermann Jung. On probabilistic time and space. In Automata, Languages and Programming, volume 194 of LNCS, pages 310–317. Springer, 1985.
• [11] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
• [12] A. C. Cem Say and Abuzer Yakaryılmaz. Computation with narrow CTCs. In Unconventional Computation, volume 6714 of LNCS, pages 201–211. Springer, 2011.
• [13] A. C. Cem Say and Abuzer Yakaryılmaz. Quantum finite automata: A modern introduction. In Computing with New Resources, volume 8808 of LNCS, pages 208–222. Springer, 2014.
• [14] Amnon Ta-Shma. Inverting well conditioned matrices in quantum logspace. In Symposium on Theory of Computing Conference, pages 881–890. ACM, 2013.
• [15] Dieter van Melkebeek and Thomas Watson. Time-space efficient simulations of quantum computations. Theory of Computing, 8(1):1–51, 2012.
• [16] John Watrous. Space-bounded quantum complexity. Journal of Computer and System Sciences, 59(2):281–326, 1999.
• [17] John Watrous. On the complexity of simulating space-bounded quantum computations. Computational Complexity, 12(1-2):48–84, 2003.
• [18] John Watrous. Encyclopedia of Complexity and System Science, chapter Quantum computational complexity. Springer, 2009. Also available at arXiv:0804.3401.
• [19] Abuzer Yakaryılmaz and A. C. C. Say. Succinctness of two-way probabilistic and quantum finite automata. Discrete Mathematics and Theoretical Computer Science, 12(2):19–40, 2010.
• [20] Abuzer Yakaryılmaz and A. C. Cem Say. Unbounded-error quantum computation with small space bounds. Information and Computation, 209(6):873–892, 2011.
• [21] Abuzer Yakaryılmaz and A. C. Cem Say. Proving the power of postselection. Fundamenta Informaticae, 123(1):107–134, 2013.