Quantum Online Streaming Algorithms with Constant Number of Advice Bits

02/13/2018, by Kamil Khadiev, et al.

Online algorithms are a well-known model that is typically investigated with respect to the competitive ratio. In this paper, we investigate streaming algorithms as a model for online algorithms. We focus on quantum and classical online algorithms with advice. We consider a special problem P_k, which is obtained using the "Black Hats Method". There is an optimal quantum online algorithm with a single advice bit and a single qubit of memory for this problem. At the same time, there is no optimal deterministic or randomized online algorithm with sublogarithmic memory for the same problem, even if the algorithm gets o(n) advice bits. Additionally, we show that a quantum online algorithm with a constant number of qubits can be better than a deterministic online algorithm, even if the deterministic one has a constant number of advice bits and unlimited computational power.


1 Introduction

An online algorithm is a well-known computational model for solving optimization problems. The defining property of this model is the following: the algorithm reads its input piece by piece and must return some of the output variables immediately after reading the corresponding input variables, even if the answer depends on the whole input. An online algorithm should produce an output that minimizes an objective function. There are different ways to measure the effectiveness of such algorithms [12, 15], but the most standard one is the competitive ratio [21]. It is the ratio of the cost of the algorithm's solution to the cost of a solution of an optimal offline algorithm. Typically, online algorithms have unlimited computational power, and the main restriction is the lack of knowledge about future input variables. Many problems can be formulated in these terms. At the same time, it is quite interesting to solve an online minimization problem in the case of a big input stream, that is, a stream that cannot be stored in memory. In that case we can discuss online algorithms with restricted memory. In this paper we consider a streaming algorithm as an online algorithm that uses only a bounded number of bits of memory. This classical model was considered in [10, 18, 13, 26]. Automata for online minimization problems were considered in [23]. We are interested in a new model of online algorithms, quantum online algorithms, that uses the power of quantum computing for solving online minimization problems. This model was introduced in [26]. It is already known that quantum online algorithms can be better than classical ones in the case of sublogarithmic and polylogarithmic memory [26, 25]. Some researchers have considered quantum online algorithms with repeated test [32].

In this paper we focus on quantum online algorithms that read the input only once and have an adviser. In the model with an adviser [27, 11], the online algorithm gets some bits of advice about the input. The trusted Adviser sending these bits knows the whole input and has unlimited computational power. The question is: "how many advice bits are enough to reduce the competitive ratio or to make the online algorithm as effective as the offline algorithm?". This question has different interpretations. One of them is "How much information about the future does an algorithm need in order to solve a problem effectively?". Another one is "How many bits of pre-processed information should be sent over an expensive channel before the computation starts in order to solve a problem effectively?". Researchers have studied deterministic and probabilistic (randomized) online algorithms with advice [19, 27]. The advice complexity of online streaming algorithms was investigated in [25]. It is known that in the case of polylogarithmic space a quantum online streaming algorithm can be better than deterministic ones even if the latter get advice bits. In that paper the authors introduced the "Black Hats Method" for constructing hard online minimization problems. In this paper we establish some new properties of the "Black Hats Method" and prove the following results.

1. Suppose we allow algorithms to use only constant memory. We consider a special problem BHPM. The problem has a quantum online streaming algorithm with a better competitive ratio than any classical (deterministic or randomized) online streaming algorithm, even if the classical algorithm gets a non-constant number of advice bits.

2. There is a problem that has a quantum online streaming algorithm with constant memory. The algorithm has a better competitive ratio than any deterministic online algorithm, even if the deterministic algorithm gets a constant number of advice bits and has unlimited computational power.

3. Suppose we allow algorithms to use only a bounded number of bits of memory. The BHR problem from [25] has a quantum online algorithm with a better competitive ratio than any randomized online streaming algorithm, even if the latter gets a non-constant number of advice bits. Note that the same superiority over deterministic online streaming algorithms was proven in [25].

Online streaming algorithms are similar to traditional versions of streaming algorithms [28, 17], branching programs [31] and automata [8, 9]. Researchers have also compared the classical and quantum cases of these models [3, 5, 4, 16, 30, 24, 8, 9, 28, 1, 20].

The paper is organized as follows. We present definitions in Section 2. New results on the Black Hats Method are described in Section 3. A discussion of the superiority of quantum over classical algorithms is presented in Section 4.

2 Preliminaries

Let us define an online optimization problem. We give all the following definitions with respect to [27, 26, 25, 32]. An online minimization problem consists of a set $\mathcal{I}$ of inputs and a cost function. Every input $I \in \mathcal{I}$ is a sequence of requests $I=(x_1,\dots,x_n)$. Furthermore, a set of feasible outputs (or solutions) $\mathcal{O}(I)$ is associated with every $I$; every output is a sequence of answers $O=(y_1,\dots,y_n)$. The cost function assigns a positive real value $cost(I,O)$ to every input $I$ and any feasible output $O \in \mathcal{O}(I)$. For every input $I$, we call any feasible output $O$ for $I$ that has the smallest possible cost (i.e., that minimizes the cost function) an optimal solution for $I$.

Let us define an online algorithm for this problem as an algorithm that gets the requests from $I$ one by one and must return the answers from $O$ immediately, even if an optimal solution can depend on future requests. A deterministic online algorithm $A$ computes the output sequence $A(I)=(y_1,\dots,y_n)$ such that $y_i$ is computed from $x_1,\dots,x_i$. This setting can also be regarded as a request-answer game: an adversary generates requests, and an online algorithm has to serve them one at a time [7].

We use the competitive ratio as the main measure of quality of an online algorithm. It is the worst-case ratio of the cost of the algorithm's solution to the cost of a solution of an optimal offline algorithm. We say that an online deterministic algorithm $A$ is $c$-competitive if there exists a non-negative constant $\alpha$ such that, for every $n$ and for any input $I$ of length $n$, we have $cost(A(I)) \leq c \cdot cost(Opt(I)) + \alpha$, where $Opt$ is an optimal offline algorithm for the problem. We also call $c$ the competitive ratio of $A$. If $\alpha = 0$, then $A$ is called strictly $c$-competitive; $A$ is optimal if it is strictly $1$-competitive.
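As a small worked illustration of this definition (our own toy numbers, not taken from the paper): if an algorithm's cost never exceeds twice the optimal cost plus an additive constant of $3$, then it is $2$-competitive, but not strictly $2$-competitive.

```latex
% Toy illustration with hypothetical numbers: suppose that for every input I
\[
  cost(A(I)) \;\le\; 2 \cdot cost(Opt(I)) + 3 .
\]
% Then A is 2-competitive with alpha = 3;
% it would be strictly 2-competitive only if alpha were 0.
```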

Let us define an online algorithm with advice. We can say that advice is some information about the future input. An online algorithm $A$ with advice computes the output sequence $A^{\phi}(I)=(y_1,\dots,y_n)$ such that $y_i$ is computed from $\phi, x_1,\dots,x_i$, where $\phi$ is the message from the Adviser, who knows the whole input. $A$ is $c$-competitive with advice complexity $b(n)$ if there exists a non-negative constant $\alpha$ such that, for every $n$ and for any input $I$ of length $n$, there exists some $\phi$ such that $cost(A^{\phi}(I)) \leq c \cdot cost(Opt(I)) + \alpha$ and $|\phi| \leq b(n)$.

Next, let us define a randomized online algorithm. A randomized online algorithm $R$ computes the output sequence $R^{\psi}(I)=(y_1,\dots,y_n)$ such that $y_i$ is computed from $\psi, x_1,\dots,x_i$, where $\psi$ is the content of a random tape, i.e., an infinite binary sequence where every bit is chosen uniformly at random and independently of all the others. By $cost(R^{\psi}(I))$ we denote the random variable expressing the cost of the solution computed by $R$ on $I$. $R$ is $c$-competitive in expectation if there exists a non-negative constant $\alpha$ such that, for every input $I$, $\mathbb{E}[cost(R^{\psi}(I))] \leq c \cdot cost(Opt(I)) + \alpha$.

We use streaming algorithms for online minimization problems as online algorithms with restricted memory. You can read more about streaming algorithms in the literature [29, 6]. In short, these are algorithms that use a small amount of memory and read the input variables one by one. Suppose $A$ is a deterministic online streaming algorithm with $s$ bits of memory that processes an input $I=(x_1,\dots,x_n)$. Then we can describe the state of memory of $A$ before reading the input variable $x_i$ by a vector $d_i \in \{0,1\}^s$. The algorithm computes an output $(y_1,\dots,y_n)$ such that $y_i$ depends on $d_i$ and $x_i$, and the next state $d_{i+1}$ depends on $d_i$ and $x_i$. A randomized online streaming algorithm and a deterministic online streaming algorithm with advice have similar definitions, but with respect to the definitions of the corresponding models of online algorithms.
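The following minimal sketch (our own illustration; the state-update and output functions `delta` and `out` are hypothetical placeholders, not notation from the paper) shows the structure described above: the algorithm keeps only an s-bit state and must emit y_i right after reading x_i.

```python
def run_online_streaming(xs, s, delta, out, d0=0):
    """Simulate a deterministic online streaming algorithm.

    xs    -- input requests x_1, ..., x_n, read one by one
    s     -- number of memory bits; the state is an integer in [0, 2**s)
    delta -- state-update function: (state, x_i) -> new state
    out   -- output function: (state, x_i) -> answer y_i
    """
    d = d0                            # d_i: current memory state (fits in s bits)
    ys = []
    for x in xs:
        ys.append(out(d, x))          # y_i depends only on d_i and x_i
        d = delta(d, x) % (2 ** s)    # d_{i+1} depends only on d_i and x_i
    return ys

# Toy usage: output, after each bit, the parity of the bits read so far (s = 1).
print(run_online_streaming([1, 0, 1, 1],
                           s=1,
                           delta=lambda d, x: d ^ x,
                           out=lambda d, x: d ^ x))   # -> [1, 1, 0, 1]
```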

Now we are ready to define a quantum online algorithm. You can read more about quantum computation in [9]. A quantum online algorithm $Q$ computes the output sequence $Q(I)=(y_1,\dots,y_n)$ such that $y_i$ depends on $x_1,\dots,x_i$. The algorithm can measure its qubits several times during the computation. Note that quantum computation is a probabilistic process. $Q$ is $c$-competitive in expectation if there exists a non-negative constant $\alpha$ such that, for every input $I$, $\mathbb{E}[cost(Q(I))] \leq c \cdot cost(Opt(I)) + \alpha$.

Let us consider a quantum online streaming algorithm. For a given $n > 0$, a quantum online algorithm $Q$ with $q$ qubits is defined on an input $I=(x_1,\dots,x_n)$ and outputs $(y_1,\dots,y_n)$. The algorithm is a triple $Q=(T, |\psi\rangle_0, Result)$, where $T=\{T_i : 1\le i \le n\}$ are (left) unitary matrices representing the transitions; $T_i$ is applied on the $i$-th step, depending on the request read on that step. $|\psi\rangle_0$ is an initial vector from the $2^q$-dimensional Hilbert space over the field of complex numbers, and $Result$ is a function that converts the result of a measurement into an output variable. For any given input $I$, the computation of $Q$ on $I$ can be traced by a $2^q$-dimensional vector from this Hilbert space. The initial one is $|\psi\rangle_0$. On each step the input variable $x_i$ is tested and then the unitary operator is applied: $|\psi\rangle_i = T_i(x_i)\,|\psi\rangle_{i-1}$, where $|\psi\rangle_i$ represents the state of the system after the $i$-th step. The algorithm can measure one or more of its qubits on any step after the unitary transformation, or it can skip a measurement. Suppose that $Q$ is in the state $|\psi\rangle=(v_1,\dots,v_{2^q})$ before a measurement and measures the $j$-th qubit. Let the basis states whose $j$-th qubit has value $0$ have indices $a^j_1,\dots,a^j_{2^{q-1}}$, and let the basis states whose $j$-th qubit has value $1$ have indices $b^j_1,\dots,b^j_{2^{q-1}}$. The result of the measurement of the qubit is $0$ with probability $pr_0=\sum_{l} |v_{a^j_l}|^2$ and $1$ with probability $pr_1 = 1 - pr_0$. If $Q$ measures qubits on the $i$-th step, then it gets some number $\gamma$ as a result and returns $y_i = Result(\gamma)$.
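A minimal numerical sketch of this measurement rule (our own illustration using numpy, not code from the paper): the probability of observing 0 on the j-th qubit is the sum of squared amplitude moduli over the basis states whose j-th bit is 0.

```python
import numpy as np

def measure_qubit_prob0(state, j, q):
    """Probability that measuring qubit j of a q-qubit state vector yields 0.

    state -- complex vector of length 2**q (amplitudes in the computational basis)
    j     -- index of the measured qubit, 0 <= j < q (qubit 0 is the most significant bit)
    """
    prob0 = 0.0
    for basis_index, amplitude in enumerate(state):
        bit = (basis_index >> (q - 1 - j)) & 1   # value of qubit j in this basis state
        if bit == 0:
            prob0 += abs(amplitude) ** 2
    return prob0

# Toy usage: a single qubit in the state (|0> + |1>)/sqrt(2) gives 0 with probability 0.5.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(measure_qubit_prob0(plus, j=0, q=1))   # ~0.5
```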

There are OBDD models and automata models that are good abstractions of streaming algorithms. You can read more about classical and quantum OBDDs in [31, 30, 2, 3, 5, 4, 24]. Formal definitions of OBDDs and automata are given in Appendix 0.A. The following relations between automata, id-OBDDs (OBDDs that read input variables in the natural order) and streaming algorithms are folklore:

Lemma 1

If a quantum (probabilistic) id-OBDD of width $w$ computes a Boolean function $f$, then there is a quantum (randomized) streaming algorithm computing $f$ that uses $\lceil \log_2 w \rceil$ qubits (bits). If a quantum (probabilistic) automaton of size $w$ recognizes a language $L$, then there is a quantum (randomized) streaming algorithm recognizing $L$ that uses $\lceil \log_2 w \rceil$ qubits (bits). If any deterministic (probabilistic) id-OBDD computing a Boolean function $f$ has width at least $w$, then any deterministic (randomized) streaming algorithm computing $f$ uses at least $\log_2 w$ bits. If any deterministic (probabilistic) automaton recognizing a language $L$ has size at least $w$, then any deterministic (randomized) streaming algorithm recognizing $L$ uses at least $\log_2 w$ bits.

Let us describe the "Black Hats Method" from [25], which allows us to construct hard online minimization problems. In the paper we speak of a Boolean function $f$, but in fact we consider a family of Boolean functions $f = \{f_m\}$, one for each input length $m$. We write $f$ for $f_m$ when the length of the argument is clear from the context.

Suppose we have a Boolean function $f$ and positive integers $k$ and $t$ with $k \equiv 0 \pmod t$. Then the online minimization problem is the following. We have $k$ guardians and $k$ prisoners. They stand one by one in a line, alternating guardian, prisoner, guardian, prisoner, and so on. Each prisoner receives an input of length $m$ and computes the function $f$ on it. If the result is $1$, the prisoner paints his hat black; otherwise he paints it white. Each guardian wants to know the parity of the number of black hats that follow him. We separate the sequential guardians into blocks of $t$ guardians. The cost of a block is small if all guardians of the block are right, and large otherwise.
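A small sketch of how the cost of an output could be evaluated (our own illustration; the per-block costs `COST_RIGHT` and `COST_WRONG` are hypothetical placeholders, since the exact cost values are parameters of the problem):

```python
# Hypothetical cost evaluation for the Black Hats construction.
# guardians_right -- list of booleans: whether each guardian's parity answer is correct
# t               -- block length (guardians are grouped into consecutive blocks of t)
COST_RIGHT = 1   # assumed cost of a block in which every guardian is right
COST_WRONG = 2   # assumed cost of a block containing at least one wrong guardian

def black_hats_cost(guardians_right, t):
    assert len(guardians_right) % t == 0
    total = 0
    for start in range(0, len(guardians_right), t):
        block = guardians_right[start:start + t]
        total += COST_RIGHT if all(block) else COST_WRONG
    return total

# Toy usage: 4 guardians, blocks of 2; the second block contains one wrong answer.
print(black_hats_cost([True, True, True, False], t=2))   # -> 1 + 2 = 3
```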

Let us define the problem formally:

Definition 1 (Black Hats Method)

We have a Boolean function $f$. Then the online minimization problem, for positive integers $k$ and $t$ with $k \equiv 0 \pmod t$, is the following. Suppose we have an input $I=(x_1,\dots,x_n)$ in which $k$ special (guardian) requests alternate with $k$ prisoner input strings, each of length $m$. Let $O=(y_1,\dots,y_n)$ be an output and let $y_{i_1},\dots,y_{i_k}$ be the output bits corresponding to the guardian requests (in other words, the output variables for guardians); the guardian output $y_{i_j}$ should equal the parity of the values of $f$ on the prisoner inputs that follow it. We separate these $k$ output variables into consecutive blocks of length $t$. The cost of the $j$-th block is small if $y_{i_l}$ is correct for every guardian of the block, and large otherwise; the exact block costs are parameters of the problem. The cost of the whole output is the sum of the costs of all blocks.

3 New Results for Black Hats Method

The following results are known from [25].

Theorem 3.1 ([25])

Suppose we allow all algorithms to use at most $s$ bits of memory. Let the Boolean function $f$ be such that there is no deterministic streaming algorithm with $s$ bits of memory that computes $f$. Then there is no $c$-competitive deterministic online streaming algorithm computing the Black Hats problem for $f$, for any $c$ smaller than the explicit threshold given in [25].

Theorem 3.2 ([25])

Suppose we allow all algorithms to use at most $s$ bits of memory. Let the Boolean function $f$ be such that there is no randomized streaming algorithm with $s$ bits of memory that computes $f$ with bounded error. Then there is no $c$-competitive in expectation randomized online streaming algorithm computing the Black Hats problem for $f$, for any $c$ smaller than the explicit threshold given in [25].

Theorem 3.3 ([25])

Let the Boolean function $f$ be such that no deterministic streaming algorithm computes $f$ using less than $s$ bits of memory. Then for any deterministic online streaming algorithm that uses less than $s$ bits of memory and $b$ advice bits and solves the Black Hats problem for $f$, there is an input on which at least $k - b$ guardians return wrong answers. Consequently, the competitive ratio of the algorithm is bounded from below by the explicit bound given in [25].

Let us present new results on the Black Hats Method. The first one is a randomized analog of Theorem 3.3. The following theorem shows that $b$ advice bits cannot help for more than $b$ guardians. The proof is based on ideas from [25, 22, 14]. We use an auxiliary piecewise-defined function in the claim of the following theorem.

Theorem 3.4

Let the Boolean function $f$ be such that no randomized streaming algorithm computes $f$ with bounded error using less than $s$ bits of memory. Then for any randomized online streaming algorithm that uses less than $s$ bits of memory and $b$ advice bits and solves the Black Hats problem for $f$, there is an input on which at least $k - b$ guardians' answers cannot be computed with bounded error. The competitive ratio of the algorithm is bounded from below accordingly. (See Appendix 0.B.)

Assume that we have a randomized or quantum streaming algorithm that computes the Boolean function $f$. Then we can show that one advice bit is enough for solving the Black Hats problem for $f$ with high probability. The proof technique is similar to the proofs from [25].

Theorem 3.5

Let the Boolean function $f$ be such that there is a randomized (quantum) streaming algorithm $R$ that computes $f$ with bounded error using $s$ bits (qubits) of memory. Then there is a randomized (quantum) online streaming algorithm using at most $s+1$ bits (qubits) of memory and a single advice bit that solves the Black Hats problem for $f$, such that for any input the expected cost of its output is bounded in terms of the error probability of $R$. In particular, if $R$ computes $f$ exactly, then the online algorithm is optimal.

Proof

Let us present the randomized online streaming algorithm $A$:

Step 1. The algorithm gets the advice bit, which equals the correct parity answer for the first guardian, and stores it in a bit $b$. Then it returns $b$ as the answer of the first guardian.

Step 2. The algorithm reads the first prisoner's input $X_1$ and computes $b \leftarrow b \oplus R(X_1)$, where $R(X_1)$ is the result of the computation of the streaming algorithm $R$ for $f$ on the input $X_1$. Then it returns $b$ as the answer of the second guardian.

Step $i$, for $i > 2$: the algorithm reads the $(i-1)$-th prisoner's input $X_{i-1}$, computes $b \leftarrow b \oplus R(X_{i-1})$ and returns $b$ as the answer of the $i$-th guardian.

A quantum algorithm is similar, but has one additional action: it measures the whole quantum memory after each step and resets it to the initial state before the next step. We can do such an initialization because we know the result of the measurement from the previous step. See Appendix 0.C for the analysis of the expected cost and a detailed description of the quantum algorithm.
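A minimal sketch of this construction (our own illustration; `run_f` stands for the assumed black-box streaming algorithm R computing f, and its interface is hypothetical):

```python
import random

def run_f(prisoner_input, error_prob=0.0):
    """Placeholder for the streaming algorithm R computing f (here: parity of the input),
    possibly erring with a small bounded probability."""
    result = sum(prisoner_input) % 2
    return result ^ 1 if random.random() < error_prob else result

def black_hats_with_advice(prisoner_inputs, advice_bit, error_prob=0.0):
    """Guardians' answers: start from the advice bit (the correct answer for the first
    guardian) and XOR in each prisoner's computed value as it is read."""
    b = advice_bit
    answers = [b]                      # guardian 1 answers the advice bit
    for x in prisoner_inputs[:-1]:     # after prisoner i, guardian i+1 answers
        b ^= run_f(x, error_prob)
        answers.append(b)
    return answers

# Toy usage: 3 prisoners; the advice is the parity of f over all prisoners.
prisoners = [[1, 0, 1], [1, 1, 0], [0, 0, 1]]
advice = sum(sum(x) % 2 for x in prisoners) % 2
print(black_hats_with_advice(prisoners, advice))   # parities of the suffixes: [1, 1, 1]
```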

We can also modify the method to capture the case of several advice bits. Suppose we have several different instances of the Black Hats problem. Then we interleave prisoners and guardians from the different instances (here $g^j_i$ and $p^j_i$ denote the $i$-th guardian and the $i$-th prisoner of the $j$-th instance, respectively).

The formal definition is the following:

Definition 2 (Interleaved Black Hats method)

Let $f$ be a non-constant Boolean function. The interleaved online minimization problem, for positive integers $k$ and $t$ with $k \equiv 0 \pmod t$ and a number of instances, is the following. Suppose we have an input that interleaves the guardian requests and the prisoner input strings of the different instances. Let $O$ be an output and let the guardian output bits be the output bits corresponding to the guardian requests. As before, a guardian's answer should equal the parity of the values of $f$ on the prisoner inputs of its own instance that follow it. We separate all guardian output variables into blocks; a block contains the corresponding guardians of all instances. The cost of the $j$-th block is small if the answers of all guardians of the block are correct, and large otherwise. The cost of the whole output is the sum of the costs of all blocks.

Note that the interleaved problem with a single instance coincides with the original Black Hats problem.

Theorem 3.6

(See Appendix 0.D.) Let a Boolean function $f$ be such that there is a quantum streaming algorithm that computes $f$ exactly (with zero error) using $s$ qubits of memory. Then there is a quantum online streaming algorithm that solves the interleaved Black Hats problem for $f$ using at most $s+1$ qubits of memory and a bounded number of advice bits, and achieves a certain competitive ratio in expectation.

There is no deterministic online algorithm computing the same problem that uses the same number of advice bits and achieves this competitive ratio.

4 Applications

In this section we show some applications of the above results.

Let us discuss the partial function from [8, 5, 4]. Feasible inputs for the problem are the strings $x$ such that $\#_1(x)$, the number of $1$s in $x$, is divisible by a fixed power of two, and the value of the function is determined by one further bit of this count (see [8, 5, 4] for the exact parameters).
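As an illustration of why such promise problems admit tiny quantum algorithms, here is a minimal sketch (our own, under the assumption that the promise is "$\#_1(x)$ is divisible by $2^{k-1}$" and the answer is the parity of $\#_1(x)/2^{k-1}$; the exact constants used in the paper may differ): a single qubit is rotated by a fixed small angle for every $1$ that is read, and the final measurement is deterministic under the promise.

```python
import math

def single_qubit_mod_algorithm(x, k):
    """Exact single-qubit streaming sketch for a mod-counting promise problem.

    Assumption (ours): the number of 1s in x equals z * 2**(k-1) for an integer z,
    and the answer is z mod 2. Rotating by pi / 2**k per 1 read gives a total
    rotation of z * pi / 2, so the measurement outcome is deterministic.
    """
    theta = math.pi / (2 ** k)
    angle = 0.0
    for bit in x:                      # streaming pass: one symbol at a time
        if bit == 1:
            angle += theta
    prob1 = math.sin(angle) ** 2       # probability of measuring |1>
    return 1 if prob1 > 0.5 else 0     # 0 or 1 with certainty under the promise

# Toy usage with k = 2: 2 ones -> z = 1 (odd, answer 1); 4 ones -> z = 2 (even, answer 0).
print(single_qubit_mod_algorithm([1, 0, 1], k=2))          # -> 1
print(single_qubit_mod_algorithm([1, 1, 1, 1, 0], k=2))    # -> 0
```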

Lemma 2

For any $k$, there are infinitely many $n$ such that any deterministic or randomized online streaming algorithm computing the partial function uses at least $k$ bits of memory. (Based on [8, 5, 4] and Lemma 1.)

Let us apply the Black Hats Method to this function. See Appendix 0.E for the proof of Claim 1 of the following theorem. The other claims follow from Theorems 3.5, 3.3, 3.1, 3.6 and Lemma 1.

Theorem 4.1

Suppose the parameters $k$ and $t$ satisfy the divisibility condition $k \equiv 0 \pmod t$; then

1. There are two quantum online streaming algorithms for the problem. The first one gets a single advice bit, uses a single qubit of memory, and is optimal. The second one does not get advice bits, uses a single qubit of memory, and achieves a certain competitive ratio in expectation.

2. There is no deterministic online streaming algorithm that uses sublogarithmic memory and $o(n)$ advice bits, solves the problem, and is $c$-competitive for $c$ below a certain threshold. Similarly, there is no randomized online streaming algorithm that uses sublogarithmic memory and $o(n)$ advice bits, solves the problem, and is $c$-competitive for $c$ below a certain threshold.

3. There is no deterministic online algorithm with unlimited computational power that solves the problem and is $c$-competitive for $c$ below a certain threshold.

4. Consider the interleaved version of the problem. Then there is a quantum online streaming algorithm with a constant number of qubits and a constant number of advice bits that computes it with a certain competitive ratio in expectation, while any deterministic online algorithm with unlimited computational power and a constant number of advice bits that computes it has a strictly worse competitive ratio.

Let us discuss the case of polylogarithmic memory. As in [25], for this case we consider the Boolean function from [30]. Let $e_1,\dots,e_d$ be the standard basis of the underlying $d$-dimensional space. Let $V_0$ and $V_1$ denote the subspaces spanned by the first and the last half of these basis vectors. The input for the function consists of Boolean variables which are interpreted as universal codes for three unitary $d \times d$ matrices $A$, $B$, $C$. The function takes the value $a$ if the Euclidean distance between $CBA\,e_1$ and $V_a$ is at most a fixed threshold; otherwise the function is undefined.

Suppose the parameters $k$ and $t$ again satisfy $k \equiv 0 \pmod t$. The following facts are known from [25]:

1. There is a quantum online streaming algorithm using polylogarithmically many qubits that solves the problem and achieves a certain competitive ratio in expectation.

2. There is no deterministic online streaming algorithm that uses polylogarithmically many bits of memory and advice bits, solves the problem, and is $c$-competitive for $c$ below a certain threshold.

In this paper we present a quantum algorithm with a single advice bit and an analog of Claim 2 for the randomized case. The proofs of these claims are based on Theorems 3.4 and 3.5.

Theorem 4.2

3. There is a quantum online streaming algorithm for the problem that gets a single advice bit, uses polylogarithmically many qubits of memory, and achieves a certain competitive ratio in expectation.

4. There is no randomized online streaming algorithm that uses polylogarithmically many bits of memory and advice bits, solves the problem, and is $c$-competitive for $c$ below a certain threshold.

References

  • [1] F. Ablayev, A. Ambainis, K. Khadiev, and A. Khadieva. Lower bounds and hierarchies for quantum automata communication protocols and quantum ordered binary decision diagrams with repeated test. pages 197–211, 2018.
  • [2] F. Ablayev, A. Gainutdinova, and M. Karpinski. On computational power of quantum branching programs. In FCT, volume 2138 of LNCS, pages 59–70. Springer, 2001.
  • [3] F. Ablayev, A. Gainutdinova, M. Karpinski, C. Moore, and C. Pollett. On the computational power of probabilistic and quantum branching program. Information and Computation, 203(2):145–162, 2005.
  • [4] F. Ablayev, A. Gainutdinova, K. Khadiev, and A. Yakaryılmaz. Very narrow quantum obdds and width hierarchies for classical obdds. Lobachevskii Journal of Mathematics, 37(6):670–682, 2016.
  • [5] F. Ablayev, A. Gainutdinova, K. Khadiev, and A. Yakaryılmaz. Very narrow quantum obdds and width hierarchies for classical obdds. In DCFS, volume 8614 of LNCS, pages 53–64. Springer, 2014.
  • [6] Charu C Aggarwal and Chandan K Reddy. Data clustering: algorithms and applications. CRC press, 2013.
  • [7] Susanne Albers. BRICS, Mini-Course on Competitive Online Algorithms. Aarhus University, 1996.
  • [8] A. Ambainis and A. Yakaryılmaz. Superiority of exact quantum automata for promise problems. Information Processing Letters, 112(7):289–291, 2012.
  • [9] Andris Ambainis and Abuzer Yakaryılmaz. Automata and quantum computing. Technical Report 1507.01988, arXiv, 2015.
  • [10] L. Becchetti and E. Koutsoupias. Competitive analysis of aggregate max in windowed streaming. In Automata, Languages and Programming: ICALP 2009 Proceedings, Part I, pages 156–170. Springer, 2009.
  • [11] J. Boyar, L.M Favrholdt, C. Kudahl, K.S Larsen, and J.W Mikkelsen. Online algorithms with advice: A survey. ACM Computing Surveys (CSUR), 50(2):19, 2017.
  • [12] Joan Boyar, Sandy Irani, and Kim S Larsen. A comparison of performance measures for online algorithms. In Workshop on Algorithms and Data Structures, pages 119–130. Springer, 2009.
  • [13] Joan Boyar, Kim S Larsen, and Abyayananda Maiti. The frequent items problem in online streaming under various performance measures. International Journal of Foundations of Computer Science, 26(4):413–439, 2015.
  • [14] C. Dwork and L. Stockmeyer. A time complexity gap for two-way probabilistic finite-state automata. SIAM Journal on Computing, 19(6):1011–1023, 1990.
  • [15] Reza Dorrigiv and Alejandro López-Ortiz. A survey of performance measures for on-line algorithms. SIGACT News, 36(3):67–81, 2005.
  • [16] A. Gainutdinova. Comparative complexity of quantum and classical obdds for total and partial functions. Russian Mathematics, 59(11):26–35, 2015.
  • [17] D. Gavinsky, J. Kempe, I. Kerenidis, R. Raz, and R. De Wolf. Exponential separations for one-way quantum communication complexity, with applications to cryptography. In Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, pages 516–525. ACM, 2007.
  • [18] Y. Giannakopoulos and E. Koutsoupias. Competitive analysis of maintaining frequent items of a stream. Theoretical Computer Science, 562:23–32, 2015.
  • [19] J. Hromkovič. Design and Analysis of Randomized Algorithms: Introduction to Design Paradigms. Springer, 2005.
  • [20] R. Ibrahimov, K. Khadiev, K. Prūsis, and A. Yakaryılmaz. Zero-error affine, unitary, and probabilistic obdds. arXiv:1703.07184, 2017.
  • [21] Anna R. Karlin, Mark S. Manasse, Larry Rudolph, and Daniel D. Sleator. Competitive snoopy caching. In 27th Annual Symposium on Foundations of Computer Science, pages 244–254. IEEE, 1986.
  • [22] K. Khadiev, R. Ibrahimov, and A. Yakaryılmaz. New size hierarchies for two way automata. arXiv, 2018.
  • [23] K. Khadiev and A. Khadieva. Quantum automata for online minimization problems. In Ninth Workshop on NCMA 2017 Short Papers, pages 25–33. Institut für Computersprachen, TU Wien, 2017.
  • [24] K. Khadiev and A. Khadieva. Reordering method and hierarchies for quantum and classical ordered binary decision diagrams. In Computer Science – Theory and Applications: CSR 2017 Proceedings, volume 10304 of LNCS, pages 162–175. Springer, 2017.
  • [25] K. Khadiev, A. Khadieva, D. Kravchenko, A. Rivosh, R. Yamilov, and I. Mannapov. Quantum versus classical online algorithms with advice and logarithmic space. arXiv:1710.09595, 2017.
  • [26] K. Khadiev, A. Khadieva, and I. Mannapov. Quantum online algorithms with respect to space complexity. arXiv:1709.08409, 2017.
  • [27] Dennis Komm. An Introduction to Online Computation: Determinism, Randomization, Advice. Springer, 2016.
  • [28] François Le Gall. Exponential separation of quantum and classical online space complexity. In Proceedings of the eighteenth annual ACM symposium on Parallelism in algorithms and architectures, pages 67–73. ACM, 2006.
  • [29] S. Muthukrishnan et al. Data streams: Algorithms and applications. Foundations and Trends® in Theoretical Computer Science, 1(2):117–236, 2005.
  • [30] Martin Sauerhoff and Detlef Sieling. Quantum branching programs and space-bounded nonuniform quantum complexity. Theoretical Computer Science, 334(1):177–225, 2005.
  • [31] Ingo Wegener. Branching Programs and Binary Decision Diagrams: Theory and Applications. SIAM, 2000.
  • [32] Q. Yuan. Quantum online algorithms. University of California, Santa Barbara, 2009.

Appendix 0.A Definition of OBDD and Automaton

An OBDD is a restricted version of a branching program (BP). A BP over a set of Boolean variables $X=\{x_1,\dots,x_n\}$ is a directed acyclic graph with two distinguished nodes: $s$ (a source node) and $t$ (a sink node). We denote it $P_{s,t}$ or just $P$. Each inner node $v$ of $P$ is associated with a variable $x(v) \in X$. A deterministic $P$ has exactly two outgoing edges, labeled $0$ and $1$ respectively, for each inner node $v$. The program $P$ computes a Boolean function $f(X)$ ($f:\{0,1\}^n \to \{0,1\}$) as follows: for each $\sigma \in \{0,1\}^n$ we let $f(\sigma)=1$ iff there exists at least one $s$-$t$ path (called an accepting path for $\sigma$) such that all edges along this path are consistent with $\sigma$. The size of a branching program is its number of nodes. An Ordered Binary Decision Diagram (OBDD) is a BP with the following restrictions:

(i) Nodes can be partitioned into levels such that the source $s$ belongs to the first level and the sink node $t$ belongs to the last level. Nodes from one level have outgoing edges only to nodes of the next level.

(ii) All inner nodes of one level are labeled by the same variable.

(iii) Each variable is tested on each path only once.

The width of a program is the maximal number of nodes in one level. An OBDD reads variables in its individual order $\theta=(j_1,\dots,j_n)$. We consider only the natural order $(1,2,\dots,n)$; in this case we denote the model id-OBDD.
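A minimal sketch of evaluating a width-$w$ id-OBDD as a single streaming pass (our own illustration; the level transition tables are a hypothetical encoding, not notation from the paper): the current node index within a level is the only state that has to be kept, so $\lceil \log_2 w \rceil$ bits of memory suffice, which mirrors the relation stated in Lemma 1.

```python
def eval_id_obdd(levels, accepting_nodes, x):
    """Evaluate an id-OBDD on input x by a single left-to-right pass.

    levels          -- levels[i][node][bit] = node index at level i+1
                       (level i reads variable x_{i+1}); a hypothetical encoding
    accepting_nodes -- set of accepting node indices at the last level
    x               -- list of input bits x_1, ..., x_n, read in the natural order
    """
    node = 0                                 # start at the source node
    for i, bit in enumerate(x):              # one transition per input variable
        node = levels[i][node][bit]
    return 1 if node in accepting_nodes else 0

# Toy usage: a width-2 id-OBDD computing the parity x_1 XOR x_2 XOR x_3.
parity_level = [[0, 1], [1, 0]]              # node 0 = "even so far", node 1 = "odd so far"
print(eval_id_obdd([parity_level] * 3, {1}, [1, 0, 1]))   # -> 0
```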

A probabilistic OBDD (POBDD) can have more than two outgoing edges per node, and one of them is chosen by a probabilistic mechanism. A POBDD computes a Boolean function with bounded error if the probability of the right answer is at least $1/2 + \varepsilon$ for some constant $\varepsilon > 0$.

Let us define a quantum OBDD. It is given in different terms than the classical one, but one can check that they are equivalent; see [2] for more details. For a given $n > 0$, a quantum OBDD $P$ of width $w$ defined on the variables $X=\{x_1,\dots,x_n\}$ is a 4-tuple $P=(T, |\psi\rangle_0, Accept, \pi)$, where $T=\{T_i=(T_i^0, T_i^1) : 1 \le i \le n\}$ are ordered pairs of (left) unitary matrices representing the transitions. Here $T_i^0$ or $T_i^1$ is applied on the $i$-th step, and the choice is determined by the input bit. The vector $|\psi\rangle_0$ is an initial vector from the $w$-dimensional Hilbert space over the field of complex numbers; it corresponds to the initial node. $Accept \subseteq \{1,\dots,w\}$ is a set of accepting nodes. $\pi$ is a permutation of $\{1,\dots,n\}$ that defines the order of the input bits.

For any given input $\sigma \in \{0,1\}^n$, the computation of $P$ on $\sigma$ can be traced by a $w$-dimensional vector from the Hilbert space over the field of complex numbers. The initial one is $|\psi\rangle_0$. In each step $i$, $1 \le i \le n$, the input bit $x_{\pi(i)}$ is tested and then the corresponding unitary operator is applied: $|\psi\rangle_i = T_i^{x_{\pi(i)}}\,|\psi\rangle_{i-1}$, where $|\psi\rangle_i$ represents the state of the system after the $i$-th step. We can measure one of the qubits. Let the program be in the state $|\psi\rangle=(v_1,\dots,v_w)$ before the measurement and let us measure the $j$-th qubit. Let the states whose $j$-th qubit has value $0$ have indices $a^j_1,\dots,a^j_{w/2}$, and the states whose $j$-th qubit has value $1$ have indices $b^j_1,\dots,b^j_{w/2}$. The result of the measurement of the $j$-th qubit is $0$ with probability $pr_0=\sum_{l}|v_{a^j_l}|^2$ and $1$ with probability $pr_1 = 1 - pr_0$. The program measures all qubits at the end of the computation process. The program accepts the input $\sigma$ (returns $1$ on the input) with probability $Pr_{accept}(\sigma)=\sum_{l \in Accept}|v_l|^2$, computed from the final state $(v_1,\dots,v_w)$.

We say $P(\sigma)=1$ if $P$ accepts the input $\sigma$ with probability at least $1/2+\varepsilon$, and $P(\sigma)=0$ if $P$ accepts the input $\sigma$ with probability at most $1/2-\varepsilon$, for some $\varepsilon>0$. We say that a function $f$ is computed by $P$ with bounded error if there exists an $\varepsilon>0$ such that $P(\sigma)=f(\sigma)$ for any $\sigma$. We can say that $P$ computes $f$ with bounded error $1/2-\varepsilon$.

Automata. We can say that an automaton is an id-OBDD whose transition function is the same for all levels. Here an id-OBDD is an OBDD with the natural order $(1,2,\dots,n)$.

Appendix 0.B The Proof of Theorem 3.4

0.b.1 Definitions

Let $\Pi=(X_A, X_B)$ be a partition of the set $X$ of input variables into two parts. Below we use the equivalent notations $f(X)$ and $f(X_A, X_B)$ for a Boolean function $f$. A subfunction of the Boolean function $f$ with respect to the partition $\Pi$ and an assignment $\rho$ to the variables of $X_A$ is the function of the variables $X_B$ obtained from $f$ by fixing $X_A=\rho$. Let $N^{\Pi}(f)$ be the number of different subfunctions of $f$ with respect to the partition $\Pi$.

Consider the number of equivalence classes of memory states of a randomized streaming algorithm that computes a function using $s$ bits of memory. From [22, 14] we know that this number must be sufficiently large in terms of the number of subfunctions $N^{\Pi}(f)$ for the algorithm to compute $f$ with bounded error.

0.b.2 Proof

Let us prove that if the algorithm gets $b$ advice bits, then there is an input such that at least $k-b$ of the prisoners' answers cannot be computed with bounded error. We prove it by induction.

First, consider the case $b \geq k$. Then the Adviser can send the results for all guardians and the algorithm returns all answers correctly.

Second, consider the case $b=0$. In this case the algorithm has no advice at all and we are in the situation of Theorem 3.2.

Third, let us prove the claim for the other cases. Assume that we have already proved the claim for any pair $(k', b')$ with $k' \leq k$ and $b' \leq b$, where at least one of these inequalities is strict. We focus on the first prisoner.

Assume that there is an input for the first prisoner such that the algorithm cannot compute this prisoner's answer with bounded error. Then we use this input and get the situation with $k-1$ prisoners and $b$ advice bits. In that case at least $k-1-b$ of the remaining prisoners are wrong, plus the first one is also wrong, which gives at least $k-b$ wrong answers.

Now assume that the algorithm can always compute the answer for the first prisoner with bounded error. We can describe the process of communication with the Adviser in the following way: the Adviser separates all possible inputs into $2^b$ non-overlapping groups. After that he sends the number of the group that contains the current input to the algorithm. Then the algorithm processes the input with the knowledge that the input can only belong to this group.

Let us consider three sets of groups: the groups in which the value of $f$ on the first prisoner's input is always $0$, the groups in which this value is always $1$, and the remaining (mixed) groups. If the first of these sets contains at most $2^{b-1}$ groups, then as the first prisoner's input we take any input occurring in a group of this set. Hence at most $2^{b-1}$ groups remain possible for the Adviser, so the Adviser can encode the group using $b-1$ bits. Therefore, we obtain the situation with $b-1$ advice bits and $k-1$ prisoners, where the claim is true. If the second set contains at most $2^{b-1}$ groups, then as the first prisoner's input we take any input occurring in a group of this set; again at most $2^{b-1}$ groups remain possible for the Adviser, and we are in the same situation, where the claim is true.

Now consider the remaining case. Suppose that the algorithm uses less than $s$ bits of memory. Let us focus on the position right after the first prisoner's input has been read. At this point, the algorithm knows about the previous input only the state of its memory and the number of the group received from the Adviser. Note that the number of equivalence classes of memory states of the algorithm on the first prisoner's input is smaller than the number of subfunctions of $f$; this inequality holds because otherwise we could construct a randomized streaming algorithm computing $f$ with less than $s$ bits of memory, contradicting the assumption on $f$. By the pigeonhole principle we can choose two inputs $\gamma$ and $\gamma'$ for the first prisoner that correspond to different subfunctions of $f$ but to the same equivalence class of memory states of the algorithm. Hence there is a suffix $\delta$ such that $f(\gamma,\delta) \neq f(\gamma',\delta)$.

The inputs $\gamma$ and $\gamma'$ cannot belong to the same group of inputs sent by the Adviser. Suppose these two inputs are in the same group. Then after reading either of them the algorithm is in a memory state from the same equivalence class. So the algorithm then processes only the suffix and returns the same result, with bounded error, for both inputs. At the same time $f(\gamma,\delta) \neq f(\gamma',\delta)$, a contradiction.

Let the sets $S_{\gamma}$ and $S_{\gamma'}$ be such that all groups from $S_{\gamma}$ contain $\gamma$ as the first prisoner's input and all groups from $S_{\gamma'}$ contain $\gamma'$ as the first prisoner's input. Let us choose the smaller of the sets $S_{\gamma}$ and $S_{\gamma'}$, and choose the corresponding input $\gamma$ or $\gamma'$ as the first prisoner's input. The size of this set is at most $2^{b-1}$. Therefore, we obtain the situation with $k-1$ prisoners and $b-1$ advice bits.

Therefore, we know that if the algorithm uses $b$ advice bits then at least $k-b$ prisoners return wrong answers. Hence the best strategy for the Adviser is to send the right answers for $b$ guardians. If the algorithm knows the answer for a guardian, then it can simply ignore the result of the preceding prisoner. We can show that the guardians that do not get an answer from the Adviser ("unknown" guardians) cannot be computed with bounded error; therefore, their answers can only be guessed with probability $1/2$. We use the proof technique of Theorem 11 from [25], and we apply exactly the same approach to all segments between "known" guardians.

So, because of the properties of the cost function, the best strategy for the Adviser is to send information about all guardians of a block. Hence the algorithm can get some full blocks, each of which has the minimal block cost. If the number of advice bits is not a multiple of the block length, then one block has both "known" and "unknown" guardians; all "unknown" guardians can only be guessed with probability $1/2$, so the expected cost of such a block lies between the minimal and the maximal block cost. The other blocks contain only "unknown" guardians and have the corresponding larger expected cost. Therefore, we can construct an input whose expected cost is bounded from below accordingly, and the stated bound on the competitive ratio of the algorithm follows.

Appendix 0.C The End of The Proof of Theorem 3.5

0.c.1 The Quantum Online Streaming Algorithm

Let us describe the quantum online streaming algorithm $Q$:

Step 1. The algorithm gets the advice bit and stores the current result in a qubit $b$ (as a basis state). Then the algorithm measures $b$ and gets this value with probability $1$; it returns the result of the measurement as the answer of the first guardian.

Step 2. The algorithm reads the first prisoner's input $X_1$ and updates $b$ by a CNOT (XOR) of $b$ and $R(X_1)$, where $R(X_1)$ is the result of the computation of the quantum streaming algorithm $R$ for $f$ on the input $X_1$; $R$ uses a separate register of qubits for processing $X_1$. Then the algorithm returns the result of a measurement of $b$ as the answer of the second guardian. After that it measures all qubits of the register and sets them to $|0\rangle$. The algorithm can do this because it knows the result of the measurement and can rotate each qubit so that it becomes $|0\rangle$.

Step $i$, for $i > 2$: the algorithm reads the $(i-1)$-th prisoner's input $X_{i-1}$ and updates $b$ by a CNOT (XOR) of $b$ and $R(X_{i-1})$, using the same register for processing $X_{i-1}$. Then it returns the result of the measurement of $b$ as the answer of the $i$-th guardian. After that it measures the register and sets it to $|0\rangle$.

0.c.2 Analysis of Expected Cost

Let us compute the cost of the output of this algorithm. Let us consider a new cost function: under it, a "right" block has one fixed cost and a "wrong" block has another. The original cost is an affine function of the new one, so in the following proof we can consider only the new cost function.

First, let us compute the probability that a block is a "right" block. Consider the first block. If this block is "right", then all prisoners inside the block return right answers; the probability of this event is the product of the per-prisoner success probabilities.

Now consider a later block. If this block is "right", then two conditions should hold:

(i) All prisoners inside the block return right answers.

(ii) The number of errors made on the preceding prisoners' computations is even, so that the accumulated parity entering the block is correct.

The probability of the first condition is again the product of the per-prisoner success probabilities. Let us compute the probability of the second condition.

Let $e_i$ be the number of errors before the $i$-th guardian, i.e., the number of errors made on the previous prisoners' computations. Let $q_i$ be the probability that $e_i$ is even. Therefore, $1-q_i$ is the probability that $e_i$ is odd.

If there is an error in the computation of the $i$-th prisoner, then $e_i$ should be odd for $e_{i+1}$ to be even; if there is no error for the $i$-th prisoner, then $e_i$ should be even. Hence $q_{i+1}$ is determined by $q_i$ and the error probability of a single prisoner's computation. Note that $q_1 = 1$, because the first guardian gets the answer as the advice bit.
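A sketch of the resulting recurrence (the notation is ours, assuming each prisoner's computation errs independently with probability $\varepsilon$; the paper's exact expression may differ):

```latex
% e_{i+1} is even iff (e_i is even and prisoner i is correct)
%                  or (e_i is odd  and prisoner i errs):
\[
  q_{i+1} = (1-\varepsilon)\, q_i + \varepsilon\,(1-q_i), \qquad q_1 = 1,
\]
% which unrolls to
\[
  q_{i+1} = \frac{1}{2}\left(1 + (1-2\varepsilon)^{\,i}\right).
\]
```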