The no-cloning theorem states that no unitary transform can clone an arbitrary quantum state. However, some unitary transforms can clone a subset of pure quantum states. For example, given basis states there is a unitary transform that transforms each to . In addition, there exist several generalizations of the no-cloning theorem, showing that imperfect clones can be made. In (Bužek and Hillery, 1996), a universal cloning machine was introduced that can clone an arbitrary state with fidelity 5/6.
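The basis-state example above can be checked directly: the two-qubit CNOT gate copies the basis states but fails on superpositions. The following numpy sketch is purely illustrative and not part of the paper's formalism:

```python
import numpy as np

# CNOT on two qubits maps |i>|0> to |i>|i> for basis states i in {0,1}.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

def try_clone(psi):
    """Return True iff CNOT maps psi tensor |0> to psi tensor psi."""
    out = CNOT @ np.kron(psi, zero)
    return np.allclose(out, np.kron(psi, psi))

assert try_clone(zero) and try_clone(one)   # basis states are cloned
plus = (zero + one) / np.sqrt(2)
assert not try_clone(plus)                  # a superposition is not
```

The failure on the superposition is exactly the no-cloning obstruction: CNOT sends it to an entangled state rather than a product of two copies.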
In this paper, we look at the no-cloning theorem from an algorithmic perspective. We introduce the notion of the algorithmic mutual information, , between two quantum states. This is a symmetric measure that enjoys conservation inequalities over unitary transforms and partial traces. Quantum algorithmic information upper bounds the amount of classical algorithmic information between POVM measurements of two quantum states.
Given this information function, a natural question to pose is whether a considerable portion of pure states can use a unitary transform to produce two states that share a large amount of information. This paper answers this question in the negative. Only a very sparse set of pure states can, given any unitary transform, duplicate algorithmic information.
This result is achieved in a two-step process. In the first step, we show that only a small minority of pure states have non-negligible self information. This fact is interesting in its own right, since we show that most pure states have high quantum algorithmic entropy. In the second step, we show that the information between any two states produced from a unitary transform and the quantum state is upper bounded by the self information of . More specifically,
If for unitary transform , then .
The details of the above statements can be found in Theorems 7 and 12. These two results together imply that, on average, states can only duplicate a negligible amount of information. However, the basis states, , can use a unitary transform to clone at least information, where is the Kolmogorov complexity measure.
In addition to this algorithmic take on the no-cloning theorem, we provide some other results as well. We define the notion of randomness of one quantum state with respect to another, possibly non computable, quantum state. We show conservation of randomness with respect to unitary transformations and partial traces. We prove a chain rule inequality with respect to quantum algorithmic entropy. We show that POVM measurements do not increase the deficiency of randomness of a quantum state with respect to another quantum state.
2 Related Work
The study of Kolmogorov complexity originated from the work of (Kolmogorov, 1965). The canonical self-delimiting form of Kolmogorov complexity was introduced in (Zvonkin and Levin, 1970) and (Chaitin, 1975). The universal probability was introduced in (Solomonoff, 1964).
More information about the history of the concepts used in this paper can be found in the textbook (Li and Vitányi, 2008). Quantum algorithmic probability was studied in (Gács, 2001). A type of quantum complexity dependent on descriptive complexity was introduced in (Vitányi, 2000). Another variant, quantum Kolmogorov complexity, was developed in (Berthiaume et al., 2001). Quantum Kolmogorov complexity uses a universal quantum Turing machine. The extension of Gács entropy to infinite Hilbert spaces can be seen in (Benatti et al., 2014). In (Benatti et al., 2006), a quantum version of Brudno’s theorem is proven, connecting the von Neumann entropy rate and two notions of quantum Kolmogorov complexity. In (Nies and Scholz, 2018), quantum Martin-Löf sequences were introduced.
3 Conventions and Kolmogorov Complexity Tools
Let , , be the set of natural numbers, bits and finite sequences. The th bit of a sequence is . for . if statement holds, else . , , , and , , , and , , and denote , , , and , , , and , , respectively. To explicitly specify a constant dependent on parameters , we use the notation .
For Turing machine , we say program outputs string , with , if outputs after reading bits of from the input tape and halts. Otherwise, if reads bits or never halts, then . By this definition, is a prefix algorithm. Auxiliary inputs to are denoted by . Our is universal, i.e. it minimizes (up to ) Kolmogorov complexity . This measure is . The universal probability of an element relative to string is . We omit empty . By the coding theorem, . When we say that the universal Turing machine is relativized to an elementary object, this means that an encoding of the object is provided to the universal Turing machine on an auxiliary tape.
4 Quantum States
We deal with finite dimensional Hilbert spaces , with bases . We assume and the bases for are the beginning of that of . An qubit space is denoted by , where qubit space has bases and . For we use to denote . The space has dimensions and we identify it with .
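As an illustrative aside, the identification of an n-qubit space with a 2^n-dimensional space can be made concrete with iterated Kronecker products; the following numpy sketch (not part of the paper's formalism) builds computational basis states this way:

```python
import numpy as np

# The n-qubit space is the n-fold tensor product of C^2, of dimension 2**n.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

def basis_state(bits):
    """Build the computational basis vector for a bit string via Kronecker products."""
    v = np.array([1], dtype=complex)
    for b in bits:
        v = np.kron(v, one if b else zero)
    return v

v = basis_state([1, 0, 1])   # the state |101> in an 8-dimensional space
assert v.shape == (2 ** 3,)
assert v[0b101] == 1 and np.count_nonzero(v) == 1
```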
A pure quantum state of length is represented as a unit vector in . Its corresponding element in the dual space is denoted by . The tensor product of two vectors is denoted by . The inner product of and is denoted by .
The transpose of a matrix is denoted by . The tensor product of two matrices is denoted by . The trace of a matrix is denoted by and, for tensor product space , the partial trace is denoted by . For positive semidefinite matrices, iff is positive semidefinite. Mixed states are represented by density matrices, which are self-adjoint, positive semidefinite operators of trace 1. A semi-density matrix has non-negative trace less than or equal to 1.
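The partial trace used above can be sketched in a few lines of numpy; the reshape/einsum approach below is one common implementation, shown here purely for illustration:

```python
import numpy as np

def partial_trace_B(rho, dA, dB):
    """Trace out the second factor of a density matrix on C^dA tensor C^dB."""
    return np.einsum('ijkj->ik', rho.reshape(dA, dB, dA, dB))

# Bell state (|00> + |11>)/sqrt(2): entangled, so its reduced state is mixed.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 2 ** -0.5
rho = np.outer(bell, bell.conj())

rho_A = partial_trace_B(rho, 2, 2)
assert np.allclose(rho_A, np.eye(2) / 2)      # maximally mixed qubit
assert np.isclose(np.trace(rho_A).real, 1.0)  # trace is preserved
```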
A pure quantum state and a (semi)density matrix are called elementary if their real and imaginary components have rational coefficients. Elementary objects can be encoded into strings or integers and can be the output of halting programs. Therefore one can use the terminology and , and also and . Algorithmic quantum entropy, also known as Gács entropy, is defined using the following universal semi-density matrix, parameterized by , with
The parameter represents the number of qubits. We use to denote the matrix over the Hilbert space denoted by the symbol . The Gács entropy of a mixed state , conditioned on , is defined by . We use the following notation for pure states, with . For empty we omit it. This definition of algorithmic entropy generalizes that of (Gács, 2001) to mixed states.
We say program lower computes positive semidefinite matrix if, given as input to universal Turing machine , the machine reads bits and outputs, with or without halting, a sequence of elementary semi-density matrices such that and . A matrix is lower computable if there is a program that lower computes it. The matrix is universal in that it multiplicatively dominates all lower computable semi-density matrices, as shown in the following theorem, which will be used throughout this paper.
Theorem ((Gács, 2001), Theorem 2)
If lower computes semi-density matrix , then .
5 Addition Theorem
The addition theorem for classical entropy asserts that the joint entropy for a pair of random variables is equal to the entropy of one plus the conditional entropy of the other, with. For algorithmic entropy, the chain rule is slightly more nuanced, with . An analogous relationship cannot be true for Gács entropy, , since as shown in Theorem 15 of (Gács, 2001), there exists elementary where can be arbitrarily large, and . However, the following theorem shows that a chain rule inequality does hold for .
For matrix , let be the submatrix of starting at position . For example, the matrix has , , , .
For matrix and matrix , let be the matrix whose entry is equal to . For any matrix , it can be seen that . Furthermore, if is lower computable and is elementary, then is lower computable.
For elementary semi-density matrices , we use to denote the encoding of the pair of an encoded and an encoded natural number .
Theorem 1 (Addition Inequality).
For semi-density matrices , , elementary ,
Let be the universal lower computable semi-density matrix over the space of 2n qubits, . Let be the universal matrix of the space over qubits. We define the following bilinear function over complex matrices of size , with . For fixed , is of the form . The matrix has trace equal to
using Theorem 14 of (Gács, 2001), which states . By the definition of , since and are positive semi-definite, it must be that is positive semi-definite. Since the trace of is , it must be that up to a multiplicative constant, is a semi-density matrix. Since is lower computable and is elementary, by the definition of , is lower computable relative to the string . Therefore we have that . So we have that . ∎
6 Deficiency of Randomness and Information
In this section, we extend algorithmic conservation of randomness and information to the quantum domain. We also present lower and upper bounds for the amount of self algorithmic information that a mixed quantum state can have.
The classical deficiency of randomness of a semimeasure with respect to a computable probability measure is denoted by . This term enjoys conservation inequalities, where for any computable transform , .
For semi-density matrix , a matrix is a -test, , if it is lower computable and . In (Gács, 2001), the universal randomness test of with respect to elementary was defined as , where is an enumeration of . Paralleling the classical definition, the deficiency of randomness of with respect to was defined as .
For non-computable , is not necessarily enumerable, and thus a universal lower computable randomness test does not necessarily exist and cannot be used to define the deficiency of randomness. So in this case, the deficiency of randomness is instead defined using an aggregation of -tests, weighted by their lower algorithmic probabilities. This is reminiscent of the definition of in (Levin, 1984), which is an aggregation of integral tests, weighted by their algorithmic probabilities. The lower algorithmic probability of a lower computable matrix is . Let .
The deficiency of randomness of with respect to is .
By definition, is universal, since for every lower computable -test , . So, relativized to invertible elementary , by Theorem 17 of (Gács, 2001), is equal, up to a multiplicative constant, to the universal lower computable test, and also . This parallels the classical definition of .
For semi-density matrix , relativized to unitary transform , .
For every string that lower computes , there is a string of the form that lower computes . This string uses the helper code , and on the auxiliary tape, to take the intermediary outputs of and output the intermediary outputs . Since the complexity of is a constant, .
Theorem 2 (Conservation of Randomness, Unitary Transform).
For semi-density matrices and , relativized to elementary unitary transform ,
If , then . This is because by assumption . So by the cyclic property of the trace, . Therefore, since is lower computable, . From Proposition 1, . So we have the following inequality
The other inequality follows from using the above reasoning with , , and . ∎
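The cyclic-trace step in the proof above is easy to verify numerically. The following numpy sketch, with randomly generated (and purely hypothetical) density matrix rho, positive semidefinite test matrix T, and unitary U, checks that tr((U rho U†) T) = tr(rho (U† T U)):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# A random density matrix rho and a random positive semidefinite test T.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
T = B @ B.conj().T

# A random unitary via QR decomposition.
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# Cyclic property of the trace: T is a test for U rho U†
# exactly when U† T U is a test for rho.
lhs = np.trace(U @ rho @ U.conj().T @ T)
rhs = np.trace(rho @ U.conj().T @ T @ U)
assert np.isclose(lhs, rhs)
```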
Conservation of randomness also holds over a partial trace, as shown in the following theorem. Deficiency of randomness does not increase when passing to the reduced quantum states.
Theorem 3 (Conservation of Randomness, Partial Trace).
For , for the space of qubits, , relativized to and , .
If , then , where is the identity operator over . This is because . Since is lower computable, . Also . So
For a pair of random variables , , their mutual information is defined to be . This represents the amount of correlation between and . Another interpretation is that the mutual information between and is the reduction in uncertainty of after being given access to .
Quantum mutual information between two subsystems described by states and of a composite system described by a joint state is , where is the von Neumann entropy. Quantum mutual information measures the correlation between two quantum states.
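As a concrete instance of this formula, the following numpy sketch (illustrative only) computes the quantum mutual information of a Bell pair, a pure joint state whose marginals are maximally mixed, giving I(A:B) = 1 + 1 - 0 = 2:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

# Bell state (|00> + |11>)/sqrt(2).
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 2 ** -0.5
rho_AB = np.outer(bell, bell.conj())
rho_A = np.einsum('ijkj->ik', rho_AB.reshape(2, 2, 2, 2))  # trace out B
rho_B = np.einsum('ijil->jl', rho_AB.reshape(2, 2, 2, 2))  # trace out A

mi = (von_neumann_entropy(rho_A) + von_neumann_entropy(rho_B)
      - von_neumann_entropy(rho_AB))
assert np.isclose(mi, 2.0)
```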
The algorithmic information between two strings is defined to be . By definition, it measures the amount of compression two strings achieve when grouped together.
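Since Kolmogorov complexity is uncomputable, any concrete illustration of this quantity must use a computable proxy. The sketch below uses zlib compressed length as a crude upper-bound stand-in for K, so the numbers are only heuristic, but the qualitative behavior (shared structure yields compression savings when grouped together) is visible:

```python
import zlib

def C(x: bytes) -> int:
    """Compressed length in bits: a computable upper-bound proxy for K(x)."""
    return 8 * len(zlib.compress(x, 9))

def I_approx(x: bytes, y: bytes) -> int:
    """C(x) + C(y) - C(xy): the savings from compressing the pair jointly."""
    return C(x) + C(y) - C(x + y)

x = b"abracadabra" * 100
y = b"abracadabra" * 100      # identical to x: large shared information
z = bytes(range(256)) * 5     # unrelated content

assert I_approx(x, y) > I_approx(x, z)
assert I_approx(x, y) > 0
```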
The three definitions above are based on the difference between a joint aggregate and its separate parts. Another approach is to define information between two semi-density matrices as the deficiency of randomness over , with the mutual information of and being . This is a test against the hypothesis that the states were independently chosen according to the universal semi-density matrix . This parallels the classical algorithmic case, where . In fact, using this definition, all the theorems in Section 6 can be proven. However, to achieve the conservation inequalities in Section 7, a further refinement is needed, restricting the form of the tests. Let be the set of all lower computable matrices such that . Let .
The mutual information between two semi-density matrices , is defined to be .
Up to an additive constant, information is symmetric.
This follows from the fact that for every , the matrix . Furthermore, since , this guarantees that , thus proving the theorem. ∎
Classical algorithmic information non-growth laws assert that the information between two strings cannot be increased by more than a constant depending on the computable transform , with . Conservation inequalities have been extended to probabilistic transforms and infinite sequences. The following theorem shows information non-growth in the quantum case: information cannot increase under an elementary unitary transform. The general form of the proof of this theorem is analogous to the proof of Corollary 1 in (Levin, 1984).
Theorem 5 (Conservation of Information, Unitary transform).
For semi-density matrices and , relativized to elementary unitary transform , .
Given density matrices , , and , we define . Thus . The semi-density matrix is lower semicomputable, so and also . So if then there is a positive constant , where . So we have
Using the reasoning of Theorem 2 on the unitary transform and we have that . Therefore we have that . The other inequality follows from using the same reasoning with and . ∎
6.2 Self Information
For classical algorithmic information, , for all . As shown in this section, this property does not carry over to the quantum case: there exist quantum states with high descriptional complexity and negligible self information. In fact, this is the case for most pure states. The following theorem states that the information between two elementary states is not more than the combined length of their descriptions.
For elementary and , .
Assume not. Then for any positive constant , there exist semi-density matrices and such that . By the definition of , and . Therefore, by the definition of the Kronecker product, there is some positive constant such that for all and , , and similarly . By the definition of , it must be that . However, for , there exist a and a such that , causing a contradiction.
Let be the uniform distribution on the unit sphere of .
A POVM is a finite or infinite set of positive semidefinite matrices such that . For a given semi-density matrix , a POVM induces a semimeasure over the integers, where . This can be seen as the probability of seeing measurement outcome given quantum state and measurement . An elementary POVM has a program such that outputs an enumeration of , where each is elementary. Theorem 8 shows that measurements can increase the deficiency of randomness of a quantum state by at most a constant factor. Note that the term represents the classical deficiency of randomness of a semimeasure with respect to a computable probability measure, as defined at the beginning of Section 6.
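The induced distribution is simply the map from each POVM element to its trace against the state. A small numpy sketch with a hypothetical two-outcome POVM on one qubit:

```python
import numpy as np

# A hypothetical two-outcome POVM: elements are positive semidefinite
# and sum to the identity.
E0 = np.array([[0.8, 0.0], [0.0, 0.2]], dtype=complex)
E1 = np.eye(2) - E0
assert np.allclose(E0 + E1, np.eye(2))

def induced_measure(rho, povm):
    """The semimeasure mu(k) = tr(E_k rho) induced by the POVM on rho."""
    return np.array([np.trace(E @ rho).real for E in povm])

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # the state |+><+|
p = induced_measure(rho, [E0, E1])
assert np.allclose(p, [0.5, 0.5])
assert np.isclose(p.sum(), 1.0)   # a density matrix yields a probability measure
```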
For semi-density matrices , , relativized to elementary and POVM ,
, where the matrix has , since is lower computable and . So . Since , . ∎
The information between two quantum states is lower bounded by the classical information between two measurements of those states, as shown in the following theorem. The following theorem also implies that pure states that are close to simple rotations of complex basis states, i.e. unentangled states, will have high self information. However, such states are sparse. On average, a quantum state will have negligible self information.
For semi density matrices , , and , relativized to elementary POVM ,
Since is lower semi-computable and , , and so . Let . , because it is lower semicomputable and . Therefore
For semi density matrices , , relativized to elementary POVM ,
This follows from , where . The reasoning of Theorem 9 can then be used.
For basis state , .
Corollary 2 shows that for basis states , a unitary transform that produces from each will duplicate at least quantum algorithmic information.
7 Algorithmic No-Cloning Theorem
We show that the amount of quantum algorithmic information that can be replicated is bounded by the amount of self information that a state has. As shown in Theorem 7, the set of pure states with high self information is very small. The following theorem states that information non-growth holds with respect to partial traces.
Theorem 10 (Conservation of Information, Partial Trace).
For , and the space of qubits , relativized to and , .
There is a positive constant where if is in then is in , where is the identity operator over . Using Theorem 14 of (Gács, 2001) which states , we have that . It is easy to see that . So
For a density matrix over the space of qubits , .
For lower computable semi-density matrix and elementary semi-density matrix ,