A Correlation Measure Based on Vector-Valued L_p-Norms

05/21/2018
by   Mohammad Mahdi Mojahedian, et al.

In this paper, we introduce a new measure of correlation for bipartite quantum states. This measure depends on a parameter α and is defined in terms of vector-valued L_p-norms. The measure is within a constant of the exponential of the α-Rényi mutual information, and reduces to the trace norm (total variation distance) for α=1. We prove some decoupling-type theorems in terms of this measure of correlation, and present applications to privacy amplification as well as to bounding random coding exponents. In particular, we establish a bound on the secrecy exponent of the wiretap channel (under the total variation metric) in terms of the α-Rényi mutual information according to Csiszár's proposal.


1 Introduction

In this paper, for any value of a parameter $\alpha$ we introduce a new measure of correlation by

(1)

where $\|\cdot\|$ denotes a certain vector-valued norm which for $\alpha=1$ reduces to the $1$-norm. Since Rényi mutual information (according to Sibson's proposal) can also be expressed in terms of such norms, our measure of correlation is also related to Rényi mutual information.

The main motivation for introducing these measures of correlation, particularly for $\alpha>1$, is their application in decoupling theorems. The point is that the average of this measure, when evaluated on the outcome of a certain random CPTP map applied to a bipartite quantum state, can be bounded by an explicit expression involving a constant. Thus our measures of correlation can be used to prove decoupling-type theorems in information theory.

Decoupling theorems have already found several applications in information theory. Most achievability results in quantum information theory are based on the phenomenon of decoupling (see [1] and references therein). Also, in classical information theory the OSRB method of [2] provides a similar decoupling-type tool for proving achievability results. The advantage of our decoupling theorem based on the measure introduced here, compared to previous ones, is that it works for all values of $\alpha$. Given the relation between our measure and Rényi mutual information mentioned above, the parameters appearing in our decoupling theorem are related to the $\alpha$-Rényi mutual information, which for $\alpha=1$ reduces to Shannon's mutual information. Therefore, we can use our decoupling theorems not only for proving achievability results but also for proving interesting bounds on random coding exponents. We demonstrate this application via the examples of entanglement generation via a noisy quantum communication channel, and secure communication over a (classical) wiretap channel. In particular, we show a bound on the secrecy exponent of random coding over a wiretap channel in terms of the Rényi mutual information according to Csiszár's proposal.

Another application of our new measures of correlation is in secrecy. To measure the security of a communication system, one has to quantify the amount of information leaked to an eavesdropper. While the common security metrics for measuring the leakage are mutual information (see, e.g., [3]) and the total variation distance [2, 4], a few recent works motivate and define other measures of correlation to quantify leakage [5, 6, 7, 8, 9, 10, 11, 12]. Herein, we suggest the use of our metric instead of mutual information because it is a stronger metric and has a better rate-security tradeoff curve. To explain the rate-security tradeoff, consider a secure transmission protocol over a communication channel achieving a given communication rate with a certified bound on the leakage according to the mutual information metric. Now, if the transmitter obtains a classified message for which that level of leakage is no longer acceptable, it can sacrifice communication rate for improved transmission security. We show that the rate-security tradeoff with the mutual information metric is far worse than that of our metric. We will discuss this fact in more detail via the problem of privacy amplification.

The definition of our measure of correlation is based on the theory of vector-valued $L_p$ spaces. These spaces are generalizations of the classical $L_p$ spaces and are defined via the theory of complex interpolation. The proofs of our main theorems then rely heavily on interpolation theory. In particular, we use the Riesz-Thorin interpolation theorem several times, in order to establish an inequality for all values of the parameter by interpolating between the two endpoint values.

In the following section, we review some notation and introduce vector-valued norms. Section 3 introduces our new measure of correlation and presents some of its properties. Section 4 contains the main technical results of this paper. Sections 5 and 6 contain applications of our results to privacy amplification and to bounding random coding exponents.

2 Vector-valued norms

For a finite set $\mathcal X$ let $\ell(\mathcal X)$ be the vector space of functions $f:\mathcal X\to\mathbb C$. For any $f\in\ell(\mathcal X)$ and $p\geq 1$ we define

$$\|f\|_p = \Big(\sum_{x\in\mathcal X} |f(x)|^p\Big)^{1/p}.$$

This quantity for $p\geq 1$ satisfies the triangle inequality and turns $\ell(\mathcal X)$ into a normed space. The dual of the $p$-norm is the $p'$-norm, where $p'$ is the Hölder conjugate of $p$ given by

$$\frac{1}{p} + \frac{1}{p'} = 1. \qquad (2)$$

More generally, for any $p,q,r\geq 1$ with $\frac1r = \frac1p + \frac1q$ and any $f,g\in\ell(\mathcal X)$ we have Hölder's inequality

$$\|fg\|_r \leq \|f\|_p\,\|g\|_q,$$

where $(fg)(x) = f(x)g(x)$ denotes the pointwise product.
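
Since the displayed formulas above had to be reconstructed, the following short numerical sketch (my own illustration, not code from the paper; all variable names are hypothetical) spells out the $L_p$-norm, the Hölder conjugate of (2), and a spot-check of the Hölder inequality just stated.

```python
# Minimal sketch of the standard commutative L_p-norm and Hölder inequality. Not from the paper.
import numpy as np

def lp_norm(f, p):
    """L_p-norm of a function on a finite set, represented as a vector."""
    return (np.abs(f) ** p).sum() ** (1.0 / p)

def holder_conjugate(p):
    """p' with 1/p + 1/p' = 1 (p' is infinite when p = 1)."""
    return np.inf if p == 1 else p / (p - 1.0)

rng = np.random.default_rng(0)
f, g = rng.normal(size=8), rng.normal(size=8)
p, q = 3.0, 1.5
r = 1.0 / (1.0 / p + 1.0 / q)          # 1/r = 1/p + 1/q
# Hölder's inequality: ||f g||_r <= ||f||_p ||g||_q for the pointwise product f g.
assert lp_norm(f * g, r) <= lp_norm(f, p) * lp_norm(g, q) + 1e-12
print(holder_conjugate(p))             # 1.5, since 1/3 + 2/3 = 1
```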

Suppose that $\mathcal Y$ is another finite set and we equip the vector space $\ell(\mathcal Y)$ with the $q$-norm. The question is how we can naturally define a $(p,q)$-norm on the space $\ell(\mathcal X\times\mathcal Y)$ that is compatible with the norms of the individual spaces $\ell(\mathcal X)$ and $\ell(\mathcal Y)$. By compatible we mean that if $f = g\otimes h$ with $g\in\ell(\mathcal X)$ and $h\in\ell(\mathcal Y)$ (i.e., $f(x,y)=g(x)h(y)$) then

$$\|f\|_{(p,q)} = \|g\|_p\,\|h\|_q. \qquad (3)$$

To this end, any vector $f\in\ell(\mathcal X\times\mathcal Y)$ can be thought of as a collection of vectors $f_x\in\ell(\mathcal Y)$ for $x\in\mathcal X$, where $f_x(y) = f(x,y)$. Let us denote $c_f(x) = \|f_x\|_q$. Then we may define

$$\|f\|_{(p,q)} = \|c_f\|_p = \Big(\sum_{x\in\mathcal X} \|f_x\|_q^p\Big)^{1/p}.$$

This definition of the $(p,q)$-norm satisfies (3). Moreover, when $p=q$, this $(p,q)$-norm coincides with the usual $p$-norm on $\ell(\mathcal X\times\mathcal Y)$. Finally, it is not hard to verify that the $(p,q)$-norm, for $p,q\geq 1$, is indeed a norm and satisfies the triangle inequality.
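
To make the compatibility requirement (3) concrete, here is a small numerical sketch (mine, not the paper's) of the commutative $(p,q)$-norm as reconstructed above: take the $q$-norm of each row $f_x$ and then the $p$-norm of the resulting vector of row norms.

```python
# Sketch of the commutative (p,q)-norm, assuming the convention that the outer index x
# carries the p-norm and the inner index y carries the q-norm. Not from the paper.
import numpy as np

def lp_norm(v, p):
    return (np.abs(v) ** p).sum() ** (1.0 / p)

def mixed_norm(F, p, q):
    """(p,q)-norm of an |X| x |Y| array F: p-norm over x of the q-norms of the rows F[x, :]."""
    row_norms = (np.abs(F) ** q).sum(axis=1) ** (1.0 / q)
    return lp_norm(row_norms, p)

rng = np.random.default_rng(1)
g, h = rng.random(5), rng.random(4)
F = np.outer(g, h)                     # a product vector f(x, y) = g(x) h(y)
p, q = 1.0, 1.7
# Compatibility property (3): for product vectors the mixed norm factorizes.
assert np.isclose(mixed_norm(F, p, q), lp_norm(g, p) * lp_norm(h, q))
# For p = q the mixed norm collapses to the usual p-norm on X x Y.
assert np.isclose(mixed_norm(F, p, p), lp_norm(F.ravel(), p))
```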

The $p$-norm can also be defined in the non-commutative case. Suppose that $\mathcal H$ is a Hilbert space of finite dimension $d$. Let $\mathcal B(\mathcal H)$ be the space of linear operators acting on $\mathcal H$. Again we can define

$$\|M\|_p = \big(\mathrm{tr}\,|M|^p\big)^{1/p},$$

where $|M| = \sqrt{M^\dagger M}$, and $M^\dagger$ is the adjoint of $M$. For $p\geq 1$ this equips $\mathcal B(\mathcal H)$ with a norm, called the Schatten $p$-norm, that satisfies the triangle inequality. Hölder's inequality is also satisfied for Schatten norms [13]: if $\frac1r = \frac1p + \frac1q$ with $p,q,r\geq 1$, then for all $M,N\in\mathcal B(\mathcal H)$ we have

$$\|MN\|_r \leq \|M\|_p\,\|N\|_q. \qquad (4)$$
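
The next sketch (my illustration, with arbitrary example matrices) computes the Schatten $p$-norm from singular values and spot-checks the matrix Hölder inequality (4).

```python
# Schatten p-norm ||M||_p = (tr |M|^p)^(1/p), i.e., the p-norm of the singular values,
# with a numerical check of Hölder's inequality (4). Illustration only.
import numpy as np

def schatten_norm(M, p):
    return (np.linalg.svd(M, compute_uv=False) ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(2)
d = 6
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
N = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
p, q = 4.0, 2.0
r = 1.0 / (1.0 / p + 1.0 / q)
# ||M N||_r <= ||M||_p ||N||_q whenever 1/r = 1/p + 1/q.
assert schatten_norm(M @ N, r) <= schatten_norm(M, p) * schatten_norm(N, q) + 1e-9
```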

Our notation in the non-commutative case can be made compatible with the commutative case. By abuse of notation, an element $f\in\ell(\mathcal X)$ can be thought of as a diagonal matrix of the form

$$D_f = \sum_{x\in\mathcal X} f(x)\,|x\rangle\langle x|,$$

acting on a Hilbert space with orthonormal basis $\{|x\rangle:\, x\in\mathcal X\}$. Therefore, $\ell(\mathcal X)$ can be thought of as a subspace of $\mathcal B(\mathcal H)$. We also have $\|f\|_p = \|D_f\|_p$, i.e., the commutative $p$-norm of $f$ equals the Schatten $p$-norm of the associated diagonal matrix.
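
As a quick check of this embedding (again my own illustration): the Schatten $p$-norm of the diagonal matrix associated with a function $f$ equals the commutative $p$-norm of $f$.

```python
# Consistency of the diagonal embedding: ||f||_p (commutative) equals ||diag(f)||_p (Schatten).
import numpy as np

def lp_norm(v, p):
    return (np.abs(v) ** p).sum() ** (1.0 / p)

def schatten_norm(M, p):
    return (np.linalg.svd(M, compute_uv=False) ** p).sum() ** (1.0 / p)

f = np.array([0.3, -1.2, 0.8, 2.0])
for p in (1.0, 1.5, 2.0):
    assert np.isclose(schatten_norm(np.diag(f), p), lp_norm(f, p))
```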

Now the question is how we can define the $(p,q)$-norm in the non-commutative case. Let us start with the easy semi-commutative case in which the first subsystem is classical. Then, following the above notation, such an operator $X$ can be written as

$$X = \sum_{x\in\mathcal X} |x\rangle\langle x|\otimes M_x,$$

with $M_x\in\mathcal B(\mathcal H)$. Similar to the fully commutative case we can define

(5)
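
The right-hand side of (5) is not available here. A plausible reading, which I am assuming rather than quoting from the paper, is that for a classical-quantum operator the $(1,\alpha)$-norm is the sum over the classical index of the Schatten $\alpha$-norms of the blocks; the sketch below implements that assumption.

```python
# Hedged sketch of a plausible form of (5), assuming ||sum_x |x><x| (x) M_x||_(1,alpha)
# = sum_x ||M_x||_alpha (the classical index carries the 1-norm). Notation is mine.
import numpy as np

def schatten_norm(M, p):
    return (np.linalg.svd(M, compute_uv=False) ** p).sum() ** (1.0 / p)

def cq_one_alpha_norm(blocks, alpha):
    """Assumed (1,alpha)-norm of a block-diagonal classical-quantum operator."""
    return sum(schatten_norm(M, alpha) for M in blocks)

rng = np.random.default_rng(3)
blocks = [rng.normal(size=(3, 3)) for _ in range(4)]   # one block M_x per classical symbol x
print(cq_one_alpha_norm(blocks, alpha=1.5))
```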

Now let us turn to the fully non-commutative case. In this case, the definition of the $(p,q)$-norm is not easy and is derived from interpolation theory [14]. Here, we present an equivalent definition provided in [15] (see also [16]). We also focus on the case of the $(1,\alpha)$-norm that we need in this paper. In this case, since $\frac11 = \frac{1}{2\alpha'} + \frac1\alpha + \frac{1}{2\alpha'}$, every bipartite operator admits a suitable factorization, and we define

(6)

where the infimum is taken over all density matrices (a density matrix is a positive semidefinite operator with trace one) and $I$ is the identity operator. In the following, for simplicity we sometimes suppress the identity operators in expressions of this form.

When $\alpha\geq 1$, the $(1,\alpha)$-norm satisfies the triangle inequality and is a norm. Some remarks are in order.
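
The right-hand side of (6) is also not available here. Based on the surrounding description (an infimum over two density matrices, suppressed identity operators, and the exponents suggested by $1 = \frac{1}{2\alpha'} + \frac1\alpha + \frac{1}{2\alpha'}$), a plausible reconstruction, which I am assuming rather than quoting, is $\|X_{AB}\|_{(1,\alpha)} = \inf_{\sigma,\tau}\big\|(\sigma^{-1/(2\alpha')}\otimes I_B)\,X_{AB}\,(\tau^{-1/(2\alpha')}\otimes I_B)\big\|_\alpha$, with the infimum over density matrices $\sigma,\tau$ on the first subsystem. The sketch below only evaluates this objective at a fixed $\sigma,\tau$, which therefore yields an upper bound on the assumed norm; by Remark 5, for positive semidefinite operators one may take $\sigma=\tau$.

```python
# Hedged sketch of a plausible reconstruction of (6). Which subsystem the infimum acts on
# and the exact exponent convention are my assumptions, not quotes from the paper.
import numpy as np

def mat_power(A, s):
    """A**s for a positive definite Hermitian matrix A, via its eigendecomposition."""
    w, U = np.linalg.eigh(A)
    return (U * w ** s) @ U.conj().T

def schatten_norm(M, p):
    return (np.linalg.svd(M, compute_uv=False) ** p).sum() ** (1.0 / p)

def one_alpha_objective(X, sigma, tau, alpha, dB):
    """||(sigma^{-1/(2a')} (x) I_B) X (tau^{-1/(2a')} (x) I_B)||_alpha for fixed sigma, tau.
    The assumed (1,alpha)-norm is the infimum of this quantity over density matrices."""
    a_conj = alpha / (alpha - 1.0)                     # Hölder conjugate alpha'
    L = np.kron(mat_power(sigma, -1.0 / (2 * a_conj)), np.eye(dB))
    R = np.kron(mat_power(tau, -1.0 / (2 * a_conj)), np.eye(dB))
    return schatten_norm(L @ X @ R, alpha)

# Evaluate at the maximally mixed sigma = tau, giving an upper bound on the assumed norm.
rng = np.random.default_rng(4)
dA, dB, alpha = 2, 3, 1.5
G = rng.normal(size=(dA * dB, dA * dB))
X = (G + G.T) / 2                                      # a random Hermitian X_{AB}
sigma = np.eye(dA) / dA
print(one_alpha_objective(X, sigma, sigma, alpha, dB))
```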

Remark 1.

As in the commutative case, the order of subsystems in the above definition is important, i.e., the norms obtained by exchanging the roles of the two subsystems are in general different.

Remark 2.

From Hölder’s inequality (4), one can derive that if , then

Remark 3.

When the first subsystem is classical, the above definition of the $(1,\alpha)$-norm coincides with that of (5). This can be shown by optimizing the choices of density matrices in (6), which in this case can be taken to be diagonal.

Remark 4.

When the two indices coincide, the $(p,p)$-norm coincides with the usual Schatten $p$-norm [14, 15]:

Remark 5.

When the operator is positive semidefinite, in (6) we may assume that the two density matrices in the infimum are equal; see [16]. That is, when the operator is positive semidefinite we have

where

(7)

We will compare our measure of correlation with the Rényi mutual information, which interestingly can also be written in terms of $(1,\alpha)$-norms. The sandwiched $\alpha$-Rényi relative entropy is defined by (all logarithms in this paper are in base two)

where $\alpha'$ is the Hölder conjugate of $\alpha$ given by (2). The $\alpha$-Rényi mutual information (Sibson's proposal) is given by (see [17] for different definitions and properties of Rényi mutual information)

Using the definition of the $(1,\alpha)$-norm and Remark 5 we find that
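
The displays in this passage are not available here. As a reference point, the following sketch (mine, not from the paper) implements the standard sandwiched Rényi relative entropy from the literature, $D_\alpha(\rho\|\sigma) = \frac{1}{\alpha-1}\log\mathrm{tr}\big[\big(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\big)^{\alpha}\big]$, which is presumably what the missing display contains; it also checks that for commuting (diagonal) states the quantity reduces to the classical Rényi divergence.

```python
# Standard sandwiched Rényi relative entropy (literature definition, base-2 logarithm),
# with a check that for commuting states it reduces to the classical Rényi divergence.
import numpy as np

def mat_power(A, s):
    w, U = np.linalg.eigh(A)
    return (U * w ** s) @ U.conj().T

def sandwiched_renyi(rho, sigma, alpha):
    """D_alpha(rho || sigma) = 1/(alpha - 1) * log2 tr[(sigma^{(1-a)/2a} rho sigma^{(1-a)/2a})^a]."""
    s = (1.0 - alpha) / (2.0 * alpha)
    inner = mat_power(sigma, s) @ rho @ mat_power(sigma, s)
    return np.log2(np.trace(mat_power(inner, alpha)).real) / (alpha - 1.0)

def classical_renyi(p, q, alpha):
    return np.log2((p ** alpha * q ** (1.0 - alpha)).sum()) / (alpha - 1.0)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.2, 0.6])
alpha = 1.8
assert np.isclose(sandwiched_renyi(np.diag(p), np.diag(q), alpha),
                  classical_renyi(p, q, alpha))
```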

In particular, for classical random variables $X$ and $Y$ with joint distribution $p_{XY}$ we have

(8)
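
The right-hand side of (8) is not available here. For classical variables, Sibson's $\alpha$-mutual information has the well-known closed form $I_\alpha(X;Y) = \frac{\alpha}{\alpha-1}\log\sum_y\big(\sum_x p(x)\,p(y|x)^\alpha\big)^{1/\alpha}$, which is plausibly the content of (8); the sketch below (my illustration, with hypothetical variable names) computes it and verifies numerically that it equals $\min_{q_Y} D_\alpha(p_{XY}\,\|\,p_X\times q_Y)$.

```python
# Illustration of the classical Sibson alpha-mutual information (plausibly the content of (8)):
#   I_alpha(X;Y) = alpha/(alpha-1) * log2 sum_y ( sum_x p(x) p(y|x)^alpha )^(1/alpha).
# The check confirms it equals the minimum over q_Y of D_alpha(p_XY || p_X x q_Y).
import numpy as np

def renyi_div(p, q, alpha):
    return np.log2((p ** alpha * q ** (1.0 - alpha)).sum()) / (alpha - 1.0)

def sibson_mi(p_xy, alpha):
    p_x = p_xy.sum(axis=1, keepdims=True)
    c_y = (p_x ** (1.0 - alpha) * p_xy ** alpha).sum(axis=0)   # sum_x p(x) p(y|x)^alpha
    return alpha / (alpha - 1.0) * np.log2((c_y ** (1.0 / alpha)).sum())

rng = np.random.default_rng(5)
p_xy = rng.random((3, 4))
p_xy /= p_xy.sum()
alpha = 1.5
p_x = p_xy.sum(axis=1, keepdims=True)
c_y = (p_x ** (1.0 - alpha) * p_xy ** alpha).sum(axis=0)
q_opt = c_y ** (1.0 / alpha)
q_opt /= q_opt.sum()                                           # the minimizing output distribution
val_opt = renyi_div(p_xy.ravel(), np.outer(p_x, q_opt).ravel(), alpha)
assert np.isclose(sibson_mi(p_xy, alpha), val_opt)
for _ in range(5):                                             # random q_Y never beat the optimizer
    q = rng.random(4)
    q /= q.sum()
    assert renyi_div(p_xy.ravel(), np.outer(p_x, q).ravel(), alpha) >= val_opt - 1e-10
```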

Finally, the $\alpha$-Rényi conditional entropy is defined by

(9)

We finish this section by stating a lemma about the monotonicity of the $(1,\alpha)$-norm.

Lemma 6.

For any and any density matrix the function is non-decreasing on .

Proof.

Let , and let be such that . Using Hölder’s inequality for arbitrary density matrices we have

Taking infimum over we obtain the desired result. ∎

2.1 Completely bounded norm

The completely bounded norm of a super-operator is defined by

where the supremum is taken over all auxiliary Hilbert spaces with arbitrary dimension and is the identity super-operator. In the above definition, we may replace with any , see [14]. That is, for any we have

(10)

We say that a super-operator between spaces with certain norms is a complete contraction if its completely bounded norm is at most one.

Lemma 7.

For any and we have

Proof.

First of all the swap super-operator is a complete contraction [14], i.e.,

Therefore, it suffices to show that

Equivalently we need to show that

Using (10) we have

Next since is completely positive and we have [16]

We are done. ∎

3 A new measure of correlation

In this section, we define our measure of correlation and study some of its properties.

Definition 1.

Let $\rho_{AB}$ be an arbitrary bipartite density matrix. For any $\alpha$ we define

(11)
(12)

where $\rho_A$ and $\rho_B$ are the marginal states on the $A$ and $B$ subsystems, respectively.

As will be seen below, the first quantity is a measure of correlation, while the second is a related quantity that may be thought of as a conditional entropy.

By Remark 4, when the two indices of the vector-valued norm coincide, these quantities can be expressed in terms of the corresponding Schatten norm:

(13)
(14)

In the classical case, when $p_{XY}$ is a joint probability distribution, we have

and

As an immediate property of the above definitions, both quantities are non-negative. Moreover, since they are defined in terms of norms, each vanishes exactly when the operator inside the norm vanishes; in particular, the correlation measure is zero if and only if $\rho_{AB} = \rho_A\otimes\rho_B$.

Proposition 8.

For any the functions

and

are non-decreasing. In particular, for any we have

Proof.

For the monotonicity of , in Lemma 6 put and . For the other monotonicity let and . ∎

We now prove the main property of and , namely their monotonicity under local operations.

Theorem 9 (Monotonicity under local operations).
  • For any and all CPTP maps and we have

    where .

  • For any and any CPTP map

    where and is the identity super-operator.

Proof.

For (i) we compute

where denotes the super-operator norm:

Now using equation (3.5) and Theorem 13 of [16] we have

On the other hand, using Lemma 9 of [18] (see also [19]) we have

Moreover, by Lemma 5 of [16] we have

since the map is CPTP. Combining these bounds, we conclude the desired inequality.

The proof of (ii) is similar, so we skip it. ∎

We now state the relation between and Rényi information measures.

Proposition 10.

For any bipartite density matrix we have

where is the Hölder conjugate of . For we have

where .

Proof.

By the triangle inequality we have

Moreover, by Remark 2 we have

These give the first inequality. The proof of the second inequality is similar. ∎

Theorem 11.

Let $\rho_{ABC}$ be a tripartite density matrix. Then the following hold:

  • For any we have

  • Assume that . Then for any we have

    Moreover, if is classical (and ) then

    where we define

Proof.

The proof of (ii) is immediate once we have (i) since if then

Moreover, when is classical and

with , we have

So we only need to prove (i).

Define by

We claim that

(15)

Since vector-valued $L_p$-spaces form an interpolation family [14], by the Riesz-Thorin theorem (see Appendix A) it suffices to prove this for the two endpoint values of the parameter. For the first endpoint, by the triangle inequality we have

where the second inequality comes from a fact that is easy to verify directly. We now prove the inequality for the other endpoint. We compute

Then (15) holds for all and for any we have

Letting

in the above inequality we obtain the desired result. ∎

The next theorem gives a “weak converse” of the above inequalities.

Theorem 12.

For every tripartite density matrix and every value of the parameter, the following hold:

  • .

  • If then