# A Correlation Measure Based on Vector-Valued L_p-Norms

In this paper, we introduce a new measure of correlation for bipartite quantum states. This measure depends on a parameter α, and is defined in terms of vector-valued L_p-norms. The measure is within a constant of the exponential of α-Rényi mutual information, and reduces to the trace norm (total variation distance) for α=1. We will prove some decoupling type theorems in terms of this measure of correlation, and present some applications in privacy amplification as well as in bounding the random coding exponents. In particular, we establish a bound on the secrecy exponent of the wiretap channel (under the total variation metric) in terms of the α-Rényi mutual information according to Csiszár's proposal.


## 1 Introduction

In this paper, for any $\alpha\in[1,\infty)$ we introduce a new measure of correlation by

$$V_\alpha(A;B)=\Big\|\big(I_B\otimes\rho_A^{-(\alpha-1)/(2\alpha)}\big)\,\rho_{BA}\,\big(I_B\otimes\rho_A^{-(\alpha-1)/(2\alpha)}\big)-\rho_B\otimes\rho_A^{1/\alpha}\Big\|_{(1,\alpha)}, \tag{1}$$

where $\|\cdot\|_{(1,\alpha)}$ denotes a certain norm which for $\alpha=1$ reduces to the trace norm. Since Rényi mutual information (according to Sibson’s proposal) can also be expressed in terms of the $(1,\alpha)$-norm, our measure of correlation is also related to Rényi mutual information.

The main motivation for introducing these measures of correlation, particularly for certain values of $\alpha$, is their applications in decoupling theorems. The point is that the average of $V_\alpha$, when evaluated on the outcome of a certain random CPTP map applied to a bipartite quantum state, can be bounded in terms of $V_\alpha$ of the original state up to a constant factor. Thus our measures of correlation can be used to prove decoupling-type theorems in information theory.

Decoupling theorems have already found several applications in information theory. Most achievability results in quantum information theory are based on the phenomenon of decoupling (see [1] and references therein). Also, in classical information theory the OSRB method of [2] provides a similar decoupling-type tool for proving achievability results. The advantage of our decoupling theorem based on the measure $V_\alpha$, compared to previous ones, is that it works for all values of $\alpha$. Given the relation between $V_\alpha$ and Rényi mutual information mentioned above, the parameters appearing in our decoupling theorem are related to $\alpha$-Rényi mutual information, which for $\alpha=1$ reduces to Shannon’s mutual information. Therefore, we can use our decoupling theorems not only for proving achievability results but also for proving interesting bounds on the random coding exponents. We demonstrate this application via the examples of entanglement generation over a noisy quantum communication channel, and secure communication over a (classical) wiretap channel. In particular, we show a bound on the secrecy exponent of random coding over a wiretap channel in terms of Rényi mutual information according to Csiszár’s proposal.

Another application of our new measures of correlation is in secrecy. To measure the security of a communication system, one has to quantify the amount of information leaked to an eavesdropper. While the common security metrics for measuring the leakage are mutual information (see, e.g., [3]) and the total variation distance [2, 4], a few recent works motivate and define other measures of correlation to quantify leakage [5, 6, 7, 8, 9, 10, 11, 12]. Herein, we suggest the use of our metric instead of mutual information because it is a stronger metric and has a better rate-security tradeoff curve. To explain the rate-security tradeoff, consider a secure transmission protocol over a communication channel, achieving a given communication rate with a certified bound on the leakage according to the mutual information metric. Now, if the transmitter obtains a classified message for which this level of leakage is no longer acceptable, it can sacrifice communication rate for improved transmission security. We show that the rate-security tradeoff with the mutual information metric is far worse than that of our metric. We will discuss this fact in more detail via the problem of privacy amplification.

The definition of our measure of correlation is based on the theory of vector-valued $L_p$ spaces. These spaces are generalizations of the usual $L_p$ spaces and are defined via the theory of complex interpolation. The proofs of our main theorems then rely heavily on interpolation theory. In particular, we use the Riesz-Thorin interpolation theorem several times in order to establish an inequality for all $1\le\alpha\le2$ by interpolating between the endpoints $\alpha=1$ and $\alpha=2$.

In the following section, we review some notations and introduce vector-valued norms. Section 3 introduces our new measure of correlation and presents some of its properties. Section 4 contains the main technical results of this paper. Section 5 and Section 6 contain some applications of our results in privacy amplification as well as in bounding the random coding exponents.

## 2 Vector-valued Lp norms

For a finite set $\mathcal A$ let $L(\mathcal A)$ be the vector space of functions $f:\mathcal A\to\mathbb C$. For any $p\ge1$ and $f\in L(\mathcal A)$ we define

$$\|f\|_p:=\Big(\sum_{a\in\mathcal A}|f(a)|^p\Big)^{\frac1p}.$$

This quantity for $p\ge1$ satisfies the triangle inequality and turns $L(\mathcal A)$ into a normed space. The dual of the $p$-norm is the $p'$-norm, where $p'$ is the Hölder conjugate of $p$ given by

$$\frac1p+\frac{1}{p'}=1. \tag{2}$$

More generally, for any $q,r\ge1$ with $\frac1p=\frac1q+\frac1r$ and any $f,g\in L(\mathcal A)$ we have

$$\|fg\|_p\le\|f\|_q\cdot\|g\|_r,$$

where $fg$ denotes the pointwise product, $(fg)(a)=f(a)g(a)$.
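These norms and Hölder's inequality are straightforward to check numerically. The following is an illustrative sketch in NumPy (the function name `lp_norm` is ours, not the paper's):

```python
import numpy as np

def lp_norm(f, p):
    """p-norm ||f||_p = (sum_a |f(a)|^p)^(1/p) of a vector over a finite set."""
    return float(np.sum(np.abs(f) ** p) ** (1.0 / p))

rng = np.random.default_rng(0)
f = rng.standard_normal(8)
g = rng.standard_normal(8)

# Hoelder's inequality: ||fg||_p <= ||f||_q * ||g||_r whenever 1/p = 1/q + 1/r.
q, r = 3.0, 1.5
p = 1.0 / (1.0 / q + 1.0 / r)
assert lp_norm(f * g, p) <= lp_norm(f, q) * lp_norm(g, r) + 1e-12
```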

Suppose that $\mathcal B$ is another finite set and we equip the vector space $L(\mathcal B)$ with the $q$-norm. The question is how we can naturally define a $(p,q)$-norm on the space $L(\mathcal A\times\mathcal B)$ that is compatible with the norms of the individual spaces $L(\mathcal A)$ and $L(\mathcal B)$. By compatible we mean that if $h=f\otimes g$ with $f\in L(\mathcal A)$ and $g\in L(\mathcal B)$ (i.e., $h(a,b)=f(a)g(b)$) then

$$\|f\otimes g\|_{(p,q)}=\|f\|_p\cdot\|g\|_q. \tag{3}$$

To this end, any vector $h\in L(\mathcal A\times\mathcal B)$ can be thought of as a collection of vectors $h_a\in L(\mathcal B)$ for $a\in\mathcal A$, where $h_a(b)=h(a,b)$. Let us denote $t(a)=\|h_a\|_q$. Then we may define

$$\|h\|_{(p,q)}:=\|t\|_p=\Big(\sum_a\|h_a\|_q^p\Big)^{1/p}.$$

This definition of the $(p,q)$-norm satisfies (3). Moreover, when $p=q$, the $(p,q)$-norm coincides with the usual $p$-norm on $L(\mathcal A\times\mathcal B)$. Finally, it is not hard to verify that the $(p,q)$-norm, for $p,q\ge1$, is indeed a norm and satisfies the triangle inequality.
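The construction above can be sketched numerically; this snippet (with hypothetical helper names) checks the compatibility condition (3) and the $p=q$ case:

```python
import numpy as np

def lp_norm(f, p):
    return float(np.sum(np.abs(f) ** p) ** (1.0 / p))

def lpq_norm(h, p, q):
    """(p,q)-norm of h on A x B: the p-norm over a of the q-norms of the rows h_a."""
    t = np.array([lp_norm(row, q) for row in h])  # t(a) = ||h_a||_q
    return lp_norm(t, p)

rng = np.random.default_rng(1)
f = rng.standard_normal(4)  # vector on A
g = rng.standard_normal(5)  # vector on B
h = np.outer(f, g)          # h = f (x) g, i.e. h(a, b) = f(a) g(b)

# Compatibility (3): ||f (x) g||_(p,q) = ||f||_p * ||g||_q.
assert abs(lpq_norm(h, 1.0, 2.5) - lp_norm(f, 1.0) * lp_norm(g, 2.5)) < 1e-10
# For p = q the (p,q)-norm is the usual p-norm on A x B.
assert abs(lpq_norm(h, 2.0, 2.0) - lp_norm(h.ravel(), 2.0)) < 1e-10
```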

The $p$-norm can also be defined in the non-commutative case. Suppose that $\mathcal H$ is a Hilbert space of finite dimension. Let $\mathcal L(\mathcal H)$ be the space of linear operators acting on $\mathcal H$. Again we can define

$$\|M\|_p=\big(\mathrm{tr}(|M|^p)\big)^{\frac1p},$$

where $|M|=\sqrt{M^\dagger M}$ and $M^\dagger$ is the adjoint of $M$. For $p\ge1$ this equips $\mathcal L(\mathcal H)$ with a norm, called the Schatten $p$-norm, that satisfies the triangle inequality. Hölder’s inequality is also satisfied for Schatten norms [13]: if $q,r\ge1$ with $\frac1p=\frac1q+\frac1r$, then for $M,N\in\mathcal L(\mathcal H)$ we have

$$\|MN\|_p\le\|M\|_q\cdot\|N\|_r. \tag{4}$$

Our notation in the non-commutative case can be made compatible with the commutative case. By abuse of notation, an element $f\in L(\mathcal A)$ can be thought of as a diagonal matrix of the form

$$f_A=\sum_a f(a)\,|a\rangle\langle a|,$$

acting on the Hilbert space $\mathcal H_A$ with the orthonormal basis $\{|a\rangle:\,a\in\mathcal A\}$. Therefore, $L(\mathcal A)$ can be thought of as a subspace of $\mathcal L(\mathcal H_A)$. We also have

$$\|f_A\|_p=\big(\mathrm{tr}(|f_A|^p)\big)^{\frac1p}=\Big(\sum_a|f(a)|^p\Big)^{\frac1p}.$$
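A minimal numerical sketch of the Schatten $p$-norm, checking that it reduces to the commutative $p$-norm on diagonal matrices and satisfies Hölder's inequality (4) (function names are ours):

```python
import numpy as np

def schatten_norm(M, p):
    """||M||_p = (tr |M|^p)^(1/p); the eigenvalues of |M| are the singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))

rng = np.random.default_rng(2)
f = rng.standard_normal(4)
# On a diagonal matrix f_A the Schatten p-norm reduces to the commutative p-norm.
assert abs(schatten_norm(np.diag(f), 3.0)
           - float(np.sum(np.abs(f) ** 3.0) ** (1.0 / 3.0))) < 1e-10

# Hoelder's inequality (4): ||MN||_p <= ||M||_q * ||N||_r with 1/p = 1/q + 1/r.
M = rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4))
assert schatten_norm(M @ N, 2.0) <= schatten_norm(M, 4.0) * schatten_norm(N, 4.0) + 1e-9
```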

Now the question is how we can define the $(p,q)$-norm in the non-commutative case. Let us start with the easy case of $M_{AB}\in L(\mathcal A)\otimes\mathcal L(\mathcal H_B)$, i.e., when the $A$ subsystem is classical. Then, following the above notation, $M_{AB}$ can be written as

$$M_{AB}=\sum_a|a\rangle\langle a|\otimes M_a,$$

with $M_a\in\mathcal L(\mathcal H_B)$. Similarly to the fully commutative case we can define

$$\|M_{AB}\|_{(p,q)}=\Big(\sum_a\|M_a\|_q^p\Big)^{1/p}. \tag{5}$$
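The semi-classical $(p,q)$-norm (5) can be sketched as follows; as a sanity check, for $p=q$ it agrees with the Schatten $p$-norm of the block-diagonal matrix $M_{AB}$ (a hypothetical helper, using real blocks for simplicity):

```python
import numpy as np

def schatten_norm(M, p):
    s = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))

def semiclassical_pq_norm(blocks, p, q):
    """(p,q)-norm (5) of M_AB = sum_a |a><a| (x) M_a: p-norm of the vector of ||M_a||_q."""
    t = np.array([schatten_norm(Ma, q) for Ma in blocks])
    return float(np.sum(t ** p) ** (1.0 / p))

rng = np.random.default_rng(3)
d = 3
blocks = [rng.standard_normal((d, d)) for _ in range(4)]

# For p = q, (5) reduces to the Schatten p-norm of the block-diagonal matrix M_AB.
M = np.zeros((4 * d, 4 * d))
for a, Ma in enumerate(blocks):
    M[a * d:(a + 1) * d, a * d:(a + 1) * d] = Ma
assert abs(semiclassical_pq_norm(blocks, 2.0, 2.0) - schatten_norm(M, 2.0)) < 1e-10
```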

Now let us turn to the fully non-commutative case. In this case, the definition of the $(p,q)$-norm is not easy and is derived from interpolation theory [14]. Here, we present an equivalent definition provided in [15] (see also [16]). We also focus on the case of $p\le q$ that we need in this paper. In this case, since $p\le q$ there exists $r$ such that $\frac1p=\frac1q+\frac1r$. Then for any $M_{AB}\in\mathcal L(\mathcal H_A\otimes\mathcal H_B)$ we define

$$\|M_{AB}\|_{(p,q)}=\inf_{\sigma_A,\tau_A}\Big\|\big(\sigma_A^{-\frac{1}{2r}}\otimes I_B\big)M_{AB}\big(\tau_A^{-\frac{1}{2r}}\otimes I_B\big)\Big\|_q, \tag{6}$$

where the infimum is taken over all density matrices $\sigma_A,\tau_A$ (a density matrix is a positive semidefinite operator with trace one) and $I_B$ is the identity operator. In the following, for simplicity, we sometimes suppress the identity operators in expressions of the form $\sigma_A^{-\frac{1}{2r}}\otimes I_B$ and write, e.g., $\sigma_A^{-\frac{1}{2r}}M_{AB}\,\tau_A^{-\frac{1}{2r}}$. Therefore,

$$\|M_{AB}\|_{(p,q)}=\inf_{\sigma_A,\tau_A}\big\|\sigma_A^{-\frac{1}{2r}}M_{AB}\,\tau_A^{-\frac{1}{2r}}\big\|_q.$$

When $1\le p\le q$, the $(p,q)$-norm satisfies the triangle inequality and is a norm. Some remarks are in order.

###### Remark 1.

As in the commutative case, the order of subsystems in the above definition is important, i.e., $\|M_{AB}\|_{(p,q)}$ and $\|M_{BA}\|_{(p,q)}$ are different.

###### Remark 2.

From Hölder’s inequality (4), one can derive that if $p\le q$, then

$$\|M_A\otimes M_B\|_{(p,q)}=\|M_A\|_p\,\|M_B\|_q.$$
###### Remark 3.

When $M_{AB}$ is of the semi-classical form (5), the above definition of the $(p,q)$-norm coincides with that of (5). This can be shown by optimizing the choices of $\sigma_A,\tau_A$ in (6), which can be taken to be diagonal.

###### Remark 4.

When $p=q$, the $(p,q)$-norm coincides with the usual Schatten $p$-norm [14, 15]:

$$\|M_{AB}\|_{(p,p)}=\|M_{AB}\|_p.$$
###### Remark 5.

When $M_{AB}$ is positive semidefinite, in (6) we may assume that $\sigma_A=\tau_A$; see [16]. That is, when $M_{AB}$ is positive semidefinite we have

$$\|M_{AB}\|_{(p,q)}=\inf_{\sigma_A}\big\|\sigma_A^{-\frac{1}{2r}}M_{AB}\,\sigma_A^{-\frac{1}{2r}}\big\|_q=\inf_{\sigma_A}\big\|\Gamma^{-1/r}_{\sigma_A}(M_{AB})\big\|_q,$$

where

$$\Gamma_\sigma(X)=\sigma^{\frac12}X\sigma^{\frac12}. \tag{7}$$

We will compare our measure of correlation with Rényi mutual information, which interestingly can also be written in terms of $(1,\alpha)$-norms. For $\alpha\ge1$ the sandwiched $\alpha$-Rényi relative entropy is defined by (all logarithms in this paper are in base two)

$$D_\alpha(\rho\|\sigma)=\alpha'\log\big\|\Gamma^{-1/\alpha'}_\sigma(\rho)\big\|_\alpha,$$

where $\alpha'$ is the Hölder conjugate of $\alpha$ given by (2). The $\alpha$-Rényi mutual information (Sibson’s proposal) of a bipartite state $\rho_{AB}$ is given by (see [17] for different definitions and properties of Rényi mutual information)

$$I_\alpha(A;B)=\inf_{\sigma_B}D_\alpha(\rho_{AB}\,\|\,\rho_A\otimes\sigma_B).$$

Using the definition of the $(1,\alpha)$-norm and Remark 5 we find that

$$I_\alpha(A;B)=\alpha'\log\big\|\Gamma^{-1/\alpha'}_{\rho_A}(\rho_{BA})\big\|_{(1,\alpha)}.$$

In particular, for classical random variables $A$ and $B$ with joint distribution $p(a,b)$ we have

$$I_\alpha(A;B)=\alpha'\log\Big(\sum_b\Big[\sum_a p(a)\,p(b|a)^\alpha\Big]^{1/\alpha}\Big). \tag{8}$$
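Formula (8) is easy to evaluate for a given joint distribution. The sketch below (our own helper, assuming $\alpha>1$ and full-support marginals) checks two expected values: $I_\alpha=0$ for independent variables, and $I_\alpha=1$ bit for perfectly correlated uniform bits:

```python
import numpy as np

def sibson_mi(p_joint, alpha):
    """I_alpha(A;B) of (8) for a joint pmf p(a,b); logarithms in base 2, alpha > 1."""
    ap = alpha / (alpha - 1.0)                 # Hoelder conjugate alpha'
    p_a = p_joint.sum(axis=1)                  # marginal p(a)
    p_b_given_a = p_joint / p_a[:, None]       # conditional p(b|a)
    inner = np.sum(p_a[:, None] * p_b_given_a ** alpha, axis=0)  # sum_a p(a) p(b|a)^alpha
    return float(ap * np.log2(np.sum(inner ** (1.0 / alpha))))

# Independent variables carry zero mutual information for every alpha.
p_ind = np.outer([0.3, 0.7], [0.5, 0.25, 0.25])
assert abs(sibson_mi(p_ind, 2.0)) < 1e-12
# Perfectly correlated uniform bits carry exactly one bit.
p_corr = np.array([[0.5, 0.0], [0.0, 0.5]])
assert abs(sibson_mi(p_corr, 2.0) - 1.0) < 1e-12
```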

Finally, the $\alpha$-Rényi conditional entropy is defined by

$$H_\alpha(A|B)=-\inf_{\sigma_B}D_\alpha(\rho_{AB}\,\|\,I_A\otimes\sigma_B)=-\alpha'\log\|\rho_{BA}\|_{(1,\alpha)}. \tag{9}$$

We finish this section by stating a lemma about the monotonicity of the $(1,\alpha)$-norm.

###### Lemma 6.

For any $M_{BA}$ and any density matrix $\xi_A$ the function $\alpha\mapsto\big\|\Gamma^{-1/\alpha'}_{\xi_A}(M_{BA})\big\|_{(1,\alpha)}$ is non-decreasing on $[1,\infty)$.

###### Proof.

Let $1\le\alpha\le\beta$, and let $\gamma$ be such that $\frac{1}{\alpha}=\frac{1}{\beta}+\frac{1}{\gamma}$. Using Hölder’s inequality, for arbitrary density matrices $\sigma_B,\tau_B$ we have

$$\begin{aligned}
\big\|\sigma_B^{-1/(2\alpha')}\,\Gamma^{-1/\alpha'}_{\xi_A}(M_{BA})\,\tau_B^{-1/(2\alpha')}\big\|_\alpha
&=\big\|(\sigma_B\otimes\xi_A)^{1/(2\gamma)}\,\sigma_B^{-1/(2\beta')}\,\Gamma^{-1/\beta'}_{\xi_A}(M_{BA})\,\tau_B^{-1/(2\beta')}\,(\tau_B\otimes\xi_A)^{1/(2\gamma)}\big\|_\alpha\\
&\le\big\|(\sigma_B\otimes\xi_A)^{1/(2\gamma)}\big\|_{2\gamma}\cdot\big\|\sigma_B^{-1/(2\beta')}\,\Gamma^{-1/\beta'}_{\xi_A}(M_{BA})\,\tau_B^{-1/(2\beta')}\big\|_\beta\cdot\big\|(\tau_B\otimes\xi_A)^{1/(2\gamma)}\big\|_{2\gamma}\\
&=\big\|\sigma_B^{-1/(2\beta')}\,\Gamma^{-1/\beta'}_{\xi_A}(M_{BA})\,\tau_B^{-1/(2\beta')}\big\|_\beta.
\end{aligned}$$

Taking the infimum over $\sigma_B,\tau_B$ we obtain the desired result. ∎

### 2.1 Completely bounded norm

The completely bounded norm of a super-operator $\Phi:\mathcal L(\mathcal H_A)\to\mathcal L(\mathcal H_{A'})$ is defined by

$$\|\Phi\|_{\mathrm{cb},p\to q}:=\sup_{d_C}\big\|I_C\otimes\Phi\big\|_{(p,p)\to(p,q)},$$

where the supremum is taken over all auxiliary Hilbert spaces $\mathcal H_C$ with arbitrary dimension $d_C$ and $I_C$ is the identity super-operator. In the above definition, we may replace the first index $p$ with any $t\ge1$, see [14]. That is, for any $t\ge1$ we have

$$\|\Phi\|_{\mathrm{cb},p\to q}=\sup_{d_C}\big\|I_C\otimes\Phi\big\|_{(t,p)\to(t,q)}. \tag{10}$$

We say that a super-operator between spaces with certain norms is a complete contraction if its completely bounded norm is at most $1$.

###### Lemma 7.

For any $M_{BCA}$ and $\alpha\ge1$ we have

$$\|M_{BCA}\|_{(1,1,\alpha)}\ge\|M_{BCA}\|_{(1,\alpha,\alpha)}.$$
###### Proof.

First of all, the swap super-operator that exchanges the $C$ and $A$ subsystems is a complete contraction [14], i.e.,

$$\|M_{BCA}\|_{(1,1,\alpha)}\ge\|M_{BAC}\|_{(1,\alpha,1)}.$$

Therefore, it suffices to show that

$$\|M_{BAC}\|_{(1,\alpha,1)}\ge\|M_{BCA}\|_{(1,\alpha,\alpha)}=\|M_{BAC}\|_{(1,\alpha,\alpha)}.$$

Equivalently we need to show that

$$\|I_{AC}\|_{\mathrm{cb},(\alpha,1)\to(\alpha,\alpha)}\le1.$$

Using (10) we have

$$\|I_{AC}\|_{\mathrm{cb},(\alpha,1)\to(\alpha,\alpha)}=\|I_C\|_{\mathrm{cb},1\to\alpha}.$$

Next, since $I_C$ is completely positive, we have [16]

$$\|I_C\|_{\mathrm{cb},1\to\alpha}=\|I_C\|_{1\to\alpha}=1.$$

We are done. ∎

## 3 A new measure of correlation

In this section, we define our measure of correlation and study some of its properties.

###### Definition 1.

Let $\rho_{AB}$ be an arbitrary bipartite density matrix. For any $\alpha\ge1$ we define (for $\alpha=1$ we have $\alpha'=\infty$, so that $\Gamma^{-1/\alpha'}_{\rho_A}$ is the identity map and $\rho_A^{1/\alpha}=\rho_A$)

$$V_\alpha(A;B):=\big\|\Gamma^{-1/\alpha'}_{\rho_A}(\rho_{BA})-\rho_B\otimes\rho_A^{1/\alpha}\big\|_{(1,\alpha)}, \tag{11}$$

$$W_\alpha(A|B):=\Big\|\rho_{BA}-\rho_B\otimes\frac{I_A}{d_A}\Big\|_{(1,\alpha)}, \tag{12}$$

where $d_A$ is the dimension of the $A$ subsystem, and $\rho_A$ and $\rho_B$ are the marginal states on the $A$ and $B$ subsystems, respectively.

As will be seen below, $V_\alpha(A;B)$ is a measure of correlation while $W_\alpha(A|B)$ is a related quantity that may be thought of as a conditional entropy.

By Remark 4, when $\alpha=1$, both $V_\alpha$ and $W_\alpha$ can be expressed in terms of the trace norm:

$$V_1(A;B)=\|\rho_{AB}-\rho_A\otimes\rho_B\|_1, \tag{13}$$

$$W_1(A|B)=\Big\|\rho_{BA}-\rho_B\otimes\frac{I_A}{d_A}\Big\|_1. \tag{14}$$

In the classical case, when $p(a,b)$ is a joint probability distribution, we have

$$V_\alpha(A;B)=\sum_b\Big(\sum_a p(a)\,\big|p(b|a)-p(b)\big|^\alpha\Big)^{1/\alpha},$$

and

$$W_\alpha(A|B)=\sum_b p(b)\Big(\sum_a\Big|p(a|b)-\frac{1}{|\mathcal A|}\Big|^\alpha\Big)^{1/\alpha}.$$
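The two classical expressions can be evaluated directly. A NumPy sketch (helper names ours) checking that $V_1$ recovers the total variation expression (13), and that $\alpha\mapsto V_\alpha$ is non-decreasing as stated in Proposition 8:

```python
import numpy as np

def v_alpha(p_joint, alpha):
    """Classical V_alpha(A;B) = sum_b ( sum_a p(a) |p(b|a) - p(b)|^alpha )^(1/alpha)."""
    p_a = p_joint.sum(axis=1)
    p_b = p_joint.sum(axis=0)
    p_b_given_a = p_joint / p_a[:, None]
    inner = np.sum(p_a[:, None] * np.abs(p_b_given_a - p_b[None, :]) ** alpha, axis=0)
    return float(np.sum(inner ** (1.0 / alpha)))

def w_alpha(p_joint, alpha):
    """Classical W_alpha(A|B) = sum_b p(b) ( sum_a |p(a|b) - 1/|A||^alpha )^(1/alpha)."""
    n_a = p_joint.shape[0]
    p_b = p_joint.sum(axis=0)
    p_a_given_b = p_joint / p_b[None, :]
    inner = np.sum(np.abs(p_a_given_b - 1.0 / n_a) ** alpha, axis=0)
    return float(np.sum(p_b * inner ** (1.0 / alpha)))

rng = np.random.default_rng(4)
p = rng.random((3, 4))
p /= p.sum()

# V_1 recovers the total variation expression (13).
p_prod = np.outer(p.sum(axis=1), p.sum(axis=0))
assert abs(v_alpha(p, 1.0) - np.abs(p - p_prod).sum()) < 1e-10
# alpha -> V_alpha is non-decreasing (Proposition 8).
assert v_alpha(p, 1.0) <= v_alpha(p, 1.5) + 1e-12
assert v_alpha(p, 1.5) <= v_alpha(p, 2.0) + 1e-12
assert w_alpha(p, 2.0) >= 0.0
```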

As an immediate property of the above definitions, both $V_\alpha(A;B)$ and $W_\alpha(A|B)$ are non-negative. Moreover, since they are defined in terms of a norm, we have $V_\alpha(A;B)=0$ if and only if $\rho_{AB}=\rho_A\otimes\rho_B$, and $W_\alpha(A|B)=0$ if and only if $\rho_{AB}=\frac{I_A}{d_A}\otimes\rho_B$.

###### Proposition 8.

For any bipartite density matrix $\rho_{AB}$ the functions

$$\alpha\mapsto V_\alpha(A;B),$$

and

$$\alpha\mapsto d_A^{\frac{1}{\alpha'}}\,W_\alpha(A|B),$$

are non-decreasing. In particular, for any $\alpha\ge1$ we have

$$V_\alpha(A;B)\ge\|\rho_{AB}-\rho_A\otimes\rho_B\|_1, \quad\text{and}\quad W_\alpha(A|B)\ge d_A^{-\frac{1}{\alpha'}}\Big\|\rho_{AB}-\frac{I_A}{d_A}\otimes\rho_B\Big\|_1.$$
###### Proof.

For the monotonicity of $V_\alpha$, in Lemma 6 put $\xi_A=\rho_A$ and $M_{BA}=\rho_{BA}-\rho_B\otimes\rho_A$. For the other monotonicity let $\xi_A=I_A/d_A$ and $M_{BA}=\rho_{BA}-\rho_B\otimes I_A/d_A$. ∎

We now prove the main property of and , namely their monotonicity under local operations.

###### Theorem 9 (Monotonicity under local operations).
• For any $\alpha\ge1$ and all CPTP maps $\Phi:\mathcal L(\mathcal H_A)\to\mathcal L(\mathcal H_X)$ and $\Psi:\mathcal L(\mathcal H_B)\to\mathcal L(\mathcal H_Y)$ we have

$$V_\alpha(X;Y)\le V_\alpha(A;B),$$

where $\rho_{XY}=(\Phi\otimes\Psi)(\rho_{AB})$.

• For any $\alpha\ge1$ and any CPTP map $\Psi:\mathcal L(\mathcal H_B)\to\mathcal L(\mathcal H_Y)$,

$$W_\alpha(A|Y)\le W_\alpha(A|B),$$

where $\rho_{AY}=(\mathcal I_A\otimes\Psi)(\rho_{AB})$ and $\mathcal I_A$ is the identity super-operator.

###### Proof.

For (i) we compute

$$\begin{aligned}
V_\alpha(X;Y)&=\big\|\Gamma^{-1/\alpha'}_{\rho_X}(\rho_{YX})-\rho_Y\otimes\rho_X^{1/\alpha}\big\|_{(1,\alpha)}\\
&=\big\|\big(\Psi\otimes(\Gamma^{-1/\alpha'}_{\Phi(\rho_A)}\circ\Phi\circ\Gamma^{1/\alpha'}_{\rho_A})\big)\big(\Gamma^{-1/\alpha'}_{\rho_A}(\rho_{BA})-\rho_B\otimes\rho_A^{1/\alpha}\big)\big\|_{(1,\alpha)}\\
&\le\big\|\Psi\otimes\big(\Gamma^{-1/\alpha'}_{\Phi(\rho_A)}\circ\Phi\circ\Gamma^{1/\alpha'}_{\rho_A}\big)\big\|_{(1,\alpha)\to(1,\alpha)}\cdot\big\|\Gamma^{-1/\alpha'}_{\rho_A}(\rho_{BA})-\rho_B\otimes\rho_A^{1/\alpha}\big\|_{(1,\alpha)}\\
&=\big\|\Psi\otimes\big(\Gamma^{-1/\alpha'}_{\Phi(\rho_A)}\circ\Phi\circ\Gamma^{1/\alpha'}_{\rho_A}\big)\big\|_{(1,\alpha)\to(1,\alpha)}\cdot V_\alpha(A;B),
\end{aligned}$$

where $\|\cdot\|_{(1,\alpha)\to(1,\alpha)}$ denotes the super-operator norm:

$$\|T\|_{(1,\alpha)\to(1,\alpha)}:=\sup_{M\neq0}\frac{\|T(M)\|_{(1,\alpha)}}{\|M\|_{(1,\alpha)}}.$$

Now using equation (3.5) and Theorem 13 of [16] we have

$$\big\|\mathcal I_B\otimes\big(\Gamma^{-1/\alpha'}_{\Phi(\rho_A)}\circ\Phi\circ\Gamma^{1/\alpha'}_{\rho_A}\big)\big\|_{(1,\alpha)\to(1,\alpha)}\le\big\|\Gamma^{-1/\alpha'}_{\Phi(\rho_A)}\circ\Phi\circ\Gamma^{1/\alpha'}_{\rho_A}\big\|_{\alpha\to\alpha}.$$

On the other hand, using Lemma 9 of [18] (see also [19]) we have

$$\big\|\Gamma^{-1/\alpha'}_{\Phi(\rho_A)}\circ\Phi\circ\Gamma^{1/\alpha'}_{\rho_A}\big\|_{\alpha\to\alpha}\le1.$$

Moreover, by Lemma 5 of [16] we have

$$\|\Psi\otimes\mathcal I_A\|_{(1,\alpha)\to(1,\alpha)}=\|\Psi\|_{1\to1}=1,$$

since $\Psi$ is CPTP. We conclude that $V_\alpha(X;Y)\le V_\alpha(A;B)$.

The proof of (ii) is similar, so we skip it. ∎

We now state the relation between $V_\alpha$, $W_\alpha$ and Rényi information measures.

###### Proposition 10.

For any bipartite density matrix $\rho_{AB}$ we have

$$2^{\frac{1}{\alpha'}I_\alpha(A;B)}-1\le V_\alpha(A;B)\le2^{\frac{1}{\alpha'}I_\alpha(A;B)}+1,$$

where $\alpha'$ is the Hölder conjugate of $\alpha$. For $W_\alpha$ we have

$$2^{-\frac{1}{\alpha'}H_\alpha(A|B)}-d_A^{-\frac{1}{\alpha'}}\le W_\alpha(A|B)\le2^{-\frac{1}{\alpha'}H_\alpha(A|B)}+d_A^{-\frac{1}{\alpha'}},$$

where $H_\alpha(A|B)$ is the Rényi conditional entropy defined in (9).

###### Proof.

By the triangle inequality we have

$$\big\|\Gamma^{-1/\alpha'}_{\rho_A}(\rho_{BA})\big\|_{(1,\alpha)}-\big\|\rho_B\otimes\rho_A^{1/\alpha}\big\|_{(1,\alpha)}\le\big\|\Gamma^{-1/\alpha'}_{\rho_A}(\rho_{BA})-\rho_B\otimes\rho_A^{1/\alpha}\big\|_{(1,\alpha)}\le\big\|\Gamma^{-1/\alpha'}_{\rho_A}(\rho_{BA})\big\|_{(1,\alpha)}+\big\|\rho_B\otimes\rho_A^{1/\alpha}\big\|_{(1,\alpha)}.$$

Moreover, by Remark 2 we have

$$\big\|\rho_B\otimes\rho_A^{1/\alpha}\big\|_{(1,\alpha)}=\|\rho_B\|_1\cdot\big\|\rho_A^{1/\alpha}\big\|_\alpha=1.$$

These give the first inequality. The proof of the second inequality is similar. ∎
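In the classical case, the bounds of Proposition 10 can be checked numerically by combining (8) with the classical expression for $V_\alpha$; a sketch (helpers ours, assuming $\alpha>1$ and full support):

```python
import numpy as np

def v_alpha(p_joint, alpha):
    p_a, p_b = p_joint.sum(axis=1), p_joint.sum(axis=0)
    cond = p_joint / p_a[:, None]
    inner = np.sum(p_a[:, None] * np.abs(cond - p_b[None, :]) ** alpha, axis=0)
    return float(np.sum(inner ** (1.0 / alpha)))

def sibson_mi(p_joint, alpha):
    ap = alpha / (alpha - 1.0)
    p_a = p_joint.sum(axis=1)
    inner = np.sum(p_a[:, None] * (p_joint / p_a[:, None]) ** alpha, axis=0)
    return float(ap * np.log2(np.sum(inner ** (1.0 / alpha))))

rng = np.random.default_rng(5)
p = rng.random((3, 3))
p /= p.sum()

alpha = 2.0
ap = alpha / (alpha - 1.0)
t = 2.0 ** (sibson_mi(p, alpha) / ap)  # 2^{I_alpha(A;B)/alpha'}
# Proposition 10: 2^{I_alpha/alpha'} - 1 <= V_alpha <= 2^{I_alpha/alpha'} + 1.
assert t - 1.0 <= v_alpha(p, alpha) <= t + 1.0
```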

###### Theorem 11.

Let $\rho_{ABC}$ be a tripartite density matrix. Then the following hold:

• For any $1\le\alpha\le2$ we have

$$W_\alpha(A|BC)\le2^{\frac{2}{\alpha}-1}\,d_C^{\frac{1}{\alpha'}}\,W_\alpha(AC|B).$$
• Assume that $\rho_{AC}=\frac{I_{AC}}{d_Ad_C}$ is maximally mixed. Then for any $1\le\alpha\le2$ we have

$$V_\alpha(A;BC)\le2^{\frac{2}{\alpha}-1}\,V_\alpha(AC;B).$$

Moreover, if $C$ is classical (and $\rho_{A|c}=\frac{I_A}{d_A}$ for all $c$) then

$$V_\alpha(A;B|C)\le2^{\frac{2}{\alpha}-1}\,V_\alpha(AC;B),$$

where we define

$$V_\alpha(A;B|C)=\sum_c p(c)\,V_\alpha(A;B|C{=}c).$$
###### Proof.

The proof of (ii) is immediate once we have (i) since if $\rho_A=\frac{I_A}{d_A}$ then

$$V_\alpha(A;B)=d_A^{\frac{1}{\alpha'}}\,W_\alpha(A|B).$$

Moreover, when $C$ is classical and

$$\rho_{ABC}=\sum_c p(c)\,\rho_{AB|c}\otimes|c\rangle\langle c|,$$

with $\rho_{A|c}=\frac{I_A}{d_A}=\rho_A$ for all $c$, we have

$$\begin{aligned}
V_\alpha(A;B|C)&=\Big\|\sum_c p(c)\,|c\rangle\langle c|\otimes\Gamma^{-1/\alpha'}_{\rho_A}(\rho_{BA|c})-\sum_c p(c)\,|c\rangle\langle c|\otimes\rho_{B|c}\otimes\rho_A^{1/\alpha}\Big\|_{(1,1,\alpha)}\\
&=\big\|\Gamma^{-1/\alpha'}_{\rho_A}(\rho_{CBA})-\rho_{CB}\otimes\rho_A^{1/\alpha}\big\|_{(1,1,\alpha)}\\
&=V_\alpha(A;BC).
\end{aligned}$$

So we only need to prove (i).

Define the super-operator $\Xi$ by

$$\Xi(M_{BCA})=M_{BCA}-\mathrm{tr}_A(M_{BCA})\otimes\frac{I_A}{d_A}.$$

We claim that

$$\|\Xi\|_{(1,\alpha,\alpha)\to(1,1,\alpha)}\le2^{\frac{2}{\alpha}-1}\,d_C^{\frac{1}{\alpha'}}. \tag{15}$$

Since vector-valued $L_p$-spaces form an interpolation family [14], by the Riesz-Thorin theorem (see Appendix A) it suffices to prove this for $\alpha=1$ and $\alpha=2$. For $\alpha=1$, by the triangle inequality we have

$$\begin{aligned}
\big\|\Xi(M_{BCA})\big\|_1&\le\|M_{BCA}\|_1+\big\|\mathrm{tr}_A(M_{BCA})\otimes I_A/d_A\big\|_1\\
&=\|M_{BCA}\|_1+\|\mathrm{tr}_A(M_{BCA})\|_1\cdot\|I_A/d_A\|_1\\
&=\|M_{BCA}\|_1+\|\mathrm{tr}_A(M_{BCA})\|_1\\
&\le2\,\|M_{BCA}\|_1,
\end{aligned}$$

where the second inequality comes from the fact that $\|\mathrm{tr}_A(M_{BCA})\|_1\le\|M_{BCA}\|_1$, which is easy to verify. We now prove the inequality for $\alpha=2$. We compute

$$\begin{aligned}
\big\|M_{BCA}-\mathrm{tr}_A(M_{BCA})\otimes I_A/d_A\big\|^2_{(1,1,2)}
&=\inf_{\tau_{BC},\sigma_{BC}}\big\|\tau_{BC}^{-1/4}M_{BCA}\,\sigma_{BC}^{-1/4}-\tau_{BC}^{-1/4}\mathrm{tr}_A(M_{BCA})\,\sigma_{BC}^{-1/4}\otimes I_A/d_A\big\|_2^2\\
&=\inf_{\tau_{BC},\sigma_{BC}}\big\|\tau_{BC}^{-1/4}M_{BCA}\,\sigma_{BC}^{-1/4}\big\|_2^2-\frac{1}{d_A}\big\|\tau_{BC}^{-1/4}\mathrm{tr}_A(M_{BCA})\,\sigma_{BC}^{-1/4}\big\|_2^2\\
&\le\inf_{\tau_{BC},\sigma_{BC}}\big\|\tau_{BC}^{-1/4}M_{BCA}\,\sigma_{BC}^{-1/4}\big\|_2^2\\
&\le\inf_{\tau_B,\sigma_B}\big\|(\tau_B\otimes I_C/d_C)^{-1/4}M_{BCA}\,(\sigma_B\otimes I_C/d_C)^{-1/4}\big\|_2^2\\
&=\inf_{\tau_B,\sigma_B}d_C\,\big\|\tau_B^{-1/4}M_{BCA}\,\sigma_B^{-1/4}\big\|_2^2\\
&=d_C\,\|M_{BCA}\|^2_{(1,2,2)}.
\end{aligned}$$

Then (15) holds for all $1\le\alpha\le2$, and for any $M_{BCA}$ we have

$$\big\|M_{BCA}-\mathrm{tr}_A(M_{BCA})\otimes I_A/d_A\big\|_{(1,1,\alpha)}\le2^{\frac{2}{\alpha}-1}\,d_C^{\frac{1}{\alpha'}}\,\|M_{BCA}\|_{(1,\alpha,\alpha)}.$$

Letting

$$M_{BCA}=\rho_{BCA}-\rho_B\otimes\frac{I_C}{d_C}\otimes\frac{I_A}{d_A},$$

in the above inequality we obtain the desired result. ∎
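In the classical case, part (i) of Theorem 11 can be tested numerically with the classical expressions for $W_\alpha$; the following sketch (helper names ours) checks the inequality for a random joint distribution and several $\alpha\in[1,2]$:

```python
import numpy as np

def w_A_given_BC(p, alpha):
    """Classical W_alpha(A|BC) for a joint pmf p indexed as p[b, c, a]."""
    p_bc = p.sum(axis=2)
    cond = p / p_bc[:, :, None]
    inner = np.sum(np.abs(cond - 1.0 / p.shape[2]) ** alpha, axis=2)
    return float(np.sum(p_bc * inner ** (1.0 / alpha)))

def w_AC_given_B(p, alpha):
    """Classical W_alpha(AC|B) for a joint pmf p indexed as p[b, c, a]."""
    p_b = p.sum(axis=(1, 2))
    cond = p / p_b[:, None, None]
    d_ca = p.shape[1] * p.shape[2]
    inner = np.sum(np.abs(cond - 1.0 / d_ca) ** alpha, axis=(1, 2))
    return float(np.sum(p_b * inner ** (1.0 / alpha)))

rng = np.random.default_rng(6)
dB, dC, dA = 3, 2, 4
p = rng.random((dB, dC, dA))
p /= p.sum()

for alpha in (1.0, 1.5, 2.0):
    # Constant of Theorem 11(i): 2^{2/alpha - 1} d_C^{1/alpha'} with 1/alpha' = 1 - 1/alpha.
    const = 2.0 ** (2.0 / alpha - 1.0) * dC ** (1.0 - 1.0 / alpha)
    assert w_A_given_BC(p, alpha) <= const * w_AC_given_B(p, alpha) + 1e-12
```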

The next theorem gives a “weak converse” of the above inequalities.

###### Theorem 12.

For every tripartite density matrix $\rho_{ABC}$ and $\alpha\ge1$ the following hold:

• .

• If then

 Vα(AC;B)≤d−1/α′