# The conditional Entropy Power Inequality for quantum additive noise channels

We prove the quantum conditional Entropy Power Inequality for quantum additive noise channels. This inequality lower bounds the quantum conditional entropy of the output of an additive noise channel in terms of the quantum conditional entropies of the input state and the noise when they are conditionally independent given the memory. We also show that this conditional Entropy Power Inequality is optimal in the sense that we can achieve equality asymptotically by choosing a suitable sequence of Gaussian input states. We apply the conditional Entropy Power Inequality to find an array of information-theoretic inequalities for conditional entropies which are the analogues of inequalities which have already been established in the unconditioned setting. Furthermore, we give a simple proof of the convergence rate of the quantum Ornstein-Uhlenbeck semigroup based on Entropy Power Inequalities.


## 1 Introduction

Additive noise channels are central objects of interest in information theory. A general class of such channels can be modeled by the well-known convolution operation: if $X$ and $Y$ are two independent random variables with values in $\mathbb{R}^k$, the convolution operation combines $X$ and $Y$ into a new random variable $X+Y$, the probability density function of which is given by

$$f_{X+Y}(z) := \int_{\mathbb{R}^k} f_X(z-x)\, f_Y(x)\, \mathrm{d}^k x\,. \tag{1}$$

The convolution is a well-studied operation and it plays a role in many inequalities from functional analysis, such as Young’s Inequality and its sharp version [1, 2] as well as the entropy power inequality [3, 4, 5, 6]. These inequalities have important applications in classical information theory, as they can be used to bound communication capacities, which was originally carried out by Shannon [3]. An extensive overview of the many related inequalities in this area is given in [6].

Central to the work presented here is the entropy power inequality. It deals with the entropy of a linear combination of two independent random variables $X$ and $Y$ with values in $\mathbb{R}^k$,

$$Z := \sqrt{\lambda}\, X + \sqrt{|1-\lambda|}\, Y\,, \qquad \lambda \ge 0\,. \tag{2}$$

The statement of the entropy power inequality [3, 4, 5, 6] is

$$\exp\frac{2S(Z)}{k} \ge \lambda\, \exp\frac{2S(X)}{k} + |1-\lambda|\, \exp\frac{2S(Y)}{k}\,, \tag{3}$$

where $S$ is the Shannon differential entropy of the random variable. A conditional version of (3) can easily be derived: if $X$ and $Y$ are conditionally independent given the random variable $M$ (sometimes interpreted as a memory), then

$$\exp\frac{2S(Z|M)}{k} \ge \lambda\, \exp\frac{2S(X|M)}{k} + |1-\lambda|\, \exp\frac{2S(Y|M)}{k}\,. \tag{4}$$
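As a quick sanity check (illustrative, not part of the original text), inequality (3) is saturated by Gaussians with proportional covariance matrices; a minimal numerical sketch:

```python
import numpy as np

# Entropy power exp(2 S(X)/k) of X ~ N(0, var·I_k): since
# S(X) = (k/2) log(2·pi·e·var), the entropy power equals 2·pi·e·var.
def entropy_power(var):
    return 2 * np.pi * np.e * var

lam, s1, s2 = 0.4, 1.0, 2.0
# Z = sqrt(lam)·X + sqrt(1-lam)·Y has per-component variance lam·s1 + (1-lam)·s2.
var_Z = lam * s1 + (1 - lam) * s2

lhs = entropy_power(var_Z)
rhs = lam * entropy_power(s1) + (1 - lam) * entropy_power(s2)
assert abs(lhs - rhs) < 1e-12   # equality: Gaussians saturate the EPI
```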

In quantum information theory, an analogous operation to the convolution (1) is given by the action of a beam splitter with transmissivity $\lambda \in [0,1]$ on a quantum state (i.e., a linear positive operator with unit trace) $\rho_{AB}$ which is bipartite on two $n$-mode Gaussian quantum systems $A$ and $B$. This action has the form

$$\rho_{AB} \mapsto \rho_C = \mathrm{tr}_2\big(U_\lambda\, \rho_{AB}\, U_\lambda^\dagger\big)\,, \tag{5}$$

where $C$ is again an $n$-mode quantum system and $\mathrm{tr}_2$ denotes the partial trace over the second system. The mathematical motivation of the study of this operation is that in the special case of a product state, that is $\rho_{AB} = \rho_A \otimes \rho_B$, it is formally similar to the convolution described in (1) on the level of Wigner functions. For the beam splitter (5), several important inequalities in the same spirit as in classical information theory have been established [7, 8, 9, 10, 11]. For instance, the quantum entropy power inequality reads

$$\exp\frac{S(C)}{n} \ge \lambda\, \exp\frac{S(A)}{n} + (1-\lambda)\, \exp\frac{S(B)}{n}\,, \tag{6}$$

with $S$ being the von Neumann entropy of a quantum state. Unlike in the classical setting, a conditional entropy power inequality for the operation (5) does not trivially follow from the unconditioned inequality (6). However, it was recently established in [11] that such an inequality holds nonetheless: for a joint quantum state $\rho_{ABM}$ such that $A$ and $B$ are conditionally independent given the memory system $M$, we have

$$\exp\frac{S(C|M)}{n} \ge \lambda\, \exp\frac{S(A|M)}{n} + (1-\lambda)\, \exp\frac{S(B|M)}{n}\,, \tag{7}$$

where $S(\cdot|M)$ is the quantum conditional entropy. The conditional independence of $A$ and $B$ given $M$ is expressed with the condition that the quantum conditional mutual information equals zero:

$$I(A:B|M) := S(A|M) + S(B|M) - S(AB|M) = 0\,. \tag{8}$$
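For one-mode thermal states the inequality (6) can be checked directly: a beam splitter of transmissivity $\lambda$ maps thermal states with mean photon numbers $N_A$, $N_B$ to the thermal state with mean photon number $\lambda N_A + (1-\lambda)N_B$, and the entropy of a thermal state is $g(N) := (N+1)\log(N+1) - N\log N$. A small numerical sketch (illustrative only):

```python
import numpy as np

def g(N):
    # von Neumann entropy of a one-mode thermal state with mean photon number N
    return (N + 1) * np.log(N + 1) - N * np.log(N) if N > 0 else 0.0

lam, Na, Nb = 0.3, 1.0, 2.5
Nc = lam * Na + (1 - lam) * Nb     # output of the beam splitter is again thermal

lhs = np.exp(g(Nc))                                    # exp S(C)   (n = 1)
rhs = lam * np.exp(g(Na)) + (1 - lam) * np.exp(g(Nb))  # λ exp S(A) + (1-λ) exp S(B)
assert lhs >= rhs   # quantum EPI; the gap is small, since equality is only asymptotic
```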

Our work concerns yet another convolution operation, which mixes a probability density function $f$ on phase space $\mathbb{R}^{2n}$ with an $n$-mode quantum state $\rho$,

$$f \star \rho := \int_{\mathbb{R}^{2n}} f(\xi)\, D(\xi)\, \rho\, D(\xi)^\dagger\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n}\,, \tag{9}$$

where the $D(\xi)$ are the Weyl displacement operators in phase space. This operation was first introduced by Werner in [12]. Werner established a number of results regarding (9), most notably a Young-type inequality. In [13], more inequalities involving this operation were shown, most prominently the entropy power inequality

$$\exp\frac{S(f\star\rho)}{n} \ge \exp\frac{S(f)}{n} + \exp\frac{S(\rho)}{n}\,. \tag{10}$$

In the context of mixing times of semigroups, the authors in [14] have used this convolution extensively and proved various properties which are related to the discussion of the entropy power inequality.

### 1.1 Our contribution

Similarly to the work carried out in [11] for the beam splitter, we prove the conditional version of the entropy power inequality for the convolution given by (9). Let us consider an $n$-mode Gaussian quantum system $A$, a generic quantum system $M$, and a classical system $R$ which “stores” a classical probability density function on $\mathbb{R}^{2n}$. Let us further consider the map $\mathcal{E}$ which on product states acts as the convolution (9), $\mathcal{E}(\rho_A \otimes \rho_R) = \rho_R \star \rho_A$, linearly extended to generic states as

$$\rho_C = \mathcal{E}(\rho_{AR}) = \int_{\mathbb{R}^{2n}} D(\xi)\, \rho_{A|R=\xi}\, D(\xi)^\dagger\, \rho_R(\xi)\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n}\,. \tag{11}$$

We show in Theorem 5 that the conditional entropy of the output of $\mathcal{E}$ is lower bounded as

$$\exp\frac{S(C|M)}{n} \ge \exp\frac{S(A|M)}{n} + \exp\frac{S(R|M)}{n}\,, \tag{12}$$

if $I(A:R|M) = 0$, i.e., the systems $A$ and $R$ are conditionally independent given the system $M$. As a special case, this inequality implies useful inequalities about the convolution (9) in the case when $R$ is uncorrelated with $M$,

$$\exp\frac{S(C|M)}{n} \ge \exp\frac{S(A|M)}{n} + \exp\frac{S(\rho_R)}{n}\,. \tag{13}$$

In the particular case when $R$ is a Gaussian random variable with probability density function $f_{Z,t}$, the inequality becomes

$$\exp\frac{S(C|M)}{n} \ge \exp\frac{S(A|M)}{n} + e\,t\,. \tag{14}$$

The special cases mentioned above are important in various applications, as we will show later.
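The constant $e\,t$ in the Gaussian case is precisely the entropy power of the noise: for the density $f_{Z,t}(z) = e^{-\|z\|^2/2t}/t^n$ on $\mathbb{R}^{2n}$, with the rescaled measure $\mathrm{d}^{2n}z/(2\pi)^n$ used throughout, a short computation gives

```latex
S(f_{Z,t}) = -\int_{\mathbb{R}^{2n}} f_{Z,t}(z)\,\log f_{Z,t}(z)\,\frac{\mathrm{d}^{2n}z}{(2\pi)^n}
           = \frac{\mathbb{E}\,\|Z\|^2}{2t} + n\log t
           = \frac{2nt}{2t} + n\log t
           = n\log(e\,t)\,,
```

so that $\exp(S(f_{Z,t})/n) = e\,t$, which is the second summand in the Gaussian case of the inequality.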

This conditional entropy power inequality is tight in the sense that it is saturated asymptotically for any admissible pair of values of the conditional entropies $S(A|M)$ and $S(R|M)$ by an appropriate sequence of Gaussian input states, which we show in Theorem 6. This behaviour is similar to the case of the beam splitter. On the way to this inequality, several intermediate results are proven which make up a set of information-theoretic inequalities regarding conditional Fisher information and conditional entropies. To complete the picture of information-theoretic inequalities involving quantum conditional entropies, we apply our results to prove a number of additional inequalities in a spirit similar to the classical case. Among them are the concavity of the quantum conditional entropy along the heat flow (Theorem 8) and an isoperimetric inequality for quantum conditional entropies (Lemma 7). Furthermore, we show in subsection 8.3 how, similarly to the case of the beam splitter, the conditional entropy power inequality implies a converse bound on the entanglement-assisted classical capacity of a non-Gaussian quantum channel, the classical noise channel defined in (9).

Another part of our work regards the quantum Ornstein-Uhlenbeck (qOU) semigroup. It is the one-parameter semigroup of completely positive and trace-preserving (CPTP) maps $P^{(\mu,\lambda)}(t)$ on the one-mode Gaussian quantum system generated by the Liouvillian

$$\mathcal{L}_{\mu,\lambda} = \mu^2\, \mathcal{L}_- + \lambda^2\, \mathcal{L}_+ \qquad \text{for } \mu > \lambda > 0\,, \tag{15}$$

where

$$\mathcal{L}_+(\rho) = a^\dagger \rho\, a - \tfrac12\{a a^\dagger, \rho\} \qquad\text{and}\qquad \mathcal{L}_-(\rho) = a\, \rho\, a^\dagger - \tfrac12\{a^\dagger a, \rho\}\,, \tag{16}$$

and $a$ is the ladder operator of the system. This quantum dynamical semigroup has a unique fixed point given by

$$\omega_{\mu,\lambda} := \frac{\mu^2-\lambda^2}{\mu^2} \sum_{k=0}^\infty \left(\frac{\lambda^2}{\mu^2}\right)^{\!k} |k\rangle\langle k|\,, \tag{17}$$

where $\{|k\rangle\}_{k\ge0}$ is the Fock basis of the system. It has been shown in [15] using methods of gradient flow that the quantum Ornstein-Uhlenbeck semigroup converges in relative entropy to the fixed point at an exponential rate given by the exponent $\mu^2-\lambda^2$,

$$D\big(P^{(\mu,\lambda)}(t)(\rho)\,\big\|\,\omega_{\mu,\lambda}\big) \le e^{-(\mu^2-\lambda^2)t}\, D\big(\rho\,\big\|\,\omega_{\mu,\lambda}\big) \qquad \text{for all } t \ge 0\,, \tag{18}$$

where $D(\cdot\|\cdot)$ is the quantum relative entropy [16].
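The fixed point (17) is the thermal state with geometric Fock-basis eigenvalues $p_k = (1-q)q^k$, $q = \lambda^2/\mu^2$, and hence mean photon number $\lambda^2/(\mu^2-\lambda^2)$. A quick numerical check of these two facts (illustrative, with a truncated Fock space):

```python
import numpy as np

mu, lam = 2.0, 1.0
q = lam**2 / mu**2                 # ratio of the geometric eigenvalue sequence
k = np.arange(200)                 # truncated Fock space; the tail is negligible
p = (1 - q) * q**k                 # eigenvalues of the fixed point (17)

assert abs(p.sum() - 1.0) < 1e-12  # normalization of the state
mean_photons = (k * p).sum()
assert abs(mean_photons - lam**2 / (mu**2 - lam**2)) < 1e-10
```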

We show that a simple application of the linear version of the entropy power inequality (6) for the beam splitter is sufficient to prove this convergence rate. We also show a simple derivation of an analogous result for the case of a bipartite quantum system $AM$, where the system $A$ undergoes a qOU evolution, using the linear conditional entropy power inequality for the beam splitter recently proven in [11]. Specifically, we are going to show in Theorem 9 that

$$D\big((P^{(\mu,\lambda)}(t)\otimes\mathbb{1}_M)(\rho_{AM})\,\big\|\,\omega_A^{(\mu,\lambda)}\otimes\rho_M\big) \le e^{-(\mu^2-\lambda^2)t}\, D\big(\rho_{AM}\,\big\|\,\omega_A^{(\mu,\lambda)}\otimes\rho_M\big)\,, \tag{19}$$

which directly implies the statement (18). Finite-dimensional versions of the statement (19) for general semigroups have recently been studied by Bardet [17]. Our argument shows that entropy power inequalities are a useful tool to study the convergence rate of semigroups.

The proof of the unconditioned entropy power inequality (10) given in [13] exhibits certain regularity issues regarding the Fisher information: the Fisher information was defined as the Hessian of a relative entropy, without a proof of well-definedness. Various proofs of the entropy power inequality for the beam splitter had similar issues [7, 8, 9]. They were settled in [11] by the adoption of a proof technique which starts with an integral version of the quantum Fisher information. We adopt a similar approach here. Since the conditional entropy power inequality reduces to the unconditioned inequality in the case where the system $M$ is trivial, this also gives a more rigorous proof of the unconditioned entropy power inequality. As such, our work can be seen as both a completion of the work carried out in [13] and a generalization thereof.

We now sketch the basic structure of the proof of our main result. The main ingredients in proving entropy power inequalities [5, 7, 9, 11, 13] are similar in all proofs, which all use the evolution under the heat semigroup. These ingredients are the Fisher information, de Bruijn’s identity, the Stam inequality, and a result on the asymptotic scaling of the entropy under the heat flow. First we define a “classical-quantum” integral conditional Fisher information, by which we mean a Fisher information of a classical system which is conditioned on a quantum system. We show in Theorem 1 that this quantity satisfies a de Bruijn identity, which links it to the change of the conditional entropy under the heat flow. We show the regularity of the integral conditional Fisher information in Theorem 2 and then prove the conditional Stam inequality in Theorem 3. In the next part, we show in Theorem 4 that the quantum conditional entropy of a classical system undergoing the classical heat flow evolution conditioned on a quantum system satisfies the same universal scaling which was shown for the quantum conditional entropy of a quantum system undergoing the quantum heat flow evolution conditioned on a quantum system. It is crucial for the proof of our conditional entropy power inequality that these two scalings are not only both universal but also the same. This scaling then implies that asymptotically, the inequality we want to prove becomes an equality. Then it is left to show that it is enough to consider the inequality in the asymptotic limit, i.e., the difference of the two sides of the inequality behaves under the heat flow in a way which only makes the inequality “worse”.

The paper is structured as follows: In section 2 we present bosonic quantum systems and the relevant quantities required for our discussion. In section 3, the integral version of the quantum conditional Fisher information is adapted to the convolution (9). Sections 4 and 5 are dedicated to the proof of various inequalities that are central to the proof of entropy power inequalities, such as the Stam inequality and an asymptotic scaling of the conditional entropy. Section 6 then proves the conditional entropy power inequality for the convolution (9) as our main result. Optimality of the conditional entropy power inequality is shown in section 7. This is followed by the derivation of various related information-theoretic inequalities involving the quantum conditional entropy in section 8. Before concluding, we apply the conditional entropy power inequality to bound the convergence rate of bipartite systems where one system undergoes a quantum Ornstein-Uhlenbeck semigroup evolution in section 9.

## 2 Preliminaries

Let us consider an $n$-mode bosonic system [16, 18] with “position” and “momentum” operators $Q_k$, $P_k$, $k=1,\ldots,n$, for each mode which satisfy the canonical commutation relations $[Q_j, P_k] = i\,\delta_{jk}\,\mathbb{1}$. If we denote the vector of position and momentum operators by $R = (Q_1, P_1, \ldots, Q_n, P_n)$, the canonical commutation relations become

$$[R_j, R_k] = i\,\Delta_{jk}\,\mathbb{1}\,, \qquad j,k = 1,\ldots,2n\,, \tag{20}$$

where $\Delta$ is the symplectic form.

The Weyl displacement operators are defined by

$$D(\xi) := \exp\big(i\,\xi\cdot(\Delta^{-1}R)\big)\,, \qquad \text{for } \xi \in \mathbb{R}^{2n}\,. \tag{21}$$

The displacement operators satisfy the commutation relations

$$D(\xi)\,D(\eta) = \exp\Big(-\frac{i}{2}\,\xi\cdot(\Delta^{-1}\eta)\Big)\, D(\xi+\eta)\,, \qquad \text{for } \xi,\eta \in \mathbb{R}^{2n}\,, \tag{22}$$

and the “displacement property” on the mode operators

$$D(\xi)^\dagger\, R_j\, D(\xi) = R_j + \xi_j\,\mathbb{1}\,. \tag{23}$$

Given an $n$-mode quantum state $\rho$, we define its first moments as

$$d_k(\rho) := \mathrm{tr}[R_k\,\rho]\,, \qquad \text{for } k = 1,\ldots,2n\,, \tag{24}$$

and its covariance matrix (for finite first moments) as

$$\Gamma_{kl}(\rho) := \frac12\,\mathrm{tr}\big[\{R_k - d_k(\rho),\, R_l - d_l(\rho)\}\,\rho\big]\,, \qquad k,l = 1,\ldots,2n\,, \tag{25}$$

with the anticommutator $\{A,B\} := AB + BA$.

The aforementioned concepts of displacements and first and second moments are the quantum analogs of the classical concepts. For a probability distribution function $f$ on $\mathbb{R}^{2n}$, we define its displacement by a vector $\eta \in \mathbb{R}^{2n}$ as

$$f^{(\eta)}(\xi) = f(\xi - \eta)\,. \tag{26}$$

Furthermore, we denote the energy of the function $f$ by the sum of its second moments,

$$E(f) = \sum_{k=1}^{2n} \int_{\mathbb{R}^{2n}} \xi_k^2\, f(\xi)\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n}\,. \tag{27}$$

The quantities $\mu_k = \int_{\mathbb{R}^{2n}} \xi_k\, f(\xi)\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n}$ are called the first moments of $f$, and

$$\gamma_{kl} = \int_{\mathbb{R}^{2n}} f(\xi)\, (\xi_k - \mu_k)(\xi_l - \mu_l)\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n} \tag{28}$$

is called the covariance matrix of $f$. We remark that we have rescaled the Lebesgue measure on $\mathbb{R}^{2n}$ in these definitions, purely for convenience.

###### Definition 1 (Quantum heat semigroup).

The quantum heat semigroup is the following time evolution for any quantum state $\rho$:

$$\mathcal{N}(t)(\rho) := \int_{\mathbb{R}^{2n}} e^{-\frac{\|\xi\|^2}{2t}}\, \rho^{(\xi)}\, \frac{\mathrm{d}^{2n}\xi}{(2\pi t)^n} \qquad \text{for } t > 0\,, \tag{29}$$

$$\mathcal{N}(0) := \mathbb{1}\,, \tag{30}$$

where $\rho^{(\xi)} := D(\xi)\,\rho\,D(\xi)^\dagger$ is a displacement of the state $\rho$ by $\xi$.

The quantum heat semigroup has a semigroup structure, that is, for any $s,t \ge 0$, we have

$$\mathcal{N}(s)\circ\mathcal{N}(t) = \mathcal{N}(s+t)\,. \tag{31}$$

We note that if $f_{Z,t}$ is the probability density function of a Gaussian random variable $Z$ with zero mean and covariance matrix $t\,\mathbb{1}_{2n}$, then we have

$$\mathcal{N}(t)(\rho) = f_{Z,t} \star \rho\,. \tag{32}$$

The quantum heat semigroup is the quantum analog of the classical heat semigroup, which we recall here. It can be written in an analogous way to the quantum heat semigroup:

###### Definition 2 (Classical heat semigroup).

The classical heat semigroup is the following time evolution defined on a probability density function $f$ on $\mathbb{R}^{2n}$:

$$(\mathcal{N}_{\mathrm{cl}}(t)(f))(\eta) := \int_{\mathbb{R}^{2n}} e^{-\frac{\|\xi\|^2}{2t}}\, f^{(\xi)}(\eta)\, \frac{\mathrm{d}^{2n}\xi}{(2\pi t)^n}\,, \tag{33}$$

$$\mathcal{N}_{\mathrm{cl}}(0) := \mathbb{1}\,. \tag{34}$$

We also have that for any $s,t \ge 0$,

$$\mathcal{N}_{\mathrm{cl}}(s)\circ\mathcal{N}_{\mathrm{cl}}(t) = \mathcal{N}_{\mathrm{cl}}(s+t)\,. \tag{35}$$

We note again that we have

$$\mathcal{N}_{\mathrm{cl}}(t)(f) = f_{Z,t} \star f\,, \tag{36}$$

where

$$(g \star f)(\eta) := \int_{\mathbb{R}^{2n}} g(\xi)\, f(\eta - \xi)\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n} \tag{37}$$

is the well-known classical convolution of the two functions $g$ and $f$ (with a factor of $(2\pi)^n$ in the Lebesgue measure on $\mathbb{R}^{2n}$, which we introduce purely for convenience).
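The semigroup law (35) amounts to the statement that convolving heat kernels adds their variances; a discretized one-dimensional sanity check (illustrative, using the plain Lebesgue normalization rather than the rescaled measure of (37)):

```python
import numpy as np

def kernel(x, t):
    # one-dimensional heat kernel of variance t
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

x = np.linspace(-20.0, 20.0, 4001)   # symmetric grid centred at 0, dx = 0.01
dx = x[1] - x[0]
s, t = 0.5, 1.5

# N_cl(s) ∘ N_cl(t) applied to a point mass: convolution of the two kernels ...
composed = np.convolve(kernel(x, s), kernel(x, t), mode="same") * dx
# ... equals the kernel of N_cl(s + t):
direct = kernel(x, s + t)
assert np.max(np.abs(composed - direct)) < 1e-8
```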

The convolution (9) is compatible with displacements and with the heat semigroup evolution in a convenient way, which is stated in the following two lemmas:

###### Lemma 1 (Compatibility with displacements of the convolution (9)).

[13, Lemma 2] Let $f$ be a probability density function on $\mathbb{R}^{2n}$ and $\rho$ an $n$-mode quantum state. Then we have for any $\xi_1, \xi_2 \in \mathbb{R}^{2n}$,

$$(f \star \rho)^{(\xi_1+\xi_2)} = f^{(\xi_1)} \star \rho^{(\xi_2)}\,, \tag{38}$$

where $\rho^{(\xi)} := D(\xi)\,\rho\,D(\xi)^\dagger$.

###### Remark 1.

Lemma 2 in [13] only states the compatibility for the case where $\xi_1$ and $\xi_2$ are parallel. Nonetheless, the proof given there also works to prove the statement above.

###### Lemma 2 (Compatibility with the heat semigroup of the convolution (9)).

[13, Lemma 5] Assume the same prerequisites as in Lemma 1 and let $t_1, t_2 \ge 0$. Then we have

$$\mathcal{N}(t_1+t_2)(f \star \rho) = \mathcal{N}_{\mathrm{cl}}(t_1)(f) \star \mathcal{N}(t_2)(\rho)\,. \tag{39}$$
###### Definition 3 (Shannon differential entropy).

For a classical $\mathbb{R}^{2n}$-valued random variable $X$ with a probability density function $f$, we define the Shannon differential entropy as

$$S(X) = S(f) = -\int_{\mathbb{R}^{2n}} f(\xi)\,\log f(\xi)\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n}\,. \tag{40}$$

We continue with a short review of Gaussian quantum states. An $n$-mode quantum state is called Gaussian if it has the following form [16]:

$$\rho_G = \frac{\exp\!\big[-\tfrac12\sum_{k,l=1}^{2n}(R_k-d_k)\,h_{kl}\,(R_l-d_l)\big]}{\mathrm{tr}\,\exp\!\big[-\tfrac12\sum_{k,l=1}^{2n}(R_k-d_k)\,h_{kl}\,(R_l-d_l)\big]}\,, \tag{41}$$

where $h$ is a positive definite real $2n\times2n$ matrix and $d$ is the vector of first moments of the state. The entropy of such a Gaussian state is given by

$$S(\rho_G) = \sum_{k=1}^n g\!\left(\nu_k - \tfrac12\right)\,, \tag{42}$$

where $g(x) := (x+1)\log(x+1) - x\log x$ and $\nu_1,\ldots,\nu_n$ are the symplectic eigenvalues of the covariance matrix $\Gamma(\rho_G)$, i.e., the absolute values of the eigenvalues of $i\,\Delta\,\Gamma(\rho_G)$.

A Gaussian state is called thermal if its first moments are zero and the matrix $h$ is proportional to the identity. Such thermal states have the special form

$$\omega_\beta = \frac{e^{-\beta H}}{\mathrm{tr}\, e^{-\beta H}}\,, \qquad h = \beta\,\mathbb{1}_{2n}\,, \quad \beta > 0\,, \tag{43}$$

for the Hamiltonian of $n$ harmonic oscillators $H = \tfrac12\sum_{k=1}^{2n} R_k^2$. Gaussian states fulfill a special extremality property: among all states with a given average energy $\mathrm{tr}[H\rho]$, thermal states maximize the von Neumann entropy. Furthermore, among all states with fixed covariance matrix, the Gaussian state is the one with maximal entropy [19, 20].
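As a concrete illustration of (42) (not from the original text), consider a one-mode thermal state with mean photon number $N$: its covariance matrix is $(N+\tfrac12)\,\mathbb{1}_2$, so the single symplectic eigenvalue is $\nu = N + \tfrac12$ and $S = g(N)$, which agrees with the Shannon entropy of its geometric Fock-basis spectrum $p_k = N^k/(N+1)^{k+1}$:

```python
import numpy as np

def g(x):
    # g(x) = (x+1) log(x+1) - x log x, with g(0) = 0
    return (x + 1) * np.log(x + 1) - x * np.log(x) if x > 0 else 0.0

N = 1.7
Delta = np.array([[0.0, 1.0], [-1.0, 0.0]])   # symplectic form, one mode
Gamma = (N + 0.5) * np.eye(2)                 # covariance matrix of the thermal state

# symplectic eigenvalue: absolute value of an eigenvalue of i·Delta·Gamma
nu = np.abs(np.linalg.eigvals(1j * Delta @ Gamma))[0]
S_symplectic = g(nu - 0.5)                    # formula (42)

# cross-check against the Fock-basis spectrum (truncated; the tail is negligible)
k = np.arange(200)
p = N**k / (N + 1)**(k + 1)
S_fock = -np.sum(p * np.log(p))

assert abs(S_symplectic - S_fock) < 1e-8
```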

In our proofs, we are going to require the notion of quantum conditional Fisher information of quantum systems which was introduced in [11]. We repeat the main properties of this quantity here; for a thorough definition and proofs we refer to [11]. Before giving this definition, we clarify the notion of “classical-quantum” states on a system $MR$ when the classical system $R$ is continuous. A state on $MR$ is a probability measure on $\mathbb{R}^{2n}$ which takes values in the trace-class operators, i.e., a measurable collection $\xi \mapsto \rho_{MR}(\xi)$ of positive trace-class operators on $M$ with the normalization condition

$$\int_{\mathbb{R}^{2n}} \mathrm{tr}_M[\rho_{MR}(\xi)]\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n} = 1\,. \tag{44}$$

This state “stores” a classical probability density function $f$ in the classical system $R$ if its marginal on $R$ has $f$ as probability density function. The marginals of $\rho_{MR}$ are

$$\rho_M = \int_{\mathbb{R}^{2n}} \rho_{MR}(\xi)\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n}\,, \qquad \rho_R(\xi) = \mathrm{tr}_M[\rho_{MR}(\xi)]\,, \tag{45}$$

and the conditional states on $M$ given the value $\xi$ of $R$ are

$$\rho_{M|R=\xi} = \frac{\rho_{MR}(\xi)}{\rho_R(\xi)}\,. \tag{46}$$

We do not consider the case where the probability measure is not absolutely continuous with respect to the Lebesgue measure, since in this case its Shannon differential entropy is not defined. For a more detailed discussion, we refer to [21, Section III.A.3] and references therein ([22] and [23, Chapters 4.6–4.7]).

We can also define displacements of such a classical-quantum state: we write $\rho_{RM}^{(\xi_1,\xi_2)}$ to denote the state where the classical system $R$ has been displaced by $\xi_1$ and the quantum system has been displaced by $\xi_2$.

###### Definition 4 (Quantum integral conditional Fisher information).

[11, Definition 6] Let $A$ be an $n$-mode bosonic quantum system, and $M$ a generic quantum system. Let $\rho_{AM}$ be a quantum state on $AM$. For any $t \ge 0$, the integral Fisher information of $A$ conditioned on $M$ is given by

$$\Delta_{A|M}(\rho_{AM})(t) := I(A:Z|M)_{\sigma_{AMZ}(t)} \ge 0\,, \qquad t > 0\,, \tag{47}$$

$$\Delta_{A|M}(\rho_{AM})(0) := 0\,, \tag{48}$$

where $Z$ is a classical Gaussian random variable with values in $\mathbb{R}^{2n}$ and probability density function

$$f_{Z,t}(z) = \frac{e^{-\frac{|z|^2}{2t}}}{t^n}\,, \qquad z \in \mathbb{R}^{2n}\,, \tag{49}$$

and $\sigma_{AMZ}(t)$ is the quantum state on $AMZ$ such that its marginal on $Z$ is $f_{Z,t}$ and for any $z \in \mathbb{R}^{2n}$,

$$\sigma_{AM|Z=z}(t) = D_A(z)\,\rho_{AM}\,D_A(z)^\dagger\,. \tag{50}$$
###### Definition 5 (Quantum conditional Fisher information).

[11, Definition 7, Proposition 1] Let $\rho_{AM}$ be a quantum state on $AM$ such that the marginal $\rho_A$ has finite energy and the marginal $\rho_M$ has finite entropy. Then we define the quantum conditional Fisher information of $A$ conditioned on $M$ as

$$J(A|M)_{\rho_{AM}} := \lim_{t\to0}\frac{\Delta_{A|M}(\rho_{AM})(t)}{t} = \frac{\mathrm{d}}{\mathrm{d}t}\, S(A|M)_{(\mathcal{N}_A(t)\otimes\mathbb{1}_M)(\rho_{AM})}\bigg|_{t=0}\,. \tag{51}$$

As shown in [11], this limit always exists.

Finally, we are going to require a notion of conditional entropy of a classical system which is conditioned on a quantum system. If the system $M$ on which we condition is classical, the conditional entropy is simply

$$S(A|M) = \int_M S(A|M=m)\, \mathrm{d}p_M(m)\,, \tag{52}$$

where $p_M$ is the probability distribution of $M$. This definition is independent of whether the system $A$ is classical or quantum. We now define the conditional entropy of a classical system which is conditioned on a quantum system in a way such that the chain rule for entropies is preserved.

###### Definition 6 (Quantum conditional entropy of classical-quantum systems).

Let $R$ be a classical system and $M$ a quantum system. We define the conditional entropy of $R$ given $M$ as

$$S(R|M) = S(M|R) + S(R) - S(M)\,, \tag{53}$$

whenever the three quantities appearing on the right-hand side are finite.

The case where $S(M|R)$, $S(R)$, and $S(M)$ are not all finite will not be part of our consideration.

## 3 Quantum integral conditional Fisher information

In this section we consider a generic quantum system $M$ and a classical system $R$. We are going to define the quantum integral conditional Fisher information of $R$ conditioned on $M$ and prove a de Bruijn identity as well as a number of useful properties.

###### Definition 7 (quantum integral conditional Fisher information).

For a quantum state $\rho_{RM}$ on $RM$ whose marginal on $R$ is $\rho_R$ and $t \ge 0$, define the integral Fisher information of $R$ conditioned on $M$ as

$$\Delta_{R|M}(\rho_{RM})(t) := I(R:Z|M)_{\sigma_{RZM}(t)}\,, \tag{54}$$

$$\Delta_{R|M}(\rho_{RM})(0) := 0\,, \tag{55}$$

where $Z$ is a classical Gaussian random variable with probability density function equal to

$$f_{Z,t}(\xi) = \frac{e^{-\frac{|\xi|^2}{2t}}}{t^n}\,, \qquad \xi \in \mathbb{R}^{2n}\,, \tag{56}$$

and $\sigma_{RZM}(t)$ is the quantum state on $RZM$ such that its marginal on $Z$ is equal to $f_{Z,t}$, and for any $z \in \mathbb{R}^{2n}$, we have

$$\sigma_{RM|Z=z}(t) = \rho_{RM}^{(z,0)}\,. \tag{57}$$

The marginal of $\sigma_{RZM}(t)$ on $RM$ is equal to

$$\sigma_{RM}(t) = (\mathcal{N}_{\mathrm{cl}}(t)\otimes\mathbb{1}_M)(\rho_{RM})\,. \tag{58}$$

The marginal on $R$ has probability density function $\mathcal{N}_{\mathrm{cl}}(t)(\rho_R)$.

###### Theorem 1 (Integral conditional de Bruijn identity).
$$\Delta_{R|M}(\rho_{RM})(t) = S(R|M)_{(\mathcal{N}_{\mathrm{cl}}(t)\otimes\mathbb{1}_M)(\rho_{RM})} - S(R|M)_{\rho_{RM}}\,. \tag{59}$$

###### Proof.

We use the definition of the conditional mutual information as well as the definition of the conditional quantum entropy when the system on which we condition is classical. We calculate

$$\begin{aligned} I(R:Z|M)_{\sigma_{RMZ}} &= S(R|M)_{\sigma_{RMZ}} - S(R|MZ)_{\sigma_{RMZ}} && (60)\\ &= S(R|M)_{\sigma_{RM}} - \int_{\mathbb{R}^{2n}} S(R|M)_{\sigma_{RM|Z=z}}\, f_{Z,t}(z)\, \frac{\mathrm{d}^{2n}z}{(2\pi)^n} && (61)\\ &= S(R|M)_{\sigma_{RM}} - \int_{\mathbb{R}^{2n}} S(R|M)_{\rho_{RM}}\, f_{Z,t}(z)\, \frac{\mathrm{d}^{2n}z}{(2\pi)^n} && (62)\\ &= S(R|M)_{\sigma_{RM}} - S(R|M)_{\rho_{RM}}\,. && (63) \end{aligned}$$

The second-to-last step follows because the entropy is invariant under displacements of the classical system. ∎
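For trivial $M$ and a one-dimensional Gaussian, Theorem 1 reduces to the classical identity $I(X+Z:Z) = S(X+Z) - S(X)$; a minimal numerical sketch (illustrative only):

```python
import numpy as np

sigma2, t = 2.0, 0.7   # variance of X and heat-flow time (variance of the noise Z)

def gaussian_entropy(v):
    # differential entropy of a one-dimensional Gaussian of variance v
    return 0.5 * np.log(2 * np.pi * np.e * v)

# entropy gain of X under the heat flow for time t ...
entropy_gain = gaussian_entropy(sigma2 + t) - gaussian_entropy(sigma2)
# ... equals the mutual information I(X+Z : Z) of the additive Gaussian noise
mutual_info = 0.5 * np.log(1 + t / sigma2)
assert abs(entropy_gain - mutual_info) < 1e-12
```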

We now show that the integral conditional Fisher information defined above is, as a function of $t$, continuous, increasing, and concave. The proof strategy is similar to the proof of regularity for the quantum integral conditional Fisher information given in [11].

###### Lemma 3 (Continuity of the integral conditional Fisher information).

Let $\rho_{RM}$ be a state such that the function $\xi \mapsto \rho_{M|R=\xi}$ is continuous with respect to the trace norm and the marginal $\rho_R$ has finite average energy. Then, the function $t \mapsto \Delta_{R|M}(\rho_{RM})(t)$ is continuous for any $t \ge 0$.

###### Proof.

From the de Bruijn identity of Theorem 1, it is sufficient to prove that

$$\lim_{t\to0} S(R|M)_{\rho_{RM}(t)} = S(R|M)_{\rho_{RM}}\,, \tag{64}$$

where we have defined for any $t \ge 0$,

$$\rho_{RM}(t) = (\mathcal{N}_{\mathrm{cl}}(t)\otimes\mathbb{1}_M)(\rho_{RM})\,. \tag{65}$$

From the data-processing inequality, for any $t \ge 0$,

$$S(R|M)_{\rho_{RM}(t)} \ge S(R|M)_{\rho_{RM}}\,. \tag{66}$$

It is then sufficient to prove that

$$\limsup_{t\to0}\, S(R|M)_{\rho_{RM}(t)} \le S(R|M)_{\rho_{RM}}\,. \tag{67}$$

We have from the chain rule

$$S(R|M)_{\rho_{RM}(t)} = S(M|R)_{\rho_{RM}(t)} + S(\rho_R(t)) - S(\rho_M)\,. \tag{68}$$

From [24, Remark 9.3.8] and [25, 26, 27], the Shannon differential entropy is upper semicontinuous on the set of probability measures on $\mathbb{R}^{2n}$ which are absolutely continuous with respect to the Lebesgue measure and have finite average energy, and

$$\limsup_{t\to0}\, S(\rho_R(t)) \le S(\rho_R)\,. \tag{69}$$

On the other hand, we have

$$S(\rho_M) - S(M|R)_{\rho_{RM}(t)} = \int_{\mathbb{R}^{2n}} D\big(\rho_{M|R=\xi}(t)\,\big\|\,\rho_M\big)\, \rho_R(t)(\xi)\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n}\,. \tag{70}$$

Since the function $\xi \mapsto \rho_{M|R=\xi}$ is continuous with respect to the trace norm, we have for any $\xi \in \mathbb{R}^{2n}$

$$\lim_{t\to0} \big\|\rho_{M|R=\xi}(t) - \rho_{M|R=\xi}\big\|_1 = 0\,. \tag{71}$$

Because the relative entropy is positive, we get from Fatou’s lemma

$$\int_{\mathbb{R}^{2n}} \liminf_{t\to0} D\big(\rho_{M|R=\xi}(t)\,\big\|\,\rho_M\big)\, \rho_R(t)(\xi)\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n} \le \liminf_{t\to0} \int_{\mathbb{R}^{2n}} D\big(\rho_{M|R=\xi}(t)\,\big\|\,\rho_M\big)\, \rho_R(t)(\xi)\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n}\,. \tag{72}$$

Since the relative entropy is lower semicontinuous, we have for any $\xi \in \mathbb{R}^{2n}$

$$D\big(\rho_{M|R=\xi}\,\big\|\,\rho_M\big) \le \liminf_{t\to0} D\big(\rho_{M|R=\xi}(t)\,\big\|\,\rho_M\big)\,. \tag{73}$$

Combining (72), (73), and (70), we get

$$\limsup_{t\to0}\, S(M|R)_{\rho_{RM}(t)} \le S(M|R)_{\rho_{RM}}\,. \tag{74}$$

Together with the chain rule (68) and the upper semicontinuity (69), this implies (67). ∎

###### Lemma 4.

For any $s, t \ge 0$,

$$\Delta_{R|M}\big((\mathcal{N}_{\mathrm{cl}}(s)\otimes\mathbb{1}_M)(\rho_{RM})\big)(t) = I(R:Z|M)_{(\mathcal{N}_{\mathrm{cl}}(s)\otimes\mathbb{1}_{MZ})(\sigma_{RMZ}(t))}\,. \tag{75}$$

###### Proof.

Follows from the semigroup structure of $\mathcal{N}_{\mathrm{cl}}$. ∎

###### Lemma 5.

For any $s, t \ge 0$,

$$\Delta_{R|M}\big((\mathcal{N}_{\mathrm{cl}}(s)\otimes\mathbb{1}_M)(\rho_{RM})\big)(t) \le \Delta_{R|M}(\rho_{RM})(t)\,. \tag{76}$$

###### Proof.

Follows from the data-processing inequality for the quantum conditional mutual information. ∎

###### Lemma 6.

For any $s, t \ge 0$,

$$\begin{aligned} \Delta_{R|M}(\rho_{RM})(s+t) &= \Delta_{R|M}(\rho_{RM})(s) + \Delta_{R|M}\big((\mathcal{N}_{\mathrm{cl}}(s)\otimes\mathbb{1}_M)(\rho_{RM})\big)(t) && (77)\\ &\ge \Delta_{R|M}(\rho_{RM})(s)\,. && (78) \end{aligned}$$

###### Proof.

Follows from Theorem 1. ∎

###### Theorem 2 (Regularity of the integral conditional Fisher information).

For any quantum state $\rho_{RM}$ on $RM$ such that the conditions of Lemma 3 are fulfilled, the integral conditional Fisher information $\Delta_{R|M}(\rho_{RM})(t)$ is a continuous, increasing, and concave function of $t$.

###### Proof.

Continuity was shown in Lemma 3, and the fact that the conditional Fisher information is increasing follows from Lemma 6.

For concavity, by continuity it is enough to prove midpoint concavity, i.e., that for $0 \le s \le t$ we have

$$\Delta_{R|M}(\rho_{RM})\!\left(\frac{s+t}{2}\right) \ge \frac{\Delta_{R|M}(\rho_{RM})(s) + \Delta_{R|M}(\rho_{RM})(t)}{2}\,. \tag{79}$$

This can be written as

$$\Delta_{R|M}(\rho_{RM})\!\left(\frac{s+t}{2}\right) - \Delta_{R|M}(\rho_{RM})(s) \ge \Delta_{R|M}(\rho_{RM})(t) - \Delta_{R|M}(\rho_{RM})\!\left(\frac{s+t}{2}\right)\,. \tag{80}$$

By Lemma 6, this can be restated as

$$\Delta_{R|M}(\rho_{RM}(s))\!\left(\frac{t-s}{2}\right) \ge \Delta_{R|M}\!\left(\Big(\mathcal{N}_{\mathrm{cl}}\Big(\frac{t-s}{2}\Big)\otimes\mathbb{1}_M\Big)(\rho_{RM}(s))\right)\!\left(\frac{t-s}{2}\right)\,, \tag{81}$$

where $\rho_{RM}(s) = (\mathcal{N}_{\mathrm{cl}}(s)\otimes\mathbb{1}_M)(\rho_{RM})$. But this holds because of Lemma 5. ∎

## 4 Quantum conditional Fisher information

###### Definition 8.

For a quantum state $\rho_{RM}$ on $RM$ such that the conditions of Lemma 3 are fulfilled, we define the Fisher information of $R$ conditioned on $M$ as

$$J(R|M)_{\rho_{RM}} := \lim_{t\to0}\frac{\Delta_{R|M}(\rho_{RM})(t)}{t}\,. \tag{82}$$

This limit always exists because the function $t \mapsto \Delta_{R|M}(\rho_{RM})(t)$ is continuous and concave by Theorem 2.

###### Proposition 1 (Quantum conditional de Bruijn).

Assume the hypotheses of Theorem 2. Then we have

$$J(R|M)_{\rho_{RM}} = \frac{\mathrm{d}}{\mathrm{d}t}\, S(R|M)_{(\mathcal{N}_{\mathrm{cl}}(t)\otimes\mathbb{1}_M)(\rho_{RM})}\bigg|_{t=0}\,. \tag{83}$$

###### Proof.

Follows from the integral conditional de Bruijn identity given in Theorem 1. ∎

### 4.1 Stam inequality

###### Theorem 3.

Let $A$ be an $n$-mode quantum system, $R$ a classical system, and $M$ a generic quantum system. Let $\rho_{ARM}$ be a quantum state on $ARM$ such that its marginal on $R$ has a probability density function $\rho_R$. Let $\rho_{ARM}$ further fulfill

$$\mathrm{tr}[H\rho_A] < \infty\,, \qquad E(\rho_R) < \infty\,, \qquad S(\rho_M) < \infty\,. \tag{84}$$

Let us suppose that $A$ and $R$ are conditionally independent given $M$,

$$I(A:R|M)_{\rho_{ARM}} = 0\,. \tag{85}$$

Then the linear conditional Stam inequality holds,

$$J(C|M)_{\rho_{CM}} \le \lambda^2\, J(A|M)_{\rho_{AM}} + (1-\lambda)^2\, J(R|M)_{\rho_{RM}} \qquad \forall\,\lambda \in [0,1]\,, \tag{86}$$

where

$$\rho_{CM} := (\mathcal{E}\otimes\mathbb{1}_M)(\rho_{ARM}) = \int_{\mathbb{R}^{2n}} D(\xi)\, \rho_{AM|R=\xi}\, D(\xi)^\dagger\, \rho_R(\xi)\, \frac{\mathrm{d}^{2n}\xi}{(2\pi)^n}\,. \tag{87}$$

Choosing the optimal $\lambda = \frac{J(R|M)_{\rho_{RM}}}{J(A|M)_{\rho_{AM}} + J(R|M)_{\rho_{RM}}}$, we obtain the conditional Stam inequality

$$\frac{1}{J(C|M)_{\rho_{CM}}} \ge \frac{1}{J(A|M)_{\rho_{AM}}} + \frac{1}{J(R|M)_{\rho_{RM}}}\,. \tag{88}$$
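In the classical one-dimensional Gaussian case, where the Fisher information in the paper's convention (the derivative of the entropy along the heat flow) can be evaluated in closed form, the Stam inequality (88) holds with equality; a numerical sketch (illustrative only):

```python
import numpy as np

def entropy_heat(var, t):
    # entropy of a 1-D Gaussian of variance var after heat flow for time t
    return 0.5 * np.log(2 * np.pi * np.e * (var + t))

def fisher(var, eps=1e-7):
    # Fisher information as the derivative of the entropy along the heat flow at t = 0
    return (entropy_heat(var, eps) - entropy_heat(var, 0.0)) / eps

v1, v2 = 1.3, 2.4
J_X, J_Y = fisher(v1), fisher(v2)
J_sum = fisher(v1 + v2)   # the convolution X + Y is Gaussian with variance v1 + v2

# Stam inequality 1/J(X+Y) >= 1/J(X) + 1/J(Y), with equality for Gaussians
assert abs(1 / J_sum - (1 / J_X + 1 / J_Y)) < 1e-4
```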
###### Proof.

We prove the following:

$$\Delta_{C|M}(\rho_{CM})(t) \le \Delta_{A|M}(\rho_{AM})(\lambda^2 t) + \Delta_{R|M}(\rho_{RM})((1-\lambda)^2 t)\,. \tag{89}$$

Because $\Delta$ is increasing and concave, the Stam inequality follows by taking the derivative at $t = 0$.

By definition, we have for any $t \ge 0$ that

$$\Delta_{C|M}(\rho_{CM})(t) = I(C:Z|M)_{\sigma_{CMZ}(t)}\,, \tag{90}$$

for an $\mathbb{R}^{2n}$-valued Gaussian random variable $Z$ with probability density function

$$f_{Z,t}(z) = \frac{e^{-\frac{\|z\|^2}{2t}}}{t^n}\,, \qquad z \in \mathbb{R}^{2n}\,, \tag{91}$$

where $\sigma_{CMZ}(t)$ has $f_{Z,t}$ as marginal on $Z$ and for any $z \in \mathbb{R}^{2n}$ it fulfills

$$\sigma_{CM|Z=z}(t) = D_C(z)\, \rho_{CM}\, D_C(z)^\dagger$$