# A New Inequality Related to Proofs of Strong Converse Theorems in Multiterminal Information Theory

In this paper we provide a new inequality useful for proofs of strong converse theorems in multiterminal information theory. We apply this inequality to the recent work of Tyagi and Watanabe on the strong converse theorem for the Wyner-Ziv source coding problem to obtain a new strong converse outer bound. This outer bound deviates from the Wyner-Ziv rate distortion region by the order O(1/√n) in the length n of source outputs.


## I Definitions of Functions

Let $\Lambda$ be an index set. For each $i \in \Lambda$, let $\mathcal{X}_i$ be a finite set. For each $i \in \Lambda$, let $X_i$ be a random variable taking values in $\mathcal{X}_i$. For $S \subseteq \Lambda$, we write $X_S := (X_i)_{i \in S}$. In particular, for $S = \Lambda$, we write $\underline{X} := X_\Lambda$. Let $\mathcal{P}(\underline{\mathcal{X}})$ be the set of all probability distributions on $\underline{\mathcal{X}} := \prod_{i\in\Lambda}\mathcal{X}_i$. For $\underline{X}$, we write its distribution as $p_{\underline{X}} \in \mathcal{P}(\underline{\mathcal{X}})$. For $p_{\underline{X}}$, we often omit its subscript to simply write $p$. For $S \subseteq \Lambda$, let $p_{X_S}$ denote the probability distribution of $X_S$, which is the marginal distribution of $p$. We adopt similar notations for other variables or sets. For $p \in \mathcal{P}(\underline{\mathcal{X}})$, we consider a function $\omega_p(\underline{x})$ having the following form:

$$\omega_p(\underline{x}) = \sum_{l=1}^{L_0}\xi_l\phi_l(\underline{x}) \tag{1}$$
$$\qquad{}+ \sum_{l=1}^{L_1}\mu_l\log p_{X_{S_l}}(x_{S_l}) - \sum_{l=1}^{L_2}\eta_l\log p_{X_{T_l}}(x_{T_l}). \tag{2}$$

In (1), $\phi_l$, $l=1,2,\ldots,L_0$, are given nonnegative functions on $\underline{\mathcal{X}}$ and $\xi_l$, $l=1,2,\ldots,L_0$, are given real valued coefficients. In (2), the quantities $\mu_l$, $l=1,\ldots,L_1$, and $\eta_l$, $l=1,\ldots,L_2$, are given positive coefficients. Furthermore, $S_l$, $l=1,\ldots,L_1$, and $T_l$, $l=1,\ldots,L_2$, are given subsets of $\Lambda$. We define

$$\Psi(p) := {\rm E}_p\bigl[\omega_p(\underline{X})\bigr]. \tag{3}$$
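For concreteness, the quantities in (1)–(3) can be evaluated numerically on a toy alphabet. The following sketch assumes a hypothetical two-component alphabet with $L_0 = L_1 = L_2 = 1$, unit coefficients, a hypothetical function $\phi_1$, and $S_1 = \{1\}$, $T_1 = \{1,2\}$; none of these specific choices come from the paper.

```python
import math
import itertools

# Toy instance of (1)-(3): alphabet X_ = {0,1} x {0,1}, with
# L0 = L1 = L2 = 1, xi_1 = mu_1 = eta_1 = 1, a hypothetical
# phi_1(x) = x1 XOR x2, S_1 = {1}, and T_1 = {1, 2}.
X = list(itertools.product([0, 1], repeat=2))

def omega(p, x):
    """omega_p(x) = phi_1(x) + log p_{X_1}(x_1) - log p_{X_1 X_2}(x_1, x_2)."""
    phi1 = float(x[0] ^ x[1])                        # hypothetical phi_1
    p_x1 = sum(p[y] for y in X if y[0] == x[0])      # marginal on S_1 = {1}
    return phi1 + math.log(p_x1) - math.log(p[x])    # T_1 = {1, 2}

def Psi(p):
    """Psi(p) = E_p[omega_p(X)], cf. (3)."""
    return sum(p[x] * omega(p, x) for x in X)

p_unif = {x: 0.25 for x in X}
# For the uniform p: omega = phi_1 + log 2, so Psi(p) = 1/2 + log 2.
```

Note that here $\omega_p(\underline{x}) = \phi_1(\underline{x}) - \log p(x_2|x_1) \ge 0$, so this toy choice also satisfies the nonnegativity required by Assumption 1 below.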

In this paper we assume that the function $\omega_p$ satisfies the following property.

###### Assumption 1

• For any $p \in \mathcal{P}(\underline{\mathcal{X}})$, $\omega_p(\underline{x})$ is nonnegative and bounded, i.e., there exists a positive $K$ such that $0 \le \omega_p(\underline{x}) \le K$ for any $\underline{x} \in \underline{\mathcal{X}}$.

• $\omega_p(\underline{x})$ is a continuous function of $p$.

Let $\widetilde{\mathcal{P}}$ be a given subset of $\mathcal{P}(\underline{\mathcal{X}})$. The following two optimization problems

$$\widetilde{\Psi}_{\max} := \max_{p\in\widetilde{\mathcal{P}}}\Psi(p) \quad\text{or}\quad \widetilde{\Psi}_{\min} := \min_{p\in\widetilde{\mathcal{P}}}\Psi(p) \tag{4}$$

frequently appear in the analysis of capacity or rate regions in multiterminal information theory. In this paper we give one example of $\widetilde{\mathcal{P}}$ and $\widetilde{\Psi}_{\min}$, which is related to the problem of source coding with side information at the decoder posed and investigated by Wyner and Ziv [6]. This example is shown below.

###### Example 1

Let $U$, $X$, $Y$, and $Z$ be four random variables, respectively taking values in the finite sets $\mathcal{U}$, $\mathcal{X}$, $\mathcal{Y}$, and $\mathcal{Z}$. We consider the case where $\underline{X} = (U,X,Y,Z)$. Let $p_{XY}$ be a probability distribution of $(X,Y)$. For $p = p_{UXYZ}$, we define

$$\omega_p(u,x,y,z) := \xi d(x,z) + \bar{\xi}\log\frac{p_{X|UY}(x|u,y)}{p_{X|Y}(x|y)}$$
$$= \xi d(x,z) + \bar{\xi}\bigl[\log p_{UXY}(u,x,y) + \log p_Y(y)\bigr] - \bar{\xi}\bigl[\log p_{UY}(u,y) + \log p_{XY}(x,y)\bigr], \tag{5}$$

where $\xi \in [0,1]$, $\bar{\xi} := 1-\xi$, and $d(x,z)$ is a given distortion measure on $\mathcal{X}\times\mathcal{Z}$. In this example we have the following:

$$\left.\begin{array}{l} L_1 = 2,\ \mu_1 = \mu_2 = \bar{\xi},\ S_1 = \{U,X,Y\},\ S_2 = \{Y\},\\ L_2 = 2,\ \eta_1 = \eta_2 = \bar{\xi},\ T_1 = \{U,Y\},\ T_2 = \{X,Y\}.\end{array}\right\} \tag{6}$$

Let

$$\widetilde{\mathcal{P}} = \widetilde{\mathcal{P}}(p_{XY}) = \bigl\{q = q_{UXYZ} : |\mathcal{U}| \le |\mathcal{X}|,\ q_{XY} = p_{XY},\ U \leftrightarrow X \leftrightarrow Y,\ X \leftrightarrow (U,Y) \leftrightarrow Z\bigr\}.$$

In this example we denote the quantity $\widetilde{\Psi}_{\min}$ by $R^{(\xi)}_{\rm WZ}(p_{XY})$, which has the following form:

$$R^{(\xi)}_{\rm WZ}(p_{XY}) = \min_{q\in\widetilde{\mathcal{P}}}\Bigl[\bar{\xi}\,I_q(X;U|Y) + \xi\,{\rm E}_q d(X,Z)\Bigr].$$

The quantity $R^{(\xi)}_{\rm WZ}(p_{XY})$ yields the following hyperplane expression of the Wyner-Ziv rate distortion region $\mathcal{R}_{\rm WZ}(p_{XY})$:

$$\mathcal{R}_{\rm WZ}(p_{XY}) = \bigcap_{\xi\in[0,1]}\Bigl\{(R,D) : \bar{\xi}R + \xi D \ge R^{(\xi)}_{\rm WZ}(p_{XY})\Bigr\}.$$
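As an illustration of the objective $\bar{\xi}I_q(X;U|Y) + \xi{\rm E}_q d(X,Z)$, the following sketch evaluates it at one feasible $q$ on binary alphabets. All concrete choices (Hamming distortion, a uniform source with $X$ independent of $Y$, a binary-symmetric test channel, $Z = U$) are hypothetical illustrations, not taken from the paper; building $q$ as $p_{XY}\,q_{U|X}\,q_{Z|UY}$ enforces the two Markov chains by construction.

```python
import math
import itertools

# One feasible point of the minimization defining R^(xi)_WZ(p_XY):
# binary alphabets, Hamming distortion d(x,z) = 1{x != z}, uniform
# p_XY with X independent of Y, test channel q_{U|X} = BSC(0.1),
# and reproduction Z = U (all hypothetical choices).
B = [0, 1]
p_XY = {(x, y): 0.25 for x in B for y in B}
q_U_X = {(u, x): 0.9 if u == x else 0.1 for u in B for x in B}
q_Z_UY = {(z, u, y): 1.0 if z == u else 0.0 for z in B for u in B for y in B}

def q(u, x, y, z):
    # q = p_XY * q_{U|X} * q_{Z|UY} satisfies U <-> X <-> Y and
    # X <-> (U,Y) <-> Z by construction.
    return p_XY[(x, y)] * q_U_X[(u, x)] * q_Z_UY[(z, u, y)]

def I_X_U_given_Y():
    """I_q(X;U|Y), computed directly from the joint distribution q."""
    total = 0.0
    for u, x, y in itertools.product(B, repeat=3):
        q_uxy = sum(q(u, x, y, z) for z in B)
        if q_uxy == 0.0:
            continue
        q_y = sum(q(a, b, y, c) for a, b, c in itertools.product(B, repeat=3))
        q_uy = sum(q(u, b, y, c) for b, c in itertools.product(B, repeat=2))
        q_xy = sum(q(a, x, y, c) for a, c in itertools.product(B, repeat=2))
        total += q_uxy * math.log(q_uxy * q_y / (q_uy * q_xy))
    return total

def E_distortion():
    """E_q d(X,Z) under Hamming distortion."""
    return sum(q(u, x, y, z)
               for u, x, y, z in itertools.product(B, repeat=4) if x != z)

xi = 0.5
objective = (1 - xi) * I_X_U_given_Y() + xi * E_distortion()
```

Since $Y$ is independent of $(X,U)$ here, $I_q(X;U|Y) = \log 2 - h(0.1)$ (in nats) and ${\rm E}_q d(X,Z) = 0.1$, which the assertions below confirm.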

In the above example, because of the two Markov chains $U \leftrightarrow X \leftrightarrow Y$ and $X \leftrightarrow (U,Y) \leftrightarrow Z$, the computation of $R^{(\xi)}_{\rm WZ}(p_{XY})$ becomes a non-convex optimization problem, which is very hard to solve in its present form. As we can see from this example, the computations of $\widetilde{\Psi}_{\max}$ and $\widetilde{\Psi}_{\min}$ are in general highly challenging. To solve those problems, alternative optimization problems having one parameter $\alpha$ on some relaxed condition of $\widetilde{\mathcal{P}}$ are introduced. Let $\varphi$ be some suitable onto mapping and set $p^{(q)} := \varphi(q)$. On the above $\varphi$, we assume the following:

###### Assumption 2

• Let $\mathcal{P}^*$ denote the feasible region of those relaxed optimization problems. On the feasible region $\mathcal{P}^*$, we assume that for any $q \in \mathcal{P}^*$, the support set of $q$ includes the support set of $p^{(q)} = \varphi(q)$.

• For any $q \in \mathcal{P}^*$, for $p = p^{(q)}$, and for any $\underline{x} \in \underline{\mathcal{X}}$, we have

$$\omega_q(\underline{x}) - \omega_p(\underline{x}) = \sum_{l=1}^{L_3}\kappa_l\log\frac{p_{X_{A_{2l}}|X_{A_{2l-1}}}(x_{A_{2l}}|x_{A_{2l-1}})}{q_{X_{A_{2l}}|X_{A_{2l-1}}}(x_{A_{2l}}|x_{A_{2l-1}})} - \sum_{l=1}^{L_4}\nu_l\log\frac{p_{X_{B_{2l}}|X_{B_{2l-1}}}(x_{B_{2l}}|x_{B_{2l-1}})}{q_{X_{B_{2l}}|X_{B_{2l-1}}}(x_{B_{2l}}|x_{B_{2l-1}})},$$

where $\kappa_l$, $l=1,\ldots,L_3$, and $\nu_l$, $l=1,\ldots,L_4$, are positive constants and the quantities $A_{2l-1}, A_{2l}$, $l=1,\ldots,L_3$, and $B_{2l-1}, B_{2l}$, $l=1,\ldots,L_4$, are subsets of $\Lambda$ satisfying the following:

$$A_{2l-1}\cap A_{2l}=\emptyset \ \text{ for } l=1,2,\cdots,L_3,\qquad B_{2l-1}\cap B_{2l}=\emptyset \ \text{ for } l=1,2,\cdots,L_4.$$

For $q \in \mathcal{P}^*$ and $\alpha > 0$, define

$$\Psi^{(\alpha)}(q) := {\rm E}_q\left[\omega_q(\underline{X}) - \alpha\log\frac{q(\underline{X})}{p^{(q)}(\underline{X})}\right] = {\rm E}_q\bigl[\omega_q(\underline{X})\bigr] - \alpha D\bigl(q\,\|\,p^{(q)}\bigr).$$

We consider the following two optimization problems:

$$\Psi^{(\alpha)}_{\max} := \max_{q\in\mathcal{P}^*}\Psi^{(\alpha)}(q) \quad\text{or}\quad \Psi^{(-\alpha)}_{\min} := \min_{q\in\mathcal{P}^*}\Psi^{(-\alpha)}(q). \tag{7}$$

Those optimization problems appear in recent results that the author [1]-[4] and Tyagi and Watanabe [5] obtained on the proofs of strong converse theorems for multiterminal source or channel networks.

###### Example 2

We consider the case of Example 1. Define $\varphi$ by $\varphi(q) := q_{U|X}\, p_{XY}\, q_{Z|UY}$. The feasible region $\mathcal{P}^*$ is given by

$$\mathcal{P}^* = \bigl\{q = q_{UXYZ} : |\mathcal{U}| \le |\mathcal{X}||\mathcal{Y}||\mathcal{Z}|,\ {\rm Supp}(q) \supseteq {\rm Supp}(\varphi(q))\bigr\}.$$

For $q \in \mathcal{P}^*$ and for $\tilde{q} = \varphi(q)$, we have

$$\omega_q(u,x,y,z) - \omega_{\tilde{q}}(u,x,y,z) = \bar{\xi}\log\frac{\tilde{q}_{U|Y}(u|y)}{q_{U|Y}(u|y)} - \bar{\xi}\log\frac{\tilde{q}_{U|XY}(u|x,y)}{q_{U|XY}(u|x,y)}. \tag{8}$$

From (8), we have that $\kappa_{\rm sum} = \bar{\xi}$ and $\nu_{\rm sum} = \bar{\xi}$. We denote the quantity $\Psi^{(-\alpha)}_{\min}$ by $R^{(\xi,\alpha)}_{\rm WZ}(p_{XY})$, which has the following form:

$$R^{(\xi,\alpha)}_{\rm WZ}(p_{XY}) = \min_{q\in\mathcal{P}^*}\Bigl[\bar{\xi}I_q(X;U|Y) + \xi{\rm E}_q d(X,Z) + \alpha D(q\|\varphi(q))\Bigr]$$
$$= \min_{q\in\mathcal{P}^*}\Bigl[\bar{\xi}I_q(X;U|Y) + \xi{\rm E}_q d(X,Z) + \alpha\bigl\{I_q(Y;U|X) + D(q_{XY}\|p_{XY}) + I_q(X;Z|U,Y)\bigr\}\Bigr].$$
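The decomposition $D(q\|\varphi(q)) = I_q(Y;U|X) + D(q_{XY}\|p_{XY}) + I_q(X;Z|U,Y)$ used above can be checked numerically. The sketch below takes $\varphi(q) = q_{U|X}\,p_{XY}\,q_{Z|UY}$, the choice of $\varphi$ consistent with this expansion, and compares both sides for a random full-support $q$ on binary alphabets (the particular random instance is, of course, only an illustration).

```python
import math
import random
import itertools

B = [0, 1]
IDX = list(itertools.product(B, repeat=4))   # index order (u, x, y, z)
random.seed(0)

w = {i: random.random() + 0.1 for i in IDX}
s = sum(w.values())
q = {i: w[i] / s for i in IDX}               # random full-support q_{UXYZ}
v = {(x, y): random.random() + 0.1 for x in B for y in B}
sv = sum(v.values())
p_XY = {k: v[k] / sv for k in v}             # random full-support p_{XY}

def marg(keep):
    """Marginal of q on the coordinate positions in `keep` (0=U,1=X,2=Y,3=Z)."""
    out = {}
    for i, pr in q.items():
        k = tuple(i[j] for j in keep)
        out[k] = out.get(k, 0.0) + pr
    return out

q_UX, q_X, q_XY = marg((0, 1)), marg((1,)), marg((1, 2))
q_UXY, q_UY, q_UYZ = marg((0, 1, 2)), marg((0, 2)), marg((0, 2, 3))

def phi(u, x, y, z):
    """phi(q) = q_{U|X} * p_{XY} * q_{Z|UY}."""
    return (q_UX[(u, x)] / q_X[(x,)]) * p_XY[(x, y)] * (q_UYZ[(u, y, z)] / q_UY[(u, y)])

# Left-hand side: D(q || phi(q)).
kl = sum(pr * math.log(pr / phi(*i)) for i, pr in q.items())

# Right-hand side, term by term.
I_YU_X = sum(pr * math.log(q_UXY[(u, x, y)] * q_X[(x,)] / (q_UX[(u, x)] * q_XY[(x, y)]))
             for (u, x, y, z), pr in q.items())              # I_q(Y;U|X)
D_qp = sum(pr * math.log(pr / p_XY[k]) for k, pr in q_XY.items())  # D(q_XY||p_XY)
I_XZ_UY = sum(pr * math.log(pr * q_UY[(u, y)] / (q_UXY[(u, x, y)] * q_UYZ[(u, y, z)]))
              for (u, x, y, z), pr in q.items())             # I_q(X;Z|U,Y)
```

The identity is exact, so the two sides agree to floating-point precision for any full-support $q$.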

According to Tyagi and Watanabe [5], a single letter characterization of the rate distortion region using the function $R^{(\xi,\alpha)}_{\rm WZ}(p_{XY})$ plays an important role in the proof of the strong converse theorem for the Wyner-Ziv source coding problem.

## II Main Results

Our aim in this paper is to evaluate the differences between $\Psi^{(\alpha)}_{\max}$ and $\widetilde{\Psi}_{\max}$ and between $\widetilde{\Psi}_{\min}$ and $\Psi^{(-\alpha)}_{\min}$. It is obvious that we have

$$\widetilde{\Psi}_{\max} \le \Psi^{(\alpha)}_{\max},\qquad \widetilde{\Psi}_{\min} \ge \Psi^{(-\alpha)}_{\min} \tag{9}$$

for any $\alpha > 0$. In fact, restricting the feasible region $\mathcal{P}^*$ in the definitions of $\Psi^{(\alpha)}_{\max}$ or $\Psi^{(-\alpha)}_{\min}$ to $\widetilde{\mathcal{P}}$, we obtain the bounds in (9). We first describe explicit upper bounds on $\Psi^{(\alpha)}_{\max} - \widetilde{\Psi}_{\max}$ and $\widetilde{\Psi}_{\min} - \Psi^{(-\alpha)}_{\min}$ derived by standard analytical arguments. This result is given by the following proposition.

###### Proposition 1

For any positive $\alpha$, we have

$$0 \le \Psi^{(\alpha)}_{\max} - \widetilde{\Psi}_{\max} \le K'\sqrt{\frac{2K}{\alpha}}\log\left(\sqrt{\frac{\alpha}{2K}}\,e|\underline{\mathcal{X}}|\right), \tag{10}$$
$$0 \le \widetilde{\Psi}_{\min} - \Psi^{(-\alpha)}_{\min} \le K'\sqrt{\frac{2K}{\alpha}}\log\left(\sqrt{\frac{\alpha}{2K}}\,e|\underline{\mathcal{X}}|\right), \tag{11}$$

where we set

$$\phi_{\max} := \max_{1\le l\le L_0}\max_{\underline{x}\in\underline{\mathcal{X}}}\phi_l(\underline{x}),\qquad K' := \max\left\{\phi_{\max}\sum_{l=1}^{L_0}|\xi_l|,\ \sum_{l=1}^{L_1}|\mu_l|,\ \sum_{l=1}^{L_2}|\eta_l|\right\}.$$

Proof of this proposition is given in Appendix A. We set

$$\mu_{\rm sum} := \sum_{l=1}^{L_1}\mu_l,\quad \eta_{\rm sum} := \sum_{l=1}^{L_2}\eta_l,\quad \kappa_{\rm sum} := \sum_{l=1}^{L_3}\kappa_l,\quad \nu_{\rm sum} := \sum_{l=1}^{L_4}\nu_l.$$

For $p \in \mathcal{P}(\underline{\mathcal{X}})$ and a real parameter $\lambda$, define

$$\widetilde{\Omega}^{(\lambda)}(p) := \log {\rm E}_p\Bigl[\exp\bigl\{\lambda\omega_p(\underline{X})\bigr\}\Bigr].$$

Furthermore, define

$$\widetilde{\Omega}^{(\lambda)}_{\max} := \max_{p\in\widetilde{\mathcal{P}}}\widetilde{\Omega}^{(\lambda)}(p). \tag{12}$$

For each $\lambda$, define

$$\rho(\lambda) := \max_{\substack{p\in\widetilde{\mathcal{P}}:\\ \widetilde{\Omega}^{(\lambda)}(p)=\widetilde{\Omega}^{(\lambda)}_{\max}}} {\rm Var}_p\bigl[\omega_p(\underline{X})\bigr].$$

Furthermore, set

$$\rho^{(+)} := \max_{\lambda\in\left[0,\frac{1}{2\eta_{\rm sum}}\right]}\rho(\lambda),\qquad \rho^{(-)} := \max_{\lambda\in\left[-\frac{1}{2\mu_{\rm sum}},0\right]}\rho(\lambda).$$

Note that the quantity $\rho^{(+)}$ depends on $\eta_{\rm sum}$ and the quantity $\rho^{(-)}$ depends on $\mu_{\rm sum}$. Our main result is given in the following proposition.

###### Proposition 2

For any $\alpha$ satisfying $\alpha \ge \nu_{\rm sum} + 2\eta_{\rm sum}$, we have

$$0 \le \Psi^{(\alpha)}_{\max} - \widetilde{\Psi}_{\max} \le \frac{1}{\alpha-\nu_{\rm sum}}\left[\frac{\rho^{(+)}}{2} + \frac{c^{(+)}}{\alpha-\nu_{\rm sum}}\right], \tag{13}$$

where $c^{(+)}$ is a suitable positive constant. Furthermore, for any $\alpha$ satisfying $\alpha \ge \kappa_{\rm sum} + 2\mu_{\rm sum}$, we have

$$0 \le \widetilde{\Psi}_{\min} - \Psi^{(-\alpha)}_{\min} \le \frac{1}{\alpha-\kappa_{\rm sum}}\left[\frac{\rho^{(-)}}{2} + \frac{c^{(-)}}{\alpha-\kappa_{\rm sum}}\right], \tag{14}$$

where $c^{(-)}$ is a suitable positive constant.

Proof of this proposition will be given in the next section. We can see from the above proposition that the two bounds (13) and (14) in Proposition 2, respectively, provide significant improvements over the bounds (10) and (11) in Proposition 1.

We next consider an application of Proposition 2 to the case discussed in Examples 1 and 2. As stated in Examples 1 and 2, $\kappa_{\rm sum} = \bar{\xi}$ and $\nu_{\rm sum} = \bar{\xi}$. Set

$$\Delta^{(\xi,\alpha)}_{\rm WZ}(p_{XY}) := R^{(\xi)}_{\rm WZ}(p_{XY}) - R^{(\xi,\alpha)}_{\rm WZ}(p_{XY}) \ge 0,\qquad \Delta^{(\alpha)}_{\rm WZ}(p_{XY}) := \max_{\xi\in[0,1]}\Delta^{(\xi,\alpha)}_{\rm WZ}(p_{XY}).$$

Here we note that $\rho^{(-)}$ and $c^{(-)}$ depend on the value of $\xi$. Hence we write $\rho^{(-)} = \rho^{(-)}(\xi)$ and $c^{(-)} = c^{(-)}(\xi)$ when we wish to express that those are functions of $\xi$. Applying Proposition 2 to the example of the Wyner-Ziv source coding problem, we have the following result.

###### Proposition 3

For any $\xi \in [0,1]$ and any $\alpha$ satisfying $\alpha \ge \kappa_{\rm sum} + 2\mu_{\rm sum} = 5\bar{\xi}$, we have

$$0 \le \Delta^{(\xi,\alpha)}_{\rm WZ}(p_{XY}) \le \frac{1}{\alpha-\bar{\xi}}\left[\frac{\rho^{(-)}(\xi)}{2} + \frac{c^{(-)}(\xi)}{\alpha-\bar{\xi}}\right].$$

Specifically, for any $\alpha$ satisfying $\alpha \ge 5$, we have

$$0 \le \Delta^{(\alpha)}_{\rm WZ}(p_{XY}) \le \frac{1}{\alpha-1}\left[\frac{\rho^{(-)}_{\max}}{2} + \frac{c^{(-)}_{\max}}{\alpha-1}\right],$$

where

$$\rho^{(-)}_{\max} = \max_{\xi\in[0,1]}\rho^{(-)}(\xi),\qquad c^{(-)}_{\max} = \max_{\xi\in[0,1]}c^{(-)}(\xi).$$

Let $\varepsilon \in (0,1)$ and, for fixed source block length $n$, let $\mathcal{R}_{\rm WZ}(n,\varepsilon|p_{XY})$ be the $(n,\varepsilon)$-rate distortion region consisting of pairs $(R,D)$ of compression rate $R$ and distortion level $D$ such that the decoder fails to obtain the sources within distortion level $D$ with a probability not exceeding $\varepsilon$. The formal definition of $\mathcal{R}_{\rm WZ}(n,\varepsilon|p_{XY})$ is found in [2]. The above proposition together with the result of Tyagi and Watanabe [5] yields a new strong converse outer bound. To describe this result, for a region $\mathcal{R} \subseteq \mathbb{R}^2_+$ and $\nu > 0$, we set

$$\mathcal{R} - \nu(1,1) := \bigl\{(a-\nu,\,b-\nu)\in\mathbb{R}^2_+ : (a,b)\in\mathcal{R}\bigr\}.$$
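The set operation $\mathcal{R} - \nu(1,1)$ is a uniform shift of the region by $\nu$ in both coordinates, discarding pairs that leave $\mathbb{R}^2_+$; a direct transcription on a finite set of (hypothetical) rate-distortion pairs:

```python
def shift_region(R, nu):
    """R - nu(1,1) := {(a - nu, b - nu) in R_+^2 : (a, b) in R}."""
    return {(a - nu, b - nu) for (a, b) in R if a - nu >= 0 and b - nu >= 0}

# Hypothetical sample of (rate, distortion) pairs.
R = {(0.5, 0.2), (1.0, 0.1), (0.05, 0.3)}
shifted = shift_region(R, 0.1)   # (0.05, 0.3) is dropped: 0.05 - 0.1 < 0
```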

According to Tyagi and Watanabe [5], we have the following theorem.

###### Theorem 1 (Tyagi and Watanabe [5])

For any $\varepsilon \in (0,1)$,

$$\mathcal{R}_{\rm WZ}(n,\varepsilon|p_{XY}) \subseteq \mathcal{R}_{\rm WZ}(p_{XY}) - \left\{\Delta^{(\alpha)}_{\rm WZ}(p_{XY}) + \frac{\alpha}{n}\log\frac{1}{1-\varepsilon}\right\}(1,1). \tag{15}$$

From Theorem 1 and Proposition 3, we have the following:

###### Theorem 2

For any $n$, any $\varepsilon \in (0,1)$, and any $\alpha$ satisfying $\alpha \ge 5$, we have

$$\mathcal{R}_{\rm WZ}(n,\varepsilon|p_{XY}) \subseteq \mathcal{R}_{\rm WZ}(p_{XY}) - \upsilon_n(\varepsilon,\alpha)(1,1), \tag{16}$$

where

$$\upsilon_n(\varepsilon,\alpha) := \frac{1}{\alpha-1}\left[\frac{\rho^{(-)}_{\max}}{2} + \frac{c^{(-)}_{\max}}{\alpha-1}\right] + \frac{\alpha}{n}\log\frac{1}{1-\varepsilon}. \tag{17}$$

In (17), we choose $\alpha = 1 + \sqrt{\rho^{(-)}_{\max}\,n\,/\,(2\log\frac{1}{1-\varepsilon})}$. For this choice of $\alpha$, the quantity $\upsilon_n(\varepsilon,\alpha)$, which we write as $\upsilon_n(\varepsilon)$, becomes the following:

$$\upsilon_n(\varepsilon) = \sqrt{\frac{2\rho^{(-)}_{\max}}{n}\log\frac{1}{1-\varepsilon}} + \frac{1}{n}\left[\frac{2c^{(-)}_{\max}}{\rho^{(-)}_{\max}} + 1\right]\log\frac{1}{1-\varepsilon}.$$
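The closed form of $\upsilon_n(\varepsilon)$ follows from balancing the two dominant terms of (17); a short derivation sketch, with $L$ and $\beta$ as shorthand:

```latex
% Write L := \log\frac{1}{1-\varepsilon} and \beta := \alpha - 1, so that (17) reads
\upsilon_n(\varepsilon,\alpha)
  = \frac{1}{\beta}\left[\frac{\rho^{(-)}_{\max}}{2}
    + \frac{c^{(-)}_{\max}}{\beta}\right] + \frac{(1+\beta)L}{n}.
% Balancing \rho^{(-)}_{\max}/(2\beta) against \beta L/n gives
\beta = \sqrt{\frac{\rho^{(-)}_{\max}\,n}{2L}},
\qquad
\frac{\rho^{(-)}_{\max}}{2\beta} + \frac{\beta L}{n}
  = 2\sqrt{\frac{\rho^{(-)}_{\max} L}{2n}}
  = \sqrt{\frac{2\rho^{(-)}_{\max}}{n}\,L},
\qquad
\frac{c^{(-)}_{\max}}{\beta^{2}}
  = \frac{2c^{(-)}_{\max}}{\rho^{(-)}_{\max}}\cdot\frac{L}{n}.
% Adding the remaining L/n term recovers the displayed expression for
% \upsilon_n(\varepsilon), which is O(1/\sqrt{n}).
```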

The quantity $\upsilon_n(\varepsilon)$ indicates the gap of the outer bound of $\mathcal{R}_{\rm WZ}(n,\varepsilon|p_{XY})$ from $\mathcal{R}_{\rm WZ}(p_{XY})$. This gap is tighter than the similar gap $\upsilon'_n(\varepsilon)$ given by

$$\upsilon'_n(\varepsilon) = \sqrt{\frac{c}{n}\log\frac{5}{1-\varepsilon}} + \frac{2}{n}\log\frac{5}{1-\varepsilon},$$

where $c$ is some positive constant not depending on $n$. The above $\upsilon'_n(\varepsilon)$ was obtained by the author [2] by a different method based on the theory of information spectrums [7].

## III Proof of the Main Result

For $q \in \mathcal{P}^*$, $\alpha > 0$, and real $\theta$, define

$$\omega^{(\alpha)}_q(\underline{x}) := \omega_q(\underline{x}) - \alpha\log\frac{q(\underline{x})}{p^{(q)}(\underline{x})},\qquad \Omega^{(\theta,\alpha)}(q) := \log{\rm E}_q\Bigl[\exp\bigl\{\theta\omega^{(\alpha)}_q(\underline{X})\bigr\}\Bigr].$$

We can show that the functions we have defined so far satisfy the several properties shown below.

###### Property 1

• a) For fixed positive $\alpha$, a sufficient condition for $\Omega^{(\theta,\alpha)}(q)$ to exist is that $0 \le \theta \le \frac{1}{\alpha-\nu_{\rm sum}}$.

• b) For such $\theta$ and $\alpha$, define a probability distribution $q^{(\theta;\alpha)}$ by

$$q^{(\theta;\alpha)}(\underline{x}) := \frac{q(\underline{x})\exp\bigl\{\theta\omega^{(\alpha)}_q(\underline{x})\bigr\}}{{\rm E}_q\bigl[\exp\bigl\{\theta\omega^{(\alpha)}_q(\underline{X})\bigr\}\bigr]}.$$

For $p \in \mathcal{P}(\underline{\mathcal{X}})$ and real $\lambda$, define a probability distribution $p^{(\lambda)}$ by

$$p^{(\lambda)}(\underline{x}) := \frac{p(\underline{x})\exp\bigl\{\lambda\omega_p(\underline{x})\bigr\}}{{\rm E}_p\bigl[\exp\bigl\{\lambda\omega_p(\underline{X})\bigr\}\bigr]}.$$

Then, we have

$$\frac{\rm d}{{\rm d}\theta}\Omega^{(\theta,\alpha)}(q) = {\rm E}_{q^{(\theta;\alpha)}}\bigl[\omega^{(\alpha)}_q(\underline{X})\bigr], \tag{18}$$
$$\frac{{\rm d}^2}{{\rm d}\theta^2}\Omega^{(\theta,\alpha)}(q) = {\rm Var}_{q^{(\theta;\alpha)}}\bigl[\omega^{(\alpha)}_q(\underline{X})\bigr], \tag{19}$$
$$\frac{\rm d}{{\rm d}\lambda}\widetilde{\Omega}^{(\lambda)}(p) = {\rm E}_{p^{(\lambda)}}\bigl[\omega_p(\underline{X})\bigr], \tag{20}$$
$$\frac{{\rm d}^2}{{\rm d}\lambda^2}\widetilde{\Omega}^{(\lambda)}(p) = {\rm Var}_{p^{(\lambda)}}\bigl[\omega_p(\underline{X})\bigr]. \tag{21}$$

Specifically, we have

$$\left(\frac{\rm d}{{\rm d}\theta}\Omega^{(\theta,\alpha)}(q)\right)_{\theta=0} = {\rm E}_q\bigl[\omega^{(\alpha)}_q(\underline{X})\bigr] = \Psi^{(\alpha)}(q), \tag{22}$$
$$\left(\frac{\rm d}{{\rm d}\lambda}\widetilde{\Omega}^{(\lambda)}(p)\right)_{\lambda=0} = {\rm E}_p\bigl[\omega_p(\underline{X})\bigr] = \widetilde{\Psi}(p). \tag{23}$$

For fixed $\alpha$, the third derivative of $\Omega^{(\theta,\alpha)}(q)$ with respect to $\theta$ exists under a similar sufficient condition on $\theta$. Furthermore, the third derivative of $\widetilde{\Omega}^{(\lambda)}(p)$ with respect to $\lambda$ exists under a similar condition on $\lambda$.

• c) Let $c^{(+)}$ be some positive constant. Then, for any $\lambda \in \left[0, \frac{1}{2\eta_{\rm sum}}\right]$, we have

$$\widetilde{\Omega}^{(\lambda)}_{\max} \le \lambda\widetilde{\Psi}_{\max} + \lambda^2\left[\frac{\rho^{(+)}}{2} + \lambda c^{(+)}\right]. \tag{24}$$
• d) For any $q \in \mathcal{P}^*$, any $\alpha > \nu_{\rm sum}$, and any $\theta \in \left(0, \frac{1}{\alpha-\nu_{\rm sum}}\right]$, we have

$$\Omega^{(\theta,\alpha)}(q) \le \theta(\alpha-\nu_{\rm sum})\,\widetilde{\Omega}^{\left(\frac{1}{\alpha-\nu_{\rm sum}}\right)}\bigl(p^{(q)}\bigr). \tag{25}$$

From (25), we have

$$\frac{\Omega^{(\theta,\alpha)}(q)}{\theta} \le (\alpha-\nu_{\rm sum})\,\widetilde{\Omega}^{\left(\frac{1}{\alpha-\nu_{\rm sum}}\right)}\bigl(p^{(q)}\bigr) \tag{26}$$

for $\theta \in \left(0, \frac{1}{\alpha-\nu_{\rm sum}}\right]$. By letting $\theta$ tend to zero in (26), and taking (22) into account, we have that for any $q \in \mathcal{P}^*$ and any $\alpha > \nu_{\rm sum}$,

$$\Psi^{(\alpha)}(q) \le (\alpha-\nu_{\rm sum})\,\widetilde{\Omega}^{\left(\frac{1}{\alpha-\nu_{\rm sum}}\right)}\bigl(p^{(q)}\bigr). \tag{27}$$
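The derivative identities (18)–(21) are the standard cumulant-generating-function relations for exponentially tilted distributions and can be verified numerically by finite differences. The sketch below checks (20) on a hypothetical 4-point alphabet with fixed values of $\omega_p$.

```python
import math

# Finite-difference check of (20): (d/d lambda) tilde{Omega}^(lambda)(p)
# equals E under the tilted distribution p^(lambda) of omega_p(X).
# The distribution p and the values omega_p(x) below are hypothetical.
p = [0.1, 0.2, 0.3, 0.4]
omega = [0.5, 1.0, 0.25, 2.0]   # values omega_p(x), assumed fixed

def Omega(lam):
    """tilde{Omega}^(lambda)(p) = log E_p[exp(lambda * omega_p(X))]."""
    return math.log(sum(pi * math.exp(lam * w) for pi, w in zip(p, omega)))

def tilted_mean(lam):
    """E_{p^(lambda)}[omega_p(X)] under the tilted distribution of part b)."""
    z = sum(pi * math.exp(lam * w) for pi, w in zip(p, omega))
    return sum(pi * math.exp(lam * w) * w for pi, w in zip(p, omega)) / z

lam, h = 0.3, 1e-6
fd = (Omega(lam + h) - Omega(lam - h)) / (2 * h)   # central difference
```

At $\lambda = 0$ the tilted mean reduces to ${\rm E}_p[\omega_p(X)]$, which is the specialization (23).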
###### Property 2

• a) For fixed positive $\alpha$, a sufficient condition for $\Omega^{(\theta,-\alpha)}(q)$ to exist is that $-\frac{1}{\alpha-\kappa_{\rm sum}} \le \theta \le 0$.

• b) For fixed positive $\alpha$, the third derivative of $\Omega^{(\theta,-\alpha)}(q)$ with respect to $\theta$ exists under a similar sufficient condition on $\theta$.

• c) Let $c^{(-)}$ be some positive constant. Then, for any $\lambda \in \left[0, \frac{1}{2\mu_{\rm sum}}\right]$, we have

$$\widetilde{\Omega}^{(-\lambda)}_{\max} \le -\lambda\widetilde{\Psi}_{\min} + \lambda^2\left[\frac{\rho^{(-)}}{2} + \lambda c^{(-)}\right]. \tag{28}$$
• d) For any $q \in \mathcal{P}^*$, any $\alpha > \kappa_{\rm sum}$, and any $\theta \in \left[-\frac{1}{\alpha-\kappa_{\rm sum}}, 0\right)$, we have

$$\Omega^{(\theta,-\alpha)}(q) \le -\theta(\alpha-\kappa_{\rm sum})\,\widetilde{\Omega}^{\left(-\frac{1}{\alpha-\kappa_{\rm sum}}\right)}\bigl(p^{(q)}\bigr). \tag{29}$$

From (29), we have

$$\frac{\Omega^{(\theta,-\alpha)}(q)}{\theta} \ge -(\alpha-\kappa_{\rm sum})\,\widetilde{\Omega}^{\left(-\frac{1}{\alpha-\kappa_{\rm sum}}\right)}\bigl(p^{(q)}\bigr) \tag{30}$$

for $\theta \in \left[-\frac{1}{\alpha-\kappa_{\rm sum}}, 0\right)$. By letting $\theta$ tend to zero in (30), and taking (22) into account, we have that for any $q \in \mathcal{P}^*$ and any $\alpha > \kappa_{\rm sum}$,

$$\Psi^{(-\alpha)}(q) \ge -(\alpha-\kappa_{\rm sum})\,\widetilde{\Omega}^{\left(-\frac{1}{\alpha-\kappa_{\rm sum}}\right)}\bigl(p^{(q)}\bigr). \tag{31}$$

Proofs of Properties 1 and 2, parts a)–c), are given in Appendix B. Proofs of the equalities in Property 1 part b) are also given in Appendix B. Proofs of the inequality (25) in Property 1 part d) and the inequality (29) in Property 2 part d) are given in Appendix C.

Proof of Proposition 2: We first prove (13). Fix $q \in \mathcal{P}^*$ arbitrarily. For $\alpha > \nu_{\rm sum}$, we set $\lambda := \frac{1}{\alpha-\nu_{\rm sum}}$. Then the condition $\alpha \ge \nu_{\rm sum} + 2\eta_{\rm sum}$ is equivalent to $\lambda \in \left(0, \frac{1}{2\eta_{\rm sum}}\right]$. When $\lambda \in \left(0, \frac{1}{2\eta_{\rm sum}}\right]$, we have the following chain of inequalities:

$$\Psi^{(\alpha)}(q) \overset{({\rm a})}{\le} \frac{1}{\lambda}\widetilde{\Omega}^{(\lambda)}\bigl(p^{(q)}\bigr) \le \frac{\widetilde{\Omega}^{(\lambda)}_{\max}}{\lambda} \overset{({\rm b})}{\le} \widetilde{\Psi}_{\max} + \lambda\left[\frac{\rho^{(+)}}{2} + \lambda c^{(+)}\right] = \widetilde{\Psi}_{\max} + \frac{1}{\alpha-\nu_{\rm sum}}\left[\frac{\rho^{(+)}}{2} + \frac{c^{(+)}}{\alpha-\nu_{\rm sum}}\right]. \tag{32}$$

Step (a) follows from (27) in Property 1 part d) and the choice of . Step (b) follows from (24) in Property 1 part c). S