A New Inequality Related to Proofs of Strong Converse Theorems in Multiterminal Information Theory

01/15/2019
by Yasutada Oohama, et al.

In this paper we provide a new inequality useful for the proofs of strong converse theorems in multiterminal information theory. We apply this inequality to the recent work of Tyagi and Watanabe on the strong converse theorem for the Wyner-Ziv source coding problem to obtain a new strong converse outer bound. This outer bound deviates from the Wyner-Ziv rate distortion region by O(1/√n) in the length n of the source outputs.




I Definitions of Functions

Let be an index set. For each , let be a finite set. For each , let be a random variable taking values in . For , . In particular, for , we write . Let be the set of all probability distributions on . For , we write its distribution as . For , we often omit the subscript and simply write . For , let denote the probability distribution of , which is the marginal distribution of . We adopt similar notations for other variables or sets. For , we consider a function having the following form:

(1)
(2)

In (1), , , are given nonnegative functions and , are given real valued coefficients. In (2), the quantities and are given positive coefficients. Furthermore, and are given subsets of . We define

(3)

In this paper we assume that the function satisfies the following property.

Assumption 1

  • For any , is nonnegative and bounded, i.e., there exists a positive such that for any .

  • is a continuous function of .

Let be a given subset of . The following two optimization problems

(4)

frequently appear in the analysis of capacity regions or rate regions in multiterminal information theory. In this paper we give one example of and , which is related to the problem of source coding with side information at the decoder posed and investigated by Wyner and Ziv [6]. This example is shown below.

Example 1

Let , , , and be four random variables taking values in the finite sets , , , and , respectively. We consider the case where . Let be a probability distribution of . For , we define

(5)

where are distortion measures. In this example we have the following:

(6)

Let

In this example we denote the quantity by , which has the following form:

The quantity yields the following hyperplane expression of the Wyner-Ziv rate distortion region :
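(The symbols in the preceding example are lost in this extraction. For orientation only, and not necessarily in the paper's own notation, the classical single-distortion Wyner-Ziv characterization [6] and the associated hyperplane, i.e., support-function, expression take the following generic form.)

```latex
% Classical Wyner-Ziv rate-distortion function [6], generic notation:
% minimization over an auxiliary U with U -> X -> Y Markov and a decoder f.
% (Under the Markov constraint, I(X;U|Y) = I(X;U) - I(U;Y).)
\[
  R_{\mathrm{WZ}}(\Delta)
  \;=\; \min_{\substack{P_{U|X},\, f:\ U \to X \to Y,\\ \mathbb{E}\, d(X, f(U,Y)) \le \Delta}}
        I(X;U \mid Y).
\]
% Each half-plane R + mu*Delta >= R^(mu), mu >= 0, contains the Wyner-Ziv region,
% which is the sense of the "hyperplane expression" above:
\[
  R^{(\mu)} \;:=\; \min_{(R,\Delta)\in\mathcal{R}_{\mathrm{WZ}}} \bigl( R + \mu \Delta \bigr)
  \;=\; \min_{P_{U|X},\, f:\ U \to X \to Y}
        \Bigl\{ I(X;U \mid Y) + \mu\, \mathbb{E}\, d\bigl(X, f(U,Y)\bigr) \Bigr\},
  \qquad \mu \ge 0 .
\]
```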

In the above example, because of the two Markov chains and , the computation of becomes a non-convex optimization problem, which is very hard to solve in its present form. As we can see from this example, the computations of and are in general highly challenging. To address these problems, we introduce alternative optimization problems having one parameter, defined on a relaxed version of the condition . Let be a suitable onto mapping satisfying . We set . On the above , we assume the following:

Assumption 2

  • Let denote the feasible region of these relaxed optimization problems. On the feasible region , we assume that for any , its support set includes the support set of .

  • For any and for any , we have

    where and are positive constants and the quantities and are subsets of satisfying the following:

For and , define

We consider the following two optimization problems:

(7)

These optimization problems appear in recent results obtained by the author [1]-[4] and by Tyagi and Watanabe [5] on the proofs of strong converse theorems for multiterminal source or channel networks.

Example 2

We consider the case of Example 1. Define by . The feasible region is given by

For and for , we have

(8)

From (8), we have that and . We denote the quantity by , which has the following form:

According to Tyagi and Watanabe [5], a single-letter characterization of the rate distortion region using the function plays an important role in the proof of the strong converse theorem for the Wyner-Ziv source coding problem.
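(The relaxed functional in Example 2 is fully specified only in the original manuscript. Purely as an illustrative sketch, and not the construction of this paper or of [5], the following Python snippet evaluates a Wyner-Ziv-type Lagrangian I(X;U|Y) + mu·E[d_H(X, f(U,Y))] by brute force for a binary symmetric toy model, searching over a hypothetical one-parameter family of test channels; all names and the parameterization are ours.)

```python
from itertools import product
from math import log2

def bsc(flip):
    """Transition probability p(out | inp) of a binary symmetric channel with crossover `flip`."""
    return lambda out, inp: 1.0 - flip if out == inp else flip

def wz_lagrangian(p_side, q_test, mu):
    """Wyner-Ziv-type Lagrangian I(X;U|Y) + mu * E[d_H(X, f(U,Y))] for a binary toy model.

    X ~ Bern(1/2); Y is X observed through BSC(p_side); U is X through BSC(q_test),
    so U - X - Y is a Markov chain; f is the MAP estimate of X from (U, Y).
    """
    py_x, pu_x = bsc(p_side), bsc(q_test)
    # Joint pmf p(u, x, y) = p(x) p(u|x) p(y|x) with p(x) = 1/2.
    joint = {(u, x, y): 0.5 * pu_x(u, x) * py_x(y, x)
             for u, x, y in product((0, 1), repeat=3)}

    def marginal(idx):
        out = {}
        for key, pr in joint.items():
            sub = tuple(key[i] for i in idx)
            out[sub] = out.get(sub, 0.0) + pr
        return out

    p_xy, p_y, p_uy = marginal((1, 2)), marginal((2,)), marginal((0, 2))

    # I(X;U|Y) = H(X|Y) - H(X|U,Y), computed directly from the pmfs.
    h_x_given_y = -sum(pr * log2(pr / p_y[(y,)])
                       for (x, y), pr in p_xy.items() if pr > 0)
    h_x_given_uy = -sum(pr * log2(pr / p_uy[(u, y)])
                        for (u, x, y), pr in joint.items() if pr > 0)
    cond_mi = h_x_given_y - h_x_given_uy

    # MAP reconstruction f(u, y) and its expected Hamming distortion.
    f_map = {(u, y): max((0, 1), key=lambda x: joint[(u, x, y)])
             for u, y in product((0, 1), repeat=2)}
    distortion = sum(pr for (u, x, y), pr in joint.items() if f_map[(u, y)] != x)

    return cond_mi + mu * distortion

if __name__ == "__main__":
    # Brute-force search over the one-parameter family of test channels.
    mu, p_side = 2.0, 0.1
    best = min(((wz_lagrangian(p_side, q / 200.0, mu), q / 200.0)
                for q in range(101)), key=lambda t: t[0])
    print("min Lagrangian %.4f at test-channel crossover q = %.3f" % best)
```

A grid search is used deliberately: as noted above, Markov-constrained problems of this kind are non-convex in general, so a local solver offers no guarantee.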

II Main Results

Our aim in this paper is to evaluate the differences between and and between and . It is obvious that we have

(9)

for any . In fact, restricting the feasible region in the definitions of or to , we obtain the bounds in (9). We first describe explicit upper bounds of and by standard analytical arguments. This result is given by the following proposition.
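(Before stating it, note that the bounds in (9) are just the elementary monotonicity of extrema under restriction of the feasible set; in generic notation:)

```latex
% If A is a subset of B, a minimum over A can only be larger and a supremum smaller:
\[
  \mathcal{A} \subseteq \mathcal{B}
  \;\Longrightarrow\;
  \min_{p \in \mathcal{A}} F(p) \;\ge\; \min_{p \in \mathcal{B}} F(p),
  \qquad
  \sup_{p \in \mathcal{A}} G(p) \;\le\; \sup_{p \in \mathcal{B}} G(p).
\]
```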

Proposition 1

For any positive , we have

(10)
(11)

where we set

Proof of this proposition is given in Appendix -A. We set

For and , define

Furthermore, define

(12)

For , define

Furthermore, set

Note that the quantity depends on and the quantity depends on . Our main result is given in the following proposition.

Proposition 2

For any satisfying , we have

(13)

where is a suitable positive constant depending on . Furthermore, for any satisfying , we have

(14)

where is a suitable positive constant depending on .

The proof of this proposition will be given in the next section. We can see from the above proposition that the two bounds (13) and (14) in Proposition 2 provide significant improvements over the bounds (10) and (11) in Proposition 1, respectively.

We next consider an application of Proposition 2 to the case discussed in Examples 1 and 2. As stated in Examples 1 and 2, and . Set

Here we note that and depend on the value of . Hence we write and when we wish to indicate that they are functions of . Applying Proposition 2 to the example of the Wyner-Ziv source coding problem, we obtain the following result.

Proposition 3

For any and any satisfying , we have

Specifically, for any satisfying , we have

where

Let and, for a fixed source block length , let be the -rate distortion region, consisting of the pairs of compression rate and distortion level such that the decoder fails to reproduce the sources within distortion level with probability not exceeding . The formal definition of is found in [2]. The above proposition, together with the result of Tyagi and Watanabe [5], yields a new strong converse outer bound. To describe this result for , we set

According to Tyagi and Watanabe [5], we have the following theorem.

Theorem 1 (Tyagi and Watanabe [5])

For any ,

(15)

From Theorem 1 and Proposition 3, we have the following:

Theorem 2

For any satisfying , we have

(16)

where

(17)

In (17), we choose . For this choice of , the quantity becomes the following:

The quantity indicates the gap between the outer bound of and . This gap is tighter than the similar gap given by

where is some positive constant not depending on . The latter gap was obtained by the author [2] by a different method, based on the information spectrum approach [7].
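(The O(1/√n) order claimed in the abstract can be read off from a standard balancing of two terms; the following is a generic sketch with hypothetical constants c_1, c_2 > 0 and is not the exact expression in (17).)

```latex
% Change-of-measure strong converses typically yield, for every lambda in (0, lambda_0],
%   R + mu*Delta >= R^(mu) - c_1*lambda - (c_2/(lambda*n)) * log(1/(1-epsilon)).
% Optimizing the penalty over lambda balances the two terms:
\[
  \lambda^{\star} = \sqrt{\frac{c_2 \log\frac{1}{1-\varepsilon}}{c_1\, n}},
  \qquad
  c_1 \lambda^{\star} + \frac{c_2}{\lambda^{\star} n}\,\log\frac{1}{1-\varepsilon}
  \;=\; 2\sqrt{\frac{c_1 c_2}{n}\,\log\frac{1}{1-\varepsilon}}
  \;=\; O\!\Bigl(\tfrac{1}{\sqrt{n}}\Bigr).
\]
```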

III Proof of the Main Result

For , and for , define

We can show that the functions we have defined so far satisfy several properties, shown below.

Property 1

  a) For fixed positive , a sufficient condition for to exist is

  b) For , define a probability distribution by

    For , define a probability distribution by

    Then, we have

    (18)
    (19)
    (20)
    (21)

    Specifically, we have

    (22)
    (23)

    For fixed , a sufficient condition for the third derivative of to exist is . Furthermore, a sufficient condition for the third derivative of to exist is .

  c) Let be some positive constant depending on . Then, for any , we have

    (24)
  d) For any , , any , and any , we have

    (25)

    From (25), we have

    (26)

    for . By letting in (26), and taking (22) into account, we have that for any and any , ,

    (27)
Property 2

  a) For fixed positive , a sufficient condition for to exist is

  b) For fixed positive , a sufficient condition for the third derivative of to exist is

  c) Let be some positive constant depending on . Then, for any , we have

    (28)
  d) For any , , any , and any , we have

    (29)

    From (29), we have

    (30)

    for . By letting in (30), and taking (22) into account, we have that for any and any , ,

    (31)

Proofs of parts a)-c) of Properties 1 and 2 are given in Appendix -B. Proofs of the equalities in Property 1 part b) are also given in Appendix -B. Proofs of the inequality (25) in Property 1 part d) and of the inequality (29) in Property 2 part d) are given in Appendix -C.
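(The definitions behind Properties 1 and 2 are stripped in this extraction. As a hedged reminder of a standard construction used in analyses of this type, and not necessarily the paper's exact definitions, exponential tilting of a distribution by a bounded nonnegative function behaves as follows.)

```latex
% Exponential tilting of p by a bounded nonnegative omega (generic notation):
\[
  p^{(\lambda)}(x) = \frac{p(x)\, e^{-\lambda \omega(x)}}{\sum_{x'} p(x')\, e^{-\lambda \omega(x')}},
  \qquad
  \Omega(\lambda) = \log \sum_{x} p(x)\, e^{-\lambda \omega(x)} .
\]
% Boundedness of omega makes Omega infinitely differentiable, with
\[
  \Omega'(\lambda) = -\,\mathbb{E}_{p^{(\lambda)}}[\omega],
  \qquad
  \Omega''(\lambda) = \operatorname{Var}_{p^{(\lambda)}}[\omega] \ge 0,
\]
% so a second-order Taylor expansion around lambda = 0 gives, for 0 < lambda <= lambda_0,
\[
  \bigl|\, \Omega(\lambda) + \lambda\, \mathbb{E}_{p}[\omega] \,\bigr|
  \;\le\; \frac{\lambda^{2}}{2}\, \max_{0 \le t \le \lambda_0} \operatorname{Var}_{p^{(t)}}[\omega],
\]
% the same kind of quadratic-in-lambda control that bounds such as (24) and (28) express.
```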

Proof of Proposition 2: We first prove (13). Fix arbitrary. For , we set . Then the condition is equivalent to . When , we have the following chain of inequalities:

(32)

Step (a) follows from (27) in Property 1 part d) and the choice of . Step (b) follows from (24) in Property 1 part c). S