On the Scalar-Help-Vector Source Coding Problem

04/10/2019 ∙ by C. Deng, et al. ∙ Harbin Institute of Technology

In this paper, we consider a scalar-help-vector problem for L+1 correlated Gaussian memoryless sources. We deal with the case where L encoders observe noisy linear combinations of L correlated Gaussian scalar sources, which serve as partial side information at the decoder, while the remaining encoder observes a vector Gaussian source, which serves as the primary source to be reconstructed. We determine an outer region in the case where the sources are conditionally independent given the vector source. We also show an inner region in a special case where the vector source can be regarded as K scalar sources.


I Introduction


The distributed source coding problem was first introduced by Slepian and Wolf [1], who considered lossless coding of two correlated discrete memoryless information sources. Cover [2] studied a multiterminal source coding system that extends Slepian and Wolf's two-terminal system; he used a symmetric compression coding scheme to determine the rate region of the Slepian-Wolf problem in [1]. Wyner [3, 4] introduced a new encoding method using one auxiliary random variable and determined the admissible rate region for his coding system (also referred to as the one-help-one problem). Berger [5] and Tung [6] studied the two-terminal direct source coding problem. Their coding technique using auxiliary random variables, which can be regarded as an extension of Wyner's [3, 4], is a viable tool for proving direct coding theorems. Since then, distributed source coding systems have been extended in several directions, including, but not limited to, CEO problems, source coding problems with partial side information, and other kinds of multiterminal source coding problems.

We consider the Gaussian scalar-help-vector source coding problem in which there are L+1 encoders; the first encoder observes a vector Gaussian source that is a corrupted version of the underlying scalar sources. This is a source coding problem with partial side information at the decoder, and it can be seen as an extension of both the CEO problem and the many-help-one scalar source coding problem.

The CEO problem was first introduced by Berger et al. [7], [9], who characterized the scalar Gaussian version of the problem. Oohama [10] developed a new method and bounded the rate region by leveraging Shannon's entropy power inequality (EPI) to relate various information-theoretic quantities. Oohama [13] and Prabhakaran et al. [11] later gave a complete characterization of the rate region of the scalar Gaussian CEO problem. Pandya et al. [12] generalized the quadratic Gaussian CEO problem to the case where the observations are noisy versions of transformed remote sources that are jointly Gaussian; they derived an upper bound on the total sum rate subject to a sum distortion constraint on the remote sources.

Yang and Xiong [15] and Oohama [16] considered the generalized Gaussian CEO problem, in which correlated Gaussian observations of a remote source are separately compressed by the encoders and the joint decoder attempts to reproduce the remote source. They determined explicit inner and outer bounds on the rate region. Yang and Xiong [15] considered the sum-distortion constraint, while Oohama [16] considered three types of distortion constraints, all of which are based on the covariance matrix of the estimation error on the remote source.

For source coding problems with partial side information, Oohama [8] studied the one-help-one problem in which both sources are scalar memoryless Gaussian. He gave a complete characterization of the rate region, proving the converse theorem by leveraging the EPI and giving a rigorous proof of the direct coding theorem. Oohama [13] dealt with the many-help-one problem for correlated Gaussian memoryless scalar sources; he assumed that the Gaussian scalar sources are conditionally independent and derived bounds on the rate region. The encoding technique of Oohama [13], used to prove the direct coding theorem, can be considered a continuous extension of that of Berger [5] and Tung [6]. Wagner et al. [14] considered a scalar-help-vector source coding problem with two encoders. They determined the rate region, deriving an outer region that establishes the converse theorem by creating a reduced-dimensional optimization problem.

Zaidi et al. [17] established a complete characterization of the rate region of the vector Gaussian CEO problem with partial side information. They considered the case where the side information is not assumed to be conditionally independent given the remote source, under the logarithmic loss distortion measure. Here, motivated by the work of Oohama [13], [16], we consider the scalar-help-vector problem using a distortion matrix constraint to restrain the error between the decoded output and the original primary source, and we prove the converse theorem; we also prove the direct coding theorem under the sum distortion constraint by deriving an inner bound.

The remainder of this paper is organized as follows. In Section II, we state the problem formulation and our two results. In Section III, we prove the converse coding theorem by determining the outer region stated in the previous section. In the process of deriving the outer bound, we use a result of Oohama [16] that is based on a variant of the entropy power inequality. In Section IV, we prove the direct coding theorem by deriving the inner region stated in Section II. In the process of proving this theorem, we consider an encoding technique using many Gaussian auxiliary random variables, inspired by Oohama's work in [13]. In Section V, we briefly conclude the paper. Note that we do not make a distinction between differential (conditional) entropies and ordinary (conditional) entropies.

II Problem description and results

II-A Description of the Problem

In this subsection, we present a description of the many-help-one source coding problem we consider in this paper. The system model is illustrated in Fig. 1, in which L correlated Gaussian scalar sources and one Gaussian vector source are treated.

Fig. 1: The many-help-one problem for L scalar Gaussian sources and one vector Gaussian source.

Let , . Let , be mutually independent , Gaussian random vectors with mean zero and positive definite covariance matrices , , respectively. Let , be mutually independent Gaussian random vectors with mean zero and positive definite covariance matrices , , respectively. Let , where is a matrix. Let . can be regarded as a degraded version of . Let , denote i.i.d. copies of and , respectively.
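The explicit source definitions above were largely lost in extraction. As orientation only, the following minimal Python sketch simulates one plausible reading of the model suggested by the abstract: L correlated Gaussian scalars, helper observations formed as noisy linear combinations of them, and a K-dimensional vector source obtained from the same scalars. All symbols (X, N, Y, V, the matrices A and B, and the sizes L, K, n) are illustrative assumptions, not the paper's notation.

```python
# Minimal simulation sketch of one plausible reading of the source model.
# Symbols and sizes are illustrative assumptions, not the paper's notation.
import numpy as np

rng = np.random.default_rng(0)
L, K, n = 3, 2, 10_000        # helpers, vector-source dimension, block length

Sigma_X = np.array([[1.0, 0.5, 0.2],   # positive definite covariance of the
                    [0.5, 1.0, 0.4],   # correlated Gaussian scalar sources
                    [0.2, 0.4, 1.0]])
Sigma_N = np.diag([0.3, 0.3, 0.3])     # covariance of the independent observation noises
A = rng.standard_normal((L, L))        # mixing matrix for the helper observations
B = rng.standard_normal((K, L))        # maps the scalars to the vector source

X = rng.multivariate_normal(np.zeros(L), Sigma_X, size=n)   # n i.i.d. source letters
N = rng.multivariate_normal(np.zeros(L), Sigma_N, size=n)
Y = X @ A.T + N                        # noisy linear combinations seen by the helpers
V = X @ B.T                            # vector source seen by the remaining encoder

print("empirical covariance of the helper observations:\n", np.cov(Y.T).round(2))
```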

In this system, the encoding functions are defined by and , where

(1)
(2)

For each , set

(3)

which stands for the transmission rate of the encoding function . The joint decoding function is defined by

(4)

The set of all tuples of encoding and decoding functions satisfying (1)-(4) is denoted by .
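The displayed definitions (1)-(4) did not survive extraction. For orientation, the following is a standard way such maps and rates are written in this literature, with illustrative symbols (encoders φ_i, message sets of sizes M_i, joint decoder ψ, vector-source dimension K) that are assumptions rather than the paper's notation:

```latex
\begin{align*}
% Hypothetical, standard-form definitions in the same order as (1)-(4);
% the symbols are illustrative, not the paper's.
\varphi_i &: \mathbb{R}^{n} \to \mathcal{M}_i = \{1,\dots,M_i\}, \qquad i = 1,\dots,L, \\
\varphi_0 &: \mathbb{R}^{Kn} \to \mathcal{M}_0 = \{1,\dots,M_0\}, \\
R_i &= \frac{1}{n}\log M_i, \qquad i = 0,1,\dots,L, \\
\psi &: \mathcal{M}_0 \times \mathcal{M}_1 \times \cdots \times \mathcal{M}_L \to \mathbb{R}^{Kn}.
\end{align*}
```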

Next, we present the rate distortion region under the distortion matrix constraint and the sum distortion constraint.

Distortion matrix constraint: A rate vector is said to be -admissible if for a given distortion matrix , there exists a sequence such that

where denotes the covariance matrix which has the form expressed by Eq. (5).

Sum distortion constraint: A rate vector is said to be -admissible if for a given positive , there exists a sequence such that

Let and denote the set of all -admissible rate vectors and -admissible rate vectors, respectively. Our goal is to determine an outer bound and an inner bound of the rate region .
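The displayed constraints in the two definitions above were lost in extraction. A hedged reconstruction of the form such constraints typically take in this literature, with illustrative symbols (X_t the primary source letter, X̂_t the decoder output, D a prescribed matrix or positive scalar), is:

```latex
\begin{align*}
% Distortion matrix constraint: the averaged error covariance (the matrix
% whose form Eq. (5) spells out) is dominated, in the positive semidefinite
% order, by a prescribed matrix D:
&\limsup_{n\to\infty}\ \frac{1}{n}\sum_{t=1}^{n}
   \mathbb{E}\!\left[(X_t-\hat{X}_t)(X_t-\hat{X}_t)^{\mathsf T}\right] \preceq D, \\
% Sum distortion constraint: only the trace of that error covariance is
% bounded by a positive scalar D:
&\limsup_{n\to\infty}\ \frac{1}{n}\sum_{t=1}^{n}
   \mathbb{E}\!\left[\lVert X_t-\hat{X}_t\rVert^{2}\right] \le D.
\end{align*}
```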

II-B Main results

In this subsection we state our results on the characterizations of .

For , , let be a Gaussian random variable with mean and variance . Fix a nonnegative vector . For and , define

(6)

where , , and denotes the covariance matrix of the random vector . We now use this notation to present our rate region.

Set

It can be verified that the region is a closed convex set. For the sum distortion constraint, the outer bound of the rate distortion region can be expressed as .

Our main result for an outer bound of the rate distortion region is given in the following theorem.

Theorem 1: For Gaussian sources that satisfy the following Markov chains: , , .
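The Markov chains in the theorem statement were lost in extraction. In many-help-one settings of this kind (cf. Oohama [13]), the assumption is typically that the helper observations are conditionally independent given the primary source; with illustrative symbols (Y_i for the helper observations and X for the vector source), such chains read:

```latex
\begin{align*}
% Illustrative conditional-independence structure (not the paper's exact
% statement): each helper observation interacts with the others only
% through the primary vector source.
Y_i \;\leftrightarrow\; X \;\leftrightarrow\; (Y_j)_{j\neq i}, \qquad i = 1,\dots,L.
\end{align*}
```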

The proof of Theorem 1 will be given in Section III.

Next, we present a result for an inner bound of the rate distortion region. Our presentation follows the same logic as that of [13].

Let be random variables taking values in real lines . For , set

We write the subset of as , and define the subsets of by . Let and denote an arbitrary permutation of and the set of all permutations of , respectively. Let denote the image of under . Set

and

Theorem 2: For Gaussian information sources with general correlation .

Theorem 2 provides an inner bound of the rate distortion region , and the proof of Theorem 2, which is based on the inner bound in Oohama's work [13], can be found in Section IV.

III Proof of the converse theorem

In this section, we prove the inclusion .

We first observe that

(7)

and

(8)

hold for any subset of . Assume . Then, there exists a sequence such that , .

For any subset , we have the following chain of inequalities:

(9)

where step (a) follows from (7) and (8), and step (b) follows from the definition of and .


Due to the fact that:

(10)

we need to find a lower bound of and an upper bound of . We have the following chain of inequalities for a lower bound of :

(11)

We have the following chain of equalities for an upper bound of .

(12)

Furthermore, can be expanded as follows:

(13)

For , we shall first derive a lower bound of ,

(14)

Then, we observe that

(15)

Finally, we estimate a lower bound of . We first state a lemma given in Oohama [16]; this lemma is based on the EPI and serves as the key to the derivation of the lower bound of .

Lemma 1: For any ,

(16)

Then, according to the definition of , we need to prove that . The derivation is given in Appendix I.
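The statement of Lemma 1 and inequality (16) were lost in extraction. For reference, the classical entropy power inequality on which such variants are built states that, for independent random vectors X and Y in R^n (differential entropies in nats),

```latex
\begin{align*}
% Shannon's entropy power inequality; Lemma 1 in the text is a
% conditional/variant form due to Oohama [16], not reproduced here.
e^{\frac{2}{n} h(X+Y)} \;\ge\; e^{\frac{2}{n} h(X)} + e^{\frac{2}{n} h(Y)}.
\end{align*}
```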

Then, combining the above bounds, i.e., Eqs. (14), (15), (16), and (10)-(13), we obtain Eq. (17). Since is nonnegative, we have

(18)

where is defined earlier by (6). Combining (9) and (18), we have

(19)

On the other hand, we have

(20)

Letting and combining the above results, we obtain

(21)
(22)

From (21) and (22), we have the inclusion . Furthermore, we have

IV Proof of the direct coding theorem

In this section, we prove the inclusion . We provide encoding and decoding schemes which are based on the random coding arguments developed by Oohama [13].

Let , . Let . For , we write , , , as , , , , respectively.

Suppose that and is the identity map. In this case,

We next prove the inclusion under the sum distortion constraint. Our proof follows the same logic as that of Oohama’s work in [13].

IV-A Distortion Typical Sequences

Set and . Since , there exists a mapping such that .

Furthermore, define

Fix any . An sequence pair is called distortion typical if

We denote the set of all distortion typical sequences for by . The definitions and properties of typical sets , that we use to prove our direct coding theorem are similar to those given in [13], so we omit the details here.
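Since the displayed typicality conditions were lost in extraction, we recall the standard notion of distortion typicality (as in Cover and Thomas) that such conditions mirror; the symbols below are illustrative, not the paper's:

```latex
\begin{align*}
% Standard distortion epsilon-typicality for a pair (x^n, y^n) with respect
% to a joint density p and a distortion measure d (illustrative symbols):
&\left| -\tfrac{1}{n}\log p(x^n) - h(X) \right| < \epsilon, \qquad
 \left| -\tfrac{1}{n}\log p(y^n) - h(Y) \right| < \epsilon, \\
&\left| -\tfrac{1}{n}\log p(x^n,y^n) - h(X,Y) \right| < \epsilon, \qquad
 \left| d(x^n,y^n) - \mathbb{E}\, d(X,Y) \right| < \epsilon .
\end{align*}
```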

IV-B Pre-encoding

For each , generate i.i.d. sequences . We denote these sequences by , , which are also independent of each other over . We define the pre-encoder functions as , , which are the mappings from , for , and , for .

If there exists an integer such that , and for , then ; if no such integer exists, .

For , and , we define the cells

where ; there is no loss of generality in assuming that is an integer.
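The cell construction above amounts to binning the pre-encoder's codeword indices so that only a coarser index needs to be transmitted. The following toy Python sketch (hypothetical names and sizes, with nearest-codeword selection standing in for the joint-typicality test) illustrates this quantize-and-bin step for a single encoder.

```python
# Toy sketch of the quantize-and-bin step (hypothetical names and sizes):
# the pre-encoder maps a source block to the index of a nearby codeword,
# and the encoder transmits only the cell (bin) containing that index.
import numpy as np

rng = np.random.default_rng(1)
n = 64                      # block length
M = 2 ** 12                 # number of pre-encoder codewords
num_cells = 2 ** 8          # number of cells actually signalled

codebook = rng.standard_normal((M, n))      # i.i.d. Gaussian codewords
cell_of = rng.integers(num_cells, size=M)   # assignment of codewords to cells

def pre_encode(y):
    """Return the index of the codeword closest to y
    (a stand-in for the joint-typicality test)."""
    return int(np.argmin(np.sum((codebook - y) ** 2, axis=1)))

def encode(y):
    """Transmit only the cell index of the chosen codeword."""
    return int(cell_of[pre_encode(y)])

y = rng.standard_normal(n)
print("cell index sent:", encode(y), "out of", num_cells)
```

The decoding rule in the next subsection then searches the received cells for a unique jointly typical tuple of codewords.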

IV-C Coding Scheme

Encoding: For each , , and , , we define the encoder function and by , when and for .

Decoding: For a received vector , pick an index vector such that . If there does not exist a unique vector satisfying the above condition, output an arbitrary vector from and adopt the output vector