On the Scalar-Help-Vector Source Coding Problem

In this paper, we consider a scalar-help-vector problem for L+1 correlated Gaussian memoryless sources. We deal with the case where L encoders observe noisy linear combinations of L correlated Gaussian scalar sources, which serve as partial side information at the decoder, while the remaining encoder observes a vector Gaussian source, the primary source that we need to reconstruct. We determine an outer region in the case where the sources are conditionally independent given the vector source. We also show an inner region in a special case where the vector source can be regarded as K scalar sources.


I Introduction

The distributed source coding problem was first introduced by Slepian and Wolf [1]. They considered a lossless coding problem for two correlated discrete memoryless information sources. Cover [2] studied a multiterminal source coding system which is an extension of Slepian and Wolf's two-terminal system; he used a symmetric compression coding scheme to determine the rate region of the Slepian-Wolf problem in [1]. Wyner [3, 4] introduced a new encoding method using one auxiliary random variable, and determined the admissible rate region for his coding system (also referred to as the one-help-one problem). Berger [5] and Tung [6] studied the two-terminal direct source coding problem. Their coding technique using auxiliary random variables, which can be regarded as an extension of that of Wyner [3, 4], is a viable tool for proving direct coding theorems. Since then, distributed source coding systems have been extended in several directions, including, but not limited to, the CEO problem, source coding problems with partial side information, and other kinds of multiterminal source coding problems.

We consider the Gaussian scalar-help-vector source coding problem in which there are L+1 encoders, and the first encoder observes a vector Gaussian source which is a corrupted version of the scalar sources. This problem is a source coding problem with partial side information at the decoder, and it can be seen as an extension of the CEO problem and of the many-help-one scalar source coding problem.

The CEO problem was first introduced by Berger et al. [7], [9], who characterized the scalar Gaussian version of the problem. Oohama [10] developed a new method and bounded the rate region by leveraging Shannon's entropy power inequality (EPI) to relate various information-theoretic quantities. Oohama [13] and Prabhakaran et al. [11] later gave a complete characterization of the rate region of the scalar Gaussian CEO problem. Pandya et al. [12] generalized the quadratic Gaussian CEO problem to the case where the observations are noisy versions of transformed remote sources which are jointly Gaussian; they derived an upper bound on the total sum rate subject to a sum distortion constraint on the remote sources.

Yang and Xiong [15] and Oohama [16] considered the generalized Gaussian CEO problem in the situation where correlated noisy Gaussian observations of a remote source are separately compressed by the encoders, and the joint decoder attempts to reproduce the remote source. They determined explicit inner and outer bounds of the rate region. Yang and Xiong [15] considered the sum-distortion constraint, while Oohama [16] considered three types of distortion constraints, all of which are based on the covariance matrix of the estimation error on the remote source.

For source coding problems with partial side information, Oohama [8] studied the one-help-one problem in which both sources are scalar memoryless Gaussian. He gave a complete characterization of the rate region: he proved the converse theorem by leveraging the EPI and gave a rigorous proof of the direct coding theorem. Oohama [13] dealt with the many-help-one problem for correlated Gaussian memoryless scalar sources; assuming that the Gaussian scalar sources are conditionally independent, he derived bounds on the rate region. The encoding technique of Oohama [13], which is used to prove the direct coding theorem, can be considered a continuous extension of that of Berger [5] and Tung [6]. Wagner et al. [14] considered a scalar-help-vector source coding problem with two encoders. They determined the rate region, deriving an outer region for the converse theorem by creating a reduced-dimensional optimization problem.

Zaidi et al. [17] established a complete characterization of the rate region of the vector Gaussian CEO problem with partial side information. They considered the case where the side information is not assumed to be conditionally independent given the remote source, under the logarithmic loss distortion measure. Here, motivated by the work of Oohama [13], [16], we consider the scalar-help-vector problem: we use a distortion matrix constraint to restrain the error between the decoded output and the original primary source and prove the converse theorem, and we also prove the direct coding theorem under the sum distortion constraint by deriving an inner bound.

The remainder of this paper is organized as follows. In Section II, we state the problem formulation and our two results. In Section III, we prove the converse coding theorem by determining the outer region stated in the previous section. In the process of deriving the outer bound, we use a result of Oohama [16] which is based on variants of the entropy power inequality. In Section IV, we prove the direct coding theorem by deriving the inner region stated in Section II. In the process of proving this theorem, we consider an encoding technique using many Gaussian auxiliary random variables, which is inspired by Oohama's work in [13]. In Section V, we briefly conclude the paper. Note that we do not make a distinction between differential (conditional) entropies and ordinary (conditional) entropies.

II Problem Description and Results

II-A Description of the Problem

In this subsection, we present a description of the many-help-one source coding problem considered in this paper. The system model is illustrated in Fig. 1, in which L correlated Gaussian scalar sources and one Gaussian vector source are treated.

Let Λ = {1, 2, ⋯, L}. Let X^K = (X_1, ⋯, X_K)^t and N'^K = (N'_1, ⋯, N'_K)^t be mutually independent Gaussian random vectors with mean zero and positive definite covariance matrices Σ_{X^K} and Σ_{N'^K}, respectively. Let N_l, l ∈ Λ, be mutually independent Gaussian random variables with mean zero and positive variances σ²_{N_l}, l ∈ Λ, independent of (X^K, N'^K). Let Y_Λ = (Y_1, ⋯, Y_L)^t = A X^K + N_Λ, where A is an L × K matrix. Let X'^K = X^K + N'^K; X'^K can be regarded as a degraded version of X^K. Let {X'^K(t)}_{t=1}^{n} and {Y_l(t)}_{t=1}^{n}, l ∈ Λ, denote n i.i.d. copies of X'^K and Y_l, respectively.
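A minimal simulation sketch of this source model: because the extraction dropped the symbols, the concrete forms X'^K = X^K + N'^K and Y_Λ = A X^K + N_Λ, and all numeric covariances below, are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

# Hypothetical instantiation: X^K are the scalar sources, X'^K = X^K + N'^K is
# the primary vector source, and the helpers observe Y = A X^K + N (assumptions).
rng = np.random.default_rng(1)
K, L, n = 2, 3, 200_000

sigma_x = np.array([[1.0, 0.4], [0.4, 1.0]])   # Sigma_{X^K}, positive definite
sigma_np = 0.2 * np.eye(K)                     # Sigma_{N'^K}
A = rng.standard_normal((L, K))                # L x K mixing matrix
sigma_nl = np.diag([0.3, 0.5, 0.4])            # helper-noise covariance

X = rng.multivariate_normal(np.zeros(K), sigma_x, size=n)        # n i.i.d. copies of X^K
Xp = X + rng.multivariate_normal(np.zeros(K), sigma_np, size=n)  # primary source X'^K
Y = X @ A.T + rng.multivariate_normal(np.zeros(L), sigma_nl, size=n)

# Empirical covariance of Y should match A Sigma_X A^t + Sigma_N.
emp = np.cov(Y, rowvar=False)
assert np.allclose(emp, A @ sigma_x @ A.T + sigma_nl, atol=0.1)
```

The last assertion checks the helpers' observation covariance against its closed form, which is the quantity the converse proof manipulates.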

In this system, the encoding functions are defined by W_0 = f_0^{(n)}({X'^K(t)}_{t=1}^{n}) and W_i = f_i^{(n)}({Y_i(t)}_{t=1}^{n}), i = 1, 2, ⋯, L, where

 f_0^{(n)}: 𝒳^{K×n} → ℳ_0 = {1, 2, ⋯, M_0}, (1)
 f_i^{(n)}: 𝒳^{n} → ℳ_i = {1, 2, ⋯, M_i}, i = 1, 2, ⋯, L. (2)

For each l ∈ {0, 1, ⋯, L}, set

 (1/n) log M_l ≤ R_l, (3)

which stands for the transmission rate of the encoding function f_l^{(n)}. The joint decoding function is defined by

 ψ^{(n)}: ℳ_0 × ℳ_1 × ℳ_2 × ⋯ × ℳ_L → 𝒳^{K×n}. (4)

The set that consists of the tuples of encoding and decoding functions satisfying (1)–(4) is denoted by .

Next, we present the rate distortion region under the distortion matrix constraint and the sum distortion constraint.

Distortion matrix constraint: A rate vector (R_0, R_1, ⋯, R_L) is said to be Σ_d-admissible if, for a given distortion matrix Σ_d, there exists a sequence such that

 limsup_{n→∞} r_l^{(n)} ≤ R_l, l = 0, 1, ⋯, L,
 limsup_{n→∞} (1/n) Σ_{X'^K − X̂'^K} ⪯ Σ_d,

where Σ_{X'^K − X̂'^K} denotes the covariance matrix which has the form expressed by Eq. (5) at the top of this page.

Sum distortion constraint: A rate vector (R_0, R_1, ⋯, R_L) is said to be D-admissible if, for a given positive D, there exists a sequence such that

 limsup_{n→∞} r_l^{(n)} ≤ R_l, l = 0, 1, ⋯, L,
 limsup_{n→∞} tr[(1/n) Σ_{X'^K − X̂'^K}] ≤ D.

Let the set of all Σ_d-admissible rate vectors and the set of all D-admissible rate vectors be denoted accordingly. Our goal is to determine an outer bound and an inner bound of the rate region.
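Both admissibility notions can be checked mechanically from the normalized error covariance; a small sketch, assuming a K = 2 example with hypothetical matrices (the `satisfies_*` helpers are illustrative names, not notation from the paper):

```python
import numpy as np

# sigma_err plays the role of (1/n) * Sigma_{X'^K - Xhat'^K}, the K x K
# normalized error covariance appearing in both constraints above.

def satisfies_sum_distortion(sigma_err: np.ndarray, D: float) -> bool:
    """Sum distortion constraint: tr[(1/n) Sigma_err] <= D."""
    return float(np.trace(sigma_err)) <= D

def satisfies_matrix_distortion(sigma_err: np.ndarray, sigma_d: np.ndarray) -> bool:
    """Distortion matrix constraint: Sigma_err <= Sigma_d in the PSD order,
    i.e. Sigma_d - Sigma_err is positive semidefinite."""
    eigvals = np.linalg.eigvalsh(sigma_d - sigma_err)
    return bool(eigvals.min() >= -1e-12)

sigma_err = np.array([[0.5, 0.1], [0.1, 0.3]])
sigma_d = np.array([[0.6, 0.0], [0.0, 0.5]])

# The matrix constraint implies the sum constraint with D = tr(Sigma_d).
assert satisfies_matrix_distortion(sigma_err, sigma_d)
assert satisfies_sum_distortion(sigma_err, float(np.trace(sigma_d)))
```

The final two checks illustrate why the matrix constraint is the stronger of the two: a PSD ordering on the error covariance bounds its trace as well.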

II-B Main Results

In this subsection, we state our results on the characterization of the rate region.

For l ∈ Λ and r_l ≥ 0, let N_l(r_l) be a Gaussian random variable with mean zero and variance σ²_{N_l}(r_l). Fix a nonnegative vector r_Λ = (r_1, r_2, ⋯, r_L). For S ⊆ Λ and θ > 0, define J'_S(θ, r_S | r_{S^c}) by

 (6)

where S^c = Λ − S, r_S = (r_l)_{l∈S}, and Σ_{N_S}(r_S) denotes the covariance matrix of the random vector N_S(r_S). We now apply these notations to present our rate region.

Set

It can be verified that the region is a closed convex set. For the sum distortion constraint, the outer bound of the rate distortion region can be expressed as .

Our main result for an outer bound of the rate distortion region is given in the following theorem.

Theorem 1

: For Gaussian sources which satisfy the following Markov chains:

, , .

The proof of Theorem 1 will be given in Section III.

Next, we present a result for an inner bound of the rate distortion region. Our presentation follows the same logic as that of [13].

Let U_0, U_1, ⋯, U_L be random variables taking values in the real lines 𝒰_0, 𝒰_1, ⋯, 𝒰_L, respectively. For S ⊆ Λ, set

 𝒢_S(D) ≜ { (U_0, U_S) : (U_0, U_S) is a Gaussian random vector that satisfies U_S → X_S → X'^K → U_0, U_B → X_{B∪{0}} → X_{S−B} → U_{S−B} for any B ⊆ S, and tr[Σ_{X'^K − ψ̃(U_0, U_S)}] ≤ D for a mapping ψ̃: 𝒰_0 × 𝒰_S → 𝒳^{K} }.

We write the subset of as , and define the subsets of by . Let π and Π denote an arbitrary permutation of Λ and the set of all permutations of Λ, respectively. Let π(S) denote the image of S under π. Set

 ℛ_{π(S)}(D) ≜ { (R_0, R_1, R_2, ⋯, R_L) : there exists a random vector (U_0, U_S) ∈ 𝒢_S(D) such that R_0 ≥ I(U_0; X'^K | U_S), R_{π(k_i)} ≥ I(U_{π(k_i)}; X_{π(k_i)} | U_{π(S−B_i)}) for i = 1, 2, ⋯, s, and R_i ≥ 0 for i ∈ Λ − S }.

and

Theorem 2: For Gaussian information sources with general correlation .

Theorem 2 provides an inner bound of the rate distortion region, and the proof of Theorem 2, which is based on the inner bound of Oohama's work in [13], can be found in Section IV.

III Proof of the Converse Theorem

In this section, we prove the inclusion .

We first observe that

 W_0 ↔ X'^K ↔ X^K ↔ Y_Λ ↔ W_Λ (7)

and

 W_S ↔ Y_S ↔ X^K ↔ Y_{S^c} ↔ W_{S^c} (8)

hold for any subset S of Λ. Assume that (R_0, R_1, ⋯, R_L) is admissible. Then, there exists a sequence of codes satisfying the rate and distortion conditions of Section II.

For any subset S ⊆ Λ, we have the following chain of inequalities:

 n R_0^{(n)} + Σ_{l∈S} n R_l^{(n)}
  ≥ log M_0 + Σ_{l∈S} log M_l
  ≥ H(W_0) + Σ_{l∈S} H(W_l)
  ≥ H(W_0) + H(W_S)
  ≥ H(W_0, W_S | W_{S^c})
  = I(W_0, W_S; X'^K | W_{S^c}) + H(W_0, W_S | W_{S^c}, X'^K)
  = I(W_0, W_S; X'^K | W_{S^c}) + H(W_S | W_{S^c}, X'^K) + H(W_0 | W_S, W_{S^c}, X'^K)
  (a)≥ I(W_0, W_S; X'^K | W_{S^c}) + Σ_{l∈S} H(W_l | X'^K)
  (b)= I(W_0, W_S; X'^K | W_{S^c}) + Σ_{l∈S} I(Y_l; W_l | X'^K) (9)

where step (a) follows from (7) and (8), and step (b) follows from the definitions of W_l and Y_l^n, l ∈ S, which imply H(W_l | Y_l^n, X'^K) = 0.
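The chain in (9) leans on the standard decomposition H(A | C) = I(A; B | C) + H(A | B, C); a toy discrete check, with (A, B, C) standing in for ((W_0, W_S), X'^K, W_{S^c}) and a randomly chosen joint pmf (purely illustrative):

```python
import numpy as np

# Verify H(A|C) = I(A;B|C) + H(A|B,C) on a random joint pmf p(a, b, c).
rng = np.random.default_rng(3)
p = rng.random((2, 3, 2))
p /= p.sum()

def H(pmf, keep):
    """Entropy (in bits) of the marginal over the axes listed in `keep`."""
    drop = tuple(i for i in range(pmf.ndim) if i not in keep)
    m = pmf.sum(axis=drop) if drop else pmf
    m = m[m > 0]
    return float(-(m * np.log2(m)).sum())

H_AC, H_BC = H(p, (0, 2)), H(p, (1, 2))
H_C, H_ABC = H(p, (2,)), H(p, (0, 1, 2))

H_A_given_C = H_AC - H_C
H_A_given_BC = H_ABC - H_BC
I_AB_given_C = H_AC + H_BC - H_C - H_ABC   # I(A;B|C) via joint entropies

assert np.isclose(H_A_given_C, I_AB_given_C + H_A_given_BC)
```

Because the identity is distribution-free, the same decomposition applies verbatim to the conditional entropies of the messages in (9).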

Due to the fact that:

 (10)

we need to find a lower bound of I(X'^K; X̂'^K) and an upper bound of I(X'^K; W_S). We have the following chain of inequalities for a lower bound of I(X'^K; X̂'^K):

 I(X'^K; X̂'^K) = h(X'^K) − h(X'^K | X̂'^K)
  ≥ h(X'^K) − h(X'^K − X̂'^K | X̂'^K)
  ≥ h(X'^K) − h(X'^K − X̂'^K)
  ≥ (n/2) log[(2πe)^K |Σ_{X'^K}|] − (n/2) log[(2πe)^K |(1/n) Σ_{X'^K − X̂'^K}|]
  = (n/2) log( |Σ_{X'^K}| / |(1/n) Σ_{X'^K − X̂'^K}| ). (11)

We have the following chain of equalities for an upper bound of I(X'^K; W_S):

 I(X'^K; W_S) = h(X'^K) − h(X'^K | W_S)
  = (n/2) log[(2πe)^K |Σ_{X'^K}|] − h(X'^K | W_S). (12)

Furthermore, h(X'^K | W_S) can be expanded as follows:

 (13)

We shall first derive a lower bound of h(X'^K | W_S, X^K):

 h(X'^K | W_S, X^K) = h(X'^K | X^K) − I(X'^K; W_S | X^K)
  ≥ h(X'^K | X^K) − I(Y_S; W_S | X^K)
  = (n/2) log[(2πe)^K |Σ_{N'^K}|] − Σ_{l∈S} I(Y_l; W_l | X'^K). (14)

Then, we observe that

 I(X^K; X'^K) = h(X'^K) − h(X'^K | X^K) = (n/2) log( |Σ_{X'^K}| / |Σ_{N'^K}| ). (15)

Finally, we estimate a lower bound of the remaining term in (13). We first state a lemma given in Oohama [16]; this lemma is based on the EPI and serves as the key to this derivation.

Lemma 1: For any S ⊆ Λ,

 I(X^K; W_S) ≤ (n/2) log| I + Σ_{X^K} A^t Σ^{−1}_{N_S}(r_S^{(n)}) A |, (16)

then, according to the definition of r_S^{(n)}, we need to prove that the above inequality holds. The derivation is given in the Appendix.
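In the uncompressed extreme W_S = Y_S, the Gaussian analogue of the bound in Lemma 1 holds with equality per letter; a numerical sanity check of that identity, assuming the model Y = A X + N with the plain noise covariance Σ_{N_S} in place of Σ_{N_S}(r_S^{(n)}) (dimensions and covariances below are hypothetical):

```python
import numpy as np

# For Gaussian X ~ N(0, Sigma_X), N ~ N(0, Sigma_N) independent, Y = A X + N:
# I(X; Y) = (1/2)[log det(Sigma_Y) - log det(Sigma_N)]
#         = (1/2) log det(I + Sigma_X A^t Sigma_N^{-1} A)
# (the two forms agree by Sylvester's determinant identity).
rng = np.random.default_rng(0)
K, L = 2, 3
A = rng.standard_normal((L, K))
B = rng.standard_normal((K, K)); sigma_x = B @ B.T + np.eye(K)   # positive definite
C = rng.standard_normal((L, L)); sigma_n = C @ C.T + np.eye(L)

sigma_y = A @ sigma_x @ A.T + sigma_n                   # covariance of Y
i_hy = 0.5 * (np.linalg.slogdet(sigma_y)[1] - np.linalg.slogdet(sigma_n)[1])
i_det = 0.5 * np.linalg.slogdet(
    np.eye(K) + sigma_x @ A.T @ np.linalg.inv(sigma_n) @ A
)[1]

assert np.isclose(i_hy, i_det)   # Sylvester's determinant identity
```

Any further compression of Y_S can only reduce I(X^K; W_S), which is why the determinant expression serves as an upper bound in the lemma.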

Then, combining the above bounds, i.e., Eqs. (14), (15), (16), and (10)–(13), we obtain Eq. (17) shown at the top of this page. Since the remaining term is nonnegative, we have

 I(W_0, W_S; X'^K | W_{S^c}) + Σ_{l∈S} I(Y_l; W_l | X'^K) ≥ n J'_S(θ, r_S | r_{S^c}), (18)

where J'_S(θ, r_S | r_{S^c}) is defined earlier by (6). Combining (9) and (18), we have

 R_0^{(n)} + Σ_{l∈S} R_l^{(n)} ≥ J'_S( |(1/n) Σ_{X'^K − X̂'^K}|, r_S | r_{S^c} ), S ⊆ Λ. (19)

On the other hand, we have

 Σ^{−1}_{X^K} + A^t Σ^{−1}_{N_S}(r_S^{(n)}) A ⪰ ( (1/n) Σ_{X'^K − X̂'^K} )^{−1}. (20)

Letting n → ∞ and combining the above results, we obtain

 R_0^{(n)} + Σ_{l∈S} R_l^{(n)} ≥ J'_S( |Σ_d|, r_S | r_{S^c} ), (21)
 Σ^{−1}_{X^K} + A^t Σ^{−1}_{N_Λ}(r_Λ) A ⪰ Σ^{−1}_d. (22)

From (21), (22), we have the inclusion . Furthermore, we have

IV Proof of the Direct Coding Theorem

In this section, we prove the inclusion . We provide encoding and decoding schemes which are based on the random coding arguments developed by Oohama [13].

Let , . Let . For , we write , , , as , , , , respectively.

Suppose that and is the identity map. In this case,

 ℛ̃_{π(S)} ≜ { (R_0, R_1, ⋯, R_L) : there exists a random vector (U_0, U_S) ∈ 𝒢_S(D) such that R_i ≥ I(U_i; X_i | U_{{i+1, ⋯, s}}) for i = 0, 1, ⋯, s−1, R_s ≥ I(U_s; X_s), and R_i ≥ 0 for s+1 ≤ i ≤ L }.

We next prove the inclusion under the sum distortion constraint. Our proof follows the same logic as that of Oohama’s work in [13].

IV-A Distortion Typical Sequences

Set and . Since , there exists a mapping such that .

Furthermore, define

 ψ̃(u) = ( ψ̃(u_1), ψ̃(u_2), ⋯, ψ̃(u_n) ),
 d(x_0, ψ̃(u)) = Σ_{k=1}^{K} Σ_{t=1}^{n} d(x'_{k,t}, ψ̃(u_t)).

Fix any ε > 0. A sequence pair is called distortion typical if

 | −(1/n) log p^n_{U_A X_B}(u_A, x_B) − H(U_A, X_B) | ≤ ε,
 | (1/n) d(x_0, ψ̃(u)) − Σ_{k=1}^{K} E d(X'_k, ψ̃(U)) | ≤ ε.

We denote the set of all distortion typical sequences by . The definitions and properties of typical sets that we use to prove our direct coding theorem are similar to those given in [13], so we omit the details here.
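Distortion typicality is, at bottom, a law-of-large-numbers statement: the empirical per-letter distortion of an i.i.d. pair concentrates within ε of its expectation. A toy sketch with a scalar Gaussian pair, squared-error distortion, and a hypothetical linear reconstruction map ψ̃(u) = 0.5u (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, eps = 100_000, 0.05

u = rng.standard_normal(n)        # stand-in for the auxiliary sequence
x = u + rng.standard_normal(n)    # source correlated with u

def psi(u):
    """Hypothetical per-letter reconstruction map."""
    return 0.5 * u

d_emp = np.mean((x - psi(u)) ** 2)   # (1/n) d(x, psi(u))
d_exp = 0.25 * 1.0 + 1.0             # E[(X - 0.5 U)^2] = 0.25 Var U + Var N = 1.25

assert abs(d_emp - d_exp) <= eps     # the pair is distortion typical
```

For large n the deviation is of order 1/√n, which is why the random coding argument can afford a fixed ε in both typicality conditions.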

IV-B Pre-encoding

For each , generate i.i.d. sequences . We denote these sequences by , , which are also independent of each other over . We define the pre-encoder functions as , , which are the mappings from , for , and , for .

If there exists an integer such that , and for , then ; if there is no index satisfying the above conditions, .

For , and , we define the cells

 B_{m_i} = { (m_i − 1) b_i + 1, (m_i − 1) b_i + 2, ⋯, m_i b_i },

where ; there is no loss of generality in assuming an integer value.

IV-C Coding Scheme

Encoding: For each , , and , , we define the encoder function and by , when and for .

Decoding: For a received vector , pick an index vector such that . If there does not exist a unique vector satisfying the above condition, output an arbitrary vector from and adopt the output vector