# Reconstructing Gaussian sources by spatial sampling

Consider a Gaussian memoryless multiple source with m components with joint probability distribution known only to lie in a given class of distributions. A subset of k ≤ m components are sampled and compressed with the objective of reconstructing all the m components within a specified level of distortion under a mean-squared error criterion. In Bayesian and nonBayesian settings, the notion of universal sampling rate distortion function for Gaussian sources is introduced to capture the optimal tradeoffs among sampling, compression rate and distortion level. Single-letter characterizations are provided for the universal sampling rate distortion function. Our achievability proofs highlight the following structural property: it is optimal to compress and reconstruct first the sampled components of the GMMS alone, and then form estimates for the unsampled components based on the former.



## I Introduction

Consider a set of m jointly Gaussian memoryless sources with joint probability density function (pdf) known only to belong to a given family of pdfs. A fixed subset of k ≤ m sources are sampled at each time instant and compressed jointly by a (block) source code, with the objective of reconstructing all the m sources within a specified level of distortion under a mean-squared error criterion. “Universality” requires that the sampling and lossy compression code be designed without precise knowledge of the underlying pdf. In this paper we study the tradeoffs – under optimal processing – among sampling, compression rate and distortion level. This study builds on our prior works [3, 4] on sampling rate distortion for multiple discrete sources with known joint pmf and universal sampling rate distortion for multiple discrete sources with joint pmf known only to lie in a finite class of pmfs, respectively. Here, we do not assume the class of pdfs to be finite.

Problems of combined sampling and compression have been studied extensively in diverse contexts for discrete and Gaussian sources. Relevant works include lossless compression of analog sources in an information theoretic setting [38]; compressed sensing with an allowed detection error rate or quantization distortion [29]; sub-Nyquist temporal sampling of Gaussian sources followed by lossy reconstruction [15]; and rate distortion function for multiple sources with time-shared sampling [22]. See also [13, 33].

Closer to our approach that entails spatial sampling, in a setting of distributed acoustic sensing and reconstruction, centralized as well as distributed coding schemes and sampling lattices are studied in [16]. The rate distortion function has been characterized when multiple Gaussian signals from a random field are sampled and quantized (centralized or distributed) in [27], [24, 25]. In [14], a Gaussian field on the unit interval, i.i.d. in time, is reconstructed from compressed versions of sampled sequences under a mean-squared error criterion. The rate distortion function is studied there for schemes that reconstruct only the sampled sources first and then reconstruct the unsampled sources by forming minimum mean-squared error (MMSE) estimates based on the reconstructions for the sampled sources. All the sampling problems above assume a knowledge of the underlying distribution.

In the realm of rate distortion theory where the signal statistics are not completely known, there is a rich literature that considers various formulations of universal coding; only a sampling is listed here. Directions include classical Bayesian and nonBayesian methods [41, 23, 26, 30]; “individual sequences” studies [42, 35, 36]; redundancy in quantization rate or distortion [20, 18, 19]; and lossy compression of noisy or remote signals [21, 34, 6]. These works propose a variety of distortion measures to investigate universal reconstruction performance.

Our work differs materially from the approaches above. Sampling is spatial rather than temporal. Our notion of universality involves a lack of specific knowledge of the underlying pdf in a given compact family of pdfs. Accordingly, in Bayesian and nonBayesian settings, we consider average and peak distortion criteria, respectively, with an emphasis on the former.

Our technical contributions are as follows. In Bayesian and nonBayesian settings, we extend the notion of a universal sampling rate distortion function (USRDf) [4] to Gaussian memoryless sources, with the objective of characterizing the tradeoffs among sampling, compression rate and distortion level. To this end, we consider first the setting with known underlying pdf, and characterize its sampling rate distortion function (SRDf). This uses as an ingredient the rate distortion function for a discrete “remote” source-receiver model with known distribution [8, 1, 2, 39]. When the underlying pdf is known, we show that the overall reconstruction can be performed – optimally – in two steps: the sampled sources are reconstructed first under a modified weighted mean-squared error criterion and then MMSE estimates are formed for the unsampled sources based on the reconstructions for the sampled sources. This is akin to the structure observed in [3] for reconstructing discrete sources from subsets of sources under the probability of error criterion and in [37] for reconstructing remote Gaussian sources. The SRDf for Gaussian memoryless sources with known pdf will serve as a key ingredient in characterizing the SRDf for the Gaussian field, with known distribution, previously studied in [14] in a restricted setting. Building on the ideas developed for the SRDf (with known pdf), we characterize next the USRDf for Gaussian memoryless sources in the Bayesian and nonBayesian settings and show that it remains optimal to reconstruct first the sampled sources and then form estimates for the unsampled sources based on the reconstructions of the sampled sources.

Our model is described in Section II and our main results and illustrative examples are presented in Section III. In Section IV, we present achievability proofs first when the pdf is known and then, building on it, the achievability proof for the universal setting, with an emphasis on the Bayesian setting. A unified converse proof is presented thereafter.

## II Preliminaries

Denote M = {1, …, m} and let

 X_M = (X_1, X_2, …, X_m)^T (1)

be an R^m-valued zero-mean (jointly) Gaussian random vector with a positive-definite covariance matrix Σ_M. For a nonempty set A ⊆ M with |A| = k, we denote by X_A the random (column) vector (X_i, i ∈ A), with values in R^k. Denote n repetitions of X_A, with values in R^{nk}, by X_A^n = (X_{A1}, …, X_{An}). Each X_i takes values in R. Let Y_M = (Y_1, …, Y_m) and let R^m be the reproduction alphabet for X_M. All logarithms and exponentiations are with respect to the base 2 and all norms are L^2-norms.

Let Θfootnotetext: Θ is a collection of covariance matrices indexed by τ ∈ Θ. By an abuse of notation, we shall use τ to refer to the covariance matrix Σ_{Mτ} itself. be a set of m × m positive-definite matrices, and assume Θ to be convex and compact in the Euclidean topology on R^{m × m}. For instance, for m = 2,

 Θ = { (σ_1^2  r σ_1 σ_2 ; r σ_1 σ_2  σ_2^2) : c_1 ≤ σ_1^2, σ_2^2 ≤ c_2, −d_1 ≤ r ≤ d_1 }, (2)

with 0 < c_1 ≤ c_2 and 0 ≤ d_1 < 1. Hereafter, all covariance matrices under consideration will be taken as being positive-definite without explicit mention. We assume θ to be a Θ-valued rv with a pdf ν_θ that is absolutely continuous with respect to the Lebesgue measure on Θ. We assume
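As a quick numerical sanity check of a family of the form (2) (a sketch with hypothetical bounds c_1, c_2, d_1, not values from the paper), every member is positive-definite whenever the variances are positive and |r| ≤ d_1 < 1:

```python
import numpy as np

# Hypothetical bounds for the family in (2); any 0 < c1 <= c2 and 0 <= d1 < 1 work.
c1, c2, d1 = 0.5, 2.0, 0.9

rng = np.random.default_rng(0)
for _ in range(1000):
    s1, s2 = rng.uniform(c1, c2, size=2)   # variances sigma_1^2, sigma_2^2
    r = rng.uniform(-d1, d1)               # correlation coefficient
    Sigma = np.array([[s1, r * np.sqrt(s1 * s2)],
                      [r * np.sqrt(s1 * s2), s2]])
    # det = s1*s2*(1 - r^2) > 0, so Sigma is positive-definite
    assert np.all(np.linalg.eigvalsh(Sigma) > 0)
```

The check mirrors the determinant argument: det Σ = σ_1^2 σ_2^2 (1 − r^2) > 0 whenever |r| < 1.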

 ν_θ(τ) > 0,   τ ∈ Θ,

and that ν_θ is continuous on Θ. We consider a jointly Gaussian memoryless multiple source (GMMS) consisting of i.i.d. repetitions of the rv X_M with pdf known only to the extent of belonging to the family of pdfsfootnotetext: Throughout this paper, N(μ, Σ) is used to denote the pdf of a Gaussian random vector with mean μ and covariance matrix Σ. {N(0, Σ_{Mτ}), τ ∈ Θ}. Two settings are studied: in a Bayesian formulation, the pdf ν_θ is taken to be known, while in a nonBayesian formulation θ is an unknown constant in Θ.

###### Definition 1.

For a fixed A ⊆ M with |A| = k, a k-fixed-set sampler (k-FS) collects at each t = 1, …, n, the sample X_{At} from X_{Mt}. The output of the k-FS is X_A^n = (X_{A1}, …, X_{An}).

###### Definition 2.

For n ≥ 1, an n-length block code with k-FS for a GMMS with reproduction alphabet R^m is the pair (f, φ) where the encoder f maps the k-FS output X_A^n into some finite set {1, …, J} and the decoder φ maps {1, …, J} into R^{nm}. We shall use the compact notation (f, φ), suppressing n. The rate of the code (f, φ) with k-FS is (1/n) log J.

Our objective is to reconstruct all the components of a GMMS from the compressed representations of the sampled GMMS components under a suitable distortion criterion with (single-letter) mean-squared error (MSE) distortion measure

 ||x_M − y_M||^2 = ∑_{i=1}^m (x_i − y_i)^2,  x_M, y_M ∈ R^m. (3)

For a threshold Δ > 0, an n-length block code with k-FS will be required to satisfy one of the following distortion criteria, depending on the setting.

(i) Bayesian: The expected distortion criterion is

 E[ ||X_M^n − φ(f(X_A^n))||^2 ] ≤ Δ. (4)

(ii) NonBayesian: The peak distortion criterion is

 sup_{τ ∈ Θ} E[ ||X_M^n − φ(f(X_A^n))||^2 | θ = τ ] ≤ Δ, (5)

where ||x_M^n − y_M^n||^2 denotes (1/n) ∑_{t=1}^n ||x_{Mt} − y_{Mt}||^2.

###### Definition 3.

A number R ≥ 0 is an achievable universal k-sample coding rate at distortion level Δ if for every ε > 0 and all sufficiently large n, there exist n-length block codes (f, φ) with k-FS of rate less than R + ε and satisfying the fidelity criterion in (4) or (5) above; (R, Δ) will be termed an achievable universal k-sample rate distortion pair under the expected or peak distortion criterion. The infimum of such achievable rates for each fixed Δ is denoted by R_A(Δ). We shall refer to R_A(Δ) as the universal sampling rate distortion function (USRDf), suppressing the dependence on k. For |Θ| = 1, the USRDf is termed simply the sampling rate distortion function (SRDf), denoted by ρ_A(Δ).

Remarks: (i) The USRDf under (4) is no larger than that under (5).

(ii) When |Θ| = 1, the pdf of the GMMS is, in effect, known.

Below, we recall (Chapter 1, [12]) the definition of mutual information between two random variables.

###### Definition 4.

For real-valued rvs X and Y with a joint probability distribution μ_XY, the mutual information between the rvs X and Y is given by

 I(X ∧ Y) = { E_{μ_XY}[ log (dμ_XY / d(μ_X × μ_Y))(X, Y) ],  if μ_XY ≪ μ_X × μ_Y;  ∞,  otherwise, (6)

where ≪ denotes that μ_XY is absolutely continuous with respect to μ_X × μ_Y, and dμ_XY / d(μ_X × μ_Y) is the Radon-Nikodym derivative of μ_XY with respect to μ_X × μ_Y.

## III Results

We begin with a setting where the pdf of X_M is known and provide a (single-letter) characterization for the SRDf. Next, in a brief detour, we introduce an extension of a GMMS, namely a Gaussian memoryless field (GMF), and show how the ideas developed for a GMMS can be used to characterize the SRDf for a GMF. Finally, building on the SRDf for a GMMS, a (single-letter) characterization of the USRDf is provided for a GMMS in the Bayesian and nonBayesian settings.

Throughout this paper, a recurring structural property of our achievability proofs is this: it is optimal to reconstruct the sampled GMMS components first under a (modified) weighted MSE criterion with reduced threshold and then form deterministic (MMSE) estimates of the unsampled components based on the reconstruction of the former.

Before we present our first result, we recall that for a GMMS with pdf N(0, Σ_M) reconstructed under the MSE distortion criterion, the standard rate distortion function (RDf) is

 R(Δ) = min_{μ_{X_M Y_M} ≪ μ_{X_M} × μ_{Y_M}: E[||X_M − Y_M||^2] ≤ Δ} I(X_M ∧ Y_M),  0 < Δ ≤ ∑_{i=1}^m E[X_i^2] (7)
  = ∑_{i=1}^m max{ (1/2) log(λ_i / δ), 0 }, (8)

where the λ_i s are the eigenvalues of Σ_M, and δ > 0 is chosen to satisfy ∑_{i=1}^m min{λ_i, δ} = Δ.
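The reverse water-filling step behind (8) can be sketched numerically. The helper below (an illustrative implementation, not from the paper) bisects for the water level δ satisfying ∑_i min{λ_i, δ} = Δ and returns the rate in bits:

```python
import numpy as np

def reverse_waterfill(eigenvalues, Delta):
    """Gaussian RDf via reverse water-filling, cf. (8):
    find delta with sum_i min(lambda_i, delta) = Delta, then
    R = sum_i max(0.5*log2(lambda_i/delta), 0)."""
    lam = np.asarray(eigenvalues, dtype=float)
    assert 0 < Delta <= lam.sum()
    lo, hi = 0.0, lam.max()
    for _ in range(200):  # bisect for the water level delta
        mid = 0.5 * (lo + hi)
        if np.minimum(lam, mid).sum() < Delta:
            lo = mid
        else:
            hi = mid
    delta = 0.5 * (lo + hi)
    rate = np.maximum(0.5 * np.log2(lam / delta), 0.0).sum()
    return rate, delta

# e.g. two unit-variance independent components at Delta = 1:
# delta = 0.5 and R = 1 bit
```

For instance, for eigenvalues (1, 1) and Δ = 1, the water level is δ = 1/2 and the rate is 2 · (1/2) log 2 = 1 bit.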

### III-A |Θ| = 1: Known pdf

Starting with |Θ| = 1, for a GMMS with (known) pdf N(0, Σ_M), our first result shows that the fixed-set SRDf is, in effect, the RDf of a GMMS with a weighted MSE distortion measure d_A and a reduced threshold; here d_A is given by

 d_A(x_A, y_A) ≜ (x_A − y_A)^T G_A (x_A − y_A),  x_A, y_A ∈ R^k (9)

with

 G_A = I + Σ_A^{-1} Σ_{AA^c} Σ_{AA^c}^T Σ_A^{-1}, (10)

where Σ_A = E[X_A X_A^T] and Σ_{AA^c} = E[X_A X_{A^c}^T].
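A minimal sketch of assembling G_A in (10) from the blocks of Σ_M (the function name and 0-based index convention are ours):

```python
import numpy as np

def weight_matrix(Sigma_M, A):
    """Weight matrix G_A of (10) for sampling set A (0-based indices):
    G_A = I + Sigma_A^{-1} Sigma_{AA^c} Sigma_{AA^c}^T Sigma_A^{-1}."""
    m = Sigma_M.shape[0]
    Ac = [i for i in range(m) if i not in A]          # unsampled indices
    Sigma_A = Sigma_M[np.ix_(A, A)]                    # k x k block
    Sigma_AAc = Sigma_M[np.ix_(A, Ac)]                 # k x (m-k) block
    Sinv = np.linalg.inv(Sigma_A)
    return np.eye(len(A)) + Sinv @ Sigma_AAc @ Sigma_AAc.T @ Sinv
```

For the bivariate case Σ_M = ((1, r), (r, 1)) with A = {1}, this gives the scalar G_A = 1 + r^2.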

###### Theorem 1.

For a GMMS with pdf N(0, Σ_M) and fixed A ⊆ M with |A| = k, the SRDf is

 ρ_A(Δ) = min_{μ_{X_A Y_A} ≪ μ_{X_A} × μ_{Y_A}: E[d_A(X_A, Y_A)] ≤ Δ − Δ_{min,A}} I(X_A ∧ Y_A),  Δ_{min,A} < Δ ≤ Δ_{max} (11)
  = ∑_{i=1}^k max{ (1/2) log(λ_i / δ), 0 }, (12)

where

 Δ_{min,A} = ∑_{i ∈ A^c} ( E[X_i^2] − E[X_i X_A^T] Σ_A^{-1} E[X_A X_i] ),  Δ_{max} = ∑_{i ∈ M} E[X_i^2] (13)

and the λ_i s are the eigenvalues of Σ_A^{1/2} G_A Σ_A^{1/2}, and δ > 0 is chosen to satisfy ∑_{i=1}^k min{λ_i, δ} = Δ − Δ_{min,A}.

Comparing (11) with (7), it can be seen that (11) is, in effect, the RDf for a GMMS with weighted MSE distortion measure. In contrast to the RDf (7), in (11) the minimization involves only (X_A, Y_A) (and not (X_M, Y_M)) under a weighted MSE criterion with reduced threshold level Δ − Δ_{min,A}. For A = M, i.e., k = m, however, this reduces to the RDf (7). Also, for every feasible distortion level Δ, the SRDf for any A ⊊ M is no smaller than that with A = M.

In Section IV, the achievability proof of the theorem above involves reconstructing the sampled components of the GMMS first, and then forming MMSE estimates for the unsampled components based on the former. Accordingly, in (11), the MSE in the reconstruction of the entire GMMS is captured jointly by the weighted MSE (with weight-matrix G_A) in the reconstructions of the sampled components and the minimum distortion Δ_{min,A}.
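The two-step structure above can be sketched as follows: given reconstructions y_A of the sampled components, the unsampled components are estimated by the linear MMSE map Σ_{AA^c}^T Σ_A^{-1} y_A (a hypothetical helper in the notation of (10); for zero-mean jointly Gaussian vectors the conditional mean is exactly this linear map):

```python
import numpy as np

def estimate_unsampled(Sigma_M, A, y_A):
    """MMSE (linear) estimate of the unsampled components from the
    reconstructions y_A of the sampled ones:
    hat{x}_{A^c} = Sigma_{AA^c}^T Sigma_A^{-1} y_A."""
    m = Sigma_M.shape[0]
    Ac = [i for i in range(m) if i not in A]
    Sigma_A = Sigma_M[np.ix_(A, A)]
    Sigma_AAc = Sigma_M[np.ix_(A, Ac)]
    return Sigma_AAc.T @ np.linalg.inv(Sigma_A) @ np.asarray(y_A)
```

For Σ_M = ((1, 0.5), (0.5, 1)) with A = {1} and y_A = 2, the estimate of the unsampled component is 0.5 · 2 = 1.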

Observing that (11) is equivalent to the RDf of a GMMS with a weighted MSE distortion measure enables us to provide an analytic expression for the SRDf using the standard reverse water-filling solution (12) [12]. An instance of this is shown in the example below.

###### Example 1.

For a GMMS with a k-FS with k = 1, this example illustrates the effect of the choice of the sampling set A = {j} on the SRDf. Consider a GMMS with covariance matrix Σ_M given by

 Σ_M = ( σ_1^2  r_{12}σ_1σ_2  ⋯  r_{1m}σ_1σ_m ; r_{21}σ_1σ_2  σ_2^2  ⋯  r_{2m}σ_2σ_m ; ⋮  ⋮  ⋱  ⋮ ; r_{m1}σ_1σ_m  r_{m2}σ_2σ_m  ⋯  σ_m^2 ), (14)

where r_{ij} = r_{ji}, i ≠ j ∈ M. For A = {j}, j ∈ M, we have

 G_{j} Σ_{j} = ( 1 + ∑_{i ≠ j} r_{ij}^2 σ_i^2 / σ_j^2 ) σ_j^2 = σ_j^2 + ∑_{i ≠ j} r_{ij}^2 σ_i^2 (15)

and hence from (12), the SRDf is

 ρ_{j}(Δ) = (1/2) log( (σ_j^2 + ∑_{i ≠ j} r_{ij}^2 σ_i^2) / (Δ − Δ_{min,{j}}) ) (16)
  = (1/2) log( (∑_{i=1}^m σ_i^2 − Δ_{min,{j}}) / (Δ − Δ_{min,{j}}) ) (17)

for Δ_{min,{j}} < Δ ≤ Δ_{max}, where Δ_{min,{j}} = ∑_{i ≠ j} σ_i^2 (1 − r_{ij}^2). Observe that every SRDf ρ_{j}(Δ) is a monotonically increasing function of Δ_{min,{j}} and that the SRDfs are translations of each other and hence decrease at the same rate. Thus, the SRDf with the smallest Δ_{min,{j}} is uniformly best among all fixed-set SRDfs. For k > 1, however, there may not be any A whose fixed-set SRDf is uniformly best for all distortion levels. ∎
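A numerical instance of (16)-(17) (with hypothetical unit variances and correlations, not values from the paper); the helper also returns Δ_min,{j} and checks the feasible range:

```python
import numpy as np

def srdf_single(sigma2, R, j, Delta):
    """SRDf rho_{j}(Delta) of (16) for a 1-FS sampling source j.
    sigma2: list of variances; R: symmetric correlation matrix."""
    m = len(sigma2)
    d_min = sum(sigma2[i] * (1 - R[i][j] ** 2) for i in range(m) if i != j)
    num = sigma2[j] + sum(R[i][j] ** 2 * sigma2[i] for i in range(m) if i != j)
    assert d_min < Delta <= sum(sigma2)  # feasible range of (16)
    return 0.5 * np.log2(num / (Delta - d_min)), d_min

# hypothetical 3-source instance: unit variances, r_12 = 0.8, r_13 = 0.5
rate, d_min = srdf_single([1.0, 1.0, 1.0],
                          [[1.0, 0.8, 0.5],
                           [0.8, 1.0, 0.3],
                           [0.5, 0.3, 1.0]], 0, 2.11)
```

Here Δ_min,{1} = 0.36 + 0.75 = 1.11, and the numerator in (16), 1 + 0.64 + 0.25 = 1.89, equals ∑ σ_i^2 − Δ_min,{1} = 3 − 1.11, matching (17).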

Before turning to the USRDf for a GMMS, the ideas involved in Theorem 1 are used to study sampling and lossy compression of a Gaussian field, which affords greater flexibility in the choice of sampling set. While Gaussian fields have been studied extensively under different formulations, we consider a Gaussian memoryless field (GMF) as in [14], which is described next. In lieu of M and the Gaussian rv X_M in Section II, consider the interval I = [0, 1] and let X_I = {X_u, u ∈ I} be an R^I-valued zero-mean Gaussian processfootnotetext: A Gaussian process on an interval I means that any finite collection of rvs X_{u_1}, …, X_{u_l}, u_1, …, u_l ∈ I, are jointly Gaussian. with a bounded covariance function r(u, v), u, v ∈ I, such that, for any finite B ⊂ I,

 E[X_B X_B^T] (18)

is a positive-definite matrix and

 ∫_I ∫_I |r(u, v)| du dv < ∞. (19)

A GMFfootnotetext: Extensive studies of memoryless repetitions of a Gaussian process exist, cf. [14], [24], under various terminologies. consists of i.i.d. repetitions of X_I. We consider a GMF sampled finitely by a k-FS at A = {a_1, …, a_k} ⊂ I, with a_1 < a_2 < ⋯ < a_k, and with a reconstruction alphabet R^I.

For a GMF with fixed-set sampler and MSE distortion measure

 ||x_I − y_I||^2 = ∫_I (x_u − y_u)^2 du,  x_I, y_I ∈ R^I, (20)

the sampling rate distortion function is defined as in Definitions 2 and 3, with the decoder φ characterized by a collection of mappings {φ_u, u ∈ I} with

 φ_u : {1, …, J} → R^n,  u ∈ I. (21)

Analogous to a GMMS, for a GMF sampled at A = {a_1, …, a_k}, our next result shows that the SRDf is, in effect, the RDf of a GMMS with a weighted MSE distortion measure with weight-matrix G_{A,I} given by

 G_{A,I} ≜ Σ_A^{-1} ( ∫_I E[X_A X_u] E[X_u X_A^T] du ) Σ_A^{-1}, (22)

with ∫ connoting element-wise integration. Note that for every s_1, s_2 ∈ A, (19) and the boundedness of r imply that the integral

 ∫_I r(u, s_1) r(u, s_2) du (23)

exists and hence (22) is well-defined.

###### Proposition 2.

For a GMF with k-FS A = {a_1, …, a_k} ⊂ I, the SRDf is

 ρ_A(Δ) = min_{μ_{X_A Y_A} ≪ μ_{X_A} × μ_{Y_A}: E[(X_A − Y_A)^T G_{A,I} (X_A − Y_A)] ≤ Δ − Δ_{min,A}} I(X_A ∧ Y_A),  Δ_{min,A} < Δ ≤ Δ_{max} (24)
  = ∑_{i=1}^k max{ (1/2) log(λ_i / δ), 0 }, (25)

where

 Δ_{min,A} = ∫_I ( E[X_u^2] − E[X_u X_A^T] Σ_A^{-1} E[X_A X_u] ) du  and  Δ_{max} = ∫_I E[X_u^2] du, (26)

and the λ_i s are the eigenvalues of Σ_A^{1/2} G_{A,I} Σ_A^{1/2}, and δ > 0 satisfies ∑_{i=1}^k min{λ_i, δ} = Δ − Δ_{min,A}.

The SRDf for a GMF (24) and its equivalent form (25) can be seen as counterparts of (11) and (12), with (25) being the reverse water-filling solution for (24). As before, the expression (24) is the RDf of a GMMS with a weighted MSE distortion measure. In Section IV, an achievability proof for the proposition above is provided by adapting the ideas developed for Theorem 1; a converse proof for the proposition is provided involving a set of techniques different from the converse proof provided for Theorem 1.

In contrast to a GMMS with a discrete set M, for a GMF, I being an interval affords greater flexibility in the choice of the sampling set, allowing for a better understanding of the structural properties of the “best” sampling set. In the example below, in contrast to Example 1, considering a GMF with a stationary Gauss-Markov process, we show the structure of the optimal sampling set for minimum distortion for k > 1 as well. In general, the optimal sampling set is a function of the threshold Δ.

###### Example 2.

Consider a GMF with a zero-mean, stationary Gauss-Markov process X_I over I = [0, 1] with covariance function

 r(s, u) = p^{|s−u|},  0 ≤ s, u ≤ 1, (27)

and 0 < p < 1. Note that the correlation between any two points in the interval depends only on the distance between them. For the Gauss-Markov process X_I, for any 0 ≤ u_1 < u_2 < ⋯ < u_l ≤ 1, it holds that

 X_{u_1} −∘− X_{u_2} −∘− ⋯ −∘− X_{u_l}. (28)

For a k-FS with k = 1 and A = {a}, 0 ≤ a ≤ 1,

 G_{a},I = 1 − Δ_{min,{a}} (29)

and Σ_{a} = E[X_a^2] = 1. In (25), the (sole) eigenvalue is 1 − Δ_{min,{a}} itself and hence, the SRDf is

 ρ_{a}(Δ) = (1/2) log( (1 − Δ_{min,{a}}) / (Δ − Δ_{min,{a}}) ) (30)

for Δ_{min,{a}} < Δ ≤ 1, where

 Δ_{min,{a}} = ∫_0^a ( E[X_u^2] − E^2[X_u X_a] / E[X_a^2] ) du + ∫_a^1 ( E[X_u^2] − E^2[X_u X_a] / E[X_a^2] ) du (31)
  = ∫_0^a ( 1 − p^{2(a−u)} ) du + ∫_a^1 ( 1 − p^{2(u−a)} ) du (32)
  = 1 − ( p^{2a} − 1 + p^{2(1−a)} − 1 ) / (2 ln p). (33)

Note that the SRDf is a monotonically increasing function of Δ_{min,{a}}, which in turn is a monotonically increasing function of |a − 1/2|. Thus, ρ_{1/2}(Δ) is uniformly best among all SRDfs ρ_{a}(Δ), 0 ≤ a ≤ 1, for all distortion levels. Now, for a k-FS with k > 1 and A = {a_1, …, a_k} with a_1 = 0 < a_2 < ⋯ < a_k = 1, the minimum distortion admits a simple form

 Δ_{min,A} = 1 − ∑_{i=1}^{k−1} γ(a_{i+1} − a_i), (34)

where γ is given by

 γ(a) ≜ (1 / (1 − p^{2a})) ( p^{2a}(1 − 2a ln p) − 1 ) / ln p,  0 < a ≤ 1. (35)

The minimum reconstruction error is the “sum” of the minimum error in reconstructing each segment of the GMF. Now, the Markov property (28) implies that the minimum error in reproducing each component is determined by its nearest sampled points, and hence the minimum error in reconstructing each segment [a_i, a_{i+1}] of the GMF is independent of the location of sampling points other than a_i, a_{i+1}, and is given by

 (a_{i+1} − a_i) − γ(a_{i+1} − a_i).

The stationarity of the field means that this minimum error depends on the length a_{i+1} − a_i alone. Observing that γ(a) is a concave function of a over (0, 1], Δ_{min,A} above is seen to be minimized when a_{i+1} − a_i = 1/(k − 1), i = 1, …, k − 1, i.e., when the sampling points are spaced uniformly. However, such a placement is not optimal for all distortion levels. ∎
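A numerical check of the claims above (assuming our reading of γ in (35), with a hypothetical p = 0.5): γ is concave on (0, 1], so with a_1 = 0 and a_k = 1 fixed, the uniform placement minimizes Δ_min,A in (34):

```python
import numpy as np

p = 0.5  # hypothetical correlation parameter, 0 < p < 1

def gamma(a):
    """gamma(a) of (35) for the Gauss-Markov field r(s,u) = p^|s-u|."""
    return (p ** (2 * a) * (1 - 2 * a * np.log(p)) - 1) / \
           ((1 - p ** (2 * a)) * np.log(p))

def delta_min(samples):
    """Delta_min,A of (34); samples sorted with samples[0]=0, samples[-1]=1."""
    return 1 - sum(gamma(b - a) for a, b in zip(samples, samples[1:]))

uniform = delta_min([0.0, 0.5, 1.0])  # k = 3, uniformly spaced
skewed = delta_min([0.0, 0.2, 1.0])   # same k, middle sample moved
```

Here `uniform < skewed`, consistent with the concavity of γ, and the midpoint inequality γ(0.5) > (γ(0.2) + γ(0.8))/2 can be checked directly.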

### III-B Universal setting

Turning to the universal setting with a GMMS, consider a set Θ with τ indexing the members of Θ, i.e., Θ = {Σ_{Mτ}, τ ∈ Θ}footnotetext: The collection of covariance matrices is indexed by τ ∈ Θ, and by an abuse τ will also be used to refer to Σ_{Mτ} itself. . An encoder associated with a k-FS observes X_A^n alone and cannot distinguish among jointly Gaussian pdfs in Θ that have the same marginal pdf for X_A. Accordingly (and akin to [4]), consider a partition Θ_1 of Θ comprising “ambiguity” atoms, with each atom of the partition comprising τs with identical Σ_{Aτ}, i.e., identical N(0, Σ_{Aτ}); for each τ_1 ∈ Θ_1, Λ(τ_1) is the collection of τs in the ambiguity atom indexed by τ_1, i.e.,

 Σ_{Aτ_1} ≜ Σ_{Aτ},  τ ∈ Λ(τ_1). (36)

Let θ_1 be a Θ_1-valued rv induced by θ. It is easy to see that Θ_1 and Λ(τ_1), τ_1 ∈ Θ_1, are convex, compact sets, and the rv θ_1 admits a pdf ν_{θ_1} induced by ν_θ.

In the Bayesian setting,

 ν_{X_A|θ_1=τ_1} = ν_{X_A|θ=τ} = N(0, Σ_{Aτ_1}),  τ ∈ Λ(τ_1). (37)

In the nonBayesian setting, in order to retain the same notation, we choose ν_{X_A|θ_1=τ_1} to be the right-side above.

Our characterization of the USRDf builds on the structure of the SRDf for a GMMS. Accordingly, in the Bayesian setting, consider the set of (constrained) probability measures

 κ^B_A(δ, τ_1) ≜ { μ_{θ X_M Y_M} : (θ, X_M) −∘− (θ_1, X_A) −∘− Y_M,  μ_{X_A Y_M|θ_1=τ_1} ≪ μ_{X_A|θ_1=τ_1} × μ_{Y_M|θ_1=τ_1}, (38)
  E[ ||X_M − Y_M||^2 | θ_1 = τ_1 ] ≤ δ } (39)

and the (constrained) minimized mutual information

 ρ^B_A(δ, τ_1) ≜ min_{κ^B_A(δ, τ_1)} I(X_A ∧ Y_M | θ_1 = τ_1). (40)

Correspondingly, in the nonBayesian setting, consider

 κ^{nB}_A(δ, τ_1) ≜ { μ_{X_M Y_M|θ=τ} : μ_{Y_M|X_M, θ=τ} = μ_{Y_M|X_A, θ_1=τ_1},  μ_{X_A Y_M|θ_1=τ_1} ≪ μ_{X_A|θ_1=τ_1} × μ_{Y_M|θ_1=τ_1}, (41)
  E[ ||X_M − Y_M||^2 | θ = τ ] ≤ δ,  τ ∈ Λ(τ_1) } (42)

and

 ρ^{nB}_A(δ, τ_1) ≜ inf_{κ^{nB}_A(δ, τ_1)} I(X_A ∧ Y_M | θ_1 = τ_1). (43)

Remark: In (40) and (43), the minimization is with respect to the conditional measure μ_{Y_M|X_A, θ_1=τ_1}.

The minimized conditional mutual informations above will be a key ingredient in the characterization of the USRDf. First, we show in the proposition below that (40) and (43) admit simpler forms involving rvs corresponding to the sampled components of the GMMS and their reconstructions alone. In the Bayesian setting, for each τ_1 ∈ Θ_1, the mentioned simpler form involves a weighted MSE distortion measure with weight-matrix G_{A,τ_1}, defined as in (10) with Σ_M replaced by Σ_{Mτ_1}, and

 d_{Aτ_1}(x_A, y_A) ≜ (x_A − y_A)^T G_{A,τ_1} (x_A − y_A),  x_A, y_A ∈ R^k. (44)

In the Bayesian setting, the modified distortion measure d_{Aτ_1} plays a role similar to that of d_A.

Remark: Clearly, ρ^{nB}_A(δ, τ_1) is a nonincreasing function of δ. Convexity of ρ^{nB}_A(·, τ_1) can be shown as in [32], and convexity implies the continuity of ρ^{nB}_A(·, τ_1). Now, to show the convexity, pick any δ_1, δ_2 > Δ_{min,A,τ_1} and ϵ > 0. For i = 1, 2, let μ_i ∈ κ^{nB}_A(δ_i, τ_1) be such that

 I_{μ_i}(X_A ∧ Y_M | θ_1 = τ_1) ≤ ρ^{nB}_A(δ_i) + ϵ. (45)

For μ = αμ_1 + (1 − α)μ_2 with 0 ≤ α ≤ 1, by the standard convexity arguments, it can be seen that μ ∈ κ^{nB}_A(αδ_1 + (1 − α)δ_2, τ_1) and

 I_{αμ_1+(1−α)μ_2}(X_A ∧ Y_M | θ_1 = τ_1) ≤ αρ^{nB}_A(δ_1) + (1 − α)ρ^{nB}_A(δ_2) + ϵ. (46)

Since (46) holds for any ϵ > 0, in the limit, we have

 ρ^{nB}_A(αδ_1 + (1 − α)δ_2) ≤ αρ^{nB}_A(δ_1) + (1 − α)ρ^{nB}_A(δ_2). (47)
###### Proposition 3.

For each τ_1 ∈ Θ_1, in the Bayesian setting,

 ρ^B_A(δ, τ_1) = min_{μ_{X_A Y_A|θ_1=τ_1} ≪ μ_{X_A|θ_1=τ_1} × μ_{Y_A|θ_1=τ_1}: E[d_{Aτ_1}(X_A, Y_A)|θ_1=τ_1] ≤ δ − Δ_{min,A,τ_1}} I(X_A ∧ Y_A | θ_1 = τ_1) (48)

for δ > Δ_{min,A,τ_1}, where

 Δ_{min,A,τ_1} = E[ E[ min_{y_{A^c} ∈ R^{m−k}} ∑_{i ∈ A^c} (X_i − y_i)^2 | X_A, θ_1 = τ_1 ] | θ_1 = τ_1 ]. (49)

For each τ_1 ∈ Θ_1, in the nonBayesian setting,

 ρ^{nB}_A(δ, τ_1) = inf_{E[||X_M − Y_M||^2 | θ=τ] ≤ δ, τ ∈ Λ(τ_1)} I(X_A ∧ Y_A | θ_1 = τ_1),  δ > Δ_{min,A,τ_1}, (50)

where the infimum in (50) is over μ_{Y_M|X_M, θ=τ} such that

 μ_{Y_M|X_M, θ=τ} = μ_{Y_A|X_A, θ_1=τ_1} × μ_{Y_{A^c}|Y_A, θ_1=τ_1},  τ ∈ Λ(τ_1),  and (51)
 μ_{X_A Y_A|θ_1=τ_1} ≪ μ_{X_A|θ_1=τ_1} × μ_{Y_A|θ_1=τ_1} (52)

and

 Δ_{min,A,τ_1} = inf_{μ_{Y_{A^c}|X_A, θ=τ} = μ_{Y_{A^c}|X_A, θ_1=τ_1}}  max_{τ ∈ Λ(τ_1)} ∑_{i ∈ A^c} E[ (X_i − Y_i)^2 | θ = τ ]. (53)

Remark: From (48), notice that ρ^B_A(δ, τ_1) is, in effect, the rate distortion function for a GMMS with pdf N(0, Σ_{Mτ_1}) and weighted MSE distortion measure. Hence, the minimum in (48), and ergo that in (40), exists, and the standard properties of a rate distortion function are applicable to ρ^B_A(·, τ_1) as well, i.e., it is a convex, nonincreasing, continuous function of δ.

###### Theorem 4.

For a GMMS with fixed A ⊆ M, the Bayesian USRDf is

 R_A(Δ) = min_{{Δ_{τ_1}, τ_1 ∈ Θ_1}: E[Δ_{θ_1}] ≤ Δ}  max_{τ_1 ∈ Θ_1}  ρ^B_A(Δ_{τ_1}, τ_1) (54)

for Δ_{min,A} < Δ ≤ Δ_{max}, where

 Δ_{min,A} = E[ E[ min_{y_{A^c} ∈ R^{m−k}} ∑_{i ∈ A^c} (X_i − y_i)^2 | X_A, θ_1 ] ]  and  Δ_{max} = ∑_{i ∈ M} E[X_i^2]. (55)

The nonBayesian USRDf is

 R_A(Δ) = max_{τ_1 ∈ Θ_1}  ρ^{nB}_A(Δ, τ_1) (56)

for Δ_{min,A} < Δ ≤ Δ_{max}, where

 Δ_{min,A} = sup_{τ_1 ∈ Θ_1} inf_{μ_{Y_{A^c}|X_A, θ=τ} = μ_{Y_{A^c}|X_A, θ_1=τ_1}}  max_{τ ∈ Λ(τ_1)} ∑_{i ∈ A^c} E[ (X_i − Y_i)^2 | θ = τ ]  and  Δ_{max} = max_{τ ∈ Θ} ∑_{i=1}^m E[X_i^2 | θ = τ]. (57)

Remark: In Appendix -C a simple proof (using contradiction arguments) is provided to show the existence of