# Online Adaptive Statistical Compressed Sensing of Gaussian Mixture Models

A framework of online adaptive statistical compressed sensing is introduced for signals following a mixture model. The scheme first uses non-adaptive measurements, from which an online decoding scheme estimates the model selection. As soon as a candidate model has been selected, an optimal sensing scheme for the selected model continues to apply. The final signal reconstruction is calculated from the ensemble of both the non-adaptive and the adaptive measurements. For signals generated from a Gaussian mixture model, the online adaptive sensing algorithm is given and its performance is analyzed. On both synthetic and real image data, the proposed adaptive scheme considerably reduces the average reconstruction error with respect to standard statistical compressed sensing that uses fully random measurements, at a marginally increased computational complexity.


## 1 Introduction

Compressed sensing (CS) aims at achieving accurate signal reconstruction while sampling signals at a low sampling rate, typically far below the Nyquist rate. Let x ∈ R^N be a signal of interest, Φ ∈ R^{M×N} a non-adaptive sensing matrix (encoder) consisting of M measurements, y = Φx ∈ R^M the measured signal, and Δ a decoder used to reconstruct x from y. CS develops encoder-decoder pairs (Φ, Δ) such that a small reconstruction error ∥x − Δ(Φx)∥, where ∥·∥ is a norm, can be achieved.

Assuming a sparse signal model, i.e., the signal can be accurately represented in a dictionary with a few non-zero coefficients, the CS theory has shown that using random sensing matrices such as Gaussian or Bernoulli matrices with M = O(k log(N/k)) measurements, and an ℓ1-minimization or greedy matching pursuit decoder promoting sparsity, accurate signal reconstruction is possible with high probability: the obtained approximation error is tightly upper bounded by a constant times the best k-term approximation error in the sparse representation [cohen2009compressed].

While conventional CS deals with one signal at a time, statistical compressed sensing (SCS) aims at efficiently sampling a collection of signals and achieving accurate reconstruction on average. Assuming that the signals x follow a distribution with probability density function (pdf) f(x), SCS designs encoder-decoder pairs so that the average error E[∥x − Δ(Φx)∥] is small [yu2011SCS].

For signals following a Gaussian distribution, it has been shown that with

any sensing matrix of measurements and the maximum a posteriori (MAP) linear decoder, SCS leads to a mean squared error (MSE) upper bounded by a constant times the minimum MSE obtained with the -term linear approximation in the principal direction analysis (PCA) basis that is optimal for Gaussian signals [yu2011SCS]. In particular, the error bound is tight when Gaussian or Bernoulli random sensing matrix is used [yu2011SCS]. For signals generated from a Gaussian mixture model (GMM), i.e., there exist multiple Gaussian distributions and each signal is generated from one of them with an unknown index, GMMs giving more precise description of most real signals than single Gaussian models, a piecewise linear decoder that calculates the signal reconstruction from each of the Gaussian models, and then selects the best one, has been introduced [yu2011SCS, yu2010PLE]. Additional theoretical results on the Gaussian model selection accuracy and overall reconstruction have been shown in [chen2010compressive].

SCS of GMM applies non-adaptive random sensing matrices because for the signal being sensed, the Gaussian model from which it is generated is a priori unknown. If it were known, one would prefer sensing along the principal directions of the appropriate Gaussian, which leads to the minimum MSE. (This optimal MSE sensing for Gaussians is easy to prove, see the next section; optimal sensing for other distributions has recently been elegantly developed in [Carson2011], so the strategy introduced here can be extended to mixtures beyond GMMs.) More generally, assume that the signals are generated from a mixture model, and that an optimal sensing scheme is associated with each of the underlying models (e.g., following [Carson2011]). If for the current signal its model were known before sensing, the optimal sensing scheme for that distribution would be preferred over non-adaptive measurements.

This paper follows this line of thought and introduces an online adaptive sensing framework for signals generated from a mixture model. The scheme embeds an online model selection and a switch from non-adaptive to adaptive sensing. To sense a signal, non-adaptive measurements are first used, from which an online decoding scheme calculates the model selection. As soon as a model has been selected, the optimal sensing scheme of the selected model continues to apply. The final signal reconstruction is calculated from the ensemble of both the non-adaptive and the adaptive measurements.

As an important example, this online adaptive sensing is illustrated here for signals following a GMM. Not only have GMMs been shown to lead to results in the ballpark of the state of the art in various inverse problems for different types of real data [leger2010Matrix, yu2010PLE], but theoretical results on statistical compressed sensing of GMMs have also recently been given [chen2010compressive, yu2011SCS].

Section 2 recalls the main results of SCS of GMM [yu2011SCS], based on which the online SCS of GMM will be developed. An algorithm for the online adaptive SCS of GMM is then given in Section 3, and its performance is analyzed and compared against standard SCS using fully random sensing. In Section 4 the proposed online adaptive SCS is applied to real image sensing, leading to considerably improved results with respect to standard SCS at a marginally increased computational complexity. Concluding remarks and future work are discussed in Section 5.

## 2 Statistical Compressed Sensing

### 2.1 Sensing of Gaussian Models

#### 2.1.1 Optimal Principal Direction Sensing

Signals are assumed to follow a Gaussian distribution N(μ, Σ), where μ and Σ are respectively its mean and covariance. Without loss of generality, the Gaussian mean is assumed zero, μ = 0, as one can always center the signal with respect to the mean. Principal component analysis gives the orthonormal PCA basis B = [b1, …, bN] that diagonalizes the covariance matrix, Σ = BΛBᵀ, where Λ is a diagonal matrix whose diagonal elements λ1 ≥ λ2 ≥ … ≥ λN ≥ 0 are the sorted eigenvalues [mallat2008wts]. It is well known that for Gaussian signals a linear approximation in the PCA basis minimizes the mean squared error (MSE). Putting this in the signal sensing context, a sensing matrix

 Φ = [b1, …, bM]ᵀ ∈ R^{M×N}, (1)

where bn is the n-th principal direction of the Gaussian, i.e., the n-th column of B, and a linear decoder

 Δ = [b1, …, bM] ∈ R^{N×M}, (2)

minimize the MSE among all M×N sensing matrices Φ and all decoders Δ:

 σ2M ≜ min_{Φ∈R^{M×N}, Δ} E[∥x − Δ(Φx)∥2] = E[∥x − ∑_{n=1}^{M} ⟨x, bn⟩bn∥2] = ∑_{n=M+1}^{N} λn,

where σ2M denotes the minimum MSE.
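As a quick numerical check of this identity, the following sketch (using an assumed random PCA basis and power-decay spectrum, not the paper's data) compares the empirical MSE of principal direction sensing against the tail eigenvalue sum:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 16, 4

# Assumed toy model: random orthonormal PCA basis and a power-decay spectrum.
B, _ = np.linalg.qr(rng.standard_normal((N, N)))
lam = 1.0 / np.arange(1, N + 1) ** 2          # sorted eigenvalues λ1 ≥ … ≥ λN
Sigma = B @ np.diag(lam) @ B.T

Phi = B[:, :M].T          # encoder (1): first M principal directions
Delta = B[:, :M]          # linear decoder (2)

# Empirical MSE of x ↦ Δ(Φx) over samples x ~ N(0, Σ).
X = rng.multivariate_normal(np.zeros(N), Sigma, size=100000)
mse = np.mean(np.sum((X - X @ Phi.T @ Delta.T) ** 2, axis=1))
print(mse, lam[M:].sum())   # empirical MSE ≈ ∑_{n>M} λn
```

The decoder composed with the encoder is exactly the orthogonal projection onto the span of the first M principal directions, which is why the residual energy equals the sum of the trailing eigenvalues.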

#### 2.1.2 Statistical Compressed Sensing

For Gaussian signals, it has been shown that any sensing matrix Φ combined with the maximum a posteriori (MAP) linear decoder leads to an MSE upper bounded by a constant times the minimum MSE σ2M [yu2011SCS]:

###### Theorem 1

Assume x ∼ N(0, Σ). Let Φ be an M × N sensing matrix and Δ the optimal decoder, which is linear (MAP). Then

 E[∥x − Δ(Φx)∥22] ≤ C0·σ2M, (3)

where the constant C0 is defined in [yu2011SCS].

The bound constant C0 in Theorem 1 can be obtained via Monte Carlo simulations. For Gaussian and Bernoulli matrices, a small C0 has been shown, i.e., the error bound is tight [yu2011SCS].
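Such a Monte Carlo estimate can be sketched as follows; the dimensions and the power-decay spectrum are illustrative assumptions, not the settings of [yu2011SCS]:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, trials = 32, 8, 2000

lam = 1.0 / np.arange(1, N + 1) ** 2          # assumed power-decay spectrum
Sigma = np.diag(lam)                          # PCA basis taken as the identity
sigma2_M = lam[M:].sum()                      # minimum M-term linear MSE

err = 0.0
for _ in range(trials):
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian sensing matrix
    x = rng.multivariate_normal(np.zeros(N), Sigma)
    # MAP linear decoder for a single Gaussian model (cf. eq. (4) with J = 1)
    xhat = Sigma @ Phi.T @ np.linalg.solve(Phi @ Sigma @ Phi.T, Phi @ x)
    err += np.sum((x - xhat) ** 2)

C0_hat = err / trials / sigma2_M              # estimate of the constant in (3)
print(C0_hat)
```

A small ratio C0_hat confirms that, for this spectrum, random Gaussian sensing with the MAP linear decoder stays within a modest factor of the optimal principal direction sensing.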

### 2.2 Sensing of Gaussian Mixture Models

A single Gaussian distribution is often too simplistic for modeling real signals. Assuming multiple Gaussian distributions and that each signal follows one of them with an unknown index, Gaussian mixture models (GMMs) provide more precise signal descriptions.

As the Gaussian indices of the signals are unknown, the optimal principal direction sensing (1), (2) is impractical. SCS instead applies non-adaptive random matrices for signal sensing and a piecewise linear decoder for reconstruction [yu2011SCS]. The piecewise linear decoder first calculates the linear MAP estimate under each of the J Gaussian models,

 ~xj≜Δj(Φx)=ΣjΦT(ΦΣjΦT)−1(Φx),   ∀1≤j≤J, (4)

and then selects the best model that maximizes the log a-posteriori probability among all the models [yu2010PLE]

 ~j=argmax1≤j≤J−12(log|Σj|+~xTjΣ−1j~xj), (5)

whose corresponding decoder gives the final signal reconstruction:

 Δ(Φx)=Δ~j(Φx). (6)

The accuracy of the Gaussian model selection (5) and of the signal reconstruction given by the piecewise linear decoder has been shown to be influenced by a number of factors, including the geometry of the Gaussian distributions in the GMM, the signal dimension, and the number of sensing measurements [yu2011SCS]. More accurate model selection and a smaller reconstruction error are obtained as the Gaussian distributions are more “orthogonal” to one another, as each of the Gaussians is more anisotropic, as the signals are in a higher dimension (given that the energy of the signals is concentrated in the first few dimensions), and as the number of sensing measurements increases. Additional theoretical results on Gaussian model selection have been given in [chen2010compressive].
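A minimal NumPy sketch of the piecewise linear decoder (4)-(6), assuming zero-mean Gaussians with invertible covariances:

```python
import numpy as np

def piecewise_linear_decode(y, Phi, Sigmas):
    """Return the reconstruction (6) and the selected model index (5)."""
    best_x, best_score, best_j = None, -np.inf, -1
    for j, S in enumerate(Sigmas):
        # MAP linear estimate under Gaussian model j, eq. (4)
        xj = S @ Phi.T @ np.linalg.solve(Phi @ S @ Phi.T, y)
        # log a-posteriori score, eq. (5)
        _, logdet = np.linalg.slogdet(S)
        score = -0.5 * (logdet + xj @ np.linalg.solve(S, xj))
        if score > best_score:
            best_x, best_score, best_j = xj, score, j
    return best_x, best_j

# Toy usage with two "orthogonal" Gaussians (flipped eigenvalue order).
rng = np.random.default_rng(1)
N, M = 8, 4
lam = 1.0 / np.arange(1, N + 1) ** 2
Sigmas = [np.diag(lam), np.diag(lam[::-1])]
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = rng.multivariate_normal(np.zeros(N), Sigmas[0])
xhat, jhat = piecewise_linear_decode(Phi @ x, Phi, Sigmas)
```

Since each candidate x̃j in (4) satisfies Φx̃j = Φx, all candidates are consistent with the measurements; the log a-posteriori score (5) is what discriminates among the models.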

## 3 Online Adaptive Statistical Compressed Sensing

SCS of GMM applies non-adaptive random sensing matrices because for the signal to be sensed, the Gaussian model from which the signal is generated is a priori unknown. If it were known, one would then prefer sensing along the principal directions in the appropriate Gaussian, which leads to the minimum MSE.

The online adaptive SCS improves the accuracy of SCS by first selecting the Gaussian model online and then adapting the measurements as a function of the model selection. It starts by performing non-adaptive random measurements, based on which the piecewise linear decoder estimates online the Gaussian model for the signal being sensed. As soon as the Gaussian model is selected, it switches to the principal direction sensing of the selected Gaussian for the rest of the measurements. As long as the online model selection is correct, the adaptive sensing along the principal directions of the appropriate Gaussian leads to a smaller MSE than fully random sensing.

### 3.1 Algorithm

Assume that a total of M measurements are dedicated to sensing the signal. The online SCS algorithm proceeds as follows.

1. Random sensing. Sense the signal with a random matrix ΦR of MR measurements.

2. Online decoding and model selection. Decode the signal online from ΦRx using the piecewise linear decoder (4) and (5):

 ~xRj≜Δj(ΦRx)=ΣjΦTR(ΦRΣjΦTR)−1(ΦRx),   ∀1≤j≤J, (7)
 ^j=argmax1≤j≤J−12(log|Σj|+(~xRj)TΣ−1j~xRj). (8)
3. Optimal sensing. Sense the signal with the first M−MR principal direction vectors of the ^j-th Gaussian selected online in (8).

4. Decoding. Let y denote the concatenation of the measurements sensed in Steps 1 and 3. Decode the signal from y with the piecewise linear decoder (4), (5), and (6).
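The four steps above can be sketched as follows (a hypothetical implementation assuming zero-mean Gaussian models given by their PCA bases and covariances; `map_decode` plays the role of the piecewise linear decoder):

```python
import numpy as np

def map_decode(y, Phi, Sigmas):
    """Piecewise linear decoding: MAP estimates (4) + model selection (5)."""
    best_x, best_score, best_j = None, -np.inf, -1
    for j, S in enumerate(Sigmas):
        xj = S @ Phi.T @ np.linalg.solve(Phi @ S @ Phi.T, y)
        _, logdet = np.linalg.slogdet(S)
        score = -0.5 * (logdet + xj @ np.linalg.solve(S, xj))
        if score > best_score:
            best_x, best_score, best_j = xj, score, j
    return best_x, best_j

def online_adaptive_scs(x, Bs, Sigmas, M, MR, rng):
    """Bs[j]: PCA basis of Gaussian j, columns sorted by decreasing eigenvalue."""
    # Step 1: MR non-adaptive random measurements.
    PhiR = rng.standard_normal((MR, x.size)) / np.sqrt(MR)
    # Step 2: online decoding and model selection, eqs. (7)-(8).
    _, jhat = map_decode(PhiR @ x, PhiR, Sigmas)
    # Step 3: M - MR measurements along the selected principal directions.
    PhiA = Bs[jhat][:, :M - MR].T
    # Step 4: decode from the concatenated measurements.
    Phi = np.vstack([PhiR, PhiA])
    return map_decode(Phi @ x, Phi, Sigmas)

# Toy example: two "orthogonal" Gaussians with opposite eigenvalue order.
rng = np.random.default_rng(0)
N, M, MR = 16, 8, 4
lam = 1.0 / np.arange(1, N + 1) ** 3
Sigmas = [np.diag(lam), np.diag(lam[::-1])]
Bs = [np.eye(N), np.fliplr(np.eye(N))]
x = rng.multivariate_normal(np.zeros(N), Sigmas[0])
xhat, jhat = online_adaptive_scs(x, Bs, Sigmas, M, MR, rng)
```

Note that the concatenated matrix in Step 4 is signal-dependent through the online selection ^j, which is exactly the nonlinearity of the sensing discussed next.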

Contrary to conventional CS and SCS, which apply linear sensing, the sensing of the online adaptive SCS is nonlinear, as the principal direction sensing matrix in Step 3 depends on the Gaussian model selection estimated from the random measurements sensed in Step 1.

The online adaptive sensing algorithm marginally increases the computational complexity with respect to standard SCS using fully random measurements. The sensing complexity is the same, but the online SCS has an additional online decoding step. The complexity of decoding (4) is dominated by the matrix inversion, which requires O(M³) floating-point operations (flops) [yu2010PLE]. With a GMM comprised of J Gaussian distributions, the online and the final decoding steps are respectively calculated in O(J·MR³) and O(J·M³) flops. As MR < M, the additional online decoding brings a marginal increase in computational complexity.

Adjusting the number MR of random measurements in the online SCS trades off between the online Gaussian model selection accuracy in Step 2 and the signal reconstruction error in Step 4. The larger MR, the more random measurements are dedicated to model selection, and the more accurate the online Gaussian model selection is in consequence (see [chen2010compressive] for the exact bounds). Given a correct online Gaussian model selection, a smaller MR leaves a larger number M−MR of measurements along the principal directions of the appropriate Gaussian, which reduces the signal reconstruction error.

### 3.2 Performance Analysis

To better understand the performance of the online adaptive SCS, let us analyze a GMM comprised of two Gaussian distributions N(0, Σ1) and N(0, Σ2). Assume without loss of generality that the signals follow the first Gaussian distribution N(0, Σ1). The MSE of the online SCS can be written as

 E[∥x−Δ(Φx)∥2] = ∑_{^i=1}^{2} ∑_{~i=1}^{2} ∫_{^j=^i, ~j=~i} ∥x − Δ~j,^j(Φ(K)^j x)∥2 f1(x) dx, (9)

where ^j and ~j index respectively the Gaussian model selected online and at the final signal reconstruction, Φ(K)^j is the concatenation of the random sensing matrix of MR measurements and the principal direction sensing matrix of M−MR measurements in the Gaussian ^j selected online, and f1 denotes the pdf of the first Gaussian. (9) includes 4 components:

1. ^j = 1 and ~j = 1: Both the online decoding in Step 2 and the final decoding in Step 4 correctly select the Gaussian model for the signal.

2. ^j = 1 and ~j = 2: The online decoding correctly selects the Gaussian model, whereas the final decoding incorrectly selects it.

3. ^j = 2 and ~j = 1: The online decoding incorrectly selects the Gaussian model, whereas the final decoding correctly selects it.

4. ^j = 2 and ~j = 2: Both the online decoding in Step 2 and the final decoding in Step 4 incorrectly select the Gaussian model for the signal.

To further understand the behavior of the four error components, a Monte Carlo simulation is performed to check them on synthetic data. The data setup follows that in [yu2011SCS], emulating the standard behavior of image patches: the signals are of dimension N; the eigenvalues of the Gaussians follow a power decay law with a typical decay value; the two Gaussians are “orthogonal” to one another, i.e., B2 = B1·J, where B1 and B2 are the PCA bases of the two Gaussians, and J is the left-right flipped identity matrix. The sensing matrix contains M = N/4 measurements (sampling rate 1/4), and the number MR of random measurements varies between 0 and M. The Gaussian model selection is more accurate when the two Gaussians are orthogonal (see also [chen2010compressive]). On the other hand, when the online model selection is erroneous, the resulting principal direction sensing in the wrong Gaussian is the farthest possible from optimal.
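The two-Gaussian setup can be constructed as follows; the dimension N and decay exponent alpha are placeholder values, since the paper's exact numbers are not reproduced here:

```python
import numpy as np

N = 64                  # placeholder signal dimension
alpha = 2.0             # placeholder power-decay exponent
lam = 1.0 / np.arange(1, N + 1) ** alpha     # λn ∝ n^(−α), sorted decreasing

# First Gaussian: PCA basis B1 (identity here) with spectrum lam.
B1 = np.eye(N)
J = np.fliplr(np.eye(N))     # left-right flipped identity matrix
B2 = B1 @ J                  # "orthogonal" second Gaussian: B2 = B1·J

Sigma1 = B1 @ np.diag(lam) @ B1.T
Sigma2 = B2 @ np.diag(lam) @ B2.T

M = N // 4                   # sampling rate 1/4
```

With B2 = B1·J, the leading principal directions of one Gaussian are the trailing ones of the other, so when the online selection is wrong, the adaptive measurements are the worst possible for the true model.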

Figure 1 plots the four error components (normalized by the signal energy) as a function of the number MR of random measurements in the online SCS. The first component increases as MR increases: when the online model selection is correctly calculated from the MR random measurements, M−MR principal direction vectors of that Gaussian are then used to sense the signal; a larger MR leaves a smaller number of optimal principal direction measurements, and thus a larger error. The second component is constantly zero: if the online decoding correctly calculates the model selection from the MR random measurements, then after adding M−MR measurements along the principal directions of the appropriate Gaussian, the model selection in the final decoding never goes wrong. The third and fourth components decrease as MR increases: an incorrect model selection obtained by the online decoding from the first MR random measurements leads to M−MR principal direction measurements in the wrong Gaussian; in our example, the two Gaussians have opposite eigenvalue order, and these principal direction measurements in the wrong Gaussian are therefore the worst possible; a larger MR reduces the number of principal direction measurements in the wrong Gaussian and in consequence reduces the error, since having more random measurements is better than having more of the wrong measurements.

Figure 2 plots the Gaussian model selection errors of the online and final decoding as a function of MR. Both errors decrease as MR increases. The online model selection error is constantly larger than that of the final model selection, and the two converge as MR goes to M.

The sum of the four online SCS error components illustrated in Figure 1 gives the MSE of the online SCS, plotted in Figure 3 (red) as a function of MR. The curve presents a U-shape. When MR is small, the online model selection is inaccurate, so the principal direction sensing is likely performed in the wrong Gaussian, which results in a large MSE. As MR increases, the MSE first decreases and then increases, trading off between the online model selection accuracy (the larger MR, the more accurate the online model selection) and the principal direction sensing (the larger MR, the fewer principal direction measurements). The MSE of SCS using M fully random sensing measurements is plotted in the same figure (blue) for comparison. At the lowest point of the U-shaped curve, the MSE of the online SCS is a fraction of that of SCS. The online SCS thus considerably reduces the MSE of SCS.

Monte Carlo simulations further show that a similar U-shaped graph is obtained for different values of M and of the Gaussian eigenvalue decay parameter: the online SCS attains its lowest MSE with MR a small fraction of M, and the ratio between the MSE of the online adaptive SCS and that of the standard SCS becomes smaller as M and the decay parameter increase.

## 4 Experiments with Real Images

The online adaptive SCS is applied in real image sensing, and compared with SCS using fully random measurements. The latter has been reported to bring about 0.5 to 3.5 dB improvement in PSNR at various sample rates with respect to conventional CS based on sparse models [yu2011SCS].

Following common practice, an image is decomposed into non-overlapping local patches (each patch is reshaped and treated as a vector), each regarded as a signal and assumed to follow a GMM [yu2010PLE]. As illustrated in Figure 4, the GMM is comprised of geometry-motivated Gaussian models, each capturing a local direction (see [yu2010PLE] for more details). M measurements, or equivalently a sampling rate of M/N, are applied. The standard images Lena, House, and Peppers, shown in Figure 5, are used in the experiments.

Figure 6 plots the PSNR of the reconstructed patches obtained with the proposed online adaptive SCS as a function of MR, the number of first-step random measurements, in comparison with that of standard SCS. Similar to the U-shaped curve obtained on the synthetic data in Section 3.2, for all three images under test, as MR increases the PSNR of the adaptive SCS overall first rises and then decreases, converging to that of the standard SCS as MR goes to M. The largest improvement with respect to standard SCS is attained at small values of MR.

## 5 Conclusion and Future Work

An online adaptive sensing strategy has been developed for signals following a mixture model. The basic idea is to first detect online the model, and then adapt the sensing for it. Illustrated for GMMs, the framework considerably reduces the average reconstruction error with respect to standard CS using fully random measurements on both synthetic and real image data, at marginally increased complexity.

We are currently refining the proposed algorithm. The hard switch from random sensing to optimal sensing triggered by the online model selection may be improved with a sample-by-sample optimization following (9), or by extending the analysis developed in [Carson2011].

The proposed scheme embeds low-level pattern recognition (model selection) in the signal sensing and estimation problem. The pattern recognition part has value by itself and will be further explored.

Following the recent results in [Carson2011], the same type of adaptive sensing strategy can be applied to mixtures of other distributions.

Acknowledgments: Work supported by NSF, ONR, NGA, ARO, DARPA, and NSSEFF. We thank Prof. Robert Calderbank for discussion on the topics of this paper.