Binary Sparse Bayesian Learning Algorithm for One-bit Compressed Sensing

05/08/2018
by Jiang Zhu, et al.

In this letter, a binary sparse Bayesian learning (BSBL) algorithm is proposed to solve the one-bit compressed sensing (CS) problem in both the single measurement vector (SMV) and multiple measurement vectors (MMVs) settings. By utilising the Bussgang-like decomposition, the one-bit CS problem can be approximated as a standard linear model, so that the standard SBL algorithm can be naturally incorporated. Numerical results demonstrate the effectiveness of the BSBL algorithm.



I Introduction

Compressed sensing (CS) aims to reconstruct sparse signals from underdetermined measurements [1] and has many applications in magnetic resonance imaging (MRI), lensless imaging and network tomography [2, 3, 4]. Various algorithms have been proposed to solve the standard linear models (SLMs), such as the least absolute shrinkage and selection operator (lasso) [5], iterative thresholding algorithms [6], the sparse Bayesian learning (SBL) algorithm [7, 8] and approximate message passing (AMP) algorithms [9].

With the rapid development of millimeter wave (mmWave) communication technology, future communication transmission rates will be greatly improved, which means that the sampling rate of the analog-to-digital converter (ADC) must be increased. However, high-speed high-precision ADCs are either unavailable, or costly and power-hungry [10]. One approach to reducing power consumption is to adopt low-precision quantized systems (1-4 bits). In this setting, traditional algorithms for SLMs suffer performance degradation. As a result, it is of great theoretical and practical importance to study channel estimation and direction of arrival (DOA) estimation from quantized measurements, especially in the one-bit scenario.

One of the representative algorithms for solving the one-bit CS reconstruction problem is the binary iterative hard thresholding (BIHT) algorithm, which was proposed to recover the original signals from noiseless one-bit measurements and has remarkable performance in terms of reconstruction error and consistency [11]. For generalized linear models (GLMs), a generalized approximate message passing (GAMP) algorithm was proposed [12]. It has been shown that the GAMP algorithm can be applied to channel estimation for mmWave multiple input multiple output (MIMO) systems with one-bit ADCs [13]. In addition, the GAMP algorithm can be naturally incorporated into the adaptive quantization framework [14, 15]. In [16], the SBL algorithm is extended to deal with GLMs: the GLMs are iteratively approximated as SLMs, and a generalized SBL (Gr-SBL) algorithm is proposed. The Gr-SBL algorithm has also been applied to the one-bit DOA estimation problem [17], where it leverages the joint sparsity of the real and imaginary parts and thus improves the recovery performance.

In this paper, utilising the Bussgang-like decomposition [18], we transform the nonlinear one-bit model into a linear one, and then naturally propose the binary sparse Bayesian learning (BSBL) algorithm. Compared with the Gr-SBL algorithm [16, 17], which approximates the nonlinear model as a pseudo linear one with iteratively updated measurements, the binary observations of the BSBL algorithm are kept unchanged during the whole iteration process. Simulation results show its effectiveness in both the single measurement vector (SMV) and multiple measurement vectors (MMVs) settings. In addition, the BSBL algorithm can also be applied to the correlated noise scenario.

Notation: For a vector $\mathbf{a}$, $\mathrm{diag}(\mathbf{a})$ denotes a matrix whose diagonal is composed of $\mathbf{a}$. For a square matrix $\mathbf{M}$, $\mathrm{diag}(\mathbf{M})$ denotes a vector whose elements are the diagonal elements of $\mathbf{M}$. For a positive definite matrix $\mathbf{M}$, $\mathrm{diag}(\mathbf{M})^{-\frac{1}{2}}$ denotes the elementwise inverse square root of $\mathrm{diag}(\mathbf{M})$. For a scalar $a$ and a vector $\mathbf{b}$, the division operation $a/\mathbf{b}$ and power operations are applied componentwise.

II Algorithm

In this section, the BSBL algorithm for one-bit CS is proposed for both the SMV and MMVs scenarios. The key step is to iteratively transform the one-bit quantization problem into a linear model, after which the SBL algorithm can be naturally incorporated.

II-A Single measurement vector

Consider the estimation problem from one-bit measurements, extended to complex observations, described as

$\mathbf{y} = \mathrm{sign}\!\left(\Re\{\mathbf{A}\mathbf{x} + \mathbf{w}\}\right) + \mathrm{j}\,\mathrm{sign}\!\left(\Im\{\mathbf{A}\mathbf{x} + \mathbf{w}\}\right),$  (1)

where $\Re\{\cdot\}$ and $\Im\{\cdot\}$ denote the real and imaginary parts of their argument and $\mathrm{sign}(\cdot)$ returns the componentwise sign of its variable. $\mathbf{y} \in \{\pm 1 \pm \mathrm{j}\}^{m}$ denotes the complex binary-valued measurements, $\mathbf{A} \in \mathbb{C}^{m \times n}$ is a known measurement matrix, $\mathbf{w}$ is circularly symmetric Gaussian noise with covariance matrix $\mathbf{C}_{\mathbf{w}}$, and $\mathbf{x} \in \mathbb{C}^{n}$ denotes the complex amplitudes, whose number of non-zero elements is $K$. The complex observation model (1) can be equivalently expressed as

$\tilde{\mathbf{y}} = \mathrm{sign}\!\left(\tilde{\mathbf{A}}\tilde{\mathbf{x}} + \tilde{\mathbf{w}}\right),$  (2)

where

$\tilde{\mathbf{y}} = \begin{bmatrix} \Re\{\mathbf{y}\} \\ \Im\{\mathbf{y}\} \end{bmatrix}, \quad \tilde{\mathbf{x}} = \begin{bmatrix} \Re\{\mathbf{x}\} \\ \Im\{\mathbf{x}\} \end{bmatrix}, \quad \tilde{\mathbf{w}} = \begin{bmatrix} \Re\{\mathbf{w}\} \\ \Im\{\mathbf{w}\} \end{bmatrix},$  (3)

$\tilde{\mathbf{A}} = \begin{bmatrix} \Re\{\mathbf{A}\} & -\Im\{\mathbf{A}\} \\ \Im\{\mathbf{A}\} & \Re\{\mathbf{A}\} \end{bmatrix}.$  (4)
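As a quick illustration of (3) and (4), the complex-to-real stacking takes only a few lines; the following is a minimal NumPy sketch with names of our choosing, not code from the paper:

```python
import numpy as np

def complex_to_real(y, A):
    """Stack the complex one-bit model (1) into its real-valued form (2)."""
    y_t = np.concatenate([y.real, y.imag])        # (3): stacked measurements
    A_t = np.block([[A.real, -A.imag],
                    [A.imag,  A.real]])           # (4): real-valued matrix
    return y_t, A_t
```

The real-valued estimate $\tilde{\mathbf{x}}$ is unstacked back to $\mathbf{x}$ by taking its first $n$ entries as the real part and its last $n$ entries as the imaginary part.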

In the following, we focus on solving problem (2) instead of (1), as (2) can be transformed into a linear regression problem [19]. In the SBL framework, the elements of $\tilde{\mathbf{x}}$ are assigned independent Gaussian priors [20] (note that for complex signals the real and imaginary parts should share a common sparsity pattern; in this paper we assume they are independent, and some performance loss may be incurred by this assumption), i.e.,

$p(\tilde{\mathbf{x}}; \boldsymbol{\gamma}) = \prod_{i=1}^{2n} \mathcal{N}\!\left(\tilde{x}_i; 0, \gamma_i^{-1}\right),$  (5)

where $\boldsymbol{\gamma} = [\gamma_1, \dots, \gamma_{2n}]^{\rm T}$ contains the hyperparameters that control the sparsity of $\tilde{\mathbf{x}}$, $(\cdot)^{\rm T}$ denotes the transpose operator and $\mathcal{N}(\tilde{x}_i; 0, \gamma_i^{-1})$ denotes the Gaussian probability density function (PDF) of $\tilde{x}_i$ with mean 0 and variance $\gamma_i^{-1}$. Each element of $\boldsymbol{\gamma}$ is assumed to follow the Gamma distribution

$p(\gamma_i) = \Gamma(\gamma_i; \epsilon, \eta) = \dfrac{\eta^{\epsilon}}{\Gamma(\epsilon)}\,\gamma_i^{\epsilon-1} e^{-\eta \gamma_i},$  (6)

where $\Gamma(\gamma_i; \epsilon, \eta)$ denotes the Gamma distribution with shape $\epsilon$ and rate $\eta$, and $\Gamma(\epsilon)$ is the Gamma function

$\Gamma(\epsilon) = \int_0^{\infty} t^{\epsilon - 1} e^{-t}\,\mathrm{d}t.$  (7)

Note that $\epsilon = \eta = 0$ corresponds to the uninformative prior of $\boldsymbol{\gamma}$.

Now the BSBL algorithm is described in detail. At the $t$-th iteration, assume that $\tilde{\mathbf{x}} \sim \mathcal{N}\!\left(\mathbf{0}, \mathrm{diag}(\mathbf{1}/\boldsymbol{\gamma}^{(t)})\right)$, where $\mathbf{1}/\boldsymbol{\gamma}^{(t)}$ denotes the componentwise division. By utilising the Bussgang-like decomposition, model (2) can be approximated as a linear regression problem [19]

$\tilde{\mathbf{y}} = \mathbf{B}^{(t)}\,\tilde{\mathbf{x}} + \tilde{\mathbf{e}},$  (8)

where $\mathbf{B}^{(t)}$ is a linearization matrix and $\tilde{\mathbf{e}}$ is a residual error vector that includes noise and linearization artifacts. The closed-form expression of $\mathbf{B}^{(t)}$ can be easily obtained [19].

The proposed BSBL algorithm consists of two steps, as shown in Fig. 1.

Fig. 1: System diagram for performing the BSBL algorithm

In the E step, we obtain the posterior means and covariance matrix of $\tilde{\mathbf{x}}$ and pass them as the input of the M step. The M step subsequently utilises $\boldsymbol{\mu}^{(t)}$ and $\boldsymbol{\Sigma}^{(t)}$ to provide $\boldsymbol{\gamma}^{(t+1)}$ as the input of the E step. Then $\boldsymbol{\gamma}^{(t+1)}$ is used to update the model (8). This procedure proceeds until the number of iterations is reached. We now present the details. According to [19, 21], the posterior means and covariance matrix of $\tilde{\mathbf{x}}$ are

$\boldsymbol{\mu} = \boldsymbol{\Sigma}\,(\mathbf{B}^{(t)})^{\rm T}\,\mathbf{C}_{\tilde{\mathbf{e}}}^{-1}\,\tilde{\mathbf{y}},$  (9a)

$\boldsymbol{\Sigma} = \left((\mathbf{B}^{(t)})^{\rm T}\,\mathbf{C}_{\tilde{\mathbf{e}}}^{-1}\,\mathbf{B}^{(t)} + \mathrm{diag}(\boldsymbol{\gamma}^{(t)})\right)^{-1},$  (9b)

where $\mathbf{C}_{\mathbf{z}}$ is the covariance of $\mathbf{z} = \tilde{\mathbf{A}}\tilde{\mathbf{x}} + \tilde{\mathbf{w}}$ under the current prior and the Bussgang quantities are

$\mathbf{C}_{\mathbf{z}} = \tilde{\mathbf{A}}\,\mathrm{diag}(\mathbf{1}/\boldsymbol{\gamma}^{(t)})\,\tilde{\mathbf{A}}^{\rm T} + \mathbf{C}_{\tilde{\mathbf{w}}},$  (10a)

$\mathbf{B}^{(t)} = \sqrt{2/\pi}\,\mathbf{D}\,\tilde{\mathbf{A}}, \quad \mathbf{D} = \mathrm{diag}\!\left(\mathrm{diag}(\mathbf{C}_{\mathbf{z}})^{-\frac{1}{2}}\right),$  (10b)

$\mathbf{C}_{\tilde{\mathbf{e}}} = \frac{2}{\pi}\left[\arcsin\!\left(\mathbf{D}\mathbf{C}_{\mathbf{z}}\mathbf{D}\right) - \mathbf{D}\mathbf{C}_{\mathbf{z}}\mathbf{D}\right] + \frac{2}{\pi}\,\mathbf{D}\,\mathbf{C}_{\tilde{\mathbf{w}}}\,\mathbf{D},$  (10c)

where $\arcsin(\cdot)$ is applied componentwise.

To make the algorithm more stable, we propose applying a damping factor $\rho \in (0, 1]$ to the posterior means and variances as

$\boldsymbol{\mu}^{(t)} = \rho\,\boldsymbol{\mu} + (1-\rho)\,\boldsymbol{\mu}^{(t-1)},$  (11a)

$\boldsymbol{\Sigma}^{(t)} = \rho\,\boldsymbol{\Sigma} + (1-\rho)\,\boldsymbol{\Sigma}^{(t-1)}.$  (11b)

For the M step, we update $\boldsymbol{\gamma}$ as

$\gamma_i^{(t+1)} = \dfrac{1 + 2\epsilon}{\left(\mu_i^{(t)}\right)^2 + \Sigma_{ii}^{(t)} + 2\eta}, \quad i = 1, \dots, 2n.$  (12)

This closes the loop of the BSBL algorithm, which is summarized in Algorithm 1.

1:  Initialize $\boldsymbol{\gamma}^{(1)}$, $\boldsymbol{\mu}^{(0)}$ and $\boldsymbol{\Sigma}^{(0)}$; set the noise covariance matrix $\mathbf{C}_{\tilde{\mathbf{w}}}$, the number of iterations $T$ and the damping factor $\rho$;
2:  for $t = 1, \dots, T$ do
3:     Perform the E step and calculate the posterior means and variances of $\tilde{\mathbf{x}}$ as in (9).
4:     Perform the damping step (11).
5:     Perform the M step and update $\boldsymbol{\gamma}^{(t+1)}$ as in (12).
6:  end for
7:  Return $\boldsymbol{\mu}^{(T)}$.
Algorithm 1 BSBL algorithm for one-bit compressed sensing with SMV
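To make the recursion concrete, below is a minimal NumPy sketch of Algorithm 1 for the real-valued model (2), written under our reconstruction of (8)-(12); the Bussgang constants follow [19, 21], and all names and defaults (e.g., the damping value and the small regularization term) are our assumptions rather than the paper's exact choices:

```python
import numpy as np

def bsbl_smv(y, A, Cw, T=100, rho=0.8, eps=0.0, eta=0.0):
    """BSBL sketch for real-valued one-bit CS, y = sign(A x + w)."""
    m, n = A.shape
    gamma = np.ones(n)                       # precision hyperparameters
    mu, Sigma = np.zeros(n), np.diag(1.0 / np.ones(n))
    for _ in range(T):
        # Bussgang-like linearization of sign(.) under the current prior (10)
        Cz = A @ np.diag(1.0 / gamma) @ A.T + Cw
        d = 1.0 / np.sqrt(np.diag(Cz))       # elementwise inverse square root
        D = np.diag(d)
        B = np.sqrt(2.0 / np.pi) * D @ A
        R = np.clip(D @ Cz @ D, -1.0, 1.0)   # unit-diagonal correlation
        Ce = (2.0 / np.pi) * (np.arcsin(R) - R) \
             + (2.0 / np.pi) * D @ Cw @ D + 1e-9 * np.eye(m)
        # E step: posterior under the linearized model (9)
        Ce_inv = np.linalg.inv(Ce)
        Sigma_new = np.linalg.inv(B.T @ Ce_inv @ B + np.diag(gamma))
        mu_new = Sigma_new @ B.T @ Ce_inv @ y
        # Damping step (11)
        mu = rho * mu_new + (1 - rho) * mu
        Sigma = rho * Sigma_new + (1 - rho) * Sigma
        # M step: update the precisions (12)
        gamma = (1 + 2 * eps) / (mu**2 + np.diag(Sigma) + 2 * eta)
    return mu
```

For instance, with a synthetic $\tilde{\mathbf{y}} = \mathrm{sign}(\tilde{\mathbf{A}}\tilde{\mathbf{x}} + \tilde{\mathbf{w}})$, calling `bsbl_smv(y, A, Cw)` returns the damped posterior mean, whose largest-magnitude entries indicate the estimated support.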

In some practical one-bit CS scenarios, the noise variance may be unknown and needs to be estimated. However, when the noise is uncorrelated, i.e., $\mathbf{C}_{\mathbf{w}} = \sigma^2 \mathbf{I}_m$, where $\mathbf{I}_m$ denotes the identity matrix of dimension $m$, and the relative amplitudes or the support of $\mathbf{x}$ are the focus, there is no need to estimate $\sigma^2$ in Algorithm 1 if we use the uninformative prior. In fact, the DOAs are determined by the relative amplitudes, not the absolute complex amplitudes. Since $\mathrm{sign}(\cdot)$ is invariant to scaling by a positive constant, model (1) can be reformulated as

$\mathbf{y} = \mathrm{sign}\!\left(\Re\{\mathbf{A}\,(\mathbf{x}/\sigma) + \mathbf{w}/\sigma\}\right) + \mathrm{j}\,\mathrm{sign}\!\left(\Im\{\mathbf{A}\,(\mathbf{x}/\sigma) + \mathbf{w}/\sigma\}\right).$  (13)

In this setting, the variance of the additive noise is one, while $\mathbf{x}/\sigma$ preserves the relative amplitudes and the support of $\mathbf{x}$. The above analysis also applies to the Gr-SBL method, as explained in [17]. In the numerical simulations, we will verify this fact.

II-B Multiple measurement vectors

For the MMVs scenario, the model can be expressed as

$\mathbf{Y} = \mathrm{sign}\!\left(\Re\{\mathbf{A}\mathbf{X} + \mathbf{W}\}\right) + \mathrm{j}\,\mathrm{sign}\!\left(\Im\{\mathbf{A}\mathbf{X} + \mathbf{W}\}\right),$  (14)

where $\mathbf{Y} \in \{\pm 1 \pm \mathrm{j}\}^{m \times L}$ denotes the one-bit quantized measurements, $L$ is the number of snapshots and $m$ is the number of measurements per snapshot. For each snapshot, the noise $\mathbf{w}_l$ satisfies $\mathbf{w}_l \sim \mathcal{CN}(\mathbf{0}, \mathbf{C}_{\mathbf{w}})$ and is independent across the snapshots. $\mathbf{X} \in \mathbb{C}^{n \times L}$ is row sparse. The MMVs model can be decoupled as

$\mathbf{y}_l = \mathrm{sign}\!\left(\Re\{\mathbf{A}\mathbf{x}_l + \mathbf{w}_l\}\right) + \mathrm{j}\,\mathrm{sign}\!\left(\Im\{\mathbf{A}\mathbf{x}_l + \mathbf{w}_l\}\right), \quad l = 1, \dots, L.$  (15)

Therefore, we apply Algorithm 1 to each snapshot. In detail, for a given $\boldsymbol{\gamma}^{(t)}$, we obtain the posterior means and covariance matrix of $\tilde{\mathbf{x}}_l$ for each snapshot according to (9). Note that the covariance matrix is the same for all the snapshots. Let $\boldsymbol{\mu}_l^{(t)}$ and $\boldsymbol{\Sigma}^{(t)}$ denote the damped posterior means and covariance matrix for the $l$-th snapshot. We now evaluate the expected complete data log-likelihood function given by

$Q(\boldsymbol{\gamma}) = \sum_{l=1}^{L} \mathrm{E}\!\left[\log p(\tilde{\mathbf{x}}_l; \boldsymbol{\gamma})\right] + \log p(\boldsymbol{\gamma}),$  (16)

where the expectation is taken with respect to the posterior distributions of $\tilde{\mathbf{x}}_l$, $l = 1, \dots, L$. Performing the M step and setting $\partial Q(\boldsymbol{\gamma})/\partial \gamma_i = 0$, we update $\boldsymbol{\gamma}$ as

$\gamma_i^{(t+1)} = \dfrac{L + 2\epsilon}{\sum_{l=1}^{L}\left(\left(\mu_{l,i}^{(t)}\right)^2 + \Sigma_{ii}^{(t)}\right) + 2\eta}.$  (17)

The whole process is summarized in Algorithm 2.

1:  Initialize $\boldsymbol{\gamma}^{(1)}$, $\boldsymbol{\mu}_l^{(0)}$ and $\boldsymbol{\Sigma}^{(0)}$; set the noise covariance matrix $\mathbf{C}_{\tilde{\mathbf{w}}}$, the number of iterations $T$ and the damping factor $\rho$;
2:  for $t = 1, \dots, T$ do
3:     For each snapshot, perform the E step and calculate the posterior means and variances of $\tilde{\mathbf{x}}_l$ as in (9).
4:     Perform the damping step (11).
5:     Perform the M step and update $\boldsymbol{\gamma}^{(t+1)}$ as in (17).
6:  end for
7:  Return $\boldsymbol{\mu}_l^{(T)}$, $l = 1, \dots, L$.
Algorithm 2 BSBL algorithm for one-bit compressed sensing with MMVs
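Relative to the SMV sketch given after Algorithm 1, only the M step changes; a short sketch of the pooled update (17), again under our reconstruction:

```python
import numpy as np

def mmv_gamma_update(mu, Sigma, eps=0.0, eta=0.0):
    """M step (17) for MMVs: pool posterior second moments across snapshots.

    mu:    (L, n) array of damped posterior means, one row per snapshot
    Sigma: (n, n) damped posterior covariance, shared by all snapshots
    """
    L = mu.shape[0]
    second_moments = (mu ** 2).sum(axis=0) + L * np.diag(Sigma)
    return (L + 2 * eps) / (second_moments + 2 * eta)
```

With $L = 1$ this reduces to the SMV update (12), which is why Algorithm 2 can reuse the E step of Algorithm 1 unchanged.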

III Simulation

In this section, we compare the performance of the BSBL algorithm against the SVM [22], BIHT and Gr-SBL methods on the DOA estimation problem, where three signal sources impinge on a uniform linear array (ULA); the DOAs and amplitudes (in dB) follow the same scenario as in [23]. The ratio between the interelement spacing and the wavelength is fixed. All the DOAs are restricted to an angular grid, and each column of the measurement matrix $\mathbf{A}$ is the array steering vector evaluated at one grid angle. In both experiments, the signal-to-noise ratio (SNR) in the SMV and MMVs settings is defined as $\mathrm{SNR} = 20\log_{10}\left(\|\mathbf{A}\mathbf{x}\|_2/\|\mathbf{w}\|_2\right)$ and $\mathrm{SNR} = 20\log_{10}\left(\|\mathbf{A}\mathbf{X}\|_F/\|\mathbf{W}\|_F\right)$, respectively, where $\|\cdot\|_2$ and $\|\cdot\|_F$ denote the $\ell_2$ and Frobenius norms. The noise variance is then calculated according to the SNR. For both the Gr-SBL and BSBL algorithms, we set $\epsilon = \eta = 0$, which corresponds to the uninformative prior of $\boldsymbol{\gamma}$, and the number of iterations is 500; the same damping factor is used for both methods.
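For reproducibility, a ULA steering dictionary over an angular grid can be generated as follows; the half-wavelength spacing, array size and 1-degree grid are illustrative assumptions of ours, since the paper's exact values are not recoverable here:

```python
import numpy as np

def ula_dictionary(m, thetas_deg, spacing_ratio=0.5):
    """Each column is a normalized ULA steering vector on the angular grid.

    spacing_ratio is the interelement spacing over the wavelength (assumed 0.5).
    """
    thetas = np.deg2rad(np.asarray(thetas_deg, dtype=float))
    k = np.arange(m)[:, None]                  # sensor indices 0..m-1
    A = np.exp(2j * np.pi * spacing_ratio * k * np.sin(thetas)[None, :])
    return A / np.sqrt(m)

A = ula_dictionary(m=64, thetas_deg=np.arange(-90, 91))
```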

Before moving forward, we first conduct an experiment on the estimation performance of the various algorithms in a single realization. The results are presented in Fig. 2. Given that the number of sources $K$ is known, all algorithms except BIHT and SVM successfully locate the DOAs of the three sources. In the SMV scenario, Gr-SBL has the best estimation performance. In the MMVs scenario, the performance of the BSBL algorithm is comparable to that of the Gr-SBL algorithm. In addition, increasing the number of snapshots is very beneficial for the reconstruction performance.

Fig. 2: Estimation performance of various algorithms. The circle markers denote the top $K$ magnitudes, and the cross markers denote the true DOAs.

In the following, we conduct two numerical experiments with uncorrelated noise. We have found that the proposed method also works well when the noise is correlated.

In the first experiment, we compare the estimation performance of all the algorithms in terms of the normalized mean square error (NMSE). The debiased NMSE for the SMV and MMVs scenarios is defined as $\min_{c}\|c\hat{\mathbf{x}} - \mathbf{x}\|_2^2/\|\mathbf{x}\|_2^2$ and $\min_{c}\|c\hat{\mathbf{X}} - \mathbf{X}\|_F^2/\|\mathbf{X}\|_F^2$, respectively, where the scalar $c$ compensates for the amplitude ambiguity of one-bit measurements. For all four algorithms, we examine the debiased NMSE against the SNR in both the SMV and MMVs scenarios. For both the Gr-SBL and BSBL algorithms, the mismatched case corresponds to setting the input noise variance to 1 instead of the true value.
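Under this definition, the inner minimization over the scalar $c$ has the closed form $c^{\star} = \hat{\mathbf{x}}^{\rm H}\mathbf{x}/\|\hat{\mathbf{x}}\|_2^2$; a short sketch (our helper, valid for real or complex vectors, and for matrices via the Frobenius norm):

```python
import numpy as np

def debiased_nmse(x_hat, x):
    """Debiased NMSE: min_c ||c * x_hat - x||^2 / ||x||^2 in closed form."""
    num = np.vdot(x_hat, x)                 # <x_hat, x>, conjugated if complex
    den = np.vdot(x_hat, x_hat).real
    c = num / den if den > 0 else 0.0
    return np.linalg.norm(c * x_hat - x) ** 2 / np.linalg.norm(x) ** 2
```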

Fig. 3: The debiased NMSE versus SNR averaged over 10 Monte Carlo (MC) trials.

It can be seen from Fig. 3 that the performance of the BSBL and Gr-SBL methods in the mismatched scenario is the same as that in the variance-known case. The debiased NMSE of the BSBL is larger than that of the BIHT over part of the SNR range; the reason is that the number of unknown sources $K$ is available to the BIHT method. In fact, if the BSBL keeps only the top $K$ magnitudes, it achieves a lower debiased NMSE, as seen in Fig. 3. Utilizing the knowledge of $K$ significantly reduces the debiased NMSE of the BSBL algorithm in the SMV setting. In contrast, the effect of utilizing $K$ becomes smaller in the MMVs setting, because the BSBL algorithm can locate the sources more accurately there even without knowing $K$, as shown in Fig. 2. As for the running time, Table I shows that the BIHT is the fastest and the Gr-SBL with MMVs is the slowest; notably, the BSBL runs much faster with MMVs than in the SMV setting.

Fig. 4: Bin counts of the SVM, BIHT, Gr-SBL and BSBL based on 100 MC trials.
SVM: 8.3      BSBL (L = 1): 69.8      Gr-SBL (L = 1): 31.3
BIHT: 0.8     BSBL (L = 50): 15.5     Gr-SBL (L = 50): 209.4
TABLE I: Running time (seconds) of the algorithms averaged over 100 MC trials.
SVM: 89       BSBL (L = 1): 79        Gr-SBL (L = 1): 96
BIHT: 47      BSBL (L = 50): 98       Gr-SBL (L = 50): 81
TABLE II: Number of trials (out of 100 MC trials) in which the true DOAs are successfully detected.

In the second experiment, we plot the bin counts over 100 MC trials at a fixed SNR. From Fig. 4 and Table II, the Gr-SBL algorithm has the best localization results and the BIHT the worst in the SMV scenario. In the MMVs scenario, the BSBL algorithm has better estimation performance than the Gr-SBL algorithm. Overall, the BSBL method with MMVs achieves the best localization performance.

IV Conclusion

We have proposed the BSBL algorithm to cope with one-bit CS reconstruction problems. The proposed algorithm transforms the original nonlinear problem into a linear one and naturally incorporates the standard SBL framework. Algorithms for both the SMV and MMVs scenarios are introduced. Simulations demonstrate the effectiveness of the proposed algorithm, especially in the MMVs scenario.

References

  • [1] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
  • [2] M. Lustig, D. L. Donoho, J. M. Santos and J. M. Pauly, “Compressed sensing MRI,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 72-82, 2008.
  • [3] G. Huang, H. Jiang, K. Matthews and P. Wilford, “Lensless imaging by compressive sensing,” 2013 IEEE International Conference on Image Processing, pp. 2101-2105, 2013.
  • [4] M. H. Firooz and S. Roy, “Network tomography via compressed sensing,” 2010 IEEE Global Telecommunications Conference GLOBECOM, pp. 1-5, 2010.
  • [5] R. Tibshirani, “Regression shrinkage and selection via the Lasso,” Journal of the Royal Statistical Society, Series B (Methodological), vol. 58, no. 1, pp. 267-288, 1996.
  • [6] I. Daubechies, M. Defrise and C. D. Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Commun. Pure Appl. Math., vol. 57, pp. 1413-1457, 2004.
  • [7] M. E. Tipping, “Sparse Bayesian learning and the relevance vector machine,” JMLR, vol. 1, pp. 211-244, 2001.
  • [8] D. P. Wipf and B. D. Rao, “Sparse Bayesian learning for basis selection,” IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2153-2164, 2004.
  • [9] D. L. Donoho, A. Maleki and A. Montanari, “Message passing algorithms for compressed sensing,” PNAS, vol. 106, no. 45, pp. 18914-18919, 2009.
  • [10] J. Singh, O. Dabeer and U. Madhow, “On the limits of communication with low-precision analog-to-digital conversion at the receiver,” IEEE Trans. Commun., vol. 57, no. 12, pp. 3629-3639, 2009.
  • [11] L. Jacques, J. L. Laska, P. T. Boufounos and R. G. Baraniuk, “Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors,” IEEE Trans. Inf. Theory, vol. 59, no. 4, pp. 2082-2102, 2013.
  • [12] S. Rangan, “Generalized approximate message passing for estimation with random linear mixing,” in Proc. IEEE Int. Symp. Inf. Theory, pp. 2168-2172, 2011.
  • [13] J. Mo, P. Schniter, N. G. Prelcic and R. W. Heath Jr, “Channel estimation in millimeter wave MIMO systems with one-bit quantization,” Proc. IEEE Asilomar Conf. Signals Syst. Comput., pp. 957-961, 2014.
  • [14] H. Cao, J. Zhu and Z. Xu, “Adaptive one-bit quantization via approximate message passing with nearest neighbour sparsity pattern learning,” IET Signal Processing, 2018.
  • [15] U. S. Kamilov, A. Bourquard, A. Amini and M. Unser, “One-bit measurements with adaptive thresholds”, IEEE Signal Process. Lett., vol. 19, no. 10, pp. 607-610, 2012.
  • [16] X. Meng, S. Wu and J. Zhu, “A unified Bayesian inference framework for generalized linear models,” IEEE Signal Process. Lett., vol. 25, no. 3, pp. 398-402, 2018.
  • [17] X. Meng and J. Zhu, “A generalized sparse Bayesian learning algorithm for one-bit DOA estimation,” to appear in IEEE Commun. Lett., 2018.
  • [18] J. Bussgang, “Cross-correlation function of amplitude-distorted Gaussian signals,” MIT, Res. Lab. Elec. Tech. Rep. 216, Mar. 1952.
  • [19] A. S. Lan, M. Chiang and C. Studer, “Linearized binary regression,” available at https://arxiv.org/pdf/1802.00430.pdf.
  • [20] K. P. Murphy, Machine Learning: A Probabilistic Perspective, MIT Press, pp. 463-467, 2012.
  • [21] Y. Li, C. Tao, G. Seco-Granados, A. Mezghani, A. Swindlehurst and L. Liu, “Channel estimation and performance analysis of one-bit massive MIMO systems,” IEEE Trans. Signal Process., vol. 65, no. 15, pp. 4075-4089, 2017.
  • [22] Y. Gao, D. Hu, Y. Chen and Y. Ma, “Gridless 1-b DOA estimation exploiting SVM approach,” IEEE Commun. Lett., vol. 21, no. 4, Oct. 2017.
  • [23] P. Gerstoft, C. F. Mecklenbräuker, A. Xenaki and S. Nannuru, “Multisnapshot sparse Bayesian learning for DOA,” IEEE Signal Process. Lett., vol. 23, no. 10, pp. 1469-1473, 2016.