I Introduction
Compressed sensing (CS) aims to reconstruct sparse signals from underdetermined measurements [1] and has many applications in magnetic resonance imaging (MRI), lensless imaging and network tomography [2, 3, 4]. Various algorithms have been proposed to solve the standard linear models (SLMs), such as the least absolute shrinkage and selection operator (lasso) [5], iterative thresholding algorithms [6], the sparse Bayesian learning (SBL) algorithm [7, 8] and approximate message passing (AMP) algorithms [9].
With the rapid development of millimeter wave (mmWave) communication technology, future communication transmission rates will be greatly improved, which means that the sampling rate of the analog-to-digital converter (ADC) must be increased. However, high-speed, high-precision ADCs are either unavailable, or are costly and power-hungry [10]. One approach to reduce power consumption is to adopt low-precision quantized systems (1-4 bits). In this setting, traditional algorithms for SLMs suffer performance degradation. As a result, it is of great theoretical and practical importance to study channel estimation and direction of arrival (DOA) estimation from quantized measurements, especially in the one-bit scenario.
One of the representative algorithms for solving the one-bit CS reconstruction problem is the binary iterative hard thresholding (BIHT) algorithm, which was proposed to recover the original signals from noiseless one-bit measurements and has remarkable performance in terms of both reconstruction error and consistency [11]. For generalized linear models (GLMs), a generalized approximate message passing (GAMP) algorithm was proposed [12]. It has been shown that the GAMP algorithm can be applied to channel estimation for mmWave multiple-input multiple-output (MIMO) systems with one-bit ADCs [13]. In addition, the GAMP algorithm can be naturally incorporated into the adaptive quantization framework [14, 15]. In [16], the SBL algorithm is extended to deal with GLMs, where the GLMs are iteratively approximated as SLMs, and a generalized SBL (GrSBL) algorithm is proposed. The GrSBL algorithm is further applied to the one-bit DOA estimation problem [17], where it leverages the joint sparsity of the real and imaginary parts and thus improves the recovery performance.
In this paper, utilizing the Bussgang-like decomposition [18], we transform the nonlinear model into a linear one and naturally obtain a binary sparse Bayesian learning (BSBL) algorithm. Compared to the GrSBL algorithm [16, 17], which approximates the nonlinear model as a pseudo-linear one with iteratively updated measurements, the binary observations in the BSBL algorithm are kept unchanged during the whole iteration process. Simulation results show its effectiveness for both the single measurement vector (SMV) and multiple measurement vectors (MMVs) settings. In addition, the BSBL algorithm can also be applied to correlated noise scenarios.
Notation: For a vector $\mathbf{x}$, let $\operatorname{diag}(\mathbf{x})$ denote a matrix whose diagonal is composed of $\mathbf{x}$. For a square matrix $\mathbf{X}$, let $\operatorname{diag}(\mathbf{X})$ denote a vector whose elements are the diagonal elements of $\mathbf{X}$. For a positive definite matrix $\mathbf{C}$, let $\operatorname{diag}^{-\frac{1}{2}}(\mathbf{C})$ denote the diagonal matrix formed by the elementwise inverse square root of the diagonal of $\mathbf{C}$. For a scalar $a$ and a vector $\mathbf{x}$, the division operation $a/\mathbf{x}$ and the power operation are applied componentwise.
II Algorithm
In this section, the BSBL algorithm for one-bit CS is proposed for both the SMV and MMVs scenarios. The key step is to iteratively transform the one-bit quantization problem into a linear model, after which the SBL algorithm can be naturally incorporated.
II-A Single measurement vector
Consider the estimation problem from one-bit measurements described as (extended to complex observations)

$\mathbf{y} = \operatorname{sign}\left(\Re(\mathbf{A}\mathbf{x}+\mathbf{w})\right) + \mathrm{j}\operatorname{sign}\left(\Im(\mathbf{A}\mathbf{x}+\mathbf{w})\right),$   (1)

where $\Re(\cdot)$ and $\Im(\cdot)$ denote the real and imaginary parts of their argument, and $\operatorname{sign}(\cdot)$ returns the componentwise sign of its argument. $\mathbf{y} \in \{\pm 1 \pm \mathrm{j}\}^{M}$ denotes the complex binary-valued measurements, $\mathbf{A} \in \mathbb{C}^{M \times N}$ is a known measurement matrix, and $\mathbf{w}$ is circularly symmetric Gaussian noise with covariance matrix $\mathbf{C}_{\mathbf{w}}$. $\mathbf{x} \in \mathbb{C}^{N}$ denotes the complex amplitudes, whose number of nonzero elements is $K$. The complex observation model (1) can be equivalently expressed as
$\tilde{\mathbf{y}} = \operatorname{sign}\left(\tilde{\mathbf{A}}\tilde{\mathbf{x}} + \tilde{\mathbf{w}}\right),$   (2)

where $\tilde{\mathbf{y}} = [\Re(\mathbf{y})^{\mathrm{T}}, \Im(\mathbf{y})^{\mathrm{T}}]^{\mathrm{T}}$, $\tilde{\mathbf{w}} = [\Re(\mathbf{w})^{\mathrm{T}}, \Im(\mathbf{w})^{\mathrm{T}}]^{\mathrm{T}}$ and

$\tilde{\mathbf{A}} = \begin{bmatrix} \Re(\mathbf{A}) & -\Im(\mathbf{A}) \\ \Im(\mathbf{A}) & \Re(\mathbf{A}) \end{bmatrix},$   (3)

$\tilde{\mathbf{x}} = [\Re(\mathbf{x})^{\mathrm{T}}, \Im(\mathbf{x})^{\mathrm{T}}]^{\mathrm{T}}.$   (4)
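As a sanity check, the complex-to-real stacking in (2)-(4) can be verified numerically; the helper names below are ours, not the paper's:

```python
import numpy as np

# A minimal sketch of the mapping in (2)-(4): the complex model
# y = sign(Re(Ax+w)) + j sign(Im(Ax+w)) becomes a real model with the
# stacked matrix [[Re A, -Im A], [Im A, Re A]] and vector [Re x; Im x].
def real_stack_matrix(A):
    return np.block([[A.real, -A.imag], [A.imag, A.real]])

def real_stack_vector(x):
    return np.concatenate([x.real, x.imag])

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# sign applied separately to real and imaginary parts, as in (1) (noiseless here)
y_complex = np.sign((A @ x).real) + 1j * np.sign((A @ x).imag)
y_real = np.sign(real_stack_matrix(A) @ real_stack_vector(x))
assert np.allclose(real_stack_vector(y_complex), y_real)
```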
In the following, we focus on solving problem (2) instead of (1), as (2) can be transformed into a linear regression problem [19]. In the SBL framework, the priors of the elements of $\tilde{\mathbf{x}}$ are independent and identically distributed (i.i.d.) Gaussian random variables [20],^1 i.e.,

$p(\tilde{\mathbf{x}}; \boldsymbol{\gamma}) = \prod_{i=1}^{2N} \mathcal{N}(\tilde{x}_i; 0, \gamma_i),$   (5)

^1 One can see that for complex signals, the real and imaginary parts should share some common sparsity. In this paper, we assume that they are independent, and some performance loss may be incurred due to this assumption.
where $\boldsymbol{\gamma} = [\gamma_1, \ldots, \gamma_{2N}]^{\mathrm{T}}$ contains the hyperparameters that control the sparsity of $\tilde{\mathbf{x}}$, $(\cdot)^{\mathrm{T}}$ denotes the transpose operator, and $\mathcal{N}(\tilde{x}_i; 0, \gamma_i)$ denotes the Gaussian probability density function (PDF) of $\tilde{x}_i$ with mean $0$ and variance $\gamma_i$. Each element of $\boldsymbol{\gamma}$ is assumed to follow the Gamma distribution expressed as
$p(\gamma_i) = \operatorname{Gamma}(\gamma_i; \epsilon, \eta) = \frac{\eta^{\epsilon}}{\Gamma(\epsilon)}\, \gamma_i^{\epsilon-1} e^{-\eta \gamma_i},$   (6)

where $\operatorname{Gamma}(\cdot\,; \epsilon, \eta)$ denotes the Gamma distribution with shape $\epsilon$ and rate $\eta$, and $\Gamma(\cdot)$ is the Gamma function

$\Gamma(\epsilon) = \int_{0}^{\infty} t^{\epsilon-1} e^{-t}\,\mathrm{d}t.$   (7)

Note that $\epsilon \rightarrow 0$ and $\eta \rightarrow 0$ correspond to the uninformative prior of $\gamma_i$.
Now the BSBL algorithm is described in detail. For the $t$-th iteration, let $\boldsymbol{\gamma}^{t}$ denote the current estimate of the hyperparameters. By utilizing the Bussgang-like decomposition, model (2) can be approximated as the linear regression problem [19]

$\tilde{\mathbf{y}} = \mathbf{J}^{t}\tilde{\mathbf{x}} + \mathbf{e}^{t},$   (8)

where $\mathbf{J}^{t}$ is a linearization matrix and $\mathbf{e}^{t}$ is a residual error vector that includes noise and linearization artifacts. The closed-form expression of $\mathbf{J}^{t}$ can be easily obtained [19].
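For intuition, the Bussgang gain of the sign quantizer admits a well-known closed form (shown here in our own notation, not necessarily the exact parameterization of [19]): for zero-mean Gaussian $r$ with covariance $C$, $\operatorname{sign}(r) = Br + q$ with $B = \sqrt{2/\pi}\operatorname{diag}^{-1/2}(C)$ and $q$ uncorrelated with $r$.

```python
import numpy as np

# Diagonal Bussgang gain of the sign quantizer for zero-mean Gaussian input r
# with covariance C: sign(r) ~= B r + q, where B = sqrt(2/pi) * diag(C)^(-1/2).
def bussgang_gain_diag(C):
    return np.sqrt(2.0 / np.pi) / np.sqrt(np.diag(C))

# Monte-Carlo check of the scalar case: E[sign(r) r] / E[r^2] -> sqrt(2/pi)/sigma
rng = np.random.default_rng(1)
sigma = 2.0
r = sigma * rng.standard_normal(200_000)
empirical = np.mean(np.sign(r) * r) / np.mean(r ** 2)
assert abs(empirical - np.sqrt(2.0 / np.pi) / sigma) < 1e-2
```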
The proposed BSBL algorithm consists of two steps, as shown in Fig. 1.
In the E step, we obtain the posterior mean $\boldsymbol{\mu}^{t}$ and covariance matrix $\boldsymbol{\Sigma}^{t}$ of $\tilde{\mathbf{x}}$ and pass them as the input of the M step. The M step subsequently utilizes $\boldsymbol{\mu}^{t}$ and $\boldsymbol{\Sigma}^{t}$ to provide $\boldsymbol{\gamma}^{t+1}$ as the input of the E step, and $\boldsymbol{\gamma}^{t+1}$ is then used to update model (8). This procedure proceeds until the maximum number of iterations is reached. We now present the details. For the E step, we are interested in obtaining the posterior mean and covariance matrix of $\tilde{\mathbf{x}}$. According to [19, 21], they are
$\boldsymbol{\mu}^{t} = \boldsymbol{\Sigma}^{t}(\mathbf{J}^{t})^{\mathrm{T}}(\mathbf{C}_{e}^{t})^{-1}\tilde{\mathbf{y}},$   (9a)

$\boldsymbol{\Sigma}^{t} = \left((\mathbf{J}^{t})^{\mathrm{T}}(\mathbf{C}_{e}^{t})^{-1}\mathbf{J}^{t} + (\boldsymbol{\Gamma}^{t})^{-1}\right)^{-1},$   (9b)

where $\boldsymbol{\Gamma}^{t} = \operatorname{diag}(\boldsymbol{\gamma}^{t})$ and

$\mathbf{C}^{t} = \tilde{\mathbf{A}}\boldsymbol{\Gamma}^{t}\tilde{\mathbf{A}}^{\mathrm{T}} + \mathbf{C}_{\tilde{\mathbf{w}}},$   (10a)

$\mathbf{J}^{t} = \sqrt{\tfrac{2}{\pi}}\operatorname{diag}^{-\frac{1}{2}}(\mathbf{C}^{t})\,\tilde{\mathbf{A}},$   (10b)

$\mathbf{C}_{e}^{t} = \tfrac{2}{\pi}\left[\arcsin\left(\mathbf{D}^{t}\mathbf{C}^{t}\mathbf{D}^{t}\right) - \mathbf{D}^{t}\mathbf{C}^{t}\mathbf{D}^{t}\right] + \tfrac{2}{\pi}\mathbf{D}^{t}\mathbf{C}_{\tilde{\mathbf{w}}}\mathbf{D}^{t},$   (10c)

where $\mathbf{D}^{t} = \operatorname{diag}^{-\frac{1}{2}}(\mathbf{C}^{t})$ and $\mathbf{C}_{\tilde{\mathbf{w}}}$ is the covariance matrix of $\tilde{\mathbf{w}}$.
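Given any linearized model of the form (8) with residual covariance and prior variances, the E step is the standard linear-Gaussian posterior; the sketch below (our notation) is generic SBL machinery rather than the paper's exact implementation:

```python
import numpy as np

# Standard linear-Gaussian E step: prior x ~ N(0, diag(gamma)),
# likelihood y = J x + e with e ~ N(0, Ce).
def sbl_e_step(J, y, gamma, Ce):
    Ce_inv = np.linalg.inv(Ce)
    Sigma = np.linalg.inv(J.T @ Ce_inv @ J + np.diag(1.0 / gamma))
    mu = Sigma @ J.T @ Ce_inv @ y
    return mu, Sigma

# Tiny scalar example: J = 1, Ce = 1, gamma = 1, y = 2 gives Sigma = 0.5, mu = 1
mu, Sigma = sbl_e_step(np.array([[1.0]]), np.array([2.0]),
                       np.array([1.0]), np.array([[1.0]]))
assert np.allclose(Sigma, [[0.5]]) and np.allclose(mu, [1.0])
```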
To make the algorithm more stable, we propose applying a damping factor $\rho \in (0, 1]$ to the posterior means and variances as

$\boldsymbol{\mu}^{t} \leftarrow \rho\,\boldsymbol{\mu}^{t} + (1-\rho)\,\boldsymbol{\mu}^{t-1},$   (11a)

$\operatorname{diag}(\boldsymbol{\Sigma}^{t}) \leftarrow \rho\operatorname{diag}(\boldsymbol{\Sigma}^{t}) + (1-\rho)\operatorname{diag}(\boldsymbol{\Sigma}^{t-1}).$   (11b)
For the M step, we update $\boldsymbol{\gamma}$ as

$\gamma_i^{t+1} = \mathbb{E}\left[\tilde{x}_i^2\right] = (\mu_i^{t})^2 + \Sigma_{ii}^{t},$   (12)

where the expectation is taken with respect to the posterior distribution of $\tilde{\mathbf{x}}$ and the uninformative prior $\epsilon, \eta \rightarrow 0$ is assumed.
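Under the uninformative limit, the M step reduces to the classic EM update of SBL, i.e., the posterior second moment; a sketch in our notation:

```python
import numpy as np

# Classic SBL M step under the uninformative hyperprior:
# gamma_i = E[x_i^2] = mu_i^2 + Sigma_ii
def sbl_m_step(mu, Sigma):
    return mu ** 2 + np.diag(Sigma)

assert np.allclose(sbl_m_step(np.array([1.0, 2.0]), np.diag([0.5, 0.25])),
                   [1.5, 4.25])
```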
Now, we close the loop of the BSBL algorithm. The proposed algorithm is summarized in Algorithm 1.
In some practical one-bit CS scenarios, the noise variance may be unknown and needs to be estimated. However, when the noise is uncorrelated, i.e., $\mathbf{C}_{\mathbf{w}} = \sigma^2\mathbf{I}_{M}$, where $\mathbf{I}_{M}$ denotes the identity matrix with dimension $M$, and the relative amplitudes or the support of $\mathbf{x}$ is the focus, there is no need to estimate $\sigma^2$ for Algorithm 1 if we use the uninformative prior. In fact, the DOAs are determined by the relative amplitudes, not the complex amplitudes. Given that the noise variance is unknown, we can reformulate model (1) as

$\mathbf{y} = \operatorname{sign}\left(\Re\left(\mathbf{A}\tfrac{\mathbf{x}}{\sigma} + \tfrac{\mathbf{w}}{\sigma}\right)\right) + \mathrm{j}\operatorname{sign}\left(\Im\left(\mathbf{A}\tfrac{\mathbf{x}}{\sigma} + \tfrac{\mathbf{w}}{\sigma}\right)\right).$   (13)
In this setting, the variance of the additive noise is unity. The above analysis also applies to the GrSBL method, as explained in [17]. We will verify this fact in the numerical simulations.
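The reformulation above rests on the fact that the sign operator is invariant to positive scaling, which is easy to check numerically:

```python
import numpy as np

# sign(Ax + w) = sign((Ax + w) / sigma) for any sigma > 0, so one-bit
# measurements only constrain x up to the noise scale.
rng = np.random.default_rng(2)
z = rng.standard_normal(1000)   # stand-in for Ax + w
sigma = 3.7
assert np.array_equal(np.sign(z), np.sign(z / sigma))
```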
II-B Multiple measurement vectors
For the MMVs scenario, the model can be expressed as

$\mathbf{Y} = \operatorname{sign}\left(\Re(\mathbf{A}\mathbf{X} + \mathbf{W})\right) + \mathrm{j}\operatorname{sign}\left(\Im(\mathbf{A}\mathbf{X} + \mathbf{W})\right),$   (14)

where $\mathbf{Y} \in \{\pm 1 \pm \mathrm{j}\}^{M \times L}$ denotes the one-bit quantized measurements, $L$ is the number of snapshots and $M$ is the number of measurements per snapshot. For each snapshot, the noise is distributed as in (1) and is independent across all the snapshots. $\mathbf{X} \in \mathbb{C}^{N \times L}$ is row sparse. The MMVs model can be decoupled as

$\mathbf{y}_l = \operatorname{sign}\left(\Re(\mathbf{A}\mathbf{x}_l + \mathbf{w}_l)\right) + \mathrm{j}\operatorname{sign}\left(\Im(\mathbf{A}\mathbf{x}_l + \mathbf{w}_l)\right), \quad l = 1, \ldots, L.$   (15)
Therefore, we apply Algorithm 1 to each snapshot. In detail, for a given $\boldsymbol{\gamma}^{t}$, we obtain the posterior means and covariance matrix of $\tilde{\mathbf{x}}_l$ for each snapshot according to (9). Note that the covariance matrix is the same for all the snapshots. Let $\boldsymbol{\mu}_l^{t}$ and $\boldsymbol{\Sigma}^{t}$ denote the damped posterior means and covariance matrix for the $l$-th snapshot. We now evaluate the expected complete-data log-likelihood function given by

$Q(\boldsymbol{\gamma}) = \sum_{l=1}^{L} \mathbb{E}\left[\ln p(\tilde{\mathbf{x}}_l; \boldsymbol{\gamma})\right],$   (16)

where the expectation is taken with respect to the posterior distributions of $\tilde{\mathbf{x}}_l$, $l = 1, \ldots, L$. Performing the M step and setting $\partial Q(\boldsymbol{\gamma})/\partial \gamma_i = 0$, we update $\boldsymbol{\gamma}$ as

$\gamma_i^{t+1} = \frac{1}{L}\sum_{l=1}^{L}\left[(\mu_{l,i}^{t})^2 + \Sigma_{ii}^{t}\right].$   (17)
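The MMV M step simply averages the per-snapshot second moments; a sketch in our notation, with the shared covariance noted above:

```python
import numpy as np

# MMV M step: gamma_i = (1/L) * sum_l [ mu_{l,i}^2 + Sigma_ii ], where the
# posterior covariance Sigma is shared across snapshots.
def sbl_m_step_mmv(mu_list, Sigma):
    second_moments = np.stack([mu ** 2 for mu in mu_list])  # shape (L, N)
    return second_moments.mean(axis=0) + np.diag(Sigma)

gammas = sbl_m_step_mmv([np.array([1.0, 0.0]), np.array([3.0, 0.0])],
                        np.diag([0.1, 0.2]))
assert np.allclose(gammas, [5.1, 0.2])
```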
The whole process is summarized in Algorithm 2.
III Simulation
In this section, we compare the performance of the BSBL algorithm against the SVM [22], BIHT and GrSBL methods on the DOA estimation problem, where three signal sources impinge on a uniform linear array (ULA); the DOAs and amplitudes (in dB) are the same as in the scenario of [23]. The interelement spacing is a fixed fraction of the wavelength, and all the DOAs are restricted to a discrete angular grid. Each column of the measurement matrix $\mathbf{A}$ is the ULA steering vector evaluated at one grid angle. In both experiments, the signal-to-noise ratio (SNR) in the SMV and MMVs settings is defined as $\|\mathbf{A}\mathbf{x}\|_2^2/\mathbb{E}[\|\mathbf{w}\|_2^2]$ and $\|\mathbf{A}\mathbf{X}\|_F^2/\mathbb{E}[\|\mathbf{W}\|_F^2]$, respectively, where $\|\cdot\|_2$ and $\|\cdot\|_F$ denote the $\ell_2$ and Frobenius norms. We then calculate the noise variance from the SNR. For both the GrSBL and BSBL algorithms, we set $\epsilon$ and $\eta$ to near-zero values, which corresponds to the uninformative prior of $\boldsymbol{\gamma}$. We set the number of iterations to 500 for both the BSBL and GrSBL methods, and the damping factor $\rho$ to a fixed value.
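For reference, a ULA steering matrix over an angular grid can be built as below; the half-wavelength spacing and grid here are illustrative assumptions, not necessarily the paper's exact settings:

```python
import numpy as np

# Steering matrix of an n-sensor ULA on a grid of angles (degrees);
# d_over_lambda = 0.5 (half-wavelength spacing) is an assumed value.
def ula_steering_matrix(n_sensors, grid_deg, d_over_lambda=0.5):
    theta = np.deg2rad(np.asarray(grid_deg, dtype=float))
    m = np.arange(n_sensors)[:, None]  # sensor indices
    A = np.exp(2j * np.pi * d_over_lambda * m * np.sin(theta)[None, :])
    return A / np.sqrt(n_sensors)      # unit-norm columns

A = ula_steering_matrix(16, np.arange(-90, 91, 2))
assert np.allclose(np.linalg.norm(A, axis=0), 1.0)
```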
Before moving forward, we first conduct an experiment on the estimation performance of the various algorithms in a single realization. The results are presented in Fig. 2. Given that the number of sources is known, it can be seen that all algorithms except BIHT and SVM successfully locate the DOAs of the three sources. In the SMV scenario, the GrSBL has the best estimation performance. As for the MMVs scenario, the performance of the BSBL algorithm is comparable to that of the GrSBL algorithm. In addition, we can see that increasing the number of snapshots is very beneficial for enhancing the reconstruction performance.
In the following, we conduct two numerical experiments with uncorrelated noise. We have found that the proposed method also works well when the noise is correlated.
In the first experiment, we compare the estimation performance of all algorithms in terms of the normalized mean square error (NMSE). The debiased NMSE for the SMV and MMVs scenarios is defined as $\min_{c}\|\mathbf{x} - c\hat{\mathbf{x}}\|_2^2/\|\mathbf{x}\|_2^2$ and $\min_{c}\|\mathbf{X} - c\hat{\mathbf{X}}\|_F^2/\|\mathbf{X}\|_F^2$, respectively, where $c$ is a scalar. For all four algorithms, we examine the debiased NMSE against the SNR in both the SMV and MMVs scenarios. For both the GrSBL and BSBL algorithms, the mismatched case corresponds to the scenario in which we set the input noise variance to 1 instead of the true one.
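The debiasing removes the scalar ambiguity left by one-bit measurements; one common formulation (our sketch, assuming a least-squares scalar fit) is:

```python
import numpy as np

# Debiased NMSE: fit the best scalar c to the estimate before comparing,
# since one-bit measurements only determine x up to a positive scale.
def debiased_nmse(x, xhat):
    c = np.vdot(xhat, x) / np.vdot(xhat, xhat)  # least-squares scalar fit
    return np.linalg.norm(x - c * xhat) ** 2 / np.linalg.norm(x) ** 2

x = np.array([1.0, 2.0, 0.0])
assert debiased_nmse(x, 2.0 * x) == 0.0  # pure rescaling is debiased away
```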
It can be seen from Fig. 3 that the performance of the BSBL and GrSBL methods in the mismatched scenario is the same as in the variance-known cases. The debiased NMSE of the BSBL is larger than that of the BIHT in some SNR regimes; the reason is that the number of unknown sources $K$ is available to the BIHT method. In fact, if the BSBL chooses the top $K$ magnitudes, it achieves a lower debiased NMSE, as seen in Fig. 3. Utilizing the information of $K$ significantly reduces the debiased NMSE of the BSBL algorithm in the SMV settings. In contrast, the effect of utilizing $K$ becomes smaller in the MMVs settings. The reason is that the BSBL algorithm without knowing $K$ can locate the sources more accurately in the MMVs settings, as shown in Fig. 2. As for the running time, from Table I we can see that the BIHT is the fastest, and the GrSBL with MMVs is the slowest. The running time of the BSBL in the MMVs setting is much shorter than in the SMV setting.
Table I: Running time of the algorithms.

  SVM:  8.3    BSBL (L = 1):  69.8    GrSBL (L = 1):  31.3
  BIHT: 0.8    BSBL (L = 50): 15.5    GrSBL (L = 50): 209.4

Table II: Location results (bin counts over 100 MC simulations).

  SVM:  89     BSBL (L = 1):  79      GrSBL (L = 1):  96
  BIHT: 47     BSBL (L = 50): 98      GrSBL (L = 50): 81
In the second experiment, we plot the bin counts over 100 MC simulations at a fixed SNR. From Fig. 4 and Table II, we can see that the GrSBL algorithm has the best location results and the BIHT the worst in the SMV scenario. For the MMVs scenario, the BSBL algorithm has better estimation performance than the GrSBL algorithm. In general, the BSBL method with MMVs achieves the best location performance.
IV Conclusion
We have proposed the BSBL algorithm to cope with one-bit CS reconstruction problems. The proposed algorithm transforms the original nonlinear problem into a linear one and naturally incorporates the standard SBL framework. Algorithms for both the SMV and MMVs scenarios are introduced. Furthermore, simulations demonstrate the effectiveness of the proposed algorithm, especially in the MMVs scenario.
References
 [1] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
 [2] M. Lustig, D. L. Donoho, J. M. Santos and J. M. Pauly, "Compressed sensing MRI," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 72-82, 2008.
 [3] G. Huang, H. Jiang, K. Matthews and P. Wilford, "Lensless imaging by compressive sensing," 2013 IEEE International Conference on Image Processing, pp. 2101-2105, 2013.
 [4] M. H. Firooz and S. Roy, "Network tomography via compressed sensing," 2010 IEEE Global Telecommunications Conference (GLOBECOM), pp. 1-5, 2010.
 [5] R. Tibshirani, "Regression shrinkage and selection via the Lasso," Journal of the Royal Statistical Society, Series B (Methodological), vol. 58, no. 1, pp. 267-288, 1996.
 [6] I. Daubechies, M. Defrise and C. D. Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Commun. Pure Appl. Math., vol. 57, pp. 1413-1457, 2004.
 [7] M. E. Tipping, "Sparse Bayesian learning and the relevance vector machine," JMLR, vol. 1, pp. 211-244, 2001.
 [8] D. P. Wipf and B. D. Rao, "Sparse Bayesian learning for basis selection," IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2153-2164, 2004.
 [9] D. L. Donoho, A. Maleki and A. Montanari, "Message passing algorithms for compressed sensing," PNAS, vol. 106, no. 45, pp. 18914-18919, 2009.
 [10] J. Singh, O. Dabeer and U. Madhow, "On the limits of communication with low-precision analog-to-digital conversion at the receiver," IEEE Trans. Commun., vol. 57, no. 12, pp. 3629-3639, 2009.
 [11] L. Jacques, J. N. Laska, P. T. Boufounos and R. G. Baraniuk, "Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors," IEEE Trans. Inf. Theory, pp. 2082-2102, 2013.
 [12] S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," in Proc. IEEE Int. Symp. Inf. Theory, pp. 2168-2172, 2011.
 [13] J. Mo, P. Schniter, N. G. Prelcic and R. W. Heath Jr., "Channel estimation in millimeter wave MIMO systems with one-bit quantization," Proc. IEEE Asilomar Conf. Signals Syst. Comput., pp. 957-961, 2014.
 [14] H. Cao, J. Zhu and Z. Xu, "Adaptive one-bit quantization via approximate message passing with nearest neighbour sparsity pattern learning," IET Signal Processing, 2018.
 [15] U. S. Kamilov, A. Bourquard, A. Amini and M. Unser, "One-bit measurements with adaptive thresholds," IEEE Signal Process. Lett., vol. 19, no. 10, pp. 607-610, 2012.
 [16] X. Meng, S. Wu and J. Zhu, "A unified Bayesian inference framework for generalized linear models," IEEE Signal Process. Lett., vol. 25, no. 3, pp. 398-402, 2018.
 [17] X. Meng and J. Zhu, "A generalized sparse Bayesian learning algorithm for one-bit DOA estimation," to appear in IEEE Commun. Lett., 2018.
 [18] J. Bussgang, "Crosscorrelation functions of amplitude-distorted Gaussian signals," MIT Res. Lab. Electron. Tech. Rep. 216, Mar. 1952.
 [19] A. S. Lan, M. Chiang and C. Studer, "Linearized binary regression," available at https://arxiv.org/pdf/1802.00430.pdf.
 [20] K. P. Murphy, Machine Learning: A Probabilistic Perspective, pp. 463-467, 2012.
 [21] Y. Li, C. Tao, G. Seco-Granados, A. Mezghani, A. Swindlehurst and L. Liu, "Channel estimation and performance analysis of one-bit massive MIMO systems," IEEE Trans. Signal Process., vol. 65, no. 15, pp. 4075-4089, 2017.
 [22] Y. Gao, D. Hu, Y. Chen and Y. Ma, "Gridless 1-b DOA estimation exploiting SVM approach," IEEE Commun. Lett., vol. 21, no. 10, Oct. 2017.
 [23] P. Gerstoft, C. F. Mecklenbräuker, A. Xenaki and S. Nannuru, "Multisnapshot sparse Bayesian learning for DOA," IEEE Signal Process. Lett., vol. 23, no. 10, pp. 1469-1473, 2016.