Off-grid Variational Bayesian Inference of Line Spectral Estimation from One-bit Samples

by   Jiang Zhu, et al.

In this paper, the line spectral estimation (LSE) problem is studied from one-bit quantized samples, and a method combining variational line spectral estimation (VALSE) with expectation propagation (EP), termed VALSE-EP, is proposed. Since the original measurements are heavily quantized, performing off-grid frequency estimation is very challenging. Following the EP principle, the quantized model is decomposed into two modules: a componentwise minimum mean square error (MMSE) module and a standard linear model on which the VALSE algorithm can be performed. The VALSE-EP algorithm iterates between the two modules in a turbo manner. In addition, the algorithm is easily extended to solve the LSE problem with multiple measurement vectors (MMVs). Finally, numerical results demonstrate the effectiveness of the proposed VALSE-EP method.








I Introduction

Line spectrum estimation (LSE) is a fundamental problem in statistical signal processing due to its widespread applications in channel estimation [1] and direction of arrival (DOA) estimation [2]. On the one hand, many classical methods have been proposed, such as the periodogram, MUSIC and ESPRIT [3, 4, 5]. On the other hand, sparse representation and compressed sensing (CS) based methods have been proposed to estimate the frequencies of multiple sinusoids. At first, grid based methods, in which the continuous frequency is discretized into a finite set of grid points, were proposed [6]. It has been shown that grid based methods incur basis mismatch when the true frequencies do not lie exactly on the grid [7]. As a result, off-the-grid compressed sensing methods have been proposed, such as atomic norm minimization and gridless SPICE (GLS) [8, 9, 10, 11, 12]. The atomic norm based methods involve solving a semidefinite programming (SDP) problem [13], whose computational complexity is prohibitively high for large problem sizes. In [14], a Newtonized orthogonal matching pursuit (NOMP) method is proposed, where a Newton step and feedback are utilized to refine the frequency estimates. Compared to the incremental frequency updates of NOMP, the iterative reweighted approach (IRA) [15] estimates the frequencies in parallel, which improves the estimation accuracy at the cost of increased complexity. In [16], superfast LSE methods are proposed based on a fast Toeplitz matrix inversion algorithm. In [17], an off-grid variational line spectrum estimation (VALSE) algorithm is proposed, which provides the posterior probability density function (PDF) of the frequencies. In [18], the VALSE is extended to the MMV setting, and the relationship between the VALSE for the SMV and for the MMV is revealed.

Recently, millimeter wave (mmWave) multiple input multiple output (MIMO) systems have drawn a great deal of attention. Since mmWave systems occupy large bandwidths, the cost and power consumption of high precision (e.g., 10-12 bits) analog-to-digital converters (ADCs) are huge [19]. Consequently, low precision ADCs are adopted to alleviate the ADC bottleneck. Another motivation is wideband spectrum sensing in bandwidth-constrained wireless networks [20, 21]. In order to reduce the communication overhead, the sensors quantize their measurements, possibly into a single bit, and the spectrum has to be estimated from the quantized measurements at the fusion center (FC). Thus designing recovery algorithms for low precision quantized observations is meaningful [28, 29].

The work most closely related to ours is [23], where LSE from heavy quantization of noisy complex-valued random linear measurements is studied: the Cramér-Rao bound is derived and an atomic norm soft thresholding based algorithm is proposed. In this paper, we study LSE from one-bit quantization of the noisy measurements. Utilizing the expectation propagation (EP) principle [24], the generalized linear model is iteratively decomposed into two modules (a standard linear model and a componentwise minimum mean square error (MMSE) module) [25]. We run the VALSE algorithm in the standard linear module, where the frequency estimates are iteratively refined, while the MMSE module refines the pseudo observations of the linear model. (Iteratively approximating the generalized linear model as a standard linear model (SLM) is very beneficial, as many well developed methods, such as the information-theoretically optimal successive interference cancellation (SIC), are available for the SLM.) By iterating between the two modules, the frequency estimates are gradually improved. Finally, numerical experiments are conducted to demonstrate the effectiveness of the proposed algorithm.

II Problem Setup

Let $\mathbf{z}\in\mathbb{C}^N$ be a line spectrum signal consisting of $K$ complex sinusoids,

$$\mathbf{z}=\sum_{k=1}^{K}w_k\mathbf{a}(\theta_k), \qquad (1)$$

where $w_k$ is the complex amplitude of the $k$th frequency, $\theta_k\in[0,2\pi)$ is the $k$th frequency, and

$$\mathbf{a}(\theta)=[1,\ e^{{\rm j}\theta},\ \dots,\ e^{{\rm j}(N-1)\theta}]^{\rm T}. \qquad (2)$$

The line spectrum signal is corrupted by additive white Gaussian noise (AWGN) and quantized into a single bit, which can be expressed as

$$\mathbf{y}={\rm sign}\left(\Re\{\mathbf{z}+\mathbf{n}\}\right)+{\rm j}\,{\rm sign}\left(\Im\{\mathbf{z}+\mathbf{n}\}\right), \qquad (3)$$

where $\mathbf{n}\sim\mathcal{CN}(\mathbf{0},\sigma^2\mathbf{I}_N)$, $\sigma^2$ is the variance of the noise, and ${\rm sign}(\cdot)$ returns the sign of its argument elementwise. Note that knowledge of the noise variance has no effect on the frequency estimation, as revealed in [26], although its value does affect the estimation performance. The goal of this paper is to recover the set of frequencies $\{\theta_k\}_{k=1}^K$ and the corresponding coefficients $\{w_k\}_{k=1}^K$ without knowing the sparsity level $K$.
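As a concrete illustration, the measurement model above can be simulated as follows. This is a minimal sketch; the values of N, K, the frequencies, the amplitudes, and the noise variance are made-up examples, not the settings used in the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 64, 3                        # number of samples, number of sinusoids
theta = np.array([0.5, 1.7, 4.0])   # example true frequencies in [0, 2*pi)
w = rng.normal(size=K) + 1j * rng.normal(size=K)  # complex amplitudes
sigma2 = 0.1                        # noise variance

# Steering vectors a(theta) = [1, e^{j*theta}, ..., e^{j*(N-1)*theta}]^T
A = np.exp(1j * np.outer(np.arange(N), theta))
z = A @ w                           # noiseless line spectrum signal

# AWGN, then one-bit quantization of the real and imaginary parts
n = np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))
y = np.sign((z + n).real) + 1j * np.sign((z + n).imag)
```

Each entry of y carries only two bits of information (one per real/imaginary part), which is what makes the off-grid frequency estimation challenging.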

Since we usually do not know the sparsity level $K$, we adopt the following model, in which the line spectrum signal is assumed to consist of $L$ complex sinusoids [17]:

$$\mathbf{z}=\sum_{l=1}^{L}w_l\mathbf{a}(\theta_l), \qquad (4)$$

where $L$ is usually known and satisfies $L>K$. Since the number of true frequencies is $K$, we introduce the binary hidden variables $s_l\in\{0,1\}$, $l=1,\dots,L$, where $s_l=1$ means that the $l$th frequency is active, and otherwise it is inactive ($s_l=0$). The probability mass function of $s_l$ is

$$p(s_l)=\rho^{s_l}(1-\rho)^{1-s_l}. \qquad (5)$$

Given that $s_l=1$, we assume that $w_l\sim\mathcal{CN}(0,\tau)$. (Sometimes we write $p(x)$ instead of $p_X(x)$ when the random variable is clear from the context.) Thus $(w_l,s_l)$ follows a Bernoulli-Gaussian distribution, that is,

$$p(w_l|s_l)=(1-s_l)\,\delta(w_l)+s_l\,\mathcal{CN}(w_l;0,\tau), \qquad (6)$$

where $\delta(\cdot)$ denotes the Dirac delta function. From (5) and (6), it can be seen that the parameter $\rho$ denotes the probability of the $l$th component being active and $\tau$ is a variance parameter. The frequency $\theta_l$ has the prior PDF $p(\theta_l)$. Without any knowledge of the frequency, the uninformative prior $p(\theta_l)=1/(2\pi)$ is used [17]. For encoding additional prior information, please refer to [17, 18] for further details.
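The Bernoulli-Gaussian prior in (5)-(6) is easy to sample from, which the following sketch illustrates (the values of L, rho, and tau are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

L, rho, tau = 10, 0.3, 2.0      # model order bound, activity probability, variance

s = rng.random(L) < rho         # Bernoulli activity indicators, p(s_l = 1) = rho
# Active weights are drawn CN(0, tau); inactive weights are exactly zero.
w = np.where(
    s,
    np.sqrt(tau / 2) * (rng.normal(size=L) + 1j * rng.normal(size=L)),
    0.0,
)
```

The point of the delta mass at zero in (6) is that deactivating a component (s_l = 0) removes it from the signal entirely, which is how the model order is inferred without knowing K.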

Given $\mathbf{z}$, the probability density function of $\mathbf{y}$ can be easily calculated through (3). According to Bayes' rule, the joint PDF is

$$p(\mathbf{y},\mathbf{z},\mathbf{w},\mathbf{s},\boldsymbol{\theta})=p(\mathbf{y}|\mathbf{z})\,p(\mathbf{z}|\mathbf{w},\boldsymbol{\theta})\prod_{l=1}^{L}p(w_l|s_l)\,p(s_l)\,p(\theta_l), \qquad (7)$$

where $\boldsymbol{\beta}=\{\rho,\tau,\sigma^2\}$ denotes the model parameters. Directly maximizing over the frequencies and the nuisance parameters is intractable. As a result, an iterative algorithm is designed in the following section.

III Algorithm

At first, we present the factor graph and the algorithm modules as shown in Fig. 1. Specifically, in the $t$-th iteration, from the EP point of view, the message transmitted from the factor node of the linear model to the variable node $\mathbf{z}$ is projected onto a Gaussian distribution with mean $\mathbf{z}_A^{\rm ext}(t)$ and variance $\mathbf{v}_A^{\rm ext}(t)$. According to EP, this message can be viewed as the prior distribution of $\mathbf{z}$ in the $t$-th iteration. Combining it with the likelihood $p(\mathbf{y}|\mathbf{z})$, we project the posterior onto a Gaussian distribution using EP, i.e., we obtain the componentwise MMSE estimate of $\mathbf{z}$,

$$\mathbf{z}_B(t)={\rm E}[\mathbf{z}|\mathbf{y}],\qquad \mathbf{v}_B(t)={\rm Var}[\mathbf{z}|\mathbf{y}], \qquad (8),\,(9)$$

where the expectation and variance are taken componentwise with respect to the posterior PDF proportional to $\mathcal{CN}(\mathbf{z};\mathbf{z}_A^{\rm ext}(t),{\rm diag}(\mathbf{v}_A^{\rm ext}(t)))\,p(\mathbf{y}|\mathbf{z})$, for which analytic expressions can be obtained [27]. The posterior PDF of $\mathbf{z}$ is thus approximated as a Gaussian with these moments. We then calculate the message from the variable node $\mathbf{z}$ back to the factor node; its extrinsic variance and mean follow from Gaussian division,

$$\mathbf{v}_B^{\rm ext}(t)=\left(\frac{1}{\mathbf{v}_B(t)}-\frac{1}{\mathbf{v}_A^{\rm ext}(t)}\right)^{-1},\qquad \mathbf{z}_B^{\rm ext}(t)=\mathbf{v}_B^{\rm ext}(t)\odot\left(\frac{\mathbf{z}_B(t)}{\mathbf{v}_B(t)}-\frac{\mathbf{z}_A^{\rm ext}(t)}{\mathbf{v}_A^{\rm ext}(t)}\right), \qquad (12{\rm a}),\,(12{\rm b})$$

where $\odot$ and the divisions denote componentwise operations. From the definition of the factor node, we obtain a pseudo linear observation model

$$\mathbf{z}_B^{\rm ext}(t)=\mathbf{A}(\boldsymbol{\theta})\mathbf{w}+\tilde{\mathbf{n}},\qquad \tilde{\mathbf{n}}\sim\mathcal{CN}(\mathbf{0},{\rm diag}(\mathbf{v}_B^{\rm ext}(t))),$$

where $\mathbf{A}(\boldsymbol{\theta})=[\mathbf{a}(\theta_1),\dots,\mathbf{a}(\theta_L)]$. In [17], the VALSE is derived for homogeneous additive Gaussian noise, while here the additive Gaussian noise is heteroscedastic (independent components having different variances). One could average the noise variances, but this incurs some performance degradation in our numerical experiments. As a result, some minor modifications are needed so that the original VALSE still works. Due to space limitations, we do not present the details of deriving the VALSE for this pseudo linear model.
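For a real-valued sign measurement, the componentwise MMSE estimate used in module B has a closed form. The sketch below is our own illustrative implementation (the paper treats complex samples; here the real and imaginary parts are handled separately with real Gaussian messages): it computes the posterior mean and variance of x ~ N(m, v) given y = sign(x + n) with n ~ N(0, sigma2), using the standard probit/inverse-Mills-ratio expressions.

```python
import math

def phi(t):
    """Standard normal PDF."""
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def Phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def mmse_sign(y, m, v, sigma2):
    """Posterior mean/variance of x ~ N(m, v) given y = sign(x + n), n ~ N(0, sigma2)."""
    s = math.sqrt(v + sigma2)
    alpha = y * m / s
    lam = phi(alpha) / Phi(alpha)                 # inverse Mills ratio
    mean = m + y * (v / s) * lam                  # posterior mean
    var = v - (v * v / (v + sigma2)) * lam * (lam + alpha)  # posterior variance
    return mean, var
```

For example, with an uninformative message m = 0, v = 1 and noise variance 1, observing y = +1 pulls the posterior mean up to about 0.56 and shrinks the variance below the prior variance, which is exactly the refinement module B feeds back to module A.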


Fig. 1: Factor graph of the joint PDF (7) and the module of the EP-VALSE algorithm. Here the circle denotes the variable node, and the rectangle denotes the factor node. According to the dashed block diagram in Fig. 1 (a), the problem can be decomposed as two modules in Fig. 1 (b), where module A corresponds to the standard linear model, and module B corresponds to the MMSE estimation. Intuitively, the problem can be solved by iterating between the two modules, where module A performs the standard VALSE algorithm, and module B performs the componentwise MMSE estimation.

Now we run the VALSE algorithm. Let $\mathcal{S}=\{l:s_l=1\}$ be the set of indices of the non-zero components of $\mathbf{s}$, and define $\widehat{\mathcal{S}}$ as its estimate produced by the VALSE algorithm. Note that the VALSE is initialized using the Heuristic only in the first iteration. The VALSE algorithm is stopped when the support estimate remains unchanged. Then we calculate the posterior means (14) and variances (15) of $\mathbf{z}$ through the posterior PDFs of the frequencies and of the weights returned by the VALSE. From these we calculate the extrinsic mean (16) and variance (17) and input them to module B. The algorithm iterates until convergence or until the maximum number of iterations is reached. The VALSE-EP algorithm is summarized as Algorithm 1.
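The extrinsic computations used when passing messages between the two modules are all instances of Gaussian division, which is cleanest in natural (precision) parameters. A minimal scalar sketch, in our own notation:

```python
def extrinsic(m_post, v_post, m_prior, v_prior):
    """Remove the incoming (prior) message from a Gaussian posterior:
    N(m_post, v_post) / N(m_prior, v_prior), computed in natural parameters."""
    v_ext = 1.0 / (1.0 / v_post - 1.0 / v_prior)
    m_ext = v_ext * (m_post / v_post - m_prior / v_prior)
    return m_ext, v_ext
```

A useful sanity check on this design is that recombining the extrinsic message with the prior (product of Gaussians, i.e., adding precisions) recovers the original posterior, so no information is double-counted when the message crosses to the other module.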

1:  Initialize the extrinsic mean and variance of $\mathbf{z}$; set the number of outer iterations $T_{\max}$;
2:  for $t=1,\dots,T_{\max}$ do
3:     Compute the posterior mean and variance of $\mathbf{z}$ as (8), (9).
4:     Compute the extrinsic mean and variance of $\mathbf{z}$ as (12b) and (12a), and input them to module A.
5:     If $t=1$, run the VALSE algorithm with the initialization provided by the Heuristic until the support is unchanged. Otherwise, run the VALSE algorithm directly with the initialization provided by the previous results of the VALSE.
6:     Calculate the posterior means (14) and variances (15).
7:     Compute the extrinsic mean and variance of $\mathbf{z}$ as (16), (17).
8:  end for
9:  Return the frequency estimates, the weight estimates, and the reconstructed signal.
Algorithm 1 VALSE-EP algorithm

III-A Computational complexity

In [17], it is shown that the complexity per iteration is dominated by two steps: the maximization step for the frequencies and the approximation of the posterior PDFs by mixtures of von Mises PDFs, whose complexities with the Heuristic are given in [17]. For the VALSE-EP algorithm, in each inner iteration where the VALSE is performed, the posterior covariance matrix of the weights has to be calculated, which dominates the computation. For a single iteration, the computational complexity of the VALSE-EP is of the same order as that of the VALSE, whereas the computational complexity of the atomic norm based algorithm is significantly higher [12].

III-B Extension to the MMV case

The extension to the multi-snapshot scenario is straightforward. For the MMV case, the model can be described as

$$\mathbf{Y}={\rm sign}\left(\Re\{\mathbf{Z}+\mathbf{N}\}\right)+{\rm j}\,{\rm sign}\left(\Im\{\mathbf{Z}+\mathbf{N}\}\right),$$

where each column corresponds to one snapshot and the frequencies are shared across the snapshots. The algorithm can again be designed by iterating between the two modules. Performing the MMSE operation is straightforward. For module A, the pseudo linear model can be decoupled into SMVs, one per snapshot. In [18, the end of Sec. IV], the relationship of the VALSE algorithm between the SMV and the MMV settings is revealed. For each snapshot, we perform the VALSE algorithm and obtain the per-snapshot frequency statistics; these are then summed over all the snapshots, each frequency posterior is updated accordingly, and the frequency estimates follow. In addition, we update the weights and their covariances by applying the SMV VALSE to each snapshot. The model parameter estimates are updated as the averages of the respective per-snapshot estimates, and the noise variance is estimated in the same averaged manner.

Fig. 2: The posterior PDF of the frequency from one-bit samples.

Here an illustrative example is presented, with the SNR fixed for each snapshot and a single true frequency. The VALSE-EP outputs the posterior PDF of the frequency, as shown in Fig. 2. It can be seen that the PDF is sharply peaked, and increasing the number of snapshots makes the posterior PDF more concentrated, which means that the uncertainty is reduced.

Fig. 3: The performance of the VALSE-EP with a varying number of measurements. The SNR for each snapshot is fixed.
Fig. 4: The performance of the VALSE-EP with a varying number of snapshots. The number of measurements is fixed.

IV Numerical Simulation

In this section, numerical experiments are conducted to verify the proposed algorithm. We evaluate the signal estimation error, the frequency estimation error, and the probability of correct model order estimation under one-bit quantization. To generate the signal, we replace the $L$ sinusoids in (4) with the $K$ true complex sinusoids. The frequencies are randomly drawn such that the minimum wrap-around distance between any two of them exceeds a prescribed separation. We evaluate the performance of the VALSE algorithm with the noninformative prior $p(\theta_l)=1/(2\pi)$. The magnitudes of the weight coefficients are drawn i.i.d. from a Gaussian distribution, and the phases are drawn i.i.d. from a uniform distribution on $[0,2\pi)$. We define the signal-to-noise ratio (SNR) per snapshot; for the MMVs we fix the SNR for each snapshot. We define the normalized MSE (NMSE) of the signal (for the unquantized system) and of the frequencies in the standard way. For one-bit quantization, we calculate the debiased NMSE of the signal for single and multiple snapshots, obtained by rescaling the estimate before computing the NMSE, since one-bit measurements lose the amplitude information. As for the frequency estimation error, we average only the trials in which all the algorithms estimate the correct model order. The algorithm stops when the maximum number of iterations is exceeded. All the results are averaged over the Monte Carlo (MC) trials. The CRBs of the frequency estimation for unquantized samples [22] and one-bit samples [23] are also evaluated. We also report the empirical probability of correct model order estimation.
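The debiased NMSE is not spelled out above; a common definition, which we assume here for illustration, rescales the estimate by the least-squares optimal complex scalar before computing the NMSE:

```python
import numpy as np

def debiased_nmse(z_true, z_hat):
    """NMSE after removing the scalar ambiguity:
    min over complex c of ||z_true - c * z_hat||^2 / ||z_true||^2."""
    # np.vdot conjugates its first argument, giving the LS-optimal scale.
    c = np.vdot(z_hat, z_true) / np.vdot(z_hat, z_hat)
    return np.linalg.norm(z_true - c * z_hat) ** 2 / np.linalg.norm(z_true) ** 2
```

With this definition, an estimate that is correct up to an arbitrary complex scale (which is all one-bit samples can hope to determine) attains a debiased NMSE of zero.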

IV-A Estimation by varying the number of measurements

We examine the performance of the VALSE-EP by varying the number of measurements. The results are shown in Fig. 3. It can be seen that both the debiased NMSE of the signal and the frequency estimation error decrease as the number of measurements increases. In addition, the frequency estimation error of the VALSE can be lower than the CRB, which still makes sense because the VALSE is a Bayesian method and the CRB applies to unbiased estimators. Compared to the single-snapshot case, utilizing multiple snapshots improves the performance. Overall, the probability of correct model order estimation increases with the number of measurements, with slight fluctuations once the number of measurements is large.

IV-B Estimation by varying the number of snapshots

The performance of the VALSE-EP is investigated with a varying number of snapshots. The results are shown in Fig. 4. It can be seen that as the number of snapshots increases, the NMSE of the signal decreases and then stabilizes. In addition, the performance of the VALSE-EP is better at the higher SNR. The NMSE of the frequencies decreases as the number of snapshots increases. As for the probability of correct model order estimation, the overall trend is upward as the number of snapshots increases.

V Conclusion

In this paper, a VALSE-EP algorithm is proposed to deal with the LSE problem from one-bit quantized samples. The VALSE-EP is an off-grid algorithm which iteratively refines the frequency estimates. Besides, the VALSE-EP is extended to deal with the MMVs. Finally, numerical results show the effectiveness of the VALSE-EP.



  • [1] W. Bajwa, A. Sayeed, and R. Nowak, “Compressed channel sensing: A new approach to estimating sparse multipath channels,” Proc. IEEE, vol. 98, pp. 1058-1076, Jun. 2010.
  • [2] B. Ottersten, M. Viberg and T. Kailath, “Analysis of subspace fitting and ML techniques for parameter estimation from sensor array data,” IEEE Trans. Signal Process., vol. 40, pp. 590-600, Mar. 1992.
  • [3] P. Stoica and R. L. Moses, Spectral Analysis of Signals. Upper Saddle River, NJ, USA: Prentice-Hall, 2005.
  • [4] R. Schmidt, “Multiple emitter location and signal parameter estimation,” IEEE Trans. on Antennas and Propagation, vol. 34, no. 3, pp. 276-280, 1986.
  • [5] R. Roy and T. Kailath, “ESPRIT - estimation of signal parameters via rotational invariance techniques,” IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 37, no. 7, pp. 984-995, 1989.
  • [6] D. Malioutov, M. Cetin and A. Willsky, “A sparse signal reconstruction perspective for source localization with sensor arrays,” IEEE Trans. Signal Process., vol. 53, no. 8, pp. 3010-3022, 2005.
  • [7] Y. Chi, L. L. Scharf, A. Pezeshki and R. Calderbank, “Sensitivity of basis mismatch to compressed sensing,” IEEE Trans. on Signal Process., vol. 59, pp. 2182 - 2195, 2011.
  • [8] G. Tang, B. Bhaskar, P. Shah and B. Recht, “Compressed sensing off the grid,” IEEE Trans. Inf. Theory, vol. 59, no. 11, pp. 7465-7490, 2013.
  • [9] Z. Yang and L. Xie, “On gridless sparse methods for line spectral estimation from complete and incomplete data,” IEEE Trans. Signal Process., vol. 63, no. 12, pp. 3139-3153, 2015.
  • [10] Z. Yang and L. Xie, “Continuous compressed sensing with a single or multiple measurement vectors,” IEEE Workshop on Statistical Signal Processing, pp. 288-291, 2014.
  • [11] Y. Li and Y. Chi, “Off-the-grid line spectrum denoising and estimation with multiple measurement vectors,” IEEE Trans. Signal Process., vol. 64, no. 5, pp. 1257-1269, 2016.
  • [12] Z. Yang, L. Xie and C. Zhang, “A discretization-free sparse and parametric approach for linear array signal processing,” IEEE Trans. Signal Process., vol. 62, no. 19, pp. 4959-4973, 2014.
  • [13] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
  • [14] B. Mamandipoor, D. Ramasamy and U. Madhow, “Newtonized orthogonal matching pursuit: Frequency estimation over the continuum,” IEEE Trans. Signal Process., vol. 64, no. 19, pp. 5066-5081, 2016.
  • [15] J. Fang, F. Wang, Y. Shen, H. Li and R. S. Blum, “Superresolution compressed sensing for line spectral estimation: an iterative reweighted approach,” IEEE Trans. Signal Process., vol. 64, no. 18, pp. 4649-4662, 2016.
  • [16] T. L. Hansen, B. H. Fleury and B. D. Rao, “Superfast line spectral estimation,” available online.
  • [17] M. A. Badiu, T. L. Hansen and B. H. Fleury, “Variational Bayesian inference of line spectra,” IEEE Trans. Signal Process., vol. 65, no. 9, pp. 2247-2261, 2017.
  • [18] Q. Zhang, J. Zhu, P. Gerstoft, M. A. Badiu and Z. Xu, “Variational Bayesian inference of line spectral estimation with multiple measurement vectors,” available online.
  • [19] S. Rangan, T. S. Rappaport and E. Erkip, “Millimeter-wave cellular wireless networks: potentials and challenges,” Proc. IEEE, vol. 102, no. 3, pp. 366-385, 2014.
  • [20] O. Mehanna and N. Sidiropoulos, “Frugal sensing: Wideband power spectrum sensing from few bits,” IEEE Trans. on Signal Process., vol. 61, no. 10, pp. 2693-2703, May 2013.
  • [21] Y. Chi and H. Fu, “Subspace learning from bits,” IEEE Trans. on Signal Process., vol. 65, no. 17, pp. 4429-4442, Sept. 2017.
  • [22] L. Han, J. Zhu, R. S. Blum and Z. Xu, “Newtonized orthogonal matching pursuit for line spectrum estimation with multiple measurement vectors,” available online.
  • [23] H. Fu and Y. Chi, “Quantized spectral compressed sensing: Cramér-Rao bounds and recovery algorithms,” IEEE Trans. on Signal Process., vol. 66, no. 12, pp. 3268-3279, 2018.
  • [24] T. Minka, “A family of algorithms for approximate Bayesian inference,” Ph.D. dissertation, Mass. Inst. Technol., Cambridge, MA, USA, 2001.
  • [25] X. Meng, S. Wu , and J. Zhu, “A unified Bayesian inference framework for generalized linear models,” IEEE Signal Process. Lett., vol. 25, no. 3, pp. 398-402, 2018.
  • [26] X. Meng, and J. Zhu, “A generalized sparse Bayesian learning algorithm for one-bit DOA estimation,” IEEE Commun. Lett., vol. 22, no. 7, pp. 1414-1417, 2018.
  • [27] H. Cao, J. Zhu and Z. Xu, “Adaptive one-bit quantization via approximate message passing with nearest neighbour sparsity pattern learning,” IET Signal Processing, vol. 12, no. 5, 2018.
  • [28] F. Li, J. Fang, H. Li and L. Huang, “Robust one-bit Bayesian compressed sensing with sign-flip errors,” IEEE Signal Process. Lett., vol. 22, no. 07, 2015.
  • [29] J. Fang, Y. Shen, L. Yang and H. Li, “Adaptive one-bit quantization for compressed sensing,” Signal Processing, vol. 125, pp. 145-155, 2016.