Signal Recovery From 1-Bit Quantized Noisy Samples via Adaptive Thresholding

In this paper, we consider the problem of signal recovery from 1-bit noisy measurements. We present an efficient method for obtaining an estimate of the signal of interest when the measurements are corrupted by white or colored noise. To the best of our knowledge, the proposed framework is the first in the area of 1-bit sampling and signal recovery to provide a unified treatment of noise with an arbitrary covariance matrix, including that of colored noise. The proposed method is based on a constrained quadratic program (CQP) formulation with an adaptive quantization thresholding approach, which enables accurate recovery of the signal of interest from its 1-bit noisy measurements. In addition, owing to its adaptive nature, the proposed method can recover both fixed and time-varying parameters from their quantized 1-bit samples.


I. Introduction

Wireless sensor networks (WSNs) present significant potential for spatially wide-scale detection and estimation. However, several practical constraints, such as low power consumption, low manufacturing cost, and, more importantly, limited computational and transmission capabilities, must be addressed in implementing such networks [1]. As a result, it is highly desirable to develop distributed estimation frameworks with which the nodes can reliably communicate with the fusion center (FC) while satisfying the power and computational constraints. Furthermore, quantization of the signal of interest is a necessary first step in many digital signal processing applications such as spectrum sensing, radar, and wireless communications. Analog-to-digital converters (ADCs) play a central role in most modern digital systems as they bridge the gap between the analog world and its digital counterpart. Yet, ADCs are a key implementation bottleneck in WSNs and many other Internet-of-Things (IoT) applications due to the significant accrued power consumption and cost when they are used in large numbers.

In addition, the system bandwidth in a WSN is limited and thus constitutes a fundamental constraint that must be considered. Hence, it is important to use a proper quantization scheme that reduces the number of communicated bits prior to transmission in order to address this limitation. Sampling at high data rates with high-resolution ADCs would also dramatically increase the manufacturing cost of these electronic components. An immediate solution to such challenges is to use low-resolution, and specifically 1-bit, ADCs [2, 3, 4, 5]. Therefore, the problem of recovering a signal from 1-bit measurements has attracted a great deal of interest over the past few years in a wide range of applications; see, e.g., [6, 7, 8, 9, 10, 11, 12], and the references therein. To name a few, the authors in [13, 14, 15, 16, 17, 18, 19] have investigated this problem from a classical statistical viewpoint. In particular, 1-bit sampling and signal recovery have also been extensively studied in the context of the recently introduced one-bit compressive sensing (CS) problem [20, 21, 22, 23, 24, 25, 26, 27, 28]. More specifically, the task of recovering the frequency and phase of temporal and spatial sinusoidal signals utilizing only 1-bit information with fixed quantization thresholds has been extensively investigated in [16] and [17], respectively. On the other hand, the recovery of general signals with high-dimensional parameters from sign comparison information was considered in [18] and [19]. In the context of CS, it was shown that sparse signals can be accurately recovered with high probability from 1-bit data when a sufficient number of measurements is obtained [21, 22]. However, most of the CS literature has only considered the case of comparing the signal of interest with zero, which makes it impossible to recover the amplitude of the signal of interest.

In this paper, we examine the most extreme case of quantization, i.e., the 1-bit case, and propose an efficient signal estimation and threshold design algorithm that can recover a signal from its 1-bit noisy measurements in the presence of white or colored (correlated) noise. Furthermore, the proposed method can handle the task of parameter estimation for both time-varying and fixed signals.

II. System Model

Fig. 1: (a) the mutual information of the form (13) for uniform and Gaussian input signal distributions; (b) the performance of the proposed algorithm for a time-varying unknown parameter.

We consider a wireless sensor network with spatially distributed single-antenna nodes, each of which observes an unknown deterministic parameter , at the time index , according to the linear observation model , where denotes the sensor index, and is additive zero-mean Gaussian observation noise, e.g., . Hence, the observed signal at all nodes can be compactly formulated as

(1)

where denotes the all-one vector.

In order to satisfy the inherent bandwidth and power budget constraints in WSNs, we assume that each node utilizes a 1-bit quantization scheme to encode its observation into 1 bit of information, which is then transmitted to the fusion center for further processing. Namely, the th node applies the following quantization function to its observed data prior to transmission:

(2)

where is the sign function, and denotes the quantization threshold at the th node for the time index . Assuming that each node can reliably transmit 1 bit of information to the FC, the aggregated data received from all nodes at the -th sampling time can be expressed as

(3)

where the sign function is applied element-wise, is the combined observation noise vector with covariance matrix , and is the quantization threshold vector. Next, the FC utilizes the received 1-bit information first to construct an estimate of the unknown parameter , and then to design the next quantization threshold for each node accordingly.

An important observation that we exploit is that, given a set of quantization thresholds , the corresponding vector of 1-bit measurements defined in (3) constrains the geometric location of the unquantized data . In particular, one can capture this geometric knowledge of through the following linear inequality:

(4)

where , is the diagonalization operator, and denotes the all-zero vector.
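To make this model concrete, the following is a minimal simulation sketch in Python/NumPy. All symbol names (theta for the unknown parameter, tau for the thresholds, C for the noise covariance) and numerical values are illustrative assumptions, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 20                       # number of sensor nodes (assumed)
theta = 1.3                  # unknown deterministic parameter at a given time index (assumed)

# Colored Gaussian observation noise with an arbitrary (assumed) covariance matrix C
A = rng.standard_normal((K, K))
C = 0.1 * (A @ A.T) + 0.05 * np.eye(K)
w = rng.multivariate_normal(np.zeros(K), C)

tau = rng.standard_normal(K)             # per-node quantization thresholds
x = theta * np.ones(K) + w               # unquantized observations, as in (1)
r = np.sign(x - tau)                     # 1-bit measurements sent to the FC, as in (2)-(3)

# Geometric consistency constraint (4): Diag(r) (x - tau) >= 0, element-wise
Omega = np.diag(r)
assert np.all(Omega @ (x - tau) >= 0)
```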

III. Centralized 1-Bit Signal Recovery via Quadratic Programming

Fig. 2: (a) the NMSE versus the total power of the colored Gaussian noise for a fixed network size, with a time-varying unknown parameter; (b) the NMSE versus the total number of nodes for the same signal and noise model.

We lay the groundwork for our 1-bit statistical inference model using the weighted least squares (WLS) method. Let denote the covariance matrix of the noise vector . Note that if the unquantized information vector were available at the FC, then the maximum likelihood (ML) estimate of the unknown parameter given could be expressed as:

(5)

where denotes the ML estimate of the unknown parameter according to the vector of observations . Furthermore, it is well known that the variance of the ML estimate in (5) is given by . Alternatively, one can obtain the maximum-likelihood estimator of the unknown parameter by minimizing the following WLS criterion,

(6)

where is the cost function to be minimized over the parameters . A natural approach to obtaining an estimate of using the 1-bit quantization vector is to employ an alternating optimization scheme that exploits the constraint on the geometric location of imposed by (4); in other words, to first obtain an estimate of by fixing the variable , and then to recover the unknown parameter using (5), given by . Interestingly, for a fixed parameter , the optimal that minimizes coincides with the ML estimate of given in (5).

We can further substitute the optimal of (5) into (6) to simplify the objective function in terms of the parameter , viz.

(7)

where we define as,

(8)
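Under the notation assumed in the surrounding sketches (θ the unknown scalar parameter, x the unquantized vector, 1 the all-one vector, C the noise covariance), one consistent reconstruction of this elimination step, offered here only as a sketch, is:

```latex
J(\theta,\mathbf{x}) = (\mathbf{x}-\theta\mathbf{1})^{\top}\mathbf{C}^{-1}(\mathbf{x}-\theta\mathbf{1}),
\qquad
\hat{\theta} = \frac{\mathbf{1}^{\top}\mathbf{C}^{-1}\mathbf{x}}{\mathbf{1}^{\top}\mathbf{C}^{-1}\mathbf{1}},
\qquad
J(\mathbf{x}) = \mathbf{x}^{\top}\mathbf{B}\,\mathbf{x},
\quad
\mathbf{B} = \mathbf{C}^{-1}-\frac{\mathbf{C}^{-1}\mathbf{1}\mathbf{1}^{\top}\mathbf{C}^{-1}}{\mathbf{1}^{\top}\mathbf{C}^{-1}\mathbf{1}}.
```

With this form, B is positive semi-definite with the all-one vector in its null space, which is what makes the quadratic program below convex.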

Consequently, one can cast the problem of recovering the unquantized vector from 1-bit noisy measurements as the following constrained quadratic program (CQP):

(9)
(10)

where the inequality in (10) is applied element-wise (i.e., it is equivalent to scalar inequality constraints). Note that the constraint (10) ensures consistency between the received 1-bit quantized data (as incorporated in ) and the solution . Moreover, the matrix at the core of this CQP is positive semi-definite. Therefore, the CQP in (9) is convex and can be solved efficiently using standard numerical methods (e.g., an interior-point method [29]).
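As an illustration, the CQP (9)-(10) can be handed to any off-the-shelf solver; the sketch below uses SciPy's SLSQP routine rather than a dedicated interior-point implementation, and the names B, Omega and tau follow the assumed notation of the earlier sketches.

```python
from scipy.optimize import minimize

def recover_unquantized(B, Omega, tau):
    """Solve the CQP (9)-(10): minimize x^T B x  subject to  Omega @ (x - tau) >= 0."""
    objective = lambda x: x @ B @ x
    gradient = lambda x: 2.0 * (B @ x)
    constraint = {"type": "ineq", "fun": lambda x: Omega @ (x - tau)}
    # x0 = tau is always feasible, since Omega @ (tau - tau) = 0 satisfies the constraint
    result = minimize(objective, x0=tau.astype(float), jac=gradient,
                      constraints=[constraint], method="SLSQP")
    return result.x
```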

Having obtained by solving (9), we can then easily estimate the unknown parameter by solving the program,

(11)

whose closed-form solution is given in (5).
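Under the same assumed notation, this closed-form WLS/ML step amounts to a weighted average of the recovered vector; a small helper might look like:

```python
import numpy as np

def estimate_parameter(x_hat, C):
    """Closed-form WLS/ML estimate of the scalar parameter: (1^T C^{-1} x_hat) / (1^T C^{-1} 1)."""
    ones = np.ones(len(x_hat))
    Cinv_ones = np.linalg.solve(C, ones)      # C^{-1} 1, without forming the explicit inverse
    return float(Cinv_ones @ x_hat) / float(Cinv_ones @ ones)
```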

IV. Quantization Threshold Design

So far, we have discussed our inference framework for estimating an unknown deterministic signal from its 1-bit noisy measurements and the corresponding quantization thresholds. In order to further facilitate the estimation process and to recover the exact signal model (both amplitude and phase), it is essential to design a proper adaptive quantization thresholding scheme. In this section, we devise a stochastic adaptive thresholding method for our estimation algorithm which, thanks to its adaptive nature, enables the proposed inference framework to accurately recover time-varying signals as well. We first investigate the performance of a 1-bit quantizer of the form (2) in the presence of additive noise from an information-theoretic viewpoint, and show that the presence of noise can indeed improve the performance of a 1-bit quantization scheme. We then propose our threshold design strategy accordingly.

We first consider the case where the unknown parameter is a random variable with a known distribution, and the observation noise is . Clearly, each 1-bit sample (at a given time index and quantization threshold value ) can be seen as a random variable following a Bernoulli distribution , whose parameter is given by

(12)

where represents the conditional probability of receiving given the parameter , and denotes the standard Q-function. The mutual information [30] between the unknown parameter and the obtained 1-bit sample can be expressed as

(13)

where for , is the probability mass function (pmf) of the discrete Bernoulli random variable , and

is the probability density function (pdf) of the parameter of interest

, and denotes the entropy function of the argument random variable. Moreover, the conditional entropy in (13) can be further simplified in terms of the noise distribution and according to (12) as follows:

(14)

Moreover, the pmf of the Bernoulli random variable can be recast as

(15)

Finally, one can easily calculate the mutual information of the form (13) between the unknown parameter and the observed 1-bit samples by utilizing (12)-(15).
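The quantities in (12)-(15) can be evaluated numerically by one-dimensional quadrature over the prior of the parameter. The sketch below does so for a 1-bit quantizer of the form r = sign(theta + w - tau), under the naming assumptions used throughout (tau for the threshold, sigma for the noise standard deviation, prior_pdf for the density of the parameter); the numerical values in the sweep are illustrative only.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def mutual_information(tau, sigma, prior_pdf, lo=-10.0, hi=10.0):
    """I(theta; r) = H(r) - H(r | theta) for the 1-bit quantizer r = sign(theta + w - tau)."""
    # (12): P(r = +1 | theta) = Q((tau - theta) / sigma); norm.sf is the Gaussian Q-function
    p_cond = lambda t: norm.sf((tau - t) / sigma)
    # (15): marginal probability of r = +1, averaging (12) over the prior of theta
    p_plus, _ = quad(lambda t: p_cond(t) * prior_pdf(t), lo, hi)
    # (14): conditional entropy H(r | theta), averaging the binary entropy of (12) over the prior
    h_cond, _ = quad(lambda t: binary_entropy(p_cond(t)) * prior_pdf(t), lo, hi)
    return binary_entropy(p_plus) - h_cond

# Sweep the noise level to examine the behaviour discussed for Fig. 1(a)
prior = norm(loc=0.0, scale=1.0).pdf
for sigma in (0.05, 0.3, 1.0, 3.0, 10.0):
    print(f"sigma = {sigma:5.2f}  I(theta; r) = {mutual_information(1.0, sigma, prior):.4f} bits")
```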

Figure 1(a) illustrates the mutual information versus the Gaussian noise standard deviation for two cases: (1) is uniformly distributed, and (2) follows a Normal distribution. Interestingly, the mutual information exhibits a non-monotonic behavior as the noise power increases in both cases. Namely, the mutual information between the input signal and the output of the 1-bit quantizer first attains a global maximum and then decays. This implies that a moderate amount of noise can indeed provide signal processing benefits in the case of 1-bit quantization schemes. This observation motivates us to employ a stochastic threshold design method in which we artificially inject noise into the system through the thresholds , in order to exploit this non-monotonic behavior of the mutual information . In addition, we further use the current knowledge of the unknown parameter at each time index to tune the quantization thresholds of each node for the next observation period, which enables us to achieve an even more accurate estimate of the parameter of interest.

— Stochastic Threshold Design: Herein, we propose our adaptive threshold design strategy for the task of 1-bit signal recovery. It can be shown that the optimal threshold given the Bernoulli observations is equal to the unknown parameter itself, i.e., at each time index. However, this optimal threshold is a function of the unknown parameter at each sampling period and therefore cannot be used in practice. Hence, a natural approach is to exploit our current knowledge of the unknown parameter to set the next quantization thresholds. Namely, the fusion center first obtains an estimate of the parameter based on the quantized information received from the nodes, and then sets the next quantization threshold for each node according to the obtained estimate of the unknown parameter (note that is our best estimate of the true value of the unknown parameter at the time index ). With the current estimate of the unknown parameter at hand, the FC samples the next quantization threshold for each node from a Normal distribution with mean and variance . In other words, the fusion center adds realizations of a zero-mean random variable to the current estimate of the unknown parameter obtained from (11) in order to choose the next thresholds. Namely, after obtaining at the th cycle, the FC chooses the next quantization thresholds according to the following model,

(16)

where are independent samples drawn from the Normal distribution . Note that, by using a stochastic threshold design strategy, we are able to introduce an artificial noise whose variance can be controlled in such a way as to not only maximize the mutual information but also incorporate the current estimate of the unknown parameter into the design. Furthermore, the variance of the random variable can be chosen according to the observation noise variance so as to maximize the mutual information. More generally, we can model as a multivariate Gaussian random vector with mean vector and covariance matrix , e.g., .
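The stochastic update in (16) then reduces to perturbing the current estimate by i.i.d. Gaussian samples; a minimal sketch, with sigma_u denoting the (assumed) standard deviation of the artificial threshold noise:

```python
def next_thresholds(theta_hat, sigma_u, K, rng):
    """Draw the next per-node thresholds around the current estimate, as in (16):
    tau_k <- theta_hat + u_k, with u_k ~ N(0, sigma_u^2) i.i.d. across the K nodes."""
    return theta_hat + sigma_u * rng.standard_normal(K)
```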

The proposed signal recovery and threshold design method is summarized in Table 1.

Step 0: Initialize the threshold vector .
Step 1: Each node performs the quantized measurement of the form , and transmits to the FC ( denotes the node index).
Step 2: The FC constructs the quantization matrix based on the received 1-bit measurement vector , and recovers by solving the proposed CQP in (9). Namely,

Step 3: Given from the previous step, the FC estimates the unknown parameter using the ML/WLS estimate given in (5).
Step 4: The fusion center chooses the next quantization threshold for each node according to the following model,

  1. .

where denotes realizations of the random variable . Repeat steps 1-4 until convergence.

Table 1: The Proposed Adaptive Signal Recovery Method.
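For concreteness, the following end-to-end sketch runs the loop of Table 1 for a fixed parameter, reusing the helper functions sketched in the previous sections (recover_unquantized, estimate_parameter, next_thresholds); all numerical values and names are illustrative assumptions rather than the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)
K, theta_true, n_iters, sigma_u = 20, 1.3, 50, 0.5

# Colored noise covariance and the matrix B of the CQP objective, under the assumed notation
A = rng.standard_normal((K, K))
C = 0.1 * (A @ A.T) + 0.05 * np.eye(K)
Cinv_ones = np.linalg.solve(C, np.ones(K))
B = np.linalg.inv(C) - np.outer(Cinv_ones, Cinv_ones) / (Cinv_ones @ np.ones(K))

tau = rng.standard_normal(K)                           # Step 0: initial thresholds
for n in range(n_iters):
    w = rng.multivariate_normal(np.zeros(K), C)
    r = np.sign(theta_true + w - tau)                  # Step 1: 1-bit measurements at the nodes
    Omega = np.diag(r)
    x_hat = recover_unquantized(B, Omega, tau)         # Step 2: solve the CQP (9)
    theta_hat = estimate_parameter(x_hat, C)           # Step 3: WLS/ML estimate (5)/(11)
    tau = next_thresholds(theta_hat, sigma_u, K, rng)  # Step 4: stochastic thresholds (16)

print(f"final estimate: {theta_hat:.4f}   true value: {theta_true}")
```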

V. Numerical Results

In this section, we evaluate the performance of the two proposed algorithms for the task of 1-bit signal recovery. We define the normalized mean square error (NMSE) of an estimate of a signal as

(17)
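A matching helper for this error measure, assuming the customary normalization by the energy of the true signal, could be:

```python
import numpy as np

def nmse(estimate, truth):
    """Normalized mean square error: ||estimate - truth||^2 / ||truth||^2 (assumed normalization)."""
    estimate, truth = np.asarray(estimate, dtype=float), np.asarray(truth, dtype=float)
    return float(np.sum((estimate - truth) ** 2) / np.sum(truth ** 2))
```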

Each data point presented in the numerical results is averaged over independent samples and realizations of the problem parameters. In the following, we analyze the performance of our proposed algorithms in different scenarios. In addition, we compare the performance of our method in the presence of white Gaussian noise (WGN) with that of the modified mean estimator (MME) proposed in [31]. It must be noted that our algorithm can handle parameter estimation in the presence of both white and colored (correlated) noise, whereas the MME method of [31] can only handle the white Gaussian noise scenario.

Fig. 1(b) shows the NMSE versus the total number of nodes for the WGN scenario, where . It can be seen that, in the presence of WGN, our proposed method significantly outperforms the MME method of [31] and attains a very high accuracy in estimating the unknown parameter.

In Fig. 2, we consider the presence of colored (correlated) Gaussian noise at the time of observation, where each node observes a time-varying parameter of the form with . In particular, Fig. 2(a) illustrates the performance of our proposed method versus the total power of the colored noise ; it can be seen that, in the presence of colored noise, the proposed method can accurately recover the unknown time-varying parameter. On the other hand, Fig. 2(b) shows the NMSE versus the total number of nodes , assuming that . It can be seen that as the number of nodes (and hence of 1-bit measurements) increases, the accuracy of our proposed method improves, and in most cases the NMSE attains values that are virtually zero (note the logarithmic scale used in the illustrations).

VI. Conclusion

In this paper, we considered the most extreme case of quantization, i.e., the 1-bit case, and proposed an efficient signal recovery and threshold design method that can recover a signal from its 1-bit noisy measurements in the presence of both white and colored Gaussian noise. Moreover, the proposed algorithm can accurately recover fixed as well as time-varying unknown parameters (e.g., a sinusoidal signal).

References

  • [1] S. Khobahi and M. Soltanalian, “Optimized transmission for consensus in wireless sensor networks,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2018, pp. 3419–3423.
  • [2] C. Kong, A. Mezghani, C. Zhong, A. L. Swindlehurst, and Z. Zhang, “Nonlinear precoding for multipair relay networks with one-bit ADCs and DACs,” IEEE Signal Processing Letters, vol. 25, no. 2, pp. 303–307, 2018.
  • [3] H. Jedda, A. Mezghani, J. A. Nossek, and A. L. Swindlehurst, “Massive MIMO downlink 1-bit precoding for frequency selective channels,” in IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP).   IEEE, 2017, pp. 1–5.
  • [4] J. Mo, “MIMO communications with low resolution ADCs,” Ph.D. dissertation, The University of Texas at Austin, 2018.
  • [5] M. S. Stein and M. Fauß, “In a one-bit rush: Low-latency wireless spectrum monitoring with binary sensor arrays,” arXiv preprint arXiv:1802.03180, 2018.
  • [6] S. Jacobsson, G. Durisi, M. Coldrey, U. Gustavsson, and C. Studer, “One-bit massive MIMO: Channel estimation and high-order modulations,” in 2015 IEEE International Conference on Communication Workshop (ICCW).   IEEE, 2015, pp. 1304–1309.
  • [7] Y. Plan and R. Vershynin, “One-bit compressed sensing by linear programming,” Communications on Pure and Applied Mathematics, vol. 66, no. 8, pp. 1275–1297, 2013.
  • [8] S. Khobahi, N. Naimipour, M. Soltanalian, and Y. C. Eldar, “Deep signal recovery with one-bit quantization,” arXiv preprint arXiv:1812.00797, 2018.
  • [9] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors,” IEEE Transactions on Information Theory, vol. 59, no. 4, pp. 2082–2102, 2013.
  • [10] N. Naimipour and M. Soltanalian, “Graph clustering using one-bit comparison data,” in 2018 IEEE Asilomar Conference on Signals, Systems and Computers.
  • [11] C. Li, R. Zhang, J. Li, and P. Stoica, “Bayesian information criterion for signed measurements with application to sinusoidal signals,” IEEE Signal Processing Letters, vol. 25, no. 8, pp. 1251–1255, 2018.
  • [12] F. Liu, H. Zhu, J. Li, P. Wang, and P. V. Orlik, “Massive MIMO channel estimation using signed measurements with antenna-varying thresholds,” in 2018 IEEE Statistical Signal Processing Workshop (SSP).   IEEE, 2018, pp. 188–192.
  • [13] E. Masry, “The reconstruction of analog signals from the sign of their noisy samples,” IEEE Transactions on Information Theory, vol. 27, pp. 735–745, 1980.
  • [14] Z. Cvetkovic and I. Daubechies, “Single-bit oversampled A/D conversion with exponential accuracy in the bit-rate,” in Proceedings of the Data Compression Conference (DCC 2000).   IEEE, 2000, pp. 343–352.
  • [15] A. Ribeiro and G. B. Giannakis, “Bandwidth-constrained distributed estimation for wireless sensor networks-part I: Gaussian case,” IEEE Transactions on Signal Processing, vol. 54, no. 3, pp. 1131–1143, 2006.
  • [16] A. Host-Madsen and P. Handel, “Effects of sampling and quantization on single-tone frequency estimation,” IEEE Transactions on Signal Processing, vol. 48, no. 3, pp. 650–662, 2000.
  • [17] O. Bar-Shalom and A. J. Weiss, “DoA estimation using one-bit quantized measurements,” IEEE Transactions on Aerospace and Electronic Systems, vol. 38, no. 3, pp. 868–884, 2002.
  • [18] O. Dabeer and A. Karnik, “Signal parameter estimation using 1-bit dithered quantization,” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5389–5405, 2006.
  • [19] O. Dabeer and E. Masry, “Multivariate signal parameter estimation under dependent noise from 1-bit dithered quantized data,” IEEE Transactions on Information Theory, vol. 54, no. 4, pp. 1637–1654, 2008.
  • [20] A. Zymnis, S. Boyd, and E. Candes, “Compressed sensing with quantized measurements,” IEEE Signal Processing Letters, vol. 17, no. 2, pp. 149–152, 2010.
  • [21] Y. Plan and R. Vershynin, “One-bit compressed sensing by linear programming,” Communications on Pure and Applied Mathematics, vol. 66, no. 8, pp. 1275–1297, 2013.
  • [22] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors,” IEEE Transactions on Information Theory, vol. 59, no. 4, pp. 2082–2102, 2013.
  • [23] M. Yan, Y. Yang, and S. Osher, “Robust 1-bit compressive sensing using adaptive outlier pursuit,” IEEE Transactions on Signal Processing, vol. 60, no. 7, pp. 3868–3875, 2012.
  • [24] U. S. Kamilov, A. Bourquard, A. Amini, and M. Unser, “One-bit measurements with adaptive thresholds,” IEEE Signal Processing Letters, vol. 19, no. 10, pp. 607–610, 2012.
  • [25] A. Ai, A. Lapanowski, Y. Plan, and R. Vershynin, “One-bit compressed sensing with non-gaussian measurements,” Linear Algebra and its Applications, vol. 441, pp. 222–239, 2014.
  • [26] L. Zhang, J. Yi, and R. Jin, “Efficient algorithms for robust one-bit compressive sensing,” in International Conference on Machine Learning, 2014, pp. 820–828.
  • [27] P. T. Boufounos and R. G. Baraniuk, “1-bit compressive sensing,” in 42nd Annual Conference on Information Sciences and Systems, 2008. CISS 2008.   IEEE, 2008, pp. 16–21.
  • [28] Y. Plan and R. Vershynin, “Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach,” IEEE Transactions on Information Theory, vol. 59, no. 1, pp. 482–494, 2013.
  • [29] S. Boyd and L. Vandenberghe, Convex optimization.   Cambridge university press, 2004.
  • [30] T. M. Cover and J. A. Thomas, Elements of information theory.   John Wiley & Sons, 2012.
  • [31] T. Wu and Q. Cheng, “Distributed estimation over fading channels using one-bit quantization,” IEEE Transactions on Wireless Communications, vol. 8, no. 12, 2009.