I Introduction
Wireless sensor networks (WSNs) hold significant potential for spatially wide-scale detection and estimation. However, several practical constraints, such as low power consumption, low manufacturing cost, and, more importantly, limited computational and transmission capabilities, must be addressed in implementing such networks [1]. As a result, it is highly desirable to develop distributed estimation frameworks with which the nodes can reliably communicate with the fusion center (FC) while satisfying the power and computational constraints. Furthermore, quantization of the signal of interest is a necessary first step in many digital signal processing applications such as spectrum sensing, radar, and wireless communications. Analog-to-digital converters (ADCs) play a central role in most modern digital systems as they bridge the gap between the analog world and its digital counterpart. Yet, ADCs are a key implementation bottleneck in WSNs and many other Internet-of-Things (IoT) applications due to the significant accrued power consumption and cost when they are used in large numbers.
In addition, the system bandwidth in a WSN is limited and thus constitutes a fundamental constraint that must be considered. Hence, it is important to use a proper quantization scheme that reduces the number of communicated bits prior to transmission. Sampling at high data rates with high-resolution ADCs would also dramatically increase the manufacturing cost of these electronic components. An immediate solution to such challenges is to use low-resolution, and specifically 1-bit, ADCs [2, 3, 4, 5]. Accordingly, the problem of recovering a signal from 1-bit measurements has attracted a great deal of interest over the past few years in a wide range of applications; see, e.g., [6, 7, 8, 9, 10, 11, 12], and the references therein. To name a few, the authors in [13, 14, 15, 16, 17, 18, 19] have investigated this problem from a classical statistical viewpoint. In particular, 1-bit sampling and signal recovery have also been extensively studied in the context of the recently introduced one-bit compressive sensing (CS) problem [20, 21, 22, 23, 24, 25, 26, 27, 28]. More specifically, the task of recovering the frequency and phase of temporal and spatial sinusoidal signals using only 1-bit information with fixed quantization thresholds was investigated in [16] and [17], respectively, while the recovery of general signals with high-dimensional parameters from sign-comparison information was considered in [18] and [19]. In the context of CS, it was shown that sparse signals can be accurately recovered with high probability from 1-bit data when a sufficient number of measurements is obtained [21, 22]. However, most of the CS literature has only considered the case of comparing the signal of interest with zero, which makes it impossible to recover the amplitude of the signal of interest.

In this paper, we examine the most extreme case of quantization, i.e., the 1-bit case, and propose an efficient signal estimation and threshold design algorithm that can perform signal recovery from 1-bit noisy measurements under white, colored, or correlated noise scenarios. Furthermore, the proposed method can handle the task of parameter estimation for both time-varying and fixed signals.
II System Model
We consider a wireless sensor network with $K$ spatially distributed single-antenna nodes, each of which observes an unknown deterministic parameter $\theta[n]$, at the time index $n$, according to the linear observation model $x_k[n] = \theta[n] + \nu_k[n]$, where $k \in \{1, \dots, K\}$ denotes the sensor index, and $\nu_k[n]$ is additive zero-mean Gaussian observation noise, i.e., $\nu_k[n] \sim \mathcal{N}(0, \sigma_k^2)$. Hence, the observed signal at all nodes can be compactly formulated as

$\mathbf{x}[n] = \theta[n]\,\mathbf{1} + \boldsymbol{\nu}[n]$, (1)

where $\mathbf{x}[n] = [x_1[n], \dots, x_K[n]]^T$, $\boldsymbol{\nu}[n] = [\nu_1[n], \dots, \nu_K[n]]^T$, and $\mathbf{1}$ denotes the all-one vector.
In order to satisfy the inherent bandwidth and power budget constraints in WSNs, we assume that each node utilizes a 1-bit quantization scheme to encode its observation into 1 bit of information, which is then transmitted to the fusion center for further processing. Namely, the $k$-th node applies the following quantization function on its observed data prior to transmission:

$y_k[n] = \mathrm{sign}\left(x_k[n] - \tau_k[n]\right)$, (2)

where $\mathrm{sign}(\cdot)$ is the sign function, and $\tau_k[n]$ denotes the quantization threshold at the $k$-th node, for the time index $n$. Assuming that each node can reliably transmit 1 bit of information to the FC, the aggregated received data from all nodes at the $n$-th sampling time can be expressed as

$\mathbf{y}[n] = \mathrm{sign}\left(\mathbf{x}[n] - \boldsymbol{\tau}[n]\right)$, (3)
where the sign function is applied element-wise, $\boldsymbol{\nu}[n]$ is the combined observation noise vector with covariance matrix $\mathbf{C}$, and $\boldsymbol{\tau}[n] = [\tau_1[n], \dots, \tau_K[n]]^T$ is the quantization threshold vector. Next, the FC utilizes the received 1-bit information to first construct an estimate of the unknown parameter $\theta[n]$, and then to design the next quantization threshold for each node accordingly.
An important observation which we use is that, given a set of quantization thresholds $\boldsymbol{\tau}[n]$, the corresponding vector of 1-bit measurements $\mathbf{y}[n]$ defined in (3) restricts the geometric location of the unquantized data $\mathbf{x}[n]$. In particular, one can capture this geometric knowledge on $\mathbf{x}[n]$ through the following linear inequality:

$\boldsymbol{\Omega}[n]\left(\mathbf{x}[n] - \boldsymbol{\tau}[n]\right) \succeq \mathbf{0}$, (4)

where $\boldsymbol{\Omega}[n] = \mathrm{Diag}(\mathbf{y}[n])$, $\mathrm{Diag}(\cdot)$ is the diagonalization operator, and $\mathbf{0}$ denotes the all-zero vector.
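As a quick sanity check, the observation model (1), the element-wise quantizer (2)-(3), and the geometric constraint (4) can be simulated in a few lines; the node count, parameter value, noise level, and threshold range below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 8          # number of sensor nodes (illustrative)
theta = 1.5    # unknown deterministic parameter theta[n] at one time index
sigma = 0.5    # observation-noise standard deviation

# Linear observation model (1): x_k[n] = theta[n] + nu_k[n]
x = theta + sigma * rng.standard_normal(K)

# Per-node 1-bit quantization (2)-(3) against thresholds tau_k[n]
tau = rng.uniform(0.0, 3.0, size=K)
y = np.sign(x - tau)

# Geometric consistency (4): Diag(y[n]) (x[n] - tau[n]) >= 0, element-wise
omega = np.diag(y)
assert np.all(omega @ (x - tau) >= 0)
```

By construction, every realization of the model satisfies (4), which is what makes (4) usable as a hard constraint on the unquantized data at the FC.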
III Centralized 1-Bit Signal Recovery via Quadratic Programming
We lay the ground for our 1-bit statistical inference model using the weighted least squares (WLS) method. Let $\mathbf{C}$ denote the covariance matrix of the noise vector $\boldsymbol{\nu}[n]$. Note that if the unquantized information vector $\mathbf{x}[n]$ were available at the FC, then the maximum likelihood (ML) estimate of the unknown parameter given $\mathbf{x}[n]$ could be expressed as:

$\hat{\theta}_{\mathrm{ML}}[n] = \left(\mathbf{1}^T\mathbf{C}^{-1}\mathbf{1}\right)^{-1}\mathbf{1}^T\mathbf{C}^{-1}\mathbf{x}[n]$, (5)

where $\hat{\theta}_{\mathrm{ML}}[n]$ denotes the ML estimate of the unknown parameter according to the vector of observations $\mathbf{x}[n]$. Furthermore, it is well known that the variance of the ML estimate in (5) is given by $\left(\mathbf{1}^T\mathbf{C}^{-1}\mathbf{1}\right)^{-1}$. Alternatively, one can obtain the ML estimate of the unknown parameter by minimizing the following WLS criterion,

$f(\mathbf{x}, \theta) = \left(\mathbf{x}[n] - \theta\,\mathbf{1}\right)^T \mathbf{C}^{-1} \left(\mathbf{x}[n] - \theta\,\mathbf{1}\right)$, (6)
where $f(\mathbf{x}, \theta)$ is our cost function to be minimized over the parameters $(\mathbf{x}, \theta)$. A natural approach to obtain an estimate of $\theta[n]$ using the 1-bit quantization vector $\mathbf{y}[n]$ is to use an alternating optimization approach and further exploit the restriction on the geometric location of $\mathbf{x}[n]$ imposed by (4); in other words, to first obtain an estimate of $\mathbf{x}[n]$ by fixing the variable $\theta$, and then to recover the unknown parameter using (5). Interestingly, for a fixed $\mathbf{x}$, the optimal $\theta$ that minimizes $f(\mathbf{x}, \theta)$ coincides with the ML estimate given in (5). We can further substitute the optimal $\theta$ of (5) into (6) to simplify the objective function in terms of the parameter $\mathbf{x}$, viz.

$f(\mathbf{x}) = \mathbf{x}^T \mathbf{Q}\,\mathbf{x}$, (7)

where we define $\mathbf{Q}$ as

$\mathbf{Q} = \mathbf{C}^{-1} - \dfrac{\mathbf{C}^{-1}\mathbf{1}\mathbf{1}^T\mathbf{C}^{-1}}{\mathbf{1}^T\mathbf{C}^{-1}\mathbf{1}}$. (8)
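As a numerical companion to (5)-(6), a minimal sketch of the WLS/ML estimator and its variance follows; the white-noise covariance used in the sanity check is an assumption for illustration (with a scaled-identity covariance the estimator reduces to the sample mean).

```python
import numpy as np

def wls_estimate(x, C):
    """ML/WLS estimate of a scalar parameter observed in Gaussian noise.

    Implements theta_hat = (1^T C^{-1} 1)^{-1} 1^T C^{-1} x, the minimizer of
    the WLS criterion (x - theta*1)^T C^{-1} (x - theta*1), together with the
    estimator variance (1^T C^{-1} 1)^{-1}.
    """
    ones = np.ones(len(x))
    Ci1 = np.linalg.solve(C, ones)        # C^{-1} 1 (avoids an explicit inverse)
    theta_hat = (Ci1 @ x) / (Ci1 @ ones)
    var = 1.0 / (Ci1 @ ones)
    return theta_hat, var

# Sanity check: with C = sigma^2 I, the WLS estimate equals the sample mean
rng = np.random.default_rng(1)
x = 2.0 + 0.3 * rng.standard_normal(50)
theta_hat, var = wls_estimate(x, 0.09 * np.eye(50))
```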
Consequently, one can cast the problem of recovering the unquantized vector $\mathbf{x}[n]$ from 1-bit noisy measurements as the following constrained quadratic program (CQP):

$\min_{\mathbf{x}} \ \mathbf{x}^T \mathbf{Q}\,\mathbf{x}$ (9)

$\mathrm{s.t.} \ \boldsymbol{\Omega}[n]\left(\mathbf{x} - \boldsymbol{\tau}[n]\right) \succeq \mathbf{0}$, (10)

where the inequality in (10) is applied element-wise (equivalent to $K$ scalar inequality constraints). Note that the constraint (10) ensures consistency between the received 1-bit quantized data (as incorporated in $\boldsymbol{\Omega}[n]$) and the solution $\hat{\mathbf{x}}[n]$, and that the final parameter estimate is then obtained by substituting $\hat{\mathbf{x}}[n]$ into (5), i.e.,

$\hat{\theta}[n] = \left(\mathbf{1}^T\mathbf{C}^{-1}\mathbf{1}\right)^{-1}\mathbf{1}^T\mathbf{C}^{-1}\hat{\mathbf{x}}[n]$. (11)

Moreover, note that the matrix $\mathbf{Q}$ at the core of this CQP is positive semidefinite. Therefore, the CQP in (9) is convex and can be solved efficiently using standard numerical methods (e.g., the interior-point method [29]).
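A small end-to-end sketch of the CQP (9)-(10) follows, using a general-purpose SLSQP solver in place of a dedicated interior-point method; the network size, noise level, and threshold distribution are illustrative assumptions. Note that the threshold vector itself is always a feasible starting point for (10).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

K, theta, sigma = 12, 1.0, 0.4
C = sigma**2 * np.eye(K)                 # white-noise covariance (illustrative)
x_true = theta + sigma * rng.standard_normal(K)
tau = rng.uniform(-1.0, 3.0, size=K)
y = np.sign(x_true - tau)                # received 1-bit data, as in (3)

# Q = C^{-1} - C^{-1} 1 1^T C^{-1} / (1^T C^{-1} 1), the PSD matrix of (7)-(8)
ones = np.ones(K)
Ci = np.linalg.inv(C)
Q = Ci - np.outer(Ci @ ones, Ci @ ones) / (ones @ Ci @ ones)

# CQP (9)-(10): minimize x^T Q x subject to y_k (x_k - tau_k) >= 0
res = minimize(
    fun=lambda x: x @ Q @ x,
    x0=tau.copy(),                       # thresholds are feasible by construction
    jac=lambda x: 2.0 * Q @ x,
    constraints={"type": "ineq", "fun": lambda x: y * (x - tau)},
    method="SLSQP",
)
x_hat = res.x

# Recover the parameter by plugging x_hat into the WLS estimator (5)
theta_hat = (ones @ Ci @ x_hat) / (ones @ Ci @ ones)
```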
IV Quantization Threshold Design
So far, we have discussed our inference framework for estimating an unknown deterministic signal from its 1-bit noisy measurements and the corresponding quantization thresholds. In order to further facilitate the estimation process and to restore the exact signal model (both amplitude and phase), it is essential to design a proper adaptive quantization thresholding scheme. In this section, we devise a stochastic adaptive thresholding method for our estimation algorithm which, thanks to its adaptive nature, empowers the proposed inference framework to accurately recover time-varying signals as well. We first investigate the performance of a 1-bit quantizer of the form (2) in the presence of additive noise from an information-theoretic viewpoint, and show that the presence of noise can indeed improve the performance of a 1-bit quantization scheme. We then propose our threshold design strategy accordingly.
We first consider the case where the unknown parameter $\theta$ is a random variable with a known distribution, and the observation noise is distributed as $\nu \sim \mathcal{N}(0, \sigma^2)$. Clearly, each 1-bit sample $y$ (at a given time index and quantization threshold value $\tau$) can be seen as a random variable that follows a Bernoulli distribution, whose parameter is given by

$p(\theta) = \Pr\left(y = +1 \mid \theta\right) = Q\!\left(\dfrac{\tau - \theta}{\sigma}\right)$, (12)
where $p(\theta)$ represents the conditional probability of receiving $y = +1$ given the parameter $\theta$, and $Q(\cdot)$ denotes the standard Q-function. The mutual information [30] between the unknown parameter $\theta$ and the obtained 1-bit sample $y$ can be expressed as

$I(\theta; y) = H(y) - H(y \mid \theta)$, (13)

where $p_y(\cdot)$ is the probability mass function (pmf) of the discrete Bernoulli random variable $y$, $f_\theta(\cdot)$ is the probability density function (pdf) of the parameter of interest $\theta$, and $H(\cdot)$ denotes the entropy of the argument random variable. Moreover, the conditional entropy in (13) can be further simplified in terms of the noise distribution and according to (12) as follows:

$H(y \mid \theta) = \int f_\theta(\theta)\, h_b\!\left(Q\!\left(\dfrac{\tau - \theta}{\sigma}\right)\right) d\theta$, (14)

where $h_b(\cdot)$ denotes the binary entropy function.
Moreover, the pmf of the Bernoulli random variable $y$ can be recast as

$\Pr\left(y = +1\right) = \int Q\!\left(\dfrac{\tau - \theta}{\sigma}\right) f_\theta(\theta)\, d\theta, \qquad \Pr\left(y = -1\right) = 1 - \Pr\left(y = +1\right)$. (15)

Eventually, one can easily calculate the mutual information in (13) between the unknown parameter $\theta$ and the observed 1-bit samples by utilizing (12)-(15).
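The quantities (12)-(15) can be evaluated by numerical integration. The sketch below assumes a standard Normal prior on the parameter and a fixed threshold offset from the prior mean (both illustrative choices); with such an offset, the computed mutual information is largest at an intermediate noise level rather than at zero noise.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def binary_entropy(p):
    """Binary entropy h_b(p) in bits, with clipping to avoid log(0)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mutual_information(tau, sigma, prior_pdf=norm.pdf):
    """I(theta; y) for y = sign(theta + noise - tau), noise ~ N(0, sigma^2).

    Uses (12)-(15): p(theta) = Pr(y = +1 | theta) = Q((tau - theta) / sigma),
    H(y | theta) = E[h_b(p(theta))], and H(y) from the marginal pmf (15).
    """
    p_cond = lambda t: norm.sf((tau - t) / sigma)   # Gaussian Q-function
    p_marg, _ = quad(lambda t: p_cond(t) * prior_pdf(t), -10, 10, points=[tau])
    h_cond, _ = quad(lambda t: binary_entropy(p_cond(t)) * prior_pdf(t),
                     -10, 10, points=[tau])
    return binary_entropy(p_marg) - h_cond

# Non-monotonic behavior: for a threshold well above the prior mean, moderate
# noise conveys more information than very small or very large noise
# (a stochastic-resonance effect).
sigmas = [0.05, 1.0, 20.0]
mi = [mutual_information(tau=3.0, sigma=s) for s in sigmas]
```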
Figure 1(a) illustrates the mutual information $I(\theta; y)$ versus the Gaussian noise standard deviation $\sigma$ for two cases: (1) $\theta$ is uniformly distributed, and (2) $\theta$ follows a Normal distribution. Surprisingly, as the noise power increases, the mutual information exhibits a non-monotonic behavior in both cases. Namely, the mutual information between the input signal $\theta$ and the output of the 1-bit quantizer $y$ first attains a global maximum and then dampens. This implies that a moderate amount of noise can indeed provide signal processing benefits in the case of 1-bit quantization schemes. This motivates us to employ a stochastic threshold design method in which we artificially induce noise into the system through the thresholds $\boldsymbol{\tau}[n]$, in order to exploit this non-monotonic behavior of the mutual information. In addition, we further exploit the current knowledge of the unknown parameter at each time index to tune the quantization thresholds of each node for the next observation period, which enables us to achieve an even more accurate estimate of the parameter of interest.

Stochastic Threshold Design: Herein, we propose our adaptive threshold design strategy for the task of 1-bit signal recovery. It can be shown that the optimal threshold given the Bernoulli observations is equal to the unknown parameter itself, i.e., $\tau_k[n] = \theta[n]$ at each time index. Nevertheless, this optimal threshold cannot be used in practice, since it is a function of the unknown parameter at each sampling period. Hence, a natural approach is to exploit our current knowledge of the unknown parameter to set the next quantization thresholds. Namely, the fusion center should first obtain an estimate $\hat{\theta}[n]$ of the parameter based on the received quantized information from the nodes, and then set the next quantization threshold for each node according to the obtained estimate (note that $\hat{\theta}[n]$ is our best estimate of the real value of the unknown parameter at the time index $n$).
With the current estimate $\hat{\theta}[n]$ of the unknown parameter at hand, the FC samples the next quantization threshold for each node from a Normal distribution with mean $\hat{\theta}[n]$ and variance $\sigma_w^2$. In other words, the fusion center adds realizations of a zero-mean random variable to the current estimate of the unknown parameter obtained from (11) in order to choose the next thresholds. Namely, after obtaining $\hat{\theta}[n]$ at the $n$-th cycle, the FC chooses the next quantization thresholds according to the following model,

$\tau_k[n+1] = \hat{\theta}[n] + w_k[n]$, (16)

where $w_k[n]$ are independent samples drawn from the normal distribution $\mathcal{N}(0, \sigma_w^2)$. Note that, by using a stochastic threshold design strategy, we are able to introduce an artificial noise whose variance can be controlled so as to not only maximize the mutual information but also incorporate the current estimate of the unknown parameter in the design. Furthermore, the variance $\sigma_w^2$ can be chosen according to the observation noise variance to maximize the mutual information. More generally, we can model $\boldsymbol{\tau}[n+1]$ as a multivariate Gaussian random vector with mean vector $\hat{\theta}[n]\,\mathbf{1}$ and covariance matrix $\boldsymbol{\Sigma}_w$, i.e., $\boldsymbol{\tau}[n+1] \sim \mathcal{N}(\hat{\theta}[n]\,\mathbf{1}, \boldsymbol{\Sigma}_w)$.
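The overall adaptive loop can be sketched as follows. For brevity, the FC-side estimation step here is a simplified stochastic-approximation update (nudging the estimate by the average received sign with a decaying step size), standing in for the CQP estimator of Section III; all numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

K, N = 30, 60               # number of nodes and sampling cycles (illustrative)
sigma, sigma_w = 0.5, 0.5   # observation-noise and threshold-dither std devs
theta = 1.7                 # unknown fixed parameter to be recovered

theta_hat = 0.0                          # initial estimate at the FC
tau = rng.uniform(-2.0, 2.0, size=K)     # initial thresholds

for n in range(N):
    x = theta + sigma * rng.standard_normal(K)   # node observations
    y = np.sign(x - tau)                         # 1-bit data received at the FC
    # Simplified estimation step (surrogate for the CQP of Section III):
    # the mean sign is positive when theta exceeds the current estimate.
    theta_hat += np.mean(y) / (n + 1)
    # Stochastic threshold update (16): dither around the current estimate
    tau = theta_hat + sigma_w * rng.standard_normal(K)
```

After a few tens of cycles the estimate settles near the true parameter, even though each node only ever reports one bit per cycle.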
The proposed signal recovery and threshold design method is summarized in Table 1.
V Numerical Results
In this section, we evaluate the performance of the two proposed algorithms for the task of 1-bit signal recovery. We define the normalized mean square error (NMSE) of an estimate $\hat{\boldsymbol{\theta}}$ of a signal $\boldsymbol{\theta}$ as

$\mathrm{NMSE} = \dfrac{\|\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}\|_2^2}{\|\boldsymbol{\theta}\|_2^2}$. (17)

Each data point presented in the numerical results is averaged over independent realizations of the problem parameters. In the following, we analyze the performance of our proposed algorithms in different scenarios. In addition, we compare the performance of our method in the presence of white Gaussian noise (WGN) with the modified mean estimator (MME) proposed in [31]. It must be noted that our algorithm can handle the task of parameter estimation in the presence of both white and colored (correlated) noise, whereas the MME method of [31] can only handle the white Gaussian noise scenario.
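The error metric (17) is straightforward to compute; the example values below are purely illustrative.

```python
import numpy as np

def nmse(theta_hat, theta):
    """Normalized mean square error (17): ||theta_hat - theta||^2 / ||theta||^2."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    theta = np.asarray(theta, dtype=float)
    return float(np.sum((theta_hat - theta) ** 2) / np.sum(theta ** 2))

# A uniform 10% amplitude error on every entry gives NMSE = 0.01
signal = np.array([1.0, 2.0, 3.0])
estimate = 1.1 * signal
```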
Fig. 1(b) demonstrates the NMSE vs. the total number of nodes $K$ for the white Gaussian noise (WGN) scenario. It can be seen that in the presence of WGN, our proposed method significantly outperforms the MME method of [31] and attains a very high accuracy in estimating the unknown parameter.
In Fig. 2 we consider the presence of (correlated) colored Gaussian noise at the time of observation, where each node observes a time-varying sinusoidal parameter. In particular, Fig. 2(a) illustrates the performance of our proposed method versus the total power of the colored noise, and it can be seen that in the presence of colored noise, the proposed method can accurately recover the unknown time-varying parameter. On the other hand, Fig. 2(b) demonstrates the NMSE vs. the total number of nodes $K$. It can be seen that as the number of nodes (and hence of 1-bit measurements) increases, the accuracy of our proposed method improves, and in most cases the NMSE attains values that are virtually zero (note that we used a logarithmic scale in our illustrations).
VI Conclusion
In this paper, we considered the most extreme case of quantization, i.e., the 1-bit case, and proposed an efficient signal recovery and threshold design method that can perform signal recovery from 1-bit noisy measurements in the presence of both white and colored Gaussian noise. Moreover, the proposed algorithms can accurately recover fixed as well as time-varying unknown parameters (e.g., a sinusoidal signal).
References
 [1] S. Khobahi and M. Soltanalian, “Optimized transmission for consensus in wireless sensor networks,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 3419–3423.
 [2] C. Kong, A. Mezghani, C. Zhong, A. L. Swindlehurst, and Z. Zhang, “Nonlinear precoding for multipair relay networks with one-bit ADCs and DACs,” IEEE Signal Processing Letters, vol. 25, no. 2, pp. 303–307, 2018.
 [3] H. Jedda, A. Mezghani, J. A. Nossek, and A. L. Swindlehurst, “Massive MIMO downlink 1-bit precoding for frequency selective channels,” in IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). IEEE, 2017, pp. 1–5.
 [4] J. Mo, “MIMO communications with low resolution ADCs,” Ph.D. dissertation, The University of Texas at Austin, 2018.
 [5] M. S. Stein and M. Fauß, “In a one-bit rush: Low-latency wireless spectrum monitoring with binary sensor arrays,” arXiv preprint arXiv:1802.03180, 2018.
 [6] S. Jacobsson, G. Durisi, M. Coldrey, U. Gustavsson, and C. Studer, “One-bit massive MIMO: Channel estimation and high-order modulations,” in 2015 IEEE International Conference on Communication Workshop (ICCW). IEEE, 2015, pp. 1304–1309.
 [7] Y. Plan and R. Vershynin, “One-bit compressed sensing by linear programming,” Communications on Pure and Applied Mathematics, vol. 66, no. 8, pp. 1275–1297, 2013.
 [8] S. Khobahi, N. Naimipour, M. Soltanalian, and Y. C. Eldar, “Deep signal recovery with one-bit quantization,” arXiv preprint arXiv:1812.00797, 2018.
 [9] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors,” IEEE Transactions on Information Theory, vol. 59, no. 4, pp. 2082–2102, 2013.
 [10] N. Naimipour and M. Soltanalian, “Graph clustering using one-bit comparison data,” in 2018 IEEE Asilomar Conference on Signals, Systems and Computers.
 [11] C. Li, R. Zhang, J. Li, and P. Stoica, “Bayesian information criterion for signed measurements with application to sinusoidal signals,” IEEE Signal Processing Letters, vol. 25, no. 8, pp. 1251–1255, 2018.
 [12] F. Liu, H. Zhu, J. Li, P. Wang, and P. V. Orlik, “Massive MIMO channel estimation using signed measurements with antenna-varying thresholds,” in 2018 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2018, pp. 188–192.
 [13] E. Masry, “The reconstruction of analog signals from the sign of their noisy samples,” IEEE Transactions on Information Theory, vol. 27, pp. 735–745, 1980.
 [14] Z. Cvetkovic and I. Daubechies, “Single-bit oversampled A/D conversion with exponential accuracy in the bit-rate,” in Data Compression Conference, 2000. Proceedings. DCC 2000. IEEE, 2000, pp. 343–352.
 [15] A. Ribeiro and G. B. Giannakis, “Bandwidth-constrained distributed estimation for wireless sensor networks - part I: Gaussian case,” IEEE Transactions on Signal Processing, vol. 54, no. 3, pp. 1131–1143, 2006.
 [16] A. Host-Madsen and P. Handel, “Effects of sampling and quantization on single-tone frequency estimation,” IEEE Transactions on Signal Processing, vol. 48, no. 3, pp. 650–662, 2000.
 [17] O. Bar-Shalom and A. J. Weiss, “DoA estimation using one-bit quantized measurements,” IEEE Transactions on Aerospace and Electronic Systems, vol. 38, no. 3, pp. 868–884, 2002.
 [18] O. Dabeer and A. Karnik, “Signal parameter estimation using 1-bit dithered quantization,” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5389–5405, 2006.
 [19] O. Dabeer and E. Masry, “Multivariate signal parameter estimation under dependent noise from 1-bit dithered quantized data,” IEEE Transactions on Information Theory, vol. 54, no. 4, pp. 1637–1654, 2008.
 [20] A. Zymnis, S. Boyd, and E. Candes, “Compressed sensing with quantized measurements,” IEEE Signal Processing Letters, vol. 17, no. 2, pp. 149–152, 2010.
 [21] Y. Plan and R. Vershynin, “One-bit compressed sensing by linear programming,” Communications on Pure and Applied Mathematics, vol. 66, no. 8, pp. 1275–1297, 2013.
 [22] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors,” IEEE Transactions on Information Theory, vol. 59, no. 4, pp. 2082–2102, 2013.
 [23] M. Yan, Y. Yang, and S. Osher, “Robust 1-bit compressive sensing using adaptive outlier pursuit,” IEEE Transactions on Signal Processing, vol. 60, no. 7, pp. 3868–3875, 2012.
 [24] U. S. Kamilov, A. Bourquard, A. Amini, and M. Unser, “One-bit measurements with adaptive thresholds,” IEEE Signal Processing Letters, vol. 19, no. 10, pp. 607–610, 2012.
 [25] A. Ai, A. Lapanowski, Y. Plan, and R. Vershynin, “One-bit compressed sensing with non-Gaussian measurements,” Linear Algebra and its Applications, vol. 441, pp. 222–239, 2014.
 [26] L. Zhang, J. Yi, and R. Jin, “Efficient algorithms for robust one-bit compressive sensing,” in International Conference on Machine Learning, 2014, pp. 820–828.
 [27] P. T. Boufounos and R. G. Baraniuk, “1-bit compressive sensing,” in 42nd Annual Conference on Information Sciences and Systems (CISS), 2008, pp. 16–21.
 [28] Y. Plan and R. Vershynin, “Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach,” IEEE Transactions on Information Theory, vol. 59, no. 1, pp. 482–494, 2013.
 [29] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
 [30] T. M. Cover and J. A. Thomas, Elements of information theory. John Wiley & Sons, 2012.
 [31] T. Wu and Q. Cheng, “Distributed estimation over fading channels using one-bit quantization,” IEEE Transactions on Wireless Communications, vol. 8, no. 12, 2009.