I Introduction
Compressive sensing (CS) has provided a new signal processing paradigm whereby perfect/stable sparse signal recovery is provably achievable using measurements sampled at rates below the Nyquist frequency [1, 2]. Such a sub-Nyquist nature potentially economizes data gathering and storage; the reduction in measurement size can further facilitate efficient signal processing and reduce subsequent data transmission overheads. All these benefits make CS well suited to the design of resource-constrained wireless sensor networks (WSNs) [3, 4, 5, 6, 7]. Support identification is an important step in CS-based signal reconstruction, from both theoretical and application standpoints [2]. In the literature on CS-based WSNs, acquisition of a signal support estimate, or of partial support knowledge, is crucial for the design of efficient distributed signal processing algorithms. For example, in the context of distributed sparse signal detection [5, 6], each local node first identifies a support and then projects its measurement onto the estimated signal subspace for noise reduction and reliable signal detection. For cost-aware WSNs, knowledge of a support estimate at the fusion center (FC) is needed to design sensor scheduling protocols for energy reduction [8]. For support identification in [5] and [6], each sensor node needs to gather a vector measurement of sufficiently large size and to run a CS-based reconstruction algorithm, such as orthogonal matching pursuit (OMP), for support estimation. This places large data storage and computational burdens on the sensing devices.
To reduce the cost of support knowledge acquisition, in this paper we propose a fusion-based cooperative support identification and sparse signal reconstruction scheme. In the proposed approach, each sensor (i) employs a sparse sensing vector with support for data gathering (notably, sparse sensing vectors/matrices have been considered in the study of CS-based data acquisition and inference [9]), (ii) observes a scalar measurement (rather than a vector measurement) for partial support inference, and (iii) adopts 1-bit information exchange during the collaborative support identification phase. Notably, (i) and (ii) economize data measurement and storage costs, whereas (iii) reduces communication overhead. On the basis of (i) and (ii), we devise a binary local decision rule at each node to infer whether the sensing vector support overlaps with the desired signal support. If the overlap is judged to be nonempty, the sensor broadcasts a 1-bit message to all the other nodes (i.e., step (iii)), and otherwise keeps silent to conserve energy. Using the 1-bit messages received from all active nodes, each sensor forms a common support estimate by means of a simple counting rule. To the best of our knowledge, our study is the first in the literature to show that support identification can be realized by means of a cooperative binary decision-fusion protocol. Once the support estimate is available, only those nodes whose sensing supports intersect it forward their measurements to the FC for global signal reconstruction. The mean communication cost of the proposed scheme, which involves 1-bit information exchange for cooperative support identification and real-valued data transmission for global signal reconstruction, is analyzed. To fully exploit the support knowledge, the FC employs the weighted ℓ1 minimization algorithm [10] for global signal reconstruction, with the weighting coefficients determined by the support estimate.
Simulation results show that the proposed scheme outperforms the conventional method, which activates all the sensor nodes with real-valued data transmission, while incurring a lower communication cost.
II System Model
We consider a WSN in which sensor nodes are coordinated by an FC to collaboratively estimate a sparse signal with unknown support. Each sensor node makes a scalar observation obeying the following model
(1) 
where is the scalar measurement, is a sparse sensing vector with support, and is the observation noise, assumed to be independent and identically distributed (i.i.d.) zero-mean Gaussian with a common variance. The considered model finds applications in, e.g., cooperative wideband spectrum sensing in cognitive radio [11, 12], in which networked cognitive users and an FC collaboratively estimate/detect a common primary user’s signal occupying only a few (but unknown) frequency bands. Thanks to the sparse nature of the unknown signal and the sensing vectors, (1) can be rewritten as
(2) 
which in turn enables us to infer partial knowledge about the signal support at each node. For instance, when the noise power is very small, certain elements of the sensing support shall be included in the signal support if the measurement is not close to zero, whereas all of them can be precluded otherwise. Therefore, by exploiting such prior information about the unknown signal support, this paper proposes a fusion-based cooperative support identification and signal reconstruction scheme. The following assumptions are made in the sequel.
Assumption 1
The signal support is uniformly drawn from the collection of all possible sparsity pattern sets, where with and .
Assumption 2
The nonzero entries of , say , for , are i.i.d. with , and are independent of the observation noise ’s.
Assumption 3
For each , the sensing vector is binary with nonzero entries, i.e., for , and .
Assumption 4
For each , the sensing vector support is uniformly drawn from the collection of all possible sparsity pattern sets, where with and .
Assumption 5
The sensing vectors ’s, , are known at the FC, whereas only their supports ’s, , are known at each sensor.
Remark: The uniform support assumption in Assumption 1 is widely used in the literature on CS signal detection and estimation. It is typically true in the cooperative spectrum sensing scenario, in which no prior knowledge about the frequency bands occupied by the primary user is available to the cognitive users. Assumption 2, regarding Gaussian signal entries, is also quite standard in the study of CS (e.g., [13, 14, 15, 16]); related applications can be found in spectrum sensing when the primary user adopts OFDM modulation (an OFDM symbol is the inverse Fourier transform of independent finite-alphabet source symbols, and is approximately Gaussian distributed, especially when the number of subcarriers is large [14]). Binary sparse sensing, as considered in Assumption 3, can be found in, e.g., biomedical imaging [17], where it reduces data storage cost and execution time. Meanwhile, on account of the uniform distribution of the signal support, a natural and reasonable rule for generating the sparse sensing vector supports is likewise the uniform distribution (Assumption 4). Finally, Assumption 5 is valid in scenarios such as cooperative spectrum sensing and source localization, in which network-wide knowledge of the sensing vectors (either the full sparse sensing vectors or just their supports) can be acquired during the system build-up phase.
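As a concrete illustration of Assumptions 3 and 4, a binary sparse sensing vector can be generated as in the following minimal Python sketch (the names `n` and `m` for the ambient dimension and nonzero count are placeholders, not the paper's notation):

```python
import random

def sparse_sensing_vector(n, m, rng=random):
    """Binary sparse sensing vector: ambient dimension n, exactly m
    unit entries, with support drawn uniformly over all size-m subsets
    (in the spirit of Assumptions 3 and 4)."""
    support = set(rng.sample(range(n), m))  # uniform over size-m supports
    return [1 if i in support else 0 for i in range(n)]

vec = sparse_sensing_vector(20, 3)
```

Each call yields a 0/1 vector whose support is equally likely among all size-`m` index sets, matching the uniform-support model.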
III Cooperative Support Identification
III-A Proposed Protocol
Step I: Local Partial Support Inference

Using its scalar observation, each sensor node first makes a local decision to infer whether the desired sparse signal lies in its sensing region, i.e.,
(3) 
Afterwards, sensors with a positive local decision broadcast their binary decisions to all the other nodes, while the remaining sensors keep silent to conserve energy.
Step II: Fusion for Support Identification

Upon receiving the binary decisions from the active nodes, each sensor computes, for each index, the “relative frequency” with which that index is activated during the sparse sensing process, namely,
(4) where is an indicator function, and is the active node index set during the support identification phase.

Sort the values of as . The proposed support estimate is obtained as
(5) where is an integer with . Note that is known to each sensor node.

Nodes whose sensing supports intersect the estimate forward their real-valued measurements to the FC, which employs a weighted ℓ1 minimization for global signal reconstruction, as discussed later.
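Steps I and II above can be sketched compactly in Python. This is an illustrative implementation under assumed names (`decisions` for the 1-bit local decisions, `supports` for the sensing-vector supports, and `T` for the support estimate size), not the paper's exact notation:

```python
from collections import Counter

def fuse_support(decisions, supports, T):
    """Fuse 1-bit local decisions into a common support estimate.
    Each index is scored by the fraction of active (broadcasting)
    nodes whose sensing support contains it, in the spirit of the
    counting rule (4); the T top-scoring indices are kept, as in (5)."""
    active = [k for k, d in enumerate(decisions) if d == 1]
    counts = Counter(j for k in active for j in supports[k])
    freq = {j: c / len(active) for j, c in counts.items()}
    ranked = sorted(freq, key=freq.get, reverse=True)
    return set(ranked[:T])

# Toy run: four nodes; node 2 sees no signal energy and stays silent
supports = [{0, 1}, {1, 2}, {3, 4}, {1, 5}]
decisions = [1, 1, 0, 1]
estimate = fuse_support(decisions, supports, T=2)
```

In this toy run index 1 is covered by all three active nodes, so it is always retained, whereas indices covered only by the silent node never enter the estimate.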
Some comments are in order.

In the literature on CS for WSNs, support recovery is typically done via greedy search, e.g., OMP or subspace pursuit [3]; this involves computing a series of orthogonal projections, or least-squares solutions (matrix inversions). Implementation of OMP-based iterations on the sensor nodes (e.g., [5, 6]) would thus incur large computation and data storage costs. Leveraging sparse sensing and collaboration among sensor nodes, the proposed scheme offers a fundamentally different methodology for support identification, free of matrix computation: it relies solely on local binary decision making and exchange, followed by a simple binary decision fusion (4). This makes the proposed approach well suited to WSNs with limited data storage, computation, and communication resources.

Knowledge of the support estimate will be exploited at the FC for weighted ℓ1 minimization based global signal recovery (see Section IV). As a result, the signal reconstruction performance depends crucially on the support estimation quality. It is noted that the proposed cooperative support identification scheme, via local 1-bit decision making and cooperative decision fusion, is subject to two types of error: (i) misidentification, in which a true support element is excluded from the estimate, and (ii) false alarm, in which a spurious element is included. Of the two error types, false alarm causes support overestimation and is less harmful, because the computed signal amplitude on an overestimated support element will typically be small, leading to just a slight increase in the global signal reconstruction error. On the contrary, misidentification is more damaging, because missed support elements (i.e., support underestimation) cause severe model mismatch, which incurs a large reconstruction error.

The cardinality of the proposed support estimate in (5) is allowed to range from the true support size to the ambient dimension. Different values of the estimate size result in different degrees of robustness against the two error types and, thus, different signal reconstruction performance. If the estimate size equals the true support size, a false alarm is necessarily accompanied by a misidentification, resulting in model mismatch. Such a drawback can be resolved by choosing a larger estimate size; for example, allowing two extra elements accommodates up to two false alarms. Hence, increasing the estimate size is expected to improve the quality of signal recovery. However, in the extreme case that the estimate covers the whole ambient dimension, there is no prior support knowledge; accordingly, all sensors directly forward their real-valued measurements to the FC, and the weighted ℓ1 minimization based signal reconstruction reduces to the conventional ℓ1 minimization scheme without weighting. In light of the above discussion, the best global signal reconstruction performance is achieved between these two extremes; this will be confirmed by our simulation study.

Define the index set of the participating nodes during the signal reconstruction phase. Notably, this set does not necessarily coincide with the active node set of the support identification phase.

Implementation of the proposed scheme requires knowledge of the true support size (or an upper bound on it). In CS-based WSNs, support size estimation is commonly done using residual-based algorithms or cross-validation [18], typically implemented at the FC during the training phase [18]. For the proposed distributed protocol, a simple thresholding-based approach is as follows: a sensor node broadcasts a one-bit decision if its measurement magnitude is above a certain threshold, and a coarse support size estimate can then be obtained at each node from the active node index set during this “support-size estimation phase”. Detailed design of the threshold for accurate support size estimation is beyond the scope of this paper.
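One hypothetical way to turn the observed fraction of active nodes into a coarse support-size estimate is to invert a simple combinatorial overlap model implied by Assumption 4. This is an illustrative sketch only, not the estimator elided in the text above; `n` and `m` are placeholder names for the ambient dimension and the sensing-support size:

```python
import math

def overlap_prob(n, m, s):
    """P{ a uniform random size-m sensing support intersects a fixed
    size-s signal support } in ambient dimension n (math.comb returns
    0 when n - s < m, which gives probability 1 as expected)."""
    return 1.0 - math.comb(n - s, m) / math.comb(n, m)

def estimate_support_size(n, m, frac_active):
    """Coarse support-size estimate: pick the s whose predicted
    activation probability best matches the observed active fraction."""
    return min(range(n + 1),
               key=lambda s: abs(overlap_prob(n, m, s) - frac_active))
```

For instance, with `n = 30`, `m = 3`, and an observed active fraction matching a size-5 support, the estimator returns 5.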
III-B Bayesian Local Decision Rule
The proposed support identification rule (5) relies on fusion of the local binary decisions according to (4). Hence, the quality of the support estimate depends crucially on the accuracy of the local decisions. Motivated by this fact, we obtain the local decision rule by solving the following problem:
With some manipulations, the optimal solution to this problem, in the form of (3), is expressed as
(6) 
where and are the conditional probability density functions of the measurement under the two hypotheses, and are the a priori probabilities. By Assumptions 1, 2, and 3, the likelihood ratio of the measurement can be derived as
(7)
where
(8) 
Clearly, the restriction of on , say , is a bijection and thus the corresponding inverse function exists. Using this property together with some manipulations, the optimal decision rule in (6) can be expressed as
(9) 
where and .
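To make the structure of the rule concrete, consider a simplified scalar setting: under "no support overlap" the measurement is zero-mean Gaussian with the noise variance only, while an overlap inflates the variance. This two-variance model is a hedged simplification of the mixture likelihood in (7), not the paper's exact rule; `var0`, `var1`, and `prior_ratio` are illustrative names:

```python
import math

def local_decision(y, var0, var1, prior_ratio=1.0):
    """Likelihood-ratio test between two zero-mean Gaussian models for
    the scalar measurement y: variance var0 (noise only, no support
    overlap) vs. var1 > var0 (overlap adds signal energy). As in (9),
    the rule reduces to comparing measurement energy to a threshold."""
    llr = (0.5 * math.log(var0 / var1)
           + 0.5 * y * y * (1.0 / var0 - 1.0 / var1))
    return 1 if llr > math.log(prior_ratio) else 0
```

Since the log-likelihood ratio is monotone increasing in the measurement energy, the test is equivalent to an energy threshold, consistent with the intuition that a measurement far from zero signals a nonempty overlap.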
III-C Communication Cost Analysis
Based on the estimated signal support, each node forwards its real-valued measurement to the FC if its sensing vector support overlaps with the estimated signal support, and keeps silent otherwise. Accordingly, the expected communication cost of each node can be written as
(10) 
where is the communication cost when a node is active during the support identification phase (i.e., transmits its one-bit decision), and is the cost when a node participates in global signal reconstruction and transmits its real-valued measurement to the FC. It is noted that, in general, transmitting real-valued data requires a higher communication cost than transmitting a single bit; this ordering of the two costs is assumed throughout. We have the following theorem.
Theorem 1
Proof:
Since , it can be verified that
(12) 
The probability in (10) can be expressed as
(13) 
where the first equality follows from Assumption 4 and the dependence among the elements of the support estimate (observed from (4)), and the summation runs over the sample space of the estimated support. With (10), (12), and (13), it can be shown that the cost is constant for all nodes, and hence the claim of Theorem 1 follows immediately.
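The averaging over random sensing supports behind (13) can be checked numerically. Below is a small sketch with hypothetical names (`c_bit`/`c_real` for the two transmission costs, `p_active` for the probability of broadcasting in Step I); the closed-form overlap probability follows from the uniform-support model of Assumption 4 and is verified by Monte Carlo:

```python
import math
import random

def overlap_prob(n, m, T):
    """P{ a uniformly drawn size-m sensing support meets a fixed
    size-T support estimate } in ambient dimension n (Assumption 4)."""
    return 1.0 - math.comb(n - T, m) / math.comb(n, m)

def expected_cost(n, m, T, c_bit, c_real, p_active):
    """Per-node mean cost in the spirit of (10): a 1-bit broadcast with
    probability p_active, plus a real-valued transmission whenever the
    sensing support intersects the support estimate."""
    return c_bit * p_active + c_real * overlap_prob(n, m, T)

# Monte Carlo check of the overlap probability
n, m, T = 30, 3, 5
rng = random.Random(0)
trials = 20000
hits = sum(1 for _ in range(trials)
           if set(range(T)) & set(rng.sample(range(n), m)))
mc = hits / trials
```

The empirical frequency `mc` agrees with `overlap_prob(n, m, T)` up to Monte Carlo error, illustrating why the mean cost is the same for every node under uniformly drawn supports.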
IV Global Sparse Signal Reconstruction
IV-A Signal Model
Let denote the sensing matrix. Collecting all forwarded measurements into a vector, the received signal model at the FC is given by
(14) 
where collects the forwarded measurements, is obtained by retaining the rows of the sensing matrix indexed by the participating nodes, and is the noise vector. With the aid of the support estimate, the estimated signal is obtained by solving the following weighted ℓ1 minimization problem
where is the weighting coefficient assigned to the th index, and specifies the error level. Following [20], we assign a smaller weight when the index belongs to the support estimate, and a greater weight otherwise.
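Since the displayed program was lost in extraction, a standard form of the weighted ℓ1 problem in the sense of [10, 20] is reproduced below under assumed notation (the measurement vector $\mathbf{y}$, retained sensing matrix $\mathbf{A}$, weights $w_j$, and error level $\epsilon$ are placeholder symbols, not the paper's):

```latex
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}\in\mathbb{R}^{n}}
\; \sum_{j=1}^{n} w_j\,|x_j|
\quad \text{subject to} \quad
\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2 \le \epsilon ,
```

where a smaller $w_j$ is placed on indices inside the support estimate (penalizing them less) and a larger $w_j$ elsewhere.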
IV-B Coherence of the Sparse Sensing Matrix
On account of Assumptions 3 and 4, the following theorem shows that, with very high probability, the scaled sparse sensing matrix satisfies the restricted isometry property (RIP) of order with a small restricted isometry constant (RIC).
Theorem 2
For every sparsity level and every , if
(15) 
the scaled sensing matrix satisfies the RIP of order with RIC with probability exceeding , where and are positive absolute constants.
Proof:
See Appendix A.
Also, under the above RIP assumption and with [19, Lemma 2.1], we have the following theorem.
Theorem 3
If the scaled sensing matrix satisfies the RIP of order with RIC , the coherence of the sensing matrix satisfies
(16) 
where is the th column of , .
Proof:
See Appendix B.
Hence, the coherence can be kept small with a high probability, thereby guaranteeing the robustness of the proposed collaborative sparse signal estimation scheme.
V Performance Evaluation
In this section, computer simulations are provided to demonstrate the effectiveness of the proposed scheme. The ambient signal dimension is set to be and the network size is . The number of nonzeros in the compression vectors is . The weighting coefficient is set to a smaller value on indices inside the support estimate and to a larger value otherwise. The SNR of the local sensor measurement is defined as SNR. The quality of signal recovery is evaluated using the normalized mean square error (NMSE), defined as , where is the reconstructed sparse signal at the FC. In the discussions below, the method in [21], which also addresses distributed sparse signal estimation via sparse measurement matrices, is used as the comparative scheme. The simulation results are obtained from independent trials.
In the first example, we evaluate the proposed scheme for different values of the support estimate size. For a fixed sparsity level, Fig. 1 plots the NMSE with respect to (w.r.t.) SNR for several estimate sizes. As mentioned earlier, when the estimate covers the whole ambient dimension, the proposed scheme reduces to the conventional CS approach, which activates all sensor nodes and utilizes standard ℓ1 minimization for signal reconstruction. The figure shows that the NMSE performance of [21] is very close to that of the conventional CS system, and that the proposed scheme outperforms both methods. Note that our method achieves the lowest NMSE at an intermediate estimate size, confirming our discussion that the best choice falls strictly between the true support size and the ambient dimension. For SNR = 9 dB, Fig. 2 compares the NMSE for different sparsity levels. The figure shows that the performance of all methods degrades as the sparsity level increases. The proposed scheme incurs a larger NMSE once the sparsity level exceeds 14; this is because, as the sparsity grows, support size overestimation becomes severe, resulting in an undesirable error floor. To compare the required communication costs, we fix the two per-transmission costs. For SNR = 9 dB, Fig. 3 plots the average communication costs w.r.t. the estimate size for three sparsity levels; both the theoretical upper bounds from Theorem 1 and the simulated results are included. The blue curve depicts the baseline communication cost of the proposed scheme, which accounts for the communication during the support identification phase and the data transmission phase with all sensors activated. We observe the following: (i) the communication cost of our method increases with the estimate size and the sparsity level; (ii) compared with the conventional CS method, the proposed scheme reduces the cost when these are small, but incurs more cost as they increase, since more sensors are activated. We note that the communication cost of the method in [21] is large because the protocol in [21] involves a large amount of real-valued data transmission.
Appendix A Proof of Theorem 2
We will first prove that is an isotropic sub-Gaussian random vector. Then, with the aid of Theorem 5.65 in [2], the assertion of Theorem 2 follows immediately. The sub-Gaussianity of is established by the following lemma.
Lemma 1
Let be a sparse vector with support uniformly drawn from the collection of all possible sparsity patterns. The nonzero entries of are assumed to be independent symmetric Bernoulli random variables, i.e., with for . Then is an isotropic sub-Gaussian random vector with constant , where is a constant and .
Proof:
First, for each , straightforward manipulations show that the conditional expectation , where is a diagonal matrix with if and otherwise. Then, the second moment matrix of can be obtained as follows:
(17)
Hence, by definition, the random vector is isotropic. To prove that is a sub-Gaussian random vector, we need to check that, for every , the inner product is a sub-Gaussian random variable. To see this, let , and then we have
(18) 
where (a) holds because the ’s are independent symmetric Bernoulli random variables for all and thus, by Proposition 5.10 in [2, Chap. 5], the inequality is valid, where is obtained by keeping the entries of indexed by and is an absolute constant. Inequality (18) shows that the random vector is a sub-Gaussian random vector and the corresponding sub-Gaussian norm is bounded above by , where . Therefore, is an isotropic sub-Gaussian random vector with constant .
Based on Lemma 1, the assertion of Theorem 2 follows immediately from the next lemma.
Lemma 2 [2, Theorem 5.65]: Let be a sub-Gaussian random matrix in which each row is an independent isotropic sub-Gaussian random vector. Then, for every sparsity level and every , if
(19)
the scaled matrix satisfies the RIP of order with RIC with probability exceeding , where and are positive absolute constants that depend only on the sub-Gaussian norm of the rows of .
Appendix B Proof of Theorem 3
If satisfies the RIP, then for any , we have
(20) 
where (a) follows from Lemma 2.1 in [19], (b) holds because satisfies the RIP with constant , and (c) is true since for . Therefore, the claimed coherence bound follows.
References
 [1] R. G. Baraniuk, “Compressive sensing,” IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118–124, July 2007.
 [2] Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications. Cambridge University Press, 2011.
 [3] Z. Han, H. Li, and W. Yin, Compressive Sensing for Wireless Networks. Cambridge University Press, 2013.
 [4] C. H. Chen and J. Y. Wu, “Amplitude-aided 1-bit compressive sensing over noisy wireless sensor networks,” IEEE Wireless Commun. Lett., vol. 4, no. 5, pp. 473–476, October 2015.
 [5] T. Wimalajeewa and P. K. Varshney, “Cooperative sparsity pattern recovery in distributed networks via distributed-OMP,” in 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 5288–5292, 2013.
 [6] T. Wimalajeewa and P. K. Varshney, “Sparse signal detection with compressive measurements via partial support set estimation,” IEEE Trans. Signal Inf. Process. Netw., vol. 3, no. 1, pp. 46–60, March 2017.
 [7] X. Li, X. Tao, and Z. Chen, “Spatio-temporal compressive sensing-based data gathering in wireless sensor networks,” IEEE Wireless Commun. Lett., vol. 7, no. 2, pp. 198–201, April 2018.
 [8] W. Chen and I. Wassell, “Cost-aware activity scheduling for compressive sleeping wireless sensor networks,” IEEE Trans. Signal Process., vol. 64, no. 9, pp. 2314–2323, May 2016.
 [9] A. Gilbert and P. Indyk, “Sparse recovery using sparse matrices,” Proceedings of the IEEE, vol. 98, no. 6, pp. 937–947, June 2010.
 [10] E. J. Candès, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted ℓ1 minimization,” J. Fourier Anal. Appl., vol. 14, no. 5-6, pp. 877–905, 2008.
 [11] J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, “Collaborative spectrum sensing from sparse observations in cognitive radio networks,” IEEE J. Sel. Areas Commun., vol. 29, no. 2, pp. 327–337, February 2011.
 [12] L. Liu, Z. Han, Z. Wu, and L. Qian, “Collaborative compressive sensing based dynamic spectrum sensing and mobile primary user localization in cognitive radio networks,” Global Telecommunications Conference (GLOBECOM 2011) 2011 IEEE, pp. 1–5, 2011.
 [13] T. Wimalajeewa and P. K. Varshney, “Performance bounds for sparsity pattern recovery with quantized noisy random projections,” IEEE J. Sel. Topics Signal Process., vol. 6, no. 1, pp. 43–57, Feb. 2012.
 [14] A. Mishra and A. K. Jagannatham, “SBL-based GLRT for spectrum sensing in OFDMA-based cognitive radio networks,” IEEE Communications Letters, vol. 20, no. 7, pp. 1433–1436, July 2016.
 [15] B. Kailkhura, T. Wimalajeewa, and P. K. Varshney, “Collaborative compressive detection with physical layer secrecy constraints,” IEEE Trans. Signal Process., vol. 65, no. 4, pp. 1013–1025, Feb. 2017.
 [16] D. Baron, S. Sarvotham, and R. G. Baraniuk, “Bayesian compressive sensing via belief propagation,” IEEE Trans. Signal Process., vol. 58, no. 1, pp. 269–280, January 2010.
 [17] H. Mamaghanian, N. Khaled, D. Atienza, and P. Vandergheynst, “Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes,” IEEE Trans. Biomed. Eng., vol. 58, no. 9, pp. 2456–2466, Sep. 2011.
 [18] J. W. Choi, B. Shim, Y. Ding, B. Rao, and D. Kim, “Compressed sensing for wireless communications: Useful tips and tricks,” IEEE Commun. Surveys & Tutorials, vol. 19, no. 3, pp. 1527–1550, 2017.
 [19] L. H. Chang and J. Y. Wu, “An improved RIP-based performance guarantee for sparse signal recovery via orthogonal matching pursuit,” IEEE Trans. Inform. Theory, vol. 60, no. 9, pp. 5702–5715, Sept. 2014.
 [20] M. P. Friedlander, H. Mansour, R. Saab, and O. Yilmaz, “Recovering compressively sampled signals using partial support information,” IEEE Trans. Inform. Theory, vol. 58, no. 2, pp. 1122–1134, Feb. 2012.
 [21] T. Wimalajeewa and P. K. Varshney, “Wireless compressive sensing over fading channels with distributed sparse random projections,” IEEE Trans. Signal Inf. Process. Netw., vol. 1, no. 1, pp. 33–44, March 2015.