I Introduction
Automatic modulation classification (AMC) is a digital signal processing technique that aims to blindly estimate the modulation scheme of information signals present in the spectrum. AMC has been discussed widely for military and cognitive radio (CR) applications. For military applications, it is used for electronic warfare, surveillance, signal jamming and target acquisition [1], whereas in CR applications it is required for rate adaptation [2] and licensed user characterization to avoid emulation attacks [3]. Hence, apart from determining the vacant bands, these applications also require the determination of the modulation schemes of the busy bands over a wide range of spectrum. Thus, there is a need to perform both wideband spectrum sensing (WSS) and AMC on the detected busy bands.
The processing of a wideband spectrum demands high-speed Nyquist-rate analog-to-digital converters (ADCs), which are not only computationally expensive but also area and power hungry. Recently, sub-Nyquist sampling (SNS) methods are being explored to overcome the drawbacks of Nyquist-sampling based digitization. These SNS methods exploit the sparsity of a wideband spectrum to perform digitization via low-rate ADCs [4, 5, 6]. Hence, performing both WSS and AMC (WSS-AMC) calls for an SNS based WSS-AMC approach.
Significant work has been done in the literature on SNS based WSS [4, 5, 6]; however, very little attention has been paid to SNS based AMC [7, 8, 9]. Various AMC methods, such as likelihood ratio and Bayesian classifiers, wavelet classifiers, cyclostationary feature classifiers [40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52] and machine learning classifiers such as the support vector machine (SVM) [11], k-nearest neighbor (KNN) [1], random forest (RF) [10] and neural networks [14], have been discussed in the literature [27, 28, 29, 30, 31, 32]. However, all these classifiers deal with Nyquist-sampled signals and may not work well in the sub-Nyquist scenario. [7, 8, 9] are the only works which perform SNS based AMC, but these SNS-AMCs consider random SNS over a single preprocessed narrowband signal (i.e. on the modulated symbols of a baseband signal) for the modulation classification. Furthermore, no work has been done to perform AMC on a sub-Nyquist sampled wideband signal or SNS based joint WSS-AMC. To overcome the limitations of existing works, in this paper we propose a novel architecture called SenseNet which combines the tasks of spectrum sensing and modulation classification into a single unified pipeline. This architecture allows WSS-AMC to be performed directly on the sub-Nyquist samples of the wideband signal. Hence, unlike the existing methods [27, 28, 29, 30, 31, 32], it does not require a preprocessed narrowband signal as its input. Furthermore, to analyse and compare the performance of the proposed SenseNet architecture with previously proposed AMC methods, we extend the analysis of the proposed SNS based AMC to the preprocessed in-phase and quadrature-phase (IQ) and amplitude-phase (AP) samples of the modulated symbols.
We conduct extensive experiments to analyse the effect of the input signal representation on the classification accuracy by testing our model on the sub-Nyquist sampled wideband signal, IQ samples and amplitude-phase (AP) samples, separately, for a wide range of signal-to-noise ratios (SNRs). We also test our model predictions under various channel impairments.
We show that the classification accuracy of the proposed SNS-AMC approaches that of Nyquist-sampling based AMC (NS-AMC) with an increase in SNR.
The contributions of this paper are as follows:

1) The proposed SenseNet provides an end-to-end pipeline which takes a multiband sub-Nyquist sampled wideband signal as its input and outputs the band status (vacant/busy) and the modulation scheme of the busy bands. To achieve this, SenseNet first performs WSS, followed by modulation classification.

2) The proposed architecture uses a CNN based model for WSS to simultaneously classify all bands as vacant/busy, making it more efficient than iterative approaches like orthogonal matching pursuit (OMP). Unlike OMP, the proposed deep learning architecture does not require prior knowledge of the sparsity of the wideband spectrum.

3) Since SenseNet performs AMC directly on the recovered wideband signal, we formulate a modified cross-entropy loss function which, based on the occupancy status of the bands, classifies the modulation schemes of the detected busy bands.
The rest of the paper is organized as follows. The literature review of existing WSS and AMC techniques is presented in Section II. Section III describes the signal model. The proposed end-to-end pipeline for WSS-AMC is presented in Section IV. Section V discusses the application of the proposed DL-MC to unprocessed raw wideband samples, followed by its extension to the preprocessed IQ and AP samples in Section VI. The datasets and the implementation details of WSS-AMC are discussed in Section VII. The simulation results are presented in Section VIII, followed by the conclusions in Section IX.
II Literature Review
In this section, we review WSS and AMC techniques that are studied extensively in the literature.
II-A WSS Methods
The extensive use of wireless devices and the scarcity of radio spectrum have led to extensive research in the field of WSS [15, 16, 17]. Conventional processing of such a wideband signal requires very high Nyquist-rate ADCs, which are power and computationally inefficient. By taking advantage of the sparsity of the radio spectrum, SNS techniques like multicoset sampling (MCS) and the modulated wideband converter (MWC) have been proposed for the digitization of the wideband signal. These SNS techniques use multiple low-rate ADCs such that the average sampling rate is well below the Nyquist rate of the wideband signal. In MCS [4], each low-rate ADC uniformly samples the incoming wideband signal with a unique time offset with respect to the other ADCs. The outputs of all ADCs are combined and subsequently reconstructed in the digital domain. However, MCS has several limitations. First, it requires an accurate time offset on the order of the Nyquist period (around picoseconds), which is very difficult to generate in the analog domain. Second, the analog bandwidth of the required ADCs is too high. To overcome these limitations, the modulated wideband converter (MWC) was proposed [5]. In the MWC, a specific analog mixing function is used, followed by a low pass filter at the input of each ADC. This allows the use of ADCs with a low analog bandwidth, which are easily available. Furthermore, the MWC has been successfully realized in hardware [21], making it a state-of-the-art SNS approach. However, these approaches and subsequent extensions are limited to contiguous SNS.
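The multicoset idea above can be sketched in a few lines. This is a toy illustration: the grid length, number of cosets and offsets below are illustrative values, not the parameters used in [4].

```python
import numpy as np

# Toy sketch of multicoset sampling (MCS): several low-rate ADCs, each keeping
# every L-th Nyquist-rate sample at its own unique time offset.
rng = np.random.default_rng(0)

L = 8                 # downsampling factor: each ADC runs at 1/L of the Nyquist rate
offsets = [0, 2, 5]   # unique time offsets (in Nyquist periods), one per low-rate ADC

x = rng.standard_normal(L * 16)   # Nyquist-rate samples of the wideband signal

# Each ADC keeps every L-th Nyquist sample, shifted by its own offset.
coset_samples = np.stack([x[c::L] for c in offsets])

avg_rate = len(offsets) / L       # fraction of the Nyquist rate actually used
print(coset_samples.shape)        # (3, 16)
print(avg_rate)                   # 0.375 -> sub-Nyquist on average
```

With 3 cosets out of 8, the average sampling rate is 3/8 of the Nyquist rate, which is the sense in which MCS is sub-Nyquist.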
For the digital reconstruction of the signal, extensive research has focused on compressive sensing algorithms based on greedy approaches, norm minimization and Bayesian methods [62]. Although the greedy approaches offer lower reconstruction accuracy when compared with norm-minimization based algorithms [63, 64, 65], their lower computational complexity makes them ideal for practical use. However, greedy approaches suffer from the drawback that they require prior knowledge of the spectrum sparsity, which might be unavailable in practice. Bayesian approaches tend to perform better than the above mentioned approaches, as they offer better reconstruction accuracy than greedy algorithms [62] and lower computational complexity than norm-minimization based algorithms.
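As an illustration of the greedy family discussed above, the following is a minimal orthogonal matching pursuit (OMP) sketch. Its stopping rule is a known sparsity k, which is precisely the prior knowledge of spectrum sparsity that these approaches are criticized for needing.

```python
import numpy as np

# Minimal OMP sketch for recovering a sparse vector x from measurements z = A @ x.
def omp(A, z, k):
    residual, support = z.copy(), []
    for _ in range(k):
        # 1) Predict the next non-zero location: column most correlated with residual.
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        # 2) Re-estimate the sparse vector on the current support (least squares).
        x_s, *_ = np.linalg.lstsq(A[:, support], z, rcond=None)
        residual = z - A[:, support] @ x_s
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = x_s
    return x_hat

rng = np.random.default_rng(1)
A = rng.standard_normal((25, 40))
x = np.zeros(40)
x[[3, 17]] = [3.0, -4.0]          # 2-sparse ground truth (two occupied bands)
x_hat = omp(A, A @ x, k=2)
print(np.flatnonzero(x_hat))      # indices of the detected non-zero entries
```

Each iteration adds one support index and re-solves a small least-squares problem, which is the iterative structure the deep learning approaches below replace with a single forward pass.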
After reconstruction, various algorithms can be used to determine the status of the frequency bands. These include the matched filter [68], energy detection [69], eigenvalue based detectors and cyclostationary detectors [70]. Interested readers may refer to [71] for more details about the advantages and drawbacks of these detectors.
In practice, the most commonly used techniques for WSS are greedy algorithms, e.g. thresholding algorithms [66, 67, 72], matching pursuit, and OMP and its derivatives, which build the solution iteratively [24, 25, 26]. These algorithms first predict the locations of the non-zero entries and then estimate the sparse vector [19]. This process repeats at each iteration until the stopping criterion is reached. Several deep learning based approaches have also been proposed in the literature that integrate deep/convolutional neural networks or LSTMs into this greedy iterative pipeline [19, 20, 22, 23]. However, our proposed method differs from these approaches in several ways. First, unlike OMP, our algorithm does not require prior knowledge of the number of occupied bands. Second, the proposed method detects the vacant and occupied bands in a single forward pass through the network, without any iterative process. This makes our method more efficient than the above-mentioned approaches at inference time and enables real-time sensing of the wideband signal. Moreover, through experimentation we show that the proposed method is better than OMP at detecting the status of the bands (vacant/occupied).

II-B AMC Methods
To determine the modulation scheme, various automatic modulation classification techniques, such as likelihood ratio (LR) based classifiers, feature based (FB) classifiers and intelligent learning (IL) classifiers, have been discussed in the literature [39, 12, 13, 14]. LR based classifiers treat AMC as a multiple-composite hypothesis testing problem [39]. The modulation scheme is determined by applying the maximum likelihood estimation (MLE) criterion. The drawback of this approach is that its accuracy depends on knowledge of the channel and noise models, which vary dynamically in a real environment. To overcome this drawback of LR, FB classification methods [12, 13] have been studied. FB classification uses a variety of statistical features, such as moments, cumulants [12] and cyclostationary features [40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52], of the received signal for AMC, and extensive research has focused on the analysis of these methods. Recently, to further improve the accuracy of AMC, IL based AMC, which uses such features together with learning algorithms such as the support vector machine (SVM) [11], k-nearest neighbor (KNN) [1], random forest (RF) [10] and shallow neural networks (NNs) [14], has become an active field of research. Among these IL methods, NN based AMC has shown the highest modulation classification accuracy [29]. NNs are deep learning (DL) models and essentially work as function approximators that recognize the underlying relations in data and extract complex patterns from it.
Most of the DL based AMC techniques use IQ samples as the input to the model. In [30], it is shown that a simple convolutional NN (CNN) model with just two convolution layers outperforms expert FB AMC methods by a considerable margin. Since then, many researchers have tried to find optimal deep learning architectures for AMC. In [27, 28], the authors discuss the principal architectures used for image recognition and adapt them to the task of AMC. An extensive study is conducted to analyse the effect of the network depth, filter sizes and number of filters on the classification accuracy.
The representation of the input signal for IL algorithms is analysed in [29], where it is shown that the use of amplitude-phase samples with a long short-term memory (LSTM) learning algorithm outperforms the IQ sample based CNN AMC [27, 28]. To further improve the performance at low SNR, [31, 32] use noise resistant features, such as the ambiguity function [31] and the spectral correlation function [32], as input features to the deep learning models. The existing AMC methods [27, 28, 29, 30, 31, 32] work on a narrowband Nyquist-sampled signal and can thus determine the modulation scheme of only a single narrowband signal. These methods also require signal preprocessing (after the signal's detection via wideband spectrum sensing) before it can be fed to the classifier. But for WSS, we need to determine the modulation schemes of all detected narrowband signals present in the wideband signal. Hence, there is a need for a wideband AMC, which is the aim of the proposed work. To accomplish this task, we propose an end-to-end classification system that directly takes sub-Nyquist samples as input and outputs the band status (i.e. vacant or busy) and the modulation schemes of all detected busy bands.
III Signal Model
We consider a wideband signal consisting of multiple uncorrelated and disjoint narrowband signals, each of maximum possible bandwidth B. Mathematically, the received wideband signal x(t) can be modeled as

x(t) = \sum_{i=1}^{N_t} h_i(t) * s_i(t) + w(t),    (1)

where N_t is the maximum possible number of transmissions in x(t), s_i(t) is the i-th modulated narrowband signal of carrier frequency f_i, h_i(t) is the channel response faced by the i-th signal, w(t) is additive white Gaussian noise (AWGN) and * is the convolution operator. The modulated narrowband signal s_i(t) can be represented as

s_i(t) = \sum_{l=1}^{L} d_{i,l} \, g(t - l T_{sym}) \, e^{j 2 \pi f_i t},    (2)

where g(t) is the impulse response of a root raised cosine (RRC) pulse shaping filter, T_{sym} is the symbol period, d_{i,l} is the l-th modulated symbol of an M-order modulation scheme and L is the length of the symbol sequence. Similar to [4, 5, 6], we make the following assumptions on the wideband signal:

The wideband spectrum, of maximum frequency f_max, is divided into N frequency bands, each of bandwidth B.

The bandwidth of a narrowband signal does not exceed B.
For the ease of understanding, the frequently used notations are summarized in Table I.
Notation  Definition

x(t)  Received wideband signal
N_t  Maximum number of active transmissions in x(t)
s_i(t)  i-th active transmission in x(t)
f_i  Carrier frequency of the i-th active transmission
h_i(t)  Channel response faced by s_i(t)
g(t)  Impulse response of the RRC filter
d_{i,l}  Modulated symbol
T_{sym}  Symbol period
L  Number of symbols considered for Datasets 2 and 3
M  Order of a modulation scheme
N  Number of frequency bands in x(t)
z[n]  Sub-Nyquist samples
Q  Number of sub-Nyquist samples considered for the dataset
p  Number of ADCs used for SNS
N_s  Number of frequency bands considered for SNS
A  Sensing matrix of dimension p x N_s
Z  Matrix of the DTFT of z[n]
X  Matrix containing the FT of the N_s frequency bands
X̃  Pseudo-reconstruction of X
IV Proposed WSS-AMC
The proposed WSS-AMC consists of two phases: 1) digitization and 2) classification. The digitization phase digitizes the wideband signal via SNS based RF-to-digital conversion. The classification phase utilizes the digitized sub-Nyquist samples to identify the occupancy status, followed by signal recovery and the determination of the modulation scheme of all detected busy frequency bands present in the digitized signal.
IV-A SNS Architecture for Digitization
With the help of low-rate ADCs, the SNS architecture aims to digitize a sparse wideband signal. As discussed in Section II, various SNS architectures such as MCS, RD and MWC, which sense the entire contiguous wideband spectrum, as well as FRI based SNS methods, are available for sub-Nyquist rate digitization. The discrete time Fourier transform (DTFT) of the sub-Nyquist samples, z[n], obtained from all these SNS models can be represented as

Z = A X,    (3)

where A is the sensing matrix corresponding to the used SNS architecture, X is a matrix containing the Fourier transforms of the N_s frequency sub-bands and N_s is the number of sensed frequency bands, which in the case of contiguous sensing is the same as the total number of bands N in x(t), and in the case of non-contiguous sensing is the number of selected frequency bands.
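A toy numerical instance of this sensing model may help fix the dimensions. The random A below is illustrative, not a real MCS/MWC matrix, and all sizes are arbitrary choices.

```python
import numpy as np

# Toy instance of Eq. (3), Z = A X: A is a p x N_s sensing matrix (p low-rate
# ADCs, N_s sensed bands) and X stacks the Fourier transform of each band.
rng = np.random.default_rng(0)
N_s, p, snapshots = 14, 5, 32

A = rng.standard_normal((p, N_s)) + 1j * rng.standard_normal((p, N_s))

# Sparse wideband spectrum: only bands 2 and 9 carry transmissions.
X = np.zeros((N_s, snapshots), dtype=complex)
X[[2, 9]] = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))

Z = A @ X                 # DTFT matrix of the sub-Nyquist samples
print(Z.shape)            # (5, 32): fewer measurement rows than bands
```

Having fewer rows in Z than bands in X is what makes the sampling sub-Nyquist, and what makes the sparsity assumption necessary for recovery.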
IV-B Proposed End-to-End Pipeline of WSS-AMC
The block diagram of the proposed end-to-end pipeline for the WSS-AMC is shown in Fig. 1. The sub-Nyquist samples Z of a wideband signal and the corresponding sensing matrix A are the inputs to the proposed pipeline (line 1, Algorithm 1). The outputs of the end-to-end pipeline are the status of each frequency band (i.e. vacant/busy) and the modulation schemes of the detected busy frequency bands. The proposed pipeline performs three tasks: 1) WSS with the help of a convolutional neural network (referred to as CNN-SS), 2) wideband signal recovery/reconstruction and 3) AMC on the reconstructed wideband signal. Unlike the existing classification models [27, 28, 30], we design the entire pipeline in such a way that it can handle different sized inputs, i.e. different sizes of input signals, without making any changes to the architecture.
The input to the proposed WSS-AMC is the normalized pseudo-reconstructed signal, where the pseudo-reconstructed signal (line 2, Algorithm 1) is computed as

X̃ = A^† Z.    (4)

The X̃ is a complex-valued signal of dimension N × Q, where Q is the number of snapshots. Next, it is converted into a real matrix (line 3, Algorithm 1) of size N × Q × 2, where the third dimension of size 2 represents the real and imaginary values of X̃ (line 4, Algorithm 1). Finally, for faster convergence of the CNN-SS training, the higher dimensional pseudo-reconstructed matrix is normalized in the range (0, 1) (line 5, Algorithm 1). The normalized matrix is then fed into the intelligent learning block, as shown in Fig. 1. This block consists of two sub-blocks:
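The preprocessing steps above can be sketched as follows. This assumes the pseudo-reconstruction operator is the pseudo-inverse of A (the paper's exact operator is elided in this copy), and the dimensions are the N = 14, Q = 299 layout of Table II.

```python
import numpy as np

# Sketch of Eq. (4) and lines 2-5 of Algorithm 1: pseudo-reconstruction,
# real/imaginary stacking, and min-max normalization into (0, 1).
rng = np.random.default_rng(0)
N, p, Q = 14, 5, 299
A = rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N))
Z = rng.standard_normal((p, Q)) + 1j * rng.standard_normal((p, Q))

X_tilde = np.linalg.pinv(A) @ Z                            # pseudo-reconstruction, N x Q
X_real = np.stack([X_tilde.real, X_tilde.imag], axis=-1)   # N x Q x 2

# Min-max normalization for faster training convergence.
X_norm = (X_real - X_real.min()) / (X_real.max() - X_real.min())
print(X_norm.shape)       # (14, 299, 2)
```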
IV-B1 Deep Learning based Spectrum Sensing (DL-SS)
The DL-SS sub-block aims to determine the vacant/busy status of the digitized frequency bands. To perform spectrum sensing, as shown in Fig. 2, we employ a deep learning based convolutional neural network (CNN) which works in two modes: 1) offline training mode and 2) online inference mode (i.e. testing mode). During the offline training mode (Subroutine 1), the CNN is trained using the supervised learning framework with a dataset in which each training observation consists of the preprocessed pseudo-reconstructed signal and a label indicating the vacant (i.e. 0) or busy (i.e. 1) status of all N frequency bands. The training process involves the minimization of a loss function, which is a measure of the inconsistency between the predicted and actual labels. Since more than one frequency band can be busy in a wideband spectrum, the problem is a multi-label binary classification. Hence, we use binary cross-entropy as the loss function, which is calculated as
L_{SS} = -\frac{1}{N} \sum_{n=1}^{N} \left[ y_n \log \hat{y}_n + (1 - y_n) \log(1 - \hat{y}_n) \right],    (5)

where y_n and \hat{y}_n are the actual and predicted vacant/busy status of the n-th frequency band.
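The multi-label binary cross-entropy of Eq. (5) can be computed directly; the band statuses and probabilities below are made-up values for illustration.

```python
import numpy as np

# Multi-label binary cross-entropy: one sigmoid output per frequency band,
# averaged over the N bands. y holds the true vacant(0)/busy(1) statuses,
# y_hat the predicted busy probabilities.
def bce_loss(y, y_hat, eps=1e-12):
    y_hat = np.clip(y_hat, eps, 1 - eps)    # guard against log(0)
    return float(-np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)))

y     = np.array([0.0, 1.0, 1.0, 0.0])
y_hat = np.array([0.1, 0.9, 0.8, 0.2])
print(round(bce_loss(y, y_hat), 4))         # 0.1643
```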
Furthermore, the network parameters are optimized using the stochastic gradient descent algorithm such that the training loss is minimized. The loss gradients are back-propagated and used to update the learnable weights of the network at each iteration. This process is repeated until the validation loss no longer decreases. The final output of training is the optimized parameters (line 6, Algorithm 1). After the training mode, the CNN model enters the inference mode, where it uses the trained weights of the model to find the vacant/busy status of a real-time wideband signal (line 7, Algorithm 1). The probability of detection of the proposed method and its comparison with other methods are presented in Section VIII-A.
Please note that, to decide the architecture of the deep learning model for spectrum sensing, we perform experiments with both LSTM based and CNN based architectures. Through these experiments, it is validated that the CNN performs much better than the LSTM in deciding the band status (vacant/occupied) of the frequency bands.
We further experiment with different network depths and filter settings for the CNN based DL model. The filters used are of the form 1 × ntaps, where ntaps denotes the width of the convolution filter. We observe that filters with larger widths perform better than those with smaller widths, and that the accuracy saturates when the width is increased further. The best classification accuracy is obtained using the architecture shown in Table II.
Now, once we determine the vacancy/busy status of all frequency bands, we perform signal recovery and then modulation classification on the busy frequency bands.
Layers  Filter Size  Number of Filters  Output Dimension
Input  -  -  14x299x2
Conv/ReLU  1x150  256  14x150x256
Conv/ReLU  1x100  128  14x51x128
Conv/ReLU  1x51  64  14x1x64
FC/Sigmoid  -  -  14
IV-B2 Deep Learning based Modulation Classification (DL-MC)
The DL-MC sub-block uses the estimated frequency band statuses to determine the modulation schemes of the detected busy frequency bands. For AMC, we first perform the recovery of the wideband signal (lines 8-10, Algorithm 1), as shown in Fig. 3. Here, we take the band status vector (determined by CNN-SS) and the sensing matrix A as inputs. First, we generate a new sensing matrix A_s by selecting the columns of A which correspond to the busy frequency bands, followed by the reconstruction of the wideband signal. Mathematically, the recovery step can be written as

X̂ = A_s^† Z,    (6)

where X̂ is the recovered wideband signal, A_s^† is the pseudo-inverse of the new sensing matrix and Z is the DTFT of the sub-Nyquist samples. Next, to determine the modulation schemes of the busy frequency bands, we directly pass the recovered wideband signal to the deep learning based modulation classifier (Subroutine 2). This also consists of an offline training step, which outputs the optimized parameters (line 11, Algorithm 1). These parameters are then used in the deep learning based modulation classifier to obtain the final classification output (line 12, Algorithm 1).
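The recovery step of Eq. (6) can be sketched as follows; the band indices, dimensions and random sensing matrix are illustrative.

```python
import numpy as np

# Keep only the columns of A for the bands declared busy, then reconstruct
# those bands with the pseudo-inverse of the reduced sensing matrix.
rng = np.random.default_rng(0)
N, p, Q = 14, 6, 32
A = rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N))

busy = np.zeros(N, dtype=bool)
busy[[2, 9]] = True                         # band status from the DL-SS stage
X = np.zeros((N, Q), dtype=complex)
X[busy] = rng.standard_normal((2, Q)) + 1j * rng.standard_normal((2, Q))
Z = A @ X                                   # DTFT of the sub-Nyquist samples

A_s = A[:, busy]                            # new sensing matrix (busy columns only)
X_hat = np.linalg.pinv(A_s) @ Z             # recovered busy bands
print(np.allclose(X_hat, X[busy]))          # True: exact when p >= number of busy bands
```

Restricting A to the detected support turns an under-determined system into an over-determined one, which is why the pseudo-inverse recovers the busy bands exactly in the noiseless case.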
V DL-MC on Raw Wideband Signal
Since, unlike the existing AMCs, the proposed method does not require any preprocessing, such as intermediate frequency (IF) conversion and root raised cosine filtering, on the busy frequency bands, we refer to the samples of the reconstructed wideband signal as raw samples. Here, we explore convolutional neural network (CNN) and recurrent neural network (RNN) architectures for the modulation classification.
CNNs [34] can effectively model spatial dependencies and extract robust features relative to a given classification task. They also share parameters across various regions of the input, which makes them computationally more efficient than dense neural nets. On the other hand, RNNs make use of sequential information and capture temporal dependencies in data. Long short-term memory (LSTM) units [53, 54] are the most widely used variant of recurrent networks. Unlike conventional RNN models, they maintain an internal memory vector in addition to the hidden state vector at every time step. This helps them remember the information from previous and current time steps more effectively. Thus, LSTMs are better at modeling long-term dependencies in data, which helps them make better predictions.
Similar to the CNN modeling for spectrum sensing discussed in the previous subsection, to optimise the training weights of the CNN and LSTM models, we use the stochastic gradient descent algorithm such that the training loss (discussed in detail in Section VII-D) is minimised. To finalize the architectures of both the RNN and CNN based models, we perform an extensive ablation study by varying their hyperparameters, discussed in Section VIII in detail. For the RNN based model, we transform the 2-layered LSTM architecture in [29] and adapt it to our multiband AMC task with some minor changes to boost accuracy. The LSTM network consists of N parallel (one per band) neural network based modulation scheme prediction modules with shared weights. We perform the ablation study for different network depths and sizes of the hidden state vector in Fig. 14. The best performing architecture is shown in Fig. 5.
While analysing the architecture of the CNN for AMC, we observe that the CNN architecture that performs best on raw samples is similar to the one used for the task of spectrum sensing. This is mainly because the underlying signals used in both cases (i.e. for spectrum sensing and modulation classification) are similar. The CNN architecture for AMC is shown in Table III.
The results of modulation classification on the raw wideband signal and the performance comparison of our methodology with other methods are discussed in Section VIII.
Layers  Filter Size  Number of Filters  Output Dimension
Input  -  -  NxQx2
Conv/ReLU  1x150  256  Nx150x256
Conv/ReLU  1x100  128  Nx51x128
Conv/ReLU  1x51  64  Nx1x64
Conv/ReLU  1x1  8  Nx1x8
Custom pool/softmax  -  -  Nx8
VI DL-MC on Preprocessed Wideband Signal
Since the existing AMC methods work on a single preprocessed (i.e. baseband and RRC filtered) Nyquist-sampled narrowband signal, in this section we extend the proposed end-to-end pipeline of WSS-AMC to perform classification on every preprocessed detected busy band of the reconstructed sub-Nyquist wideband signal. To perform AMC, three types of datasets of the preprocessed signals have been studied extensively in the literature: 1) IQ (in-phase and quadrature-phase) samples, 2) amplitude-phase (AP) samples and 3) constellation diagram images of the modulation schemes. As real-time signal processing generates output in the form of samples, IQ and AP datasets are readily available for AMC as compared to constellation images. Hence, we explore the IQ and AP samples of the preprocessed wideband signals. Furthermore, motivated by the fact that CNN based models perform well on IQ samples and LSTM based models perform well on AP samples [29], we analyse the CNN architecture on IQ samples and the LSTM architecture on AP samples. In the next section, we perform an ablation study of the CNN and LSTM architectures, which are modified to take a wideband signal as input and simultaneously predict the modulation schemes of all detected busy bands.
VI-A CNN Model Architecture
Since this is the first work that deals with modulation classification of a multiband sub-Nyquist signal, in this section we perform an analysis to obtain the optimal CNN architecture for modulation classification on IQ samples of the multiband input signal. The architectures are designed in such a way that they give an N × 8 output, which includes the status of the bands and the modulation schemes of the busy bands (explained in detail in Section VII-D).
The baseline CNN architecture that we consider is shown in Table IV.
It consists of three convolution layers and a custom average pool/softmax activation layer at the end. The last convolution layer, along with the custom pool layer, aims to match the dimension of the output label; this is performed by averaging the input along the column dimension. The softmax activation layer associates the output with the probability of occurrence of every modulation scheme.
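The custom pool/softmax head can be sketched numerically. The feature dimensions below are illustrative and assume 8 output classes per band.

```python
import numpy as np

# Average the last convolution layer's N x Q' x 8 activations along the column
# dimension, then apply a per-band softmax, yielding an N x 8 probability matrix.
def custom_pool_softmax(feat):
    z = feat.mean(axis=1)                       # N x 8 logits after average pooling
    z = z - z.max(axis=1, keepdims=True)        # shift for numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
probs = custom_pool_softmax(rng.standard_normal((14, 51, 8)))
print(probs.shape)                                # (14, 8)
print(bool(np.allclose(probs.sum(axis=1), 1.0)))  # each band's row sums to 1
```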
Since the input signal to our model can be viewed as a set of multiple one-dimensional signal bands that have to be simultaneously classified according to their modulation scheme, we consider filter sizes of the form 1 × ntaps, as in [28]. The ablation study which helps us decide the baseline model's architecture is discussed in Section VIII.
Next, we discuss four state-of-the-art CNN architectures and use them to enhance the performance beyond our baseline.
Layers  Filter Size  Number of Filters  Output Dimension
Input  -  -  -
Conv/ReLU    64  
Conv/ReLU    64  
Conv    8  
Custom pool/softmax  -  -  
NiN  ResNet  Inception

Layer  Output Dimension  Layer  Output Dimension  Layer  Output Dimension
Input    Input    Input  
NiN Block    ResNet Block    Inception Block  
NiN Block    ResNet Block    Inception Block  
Conv/ReLU    Conv/ReLU    Conv/ReLU  
Custom pool/softmax    Custom pool/softmax    Custom pool/softmax
VI-A1 Network in Network (NiN)
The Network in Network architecture was introduced by Lin et al. [35]. It proposed the concept of micro networks integrated into the structure of a CNN to enhance the local modelling capability of the model. The authors argued that it is essential to compute abstract features for local patches before combining them into higher level concepts. The method can be viewed as an addition of 1x1 convolution layers. These 1x1 convolutions are similar to fully connected layers with tied weights, acting independently at each pixel value to arrive at a better abstraction of each local patch. The architecture of the NiN block and the model used is shown in Fig. 12 (a). The average pool custom layer, used instead of fully connected layers, acts as a structural regularizer and helps us train the network without any dropout at train time.
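The equivalence between a 1x1 convolution and a per-position fully connected layer can be shown in a few lines; the channel counts below are illustrative.

```python
import numpy as np

# A 1x1 convolution is a fully connected layer with tied weights applied
# independently at every spatial position: it maps C_in feature channels
# to C_out at each of the N x Q positions.
def conv_1x1_relu(feat, W, b):
    # feat: N x Q x C_in, W: C_in x C_out, b: C_out  ->  N x Q x C_out
    return np.maximum(feat @ W + b, 0.0)

rng = np.random.default_rng(0)
feat = rng.standard_normal((14, 51, 64))
W, b = rng.standard_normal((64, 32)), np.zeros(32)
out = conv_1x1_relu(feat, W, b)
print(out.shape)                                # (14, 51, 32)
```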
VI-A2 Inception Network
The Inception architecture was introduced in [37]. The Inception block was based on the intuition of processing visual information at different scales before aggregating it, so that the model can abstract features from different scales simultaneously. The block consists of three parallel paths, each with different sized filters, as shown in Fig. 12 (b). The information from the various sized kernels is combined by concatenation at the end.
VI-A3 Residual Network
The ResNet architecture was introduced by He et al. to mitigate the vanishing/exploding gradient problem faced while training deeper networks [36]. They addressed this issue by using skip connections between layers. The skip connections provide a direct path for the gradients to flow between layers without adding any extra parameters or increasing the model complexity. This stabilised the training process and prevented the degradation of accuracy as more layers were added. The architecture of the residual block and the model used is shown in Fig. 12 (c).

VI-A4 DenseNet Architecture
The DenseNet architecture [38] uses concatenation, rather than additive skip connections, to combine feature maps. This leads to an improved flow of information throughout the network: for each layer, the feature maps of all preceding layers are used as inputs, as shown in Fig. 12 (d). The DenseNet architecture mitigates the vanishing gradient problem, which makes training easier. It also encourages feature reuse and reduces the number of parameters, which leads to model compactness and less overfitting.
Please refer to Table V for the architecture details of the NiN, Inception and residual networks, and to Table VII for the performance of these models at different SNRs.
VI-B LSTM Model
In [29], the authors show that the representation of the input signal to the AMC network can lead to significant performance differences. They propose to use amplitude-phase samples as the input to an LSTM based model for the AMC of a preprocessed narrowband signal. Hence, we use an LSTM based network for the AP wideband samples. The architecture of the network is the one shown in Section IV-B2 (the same as that used for raw samples). Through extensive experiments, we find that this architecture performs best for the multiband AMC problem. We discuss the ablation study for the LSTM model in Section VIII.
VII Implementation Details
In this section, we discuss the datasets used for the DL-SS and DL-MC tasks, followed by a description of the proposed loss function used for DL-MC on the preprocessed wideband signal. The dataset is generated synthetically using MATLAB. It consists of seven widely used modulation schemes, namely BPSK, QPSK, 16-QAM, 64-QAM, 128-QAM, 256-QAM and 8-PAM. The dataset is keyed by both modulation and SNR. We consider the SNR range from 10 dB to 20 dB in steps of 2 dB.
VII-A Dataset for DL-SS
To determine the vacant/busy status of the frequency bands of a wideband signal, we utilize the complex-valued normalized pseudo-reconstructed signal. This signal is determined directly from the sub-Nyquist samples, as discussed in Section IV-B. For generating the dataset, the real and imaginary parts of the pseudo-reconstructed signal are separated, so the dataset is of size N × Q × 2. Here, we consider N = 14 frequency bands and Q = 299 samples per frequency band. Since this dataset is used for spectrum sensing, the label of each frequency band is either vacant (i.e. 0) or busy (i.e. 1). We normalize this dataset in the range (0, 1) before passing it to the deep learning models.
VII-B Dataset for DL-MC on Raw Wideband Samples
For determining the modulation schemes of the detected busy bands, we use the complex-valued reconstructed wideband signal. Similar to the spectrum sensing dataset, this dataset is also separated into real and imaginary parts and is hence of size N × Q × 2. Since this dataset classifies the modulation schemes, its labels include a class denoting a vacant frequency band in addition to the seven modulation schemes considered in the paper. Please note that, from here on, we refer to this dataset for DL-MC on raw wideband samples as the raw-samples dataset.
VII-C Dataset for DLMC on Preprocessed Wideband Samples
Like the existing AMC methods, this dataset consists of preprocessed wideband samples. However, unlike those methods, the preprocessing is done on the entire reconstructed wideband signal. The labels of this dataset are the same as those of the raw dataset. To perform DLMC on the preprocessed wideband samples, we consider the following two datasets:
IQ dataset: It consists of the time-domain IQ sample vectors of the preprocessed signal. The second dimension of the dataset is the number of modulated symbols, and the two vectors of the third dimension denote the in-phase and quadrature-phase components. We normalize this dataset to the range (0, 1) before passing it to the deep learning models. The dataset is studied for three types of channel models:
 with AWGN channel.
 with flat fading and AWGN channel.
 with flat fading and AWGN channel with a Doppler shift of 10 Hz.
AP dataset: It comprises the time-domain amplitude-phase vectors (i.e., the polar representation of the IQ samples) of the preprocessed signal. Similar to the IQ dataset, the amplitude and phase parts form the two vectors of the third dimension. The amplitude is L2-normalized and the phase (in radians) is normalized to lie between -1 and 1 [29]. This dataset is also studied for three types of channel models:
 with AWGN channel.
 with flat fading and AWGN channel.
 with flat fading and AWGN channel with a Doppler shift of 10 Hz.
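The AP preprocessing above can be sketched in a few lines, assuming the amplitude vector is L2-normalized per example and the phase is divided by pi to land in [-1, 1], following [29]; the function name is ours.

```python
import numpy as np

def iq_to_ap(iq):
    """Convert complex IQ samples into (amplitude, phase) feature vectors.

    The amplitude vector is L2-normalized and the phase (radians) is divided
    by pi so that it lies in (-1, 1] (normalization assumed per [29]).
    """
    amp = np.abs(iq)
    amp = amp / np.linalg.norm(amp)     # L2 normalization of the amplitude
    phase = np.angle(iq) / np.pi        # maps (-pi, pi] onto (-1, 1]
    return np.stack([amp, phase], axis=-1)

ap = iq_to_ap(np.array([1 + 1j, -1 + 0j, -2j]))
```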
VII-D Loss Function
Seven modulation schemes are considered for the classification; thus, for each frequency band, the output of the deep learning classifier (i.e., both CNN and LSTM) is a vector of unnormalized log probabilities (logits) of size 7. The values $z_1, \dots, z_7$ in this vector are converted to probabilities by applying a softmax activation function which (for a particular band) is calculated as

\[ p_i = \frac{e^{z_i}}{\sum_{j=1}^{7} e^{z_j}}, \qquad (7) \]

where $p_i$ is the predicted probability of the $i$-th ($i = 1, 2, \dots, 7$) modulation scheme for the frequency band. This gives an output vector of size 7 per band.
The training loss for a particular frequency band depends on the status of the band: it is the categorical cross entropy if the band is busy and zero if the band is vacant. For a busy band,

\[ L_b = -\sum_{i=1}^{7} y_i \log(p_i), \qquad (8) \]

where $y_i$ is the actual (one-hot) probability of the $i$-th modulation scheme for that frequency band. Next, the per-band classification output is concatenated with the band status vector obtained from the CNN-SS to give the final output. Thus, the entire loss function can be concisely expressed as

\[ L = \sum_{b=1}^{N} s_b \, L_b, \qquad (9) \]

where $N$ is the number of frequency bands and $s_b$ is the status (i.e., 0 for vacant and 1 for busy) of the $b$-th frequency band.
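The masked loss of Eqs. (7)-(9) can be sketched in plain NumPy as follows; this is a reference implementation for clarity, not the authors' Keras code, and the shapes and names are illustrative.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis (Eq. 7)."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def masked_ce_loss(logits, labels, band_status):
    """Per-band categorical cross entropy, zeroed for vacant bands (Eqs. 8-9).

    logits:      (N, 7) unnormalized log probabilities, one row per band
    labels:      (N,)   true modulation index of each band (ignored if vacant)
    band_status: (N,)   1 for a busy band, 0 for a vacant band
    """
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(labels)), labels])  # cross entropy per band
    return float(np.sum(band_status * ce))           # vacant bands contribute 0

logits = np.array([[2.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0],
                   [0.1, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
loss = masked_ce_loss(logits, np.array([0, 1]), np.array([1, 0]))  # band 1 vacant
```

Masking by the band status vector, rather than dropping vacant bands, keeps the output tensor a fixed size so the whole multiband loss can be computed in one pass.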
VII-E Training Parameters and Tools
The neural networks are implemented using Keras [58] (with a TensorFlow backend [59]) on an Nvidia CUDA [60] enabled Quadro P4000 GPU. The weights of the models are initialized using the default Keras initializers, and we use the Adam optimizer [61]. Furthermore, each dataset consists of 112,000 examples, out of which 75% (i.e., 84,000 examples) are used for training and the remaining 25% (i.e., 28,000 examples) are used for testing.

VIII Simulation Results
VIII-1 Ablation Study for CNN
We studied the classification performance to decide our baseline model architecture for different filter sizes (i.e., ntaps), numbers of filters and depths of the network. Figure 6 shows the classification performance for various values of ntaps. We notice that smaller filters perform better than larger ones, which is contrary to what we found in the case of raw sample inputs (as shown in Table III). Thus, we use small filter sizes in the baseline CNN. Furthermore, the results obtained by varying the number of filters are very similar to those reported in [28]. Thus, we use 64 filters for further analysis, as this is efficient from the computation, memory and performance points of view. Also, as shown in Fig. 7, we observe no significant performance improvement when we increase the depth of our network further. Hence, the baseline CNN uses 64 filters at the chosen network depth.
VIII-2 Ablation Study for LSTM
To validate our LSTM based architecture, we carry out an ablation study on the hidden state vector (HSV) size, as shown in Fig. 13. It can be clearly seen that an HSV size of 32 underperforms, especially in the SNR range of 2-12 dB. We further notice an improvement in performance as the HSV size is increased to 64. On further increasing the size to 128, we do not see any significant improvement in the average accuracy, but the computational time increases. Thus, we choose an HSV size of 64 for our model.
VIII-A Results
In this section, we present the results of our proposed method on the tasks of spectrum sensing and modulation classification.
VIII-A1 Spectrum Sensing
Fig. 10 shows the probability of detection of occupied bands using our proposed method. For comparison, we show the detection results of the OMP algorithm at different SNRs. It can be seen clearly that at lower SNR values (up to 5 dB) our method performs much better than OMP based detection. As the SNR increases further, the performance of our method and of OMP become similar and converge to the same value.
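For reference, the OMP baseline used in this comparison can be sketched as follows. This is a generic textbook OMP [26], shown on a toy deterministic partial-Hadamard measurement matrix rather than the paper's actual sub-Nyquist front end; the support of the recovered sparse vector plays the role of the detected busy bands.

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from measurements y = A @ x via orthogonal matching pursuit."""
    residual, support = y.astype(float), []
    x_s = np.zeros(0)
    for _ in range(k):
        # greedily pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares re-fit over the enlarged support, then update the residual
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x, sorted(support)

# 7 "compressed" measurements of an 8-band signal with 2 busy bands
H = np.array([[1.0]])
for _ in range(3):                       # Sylvester construction of an 8x8 Hadamard
    H = np.block([[H, H], [H, -H]])
A = H[1:, :]                             # drop one row -> a 7x8 measurement matrix
x_true = np.zeros(8)
x_true[[2, 5]] = [1.5, -2.0]             # busy bands and their amplitudes
x_hat, busy = omp(A, A @ x_true, 2)
```

The partial Hadamard matrix here has low column coherence, so OMP provably recovers the 2-sparse support; with random sub-Nyquist measurements and noise, recovery degrades at low SNR, which is the regime where the proposed CNN based sensing has the advantage.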
VIII-A2 Modulation Classification
For modulation classification, we compare the performance of the proposed sub-Nyquist WSS-AMC on the raw wideband signal and on the preprocessed IQ and AP samples of the reconstructed wideband signal. We consider the performance of various machine learning (ML) algorithms to establish a baseline for our analysis. As the traditional ML algorithms cannot handle a multiband signal as input, we pass only the detected busy bands, one at a time, for performance comparison. Furthermore, as the signal is sparse with very few busy bands, the classification accuracies are averaged only over the busy bands for a fair comparison. The results shown have been generated for the MCS based digitization of sub-Nyquist sampling; however, similar observations are made with other SNS techniques. The performance comparison is done with the orthogonal matching pursuit (OMP) based AMC and other sub-Nyquist sampled machine learning AMCs. OMP based AMC uses the OMP algorithm to perform WSS, whereas the proposed method uses the CNN deep learning model for WSS (as discussed in Section IV-B).
The classification accuracy comparison is performed for the raw wideband data, IQ samples and AP samples under AWGN and Rayleigh fading channel models. For all three types of data, it is observed that the proposed WSS-AMC offers higher classification accuracy than the OMP based WSS-AMC and the other ML based WSS-AMCs. The classification accuracy for the different channel models is discussed in detail below:
VIII-A3 AWGN channel condition
The classification accuracy for raw wideband data, preprocessed IQ samples and AP samples under AWGN channel conditions at different SNRs is shown in Fig. 19(a)-(c), respectively. In Fig. 19(a)-(c), the subscript NS refers to the Nyquist sampled signal, OMP refers to the signal reconstructed using the OMP algorithm, and P refers to the sub-Nyquist signal sensed and recovered using the proposed algorithm.
Fig. 19(a) shows the performance comparison for raw samples.
Here, CNN refers to the proposed model and LSTM refers to the proposed LSTM architecture in Fig. 5.
For the raw samples, it is observed that, averaging the classification accuracy over the entire range of SNRs, the proposed CNN based model offers 69.5% classification accuracy, which is better than the LSTM model's average accuracy of 67.7%.
For the raw dataset, it is observed that, at high SNR (i.e., 0 dB and above), the worst case (at 0 dB) and best case (at 20 dB) accuracies are 67.12% and 81.46%, respectively. The average accuracy over the SNR range 0-20 dB is 78% for the raw samples.
Fig. 19(b) and (c) show the performance comparison for the IQ samples and AP samples. Here, CNN refers to the proposed model (given in Table V(c)) and LSTM refers to the proposed LSTM architecture in Fig. 5.
It is observed that, due to the preprocessing of the reconstructed wideband signal, the classification accuracy of all classifiers is higher for the IQ and AP sample based AMC. For the IQ samples, the average accuracy of the proposed CNN based WSS-AMC is 88.95%; at high SNR it varies from 85.23% at 0 dB to 100% at 20 dB, and the average accuracy over the SNR range 0-20 dB is 96.42%. For the AP samples, the average classification accuracy achieved by the proposed LSTM based WSS-AMC is 88%; at high SNR it varies from 85.53% at 0 dB to 99.71% at 20 dB, and the average accuracy over the SNR range 0-20 dB is 95.83%.
VIII-A4 Impaired channel condition
Here, we consider slow fading and fast fading with a Doppler shift of 10 Hz for the Rayleigh fading channel model. Under these channel models, we find that the best performance is obtained using the proposed LSTM based WSS-AMC on the AP samples. Fig. 19(d) shows the classification accuracy of the proposed LSTM under the Rayleigh fading channel model. The average accuracy under Rayleigh fading channel conditions is 69.27%. Under the Rayleigh fading channel with a Doppler shift, the average accuracy is found to be 76.48%. For all the scenarios, it is observed that the classification accuracy of the proposed SNS based WSS-AMC approaches that of the NS based AMC as the SNR increases.
To better understand the classifier performance and inter-class discrepancies, we analyse the confusion plots of the proposed WSS-AMC at -4 dB, 0 dB and 18 dB in Fig. 29. At 18 dB we can clearly see a sharp diagonal with almost perfect classification, except for 16-QAM and 64-QAM. As the SNR reduces, the sharpness of the diagonal degrades further in the 16/64/256-QAM region. This happens because, for higher order modulation schemes like 16/64/256-QAM, the inter-symbol distance decreases, which makes it difficult for the classifier to distinguish the correct QAM class.
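A confusion plot of this kind is built from a confusion matrix accumulated over the detected busy bands only (vacant bands are excluded). A minimal sketch with hypothetical labels:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[r, c] counts busy bands of true class r that were predicted as class c."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# hypothetical predictions over busy bands only (class indices 0..6)
y_true = np.array([0, 1, 2, 2, 3])
y_pred = np.array([0, 1, 3, 2, 3])   # one class-2 band confused as class 3
cm = confusion_matrix(y_true, y_pred, 7)
per_class_acc = cm.diagonal() / np.maximum(cm.sum(axis=1), 1)
```

Off-diagonal mass concentrated in neighbouring QAM rows and columns is exactly the 16/64/256-QAM confusion discussed above.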
Average classification accuracy (%) over the full SNR range:

Method    | Raw  | IQ    | AP
OMP       | 68.9 | 88.11 | 86.64
Proposed  | 69.5 | 88.95 | 88
Classification accuracy (%) of the baseline classifiers under the three channel models (five SNR points per model):

Classifier | AWGN                     | Rayleigh Fading          | Rayleigh Fading with Doppler
Baseline   | 79.7 87.4 93.2 96.3 96.7 | 48.6 62.7 69.3 71.7 73.9 | 64.9 73.9 76.9 76.2 77.8
NiN        | 79.2 88.5 98.3 99.9 100  | 52.8 68.1 79.4 83.4 85.1 | 65.9 77 85.2 88.6 89.6
ResNet     | 79.9 88.6 98.2 99.9 100  | 52.5 66.6 79 83.3 86.6   | 66.6 76.9 84.6 89.1 90.8
Inception  | 80.4 88.9 98.2 99.7 100  | 53.7 67.7 79.5 85.4 88.9 | 67.5 77 85.5 90.3 91.2
DenseNet   | 80 88.4 97.7 99.7 99.9   | 53.1 66.4 77.8 82.5 84.1 | 65.7 77.2 86 90 90.9
IX Conclusions
In this paper, we propose a deep learning based unified pipeline for real-time WSS and modulation classification of a sparse wideband signal. The performance of the proposed SenseNet model is validated for different datasets and channel models. Simulation results show that the proposed model outperforms OMP in the task of spectrum sensing for all the channel models considered. Being the first model to perform AMC on a raw, unprocessed wideband signal, it achieves 78% accuracy on raw wideband samples at high SNR (0-20 dB), which is much higher than the performance of any other machine learning classifier. To validate our reconstruction method, we show that the modulation classification performance of our method is higher than that obtained on the OMP reconstructed signal. Furthermore, we also report the performance of the proposed model on IQ samples and AP samples. It is also shown that the classification performance of the proposed SenseNet model approaches that of Nyquist sampled AMC as the SNR increases, for all the datasets considered.
References

[1] M. W. Aslam, Z. Zhu, and A. K. Nandi, “Automatic modulation classification using combination of genetic programming and KNN,” in IEEE Trans. on Wireless Commun., vol. 11, no. 8, pp. 2742-2750, Aug. 2012.
 [2] M. Abu-Romoh, A. Aboutaleb and Z. Rezki, “Automatic Modulation Classification Using Moments and Likelihood Maximization,” in IEEE Communications Letters, vol. 22, no. 5, pp. 938-941, May 2018.
 [3] A. Alahmadi, M. Abdelhakim, J. Ren, and T. Li, “Defense against primary user emulation attacks in cognitive radio networks using advanced encryption standard,” in IEEE Trans. Inf. Forens. Security, vol. 9, no. 5, pp. 772781, May 2014.
 [4] R. Venkataramani and Y. Bresler, “Optimal NonUniform Sampling and Reconstruction for Multiband Signals,” in IEEE Transactions on Signal Processing, vol. 49, no. 10, pp. 23012313, Oct. 2001.
 [5] M. Mishali and Y. C. Eldar, “From Theory to Practice: SubNyquist Sampling of Sparse Wideband Analog Signals,” in IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 375391, April 2010.
 [6] H. Joshi, S. J. Darak, A. A. Kumar and R. Kumar, “Throughput Optimized NonContiguous Wideband Spectrum Sensing via Online Learning and SubNyquist Sampling,” in IEEE Wireless Communications Letters, vol. 8, no. 3, pp. 805808, June 2019.
 [7] C. W. Lim and M. B. Wakin, “Automatic modulation recognition for spectrum sensing using nonuniform compressive samples,” in IEEE International Conference on Communications (ICC), Ottawa, ON, pp. 35053510, June 2012.
 [8] L. Zhou and H. Man, “Wavelet Cyclic Feature Based Automatic Modulation Recognition Using Nonuniform Compressive Samples,” 78th IEEE Vehicular Technology Conference, Las Vegas, Nevada, pp. 16, Sept. 2013.
 [9] S. Ramjee, S. Ju, D. Yang, X. Liu, A. El Gamal, Y. C. Eldar, “Fast deep learning for automatic modulation classification” in arXiv:1901.05850, Jan. 2019.
 [10] K. Triantafyllakis, M. Surligas, G. Vardakis, and S. Papadakis, “Phasma: An automatic modulation classification system based on Random Forest,” in IEEE Int. Symp. Dyn. Spectr. Access Netw. (DySPAN), Piscataway, NJ, USA, Mar. 2017, pp. 1-3.
 [11] J. Li, Q. Meng, Y. Sun, L. Qiu, and W. Ma, “Automatic modulation classification using support vector machines and error correcting output codes,” in Proc. ITNEC 2017, pp. 60-63, Chengdu, China, Dec. 2017.
 [12] Gardner W.A. S “Signal interception: A unifying theoretical framework for feature detection.” in IEEE Trans. Commun., 1988;36:897–906. doi: 10.1109/26.3769
 [13] Dandawate A.V., Giannakis G.B. “Statistical tests for presence of cyclostationarity. ” in IEEE Trans. Signal Process., 1994;42:2355–2369. doi: 10.1109/78.317857.
 [14] Fehske A., Gaeddert J., Reed J.H. “A new approach to signal classification using spectral correlation and neural networks; ” in Proceedings of the First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks;, Baltimore, MD, USA. 8–11 November 2005; pp. 144–150.
 [15] Kaabouch N., Hu W.C.“Handbook of Research on SoftwareDefined and Cognitive Radio Technologies for Dynamic Spectrum Managemen. Volume 2. ” in IGI Global;, Hershey, PA, USA: 2014.
 [16] AlFuqaha A., Guizani M., Mohammadi M., Aledhari M., Ayyash M.“Internet of Things: A Survey on Enabling Technologies, Protocols, and Applications. ” in IEEE Commun. Surv. Tutor., 2015;17:2347–2376. doi: 10.1109/COMST.2015.2444095.
 [17] Rawat P., Singh K.D., Bonnin J.M.“Cognitive radio for M2M and Internet of Things: A survey.” in Comput. Commun., 2016;94:1–29. doi: 10.1016/j.comcom.2016.07.012.
 [18] Arjoune Y, Kaabouch N.“A Comprehensive Survey on Spectrum Sensing in Cognitive Radio Networks: Recent Advances, New Challenges, and Future Research Directions.” in Sensors (Basel). 2019;19(1):126. Published 2019 Jan 2. doi:10.3390/s19010126
 [19] H. Palangi, R. Ward, and L. Deng, “Convolutional deep stacking networks for distributed compressive sensing,” in Signal Processing, vol. 131, pp. 181-189, 2017.
 [20] H. Palangi, R. Ward, L. Deng, .“Using deep stacking network to improve structured compressed sensing with multiple measurement vectors,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),2013.
 [21] M. Mishali, Y. C. Eldar, O. Dounaevsky, and E. Shoshan, “Xampling: Analog to digital at subNyquist rates,” in IET Circuits, Devices Syst., vol. 5, pp. 8–20, Jan. 2011.
 [22] A. Mousavi, A.B. Patel, R.G. Baraniuk,“A deep learning approach to structured signal recovery,” in arXiv:abs/1508.04065.
 [23] H. Palangi, R. Ward, L. Deng,“Distributed compressive sensing: a deep learning approach,” in arXiv:abs/1508.04924
 [24] Karahanoglu N.B., Erdogan H.“A orthogonal matching pursuit: Bestfirst search for compressed sensing signal recovery.” in Digit. Signal Process. 2012;22:555–568. doi: 10.1016/j.dsp.2012.03.003.
 [25] Donoho D.L., Tsaig Y., Drori I., Starck J.L.“Sparse Solution of Underdetermined Systems of Linear Equations by Stagewise Orthogonal Matching Pursuit.” in IEEE Trans. Inf. Theory. 2012;58:1094–1121. doi: 10.1109/TIT.2011.2173241.
 [26] Tropp J.A., Gilbert A.C.“Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit.” in IEEE Trans. Inf. Theory. 2007;53:4655–4666. doi: 10.1109/TIT.2007.909108.
 [27] Timothy J. O’Shea ,Tamoghna Roy ,T. Charles Clancy “Over the Air Deep Learning Based Radio Signal Classification” in IEEE Journal of Selected Topics in Signal Processing , Volume: 12 , Issue: 1 , Feb. 2018
 [28] Nathan E. West and Timothy J. O’Shea “Deep Architectures for Modulation Recognition” in 2017 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN)
 [29] Sreeraj Rajendran, Wannes Meert, Domenico Giustiniano, Vincent Lenders, Sofie Pollin, “Deep Learning Models for Wireless Signal Classification With Distributed LowCost Spectrum Sensors”, in IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 3, pp. 433445, May 2018.
 [30] T.J. O’Shea, J. Corgan, T. C. Clancy, “Convolutional Radio Modulation Recognition Networks,” in Springer: Engineering Applications of Neural Networks (EANN), vol 629, pp. 213226, Aug. 2016.
 [31] Ao Dai ; Haijian Zhang ; Hong Sun “Automatic modulation classification using stacked sparse autoencoders” in 2016 IEEE 13th International Conference on Signal Processing (ICSP)
 [32] Gihan J. Mendis ; Jin Wei ; Arjuna Madanayake “Deep learningbased automated modulation classification for cognitive radio” in 2016 IEEE International Conference on Communication Systems (ICCS)
 [33] Timothy O’Shea ; Jakob Hoydis “An Introduction to Deep Learning for the Physical Layer” in IEEE Transactions on Cognitive Communications and Networking, Volume: 3 , Issue: 4 , Dec. 2017

[34] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
 [35] M. Lin, Q. Chen and S. Yan, “Network in network,” in International Conference on Learning Representations, 2014.

[36] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
 [37] Christian Szegedy ; Wei Liu ; Yangqing Jia ; Pierre Sermanet ; Scott Reed ; Dragomir Anguelov ; Dumitru Erhan,Vincent Vanhoucke,Andrew Rabinovich “Going deeper with convolutions,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
 [38] Gao Huang ; Zhuang Liu ; Laurens van der Maaten ; Kilian Q. Weinberger “Densely Connected Convolutional Networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
 [39] O.A. Dobre ; A. Abdi ; Y. BarNess ; W. Su, “Survey of automatic modulation classification techniques: classical approaches and new trends,” in IET Communications Volume: 1 , Issue: 2 , April 2007
 [40] Swami, A., and Sadler, B.M., “Hierarchical digital modulation classification using cumulants,” in IEEE Trans. Commun., 2000, 48, pp. 416–429
 [41] Dai, W., Wang, Y., and Wang, J., “‘Joint power and modulation classification using second and higher statistics’,” in Proc. WCNC, 2002, pp. 155–158
 [42] Hatzichristos, G., and Fargues, M.P, “‘A hierarchical approach to the classification of digital modulation types in multipath environments’,” in ASILOMAR, 2001, pp. 1494– 1498
 [43] Swami, A., Barbarossa, S., and Sadler, B. “‘Blind source separation and signal classification”’ in Proc. ASILOMAR, 2000, pp. 1187– 1191
 [44] Martret, C., and Boiteau, D.M. “‘Modulation classification by means of different order statistical moments’,” in IEEE MILCOM, 1997, pp. 1387–1391
 [45] Marchand, P., Lacoume, J.L., and Le Martret, C. “‘Classification of linear modulations by a combination of different orders cyclic cumulants’.” in Proc. ICASSP, 1998, pp. 2157–2160
 [46] Spooner, C.M. “‘Classification of cochannel communication signals using cyclic cumulants’,” in Proc. ASILOMAR, 1995, pp. 531–536
 [47] Spooner, C.M., Brown, W.A., and Yeung, G.K. “‘Automatic radiofrequency environment analysis”’ in Proc. ASILOMAR, 2000, pp. 1181–1186
 [48] Spooner, C.M.: “‘On the utility of sixthorder cyclic cumulants for RF signal classification”’ in Proc. ASILOMAR, 2001, pp. 890 –897
 [49] Dobre, O.A., BarNess, Y., and Su, W. “‘Higherorder cyclic cumulants for high order modulation classification”’ in Proc. IEEE MILCOM, 2003, pp. 112–117
 [50] Dobre, O.A., BarNess, Y., and Su, W.: “‘Selection combining for modulation recognition in fading channels’.” in Proc. IEEE MILCOM, 2005, pp. 2499–2505
 [51] Dobre, O.A., Abdi, A., BarNess, Y., Su, W“‘Automatic radiofrequency environment analysis”’ in Proc. ASILOMAR, 2000, pp. 1181–1186
 [52] Venalainen, J., Terho, L., and Koivunen, V.“‘Modulation classification in fading multipath channel’.” in Proc. ASILOMAR, 2002, pp. 1890–1894
 [53] S. Hochreiter and J. Schmidhuber, “Long shortterm memory,” in Neural Comput., vol. 9, no. 8, pp. 1735–1780, Nov. 1997.
 [54] A. Karpathy, J. Johnson, and L. FeiFei, “Visualizing and understanding recurrent networks,” in arXiv preprint arXiv:1506.02078, 2015.
 [55] A. Karpathy, J. Johnson, and L. FeiFei, “Visualizing and understanding recurrent networks,” in arXiv preprint arXiv:1506.02078, 2015.
 [56] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting.,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929– 1958, 2014.
 [57] R. A. Dunne and N. A. Campbell, “On the Pairing of the Softmax Activation and CrossEntropy Penalty Functions and the Derivation of the Softmax Activation Function,” in Proceedings of the Australian Conference on Neural Networks, pp. 181185, Melbourne, Australia, 1997.
 [58] F. Chollet, Keras, https://github.com/fchollet/keras, 2015.
 [59] M. Abadi, A. Agarwal, et al., “Tensorflow: largescale machine learning on heterogeneous systems”, 2015. Software available from tensorflow.org.
 [60] C. Nvidia, “Compute unified device architecture programming guide,”2007. https://docs.nvidia.com/cuda/cudacprogrammingguide/index.html
 [61] D. P. Kingma and J. B, “Adam: A Method for Stochastic Optimization,” in International Conference on Learning Representations (ICLR), San Diego, California, May 2015.
 [62] Y. Arjoune, N. Kaabouch, H. E. Ghazi and A. Tamtaoui, “Compressive sensing: Performance comparison of sparse recovery algorithms,” in IEEE Computing and Communication Workshop and Conference, pp. 17, USA, Jan. 2017.
 [63] R. Chartrand, “Exact Reconstruction of Sparse Signals via Nonconvex Minimization,” in IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707710, Oct. 2007.
 [64] E. J. Candes, M. B. Wakin, and S. P. Boyd,“Enhancing sparsity by reweighted l 1 minimization,” in Journal of Fourier analysis and applications, vol. 14, no. 56 ,pp. 877905, Dec. 2008.
 [65] J. F. C. Mota, J. M. F. Xavier, P. M. Q. Aguiar and M. Puschel, “Distributed Basis Pursuit,” in IEEE Transactions on Signal Processing, vol. 60, no. 4, pp. 19421956, April 2012.
 [66] J. D. Blanchard, M. Cermak, D. Hanle and Y. Jing, “Greedy Algorithms for Joint Sparse Recovery,” in IEEE Transactions on Signal Processing, vol. 62, no. 7, pp. 16941704, April 2014.
 [67] T. T. Cai and L. Wang, “Orthogonal matching pursuit for sparse signal recovery with noise,” in IEEE Transactions on Information theory, vol. 57, no. 7, pp. 46804688, July 2011.
 [68] S. Kapoor, S. Rao and G. Singh, “Opportunistic Spectrum Sensing by Employing Matched Filter in Cognitive Radio Network,”International Conference on Communication Systems and Network Technologies, Jammu, India, 2011, pp. 580583, July 2011.
 [69] M. LopezBenitez and F. Casadevall, “Improved energy detection spectrum sensing for cognitive radio,” in IET Communications, vol. 6, no. 8, pp. 785796, May 2012.
 [70] H.S. Chen, W. Gao, and D. G. Daut, “Spectrum sensing using cyclostationary properties and application to IEEE 802.22 WRAN,” in Proc. of Global Telecommunications Conference, pp. 3133–3138, Nov. 2007
 [71] T. Yucek and H. Arslan, “A survey of spectrum sensing algorithms for cognitive radio applications,” in IEEE Communications Surveys & Tutorials, vol. 11, no. 1, pp.‘116130, First Quarter 2009.
 [72] T. Blumensath and M. E. Davies.“Iterative hard thresholding for compressed sensing,” inElsevier: Applied and computational harmonic analysis,vol. 27, no. 3, pp. 265274, Nov. 2009.