Spectrum Sensing for Cognitive Radio Using Kernel-Based Learning

05/15/2011 ∙ by Shujie Hou, et al. ∙ Tennessee Tech University

Kernel methods are powerful tools in machine learning. The kernel trick has been applied effectively and extensively in many areas of machine learning, such as the support vector machine (SVM) and kernel principal component analysis (kernel PCA). The kernel trick is to define a kernel function that relies only on the inner product of data in the feature space, without knowing the feature space data themselves. In this paper, the kernel trick is employed to extend the algorithm of spectrum sensing with the leading eigenvector under the framework of PCA to a higher dimensional feature space. Namely, the leading eigenvector of the sample covariance matrix in the feature space is used for spectrum sensing without knowing the leading eigenvector explicitly. Spectrum sensing with the leading eigenvector under the framework of kernel PCA is proposed, with the inner product as a measure of similarity. A modified kernel GLRT algorithm based on the matched subspace model is applied to spectrum sensing for the first time. The experimental results on a simulated sinusoidal signal show that spectrum sensing with kernel PCA is about 4 dB better than PCA; in addition, kernel GLRT is also better than GLRT. The proposed algorithms are also tested on the measured DTV signal. The simulation results show that the kernel methods are 4 dB better than the corresponding linear methods. The leading eigenvector of the sample covariance matrix learned by kernel PCA is more stable than that learned by PCA for different segments of the DTV signal.


I Introduction

Spectrum sensing is a cornerstone of cognitive radio [1, 2]; it detects the availability of radio frequency bands for possible use by a secondary user without interference to the primary user. Some traditional techniques proposed for spectrum sensing are energy detection, matched filter detection, cyclostationary feature detection, covariance-based detection, and feature-based detection [3, 4, 5, 6, 7, 8, 9, 10, 11]. The spectrum sensing problem is, at its core, a detection problem.

The secondary user receives the signal $x(t)$. Based on the received signal, there are two hypotheses: $\mathcal{H}_1$, the primary user is present, and $\mathcal{H}_0$, the primary user is absent. In practice, spectrum sensing involves detecting whether the primary user is present or not from discrete samples $x[n]$ of $x(t)$.

(1)   $\mathcal{H}_0: \; x[n] = w[n], \qquad \mathcal{H}_1: \; x[n] = s[n] + w[n], \qquad n = 1, \dots, N$

in which $s[n]$ are samples of the primary user's signal and $w[n]$ are samples of zero-mean white Gaussian noise. In general, spectrum sensing algorithms aim at maximizing the detection rate at a fixed false alarm rate with low computational complexity. The detection rate and false alarm rate are defined as

(2)   $P_d = P\!\left(\mathcal{H}_1 \mid \mathcal{H}_1\right), \qquad P_f = P\!\left(\mathcal{H}_1 \mid \mathcal{H}_0\right)$

in which $P(\cdot)$ represents probability.

Kernel methods [12, 13, 14, 15] have been extensively and successfully applied in machine learning, especially in the support vector machine (SVM) [16, 17]. Kernel methods are counterparts of linear methods that operate in a feature space. The data in the original space can be mapped to different feature spaces with different kernel functions. This diversity of feature spaces gives more freedom to obtain a better-performing algorithm than working only in the original space.

A kernel function, which relies only on the inner product of feature space data, is defined as [18]

(3)   $k(\mathbf{x}, \mathbf{y}) = \left\langle \Phi(\mathbf{x}), \Phi(\mathbf{y}) \right\rangle$

to implicitly map the original space data into a higher dimensional feature space $F$, where $\Phi$ is the mapping from the original space to the feature space. The dimension of $F$ can be infinite, as with the Gaussian kernel, so direct operation on $\Phi(\mathbf{x})$ may be computationally infeasible. However, with the use of the kernel function, the computation relies only on the inner product between the data points. Thus the extension of some algorithms to even an arbitrary-dimensional feature space becomes possible.

$k(\mathbf{x}, \mathbf{y})$ is the inner product between $\Phi(\mathbf{x})$ and $\Phi(\mathbf{y})$. A function is a valid kernel if there exists a mapping $\Phi$ satisfying Eq. (3). Mercer's condition [18] specifies which functions are valid kernels. Kernel functions allow a linear method to be generalized to a nonlinear one without knowing $\Phi$ explicitly. If the data in the original space embody nonlinear structure, kernel methods can usually obtain better performance than linear methods.
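As a concrete illustration of the kernel trick (an example added here, not taken from the original paper), the short NumPy check below verifies that the second-order homogeneous polynomial kernel $k(\mathbf{x}, \mathbf{y}) = \langle \mathbf{x}, \mathbf{y} \rangle^2$ equals an ordinary inner product after an explicit feature map; the map phi and the toy vectors are assumptions chosen only for this example.

# Illustrative only: the kernel value matches the inner product of explicit
# feature-space vectors, so the feature map never has to be computed in practice.
import numpy as np

def poly2_kernel(x, y):
    """Second-order homogeneous polynomial kernel k(x, y) = <x, y>^2."""
    return np.dot(x, y) ** 2

def phi(x):
    """Explicit feature map (for 2-D inputs) whose inner product reproduces poly2_kernel."""
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2.0) * x[0] * x[1]])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(poly2_kernel(x, y))       # 1.0
print(phi(x) @ phi(y))          # 1.0, the same value computed through phi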

Spectrum sensing with the leading eigenvector of the sample covariance matrix has been proposed and successfully demonstrated in hardware in [11] under the framework of PCA. The leading eigenvector of a non-white wide-sense stationary (WSS) signal has been proved to be stable [11]. In this paper, spectrum sensing with the leading eigenvector of the sample covariance matrix of the feature space data is proposed. The kernel trick is employed to implicitly map the original space data to a higher dimensional feature space. In the feature space, the inner product is taken as a measure of similarity between leading eigenvectors, without knowing the leading eigenvectors explicitly. That is to say, spectrum sensing with the leading eigenvector under the framework of kernel PCA is proposed, with the inner product as a measure of similarity.

Several generalized likelihood ratio test (GLRT) [19, 20] algorithms have been proposed for spectrum sensing. A kernel GLRT [21] algorithm based on the matched subspace model [22] has been proposed and applied to the hyperspectral target detection problem; it assumes that the target and background lie in known linear subspaces $\langle \mathbf{T} \rangle$ and $\langle \mathbf{B} \rangle$. $\mathbf{T}$ and $\mathbf{B}$ are orthonormal matrices whose columns span the subspaces $\langle \mathbf{T} \rangle$ and $\langle \mathbf{B} \rangle$, and they consist of the eigenvectors corresponding to the nonzero eigenvalues of the sample covariance matrices of the target and background, respectively. In [21], the identity projection operator in the feature space is assumed to map $\Phi(\mathbf{x})$ onto the subspace consisting of the linear combinations of the column vectors of $\mathbf{T}$ and $\mathbf{B}$.

In this paper, a modified kernel GLRT algorithm based on the matched subspace model is employed for spectrum sensing for the first time, without consideration of the background. Moreover, the identity projection operator in the feature space is assumed in this paper to map $\Phi(\mathbf{x})$ to $\Phi(\mathbf{x})$ itself.

The contributions of this paper are as follows. The detection algorithm with the leading eigenvector is generalized to feature spaces which are determined by the choice of kernel function. Simply speaking, leading-eigenvector detection based on kernel PCA is proposed for spectrum sensing. Different from PCA, the similarity of leading eigenvectors is measured by the inner product instead of the maximum absolute value of the cross-correlation. A modified version of kernel GLRT is introduced to spectrum sensing which considers the perfect identity projection operator in the feature space without involving a background signal. The DTV signal [23] captured in Washington D.C. is employed to test the proposed kernel PCA and kernel GLRT algorithms for spectrum sensing.

The organization of this paper is as follows. In Section II, spectrum sensing with the leading eigenvector under the framework of PCA is reviewed, and detection with the leading eigenvector is extended to the feature space by use of the kernel; the proposed spectrum sensing algorithm with the leading eigenvector under the framework of kernel PCA is also introduced in Section II. GLRT and modified kernel GLRT algorithms for spectrum sensing based on the matched subspace model are introduced in Section III. The experimental results on the simulated sinusoidal signal and the DTV signal are shown in Section IV, where the kernel methods are compared with the corresponding linear methods. Finally, the paper is concluded in Section V.

II Spectrum Sensing with PCA and Kernel PCA

The $N$-dimensional received vector is $\mathbf{x}$; therefore,

(4)   $\mathcal{H}_0: \; \mathbf{x} = \mathbf{w}, \qquad \mathcal{H}_1: \; \mathbf{x} = \mathbf{s} + \mathbf{w}$

in which $\mathbf{s}$ and $\mathbf{w}$ denote the vector of the primary user's signal samples and the vector of noise samples, respectively. Assume that samples of the primary user's signal $s[n]$ are known a priori. The training set consists of

(5)   $\bar{\mathbf{s}}_i = \left[s[(i-1)\Delta + 1], \; s[(i-1)\Delta + 2], \; \dots, \; s[(i-1)\Delta + N]\right]^T, \qquad i = 1, 2, \dots, M$

where $M$ is the number of vectors in the training set and $\Delta$ is the sampling interval. $(\cdot)^T$ represents transpose.
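A minimal sketch (assumed details, not the authors' code) of how the training vectors in (5) might be assembled in NumPy, assuming each vector is a length-$N$ segment of the signal starting $\Delta$ samples after the previous one:

import numpy as np

def build_training_vectors(signal, N, M, delta):
    """Stack M length-N segments of `signal` as the columns of an N x M matrix,
    with consecutive segments shifted by `delta` samples (cf. Eq. (5))."""
    return np.column_stack([signal[i * delta : i * delta + N] for i in range(M)])

# Example: a 1000-sample signal split into M = 50 vectors of length N = 16 with delta = 16.
s_bar = build_training_vectors(np.arange(1000.0), N=16, M=50, delta=16)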

II-A Detection Algorithm with Leading Eigenvector under the Framework of PCA

The leading eigenvector (the eigenvector corresponding to the largest eigenvalue) of the sample covariance matrix of the training set is taken as the template of the PCA method. Given the $N$-dimensional column vectors $\bar{\mathbf{s}}_i$, $i = 1, \dots, M$, of the training set, the sample covariance matrix can be obtained by

(6)   $\mathbf{R}_s = \frac{1}{M}\sum_{i=1}^{M}\bar{\mathbf{s}}_i\bar{\mathbf{s}}_i^T$

which assumes that the sample mean is zero,

(7)   $\frac{1}{M}\sum_{i=1}^{M}\bar{\mathbf{s}}_i = \mathbf{0}$

The leading eigenvector of $\mathbf{R}_s$ can be extracted by the eigen-decomposition of $\mathbf{R}_s$,

(8)   $\mathbf{R}_s = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^T$

where $\boldsymbol{\Lambda} = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_N)$ is a diagonal matrix whose entries $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_N$ are the eigenvalues of $\mathbf{R}_s$. $\mathbf{U}$ is an orthonormal matrix, the columns of which are the eigenvectors corresponding to the eigenvalues $\lambda_1, \dots, \lambda_N$. For simplicity, take $\mathbf{u}_1$ as the eigenvector corresponding to the largest eigenvalue. The leading eigenvector $\mathbf{u}_1$ is the template of PCA.

For the received samples $x[n]$, likewise, $M$ vectors $\bar{\mathbf{x}}_i$ can be obtained by (5). (Indeed, the number of training vectors is not necessarily equal to the number of received vectors; here, for simplicity, we use the same $M$ to denote both.) The leading eigenvector $\mathbf{v}_1$ of the sample covariance matrix $\mathbf{R}_x$ of the received vectors is obtained. The presence of $s[n]$ in $x[n]$ is determined by

(9)   $\rho \;\underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\; \gamma_{\mathrm{PCA}}$

where $\gamma_{\mathrm{PCA}}$ is the threshold value for the PCA method, and $\rho$ is the similarity between $\mathbf{v}_1$ and the template $\mathbf{u}_1$, measured by the maximum absolute value of their cross-correlation. $\gamma_{\mathrm{PCA}}$ is assigned to achieve a desired false alarm rate. Detection with the leading eigenvector under the framework of PCA is simply called PCA detection.
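The PCA detection just described can be sketched in a few lines of NumPy; this is an illustrative reading of Eqs. (6)-(9) under the zero-mean assumption, not the authors' implementation, and the threshold would still have to be set from the desired false alarm rate.

import numpy as np

def leading_eigvec(vectors):
    """Leading eigenvector of the sample covariance matrix of the column
    vectors (N x M), assuming a zero sample mean as in Eqs. (6)-(7)."""
    R = vectors @ vectors.T / vectors.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)      # ascending eigenvalues
    return eigvecs[:, -1]                     # eigenvector of the largest eigenvalue

def pca_similarity(u1, v1):
    """Maximum absolute value of the cross-correlation between template and received eigenvector."""
    return float(np.max(np.abs(np.correlate(u1, v1, mode="full"))))

def pca_detect(train_vecs, recv_vecs, threshold):
    """Declare H1 when the similarity rho exceeds the PCA threshold, as in Eq. (9)."""
    rho = pca_similarity(leading_eigvec(train_vecs), leading_eigvec(recv_vecs))
    return rho >= threshold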

II-B Detection Algorithm with Leading Eigenvector under the Framework of Kernel PCA

A nonlinear version of PCA, kernel PCA [24], has been proposed based on the classical PCA approach. A kernel function is employed by kernel PCA to implicitly map the data into a higher dimensional feature space, in which PCA is assumed to work better than in the original space. By introducing the kernel function, the mapping need not be explicitly known, so better performance can be obtained without adding much computational complexity.

The training set and the received set in kernel PCA are obtained in the same way as in the PCA framework.

The training set in the feature space is $\Phi(\bar{\mathbf{s}}_1), \dots, \Phi(\bar{\mathbf{s}}_M)$, which is assumed to have zero mean, i.e., $\sum_{i=1}^{M}\Phi(\bar{\mathbf{s}}_i) = \mathbf{0}$. Similarly, the sample covariance matrix of $\Phi(\bar{\mathbf{s}}_i)$ is

(10)   $\mathbf{R}_\Phi = \frac{1}{M}\sum_{i=1}^{M}\Phi(\bar{\mathbf{s}}_i)\Phi(\bar{\mathbf{s}}_i)^T$

The leading eigenvector $\mathbf{u}_1^\Phi$ of $\mathbf{R}_\Phi$, corresponding to the largest eigenvalue $\lambda$, satisfies

(11)   $\lambda\,\mathbf{u}_1^\Phi = \mathbf{R}_\Phi\,\mathbf{u}_1^\Phi = \frac{1}{M}\sum_{i=1}^{M}\left\langle \Phi(\bar{\mathbf{s}}_i), \mathbf{u}_1^\Phi \right\rangle \Phi(\bar{\mathbf{s}}_i)$

The last equality in (11) implies that the eigenvector $\mathbf{u}_1^\Phi$ is a linear combination of the feature space data $\Phi(\bar{\mathbf{s}}_1), \dots, \Phi(\bar{\mathbf{s}}_M)$,

(12)   $\mathbf{u}_1^\Phi = \sum_{i=1}^{M}\alpha_i\,\Phi(\bar{\mathbf{s}}_i)$

Substituting (12) into (11),

(13)   $\lambda\sum_{i=1}^{M}\alpha_i\,\Phi(\bar{\mathbf{s}}_i) = \frac{1}{M}\sum_{i=1}^{M}\sum_{j=1}^{M}\alpha_j\left\langle \Phi(\bar{\mathbf{s}}_i), \Phi(\bar{\mathbf{s}}_j) \right\rangle \Phi(\bar{\mathbf{s}}_i)$

and left-multiplying both sides of (13) by $\Phi(\bar{\mathbf{s}}_k)^T$, $k = 1, \dots, M$, yields

(14)   $\lambda\sum_{i=1}^{M}\alpha_i\left\langle \Phi(\bar{\mathbf{s}}_k), \Phi(\bar{\mathbf{s}}_i) \right\rangle = \frac{1}{M}\sum_{i=1}^{M}\sum_{j=1}^{M}\alpha_j\left\langle \Phi(\bar{\mathbf{s}}_k), \Phi(\bar{\mathbf{s}}_i) \right\rangle\left\langle \Phi(\bar{\mathbf{s}}_i), \Phi(\bar{\mathbf{s}}_j) \right\rangle, \qquad k = 1, \dots, M$

By introducing the kernel matrix $\mathbf{K}_s$, with $(\mathbf{K}_s)_{ij} = k(\bar{\mathbf{s}}_i, \bar{\mathbf{s}}_j)$, and the coefficient vector $\boldsymbol{\alpha} = [\alpha_1, \dots, \alpha_M]^T$, Eq. (14) becomes

(15)   $M\lambda\,\mathbf{K}_s\boldsymbol{\alpha} = \mathbf{K}_s^{2}\boldsymbol{\alpha}, \qquad \text{i.e.,} \qquad M\lambda\,\boldsymbol{\alpha} = \mathbf{K}_s\boldsymbol{\alpha}$

It can be seen that $\boldsymbol{\alpha}$ is the leading eigenvector of the kernel matrix $\mathbf{K}_s$. The kernel matrix $\mathbf{K}_s$ is positive semidefinite.

Thus, the coefficients $\alpha_i$ in (12) for $\mathbf{u}_1^\Phi$ can be obtained by the eigen-decomposition of the kernel matrix $\mathbf{K}_s$, as proved in [24]. The normalization of $\mathbf{u}_1^\Phi$ to unit length can be derived by [24]

(16)   $\mu_1\left\langle \boldsymbol{\alpha}, \boldsymbol{\alpha} \right\rangle = 1$

in which $\mu_1$ is the eigenvalue of $\mathbf{K}_s$ corresponding to the eigenvector $\boldsymbol{\alpha}$.

In the traditional kernel PCA approach [24], the first principal component of a point $\mathbf{x}$ in the feature space can be extracted by

(17)   $\left\langle \mathbf{u}_1^\Phi, \Phi(\mathbf{x}) \right\rangle = \sum_{i=1}^{M}\alpha_i\, k(\bar{\mathbf{s}}_i, \mathbf{x})$

without knowing $\Phi$ explicitly.

However, for the detection problem it is the leading eigenvector itself that is needed as the template, rather than the principal components in the feature space. Though $\mathbf{u}_1^\Phi$ can be written as a linear combination of the $\Phi(\bar{\mathbf{s}}_i)$, in which the coefficients are the entries of the leading eigenvector of $\mathbf{K}_s$, the $\Phi(\bar{\mathbf{s}}_i)$ are not given, so the leading eigenvector is still not explicitly known.

In this paper, a detection scheme based on the leading eigenvector of the sample covariance matrix in the feature space is proposed without knowing that eigenvector explicitly.

Given the received vectors $\bar{\mathbf{x}}_1, \dots, \bar{\mathbf{x}}_M$, likewise, the leading eigenvector $\mathbf{v}_1^\Phi$ of the sample covariance matrix in the feature space is a linear combination of the feature space data $\Phi(\bar{\mathbf{x}}_i)$, e.g.,

(18)   $\mathbf{v}_1^\Phi = \sum_{i=1}^{M}\beta_i\,\Phi(\bar{\mathbf{x}}_i)$

$\boldsymbol{\beta} = [\beta_1, \dots, \beta_M]^T$ is the leading eigenvector of the kernel matrix

(19)   $(\mathbf{K}_x)_{ij} = k(\bar{\mathbf{x}}_i, \bar{\mathbf{x}}_j), \qquad i, j = 1, \dots, M$

As is well known, the inner product is one kind of similarity measure. Here, the similarity between $\mathbf{u}_1^\Phi$ and $\mathbf{v}_1^\Phi$ is measured by their inner product,

(20)   $\left\langle \mathbf{u}_1^\Phi, \mathbf{v}_1^\Phi \right\rangle = \sum_{i=1}^{M}\sum_{j=1}^{M}\alpha_i\beta_j\, k(\bar{\mathbf{s}}_i, \bar{\mathbf{x}}_j) = \boldsymbol{\alpha}^T\mathbf{K}_{sx}\,\boldsymbol{\beta}$

$\mathbf{K}_{sx}$, with $(\mathbf{K}_{sx})_{ij} = k(\bar{\mathbf{s}}_i, \bar{\mathbf{x}}_j)$, is the kernel matrix between the training set and the received set. A measure of similarity between $\mathbf{u}_1^\Phi$ and $\mathbf{v}_1^\Phi$ has thus been obtained from (20) without knowing $\mathbf{u}_1^\Phi$ and $\mathbf{v}_1^\Phi$ themselves.

Fig. 1: The flow chart of the proposed kernel PCA algorithm for spectrum sensing

The proposed detection algorithm with leading eigenvector under the framework of kernel PCA is summarized here as follows:

  1. Choose a kernel function $k(\cdot, \cdot)$. Given the training set $\bar{\mathbf{s}}_1, \dots, \bar{\mathbf{s}}_M$ of the primary user's signal, the kernel matrix $\mathbf{K}_s$ with $(\mathbf{K}_s)_{ij} = k(\bar{\mathbf{s}}_i, \bar{\mathbf{s}}_j)$ is obtained. $\mathbf{K}_s$ is positive semidefinite. Eigen-decompose $\mathbf{K}_s$ to obtain the leading eigenvector $\boldsymbol{\alpha}$.

  2. The received vectors are $\bar{\mathbf{x}}_1, \dots, \bar{\mathbf{x}}_M$. Based on the chosen kernel function, the kernel matrix $\mathbf{K}_x$ is obtained. The leading eigenvector $\boldsymbol{\beta}$ is obtained by the eigen-decomposition of $\mathbf{K}_x$.

  3. The leading eigenvectors in the feature space for the training set and the received set can be expressed as

    (21)   $\mathbf{u}_1^\Phi = \sum_{i=1}^{M}\alpha_i\,\Phi(\bar{\mathbf{s}}_i), \qquad \mathbf{v}_1^\Phi = \sum_{i=1}^{M}\beta_i\,\Phi(\bar{\mathbf{x}}_i)$
  4. Normalize $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ by (16).

  5. The similarity between $\mathbf{u}_1^\Phi$ and $\mathbf{v}_1^\Phi$ is

    (22)   $\rho_\Phi = \boldsymbol{\alpha}^T\mathbf{K}_{sx}\,\boldsymbol{\beta}$
  6. Determine the presence or absence of the primary signal in $x[n]$ by evaluating whether $\rho_\Phi \geq \gamma_{\mathrm{KPCA}}$ or not.

$\gamma_{\mathrm{KPCA}}$ is the threshold value for the kernel PCA algorithm. The flow chart of the proposed kernel PCA algorithm for spectrum sensing is shown in Fig. 1. Detection with the leading eigenvector under the framework of kernel PCA is simply called kernel PCA detection. The template of PCA can be learned blindly even at very low signal-to-noise ratio (SNR) [25].
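A compact NumPy sketch of the six steps above follows; it is an assumed implementation for illustration (the authors' code is not given), and the polynomial kernel offset at the end is an arbitrary placeholder.

import numpy as np

def kernel_matrix(A, B, kernel):
    """Kernel matrix with entries kernel(a_i, b_j) for the column vectors of A and B."""
    return np.array([[kernel(a, b) for b in B.T] for a in A.T])

def leading_alpha(K):
    """Leading eigenvector of a kernel matrix, normalized as in Eq. (16): mu * <a, a> = 1."""
    eigvals, eigvecs = np.linalg.eigh(K)
    mu, a = eigvals[-1], eigvecs[:, -1]       # eigh returns unit-norm eigenvectors
    return a / np.sqrt(mu)

def kpca_similarity(S, X, kernel):
    """Inner product between the leading feature-space eigenvectors, Eqs. (20) and (22)."""
    alpha = leading_alpha(kernel_matrix(S, S, kernel))   # steps 1 and 4
    beta = leading_alpha(kernel_matrix(X, X, kernel))    # steps 2 and 4
    K_sx = kernel_matrix(S, X, kernel)
    return float(alpha @ K_sx @ beta)                    # step 5

# Step 6, with an illustrative second-order polynomial kernel (offset chosen arbitrarily):
poly2 = lambda x, y: (np.dot(x, y) + 1.0) ** 2
# decide H1 if kpca_similarity(S, X, poly2) >= threshold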

So far the mean of the $\Phi(\bar{\mathbf{s}}_i)$ has been assumed to be zero. In general, the zero-mean (centered) data in the feature space are

(23)   $\tilde{\Phi}(\bar{\mathbf{s}}_i) = \Phi(\bar{\mathbf{s}}_i) - \frac{1}{M}\sum_{j=1}^{M}\Phi(\bar{\mathbf{s}}_j)$

The kernel matrix for these centered (zero-mean) data can be derived as [24]

(24)   $\tilde{\mathbf{K}}_s = \mathbf{K}_s - \mathbf{1}_M\mathbf{K}_s - \mathbf{K}_s\mathbf{1}_M + \mathbf{1}_M\mathbf{K}_s\mathbf{1}_M$

in which $\mathbf{1}_M$ is the $M \times M$ matrix with all entries equal to $1/M$. The centering in the feature space is not done in this paper.
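Although centering is not used in this paper, Eq. (24) is straightforward to apply in practice; a minimal sketch, assuming the uncentered kernel matrix has already been computed:

import numpy as np

def center_kernel_matrix(K):
    """Centered kernel matrix of Eq. (24), with 1M the M x M matrix whose entries are all 1/M."""
    M = K.shape[0]
    one_M = np.full((M, M), 1.0 / M)
    return K - one_M @ K - K @ one_M + one_M @ K @ one_M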

Some commonly used kernels are as follows: polynomial kernels

(25)   $k(\mathbf{x}, \mathbf{y}) = \left(\left\langle \mathbf{x}, \mathbf{y} \right\rangle + c\right)^p$

where $p$ is the order of the polynomial and $c$ is an offset; radial basis function (RBF) kernels

(26)   $k(\mathbf{x}, \mathbf{y}) = \exp\!\left(-\rho\, d(\mathbf{x}, \mathbf{y})\right)$

and Neural Network type kernels

(27)   $k(\mathbf{x}, \mathbf{y}) = \tanh\!\left(\kappa\left\langle \mathbf{x}, \mathbf{y} \right\rangle + \vartheta\right)$

in which the heavy-tailed RBF kernel is in the form of

(28)   $k(\mathbf{x}, \mathbf{y}) = \exp\!\left(-\rho\sum_{i}\left|x_i^{a} - y_i^{a}\right|^{b}\right)$

and Gaussian RBF kernel is

(29)   $k(\mathbf{x}, \mathbf{y}) = \exp\!\left(-\dfrac{\left\|\mathbf{x} - \mathbf{y}\right\|^2}{2\sigma^2}\right)$
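For reference, the kernels in (25), (27), and (29) can be written directly as small functions; the default parameter values below are placeholders, not the ones used in the experiments of this paper.

import numpy as np

def polynomial_kernel(x, y, p=2, c=1.0):
    """Polynomial kernel of order p, cf. Eq. (25); the offset c is an assumed form."""
    return (np.dot(x, y) + c) ** p

def sigmoid_kernel(x, y, kappa=1.0, theta=0.0):
    """Neural-network type kernel, cf. Eq. (27)."""
    return np.tanh(kappa * np.dot(x, y) + theta)

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel, cf. Eq. (29)."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))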

III Spectrum Sensing with GLRT and Kernel GLRT

GLRT and kernel GLRT methods considered in this paper also assume that there is a training set $\bar{\mathbf{s}}_1, \dots, \bar{\mathbf{s}}_M$ for the primary user's signal, in which the $\bar{\mathbf{s}}_i$ are $N$-dimensional column vectors. The primary user's signal is assumed to lie in a given linear subspace $\langle \mathbf{S} \rangle$. The training set is used to estimate this subspace.

Given the training set, the sample covariance matrix $\mathbf{R}_s$ is obtained by (6). The eigenvectors of $\mathbf{R}_s$ corresponding to nonzero eigenvalues are taken as the bases of the subspace $\langle \mathbf{S} \rangle$.

A kernel GLRT [21] based on the matched subspace model has been proposed for hyperspectral target detection, which takes the background into account. The background information can be regarded as interference in spectrum sensing. In this paper, a modified kernel GLRT algorithm based on the matched subspace model is proposed for spectrum sensing without taking the interference into consideration.

III-A GLRT Based on Matched Subspace Model

The GLRT approach in this paper is based on the linear subspace model [22], in which the primary user's signal is assumed to lie in the linear subspace $\langle \mathbf{S} \rangle$. For a received $N$-dimensional vector $\mathbf{x}$, the two hypotheses $\mathcal{H}_0$ and $\mathcal{H}_1$ can be expressed as

(30)   $\mathcal{H}_0: \; \mathbf{x} = \mathbf{w}, \qquad \mathcal{H}_1: \; \mathbf{x} = \mathbf{S}\boldsymbol{\theta} + \mathbf{w}$

$\langle \mathbf{S} \rangle$ is spanned by the column vectors of $\mathbf{S}$. $\mathbf{S}$ is an orthonormal matrix, for which $\mathbf{S}^T\mathbf{S} = \mathbf{I}$ is an identity matrix. $\boldsymbol{\theta}$ is the coefficient vector, each entry of which represents the magnitude on the corresponding basis of $\langle \mathbf{S} \rangle$. $\mathbf{w}$ is still a white Gaussian noise vector, which obeys the multivariate Gaussian distribution $\mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$.

For the received vector $\mathbf{x}$, the likelihood ratio test (LRT) decides between the two hypotheses $\mathcal{H}_0$ and $\mathcal{H}_1$ by

(31)   $L(\mathbf{x}) = \dfrac{p(\mathbf{x} \mid \mathcal{H}_1)}{p(\mathbf{x} \mid \mathcal{H}_0)} \;\underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\; \gamma_{\mathrm{LRT}}$

in which $\gamma_{\mathrm{LRT}}$ is the threshold value of the LRT approach. $p(\mathbf{x} \mid \mathcal{H}_1)$ and $p(\mathbf{x} \mid \mathcal{H}_0)$ are conditional probability densities which follow Gaussian distributions,

(32)   $p(\mathbf{x} \mid \mathcal{H}_0) = \dfrac{1}{(2\pi\sigma^2)^{N/2}}\exp\!\left(-\dfrac{\mathbf{x}^T\mathbf{x}}{2\sigma^2}\right), \qquad p(\mathbf{x} \mid \mathcal{H}_1) = \dfrac{1}{(2\pi\sigma^2)^{N/2}}\exp\!\left(-\dfrac{(\mathbf{x}-\mathbf{S}\boldsymbol{\theta})^T(\mathbf{x}-\mathbf{S}\boldsymbol{\theta})}{2\sigma^2}\right)$

In general, the parameters $\boldsymbol{\theta}$ and $\sigma^2$ are unknown to us, which is why the GLRT approach is explored. In GLRT, the parameters are replaced by their maximum likelihood estimates. The maximum likelihood estimate of $\boldsymbol{\theta}$ is equivalent to the least squares estimate of $\boldsymbol{\theta}$ [21],

(33)   $\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}}\left\|\mathbf{x} - \mathbf{S}\boldsymbol{\theta}\right\|^2$

whose solution can be cast as

(34)   $\hat{\boldsymbol{\theta}} = \left(\mathbf{S}^T\mathbf{S}\right)^{-1}\mathbf{S}^T\mathbf{x} = \mathbf{S}^T\mathbf{x}$

Substituting the maximum likelihood estimates of the parameters into (31) and taking root, GLRT is expressed as  [22]

(35)   $L_2(\mathbf{x}) = \dfrac{\mathbf{x}^T\mathbf{P}_I\,\mathbf{x}}{\mathbf{x}^T\left(\mathbf{P}_I - \mathbf{P}_S\right)\mathbf{x}}$

where $\mathbf{P}_I = \mathbf{I}$ is the identity projection operator, and $\mathbf{P}_S$ is the projection onto the subspace $\langle \mathbf{S} \rangle$,

(36)   $\mathbf{P}_S = \mathbf{S}\left(\mathbf{S}^T\mathbf{S}\right)^{-1}\mathbf{S}^T = \mathbf{S}\mathbf{S}^T$

The detection result is obtained by comparing $L_2(\mathbf{x})$ of GLRT with a threshold value $\gamma_{\mathrm{GLRT}}$.
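A small NumPy sketch of this GLRT, under the reconstructed energy-ratio form of Eq. (35) and with the signal subspace estimated from the training covariance matrix (illustrative only, not the authors' code):

import numpy as np

def signal_subspace(train_vecs, tol=1e-10):
    """Orthonormal basis S of the signal subspace: eigenvectors of the sample
    covariance matrix whose eigenvalues exceed `tol`."""
    R = train_vecs @ train_vecs.T / train_vecs.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)
    return eigvecs[:, eigvals > tol]

def glrt_statistic(x, S):
    """Matched-subspace GLRT statistic in the form x' P_I x / x' (P_I - P_S) x,
    with P_S = S S' since S is orthonormal (Eq. (36))."""
    Ps_x = S @ (S.T @ x)                 # projection of x onto <S>
    return float(x @ x) / float(x @ x - x @ Ps_x)

# Decide H1 when glrt_statistic(x, S) exceeds the GLRT threshold.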

III-B Kernel GLRT Based on Matched Subspace Model

Accordingly, if the feature space data under the two hypotheses also obey Gaussian distributions [21],

(37)   $\Phi(\mathbf{x}) \mid \mathcal{H}_0 \sim \mathcal{N}\!\left(\mathbf{0}, \sigma_\Phi^2\mathbf{I}\right), \qquad \Phi(\mathbf{x}) \mid \mathcal{H}_1 \sim \mathcal{N}\!\left(\mathbf{S}_\Phi\boldsymbol{\theta}_\Phi, \sigma_\Phi^2\mathbf{I}\right)$

then GLRT can be extended to the feature space of $\mathbf{x}$,

(38)   $L_2^\Phi(\mathbf{x}) = \dfrac{\Phi(\mathbf{x})^T\mathbf{P}_I^\Phi\,\Phi(\mathbf{x})}{\Phi(\mathbf{x})^T\left(\mathbf{P}_I^\Phi - \mathbf{P}_S^\Phi\right)\Phi(\mathbf{x})}$

where $\mathbf{P}_I^\Phi$ is the identity projection operator in the feature space. $\langle \mathbf{S}_\Phi \rangle$ is the linear subspace in which the primary user's signal lies in the feature space. Each column of $\mathbf{S}_\Phi$ is an eigenvector corresponding to a nonzero eigenvalue of

(39)   $\mathbf{R}_\Phi = \frac{1}{M}\sum_{i=1}^{M}\Phi(\bar{\mathbf{s}}_i)\Phi(\bar{\mathbf{s}}_i)^T$

Likewise, $\mathbf{P}_S^\Phi$ is the projection operator onto the primary signal's subspace,

(40)   $\mathbf{P}_S^\Phi = \mathbf{S}_\Phi\mathbf{S}_\Phi^T$

Here, we assume that $\mathbf{P}_I^\Phi$ can perfectly project $\Phi(\mathbf{x})$ to $\Phi(\mathbf{x})$ in the feature space, which is different from the method proposed in [21],

(41)   $\mathbf{P}_I^\Phi\,\Phi(\mathbf{x}) = \Phi(\mathbf{x})$

Based on the derivation of kernel PCA, the eigenvectors corresponding to nonzero eigenvalues of the sample covariance matrix $\mathbf{R}_\Phi$ are $\mathbf{u}_m^\Phi = \sum_{i=1}^{M}\alpha_i^{(m)}\Phi(\bar{\mathbf{s}}_i)$, $m = 1, \dots, Q$. The $\boldsymbol{\alpha}^{(m)}$ are eigenvectors corresponding to nonzero eigenvalues of $\mathbf{K}_s$, and $Q$ is the number of nonzero eigenvalues of $\mathbf{K}_s$. Accordingly, $\mathbf{S}_\Phi$ can be represented as

(42)   $\mathbf{S}_\Phi = \left[\mathbf{u}_1^\Phi, \dots, \mathbf{u}_Q^\Phi\right] = \left[\Phi(\bar{\mathbf{s}}_1), \dots, \Phi(\bar{\mathbf{s}}_M)\right]\mathbf{A}, \qquad \mathbf{A} = \left[\boldsymbol{\alpha}^{(1)}, \dots, \boldsymbol{\alpha}^{(Q)}\right]$

The derivation of (38) is based on the assumption that the data under both hypotheses obey Gaussian distributions in the feature space. The paper [21] has claimed, though without strict proof, that if $k$ is the Gaussian kernel, the mapped data are still Gaussian distributed.

The Gaussian kernel is employed for the kernel GLRT approach, thus $\Phi(\mathbf{x})^T\Phi(\mathbf{x}) = k(\mathbf{x}, \mathbf{x}) = 1$. Substituting (42) into (38),

(43)   $L_2^\Phi(\mathbf{x}) = \dfrac{1}{1 - \mathbf{k}(\mathbf{x})^T\mathbf{A}\mathbf{A}^T\mathbf{k}(\mathbf{x})}$

in which the kernel vector is

(44)   $\mathbf{k}(\mathbf{x}) = \left[k(\bar{\mathbf{s}}_1, \mathbf{x}), \; k(\bar{\mathbf{s}}_2, \mathbf{x}), \; \dots, \; k(\bar{\mathbf{s}}_M, \mathbf{x})\right]^T$

The centering of $\mathbf{k}(\mathbf{x})$ in the feature space [21] is

(45)   $\tilde{\mathbf{k}}(\mathbf{x}) = \mathbf{k}(\mathbf{x}) - \frac{1}{M}\mathbf{K}_s\mathbf{1} - \frac{1}{M}\left(\mathbf{1}^T\mathbf{k}(\mathbf{x})\right)\mathbf{1} + \frac{1}{M^2}\left(\mathbf{1}^T\mathbf{K}_s\mathbf{1}\right)\mathbf{1}$, where $\mathbf{1}$ is the all-ones vector of length $M$.
Fig. 2: The flow chart of the proposed kernel GLRT algorithm for spectrum sensing

The procedure of kernel GLRT for spectrum sensing based on Gaussian kernels without consideration of centering is summarized here as follows:

  1. Given a training set $\bar{\mathbf{s}}_1, \dots, \bar{\mathbf{s}}_M$ of the primary user's signal, the kernel matrix $\mathbf{K}_s$ is obtained. $\mathbf{K}_s$ is positive semidefinite. Eigen-decompose $\mathbf{K}_s$ to obtain the eigenvectors corresponding to all of the nonzero eigenvalues.

  2. Normalize the received $N$-dimensional vector $\mathbf{x}$, for example to unit norm,

    (46)   $\mathbf{x} \leftarrow \mathbf{x} / \left\|\mathbf{x}\right\|$
  3. Compute the kernel vector $\mathbf{k}(\mathbf{x})$ of $\mathbf{x}$ by (44).

  4. Compute the value of $L_2^\Phi(\mathbf{x})$ defined in (43).

  5. Determine a threshold value $\gamma_{\mathrm{KGLRT}}$ for a desired false alarm rate.

  6. Detect the presence or absence of $s[n]$ in $\mathbf{x}$ by checking whether $L_2^\Phi(\mathbf{x}) \geq \gamma_{\mathrm{KGLRT}}$ or not.

The flow chart of the proposed kernel GLRT algorithm for spectrum sensing with Gaussian kernels is shown in Fig. 2.
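Following the reconstruction of Eqs. (42)-(44) above, the six steps can be sketched with the Gaussian kernel as follows; the kernel width, the tolerance for "nonzero" eigenvalues, and the unit-norm normalization of step 2 are assumptions made for illustration.

import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def subspace_coefficients(S, sigma=1.0, tol=1e-10):
    """Columns of A in Eq. (42): eigenvectors of K_s with nonzero eigenvalues,
    each scaled so that mu * <a, a> = 1 (Eq. (16)) -- step 1."""
    M = S.shape[1]
    K = np.array([[gaussian_kernel(S[:, i], S[:, j], sigma) for j in range(M)]
                  for i in range(M)])
    eigvals, eigvecs = np.linalg.eigh(K)
    keep = eigvals > tol
    return eigvecs[:, keep] / np.sqrt(eigvals[keep])

def kernel_glrt_statistic(x, S, A, sigma=1.0):
    """Kernel GLRT statistic of Eq. (43), using k(x, x) = 1 for the Gaussian kernel
    and the perfect identity-projection assumption -- steps 2-4."""
    x = x / np.linalg.norm(x)                                # step 2 (assumed normalization)
    k_x = np.array([gaussian_kernel(S[:, i], x, sigma) for i in range(S.shape[1])])  # Eq. (44)
    proj = float(k_x @ A @ (A.T @ k_x))                      # Phi(x)' P_S Phi(x)
    return 1.0 / (1.0 - proj)

# Steps 5-6: pick a threshold for the desired false alarm rate and declare H1
# when kernel_glrt_statistic(x, S, A) exceeds it.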

The detection rate and false alarm rate for all of the above methods can be calculated by

(47)   $P_d = P\!\left(T > \gamma \mid \mathcal{H}_1\right), \qquad P_f = P\!\left(T > \gamma \mid \mathcal{H}_0\right)$

where $T$ is the corresponding test statistic and $\gamma$ is the threshold value determined by each of the above algorithms. In general, the threshold value is determined by a fixed false alarm rate.
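In practice the threshold is usually found empirically; the following Monte Carlo sketch (an assumption about the procedure, since the paper does not detail it) sets the threshold from noise-only trials so that the false alarm rate is approximately the desired value.

import numpy as np

def threshold_for_pfa(statistic, pfa, noise_std, N, n_trials=1000, seed=0):
    """Pick the threshold so that `statistic` exceeds it in roughly a fraction
    `pfa` of noise-only (H0) trials of length N."""
    rng = np.random.default_rng(seed)
    stats = np.array([statistic(noise_std * rng.standard_normal(N)) for _ in range(n_trials)])
    return float(np.quantile(stats, 1.0 - pfa))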

IV Experiments

The experimental results are compared with the results of the estimator-correlator (EC) [26] and maximum-minimum eigenvalue (MME) [7] methods. The EC method assumes that the signal follows a zero-mean Gaussian distribution with covariance matrix $\mathbf{R}_s$,

(48)   $\mathcal{H}_0: \; \mathbf{x} \sim \mathcal{N}\!\left(\mathbf{0}, \sigma^2\mathbf{I}\right), \qquad \mathcal{H}_1: \; \mathbf{x} \sim \mathcal{N}\!\left(\mathbf{0}, \mathbf{R}_s + \sigma^2\mathbf{I}\right)$

Both $\mathbf{R}_s$ and $\sigma^2$ are given a priori. Consequently, when the signal obeys a Gaussian distribution, the EC method is optimal. The hypothesis $\mathcal{H}_1$ is declared when

(49)   $T_{\mathrm{EC}}(\mathbf{x}) = \mathbf{x}^T\mathbf{R}_s\left(\mathbf{R}_s + \sigma^2\mathbf{I}\right)^{-1}\mathbf{x} > \gamma_{\mathrm{EC}}$

where $\gamma_{\mathrm{EC}}$ is the threshold value designed for the EC method.

MME is a totally blind method without any prior knowledge of the covariance matrix of the signal or of $\sigma^2$. The hypothesis $\mathcal{H}_1$ is declared when

(50)   $\dfrac{\lambda_{\max}}{\lambda_{\min}} > \gamma_{\mathrm{MME}}$

where $\gamma_{\mathrm{MME}}$ is the threshold value designed for the MME method, and $\lambda_{\max}$ and $\lambda_{\min}$ are the maximal and minimal eigenvalues of the sample covariance matrix $\mathbf{R}_x$ of the received vectors.
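For comparison, the two baseline statistics in (49) and (50) can be sketched directly from the reconstructed definitions above (again an illustrative reading, not the authors' code):

import numpy as np

def ec_statistic(x, R_s, sigma2):
    """Estimator-correlator statistic of Eq. (49): x' R_s (R_s + sigma2 I)^(-1) x."""
    N = len(x)
    return float(x @ R_s @ np.linalg.solve(R_s + sigma2 * np.eye(N), x))

def mme_statistic(recv_vecs):
    """MME statistic of Eq. (50): largest over smallest eigenvalue of the
    sample covariance matrix of the received column vectors."""
    R_x = recv_vecs @ recv_vecs.T / recv_vecs.shape[1]
    eigvals = np.linalg.eigvalsh(R_x)
    return float(eigvals[-1] / eigvals[0])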

The PCA, kernel PCA, GLRT, and kernel GLRT methods considered in this paper bear partial prior knowledge, that is, the sample covariance matrix of the signal is given a priori.

IV-A Experiments on the Simulated Sinusoidal Signal

The primary user's signal is assumed to be the sum of three sinusoidal functions, each with unit amplitude. The generated sinusoidal samples are taken as the samples of $s[n]$. The training set is taken from $s[n]$ by (5), and the received signal has the same length as $s[n]$ and is vectorized in the same way. For the received vectors $\bar{\mathbf{x}}_i$, EC detection is implemented on every vector and the results are then averaged (the same implementation is used for GLRT and kernel GLRT),

(51)   $\bar{T} = \frac{1}{M}\sum_{i=1}^{M} T\!\left(\bar{\mathbf{x}}_i\right)$, where $T(\cdot)$ denotes the per-vector test statistic.

A polynomial kernel of order 2 is applied for kernel PCA.

The detection rates versus SNR for kernel PCA and PCA, compared with EC and MME, are shown in Fig. 3 for 1000 experiments. From Fig. 3, it can be seen that around SNR = -10 dB, kernel PCA is about 4 dB better than the PCA method. Kernel PCA can compete with the EC method but requires less prior knowledge. It should be noticed that both the type of kernel function and the parameters in the kernel function affect the performance of the kernel PCA approach.

The detection rates versus SNR for kernel GLRT and GLRT, compared with EC and MME, are shown in Fig. 4 for 1000 experiments. Kernel GLRT is still better than the GLRT method, and it can even beat the EC method. The underlying reason is that the EC method assumes the sinusoidal signal also follows a zero-mean Gaussian distribution, whereas its actual distribution is shown in Fig. 12. As is well known, a sinusoidal signal lies in a linear subspace which can be nearly perfectly estimated from the sample covariance matrix. Therefore, the matched subspace model for GLRT and kernel GLRT considered in this paper is more suitable for the sinusoidal signal. A Gaussian kernel is used, and the width of the Gaussian kernel is the major factor that affects the performance of the kernel GLRT approach.

Fig. 3: The detection rates for kernel PCA and PCA compared with EC and MME for the simulated signal
Fig. 4: The detection rates for kernel GLRT and GLRT compared with EC and MME for the simulated signal

The calculated threshold values for the kernel PCA, PCA, kernel GLRT, and GLRT methods are shown in Fig. 5 and Fig. 6, respectively. The threshold values are normalized by dividing by the corresponding maximal values. The threshold values assigned for the kernel methods are more stable than those of the corresponding linear methods.

Fig. 5: Normalized threshold values for kernel PCA and PCA
Fig. 6: Normalized threshold values for kernel GLRT and GLRT

The simulation is also checked by choosing the kernel function as the linear kernel $k(\mathbf{x}, \mathbf{y}) = \left\langle \mathbf{x}, \mathbf{y} \right\rangle$. In this manner, the selected feature space is the original space. If the operations in the feature space and the original space are identical (for example, the centering is done in both spaces, and the similarity measure is the inner product for both PCA and kernel PCA), the results for the kernel and corresponding linear methods should be the same. The tested results verified the correctness of the simulation.

IV-B Experiments on Captured DTV Signal

The DTV signal [23] captured in Washington D.C. is employed for the spectrum sensing experiments in this section. The first segment of the DTV signal is taken as the samples of the primary user's signal $s[n]$.

First, the similarities of the leading eigenvectors of the sample covariance matrix between the first segment and other segments of the DTV signal are tested under the frameworks of PCA and kernel PCA. The DTV signal is divided into segments of equal length. Similarities of leading eigenvectors derived by PCA and kernel PCA between the first segment and the remaining segments are shown in Fig. 7. The result shows that the similarities between the leading eigenvectors of different segments of the DTV signal are all very high; moreover, kernel PCA is more stable than PCA.

Fig. 7: Similarities of leading eigenvectors derived by PCA and kernel PCA between the first segment and other segments

The detection rates versus SNR for kernel PCA and PCA (kernel GLRT and GLRT), compared with EC and MME, are shown in Fig. 8 (Fig. 9) for 1000 experiments. The ROC curves are shown in Fig. 10 (Fig. 11) for kernel PCA and PCA (kernel GLRT and GLRT) with SNR = -16, -20, -24 dB. Experimental results show that the kernel methods are 4 dB better than the corresponding linear methods, and the kernel methods can compete with the EC method. However, kernel GLRT in this example cannot beat the EC method, because the distribution of the DTV signal (shown in Fig. 12) is closer to Gaussian than that of the simulated sinusoidal signal above. A Gaussian kernel is applied for kernel GLRT, and a polynomial kernel of order 2 is applied for kernel PCA.

Fig. 8: The detection rates for kernel PCA and PCA compared with EC and MME for the DTV signal
Fig. 9: The detection rates for kernel GLRT and GLRT compared with EC and MME for the DTV signal
Fig. 10: ROC curves for kernel PCA and PCA for DTV signal
Fig. 11: ROC curves for kernel GLRT and GLRT for DTV signal
Fig. 12: The histograms of sinusoidal and DTV signal

V Conclusion

Kernel methods have been extensively and effectively applied in machine learning and are a very powerful tool. A kernel function extends a linear method to a nonlinear one by defining the inner product of data in the feature space. The mapping from the original space to a higher dimensional feature space is indirectly defined by the kernel function, which makes computation in an arbitrary-dimensional feature space possible.

In this paper, detection with the leading eigenvector under the framework of kernel PCA is proposed. The inner product between leading eigenvectors is taken as the similarity measure for the kernel PCA approach. The proposed algorithm makes detection in an arbitrary-dimensional feature space possible. Kernel GLRT based on the matched subspace model is also introduced to spectrum sensing. Different from [21], the kernel GLRT approach proposed in this paper assumes that the identity projection operator is perfect in the feature space, that is, it maps $\Phi(\mathbf{x})$ to $\Phi(\mathbf{x})$. The background information is not considered in this paper.

Experiments are conducted with both the simulated sinusoidal signal and the captured DTV signal. When the second-order polynomial kernel is used for the kernel PCA approach, the experimental results show that kernel PCA is 4 dB better than PCA on both the simulated signal and the DTV signal. Kernel PCA can compete with the EC method. The kernel GLRT method is about 4 dB better than GLRT for the DTV signal with an appropriate choice of the Gaussian kernel's width. Depending on the signal, kernel GLRT can even beat the EC method, which has perfect prior knowledge.

In this paper, the types of kernels and the parameters in the kernels are chosen manually by trial and error. How to choose an appropriate kernel function and its parameters is still an open problem. In the PCA and kernel PCA approaches, only the leading eigenvector is used for detection. Can both methods be extended to detection with subspaces consisting of the eigenvectors corresponding to nonzero eigenvalues? Motivated by the kernel PCA approach, we know that a suitable choice of similarity measure is very important. What kind of similarity measure can be used for detection with subspaces is also an interesting and promising future direction.

Acknowledgment

This work is funded by the National Science Foundation through two grants (ECCS-0901420 and ECCS-0821658), and by the Office of Naval Research through two grants (N00010-10-1-0810 and N00014-11-1-0006).

References

  • [1] S. Haykin, “Cognitive radio: brain-empowered wireless communications,” Selected Areas in Communications, IEEE Journal on, vol. 23, no. 2, pp. 201–220, 2005.
  • [2] J. Mitola III and G. Maguire Jr, “Cognitive radio: making software radios more personal,” Personal Communications, IEEE, vol. 6, no. 4, pp. 13–18, 1999.
  • [3] S. Haykin, D. Thomson, and J. Reed, “Spectrum sensing for cognitive radio,” Proceedings of the IEEE, vol. 97, no. 5, pp. 849–877, 2009.
  • [4] J. Ma, G. Li, and B. Juang, “Signal processing in cognitive radio,” Proceedings of the IEEE, vol. 97, no. 5, pp. 805–823, 2009.
  • [5] D. Cabric, S. Mishra, and R. Brodersen, “Implementation issues in spectrum sensing for cognitive radios,” in Signals, Systems and Computers, 2004. Conference Record of the Thirty-Eighth Asilomar Conference on, vol. 1, pp. 772–776, IEEE, 2004.
  • [6] T. Yucek and H. Arslan, “A survey of spectrum sensing algorithms for cognitive radio applications,” Communications Surveys & Tutorials, IEEE, vol. 11, no. 1, pp. 116–130, 2009.
  • [7] Y. Zeng and Y. Liang, “Maximum-minimum eigenvalue detection for cognitive radio,” in Personal, Indoor and Mobile Radio Communications, 2007. PIMRC 2007. IEEE 18th International Symposium on, pp. 1–5, IEEE, 2006.
  • [8] Y. Zeng, C. Koh, and Y. Liang, “Maximum eigenvalue detection: theory and application,” in Communications, 2008. ICC’08. IEEE International Conference on, pp. 4160–4164, IEEE, 2008.
  • [9] Y. Zeng and Y. Liang, “Spectrum-sensing algorithms for cognitive radio based on statistical covariances,” Vehicular Technology, IEEE Transactions on, vol. 58, no. 4, pp. 1804–1815, 2009.
  • [10] Y. Zeng and Y. Liang, “Covariance based signal detections for cognitive radio,” in New Frontiers in Dynamic Spectrum Access Networks, 2007. DySPAN 2007. 2nd IEEE International Symposium on, pp. 202–207, IEEE, 2007.
  • [11] P. Zhang, R. Qiu, and N. Guo, “Demonstration of spectrum sensing with blindly learned feature,” accepted by Communications Letters, IEEE, 2011.
  • [12] B. Scholkopf and A. Smola, Learning with kernels. The MIT Press, 1st ed., December 2001.
  • [13] K. Weinberger and L. Saul, “Unsupervised learning of image manifolds by semidefinite programming,” International Journal of Computer Vision, vol. 70, no. 1, pp. 77–90, 2006.
  • [14] G. Lanckriet, N. Cristianini, P. Bartlett, L. Ghaoui, and M. Jordan, “Learning the kernel matrix with semidefinite programming,” The Journal of Machine Learning Research, vol. 5, pp. 27–72, 2004.
  • [15] C. Cortes and V. Vapnik, “Support-vector networks,” Machine learning, vol. 20, no. 3, pp. 273–297, 1995.
  • [16] C. Burges, “A tutorial on support vector machines for pattern recognition,” Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121–167, 1998.
  • [17] A. Smola and B. Scholkopf, “A tutorial on support vector regression,” Statistics and computing, vol. 14, no. 3, pp. 199–222, 2004.
  • [18] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge university press, 2000.
  • [19] T. Lim, R. Zhang, Y. Liang, and Y. Zeng, “GLRT-based spectrum sensing for cognitive radio,” in Global Telecommunications Conference, 2008. IEEE GLOBECOM 2008. IEEE, pp. 1–5, IEEE, 2008.
  • [20] J. Font-Segura and X. Wang, “GLRT-based spectrum sensing for cognitive radio with prior information,” Communications, IEEE Transactions on, vol. 58, no. 7, pp. 2137–2146, 2010.
  • [21] H. Kwon and N. Nasrabadi, “Kernel matched subspace detectors for hyperspectral target detection,” IEEE transactions on pattern analysis and machine intelligence, pp. 178–194, 2006.
  • [22] L. Scharf and B. Friedlander, “Matched subspace detectors,” Signal Processing, IEEE Transactions on, vol. 42, no. 8, pp. 2146–2157, 1994.
  • [23] V. Tawil, “51 captured DTV signal.” http://grouper.ieee.org/groups/802/22/Meeting documents/2006 May/Informal Documents, May 2006.
  • [24] B. Scholkopf, A. Smola, and K. Muller, “Nonlinear component analysis as a kernel eigenvalue problem,” Neural computation, vol. 10, no. 5, pp. 1299–1319, 1998.
  • [25] P. Zhang and R. Qiu, “Spectrum sensing based on blindly learned signal feature,” Arxiv preprint arXiv:1102.2840, 2011.
  • [26] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume II: Detection Theory. New Jersey: Prentice Hall PTR, 1998.