Optimal Discriminant Functions Based On Sampled Distribution Distance for Modulation Classification

Paulo Urriza, et al., 02/19/2013

In this letter, we derive the optimal discriminant functions for modulation classification based on the sampled distribution distance. The proposed method classifies various candidate constellations using a low-complexity approach based on the distribution distance at specific testpoints along the cumulative distribution function. Based on the Bayes decision criterion, the method asymptotically provides the minimum classification error possible for a given set of testpoints. Testpoint locations are also optimized to improve classification performance. The method provides significant gains over existing approaches that also use the distribution of the signal features.

I Introduction

Modulation classification is the process of choosing the most likely scheme from a set of predefined candidate schemes that a received signal could belong to. Various approaches have been proposed to address this problem. There has recently been growing interest in modulation classification for applications such as software defined radio, cognitive radio and interference identification [1].

Existing classification methods can generally be categorized into two main groups: feature based classifiers and likelihood based (ML) classifiers. The ML classifiers give the minimum possible classification error of all possible discriminant functions given perfect knowledge of the signal’s probability distribution. However, this approach is very sensitive to modeling errors such as imperfect knowledge of the signal to noise ratio (SNR) or phase offset. Further, such approaches have very high computational complexity and are thus impractical in actual hardware implementations. To address these issues, various feature based techniques such as cumulant-based classifiers [2] and cyclostationarity-based classifiers [3] have been proposed.

Recently, Goodness-of-Fit (GoF) tests such as the Kolmogorov-Smirnov (KS) [4] distribution distance have been proposed to identify the constellation used in QAM modulation [5]. Based on the KS classifier, we proposed a reduced complexity Kuiper (rcK) classifier in [6]. The rcK classifier only evaluates the empirical cumulative distribution function (ECDF) at a small set of predetermined testpoints that have the highest probability of giving the maximum distribution distance, effectively sampling the distribution function. The algorithm offered reduced computational complexity by removing the need to estimate the full ECDF while still providing better performance than the KS classifier. It also increased the robustness of the classifier to imperfect parameter estimates.

The idea of improving the classification accuracy of the rcK classifier by using more testpoints was proposed in [7]. The method is referred to as the Variational Distance (VD) classifier, in which testpoints are selected at the pdf-crossings of the two classes being distinguished. The sum of the absolute distances is then used as the final discriminating statistic. We refer to methods such as rcK and VD, which utilize the value of the ECDF at a small number of testpoints, as sampled distribution distance classifiers. In this work we derive the optimal discriminant functions for classification with the sampled distribution distance given a set of testpoint locations. We also provide a systematic way of finding testpoint locations that provide near-optimal performance by maximizing the Bhattacharyya distance between classes. Finally, we present results that compare the performance of this approach with existing techniques.

II Proposed Classifier

II-A System Model

Following [5], we assume a sequence of $N$ discrete, complex, i.i.d. sampled baseband symbols, $\mathbf{s} = [s_1, \ldots, s_N]$, drawn from a constellation $\mathcal{M}_i$ and transmitted over an AWGN channel. The received signal under constellation $\mathcal{M}_i$ is given as $r_n = s_n + g_n$, where $g_n \sim \mathcal{CN}(0, \sigma^2)$, $n = 1, \ldots, N$. We further define the SNR as $\gamma = 1/\sigma^2$. The task of the modulation classifier is to find the constellation $\mathcal{M}_i$, out of the $N_{\mathrm{mod}}$ candidates $\{\mathcal{M}_1, \ldots, \mathcal{M}_{N_{\mathrm{mod}}}\}$, from which $\mathbf{s}$ is drawn. Without loss of generality, we consider unit power constellations.
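For concreteness, the following minimal sketch simulates the assumed system model under the notation above; the helper names (qam_constellation, received_symbols) and parameter choices are illustrative assumptions, not code from this letter.

```python
import numpy as np

def qam_constellation(order):
    """Unit-average-power square QAM constellation (order = 4, 16, 64, ...)."""
    m = int(np.sqrt(order))
    levels = np.arange(-(m - 1), m, 2, dtype=float)
    points = (levels[:, None] + 1j * levels[None, :]).ravel()
    return points / np.sqrt(np.mean(np.abs(points) ** 2))   # normalize to unit power

def received_symbols(order, n_symbols, snr_db, seed=0):
    """Simulate r_n = s_n + g_n with g_n ~ CN(0, sigma^2) and SNR = 1 / sigma^2."""
    rng = np.random.default_rng(seed)
    s = rng.choice(qam_constellation(order), size=n_symbols)
    sigma2 = 10.0 ** (-snr_db / 10.0)   # unit-power constellation => sigma^2 = 1/SNR
    g = np.sqrt(sigma2 / 2.0) * (rng.standard_normal(n_symbols)
                                 + 1j * rng.standard_normal(n_symbols))
    return s + g

r = received_symbols(order=16, n_symbols=200, snr_db=0.0)   # e.g., 16-QAM at 0 dB
```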

II-B Classification Based on Sampled Distribution Distance

Let $\mathbf{z} = T(\mathbf{r})$, where $T(\cdot)$ is the chosen mapping from the received symbols $\mathbf{r} = [r_1, \ldots, r_N]$ to the extracted feature vector $\mathbf{z} = [z_1, \ldots, z_{N_z}]$, and $N_z$ is the length of the feature vector. Possible feature maps include $|r_n|$ (magnitude, $N_z = N$), the concatenation of $\Re\{r_n\}$ and $\Im\{r_n\}$ (quadrature, $N_z = 2N$), and the phase information $\angle r_n$ (angle, $N_z = N$), among others. The theoretical CDF of $z$ given $\mathcal{M}_i$ and $\gamma$, denoted as $F_i(z)$, is assumed to be known a priori (methods of obtaining these distributions, both empirically and theoretically, are presented in [5, Sec. III-A]).

In this paper we focus on algorithms that use the ECDF, defined as

$$\hat{F}(z) = \frac{1}{N_z}\sum_{n=1}^{N_z} \mathbb{I}\big(z_n \le z\big), \qquad (1)$$

as the discriminating feature for classification. Here, $\mathbb{I}(\cdot)$ is the indicator function whose value is 1 if the argument is true, and 0 otherwise. If the complete ECDF resulting from the entire feature vector, $\mathbf{z}$, is used for classification, we get the conventional distribution distance measures such as Kuiper, Kolmogorov-Smirnov, Anderson-Darling and others. Details of these measures are discussed in [4]. Once the ECDF is found and the appropriate distribution distance is calculated, the candidate constellation with minimum distance is chosen. However, prior work in [6, 7] has shown that improved classification accuracy can be achieved at much lower computational complexity and with increased model robustness by evaluating the ECDF only at a small number of specific testpoints.
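As a reference point, a minimal sketch of this full-ECDF route (assumed helper names; theoretical_cdf stands in for a candidate's CDF $F_i$): the ECDF of Eq. (1) is built from the feature samples and a conventional distance, KS in this example, is computed against each candidate; the candidate with the smallest distance would then be chosen.

```python
import numpy as np

def ecdf(z, x):
    """Empirical CDF of Eq. (1): fraction of samples z_n that are <= x."""
    z_sorted = np.sort(np.asarray(z))
    return np.searchsorted(z_sorted, x, side="right") / z_sorted.size

def ks_distance(z, theoretical_cdf):
    """Kolmogorov-Smirnov distance between the full ECDF and one candidate CDF."""
    z_sorted = np.sort(np.asarray(z))
    f_hat = np.arange(1, z_sorted.size + 1) / z_sorted.size   # ECDF at the sample points
    return np.max(np.abs(f_hat - theoretical_cdf(z_sorted)))
```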

We describe these methods formally by defining a set of testpoints, $\mathbf{t} = \{t_1, t_2, \ldots, t_{N_t}\}$, with $t_1 < t_2 < \cdots < t_{N_t}$. For notational consistency, we also define the virtual testpoints $t_0 = -\infty$ and $t_{N_t+1} = +\infty$ in addition to $\mathbf{t}$. Evaluating the ECDF from (1) at $\mathbf{t}$ gives us $\hat{F}_j = \hat{F}(t_j)$, $j = 1, \ldots, N_t$. We refer to any classifier that utilizes the feature vector $\hat{\mathbf{F}} = [\hat{F}_1, \ldots, \hat{F}_{N_t}]$ as a sampled distribution distance-based classifier. As an example, the variational distance (VD) classifier from [7] proposed forming $\mathbf{t}$ from the points that give either a local maximum or minimum of the difference between the two theoretical CDFs of the candidate classes. Instead of using the sampled ECDF directly, the VD classifier finds the number of samples that fall between two consecutive testpoints, which is equivalent to taking the difference of the ECDF at consecutive testpoints, $\hat{F}_j - \hat{F}_{j-1}$.
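A short sketch of the sampled-ECDF feature and one plausible reading of the VD statistic described above (sum of absolute differences of per-bin probabilities); the helper names are assumptions and the exact expression in [7] may differ.

```python
import numpy as np

def sampled_ecdf(z, testpoints):
    """F_hat(t_j) for each testpoint t_j: the sampled distribution-distance feature."""
    z_sorted = np.sort(np.asarray(z))
    return np.searchsorted(z_sorted, testpoints, side="right") / z_sorted.size

def vd_statistic(z, testpoints, theoretical_cdf):
    """Sum of |empirical - theoretical| bin probabilities between consecutive testpoints."""
    f_hat = np.concatenate(([0.0], sampled_ecdf(z, testpoints), [1.0]))
    f_theo = np.concatenate(([0.0], theoretical_cdf(np.asarray(testpoints)), [1.0]))
    return np.sum(np.abs(np.diff(f_hat) - np.diff(f_theo)))
```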

In this paper our goal is to optimize the classification accuracy of the sampled distribution distance classification approach, whose generic form is

$$\hat{\mathcal{M}} = \arg\min_{\mathcal{M}_i} \sum_{j=1}^{N_t} \big| \hat{F}_j - F_i(t_j) \big|. \qquad (2)$$

Intuitively, there are two ways to improve (2). First, since different testpoints contribute varying distribution distance, it is expected that different weights should be assigned to each testpoint. Second, the choice of the number and location of the points along the ECDF should also be investigated to find the proper balance between complexity and classification accuracy. Both of these improvements are addressed in the following subsections.

II-C Proposed Classifier

We first assume that $\mathbf{t}$ has been selected a priori and our goal is to find the optimal classifier for the resulting feature vector $\hat{\mathbf{F}}$. We want to find a discriminant function $g_i(\hat{\mathbf{F}})$ for every candidate constellation $\mathcal{M}_i$, where we follow the rule:

$$\text{choose } \mathcal{M}_i \;\text{ if }\; g_i(\hat{\mathbf{F}}) > g_k(\hat{\mathbf{F}}) \;\; \forall\, k \ne i. \qquad (3)$$

It is well established in decision theory that if the performance metric used is average classification error, the optimal classifier is based on the Bayes decision procedure [8]. This procedure can be stated as:

$$g_i(\hat{\mathbf{F}}) = P\big(\mathcal{M}_i \,\big|\, \hat{\mathbf{F}}\big). \qquad (4)$$

Using the prior probabilities $P(\mathcal{M}_i)$, the posterior probabilities $P(\mathcal{M}_i \mid \hat{\mathbf{F}})$ can be found from the class-conditional pdfs $p(\hat{\mathbf{F}} \mid \mathcal{M}_i)$ using Bayes' formula. Thus, finding the pdf of the feature vector conditioned on the modulation scheme, $p(\hat{\mathbf{F}} \mid \mathcal{M}_i)$, effectively gives us the optimal classifier in the minimum error rate sense.
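A minimal sketch of the decision step in (4) (assumed function name; log-domain used only for numerical convenience): the normalizing constant of Bayes' formula is common to all classes and can be dropped.

```python
import numpy as np

def map_decision(log_likelihoods, priors):
    """Bayes/MAP rule of (4): pick the class with the largest posterior probability."""
    log_posteriors = np.asarray(log_likelihoods, float) + np.log(np.asarray(priors, float))
    return int(np.argmax(log_posteriors))   # p(F_hat) is common to all classes and omitted
```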

The testpoints partition the real line into $N_t+1$ regions. An individual sample, $z_n$, falls in region $R_j$, such that $t_{j-1} < z_n \le t_j$, with a probability completely determined by the CDF, $F_i(z)$. The number of samples that fall into each of the regions, $\mathbf{k} = [k_1, \ldots, k_{N_t+1}]$, where $k_j$ corresponds to region $R_j$, $j = 1, \ldots, N_t+1$, is jointly distributed according to a multinomial probability mass function (pmf) given as

$$P(\mathbf{k}) = \frac{N_z!}{\prod_{j=1}^{N_t+1} k_j!} \prod_{j=1}^{N_t+1} p_j^{\,k_j}, \qquad (5)$$

where $\sum_j k_j = N_z$, $\sum_j p_j = 1$, and $p_j$ is the probability of an individual sample being in region $R_j$. Given that $\mathbf{z}$ is drawn from $\mathcal{M}_i$, $p_j = F_i(t_j) - F_i(t_{j-1})$, for $j = 1, \ldots, N_t+1$.

Given a particular $\hat{\mathbf{F}}$, the number of samples in each of the regions can be found as $k_j = N_z(\hat{F}_j - \hat{F}_{j-1})$, where $\hat{F}_0 = 0$ and $\hat{F}_{N_t+1} = 1$. This gives a mapping from any given $\hat{\mathbf{F}}$ to $\mathbf{k}$ and therefore to the pmf defined in (5). Therefore we have the complete class-conditional pdf, with $p_j$ in (5) determined by $F_i$, the CDF of class $\mathcal{M}_i$. Thus we have the optimal classifier. We will refer to $\mathbf{k}$ and $\hat{\mathbf{F}}$ conditioned on class $\mathcal{M}_i$ as $\mathbf{k}_i$ and $\hat{\mathbf{F}}_i$.
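A sketch of this exact class-conditional likelihood (assumed helper names; theoretical_cdf again stands for a candidate's CDF $F_i$): the sampled ECDF is mapped back to the bin counts $\mathbf{k}$, and the multinomial log-pmf of (5) is evaluated with the region probabilities of that candidate.

```python
import numpy as np
from scipy.stats import multinomial

def bin_counts(f_hat_sampled, n_samples):
    """k_j = N_z * (F_hat(t_j) - F_hat(t_{j-1})), with F_hat(t_0) = 0, F_hat(t_{Nt+1}) = 1."""
    f_full = np.concatenate(([0.0], np.asarray(f_hat_sampled), [1.0]))
    return np.rint(n_samples * np.diff(f_full)).astype(int)

def multinomial_log_pmf(f_hat_sampled, n_samples, testpoints, theoretical_cdf):
    """Exact class-conditional log-likelihood of Eq. (5) for one candidate class."""
    f_theo = np.concatenate(([0.0], theoretical_cdf(np.asarray(testpoints)), [1.0]))
    p = np.diff(f_theo)                       # region probabilities p_j under this class
    k = bin_counts(f_hat_sampled, n_samples)
    return multinomial.logpmf(k, n=n_samples, p=p)
```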

Although the multinomial pmf in (5) can be used for minimum error rate classification, its calculation is very computationally intensive. To address this issue we note that, asymptotically, the multinomial pmf in (5) approaches a multivariate Gaussian distribution, $\mathbf{k}_i \sim \mathcal{N}(\boldsymbol{\mu}_i, \boldsymbol{\Sigma}_i)$, as $N_z \to \infty$, where

$$[\boldsymbol{\mu}_i]_j = N_z\, p_j, \qquad (6)$$
$$[\boldsymbol{\Sigma}_i]_{jl} = N_z\,\big(p_j\,\delta_{jl} - p_j\, p_l\big). \qquad (7)$$

Since $N_z\hat{\mathbf{F}}$ is simply the cumulative sum of $\mathbf{k}$ (i.e., $N_z\hat{F}_j = \sum_{l=1}^{j} k_l$), which is a linear operation, it follows that $\hat{\mathbf{F}}_i \sim \mathcal{N}(\tilde{\boldsymbol{\mu}}_i, \tilde{\boldsymbol{\Sigma}}_i)$, where

$$\tilde{\boldsymbol{\mu}}_i = \frac{1}{N_z}\,\mathbf{A}\,\boldsymbol{\mu}_i, \qquad (8)$$
$$\tilde{\boldsymbol{\Sigma}}_i = \frac{1}{N_z^2}\,\mathbf{A}\,\boldsymbol{\Sigma}_i\,\mathbf{A}^{T}, \qquad (9)$$

and $\mathbf{A}$ is the $N_t \times (N_t+1)$ lower-triangular matrix of ones that implements the cumulative sum.
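The class-conditional Gaussian statistics in (6)-(9) can be computed directly from a candidate's theoretical CDF; the following sketch (assumed names, same conventions as the earlier snippets) returns the mean and covariance of the sampled-ECDF vector for one class.

```python
import numpy as np

def class_gaussian_stats(testpoints, theoretical_cdf, n_samples):
    """Mean and covariance of the sampled-ECDF vector under one class, via (6)-(9)."""
    f = np.concatenate(([0.0], theoretical_cdf(np.asarray(testpoints)), [1.0]))
    p = np.diff(f)                                        # region probabilities from Eq. (5)
    mu_k = n_samples * p                                  # Eq. (6): multinomial mean
    sigma_k = n_samples * (np.diag(p) - np.outer(p, p))   # Eq. (7): multinomial covariance
    n_t = len(testpoints)
    A = np.tril(np.ones((n_t, n_t + 1)))                  # cumulative-sum matrix (first N_t rows)
    mu_f = A @ mu_k / n_samples                           # Eq. (8)
    sigma_f = A @ sigma_k @ A.T / n_samples ** 2          # Eq. (9)
    return mu_f, sigma_f
```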

Having shown that the feature vector $\hat{\mathbf{F}}$ is asymptotically Gaussian distributed, we can proceed to apply the Bayes decision procedure in (4). However, the full multivariate pdfs are not required to perform classification, because the optimal discriminant functions for Gaussian feature vectors are known to be quadratic, with the following form [8]:

$$g_i(\hat{\mathbf{F}}) = \hat{\mathbf{F}}^{T}\mathbf{W}_i\,\hat{\mathbf{F}} + \mathbf{w}_i^{T}\hat{\mathbf{F}} + w_{i0}, \qquad (10)$$

where

$$\mathbf{W}_i = -\tfrac{1}{2}\,\tilde{\boldsymbol{\Sigma}}_i^{-1}, \qquad \mathbf{w}_i = \tilde{\boldsymbol{\Sigma}}_i^{-1}\tilde{\boldsymbol{\mu}}_i, \qquad (11)$$

and

$$w_{i0} = -\tfrac{1}{2}\,\tilde{\boldsymbol{\mu}}_i^{T}\tilde{\boldsymbol{\Sigma}}_i^{-1}\tilde{\boldsymbol{\mu}}_i - \tfrac{1}{2}\ln\big|\tilde{\boldsymbol{\Sigma}}_i\big| + \ln P(\mathcal{M}_i). \qquad (12)$$

In the following sections we will simply refer to this classifier as the Bayesian approach.
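A minimal sketch of this Bayesian classifier (assumed names): the per-class parameters of (11)-(12) are precomputed, for example from the (mean, covariance) pairs returned by class_gaussian_stats above, and classification reduces to evaluating the quadratic form (10) for each class and applying the rule (3).

```python
import numpy as np

def quadratic_discriminant_params(mu, sigma, prior):
    """Precompute (W_i, w_i, w_i0) of (11)-(12) for one class."""
    sigma_inv = np.linalg.inv(sigma)
    W = -0.5 * sigma_inv
    w = sigma_inv @ mu
    w0 = (-0.5 * mu @ sigma_inv @ mu
          - 0.5 * np.linalg.slogdet(sigma)[1]   # -0.5 * ln|Sigma|
          + np.log(prior))
    return W, w, w0

def bayesian_classify(f_hat_sampled, class_params):
    """Evaluate the quadratic discriminant (10) per class and pick the maximum, Eq. (3)."""
    f = np.asarray(f_hat_sampled)
    scores = [f @ W @ f + w @ f + w0 for (W, w, w0) in class_params]
    return int(np.argmax(scores))
```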

II-D Note on Implementation

Similar to rcK [6] and VD [7], the Bayesian approach only needs to store the testpoint locations for a fixed set of SNRs, since the theoretical CDF depends on the SNR. Given a testpoint set $\mathbf{t}$ of size $N_t$, VD and rcK require both $\mathbf{t}$ and $F_i(\mathbf{t})$ for each class $\mathcal{M}_i$. In contrast, the Bayesian approach requires the same vector $\mathbf{t}$, an $N_t \times N_t$ matrix $\mathbf{W}_i$, a vector $\mathbf{w}_i$ of size $N_t$, and a scalar $w_{i0}$ for each class $\mathcal{M}_i$. However, there are typically no more than 12 testpoints (the total number of pdf-crossings), so the additional storage requirement is negligible. The Bayesian approach also requires the calculation of the quadratic form expression in (10). Again, because only a relatively small number of testpoints is used, the additional complexity is minimal.

II-E Testpoint Selection

In this subsection we present a method for choosing testpoint locations, $\mathbf{t}$, that provide good classification performance. The method of using the pdf-crossings makes intuitive sense, since it tries to find the testpoints that provide the maximum difference in the theoretical CDFs while providing a heuristic that keeps the testpoints spaced apart. Testpoints that are too close to each other are not as effective because the ECDF values at such points tend to be highly correlated and thus provide minimal additional information.

Another issue with using the pdf-crossings is that they do not factor in knowledge of the correlation between testpoints. As we have shown in Section II-C, $\hat{\mathbf{F}}$ approximately follows a multivariate Gaussian distribution with statistics given in (8) and (9). Therefore, the class-conditional means and covariance matrices are sufficient to completely describe the distribution of the feature vectors conditioned on $\mathcal{M}_i$. Thus, these statistics are also sufficient to find the optimal testpoint locations, $\mathbf{t}^{\star}$.

However, since the covariance matrices $\tilde{\boldsymbol{\Sigma}}_i$ are clearly not equal for all $\mathcal{M}_i$, a closed-form expression for the classification accuracy of this problem does not exist. Instead, an $N_t$-dimensional integration is required, and the limits, determined by the decision boundaries defined by (10), are non-trivial. As is typically done in this scenario, we replace the exact accuracy with a sub-optimum distance metric that is easier to evaluate and does not require an $N_t$-dimensional integral. In particular, we use the Bhattacharyya distance, first studied for signal selection in [9] and shown to be a very effective “goodness” criterion for selecting features to be used in classification. The metric is shown here for reference:

$$D_B^{(i,k)} = \frac{1}{8}\,(\tilde{\boldsymbol{\mu}}_i - \tilde{\boldsymbol{\mu}}_k)^{T}\left[\frac{\tilde{\boldsymbol{\Sigma}}_i + \tilde{\boldsymbol{\Sigma}}_k}{2}\right]^{-1}(\tilde{\boldsymbol{\mu}}_i - \tilde{\boldsymbol{\mu}}_k) + \frac{1}{2}\ln\frac{\big|\tfrac{1}{2}(\tilde{\boldsymbol{\Sigma}}_i + \tilde{\boldsymbol{\Sigma}}_k)\big|}{\sqrt{|\tilde{\boldsymbol{\Sigma}}_i|\,|\tilde{\boldsymbol{\Sigma}}_k|}}. \qquad (13)$$

Note that the Bhattacharyya distance is calculated between two classes. As a result, the search for testpoints can only be performed for the $N_{\mathrm{mod}} = 2$ case. However, this can be done sequentially through all possible pairs of candidate constellations. Since $D_B^{(i,k)}$ is a function of $\tilde{\boldsymbol{\mu}}_i, \tilde{\boldsymbol{\Sigma}}_i$ and $\tilde{\boldsymbol{\mu}}_k, \tilde{\boldsymbol{\Sigma}}_k$, which are in turn functions of our testpoint selection, $\mathbf{t}$, we can express it as $D_B^{(i,k)}(\mathbf{t})$. We thus find good candidate testpoints by

$$\mathbf{t}^{\star} = \arg\max_{\mathbf{t}}\; D_B^{(i,k)}(\mathbf{t}) \qquad (14)$$

under the constraint $t_1 < t_2 < \cdots < t_{N_t}$.

As this is an $N_t$-dimensional optimization problem, a closed-form solution is beyond the scope of this letter. Instead, we turn to numerical optimization methods (gradient-based or related local search) to find local maxima. The initial point of these procedures can be chosen to coincide with the pdf-crossings or to be equally spaced over some interval.
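A sketch of this search (assumed names): it reuses class_gaussian_stats from the earlier sketch, evaluates the Bhattacharyya distance of (13) between the two Gaussian class models, and maximizes it over the testpoint locations per (14). A derivative-free Nelder-Mead search is used here purely for illustration; the letter's procedure may use a different numerical method, and t_init can be the pdf-crossings or equally spaced points.

```python
import numpy as np
from scipy.optimize import minimize

def bhattacharyya(mu1, sigma1, mu2, sigma2):
    """Bhattacharyya distance of Eq. (13) between two multivariate Gaussians."""
    sigma = 0.5 * (sigma1 + sigma2)
    d = mu1 - mu2
    term1 = 0.125 * d @ np.linalg.solve(sigma, d)
    term2 = 0.5 * (np.linalg.slogdet(sigma)[1]
                   - 0.5 * (np.linalg.slogdet(sigma1)[1] + np.linalg.slogdet(sigma2)[1]))
    return term1 + term2

def optimize_testpoints(cdf_i, cdf_k, n_samples, t_init):
    """Numerically maximize D_B(t) of Eq. (14) over the testpoint locations."""
    def negative_db(t):
        t = np.sort(t)   # enforce the ordering constraint t_1 < ... < t_Nt
        mu_i, s_i = class_gaussian_stats(t, cdf_i, n_samples)   # helper from the earlier sketch
        mu_k, s_k = class_gaussian_stats(t, cdf_k, n_samples)
        return -bhattacharyya(mu_i, s_i, mu_k, s_k)
    result = minimize(negative_db, np.asarray(t_init, dtype=float), method="Nelder-Mead")
    return np.sort(result.x)
```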

III Results and Discussion

III-A Testpoint Selection

For the results section we focus on the quadrature feature, which is the concatenation of the I and Q components of each symbol. In Fig. 1, we show the results of the testpoint selection procedure under 0 dB SNR, for a varying number of testpoints, with the two classes being 4-QAM and 16-QAM.

Fig. 1: Optimized testpoint locations for a varying number of testpoints, $N_t$. The solid line shows the CDF difference between the two classes (4-QAM and 16-QAM, SNR = 0 dB).

The solid line plot corresponds to the difference of the two theoretical CDFs. We note that in the VD classifier the local maxima and minima of this plot are used as the testpoints. However, we find that the numerical optimization places “good” testpoints close to, but not exactly at, the local maxima and minima. This is due to the additional information provided by the covariance matrices.

In contrast to the VD classifier, which has a fixed number of testpoints (four for this particular problem) corresponding to the number of local maxima and minima, the optimization procedure allows more flexibility in choosing the number of testpoints. In Fig. 1, we show the result of the optimization procedure for 1 to 8 testpoints. It confirms our intuition that “good” testpoints tend to be 1) spaced apart to avoid high correlation, 2) concentrated around locations with a high CDF difference, and 3) not necessarily the same for different values of $N_t$. This result further confirms the need to jointly optimize the testpoint locations.

III-B Comparison With Existing Techniques

As mentioned in the previous section, the proposed approach has the flexibility of varying the number of testpoints. This effectively gives more freedom to trade off classification accuracy against computational complexity. This idea is illustrated in Fig. 2. For $N = 200$ symbols and SNR = 0 dB, we show the classification accuracy of the proposed method as the number of testpoints is increased from 1 to 8, for all possible pairs of candidate constellations. The dotted lines correspond to the accuracy of the ML classifier, which serves as an upper bound on classification accuracy, while the dashed lines correspond to that of the VD classifier. Note that both are plotted as horizontal lines because ML does not utilize testpoints, while VD has a fixed number of testpoints corresponding to the pdf-crossings.

Fig. 2: Effect of increasing the number of testpoints on classification accuracy for all possible pairs of constellations of interest. The classification accuracy of the ML and VD classifiers is also shown for comparison. (SNR = 0 dB, $N$ = 200)

We see that the proposed method is able to exceed the accuracy of the VD classifier with as few as 3 testpoints. Further, the method’s accuracy can be improved by adding more testpoints, at the cost of higher complexity. We also note that with additional testpoints, the Bayesian classifier reaches a classification accuracy close to that of the ML classifier.

Finally, in Fig. 3, we compare the performance of the proposed method with the existing techniques under varying SNR, with $N = 200$ symbols used for classification. To have a fair comparison, the same number of testpoints is used for both the VD and Bayesian classifiers. Over the entire range of SNR, the proposed Bayesian approach is shown to provide substantial gains over the VD classifier. We emphasize again that, asymptotically, the proposed approach is the optimal classifier when using the sampled distribution distance as the discriminating feature. Also shown in the plot are the classification accuracies of the ML classifier, which acts as the upper bound, and the conventional Kuiper classifier.

Fig. 3: Comparison of the proposed Bayesian method with other existing approaches under varying SNR, with $N$ = 200 symbols used for classification. The same number of testpoints is used for both the VD and Bayesian classifiers.

IV Conclusion

In this letter we presented the optimal discriminant functions for classification using the sampled distribution distance. This method was shown to provide substantial gains compared to other existing approaches. The performance of this method was also shown to be close to that of the ML classifier, but at significantly lower computational complexity. Although modulation classification is used in this letter to illustrate the basic concept, the approach is not limited to this application. The same classifier can be generalized to any classification problem where the CDF of each class is available.

References

  • [1] J. Lee, D. Toumpakaris, and W. Yu, “Interference mitigation via joint detection,” IEEE J. Sel. Areas Commun., vol. 29, no. 6, pp. 1172–1184, Jun. 2011.
  • [2] A. Swami and B. M. Sadler, “Hierarchical digital modulation classification using cumulants,” IEEE Trans. Commun., vol. 48, no. 3, pp. 416–429, Mar. 2000.
  • [3] E. Rebeiz and D. Cabric, “Low complexity feature-based modulation classifier and its non-asymptotic analysis,” in Proc. IEEE GLOBECOM, Dec. 5-9, 2011.
  • [4] M. A. Stephens, “EDF statistics for goodness of fit and some comparisons,” Journal of the American Statistical Association, vol. 69, no. 347, pp. 730–737, Sep. 1974.
  • [5] F. Wang and X. Wang, “Fast and robust modulation classification via Kolmogorov-Smirnov test,” IEEE Trans. Wireless Commun., vol. 58, no. 8, pp. 2324–2332, Aug. 2010.
  • [6] P. Urriza, E. Rebeiz, P. Pawelczak, and D. Cabric, “Computationally efficient modulation level classification based on probability distribution distance functions,” IEEE Commun. Lett., vol. 15, no. 5, pp. 476–478, May 2011.
  • [7] F. Wang and C. Chan, “Variational-distance-based modulation classifier,” in Proc. IEEE ICC, Ottawa, Canada, Jun. 10-15, 2012.
  • [8] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification.   John Wiley & Sons, Inc., 2001.
  • [9] T. Kailath, “The divergence and Bhattacharyya distance measures in signal selection,” IEEE Trans. Commun. Technol., vol. 15, no. 1, pp. 52–60, Feb. 1967.