I. Introduction
Modulation level classification (MLC) is a process that detects the transmitter's digital modulation level from a received signal, using a priori knowledge of the modulation class and of the signal characteristics needed for downconversion and sampling. Among many modulation classification methods [1], cumulant (Cm) based classification [2] is one of the most widespread, owing to its ability to identify both the modulation class and the modulation level. However, differentiating among cumulants of the same modulation class but with different levels, e.g. 16-QAM vs. 64-QAM, requires a large number of samples. A recently proposed method [3] based on a goodness-of-fit (GoF) test using the Kolmogorov-Smirnov (KS) statistic has been suggested as an alternative to Cm-based level classification, since it requires a smaller number of samples to achieve accurate MLC.
In this letter, we propose a novel MLC method based on distribution distance functions, namely the Kuiper (K) [4], [5, Sec. 3.1] and KS distances, which is a significant simplification of methods based on GoF testing. We show that a classifier based only on the K-distance achieves better classification accuracy than the KS-based GoF classifier. At the same time, our method requires only $N_t N$ additions, in contrast to the $\mathcal{O}(N \log N)$ operations of the KS-based GoF test, where $N$ is the sample size and $N_t$ is the number of test points used by our method, which scales with the number of distinct modulation levels $N_{\text{mod}}$.

II. Proposed MLC Method
II-A. System Model
Following [3], we assume a sequence of $N$ discrete, complex, i.i.d. sampled baseband symbols, $s_n$, drawn from a constellation of modulation order $M^{(i)}$, $i \in \{1, \dots, N_{\text{mod}}\}$, transmitted over an AWGN channel, perturbed by uniformly distributed phase jitter $\phi_n$ and attenuated by an unknown factor $\alpha$. Therefore, the received signal is given as $r_n = \alpha e^{j\phi_n} s_n + w_n$, where $w_n \sim \mathcal{CN}(0, \sigma_w^2)$ and $n = 1, \dots, N$. The task of the modulation classifier is to find the modulation order $M^{(i)}$ from which $s_n$ was drawn, given $\mathbf{r} = [r_1, \dots, r_N]$. Without loss of generality, we consider unit power constellations and define SNR as $\gamma = \alpha^2/\sigma_w^2$.

II-B. Classification Based on Distribution Distance Function
The proposed method modifies the MLC technique based on GoF testing using the KS statistic [3]. Since the KS statistic, which computes the maximum distance between the theoretical and empirical cumulative distribution functions (ECDF), requires all CDF points, we postulate that similarly accurate classification can be obtained by evaluating this distance at a smaller set of points of the CDF.

Let $\mathbf{z} = f(\mathbf{r})$, where $f$ is the chosen feature map and $N'$ is the number of extracted features. Possible feature maps include the magnitude, $z_n = |r_n|$ ($N' = N$), or the concatenation of $\Re\{r_n\}$ and $\Im\{r_n\}$ (quadrature, $N' = 2N$). The theoretical CDF of $z$ given $M^{(i)}$ and $\gamma$, denoted $F_i(z)$, is assumed to be known a priori (methods of obtaining these distributions, both empirically and theoretically, are presented in [3, Sec. III-A]). The CDFs, one for each modulation level, define a set of test points
$$t_{ij}^{\pm} = \arg\max_{z} \pm\left(F_i(z) - F_j(z)\right) \qquad (1)$$

with the distribution distances given by

$$D_{ij}^{\pm} = \pm\left(F_i(t_{ij}^{\pm}) - F_j(t_{ij}^{\pm})\right) \qquad (2)$$

for $i, j \in \{1, \dots, N_{\text{mod}}\}$ and $i \neq j$, corresponding to the maximum positive and negative deviations, respectively. Note the symmetry in the test points, such that $t_{ij}^{+} = t_{ji}^{-}$. Thus, there are $N_t = N_{\text{mod}}(N_{\text{mod}}-1)$ test points for an $N_{\text{mod}}$-order classification.
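The test-point construction in (1)-(2) can be sketched numerically. In the snippet below the two candidate CDFs are Gaussian stand-ins chosen purely so the example is self-contained (the letter's actual CDFs are those of the constellation features); all names and parameter values are hypothetical.

```python
import numpy as np
from math import erf, sqrt

# Stand-in "theoretical" CDFs for two candidate modulation levels i and j.
# Gaussians are used here only for illustration.
def F_i(z):
    return 0.5 * (1.0 + erf((z - 1.0) / (0.3 * sqrt(2.0))))

def F_j(z):
    return 0.5 * (1.0 + erf((z - 1.2) / (0.4 * sqrt(2.0))))

# Dense grid over which the CDF deviations are searched.
grid = np.linspace(-1.0, 3.5, 10_000)
Fi = np.vectorize(F_i)(grid)
Fj = np.vectorize(F_j)(grid)

# Test points, eq. (1): locations of the extreme positive/negative deviations.
t_plus = grid[np.argmax(Fi - Fj)]    # t_{ij}^+
t_minus = grid[np.argmax(Fj - Fi)]   # t_{ij}^-  (equals t_{ji}^+ by symmetry)

# Distribution distances, eq. (2): the deviations at those points.
D_plus = float(np.max(Fi - Fj))      # D_{ij}^+
D_minus = float(np.max(Fj - Fi))     # D_{ij}^-
```

For $N_{\text{mod}}$ candidate levels the same search is repeated for every ordered pair, giving the $N_{\text{mod}}(N_{\text{mod}}-1)$ test points used by the classifier.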
The ECDF, given as

$$\hat{F}(z) = \frac{1}{N'} \sum_{n=1}^{N'} \mathbb{I}\left(z_n \leq z\right), \qquad (3)$$

is evaluated at the test points to form $\hat{F}(t_{ij}^{\pm})$, $i \neq j$. Here, $\mathbb{I}(\cdot)$ equals one if the input is true, and zero otherwise. By evaluating $\hat{F}$ only at the set of test points $\mathcal{T} = \{t_{ij}^{\pm}\}$ defined in (1), we get

$$\hat{F}(t), \quad t \in \mathcal{T}, \qquad (4)$$

which are then used to find estimates of the maximum positive and negative deviations

$$\hat{D}_i^{\pm} = \max_{t \in \mathcal{T}} \pm\left(\hat{F}(t) - F_i(t)\right) \qquad (5)$$

of the ECDF from the true CDFs. The operation of finding the ECDF at the given test points (4) can be implemented using simple thresholding and counting operations, and does not require the samples to be sorted as in [3]. The metrics in (5) are used to find the final distribution distance metrics

$$\hat{D}_i^{\mathrm{rcKS}} = \max\left(\hat{D}_i^{+}, \hat{D}_i^{-}\right), \qquad \hat{D}_i^{\mathrm{rcK}} = \hat{D}_i^{+} + \hat{D}_i^{-}, \qquad (6)$$

which are the reduced complexity versions of the KS distance (rcKS) and the K distance (rcK), respectively.¹ Finally, we use the metrics in (6) as substitutes for the true distance-based classifiers with the following rule: choose $\hat{i}$ such that

$$\hat{i} = \arg\min_{i \in \{1, \dots, N_{\text{mod}}\}} \hat{D}_i. \qquad (7)$$

¹Note that other non-parametric distances used in hypothesis testing exist (see the introduction in, e.g., [4]), although for brevity they are not addressed here. We note, however, that our approach is easily applied to any assumed distance metric.

In the remainder of the letter, rcK and rcKS denote the classifiers applying rule (7) with $\hat{D}_i^{\mathrm{rcK}}$ and $\hat{D}_i^{\mathrm{rcKS}}$, respectively, where $N_t = N_{\text{mod}}(N_{\text{mod}}-1)$ is the total number of test points.
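A minimal end-to-end sketch of the resulting classifier, covering steps (3)-(7), is given below. The "theoretical" magnitude CDFs are approximated by Monte Carlo, and the constellation sizes, SNR, grid, helper names and sample counts are illustrative assumptions rather than values from the letter.

```python
import numpy as np

rng = np.random.default_rng(7)

def qam_symbols(m, n):
    """n unit-power symbols drawn uniformly from square m-QAM (assumed helper)."""
    k = int(np.sqrt(m))
    levels = 2.0 * np.arange(k) - (k - 1)
    pts = (levels[:, None] + 1j * levels[None, :]).ravel()
    pts /= np.sqrt(np.mean(np.abs(pts) ** 2))
    return rng.choice(pts, size=n)

def received(m, n, snr_db):
    """AWGN channel at the given SNR (unit-power constellation, alpha = 1)."""
    sigma = np.sqrt(10.0 ** (-snr_db / 10.0))
    noise = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
    return qam_symbols(m, n) + noise

SNR_DB = 15.0
LEVELS = (16, 64)
GRID = np.linspace(0.0, 2.5, 2000)

def mag_cdf(m, n_mc=200_000):
    """Monte-Carlo approximation of the theoretical CDF of |r| on GRID."""
    z = np.sort(np.abs(received(m, n_mc, SNR_DB)))
    return np.searchsorted(z, GRID, side="right") / n_mc

F = {m: mag_cdf(m) for m in LEVELS}

# Test points, eq. (1): pairwise extrema of the CDF deviations.
T = np.array(sorted({GRID[np.argmax(F[i] - F[j])]
                     for i in LEVELS for j in LEVELS if i != j}))
F_at_T = {m: np.interp(T, GRID, F[m]) for m in LEVELS}

def classify(r, rule="rcK"):
    z = np.abs(r)
    # ECDF at the test points, eqs. (3)-(4): thresholding and counting, no sort.
    F_hat = np.array([(z <= t).mean() for t in T])
    scores = {}
    for m in LEVELS:
        d_plus = np.max(F_hat - F_at_T[m])   # eq. (5)
        d_minus = np.max(F_at_T[m] - F_hat)
        # eq. (6): rcK adds the two deviations, rcKS takes their maximum.
        scores[m] = d_plus + d_minus if rule == "rcK" else max(d_plus, d_minus)
    return min(scores, key=scores.get)       # eq. (7)

print(classify(received(64, 500, SNR_DB)))   # with these settings, typically 64
```

Note that `classify` touches each sample once per test point, so no sorting of the received vector is needed.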
II-C. Analysis of Classification Accuracy
Let $\mathcal{T} = \{t_1, \dots, t_{N_t}\}$ denote the set of test points, $t_{ij}^{\pm}$, sorted in ascending order. For notational consistency, we also define the following points, $t_0 = -\infty$ and $t_{N_t+1} = \infty$. Given that these points are distinct, they partition $\mathbb{R}$ into $N_t + 1$ regions. An individual sample, $z_n$, can be in region $m$, such that $t_{m-1} < z_n \leq t_m$, with a given probability, determined by $F_i$.

Assuming the $z_n$ are independent of each other, we can conclude that, given $M^{(i)}$, the numbers of samples that fall into each of the regions, $\mathbf{k} = [k_1, \dots, k_{N_t+1}]$, are jointly distributed according to a multinomial PMF given as

$$P(\mathbf{k}) = \frac{N'!}{\prod_{m=1}^{N_t+1} k_m!} \prod_{m=1}^{N_t+1} p_m^{k_m}, \qquad (8)$$

where $\sum_{m=1}^{N_t+1} k_m = N'$ and $p_m$ is the probability of an individual sample being in region $m$. Given that $z_n$ is drawn from $M^{(i)}$, $p_m = F_i(t_m) - F_i(t_{m-1})$, for $m = 1, \dots, N_t+1$.
Now, with a particular $\mathbf{k}$, the ECDF at all the test points is

$$\hat{F}(t_m) = \frac{1}{N'} \sum_{l=1}^{m} k_l, \quad m = 1, \dots, N_t. \qquad (9)$$
Therefore, we can analytically find the probability of classification to each of the $N_{\text{mod}}$ classes as

$$P\left(\hat{i} = j \,\middle|\, M^{(i)}\right) = \sum_{\mathbf{k}\,:\; \hat{D}_j^{\mathrm{rcK}} < \hat{D}_l^{\mathrm{rcK}},\ \forall l \neq j} P(\mathbf{k}) \qquad (10)$$

for the rcK classifier. A similar expression can be applied to rcKS, replacing $\hat{D}^{\mathrm{rcK}}$ with $\hat{D}^{\mathrm{rcKS}}$ in (10).
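The building blocks of this analysis, the region probabilities, the multinomial PMF of (8), and the ECDF of (9), can be checked with a short numeric sketch. The test-point CDF values and the occupancy vector below are hypothetical, chosen only to illustrate the construction.

```python
import numpy as np
from math import lgamma, log, exp

# Hypothetical values of a candidate CDF F_i at the sorted test points t_1..t_4.
F_at_T = np.array([0.12, 0.35, 0.78, 0.95])

# Region probabilities p_m = F_i(t_m) - F_i(t_{m-1}), with t_0 = -inf and
# t_{N_t+1} = +inf contributing F = 0 and F = 1 (text after eq. (8)).
p = np.diff(np.concatenate(([0.0], F_at_T, [1.0])))   # sums to 1 by construction

def multinomial_log_pmf(k, p):
    """log of the multinomial PMF in eq. (8)."""
    n = sum(k)
    return (lgamma(n + 1)
            - sum(lgamma(ki + 1) for ki in k)
            + sum(ki * log(pi) for ki, pi in zip(k, p) if ki > 0))

# One particular occupancy vector k with N' = 10 samples.
k = [1, 3, 4, 1, 1]
P_k = exp(multinomial_log_pmf(k, p))

# Given k, the ECDF at the test points is the normalized cumulative sum, eq. (9).
F_hat = np.cumsum(k)[:-1] / sum(k)
```

Summing $P(\mathbf{k})$ over all occupancy vectors for which a given hypothesis wins rule (7) yields the classification probability in (10).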
II-D. Complexity Analysis
Given that the theoretical CDFs change with SNR, we store a set of distinct CDFs, one per considered SNR value, for each modulation level (the impact of this selection on accuracy is discussed further in Section III-B). Further, we store theoretical CDFs of $N_s$ points each. For the non-reduced complexity classifiers that require sorting samples, we use a sorting algorithm whose complexity is $\mathcal{O}(N \log N)$. From Table I, we see that the rcK/rcKS tests use fewer addition operations than the K/KS-based methods [3] and the Cm-based classification [2]. Consequently, the rcK method is more computationally efficient when implemented in an ASIC/FPGA, and is comparable to Cm in complexity when implemented on a CPU. In addition, its processing time is shorter for an ASIC/FPGA implementation, which is an important requirement for cognitive radio applications. Furthermore, the memory requirements of the reduced-complexity methods are also smaller, since $N_s$ has to be large for a smooth CDF. It is worth mentioning that the authors in [3] used the theoretical CDF, but used the number of samples needed to generate the CDF in their complexity figures. The same observation favoring the proposed rcK/rcKS methods holds for the magnitude-based (mag) classifiers [3, Sec. III-A].
| Method | Multiply | Add | Memory |
|---|---|---|---|
| Cm | | | |
| rcKS/rcK | | | |
| KS/K | | | |
| rcKS/rcK (mag) | | | |
| KS/K (mag) | | | |
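The qualitative claim above can be illustrated with a back-of-the-envelope count; the parameter values are hypothetical and the constants omit per-method bookkeeping, so this is not a reproduction of Table I's entries.

```python
from math import log2

N = 1000                     # sample size (illustrative)
N_mod = 3                    # number of candidate modulation levels
N_t = N_mod * (N_mod - 1)    # test points, as defined in Sec. II-B

sort_ops = N * log2(N)       # O(N log N) comparisons just to sort for the full ECDF
threshold_ops = N * N_t      # one compare-and-count per sample per test point

print(f"sort-based: ~{sort_ops:.0f}, threshold-based: {threshold_ops}")
```

For small $N_{\text{mod}}$ the thresholding cost stays below the sorting cost alone, which is consistent with the ASIC/FPGA advantage noted above.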
III. Results
As an example, we assume that the classification task is to distinguish among M-QAM constellations of different modulation levels. For comparison, we also present classification results based on maximum likelihood (ML) estimation.
III-A. Detection Performance versus SNR
In the first set of experiments, we evaluate the performance of the proposed classification method for different values of SNR. The results are presented in Fig. 1. We assume a smaller fixed sample size than in [3, Fig. 1], to evaluate classification accuracy when few samples are available. We confirm that, even for a small sample size, as shown in [3, Fig. 1], Cm has unsatisfactory classification accuracy at high SNR. In the (10, 17) dB region, rcK clearly outperforms all other detection techniques, while as SNR exceeds 17 dB, all classification methods (except Cm) converge to an accuracy of one. In the low SNR region, (0, 10) dB, KS, rcKS and rcK perform equally well, with Cm having comparable performance. The same observation holds for larger sample sizes, not shown here due to space constraints. Note that the analytical performance metric developed in Section II-C for rcK and rcKS matches the simulations perfectly. For the remaining results, we keep the SNR fixed, unless otherwise stated.
III-B. Detection Performance versus Sample Size
In the second set of experiments, we evaluate the performance of the proposed classification method as a function of the sample size $N$. The results are presented in Fig. 2. As observed in Fig. 1, here too Cm has the worst classification accuracy, e.g. 5% below the upper bound at the largest considered sample size. The rcK method performs best at small sample sizes. As the sample size grows, the accuracy of rcK and KS becomes equal. Classification based on the rcKS method consistently falls slightly below the rcK and KS methods. In general, rcKS, rcK and KS converge to an accuracy of one at the same rate.
III-C. Detection Performance versus SNR Mismatch and Phase Jitter
In the third set of experiments, we evaluate the performance of the proposed classification method as a function of SNR mismatch and phase jitter. The results are presented in Fig. 3. In the case of SNR mismatch, Fig. 3, our results show the same trends as in [3, Fig. 4]; that is, all classification methods are relatively immune to SNR mismatch, i.e., the accuracy loss between the matched and the maximally mismatched SNR is less than 10% in the considered range of SNR values. This justifies the selection of the limited set of SNR values for the complexity evaluation used in Section II-D. As expected, ML shows very high sensitivity to SNR mismatch. Note again the perfect match of the analytical result of Section II-C with the simulations.
In the case of phase jitter caused by imperfect downconversion, we present results in Fig. 3 for the SNR value used in [2], in contrast to the value used earlier, for comparison purposes. We observe that our method using the magnitude feature, rcK/rcKS (mag), as well as the Cm method, are invariant to phase jitter. rcK and rcKS perform almost equally well, while Cm is worse than the other three methods by 10%. As expected, ML performs better than all the other methods. Quadrature-based classifiers, as expected, are highly sensitive to phase jitter. Note that for small phase jitter, the quadrature-based classifiers perform better than the others, since their sample size is twice as large ($N' = 2N$) as in the magnitude case.
IV. Conclusion
In this letter, we presented a novel, computationally efficient method for modulation level classification based on distribution distance functions. Specifically, we proposed metrics based on the Kolmogorov-Smirnov and Kuiper distances that exploit the distance properties between the CDFs corresponding to different modulation levels. The proposed method results in faster MLC than the cumulant-based method, by reducing the number of samples needed. It also has lower computational complexity than the KS-GoF method, since it eliminates the sorting operation and uses only a limited set of test points instead of the entire CDF.
References
- [1] O. A. Dobre, A. Abdi, Y. Bar-Ness, and W. Su, “Survey of automatic modulation classification techniques: Classical approaches and new trends,” IET Communications, vol. 1, no. 2, pp. 137–156, Apr. 2007.
- [2] A. Swami and B. M. Sadler, “Hierarchical digital modulation classification using cumulants,” IEEE Trans. Commun., vol. 48, no. 3, pp. 416–429, Mar. 2000.
- [3] F. Wang and X. Wang, “Fast and robust modulation classification via Kolmogorov-Smirnov test,” IEEE Trans. Commun., vol. 58, no. 8, pp. 2324–2332, Aug. 2010.
- [4] G. A. P. Cirrone, S. Donadio, S. Guatelli, A. Mantero, B. Mascialino, S. Parlati, M. G. Pia, A. Pfeiffer, A. Ribon, and P. Viarengo, “A goodness of fit statistical toolkit,” IEEE Trans. Nucl. Sci., vol. 51, no. 5, pp. 2056–2063, Oct. 2004.
- [5] M. A. Stephens, “EDF statistics for goodness of fit and some comparisons,” Journal of the American Statistical Association, vol. 69, no. 347, pp. 730–737, Sep. 1974.