
Classification of Intra-Pulse Modulation of Radar Signals by Feature Fusion Based Convolutional Neural Networks

Detection and classification of radars based on the pulses they transmit is an important application in electronic warfare systems. In this work, we propose a novel deep-learning based technique that automatically recognizes the intra-pulse modulation types of radar signals. The reassigned spectrogram of the measured radar signal and the detected outliers of its instantaneous phase, filtered by a special function, are used to train multiple convolutional neural networks. Automatically extracted features from the networks are fused to distinguish frequency and phase modulated signals. Simulation results show that the proposed FF-CNN (Feature Fusion based Convolutional Neural Network) technique outperforms the current state-of-the-art alternatives and is easily scalable over a broad range of modulation types.


I Introduction

Automatic intra-pulse modulation recognition plays a pivotal role in radar classification systems [10]. Various methods have been proposed to classify different intra-pulse modulations. Most of these methods consist of two major phases: feature extraction and classification. The classification phase does not vary much between methods; they are predominantly differentiated by their feature extraction phase.

Before the emergence of convolutional neural network (CNN) based solutions, various signal processing methods were employed in the feature extraction step to differentiate between intra-pulse modulation classes. In [22, 23, 12] and [8], features are derived based on time-frequency analysis, and in [17] and [19] features are extracted through autocorrelation functions. Apart from these methods, principal component analysis is performed in [21], and an entropy method is applied in [9]. For the classification phase, common machine learning methods are directly employed to classify the extracted features. In [10], artificial neural networks are employed. Support vector machines are used in [16] and [11], clustering techniques are used in [12] and [21], and probabilistic graphical models are adopted in [19].

The major weakness of the aforementioned standard two-phase techniques of feature extraction and classification is that it is hard to extract features which facilitate classification. To overcome this weakness, a simple CNN based approach has been employed in [20]. In this method, feature extraction and classification are performed in a single network, yielding the highest performance and scalability reported to date. However, this method has mostly been evaluated on frequency modulated signals, and its classification performance on some of the phase modulated pulses has not been investigated. Moreover, all previous work focuses on low SNR levels down to -10 dB with the assumption that pulses are detected prior to classification, which is not realistic considering that -10 dB is far below the typical lowest SNR value for real-time pulse detection in EW receivers.

In this work, to overcome the shortcomings of the previously mentioned techniques, we propose a feature fusion based convolutional neural network model (FF-CNN) that is capable of automatically performing feature extraction and classification of any type of frequency or phase modulated pulse. In the proposed technique, a previously detected radar pulse is first pre-processed to obtain a frequency-related and a phase-related input. The resultant data is then fed to a combined deep network structure composed of two CNNs followed by a feature fusion layer that fuses the outputs of the two independent CNNs. Such feature fusion has been applied with significant success on other problems [6, 5, 15, 2]. Finally, the class probabilities are observed at the output.

The details of the pre-processing and the proposed CNN model are covered in Section II. Simulation results are presented in Section III. Conclusions are drawn in Section IV.

II Proposed FF-CNN Technique

Fig. 1: The proposed feature fusion based convolutional neural network (FF-CNN) model. First, the two pre-processed inputs are subjected to feature extraction through convolutional neural networks; then the two network outputs are combined and passed to the fusion layers; finally, a softmax layer provides the class probabilities.

A detected noisy radar pulse can be modelled as follows:

y[n] = A[n] e^{j φ[n]} + w[n]    (1)

where A[n] denotes the pulse envelope, φ[n] denotes the instantaneous signal phase, and w[n] denotes zero-mean circularly symmetric complex Gaussian noise. Two different pre-processing procedures are applied before the network in order to facilitate both frequency and phase modulation identification of y[n]. This approach differs from traditional learning based methods with handcrafted features, since in FF-CNN these two automatically generated inputs are used by the network in an end-to-end manner. In other words, feature learning and classification are performed automatically. The first pre-processing step extracts Time-Frequency Images (TFIs) of the time-series complex signals, which are good for differentiating frequency modulations. However, pseudo-random sequenced phase modulations have very similar TFIs. Thus, a second pre-processing step is employed that makes the discrimination of phase modulated signals easier. Below, the pre-processing techniques and the proposed deep network structure are detailed.
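For concreteness, the signal model of Eq. (1) can be sketched in a few lines of Python; the rectangular envelope, the Barker-13 phase code, and the samples-per-chip value below are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def noisy_pulse(phase, snr_db, envelope=None, rng=None):
    """Generate y[n] = A[n] * exp(j*phi[n]) + w[n] with complex AWGN."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(phase)
    a = np.ones(n) if envelope is None else envelope       # A[n]: rectangular by default
    s = a * np.exp(1j * phase)                             # noise-free pulse
    noise_power = np.mean(np.abs(s) ** 2) / 10 ** (snr_db / 10)
    # circularly symmetric complex Gaussian: variance split between I and Q
    w = np.sqrt(noise_power / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return s + w

# Example: Barker-13 BPSK phase code, 8 samples per chip, at 10 dB SNR
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
phi = np.repeat(np.where(barker13 > 0, 0.0, np.pi), 8)
y = noisy_pulse(phi, snr_db=10)
```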

II-A Pre-processing Stages

In the first stage of pre-processing, the Reassigned Short-Time Fourier Transform (RSTFT) [1] of y[n] is computed to generate a high-resolution TFI that emphasizes frequency modulations. Let S_h(t, f) denote the STFT of y, given as:

S_h(t, f) = ∫ y(τ) h*(τ − t) e^{−j2πfτ} dτ    (2)

where h is the windowing function controlling the desired time and frequency resolution of the resulting TFI. Then, the RSTFT of the detected signal is computed as:

RS(t, f) = ∬ |S_h(τ, ν)|² δ(t − t̂(τ, ν)) δ(f − f̂(τ, ν)) dτ dν    (3)

where t̂, f̂, and S_g are defined as:

t̂(t, f) = t − Re{ S_{Th}(t, f) S_h*(t, f) } / |S_h(t, f)|²    (4)
f̂(t, f) = f + Im{ S_{Dh}(t, f) S_h*(t, f) } / (2π |S_h(t, f)|²)    (5)
S_g(t, f) = ∫ y(τ) g*(τ − t) e^{−j2πfτ} dτ,  g ∈ {Th, Dh}    (6)

with (Th)(τ) = τ h(τ) and (Dh)(τ) = dh(τ)/dτ. Fig. 2 illustrates the STFT (Fig. 2a) and the RSTFT (Fig. 2b) of a frequency modulated pulse measured at 10 dB SNR. As demonstrated, the RSTFT provides a higher resolution TFI than the STFT. However, since the high resolution TFIs are spatially sparse, they are downsampled by the nearest-neighbor interpolation method with negligible information loss [14], so that the FF-CNN can be trained on a standardized input size with decreased training duration.
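The TFI generation and nearest-neighbor downsampling can be sketched as follows. Note that this stand-in uses a plain (non-reassigned) STFT via `scipy.signal.stft`, since the full reassignment step of [1] is omitted for brevity, and the 64x64 output size, window, and segment length are assumed placeholders rather than the paper's settings.

```python
import numpy as np
from scipy.signal import stft

def tfi(y, fs, nperseg=64, out_shape=(64, 64)):
    """Magnitude TFI from a plain STFT, downsampled by nearest neighbor.

    The paper uses the reassigned STFT; a plain spectrogram is shown here
    as a simplified stand-in, since reassignment only relocates energy on
    the same time-frequency grid.
    """
    _, _, S = stft(y, fs=fs, window='hann', nperseg=nperseg)
    img = np.abs(S)
    # nearest-neighbor resampling to a standardized network input size
    r = (np.arange(out_shape[0]) * img.shape[0] / out_shape[0]).astype(int)
    c = (np.arange(out_shape[1]) * img.shape[1] / out_shape[1]).astype(int)
    return img[np.ix_(r, c)]

# Toy chirp (linear FM) as input
n = np.arange(1024)
y = np.exp(2j * np.pi * 0.1 * n ** 2 / 1024)
img = tfi(y, fs=1.0)
```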

Fig. 2: TFIs of a Costas-10 modulated pulse at 10 dB SNR using (a) STFT and (b) RSTFT, at 100 MHz sampling frequency.
Fig. 3: Second pre-processing steps for a 16-PSK (phase) modulated pulse at 5 dB SNR. (a) Phase of the modulated signal, detected by applying a threshold. (b) Convolution of the pulse phase with HG (blue); detected phase jumps by robust least squares (red).

In the second stage of pre-processing, the unwrapped instantaneous phase of the measured signal is first convolved with an n = 1 order Hermite-Gaussian (HG) function:

h_HG(t) = A t e^{−t²/σ²}    (7)

where A and σ are amplitude and time parameters, respectively. σ can be chosen so that the effective time support of h_HG is set to half of the minimum chip duration, and A is chosen accordingly. On the result of the convolution, discontinuities in the phase can be detected robustly by using the Robust Least Squares (RLS) technique.

Convolution of the detected signal's instantaneous phase with the h_HG function is effectively a smoothed differentiation operation and makes phase jumps more apparent, as illustrated in Fig. 3. Outliers of the convolved phase are detected by the RLS method [3], and thereby the phase shift points are determined, as illustrated in Fig. 3b. This procedure does not produce any output for the continuous phase changes of frequency modulated signals, which also contributes to distinguishing frequency and phase modulated signals. Finally, the detected phase jump levels are discretized and vectorized, forming the second pre-processing input used in the classification of phase-shifted signals.
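A minimal sketch of this second pre-processing stage is given below. A simple median-absolute-deviation (MAD) threshold stands in for the ROBUSTREG-style robust least squares of [3], and the kernel width, threshold factor, and chip length are hypothetical parameter choices.

```python
import numpy as np

def phase_jumps(y, chip_len, k=6.0):
    """Detect phase discontinuities: smoothed differentiation with an
    n=1 Hermite-Gaussian kernel, then a simple robust outlier rule
    (MAD threshold as a stand-in for robust least squares)."""
    phi = np.unwrap(np.angle(y))
    sigma = chip_len / 12.0                      # effective support ~ half a chip
    t = np.arange(-3 * sigma, 3 * sigma + 1)
    hg1 = -t * np.exp(-t ** 2 / (2 * sigma ** 2))  # first-order Hermite-Gaussian
    d = np.convolve(phi, hg1, mode='same')         # smoothed derivative of phase
    med = np.median(d)
    mad = np.median(np.abs(d - med)) + 1e-12
    return np.flatnonzero(np.abs(d - med) > k * mad)  # indices near phase jumps

# QPSK-like pulse: two pi/2 phase jumps, 64 samples per chip
phi = np.repeat([0.0, np.pi / 2, np.pi], 64)
y = np.exp(1j * phi)
idx = phase_jumps(y, chip_len=64)
```

The detector flags sample indices around the two chip boundaries (near 64 and 128), while a purely frequency modulated input, having continuous phase, produces no such outliers.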

II-B Convolutional Neural Network Model and Feature Fusion

Convolutional neural networks are widely used in image processing problems for automatic feature extraction and classification. The input is convolved with a set of filters, each of which is specialized for the detection of a different local pattern. These convolution filter weights are updated during the training phase so that they can better detect local similarities in the image. The last layer of the CNN gives the class probabilities.

The proposed CNN model, shown in Fig. 1, has two inputs — the reassigned TFI of the signal, obtained by the first pre-processing step, and the discretized phase difference vector, determined by the h_HG filter and the RLS method — and gives the modulation types as output. Since the first input is in time-frequency image form, frequency modulated signals can be recognized by convolutional neural networks just like images. For the first pre-processed input, feature extraction is performed in a deep neural network of 3 convolutional layers, as illustrated in Fig. 1. In these layers, 8, 4, and 2 filters are used with sizes of 5x5, 4x4, and 4x4, respectively. The filter sizes are selected so that the smallest local similarities of the TFIs can be learned by the CNN. The unusual pattern of decreasing filter numbers is explained in the last paragraph of this section. Max pooling of size 2x2 with a stride of 2x2 is performed after each layer to reduce computation, thereby decreasing size.

A 3-layer 1-dimensional convolutional neural network is implemented as the second feature extraction step, using the vectors obtained by the second pre-processing. In these layers, 8, 4, and 2 filters are used with sizes of 5x1, 4x1, and 4x1, respectively. Max pooling of size 2x1 with a stride of 2x1 is performed after each layer to reduce computation and decrease size. Lastly, feature fusion is applied to the output neurons of both CNNs by concatenating the two networks' last layers of 5 neurons each and passing them through 2 dense layers where the classification is performed. When the feature fusion layer is applied to the last layers of the CNNs and training is performed as a single network instead of as two separate classifiers, the resultant model learns to tolerate the errors and weak points of the individual pre-processing methods by adjusting the weights of the extracted features, and manages to obtain highly accurate results.
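The fusion head described above can be illustrated with a toy NumPy forward pass. The two 5-dimensional branch outputs match the paper's description, but the 16-neuron hidden width and the random weights are placeholders; in the actual FF-CNN all weights are trained end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

# Hypothetical 5-neuron outputs of the TFI branch and the phase branch
f_tfi = rng.standard_normal(5)
f_phase = rng.standard_normal(5)

# Feature fusion: concatenate, then two dense layers ending in softmax
fused = np.concatenate([f_tfi, f_phase])            # 10-dim fused feature
W1, b1 = rng.standard_normal((10, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 7)), np.zeros(7)  # 7-class output head
h = np.maximum(fused @ W1 + b1, 0)                  # ReLU dense layer
p = softmax(h @ W2 + b2)                            # class probabilities
```

Training the concatenated head jointly is what lets gradients reweight the two branches against each other, which is the mechanism behind the error-tolerance claim above.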

Lastly, in CNNs it is a common approach to increase the number of channels while decreasing layer sizes progressively, with the purpose of preventing information loss [18]. However, increasing the number of channels also increases the required computation as well as the number of parameters that need to be learned. In the proposed technique, similar to sparsifying autoencoder structures [4], both the size of the layers and the number of channels are decreased, to prevent excessive growth in the number of parameters and to ensure a progressive reduction in layer size. As a result, a CNN structure that can successfully generalize over a limited set of training data is obtained.

III Simulation Results

To compare the performance of the proposed method with the existing alternatives and to analyze its generalization capability at different SNR levels, two different sets are investigated. The types of modulations used in these scenarios, which are generated based on [13], are given in Table I. The proposed FF-CNN is implemented in Python using the TensorFlow library. For each training set size, a constant number of 100 validation and 500 test samples are used per class. In addition, for some of the classes, data gathered from field measurements are also included in the test set (10-20% per class). Training is performed in batches of 128. The training, validation, and test sets are chosen to be mutually exclusive; in other words, the network is tested on a set that it has not seen during the training phase. Training is performed 3 times per scenario, and the weights giving the best validation performance are used on the test samples to calculate the classification accuracy. Categorical cross-entropy is used as the loss function, and the ADAM solver [7], which combines the benefits of the RMSProp and AdaGrad techniques, is preferred for optimization.
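For reference, the categorical cross-entropy loss used for training is straightforward to state in code; the 3-sample, 7-class batch below is a made-up example, not data from the experiments.

```python
import numpy as np

def categorical_cross_entropy(y_onehot, p_pred, eps=1e-12):
    """Mean over the batch of -sum_c y_c * log(p_c)."""
    return float(-np.mean(np.sum(y_onehot * np.log(p_pred + eps), axis=1)))

# Three samples with true classes 0, 1, 2 over 7 classes
y = np.eye(7)[[0, 1, 2]]
p_uniform = np.full((3, 7), 1.0 / 7.0)   # uninformative predictions
loss = categorical_cross_entropy(y, p_uniform)   # -> ln(7) ~= 1.9459
```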

In the simulation scenarios, synthetic pulses with varying PW values are generated at a 100 MHz sampling rate at 5 and 10 dB SNR levels. Since the typical lowest SNR value for an EW system to detect a radar pulse in real-time is about 10 dB, the chosen SNR values provide realistically challenging test cases. Pulses that have periodic frequency modulations (ramp, triangular, and sinusoidal FM) are generated such that at least one period is present within the pulse width. Stepped modulations are generated with at least 0.4 chip duration. The number of frequency steps for Frank, P1, and P2 coded pulses is selected uniformly, as is the number of sub-codes in a code for P3 and P4 coded pulses. The number of segments is chosen uniformly for T1 and T2 codes, and the bandwidth of the intercepted signals is uniformly selected for the linear, ramp, triangular, and sinusoidal FM and T3-T4 coded pulses.

7 Class Set            | 23 Class Set
-----------------------|----------------|------------
Single Car. Mod. (SCM) | SCM            | 8-PSK
Linear FM              | + Ramp FM      | 16-PSK
Costas-10 FM           | - Ramp FM      | Frank Code
Barker-13 PM           | Sinusoidal FM  | P1 Code
QPSK                   | Triangular FM  | P2 Code
8-PSK                  | Costas-5 FM    | P3 Code
16-PSK                 | Costas-7 FM    | P4 Code
                       | Costas-10 FM   | T1 Code
                       | Barker-3 PM    | T2 Code
                       | Barker-7 PM    | T3 Code
                       | Barker-13 PM   | T4 Code
                       | QPSK           |
TABLE I: MODULATION TYPES USED AS CLASSES IN SIMULATION RESULT SETS

The first set is used to compare the proposed FF-CNN technique with the highest performing alternatives known to us ([19] ACF-DGM, [20] TFI-CNN). This set is the same as the one used in [20], except that our set also includes some additional phase modulations (QPSK, 8-PSK, 16-PSK). For a fair comparison, the convolution and averaging filter sizes used in TFI-CNN are optimized for the input size by cross-validation. Results are obtained by calculating the classification accuracies for individual predictions and for combined-10-predictions (denoted as "10 samp." in the figures), with changing training set sizes per class. In the combined-10-predictions case, the predictions of 10 test samples per class are combined to make the final decision. Fig. 4 and Table II indicate that the proposed FF-CNN technique outperforms the highest performing alternative by up to 10-15%. TFI-CNN performs poorly because random QPSK, 8-PSK, and 16-PSK sequences do not have very distinctive time-frequency images. Fusing the features extracted from the second pre-processing input with the ones extracted from the TFI input enables the improved classification.
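The combined-10-predictions decision can be sketched as follows. The paper does not spell out the combination operator, so averaging the per-pulse class probabilities is an assumed stand-in; majority voting or summed log-probabilities would behave similarly.

```python
import numpy as np

def combine_predictions(probs):
    """Combine per-pulse softmax outputs into a single class decision
    by averaging the class probabilities (an assumed rule)."""
    probs = np.asarray(probs)              # shape (n_pulses, n_classes)
    return int(np.argmax(probs.mean(axis=0)))

# 10 noisy single-pulse predictions over 7 classes, mostly favoring class 2
rng = np.random.default_rng(1)
probs = rng.dirichlet(np.full(7, 0.5), size=10)
probs[:, 2] += 1.0
probs /= probs.sum(axis=1, keepdims=True)
label = combine_predictions(probs)   # -> 2
```

Averaging washes out per-pulse noise, which is consistent with the accuracy gains reported for the 10-sample case in Table II.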

Fig. 4: Comparison of the proposed FF-CNN technique with the two highest performing alternatives in the 7 class case at 10 dB SNR

Classification accuracy | 100 pulses per class | 900 pulses per class
FF-CNN (1 pulse)        | 98.65%               | 100.00%
FF-CNN (10 pulses)      | 100.00%              | 100.00%
TFI-CNN (1 pulse)       | 75.57%               | 88.62%
TFI-CNN (10 pulses)     | 83.83%               | 93.23%
ACF-DGM (1 pulse)       | 67.10%               | 67.79%
ACF-DGM (10 pulses)     | 70.10%               | 70.81%
TABLE II: COMPARISON OF THE PROPOSED FF-CNN TECHNIQUE WITH THE TWO HIGHEST PERFORMING ALTERNATIVES IN THE 7 CLASS CASE AT 10 DB SNR

The second simulation scenario set, with 23 classes, is used to test the scalability of the proposed FF-CNN technique over a large number of classes at different SNR levels. As Fig. 5 and Table III suggest, the proposed method is able to successfully classify 23 classes at 5 dB SNR without needing a class-specific classifier, which makes this method feasible for any type of frequency/phase intra-pulse modulation, including pseudo-random phase codes and radar-embedded communication [blunt2010intrapulse] signal modulations.

Fig. 5: Classification performance of the proposed FF-CNN technique for 23 class case at 5 and 10 dB SNR with different training set sizes

Table III presents a time analysis of the method for the 23-class scenario. The pre-processing is performed on the CPU, while network training and testing are performed on the GPU. The CPU and GPU are an Intel i5 4460 with 4 cores at 3.4 GHz and an Nvidia GTX 970 with 1664 CUDA cores at 1050 MHz, respectively. After off-line training, the FF-CNN technique requires about 42 ms to classify a pulse, which makes it feasible for on-line processing in EW systems.

                               | 5 dB SNR | 10 dB SNR
Pre-processing Time (Training) | 481 s    | 472 s
Network Time (Training)        | 584 s    | 541 s
Pre-processing Time (Testing)  | 39 ms    | 39 ms
Network Time (Testing)         | 3 ms     | 3 ms
Performance (1 Sample)         | 98.10%   | 99.83%
Performance (10 Samples)       | 99.85%   | 100.00%
TABLE III: AVERAGE TIME AND PERFORMANCE ANALYSIS FOR 23 CLASSES (900 TRAINING SAMPLES PER CLASS)

IV Conclusions

In this work, a feature fusion based convolutional neural network structure is proposed for the automatic classification of frequency and phase modulation types in radar pulses, using the TFIs of the pulses and the detected anomalies in the instantaneous phase of the received signal. Simulation results show that the proposed FF-CNN technique outperforms the highest performing alternatives by a significant margin and is scalable over a broad range of classes. The proposed FF-CNN structure can be trained on synthetic data, alleviating the difficulty of obtaining field data for rare modulation types. Following the encouraging results of this study, neural network structures that are capable of estimating parameter values as well as class probabilities will be investigated in future work.

References

  • [1] F. Auger and P. Flandrin (1995) Improving the readability of time-frequency and time-scale representations by the reassignment method. IEEE Transactions on Signal Processing 43 (5), pp. 1068–1089. Cited by: §II-A.
  • [2] X. Bai, C. Liu, P. Ren, J. Zhou, H. Zhao, and Y. Su (2015) Object classification via feature fusion based marginalized kernels. IEEE Geoscience and Remote Sensing Letters 12 (1), pp. 8–12. Cited by: §I.
  • [3] C. Chen (2002) Robust regression and outlier detection with the ROBUSTREG procedure. In Proceedings of the Twenty-Seventh Annual SAS Users Group International Conference, Paper 265-27. Cited by: §II-A.
  • [4] I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. Vol. 1, MIT Press, Cambridge. Cited by: §II-B.
  • [5] M. Haghighat, M. Abdel-Mottaleb, and W. Alhalabi (2016) Discriminant correlation analysis: real-time feature level fusion for multimodal biometric recognition. IEEE Transactions on Information Forensics and Security 11 (9), pp. 1984–1996. Cited by: §I.
  • [6] R. Hang, Q. Liu, H. Song, and Y. Sun (2016) Matrix-based discriminant subspace ensemble for hyperspectral image spatial–spectral feature fusion. IEEE Transactions on Geoscience and Remote Sensing 54 (2), pp. 783–794. Cited by: §I.
  • [7] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR). Cited by: §III.
  • [8] K. Konopko, Y. P. Grishin, and D. Jańczak (2015) Radar signal recognition based on time-frequency representations and multidimensional probability density function estimator. In Signal Processing Symposium (SPSympo), 2015, pp. 1–6. Cited by: §I.
  • [9] J. Li and Y. Ying (2014) Radar signal recognition algorithm based on entropy theory. In Systems and Informatics (ICSAI), 2014 2nd International Conference on, pp. 718–723. Cited by: §I.
  • [10] J. Lundén and V. Koivunen (2007) Automatic radar waveform recognition. IEEE Journal of Selected Topics in Signal Processing 1 (1), pp. 124–136. Cited by: §I, §I.
  • [11] R. Mingqiu, C. Jinyan, Z. Yuanqing, and H. Jun (2009) Radar signal feature extraction based on wavelet ridge and high order spectral analysis. Cited by: §I.
  • [12] R. Mingqiu, C. Jinyan, and Z. Yuanqing (2010) Classification of radar signals using time-frequency transforms and fuzzy clustering. In Microwave and Millimeter Wave Technology (ICMMT), 2010 International Conference on, pp. 2067–2070. Cited by: §I.
  • [13] P. E. Pace (2009) Detecting and classifying low probability of intercept radar. Artech House. Cited by: §III.
  • [14] J. A. Parker, R. V. Kenyon, and D. E. Troxel (1983) Comparison of interpolating methods for image resampling. IEEE Transactions on medical imaging 2 (1), pp. 31–39. Cited by: §II-A.
  • [15] K. Pong and K. Lam (2014) Multi-resolution feature fusion for face recognition. Pattern Recognition 47 (2), pp. 556–567. Cited by: §I.
  • [16] M. Ren, J. Cai, Y. Zhu, and M. He (2008) Radar emitter signal classification based on mutual information and fuzzy support vector machines. In Signal Processing, 2008. ICSP 2008. 9th International Conference on, pp. 1641–1646. Cited by: §I.
  • [17] B. D. Rigling and C. Roush (2010) Acf-based classification of phase modulated waveforms. In Radar Conference, 2010 IEEE, pp. 287–291. Cited by: §I.
  • [18] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §II-B.
  • [19] C. Wang, H. Gao, and X. Zhang (2016) Radar signal classification based on auto-correlation function and directed graphical model. In Signal Processing, Communications and Computing (ICSPCC), 2016 IEEE International Conference on, pp. 1–4. Cited by: §I, §III.
  • [20] C. Wang, J. Wang, and X. Zhang (2017) Automatic radar waveform recognition based on time-frequency analysis and convolutional neural network. In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, pp. 2437–2441. Cited by: §I, §III.
  • [21] Z. Yu, C. Chen, and W. Jin (2009) Radar signal automatic classification based on pca. In Intelligent Systems, 2009. GCIS’09. WRI Global Congress on, Vol. 3, pp. 216–220. Cited by: §I.
  • [22] D. Zeng, X. Zeng, G. Lu, and B. Tang (2011) Automatic modulation classification of radar signals using the generalised time-frequency representation of zhao, atlas and marks. IET radar, sonar & navigation 5 (4), pp. 507–516. Cited by: §I.
  • [23] D. Zeng, X. Zeng, H. Cheng, and B. Tang (2012) Automatic modulation classification of radar signals using the rihaczek distribution and hough transform. IET Radar, Sonar & Navigation 6 (5), pp. 322–331. Cited by: §I.