Emitter classification of radar pulse sequences is a critical task in Electronic Warfare (EW) disciplines such as Electronic Support Measures (ESM), where the correct identification of a target is crucial for determining adequate countermeasures to protect sensitive units and for other defence purposes [1, 2]. The increasing complexity of the electromagnetic environment, due to more sophisticated radar characteristics and higher emitter density, has rendered the classification of pulse streams an increasingly difficult task.
Traditional approaches to this problem categorize pulses through statistical measures of different pulse attributes. Commonly exploited pulse features are, in this sense, the Pulse Width (PW) and the Radio Frequency (RF). During a process known as deinterleaving, incoming pulses are clustered by emitter [1, 4] so that time-variant parameters, like the Pulse Repetition Interval (PRI), can be computed and further utilized for classification. Other intrapulse features are occasionally used, although they are frequently discarded in real-time operations to avoid storage overload. More recent is the application of Machine Learning, and especially Deep Learning (DL), to radar pulse stream classification, for which proposed solutions include Support Vector Machines (SVMs) [6], Multilayer Perceptrons (MLPs) [7, 8] and Convolutional Neural Networks (CNNs) [9, 10, 11].
However, these approaches have several shortcomings. Firstly, a sufficient number of pulses needs to be acquired before a prediction step can be taken, which limits their real-time applicability. Secondly, temporal patterns and dependencies inside pulse streams are not modelled efficiently, because the pulse order inside the sequence is either not taken into account or, in some cases, the entire series of values is summarized by the average or the domain interval of the attributes. To overcome these shortcomings, a recent work [12] has introduced Recurrent Neural Networks (RNNs) in the ESM domain as a method to efficiently process pulse streams, motivated by their proven effectiveness in several sequence processing problems, such as neural machine translation, time series forecasting and classification [13, 14, 15].
In this paper, we propose to utilize attribute-specific RNNs in combination with a novel normalization scheme for the challenging task of emitter classification. Towards this end, our contribution is two-fold: 1) we introduce a new type of normalization, here called per-sequence normalization, and apply it in parallel to the more commonly used min-max normalization, concatenating the outputs of the two transformations along the feature axis to obtain $2A$ channels, where $A$ is the number of attributes extracted from each pulse; 2) we leverage attribute-specific Long Short-Term Memory (LSTM) layers for each feature after the normalization process to compute an intermediate representation useful for classification purposes (see Fig. 1).
Our method utilizes attributes that are extracted from Pulse Descriptor Words (PDWs) to construct sequences of pulses. These sequences are of the form $X = (x_1, \ldots, x_T)$, $T$ being the length of the sequence, where each individual pulse is represented as a tuple $x_t = (x_t^1, \ldots, x_t^A)$, $A$ being the number of pulse attributes extracted from the PDW. Moreover, we define $X^a = (x_1^a, \ldots, x_T^a)$ as the $a$-th attribute sequence of $X$. Finally, each sequence is associated with a class $y \in \{1, \ldots, C\}$ to form a dataset of $N$ samples divided into $C$ classes. For a classifier $f$ with output prediction $\hat{y} = f(X)$, the goal is to maximize the classification accuracy, so that $\hat{y} = y$ for as many samples as possible. The classifier is parametrized by its weights $\theta$, which are subject to optimization and trained via first-order methods based on gradient descent. Finally, the model employs LSTM [16] cells, which have been shown to learn long-term dependencies (by means of the internal cell state) while handling the vanishing and exploding gradient problems [17] typically encountered with standard RNN cells.
2.1 Normalization scheme
The model incorporates a normalization scheme that maps the original sequence $X$ to a normalized input. Differently from [12], the input is not digitized but normalized according to two different techniques. The first one is min-max normalization, for which the sequence values are linearly mapped into the range $[0, 1]$ according to:

$$\tilde{x}_t^a = \frac{x_t^a - \min^a}{\max^a - \min^a}$$

The attribute domains $[\min^a, \max^a]$ are estimated from the whole training data distribution, and the normalization is applied attribute-wise to all the sequences of the dataset. The second transformation is a per-sequence normalization, i.e. sequence attributes are normalized based only on the values inside their respective attribute sequence. This differs from min-max normalization, where the entire dataset distribution of values plays a role in defining the normalization. As $X^a$ defines the values of attribute $a$ in sequence $X$, we have:

$$\hat{x}_t^a = \frac{x_t^a - \min_t X^a}{\max_t X^a - \min_t X^a}$$

This normalization is applied independently for each attribute $a$ in the sequence and for all sequences in the dataset, mapping the observed attribute sequence domain to the range $[0, 1]$. Doing so enables the network to isolate temporal patterns inside the sequence with more precision, due to the restricted domain of values taken by attribute $a$ along the time axis only. The two normalizations are performed in parallel and the outputs are concatenated along the feature axis to obtain a $T \times 2A$ input. Compared to the discretization proposed in [12], this normalization scheme allows for reduced model complexity and increased inference speed, by eliminating the need for embeddings and reducing the input dimension of the remaining network.
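As a concrete illustration, the two normalizations and their channel-wise concatenation can be sketched in a few lines of Python. This is a toy example with made-up attribute values and domains; the function names are ours, not from the paper:

```python
def min_max_normalize(seq, lo, hi):
    """Map values linearly from the global attribute domain [lo, hi] into [0, 1]."""
    return [(v - lo) / (hi - lo) for v in seq]

def per_sequence_normalize(seq):
    """Map values into [0, 1] using only the min/max observed inside this sequence."""
    lo, hi = min(seq), max(seq)
    if hi == lo:                       # constant attribute: map everything to 0
        return [0.0 for _ in seq]
    return [(v - lo) / (hi - lo) for v in seq]

def normalize_pulse_sequence(attr_seqs, domains):
    """attr_seqs: A attribute sequences, each of length T.
    domains: A (min, max) pairs estimated from the training set.
    Returns 2A channels: A min-max channels followed by A per-sequence channels."""
    mm = [min_max_normalize(s, lo, hi) for s, (lo, hi) in zip(attr_seqs, domains)]
    ps = [per_sequence_normalize(s) for s in attr_seqs]
    return mm + ps

# Toy case: A = 2 attributes (say, PRI and PW), T = 3 pulses.
channels = normalize_pulse_sequence(
    [[100.0, 200.0, 150.0], [2.0, 2.0, 4.0]],
    domains=[(0.0, 1000.0), (0.0, 10.0)],
)
```

Note how the per-sequence channels stretch the locally observed range to the full $[0, 1]$ interval, exposing the within-sequence pattern even when the values occupy only a small slice of the global domain.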
2.2 Attribute-specific LSTMs
Other differences with respect to [12] are the choice and the architectural layout of the RNN layers. In our approach, a dedicated RNN network is assigned to each sequence channel, so that temporal dependencies can be extracted while the values of different attribute sequences are processed separately (Fig. 1). This is motivated by the fact that joint-attribute patterns are oftentimes spurious, since changes in attribute values occur independently. These RNN blocks consist of stacked LSTM [16] layers, each block producing an output feature map of size $T \times H$, where $H$ is the hidden size. After the RNN layers have processed the normalized input sequences, the outputs of the LSTM layers are concatenated along the hidden-size dimension to obtain a $T \times 2AH$ output tensor for a $T$-long input sequence of pulse attributes. In our experiments, the number of stacked layers and the hidden size $H$ were kept fixed across all attribute blocks.
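The per-channel wiring described above can be sketched as follows. For brevity, a plain tanh recurrence stands in for the stacked LSTM blocks, and the weights are random rather than trained; the shapes, however, mirror the text: one recurrent block per input channel, with hidden states concatenated per time step:

```python
import math
import random

def run_rnn(seq, w_in, w_rec, hidden_size):
    """Minimal tanh recurrence (stand-in for an LSTM): h_t = tanh(w_in * x_t + W_rec h_{t-1})."""
    h = [0.0] * hidden_size
    outputs = []
    for x in seq:
        h = [math.tanh(w_in[j] * x + sum(w_rec[j][k] * h[k] for k in range(hidden_size)))
             for j in range(hidden_size)]
        outputs.append(h)
    return outputs  # list of T hidden vectors, each of length hidden_size

def attribute_specific_rnns(channels, hidden_size=4, seed=0):
    """One dedicated recurrent block per channel; outputs concatenated per time step."""
    rng = random.Random(seed)
    per_channel = []
    for seq in channels:
        w_in = [rng.uniform(-0.5, 0.5) for _ in range(hidden_size)]
        w_rec = [[rng.uniform(-0.5, 0.5) for _ in range(hidden_size)]
                 for _ in range(hidden_size)]
        per_channel.append(run_rnn(seq, w_in, w_rec, hidden_size))
    T = len(channels[0])
    # Concatenate along the hidden dimension: T x (num_channels * hidden_size)
    return [[h for out in per_channel for h in out[t]] for t in range(T)]

# Two normalized channels of length T = 3 produce a 3 x 8 feature map for hidden_size = 4.
features = attribute_specific_rnns([[0.1, 0.2, 0.15], [0.0, 1.0, 0.5]], hidden_size=4)
```

The design point is visible in the code: each channel gets its own weights, so patterns are learned per attribute and only combined downstream, rather than being entangled inside one joint recurrence.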
2.3 Model training
The architecture is completed with a Fully Connected (FC) layer, mapping the hidden features from the LSTMs to the prediction class scores. The softmax function is then applied to these scores to obtain the final class probabilities $\hat{p} = (\hat{p}_1, \ldots, \hat{p}_C)$, which form the input of the loss function. The loss function employed for training is the weighted Cross Entropy loss [18], defined as

$$\mathcal{L} = -\sum_{c=1}^{C} w_c \, \mathbb{1}[y = c] \, \log \hat{p}_c$$

where $w_c$ is the weight assigned to class $c$.
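A minimal sketch of the softmax and the weighted Cross Entropy for a single sample (the scores and class weights here are made-up values, not trained ones):

```python
import math

def softmax(scores):
    m = max(scores)                       # shift by the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_cross_entropy(scores, target, class_weights):
    """Weighted CE for one sample: -w_y * log(p_y), with p = softmax(scores)."""
    probs = softmax(scores)
    return -class_weights[target] * math.log(probs[target])

# Three classes; the true class 0 has weight 1.0, the rarer classes weigh 2.0.
loss = weighted_cross_entropy([2.0, 0.5, -1.0], target=0, class_weights=[1.0, 2.0, 2.0])
```

Upweighting rare classes in this way counteracts the class imbalance described in the experimental setup.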
3 Experimental Setup
To test the described supervised approach, a dataset of labeled pulse sequences was produced. It was obtained by means of a radar simulation software, in which different radar characteristics were defined and where, resembling a real-world scenario, signals were programmed to be detected by the receiving antenna of an external agent. Afterwards, detected pulses were clustered by emitter and sorted by time, similar to a real-operating threat detection system, to produce the final pulse sequences. The dataset consists of 60910 training samples and 17382 test samples, divided into 17 classes of emitters taken from different operating environments (aircraft, marine or ground-based). To depict a more realistic case, classes are not equally represented inside the dataset, making the classification task more challenging. Moreover, the pulse sequences present in the dataset have variable length, ranging from a few pulses (7-8) to an upper limit of 512. In our experiments, the set of pulse attributes consists of PRI, PW and RF.
The class imbalance was taken into consideration in terms of evaluation metrics as well. To emphasize correct classification for all the classes, regardless of their relative frequency, the macro-averaged classification accuracy ($\text{acc}_{macro}$) was measured during all the experiments. Macro-averaged metrics are obtained by computing the metric independently for each class and then taking the average, i.e. $\text{acc}_{macro} = \frac{1}{C} \sum_{c=1}^{C} \text{acc}_c$, where $\text{acc}_c$ is the accuracy measured on the samples of class $c$. Accuracies have been measured on the test set, consisting of unseen examples, in order to test the generalization power of the models.
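The macro-averaged accuracy can be computed as follows; the toy labels are chosen to show how it differs from plain (micro-averaged) accuracy under class imbalance:

```python
def macro_accuracy(y_true, y_pred, num_classes):
    """Mean of per-class accuracies, so rare classes weigh as much as frequent ones."""
    per_class = []
    for c in range(num_classes):
        idx = [i for i, y in enumerate(y_true) if y == c]
        if not idx:
            continue  # skip classes absent from the test set
        correct = sum(1 for i in idx if y_pred[i] == c)
        per_class.append(correct / len(idx))
    return sum(per_class) / len(per_class)

# Imbalanced toy case: class 0 has four samples, class 1 only one.
# A classifier that always predicts 0 scores 0.8 micro-averaged, but only 0.5 macro-averaged.
acc = macro_accuracy([0, 0, 0, 0, 1], [0, 0, 0, 0, 0], num_classes=2)
```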
This evaluation metric was used to perform an ablation study of the different techniques introduced in the paper. First, a model was tested without any input normalization. Then the same model was tested with min-max input normalization only. Finally, the same evaluation was performed on a model with the proposed normalization scheme. Results are then compared. According to the same principle, the same model was tested with and without attribute-specific LSTM layers to showcase their effectiveness.
Moreover, a baseline comparison of different DL-based emitter classification approaches was carried out on our dataset. Other tested models include:
The work of Petrov et al. [7], who employ an MLP on statistics computed from attribute sequences, more specifically the minimum and maximum observed values for PRI, PW, and RF;
To ensure a more objective baseline comparison, the same set of attributes, consisting of PRI, PW and RF, was utilized for all methods. Therefore, methods that explicitly mention using only a subset of these three pulse features were tested both in the configuration described in the original paper and in the configuration of this paper. Finally, in cases where a paper proposed more than one normalization, all the different proposals were included in our baselines.
4 Results and Discussion
Table 1 summarizes the results of the ablation study regarding the prediction accuracy when applying attribute-specific LSTMs compared to the standard joint RNN processing case. In all cases, the introduction of attribute-specific LSTMs outperforms the joint RNNs by 2-14%. Additionally, Table 1 clearly highlights the further improvement in accuracy of attribute-specific LSTMs brought about by the proposed normalization scheme, rendering the combination of those two components the most favorable option for radar emitter classification.
| Method | Normalization | Macro accuracy |
|---|---|---|
| Liu et al. [12] | discretization | 0.4563 |
| Liu et al. [12] + RF | discretization | 0.5477 |
| Petrov et al. [7] | min-max | 0.6287 |
| Petrov et al. [7] | standardization | 0.5218 |
In Table 2, the accuracy of our method is compared with a variety of approaches which have previously been deployed for emitter classification. The superiority of the proposed method is clear, with an improvement across the board ranging between 2% and 19%. Specifically, comparing our method with Liu et al. [12], the improvement brought by our method is 19% if we utilize the method described explicitly in [12] and 10% when we also incorporate the RF information for fairness. The increase in performance can be attributed to the fact that the proposed normalization is more suitable for this domain than discretization.
Furthermore, we achieved a 2%-12% improvement in comparison to Petrov et al. [7]. Utilizing temporal information and per-sequence normalization, which efficiently isolates temporal patterns inside the sequences, provides a tailored approach for emitter classification. Finally, a state-of-the-art ResNet18 [22] was also outperformed by our method by 14%, highlighting the importance of the temporal information stored by the LSTMs.
In Figure 2, we compare the confusion matrices of Liu et al. [12] and ResNet18 [22] with that of the proposed method, in order to showcase the performance achieved per class and the main sources of ambiguity that increased the difficulty of the task at hand. Neighboring classes usually represent similar types of emitters, such as air-based or ground-based. Thus, we can clearly see that all methods are prone to confusing classes originating from similar emitter types. However, the proposed method achieves the lowest number of misclassifications and, as can be seen, the majority of the predictions lie on the diagonal, overlapping with the ground truth classes.
Finally, we performed a robustness evaluation experiment by adding Gaussian noise to the radar signals in order to showcase the resistance of our method to noise perturbations. In our experimental setup, additive Gaussian noise was applied in increasing quantities to the pulse sequences, reaching up to 10% of the signal magnitude. This value corresponds to a Signal-to-Noise Ratio (SNR) of 20 dB, a threshold in the proximity of the minimum SNR requirement for most EW systems. Fig. 3 shows that all the methods, except for Liu et al. [12], maintained a relatively stable performance in the presence of noise. However, the proposed approach still outperformed the other baselines by an increasing margin as the amount of noise grew, ranging from a 2% increase for no noise to 6% for an SNR of 20 dB in comparison to Petrov et al. [7]. ResNet18 [22] shows high robustness to noise; however, the proposed method combines not only resilience to noise but also an improved accuracy by 15%.
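The noise-injection setup above can be sketched as follows. This is a simplified model assuming the noise standard deviation is set proportionally to the signal RMS (the exact simulator details may differ); it also shows why a 10% noise amplitude corresponds to the quoted 20 dB SNR:

```python
import math
import random

def add_gaussian_noise(seq, noise_fraction, seed=0):
    """Add zero-mean Gaussian noise with std = noise_fraction * signal RMS."""
    rng = random.Random(seed)
    rms = math.sqrt(sum(v * v for v in seq) / len(seq))
    sigma = noise_fraction * rms
    return [v + rng.gauss(0.0, sigma) for v in seq]

def snr_db(noise_fraction):
    """SNR implied by a noise-to-signal amplitude ratio: 20 log10(1 / fraction)."""
    return 20.0 * math.log10(1.0 / noise_fraction)

snr = snr_db(0.1)                                   # 10% noise -> 20 dB
noisy = add_gaussian_noise([1.0] * 50, noise_fraction=0.1)
```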
In this paper, we introduced a novel method for the complex task of radar emitter classification. Our approach comprises an application-specific normalization scheme, to address the large variability of values within signal attributes, and feature-specific RNNs, which not only incorporate temporal dependencies into the method but also improve the processing of individual features.
Through thorough ablation testing of the individual components of the proposed technique and comparison with previous state-of-the-art methods, we showcased the superiority of our method in terms of accuracy. Furthermore, our robustness evaluation with additive Gaussian noise proved the ability of our approach to be more stable in the presence of noise compared to other baselines.
Future work may include the application of the proposed method across domains to other pulsed signals, such as LIDAR signal classification in autonomous driving. Other future applications extend to tasks in the medical domain, such as pulse irregularity detection in ECGs or MRIs and EEG signal classification.
- [1] David Adamy, EW 101: A First Course in Electronic Warfare, Artech House Radar Library. Artech House, Norwood, MA, 2001.
- [2] A. E. Spezio, "Electronic warfare systems," IEEE Transactions on Microwave Theory and Techniques, vol. 50, no. 3, pp. 633–644, March 2002.
- [3] Richard G. Wiley, ELINT: The Interception and Analysis of Radar Signals, Artech House, Norwood, 2006.
- [4] H. K. Mardia, "New techniques for the deinterleaving of repetitive sequences," IEE Proceedings F - Radar and Signal Processing, vol. 136, no. 4, pp. 149–154, Aug 1989.
- [5] U. I. Ahmed, T. ur Rehman, S. Baqar, I. Hussain, and M. Adnan, "Robust pulse repetition interval (PRI) classification scheme under complex multi emitter scenario," in 2018 22nd International Microwave and Radar Conference (MIKON), May 2018, pp. 597–600.
- [6] Gexiang Zhang, Weidong Jin, and Laizhao Hu, "Radar emitter signal recognition based on support vector machines," in ICARCV 2004 8th Control, Automation, Robotics and Vision Conference, Dec 2004, vol. 2, pp. 826–831.
- [7] I. Jordanov, N. Petrov, and A. Petrozziello, "Supervised radar signal classification," in 2016 International Joint Conference on Neural Networks (IJCNN), July 2016, pp. 1464–1471.
- [8] Ching-Sung Shieh and Chin-Teng Lin, "A vector neural network for emitter identification," IEEE Transactions on Antennas and Propagation, vol. 50, 2002.
- [9] S. Hong, Y. Yi, J. Jo, and B. Seo, "Classification of radar signals with convolutional neural networks," in 2018 Tenth International Conference on Ubiquitous and Future Networks (ICUFN), July 2018, pp. 894–896.
- [10] Lindsay Cain, Jeffrey Clark, Eric Pauls, Ben Ausdenmoore, Richard Clouse Jr., and Ted Josue, "Convolutional neural networks for radar emitter classification," in IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), 2018.
- [11] Jun Sun, Guangluan Xu, Wenjuan Ren, and Zhiyuan Yan, "Radar emitter classification based on unidimensional convolutional neural network," IET Radar, Sonar & Navigation, vol. 12, 2017.
- [12] Z. Liu and P. S. Yu, "Classification, denoising, and deinterleaving of pulse streams with recurrent neural networks," IEEE Transactions on Aerospace and Electronic Systems, vol. 55, no. 4, pp. 1624–1639, Aug 2019.
- [13] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, "Neural machine translation by jointly learning to align and translate," 2014.
- [14] Jun Zhang and K. F. Man, "Time series prediction using RNN in multi-dimension embedding phase space," in SMC'98 Conference Proceedings, 1998 IEEE International Conference on Systems, Man, and Cybernetics, Oct 1998, vol. 2, pp. 1868–1873.
- [15] Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller, "Deep learning for time series classification: a review," CoRR, vol. abs/1809.04356, 2018.
- [16] Sepp Hochreiter and Jürgen Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
- [17] Y. Bengio, P. Simard, and P. Frasconi, "Learning long-term dependencies with gradient descent is difficult," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157–166, March 1994.
- [18] Sankaran Panchapagesan, Ming Sun, Aparna Khare, Spyros Matsoukas, Arindam Mandal, Bjorn Hoffmeister, and Shiv Vitaladevuni, "Multi-task learning and weighted cross-entropy for DNN-based keyword spotting," in INTERSPEECH, Sep 2016, pp. 760–764.
- [19] David Eigen and Rob Fergus, "Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture," CoRR, vol. abs/1411.4734, 2014.
- [20] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," CoRR, vol. abs/1207.0580, 2012.
- [21] Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," CoRR, vol. abs/1406.1078, 2014.
- [22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, "Deep residual learning for image recognition," CoRR, vol. abs/1512.03385, 2015.
- [23] Zhiguang Wang, Weizhong Yan, and Tim Oates, "Time series classification from scratch with deep neural networks: A strong baseline," CoRR, vol. abs/1611.06455, 2016.