With the ever-increasing amount of traffic passing through networks, network management has become a difficult task. One of the most important tasks in network management is identifying the types of traffic that traverse the network, and Network Traffic Classifiers (NTCs) have been able to handle this matter efficiently. Two major purposes of NTCs are detecting anomalies in the network and classifying applications for Quality of Service (QoS) purposes [1, 2]. Several types of NTCs use different methods to handle this task; however, each has its own drawbacks. These methods are generally divided into three categories as follows:
Port-based: This approach is not efficient, since some applications do not use a fixed port, e.g., BitTorrent. Moreover, if the port is changed, this method is no longer reliable.
Deep Packet Inspection (DPI): These classifiers use patterns in packet payloads for classification. They generally have three major drawbacks. First, they need to be updated with the payload patterns of emerging applications. Second, they are not able to identify all of the flows. Third, if the payload of packets is not accessible for privacy reasons, their accuracy suffers considerably.
Machine Learning based: The flaws of the two methods above have drawn attention to the third type of classifiers, which use machine learning and, in particular, deep learning algorithms. These algorithms usually work with features in the headers of packets, but some also take into account information in the payloads [1, 3]. Although they are still limited, they have shown great potential in terms of evaluation metrics and are a promising substitute for the aforementioned methods.
Most traces gathered from real Internet traffic are imbalanced, i.e., some types of application flows are generally far more populated than others, e.g., HTTP [4, 5, 6, 7]. This imbalance is even more pronounced in large-scale traffic and causes serious problems for the evaluation metrics of learning algorithms.
Augmentation is an approach in machine learning that addresses the issue of having a small amount of training data. It usually tries to enlarge the training data in a way that the new samples can still be classified in the same category. Augmentation is especially popular in image classification, where it can be done through methods like cropping, zooming, rotating, and flipping vertically or horizontally. Another way of achieving augmentation is to generate artificial data for a class.
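As an illustration, the geometric transformations mentioned above amount to simple array operations on the image; a minimal NumPy sketch:

```python
import numpy as np

# A minimal sketch of flip/crop-style image augmentation: each transformed
# array is a new training sample that still belongs to the original class.
img = np.arange(12).reshape(3, 4)   # stand-in for a tiny grayscale image

h_flip = img[:, ::-1]   # flip horizontally
v_flip = img[::-1, :]   # flip vertically
crop = img[0:2, 0:3]    # crop a 2x3 region

print(h_flip[0].tolist())  # [3, 2, 1, 0]
print(crop.shape)          # (2, 3)
```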
In order to address the challenges that imbalanced network datasets pose to machine learning algorithms, we introduce a novel augmentation method based on Kernel Density Estimation (KDE) and Long Short-Term Memory (LSTM) networks that improves the accuracy of deep learning algorithms on real-world traffic traces.
The remainder of this paper is organized as follows: In Section II we review related work on NTCs. In Section III we describe our augmentation scheme. The dataset and the deep learning model used to classify the traffic traces are described in Sections IV and V, respectively. Finally, the evaluation of our method is presented in Section VI.
II Related Works
Due to the high variety of classes, datasets, and performance metrics used in this area, comparing the works on this subject is a difficult task. With this in mind, we review several well-known pieces of research done up to this point.
Several works have applied deep learning architectures or neural networks to the classification problem. Lopez-Martin et al. have presented a deep Convolutional Recurrent Neural Network architecture to classify network flows and have found the best setting in that environment in terms of hyper-parameters and feature set. Nevertheless, they have not taken any measures to handle the imbalance in their dataset. Additionally, the scale of their dataset is approximately one fifth of the one we are using.
Rahul et al. have also proposed using Convolutional Neural Networks (CNNs) to classify network traffic, but they only consider three classes of applications and a limited amount of data.
A comparison between CNNs and Stacked Auto-Encoders has also been given for classifying not only traffic types but also network applications at the packet level on a standard VPN/non-VPN dataset. The scheme that was used, unlike ours, relies on features from both the header and the payload of packets, which may not be available in some privacy-preserving datasets.
Finally, Auld et al. have deployed a Bayesian neural network in the form of a multi-layer perceptron to classify their dataset. In this work, the lowest performance metrics come from the classes with the least data.
Some works in this area have attempted to battle the imbalance problem through different measures. Rotsos et al. have introduced a method based on Probabilistic Graphical Models for semi-supervised learning in a Naive Bayes model. For their learning, they have assumed a Dirichlet prior with a high concentration value for some classes, based on the assumption that some classes have a higher probability than others.
In addition, an augmentation method has been proposed using an Auxiliary Classifier Generative Adversarial Network (AC-GAN), although only two classes of network traffic are considered: SSH and non-SSH. Furthermore, that method is only evaluated on traditional machine learning algorithms such as Support Vector Machines, Random Forests, and Naive Bayes. Another work has presented a new feature extraction method using a divide-and-conquer approach for an imbalanced network dataset.
As an instance of LSTM used for generating sequential data, a method has been introduced that generates data using LSTM, and its evaluation shows that it can capture the temporal features in the dataset. LSTM has also been used as an augmentation tool for generating handwriting and human-movement data, respectively, and has proven to be efficient in both cases.
III Augmentation Scheme for Generating Time-Series Network Data
In this section, we describe our proposed augmentation scheme for generating new data in network traffic traces.
Every flow in the network is identified by the same 5-tuple of attributes:
Source IP address.
Destination IP address.
Source port number.
Destination port number.
Transport layer protocol, e.g., TCP or UDP.
Every application on the Internet creates a flow of packets between communicating peers.
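Grouping packets into flows by their 5-tuple can be sketched as follows; the packet records and field names here are hypothetical stand-ins for whatever a pcap parser would produce:

```python
from collections import defaultdict

# Hypothetical packet records; in practice these would come from a pcap parser.
packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "8.8.8.8", "src_port": 51000, "dst_port": 53, "proto": "UDP"},
    {"src_ip": "10.0.0.1", "dst_ip": "8.8.8.8", "src_port": 51000, "dst_port": 53, "proto": "UDP"},
    {"src_ip": "10.0.0.2", "dst_ip": "1.2.3.4", "src_port": 40000, "dst_port": 443, "proto": "TCP"},
]

def flow_key(pkt):
    """Build the 5-tuple that identifies the flow a packet belongs to."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"], pkt["proto"])

# Map each 5-tuple to the ordered list of its packets.
flows = defaultdict(list)
for pkt in packets:
    flows[flow_key(pkt)].append(pkt)

print(len(flows))  # 2 distinct flows
```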
In order to represent flows in our work, we have to choose a set of features that captures the nature of each flow. The set of features that gives acceptable results for classifying flows is listed in Table I. These features are gathered for the first 20 packets of each flow, which is more than enough to capture the temporal and spatial characteristics of a flow.
|Feature||Type|
|Direction of packet||Sequential|
|TCP window size||Sequential|
As shown in Table I, the features fall into two categories: sequential and numerical. Each group has its own augmentation method, which we describe in the following.
Iii-a Generating sequential features
In this section, we demonstrate our approach to generating sequential features.
As mentioned earlier, a traffic flow comprises the sequence of packets transmitted between a source and a destination. Some applications are uni-directional, i.e., the packets are only transmitted in one direction, e.g., uploading a file. In other types of applications, however, packets go in both directions, such as when a client communicates with a server and gets a response for its request. Whether a packet is sent from the source or the destination depends on the sequence of packets that have already been sent up to that point in the flow. Therefore, the sequence of packet directions in a specific application has a time-series nature and can be generated through means of sequence generation as in [13, 14]. TCP window size is another feature of the flow that depends on previous values in the flow. Generally, this value is an indicator of the condition of the connection and the processing speed of data in the flow. Thus, its value at each step of the flow is affected by the values of previous steps.
One of the most common ways to generate a sequence is using Recurrent Neural Networks (RNNs), which try to learn the patterns in time-series data, e.g., speech, music, and text. In our work, we use one type of RNN called the LSTM network. Each LSTM block tries to learn the probability distribution of a step in a sequence while taking into consideration the information from previous steps.
In order to train the network, we gathered the patterns of packet directions in a flow, for up to 20 packets, for each class of application. We encode every direction as 1 or 0, the former being from source to destination and the latter the other way around. At the end of each sequence, we put a unique character as an indicator of the end of the flow. Then every sequence is shifted by one character and used as the labels for training each generation step of the LSTM.
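The training-data preparation described above can be sketched as follows (a minimal sketch; the end-of-flow token `"E"` and the helper name are assumptions for illustration):

```python
# Directions are encoded as 1 (source -> destination) or 0 (the reverse), an
# end-of-flow token is appended, and the per-step labels are the sequence
# shifted by one step, i.e., the next character to predict.
END = "E"          # assumed end-of-flow indicator character
MAX_PACKETS = 20

def make_training_pair(directions):
    """directions: list of 0/1 ints for one flow, at most MAX_PACKETS long."""
    seq = [str(d) for d in directions[:MAX_PACKETS]] + [END]
    inputs = seq[:-1]    # what the LSTM sees at each step
    targets = seq[1:]    # what it should predict at each step
    return inputs, targets

inputs, targets = make_training_pair([1, 0, 1, 1])
print(inputs)   # ['1', '0', '1', '1']
print(targets)  # ['0', '1', '1', 'E']
```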
In the generation phase, first, we choose a direction for the first time step based on the distribution of first directions in the dataset and give it as input to the LSTM. Afterwards, we use the output of each step as the probability distribution over the characters (1, 0, or the ending character) and sample a new direction. Then, we feed that output direction back into the LSTM in order to generate the probabilities of the next step. The maximum number of steps is 19, in order to generate the pattern of a flow of up to 20 packets (the first packet is always from source to destination). Let $d_t$ and $\hat{d}_t$ denote the direction of the packet in the dataset and the direction generated by the LSTM at time step $t$, respectively. The generation process is demonstrated in Fig. 1.
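The sampling loop described above can be sketched as follows; `step_probs` is a hypothetical stand-in for the trained LSTM's per-step softmax output, with a fixed probability table so the sketch runs:

```python
import random

# Sample a direction pattern character by character, feeding each sampled
# character back as the next step's input, until the end token or 20 packets.
END = "E"
MAX_STEPS = 19  # the first packet is fixed, so at most 19 more are generated

def step_probs(history):
    # Hypothetical trained-model output; replace with the LSTM's softmax over
    # the characters given the sequence generated so far.
    return {"1": 0.5, "0": 0.4, END: 0.1}

def generate_flow_pattern(rng):
    seq = ["1"]  # first packet is always source -> destination
    for _ in range(MAX_STEPS):
        probs = step_probs(seq)
        chars, weights = zip(*probs.items())
        nxt = rng.choices(chars, weights=weights)[0]
        if nxt == END:
            break
        seq.append(nxt)
    return seq

pattern = generate_flow_pattern(random.Random(0))
print(pattern)  # a list of "1"/"0" directions, length <= 20
```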
In order to generate window-size values, we use the same scheme, although the characters in this case are the window-size values observed in our dataset instead of 0 and 1.
III-B Generating numerical features
In this section, we describe our method of generating numerical features of a flow.
As shown in Table I, we consider four numerical features for each packet of the flow. In order to generate new samples of these features, we first need to learn their probability distribution. Since these features are not sequential, we can use conventional probability density estimation methods. One of these methods is Kernel Density Estimation (KDE), which belongs to the family of kernel methods.
KDE, also known as the Parzen–Rosenblatt window method, is one of the most famous methods used to estimate the probability density function of a dataset. As a non-parametric density estimator, KDE makes no assumptions about the density function, as opposed to the parametric family of algorithms; it learns the shape of the density from the data automatically. This flexibility, which arises from its non-parametric nature, makes KDE a very popular method for data drawn from a complicated distribution.
Let $x_1, x_2, \dots, x_n$ denote a set of independent and identically distributed random samples of a feature, e.g., inter-arrival time, and let $K(x)$ denote the probability density function (PDF) of the kernel of our choosing. Then we can estimate the PDF of the feature by

$$\hat{f}_h(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right),$$

where $h > 0$ is the bandwidth. A common choice of $K(x)$ is the Gaussian density with $\mu = 0$ and $\sigma = 1$,

$$K(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2},$$

which is also used in our scheme.
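Fitting a Gaussian KDE to one numerical feature and drawing new samples from it can be sketched as below. The synthetic inter-arrival times and the bandwidth rule (Scott's rule) are assumptions for illustration; drawing from a Gaussian KDE reduces to picking a training point at random and perturbing it with $N(0, h^2)$ noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for observed inter-arrival times (seconds) of one traffic class.
samples = rng.exponential(scale=0.05, size=500)

# Scott's rule bandwidth for 1-D data; any bandwidth selector could be used.
h = samples.std(ddof=1) * len(samples) ** (-1 / 5)

def kde_pdf(x):
    """Estimated density at points x, using the Gaussian kernel."""
    u = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

def kde_sample(n):
    """Draw n new feature values from the estimated density."""
    centers = rng.choice(samples, size=n)      # pick data points uniformly
    return centers + h * rng.standard_normal(n)  # add kernel-scaled noise

new_values = kde_sample(10)
print(new_values.shape)  # (10,)
```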
IV Dataset
In this section, we describe our dataset and its labeling method.
For this paper, we used real traffic traces from the campus of Amirkabir University of Technology, which include more than 70 gigabytes of packets over the TCP and UDP transport layer protocols. We label flows using nDPI, an open-source DPI tool released by ntop for classifying flows based on applications. We chose this labeling tool because nDPI has been reported to be the most accurate of the available open-source DPI tools.
Nineteen classes of traffic, comprising 904,490 flows from more than 50 gigabytes of packets, were chosen. 85 percent of these flows were used for training and the rest for the test dataset. The application classes are those with the most instances in the dataset and can be seen in Table II.
|Class||Number of Flows|
As shown in Table II, there are different classes of applications in our dataset; the label names are those assigned by nDPI. The percentage of each class is shown in Fig. 2.
As the bar chart demonstrates, the imbalance of the dataset is clear. The most populated class of application is SSL, with more than 37 percent of the population, and the least populated class is RDP, with less than 0.16 percent. Furthermore, more than 83 percent of the whole dataset consists of only 4 classes. Additionally, 10 classes account for less than 1 percent each; these are the less populated classes, and some of them are therefore expected to suffer from low evaluation metrics.
V Classification Scheme
In this section, we explain the classification scheme used to test our augmentation.
The classification process mainly consists of two stages: augmentation and classification.
In the augmentation phase, we generate new data for the classes that are less populated in the dataset. First, we train an LSTM and use it to generate the pattern of directions and TCP window sizes of a flow. After that, we estimate the PDF of every numerical feature using KDE. Then, according to these PDFs, we generate points in every feature domain; these points are the generated features of the packets. Finally, we generate up to 20 packets per flow and put the features in an array of size 6*20 (6 features for each of 20 packets). If the number of packets in a generated sequence is less than 20, the rest of the array is padded with 0. These arrays comprise the generated dataset.
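The assembly of one generated flow into the fixed-size 6x20 array can be sketched as below; the generators are stubbed with random draws, whereas in the actual scheme the two sequential rows would come from the trained LSTMs and the four numerical rows from the per-feature KDEs:

```python
import numpy as np

N_FEATURES, MAX_PACKETS = 6, 20
rng = np.random.default_rng(0)

def assemble_flow(n_packets):
    """Build a 6x20 feature array for a generated flow, zero-padded."""
    flow = np.zeros((N_FEATURES, MAX_PACKETS))
    flow[0, :n_packets] = rng.integers(0, 2, n_packets)       # directions (LSTM stub)
    flow[1, :n_packets] = rng.integers(1, 2**16, n_packets)   # window sizes (LSTM stub)
    flow[2:, :n_packets] = rng.random((4, n_packets))         # numerical features (KDE stub)
    return flow

flow = assemble_flow(n_packets=12)
print(flow.shape)          # (6, 20)
print(flow[:, 12:].sum())  # 0.0 -- columns beyond the 12th packet stay zero
```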
The pseudo-code for the augmentation process is given in Algorithm 1.
Next, we train a Convolutional Recurrent Neural Network (CRNN) on the augmented dataset, adopting a previously suggested architecture. This architecture includes two convolution layers of sizes 32*4*2 and 64*4*2, respectively, each followed by a Batch Normalization (BN) layer. The output of the last BN layer is then reshaped into time-series format and fed into an LSTM layer with 100 hidden units. At the end of the architecture, there are two Fully-Connected (FC) layers with 100 and 108 hidden nodes and dropout rates of 0.2 and 0.4, respectively, followed by a soft-max layer with 19 outputs, one for each of the 19 traffic classes. The activation function of every layer in this architecture, except for the soft-max layer, is ReLU.
VI Evaluation
In this section, we present the evaluation results of the model on three different datasets.
In order to fully explore the advantages of our method, three datasets are prepared:
Actual data: The exact dataset from Section IV.
Augmented data: The dataset of Section IV augmented using our method.
Sampled data: The dataset of Section IV with the same minority classes over-sampled.
The classes NTP, Facebook, Twitter, WindowsUpdate, Instagram, PlayStore, and YouTube are chosen for augmentation and over-sampling, because the CRNN gets its worst results on these classes. Furthermore, these are the classes with a low number of samples in the dataset.
The evaluation metrics chosen to measure the performance of our approach are those most commonly used for imbalanced datasets, which give an appropriate analysis of the employed methods. These metrics are precision, recall, accuracy, and the F1 measure:

$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN},$$

$$\text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}.$$

TP, FP, TN, and FN in the above formulas denote the true positive, false positive, true negative, and false negative counts, respectively. The F1 measure shows the overall performance of an algorithm on both precision and recall.
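These metrics can be computed directly from the raw confusion-matrix counts of a class; a small worked example with hypothetical counts:

```python
# Compute precision, recall, accuracy, and F1 from per-class counts.
def prf_metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# Hypothetical counts for one class out of 1000 test flows.
precision, recall, accuracy, f1 = prf_metrics(tp=80, fp=20, tn=880, fn=20)
print(precision, recall)  # 0.8 0.8
print(round(f1, 3))       # 0.8
```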
Fig. 3 shows the precision metric for all three datasets. Although some of the augmented classes with fewer instances, such as PlayStore and Instagram, show a slight decrease in precision, the others have mostly improved. In some cases, such as BitTorrent and Google, the sampled dataset performs better than our method, but due to its lack of generalization, a heavily over-sampled class like PlayStore suffers a huge drop in its results. Furthermore, the number of classes improved by our augmentation is larger than the number that performs better on the sampled dataset.
Fig. 4 depicts the recall of each class on the three datasets. Every augmented class shows a clear improvement in recall, because the number of FN predictions for these classes is lower than on the actual dataset. This has some negative effect on the over-populated DNS class, but for the others this metric is improved. Owing to the higher generality of our augmentation, the increase in recall for the augmented classes is higher with our approach than with sampling in every instance. Moreover, sampling causes a decrease in recall in 12 classes compared to the actual dataset.
Fig. 5 illustrates the F1 measure for all classes of the dataset. This figure verifies that the overall performance of our method is better than sampling in each and every one of the classes.
Fig. 6 shows the overall measures on the whole datasets. As this figure shows, although sampling improves recall, it also slightly decreases precision due to its lack of generalization; its overall performance, as shown by a small increase in F1, is still better than the actual dataset. With our method, on the other hand, there is a noticeable improvement in all three metrics, larger than any improvement caused by the sampling method.
Fig. 7 and Fig. 8 illustrate the confusion matrices of the actual and augmented datasets, respectively. As shown in Fig. 7, the HTTP, DNS, and SSL classes, which have a high number of instances in the dataset, have a noticeable negative effect on the predictions of the majority of classes. Fig. 8 shows that our method is able to improve this and reduce the number of false predictions. Additionally, the number of true positives for HTTP and SSL is increased. Although the DNS predictions have fewer true positives, the number of false negatives is diminished. Moreover, the overall accuracy of our method is increased by 6.56 percent.
VII Conclusion
In this paper, we proposed an augmentation method based on LSTM and KDE for imbalanced network traffic classification on real traffic traces. In order to assess the performance of our scheme, we compared it against sampled and augmented datasets. The results obtained from the CRNN show that our approach achieves better overall precision, recall, and F1 measures.
-  M. Lopez-Martin, B. Carro, A. Sanchez-Esguevillas, and J. Lloret. Network traffic classifier with convolutional and recurrent neural networks for internet of things. IEEE Access, 5:18042–18050, 2017.
-  M. Z. Alom, V. Bontupalli, and T. M. Taha. Intrusion detection using deep belief networks. In 2015 National Aerospace and Electronics Conference (NAECON), pages 339–344, June 2015.
-  Mohammad Lotfollahi, Ramin Shirali Hossein Zade, Mahdi Jafari Siavoshani, and Mohammdsadegh Saberian. Deep packet: A novel approach for encrypted traffic classification using deep learning. CoRR, abs/1709.02656, 2017.
-  Lizhi Peng, Haibo Zhang, Yuehui Chen, and Bo Yang. Imbalanced traffic identification using an imbalanced data gravitation-based classification model. Computer Communications, 102:177 – 189, 2017.
-  Ly Vu, Cong Thanh Bui, and Quang Uy Nguyen. A deep learning based method for handling imbalanced problem in network traffic classification. In Proceedings of the Eighth International Symposium on Information and Communication Technology, SoICT 2017, pages 333–339, New York, NY, USA, 2017. ACM.
-  J. Shen, J. Xia, Y. Shan, and Z. Wei. Classification model for imbalanced traffic data based on secondary feature extraction. IET Communications, 11(11):1725–1731, 2017.
-  Muhammad Shafiq, Xiangzhan Yu, Ali Kashif Bashir, Hassan Nazeer Chaudhry, et al. A machine learning approach for feature selection traffic classification using security analysis. The Journal of Supercomputing, 74(10):4867–4892, Oct 2018.
-  Luis Perez and Jason Wang. The effectiveness of data augmentation in image classification using deep learning. CoRR, abs/1712.04621, 2017.
-  R. K. Rahul, T. Anjali, Vijay Krishna Menon, and K. P. Soman. Deep learning for network flow analysis and malware classification. In Sabu M. Thampi, Gregorio Martínez Pérez, Carlos Becker Westphall, Jiankun Hu, Chun I. Fan, and Félix Gómez Mármol, editors, Security in Computing and Communications, pages 226–235, Singapore, 2017. Springer Singapore.
-  T. Auld, A. W. Moore, and S. F. Gull. Bayesian neural networks for internet traffic classification. IEEE Transactions on Neural Networks, 18(1):223–239, Jan 2007.
-  Charalampos Rotsos, Jurgen Van Gael, Andrew W. Moore, and Zoubin Ghahramani. Probabilistic graphical models for semi-supervised traffic classification. In Proceedings of the 6th International Wireless Communications and Mobile Computing Conference, IWCMC ’10, pages 752–757, New York, NY, USA, 2010. ACM.
-  Alex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013.
-  C. Wigington, S. Stewart, B. Davis, B. Barrett, B. Price, and S. Cohen. Data augmentation for recognition of handwritten words and lines using a cnn-lstm network. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), volume 01, pages 639–645, Nov 2017.
-  J. Tu, H. Liu, F. Meng, M. Liu, and R. Ding. Spatial-temporal data augmentation based on LSTM autoencoder network for skeleton-based human action recognition. In 2018 25th IEEE International Conference on Image Processing (ICIP), pages 3478–3482, Oct 2018.
-  J. Zhang, C. Chen, Y. Xiang, W. Zhou, and Y. Xiang. Internet traffic classification by aggregating correlated naive bayes predictions. IEEE Transactions on Information Forensics and Security, 8(1):5–15, Jan 2013.
-  V. Jacobson, R. Braden, and D. Borman. TCP extensions for high performance. RFC 1323, RFC Editor, May 1992.
-  Zachary Chase Lipton. A critical review of recurrent neural networks for sequence learning. CoRR, abs/1506.00019, 2015.
-  Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R. Steunebrink, and Jürgen Schmidhuber. LSTM: A search space odyssey. CoRR, abs/1503.04069, 2015.
-  Murray Rosenblatt. Remarks on some nonparametric estimates of a density function. Ann. Math. Statist., 27(3):832–837, 09 1956.
-  Emanuel Parzen. On estimation of a probability density function and mode. Ann. Math. Statist., 33(3):1065–1076, 09 1962.
-  B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, 1986.
-  nDPI by ntop. https://www.ntop.org/products/deep-packet-inspection/ndpi/. Accessed: 2018-07-02.
-  Tomasz Bujlow, Valentín Carela-Español, and Pere Barlet-Ros. Independent comparison of popular dpi tools for traffic classification. Comput. Netw., 76(C):75–89, January 2015.
-  Rushi Longadge and Snehalata Dongre. Class imbalance problem in data mining review. CoRR, abs/1305.1707, 2013.