Network anomalies typically refer to unusual and significant deviations from normal behavior, which affect both network administrators and end users. For Internet Service Providers (ISPs) and network administrators, it is increasingly important to classify traffic types and abnormal behaviors on the backbone network quickly and accurately. ISPs need to monitor the constitution of different applications so as to prioritize traffic of QoS-sensitive applications, to prevent and locate harmful activities, and to take additional steps for other reasons, such as policy requirements.
Until now, there have been four main effective approaches to traffic classification and anomaly detection:
Transport layer port number based method;
Deep packet inspection;
Host behavior based method;
Traffic flow features based method.
Among them, only the fourth method belongs to the domain of machine learning.
So far, many state-of-the-art methodologies have been applied in the field of traffic classification and anomaly detection. Erman et al. used two unsupervised clustering algorithms, K-Means and DBSCAN, to demonstrate how cluster analysis can effectively identify groups of traffic based only on transport layer statistics. Brauckhoff et al. applied PCA to traffic anomaly detection, overcoming its sensitivity to parameter settings through a slight modification. Kim et al. conducted an evaluation of three traffic classification approaches: port-based, host-behavior-based and flow-features-based. By comparing seven commonly used traffic-flow-features-based methods with the other two kinds of traffic classification methods, they found that the Support Vector Machine (SVM) algorithm achieved the highest accuracy on every trace and application. The $\ell_1$-norm minimization Extreme Learning Machine (ELM) has also achieved good performance on anomaly detection.
Most of the methods mentioned above were applied in single-traffic circumstances and can hardly meet the demands of complex multiple-traffic cases. Learning from multiple tasks simultaneously has been widely applied [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16] and shown to significantly improve performance relative to learning each task independently. Multi-task learning has been successfully applied in various areas including handwritten character recognition, predicting disease progression, modeling disease progression [7, 8], patient risk prediction, biomarker identification, conjoint analysis [12, 13], medical diagnosis [15, 16] and text classification.
In multi-task learning, the traffic consisting of flows in each time period is considered as one task. The multiple tasks from different time periods are learned simultaneously by extracting and utilizing appropriate shared information across tasks. The useful features learned from multiple tasks can significantly improve anomaly detection performance.
In this paper, our main contributions can be summarized as follows:
Applying multi-task feature selection to the network anomaly detection area.
Employing an effective preprocessing and feature extraction step to deal with the raw network data trace.
Generating anomaly detection datasets with ground truth, which can be reused for comprehensive evaluation.
To the best of our knowledge, this is the first time that a multi-task feature selection method has been applied in the network anomaly detection area. Although the application is still in its infancy, we believe it has great potential to enhance the field.
The rest of the paper is organized as follows: Section 2 presents the idea of $\ell_1$-norm based feature selection and discusses its multi-task extension. Section 3 describes how the raw network data trace is converted into features and how the data are labeled. Section 4 presents a systematic evaluation of different methods for anomaly detection. Section 5 concludes the paper.
2 Multi-task Feature Selection.
In this section, we review the $\ell_1$-norm regularization method Lasso and the extension of this well-known regularization from the single-task to the multi-task setting.
Formally, we assume there are $T$ models to learn and the training set consists of $\{(x_i^t, y_i^t)\}$ for $t = 1, \dots, T$ and $i = 1, \dots, m_t$, where $x_i^t \in \mathbb{R}^n$ denotes the $i$-th training sample for the $t$-th task, $y_i^t$ denotes the corresponding output, $m_t$ is the number of training samples for the $t$-th task, and $m = \sum_{t=1}^{T} m_t$ is the total number of training samples. $n$ denotes the number of features. Let $X^t = [x_1^t, \dots, x_{m_t}^t]^\top \in \mathbb{R}^{m_t \times n}$ denote the data matrix for the $t$-th task, $y^t = (y_1^t, \dots, y_{m_t}^t)^\top$, $w^t \in \mathbb{R}^n$ the weight vector for the $t$-th task, and $W = [w^1, \dots, w^T]^\top \in \mathbb{R}^{T \times n}$ the weight matrix.
2.1 Single task case.
For the single-task case, the well-known $\ell_1$-norm regularization implemented by the Lasso method produces a few non-zero coefficients, which gives a more accurate and more easily interpretable model. Each model is learned independently by minimizing the empirical risk with $\ell_1$ regularization:

$$\min_{w^t} \sum_{i=1}^{m_t} \ell\left(y_i^t, (w^t)^\top x_i^t\right) + \lambda \|w^t\|_1 \qquad (1)$$

where $\ell(\cdot, \cdot)$ is the loss function and $\lambda$ controls the sparsity.
Solving these $T$ problems individually is equivalent to solving the summed problem:

$$\min_{W} \sum_{t=1}^{T} \left( \sum_{i=1}^{m_t} \ell\left(y_i^t, (w^t)^\top x_i^t\right) + \lambda \|w^t\|_1 \right) \qquad (2)$$
Solving this optimization problem yields a sparse $w^t$ for each task, each with a distinct sparsity pattern. Throughout, $w_j$ denotes the $j$-th column of $W$, which is the weight vector of the $j$-th feature across all tasks, and $w^t$ denotes the $t$-th row of $W$, where $W$ stands for the weight matrix.
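As a concrete illustration of the single-task case, the sketch below fits a scikit-learn Lasso model to synthetic data standing in for one task's flow features, and reads the sparsity pattern off the coefficients. The data and the value of `alpha` are illustrative assumptions, not the paper's setup (which uses the SLEP package):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_features = 100, 20

# Synthetic single-task data: only features 0-2 carry signal.
X = rng.standard_normal((n_samples, n_features))
true_w = np.zeros(n_features)
true_w[:3] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.01 * rng.standard_normal(n_samples)

# The l1 penalty drives most coefficients exactly to zero,
# so the surviving indices are the selected features.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("non-zero features:", selected)
```

Run per task, this produces a separate (and generally different) sparsity pattern for each task, which is exactly what the multi-task extension below avoids.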
2.2 Extending from single-task case to multi-task case.
In the network traffic anomaly detection area, data collected either from one network at different times or from different backbone network sources form multi-task cases. Our goal is to learn from a group of tasks by exploiting the relations between them, resulting in a shared pattern of sparsity. Obozinski et al. proposed a joint regularization of the parameters which selects features across a group of tasks. The joint regularization gathers the weights of each feature across all tasks into a block $w_j$ and penalizes the $\ell_2$-norm of these blocks, which results in joint feature selection across tasks. The optimization problem is to minimize the following $\ell_{2,1}$-norm regularized objective:

$$\min_{W} \sum_{t=1}^{T} \sum_{i=1}^{m_t} \ell\left(y_i^t, (w^t)^\top x_i^t\right) + \lambda \sum_{j=1}^{n} \|w_j\|_2 \qquad (3)$$

where the second term is the $\ell_{2,1}$-norm of $W$.
The nonsmooth problem (3) can be solved by accelerated gradient methods (AGM). AGM is optimal among first-order methods, with a convergence rate of $O(1/k^2)$ after $k$ iterations.
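The $\ell_{2,1}$-regularized problem can also be solved with a plain (non-accelerated) proximal gradient loop, which is enough to show the mechanism. The sketch below, in NumPy with a squared loss and synthetic data, is a simplified stand-in for the MALSAR solver used later; it zeroes out entire feature rows jointly across tasks:

```python
import numpy as np

def l21_prox(W, thresh):
    """Row-wise soft-thresholding: the proximal operator of the l2,1-norm."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12))
    return W * scale

def mtfs_l21(Xs, ys, lam, n_iter=500):
    """Minimize sum_t (1/m_t)||X_t w_t - y_t||^2 / 2 + lam * ||W||_{2,1}.

    Xs, ys are lists of per-task data; row W[j] holds feature j's weights
    across all tasks, so whole rows are zeroed out jointly.
    """
    n = Xs[0].shape[1]
    # Safe step size: inverse of the largest per-task Lipschitz constant.
    step = 1.0 / max(np.linalg.norm(X, 2) ** 2 / len(X) for X in Xs)
    W = np.zeros((n, len(Xs)))
    for _ in range(n_iter):
        grad = np.column_stack(
            [X.T @ (X @ W[:, t] - y) / len(X)
             for t, (X, y) in enumerate(zip(Xs, ys))]
        )
        W = l21_prox(W - step * grad, step * lam)
    return W

# Toy example: 3 tasks sharing the same 2 relevant features (0 and 1).
rng = np.random.default_rng(0)
Xs = [rng.standard_normal((80, 10)) for _ in range(3)]
true = np.zeros((10, 3))
true[0] = [1.0, 1.5, -1.0]
true[1] = [-2.0, 0.5, 1.0]
ys = [X @ true[:, t] for t, X in enumerate(Xs)]

W = mtfs_l21(Xs, ys, lam=0.1)
shared = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-6)
print("jointly selected features:", shared)
```

Replacing the plain gradient step with Nesterov momentum turns this into the AGM variant with the $O(1/k^2)$ rate.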
3 Data Preprocessing.
The dataset is generated in three main steps: preprocessing the raw traffic trace, extracting features, and labeling. The process is briefly described in Figure 1.
Table 1: Statistics of the traffic trace.

| Statistic | Value |
| --- | --- |
| Number of flows | 731,362 |
| Duration (seconds) | 899.20 |
| Number of packets | 28,944,127 |
| Number of IPv4 addresses | 420,421 |
| Number of IPv6 addresses | 2,893 |
| Total captured size | 1588.48 MB |
| Total flow size | 20809.67 MB |
3.1 Raw Traffic Traces.
In the traffic classification and anomaly detection research field, owing to privacy and legal concerns, there is a lack of benchmark datasets that could be used to verify feature selection methods and classifiers. The WIDE project maintains a data trace repository whose traffic traces were captured from a trans-Pacific backbone link between Japan and the United States. A 15-minute traffic trace, starting at 14:00 and ending at 14:15 on January 9th, 2011, is chosen as the base evaluation dataset. The statistics of the traffic trace can be found in Table 1.
3.2 Feature Extraction.
Moore et al. proposed comprehensive discriminators to characterize flows, providing 248 features. The more detailed the features, the better the classification performance a proper model can achieve. The features are calculated over complete TCP flows and derived from packet headers. A TCP flow is defined as one or more packets traveling between two computer addresses using the TCP protocol. Every packet carries a 5-tuple made up of source IP address, destination IP address, source port number, destination port number and the protocol in use. Netdude is used to create a set of complete TCP flows, to account for packet loss. A variety of features are selected to characterize a TCP flow, including packet lengths, inter-packet timings and information about the transport protocol (TCP), such as SYN and ACK counts. Tcptrace handles packet statistics that require estimation, such as round-trip times and TCP segment sizes; for simpler statistics such as packet counts and packet header sizes, direct calculation is more efficient. The 248 features can be found in Table 2.
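The flow-assembly step described above can be sketched as grouping packets by their 5-tuple. The packet field names below are hypothetical; a full implementation would also canonicalize direction (so both directions of a connection map to one flow) and repair incomplete flows as Netdude does:

```python
from collections import defaultdict

def group_into_flows(packets):
    """Group packets into flows keyed by the 5-tuple.

    Each packet is a dict carrying the five header fields named below
    (illustrative names for this sketch) plus a timestamp "ts".
    """
    flows = defaultdict(list)
    for pkt in packets:
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt["src_port"], pkt["dst_port"], pkt["proto"])
        flows[key].append(pkt)
    return flows

packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 40000,
     "dst_port": 80, "proto": "TCP", "ts": 0.00},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 40000,
     "dst_port": 80, "proto": "TCP", "ts": 0.05},
    {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.2", "src_port": 40001,
     "dst_port": 80, "proto": "TCP", "ts": 0.01},
]

flows = group_into_flows(packets)
print(len(flows), "flows")  # prints: 2 flows
```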
Table 2: Examples of the 248 flow features.

| # | Feature |
| --- | --- |
| 1 | Port number at server |
| 2 | Port number at client |
| 3 | Minimum packet inter-arrival time for all packets of the flow |
| 4 | First quartile inter-arrival time |
| 5 | Median inter-arrival time |
| 6 | Mean inter-arrival time |
| 7 | Third quartile packet inter-arrival time |
| … | … |
| 25 | First quartile of control bytes in packet |
| 26 | Median of control bytes in packet |
| 27 | Mean of control bytes in packet |
| … | … |
| 239 | FFT of packet IAT (Frequency 1) |
| … | … |
| 248 | FFT of packet IAT (Frequency 10) |
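Several of the inter-arrival-time features in Table 2 (minimum, quartiles, mean) can be computed directly from a flow's packet timestamps, as in this minimal sketch:

```python
import numpy as np

def iat_features(timestamps):
    """Inter-arrival-time features for one flow (a subset of Table 2)."""
    ts = np.sort(np.asarray(timestamps, dtype=float))
    iat = np.diff(ts)  # gaps between consecutive packet arrivals
    q1, med, q3 = np.percentile(iat, [25, 50, 75])
    return {
        "min_iat": iat.min(),
        "q1_iat": q1,
        "median_iat": med,
        "mean_iat": iat.mean(),
        "q3_iat": q3,
    }

# Packet arrival times (seconds) for a toy flow:
feats = iat_features([0.00, 0.02, 0.05, 0.09, 0.14])
print(feats)  # min ≈ 0.02 s, mean ≈ 0.035 s
```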
3.3 Labeling.
Due to the lack of ground truth, labeling the traffic set is difficult. The labeling method comprises four main steps:
Four anomaly detectors analyze the traffic and report alarms.
A similarity estimator uncovers the similarities among the reported alarms and groups similar alarms into communities.
A combiner classifies each community according to the overall output of all detectors.
The datasets are labeled.
3.3.1 Analyze the traffic and report alarms.
The four unsupervised anomaly detectors report traffic at different granularities.
PCA: the PCA-based detector reports the source IP addresses of the identified anomalous traffic, using random projection techniques.
Sketch-based method: extracts hidden anomalies using sketches and non-Gaussian multiresolution statistical detection procedures.
Hough transform based method: a useful technique for identifying a specific shape in a picture, applied here to detect anomalous traffic patterns.
Kullback-Leibler divergence based method: detects prominent changes in traffic. The alarms reported by this detector are association rules, namely 4-tuples (source and destination IP addresses, source and destination port numbers).
3.3.2 Extract the traffic with similar alarms and group into communities.
The traffic extractor selects the traffic described by each alarm. Using both packet and flow granularities avoids missing similar alarms.
The graph generator builds an undirected graph from the retrieved traffic. In this similarity graph, nodes stand for alarms, and an edge connects two nodes if their associated traffic intersects. Similar alarms form a set of strongly connected nodes in the graph, called a community. After identifying and combining the communities, each community is classified as either accepted or rejected by measuring its distance to the reference communities in a low-dimensional space.
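The community-forming step can be sketched as finding the connected components of the undirected similarity graph; this is a simplified stand-in for the actual community-mining procedure, with illustrative alarm names:

```python
from collections import defaultdict

def alarm_communities(alarms, edges):
    """Group alarms into communities: connected components of the
    undirected similarity graph (nodes = alarms, edges = alarms whose
    associated traffic intersects)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, communities = set(), []
    for start in alarms:
        if start in seen:
            continue
        # Depth-first traversal collects one whole component.
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        communities.append(comp)
    return communities

# Alarms A, B, C share traffic pairwise; D is isolated -> two communities.
comms = alarm_communities(["A", "B", "C", "D"], [("A", "B"), ("B", "C")])
print(comms)
```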
3.3.3 Labeling the datasets.
To label the analyzed traffic, we define a simple traffic taxonomy with two labels: anomalous and normal. The anomalous label stands for the accepted traffic; such traffic should be detected as anomalous by any efficient anomaly detector. The normal label covers two cases: traffic that is rejected, i.e., lies at a relatively large distance from the reference points, and traffic that none of the detectors identified.
4 Experiments.
In the experiments, we study the empirical performance of feature selection algorithms and classification methods on the datasets processed in Section 3. To show the effectiveness of multi-task feature selection, we compare it with the single-task feature selection method Lasso.
4.1 Dataset.
After preprocessing the raw traffic trace, we form a dataset of flow discriminators, where each row fully characterizes one flow. The dataset consists of 235,000 flows and is split into 10 tasks chronologically. We select 50% of the samples for training and 50% for testing. The statistics of the dataset and samples can be found in Table 3.
4.2 Experiment Setup.
In the experiments, we use the implementations of multi-task feature selection and Lasso from the MALSAR package and the SLEP package, respectively. SVM is chosen as the classifier, implemented by the Matlab version of libsvm. For the multi-task case, the multi-task feature selection method is applied to the ten tasks simultaneously; by tuning the regularization parameter $\lambda$, it selects 5, 12 and 24 features respectively. For the single-task case, we choose the top-$t$ features by tuning the parameter $\lambda$, varying $t$ from 5 and 12 to 24. After feature selection, we build a classification model using SVM and evaluate it on the testing data. For SVM, 5-fold cross validation is used to estimate the best parameters $C$ and $\gamma$.
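The selection-then-classification pipeline can be sketched as follows. This uses scikit-learn rather than MALSAR/SLEP/libsvm, with a synthetic weight matrix standing in for the solver output and toy labels for one task, so all names and values here are illustrative:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)

# Stand-in for the learned weight matrix W (n_features x n_tasks);
# in the paper W comes from the l2,1-regularized solver.
n_features, n_tasks, t = 30, 10, 5
W = rng.standard_normal((n_features, n_tasks)) * 0.01
W[[2, 7, 11, 19, 23], :] += 1.0   # pretend these rows survived the solver

# Rank features by the l2-norm of their weights across tasks; keep top-t.
scores = np.linalg.norm(W, axis=1)
top_t = np.argsort(scores)[::-1][:t]

# Train an SVM per task on the selected columns; C and gamma are tuned
# by 5-fold cross validation, mirroring the paper's setup.
X = rng.standard_normal((200, n_features))
y = (X[:, 2] + X[:, 7] > 0).astype(int)   # toy labels for one task
grid = GridSearchCV(SVC(), {"C": [1, 10], "gamma": ["scale", 0.1]}, cv=5)
grid.fit(X[:, top_t], y)
print("selected:", sorted(map(int, top_t)),
      "cv accuracy: %.2f" % grid.best_score_)
```

Repeating the fit per task and averaging accuracies reproduces the shape of the comparison reported below.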
4.3 Results.
To measure the performance of the multi-task feature selection method, we use overall accuracy as the metric. In Figures 3 and 5, we compare the overall accuracy of SVM when selecting the top-5 and top-24 features. Multi-task feature selection achieves the highest overall accuracy in 8 of the 10 tasks. In Figure 4, we use the top-12 features to measure performance. With a similar result, the multi-task feature selection method leads in 9 of the 10 tasks. We observe that performance improves slightly as the number of selected features increases, and that the classifier works well using only 5 features. Since the class distribution of the datasets is not balanced, we also report the Area Under Curve (AUC) metric. From Table 4, we observe that the multi-task feature selection method achieves better performance than Lasso in both the top-5 and top-12 feature cases.
5 Conclusion.
In this paper, we apply a multi-task feature selection method to the network anomaly detection field. The method selects effective features from multiple tasks simultaneously, which significantly improves anomaly detection performance. We employ an effective preprocessing and feature extraction step to deal with the real data trace from a backbone network, and then generate an anomaly detection dataset with ground truth. The datasets generated with those features help to evaluate the multi-task feature selection method, which outperforms Lasso in most cases. We plan to extend the scale of application by evaluating the multi-task feature selection method on different backbone networks simultaneously, which could enable network-wide anomaly detection.
-  Anukool Lakhina, Mark Crovella, and Christophe Diot. Diagnosing network-wide traffic anomalies. In ACM SIGCOMM Computer Communication Review, volume 34, pages 219–230. ACM, 2004.
-  Hyunchul Kim, Kimberly C Claffy, Marina Fomenkov, Dhiman Barman, Michalis Faloutsos, and KiYoung Lee. Internet traffic classification demystified: myths, caveats, and the best practices. In Proceedings of the 2008 ACM CoNEXT conference, page 11. ACM, 2008.
-  Jeffrey Erman, Martin Arlitt, and Anirban Mahanti. Traffic classification using clustering algorithms. In Proceedings of the 2006 SIGCOMM workshop on Mining network data, pages 281–286. ACM, 2006.
-  Daniela Brauckhoff, Kave Salamatian, and Martin May. Applying pca for traffic anomaly detection: Problems and solutions. In INFOCOM 2009, IEEE, pages 2866–2870. IEEE, 2009.
-  Yibing Wang, Dong Li, Guyu Hu, Zhisong Pan, and Junqing Wu. Anomaly detection using l1-norm minimization extreme learning machine. 2013.
-  Jiayu Zhou, Lei Yuan, Jun Liu, and Jieping Ye. A multi-task learning formulation for predicting disease progression. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 814–822. ACM, 2011.
-  Jiayu Zhou, Jun Liu, Vaibhav A Narayan, and Jieping Ye. Modeling disease progression via fused sparse group lasso. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1095–1103. ACM, 2012.
-  Jiayu Zhou, Jun Liu, Vaibhav A Narayan, and Jieping Ye. Modeling disease progression via multi-task learning. NeuroImage, 2013.
-  Jiayu Zhou, Jimeng Sun, Yashu Liu, Jianying Hu, and Jieping Ye. Patient risk prediction model via top-k stability selection.
-  Jiayu Zhou, Zhaosong Lu, Jimeng Sun, Lei Yuan, Fei Wang, and Jieping Ye. Feafiner: Biomarker identification from medical data through feature generalization and selection. 2013.
-  Guillaume Obozinski, Ben Taskar, and Michael I Jordan. Multi-task feature selection. Statistics Department, UC Berkeley, Tech. Rep, 2006.
-  Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
-  Jian Zhang, Zoubin Ghahramani, and Yiming Yang. Flexible latent variable models for multi-task learning. Machine Learning, 73(3):221–242, 2008.
-  Jinbo Bi, Tao Xiong, Shipeng Yu, Murat Dundar, and R Bharat Rao. An improved multi-task learning approach with applications in medical diagnosis. In Machine Learning and Knowledge Discovery in Databases, pages 117–132. Springer, 2008.
-  Antonio Torralba, Kevin P Murphy, and William T Freeman. Sharing features: efficient boosting procedures for multiclass object detection. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, volume 2, pages II–762. IEEE, 2004.
-  Rie Kubota Ando. Applying alternating structure optimization to word sense disambiguation. In Proceedings of the Tenth Conference on Computational Natural Language Learning, pages 77–84. Association for Computational Linguistics, 2006.
-  Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Multi-task feature learning. In Advances in Neural Information Processing Systems, pages 41–48, 2006.
-  Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
-  Kenjiro Cho, Koushirou Mitsuya, and Akira Kato. Traffic data repository at the wide project. In Proceedings of the annual conference on USENIX Annual Technical Conference, pages 51–51. USENIX Association, 2000.
-  Andrew Moore, Denis Zuev, Michael Crogan, and Queen Mary. Discriminators for use in flow-based classification. Queen Mary and Westfield College, Department of Computer Science, 2005.
-  Romain Fontugne, Pierre Borgnat, Patrice Abry, and Kensuke Fukuda. Mawilab: combining diverse anomaly detectors for automated anomaly labeling and performance benchmarking. In Proceedings of the 6th International Conference, page 8. ACM, 2010.
-  Guillaume Dewaele, Kensuke Fukuda, Pierre Borgnat, Patrice Abry, and Kenjiro Cho. Extracting hidden anomalies using sketch and non gaussian multiresolution statistical detection procedures. In Proceedings of the 2007 workshop on Large scale attack defense, pages 145–152. ACM, 2007.
-  Romain Fontugne and Kensuke Fukuda. A hough-transform-based anomaly detector with an adaptive time interval. ACM SIGAPP Applied Computing Review, 11(3):41–51, 2011.
-  Jiayu Zhou, Jianhui Chen, and Jieping Ye. Malsar: Multi-task learning via structural regularization. Arizona State Univ, 2012.
-  Jun Liu, Shuiwang Ji, and Jieping Ye. Slep: Sparse learning with efficient projections, 2009.
-  Chih-Chung Chang and Chih-Jen Lin. Libsvm: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27, 2011.