BEBP: A Poisoning Method Against Machine Learning Based IDSs

03/11/2018
by Pan Li, et al.

In the big data era, machine learning is one of the fundamental techniques used in intrusion detection systems (IDSs). However, practical IDSs generally update their decision modules by feeding in new data and then periodically retraining their learning models. Hence, attacks that compromise the data used for training or testing classifiers significantly challenge the detection capability of machine learning-based IDSs. The poisoning attack, one of the most recognized security threats towards machine learning-based IDSs, injects adversarial samples into the training phase, inducing data drifting in the training data and a significant performance decrease of the target IDS on testing data. In this paper, we adopt the Edge Pattern Detection (EPD) algorithm to design a novel poisoning method that attacks several machine learning algorithms used in IDSs. Specifically, we propose a boundary pattern detection algorithm to efficiently generate points that are close to abnormal data but considered normal by current classifiers. Then, we introduce a Batch-EPD Boundary Pattern (BEBP) detection algorithm to overcome the limited number of edge pattern points generated by EPD and to obtain more useful adversarial samples. Based on BEBP, we further present a moderate but effective poisoning method called the chronic poisoning attack. Extensive experiments on synthetic and three real network data sets demonstrate the performance of the proposed poisoning method against several well-known machine learning algorithms and a practical intrusion detection method named FMIFS-LSSVM-IDS.


I Introduction

Currently, intelligent intrusion detection systems (IDSs) generally adopt various machine learning techniques to make decisions regarding the presence of security threats, using high-performance classifiers built with learning models and algorithms such as support vector machines (SVM), Naive Bayes (NB), logistic regression (LR), decision trees and artificial neural networks [1][2]. For example, the authors in [1] proposed an efficient intrusion detection method by combining flexible mutual information based feature selection (FMIFS) and least-square SVM (LSSVM), achieving state-of-the-art classification performance on the widely recognized KDDCUP99, NSL-KDD and Kyoto 2006+ data sets.

Although machine learning has been extensively used for intelligent decision making in IDSs, previous works have demonstrated that the technology itself suffers from diverse security threats, e.g., attacks against spam filtering [3], malware detection [4][5] and anomaly detection systems [6][7]. Basically, security threats towards machine learning can be classified into two categories, i.e., exploratory and causative attacks [8]. Specifically, the exploratory attack exploits the security vulnerabilities of learning models to deceive the resulting classifiers without affecting their training phase. For example, adversaries generate customized adversarial samples (the terms sample and data point are used interchangeably in this paper for convenience) to evade the detection of spam filtering [3] and malware detection systems [5][9]. Considering the great influence of deep neural networks (DNNs) in several application scenarios, e.g., speech recognition, image recognition, natural language processing and autonomous driving, some researchers have paid particular attention to exploratory attacks against prevailing DNNs [10][11]. On the other hand, the causative attack (also termed the poisoning attack) changes the training data set by injecting adversarial samples, thereby influencing the training phase of learning models [8]. Typically, such adversarial samples are designed by adversaries to have features similar to malicious samples but wrong labels, inducing a change in the training data distribution. Therefore, adversaries can reduce the performance of classification or regression models in terms of accuracy. Since the training data in practical machine learning based systems are protected with high confidentiality, it is not easy for adversaries to alter the data directly. Alternatively, adversaries can exploit the vulnerability that stems from retraining existing machine learning models. Since machine learning based systems in practical usage, e.g., anomaly detection systems [6][7], are generally required to periodically update their decision models to adapt to varying application contexts, the poisoning attack is emerging as a main security threat towards these real systems. Hence, we focus on the latter type of security threat towards machine learning in this paper.

Existing work regarding poisoning attacks mainly focuses on poisoning SVMs [12] and principal component analysis (PCA) [7] via direct gradient methods. However, these attacking methods are not effective for poisoning other learning models. Recently, a poisoning attack against DNNs was proposed by adopting the concept of the generative adversarial network (GAN) [13]. The label contamination attack (LCA) is another type of poisoning attack against black-box learning models [14]. However, LCA makes a strong assumption that the adversary has the ability to change the labels of training data, which is difficult in reality. In addition, some researchers proposed another attacking strategy called model inversion, which exploits the information exposed by system application program interfaces (APIs) [15][16].

In this paper, we propose a novel poisoning method using the Edge Pattern Detection (EPD) algorithm described in [17][18]. Specifically, we propose a boundary pattern detection algorithm to efficiently generate poisoning data points that are close to abnormal data but regarded as normal ones by current classifiers. We then present a Batch-EPD Boundary Pattern (BEBP) detection algorithm to address the drawback of the limited number of edge pattern points generated by conventional EPD and to obtain more useful boundary pattern points. Based on BEBP, we further present a moderate but effective poisoning method called the chronic poisoning attack. Compared to previous poisoning methods, a notable advantage of the proposed poisoning method is that it can poison different learning models such as SVMs (with linear, RBF, sigmoid and polynomial kernels), NB and LR. Extensive experiments on synthetic and three real network data sets, i.e., KDDCUP99, NSL-KDD and Kyoto 2006+, demonstrate the effectiveness of the proposed poisoning method against the above learning models and a practical intrusion detection method named FMIFS-LSSVM-IDS (see [1]).

The rest of this paper is organized as follows: Section II presents an adversary model and some assumptions. Then, Section III gives the details of the proposed poisoning method. After that, Section IV evaluates the performance of the proposed method via extensive experiments on synthetic and real network data sets. Finally, Section V concludes this paper.

II Adversary Model and Assumptions

In this section, we present an adversary model and make assumptions from four aspects: goal, knowledge, capability and strategy.

(a) The adversarial goal. Generally speaking, the adversarial goal is the intention behind launching attacks, e.g., breaking integrity, availability or user privacy [8][19]. In poisoning attacks, integrity violation and availability violation are the two dominating goals of an adversary. More specifically, the adversary hopes to degrade a learning model and its application performance by poisoning the training phase. Hence, we assume that the adversarial goal is to reduce the accuracy and the detection rate of IDSs.

(b) The adversarial knowledge. To achieve the above goal, an adversary needs some information about the target IDS. Thus, the adversarial knowledge is the prior information that the adversary can utilize to design attacking strategies, including learning algorithms, training and testing data sets, extracted features, etc. [8]. Conventional poisoning methods [6][7] require full knowledge of target systems, which is not realistic in practice. Therefore, we make an assumption of limited knowledge: the adversary only knows the details of the training data.

(c) The adversarial capability. The adversarial capability of launching poisoning attacks involves two points. One is whether the adversary can change the labels or the features of the training data. The other is how many adversarial samples the adversary can inject into the training data. Accordingly, we make two more assumptions regarding the adversarial capability: the adversary cannot change the labels or modify the features of the training data, and is able to inject adversarial samples each time the target system is updated (i.e., each time the learning models are retrained).

(d) The adversarial strategy. Based on the assumptions made above, we define the adversarial strategy as

$\min_{X_{adv}} \; P\big(f;\, \mathcal{K},\, D \cup X_{adv}\big)$   (1)

$\text{s.t. } |X_{adv}| = m \le \alpha n$   (2)

where $n$ denotes the total number of training samples, $\alpha$ is a constant parameter representing the poisoning degree of adversarial samples, $f$ refers to the target learning model that the adversary aims to compromise, $\mathcal{K}$ means the adversarial knowledge, including the training data $D$ and the output labels returned after feeding inputs, $P(\cdot)$ denotes the detection performance of the target model, and $m$ represents the number of adversarial samples $X_{adv}$ that the adversary can inject. Thus, the adversarial goal is to minimize the performance of the target learning model $f$ under the limited knowledge $\mathcal{K}$ and capability $m$.
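For example, assuming purely for illustration a training set of $n = 10{,}000$ samples and the poisoning degree $\alpha = 0.07$ used later in Section IV, constraint (2) allows the adversary to inject at most $m \le \alpha n = 700$ adversarial samples per round of retraining.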

III Details of the Proposed Batch Poisoning Method

III-A Formulation of Adversarial Sample Generation

According to the adversary model, an adversary has no knowledge about the learning models in machine learning-based IDSs. Hence, the proposed poisoning method can be regarded as a kind of black-box attack. Since the information about the learning models is unknown, the adversary instead prefers to inject adversarial samples such that the target models cannot fit the real distribution of the training data well. This process is termed data drifting in this paper.

To maximize the effect of data drifting in the training data, the best strategy is to generate adversarial samples that are close to the discriminant plane defined by a pretrained decision function $f$. Hence, the black-box poisoning problem can be formally defined as generating a set of adversarial samples $X_{adv}$ satisfying

$d(x_{adv}, \mathcal{F}) \le \varepsilon, \quad \forall\, x_{adv} \in X_{adv}$   (3)

where $d(\cdot, \cdot)$ is the Euclidean distance between two vectors, $\mathcal{F}$ denotes the discriminant plane defined by $f$, and $\varepsilon$ denotes the chosen threshold between an adversarial sample and the discriminant plane.

III-B Boundary Pattern Detection

As per the formulation of adversarial sample generation, we define the boundary pattern as the set of data points that are close to abnormal data but considered normal by classifiers. Thus, the goal of the proposed poisoning method is to generate the boundary pattern, which is then used to shift the discriminant plane towards the centre of the abnormal data during model retraining. Accordingly, we propose a boundary pattern detection (BPD) algorithm built on the edge pattern detection (EPD) algorithm [17][18] to effectively generate boundary pattern samples. BPD consists of two main steps:

(a) Detecting the edge pattern points of the normal data that are regarded as normal behaviors by IDSs. Given the normal training data $D_n$, it is easy to find the edge pattern points $E$ by applying the EPD algorithm [17]. Moreover, we calculate the normal vector $v$ with respect to each edge point $e \in E$ to obtain the direction along which $e$ departs from $D_n$ fastest [18]. Let $V$ denote the set of all normal vectors with respect to $E$.

(b) Generating the boundary pattern by shifting the edge pattern points outwards. Although these edge pattern points lie on the exterior surface of $D_n$, they may be far from the discriminant plane $\mathcal{F}$. Hence, we perform the following two operations based on $E$ and $V$: first, selecting an edge pattern point $e$ and its corresponding normal vector $v$; then, shifting $e$ outwards along the direction of $v$ until the generated data points are near the discriminant plane of the classifier. The data shifting is formally defined by

$x_k = e + \lambda_k v$   (4)

where

$\lambda_k = \lambda_{k-1} + \eta_k$   (5)
1.  Input: An edge pattern point $e$ and corresponding normal vector $v$, target learning model $f$, maximal number of iterations $K_{max}$, initial shifting step size $\eta$
2.  Output: A boundary pattern point $x_b$ generated from $e$
3.  Initialize $\lambda_0 = 0$, $\eta_0 = \eta$, $x_b = \emptyset$;
4.  for $k = 1$ to $K_{max}$ do
5.     if $f(e + \lambda_{k-1} v) = +1$ then
6.        if ($f(e + (\lambda_{k-1} + \eta_{k-1}) v) = -1$ and $\eta_{k-1} \le \varepsilon$) then
7.           $x_b = e + \lambda_{k-1} v$;
8.        end if
9.        $\lambda_k = \lambda_{k-1} + \eta_{k-1}$; $\eta_k = \eta_{k-1}$;
10.     else
11.        $\lambda_k = \lambda_{k-1} - \eta_{k-1}$; $\eta_k = \eta_{k-1}/2$;
12.     end if
13.  end for
Fig. 1: Pseudo code of the boundary pattern detection algorithm

The pseudo code of the BPD algorithm is shown in Fig. 1, where $K_{max}$ is the maximal number of iterations, $\eta$ is the initial shifting step size, and $e$ and $v$ represent the selected edge pattern point and its corresponding normal vector, respectively. In particular, we first shift $e$ outwards along the direction of its normal vector $v$ according to equations (4) and (5), where $x_k$ and $\eta_k$ denote the generated adversarial sample and the shifting step size in the $k$-th iteration. Note that $\eta_0 = \eta$. Furthermore, the output of the target learning model $f(\cdot)$ with respect to an input sample falls into $\{+1, -1\}$, representing Normal and Abnormal, respectively. Finally, we select valid adversarial samples (i.e., boundary pattern points) according to equation (3). For simplicity, $\varepsilon$ is set to $\eta$.
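To make the shifting procedure concrete, the following Python sketch mirrors the reconstructed pseudo code of Fig. 1. It is an illustration under our own assumptions rather than the authors' implementation: the black-box classifier is assumed to expose a scikit-learn-style predict() that returns +1 for Normal and -1 for Abnormal, and the names bpd_shift, k_max and eta are hypothetical.

# Minimal sketch of the BPD shifting step (illustrative, not the reference code).
import numpy as np

def bpd_shift(edge_point, normal_vec, model, k_max=50, eta=0.1):
    """Shift an edge pattern point outwards along its normal vector until it
    lies just inside the Normal side of the model's decision boundary."""
    lam, step = 0.0, eta
    boundary_point = None
    for _ in range(k_max):
        candidate = edge_point + lam * normal_vec
        if model.predict(candidate.reshape(1, -1))[0] == 1:       # still classified Normal
            probe = edge_point + (lam + step) * normal_vec
            if model.predict(probe.reshape(1, -1))[0] == -1:      # one more step would cross
                boundary_point = candidate                        # within eta of the boundary
            lam += step                                           # keep moving outwards
        else:                                                     # overshot: back off and refine
            lam -= step
            step /= 2.0
    return boundary_point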

III-C Batch-EPD Boundary Pattern Detection

1.  Input: A training data set $D$, target learning model $f$, maximal number of iterations $K_{max}$, shifting step size $\eta$, batch size $B$;
2.  Output: Generated adversarial samples $X_{adv}$;
3.  Select the training data $D_n$ with normal labels from $D$;
4.  Initialize $X_{adv} = \emptyset$, $i = 1$;
5.  for $i = 1$ to $B$ do
6.     Randomly select samples from $D_n$, which are denoted by $D_i$;
7.     Calculate the edge pattern points $E_i$ and corresponding normal vectors $V_i$ regarding $D_i$ using EPD;
8.     for each $e_j \in E_i$ with its normal vector $v_j \in V_i$ do
9.        Calculate $x_b$ using BPD with inputs of $e_j$, $v_j$, $f$, $K_{max}$ and $\eta$;
10.        $X_{adv} = X_{adv} \cup \{x_b\}$;
11.     end for
12.  end for
Fig. 2: Pseudo code of the Batch-EPD boundary pattern detection algorithm

Although the aforementioned BPD algorithm can effectively generate the boundary pattern, it is constrained by the limited number of edge pattern points, especially for data sets with sparse edge points. Hence, we further introduce a Batch-EPD method, which is able to directly obtain more valid adversarial samples near the discriminant boundary of learning models. The main idea of Batch-EPD is as follows: At the first stage, we randomly select $B$ subsets $D_1, \dots, D_B$ from the training data $D_n$ with Normal labels. Then, we utilize the conventional EPD algorithm to calculate edge pattern points $E_i$ and corresponding normal vectors $V_i$ with respect to each subset $D_i$ ($i = 1, \dots, B$). Note that some edge pattern points generated by Batch-EPD may be interior points of $D_n$. However, the proposed BPD algorithm can still shift these inner points to the discriminant boundary. Fig. 2 shows the pseudo code of the proposed Batch-EPD boundary pattern (BEBP) detection algorithm.
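The following Python sketch illustrates the batch loop of BEBP under stated assumptions. Since the EPD algorithm of [17] is not reproduced in this paper, the sketch uses the convex hull of each random subset as a crude, low-dimensional stand-in for EPD and the offset from the subset mean as a surrogate normal vector; it also reuses the hypothetical bpd_shift function sketched in Section III-B.

# Illustrative sketch of the BEBP loop (stand-ins for EPD; not the authors' code).
import numpy as np
from scipy.spatial import ConvexHull

def bebp(normal_data, model, batch_size=5, subset_size=200, k_max=50, eta=0.1):
    rng = np.random.default_rng(0)
    adversarial = []
    for _ in range(batch_size):
        idx = rng.choice(len(normal_data), size=subset_size, replace=False)
        subset = normal_data[idx]
        centre = subset.mean(axis=0)
        edges = subset[ConvexHull(subset).vertices]   # crude stand-in for EPD (low dimensions only)
        for e in edges:
            v = (e - centre) / (np.linalg.norm(e - centre) + 1e-12)   # surrogate normal vector
            x_b = bpd_shift(e, v, model, k_max=k_max, eta=eta)
            if x_b is not None:
                adversarial.append(x_b)
    return np.asarray(adversarial)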

To demonstrate the improvement of BEBP compared to BPD, Fig. 3 illustrates comparative results on a synthetic data set, where blue (red) stars are normal (abnormal) samples, and blue and red solid circles refer to edge pattern points and generated adversarial samples, respectively.

Fig. 3: Comparative results on a synthetic data set between BPD and BEBP

III-D Chronic Poisoning Attack Using BEBP

Based on the aforementioned BEBP algorithm, we now present a moderate but effective poisoning method against learning models, called the chronic poisoning attack. Similar to the boiling frog poisoning attack proposed in [7], the proposed chronic poisoning attack using BEBP is also a long-term poisoning method, which changes the distribution of the training data each time the learning models are updated. By gradually injecting adversarial samples, which are classified as normal and lie near the discriminant boundary defined by a pretrained model, the boundary of the updated model retrained on the corrupted training data moves towards the centre of the abnormal data points. As a result, the performance of IDSs in detecting abnormal samples significantly decreases after several rounds of poisoning. Fig. 4 shows the pseudo code of the chronic poisoning attack using BEBP, where $D_r$ and $f_r$ refer to the training data and the pretrained model at the $r$-th round of poisoning, respectively.

1.  Input: An initial training data set $D_0$, an initial learning model $f_0$, number of poisoning rounds $R$
2.  for $r = 1$ to $R$ do
3.     Generate adversarial samples $X_{adv}^{(r)}$ using the BEBP algorithm with inputs $D_{r-1}$ and $f_{r-1}$;
4.     $D_r = D_{r-1} \cup X_{adv}^{(r)}$;
5.     Retrain a new model $f_r$ based on $D_r$;
6.  end for
Fig. 4: Pseudo code of the chronic poisoning attack using BEBP
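A minimal sketch of the retraining loop of Fig. 4 follows, assuming scikit-learn-style estimators and the hypothetical bebp() helper sketched above; the per-round injection budget is capped by the poisoning ratio $\alpha$ from Section II.

# Sketch of the chronic poisoning loop (illustrative assumptions throughout).
import numpy as np

def chronic_poisoning(X, y, make_model, rounds=15, alpha=0.07, **bebp_kwargs):
    model = make_model().fit(X, y)                       # initial model f_0 on clean data D_0
    for _ in range(rounds):
        X_adv = bebp(X[y == 1], model, **bebp_kwargs)    # boundary patterns built from Normal data
        m = min(len(X_adv), int(alpha * len(X)))         # respect the poisoning budget
        if m == 0:
            break
        X = np.vstack([X, X_adv[:m]])                    # injected points look Normal,
        y = np.concatenate([y, np.ones(m)])              # so the defender labels them y = +1
        model = make_model().fit(X, y)                   # periodic retraining on poisoned data
    return model, X, y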

IV Performance Evaluation and Analysis

In this section, we evaluate the performance of the proposed algorithms through extensive experiments. Firstly, we examine the attacking effects of the proposed poisoning method against different learning models on synthetic data sets. Then, we evaluate the performance of the proposed method on three real data sets to further demonstrate its strong capability of reducing the detection performance of multiple learning models. After that, we select a state-of-the-art IDS, called FMIFS-LSSVM-IDS [1], as the poisoning target and give comparative results between the proposed method and two other baseline methods.

IV-A Experimental Setup

IV-A1 Data Sets

To demonstrate the performance of the proposed poisoning method without loss of generality, we adopted the synthetic moon data set provided by sklearn (http://scikit-learn.org), where 100 synthetic samples were randomly generated with a noise level of 0.2. Regarding the real data sets, we chose three public data sets, i.e., KDDCUP99, NSL-KDD and Kyoto 2006+. KDDCUP99 is a well-known benchmark data set for evaluating the performance of IDSs, which contains five categories of samples (one normal and four abnormal). Moreover, each sample has 41 features. NSL-KDD is a revised version of KDDCUP99, and it has the same numbers of categories and features. Apart from these two widely used data sets, Kyoto 2006+ proposed in [20] is another recognized data set for performance evaluation. The data set has been collected from honeypots and regular servers deployed at Kyoto University since 2006. Moreover, Kyoto 2006+ contains three types of samples, i.e., normal, known attack and unknown attack, and each sample has 24 features.

Considering that the goal of poisoning attacks is to reduce the performance of IDSs in detecting abnormal behaviors, we treat all samples with abnormal labels in each data set as a whole, regardless of their specific types of attacks. Similar to FMIFS-LSSVM-IDS, we preprocess all samples and perform data normalization such that each feature value is normalized into the range [0, 1]. To evaluate the effectiveness of the proposed poisoning method, we use two types of data for performance evaluation, namely (a) evaluating data that are randomly selected from the training data, and (b) official testing data from the public data sets.
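As a hedged illustration of this preprocessing step, the following snippet applies scikit-learn's MinMaxScaler to map each feature into [0, 1]; X_train and X_test are placeholders for the raw feature matrices of any of the three data sets.

# Per-feature min-max normalization into [0, 1] (placeholder data for illustration).
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.random.rand(100, 41) * 50            # placeholder for raw KDD-style features
X_test = np.random.rand(20, 41) * 50

scaler = MinMaxScaler(feature_range=(0, 1))
X_train_scaled = scaler.fit_transform(X_train)    # learn per-feature min/max on training data
X_test_scaled = scaler.transform(X_test)          # apply the same scaling to evaluation/test data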

IV-A2 Performance Metrics

For an IDS, accuracy and detection rate are two primary performance metrics. Hence, we also adopt these two metrics in this paper to evaluate the performance reduction of machine learning-based IDSs under the proposed poisoning attack. The accuracy ($ACC$) and the detection rate ($DR$) with respect to abnormal samples are defined by

$ACC = \dfrac{TP + TN}{TP + TN + FP + FN}$   (6)

$DR = \dfrac{TP}{TP + FN}$   (7)

where true positive ($TP$) is the number of truly abnormal samples that are classified as abnormal by the IDS, true negative ($TN$) is the number of truly normal samples that are treated as normal, false positive ($FP$) refers to the number of truly normal samples classified as abnormal, and false negative ($FN$) represents the number of truly abnormal samples classified as normal.
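For reference, a small helper that computes these two metrics from scikit-learn's confusion matrix, assuming the label convention used earlier in this paper (+1 for Normal, -1 for Abnormal); the function name is illustrative.

# ACC and DR from a confusion matrix, treating the abnormal class as positive.
from sklearn.metrics import confusion_matrix

def acc_and_dr(y_true, y_pred, normal_label=1, abnormal_label=-1):
    tn, fp, fn, tp = confusion_matrix(
        y_true, y_pred, labels=[normal_label, abnormal_label]).ravel()   # Normal is the negative class
    acc = (tp + tn) / (tp + tn + fp + fn)
    dr = tp / (tp + fn)                                                  # detection rate = recall on abnormal
    return acc, dr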

IV-B Performance of the Proposed Poisoning Method over Synthetic Data Sets

Fig. 5: Comparative results of five-round poisoning against different learning models on synthetic data sets
Data set   Subset            NORMAL  PROB  DOS   U2R  R2L
KDDCUP99   Training data     2000    300   3790  32   350
           Evaluating data   2000    500   3900  20   400
NSL-KDD    Training data     2000    300   3790  32   350
           Evaluating data   2000    500   3900  20   400
TABLE I: Summary of Sample Distributions of the Randomly Selected Data Regarding the KDDCUP99 and NSL-KDD Data Sets

To demonstrate the attacking effects of chronic poisoning, we first evaluated the performance of the proposed poisoning method against six different learning models on synthetic data sets. The evaluated models included NB-Gaussian, LR, SVM with a sigmoid kernel (SVM-sigmoid), SVM with a polynomial kernel (SVM-POLY), SVM with a radial basis function kernel (SVM-RBF) and SVM with a linear kernel (SVM-linear). To focus on the poisoning itself, we simply used the default values of the model parameters as specified in the sklearn tool. Fig. 5 illustrates the comparative results of five-round poisoning against the different learning models, where the blue and white points represent the training data with normal and abnormal labels, respectively. In Fig. 5, the red points denote the adversarial samples generated by BEBP, and the discriminant boundary between normal and abnormal samples is shown as the line separating the blue and red regions. Moreover, we would like to highlight that, in the SVM-sigmoid and SVM-POLY panels at certain rounds, red points denote truly abnormal data. From Fig. 5, we can see that no matter what the learning model is, the discriminant boundary gradually moves towards the centre of the abnormal data. Accordingly, more and more abnormal points are wrongly classified as normal ones as the number of poisoning rounds increases.

IV-C Performance of the Proposed Poisoning Method over Real Data Sets

Following the sample selection method in [2], we adopted 6,472 samples as training data and 6,820 samples as evaluating data, randomly selected from the “kddcup.data_10_percent_corrected” (“KDD Train+”) file of the KDDCUP99 (NSL-KDD) data set. Table I summarizes the sample distributions of the selected data regarding the KDDCUP99 and NSL-KDD data sets. Similar to [1], we randomly selected samples from the traffic data collected during 27–31 August 2009 for the Kyoto 2006+ data set.

Fig. 6: Comparative results on NSL-KDD evaluating data with respect to different values of poisoning ratio

As mentioned before, the parameter $\alpha$ controls the poisoning ratio of adversarial samples to normal training data. Hence, it is meaningful to examine how the poisoning results change with different values of $\alpha$. For simplicity and without loss of generality, we took NSL-KDD as the evaluating data set and carried out a group of experiments with different settings of $\alpha$. The comparative results on the NSL-KDD evaluating data with respect to different values of the poisoning ratio are illustrated in Fig. 6. We can see from Fig. 6 that the DR of the different learning models tends to decrease as $\alpha$ increases.

Poisoning round (evaluating result; testing result)  NB  LR  SVM-sigmoid  SVM-POLY  SVM-RBF  SVM-linear
Round 0 (0.9256;0.8757) (0.9794;0.9168) (0.9644;0.9215) (0.9285;0.919) (0.981;0.9216) (0.9809;0.9289)
Round 5 (0.6102;0.4478) (0.9311;0.8667) (0.8825;0.8948) (0.8517;0.6898) (0.9091;0.8542) (0.9304;0.8177)
Round 10 (0.429;0.3088) (0.8745;0.8474) (0.8071;0.6984) (0.7861;0.6706) (0.8762;0.8013) (0.8776;0.7881)
Round 15 (0.3677;0.247) (0.8118;0.7089) (0.7013;0.6398) (0.3986;0.2989) (0.7461;0.6303) (0.7278;0.6679)
TABLE II: Comparative Results on KDDCUP99 under the Proposed Poisoning Attack
Poisoning round (evaluating result; testing result)  NB  LR  SVM-sigmoid  SVM-POLY  SVM-RBF  SVM-linear
Round 0 (0.8895;0.7711) (0.8536;0.7733) (0.9508;0.781) (0.892;0.7799) (0.9576;0.7724) (0.9578;0.7615)
Round 5 (0.7726;0.6471) (0.8822;0.7049) (0.809;0.6429) (0.8233;0.6753) (0.8337;0.6897) (0.8756;0.6875)
Round 10 (0.6694;0.5403) (0.8051;0.646) (0.7682;0.6034) (0.7227;0.5162) (0.7904;0.6552) (0.7829;0.6222)
Round 15 (0.6158;0.5164) (0.7324;0.5563) (0.5207;0.4683) (0.3904;0.4445) (0.6057;0.5155) (0.6875;0.5125)
TABLE III: Comparative Results on NSL-KDD under the Proposed Poisoning Attack
Poisoning round (evaluating result)  NB  LR  SVM-sigmoid  SVM-POLY  SVM-RBF  SVM-linear
Round 0 0.9541 0.9834 0.9734 0.9315 0.9821 0.989
Round 5 0.6475 0.9339 0.8984 0.869 0.9074 0.93
Round 10 0.6181 0.8095 0.5457 0.4131 0.6142 0.763
Round 15 0.5701 0.5422 0.4794 0.4131 0.5362 0.5376
TABLE IV: Comparative Results on Kyoto 2006+ under the Proposed Poisoning Attack

To further demonstrate the effectiveness of the proposed poisoning method against different learning models, we carried out more experiments on the KDDCUP99, NSL-KDD and Kyoto 2006+ data sets. Specifically, we set the total number of poisoning rounds to 15 in each comparative experiment, and we independently reran the poisoning attacks multiple times to reduce the fluctuation of the experimental results caused by random data sampling. Moreover, the poisoning ratio $\alpha$ was set to 0.07 in all experiments. The comparative results under the proposed poisoning attack are given in Tables II–IV and Fig. 7. The results on the three benchmark data sets demonstrate that both the ACC and the DR of classifiers detecting abnormal behaviors significantly decrease when the proposed chronic poisoning attack is sustained over a long time. Furthermore, the similar trends observed across different learning models validate that the proposed poisoning method generalizes to attacking black-box detection models.

Fig. 7: Comparative results under the proposed poisoning attack

IV-D Comparative Results of Poisoning FMIFS-LSSVM-IDS

In this part, we further demonstrate the performance of the proposed poisoning method against a state-of-the-art machine learning based IDS named FMIFS-LSSVM-IDS. Here, we select two more poisoning methods as comparative baselines, i.e., BASIC and RANDOM [14]. In the BASIC method, if adversarial samples are added into the training data, then the same number of samples randomly selected from the normal training data will also be added. In the RANDOM method, on the other hand, we generate a number of samples with random features; those samples that are classified as normal by FMIFS-LSSVM-IDS are then chosen as valid adversarial samples. Finally, some normal samples randomly selected from the normal training data are added as well. Fig. 8 illustrates the comparative results among the different poisoning methods.
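For concreteness, the following hedged sketch shows one way the RANDOM baseline described above could be realized; it is our own illustration rather than the implementation from [14], and it assumes features normalized into [0, 1] and a target model whose predict() returns +1 for Normal.

# Rough sketch of the RANDOM baseline: keep random points the IDS accepts as Normal.
import numpy as np

def random_baseline(model, n_candidates, n_features, rng=None):
    rng = rng or np.random.default_rng()
    candidates = rng.uniform(0.0, 1.0, size=(n_candidates, n_features))   # features lie in [0, 1]
    keep = model.predict(candidates) == 1                                 # accepted as Normal by the IDS
    return candidates[keep]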

Fig. 8: Comparative results among different poisoning methods against FMIFS-LSSVM-IDS

We can see from Fig. 8 that the proposed poisoning method is more effective at degrading the detection performance of FMIFS-LSSVM-IDS than BASIC and RANDOM on all three data sets. These results further demonstrate the advantage of the proposed method in attacking state-of-the-art IDSs.

V Conclusion and Future Work

In this paper, we have proposed a novel poisoning method based on the EPD algorithm. Specifically, we first propose the BPD algorithm to generate adversarial samples that lie near the discriminant boundary defined by classifiers but are still classified as normal. To address the drawback of the limited number of adversarial samples generated by BPD, we further present the BEBP algorithm to obtain more useful adversarial samples. After that, we introduce a chronic poisoning attack based on BEBP. Extensive experiments on synthetic and real data sets demonstrate the effectiveness of the proposed poisoning method against different learning models and a state-of-the-art IDS, i.e., FMIFS-LSSVM-IDS.

In future work, it is worthwhile to study the scalability of the proposed poisoning method in more depth. Moreover, research on defenses against the proposed poisoning method will be interesting as well.

References

  • [1] M. A. Ambusaidi, X. He, P. Nanda, and Z. Tan, “Building an intrusion detection system using a filter-based feature selection algorithm,” IEEE Trans. Comput., vol. 65, no. 10, pp. 2986–2998, 2016.
  • [2] K. Kishimoto, H. Yamaki, and H. Takakura, “Improving performance of anomaly-based IDS by combining multiple classifiers,” in Proc. of the SAINT’11, 2011, pp. 366–371.
  • [3] B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. P. Rubinstein, U. Saini, C. Sutton, J. D. Tygar, and K. Xia, Misleading Learners: Co-opting Your Spam Filter, ser. Machine Learning in Cyber Trust.   Springer, Boston, MA, 2009.
  • [4] B. Biggio, K. Rieck, D. Ariu, C. Wressnegger, I. Corona, G. Giacinto, and F. Roli, “Poisoning behavioral malware clustering,” in Proc. of the AISec’14.   New York, NY, USA: ACM, 2014, pp. 27–36.
  • [5] W. Hu and Y. Tan, “Generating adversarial malware examples for black-box attacks based on GAN,” arXiv.org, 2017. [Online]. Available: https://arxiv.org/abs/1702.05983
  • [6] M. Kloft and P. Laskov, “Online anomaly detection under adversarial impact,” in Proc. of the AISTATS’10, 2010, pp. 405–412.
  • [7] B. I. Rubinstein, B. Nelson, L. Huang, A. D. Joseph, S.-h. Lau, S. Rao, N. Taft, and J. D. Tygar, “Antidote: Understanding and defending against poisoning of anomaly detectors,” in Proc. of the IMC’09.   New York, NY, USA: ACM, 2009, pp. 1–14.
  • [8] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, “Can machine learning be secure?” in Proc. of the ASIACCS’06.   New York, NY, USA: ACM, 2006, pp. 16–25.
  • [9] W. Xu, Y. Qi, and D. Evans, “Automatically evading classifiers: A case study on PDF malware classifiers,” in Proc. of the NDSS’16, 2016, pp. 1–15.
  • [10] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning,” in Proc. of the ASIACCS’17.   New York, NY, USA: ACM, 2017, pp. 506–519.
  • [11] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “DeepFool: A simple and accurate method to fool deep neural networks,” in Proc. of the CVPR’16, 2016, pp. 2574–2582.
  • [12] B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” in Proc. of the ICML’12, 2012, pp. 1467–1474.
  • [13] C. Yang, Q. Wu, H. Li, and Y. Chen, “Generative poisoning attack method against neural networks,” arXiv.org, 2017. [Online]. Available: https://arxiv.org/abs/1703.01340
  • [14] M. Zhao, B. An, W. Gao, and T. Zhang, “Efficient label contamination attacks against black-box learning models,” in Proc. of the IJCAI’17, 2017, pp. 3945–3951.
  • [15] I. Rosenberg, A. Shabtai, L. Rokach, and Y. Elovici, “Generic black-box end-to-end attack against RNNs and other API calls based malware classifiers,” arXiv.org, 2017. [Online]. Available: https://arxiv.org/abs/1707.05970
  • [16] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction APIs,” arXiv.org, 2016.
  • [17] Y. Li and L. Maguire, “Selecting critical patterns based on local geometrical and statistical information,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 6, pp. 1189–1201, 2011.
  • [18] S. Wang, Q. Liu, E. Zhu, J. Yin, and W. Zhao, “Mst-gen: An efficient parameter selection method for one-class extreme learning machine,” IEEE Trans. Cybern., vol. 47, no. 10, pp. 3266–3279, 2017.
  • [19] B. Biggio, G. Fumera, and F. Roli, “Security evaluation of pattern classifiers under attack,” IEEE Trans. Knowl. Data Eng., vol. 26, no. 4, pp. 984–996, 2014.
  • [20] J. Song, H. Takakura, Y. Okabe, M. Eto, D. Inoue, and K. Nakao, “Statistical analysis of honeypot data and building of Kyoto 2006+ dataset for NIDS evaluation,” in Proc. of the BADGERS’11.   New York, NY, USA: ACM, 2011, pp. 29–36.