Defending Support Vector Machines against Poisoning Attacks: the Hardness and Algorithm

06/14/2020 ∙ by Hu Ding, et al. ∙ USTC

Adversarial machine learning has attracted a great amount of attention in recent years. In a poisoning attack, the adversary injects a small number of specially crafted samples into the training data, which make the decision boundary severely deviate and cause unexpected misclassification. Due to the great importance and popular use of support vector machines (SVM), we consider defending SVM against poisoning attacks in this paper. We consider two common defense strategies: designing robust SVM algorithms and data sanitization. Though several robust SVM algorithms have been proposed before, most of them either lack adversarial resilience or rely on strong assumptions about the data distribution or the attacker's behavior. Moreover, research on the computational complexity of these problems is still quite limited. To the best of our knowledge, we are the first to prove that even the simplest hard-margin one-class SVM with outliers problem is NP-complete and has no fully PTAS unless P=NP. For the data sanitization defense, we link it to the intrinsic dimensionality of the data; in particular, we provide a sampling theorem in doubling metrics that explains the effectiveness of DBSCAN (as a density-based outlier removal method) for defending against poisoning attacks. In our empirical experiments, we compare several defenses, including the DBSCAN and robust SVM methods, and investigate how the intrinsic dimensionality and the data density influence their performance.


1 Introduction

In the past decades we have witnessed enormous progress in machine learning. One driving force behind this is the successful application of machine learning technology to many different fields, such as data mining, networking, and bioinformatics. However, with its territory rapidly enlarging, machine learning has also imposed a great deal of challenges on researchers to meet its new demands. In particular, the field of adversarial machine learning, which concerns the potential vulnerabilities of learning algorithms, has attracted a great amount of attention DBLP:conf/ccs/BarrenoNSJT06 ; huang2011adversarial ; biggio2018wild ; DBLP:journals/cacm/GoodfellowMP18 . As mentioned in the survey paper biggio2018wild , the very first work on adversarial machine learning dates back to 2004, in which Dalvi et al. DBLP:conf/kdd/DalviDMSV04 formulated the adversarial classification problem as a game between the classifier and the adversary. In general, adversarial attacks against machine learning can be categorized into evasion attacks and poisoning attacks biggio2018wild . An evasion attack happens at test time, where the adversary aims to evade the trained classifier by manipulating test examples. For example, Szegedy et al. DBLP:journals/corr/SzegedyZSBEGF13 observed that a small perturbation to a test image can arbitrarily change the neural network's prediction.

In this paper, we focus mainly on poisoning attacks, which happen at training time. Usually, the adversary injects a small number of specially crafted samples into the training data, which make the decision boundary severely deviate and cause unexpected misclassification. In particular, because open datasets are now commonly used to train machine learning models, poisoning has been regarded as a key security issue that seriously limits real-world applications biggio2018wild . For instance, even a small number of poisoning samples can significantly increase the test error of support vector machines (SVM) DBLP:conf/icml/BiggioNL12 ; DBLP:conf/aaai/MeiZ15 ; DBLP:conf/ecai/XiaoXE12 ; DBLP:journals/corr/abs-1811-00741 . Beyond linear classifiers, a number of works studied poisoning attacks on other machine learning problems, such as clustering DBLP:conf/sspr/BiggioBPMMPR14 , PCA DBLP:conf/imc/RubinsteinNHJLRTT09 , and regression DBLP:conf/sp/JagielskiOBLNL18 .

Though lots of works have focused on constructing poisoning attacks, our ultimate goal is to design defenses. Poisoning samples can be regarded as outliers, which leads to two natural approaches for defense: (1) data sanitization, i.e., first perform outlier removal on the data and then run an existing machine learning algorithm on the cleaned data DBLP:conf/sp/CretuSLSK08 , or (2) directly design a robust optimization algorithm that is resilient against outliers DBLP:journals/jmlr/ChristmannS04 ; DBLP:conf/sp/JagielskiOBLNL18 .

Steinhardt et al. DBLP:conf/nips/SteinhardtKL17 studied two basic data sanitization defenses for binary classification, which remove the points outside a specified sphere or slab; they showed that high dimensionality gives the attacker more room for constructing attacks that evade outlier removal. Laishram and Phoha DBLP:journals/corr/LaishramP16 applied the seminal DBSCAN (Density-Based Spatial Clustering of Applications with Noise) method ester1996density to remove outliers for SVM and showed that it can successfully identify most of the poisoning data. However, their method is heuristic and lacks theoretical analysis. Several other outlier removal or detection methods for fighting poisoning attacks have also been studied recently, such as DBLP:conf/pkdd/PaudiceML18 ; DBLP:journals/corr/abs-1802-03041 . We would like to point out that outlier removal is a topic that has in fact been extensively studied in various fields (we refer the reader to the surveys aggarwal2001outlier ; kriegeloutlier ; chandola2009anomaly ).

The other defense strategy, i.e., designing robust optimization algorithms, also has a long history in machine learning. A substantial part of robust optimization algorithms rely on the idea of regularization. For example, Xu et al. DBLP:journals/jmlr/XuCM09 studied the relation between robustness and regularization for SVM; several other robust SVM algorithms were proposed as well DBLP:journals/prl/TaxD99 ; conf/aaai/XuCS06 ; DBLP:conf/nips/NatarajanDRT13 ; DBLP:journals/pr/XuCHP17 ; DBLP:journals/entropy/KanamoriFT17 . However, as discussed in DBLP:conf/aaai/MeiZ15 ; DBLP:conf/sp/JagielskiOBLNL18 , these approaches are not ideal for defending against poisoning attacks, since the outliers can be located arbitrarily in the feature space by the adversary. Another idea for achieving a robustness guarantee is to add strong assumptions about the data distribution or the attacker's behavior DBLP:conf/nips/FengXMY14 ; DBLP:journals/pr/WeerasingheEAL19 , but such assumptions are usually not well satisfied in practice. An alternative approach is to explicitly remove outliers during the optimization, such as the "trimmed" version of robust regression recently proposed in DBLP:conf/sp/JagielskiOBLNL18 ; but this model often results in a challenging combinatorial optimization problem: if a given number of the input data items are outliers, we have to consider an exponentially large number of different possible cases when optimizing the objective function. As an example, the clustering with outliers problems, like k-means and k-center with outliers, have at least quadratic time complexities in general charikar2001algorithms ; chen2008constant ; of course, if we aim to obtain only a local optimum, we can formulate the problem as a bilevel optimization that alternately removes outliers and minimizes the objective function on the remaining data in each iteration, like the k-means-- algorithm chawla2013k (this alternating minimization idea also works for other problems, such as regression with outliers DBLP:journals/cacm/FischlerB81 ).
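
As an illustration of the alternating minimization idea just described (not the algorithm analyzed in this paper), the following is a minimal Python sketch of a "trimmed" SVM heuristic: it alternately fits an SVM on the currently kept points and discards the points with the largest hinge loss. The function name, parameters, and the use of scikit-learn's LinearSVC are our own choices, and only a local optimum is guaranteed.

```python
import numpy as np
from sklearn.svm import LinearSVC

def trimmed_svm(X, y, z, n_iter=20, C=1.0):
    """Alternating minimization for SVM with z outliers (labels y in {-1,+1}, 1 <= z < len(y)).

    Alternates between (1) fitting an SVM on the kept points and (2) trimming
    the z points with the largest hinge loss; converges to a local optimum.
    """
    keep = np.ones(len(y), dtype=bool)
    keep[np.random.choice(len(y), z, replace=False)] = False  # arbitrary initial trimming
    clf = LinearSVC(C=C)
    for _ in range(n_iter):
        clf.fit(X[keep], y[keep])
        margins = y * clf.decision_function(X)       # signed margin of every point
        losses = np.maximum(0.0, 1.0 - margins)      # hinge loss of every point
        new_keep = np.ones(len(y), dtype=bool)
        new_keep[np.argsort(losses)[-z:]] = False    # trim the z worst-fitting points
        if np.array_equal(new_keep, keep):           # no change: local optimum reached
            break
        keep = new_keep
    return clf, keep
```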

1.1 Our Contributions

Due to the great importance and popular use of SVM journals/tist/ChangL11 , we consider defending SVM against poisoning attacks in this paper. Our contributions are twofold.

First, we consider the robust optimization approach. To study its complexity, we only consider the hard-margin case (the soft-margin case is more complicated and thus should have an even higher complexity). As mentioned above, we can formulate the SVM with outliers problem as a combinatorial optimization problem for achieving adversarial resilience: finding an optimal subset of the poisoned input data that achieves the largest separating margin induced by the SVM (we provide the formal definition in Section 2). Though a local optimum can be obtained by the alternating minimization approach, we are still interested in the global optimal solution. We are unaware of any strong hardness-of-approximation result for this problem. In Section 3, we try to bridge this gap in the current state of knowledge. We prove that even the simplest one-class SVM with outliers problem is NP-complete, and has no fully polynomial-time approximation scheme (PTAS) unless P=NP. That is, it is quite unlikely that one can achieve a (nearly) optimal solution in polynomial time.

Second, we investigate the DBSCAN based data sanitization defense proposed in DBLP:journals/corr/LaishramP16 and explain its effectiveness in theory (Section 4). DBSCAN is one of the most popular density-based clustering methods and has been applied to many real-world outlier removal problems ester1996density ; schubert2017dbscan ; roughly speaking, the inliers are assumed to be located in some dense regions and the remaining points are recognized as the outliers. Actually, the intuition of using DBSCAN for data sanitization is straightforward. We assume the original input training data (before the poisoning attack) is large and dense enough in its domain; thus the poisoning data should appear as sparse outliers, possibly together with some small clusters located outside the dense regions, which can be identified by DBSCAN. Obviously, if the attacker has a fixed budget (the number of poisoning points), the larger the data size is, the more effective DBSCAN is (imagine the extreme case in which the number of poisoning points is close to the data size, where it is clearly inappropriate to use a density-based clustering method to identify the outliers).

Thus a fundamental question in theory is what lower bound on the data size guarantees the correctness of DBSCAN (we can assume that the clean data is a set of i.i.d. samples drawn from the domain). However, achieving a favorable lower bound is a non-trivial task. The VC dimension DBLP:journals/jcss/LiLS01 of the range space induced by the Euclidean distance is high in a high-dimensional feature space, and thus the required lower bound on the data size could be very large. Our idea is motivated by recent observations on the link between the adversarial vulnerability of learning and the intrinsic dimensionality of the data DBLP:journals/corr/abs-1905-01019 ; DBLP:conf/wifs/AmsalegBBEHNR17 ; DBLP:conf/iclr/Ma0WEWSSHB18 . We prove a lower bound on the data size that depends on the intrinsic dimension of the domain and is independent of the feature space's dimensionality. Our result strengthens the observation of DBLP:conf/nips/SteinhardtKL17 , which only considered the Euclidean space's dimensionality: more precisely, it is the "high intrinsic dimensionality" that gives the attacker more room to evade outlier removal. In particular, different from the previous results DBLP:journals/corr/abs-1905-01019 ; DBLP:conf/wifs/AmsalegBBEHNR17 ; DBLP:conf/iclr/Ma0WEWSSHB18 focusing on evasion attacks, our result is, to the best of our knowledge, the first one linking poisoning attacks to intrinsic dimensionality.

2 Preliminaries

Given two point sets $P^{+}$ and $P^{-}$ in $\mathbb{R}^{d}$, the problem of linear support vector machine (SVM) journals/tist/ChangL11 is to find the maximum margin (induced by two parallel hyperplanes) separating these two point sets (if they are separable). For convenience, we say that $P^{+}$ and $P^{-}$ are the sets of points labeled as "+1" and "-1", respectively. If $P^{+}$ (or $P^{-}$) is a single point, say the origin, the problem is called one-class SVM. The SVM can be formulated as a quadratic programming problem, and a number of efficient techniques have been developed in the past, such as the soft-margin SVM mach:Cortes+Vapnik:1995 , $\nu$-SVM bb57389 ; conf/nips/CrispB99 , and Core-SVM tkc-cvmfstv-05 . If $P^{+}$ and $P^{-}$ are not separable, we can apply the kernel method: each point is mapped to a higher-dimensional space, and the inner product there is defined by a kernel function. Many existing SVM algorithms can be adapted to handle the non-separable case by using kernel functions.
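
For readers who want to experiment, a minimal sketch of the standard (soft-margin) formulations using scikit-learn is shown below; the parameter values are illustrative only, and a very large C merely approximates the hard-margin behavior discussed later.

```python
from sklearn.svm import SVC, OneClassSVM

# Linear two-class SVM; a very large C approximates the hard-margin case.
linear_svm = SVC(kernel="linear", C=1e6)

# Kernelized SVM for non-separable data: the inner product is replaced by a kernel.
rbf_svm = SVC(kernel="rbf", gamma="scale", C=1.0)

# One-class SVM: separates the data from the origin in the kernel-induced feature space.
one_class = OneClassSVM(kernel="rbf", nu=0.05)

# Usage (X: feature matrix, y: labels in {-1, +1}):
#   linear_svm.fit(X, y); rbf_svm.fit(X, y); one_class.fit(X)
```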

Poisoning attacks. Usually, the adversary injects some bad points into the original data set. For instance, the adversary can draw a sample from the domain of one class and flip its label; this poisoning sample can then be viewed as an outlier of the other class. Since a poisoning attack is expensive, we often assume that the adversary can poison only a limited number of points (i.e., the poisoned fraction is a fixed small number in $(0,1)$). Overall, we can formulate the defense against poisoning attacks as the following combinatorial optimization problem. As mentioned in Section 1.1, we only consider the simpler hard-margin case for studying the complexity.

Definition 1 (Support Vector Machine (SVM) with Outliers).

Let $(P^{+}, P^{-})$ be an instance of SVM in $\mathbb{R}^{d}$, and suppose $|P^{+} \cup P^{-}| = n$. Given a positive integer $z < n$, the problem of SVM with outliers is to find two subsets $P'^{+} \subseteq P^{+}$ and $P'^{-} \subseteq P^{-}$ with $|P'^{+}| + |P'^{-}| = n - z$, such that the width of the margin separating $P'^{+}$ and $P'^{-}$ is maximized.

Suppose the optimal margin has width $w_{\mathrm{opt}}$. If we achieve a solution with margin width at least $(1-\epsilon) \cdot w_{\mathrm{opt}}$, where $\epsilon$ is a small parameter in $(0,1)$, we say that it is a $(1-\epsilon)$-approximation.
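
Using the notation introduced above (the original symbols may differ), the combinatorial objective and the approximation notion can be written compactly as:

```latex
% width(A, B): the width of the maximum margin separating A and B.
\[
\max_{\substack{P'^{+}\subseteq P^{+},\; P'^{-}\subseteq P^{-}\\ |P'^{+}|+|P'^{-}| \,=\, n-z}}
\mathrm{width}\big(P'^{+}, P'^{-}\big),
\qquad
\text{a solution is a } (1-\epsilon)\text{-approximation if }
\mathrm{width}\big(P'^{+}, P'^{-}\big) \ge (1-\epsilon)\, w_{\mathrm{opt}} .
\]
```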

Remark 1.

The model proposed in Definition 1 in fact follows the popular data trimming idea from robust statistics books/wi/RousseeuwL87 . As an example, Jagielski et al. DBLP:conf/sp/JagielskiOBLNL18 proposed a robust regression model that is resilient against poisoning attacks based on data trimming.

We also need to clarify the definition of intrinsic dimensionality for our following analysis. We consider the doubling dimension, which is a measure of intrinsic dimensionality widely adopted in learning theory DBLP:journals/jcss/BshoutyLL09 . Given a point $p$ and $r \geq 0$, we use $\mathbb{B}(p, r)$ to indicate the ball of radius $r$ around $p$.

Definition 2 (Doubling Dimension).

The doubling dimension of a point set $P$ is the smallest number $\rho$ such that, for any $p \in P$ and $r \geq 0$, the set $P \cap \mathbb{B}(p, 2r)$ is always covered by the union of at most $2^{\rho}$ balls with radius $r$.

The doubling dimension is often used to describe the expansion rate of a point set. Note that the intrinsic dimensionality described in DBLP:conf/wifs/AmsalegBBEHNR17 ; DBLP:conf/iclr/Ma0WEWSSHB18 is quite similar to the doubling dimension, in that it also measures the expansion rate of a point set.
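
To build intuition, here are two standard examples (not taken from this paper) that illustrate Definition 2 and explain why the doubling dimension is insensitive to the ambient Euclidean dimensionality; $N(\mathbb{B}(p,2r), r)$ denotes the number of radius-$r$ balls needed to cover $P \cap \mathbb{B}(p,2r)$, and a standard volume argument also gives the matching upper bound $2^{O(d)}$ in the first example.

```latex
\[
\text{(1) } P \text{ densely fills a ball in } \mathbb{R}^{d}:\quad
N\big(\mathbb{B}(p,2r),\, r\big) \;\ge\; \frac{\mathrm{vol}\big(\mathbb{B}(p,2r)\big)}{\mathrm{vol}\big(\mathbb{B}(q,r)\big)} \;=\; 2^{d}
\;\;\Longrightarrow\;\; \text{doubling dimension } \Theta(d).
\]
\[
\text{(2) } P \text{ lies on a line segment in } \mathbb{R}^{D}:\quad
P \cap \mathbb{B}(p,2r) \text{ fits in an interval of length } 4r
\;\;\Longrightarrow\;\; N\big(\mathbb{B}(p,2r),\, r\big) \le 2
\;\;\Longrightarrow\;\; \text{doubling dimension } \le 1,
\]
independent of the ambient dimension $D$.
```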

3 The Hardness of SVM with Outliers

In this section, we prove that even the one-class SVM with outliers problem is NP-complete. Further, we show that there is no fully PTAS for the problem unless P=NP; that is, we cannot achieve a $(1-\epsilon)$-approximation in time polynomial in both the input size and $1/\epsilon$. Our idea is partly inspired by a result of Megiddo DBLP:journals/jsc/Megiddo90 . Given a set of points in $\mathbb{R}^{d}$, the "covering by two balls" problem is to determine whether the point set can be covered by two unit balls. By a reduction from 3-SAT, Megiddo proved that the "covering by two balls" problem is NP-complete. In the proof of the following theorem, we modify Megiddo's construction to adapt it to the one-class SVM with outliers problem.

Theorem 1.

The one-class SVM with outliers problem is NP-complete, and it has no fully PTAS unless P=NP.

Proof.

Let be a 3-SAT instance with the literal set and clause set . We construct the corresponding instance of one-class SVM with outliers. First, let be the unit vectors of , where each has in the -th position and in other positions. Also, for each clause with , we generate a point : (1) if occurs in , , (2) else if occurs in , , (3) else, ; in addition, . For example, if , the point

(1)

We will determine the value of below. Let denote the set . Now, we construct the instance with the number of points and the number of outliers . Below we prove that has a satisfying assignment if and only if has a solution with margin width .

Suppose there exists a satisfying assignment for . We define the set as follows. If is true in , we include in , else, we include in ; we also include in . Assume . We claim that the set is a solution of the instance with the margin width , that is, the size and the margin separating the origin and has width . It is easy to verify the size of . To compute the width, we consider the mean point of , which is denoted as . For each , if is true, the -th position of is , else, the -th position of is ; the -th position of is . Let be the hyperplane that is orthogonal to the vector and passes through . Obviously, separates and with the margin width . Furthermore, for any point , since there exists at least one true variable in , we have

(2)

where the last inequality comes from the fact . Therefore, all the points from lie on the same side of as , and then the set can be separated from by a margin with width .

Suppose the instance has a solution with margin width . With a slight abuse of notation, we still use to denote the subset of that is included in the set of inliers. Since the number of outliers is , we know that for any pair , there exists exactly one point belonging to ; also, the whole set should be included in the solution to ensure that there are inliers in total. We still use to denote the mean point of . Now, we have the assignment for : if , we assign to be true; otherwise, we assign to be true. We claim that is satisfied by this assignment. For any clause , if it is not satisfied, i.e., all three variables in are false, then we have

(3)

That means the angle . So any margin separating the origin and the set should have width at most

(4)

See Figure 1(a). This contradicts the assumption that has a solution with margin width .

Figure 1: (a) An illustration for (4); (b) one ball is enclosed by the domain and the other is not.

Overall, has a satisfying assignment if and only if has a solution with margin width . Thus, the one-class SVM with outliers problem is NP-complete. Moreover, the gap between and is

(5)

if we assume is a fixed constant. Therefore, if we set accordingly, then the formula is satisfiable if and only if any -approximation of the instance has width . That means that if we had a fully PTAS for the one-class SVM with outliers problem, we could determine whether the formula is satisfiable in polynomial time. This implies that we cannot achieve a fully PTAS for one-class SVM with outliers, unless P=NP. ∎

4 The Data Sanitization Defense

From Theorem 1, we know that it is extremely challenging to achieve the optimal solution even for one-class SVM with outliers. Therefore, we turn to the other approach, data sanitization defense, under some assumptions that are reasonable in practice. First, we prove a general sampling theorem in Section 4.1, which helps us analyze density-based clustering methods on data with low doubling dimension. Then, we apply this theorem to explain the effectiveness of DBSCAN for defending against poisoning attacks in Section 4.2.

4.1 A Sampling Theorem

Let the clean data be a set of i.i.d. samples drawn from a connected and compact domain that has doubling dimension . For ease of presentation in the following analysis, we assume that the domain lies on a manifold in the space, and we let its diameter be the maximum pairwise distance between its points. Also, we let the data distribution over the domain have a probability density function.

To measure the uniformity of the density, we define a value as follows. For any center and radius, we say that "the ball is enclosed by the domain" if the ball is entirely contained in it; intuitively, if the ball center is close to the boundary of the domain or the radius is too large, the ball will not be enclosed. See Figure 1(b) for an illustration. We define the value as the worst-case ratio between the probability masses of two equal-size balls, where the ball in the denominator is required to be enclosed by the domain. As an example, if the data is uniformly distributed over a domain lying on a flat manifold, this value equals one. On the other hand, if the distribution is very imbalanced or the manifold is very rugged, the value will be high.

Theorem 2.

Let , , and . If the sample size

(6)

then with constant probability, for any ball enclosed by , the size . The asymptotic notation .

Remark 2.

(i) A highlight of Theorem 2 is that the lower bound on the sample size is independent of the Euclidean dimensionality; so if the doubling dimension is a fixed number, the required sample size is relatively low.

(ii) For the simplest case where the data is uniformly distributed over a domain lying on a flat manifold, the uniformity value defined above equals one, and the lower bound of Theorem 2 simplifies accordingly.

Before proving Theorem 2, we need to relate the doubling dimension to the VC dimension of the range space consisting of all balls with different radii DBLP:journals/jcss/LiLS01 . Unfortunately, Huang et al. DBLP:conf/focs/HuangJLW18 recently showed that "although both dimensions are subjects of extensive research, to the best of our knowledge, there is no nontrivial relation known between the two". For instance, they constructed a doubling metric having unbounded VC dimension, and the other direction cannot be bounded either. However, if we allow a small distortion of the distance, we can achieve an upper bound on the VC dimension for a given metric space with bounded doubling dimension. To state the result, they defined a "smoothed" distance function that approximates the Euclidean distance up to a small distortion; the balls in the following theorem are defined with respect to this smoothed distance.

Theorem 3 (DBLP:conf/focs/HuangJLW18 ).

Suppose the point set has the doubling dimension . There exists a smoothed distance function such that the VC dimension of the range space consisting of all balls with different radii, defined with respect to this smoothed distance instead of the Euclidean distance, is bounded in terms of the doubling dimension. (In DBLP:conf/focs/HuangJLW18 , the authors stated their result in terms of the "shattering dimension", another measure of the complexity of a range space that is tightly related to the VC dimension DBLP:conf/stoc/FeldmanL11 ; a bound on the shattering dimension yields a bound on the VC dimension.)

Proof.

(of Theorem 2) Let be any positive number. First, since the doubling dimension of is , by recursively applying Definition 2 we know that can be covered by at most balls with radius . Thus, if is enclosed by , we have

(7)

Now we consider the size . From Theorem 3, we know that the VC dimension with respect to the -smoothed distance is . Thus, for any , if

(8)

the set will be an -sample of ; that is, for any and ,

(9)

with constant probability DBLP:journals/jcss/LiLS01 . Note the exact probability comes from the success probability that is an -sample of ; for convenience, we simply say it is a constant probability. Because is an -smoothed distance function of the Euclidean distance, we have

(10)

So if we set and , (7), (9), and (10) jointly imply

(11)

The last inequality comes from (7); since we assume the ball is enclosed by , the shrunk ball should be enclosed as well. Moreover, if

(12)

we have from (11). Combining (8) and (12), we obtain the lower bound of . ∎

4.2 The DBSCAN Approach

For the sake of completeness, we briefly introduce the DBSCAN method ester1996density . Given two parameters $r > 0$ and $MinPts \in \mathbb{Z}^{+}$, DBSCAN divides the given point set into three classes: (1) a point $p$ is a core point if the ball $\mathbb{B}(p, r)$ contains at least $MinPts$ points of the set; (2) $p$ is a border point if it is not a core point but $p \in \mathbb{B}(q, r)$ for some core point $q$; (3) all the other points are outliers. Actually, we can imagine that the point set forms a graph in which any two points are connected if their pairwise distance is no larger than $r$; then the core points and border points form several clusters, where each cluster is a connected component (a border point may belong to multiple clusters, but we can arbitrarily assign it to only one cluster). The goal of DBSCAN is to identify these clusters and the remaining outliers.
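
The classification into core points, border points, and outliers can be reproduced with scikit-learn's DBSCAN implementation; in the sketch below, eps plays the role of the radius $r$ and min_samples the role of $MinPts$, and the data array is purely illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.rand(500, 2)                        # illustrative data

db = DBSCAN(eps=0.05, min_samples=10).fit(X)      # eps = radius r, min_samples = MinPts
core_mask = np.zeros(len(X), dtype=bool)
core_mask[db.core_sample_indices_] = True         # core points

outlier_mask = db.labels_ == -1                   # DBSCAN marks outliers with label -1
border_mask = ~core_mask & ~outlier_mask          # in some cluster, but not a core point

# Points of each cluster (connected component of core/border points).
clusters = [np.where(db.labels_ == c)[0] for c in set(db.labels_) if c != -1]
```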

Following Section 4.1, we assume that is a set of i.i.d. samples drawn from the connected and compact domain that has doubling dimension . We let be the set of poisoning data items injected by the attacker into , and suppose each poisoning point has distance larger than to . In an evasion attack, we often use the adversarial perturbation distance to evaluate the attacker's capability; in a poisoning attack, however, the attacker can easily achieve a large perturbation distance (e.g., in the SVM problem, if the attacker flips the label of some point , it will become an outlier whose perturbation distance to its ground-truth domain is larger than the optimal margin width). Also, we assume the boundary is smooth and has curvature radius at least everywhere. For simplicity, let . The following theorem states the relation between DBSCAN and the poisoned dataset . We assume the poisoned fraction is bounded.

Theorem 4.

We let be any absolute constant larger than , and assume that the size of satisfies the lower bound of Theorem 2 (with respect to and ). If we set and , and run DBSCAN on the poisoned dataset , the obtained largest cluster should be exactly . In other words, the poisoning set should be formed by the outliers and the clusters other than the largest one returned by DBSCAN.

Proof.

Since , for any , either the ball is enclosed by , or it is covered by some ball enclosed by . We set and , and hence from Theorem 2 we know that all the points of will be core points or border points. Moreover, any point from has distance larger than to the points of ; that is, a clean point and a poisoning point will never belong to the same cluster. Also, because we assume that the domain is connected and compact, the clean data set will form the largest cluster. ∎

Remark 3.

(i) We often adopt the poisoned fraction as the measure of the attacker's capability. If we fix the value of , the bound from Theorem 2 reveals that the larger the doubling dimension , the lower the poisoned fraction (and the easier it is to corrupt the DBSCAN defense). In addition, when is large, i.e., each poisoning point has a large perturbation distance and is sufficiently smooth, it will be relatively easy for DBSCAN to defend.

We should point out, however, that this theoretical bound is probably overly conservative, since it requires a "perfect" sanitization result that removes all the poisoning samples (which is not always necessary for achieving good defense performance in practice). In our experiments, we show that the DBSCAN method can achieve promising performance even when the poisoned fraction is higher than this threshold.

(ii) In practice, we usually cannot obtain the exact values of and ; instead, we may only estimate a reasonable lower bound for . Thus, we can set accordingly and tune the value of until the largest cluster has the desired number of points.

Directly solving such a high-dimensional DBSCAN instance is very expensive. A bottleneck of the original DBSCAN algorithm is that it needs to perform a range query for each data item, i.e., computing the number of neighbors within the given radius, and the overall time complexity can be quadratic in the number of data items in the worst case. To speed up the range queries, a natural idea is to use efficient index structures, such as the R*-tree DBLP:conf/sigmod/BeckmannKSS90 , though the overall worst-case complexity remains quadratic (we refer the reader to the recent articles that systematically discuss this issue gan2015dbscan ; schubert2017dbscan ).

Putting it all together. Consider an instance of SVM with outliers, where the number of poisoning points is given (or estimated). We assume that the original input point sets (before the poisoning attack) are i.i.d. samples drawn respectively from two connected and compact domains with low doubling dimension. We perform the DBSCAN procedure on each of the two poisoned classes separately (as described in Remark 3 (ii)) and keep the obtained largest cluster of each class. Finally, we run an existing SVM algorithm on the cleaned instance.
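
A compact sketch of this pipeline is given below, assuming scikit-learn implementations; the helper names, the fixed MinPts value, and the radius grid are our own placeholders and should be tuned as in Remark 3 (ii).

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC

def sanitize_one_class(X, n_clean, min_pts, radii):
    """Remark 3 (ii): keep MinPts fixed and grow the radius until the largest
    DBSCAN cluster contains (an estimate of) the number of clean points."""
    for r in radii:                                   # increasing grid of candidate radii
        labels = DBSCAN(eps=r, min_samples=min_pts).fit(X).labels_
        if not np.any(labels >= 0):
            continue                                  # everything flagged as outlier
        sizes = np.bincount(labels[labels >= 0])
        largest = int(np.argmax(sizes))
        if sizes[largest] >= n_clean:                 # largest cluster is large enough
            return X[labels == largest]
    return X                                          # fall back to the raw data

def dbscan_defended_svm(X_pos, X_neg, n_clean_pos, n_clean_neg,
                        min_pts=10, radii=np.linspace(0.01, 1.0, 50)):
    # Sanitize each class separately, then train an SVM on the cleaned instance.
    P_pos = sanitize_one_class(X_pos, n_clean_pos, min_pts, radii)
    P_neg = sanitize_one_class(X_neg, n_clean_neg, min_pts, radii)
    X = np.vstack([P_pos, P_neg])
    y = np.hstack([np.ones(len(P_pos)), -np.ones(len(P_neg))])
    return SVC(kernel="linear").fit(X, y)
```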

5 Discussion

In this paper, we study two different strategies for protecting SVM against poisoning attacks. To achieve adversarial resilience, the defense can be formulated as a combinatorial optimization problem called "SVM with outliers". We show for the first time that even the simplest hard-margin one-class SVM with outliers is NP-complete and has no fully PTAS unless P=NP. We then focus on the data sanitization defense. Under the assumption that the original input data (before the poisoning attack) are drawn from domains with low doubling dimensions, we provide a lower bound on the data size that ensures DBSCAN can correctly identify the poisoning samples.

We leave the detailed experimental results on the synthetic and real datasets to our supplement. We compare several defenses, including the DBSCAN and robust SVM methods, and study the trends of their classification accuracies while varying three quantities: the poisoned fraction, the intrinsic dimensionality, and the Euclidean dimensionality. All the experimental results were obtained by using publicly available implementations on a Windows workstation equipped with an Intel Core processor and GB RAM.

Looking forward, there are also several open questions that deserve further study. To name a few:

(1) How effective is DBSCAN for protecting other machine learning problems? For SVM, we can assume that each poisoning point has a large perturbation distance (since any simple label flipping will result in a large distance); but for other problems, such as regression, we cannot simply make the same assumption.

(2) What about ensemble methods? For example, can we take an ensemble of different robust SVM methods or outlier removal methods to achieve a more convincing result? Cretu et al. DBLP:conf/sp/CretuSLSK08 and Biggio et al. DBLP:conf/mcs/BiggioCFGR11 proposed the "voting" and "bagging" ideas for fighting attacks, but our theoretical understanding of their effectiveness is still far from satisfactory.

(3) What are the complexities of other machine learning problems under adversarially resilient formulations like Definition 1? Mount et al. DBLP:journals/algorithmica/MountNPSW14 proved that, under the conjectured hardness of affine degeneracy DBLP:journals/dcg/EricksonS95 , it is impossible to achieve even an approximate solution for the linear regression with outliers problem in polynomial time if the dimensionality is not fixed. Simonov et al. DBLP:conf/icml/SimonovFGP19 showed that, unless the Exponential Time Hypothesis fails, it is impossible not only to solve the PCA with outliers problem exactly but even to approximate it within a constant factor. For a large number of other adversarial machine learning problems, however, the study of their complexities is still in its infancy.

References

  • [1] C. C. Aggarwal and P. S. Yu. Outlier detection for high dimensional data. ACM Sigmod Record, 30(2):37–46, 2001.
  • [2] L. Amsaleg, J. Bailey, D. Barbe, S. M. Erfani, M. E. Houle, V. Nguyen, and M. Radovanovic. The vulnerability of learning to adversarial perturbation increases with intrinsic dimensionality. In 2017 IEEE Workshop on Information Forensics and Security, WIFS 2017, Rennes, France, December 4-7, 2017, pages 1–6. IEEE, 2017.
  • [3] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar. Can machine learning be secure? In F. Lin, D. Lee, B. P. Lin, S. Shieh, and S. Jajodia, editors, Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, ASIACCS 2006, Taipei, Taiwan, March 21-24, 2006, pages 16–25. ACM, 2006.
  • [4] N. Beckmann, H. Kriegel, R. Schneider, and B. Seeger. The r*-tree: An efficient and robust access method for points and rectangles. In Proceedings of the 1990 ACM SIGMOD International Conference on Management of Data, Atlantic City, NJ, USA, May 23-25, 1990, pages 322–331, 1990.
  • [5] B. Biggio, S. R. Bulò, I. Pillai, M. Mura, E. Z. Mequanint, M. Pelillo, and F. Roli. Poisoning complete-linkage hierarchical clustering. In P. Fränti, G. Brown, M. Loog, F. Escolano, and M. Pelillo, editors, Structural, Syntactic, and Statistical Pattern Recognition - Joint IAPR International Workshop, S+SSPR 2014, Joensuu, Finland, August 20-22, 2014. Proceedings, volume 8621 of Lecture Notes in Computer Science, pages 42–52. Springer, 2014.
  • [6] B. Biggio, I. Corona, G. Fumera, G. Giacinto, and F. Roli. Bagging classifiers for fighting poisoning attacks in adversarial classification tasks. In Multiple Classifier Systems - 10th International Workshop, MCS 2011, Naples, Italy, June 15-17, 2011. Proceedings, pages 350–359, 2011.
  • [7] B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012, 2012.
  • [8] B. Biggio and F. Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84:317–331, 2018.
  • [9] N. H. Bshouty, Y. Li, and P. M. Long. Using the doubling dimension to analyze the generalization of learning algorithms. J. Comput. Syst. Sci., 75(6):323–335, 2009.
  • [10] V. Chandola, A. Banerjee, and V. Kumar. Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3):15, 2009.
  • [11] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM TIST, 2(3), 2011.
  • [12] M. Charikar, S. Khuller, D. M. Mount, and G. Narasimhan. Algorithms for facility location problems with outliers. In Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms, pages 642–651. Society for Industrial and Applied Mathematics, 2001.
  • [13] S. Chawla and A. Gionis. k-means–: A unified approach to clustering and outlier detection. In Proceedings of the 2013 SIAM International Conference on Data Mining, pages 189–197. SIAM, 2013.
  • [14] K. Chen. A constant factor approximation algorithm for k-median clustering with outliers. In Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms, pages 826–835. Society for Industrial and Applied Mathematics, 2008.
  • [15] A. Christmann and I. Steinwart. On robustness properties of convex risk minimization methods for pattern recognition. J. Mach. Learn. Res., 5:1007–1034, 2004.
  • [16] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273, 1995.
  • [17] G. F. Cretu, A. Stavrou, M. E. Locasto, S. J. Stolfo, and A. D. Keromytis. Casting out demons: Sanitizing training data for anomaly sensors. In 2008 IEEE Symposium on Security and Privacy (S&P 2008), 18-21 May 2008, Oakland, California, USA, pages 81–95. IEEE Computer Society, 2008.
  • [18] D. J. Crisp and C. J. C. Burges. A geometric interpretation of v-SVM classifiers. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, NIPS, pages 244–250. The MIT Press, 1999.
  • [19] N. N. Dalvi, P. M. Domingos, Mausam, S. K. Sanghai, and D. Verma. Adversarial classification. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, Washington, USA, August 22-25, 2004, pages 99–108, 2004.
  • [20] J. Erickson and R. Seidel. Better lower bounds on detecting affine and spherical degeneracies. Discrete & Computational Geometry, 13:41–57, 1995.
  • [21] M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 226–231, 1996.
  • [22] D. Feldman and M. Langberg. A unified framework for approximating and clustering data. In L. Fortnow and S. P. Vadhan, editors, Proceedings of the 43rd ACM Symposium on Theory of Computing, STOC 2011, San Jose, CA, USA, 6-8 June 2011, pages 569–578. ACM, 2011.
  • [23] J. Feng, H. Xu, S. Mannor, and S. Yan. Robust logistic regression and classification. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 253–261, 2014.
  • [24] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24(6):381–395, 1981.
  • [25] J. Gan and Y. Tao. DBSCAN revisited: mis-claim, un-fixability, and approximation. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pages 519–530. ACM, 2015.
  • [26] B. Gao and J. Wang. A fast and robust TSVM for pattern classification. CoRR, abs/1711.05406, 2017.
  • [27] I. J. Goodfellow, P. D. McDaniel, and N. Papernot. Making machine learning robust against adversarial inputs. Commun. ACM, 61(7):56–66, 2018.
  • [28] L. Huang, S. Jiang, J. Li, and X. Wu. Epsilon-coresets for clustering (with outliers) in doubling metrics. In 59th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2018, Paris, France, October 7-9, 2018, pages 814–825, 2018.
  • [29] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. D. Tygar. Adversarial machine learning. In Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, pages 43–58, 2011.
  • [30] M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li. Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In 2018 IEEE Symposium on Security and Privacy, SP 2018, Proceedings, 21-23 May 2018, San Francisco, California, USA, pages 19–35, 2018.
  • [31] T. Kanamori, S. Fujiwara, and A. Takeda. Breakdown point of robust support vector machines. Entropy, 19(2):83, 2017.
  • [32] M. Khoury and D. Hadfield-Menell. Adversarial training with voronoi constraints. CoRR, abs/1905.01019, 2019.
  • [33] P. W. Koh, J. Steinhardt, and P. Liang. Stronger data poisoning attacks break data sanitization defenses. CoRR, abs/1811.00741, 2018.
  • [34] H.-P. Kriegel, P. Kröger, and A. Zimek. Outlier detection techniques. Tutorial at PAKDD, 2009.
  • [35] R. Laishram and V. V. Phoha. Curie: A method for protecting SVM classifier from poisoning attack. CoRR, abs/1606.01584, 2016.
  • [36] Y. Li, P. M. Long, and A. Srinivasan. Improved bounds on the sample complexity of learning. J. Comput. Syst. Sci., 62(3):516–527, 2001.
  • [37] X. Ma, B. Li, Y. Wang, S. M. Erfani, S. N. R. Wijewickrema, G. Schoenebeck, D. Song, M. E. Houle, and J. Bailey. Characterizing adversarial subspaces using local intrinsic dimensionality. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018.
  • [38] N. Megiddo. On the complexity of some geometric problems in unbounded dimension. J. Symb. Comput., 10(3/4):327–334, 1990.
  • [39] S. Mei and X. Zhu. Using machine teaching to identify optimal training-set attacks on machine learners. In B. Bonet and S. Koenig, editors, Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA, pages 2871–2877. AAAI Press, 2015.
  • [40] D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu. On the least trimmed squares estimator. Algorithmica, 69(1):148–183, 2014.
  • [41] N. Natarajan, I. S. Dhillon, P. Ravikumar, and A. Tewari. Learning with noisy labels. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 1196–1204, 2013.
  • [42] A. Paudice, L. Muñoz-González, A. György, and E. C. Lupu. Detection of adversarial training examples in poisoning attacks through anomaly detection. CoRR, abs/1802.03041, 2018.
  • [43] A. Paudice, L. Muñoz-González, and E. C. Lupu. Label sanitization against label flipping poisoning attacks. In ECML PKDD 2018 Workshops - Nemesis 2018, UrbReas 2018, SoGood 2018, IWAISe 2018, and Green Data Mining 2018, Dublin, Ireland, September 10-14, 2018, Proceedings, pages 5–15, 2018.
  • [44] P. J. Rousseeuw and A. Leroy. Robust Regression and Outlier Detection. Wiley Series in Probability and Statistics. Wiley, 1987.
  • [45] B. I. P. Rubinstein, B. Nelson, L. Huang, A. D. Joseph, S. Lau, S. Rao, N. Taft, and J. D. Tygar. ANTIDOTE: understanding and defending against poisoning of anomaly detectors. In A. Feldmann and L. Mathy, editors, Proceedings of the 9th ACM SIGCOMM Internet Measurement Conference, IMC 2009, Chicago, Illinois, USA, November 4-6, 2009, pages 1–14. ACM, 2009.
  • [46] B. Scholkopf, A. J. Smola, K. R. Muller, and P. L. Bartlett. New support vector algorithms. Neural Computation, 12:1207–1245, 2000.
  • [47] E. Schubert, J. Sander, M. Ester, H. P. Kriegel, and X. Xu. DBSCAN revisited, revisited: why and how you should (still) use dbscan. ACM Transactions on Database Systems (TODS), 42(3):19, 2017.
  • [48] K. Simonov, F. V. Fomin, P. A. Golovach, and F. Panolan. Refined complexity of PCA with outliers. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 5818–5826, 2019.
  • [49] J. Steinhardt, P. W. Koh, and P. Liang. Certified defenses for data poisoning attacks. In I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 3517–3529, 2017.
  • [50] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In Y. Bengio and Y. LeCun, editors, 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014.
  • [51] D. M. J. Tax and R. P. W. Duin. Support vector domain description. Pattern Recognit. Lett., 20(11-13):1191–1199, 1999.
  • [52] I. W. Tsang, J. T. Kwok, and P.-M. Cheung. Core vector machines: Fast SVM training on very large data sets. Journal of Machine Learning Research, 6:363–392, 2005.
  • [53] S. Weerasinghe, S. M. Erfani, T. Alpcan, and C. Leckie. Support vector machines resilient against training data integrity attacks. Pattern Recognit., 96, 2019.
  • [54] H. Xiao, B. Biggio, B. Nelson, H. Xiao, C. Eckert, and F. Roli. Support vector machines under adversarial label contamination. Neurocomputing, 160:53–62, 2015.
  • [55] H. Xiao, H. Xiao, and C. Eckert. Adversarial label flips attack on support vector machines. In L. D. Raedt, C. Bessiere, D. Dubois, P. Doherty, P. Frasconi, F. Heintz, and P. J. F. Lucas, editors, ECAI 2012 - 20th European Conference on Artificial Intelligence. Including Prestigious Applications of Artificial Intelligence (PAIS-2012) System Demonstrations Track, Montpellier, France, August 27-31 , 2012, volume 242 of Frontiers in Artificial Intelligence and Applications, pages 870–875. IOS Press, 2012.
  • [56] G. Xu, Z. Cao, B. Hu, and J. C. Príncipe. Robust support vector machines based on the rescaled hinge loss function. Pattern Recognit., 63:139–148, 2017.
  • [57] H. Xu, C. Caramanis, and S. Mannor. Robustness and regularization of support vector machines. J. Mach. Learn. Res., 10:1485–1510, 2009.
  • [58] L. Xu, K. Crammer, and D. Schuurmans. Robust support vector machine training via convex outlier ablation. In AAAI, pages 536–542. AAAI Press, 2006.

6 Empirical Experiments

We compare several defenses, including the DBSCAN and robust SVM methods, and study the trends of their classification accuracies while varying three quantities: the poisoned fraction, the intrinsic dimensionality, and the Euclidean dimensionality. All the experimental results were obtained on a Windows workstation equipped with an Intel Core processor and GB RAM.

Dataset Size Dimension
Synthetic
letter
mushrooms
Table 1: Datasets

Datasets. We consider both synthetic and real datasets in our experiments. For each synthetic dataset, we generate two manifolds in a Euclidean space whose dimension varies over our experiments, where each manifold is represented by a random polynomial function with varying degree. Note that it is challenging to compute the exact doubling dimension of a dataset, so we use the degree of the polynomial function as a "rough indicator" of the doubling dimension (the higher the degree, the larger the doubling dimension). From each of the manifolds we sample a set of points; the data is then randomly partitioned into training and testing parts. We also consider two real datasets: the letter and mushrooms datasets from LIBSVM [11]. The details of the datasets are shown in Table 1.
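
One plausible way to generate such synthetic manifold data is sketched below; the sample counts, ambient dimension, polynomial degree, and offsets are placeholders (the exact values used in Table 1 are not reproduced here). Each class is a random polynomial curve in a high-dimensional space, so it has low intrinsic dimension regardless of the ambient dimension.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def sample_polynomial_manifold(n_samples, ambient_dim, degree, rng, offset=0.0):
    """Sample points whose coordinates are random polynomials of a 1-D latent
    parameter t, giving a low-doubling-dimension curve in R^{ambient_dim}."""
    t = rng.uniform(-1.0, 1.0, size=n_samples)
    coeffs = rng.normal(size=(ambient_dim, degree + 1))       # a random polynomial per coordinate
    powers = np.vstack([t ** k for k in range(degree + 1)])   # (degree+1, n_samples)
    return (coeffs @ powers).T + offset                       # (n_samples, ambient_dim)

rng = np.random.default_rng(0)
X_pos = sample_polynomial_manifold(1000, ambient_dim=100, degree=3, rng=rng, offset=0.0)
X_neg = sample_polynomial_manifold(1000, ambient_dim=100, degree=3, rng=rng, offset=2.0)

X = np.vstack([X_pos, X_neg])
y = np.hstack([np.ones(len(X_pos)), -np.ones(len(X_neg))])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
```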

Attack and Defenses. We generate the adversarial label-flipping attacks through the free software ALFASVMLib [54]. We evaluate the performances of different defenses using their publicly available implementations:

  • The basic SVM classification algorithm C-SVC [11];

  • The robust SVM algorithm RSVM based on the rescaled hinge loss function [56], where a parameter specifies the iteration number of the half-quadratic optimization (we set it following the paper [56]);

  • The fast and robust twin support vector machine FRTSVM [26];

  • The L2 defense [33], which removes points that are far from their class centroids in Euclidean (L2) distance (minimal sketches of this and the k-NN defense are given after this list);

  • The Slab defense [49], which first projects points onto the line between the class centroids, and then removes points that are too far from the class centroids;

  • The Loss defense [33], which discards points that are not well fit by a model trained (without any data sanitization) on the full dataset;

  • The k-NN defense [33], which removes points that are far from their k nearest neighbors (we set k as in [33]);

  • The SVD defense [33], which assumes that the clean data lies in some low-rank subspace, and that poisoned data therefore will have a large component out of this subspace [45];

  • The DBSCAN method [47], implemented as described in Remark 3 (ii).

For the data sanitization defenses, we run the SVM algorithm C-SVC on the cleaned data to compute their final solutions.
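
For concreteness, minimal sketches of two of the sanitization rules above (applied per class) are shown below; the removal fraction and the neighborhood size are placeholders, and the reference implementations in [33] differ in details.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def l2_defense(X, frac_remove=0.05):
    """L2 defense: drop the points farthest from the class centroid."""
    d = np.linalg.norm(X - X.mean(axis=0), axis=1)
    return X[d <= np.quantile(d, 1.0 - frac_remove)]

def knn_defense(X, k=5, frac_remove=0.05):
    """k-NN defense: drop the points farthest from their k-th nearest neighbor."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1 because each point is its own neighbor
    dists, _ = nn.kneighbors(X)
    kth = dists[:, -1]                                # distance to the k-th true neighbor
    return X[kth <= np.quantile(kth, 1.0 - frac_remove)]
```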

The results. In Figure 2, we illustrate the results on the synthetic datasets. We consider two different cases: (i) the two manifolds overlap with each other, and (ii) the two manifolds are separated. For case (i), all the defenses obtain lower accuracies as the poisoned fraction increases, as shown in Figure 2(a); for case (ii), the performance of DBSCAN remains much more stable than the other defenses when varying the poisoned fraction, as shown in Figure 2(b). We also study the influence of the intrinsic dimensionality in these two cases. We fix the Euclidean dimensionality and vary the polynomial function's degree. In case (i), from Figure 2(c) we observe that the accuracy of each defense drops dramatically as the degree increases, which is in agreement with our theoretical analysis. However, for case (ii), from Figure 2(d) we can see that the accuracies are not substantially affected by the intrinsic dimensionality; we believe that this is because case (ii) is relatively easier for classification, as long as the two classes are well separated. Finally, we fix the degree and vary the Euclidean dimensionality in Figure 2(e); we can see that the influence of the Euclidean dimensionality is small (except for FRTSVM).

For the real datasets, we vary the poisoned fraction in Figure 3; the experimental results reveal trends similar to those on the synthetic datasets. To further investigate the performance of these data sanitization defenses, we plot their F-score curves in Figure 4 (the F-score is the harmonic mean of precision and recall for the outlier removal). We can see that DBSCAN consistently outperforms the other data sanitization defenses.
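
For reference, the F-score plotted in Figure 4 is the usual harmonic mean, with precision and recall computed for the outlier-removal task (i.e., how many of the removed points are truly poisoning points, and how many poisoning points are removed):

```latex
\[
F \;=\; \frac{2\cdot \mathrm{precision}\cdot \mathrm{recall}}{\mathrm{precision}+\mathrm{recall}} .
\]
```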

Figure 2: The performances on the synthetic datasets.
Figure 3: The performances on the real datasets. (a) Letter; (b) Mushrooms.
Figure 4: F-scores. (a) Letter; (b) Mushrooms.