1 Introduction & Formalism
The recent popularity of machine learning calls for a deeper understanding of AI security. Amongst the numerous AI threats published so far, poisoning attacks currently attract considerable attention.
An ML algorithm F is a state machine with a two-phase lifecycle. During the first phase, called training, F builds a model (captured by a state variable M) from sample data D = {(d_1, label_1), …, (d_n, label_n)}, called “training data”. Learning is hence defined by:

M ← F(D)

e.g. the d_i can be human face images and the label_i the identities of the photographed individuals.
During the second phase, called testing (or inference), F is given unlabelled data d. F’s goal is to predict as accurately as possible the corresponding label, given the distribution inferred from D.
We denote the dataset used during testing by {(t_i, label_i, label′_i)}, where label_i is the correct label (solution) corresponding to t_i and label′_i is the label predicted by F (i.e. if F is perfect then label_i = label′_i).
In a poisoning attack the opponent partially tampers with D to influence M and mislead F during testing. Formally, the attacker generates a poison dataset P and merges it into the training data, resulting in a corrupted model M′ ≠ M that errs on testing queries of the attacker’s choosing.
Poisoning attacks were successfully implemented against both incremental and periodic training models. In the incremental training model (also called the incremental update model), whenever a new sample is seen during testing, F’s performance on it is evaluated and M is updated. In the periodic retraining model, data is stored in a buffer. When F falls below a performance threshold (or after a fixed number of queries), the buffer’s data is used to retrain M anew. Retraining is either done using the buffer alone (resulting in a totally new M) or by merging the buffer with previous information (updating M).
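The two regimes can be sketched as follows (a minimal sketch: the `fit`/`partial_fit` interface and the fixed retraining period are our own illustrative choices; the variant that merges the buffer with previous information is omitted for brevity):

```python
class IncrementalTrainer:
    """Incremental update model: the model is updated after every labelled query."""
    def __init__(self, model):
        self.model = model

    def observe(self, x, label):
        self.model.partial_fit([x], [label])


class PeriodicTrainer:
    """Periodic retraining model: queries are buffered and the model is
    retrained from scratch on the buffer alone every `period` samples."""
    def __init__(self, model_factory, period):
        self.model_factory = model_factory
        self.period = period
        self.buffer = []
        self.model = model_factory()

    def observe(self, x, label):
        self.buffer.append((x, label))
        if len(self.buffer) >= self.period:
            xs, labels = zip(*self.buffer)
            self.model = self.model_factory()        # totally new model...
            self.model.fit(list(xs), list(labels))   # ...trained on the buffer alone
            self.buffer.clear()
```

In the periodic variant a poisoned buffer fully determines the next model, which is why sanitizing the buffer before retraining (the approach of this paper) is attractive.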
Protections against poisoning attacks can be categorized into two types: robustification and sanitizing:
1.0.1 Robustification
(built-in resistance) modifies F so that it takes the poison into account but tolerates its effect. Note that F does not need to identify the poisoned data as such, but the effect of poisonous data must be diminished, dampened or nullified up to a point fit for purpose.
The two main robustification techniques discussed in the literature are:
Feature squeezing [32, 26] is a model hardening technique that reduces data complexity so that adversarial perturbations disappear because of low sensitivity. Usually the quality of the incoming data is degraded by encoding colors with fewer values or by applying a smoothing filter over the images. This maps several inputs onto one “characteristic” or “canonical” input and reduces the perturbations introduced by the attacker. While useful in practice, these techniques inevitably degrade the model’s accuracy.
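For intuition, the two squeezers mentioned above can be sketched on 1-D pixel lists (a toy sketch; the bit depth `bits` and filter width `k` are illustrative parameters):

```python
def squeeze_bit_depth(pixels, bits=3):
    """Re-encode each pixel (assumed in [0, 1]) on `bits` bits: inputs that
    differ only by a small perturbation collapse onto the same canonical value."""
    levels = 2 ** bits - 1
    return [round(p * levels) / levels for p in pixels]


def squeeze_smooth(signal, k=3):
    """Width-k mean filter (edge-padded): averages out high-frequency
    adversarial noise, at the cost of some signal fidelity."""
    pad = k // 2
    padded = [signal[0]] * pad + list(signal) + [signal[-1]] * pad
    return [sum(padded[i:i + k]) / k for i in range(len(signal))]
```

Both maps are many-to-one, which is exactly the point: nearby inputs are funnelled onto one canonical input, and exactly why accuracy degrades.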
1.0.2 Sanitizing
detects (by various methods, e.g. [8, 17]) and discards poisoned samples. Note that sanitizing necessarily decreases F’s ability to learn.
This work prevents poisoning by sanitizing.
Figure 1 shows a generic abstraction of sanitizing. F takes D (periodically or incrementally) and outputs an M for the testing phase. But incoming samples go through the poisoning detection module Det before entering F. If Det decides that the probability that some sample is poisoned is too high, the suspicious sample is trashed to avoid corrupting M. Because under normal circumstances all samples are drawn from the same distribution X, it is natural to implement Det using standard algorithms for testing the hypothesis that newly arrived data was drawn from X as well.
The most natural tools allowing one to do so are non-parametric hypothesis tests (NPHTs, hereafter denoted by T). Let A, B be two datasets. T allows one to judge how compatible an observed difference between A and B is with the hypothesis H0 that A and B were drawn from the same distribution X. It is important to underline that T is non-parametric, i.e. T makes no assumptions on X.
The above makes NPHTs natural candidates for detecting poison. However, whilst NPHTs are very good for natural hypothesis testing, they succumb spectacularly in adversarial scenarios where the attacker has full knowledge of the target’s specification [13]. Indeed, Section 3.1 illustrates such a collapse.
To regain a head start over the attacker, our strategy consists in mapping A and B into a secret space, unpredictable by the adversary, in which T can work confidentially. This mapping is defined by a key k, making it hard for the adversary to design A and B whose images under the keyed mapping still fool T.
2 A Brief Overview of Poisoning Attacks
Barreno et al. [2] were the first to coin the term “poisoning attacks”. Follow-up works such as Kearns et al. [12] refined and theorized this approach.
Classical references introducing poisoning are [23, 22, 20, 24, 4, 30, 21, 31, 18, 7, 15]. At times (e.g. [15]) the opponent does not create or modify samples but rather adds legitimate yet carefully chosen samples to D to bias learning. Those inputs are usually computed using gradient descent. This was later generalized by [25].
During a poisoning attack, modifications can concern either the data or the labels. [3] showed that randomly flipping 40% of the labels suffices to seriously affect SVMs. [14] showed that inserting malicious points into D could gradually shift the decision boundary of an anomaly detection classifier. Poisoning points were obtained by solving a linear programming problem maximizing the mean displacement of D’s mass center. For a more complete overview we recommend [5].

2.0.1 Adversarial Goals.
Poisoning may seek to influence the classifier’s decision when presented with a later target query, or to leak information about M or D.
The attacker’s goals always apply to the testing phase and may be:

Confidence Reduction: Have F make more errors. In many cases, “less confidence” can clear suspicious instances by giving them the benefit of the doubt (in dubio pro reo).

Misclassification attacks: defined by replacing the goal in the above definition by: “Make F conclude that an input belongs to a wrong label.”
Attack | Input | Wrong label
Misclassification | random | random
Targeted Misclassification | chosen | random
Source-Target Misclassification | chosen | chosen

(Plain misclassification is useful if any mistake may serve the opponent’s interests, e.g. any navigation error would crash a drone with high probability.)
2.0.2 Adversarial capabilities
Adversarial capabilities designate the degree of knowledge that the attacker has of the target system. The literature distinguishes between training-phase capabilities and testing-phase capabilities. Poisoning assumes training-phase capabilities.
The attacker’s capabilities may be:

Data Injection: Add new data to D.

Data Modification: Modify D before training.

Logic Corruption: Modify the code (behavior) of F. This is the equivalent of fault attacks in cryptography.
3 Keyed Anti-Poisoning
To illustrate our strategy, we use Mann-Whitney’s test and Stouffer’s method, which we recall in the appendix.
We assume that when training starts, we are given a safe subset of D denoted D_s (where the subscript s stands for “safe”). Our goal is to assess the safety of the upcoming subset of D denoted D_u (where the subscript u stands for “unknown”).
Under the null hypothesis, D_s and D_u come from the same distribution X. As mentioned before, the idea is to map D_s and D_u to a space hidden from the opponent. The mapping is keyed to prevent the attacker from predicting how to create adversarial input fooling T.
Figure 2 shows the details of the Det plugin added to the architecture of Figure 1. Det takes a key k, reads D_s and D_u, performs the keyed transformation, calls T on the transformed datasets and outputs a decision.
T can be Mann-Whitney’s test (illustrated in this paper) or any other NPHT, e.g. the location test, paired tests, Siegel-Tukey’s test, the variance test, or multidimensional tests such as deep gaussianization [29].

3.1 Trivial Mann-Whitney Poisoning
Let v = T(A, B) be Mann-Whitney’s test returning a p-value v.
T is, amongst others, susceptible to the following poisoning: assume that D_s is sampled from a Gaussian distribution N(μ, σ²) and that D_u is sampled from an equal mixture of N(μ − δ, σ²) and N(μ + δ, σ²) for a large δ (Figure 3). While D_s and D_u are totally different, T will be misled. For instance, after picking 50 samples from each distribution, we get a very large p-value. From Mann-Whitney’s perspective, D_s and D_u come from the same parent distribution with a very high degree of confidence while, in all evidence, they do not.
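This collapse is easy to reproduce with off-the-shelf tools. The sketch below (the sample sizes and the choices μ = 0, σ = 1, δ = 3 are ours) builds deterministic quantile samples of the Gaussian and of the symmetric mixture; Mann-Whitney, which only sees ranks, accepts the pair, while a test sensitive to the whole distribution shape, such as Kolmogorov-Smirnov, rejects it immediately:

```python
import numpy as np
from scipy.stats import norm, mannwhitneyu, ks_2samp

# Deterministic "ideal samples" built from evenly spaced quantiles.
n = 200
safe = norm.ppf(np.arange(1, n + 1) / (n + 1))      # sample of N(0, 1)

half = norm.ppf(np.arange(1, 101) / 101) + 3.0      # sample of N(3, 1)
poison = np.concatenate([half, -half])              # mixture of N(-3,1) and N(3,1)

# Mann-Whitney only compares ranks: the mixture is symmetric around 0, so its
# ranks balance exactly against the Gaussian's and nothing is flagged...
p_mw = mannwhitneyu(safe, poison, alternative='two-sided').pvalue

# ...whereas a test sensitive to the whole CDF separates the two at once.
p_ks = ks_2samp(safe, poison).pvalue
```

Because both samples are symmetric around 0, exactly half of the cross-pairs are increasing, so the U statistic sits precisely at its null mean and the Mann-Whitney p-value is maximal.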
3.2 Keyed Mann-Whitney
We instantiate the key by secret random polynomials, i.e. polynomials P whose coefficients are randomly and secretly refreshed before each invocation of T. Instead of returning T(D_s, D_u), Det returns T(P(D_s), P(D_u)), where P is applied to each sample. The rationale is that P will map the attacker’s input to an unpredictable location in which Mann-Whitney is very likely to be safe.
n random polynomials P_1, …, P_n are selected as keys and Det calls T once for each polynomial. To aggregate the n resulting p-values, Det computes their combination using Stouffer’s method (Appendix 0.B). If the combined p-value falls below a rejection threshold, the sample is rejected as poisonous with very high probability.
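A compact sketch of Det (the degree 4, the nine keys and the coefficient range echo the experiments of Section 3.3; the clipping guard is our own implementation detail):

```python
import numpy as np
from scipy.stats import mannwhitneyu, norm

def random_polynomial(rng, degree=4, coeff_range=(-5.0, 5.0)):
    """One secret key: a random polynomial of fixed degree."""
    coeffs = rng.uniform(*coeff_range, size=degree + 1)
    return np.polynomial.Polynomial(coeffs)

def keyed_mannwhitney(safe, unknown, n_keys=9, seed=None):
    """Det: run Mann-Whitney on the images of both datasets under each
    secret polynomial, then aggregate the p-values with Stouffer's method."""
    rng = np.random.default_rng(seed)
    zs = []
    for _ in range(n_keys):
        P = random_polynomial(rng)
        p = mannwhitneyu(P(np.asarray(safe)), P(np.asarray(unknown)),
                         alternative='two-sided').pvalue
        p = float(np.clip(p, 1e-12, 1 - 1e-12))   # guard against p = 0 or 1
        zs.append(norm.ppf(p))
    return float(norm.cdf(np.sum(zs) / np.sqrt(n_keys)))   # combined p-value
```

The decision rule then compares the returned combined p-value to a rejection threshold chosen by the defender.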
Note that any family of smooth functions can be used instead of polynomials, e.g. B-splines. The criterion is that the random selection process must yield significantly different functions.
3.3 Experiments
We illustrate the above by protecting X = N(0, 1). The good thing about N(0, 1) is that random polynomials tend to diverge far from the origin but adapt well to the central interval in which the Gaussian is not negligible.
We attack N(0, 1) by poisoning with the symmetric mixture of N(−δ, 1) and N(δ, 1), where δ is set to 3, 2, 1, and 0.5, respectively. For each value of δ, two sets of 50 samples are drawn from the two distributions. Those samples are then transformed by applying a random polynomial of degree 4 and fed into T to obtain a p-value (using the two-sided mode). This p-value predicts whether the two sets of transformed samples come from the same distribution: a p-value close to 0 is strong evidence against the null hypothesis. In each of our experiments, we apply nine secret random polynomials of degree 4 and aggregate the resulting p-values using Stouffer’s method. For each setting, we run 1000 simulations. Similarly, for the same polynomials and δ, we run an “honest” test, where both samples come from the same distribution.

We thus retrieve 1000 “attack” p-values, which we sort in ascending order. Similarly, we sort the “honest” p-values. It is a classic result that, under the null hypothesis, a p-value follows a uniform distribution over [0, 1]; hence a plot of the sorted “honest” p-values is a linear curve.

An attack is successful if, on average, the “attack” sample is accepted at least as often as the “honest” sample. Hence, a sufficient condition for success is that the curve of sorted attack p-values (solid lines in our figures) is above the curve of sorted honest p-values (dashed lines).
The first quadrant illustrates the polynomials used in the simulation, together with two bars marking the modes of the poisoned distribution. The same random polynomials were used for each experiment. For simplicity, the polynomials’ coefficients were selected uniformly at random, and (useless) polynomials of degree lower than 2 were excluded from the random selection. We also added the identity polynomial (poly0), as a witness of what happens when there is no protection.
The following nine quadrants give the distribution of p-values for each polynomial, over 1000 simulations, sorted in increasing order. The dotted distribution corresponds to what an honest user would obtain, whereas the solid line is based on poisoned datasets.
The last quadrant contains the sorted distribution of the p-values aggregated using Stouffer’s method. Experimental results show that for poisoned datasets the aggregated p-values remain close to zero, while an honest dataset does not appear to be significantly affected. In other words, with very high probability, keyed testing detects poisoning.
3.4 Discussion
We observe a saturation when δ is too far from 0: even after passing through the polynomial, the attack samples remain at the extremes. Hence if the polynomial is of odd degree, nothing changes. If the degree of the polynomial is even, then the two extremes are projected to the same side and Mann-Whitney detects 100% of the poisonings. It follows that at saturation a keyed Mann-Whitney gives either very high or very low p-values. This means that polynomials or B-splines must be carefully chosen to make keying effective.

The advantage of combining the p-values with Stouffer’s method is that weak p-values are very penalizing (by opposition to Pearson’s method, whose combined p-value degrades much more slowly). A more conservative aggregation would use Fisher’s method.
All in all, experimental results reveal that keying endowed the test with a significant level of immunity.
Interestingly, Det can be implemented independently of F.
A cautionary note: our scenario assumes that testing does not start before learning is complete. If the opponent can alternate learning and testing, he may infer that a poisonous sample was taken into account (if M was updated and F’s behavior was modified). This may open the door to attacks on the detection mechanism.
4 Notes and Further Research
This paper opens perspectives beyond the specific poisoning problem, e.g. cryptographers frequently use randomness tests to assess the quality of random number generators. In a strong attack model where the attacker knows and controls the random source, it becomes possible to trick many tests into declaring flagrantly non-random data as random. Hence, the authors believe that developing keyed randomness tests is important and useful as such.
For instance, in the original minimum distance test, 8000 points (a set S) sampled from the tested randomness source are placed in a square. Let d be the minimum distance between the pairs of points. If S is random then d² is exponentially distributed with a known mean. To key the test, a secret permutation π of the plane can be generated and the test applied to π(S). To the best of our knowledge such primitives have not been proposed yet.
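A keyed variant could look as follows (a sketch; the square side, the shear-type permutation and the KD-tree search are our own choices — any secret measure-preserving bijection of the square would do in place of π):

```python
import numpy as np
from scipy.spatial import cKDTree

SIDE = 10_000.0   # side of the square (illustrative choice)

def min_distance(points):
    """Smallest pairwise distance, via nearest-neighbour queries."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)    # k=2: nearest neighbour other than self
    return float(d[:, 1].min())

def keyed_min_distance(points, key):
    """Keyed variant (sketch): push the points through a secret
    measure-preserving bijection of the square -- here a modular shear and
    translation derived from the key -- then compute the usual statistic."""
    rng = np.random.default_rng(key)
    a, b = rng.uniform(0.0, SIDE, size=2)
    permuted = np.empty_like(points, dtype=float)
    permuted[:, 0] = (points[:, 0] + points[:, 1] + a) % SIDE   # shear + shift
    permuted[:, 1] = (points[:, 1] + b) % SIDE
    return min_distance(permuted)
```

Since the keyed map sends uniform points to uniform points, the null distribution of the statistic is unchanged, while an attacker who crafted S for the unkeyed test no longer controls where his points land.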
References
 [1] (2018) Prime and prejudice: primality testing under adversarial conditions. Cryptology ePrint Archive, Report 2018/749, https://eprint.iacr.org/2018/749. Cited by: §4.
 [2] (2010) The security of machine learning. Machine Learning 81 (2), pp. 121–148. External Links: ISSN 1573-0565 Cited by: §2.
 [3] (2011) Support vector machines under adversarial label noise. In Proceedings of the Asian Conference on Machine Learning, C. Hsu and W. S. Lee (Eds.), Proceedings of Machine Learning Research, Vol. 20, South Garden Hotels and Resorts, Taoyuan, Taiwan, pp. 97–112. Cited by: §2.
 [4] (2012) Poisoning attacks against support vector machines. In Proceedings of the 29th International Coference on International Conference on Machine Learning, ICML’12, USA, pp. 1467–1474. External Links: ISBN 9781450312851 Cited by: §2.

 [5] (2018) Wild patterns: ten years after the rise of adversarial machine learning. Pattern Recognition 84, pp. 317–331. External Links: ISSN 0031-3203 Cited by: §2.
 [6] (1975) A method for combining non-independent, one-sided tests of significance. Biometrics 31 (4), pp. 987–992. Cited by: Appendix 0.B.
 [7] (2017) Analysis of Causative Attacks Against SVMs Learning from Data Streams. In Proceedings of the 3rd ACM on International Workshop on Security And Privacy Analytics, IWSPA ’17, New York, NY, USA, pp. 31–36. External Links: ISBN 9781450349093 Cited by: §2.
 [8] (2008) Casting out demons: sanitizing training data for anomaly sensors. In 2008 IEEE Symposium on Security and Privacy (SP 2008), pp. 81–95. Cited by: §1.0.2.
 [9] (2019) Quotient hash tables: efficiently detecting duplicates in streaming data. CoRR abs/1901.04358. External Links: 1901.04358 Cited by: §4.
 [10] (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §1.0.1.
 [11] (2018) Choosing between methods of combining p-values. Biometrika 105 (1), pp. 239–246. Cited by: Appendix 0.B.

 [12] (1988) Learning in the presence of malicious errors. In Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, STOC ’88, New York, NY, USA, pp. 267–280. External Links: ISBN 0-89791-264-0 Cited by: §2.
 [13] (1883) La cryptographie militaire. In Journal des sciences militaires, Vol. IX, pp. 5–38. Cited by: §1.0.2.

 [14] (2010) Online anomaly detection under adversarial impact. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Y. W. Teh and M. Titterington (Eds.), Proceedings of Machine Learning Research, Vol. 9, Chia Laguna Resort, Sardinia, Italy, pp. 405–412. Cited by: §2.
 [15] (2017) Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning – Volume 70, ICML’17, pp. 1885–1894. Cited by: §2.
 [16] (2002) Combining dependent p-values. Statistics & Probability Letters 60 (2), pp. 183–190. External Links: ISSN 0167-7152 Cited by: Appendix 0.B.
 [17] (2016) Curie: A method for protecting SVM Classifier from Poisoning Attack. ArXiv abs/1606.01584. Cited by: §1.0.2.
 [18] (2015) Using machine teaching to identify optimal trainingset attacks on machine learners. In Proceedings of the TwentyNinth AAAI Conference on Artificial Intelligence, AAAI’15, pp. 2871–2877. External Links: ISBN 0262511290 Cited by: §2.
 [19] (2014) Bloom filters in adversarial environments. CoRR abs/1412.8356. External Links: 1412.8356 Cited by: §4.
 [20] (2008) Exploiting machine learning to subvert your spam filter. In Proceedings of the 1st Usenix Workshop on LargeScale Exploits and Emergent Threats, LEET’08, Berkeley, CA, USA, pp. 7:1–7:9. Cited by: §2.

 [21] (2014) On the practicality of integrity attacks on document-level sentiment analysis. In Proceedings of the 2014 Workshop on Artificial Intelligent and Security Workshop, AISec’14, New York, NY, USA, pp. 83–93. External Links: ISBN 978-1-4503-3153-1 Cited by: §2.
 [22] (2006) Paragraph: thwarting signature learning by training maliciously. In Proceedings of the 9th International Conference on Recent Advances in Intrusion Detection, RAID’06, Berlin, Heidelberg, pp. 81–105. Cited by: §2.
 [23] (2006) Misleading worm signature generators using deliberate noise injection. In 2006 IEEE Symposium on Security and Privacy (S&P’06), pp. 15–31. Cited by: §2.
 [24] (2009) ANTIDOTE: Understanding and Defending Against Poisoning of Anomaly Detectors. In Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement, IMC ’09, New York, NY, USA, pp. 1–14. External Links: ISBN 9781605587714 Cited by: §2.
 [25] (2018) DefenseGAN: Protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605. Cited by: §1.0.1, §2.
 [26] (2018) Defending against adversarial images using basis functions transformations. ArXiv abs/1803.10840. Cited by: §1.0.1.
 [27] (1949) The American Soldier, Vol. 1: Adjustment during Army Life. Princeton University Press, Princeton. Cited by: Appendix 0.B.
 [28] (2019) Bridging machine learning and cryptography in defence against adversarial attacks. In Computer Vision – ECCV 2018 Workshops, L. LealTaixé and S. Roth (Eds.), Cham, pp. 267–279. Cited by: §4.
 [29] (2019) Population anomaly detection through deep gaussianization. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, SAC ’19, New York, NY, USA, pp. 1330–1336. External Links: ISBN 9781450359337, Link, Document Cited by: §3.
 [30] (2012) Adversarial label flips attack on support vector machines. In Proceedings of the 20th European Conference on Artificial Intelligence, ECAI’12, Amsterdam, NL, pp. 870–875. External Links: ISBN 9781614990970 Cited by: §2.

 [31] (2015) Is feature selection secure against training data poisoning? In International Conference on Machine Learning, pp. 1689–1698. Cited by: §2.
 [32] (2017) Feature squeezing: detecting adversarial examples in deep neural networks. In Proceedings of the 25th Annual Network and Distributed System Security Symposium (NDSS). Cited by: §1.0.1.
Appendix 0.A Mann-Whitney’s Test
Let X be an arbitrary distribution. Mann-Whitney’s test is a non-parametric hypothesis test. The test assumes that the two compared sample sets A and B are independent and that a total order exists on their elements (which is the case for real-valued data such as ML feature vectors). Assuming that A and B are sampled from distributions X_A and X_B:

The null hypothesis H0 is that X_A = X_B.

The alternative hypothesis H1 is that P(a > b) ≠ P(b > a) for a sampled from X_A and b sampled from X_B.
The test is consistent (i.e., its power increases with |A| and |B|) when, under H1, P(a > b) ≠ 1/2.
The test computes a statistic called U, whose distribution under H0 is known. When |A| and |B| are large enough, the distribution of U under H0 is approximated by a normal distribution of known mean and variance.
U is computed as follows:

Merge the elements of A and B. Sort the resulting list in ascending order.

Assign a rank to each element of the merged list. Equal elements get as rank the midpoint of their adjusted rankings (e.g., in the list {3, 4, 4, 4, 6} the fours occupy positions 2, 3 and 4, so all three get the rank 3).

Sum the ranks for each set. Let R_A be this sum for A (and similarly R_B for B). Note that if N = |A| + |B| then R_A + R_B = N(N + 1)/2.

Let U_A = R_A − |A|(|A| + 1)/2 and U_B = R_B − |B|(|B| + 1)/2; the statistic is U = min(U_A, U_B).
When the samples are large enough, U approximately follows a normal distribution.
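The ranking recipe above, including midpoint ranks for ties, fits in a few lines (a sketch; variable names are ours):

```python
def mann_whitney_U(A, B):
    """Compute U exactly as in the steps above (midpoint ranks for ties)."""
    merged = sorted([(v, 'A') for v in A] + [(v, 'B') for v in B])
    values = [v for v, _ in merged]
    rank_sum_A = 0.0
    i = 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1                                # values[i:j] are tied
        midpoint = (i + 1 + j) / 2                # midpoint of ranks i+1 .. j
        rank_sum_A += midpoint * sum(1 for k in range(i, j) if merged[k][1] == 'A')
        i = j
    nA, nB = len(A), len(B)
    U_A = rank_sum_A - nA * (nA + 1) / 2
    U_B = nA * nB - U_A                           # since U_A + U_B = |A||B|
    return min(U_A, U_B)
```

Equivalently, U_A counts the pairs (a, b) with a > b, counting ties as 1/2, which is why U measures how far P(a > b) is from 1/2.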
Hence, one can check whether the value z = (U − μ_U)/σ_U follows a standard normal distribution under H0, with μ_U = |A||B|/2 being the mean of U, and σ_U = sqrt(|A||B|(N + 1)/12) its standard deviation under H0, where N = |A| + |B|. However, the previous formulae are only valid when there are no tied ranks. For tied ranks, the following corrected standard deviation is to be used:

σ_ties = sqrt( (|A||B|/12) × ( (N + 1) − Σ_k (t_k³ − t_k)/(N(N − 1)) ) )

where t_k is the number of elements sharing rank k.
Because z follows a standard normal distribution under H0, we can estimate the likelihood that the observed value comes from a standard normal distribution, hence getting the related p-value from the standard normal table.

Appendix 0.B Stouffer’s p-Value Aggregation Method
p-values can be aggregated in different ways [11]. Stouffer [27] observes that z_i = Φ⁻¹(p_i) is a standard normal variable under H0, where Φ is the standard normal CDF. Hence, when the p-values p_1, …, p_n are translated into z_1, …, z_n, we get a collection of independent and identically distributed standard normal variables under H0. To combine the effect of all tests, we sum the z_i; this sum follows a normal distribution under H0 with mean 0 and variance n. The global test statistic Z = (z_1 + ⋯ + z_n)/√n is hence standard normal under H0 and can thus be converted back into a p-value using the standard normal table.
Note that in theory, combining p-values using Stouffer’s method requires that the tests be independent. Other methods can be used for combining p-values from non-independent tests, e.g. [16, 6]. However, these calculations assume that the underlying joint distribution is known, and deriving the combination statistic’s percentiles requires numerical approximation.
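The method fits in a few lines of standard-library code (a sketch; p-values must lie strictly in (0, 1), and small combined values indicate poison, matching the rejection rule of Section 3.2):

```python
from statistics import NormalDist

_phi = NormalDist()   # standard normal distribution

def stouffer(pvalues):
    """Combine independent p-values: Z = sum(inv_Phi(p_i)) / sqrt(n) is
    standard normal under H0; return the corresponding combined p-value."""
    zs = [_phi.inv_cdf(p) for p in pvalues]     # each p must be in (0, 1)
    Z = sum(zs) / len(zs) ** 0.5
    return _phi.cdf(Z)
```

A single very small p_i contributes a large negative z_i that the sum cannot easily absorb, which is the penalizing behaviour noted in Section 3.4.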