A Mixture Model Based Defense for Data Poisoning Attacks Against Naive Bayes Spam Filters

10/31/2018, by David J. Miller, et al., Penn State University

Naive Bayes spam filters are highly susceptible to data poisoning attacks. Here, known spam sources/blacklisted IPs exploit the fact that their received emails will be treated as (ground truth) labeled spam examples, and used for classifier training (or re-training). The attacking source thus generates emails that will skew the spam model, potentially resulting in great degradation in classifier accuracy. Such attacks are successful mainly because of the poor representation power of the naive Bayes (NB) model, with only a single (component) density to represent spam (plus a possible attack). We propose a defense based on the use of a mixture of NB models. We demonstrate that the learned mixture almost completely isolates the attack in a second NB component, with the original spam component essentially unchanged by the attack. Our approach addresses both the scenario where the classifier is being re-trained in light of new data and, significantly, the more challenging scenario where the attack is embedded in the original spam training set. Even for weak attack strengths, BIC-based model order selection chooses a two-component solution, which invokes the mixture-based defense. Promising results are presented on the TREC 2005 spam corpus.


1 Introduction

Interest in adversarial learning has grown dramatically in recent years, with some works focused on devising attacks against machine learning systems, e.g., [1, 2], and others on devising defenses, e.g., [3, 4, 11]. In this work, we address data poisoning attacks on generative classifiers, with particular focus on naive Bayes spam filters. Recent reviews of spam filtering are [5, 6, 7]; see also [13, 14]. In data poisoning against spam filtering, the attacker, using an IP address known to produce spam, generates emails that will be ground-truth labeled as spam but which are more representative of ham. If such emails are added to the spam training set in sufficient quantity, they will grossly alter the word distribution under the spam model, with concomitant degradation in accuracy of the spam filter [8].

Defenses against data poisoning attacks on various systems include [10] and works described in [20]. For spam filtering, a “data sanitization” strategy was proposed [9], which rejects putative additional training samples if trial-adaptation of the spam model using these samples degrades classification accuracy on a held-out validation set. This strategy makes two assumptions: i) that there is sufficient labeled data to hold out a validation set; and ii) that the classifier has already been trained on “clean” data, with the attack consisting of additional labeled samples used for classifier retraining. [9] is not a practical strategy when the attack is embedded within the original training set (with the attack samples an unknown subset): in such a case, a strategy such as [9] would have to explore an enormous combinatoric space of hypothesized (clean spam subset, attack subset) partitions of the training data.

A potential strategy for designing a system robust to an embedded data poisoning attack is to identify the attack samples as training set outliers. While such ideas are mentioned in [8] and are related to [15], we are not aware that they have been practically and effectively applied to spam filtering. The reason naive Bayes spam filters are so susceptible to data poisoning attacks is that, under each class hypothesis (ham or spam), there is only a single model, whose training/model estimation is degraded, in an unimpeded fashion, by the attacker.

We propose a model and learning mechanism that isolates the attack samples, so that they have little effect on estimation of the learned spam word model. Instead of modeling spam using a single NB word distribution, we propose a mixture of NB models for spam. We couple this mixture model with both i) a careful component initialization strategy, so that the second component captures and isolates the attack, and ii) BIC-based model selection, which determines whether a second mixture component is warranted (and hence whether the defense is invoked). Experimentally, whenever the attack strength is sufficient to induce even the mildest degradation in classification accuracy, BIC selects a two-component model, which invokes the defense and its attack mitigation. Most importantly, our defense addresses both the scenario where the attack targets classifier retraining and the scenario where the attack is embedded in the original training set; the latter is the more challenging scenario.

2 A Mixture-based Defense Against Poisoning of Spam Filters

2.1 Notation

We consider a dictionary of $D$ unique words (following standard stemming and stoplisting), with a given email represented as a vector of word counts $\underline{x} = (x_1, \ldots, x_D)$, $x_i$ the number of times word $i$ from the dictionary occurs in the given email. Thus, emails are represented by fixed, high-dimensional (but highly sparse) vectors. Let $\mathcal{X}_{\rm ham}$ be a given training set of ham emails, used to build a NB ham model. Likewise, let $\mathcal{X}_{\rm spam}$ be a training set of spam emails for building a NB spam model.
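
As an illustration (a minimal Python sketch; the helper names and toy data are ours, not the paper's), emails can be mapped to such dictionary-indexed count vectors as follows:

from collections import Counter

def build_dictionary(tokenized_emails):
    # Collect the unique (already stemmed/stoplisted) words and index them.
    vocab = sorted({w for email in tokenized_emails for w in email})
    return {w: i for i, w in enumerate(vocab)}

def count_vector(tokens, word_index):
    # Map one tokenized email to a dictionary-length vector of word counts x.
    x = [0] * len(word_index)
    for w, c in Counter(tokens).items():
        if w in word_index:  # words outside the dictionary are ignored
            x[word_index[w]] += c
    return x

# Toy usage with a two-email "corpus".
emails = [["cheap", "pills", "cheap"], ["meeting", "tomorrow"]]
idx = build_dictionary(emails)
print(count_vector(emails[0], idx))  # [2, 0, 1, 0]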

2.2 Attack Scenarios

We consider two attack scenarios: i) classifier retraining and ii) classifier training, which is more challenging, and for which our method is the first successful defense of which we are aware. In the first (retraining) scenario, one can initially build clean ham and spam models (those uncorrupted by attack) using $\mathcal{X}_{\rm ham}$ and $\mathcal{X}_{\rm spam}$, respectively (maximum likelihood estimation or Bayesian variants are easily performed, using normalized frequency-count estimates, over the labeled corpus, for each word, conditioned on the class). Let us denote a batch of additional samples that are treated as labeled spam by $\mathcal{X}_{\rm new}$. In the retraining case, the learner pools $\mathcal{X}_{\rm new}$ with $\mathcal{X}_{\rm spam}$, retraining the spam model using the combined data pool $\mathcal{X}_{\rm spam} \cup \mathcal{X}_{\rm new}$. Note that $\mathcal{X}_{\rm new}$ may consist of legitimate spam samples, attack samples, or some combination of the two. If one can utilize a separate, uncorrupted, held-out validation set, the approach in [9] can effectively mitigate an attacking $\mathcal{X}_{\rm new}$. However, consider the other scenario, the training scenario. Unlike retraining, where the subset $\mathcal{X}_{\rm new}$ is known to the learner, in the training scenario the attack samples are embedded amongst the clean spam samples. The learner does not know whether an attack is present and, if so, which is the attack sample subset. Again the learner uses $\mathcal{X}_{\rm spam} \cup \mathcal{X}_{\rm new}$, but in this case to perform the inaugural learning of the spam model, not model retraining. In the sequel, we develop a common mixture-based defense strategy, which effectively defeats the attack in both scenarios.

2.3 Two-component Mixture Model for Spam

The standard NB spam classifier computes the maximum a posteriori (MAP) decision, given an email's count vector $\underline{x}$, as
$\hat{c} = \arg\max_{c \in \{{\rm ham},\,{\rm spam}\}} \big[\log P(c) + \sum_{i=1}^{D} x_i \log P(w_i \mid c)\big],$
where $P(c)$ is the class prior and $P(w_i \mid c)$ is the probability of word $i$ under (a multinomial model for) class $c$, i.e., there is a single NB component model for each class. We will invoke a BIC-based hypothesis testing strategy [16] to decide, given $\mathcal{X}_{\rm spam} \cup \mathcal{X}_{\rm new}$, whether to use a single or a two-component model for spam (our approach can be extended to evaluate models with more than two components if there is more than one attacker, a single but multi-modal attack, or if there is both attack and legitimate class drift reflected in $\mathcal{X}_{\rm new}$). The two-component spam model is
$P(\underline{x} \mid {\rm spam}) = \sum_{m=1}^{2} \alpha_m \prod_{i=1}^{D} P(w_i \mid m, {\rm spam})^{x_i},$
where $\alpha_m = P(m \mid {\rm spam})$, with $m \in \{1, 2\}$ the mixture component random variable. In principle, the class MAP rule can be invoked as before, evaluating for the spam case $\log P({\rm spam}) + \log P(\underline{x} \mid {\rm spam})$. However, in the spam case, due to the two components, one no longer gets a sum of logs expression, and numerical underflow practically precludes direct evaluation of $P(\underline{x} \mid {\rm spam})$.

However, this can be mitigated without any approximation of the MAP decision rule, as follows. Note that, for either component $m \in \{1, 2\}$,
$P(\underline{x} \mid {\rm spam}) = \dfrac{\alpha_m \prod_{i=1}^{D} P(w_i \mid m, {\rm spam})^{x_i}}{P(m \mid \underline{x}, {\rm spam})}.$
Then, substituting this expression into the spam decision statistic $\log P({\rm spam}) + \log P(\underline{x} \mid {\rm spam})$, we obtain
$\log P({\rm spam}) + \log \alpha_m + \sum_{i=1}^{D} x_i \log P(w_i \mid m, {\rm spam}) - \log P(m \mid \underline{x}, {\rm spam}),$
which is again a sum of logs. Similarly, the MAP decision statistic under ham can be expressed as
$\log P({\rm ham}) + \sum_{i=1}^{D} x_i \log P(w_i \mid {\rm ham}).$
The term $\log P(\underline{x})$, needed to convert these statistics into class posteriors, is common under the ham and spam expressions and can be ignored. Finally, we note that the component posterior probabilities $P(m = 1 \mid \underline{x}, {\rm spam})$ and $P(m = 2 \mid \underline{x}, {\rm spam})$ are easily computed, and without numerical underflow problems. Thus, exact ham vs. spam MAP inference in the two-component spam case is readily achieved.
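
To make this concrete, the following minimal Python sketch (our own variable names; it assumes the class priors, mixture masses and word probabilities have already been estimated) evaluates both decision statistics in the log domain, dropping the common $\log P(\underline{x})$ term exactly as above:

import numpy as np

def spam_log_stat(x, log_prior_spam, log_alpha, log_pw_spam):
    # log P(spam) + log P(x | spam) for the two-component spam mixture,
    # computed entirely in the log domain (no underflow).
    # x: (D,) count vector; log_alpha: (2,); log_pw_spam: (2, D).
    ell = log_alpha + log_pw_spam @ x          # per-component log scores
    post = np.exp(ell - ell.max())             # P(m | x, spam), computed stably
    post /= post.sum()
    m = int(np.argmax(post))                   # any m with nonzero posterior works
    return log_prior_spam + ell[m] - np.log(post[m])

def ham_log_stat(x, log_prior_ham, log_pw_ham):
    # log P(ham) + log P(x | ham) for the single-component ham model.
    return log_prior_ham + log_pw_ham @ x

# Toy usage with a 3-word dictionary.
x = np.array([2.0, 0.0, 5.0])
log_pw_ham = np.log([0.5, 0.3, 0.2])
log_pw_spam = np.log([[0.1, 0.2, 0.7], [0.4, 0.4, 0.2]])
s = spam_log_stat(x, np.log(0.5), np.log([0.5, 0.5]), log_pw_spam)
h = ham_log_stat(x, np.log(0.5), log_pw_ham)
print("spam" if s > h else "ham")

The subtraction of $\log P(m \mid \underline{x}, {\rm spam})$ recovers $\log P(\underline{x} \mid {\rm spam})$ without ever forming the underflowing product of word probabilities.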

2.4 EM Learning of the Spam Mixture

Maximum likelihood estimation of the two-component mixture can be performed via a standard application of the Expectation-Maximization (EM) algorithm [17]. The model parameters are the mixture masses $\{\alpha_1, \alpha_2\}$ and the multinomial word probabilities $\{P(w_i \mid m, {\rm spam}),\ i = 1, \ldots, D,\ m = 1, 2\}$ under the two components.

E-Step:
Given the current model parameters (the first time the E-step is invoked, it uses the initialized model parameters), one computes, for each spam-labeled training email $\underline{x}_j$, the component posteriors
$P(m \mid \underline{x}_j) = \dfrac{\alpha_m \prod_{i=1}^{D} P(w_i \mid m, {\rm spam})^{x_{ji}}}{\sum_{m'=1}^{2} \alpha_{m'} \prod_{i=1}^{D} P(w_i \mid m', {\rm spam})^{x_{ji}}}, \quad m = 1, 2.$

M-Step:
The parameters are re-estimated via:
$\alpha_m = \frac{1}{N} \sum_{j=1}^{N} P(m \mid \underline{x}_j),$   (1)
$P(w_i \mid m, {\rm spam}) = \dfrac{\sum_{j=1}^{N} P(m \mid \underline{x}_j)\, x_{ji}}{\sum_{j=1}^{N} P(m \mid \underline{x}_j)\, L_j},$   (2)

where $N$ is the number of spam-labeled training emails and $L_j = \sum_{i=1}^{D} x_{ji}$ is the number of words in the given document (we slightly modify these updates, giving additional counts to all words, to ensure that all words have non-zero probabilities). The E and M steps are alternated until a convergence criterion is met, with these iterations increasing the log-likelihood of the spam training pool. The same EM algorithm applies in both the training and retraining cases, except for one key difference: the model parameter initialization, next discussed.
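
A compact sketch of these E and M updates (our own illustrative implementation, not the paper's code): X is an N x D word-count matrix for the spam-labeled pool, a fixed iteration count stands in for the convergence criterion, and the additive "smooth" count plays the role of the extra counts mentioned above:

import numpy as np

def em_spam_mixture(X, log_pw_init, alpha_init, smooth=1.0, n_iters=50):
    # EM for a two-component multinomial (naive Bayes) mixture over spam emails.
    # X: (N, D) word counts; log_pw_init: (2, D) log word probs; alpha_init: (2,).
    log_pw, alpha = log_pw_init.copy(), alpha_init.copy()
    for _ in range(n_iters):
        # E-step: component posteriors P(m | x_j) for every email j.
        ell = np.log(alpha) + X @ log_pw.T               # (N, 2) log scores
        ell -= ell.max(axis=1, keepdims=True)
        gamma = np.exp(ell)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step, Eq. (1): mixture masses.
        alpha = gamma.mean(axis=0)
        # M-step, Eq. (2): word probabilities, with small added counts so that
        # every word keeps a non-zero probability.
        counts = gamma.T @ X + smooth                    # (2, D) weighted counts
        log_pw = np.log(counts / counts.sum(axis=1, keepdims=True))
    return log_pw, alpha, gamma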

After model learning, the component whose assigned emails are largely MAP-classified as ham (in a ham versus component comparison) is discarded. This removes the attack component. The remaining component is a nearly ‘pure spam’ component.
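
A sketch of these two post-learning decisions under our own conventions (the paper gives no pseudocode): BIC values for the one- and two-component models are compared with the standard Schwarz penalty [16], and the component to keep is chosen by checking which component's MAP-assigned emails look more like ham:

import numpy as np

def bic(log_likelihood, n_params, n_samples):
    # Schwarz BIC; lower is better under this sign convention.
    return -2.0 * log_likelihood + n_params * np.log(n_samples)

def keep_component(X, log_pw_spam, alpha, log_pw_ham, log_prior_ham, log_prior_spam):
    # Hard-assign each spam-labeled email to its best mixture component, then
    # check how often those emails score higher under ham than under that
    # component; the component whose emails mostly flip to ham is treated as
    # the attack and discarded.  Returns the index of the component to KEEP.
    ell = np.log(alpha) + X @ log_pw_spam.T          # (N, 2) component log scores
    assign = ell.argmax(axis=1)
    ham_score = log_prior_ham + X @ log_pw_ham
    frac_ham = np.ones(2)                            # empty components default to "discard"
    for m in range(2):
        mask = assign == m
        if mask.any():
            frac_ham[m] = np.mean(ham_score[mask] > log_prior_spam + ell[mask, m])
    return int(np.argmin(frac_ham))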

2.5 Spam Model Initialization

Crucial to our defense is the choice of initial model parameters, seeding the EM learning.
Retraining Scenario:

In this case, since we have already learned the inaugural (single-component) ham and spam models, we can compute the ham posterior probability for each $\underline{x} \in \mathcal{X}_{\rm new}$:
$P({\rm ham} \mid \underline{x}) = \dfrac{P({\rm ham}) \prod_{i=1}^{D} P(w_i \mid {\rm ham})^{x_i}}{\sum_{c \in \{{\rm ham},\,{\rm spam}\}} P(c) \prod_{i=1}^{D} P(w_i \mid c)^{x_i}}.$
Since an attack introduces emails labeled as spam but with characteristics of ham, we suggest initializing the multinomial word probabilities as follows:
$P(w_i \mid m{=}1, {\rm spam}) \propto \sum_{\underline{x} \in \mathcal{X}_{\rm spam}} x_i + \sum_{\underline{x} \in \mathcal{X}_{\rm new}} (1 - P({\rm ham} \mid \underline{x}))\, x_i,$   (3)
$P(w_i \mid m{=}2, {\rm spam}) \propto \sum_{\underline{x} \in \mathcal{X}_{\rm new}} P({\rm ham} \mid \underline{x})\, x_i,$   (4)
each normalized over the dictionary. Again, counts are added to avoid zero probabilities. Here, component 1 is seeded to be the true spam component, with component 2 seeded to capture the attack. We initialize the mixture masses to $\alpha_1 = \alpha_2 = 0.5$.
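
A sketch of this seeding as we have reconstructed Eqs. (3)-(4); the soft ham-posterior weighting and the equal initial mixture masses are our illustrative assumptions:

import numpy as np

def init_retraining(X_spam, X_new, log_pw_ham, log_pw_spam,
                    log_prior_ham, log_prior_spam, smooth=1.0):
    # Seed the two spam components for the retraining scenario, using the clean
    # single-component ham and spam models already learned before the new batch.
    # Ham posterior P(ham | x) for each email in the new batch X_new:
    lh = log_prior_ham + X_new @ log_pw_ham
    ls = log_prior_spam + X_new @ log_pw_spam
    p_ham = 1.0 / (1.0 + np.exp(ls - lh))    # a numerically hardened sigmoid could be used
    # Component 1 ("true spam"): clean spam counts plus the non-ham-like part of the batch.
    c1 = X_spam.sum(axis=0) + (1.0 - p_ham) @ X_new + smooth
    # Component 2 ("attack"): the ham-like part of the batch.
    c2 = p_ham @ X_new + smooth
    log_pw_init = np.log(np.vstack([c1 / c1.sum(), c2 / c2.sum()]))
    return log_pw_init, np.array([0.5, 0.5])  # equal initial mixture masses (our choice)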

Training Scenario:

In this case, the learning of the two components is performed on the combined data pool $\mathcal{X}_{\rm spam} \cup \mathcal{X}_{\rm new}$ (with the attack subset unidentified). The multinomial word probabilities are initialized as follows:
$P(w_i \mid m{=}1, {\rm spam}) \propto \sum_{\underline{x} \in \mathcal{X}_{\rm spam} \cup \mathcal{X}_{\rm new}} x_i,$   (5)
$P(w_i \mid m{=}2, {\rm spam}) \propto \sum_{\underline{x} \in \mathcal{X}_{\rm ham}} x_i,$   (6)
each normalized over the dictionary. Note that the “attack” component is initialized using the ham data. We realize that the initial estimation of component 1 is contaminated by the attack. However, the subsequent EM algorithm is effective at “undoing” this contamination, i.e., at reassigning the attack emails to component 2. Again, $\alpha_1 = \alpha_2 = 0.5$ is used for initialization of EM.
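
The corresponding sketch for the training scenario, following Eqs. (5)-(6) as reconstructed above (again, the equal initial mixture masses are our assumption):

import numpy as np

def init_training(X_spam_pool, X_ham, smooth=1.0):
    # Seed the two spam components for the training scenario.
    # X_spam_pool: all spam-labeled training emails (attack subset unknown);
    # X_ham: the ham training emails.
    c1 = X_spam_pool.sum(axis=0) + smooth   # Eq. (5): (possibly contaminated) spam counts
    c2 = X_ham.sum(axis=0) + smooth         # Eq. (6): "attack" component seeded from ham
    log_pw_init = np.log(np.vstack([c1 / c1.sum(), c2 / c2.sum()]))
    return log_pw_init, np.array([0.5, 0.5])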

3 Experimental Results

We used a subset of the TREC 2005 spam corpus [18], also used in [8]. The training sets consisted of 8000 ham and 7977 spam emails. The (exclusive) test set consisted of 2000 ham and 1994 spam emails. The dictionary (following stemming and stoplisting) consisted of 19080 words. Attack emails were generated as follows: i) the email length was randomly chosen to be the length of one of the spam training emails; ii) the words were then generated i.i.d. based on the NB ham model. We also evaluated a second attack distribution which only used the words more likely under ham than under spam; this truncation of the dictionary (to 8645 words) makes the attack more potent. The test set accuracy is evaluated over a range of attack strengths (numbers of attack emails), from zero upward, in each scenario.
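
For reference, a sketch of this attack generator as we read the description (our own function and parameter names; the paper provides no code):

import numpy as np

def make_attack_emails(n_attack, spam_lengths, p_w_ham, p_w_spam=None, seed=0):
    # Each attack email copies the length of a randomly chosen spam training
    # email and draws its words i.i.d. from the ham word distribution; if
    # p_w_spam is given, the dictionary is first truncated to words more
    # likely under ham than under spam (the "truncated" attack).
    rng = np.random.default_rng(seed)
    p = p_w_ham.copy()
    if p_w_spam is not None:
        p[p_w_ham <= p_w_spam] = 0.0
        p /= p.sum()
    X_attack = np.zeros((n_attack, p.size))
    for j in range(n_attack):
        length = int(rng.choice(spam_lengths))
        X_attack[j] = rng.multinomial(length, p)
    return X_attack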

Retraining Scenario:

Figure 1: Retraining – test set accuracy with ‘pure-ham’ attacks.
Figure 2: Retraining – test set accuracy with ‘truncated’ attacks.

For the retraining scenario, our method is compared with the standard NB method for both attacks. As shown in Fig. 1, under the ‘pure-ham’ attack, the test accuracy of the standard NB drops rapidly as the attack strength grows. Our method, however, maintains high test accuracy (around 0.9) across all attack strengths. For the ‘truncated’ attack, where the attack is more potent, the test accuracy of the standard NB drops even faster. Our method still performs well, showing its robustness to variation in the attack strength.

In addition to the results shown in the figures, we also address two points. First, we compared the BIC values between the single-component and two-component models. For every nonzero number of attack emails tested, the two-component model is preferred, with a lower BIC. Second, the learned mixture almost completely isolates the attack in the second NB component, and the original spam component is essentially unchanged by the attacks. In particular, for the ‘truncated’ attacks, across the tested attack strengths essentially none of the attack emails are classified as true spam by the mixture (in one case, a single attack email is misclassified as true spam).

Training Scenario:

Figure 3: Training – test set accuracy with ‘pure-ham’ attacks.

For the training scenario, the spam distribution is unknown a priori; thus, the ‘truncated’ attack would exploit knowledge that is unavailable even to the designer of the classifier. We therefore do not evaluate this attack in the training case, focusing on the ‘pure-ham’ attack. Fig. 3 shows that our method performs similarly to the standard NB for low attack strengths and dramatically outperforms it when the attack strength is high.

4 Discussion

We have considered attacks which corrupt the spam data. Our approach could also be applied if the attack targets ham, rather than spam. However, it is more complicated to address an attack that simultaneously poisons both spam and ham; that is a good subject for future work. Another scenario of interest is where there is both an attack and legitimate “class drift”, e.g., a time-varying distribution for spam. In such a case, one component may be needed to model spam class drift, with another capturing the attack. It may be possible to distinguish these two components because we would expect the attack distribution to be closer to the ham distribution than legitimately drifting spam. Another good research direction is to apply parsimonious mixture modeling [19] to learn accurate spam and attack components, working in the full word space. This approach is much more computationally complex, but should also provide a highly accurate “initialization” of the spam model.

5 Conclusions

In this work, we proposed a mixture model based defense against data poisoning attacks on naive Bayes spam filters, addressing attacks on both classifier re-training and initial classifier training. Our approach should be applicable more generally, to defend against data poisoning attacks on other domains that involve generative modeling of the data. Our mixture-based approach could also be applied as a pre-processor, to remove attacks before they corrupt (discriminative) training of a deep neural network or support vector machine based classifier.

References

  • [1] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In ICLR, 2014.
  • [2] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z.B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In Proc. 1st IEEE European Symp. on Security and Privacy, 2016.
  • [3] D. Meng and H. Chen. Magnet: a two-pronged defense against adversarial examples. In CCS, 2017.
  • [4] D.J. Miller, Y. Wang, and G. Kesidis. Anomaly Detection of Attacks on DNN Classifiers at Test Time. In Proc. IEEE MLSP, Sept. 2018.
  • [5] C. Laorden, X. Ugarte-Pedrero, I. Santos, B. Sanz, J. Nieves, and P.G. Bringas. Study on the effectiveness of anomaly detection for spam filtering. Elsevier Information Sciences, vol. 277, pp. 421-444, Sept, 2014.
  • [6] Anomaly Detection Challenges: ADCG SS14 Challenge 02 - Spam Mails Detection. https://inclass.kaggle.com/c/adcg-ss14-challenge-02-spam-mails-detection, Apr-May 2014.
  • [7] G.V. Cormack. Email spam filtering: A systematic review. Foundations and Trends® in Information Retrieval, 2008.
  • [8] L. Huang, A.D. Joseph, B. Nelson, B.I.P. Rubinstein, and J.D. Tygar. Adversarial machine learning. In Proc. 4th ACM Workshop on Artificial Intelligence and Security (AISec), 2011.
  • [9] B. Nelson, M. Barreno, F.J. Chi, A. D. Joseph, B.I.P. Rubinstein, U. Saini, C. Sutton, J.D. Tygar, and K. Xia. Misleading learners: Co-opting your spam filter. In J. J. P. Tsai and P. S. Yu, editors, Machine Learning in Cyber Trust: Security, Privacy, Reliability, pages 17-51. Springer, 2009.
  • [10] J. Steinhardt, P.W.W. Koh, and P.S. Liang. Certified defenses for data poisoning attacks. Advances in Neural Information Processing Systems, 2017.
  • [11] D.J. Miller, X. Hu, Z. Qiu, and G. Kesidis. Adversarial learning: a critical review and active learning study. In Proc. IEEE MLSP, Sept. 2017.
  • [12] R. Feinman, R. Curtin, S. Shintre, and A. Gardner. Detecting adversarial samples from artifacts. https://arxiv.org/abs/1703.00410v2, Mar. 1, 2017.
  • [13] K. Rubin. The Ultimate List of Email SPAM Trigger Words. http://blog.hubspot.com/blog/tabid/6307/bid/30684/The-Ultimate-List-of-Email-SPAM-Trigger-Words.aspx, Jan. 11, 2012.
  • [14] S. Mitchell. Common words that trigger spam filters. http://www.inmotionhosting.com/support/edu/everythingemail/spam-prevention-techniques/common-spamwords, Apr. 26, 2013.
  • [15] D.J. Miller and J. Browning. A mixture model and EM-based algorithm for class discovery, robust classification, and outlier rejection in mixed labeled/unlabeled data sets. IEEE Trans. on Pattern Anal. and Machine Intell., pp. 1468-1483, 2003.
  • [16] G. Schwarz. Estimating the dimension of a model. The Annals of Statistics, vol. 6, no. 2, pp. 461-464, 1978.
  • [17] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum-likelihood from incomplete data via the EM algorithm. J. Royal Stat. Soc., Series B, vol. 39, pp. 1-38, 1977.
  • [18] TREC 2005 Spam Track corpus. https://plg.uwaterloo.ca/~gvcormac/trecspamtrack05
  • [19] M.W. Graham and D.J. Miller. Unsupervised learning of parsimonious mixtures on large spaces with integrated feature and component selection. IEEE Trans. on Signal Processing, pp. 1289-1303, April 2006.
  • [20] B. Biggio and F. Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 2018.