
A Game-Theoretic Approach to Adversarial Linear Support Vector Classification

06/24/2019
by   Farhad Farokhi, et al.

In this paper, we employ a game-theoretic model to analyze the interaction between an adversary and a classifier. There are two classes (i.e., positive and negative classes) to which data points can belong. The adversary is interested in maximizing the probability of misdetection for the positive class (i.e., the false negative probability). The adversary, however, does not want to significantly modify the data point, so that it still maintains the favourable traits of the original class. The classifier, on the other hand, is interested in maximizing the probability of correct detection for the positive class (i.e., the true positive probability) subject to a lower bound on the probability of correct detection for the negative class (i.e., the true negative probability). For conditionally Gaussian data points (conditioned on the class) and linear support vector machine classifiers, we rewrite the optimization problems of the adversary and the classifier as convex optimization problems and use best response dynamics to learn an equilibrium of the game. This results in computing a linear support vector machine classifier that is robust against adversarial input manipulations. We illustrate the framework on a synthetic dataset and a public Cardiovascular Disease dataset.


I Introduction

Rapid developments in machine learning techniques are anticipated to boost productivity and spur economic growth. The potential to extract accurate analytics gives rise to a data-driven economy which, according to a recent McKinsey report [1], is estimated to potentially deliver an additional economic output of around $13 trillion by 2030. This has motivated a world-wide race to put machine learning in everything, ranging from the health sector to aerospace engineering. However, machine learning systems face security concerns that, until recently, have not attracted much attention.

Machine learning algorithms have been observed to be vulnerable to adversarial manipulations of their inputs after training and deployment, known as evasion attacks [2, 3, 4]. In fact, some machine learning models are shown to be adversely influenced by very small perturbations to the inputs [5, 3, 6]. These observations severely restrict their applications in practice.

Most common methods for securing machine learning algorithms against adversarial inputs are ad hoc in nature or based on heuristics; see, e.g., [6, 3, 7]. For instance, it has been shown that injecting adversarial examples into the training set, often referred to as adversarial training, can increase robustness to adversarial manipulations [3]. However, this approach is dependent on the method used for generating adversarial examples, and the number of required adversarial examples is often not known a priori.

In this paper, we propose a game-theoretic approach to model and analyze the interactions between an adversary and a decision maker (i.e., a classifier). As a starting point for research, we focus on a binary classification problem using linear support vector machines with Gaussian-distributed data in each class. This way, we can compute an optimal adversarial linear support vector machine. Note that the problem of detecting and mitigating evasion attacks in support vector machines is still an ongoing debate [8, 9], with not much known in the way of designing robust classifiers, except through adding adversarial examples to the training dataset (discussed in the earlier references) or, in a heuristic manner, by changing the regularization term [10].

In particular, we model the interaction between the adversary and the classifier using a constant-sum game. There are two classes (i.e., positive and negative classes) to which the data can belong. The adversary is interested in maximizing the probability of misdetection for the positive class, i.e., the probability that an input is classified as belonging to the negative class while it is from the positive class, also known as the false negative probability. However, the adversary does not want to significantly modify the data, so that it still maintains the favourable traits of the original class. An example of such a classification problem is simplified spam filtering, in which the nature of the email determines its class (with the positive class denoting spam emails). The adversary’s objective is to modify the spam emails so that they pass the spam filtering algorithm. Manipulating the email by a large amount might negate the adversarial nature of spam emails. Also, note that the adversary cannot access all the emails and thus can only manipulate the spam emails. The classifier is interested in maximizing the probability of correct detection for the positive class, i.e., the probability that an input is classified as belonging to the positive class if it is from the positive class, also known as the true positive probability. In the spam filtering example, the classifier aims to determine whether an email is spam based on possibly modified spam emails and unaltered genuine emails. Evidently, if the objective of the classifier were solely to correctly catch all data points belonging to the positive class, its optimal behaviour would be to ignore the received data point and mark it as belonging to the positive class. This would correctly identify all data points belonging to the positive class; however, it would also misclassify all data points from the negative class. This is impractical: such a policy, in the spam filtering example, would result in marking all emails as spam, which is undesirable. Therefore, the classifier enforces a lower bound on the probability of correct detection for the negative class, i.e., the probability that an input is classified as belonging to the negative class if it is from the negative class, also known as the true negative probability. We rewrite the optimization problems of the adversary and the classifier as two convex optimization problems and use best response dynamics to learn an equilibrium of the game.

The problem formulation of this paper is in essence close to cheap-talk games [11, 12, 13] and Bayesian persuasion [14, 15, 16] in which a better-informed sender wants to communicate with a receiver in a strategic manner to sway its decision. However, there is a stark difference between those studies and the setup of this paper. In this paper, the classifier (i.e., the receiver) is restricted to follow a machine learning model (specifically, a linear support vector machine), which is not necessarily Bayesian.

The rest of the paper is organized as follows. Section II presents the game-theoretic problem formulation. Numerical methods for computing the equilibrium are presented in Section III. Numerical examples are presented in Section IV. Finally, Section V concludes the paper and presents avenues for future research.

II Problem Formulation

Consider the communication structure between an adversary and a classifier as in Figure 1.

The adversary has access to a random variable , which can belong to two classes: positive and negative. The class to which belongs is denoted by , which is a binary random variable itself with . The random variable is assumed to be Gaussian with mean and covariance matrix if , and is assumed to be Gaussian with mean and covariance matrix if . The notation implies that is a symmetric positive definite matrix, while implies that is positive semi-definite.

The adversary communicates a message to the classifier. This message may or may not be truthful. We assume that follows

(1)

where is a weighting matrix and is a Gaussian random variable (i.e., additive Gaussian noise) with mean and covariance . Let be the policy of the adversary. The set of all policies of the adversary is denoted by . Note that the adversary may not be able to access all the data points, e.g., all the emails in the spam filtering example discussed in the introduction, and thus can only manipulate the ones belonging to the positive class, e.g., spam emails.

In this paper, we restrict the adversary’s policy to be linear. This is to ensure that the adversary’s abilities are a match for the classifier (as the classifier is assumed to be a linear support vector machine). Furthermore, the linearity of the adversary simplifies the analysis due to the Gaussianity of the data points. This analysis provides a lower bound on the influence of a more general adversary, since such an adversary can gain more degrees of freedom for manipulating the data points by extending its set of strategies to also cover nonlinear mappings.
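To make the effect of such a linear policy concrete, the following minimal Python sketch simulates it for placeholder parameters; the names mu_p, Sigma_p (positive-class mean and covariance), W (weighting matrix), and Sigma_v (noise covariance) are assumptions, since the paper's symbols are not recoverable from this extraction. It verifies numerically that a linear manipulation of Gaussian data remains Gaussian, with mean W mu_p and covariance W Sigma_p W^T + Sigma_v.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder positive-class parameters (not the paper's values).
    mu_p = np.array([1.0, 1.0])
    Sigma_p = np.array([[1.0, 0.3], [0.3, 0.5]])

    # Assumed linear adversary policy: y = W x + v with v ~ N(0, Sigma_v).
    W = np.array([[0.8, 0.0], [0.2, 0.9]])
    Sigma_v = 0.1 * np.eye(2)

    x = rng.multivariate_normal(mu_p, Sigma_p, size=200_000)
    v = rng.multivariate_normal(np.zeros(2), Sigma_v, size=200_000)
    y = x @ W.T + v

    # Empirical moments of y match the closed-form Gaussian parameters.
    print(np.allclose(y.mean(axis=0), W @ mu_p, atol=2e-2))
    print(np.allclose(np.cov(y, rowvar=False), W @ Sigma_p @ W.T + Sigma_v, atol=2e-2))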

The classifier intends to determine the class to which the random variable actually belongs, based on the received message . The decision of the classifier is denoted by and is determined by a linear support vector machine classifier:

(2)

where is a vector of weights and is a bias. Note that scaling both and by a positive constant does not change the sign of and thus, without loss of generality, we can assume that and . Let be the policy of the classifier. The set of all policies of the classifier is denoted by .

The goal of the classifier is to correctly classify as many cases belonging to the positive class as possible. Therefore, the classifier wants to maximize

(3)

Evidently, if the objective of the classifier were solely to maximize , the optimal behaviour would be to set and , i.e., to ignore the received message and mark it as belonging to the positive class. This is, however, impractical; e.g., such a policy in the spam filtering example results in marking all emails as spam, which is undesirable. Therefore, the set of actions of the classifier is constrained by

(4)

where is a constant. By selecting small enough, we can ensure that data points belonging to the negative class (e.g., non-spam emails) are almost entirely correctly classified. In what follows, we use the notation

(5)
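Since a linear functional of a Gaussian vector is itself Gaussian, both the objective in (3) and the constraint in (4) can be evaluated in closed form. The following Python sketch illustrates the computation; the names w, b (classifier weights and bias) and the class-conditional means and covariances are assumed notation, and the sign convention for the positive class is an assumption.

    import numpy as np
    from scipy.stats import norm

    def true_positive_prob(w, b, mu_pos, Sigma_pos):
        # P(w^T y + b >= 0 | positive class): w^T y + b is Gaussian with
        # mean w^T mu_pos + b and variance w^T Sigma_pos w.
        return norm.cdf((w @ mu_pos + b) / np.sqrt(w @ Sigma_pos @ w))

    def true_negative_prob(w, b, mu_neg, Sigma_neg):
        # P(w^T y + b < 0 | negative class).
        return norm.cdf(-(w @ mu_neg + b) / np.sqrt(w @ Sigma_neg @ w))

    # Example with placeholder parameters.
    w, b = np.array([1.0, 1.0]), -0.5
    print(true_positive_prob(w, b, np.array([1.0, 1.0]), np.eye(2)))
    print(true_negative_prob(w, b, np.array([-1.0, -1.0]), np.eye(2)))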

Fig. 1: Communication structure between an adversary and a classifier playing the adversarial classification game.

The goal of the adversary is to deceive the classifier into misclassifying more data points from the positive class, e.g., accepting more spam emails. This is achieved by maximizing

(6)

However, the adversary does not want to make large changes to as that might defeat its original purpose, e.g., in the spam filtering example, the email might no longer contain the desirable traits of the spam emails like unsolicited commercial advertisements. Conceptually, this can be achieved by ensuring that the magnitude of the changes is constrained by

(7)

where is a constant. Although, in this paper, we consider a constraint on the variance of the manipulations, the analysis can be readily extended to other constraints, e.g., mean absolute deviations. In what follows, we use the notation

(8)
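As a reasoning aid for the constraint above, note that the expected squared magnitude of a linear manipulation of Gaussian data has a closed form. In assumed notation, with x ~ N(mu, Sigma), independent noise v ~ N(0, Sigma_v), and y = W x + v (the exact form of (7) is not recoverable from this extraction),

    \mathbb{E}\,\|y - x\|_2^2 = \|(W - I)\mu\|_2^2 + \operatorname{tr}\big((W - I)\Sigma(W - I)^\top\big) + \operatorname{tr}(\Sigma_v),

so a budget of this form amounts to a convex quadratic constraint in W and a linear constraint in Sigma_v.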
Definition 1 (Adversarial Classification Game).

An adversarial classification game is defined as a strategic game between two players: an adversary and a classifier. The utilities of the adversary and the classifier are and , respectively. The action spaces of the adversary and the classifier are and , respectively.

In the language of [17], an adversarial classification game is in fact a competitive economy, as the action spaces of the players potentially depend on the actions of the other players. In this paper, we use the term game instead of competitive economy, in line with more recent game theory literature.

Definition 2 (Equilibrium).

A pair of policies constitutes an equilibrium if

(9a)
(9b)

With these definitions in hand, we are ready to present the results of the paper.

III Main Results

We can prove an important result regarding the adversarial classification game illustrating the direct conflict of interest between the adversary and the classifier, as expected.

Proposition 1 (Constant-Sum Game).

The adversarial classification game is a constant-sum game.

Proof:

Note that for any and . ∎
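In assumed notation (the symbols are missing from this extraction), the identity behind Proposition 1 is simply that the two conditional probabilities on the positive class sum to one for any pair of policies:

    \Pr(\text{decision} = \text{negative} \mid \text{class} = \text{positive}) + \Pr(\text{decision} = \text{positive} \mid \text{class} = \text{positive}) = 1,

i.e., the adversary's utility (the false negative probability) and the classifier's utility (the true positive probability) always sum to one, so the game is constant-sum.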

In the remainder of this section, we provide a method for computing equilibria of an adversarial classification game. We first show that the best responses of the players can be computed using convex optimization. In what follows, $\operatorname{erf}$ denotes the error function, defined as $\operatorname{erf}(z)=\frac{2}{\sqrt{\pi}}\int_0^z e^{-t^2}\,\mathrm{d}t$.
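For reference, the proof below repeatedly converts Gaussian probabilities into expressions involving the error function; in assumed notation, if Z is Gaussian with mean m and standard deviation sigma, then

    \Pr(Z \ge 0) = \frac{1}{2}\left(1 + \operatorname{erf}\!\left(\frac{m}{\sigma\sqrt{2}}\right)\right),

so, because erf is increasing, maximizing such a probability is equivalent to maximizing the ratio m/sigma, which is what turns each player's problem into a fractional program.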

Theorem 1.

Let be such that (1) and (2) hold with probability one with the following parameters:

where is given by

(10a)
(10b)
(10c)
(10d)
(10e)

and is given by

(11a)
(11b)
(11c)
(11d)
(11e)

Then is an equilibrium of the adversarial classification game.

Proof:

Let us consider the classifier. The utility of the classifier can be simplified as

where the last equality follows from the fact that , conditioned on the observation that , is a Gaussian random variable with the following mean and variance:

Because is an increasing function, maximizing is equivalent to minimizing . The constraint of the classifier can also be rewritten as

where the last equality again follows from the fact that , conditioned on the observation that , is a Gaussian random variable with the following mean and variance:

The constraint that is equivalent to

Again, because is an increasing function, we can rewrite the classifier’s constraint as

Note that because . These derivations allow us to transform the optimization problem in (9b) into

(12a)
(12b)
(12c)
(12d)

We use the approach of [18] for the constraint to eliminate the fractional constraint. We define the change of variables:

Hence, we can rewrite (12) as

(13a)
(13b)
(13c)
(13d)
(13e)

We can drop the last two inequalities noting that we can set , where

(14a)
(14b)
(14c)

Again, we use the approach of [18], but this time for the utility, to rewrite this fractional optimization problem. To do so, define

Following [18], the optimization problem in (14) is equivalent to

(15a)
(15b)
(15c)
(15d)
(15e)

Now, we consider the adversary. The utility of the adversary is given by

Hence, because is an increasing function, maximizing is the same as maximizing . Furthermore, the constraint of the adversary can be rewritten as

Following these derivations, we can transform the optimization problem in (9a) to

(16a)
(16b)
(16c)

Note that the inequality constraint (16b) can be replaced with the following three constraints:

(17a)
(17b)
(17c)

Further, the constraint (17a) can be linearized using its Schur complement (because ) to get

(18)

Again, we can use the Schur complement to transform the constraint (18) into

(19)
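For completeness, the Schur complement fact invoked in these two steps is, in assumed notation: for a symmetric block matrix with C positive definite,

    \begin{bmatrix} A & B \\ B^\top & C \end{bmatrix} \succeq 0 \quad \Longleftrightarrow \quad A - B C^{-1} B^\top \succeq 0,

which allows a convex quadratic constraint to be rewritten as a linear matrix inequality.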

Therefore, the optimization problem in (16) can be transformed into

(20a)
(20b)
(20c)
(20d)
(20e)

Define

Following [18], the optimization problem in (20) is equivalent to

(21a)
(21b)
(21c)
(21d)
(21e)
Algorithm 1 Learning an equilibrium of the adversarial classification game.
0:  
0:  ,
1:  for  do
2:     Compute by solving (10) for fixed
3:     Update
4:     Compute by solving (11) for fixed
5:     Update
6:  end for
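Since the convex programs (10) and (11) cannot be reconstructed from this extraction, the following Python sketch only illustrates the alternating structure of Algorithm 1 under simplifying assumptions: the classifier maximizes the closed-form true positive probability subject to the true negative constraint, the adversary picks a linear map W subject to an expected-magnitude budget (noise term omitted), and both best responses are approximated with a generic nonlinear solver rather than the paper's convex reformulations. All parameter names and values are placeholders.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Placeholder class-conditional Gaussians and constants (not the paper's values).
    mu_p, Sigma_p = np.array([2.0, 2.0]), np.eye(2)
    mu_n, Sigma_n = np.array([-2.0, -2.0]), np.eye(2)
    eps, gamma = 0.05, 1.0   # assumed true-negative slack and manipulation budget
    d = mu_p.size

    def manipulated_moments(W):
        # Moments of y = W x for x ~ N(mu_p, Sigma_p); additive noise omitted.
        return W @ mu_p, W @ Sigma_p @ W.T

    def tp_prob(wb, m_pos, S_pos):
        w, b = wb[:d], wb[d]
        return norm.cdf((w @ m_pos + b) / (np.sqrt(w @ S_pos @ w) + 1e-9))

    def tn_prob(wb):
        w, b = wb[:d], wb[d]
        return norm.cdf(-(w @ mu_n + b) / (np.sqrt(w @ Sigma_n @ w) + 1e-9))

    def classifier_best_response(W, wb0):
        # Maximize true positive probability s.t. true negative >= 1 - eps and ||w|| <= 1.
        m_pos, S_pos = manipulated_moments(W)
        cons = [{"type": "ineq", "fun": lambda wb: tn_prob(wb) - (1.0 - eps)},
                {"type": "ineq", "fun": lambda wb: 1.0 - np.sum(wb[:d] ** 2)}]
        return minimize(lambda wb: -tp_prob(wb, m_pos, S_pos), wb0,
                        method="SLSQP", constraints=cons).x

    def adversary_best_response(wb, W0):
        # Minimize true positive probability s.t. E||y - x||^2 <= gamma.
        def budget(Wflat):
            D = Wflat.reshape(d, d) - np.eye(d)
            return gamma - (D @ mu_p) @ (D @ mu_p) - np.trace(D @ Sigma_p @ D.T)
        def objective(Wflat):
            return tp_prob(wb, *manipulated_moments(Wflat.reshape(d, d)))
        return minimize(objective, W0.ravel(), method="SLSQP",
                        constraints=[{"type": "ineq", "fun": budget}]).x.reshape(d, d)

    # Best response dynamics: the loop of Algorithm 1.
    wb, W = np.array([0.5, 0.5, 0.0]), np.eye(d)
    for _ in range(20):
        wb = classifier_best_response(W, wb)
        W = adversary_best_response(wb, W)
    print("classifier (w, b):", wb)
    print("true positive probability at the fixed point:", tp_prob(wb, *manipulated_moments(W)))

This only sketches the alternating structure; the paper instead solves convex programs at each step, which is what the convergence argument below relies on.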

Using the Schur complement of (21b) and defining , we can transform (21) into

(22a)
(22b)
(22c)
(22d)
(22e)
(22f)

By defining such that and using the Schur complements of (22b) and (22d), we can transform (22) into

This concludes the proof. ∎

We can use the best response dynamics summarized in Algorithm 1 to extract an equilibrium of the game. The iterates in Algorithm 1 converge to an equilibrium of the adversarial classification game; the convergence of the best response dynamics follows from [19] because of the constant-sum nature of the game (see Proposition 1).

Fig. 2: Illustration of the effect of the adversary on linear support vector machine classifiers using synthetic Gaussian data: (a) non-adversarial, (b) naïve classifier, (c) equilibrium. There is no adversary in the non-adversarial case. The naïve case refers to the case in which the classifier is not prepared to respond to the adversary. The equilibrium of the game is recovered by Algorithm 1. The optimal non-adversarial and adversarial support vector machines are illustrated with dashed and solid lines, respectively. The original data points from the positive class are shown by the red pluses and the data points from the negative class are shown by the blue squares. The manipulated data points from the positive class are shown by the black triangles.

IV Numerical Example

In this section, we illustrate the applicability of the developed game-theoretic framework on two numerical problems: an illustrative example using Gaussian data and a practical example using real data on heart disease classification.

IV-A Illustrative Example

Consider an example in which the data for the positive and negative classes are Gaussian random variables with the following means and covariance matrices:

For the following experiments, 500 data points from each class are randomly generated to illustrate the effects of the adversary and the classifier.
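The means and covariance matrices of this example are not recoverable from the extraction; the sketch below generates a comparable synthetic dataset with placeholder parameters and fits an ordinary hinge-loss linear SVM as a rough non-adversarial baseline (note that this differs from the paper's non-adversarial classifier, which maximizes the true positive probability subject to a true negative constraint).

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    n = 500  # data points per class, as in the experiment

    # Placeholder class-conditional Gaussians (not the paper's values).
    X_pos = rng.multivariate_normal([2.0, 2.0], [[1.0, 0.2], [0.2, 1.0]], size=n)
    X_neg = rng.multivariate_normal([-2.0, -2.0], [[1.0, -0.1], [-0.1, 1.0]], size=n)
    X = np.vstack([X_pos, X_neg])
    y = np.hstack([np.ones(n), -np.ones(n)])

    clf = LinearSVC(C=1.0).fit(X, y)
    pred = clf.predict(X)
    print("empirical true positive rate:", np.mean(pred[:n] == 1))
    print("empirical true negative rate:", np.mean(pred[n:] == -1))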

We first consider the case where there is no adversary. In this case, the classifier is merely interested in identifying the optimal linear support vector machine by maximizing the true positive probability subject to a constraint that the true negative probability is greater than or equal to . Figure 2 (a) shows the optimal non-adversarial support vector machine with the dashed line. The data points from the positive class are shown by the red pluses and the data points from the negative class are shown by the blue squares. In this case, the true negative probability is and the true positive probability (which is equal to one minus the false negative probability) is . This high performance is of course an artifact of the setup of the example, in which the data points for both classes are linearly separable with high probability and mixing is only due to outliers.

Now, let the classifier select this optimal non-adversarial support vector machine as its policy. Assume that the adversary can now manipulate the data in the positive class but the classifier is not prepared to respond to these manipulations. The adversary is interested in manipulating the data from the positive class in order to maximize the false negative probability subject to a bound on the expected variance of the changes with . Figure 2 (b) shows the effect of the adversary on the performance of this naïve classifier. The naïveté refers to the fact that the classifier is not prepared to respond to the adversary. The manipulated data points from the positive class are depicted by the black triangles. Similarly, the original data points from the positive class are shown by red pluses and the data points from the negative class are shown by the blue squares. In this case, the false negative probability increases by 493% in comparison to the non-adversarial case. This is of course due to the unprepared, naïve nature of the classifier.

Now, consider the case where the classifier uses the optimal linear support vector machine extracted from the equilibrium of the game computed by Algorithm 1. In this case, it is also to the benefit of the adversary to employ its optimal manipulation policy corresponding to the equilibrium of the game. Figure 2 (c) shows the optimal adversarial support vector machine with the solid line (compare with the non-adversarial support vector machine, still depicted by the dashed line). Again, the manipulated data points from the positive class are shown by the black triangles, the original data points from the positive class are shown by red pluses, and the data points from the negative class are shown by the blue squares. By employing the optimal adversarial support vector machine, the classifier can reduce the false negative probability by 56% in comparison to the naïve case.

                      True negative probability   False negative probability
Non-adversarial       0.8986                      0.2000
Naïve classifier      0.8986                      0.9758
Equilibrium           0.9130                      0.6303
TABLE I: The effect of the adversary on linear support vector machine classifiers using the Cardiovascular Disease dataset. There is no adversary in the non-adversarial case. The naïve case refers to the case in which the classifier is not prepared to respond to the adversary. The equilibrium of the game is recovered by Algorithm 1.

IV-B Heart Disease

In this section, we use the Cardiovascular Disease dataset on Kaggle [20]. The dataset contains the age, height, weight, gender, systolic and diastolic blood pressures, cholesterol level, and glucose level, as well as the smoking, drinking, and activity levels, of 70,000 individuals. There are two classes of individuals: those with no heart disease (negative class) and those with a heart disease (positive class).

Consider the case where an adversary is interested in causing an individual from the positive class to be misclassified. This could be motivated by the adversary wanting to forge medical documents, with minimal changes, to pass a life insurance test. For the sake of numerical stability, we scale the data with the inverse of the Cholesky factor of the covariance matrix of the data. This is just to ensure that all the entries of the data are of the same magnitude. This is a linear transformation whose effect can be compensated for in the linear support vector machine and the linear manipulations of the adversary; therefore, there is no loss of generality in using this transformation.
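The scaling described above can be implemented in a few lines; a placeholder feature matrix stands in for the dataset, since only the transformation itself is being illustrated.

    import numpy as np

    rng = np.random.default_rng(2)
    # Placeholder stand-in for the (n_samples, n_features) feature matrix.
    X = rng.normal(size=(70_000, 11)) @ rng.normal(size=(11, 11))

    Sigma = np.cov(X, rowvar=False)         # sample covariance of the features
    L = np.linalg.cholesky(Sigma)           # Sigma = L L^T with L lower triangular
    X_scaled = np.linalg.solve(L, X.T).T    # apply the inverse Cholesky factor
    # After scaling, the sample covariance is (numerically) the identity.
    print(np.allclose(np.cov(X_scaled, rowvar=False), np.eye(11), atol=1e-8))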

To be able to use the developed framework, we fit Gaussian density functions to the data points of each class and follow the approach of this paper for designing the classifier and computing the optimal manipulation by the adversary. In what follows, we set and . Table I shows the effect of the adversary on the linear support vector machine classifier. Although our Gaussian assumption might not be entirely valid, the optimal non-adversarial support vector machine almost meets the constraint on the true negative probability (with a true negative probability of instead of ). In the naïve case, the adversary can exploit the ignorance of the classifier to increase the false negative probability by 488%; nearly all individuals from the positive class can be made to pass the test. Following the equilibrium extracted by Algorithm 1, the performance of the adversary is degraded by 35%. This is a significant improvement for the classifier. To be able to further reduce the false negative probability, we need to increase . This portrays the trade-off that the classifier faces.
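Fitting the class-conditional Gaussian densities then amounts to computing per-class sample means and covariances; the variable names below (X_scaled, labels) are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    # Placeholder stand-ins for the scaled features and binary labels.
    X_scaled = rng.normal(size=(1000, 11))
    labels = rng.integers(0, 2, size=1000)   # 1 = heart disease (positive class)

    mu_pos = X_scaled[labels == 1].mean(axis=0)
    Sigma_pos = np.cov(X_scaled[labels == 1], rowvar=False)
    mu_neg = X_scaled[labels == 0].mean(axis=0)
    Sigma_neg = np.cov(X_scaled[labels == 0], rowvar=False)
    # These moments are the inputs to the optimization problems of Section III.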

V Conclusions and Future Work

We used a constant-sum game to model the interaction between an adversary and a classifier. For Gaussian data and linear support vector machine classifiers, we transformed the optimization problems of the adversary and the classifier to convex optimization problems. We then utilized best response dynamics to learn an equilibrium of the game in order to extract linear support vector machine classifiers that are robust to adversarial input manipulations.

References

  • [1] McKinsey Global Institute, “Notes from the AI frontier: Modeling the impact of AI on the world economy,” 2018.
  • [2] N. Dalvi, P. Domingos, S. Sanghai, and D. Verma, “Adversarial classification,” in Proceedings of the tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 99–108, ACM, 2004.
  • [3] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in Proceedings of the 3rd International Conference on Learning Representations, 2015.
  • [4] X. Yuan, P. He, Q. Zhu, and X. Li, “Adversarial examples: Attacks and defenses for deep learning,” IEEE Transactions on Neural Networks and Learning Systems, 2019.
  • [5] B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, and F. Roli, “Evasion attacks against machine learning at test time,” in Joint European conference on machine learning and knowledge discovery in databases, pp. 387–402, Springer, 2013.
  • [6] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, “Distillation as a defense to adversarial perturbations against deep neural networks,” in 2016 IEEE Symposium on Security and Privacy (SP), pp. 582–597, IEEE, 2016.
  • [7] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” in Proceedings of the 5th International Conference on Learning Representations, 2017.
  • [8] C. Frederickson, M. Moore, G. Dawson, and R. Polikar, “Attack strength vs. detectability dilemma in adversarial machine learning,” in 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–8, IEEE, 2018.
  • [9] Y. Han and B. Rubinstein, “Adequacy of the gradient-descent method for classifier evasion attacks,” in Workshops at the Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • [10] P. Russu, A. Demontis, B. Biggio, G. Fumera, and F. Roli, “Secure kernel machines against evasion attacks,” in Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security, pp. 59–69, ACM, 2016.
  • [11] V. P. Crawford and J. Sobel, “Strategic information transmission,” Econometrica: Journal of the Econometric Society, pp. 1431–1451, 1982.
  • [12] F. Farokhi, A. M. H. Teixeira, and C. Langbort, “Estimation with strategic sensors,” IEEE Transactions on Automatic Control, vol. 62, no. 2, pp. 724–739, 2016.
  • [13] J. Farrell and M. Rabin, “Cheap talk,” Journal of Economic Perspectives, vol. 10, no. 3, pp. 103–118, 1996.
  • [14] E. Kamenica and M. Gentzkow, “Bayesian persuasion,” American Economic Review, vol. 101, no. 6, pp. 2590–2615, 2011.
  • [15] S. Dughmi and H. Xu, “Algorithmic Bayesian persuasion,” in Proceedings of the Forty-Eighth Annual ACM Symposium on Theory of Computing, pp. 412–425, ACM, 2016.
  • [16] V. S. S. Nadendla, C. Langbort, and T. Başar, “Effects of subjective biases on strategic information transmission,” IEEE Transactions on Communications, vol. 66, no. 12, pp. 6040–6049, 2018.
  • [17] K. J. Arrow and G. Debreu, “Existence of an equilibrium for a competitive economy,” Econometrica: Journal of the Econometric Society, pp. 265–290, 1954.
  • [18] S. Schaible, “Parameter-free convex equivalent and dual programs of fractional programming problems,” Zeitschrift für Operations Research, vol. 18, no. 5, pp. 187–196, 1974.
  • [19] E. N. Barron, R. Goebel, and R. R. Jensen, “Best response dynamics for continuous games,” Proceedings of the American Mathematical Society, vol. 138, no. 3, pp. 1069–1083, 2010.
  • [20] S. Ulianova, “Cardiovascular disease dataset: The dataset consists of 70 000 records of patients data, 11 features + target.,” 2019. https://www.kaggle.com/sulianova/cardiovascular-disease-dataset.