Spectrum Data Poisoning with Adversarial Deep Learning

01/26/2019 · by Yi Shi, et al. · Intelligent Automation, Inc.

Machine learning has been widely applied in wireless communications. However, the security aspects of machine learning in wireless applications have not been well understood yet. We consider the case that a cognitive transmitter senses the spectrum and transmits on idle channels determined by a machine learning algorithm. We present an adversarial machine learning approach to launch a spectrum data poisoning attack by inferring the transmitter's behavior and attempting to falsify the spectrum sensing data over the air. For that purpose, the adversary transmits for a short period of time when the channel is idle to manipulate the input for the decision mechanism of the transmitter. The cognitive engine at the transmitter is a deep neural network model that predicts idle channels with minimum sensing error for data transmissions. The transmitter collects spectrum sensing data and uses it as the input to its machine learning algorithm. In the meantime, the adversary builds a cognitive engine using another deep neural network model to predict when the transmitter will have a successful transmission based on its spectrum sensing data. The adversary then performs the over-the-air spectrum data poisoning attack, which aims to change the channel occupancy status from idle to busy when the transmitter is sensing, so that the transmitter is fooled into making incorrect transmit decisions. This attack is more energy efficient and harder to detect compared to jamming of data transmissions. We show that this attack is very effective and reduces the throughput of the transmitter substantially.


I Introduction

Cognitive radios support the efficient discovery and use of the spectrum [1]. Spectrum sensing is one of the key tasks of cognitive radios to achieve situational awareness. Cognitive radios analyze the signals received through spectrum sensing and determine their channel access strategy accordingly. Because of the open and broadcast nature of wireless communications, the decisions built upon spectrum sensing results are susceptible to different types of attacks that aim to force the cognitive radios to make incorrect decisions.

Different attacks on the spectrum sensing decisions in a cognitive network have been studied in the literature and various defense techniques have been developed [2].

  • Primary user emulation (PUE) attacks decrease the spectrum access opportunities of cognitive radios. A defense mechanism with belief propagation was studied in [3].

  • In a collaborative sensing environment, some cognitive users may send falsified reports to a decision center. This corresponds to a spectrum sensing data falsification (SSDF) attack that aims to degrade the performance of spectrum sensing. A trust-based defense strategy to mitigate this attack was studied in [4] by using a common receiver for data fusion. The SSDF attack also applies to mobile ad hoc networks (MANETs), where there is no centralized data fusion center. A consensus-based cooperative spectrum sensing scheme to counter SSDF attacks in cognitive radio MANETs was studied in [5].

  • Cognitive radio networks are also susceptible to conventional security threats such as jamming [6], cognitive interference [7], eavesdropping [8] and noncooperation [9]. These threats extend from physical layer to higher layers (e.g., routing in the network layer [10, 11]), and exploit different levels of uncertainty regarding channel, traffic, and adversary types [12, 13, 14, 15].

In this paper, we introduce a new type of attack motivated by adversarial machine learning, namely the over-the-air spectrum data poisoning attack. While the adversary jams the channel under this attack, its purpose is not to degrade the received data transmission (as typically assumed in denial-of-service attacks [16]); instead, it aims to manipulate the collected spectrum sensing data so that wrong transmit decisions are made based on the unreliable spectrum sensing results. Also, this attack differs from the SSDF attack, since the adversary does not participate in cooperative spectrum sensing and does not try to change channel labels directly as in the SSDF attack. Instead, the adversary injects an adversarial perturbation into the channel in order to fool the transmitter into making wrong transmit decisions.

Recently, machine learning has been applied for several cognitive radio tasks such as spectrum sensing [17], spectrum access [18, 19] and modulation recognition [20]. However, there are various security concerns regarding the safe use of machine learning algorithms. For example, if the input data to a machine learning algorithm is manipulated during the training or operation (test) time, the output will be very different compared to the expected results. Adversarial machine learning studies learning in the presence of adversaries and aims to enable safe adoption of machine learning to the emerging applications.

Three broad categories of attacks under adversarial machine learning are exploratory (or inference) attacks, evasion attacks and causative (or poisoning) attacks.

  • In exploratory (or inference) attacks [21, 22, 23, 24], the adversary aims to understand how the underlying machine learning works for an application (e.g., inferring sensitive and/or proprietary information).

  • In evasion attacks [25, 26], the adversary attempts to fool the machine learning algorithm into making a wrong decision (e.g., fooling a security algorithm into accepting an adversary as legitimate).

  • In poisoning (or causative) attacks [27, 28], the adversary provides incorrect information such as training data to machine learning.

These attacks can be launched separately or combined, i.e., causative and evasion attacks can be launched by making use of the inference results from an exploratory attack [29, 30].

In this paper, we apply adversarial deep learning to launch an exploratory attack on a cognitive radio as a preliminary step, and then intentionally change the transmitter’s sensing results by transmitting briefly in time slots when the transmitter would otherwise have a successful transmission. For that purpose, the adversary trains a deep neural network. We consider a canonical wireless communication scenario with a cognitive transmitter, the corresponding receiver, an adversary, and some other background traffic. The cognitive transmitter builds a machine learning model (based on a deep neural network) to predict the busy and idle states of the channel. The training data includes

  • time-series of spectrum sensing results as features, and

  • channel idle/busy status based on the ground truth (the background transmitter’s on/off state) as labels.

Then this machine learning model is used by the cognitive transmitter to make transmit decisions. If a transmission is successful (i.e., the signal-to-interference-plus-noise ratio (SINR) exceeds a threshold), the receiver sends an acknowledgement (ACK) to the transmitter, which can also be overheard by an adversary. The adversary performs an exploratory attack to build a classifier that can predict the outcome of transmissions, i.e., whether there will be an ACK or not if there is no attack. Note that this is not a standard exploratory attack, and the classifier built by the adversary will not be the same as (or similar to) the classifier used by the transmitter, due to the following two differences.

  • The transmitter and the adversary are in different locations and thus their sensing results will vary based on the channel environment and differ from each other. As a result, the input data to their classifiers will differ.

  • The adversary predicts the outcome of transmissions (‘ACK’ or ‘no ACK’) while the transmitter predicts channel status (idle or busy). As a result, the output data of their classifiers will differ.

After building its classifier, the adversary predicts when the transmitter will have a successful transmission (if no attack) and performs the poisoning attack, i.e., the adversary transmits to change the channel status in order to poison (i.e., falsify) the transmitter’s input (spectrum sensing data) to the machine learning algorithm. The attack considered in this paper is similar to that in [18], where the adversary also first learns the transmitter’s behavior (ACK or not) by an exploratory attack and then performs subsequent attacks. The difference is that in [18], the adversary performs a jamming attack during the data transmission phase to make a transmission fail while in this paper the adversary performs a spectrum data poisoning attack in the sensing phase such that the transmitter has incorrect input data to its classifier and makes the wrong decision of not transmitting. The attack considered in this paper is hard to detect since it does not directly jam the transmitter’s signal but it changes the input data to the decision mechanism so that the transmitter chooses not to transmit when the channel is indeed idle. Moreover, this attack is energy efficient since the adversary makes a very short transmission in the sensing period.

We show that this adversarial deep learning approach results in an effective attack. In particular, for the scenario studied in the numerical results, only a few transmission attempts are made and the achieved throughput (normalized by the optimistic throughput of an ideal algorithm that detects every idle channel) drops from 98.96% to 3.13% when the spectrum data poisoning attack is launched.

The rest of the paper is organized as follows. Section II describes the system model. Section III describes the transmitter’s algorithm and shows the performance without an attack. Section IV describes the adversary’s algorithm and shows the performance under the attack. Section V concludes the paper.

II System Model

We consider a cognitive network that includes a transmitter, a receiver, an adversary, and a background traffic source that may transmit its own data. The network topology used to generate numerical results is shown in Figure 1. Note that the designed attack schemes can be directly applied to other network topologies.

Fig. 1: The network topology.

The activities of the background traffic source are not known a priori and can be detected via spectrum sensing. Time is divided into slots. Within each slot, the initial short period of time is allocated by the transmitter for spectrum sensing and the ending short period of time is allocated for feedback (ACK). The rest of the slot is used for data transmission if no other transmission is detected. The transmitter’s decision is based on a deep neural network (trained by deep learning) that analyzes sensing results and then determines the channel status: the channel is busy if background traffic is detected and idle otherwise. Each sensing result is either

  • noise with normalized power (when channel is idle) or

  • noise plus the received power from background traffic (when channel is busy).

We assume that the noise and the received power are random variables with Gaussian distributions.
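Under these assumptions, one spectrum sensing measurement can be sketched as follows (a minimal simulation, not from the paper; the noise and received-power parameters are illustrative values):

```python
import random

def sensing_result(channel_busy, noise_mean=1.0, noise_std=0.1,
                   rx_power_mean=2.0, rx_power_std=0.2):
    """Return one spectrum sensing measurement.

    Idle channel: Gaussian noise with normalized power.
    Busy channel: noise plus Gaussian received power from background traffic.
    """
    power = random.gauss(noise_mean, noise_std)
    if channel_busy:
        power += random.gauss(rx_power_mean, rx_power_std)
    return power

random.seed(0)
idle = [sensing_result(False) for _ in range(1000)]
busy = [sensing_result(True) for _ in range(1000)]
# Busy-channel measurements are larger on average, which is what
# the transmitter's classifier learns to exploit.
assert sum(busy) / len(busy) > sum(idle) / len(idle)
```

The separation between the two power distributions determines how well any classifier can distinguish busy from idle slots.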

Data transmission is successful if the SINR at the receiver is not less than some threshold, where the interference comes from the transmissions of the background traffic source. We assume Gaussian noise at the receiver and a Gaussian channel gain from the transmitter to the receiver. The mean value of the channel gain is calculated based on the free-space propagation loss model. The receiver sends an ACK for each successful transmission. The adversary also senses the channel and aims to predict whether there will be a successful transmission (ACK) if there is no attack. Note that the adversary only detects the presence of the ACK message and does not need to decode it. The adversary's prediction is based on another deep neural network that is trained using deep learning. If the adversary predicts that there will be a successful transmission, it performs an attack to reduce the throughput of the transmitter.

In this paper, we consider the attack of transmitting in the initial short sensing period to change the transmitter's sensing result for the current time slot. Since this sensing result is an input to the transmitter's classifier on channel status, the transmitter may make a wrong decision, even if its classifier was built well.

The advantage of this attack, compared with a continuous jamming attack, is that the initial sensing period is much shorter than the data transmission period. As a result, the power consumption of this attack is much less than that of continuous jamming. In addition, this attack is harder to detect than continuous jamming.

We can design a defense mechanism against such attacks by noting that the first step of the attack is an exploratory attack to understand the transmitter's classifier. Thus, the transmitter can take wrong actions in a controlled manner such that the 'ACK' or 'no ACK' results are changed. As a consequence, the classifier built by an adversary may not work well, and thus the attack may become less effective.

III Transmitter’s Algorithm

The transmitter needs to sense the spectrum, identify an idle channel (i.e., the background transmitter is not transmitting), and then decide when to transmit. The transmitter applies deep learning to train a deep neural network classifier that identifies idle channels. This classifier is pre-trained by using the most recent sensing results as features and the current channel busy/idle status as a label to build one training sample. The number of sensing results used for one sample is a design parameter that the transmitter can tune to optimize its performance. Each sensing result is either Gaussian noise with normalized power (when the channel is idle) or noise plus the received power from the background transmitter (when the channel is busy); the noise and the channel gain from the background transmitter are random variables with Gaussian distributions. After observing a certain period of time, the transmitter collects a number of samples to be used as training data to build a deep learning classifier.

Fig. 2: The input and output (label) data while training the transmitter’s classifier.

Figure 2 shows the input data and the labels while building the transmitter’s classifier. The transmitter’s training algorithm is summarized as follows.

  • The transmitter collects spectrum sensing data over a time period and records the sensed power at each time slot.

  • The transmitter builds a training sample for each time slot by pairing the most recent sensing results with the channel busy/idle status at that time.

  • The transmitter divides the samples equally into a training set and a test set, and uses the training set to train a classifier with deep learning.
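The sample construction and split in the steps above can be sketched as follows (the window length K and the data values are illustrative; the paper treats the number of sensing results per sample as a tunable design parameter):

```python
def build_samples(powers, labels, K):
    """Pair each time t >= K-1 with its most recent K sensing results.

    powers[t] is the sensed power at time t; labels[t] is the channel
    status at time t (1 = busy, 0 = idle).
    """
    samples = []
    for t in range(K - 1, len(powers)):
        features = powers[t - K + 1 : t + 1]  # most recent K sensing results
        samples.append((features, labels[t]))
    return samples

powers = [1.0, 1.1, 3.2, 3.0, 0.9, 1.0]
labels = [0, 0, 1, 1, 0, 0]
samples = build_samples(powers, labels, K=3)
# Split equally into a training set and a test set.
train, test = samples[: len(samples) // 2], samples[len(samples) // 2 :]
assert samples[0] == ([1.0, 1.1, 3.2], 1)
```

The adversary builds its training samples the same way, except that the label is the observed feedback ('ACK' vs. 'no ACK') rather than the channel status.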

Once the classifier is built, the transmitter uses it to predict the channel status and transmits if the channel is predicted to be idle. This prediction algorithm is summarized as follows.

  • At each time slot, the transmitter senses the channel, obtains the received power, and builds a test sample from the most recent sensing results.

  • The transmitter uses its classifier to decide on the label (busy or idle) and then transmits if the channel is predicted to be idle.

For this algorithm, there may be two types of errors:

  • Misdetection. The channel is busy but it is detected as idle.

  • False alarm. The channel is idle but it is detected as busy.

The transmitter aims to minimize the error probabilities e_MD and e_FA of misdetection and false alarm, respectively. These error probabilities are calculated as e_MD = n_MD / n_B and e_FA = n_FA / n_I, where n_MD is the number of misdetections, n_B is the number of times that the channel is busy, n_FA is the number of false alarms, and n_I is the number of times that the channel is idle.
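The two error probabilities follow directly from the counts defined above; a minimal computation (the counts below are illustrative, not the paper's simulation values):

```python
def error_probabilities(n_md, n_busy, n_fa, n_idle):
    """Misdetection and false alarm probabilities.

    n_md: misdetections (busy slots detected as idle), n_busy: busy slots,
    n_fa: false alarms (idle slots detected as busy), n_idle: idle slots.
    """
    e_md = n_md / n_busy
    e_fa = n_fa / n_idle
    return e_md, e_fa

# Illustrative counts.
e_md, e_fa = error_probabilities(n_md=2, n_busy=100, n_fa=3, n_idle=400)
assert e_md == 0.02
```

The adversary evaluates its own classifier with the same two quantities, with 'successful transmission' playing the role of 'busy'.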

We use TensorFlow to build a deep learning classifier for the transmitter. In particular, we use the following deep neural network:

  • A feedforward neural network is trained by backpropagation, using cross-entropy as the loss function. The structure of a feedforward neural network is shown in Figure 3.

  • Number of hidden layers is 3.

  • Number of neurons per hidden layer is 100.

  • Rectified linear unit (ReLU) is used as activation function at hidden layers.

  • Softmax is used as the activation function at output layer.

  • Batch size is 100.

  • Number of training steps is 1000.
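The hyperparameters listed above correspond to a standard feedforward network. A sketch of the model using the Keras API (not the authors' code; the window length K = 10 and the optimizer choice are assumptions, since the paper leaves the number of sensing results per sample as a design parameter and does not name the optimizer):

```python
import tensorflow as tf

K = 10  # number of recent sensing results per sample (assumed value)

# Feedforward network: 3 hidden layers of 100 ReLU neurons each,
# softmax output over the two labels, cross-entropy loss.
model = tf.keras.Sequential(
    [tf.keras.Input(shape=(K,))]
    + [tf.keras.layers.Dense(100, activation="relu") for _ in range(3)]
    + [tf.keras.layers.Dense(2, activation="softmax")]
)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Training would use a batch size of 100 for 1000 steps, e.g.:
# model.fit(x_train, y_train, batch_size=100, epochs=...)
```

The adversary reuses this same structure but fits its own weights and biases on its own training data, as described in Section IV.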

Fig. 3: The structure of a feedforward neural network.

Note that the transmitter can further optimize the hyperparameters (e.g., number of layers and number of neurons per layer) of its deep neural network. The block diagram in Figure 4 shows the transmitter’s operation during run-time. Note that there is an optional defense block, which was discussed in Section II.

Background traffic arrives at the background transmitter at a fixed packet rate per time slot. When the background transmitter has queued data packets, it may decide to transmit at a fixed packet rate per time slot, and once it starts transmitting, it continues until its queue is empty. The channel gain is a random variable with a Gaussian distribution whose expected value follows the free-space propagation loss model as a function of the distance between the background transmitter and the cognitive transmitter. In the simulation setting, the locations of the two transmitters are fixed and the background transmitter's power is normalized with respect to the unit noise power.

The transmitter collects a number of samples (each with the most recent spectrum sensing results and a label, ‘idle’ or ‘busy’) and uses half of them as training data and the other half as test data. In the test data, only a small fraction of the busy channel instances are identified as idle and only a small fraction of the idle instances are identified as busy. Both error probabilities are therefore small, showing that the transmitter can reliably predict the channel status.

Fig. 4: The transmitter’s classifier during run-time.

The transmitter sends data in idle channels detected by its deep learning classifier. If the SNR (or SINR) at the receiver is no less than some threshold, the receiver confirms a successful transmission by sending an ACK to the transmitter. Note that we again assume Gaussian noise at the receiver and a Gaussian channel gain from the transmitter (or the background transmitter) to the receiver.

In the simulation, the SINR threshold, the receiver's location, and the transmit power (normalized with respect to the unit noise power) are fixed. The transmitter applies its deep learning classifier over a number of time slots and makes transmission decisions. A small number of busy channel instances are misclassified as idle, and the transmissions in those slots fail, while almost all idle channel instances are correctly identified as idle, and the transmissions in those slots are successful.

To measure the throughput performance of the transmitter’s algorithm, we compare it with an ideal algorithm that correctly detects all idle channels. The achieved normalized throughput is defined as the ratio of the number of successful transmissions to the number of slots with idle channels. Without an attack, the normalized throughput is measured as 98.96%.

We also evaluate the success ratio, defined as the ratio of the number of successful transmissions to the number of all transmissions. Without an attack, the success ratio is measured as 96.94%.

We can see that, due to the small errors in detecting busy/idle channels, the transmitter’s algorithm achieves high values of normalized throughput and success ratio. Finally, we evaluate the all transmission ratio, defined as the ratio of the number of all transmissions to the number of all slots. Without an attack, the all transmission ratio is measured as 19.60%.
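The three metrics above can be computed directly from the per-slot counts; a minimal sketch (the counts below are illustrative, not the paper's simulation values):

```python
def metrics(n_success, n_transmissions, n_idle_slots, n_slots):
    """Normalized throughput, success ratio, and all transmission ratio."""
    # Throughput normalized by an ideal detector of every idle channel.
    throughput = n_success / n_idle_slots
    success_ratio = n_success / n_transmissions
    all_tx_ratio = n_transmissions / n_slots
    return throughput, success_ratio, all_tx_ratio

# Illustrative counts.
t, s, a = metrics(n_success=95, n_transmissions=98,
                  n_idle_slots=100, n_slots=500)
assert round(t, 2) == 0.95
```

The same three ratios are reported again in Section IV under the attack, which is what Table I compares.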

IV Adversary’s Algorithm

There is an adversary that aims to reduce the transmitter’s performance. As the first step, it launches an exploratory attack to infer the transmitter’s classifier. In particular, the adversary senses the spectrum, predicts whether there will be a successful transmission (if there is no attack), and attacks if a successful transmission is predicted. There are four cases:

  1. the channel is idle and the transmitter is transmitting,

  2. the channel is busy and the transmitter is not transmitting,

  3. the channel is idle and the transmitter is not transmitting, or

  4. the channel is busy and the transmitter is transmitting.

Ideally, the last two cases should be rare. We assume that the adversary can hear the ACKs for the transmitter’s transmissions (but does not decode them). The adversary uses the most recent sensing results as the features and the current feedback (‘ACK’ vs. ‘no ACK’) as the label to build one training sample. The number of sensing results used for one sample is a design parameter that the adversary can tune to optimize the impact of its attack.

After observing a certain period of time, the adversary collects a number of samples as training data to build a deep learning classifier that outputs one of two labels, ‘successful transmission’ or ‘failed transmission’. Figure 5 shows the input data and the labels while building the adversary’s classifier. The adversary uses the same deep neural network structure as the transmitter, although it determines its own weights and biases by using its own training data. The adversary can further optimize the hyperparameters of its deep neural network.

Fig. 5: The input and output (label) data while training the adversary’s classifier.

The adversary’s training algorithm is summarized as follows.

  • The adversary collects data over a time period and records the sensed power and the feedback label (ACK or not) at each time slot.

  • The adversary builds a training sample for each time slot by pairing the most recent sensing results with the feedback label at that time.

  • The adversary divides the samples equally into a training set and a test set, and uses the training set to train a classifier using deep learning.

The process of building such a classifier can be regarded as an exploratory attack, since the adversary aims to infer the operation of the transmitter. The classifier built by the adversary is similar to the transmitter’s classifier. However, there are the following two differences between the two classifiers.

  • Due to the different locations of the transmitter and the adversary, and the random channels, their sensing results differ. Thus, the features for the same sample are different at the transmitter and the adversary.

  • The labels (classes) are different, i.e., ‘busy’ or ‘idle’ in the transmitter’s classifier and ‘ACK’ or ‘no ACK’ in the adversary’s classifier.

Once the classifier is built, the adversary uses it to predict whether there will be a successful transmission (if there is no attack). This prediction algorithm is given as follows.

  • At each time slot, the adversary senses the channel, obtains the received power, and builds a test sample from the most recent sensing results.

  • The adversary uses its classifier to decide on a label (ACK or not) and then attacks if an ACK is predicted.

For this algorithm, there may be two types of errors:

  • Misdetection. There will be a successful transmission but the adversary’s classifier predicts that there will not be one.

  • False alarm. There will not be a successful transmission but the adversary’s classifier predicts that there will be one.

The adversary aims to minimize its probabilities of misdetection and false alarm. We use TensorFlow to build a deep learning classifier for the adversary. The deep neural network structure is the same as the one used by the transmitter (described in Section III). As the features and labels are different for the transmitter and the adversary, they individually train their deep neural networks (i.e., determine their own weights and biases).

Fig. 6: Using the adversary’s classifier for attacks.

Figure 6 illustrates the adversary’s operation during run-time. In the simulation, the adversary’s location is fixed. The adversary collects a number of labeled samples and uses half of them as training data and the other half as test data. In the test data, only a small fraction of the successful transmissions are predicted as failed, and only a small fraction of the failed transmissions are predicted as successful. Both error probabilities are small, showing that the adversary can reliably predict the transmitter’s successful transmissions.

With the output of its classifier, the adversary attacks by transmitting in the initial short sensing period to change the transmitter’s sensing result for the current time slot. This sensing result is one feature of the transmitter’s classifier, and thus the transmitter may make a wrong decision, even if its classifier was built well. Compared with a continuous jamming attack, this attack targets the initial sensing period, which is much shorter than the data transmission period. Hence, the power consumption of this attack is much less than that of continuous jamming. In the simulation, the adversary’s transmit power is set to a fixed value (normalized with respect to the unit noise power).
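The adversary’s run-time operation can be sketched as a per-slot decision (hypothetical helper names: `predict_ack` stands in for the adversary’s trained classifier and `transmit_short_burst` for its brief transmission during the sensing period):

```python
def attack_step(recent_sensing, predict_ack, transmit_short_burst):
    """One time slot of the spectrum data poisoning attack.

    If the classifier predicts a successful transmission (an ACK) in this
    slot, the adversary transmits briefly during the sensing period so the
    transmitter senses the idle channel as busy and decides not to transmit.
    """
    if predict_ack(recent_sensing):
        transmit_short_burst()
        return True  # attacked this slot
    return False

# Toy stand-in classifier: predict an ACK when the average sensed
# power is low (channel likely idle, so the transmitter would succeed).
predict_ack = lambda window: sum(window) / len(window) < 1.5
attacked = []
attack_step([1.0, 1.1, 0.9], predict_ack, lambda: attacked.append(1))
attack_step([3.0, 3.2, 2.9], predict_ack, lambda: attacked.append(1))
assert attacked == [1]
```

Because the burst is confined to the short sensing period and is only sent in slots where an ACK is predicted, the adversary's energy cost stays far below that of continuous jamming.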

For the classifier built in Section III and the time slots considered for transmissions under the attack, a small number of busy channel instances are identified as idle (and the transmissions in those slots fail), while only a few idle channel instances are identified as idle (and the transmissions in those slots are all successful). The achieved normalized throughput drops to 3.13% and the overall success ratio is measured as 75.00%, while only very few transmission attempts are made, such that the all transmission ratio is measured as 0.80%. As a result, the adversary significantly reduces the throughput of the transmitter from 98.96% to 3.13% and reduces the success ratio from 96.94% to 75.00%. Table I summarizes the performance of the transmitter with and without the attack, and demonstrates the success of this attack.

              Normalized    Success    All transmission
              throughput    ratio      ratio
no attack     98.96%        96.94%     19.60%
with attack   3.13%         75.00%     0.80%
TABLE I: Results with and without attack.

V Conclusion

We applied adversarial machine learning (based on deep neural networks) to design an over-the-air spectrum sensing data poisoning attack that manipulates the input data of the transmitter during run-time and fools it into making wrong transmit decisions. The proposed attack is preferable for an adversary with a power constraint, since the adversary only needs to transmit for a short period of time to manipulate the transmit decisions of the transmitter. Moreover, such an attack is not easy to detect due to the short transmit time. We showed that this attack substantially decreases the throughput of the transmitter while forcing it to make few transmission attempts. The results show the effectiveness of the proposed spectrum poisoning attack, which raises the need for new defense mechanisms to protect wireless communications against intelligent attacks based on adversarial machine learning.

References

  • [1] J. Mitola, Cognitive radio: An integrated agent architecture for software defined radio, Doctor of Technology, Royal Inst. Technol. (KTH), Stockholm, Sweden, 2000.
  • [2] T. C. Clancy and N. Goergen, “Security in cognitive radio networks: Threats and mitigation,” IEEE Conference on Cognitive Radio Oriented Wireless Networks and Communications (CrownCom), Singapore, May 15–17, 2008.
  • [3] Z. Yuan, D. Niyato, H. Li, J. B. Song, and Z. Han, “Defeating primary user emulation attacks using belief propagation in cognitive radio networks,” IEEE Journal on Selected Areas in Communications, vol. 30, no. 10, pp. 1850–1860, Nov. 2012.
  • [4] Y. E. Sagduyu, “Securing cognitive radio networks with dynamic trust against spectrum sensing data falsification,” IEEE Military Communications Conference (MILCOM), Baltimore, MD, Oct. 6–8, 2014.
  • [5] F. R. Yut, H. Tang, M. Huang, Z. Lit, and P. C. Mason, “Defense against spectrum sensing data falsification attacks in mobile ad hoc networks with cognitive radios,” IEEE Military Communications Conference (MILCOM), Boston, MA, Oct. 18–21, 2009.
  • [6] Y. E. Sagduyu, R. Berry, and A. Ephremides, “Jamming games in wireless networks with incomplete information,” IEEE Communications Magazine, vol. 49, no. 8, pp. 112–118, Aug. 2011.
  • [7] Y. E. Sagduyu, Y. Shi, A. B. MacKenzie, and T. Hou, “Regret minimization for primary/secondary access to satellite resources with cognitive interference,” IEEE Transactions on Wireless Communications, vol. 17, no. 5, pp. 3512–3523, May 2018.
  • [8] Y. Zou, J. Zhu, L. Yang, Y.-C. Liang, and Y.-D. Yao, “Securing physical-layer communications for cognitive radio networks,” IEEE Communications Magazine, vol. 53, no. 9, pp. 48–54, Sep. 2015.
  • [9] Y. E. Sagduyu, R. Berry, and A. Ephremides, “MAC games for distributed wireless network security with incomplete information of selfish and malicious user types,” IEEE International Conference on Game Theory for Networks (GameNets), Istanbul, Turkey, May 13–15, 2009.
  • [10] Z. Lu, Y. E. Sagduyu, and J. Li, “Securing the backpressure algorithm for wireless networks,” IEEE Transactions on Mobile Computing, vol. 16, no. 4, pp. 1136–1148, Apr. 2017.
  • [11] Z. Lu, Y. E. Sagduyu, and J. Li, “Queuing the trust: Secure backpressure algorithm against insider threats in wireless networks”, IEEE Conference on Computer Communications (INFOCOM), Hong Kong, Apr. 26–May 1, 2015.
  • [12] Y. E. Sagduyu, R. Berry and A. Ephremides, “Jamming games for power controlled medium access with dynamic traffic,” IEEE International Symposium on Information Theory (ISIT), Austin, TX, June 12–18, 2010.
  • [13] Y. E. Sagduyu, R. Berry and A. Ephremides, “Wireless jamming attacks under dynamic traffic uncertainty,” IEEE International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WIOPT), Avignon, France, May 31–June 4, 2010.
  • [14] Y. E. Sagduyu and A. Ephremides, “SINR-Based MAC games for selfish and malicious Users,” Information Theory and Applications (ITA) Workshop, San Diego, CA, Jan. 29–Feb. 2, 2007.
  • [15] S. Wang, Y. E. Sagduyu, J. Zhang, and J. H. Li, “Traffic shaping impact of network coding on spectrum predictability and jamming attacks,” IEEE Military Communications Conference (MILCOM), Baltimore, MD, Nov. 7–11, 2011.
  • [16] Y. E. Sagduyu and A. Ephremides, “A game-theoretic analysis of denial of service attacks in wireless random access,” Journal of Wireless Networks, vol. 15, no. 5, pp. 651–666, July 2009.
  • [17] K. Davaslioglu and Y. E. Sagduyu, “Generative adversarial learning for spectrum sensing,” IEEE International Conference on Communications (ICC), Kansas City, MO, May 20–24, 2018.
  • [18] Y. Shi, Y. E Sagduyu, T. Erpek, K. Davaslioglu, Z. Lu, and J. Li, “Adversarial deep learning for cognitive radio security: Jamming attack and defense strategies,” IEEE International Conference on Communications (ICC) Workshop on Promises and Challenges of Machine Learning in Communication Networks, Kansas City, MO, May 24, 2018.
  • [19] T. Erpek, Y. E. Sagduyu, and Y. Shi, “Deep learning for launching and mitigating wireless jamming attacks,” arXiv preprint arXiv:1807.02567, 2018.
  • [20] T. O’Shea, J. Corgan, and C. Clancy, “Convolutional radio modulation recognition networks,” International Conference on Engineering Applications of Neural Networks, Aberdeen, United Kingdom, Sept. 2–5, 2016.
  • [21] G. Ateniese, L. Mancini, A. Spognardi, A. Villani, D. Vitali, and G. Felici, “Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers,” International Journal of Security and Networks, vol. 10, issue 3, pp. 137–150, Sept. 2015.
  • [22] F. Tramer, F. Zhang, A. Juels, M. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction APIs,” USENIX Security, Austin, TX, Aug. 10–12, 2016.
  • [23] M. Fredrikson, S. Jha, and T. Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures,” ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, Oct. 12–16, 2015.
  • [24] Y. Shi, Y. E. Sagduyu, and A. Grushin, “How to steal a machine learning classifier with deep learning,” IEEE Symposium on Technologies for Homeland Security, Waltham, MA, April 25–26, 2017.
  • [25] B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Srndic, P. Laskov, G. Giacinto, and F. Roli, “Evasion attacks against machine learning at test time,” European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Prague, Czech Republic, Sept. 23–27, 2013.
  • [26] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” arXiv preprint arXiv:1607.02533, 2016.
  • [27] B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” International Conference on Machine Learning (ICML), Edinburgh, Scotland, June 26–July 1, 2012.
  • [28] L. Pi, Z. Lu, Y. Sagduyu, and S. Chen, “Defending active learning against adversarial inputs in automated document classification,” IEEE Global Conference on Signal and Information Processing (GlobalSIP), Washington, D.C., Dec. 7–9, 2016.
  • [29] Y. Shi and Y. E Sagduyu, “Evasion and causative attacks with adversarial deep learning,” IEEE Military Communications Conference, Baltimore, MD, Oct. 23–25, 2017.
  • [30] Y. Shi, Y. E. Sagduyu, K. Davaslioglu, and R. Levy, “Vulnerability detection and analysis in adversarial deep learning,” in Guide to Vulnerability Analysis for Computer Networks and Systems: An Artificial Intelligence Approach, S. Parkinson, A. Crampton, R. Hill, Eds. Springer, 2018, pp. 235–258.