Adversarial Attacks and Defences: A Survey

09/28/2018 ∙ Anirban Chakraborty et al. ∙ IIT Kharagpur, Nanyang Technological University, The Ohio State University

Deep learning has emerged as a strong and efficient framework that can be applied to a broad spectrum of complex learning problems which were difficult to solve using traditional machine learning techniques. In the last few years, deep learning has advanced so radically that it can surpass human-level performance on a number of tasks. As a consequence, deep learning is being used extensively in most recent day-to-day applications. However, deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye but can cause the model to produce incorrect outputs. Adversaries with different threat models can leverage these vulnerabilities to compromise a deep learning system in settings where the incentives are high. Hence, it is extremely important to provide robustness to deep learning algorithms against these adversaries. However, there are only a few strong countermeasures which can be used in all types of attack scenarios to design a robust deep learning system. In this paper, we attempt to provide a detailed discussion of different types of adversarial attacks under various threat models, and we also elaborate on the effectiveness of, and challenges posed by, recent countermeasures against them.


1. Introduction

Deep learning is a branch of machine learning that enables computational models composed of multiple processing layers, with a high level of abstraction, to learn from experience and perceive the world in terms of a hierarchy of concepts. It uses the backpropagation algorithm to discover intricate structure in large datasets by computing the representation of the data in each layer from the representation in the previous layer (lecun2015deep). Deep learning has been found to be remarkable in providing solutions to problems that could not be solved using conventional machine learning techniques. With the evolution of deep neural network models and the availability of high-performance hardware to train complex models, deep learning has made remarkable progress in the traditional fields of image classification, speech recognition and language translation, along with more advanced areas such as analysing the potential of drug molecules (ma2015structure), reconstructing brain circuits (helmstaedter2013retina), analysing particle accelerator data (cio2012structure; kaggle2012higgs), and predicting the effects of mutations in DNA (xiong2015gene). Deep learning networks, with their unparalleled accuracy, have brought about a major revolution in AI-based services on the Internet, including cloud-computing-based AI services from commercial players like Google (google_cloud) and Alibaba (alibaba_cloud), and corresponding platform propositions from Intel (intel_cloud) and Nvidia (nvidia_cloud). Extensive use of deep learning based applications can be seen in safety- and security-critical environments such as self-driving cars, malware detection, drones and robotics. With recent advancements in face-recognition systems, ATMs and mobile phones are using biometric authentication as a security feature; Automatic Speech Recognition (ASR) models and Voice Controllable Systems (VCS) have made it possible to realise products like Apple Siri (ios), Amazon Alexa (alexa) and Microsoft Cortana (cortana).

As deep neural networks have found their way from labs to the real world, the security and integrity of these applications pose great concern. Adversaries can craftily manipulate legitimate inputs with perturbations that may be imperceptible to the human eye but can force a trained model to produce incorrect outputs. Szegedy et al. (szegedy2013intriguing) first discovered that well-performing deep neural networks are susceptible to adversarial attacks. Speculative explanations attributed this to the extreme nonlinearity of deep neural networks, combined with insufficient model averaging and insufficient regularization of the purely supervised learning problem. Carlini et al. (carlini2016asr) and Zhang et al. (zhang2017vcs) independently brought forward the vulnerabilities of automatic speech recognition and voice controllable systems. Attacks on autonomous vehicles have been demonstrated by Kurakin et al. (kurakin2016adversarial), where the adversary manipulated traffic signs to confuse the learning model. The paper by Goodfellow et al. (goodfellow2014explaining) provides a detailed analysis, with supporting experiments, of adversarial training of linear models, while Papernot et al. (papernot2016transferability) addressed the generalization of adversarial examples. Abadi et al. (abadi2016deep) introduced the concept of distributed deep learning as a way to protect the privacy of training data. Recently, in 2017, Hitaj et al. (hitaj2017deep) exploited the real-time nature of the learning models to train a Generative Adversarial Network and showed that the privacy of collaborative systems can be jeopardised. Since the findings of Szegedy et al., a lot of attention has been drawn to adversarial learning and the security of deep neural networks, and a number of countermeasures have been proposed in recent years to mitigate the effects of adversarial attacks. Kurakin et al. (kurakin2016adversarial) came up with the idea of adversarial training to protect the learner by augmenting the training set with both original and perturbed data. Hinton et al. (hinton2015distillating) introduced the concept of distillation, which was used by Papernot et al. (papernot2016distillation) to propose a defensive mechanism against adversarial examples. Samangouei et al. (samangouei2018defensegan) proposed using Generative Adversarial Networks as a countermeasure against adversarial perturbations. Although each of these proposed defense mechanisms was found to be effective against particular classes of attacks, none of them can be used as a one-stop solution for all kinds of attacks. Moreover, implementing these defense strategies can degrade the performance and efficiency of the concerned model.

1.1. Motivation and Contribution

The importance of deep learning applications in our daily life is increasing day by day. However, these applications are vulnerable to adversarial attacks. To the best of our knowledge, there have been few exhaustive surveys in the field of adversarial learning covering different types of adversarial attacks and their countermeasures. Akhtar et al. (akhtar2018threat) presented a comprehensive survey on adversarial attacks on deep learning, but in the restricted context of computer vision. There have been a handful of surveys on security evaluation related to particular machine learning applications (barreno2006can; barreno2010security; corona2013adversarial; biggio2014securityevaluation). Kumar et al. (DBLP:journals/corr/KumarM17) provided a comprehensive survey of prior works by categorizing the attacks under four overlapping classes. The primary motivation of this paper is to summarize recent advances in different types of adversarial attacks and their countermeasures by analyzing various threat models and attack scenarios. We follow an approach similar to prior surveys, but without restricting ourselves to specific applications, and in a more elaborate manner with practical examples.

Organization

In this paper, we review recent findings on adversarial attacks and present a detailed understanding of the attack models and methodologies. While our major focus is on attacks and defenses for deep neural networks, we also present attack scenarios on Support Vector Machines (SVMs), keeping in mind their extensive use in real-world applications. In Section 2, we provide a taxonomy of the related terms and keywords and categorize the threat models. This section also explains adversarial capabilities and illustrates potential attack strategies in the training (e.g., poisoning attack) and testing (e.g., evasion attack) phases. We discuss in brief the basic notions of black-box and white-box attacks with relevant applications, and further classify black-box attacks based on how much information about the system is available to the adversary. Section LABEL:sec:exploratory summarizes exploratory attacks that aim to learn the algorithms and models of the machine learning systems under attack. Since the attack strategies in evasion and poisoning attacks often overlap, we have combined the work focusing on both of them in Section LABEL:sec:evasion_poisoning. In Section LABEL:sec:advancements we discuss some of the current defense strategies, and we conclude in Section LABEL:sec:conclusion.

2. Taxonomy of Machine Learning and Adversarial Model

Before discussing the attack models and their countermeasures in detail, in this section we provide a qualitative taxonomy of the different terms and keywords related to adversarial attacks and categorize the threat models.

2.1. Keywords and Definitions

In this section, we summarize the predominantly used approaches for solving machine learning problems, with emphasis on neural networks, and their respective applications.

  • Support Vector Machines:

    Support vector machines (SVMs) are supervised learning models capable of constructing a hyperplane or a set of hyperplanes in a high-dimensional space, which can be used for classification, regression or outlier detection. In other words, an SVM model is a representation of the data as points in space, built with the objective of finding a maximum-margin hyperplane that splits the training examples into classes while maximizing the distance between the classes it separates. A minimal usage sketch of an SVM classifier is given after this list.

  • Neural Networks:

    Artificial neural networks (ANNs), inspired by biological neural networks, are based on a collection of connected units (perceptrons) called neurons. Each neuron maps a set of inputs to an output using an activation function. Learning governs the weights and the activation function so that the network can correctly determine the output. The weights in a multi-layered feed-forward network are updated by the back-propagation algorithm. The neuron model was first introduced by McCulloch and Pitts, followed by Hebb's learning rule, eventually giving rise to the multi-layer feed-forward perceptron and the backpropagation algorithm. ANNs cover both supervised (e.g., CNN, DNN) and unsupervised (e.g., self-organizing maps) network models and their learning rules. The neural network models used ubiquitously are discussed below.

    1. DNN: While a single-layer neural network or perceptron requires feature engineering, a deep neural network (DNN) enables feature learning directly from raw data. Multiple hidden layers and their interconnections extract features from the unprocessed input and thus enhance performance by finding latent structures in unlabeled, unstructured data. A typical DNN architecture, graphically depicted in Figure 1, consists of multiple successive layers (at least two hidden layers) of neurons. Each processing layer can be viewed as learning a different, more abstract representation of the original multidimensional input distribution. As a whole, a DNN can be viewed as a highly complex function that is capable of nonlinearly mapping original high-dimensional data points to a lower-dimensional space.

      Figure 1. Deep Neural Network
    2. CNN:

      A Convolutional Neural Network (CNN) consists of one or more convolutional and sub-sampling layers, followed by one or more fully connected layers; weight sharing in the convolutional layers reduces the number of parameters. The CNN architecture, shown in Figure 2, is designed to take advantage of the 2D structure of the input (e.g., an input image). A convolutional layer creates feature maps; pooling (also called sub-sampling or down-sampling) reduces the dimensionality of each feature map while retaining the most important information, making the model robust to small distortions. For example, to describe a large image, feature values in the original matrix can be aggregated at various locations (e.g., by max-pooling) to form a matrix of lower dimension. The last fully connected layer uses the feature matrix formed by the previous layers to classify the data. Since CNNs are mainly used for feature extraction, they also find application in the data preprocessing commonly used in image recognition tasks. A minimal code sketch of a CNN follows this list.

      Figure 2. Convolutional Neural Network for MNIST digit recognition
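As referenced in the CNN item above, the following is a minimal, illustrative PyTorch sketch of a small CNN for MNIST-scale 28x28 grayscale digits; the layer sizes and hyperparameters are assumptions chosen purely for illustration and are not the exact architecture depicted in Figure 2.

    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        """Minimal CNN for 28x28 grayscale digit images (MNIST-scale input)."""
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolution: builds feature maps
                nn.ReLU(),
                nn.MaxPool2d(2),                              # down-sampling keeps dominant activations
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 7 * 7, num_classes)  # fully connected classification layer

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)          # shared convolutional weights extract features
            x = torch.flatten(x, 1)       # flatten the final feature maps into a vector
            return self.classifier(x)     # class scores (logits) over the 10 digits

    model = SmallCNN()
    logits = model(torch.randn(1, 1, 28, 28))   # one dummy 28x28 image
    print(logits.shape)                         # torch.Size([1, 10])

The convolutional and pooling stages play the feature-extraction and down-sampling roles described above, while the final fully connected layer performs the classification.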
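Similarly, to complement the SVM definition in the first bullet, here is a minimal scikit-learn usage sketch; the toy two-cluster data and the hyperparameters are assumptions made purely for illustration.

    import numpy as np
    from sklearn.svm import SVC

    # Toy two-class data: a maximum-margin hyperplane separates the two clusters.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    clf = SVC(kernel="linear", C=1.0)   # linear kernel: a single separating hyperplane
    clf.fit(X, y)
    print(clf.predict([[3.0, 2.5], [-2.5, -1.0]]))   # expected: [1 0]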

2.2. Adversarial Threat Model

The security of any machine learning model is measured with respect to the adversarial goals and capabilities. In this section, we taxonomize the threat models in machine learning systems, keeping in mind the strength of the adversary. We begin by identifying the threat surface (papernot2016towards) of systems built on machine learning models, to establish where and how an adversary may attempt to subvert the system under attack.

2.2.1. The Attack Surface

A system built on machine learning can be viewed as a generalized data processing pipeline. A primitive sequence of operations of the system at testing time can be viewed as: (a) collection of input data from sensors or data repositories, (b) transfer of the data into the digital domain, (c) processing of the transformed data by the machine learning model to produce an output, and finally (d) action taken based on the output. For illustration, consider the generic pipeline of an automated vehicle system shown in Figure 3.

Figure 3. Generic pipeline of an Automated Vehicle System
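The four stages (a)-(d) of the pipeline in Figure 3, and the points at which a test-time adversary can intervene, can be summarized by a short schematic; the function and object names below are hypothetical placeholders rather than components of any real system.

    def run_pipeline(sensor, preprocess, model, actuator):
        raw = sensor.read()          # (a) collect input data, e.g. a camera frame
        features = preprocess(raw)   # (b) transfer the data into the digital feature domain
        output = model(features)     # (c) the machine learning model produces an output
        actuator.act(output)         # (d) action taken based on the output, e.g. braking
        return output

    # An adversary on this attack surface manipulates either the collection step (a),
    # e.g. the physical scene or the sensor, or the processed data in steps (b)-(c),
    # in order to corrupt the output that drives the final action.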

The system collects sensor inputs (images from a camera) from which model features (tensors of pixel values) are extracted and used within the model. It then interprets the meaning of the output (e.g., the probability of a stop sign) and takes appropriate action (stopping the car). The attack surface, in this case, can be defined with respect to the data processing pipeline. An adversary can attempt to manipulate either the collection or the processing of data to corrupt the target model, thus tampering with the original output. The main attack scenarios identified by the attack surface are sketched below (biggio2014security; biggio2014securityevaluation):

  1. Evasion Attack: This is the most common type of attack in the adversarial setting. The adversary tries to evade the system by adjusting malicious samples during the testing phase. This setting does not assume any influence over the training data. A minimal code sketch of an evasion attack is given after this list.

  2. Poisoning Attack: This type of attack, known as contamination of the training data, takes place during the training time of the machine learning model. An adversary tries to poison the training data by injecting carefully designed samples, eventually compromising the whole learning process.

  3. Exploratory Attack: These attacks do not influence the training dataset. Given black-box access to the model, they try to gain as much knowledge as possible about the learning algorithm of the underlying system and the patterns in the training data.
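As referenced in the evasion-attack item above, the sketch below crafts a test-time adversarial example using the fast gradient sign method of Goodfellow et al. (goodfellow2014explaining); this is only one concrete instance of an evasion strategy, and the model, label and perturbation budget are placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_evasion(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                     epsilon: float = 0.1) -> torch.Tensor:
        """Craft an evasion (test-time) example; the training data is never touched."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)      # loss with respect to the true label
        loss.backward()
        # Take one step that increases the loss, bounded in L-infinity norm by epsilon.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

For example, with the SmallCNN sketch from Section 2.1, fgsm_evasion(model, image, label) yields an input that is visually close to the original image but more likely to be misclassified.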

The definition of a threat model depends on the information the adversary has at their disposal. Next, we discuss in detail the adversarial capabilities for the threat model.

2.2.2. The Adversarial Capabilities

The term adversarial capabilities refers to the amount of information available to an adversary about the system, which also indicates the attack vectors he may use on the threat surface. For illustration, again consider the case of the automated vehicle system shown in Figure 3, with the attack surface being the testing time (i.e., an evasion attack). An internal adversary is one who has access to the model architecture and can use it to distinguish between different images and traffic signs, whereas a weaker adversary is one who has access only to the dump of images fed to the model during testing time. Though both adversaries are working on the same attack surface, the former is assumed to have much more information and is thus strictly “stronger”. We explore the range of adversarial capabilities in machine learning systems as they relate to the testing and training phases.

Training Phase Capabilities

Attacks during training time attempt to influence or corrupt the model directly by altering the dataset used for training. The most straightforward, and arguably the weakest, attack on the training phase is merely accessing part of or the full training data. There are three broad attack strategies for altering the model, based on the adversarial capabilities.

  1. Data Injection: The adversary has no access to the training data or to the learning algorithm, but has the ability to add new data to the training set. He can corrupt the target model by inserting adversarial samples into the training dataset. A toy sketch of such an injection is given after this list.

  2. Data Modification: The adversary has no access to the learning algorithm but has full access to the training data. He poisons the training data directly by modifying the data before it is used to train the target model.

  3. Logic Corruption: The adversary has the ability to meddle with the learning algorithm itself. These attacks are referred to as logic corruption. Naturally, it is very difficult to design counter-strategies against adversaries who can alter the learning logic and thereby control the model itself.
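As referenced in the data-injection item above, the following toy sketch appends slightly perturbed, mislabeled copies of training points to the training set; the poisoning fraction and the label-flipping rule are illustrative assumptions rather than a specific attack from the literature.

    import numpy as np

    def inject_flipped_labels(X_train: np.ndarray, y_train: np.ndarray,
                              fraction: float = 0.05, num_classes: int = 10,
                              seed: int = 0):
        """Append a small set of adversarial samples with corrupted labels to the training set."""
        rng = np.random.default_rng(seed)
        n_poison = int(fraction * len(X_train))
        idx = rng.choice(len(X_train), size=n_poison, replace=False)
        X_poison = X_train[idx] + rng.normal(0.0, 0.01, X_train[idx].shape)  # slightly perturbed copies
        offsets = rng.integers(1, num_classes, n_poison)                     # non-zero shift: guarantees a wrong label
        y_poison = (y_train[idx] + offsets) % num_classes
        return np.concatenate([X_train, X_poison]), np.concatenate([y_train, y_poison])

A model subsequently trained on the returned arrays learns, in part, from the adversary's mislabeled points, which is exactly the contamination described for poisoning attacks above.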

Testing Phase Capabilities

Adversarial attacks at testing time do not tamper with the targeted model but rather force it to produce incorrect outputs. The effectiveness of such attacks is determined mainly by the amount of information available to the adversary about the model. Testing-phase attacks can be broadly classified into white-box and black-box attacks. Before discussing these attacks, we provide a formal definition of a training procedure for a machine learning model.

Let us consider a target machine learning model f that is trained over input pairs (X, y) drawn from a data distribution μ, using a randomized training procedure train with randomness r (e.g., random weight initialization, dropout, etc.). The model parameters θ are learned after the training procedure. More formally, we can write:

θ ← train(f, X, y, r)
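A minimal sketch of such a randomized training procedure, with the randomness r made explicit as a seed, is given below; the helper structure and hyperparameters are assumptions for illustration, not the paper's own notation beyond θ ← train(f, X, y, r).

    import torch
    import torch.nn.functional as F

    def train(make_model, X: torch.Tensor, y: torch.Tensor,
              r: int, epochs: int = 10, lr: float = 1e-3):
        """theta <- train(f, X, y, r): returns the learned parameters theta."""
        torch.manual_seed(r)          # randomness r: weight initialization, dropout, shuffling
        model = make_model()          # the model f is constructed (and initialized) under seed r
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            F.cross_entropy(model(X), y).backward()   # full-batch loss on the training pairs (X, y)
            opt.step()
        theta = {k: v.detach().clone() for k, v in model.state_dict().items()}
        return theta

For instance, reusing the SmallCNN sketch from Section 2.1, theta = train(SmallCNN, images, labels, r=0) returns one realization of the learned parameters; a different seed r generally yields different parameters for the same data.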

Now, let us understand the capabilities of white-box and black-box adversaries with respect to this definition. An overview of the different threat models is shown in Table LABEL:tab:table2