Practical Black-Box Attacks against Machine Learning

02/08/2016
by Nicolas Papernot, et al.

Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists of training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
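The attack loop is compact enough to sketch. The following is a minimal, hypothetical NumPy illustration, not the authors' code: `query_oracle` stands in for the remote label-only API, the substitute is a softmax regression rather than a DNN, and `jacobian_augment` and `fgsm` are toy versions of the paper's Jacobian-based dataset augmentation and the fast gradient sign method. All parameter values (seed size, rounds, step sizes) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 20, 3                        # input dimension, number of classes

# --- black-box oracle: the attacker sees labels only, never these weights ---
_W_SECRET = rng.standard_normal((D, K))

def query_oracle(x):
    """Stand-in for the remote API: returns one label per input, nothing else."""
    return np.argmax(x @ _W_SECRET, axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_substitute(x, y, epochs=200, lr=0.5):
    """Fit a softmax-regression substitute on oracle-labeled data."""
    w = np.zeros((x.shape[1], K))
    onehot = np.eye(K)[y]
    for _ in range(epochs):
        grad = x.T @ (softmax(x @ w) - onehot) / len(x)
        w -= lr * grad
    return w

def jacobian_augment(x, y, w, lam=0.1):
    """Toy Jacobian-based augmentation: step each point in the direction that
    most changes the substitute's score for its current label."""
    direction = np.sign(w[:, y].T)       # d(logit_y)/dx = w[:, y] for this linear model
    return np.concatenate([x, x + lam * direction])

def fgsm(x, y, w, eps=0.3):
    """Craft adversarial examples on the substitute with the fast gradient sign
    method; these are then transferred, unchanged, to the oracle."""
    onehot = np.eye(K)[y]
    grad_x = (softmax(x @ w) - onehot) @ w.T   # cross-entropy gradient w.r.t. x
    return x + eps * np.sign(grad_x)

# --- substitute training loop ---
x_sub = rng.standard_normal((50, D))     # small synthetic seed set, no real data
for _ in range(4):                       # a few augmentation rounds
    y_sub = query_oracle(x_sub)          # the adversary's only capability
    w_sub = train_substitute(x_sub, y_sub)
    x_sub = jacobian_augment(x_sub, y_sub, w_sub)

# craft on the substitute, measure transfer to the oracle
x_test = rng.standard_normal((200, D))
y_test = query_oracle(x_test)
x_adv = fgsm(x_test, y_test, w_sub)
transfer = np.mean(query_oracle(x_adv) != y_test)
print(f"oracle misclassification rate on transferred examples: {transfer:.1%}")
```

The design point to notice is that the oracle is only ever asked for labels; the gradients used for both augmentation and crafting come from the substitute, and the crafted examples fool the target because the substitute approximates its decision boundaries.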


