Practical No-box Adversarial Attacks against DNNs

12/04/2020
by Qizhang Li et al.

The study of adversarial vulnerabilities of deep neural networks (DNNs) has progressed rapidly. Existing attacks require either internal access (to the architecture, parameters, or training set of the victim model) or external access (the ability to query the model). However, both kinds of access may be infeasible or expensive in many scenarios. We investigate no-box adversarial examples, where the attacker can access neither the model's information nor its training set, and cannot query the model. Instead, the attacker can only gather a small number of examples from the same problem domain as the victim model. Such a restrictive threat model greatly expands the applicability of adversarial attacks. We propose three mechanisms for training substitute models on a very small dataset (on the order of tens of examples) and find that prototypical reconstruction is the most effective. Our experiments show that adversarial examples crafted on prototypical auto-encoding models transfer well to a variety of image classification and face verification models. On a commercial celebrity recognition system hosted by clarifai.com, our approach reduces the average prediction accuracy of the system to only 15.40%, on par with an attack that transfers adversarial examples from a pre-trained Arcface model.
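To make the transfer-based crafting step concrete, here is a minimal PyTorch sketch. Everything in it is illustrative: `PrototypicalAE`, its architecture, and the reconstruction loss are hypothetical stand-ins, not the paper's exact training objective. The sketch runs iterative sign-gradient ascent on the substitute's reconstruction loss against the correct class prototype, projected onto an L-infinity ball, so the adversarial example is produced without ever touching the victim model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypicalAE(nn.Module):
    """Toy convolutional auto-encoder standing in for a substitute model
    trained with prototypical reconstruction (illustrative only; the
    paper's actual architecture and training objective differ)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def craft_no_box_example(substitute, x, prototype,
                         eps=8 / 255, alpha=2 / 255, steps=50):
    """Iterative FGSM on the substitute's reconstruction loss: push the
    reconstruction away from the correct class prototype while keeping
    the perturbation inside an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.mse_loss(substitute(x_adv), prototype)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)              # stay a valid image
    return x_adv.detach()
```

In the no-box setting, the substitute would be trained on the handful of gathered in-domain examples, and the resulting `x_adv` is simply submitted to the unseen victim model, relying entirely on transferability rather than on any queries during crafting.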
