Cross-Domain Transferability of Adversarial Perturbations

05/28/2019
by   Muzammal Naseer, et al.

Adversarial examples reveal the blind spots of deep neural networks (DNNs) and represent a major concern for security-critical applications. The transferability of adversarial examples makes real-world attacks possible in black-box settings, where the attacker is denied access to the internal parameters of the model. The underlying assumption in most adversary generation methods, whether learning an instance-specific or an instance-agnostic perturbation, is the direct or indirect reliance on the original domain-specific data distribution. In this work, for the first time, we demonstrate the existence of domain-invariant adversaries, thereby showing a common adversarial space among different datasets and models. To this end, we propose a framework capable of launching highly transferable attacks that craft adversarial patterns to mislead networks trained on wholly different domains. For instance, an adversarial function learned on Paintings, Cartoons or Medical images can successfully perturb ImageNet samples to fool the classifier, with success rates as high as ∼99% (ℓ_∞ < 10). The core of our proposed adversarial function is a generative network that is trained using a relativistic supervisory signal that enables domain-invariant perturbations. Our approach sets the new state-of-the-art for fooling rates, both under the white-box and black-box scenarios. Furthermore, despite being an instance-agnostic perturbation function, our attack outperforms the conventionally much stronger instance-specific attack methods.
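The ℓ_∞ < 10 budget mentioned above means each pixel of the perturbed image may differ from the clean image by at most 10 intensity levels (on a 0–255 scale). A minimal sketch of that projection step, the standard way such a bound is enforced after a generator produces an unconstrained perturbation; the function name and flat-list image representation are illustrative, not the paper's implementation:

```python
def project_linf(x_adv, x, eps=10.0):
    """Project an adversarial image x_adv onto the l_inf ball of radius
    eps around the clean image x, then clip to the valid pixel range
    [0, 255]. Images are represented as flat lists of pixel values."""
    out = []
    for a, c in zip(x_adv, x):
        a = max(c - eps, min(c + eps, a))   # enforce |a - c| <= eps
        a = max(0.0, min(255.0, a))         # keep a valid pixel value
        out.append(a)
    return out

# Example: pixels that drift beyond the budget are pulled back,
# and out-of-range values are clipped to [0, 255].
clean = [100.0, 0.0, 250.0]
adversarial = [150.0, -20.0, 255.0]
projected = project_linf(adversarial, clean, eps=10.0)  # [110.0, 0.0, 255.0]
```

In practice this projection is applied to tensors (e.g. with `torch.clamp`) after every generator forward pass, so the attack stays imperceptible regardless of how large the raw generator output is.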


Related research:

- 03/03/2020 · Data-Free Adversarial Perturbations for Practical Black-Box Attack: Neural networks are vulnerable to adversarial examples, which are malici...
- 04/23/2023 · StyLess: Boosting the Transferability of Adversarial Examples: Adversarial attacks can mislead deep neural networks (DNNs) by adding im...
- 12/10/2021 · Cross-Modal Transferable Adversarial Attacks from Images to Videos: Recent studies have shown that adversarial examples hand-crafted on one ...
- 09/09/2021 · Energy Attack: On Transferring Adversarial Examples: In this work we propose Energy Attack, a transfer-based black-box L_∞-ad...
- 09/19/2023 · Transferable Adversarial Attack on Image Tampering Localization: It is significant to evaluate the security of existing digital image tam...
- 11/19/2021 · Towards Efficiently Evaluating the Robustness of Deep Neural Networks in IoT Systems: A GAN-based Method: Intelligent Internet of Things (IoT) systems based on deep neural networ...
- 10/19/2020 · When Bots Take Over the Stock Market: Evasion Attacks Against Algorithmic Traders: In recent years, machine learning has become prevalent in numerous tasks...
