Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input

03/17/2022
by   Junyoung Byun, et al.

The transferability of adversarial examples allows them to deceive black-box models, and transfer-based targeted attacks have attracted considerable interest because of their practical applicability. To maximize the transfer success rate, adversarial examples should avoid overfitting to the source model, and image augmentation is one of the primary approaches to this end. However, prior works rely on simple image transformations such as resizing, which limits input diversity. To overcome this limitation, we propose the object-based diverse input (ODI) method, which draws an adversarial image on a 3D object and induces the rendered image to be classified as the target class. Our motivation comes from humans' robust perception of images printed on 3D objects: if the image is clear enough, humans can recognize its content under a wide variety of viewing conditions. Likewise, if an adversarial example looks like the target class to the model, the model should also classify the rendered image of the 3D object as the target class. The ODI method effectively diversifies the input by leveraging an ensemble of multiple source objects and randomizing viewing conditions. In our experiments on the ImageNet-Compatible dataset, this method substantially boosts the average targeted attack success rate over the 28.3% achieved by state-of-the-art methods. We also demonstrate the applicability of the ODI method to adversarial examples for the face verification task, where it likewise yields substantial performance gains. Our code is available at https://github.com/dreamflake/ODI.
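The core idea of the abstract, averaging the targeted-attack gradient over many randomly transformed "views" of the candidate adversarial example so it does not overfit the source model, can be sketched on a toy linear classifier. Everything below is illustrative: the real method uses deep networks and a differentiable 3D renderer with randomized pose and lighting, whereas this sketch stands in a random contrast/brightness jitter for the rendering step.

```python
import math
import random

random.seed(0)

# Toy linear "source model": logits = W x  (3 classes, 4-dim input).
W = [[ 0.5, -0.2,  0.1, 0.0],
     [-0.3,  0.4,  0.0, 0.2],
     [ 0.1,  0.1, -0.5, 0.3]]

def logits(x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def render(x):
    """Stand-in for ODI's 3D rendering: a random contrast/brightness
    jitter (hypothetical simplification of randomized viewing conditions)."""
    a = random.uniform(0.8, 1.2)    # "lighting" / contrast
    b = random.uniform(-0.1, 0.1)   # "exposure" / brightness
    return [a * v + b for v in x], a

def avg_targeted_grad(x, target, n_views=8):
    """Average the targeted cross-entropy gradient over random views."""
    g = [0.0] * len(x)
    for _ in range(n_views):
        xr, a = render(x)
        p = softmax(logits(xr))
        diff = [p_k - (1.0 if k == target else 0.0) for k, p_k in enumerate(p)]
        # Chain rule through the render: d logits / dx = a * W,
        # so dCE/dx = a * W^T (p - onehot(target)).
        for j in range(len(x)):
            g[j] += a * sum(W[k][j] * diff[k] for k in range(len(W))) / n_views
    return g

x = [0.2, -0.1, 0.4, 0.0]          # clean input
target, eps, alpha = 2, 0.5, 0.05  # target class, L_inf budget, step size

p_before = softmax(logits(x))[target]
adv = x[:]
for _ in range(50):
    g = avg_targeted_grad(adv, target)
    # Signed descent on the targeted loss, clipped to the L_inf ball around x.
    adv = [min(x[j] + eps, max(x[j] - eps,
               adv[j] - alpha * (1.0 if g[j] > 0 else -1.0)))
           for j in range(len(adv))]
p_after = softmax(logits(adv))[target]
print(round(p_before, 3), round(p_after, 3))
```

Because every iteration sees a differently "rendered" input, the perturbation must raise the target-class score across the whole distribution of views rather than for one fixed input, which is the mechanism the paper credits for better transfer to unseen black-box models.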

Related research

- 07/05/2021: Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks
- 05/24/2023: Introducing Competition to Boost the Transferability of Targeted Adversarial Examples through Clean Feature Mixup
- 03/07/2023: Logit Margin Matters: Improving Transferable Targeted Adversarial Attack by Logit Calibration
- 07/12/2022: Frequency Domain Model Augmentation for Adversarial Attack
- 07/29/2021: Feature Importance-aware Transferable Adversarial Attacks
- 05/31/2021: Transferable Sparse Adversarial Attack
- 08/18/2022: Enhancing Targeted Attack Transferability via Diversified Weight Pruning
