MaskBlock: Transferable Adversarial Examples with Bayes Approach

08/13/2022
by   Mingyuan Fan, et al.

The transferability of adversarial examples (AEs) across diverse models is of critical importance for black-box adversarial attacks, where attackers cannot access information about the black-box models. However, crafted AEs often transfer poorly. In this paper, by regarding the transferability of AEs as the generalization ability of a model, we show that vanilla black-box attacks craft AEs by solving a maximum likelihood estimation (MLE) problem. With MLE, the result is likely a model-specific local optimum when the available data is limited, which restricts the transferability of AEs. By contrast, we re-formulate crafting transferable AEs as a maximum a posteriori (MAP) estimation problem, an effective approach for improving the generalization of results when only limited data is available. Because Bayesian posterior inference is generally intractable, we develop a simple yet effective method, MaskBlock, to approximate it. Moreover, we show that the formulated framework is a generalized version of various attack methods. Extensive experiments illustrate that MaskBlock can significantly improve the transferability of crafted adversarial examples, by up to about 20%.
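To make the idea concrete, below is a minimal sketch of an iterative sign-gradient attack that zeroes out a randomly chosen block of the update at each step, so the perturbation cannot overfit any single region of the input. This is an illustrative interpretation of the block-masking idea only: the function name `maskblock_fgsm`, the surrogate `grad_fn` interface, and all parameter choices are assumptions for the example, not the paper's exact procedure.

```python
import numpy as np

def maskblock_fgsm(x, grad_fn, eps=0.03, steps=10, block=8, rng=None):
    """Iterative FGSM-style attack with random block masking.

    At each step, a random square block of the sign-gradient update is
    zeroed out before it is applied. The intuition (an assumption here)
    is that masking prevents the perturbation from committing to any
    one model-specific region, which may aid transferability.
    """
    rng = np.random.default_rng(rng)
    adv = x.copy()
    alpha = eps / steps          # per-step budget
    h, w = x.shape[:2]
    for _ in range(steps):
        g = grad_fn(adv)         # surrogate-model gradient w.r.t. input
        # Build a mask that zeroes one random block-by-block square.
        mask = np.ones_like(g)
        top = rng.integers(0, max(1, h - block + 1))
        left = rng.integers(0, max(1, w - block + 1))
        mask[top:top + block, left:left + block] = 0.0
        adv = adv + alpha * np.sign(g) * mask
        # Project back into the eps-ball around the clean input.
        adv = np.clip(adv, x - eps, x + eps)
    return adv
```

For example, `maskblock_fgsm(x, lambda a: np.ones_like(a), eps=0.03)` with a toy constant-gradient surrogate returns a perturbed input whose deviation from `x` never exceeds `eps`.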

Related research:

- 02/28/2022: Enhance Transferability of Adversarial Examples with Model Architecture
- 11/17/2020: Generating Universal Language Adversarial Examples by Understanding and Enhancing the Transferability across Neural Models
- 07/23/2019: Enhancing Adversarial Example Transferability with an Intermediate Level Attack
- 05/19/2022: Enhancing the Transferability of Adversarial Examples via a Few Queries
- 07/26/2022: LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity
- 06/15/2021: Model Extraction and Adversarial Attacks on Neural Networks using Switching Power Information
- 04/11/2023: Boosting Cross-task Transferability of Adversarial Patches with Visual Relations
