Improving Transferability of Adversarial Examples via Bayesian Attacks

07/21/2023
by   Qizhang Li, et al.

This paper presents a substantial extension of our work published at ICLR. That work advocated enhancing the transferability of adversarial examples by incorporating a Bayesian formulation over the model parameters, which effectively emulates an ensemble of infinitely many deep neural networks. In this paper, we extend the Bayesian formulation to the model input as well, enabling joint diversification of both the model input and the model parameters. Our empirical findings demonstrate that: 1) combining Bayesian formulations for the model input and the model parameters yields significant improvements in transferability; 2) introducing advanced approximations of the posterior distribution over the model input further enhances adversarial transferability, surpassing all state-of-the-art methods when attacking without model fine-tuning. Moreover, we propose a principled approach to fine-tuning model parameters in this extended Bayesian formulation. The derived optimization objective inherently encourages flat minima in both the parameter space and the input space. Extensive experiments demonstrate that our method achieves a new state of the art on transfer-based attacks, improving the average success rate on ImageNet and CIFAR-10 by 19.14% compared with our basic ICLR Bayesian method. We will make our code publicly available.
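To illustrate the core idea of jointly diversifying the model input and the model parameters, here is a minimal numpy sketch. It is not the paper's implementation: the linear logistic "model", the isotropic Gaussian posterior approximations (`sigma_w`, `sigma_x`), and all function names are illustrative assumptions. At each attack step, the gradient is averaged over samples drawn around the current parameters and the current adversarial input, approximating an attack on an ensemble of perturbed models.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    # Gradient of the logistic loss log(1 + exp(-y * w @ x))
    # with respect to the input x, for a linear "model" w.
    p = 1.0 / (1.0 + np.exp(-y * (w @ x)))  # P(correct label)
    return -(1.0 - p) * y * w

def bayesian_attack(w, x, y, eps=0.3, steps=10, n_samples=8,
                    sigma_w=0.1, sigma_x=0.05):
    """Sign-gradient attack whose gradient is averaged over Gaussian
    samples of both the model parameters and the model input
    (a toy stand-in for the joint Bayesian formulation)."""
    x_adv = x.copy()
    step = eps / steps
    for _ in range(steps):
        g = np.zeros_like(x)
        for _ in range(n_samples):
            w_s = w + sigma_w * rng.standard_normal(w.shape)      # parameter sample
            x_s = x_adv + sigma_x * rng.standard_normal(x.shape)  # input sample
            g += loss_grad(w_s, x_s, y)
        x_adv = x_adv + step * np.sign(g / n_samples)  # ascend the averaged loss
        x_adv = np.clip(x_adv, x - eps, x + eps)       # stay inside the eps-ball
    return x_adv

w = rng.standard_normal(5)
x = rng.standard_normal(5)
x_adv = bayesian_attack(w, x, y=1.0)
```

Setting `sigma_w = sigma_x = 0` recovers a plain single-model sign-gradient attack; the averaging over samples is what emulates attacking many models (and many nearby inputs) at once.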

Related research

02/10/2023 | Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples
The transferability of adversarial examples across deep neural networks ...

03/29/2021 | Enhancing the Transferability of Adversarial Attacks through Variance Tuning
Deep neural networks are vulnerable to adversarial examples that mislead...

09/21/2020 | Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness
Ensemble-based adversarial training is a principled approach to achieve ...

07/26/2023 | Identifiability and Falsifiability: Two Challenges for Bayesian Model Expansion
We study the identifiability of model parameters and falsifiability of m...

03/17/2023 | Fuzziness-tuned: Improving the Transferability of Adversarial Examples
With the development of adversarial attacks, adversarial examples have b...

11/10/2020 | Efficient and Transferable Adversarial Examples from Bayesian Neural Networks
Deep neural networks are vulnerable to evasion attacks, i.e., carefully ...
