Rethinking Model Ensemble in Transfer-based Adversarial Attacks

03/16/2023
by   Huanran Chen, et al.

Deep learning models are vulnerable to adversarial examples. Transfer-based adversarial attacks have attracted considerable attention because they can expose the weaknesses of deep learning models in a black-box manner. An effective strategy for improving the transferability of adversarial examples is to attack an ensemble of models. However, previous works simply average the outputs of different models, without an in-depth analysis of how and why a model ensemble can so strongly improve transferability. In this work, we rethink the role of the ensemble in adversarial attacks and define the common weakness of a model ensemble in terms of two properties: the flatness of the loss landscape and the closeness to the local optimum of each model. We show both empirically and theoretically that these two properties are strongly correlated with transferability, and we propose the Common Weakness Attack (CWA), which generates more transferable adversarial examples by promoting both properties. Experimental results on both image classification and object detection tasks validate the effectiveness of our approach in improving adversarial transferability, especially when attacking adversarially trained models.
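The baseline that the abstract critiques — simply averaging the losses (or outputs) of several surrogate models and taking signed-gradient steps — can be sketched as follows. This is an illustrative toy with random linear surrogate models and hypothetical function names, not the proposed CWA method:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_loss_grad(W, x, y):
    """Cross-entropy loss of a linear model z = W @ x, and its gradient w.r.t. x."""
    p = softmax(W @ x)
    loss = -np.log(p[y] + 1e-12)
    grad = W.T @ (p - np.eye(len(p))[y])  # d(loss)/dx
    return loss, grad

def ensemble_ifgsm(models, x, y, eps=0.1, alpha=0.02, steps=10):
    """Iterative signed-gradient ascent on the *average* loss over surrogates,
    projected back into an L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        grads = [ce_loss_grad(W, x_adv, y)[1] for W in models]
        g = np.mean(grads, axis=0)                 # plain loss averaging
        x_adv = x_adv + alpha * np.sign(g)         # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay within the eps-ball
    return x_adv

rng = np.random.default_rng(0)
models = [rng.normal(size=(3, 5)) for _ in range(2)]  # two toy surrogates
x = rng.normal(size=5)
y = 1
x_adv = ensemble_ifgsm(models, x, y)
```

The paper's observation is that this averaging alone ignores the geometry of the shared loss landscape; CWA instead seeks perturbations that lie in flat regions close to each surrogate's local optimum.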

Related research:

- Enhancing Adversarial Attacks: The Similar Target Method (08/21/2023)
- Evaluating Ensemble Robustness Against Adversarial Attacks (05/12/2020)
- Comment on Transferability and Input Transformation with Additive Noise (06/18/2022)
- Towards the Transferable Audio Adversarial Attack via Ensemble Methods (04/18/2023)
- Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction (05/08/2019)
- LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity (07/26/2022)
- Stochastic Combinatorial Ensembles for Defending Against Adversarial Examples (08/20/2018)