Diversified Adversarial Attacks based on Conjugate Gradient Method

06/20/2022
by Keiichiro Yamamura, et al.

Deep learning models are vulnerable to adversarial examples, and the adversarial attacks used to generate such examples have attracted considerable research interest. Although existing methods based on steepest descent achieve high attack success rates, ill-conditioned problems occasionally degrade their performance. To address this limitation, we turn to the conjugate gradient (CG) method, which is effective for such problems, and propose a novel attack algorithm inspired by it, named the Auto Conjugate Gradient (ACG) attack. Large-scale evaluation experiments on the latest robust models show that, for most models, ACG finds more adversarial examples in fewer iterations than the existing state-of-the-art algorithm Auto-PGD (APGD). We investigate the difference in search performance between ACG and APGD in terms of diversification and intensification, and define a measure, the Diversity Index (DI), to quantify the degree of diversity in a search. Our analysis using this index shows that the more diverse search of the proposed method substantially improves its attack success rate.
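To make the CG-style update concrete, below is a minimal sketch of an L-infinity attack in PyTorch. It combines a Polak-Ribière conjugate coefficient, one common choice in nonlinear CG, with a signed ascent step and projection. This is an illustrative simplification, not the paper's exact ACG algorithm, which pairs the conjugate direction with APGD's adaptive step-size schedule; the names `cg_attack`, `model`, `x`, `y` and the hyperparameters `eps`, `alpha`, `steps` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def cg_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=100):
    """Sketch of an L-infinity attack with a conjugate-gradient-style update."""
    x_adv = x.clone().detach()
    prev_grad = torch.zeros_like(x)
    direction = torch.zeros_like(x)

    for t in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0].detach()

        if t == 0:
            direction = grad
        else:
            # Polak-Ribiere conjugate coefficient, clamped at zero, one per example.
            num = (grad * (grad - prev_grad)).flatten(1).sum(1)
            den = (prev_grad * prev_grad).flatten(1).sum(1).clamp_min(1e-12)
            beta = (num / den).clamp_min(0.0).view(-1, 1, 1, 1)
            # Mix the new gradient with the previous search direction, so
            # consecutive steps can deviate from the plain gradient sign.
            direction = grad + beta * direction

        prev_grad = grad
        # Signed ascent step, then projection onto the eps-ball around x and [0, 1].
        x_adv = x_adv.detach() + alpha * direction.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    return x_adv
```

Because the conjugate direction blends the current gradient with the previous search direction, successive iterates can follow paths that plain sign-of-gradient methods would not take; this is the kind of search diversity that the Diversity Index is intended to quantify.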


Related research

09/27/2021 · MUTEN: Boosting Gradient-Based Adversarial Attacks via Mutant-Based Ensembles
Deep Neural Networks (DNNs) are vulnerable to adversarial examples, whic...

03/17/2023 · Fuzziness-tuned: Improving the Transferability of Adversarial Examples
With the development of adversarial attacks, adversarial examples have b...

10/15/2021 · Generating Natural Language Adversarial Examples through An Improved Beam Search Algorithm
The research of adversarial attacks in the text domain attracts many int...

11/19/2021 · Resilience from Diversity: Population-based approach to harden models against adversarial attacks
Traditional deep learning models exhibit intriguing vulnerabilities that...

01/16/2021 · Multi-objective Search of Robust Neural Architectures against Multiple Types of Adversarial Attacks
Many existing deep learning models are vulnerable to adversarial example...

07/13/2023 · Multi-objective Evolutionary Search of Variable-length Composite Semantic Perturbations
Deep neural networks have proven to be vulnerable to adversarial attacks...

09/13/2021 · Adversarial Examples for Evaluating Math Word Problem Solvers
Standard accuracy metrics have shown that Math Word Problem (MWP) solver...
