Multi-objective Evolutionary Search of Variable-length Composite Semantic Perturbations

07/13/2023
by Jialiang Sun, et al.

Deep neural networks have proven to be vulnerable to adversarial attacks, in which specific perturbations added to images induce wrong outputs. Designing stronger adversarial attack methods can help evaluate the robustness of DNN models more reliably. To relieve the manual design burden and improve attack performance, automated machine learning (AutoML) has recently emerged as a successful technique for automatically finding near-optimal adversarial attack strategies. However, existing works on AutoML for adversarial attacks focus only on L_∞-norm-based perturbations. In fact, semantic perturbations are attracting increasing attention due to their naturalness and physical realizability. To bridge the gap between AutoML and semantic adversarial attacks, we propose a novel method called multi-objective evolutionary search of variable-length composite semantic perturbations (MES-VCSP). Specifically, we construct a mathematical model of variable-length composite semantic perturbations, which incorporates five gradient-based semantic attack methods. The same type of perturbation in an attack sequence is allowed to be performed multiple times. Besides, we introduce a multi-objective evolutionary search, consisting of NSGA-II and a neighborhood search, to find near-optimal variable-length attack sequences. Experimental results on the CIFAR10 and ImageNet datasets show that, compared with existing methods, MES-VCSP can obtain adversarial examples with a higher attack success rate, higher naturalness, and lower time cost.
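The search described in the abstract can be sketched with a toy multi-objective evolutionary loop over variable-length perturbation sequences. This is a minimal illustration, not the paper's implementation: the perturbation names, the toy objective functions, and the truncation step (used here in place of NSGA-II's full crowding-distance selection) are all assumptions; a real implementation would evaluate attack success against a target model.

```python
import random

# Illustrative perturbation types standing in for the five gradient-based
# semantic attacks mentioned in the abstract (names are hypothetical).
PERTURBATION_TYPES = ["hue", "saturation", "rotation", "brightness", "contrast"]

def random_sequence(max_len=5):
    """A variable-length attack sequence; repeated types are allowed."""
    length = random.randint(1, max_len)
    return [(random.choice(PERTURBATION_TYPES), random.uniform(0.0, 1.0))
            for _ in range(length)]

def evaluate(seq):
    """Toy objectives (both minimized): negated attack-strength proxy
    and a distortion cost standing in for loss of naturalness."""
    strength = sum(m for _, m in seq)
    distortion = sum(m ** 2 for _, m in seq)
    return (-strength, distortion)

def dominates(a, b):
    """Pareto dominance for minimization objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(pop):
    """Return the Pareto front of (sequence, objectives) pairs."""
    return [pop[i] for i, (_, fi) in enumerate(pop)
            if not any(dominates(fj, fi)
                       for j, (_, fj) in enumerate(pop) if j != i)]

def mutate(seq, max_len=5):
    """Length-changing mutation: insert, delete, or tweak a magnitude,
    so the search explores sequences of different lengths."""
    seq = list(seq)
    op = random.choice(["insert", "delete", "tweak"])
    if op == "insert" and len(seq) < max_len:
        seq.insert(random.randrange(len(seq) + 1),
                   (random.choice(PERTURBATION_TYPES), random.uniform(0.0, 1.0)))
    elif op == "delete" and len(seq) > 1:
        seq.pop(random.randrange(len(seq)))
    else:
        i = random.randrange(len(seq))
        t, m = seq[i]
        seq[i] = (t, min(1.0, max(0.0, m + random.uniform(-0.2, 0.2))))
    return seq

def search(pop_size=20, generations=30, seed=0):
    """Evolve a Pareto set of variable-length attack sequences."""
    random.seed(seed)
    pop = [(s, evaluate(s)) for s in (random_sequence() for _ in range(pop_size))]
    for _ in range(generations):
        children = [(c, evaluate(c)) for c in (mutate(s) for s, _ in pop)]
        pop = nondominated(pop + children)[:pop_size]  # crude truncation
    return pop
```

Running `search()` returns mutually nondominated (sequence, objectives) pairs, trading off the attack-strength proxy against distortion, with sequence length itself subject to evolution.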


Related research

08/15/2022
A Multi-objective Memetic Algorithm for Auto Adversarial Attack Optimization Design
The phenomenon of adversarial examples has been revealed in variant scen...

12/10/2020
Composite Adversarial Attacks
Adversarial attack is a technique for deceiving Machine Learning (ML) mo...

02/09/2022
Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations
Model robustness against adversarial examples of single perturbation typ...

09/14/2022
TSFool: Crafting High-quality Adversarial Time Series through Multi-objective Optimization to Fool Recurrent Neural Network Classifiers
Deep neural network (DNN) classifiers are vulnerable to adversarial atta...

07/16/2022
CARBEN: Composite Adversarial Robustness Benchmark
Prior literature on adversarial attack methods has mainly focused on att...

06/20/2022
Diversified Adversarial Attacks based on Conjugate Gradient Method
Deep learning models are vulnerable to adversarial examples, and adversa...

05/05/2023
White-Box Multi-Objective Adversarial Attack on Dialogue Generation
Pre-trained transformers are popular in state-of-the-art dialogue genera...
