Composite Adversarial Attacks

12/10/2020
by Xiaofeng Mao, et al.

Adversarial attack is a technique for deceiving Machine Learning (ML) models, and it provides a way to evaluate their adversarial robustness. In practice, attack algorithms are selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model security. In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms and their hyper-parameters from a candidate pool of 32 base attackers. We design a search space in which an attack policy is represented as an attacking sequence, i.e., the output of the previous attacker is used as the initialization input for its successor. The multi-objective NSGA-II genetic algorithm is adopted to find the strongest attack policy with minimum complexity. Experimental results show that CAA beats 10 top attackers on 11 diverse defenses with less elapsed time (6× faster than AutoAttack) and achieves a new state of the art on l_∞, l_2, and unrestricted adversarial attacks.
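
The core idea, composing base attackers into a sequence where each attacker is initialized with its predecessor's output, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example and not the authors' implementation: the names fgsm_step, pgd_steps, and CompositeAttack are illustrative, only two base attackers are shown rather than the 32 in the paper's pool, and the NSGA-II policy search is omitted.

```python
# Hypothetical sketch of an "attacking sequence": each base attacker starts
# from the adversarial example produced by its predecessor, and the result is
# kept inside an overall l_inf budget around the clean input.
import torch
import torch.nn.functional as F


def fgsm_step(model, x_start, x_clean, y, eps, alpha=8 / 255):
    """One FGSM step from x_start, projected back into the eps-ball of x_clean."""
    x_adv = x_start.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv.detach() + alpha * grad.sign()
    return torch.min(torch.max(x_adv, x_clean - eps), x_clean + eps).clamp(0, 1)


def pgd_steps(model, x_start, x_clean, y, eps, alpha=2 / 255, steps=10):
    """Iterative PGD starting from x_start (which may already be adversarial)."""
    x_adv = x_start.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x_clean - eps), x_clean + eps).clamp(0, 1)
    return x_adv.detach()


class CompositeAttack:
    """Runs an ordered list of base attackers; each one is initialized with the
    previous attacker's output, mirroring the attack-policy encoding above."""

    def __init__(self, attackers, eps=8 / 255):
        self.attackers = attackers
        self.eps = eps

    def __call__(self, model, x_clean, y):
        x_adv = x_clean
        for attack in self.attackers:
            x_adv = attack(model, x_adv, x_clean, y, self.eps)
        return x_adv


# Example fixed policy: FGSM followed by PGD.
# attack = CompositeAttack([fgsm_step, pgd_steps])
# x_adv = attack(model, images, labels)
```

In the paper, both the ordering of attackers and their hyper-parameters (step sizes, iteration counts, and so on) would be chosen by the multi-objective search rather than fixed by hand as they are in this sketch.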

Related research

Multi-objective Evolutionary Search of Variable-length Composite Semantic Perturbations (07/13/2023)
Deep neural networks have proven to be vulnerable to adversarial attacks...

A Multi-objective Memetic Algorithm for Auto Adversarial Attack Optimization Design (08/15/2022)
The phenomenon of adversarial examples has been revealed in variant scen...

Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks (01/08/2021)
Machine Learning (ML) models are known to be vulnerable to adversarial i...

SpacePhish: The Evasion-space of Adversarial Attacks against Phishing Website Detectors using Machine Learning (10/24/2022)
Existing literature on adversarial Machine Learning (ML) focuses either ...

Adversarial training for tabular data with attack propagation (07/28/2023)
Adversarial attacks are a major concern in security-centered application...

Reliable Robustness Evaluation via Automatically Constructed Attack Ensembles (11/23/2022)
Attack Ensemble (AE), which combines multiple attacks together, provides...

On the Robustness of Split Learning against Adversarial Attacks (07/16/2023)
Split learning enables collaborative deep learning model training while ...
