Multiple Perturbation Attack: Attack Pixelwise Under Different ℓ_p-norms For Better Adversarial Performance

12/05/2022
by   Ngoc N. Tran, et al.

Adversarial machine learning has recently become both a major concern and a hot topic, especially given the ubiquitous use of deep neural networks in the current landscape. Adversarial attacks and defenses are often likened to a cat-and-mouse game in which defenders and attackers evolve over time. On one hand, the goal is to develop strong and robust deep networks that are resistant to malicious actors. On the other hand, achieving that requires devising ever-stronger adversarial attacks to challenge these defense models. Most existing attacks employ a single ℓ_p distance (commonly, p∈{1,2,∞}) to define the concept of closeness and perform steepest gradient ascent w.r.t. this p-norm, updating all pixels of an adversarial example in the same way. Each of these ℓ_p attacks has its own pros and cons, and no single attack can reliably break through defense models that are robust against multiple ℓ_p norms simultaneously. Motivated by these observations, we propose a natural approach: combining various ℓ_p gradient projections at the pixel level to form a joint adversarial perturbation. Specifically, we learn how to perturb each pixel to maximize attack performance while maintaining the overall visual imperceptibility of the adversarial examples. Finally, through extensive experiments on standardized benchmarks, we show that our method outperforms most current strong attacks against state-of-the-art defense mechanisms, while keeping the adversarial examples visually clean.
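To make the idea of pixelwise combination of ℓ_p gradient projections concrete, the sketch below shows one way such an attack step could look in a PGD-style loop. This is a minimal illustration, not the authors' method or released code: the function name combined_lp_step, the threshold-based selection mask, and all hyperparameter values are assumptions introduced here for exposition (the paper learns the per-pixel combination rather than using a fixed rule).

```python
# Illustrative sketch only: mixes l_inf-style (sign) and l_2-style (normalized)
# gradient updates per pixel inside one attack step. Names and the selection
# heuristic are hypothetical, not taken from the paper.
import torch
import torch.nn.functional as F

def combined_lp_step(model, x_clean, x_adv, y, eps_inf=8 / 255, alpha=2 / 255):
    """One attack step that combines l_inf and l_2 update directions pixelwise."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]

    # Steepest-ascent direction under the l_inf geometry: the gradient sign.
    step_inf = grad.sign()

    # Steepest-ascent direction under the l_2 geometry: the normalized gradient.
    grad_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
    step_2 = grad / grad_norm

    # Heuristic pixelwise selection (stand-in for the learned combination):
    # use the l_2 direction where the gradient magnitude is above average,
    # and the l_inf direction elsewhere.
    thresh = grad.abs().flatten(1).mean(dim=1).view(-1, 1, 1, 1)
    mask = (grad.abs() > thresh).float()
    step = mask * step_2 + (1 - mask) * step_inf

    # Take the step, then keep the perturbation inside the l_inf ball around
    # the clean image and inside the valid pixel range.
    x_next = (x_adv + alpha * step).detach()
    x_next = torch.min(torch.max(x_next, x_clean - eps_inf), x_clean + eps_inf)
    return torch.clamp(x_next, 0.0, 1.0)
```

In an actual attack this step would be iterated, starting from the clean image (or a random point in the perturbation ball), and the fixed mask above would be replaced by the learned per-pixel combination that the paper optimizes for attack strength and imperceptibility.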

