Is current research on adversarial robustness addressing the right problem?

07/31/2022
by Ali Borji et al.

Short answer: yes; long answer: no! Research on adversarial robustness has indeed led to invaluable insights, helping us understand and explore different aspects of the problem, and many attacks and defenses have been proposed over the last few years. The problem, however, remains largely unsolved and poorly understood. Here, I argue that the current formulation of the problem serves short-term goals and needs to be revised if we are to achieve bigger gains. Specifically, the bound on perturbation magnitude has created a somewhat contrived setting and needs to be relaxed. It has misled us into focusing on model classes that are not expressive enough to begin with. Instead, inspired by human vision and by the fact that we rely more on robust features such as shape, vertices, and foreground objects than on non-robust features such as texture, efforts should be steered toward significantly different classes of models. Perhaps instead of narrowing in on imperceptible adversarial perturbations, we should attack a more general problem: finding architectures that are simultaneously robust to perceptible perturbations, geometric transformations (e.g., rotation, scaling), image distortions (e.g., lighting, blur), and more (e.g., occlusion, shadow). Only then may we be able to solve the problem of adversarial vulnerability.
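To make the bounded-perturbation setting that the abstract critiques concrete, here is a minimal sketch (not from the paper) of a one-step l_inf-bounded attack in the style of FGSM, applied to a toy logistic-regression model. The model, weights, and epsilon value are all illustrative assumptions; the point is only that the perturbation is constrained to an epsilon-ball around the input, which is exactly the "contrived setting" being questioned.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_linf(x, y, w, b, eps):
    """One-step l_inf-bounded attack (FGSM-style) on logistic regression.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; stepping eps in its sign direction
    increases the loss while keeping the perturbation inside the
    l_inf ball of radius eps around x.
    """
    grad = (sigmoid(w @ x + b) - y) * w
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixels in a valid range

# Toy example: a 16-"pixel" input classified by a fixed linear model.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = rng.uniform(0.2, 0.8, size=16)
y = 1.0  # assumed true label

eps = 0.05  # the perturbation budget central to the standard threat model
x_adv = fgsm_linf(x, y, w, b, eps)

# The perturbation is small by construction (inside the eps-ball) ...
print(np.max(np.abs(x_adv - x)))
# ... yet it pushes the model's confidence in the true label down.
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

Note that rotations, blur, occlusion, and the other perceptible corruptions mentioned above do not fit inside such an epsilon-ball at all, which is why relaxing this bound changes the problem being solved.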


