Adversarial Amendment is the Only Force Capable of Transforming an Enemy into a Friend

05/18/2023
by Chong Yu et al.

Adversarial attacks are commonly regarded as a serious threat to neural networks because of the misleading behavior they induce. This paper presents the opposite perspective: adversarial attacks can be harnessed to improve neural models if amended correctly. Unlike traditional adversarial defense or adversarial training schemes, which aim to improve adversarial robustness, the proposed adversarial amendment (AdvAmd) method aims to raise the original accuracy of neural models on benign samples. We thoroughly analyze the distribution mismatch between benign and adversarial samples. This mismatch, together with the mutual learning mechanism with a shared learning ratio used in prior defense strategies, is the main cause of the accuracy degradation on benign samples. The proposed AdvAmd is demonstrated to steadily heal this degradation and even deliver a modest accuracy boost for common neural models on benign classification, object detection, and segmentation tasks. Quantitative and ablation experiments attribute the efficacy of AdvAmd to three key components: mediate samples (to reduce the influence of the distribution mismatch with a fine-grained amendment), an auxiliary batch norm (to resolve the mutual learning mechanism and smooth the judgment surface), and the AdvAmd loss (to adjust learning ratios according to different attack vulnerabilities).
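The abstract names three components but gives no implementation details, so the following is only a minimal Python sketch (assuming PyTorch) of how such a scheme could be wired together. Every name here (AuxBatchNorm2d, set_aux_mode, mediate_samples, advamd_step, the attack callable, and the vulnerability weights) is a hypothetical illustration, not the paper's actual API: the input interpolation stands in for "mediate samples," the dual batch-norm branches stand in for the auxiliary batch norm, and the per-sample loss weighting stands in for the attack-dependent learning ratios of the AdvAmd loss.

import torch
import torch.nn as nn
import torch.nn.functional as F


class AuxBatchNorm2d(nn.Module):
    """Dual batch norm: benign samples keep the main running statistics,
    while adversarial/mediate samples are routed through an auxiliary
    branch so the two distributions do not contaminate each other."""

    def __init__(self, num_features: int):
        super().__init__()
        self.bn_main = nn.BatchNorm2d(num_features)  # benign statistics
        self.bn_aux = nn.BatchNorm2d(num_features)   # adversarial statistics
        self.use_aux = False                          # toggled per forward pass

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.bn_aux(x) if self.use_aux else self.bn_main(x)


def set_aux_mode(model: nn.Module, use_aux: bool) -> None:
    """Switch every AuxBatchNorm2d in the model between the two branches."""
    for m in model.modules():
        if isinstance(m, AuxBatchNorm2d):
            m.use_aux = use_aux


def mediate_samples(x_benign: torch.Tensor, x_adv: torch.Tensor,
                    alpha: float = 0.5) -> torch.Tensor:
    """Interpolate benign and adversarial inputs to soften the distribution
    mismatch (one plausible reading of 'mediate samples')."""
    return alpha * x_benign + (1.0 - alpha) * x_adv


def advamd_step(model, x, y, attack, vulnerability, optimizer, alpha=0.5):
    """One hypothetical training step: a benign loss plus a mediate-sample
    loss reweighted by a per-sample vulnerability score."""
    # attack is assumed to return a detached perturbed batch, e.g. from PGD.
    x_adv = attack(model, x, y)
    x_med = mediate_samples(x, x_adv, alpha)

    set_aux_mode(model, False)                  # benign pass: main BN
    loss_benign = F.cross_entropy(model(x), y)

    set_aux_mode(model, True)                   # mediate pass: auxiliary BN
    loss_med = F.cross_entropy(model(x_med), y, reduction="none")
    loss_med = (vulnerability * loss_med).mean()  # vulnerable samples weigh more

    optimizer.zero_grad()
    (loss_benign + loss_med).backward()
    optimizer.step()

The two-branch routing follows the common dual-batch-norm pattern from adversarial training literature; whether AdvAmd realizes its auxiliary batch norm this way, and how it actually forms mediate samples and vulnerability scores, would need to be checked against the full paper.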
