Soft Adversarial Training Can Retain Natural Accuracy

06/04/2022
by   Abhijith Sharma, et al.

Adversarial training for neural networks has drawn considerable attention in recent years. Advances in network architectures over the last decade have substantially improved performance and spurred interest in deploying these models in real-time applications, which in turn makes understanding their vulnerability to adversarial attacks essential for designing robust models. Recent work has proposed novel defenses against adversaries, most often at the cost of natural accuracy: these methods typically train on adversarial versions of the inputs, steadily moving the training distribution away from the original one. Our work instead uses abstract certification to extract a subset of inputs for adversarial training (hence we call it 'soft' adversarial training). We propose a training framework that retains natural accuracy without sacrificing robustness in a constrained setting. The framework specifically targets moderately critical applications that require a reasonable balance between robustness and accuracy. Our results support the idea of soft adversarial training as a defense against adversarial attacks. Finally, we outline future work for further improving this framework.
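As a rough illustration of the idea (not the authors' implementation), the sketch below uses interval bound propagation as one concrete choice of abstract certification, and runs a PGD attack only on the inputs that fail to certify. All names (soft_adv_step, ibp_certified, the toy two-layer MLP) and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Standard PGD l_inf attack (generic, not specific to the paper)."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball; assumes inputs scaled to [0, 1].
        x_adv = (x.detach() + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def ibp_certified(model, x, y, eps=0.1):
    """Crude interval-bound certification for a Sequential
    Linear-ReLU-Linear model. Returns a boolean mask: True where the
    predicted label is provably unchanged within the l_inf ball of
    radius eps. IBP is just one stand-in for 'abstract certification';
    the abstract does not fix a particular method."""
    W1, b1 = model[0].weight, model[0].bias
    W2, b2 = model[2].weight, model[2].bias
    mid, rad = x, torch.full_like(x, eps)
    # Propagate the interval [mid - rad, mid + rad] through layer 1 + ReLU.
    mid1 = mid @ W1.t() + b1
    rad1 = rad @ W1.abs().t()
    lo1, hi1 = F.relu(mid1 - rad1), F.relu(mid1 + rad1)
    # Propagate through layer 2.
    mid2, rad2 = (lo1 + hi1) / 2, (hi1 - lo1) / 2
    mid_out = mid2 @ W2.t() + b2
    rad_out = rad2 @ W2.abs().t()
    lo_out, hi_out = mid_out - rad_out, mid_out + rad_out
    # Certified if the true logit's lower bound beats every other upper bound.
    lb_true = lo_out.gather(1, y.unsqueeze(1)).squeeze(1)
    hi_out = hi_out.scatter(1, y.unsqueeze(1), float("-inf"))
    return lb_true > hi_out.max(dim=1).values

def soft_adv_step(model, opt, x, y, eps=0.1):
    """One 'soft' training step: adversarial loss only on the subset
    that certification cannot verify; natural loss on the rest."""
    with torch.no_grad():
        safe = ibp_certified(model, x, y, eps)
    x_train = x.clone()
    if (~safe).any():
        x_train[~safe] = pgd_attack(model, x[~safe], y[~safe], eps)
    opt.zero_grad()
    loss = F.cross_entropy(model(x_train), y)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random data in place of a real dataset.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
print(soft_adv_step(model, opt, x, y))
```

Inputs that already certify as robust are trained on unperturbed, which is what keeps training close to the natural distribution and, per the abstract, helps retain natural accuracy.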

Related research

02/05/2022
Layer-wise Regularized Adversarial Training using Layers Sustainability Analysis (LSA) framework
Deep neural network models are used today in various applications of art...

10/18/2022
Scaling Adversarial Training to Large Perturbation Bounds
The vulnerability of Deep Neural Networks to Adversarial Attacks has fue...

05/03/2022
Adversarial Training for High-Stakes Reliability
In the future, powerful AI systems may be deployed in high-stakes settin...

07/27/2023
NSA: Naturalistic Support Artifact to Boost Network Confidence
Visual AI systems are vulnerable to natural and synthetic physical corru...

10/20/2020
Towards Understanding the Dynamics of the First-Order Adversaries
An acknowledged weakness of neural networks is their vulnerability to ad...

04/15/2022
Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning
Adversarial training (i.e., training on adversarially perturbed input da...

10/10/2018
Is PGD-Adversarial Training Necessary? Alternative Training via a Soft-Quantization Network with Noisy-Natural Samples Only
Recent work on adversarial attack and defense suggests that PGD is a uni...
