Adversarial Training Using Feedback Loops

Deep neural networks (DNNs) have found wide applicability in numerous fields due to their ability to accurately learn very complex input-output relations. Despite their accuracy and widespread use, DNNs are highly susceptible to adversarial attacks because of their limited generalizability. For future progress in the field, it is essential to build DNNs that are robust to any kind of perturbation of the data points. Many techniques proposed in the past robustify DNNs using first-order derivative information of the network. This paper proposes a new robustification approach based on control theory. A neural network architecture that incorporates feedback control, named Feedback Neural Networks, is proposed. The controller is itself a neural network, trained on regular and adversarial data so as to stabilize the system outputs. The novel adversarial training approach based on this feedback control architecture is called Feedback Looped Adversarial Training (FLAT). Numerical results on standard test problems empirically show that our FLAT method is more effective than state-of-the-art methods at guarding against adversarial attacks.
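
The paper's specific FLAT formulation is not reproduced here; the following is a minimal PyTorch sketch of the general idea described in the abstract: a controller network that feeds corrections back into the forward pass, trained on a mix of clean and adversarial data. The layer sizes, the form of the feedback update, the number of feedback iterations, and the single-step FGSM attack used for illustration are all assumptions, not the authors' method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedbackNet(nn.Module):
    """Classifier whose output is refined by a learned feedback controller."""
    def __init__(self, in_dim=784, hidden=256, n_classes=10, n_loops=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, n_classes)
        # Controller network: maps the current logits back to a correction of
        # the hidden state (the exact feedback law in the paper may differ).
        self.controller = nn.Sequential(nn.Linear(n_classes, hidden), nn.Tanh())
        self.n_loops = n_loops

    def forward(self, x):
        h = self.encoder(x.flatten(1))
        y = self.classifier(h)
        for _ in range(self.n_loops):
            h = h + self.controller(y)   # feedback correction of the state
            y = self.classifier(h)       # re-evaluate the output
        return y

def fgsm_perturb(model, x, target, eps=0.1):
    """Single-step (FGSM) adversarial example; a stand-in for the attack used in training."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), target).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def train_step(model, optimizer, x, target):
    """One update on a mix of clean and adversarial data."""
    x_adv = fgsm_perturb(model, x, target)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), target) + F.cross_entropy(model(x_adv), target)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the controller closes a loop from the network's output back to its hidden state, and the combined clean-plus-adversarial loss plays the role of training the controller to stabilize the outputs under perturbations.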

