Layer-wise Regularized Adversarial Training using Layers Sustainability Analysis (LSA) framework

02/05/2022
by Mohammad Khalooei et al.

Deep neural network models are used today in a wide range of artificial intelligence applications, and strengthening them against adversarial attacks is of particular importance. A common defense is adversarial training, which seeks a trade-off between robustness and generalization. This paper introduces a novel framework, Layer Sustainability Analysis (LSA), for analyzing the vulnerability of individual layers of a given neural network under adversarial attack. LSA can serve as a toolkit for assessing deep neural networks and for extending adversarial training approaches to improve the sustainability of model layers through layer monitoring and analysis. The LSA framework identifies a list of Most Vulnerable Layers (the MVL list) of a given network, using relative error as the comparison measure to evaluate how much each layer's representation changes under adversarial inputs. The proposed approach to obtaining robust neural networks applies layer-wise regularization (LR) over the LSA proposals during adversarial training (AT), i.e., the AT-LR procedure. AT-LR can be combined with any benchmark adversarial attack to reduce the vulnerability of network layers and to improve conventional adversarial training. The proposed idea performs well both theoretically and experimentally for state-of-the-art multilayer perceptron and convolutional neural network architectures. Compared with its corresponding base adversarial training, AT-LR increased classification accuracy under more significant perturbations by 16.35% on the MNIST and CIFAR-10 benchmark datasets. The LSA framework is available and published at https://github.com/khalooei/LSA.
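To make the two ideas concrete, the sketch below shows, in PyTorch (which the published repository uses), how layer sustainability can be scored by the relative error between a layer's representations on clean and adversarial inputs, and how that score can act as a layer-wise regularizer on the most vulnerable layers during adversarial training. The toy MLP, the FGSM attack, the specific form of the relative-error measure, and the names relative_error, layer_vulnerability, at_lr_loss, and lam are illustrative assumptions for this sketch, not the authors' implementation; see the repository linked above for the official code.

```python
# Minimal sketch of the LSA / AT-LR ideas from the abstract, in PyTorch.
# The toy MLP, the FGSM attack, and all function names here are
# illustrative assumptions, not the authors' published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def relative_error(clean_feat, adv_feat, eps=1e-12):
    # One plausible form of the relative-error measure:
    # ||f(x_adv) - f(x)|| / ||f(x)||, averaged over the batch.
    diff = (adv_feat - clean_feat).flatten(1).norm(dim=1)
    base = clean_feat.flatten(1).norm(dim=1).clamp_min(eps)
    return (diff / base).mean()

def fgsm(model, x, y, eps):
    # One-step FGSM as a stand-in for any benchmark attack.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

class MLP(nn.Module):
    # Small MLP that can expose its per-layer representations.
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(784, 256), nn.Linear(256, 128), nn.Linear(128, 10)])
    def forward(self, x, return_feats=False):
        feats = []
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i < len(self.layers) - 1:
                x = torch.relu(x)
            feats.append(x)
        return (x, feats) if return_feats else x

def layer_vulnerability(model, x, x_adv):
    # LSA-style pass: relative error per layer; larger means more vulnerable.
    with torch.no_grad():
        _, feats = model(x, return_feats=True)
        _, adv_feats = model(x_adv, return_feats=True)
        return [relative_error(f, fa).item() for f, fa in zip(feats, adv_feats)]

def at_lr_loss(model, x, x_adv, y, mvl_indices, lam=1.0):
    # Adversarial training loss plus a layer-wise regularizer that
    # penalizes representation drift at the most vulnerable layers.
    logits_adv, adv_feats = model(x_adv, return_feats=True)
    _, feats = model(x, return_feats=True)
    reg = sum(relative_error(feats[i], adv_feats[i]) for i in mvl_indices)
    return F.cross_entropy(logits_adv, y) + lam * reg

if __name__ == "__main__":
    torch.manual_seed(0)
    model = MLP()
    x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
    x_adv = fgsm(model, x, y, eps=0.1)
    scores = layer_vulnerability(model, x, x_adv)
    mvl = [max(range(len(scores)), key=scores.__getitem__)]  # top-1 MVL list
    loss = at_lr_loss(model, x, x_adv, y, mvl)
    loss.backward()
    print("per-layer relative errors:", scores, "AT-LR loss:", float(loss))
```

In a real training loop, the MVL list would typically be computed once or refreshed periodically on held-out batches, and the regularization strength and the attack would match whatever benchmark adversarial training is being extended.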

Related research

05/13/2019 - Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models
06/07/2022 - Adaptive Regularization for Adversarial Training
06/04/2022 - Soft Adversarial Training Can Retain Natural Accuracy
01/04/2023 - Availability Adversarial Attack and Countermeasures for Deep Learning-based Load Forecasting
07/30/2023 - On Neural Network approximation of ideal adversarial attack and convergence of adversarial training
03/25/2022 - Improving robustness of jet tagging algorithms with adversarial training
06/24/2018 - SSIMLayer: Towards Robust Deep Representation Learning via Nonlinear Structural Similarity
