Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness

05/30/2019
by   Adnan Siraj Rakin, et al.

Deep Neural Networks (DNNs) trained by gradient descent are known to be vulnerable to maliciously perturbed inputs, i.e., adversarial attacks. As a countermeasure against such attacks, increasing the model capacity to enhance DNN robustness has been discussed and reported as effective by many recent works. In this work, we show that shrinking the model size through proper weight pruning can even help improve DNN robustness under adversarial attack. To obtain a simultaneously robust and compact DNN model, we propose a multi-objective training method called Robust Sparse Regularization (RSR), which fuses several regularization techniques: channel-wise noise injection, a lasso weight penalty, and adversarial training. We conduct extensive experiments on the popular ResNet-20, ResNet-18, and VGG-16 architectures to demonstrate the effectiveness of RSR against popular white-box (i.e., PGD and FGSM) and black-box attacks. Thanks to RSR, 85% of the weight connections can be pruned while still achieving 0.68% improved perturbed-data accuracy on the CIFAR-10 dataset, in comparison to the PGD adversarial training baseline.
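The RSR recipe described above (channel-wise noise injection + lasso weight penalty + adversarial training, followed by weight pruning) can be illustrated with a deliberately simplified sketch. The code below is not the paper's implementation: it uses a toy NumPy logistic-regression "network", one-step FGSM in place of PGD adversarial training, a fixed noise scale in place of the paper's trainable channel-wise noise, and plain magnitude pruning. All hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: label depends only on the first two of 20 "channels".
n, d = 200, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = rng.normal(scale=0.1, size=d)
b = 0.0
eps, lam, lr, noise_std = 0.1, 1e-3, 0.5, 0.05

for step in range(200):
    # 1) Channel-wise noise injection (fixed scale here; trainable in the paper).
    Xn = X + noise_std * rng.normal(size=(1, d))
    # 2) One-step FGSM adversarial example (stand-in for PGD training).
    p = sigmoid(Xn @ w + b)
    grad_x = np.outer(p - y, w)          # per-sample dLoss/dx for logistic regression
    X_adv = Xn + eps * np.sign(grad_x)
    # 3) Loss = adversarial cross-entropy + lasso (L1) weight penalty.
    p_adv = sigmoid(X_adv @ w + b)
    grad_w = (p_adv - y) @ X_adv / n + lam * np.sign(w)
    grad_b = (p_adv - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

# 4) Magnitude pruning: zero out the 85% smallest-magnitude weights.
k = int(0.85 * d)
thresh = np.sort(np.abs(w))[k - 1]
w_pruned = np.where(np.abs(w) > thresh, w, 0.0)
sparsity = (w_pruned == 0).mean()
acc = ((sigmoid(X @ w_pruned + b) > 0.5) == y).mean()
print(f"sparsity={sparsity:.2f}, clean accuracy={acc:.2f}")
```

On this toy problem, the lasso penalty concentrates magnitude on the informative channels, so pruning 85% of the connections leaves accuracy largely intact; that interaction between sparsity-inducing regularization and robust training is the intuition RSR builds on.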


Related research

- 11/22/2018: Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack. "Recent development in the field of Deep Learning have exposed the underl..."
- 10/07/2021: Fingerprinting Multi-exit Deep Neural Network Models via Inference Time. "Transforming large deep neural network (DNN) models into the multi-exit ..."
- 06/02/2022: Mask-Guided Divergence Loss Improves the Generalization and Robustness of Deep Neural Network. "Deep neural network (DNN) with dropout can be regarded as an ensemble mo..."
- 11/05/2020: Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA. "The wide deployment of Deep Neural Networks (DNN) in high-performance cl..."
- 09/19/2019: Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks. "We propose Absum, which is a regularization method for improving adversa..."
- 05/28/2021: Robust Regularization with Adversarial Labelling of Perturbed Samples. "Recent researches have suggested that the predictive accuracy of neural ..."
- 03/06/2023: Adversarial Sampling for Fairness Testing in Deep Neural Network. "In this research, we focus on the usage of adversarial sampling to test ..."
