Robust Regularization with Adversarial Labelling of Perturbed Samples

05/28/2021
by Xiaohui Guo, et al.

Recent research has suggested that a neural network's predictive accuracy may be at odds with its adversarial robustness. This presents challenges in designing effective regularization schemes that also provide strong adversarial robustness. Revisiting Vicinal Risk Minimization (VRM) as a unifying regularization principle, we propose Adversarial Labelling of Perturbed Samples (ALPS), a regularization scheme that aims to improve both the generalization ability and the adversarial robustness of the trained model. ALPS trains neural networks on synthetic samples formed by perturbing each authentic input sample towards another one, together with an adversarially assigned label. The ALPS regularization objective is formulated as a min-max problem, in which the outer problem minimizes an upper bound of the VRM loss, and the inner problem performs L_1-ball constrained adversarial labelling of the perturbed samples. The analytic solution to the induced inner maximization problem is derived, which makes the scheme computationally efficient. Experiments on the SVHN, CIFAR-10, CIFAR-100 and Tiny-ImageNet datasets show that ALPS achieves state-of-the-art regularization performance while also serving as an effective adversarial training scheme.
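The abstract does not spell out the perturbation scheme or the closed-form inner solution, so the PyTorch sketch below only illustrates the min-max structure under stated assumptions: alps_step, adversarial_labels, eps, and delta are hypothetical names, and the mixup-style interpolation plus the mass-shifting rule are one plausible instantiation of "perturbing each sample towards another" with L_1-ball constrained adversarial labelling, not the paper's actual derivation.

```python
import torch
import torch.nn.functional as F

def adversarial_labels(p_model, y_onehot, delta=0.2):
    # Inner maximization (illustrative closed form, not necessarily the
    # paper's exact solution): reassign label mass inside an L1 ball of
    # radius `delta` around the clean one-hot label while keeping it a
    # valid probability vector. For soft-label cross-entropy, the loss
    # grows fastest when mass moves from the true class to the class the
    # model currently rates least likely, so shift delta/2 of mass there.
    true_cls = y_onehot.argmax(dim=1)
    p_masked = p_model.scatter(1, true_cls.unsqueeze(1), float("inf"))
    worst_cls = p_masked.argmin(dim=1)        # least-likely non-true class
    shift = min(delta / 2.0, 1.0)             # cannot move more mass than exists
    rows = torch.arange(y_onehot.size(0))
    y_adv = y_onehot.clone()
    y_adv[rows, true_cls] -= shift
    y_adv[rows, worst_cls] += shift
    return y_adv

def alps_step(model, x, y, num_classes, eps=0.1, delta=0.2):
    # Outer minimization: perturb each input towards a randomly paired
    # partner, label the perturbed sample adversarially via the analytic
    # inner solution above, and return the soft-label cross-entropy.
    idx = torch.randperm(x.size(0))
    x_pert = (1.0 - eps) * x + eps * x[idx]   # perturb towards another sample
    y_onehot = F.one_hot(y, num_classes).float()
    with torch.no_grad():
        p = F.softmax(model(x_pert), dim=1)   # current predictions, no gradient
    y_adv = adversarial_labels(p, y_onehot, delta)
    log_p = F.log_softmax(model(x_pert), dim=1)
    return -(y_adv * log_p).sum(dim=1).mean()
```

In a training loop this would be used like any other loss: loss = alps_step(model, x, y, num_classes); loss.backward(); optimizer.step().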

