Towards Understanding the Regularization of Adversarial Robustness on Neural Networks

11/15/2020
by   Yuxin Wen, et al.

The problem of adversarial examples has shown that modern Neural Network (NN) models can be rather fragile. Among the more established techniques for addressing the problem, one is to require the model to be ϵ-adversarially robust (AR); that is, to require that the model does not change its predicted label when any given input example is perturbed within a certain range. However, such methods are observed to degrade standard performance, i.e., performance on natural examples. In this work, we study this degradation from the regularization perspective. We identify quantities from the generalization analysis of NNs; with these quantities, we empirically find that AR is achieved by regularizing/biasing NNs towards less confident solutions, making the changes in the feature space of most layers (induced by changes in the instance space) uniformly smoother in all directions; to a certain extent, this prevents sudden changes in prediction w.r.t. perturbations. However, such smoothing concentrates samples around decision boundaries, which results in less confident solutions and worse standard performance. Our study suggests that one might consider ways to build AR into NNs more gently, so as to avoid this problematic regularization.
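To make the ϵ-AR requirement concrete, the following is a minimal PyTorch sketch (illustrative only, not code from the paper): a projected gradient descent (PGD) attack searches the L-infinity ϵ-ball around an input for a label-changing perturbation, and an example is counted as empirically ϵ-robust if the predicted label survives the perturbation found. The function names pgd_perturb and is_eps_robust, the step size alpha, and the number of steps are assumptions made for this example.

# Illustrative sketch (assumed, not from the paper): empirical check of the
# eps-adversarial-robustness requirement via a PGD search in the L-infinity eps-ball.
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Search the eps-ball around x for a loss-maximizing (potentially label-changing) perturbation."""
    x = x.detach()
    # Start from a uniformly random point inside the eps-ball around x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the classification loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                  # keep a valid input range
    return x_adv.detach()

def is_eps_robust(model, x, y, eps=8/255):
    """The predicted label must not change under the strongest perturbation PGD finds."""
    x_adv = pgd_perturb(model, x, y, eps=eps)
    return bool((model(x_adv).argmax(dim=1) == model(x).argmax(dim=1)).all())

Adversarial training methods of the kind whose regularization effect is studied above typically build this requirement into the model by training on such perturbed inputs instead of, or alongside, the natural ones.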

Related research

09/30/2018 - On Regularization and Robustness of Deep Neural Networks
Despite their success, deep neural networks suffer from several drawback...

02/21/2018 - Generalizable Adversarial Examples Detection Based on Bi-model Decision Mismatch
Deep neural networks (DNNs) have shown phenomenal success in a wide rang...

06/07/2019 - Reliable Classification Explanations via Adversarial Attacks on Robust Networks
Neural Networks (NNs) have been found vulnerable to a class of impercept...

12/25/2018 - Adversarial Feature Genome: a Data Driven Adversarial Examples Recognition Method
Convolutional neural networks (CNNs) are easily spoofed by adversarial e...

04/22/2020 - Adversarial examples and where to find them
Adversarial robustness of trained models has attracted considerable atte...

06/30/2020 - Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications
Recent work has shown that it is possible to learn neural networks with ...

09/20/2023 - Using Property Elicitation to Understand the Impacts of Fairness Constraints
Predictive algorithms are often trained by optimizing some loss function...
