Non-Singular Adversarial Robustness of Neural Networks

02/23/2021
by Yu-Lin Tsai, et al.

Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations. While critical, we argue that solving this singular issue alone fails to provide a comprehensive robustness assessment. Even worse, conclusions drawn from singular robustness may give a false sense of overall model robustness. Specifically, our findings show that adversarially trained models that are robust to input perturbations remain (or are even more) vulnerable to weight perturbations than standard models. In this paper, we formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs and model weights. To the best of our knowledge, this study is the first to consider simultaneous input-weight adversarial perturbations. For a multi-layer feed-forward neural network with ReLU activations and a standard classification loss, we establish an error analysis quantifying the loss sensitivity subject to ℓ_∞-norm bounded perturbations on data inputs and model weights. Building on this analysis, we propose novel regularization functions for robust training and demonstrate improved non-singular robustness against joint input-weight adversarial perturbations.
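The paper's exact error analysis and regularizers are in the full text; as an illustration of what a joint input-weight perturbation means operationally, below is a minimal PyTorch sketch that measures the loss after a one-step FGSM-style ℓ_∞ attack applied to both the inputs and the weights. The function name joint_perturbation_loss and the budgets eps_x and eps_w are hypothetical choices for this sketch, not the authors' proposed method.

import torch
import torch.nn as nn
import torch.nn.functional as F

def joint_perturbation_loss(model, x, y, eps_x=8/255, eps_w=1e-3):
    """Loss after a one-step l_inf attack on both the input batch and
    the model weights (illustrative sketch, not the paper's algorithm)."""
    # One-step sign-gradient ascent on the inputs (l_inf ball of radius eps_x).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad_x = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + eps_x * grad_x.sign()).clamp(0.0, 1.0).detach()

    # One-step sign-gradient ascent on the weights (l_inf ball of radius
    # eps_w), saving the originals so the model can be restored afterwards.
    params = [p for p in model.parameters() if p.requires_grad]
    originals = [p.detach().clone() for p in params]
    loss = F.cross_entropy(model(x_adv), y)
    grads_w = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads_w):
            p.add_(eps_w * g.sign())
        adv_loss = F.cross_entropy(model(x_adv), y)
        for p, orig in zip(params, originals):
            p.copy_(orig)
    return adv_loss

# Example: a small ReLU feed-forward classifier on random data,
# with inputs assumed scaled to [0, 1].
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(joint_perturbation_loss(model, x, y).item())

Comparing this quantity against the clean loss, or against the loss under an input-only attack, gives a rough sense of the "non-singular" robustness gap the abstract describes.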

Related research

03/03/2021
Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations
Studying the sensitivity of weight perturbation in neural networks and i...

09/09/2018
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
We explore the concept of co-design in the context of neural network ver...

06/03/2019
Adversarial Risk Bounds for Neural Networks through Sparsity based Compression
Neural networks have been shown to be vulnerable against minor adversari...

05/31/2019
Residual Networks as Nonlinear Systems: Stability Analysis using Linearization
We regard pre-trained residual networks (ResNets) as nonlinear systems a...

08/04/2020
Can Adversarial Weight Perturbations Inject Neural Backdoors?
Adversarial machine learning has exposed several security hazards of neu...

12/07/2017
A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations
Recent work has shown that neural network-based vision classifiers exhib...

02/12/2020
Fast Geometric Projections for Local Robustness Certification
Local robustness ensures that a model classifies all inputs within an ϵ-...
