Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations

03/03/2021
by Yu-Lin Tsai, et al.

Studying the sensitivity of neural networks to weight perturbations and its impact on model performance, including generalization and robustness, is an active research topic due to its implications for a wide range of machine learning tasks such as model compression, generalization gap assessment, and adversarial attacks. In this paper, we provide the first formal analysis of feed-forward neural networks with non-negative monotone activation functions under norm-bounded weight perturbations, in terms of the robustness of pairwise class margin functions and the Rademacher complexity for generalization. We further design a new theory-driven loss function for training neural networks that are generalizable and robust to weight perturbations. Empirical experiments are conducted to validate our theoretical analysis. Our results offer fundamental insights for characterizing the generalization and robustness of neural networks against weight perturbations.
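
The paper's loss function is derived from its margin and Rademacher-complexity analysis; as a hedged illustration only, the sketch below shows one generic way to train against norm-bounded weight perturbations in PyTorch, not the authors' exact formulation. It adds to the clean cross-entropy the same loss evaluated under a random weight perturbation whose per-layer Frobenius norm is capped at eps. The function name weight_perturbation_loss, the eps value, and the single-sample random perturbation are illustrative assumptions; PyTorch 2.x with torch.func.functional_call is assumed.

import torch
import torch.nn.functional as F
from torch.func import functional_call


def weight_perturbation_loss(model, x, y, eps=0.05):
    """Clean cross-entropy plus cross-entropy under a random norm-bounded
    weight perturbation (a generic sketch, not the paper's exact loss)."""
    clean_loss = F.cross_entropy(model(x), y)

    # Build perturbed weights W + delta with ||delta||_F <= eps per layer.
    # Biases (1-D parameters) are left unperturbed in this illustration.
    perturbed = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:
            noise = torch.randn_like(p)
            noise = eps * noise / (noise.norm() + 1e-12)
            perturbed[name] = p + noise  # gradients still flow back to p
        else:
            perturbed[name] = p

    # Functional forward pass with the perturbed weights, leaving the
    # model's stored parameters untouched.
    robust_loss = F.cross_entropy(functional_call(model, perturbed, (x,)), y)
    return clean_loss + robust_loss


# Example usage with a hypothetical two-layer ReLU network:
model = torch.nn.Sequential(
    torch.nn.Linear(20, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
)
x, y = torch.randn(8, 20), torch.randint(0, 10, (8,))
loss = weight_perturbation_loss(model, x, y)
loss.backward()

Using functional_call avoids mutating the model's parameters in place, so both loss terms contribute gradients to the same underlying weights; averaging over several perturbation samples would be a natural extension.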

Related research

02/23/2021 - Non-Singular Adversarial Robustness of Neural Networks
Adversarial robustness has become an emerging challenge for neural netwo...

10/29/2018 - Rademacher Complexity for Adversarially Robust Generalization
Many machine learning models are vulnerable to adversarial attacks. It h...

02/23/2018 - Sensitivity and Generalization in Neural Networks: an Empirical Study
In practice it is often found that large over-parameterized neural netwo...

06/26/2022 - Self-Healing Robust Neural Networks via Closed-Loop Control
Despite the wide applications of neural networks, there have been increa...

10/27/2018 - Towards Robust Deep Neural Networks
We examine the relationship between the energy landscape of neural netwo...

06/23/2021 - Minimum sharpness: Scale-invariant parameter-robustness of neural networks
Toward achieving robust and defensive neural networks, the robustness ag...

10/01/2019 - Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory
We empirically evaluate common assumptions about neural networks that ar...
