A general framework for defining and optimizing robustness

06/19/2020
by Alessandro Tibo, et al.

Robustness of neural networks has recently attracted a great deal of interest. However, the many investigations in this area lack a common, precise foundation for robustness concepts. In this paper we therefore propose a rigorous and flexible framework for defining different types of robustness, one that also helps to explain the interplay between adversarial robustness and generalization. The different robustness objectives lead directly to an adjustable family of loss functions. For two robustness concepts of particular interest we show effective ways to minimize the corresponding loss functions: one loss is designed to strengthen robustness against adversarial off-manifold attacks, the other to improve generalization under the given data distribution. Empirical results show that we can effectively train under different robustness objectives, obtaining, for the two examples respectively, higher robustness scores and better generalization than state-of-the-art data augmentation and regularization techniques.
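To make the idea of a robustness objective concrete: a standard way to turn a robustness requirement into a loss (not the paper's specific family, just a common illustrative instance) is to replace the pointwise loss with the worst-case loss over a perturbation ball around each input. The sketch below does this for a binary linear classifier under an L-infinity ball, where the inner maximization has a closed form; the function names and setup are illustrative assumptions, not from the paper.

```python
import numpy as np

def logistic_loss(w, x, y):
    # Standard binary logistic loss for a linear model; labels y are in {-1, +1}.
    return np.log1p(np.exp(-y * np.dot(w, x)))

def robust_logistic_loss(w, x, y, eps):
    # Worst-case logistic loss over all perturbations delta with
    # ||delta||_inf <= eps. For a linear model the inner maximization
    # is exact: the adversary shifts every coordinate by eps against
    # the prediction, shrinking the margin by eps * ||w||_1.
    margin = y * np.dot(w, x)
    worst_margin = margin - eps * np.sum(np.abs(w))
    return np.log1p(np.exp(-worst_margin))
```

With `eps = 0` the robust loss reduces to the ordinary loss; for `eps > 0` it upper-bounds it, so minimizing it trades average-case accuracy for worst-case stability. In deep networks the inner maximization has no closed form, and approximations such as projected gradient ascent are used instead.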

Related research:

- Certifying Out-of-Domain Generalization for Blackbox Functions (02/03/2022)
- How Does Mixup Help With Robustness and Generalization? (10/09/2020)
- Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness (10/15/2020)
- Feedback Learning for Improving the Robustness of Neural Networks (09/12/2019)
- Learning Stochastic Dynamical Systems as an Implicit Regularization with Graph Neural Networks (07/12/2023)
- Understanding Adversarial Robustness Through Loss Landscape Geometries (07/22/2019)
- Improve Adversarial Robustness via Weight Penalization on Classification Layer (10/08/2020)
