Understanding Adversarial Robustness Through Loss Landscape Geometries

07/22/2019
by Vinay Uday Prabhu, et al.

The pursuit of explaining and improving generalization in deep learning has elicited efforts in both regularization techniques and visualization of loss-surface geometry. The latter is related to the intuition, prevalent in the community, that flatter local optima lead to lower generalization error. In this paper, we harness the state-of-the-art "filter normalization" technique for loss-surface visualization to qualitatively understand the consequences of using adversarial training data augmentation as the explicit regularization technique of choice. Much to our surprise, we discover that this oft-deployed adversarial augmentation technique does not actually result in "flatter" loss landscapes, a finding that calls for rethinking generalization under adversarial training and the relationship between generalization and loss-landscape geometry.
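The abstract names two concrete techniques: filter-normalized loss-surface visualization (Li et al., 2018) and adversarial data augmentation. No code accompanies this abstract, so the following is a minimal PyTorch sketch of the filter-normalization idea, in which a random direction in weight space is rescaled filter-by-filter to match the norms of the model's own filters before the loss is probed along it. The function names, the zeroing of bias/BatchNorm directions, and the single-direction (1D) slice are illustrative assumptions, not the authors' implementation.

```python
import torch

def filter_normalized_direction(model):
    """Sample a random direction whose filters are rescaled to match
    the norms of the corresponding filters in the model (Li et al., 2018)."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() <= 1:
            # Biases / BatchNorm parameters: a common convention is to zero them.
            d.zero_()
        else:
            # Treat dim 0 as the filter axis; match each filter's norm.
            for df, pf in zip(d, p):
                df.mul_(pf.norm() / (df.norm() + 1e-10))
        direction.append(d)
    return direction

def loss_along_direction(model, direction, loss_fn, data, targets, alphas):
    """Evaluate the loss at theta + alpha * d for each alpha, restoring weights after."""
    original = [p.detach().clone() for p in model.parameters()]
    losses = []
    with torch.no_grad():
        for alpha in alphas:
            for p, p0, d in zip(model.parameters(), original, direction):
                p.copy_(p0 + alpha * d)
            losses.append(loss_fn(model(data), targets).item())
        for p, p0 in zip(model.parameters(), original):  # restore weights
            p.copy_(p0)
    return losses
```

The adversarial augmentation under study can likewise be sketched with a single-step attack; FGSM (Goodfellow et al., 2015) is used here as a stand-in, since the abstract does not specify the attack or its hyperparameters. The epsilon value and the [0, 1] clamp range are assumptions for image inputs.

```python
def fgsm_augment(model, loss_fn, data, targets, epsilon=8 / 255):
    """Generate FGSM-perturbed inputs: step along the sign of the input gradient."""
    data = data.clone().requires_grad_(True)
    loss = loss_fn(model(data), targets)
    grad, = torch.autograd.grad(loss, data)
    return (data + epsilon * grad.sign()).clamp(0, 1).detach()
```

Comparing loss_along_direction for a standard and an adversarially trained model over the same alphas is the kind of qualitative flatness comparison the abstract describes.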

Related research

- Revisiting Loss Landscape for Adversarial Robustness (04/13/2020): The study on improving the robustness of deep neural networks against ad...
- How Does Mixup Help With Robustness and Generalization? (10/09/2020): Mixup is a popular data augmentation technique based on taking convex co...
- Understanding Overfitting in Adversarial Training via Kernel Regression (04/13/2023): Adversarial training and data augmentation with noise are widely adopted...
- Robustness via curvature regularization, and vice versa (11/23/2018): State-of-the-art classifiers have been shown to be largely vulnerable to...
- Interpreting Adversarial Robustness: A View from Decision Surface in Input Space (09/29/2018): One popular hypothesis of neural network generalization is that the flat...
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them (06/15/2020): We analyze the influence of adversarial training on the loss landscape o...
- A general framework for defining and optimizing robustness (06/19/2020): Robustness of neural networks has recently attracted a great amount of i...
