Boundary thickness and robustness in learning models

07/09/2020
by Yaoqing Yang et al.

Robustness of machine learning models to various adversarial and non-adversarial corruptions continues to be of interest. In this paper, we introduce the notion of the boundary thickness of a classifier, and we describe its connection with and usefulness for model robustness. Thick decision boundaries lead to improved performance, while thin decision boundaries lead to overfitting (e.g., as measured by the robust generalization gap between training and testing) and lower robustness. We show that a thicker boundary helps improve robustness against adversarial examples (e.g., improving the robust test accuracy of adversarial training) as well as so-called out-of-distribution (OOD) transforms, and we show that many commonly used regularization and data augmentation procedures can increase boundary thickness. On the theoretical side, we establish that maximizing boundary thickness during training is akin to so-called mixup training. Using these observations, we show that noise augmentation on mixup training further increases boundary thickness, thereby combating vulnerability to various forms of adversarial attacks and OOD transforms. We also show that the performance improvement in several lines of recent work happens in conjunction with a thicker boundary.
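To make the two central ideas concrete, here is a minimal NumPy sketch of (a) a segment-based boundary-thickness estimate — interpolating between two points and measuring how much of the segment lies in a low-confidence band — and (b) the standard mixup augmentation that the paper relates to thickness maximization. The function names, signatures, and default band `(alpha, beta)` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def boundary_thickness(predict_proba, x0, x1, alpha=0.0, beta=0.75, steps=128):
    """Estimate boundary thickness along the segment x0 -> x1.

    predict_proba: callable mapping an (n, d) array to (n, k) class probabilities.
    Measures the fraction of interpolated points whose top-two probability gap
    falls in (alpha, beta), scaled by the segment length. (Illustrative sketch;
    the paper defines thickness via a confidence-gap function along such segments.)
    """
    ts = np.linspace(0.0, 1.0, steps)
    pts = (1.0 - ts)[:, None] * x0[None, :] + ts[:, None] * x1[None, :]
    p = predict_proba(pts)                       # (steps, k)
    top2 = np.sort(p, axis=1)[:, -2:]            # two largest probabilities
    gap = top2[:, 1] - top2[:, 0]                # confidence gap at each point
    frac = np.mean((gap > alpha) & (gap < beta)) # fraction inside the band
    return np.linalg.norm(x1 - x0) * frac

def mixup_batch(x, y, mix_alpha=1.0, rng=None):
    """Standard mixup (Zhang et al.): convex combinations of a batch
    with a randomly permuted copy of itself. y should be one-hot."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(mix_alpha, mix_alpha)         # one mixing coefficient per batch
    perm = rng.permutation(len(x))               # random partner for each example
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```

For a linear classifier, the thickness estimate is largest when the segment crosses the boundary through a wide low-confidence region; mixup's interpolated training points push the model toward exactly such regions.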


