Boundary thickness and robustness in learning models

by Yaoqing Yang et al.

Robustness of machine learning models to various adversarial and non-adversarial corruptions continues to be of interest. In this paper, we introduce the notion of the boundary thickness of a classifier, and we describe its connection with and usefulness for model robustness. Thick decision boundaries lead to improved performance, while thin decision boundaries lead to overfitting (e.g., measured by the robust generalization gap between training and testing) and lower robustness. We show that a thicker boundary helps improve robustness against adversarial examples (e.g., improving the robust test accuracy of adversarial training) as well as so-called out-of-distribution (OOD) transforms, and we show that many commonly used regularization and data augmentation procedures can increase boundary thickness. On the theoretical side, we establish that maximizing boundary thickness during training is akin to so-called mixup training. Using these observations, we show that noise augmentation applied to mixup training further increases boundary thickness, thereby combating vulnerability to various forms of adversarial attacks and OOD transforms. We also show that the performance improvement in several lines of recent work happens in conjunction with a thicker boundary.
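Since the abstract connects boundary thickness to mixup training, a minimal sketch of the standard mixup augmentation (convex combinations of input pairs and their one-hot labels) may help make the connection concrete. This is an illustration of generic mixup, not the paper's exact training procedure; the Beta parameter `alpha` and the within-batch pairing are common conventions and are assumptions here.

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=None):
    """Return a mixup-augmented batch.

    x: inputs of shape (batch, ...); y: one-hot labels of shape (batch, classes).
    alpha: Beta-distribution parameter controlling the mixing strength
    (a common default; the paper's hyperparameters may differ).
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))          # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[perm]   # convex combination of inputs
    y_mix = lam * y + (1 - lam) * y[perm]   # matching combination of labels
    return x_mix, y_mix
```

Training on such interpolated points encourages the classifier to vary gradually along segments between examples, which is the sense in which mixup tends to thicken the decision boundary.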

