Boundary thickness and robustness in learning models

07/09/2020
by Yaoqing Yang, et al.

Robustness of machine learning models to various adversarial and non-adversarial corruptions continues to be of interest. In this paper, we introduce the notion of the boundary thickness of a classifier, and we describe its connection with and usefulness for model robustness. Thick decision boundaries lead to improved performance, while thin decision boundaries lead to overfitting (e.g., measured by the robust generalization gap between training and testing) and lower robustness. We show that a thicker boundary helps improve robustness against adversarial examples (e.g., improving the robust test accuracy of adversarial training) as well as so-called out-of-distribution (OOD) transforms, and we show that many commonly-used regularization and data augmentation procedures can increase boundary thickness. On the theoretical side, we establish that maximizing boundary thickness during training is akin to the so-called mixup training. Using these observations, we show that noise-augmentation on mixup training further increases boundary thickness, thereby combating vulnerability to various forms of adversarial attacks and OOD transforms. We can also show that the performance improvement in several lines of recent work happens in conjunction with a thicker boundary.
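The page does not include code, but as a rough illustration of the noise-augmented mixup training the abstract refers to, here is a minimal sketch assuming a standard PyTorch classifier trained with cross-entropy (the helper names `mixup_with_noise` and `mixup_loss` are hypothetical, not taken from the paper):

```python
# Minimal sketch of mixup training with added input noise (hypothetical helpers,
# not the authors' released code). Assumes a standard PyTorch classifier.
import torch
import torch.nn.functional as F


def mixup_with_noise(x, y, alpha=1.0, noise_std=0.1):
    """Form mixup pairs within a batch and add Gaussian noise to the mixed inputs."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[perm]
    x_mix = x_mix + noise_std * torch.randn_like(x_mix)  # noise augmentation on mixup
    return x_mix, y, y[perm], lam


def mixup_loss(logits, y_a, y_b, lam):
    """Convex combination of the two cross-entropy terms, as in standard mixup."""
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)


def train_step(model, optimizer, x, y):
    """One training step on a batch (x, y) using noise-augmented mixup."""
    model.train()
    x_mix, y_a, y_b, lam = mixup_with_noise(x, y)
    optimizer.zero_grad()
    loss = mixup_loss(model(x_mix), y_a, y_b, lam)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The interpolation between training points is what the paper connects to boundary thickness; the extra Gaussian noise is the augmentation the abstract describes as increasing thickness further.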


Related research

02/28/2019  Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN
Deep neural networks have been widely deployed in various machine learni...

02/19/2023  Stationary Point Losses for Robust Model
The inability to guarantee robustness is one of the major obstacles to t...

10/09/2020  How Does Mixup Help With Robustness and Generalization?
Mixup is a popular data augmentation technique based on taking convex co...

03/01/2021  Explaining Adversarial Vulnerability with a Data Sparsity Hypothesis
Despite many proposed algorithms to provide robustness to deep learning ...

02/06/2023  Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness
The robustness of a deep classifier can be characterized by its margins:...

01/15/2021  Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds
In the present work we study classifiers' decision boundaries via Browni...

06/15/2023  Exact Count of Boundary Pieces of ReLU Classifiers: Towards the Proper Complexity Measure for Classification
Classic learning theory suggests that proper regularization is the key t...
