Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness

02/06/2023
by   Yuancheng Xu, et al.

The robustness of a deep classifier can be characterized by its margins: the distances from the decision boundary to the natural data points. However, it is unclear whether existing robust training methods effectively increase the margin of each vulnerable point during training. To answer this, we propose a continuous-time framework for quantifying the relative speed of the decision boundary with respect to each individual point. By visualizing how fast the decision boundary moves under Adversarial Training, one of the most effective robust training algorithms, we reveal a surprising behavior: the decision boundary moves away from some vulnerable points but simultaneously moves closer to others, decreasing their margins. To alleviate these conflicting dynamics, we propose Dynamics-aware Robust Training (DyART), which encourages boundary movement that prioritizes increasing the smaller margins. In contrast to prior work, DyART operates directly on the margins rather than on indirect approximations, allowing for more targeted and effective robustness improvement. Experiments on the CIFAR-10 and Tiny-ImageNet datasets verify that DyART alleviates the conflicting dynamics of the decision boundary and achieves improved robustness under various perturbation sizes compared to state-of-the-art defenses. Our code is available at https://github.com/Yuancheng-Xu/Dynamics-Aware-Robust-Training.
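The core idea of prioritizing smaller margins can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows one plausible way to weight per-example margins so that an objective built from them increases the smallest margins fastest. The function names, the softmax-style weighting, and the `temperature` parameter are all assumptions for illustration.

```python
import numpy as np

def margin_weights(margins, temperature=0.1):
    # Smaller margins receive larger weights (softmax over negative margins),
    # so the most vulnerable points dominate the objective -- one way to
    # realize the "prioritize increasing smaller margins" idea.
    scaled = -np.asarray(margins, dtype=float) / temperature
    scaled -= scaled.max()  # numerical stability before exponentiation
    w = np.exp(scaled)
    return w / w.sum()

def margin_prioritized_objective(margins, temperature=0.1):
    # Weighted sum of margins: gradients of this objective with respect to
    # the margins are largest for the smallest margins, steering the decision
    # boundary away from the points it would otherwise drift toward.
    w = margin_weights(margins, temperature)
    return float(np.dot(w, margins))
```

For example, with margins `[0.1, 0.5, 1.0]` the first point receives by far the largest weight, so increasing its margin improves the objective most.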


Related research:

- Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study (02/05/2020)
- Learnable Boundary Guided Adversarial Training (11/23/2020)
- Boundary thickness and robustness in learning models (07/09/2020)
- CGBA: Curvature-aware Geometric Black-box Attack (08/06/2023)
- Understanding Catastrophic Overfitting in Adversarial Training (05/06/2021)
- Multiphysics discovery with moving boundaries using Ensemble SINDy and Peridynamic Differential Operator (03/27/2023)
- BulletTrain: Accelerating Robust Neural Network Training via Boundary Example Mining (09/29/2021)
