Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study

02/05/2020
by David Mickisch, et al.

Despite achieving remarkable performance on many image classification tasks, state-of-the-art machine learning (ML) classifiers remain vulnerable to small input perturbations. In particular, the existence of adversarial examples raises concerns about the deployment of ML models in safety- and security-critical environments, such as autonomous driving and disease detection. Over the last few years, numerous defense methods have been published with the goal of improving adversarial as well as corruption robustness. However, the proposed measures have succeeded only to a very limited extent. This limited progress is partly due to the lack of understanding of the decision boundary and decision regions of deep neural networks. Therefore, we study the minimum distance of data points to the decision boundary and how this margin evolves over the training of a deep neural network. By conducting experiments on MNIST, FASHION-MNIST, and CIFAR-10, we observe that the decision boundary moves closer to natural images over training. This phenomenon persists even in the late epochs of training, when the classifier has already reached low training and test error rates. Adversarial training, on the other hand, appears to have the potential to prevent this undesired convergence of the decision boundary.
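
The margin the abstract refers to is the minimal-perturbation distance from an input to the decision boundary, which in practice is estimated with an adversarial attack. Below is a minimal PyTorch sketch of one common estimator, a DeepFool-style iterative linearization; the toy model, the choice of attack, and all hyperparameters here are illustrative assumptions, not the authors' exact setup.

```python
import torch

def boundary_distance(model, x, num_classes=10, max_iter=50, overshoot=0.02):
    """Estimate the minimal L2 distance from input x to the decision
    boundary via DeepFool-style iterative linearization (illustrative
    sketch; not necessarily the attack used in the paper)."""
    x = x.detach()
    orig_label = model(x.unsqueeze(0)).argmax(dim=1).item()
    r_total = torch.zeros_like(x)            # accumulated perturbation
    x_pert = x.clone().requires_grad_(True)

    for _ in range(max_iter):
        logits = model(x_pert.unsqueeze(0))[0]
        if logits.argmax().item() != orig_label:
            break                            # boundary crossed: stop
        # gradient of each class logit w.r.t. the (perturbed) input
        grads = [torch.autograd.grad(logits[k], x_pert, retain_graph=True)[0]
                 for k in range(num_classes)]
        # pick the class whose linearized boundary is closest
        best = None
        for k in range(num_classes):
            if k == orig_label:
                continue
            w = grads[k] - grads[orig_label]
            f = (logits[k] - logits[orig_label]).item()
            dist = abs(f) / (w.norm() + 1e-12)
            if best is None or dist < best[0]:
                best = (dist, (abs(f) / (w.norm() ** 2 + 1e-12)) * w)
        r_total = r_total + best[1]
        # small overshoot pushes the point just across the boundary
        x_pert = (x + (1 + overshoot) * r_total).detach().requires_grad_(True)

    return r_total.norm().item()             # L2 margin estimate

# usage on a toy (untrained) MNIST-shaped classifier
net = torch.nn.Sequential(torch.nn.Flatten(1), torch.nn.Linear(28 * 28, 10))
print(boundary_distance(net, torch.rand(1, 28, 28)))
```

Averaging such an estimate over a sample of images after each epoch would yield margin-versus-epoch curves of the kind the study describes.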

Related research

06/08/2022 · Latent Boundary-guided Adversarial Training
Deep Neural Networks (DNNs) have recently achieved great success in many...

02/06/2023 · Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness
The robustness of a deep classifier can be characterized by its margins:...

05/26/2017 · Classification regions of deep neural networks
The goal of this paper is to analyze the geometric properties of deep ne...

08/30/2022 · Robustness and invariance properties of image classifiers
Deep neural networks have achieved impressive results in many image clas...

10/17/2018 · Provable Robustness of ReLU networks via Maximization of Linear Regions
It has been shown that neural network classifiers are not robust. This r...

07/10/2018 · Fooling the classifier: Ligand antagonism and adversarial examples
Machine learning algorithms are sensitive to so-called adversarial pertu...

01/08/2018 · Boundary Optimizing Network (BON)
Despite all the success that deep neural networks have seen in classifyi...
