Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples

01/24/2019
by Kamil Nar, et al.

State-of-the-art neural networks are vulnerable to adversarial examples: they can easily misclassify inputs that are imperceptibly different from their training and test data. In this work, we establish that the cross-entropy loss function and the low-rank features of the training data are responsible for the existence of these inputs. Based on this observation, we suggest that addressing adversarial examples requires rethinking the use of the cross-entropy loss function and looking for an alternative that is better suited to minimization with low-rank features. In this direction, we present a training scheme called differential training, which uses a loss function defined on the differences between the features of points from opposite classes. We show that differential training can ensure a large margin between the decision boundary of the neural network and the points in the training dataset. This larger margin increases the amount of perturbation needed to flip the classifier's prediction and makes it harder to find an adversarial example with a small perturbation. We test differential training on a binary classification task with the CIFAR-10 dataset and demonstrate that it radically reduces the fraction of images for which an adversarial example can be found, not only in the training dataset but in the test dataset as well.
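The abstract only names the idea; as a concrete illustration, here is a minimal PyTorch sketch of what a loss defined on the differences between the features of opposite-class points could look like. Everything below (the feature_net architecture, the diff_loss name, the softplus form of the pairwise loss, and the batch shapes) is an assumption chosen for illustration, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

# Illustrative feature extractor phi(x) for CIFAR-10-sized inputs
# (the actual architecture used in the paper may differ).
feature_net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
)
w = nn.Linear(64, 1, bias=False)  # direction of the final linear classifier

def diff_loss(x_pos, x_neg):
    """Pairwise loss on feature differences between opposite classes.

    For every pair (i, j) with x_pos[i] from class +1 and x_neg[j] from
    class -1, penalize small values of w^T (phi(x_pos[i]) - phi(x_neg[j])),
    which pushes the two classes apart and encourages a margin around the
    decision boundary.
    """
    f_pos = feature_net(x_pos)                       # (P, d)
    f_neg = feature_net(x_neg)                       # (N, d)
    diffs = f_pos.unsqueeze(1) - f_neg.unsqueeze(0)  # (P, N, d): all pairs
    scores = w(diffs).squeeze(-1)                    # (P, N)
    return nn.functional.softplus(-scores).mean()    # log(1 + exp(-s))

# Usage: sample one batch per class and take a gradient step.
params = list(feature_net.parameters()) + list(w.parameters())
opt = torch.optim.SGD(params, lr=1e-2)

x_pos = torch.randn(8, 3, 32, 32)  # stand-in for class-1 images
x_neg = torch.randn(8, 3, 32, 32)  # stand-in for class-0 images
loss = diff_loss(x_pos, x_neg)
loss.backward()
opt.step()
```

Note that a pairwise loss of this form constrains only the direction of the final linear classifier; the bias, and hence the exact placement of the decision boundary between the two classes, would have to be chosen afterwards, for example so as to maximize the margin to the training points.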


