Deep-RBF Networks Revisited: Robust Classification with Rejection

12/07/2018
by Pourya Habib Zadeh, et al.

One of the main drawbacks of deep neural networks, like many other classifiers, is their vulnerability to adversarial attacks. An important reason for this vulnerability is that they assign high confidence to regions containing few or even no feature points. By feature points, we mean a nonlinear transformation of the input space that extracts a meaningful representation of the input data. Deep-RBF networks, by contrast, assign high confidence only to regions containing enough feature points, but they have been discounted due to the widely held belief that they suffer from the vanishing gradient problem. In this paper, we revisit deep-RBF networks by first giving a general formulation for them and then proposing a family of cost functions inspired by metric learning. With the proposed deep-RBF learning algorithm, the vanishing gradient problem does not occur. We make these networks robust to adversarial attacks by adding a reject option to their output layer. Through several experiments on the MNIST dataset, we demonstrate that our proposed method not only achieves high classification accuracy but is also highly resistant to various adversarial attacks.
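To make the idea concrete, here is a minimal sketch of a deep-RBF classifier with a reject option, written in PyTorch. The feature extractor, prototype parameterization, Euclidean distance, and the fixed `reject_threshold` are illustrative assumptions; the paper's general formulation and its metric-learning cost functions are not reproduced here. The key behavior shown is that an input is classified by its nearest class prototype in feature space and rejected when no prototype is close enough.

```python
import torch
import torch.nn as nn


class DeepRBFClassifier(nn.Module):
    """Sketch of a deep-RBF output layer: distances to learned class prototypes."""

    def __init__(self, num_classes: int, feature_dim: int = 64, reject_threshold: float = 5.0):
        super().__init__()
        # Nonlinear transformation producing "feature points" (small MLP for MNIST-sized inputs).
        self.features = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, feature_dim), nn.ReLU(),
        )
        # One prototype (RBF center) per class in feature space.
        self.centers = nn.Parameter(torch.randn(num_classes, feature_dim))
        self.reject_threshold = reject_threshold  # assumed fixed threshold for illustration

    def distances(self, x: torch.Tensor) -> torch.Tensor:
        # Distance from each feature point to each class prototype: shape (batch, num_classes).
        z = self.features(x)
        return torch.cdist(z, self.centers)

    def predict_with_reject(self, x: torch.Tensor) -> torch.Tensor:
        # Classify by nearest prototype; return -1 (reject) when even the nearest
        # prototype is too far, i.e. the input lies in a region with few feature points.
        d = self.distances(x)
        min_d, labels = d.min(dim=1)
        labels[min_d > self.reject_threshold] = -1
        return labels


if __name__ == "__main__":
    model = DeepRBFClassifier(num_classes=10)
    batch = torch.randn(4, 1, 28, 28)  # dummy MNIST-shaped inputs
    print(model.predict_with_reject(batch))
```

Because confidence is tied to distance from learned prototypes rather than to a softmax over unbounded logits, inputs far from all prototypes (as adversarial examples often are) trigger rejection instead of a confident misclassification.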


