Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks

11/08/2021
by   Lijia Yu, et al.

In this paper, the bias classifier is introduced: the bias part of a DNN with ReLU as the activation function is used as a classifier. The work is motivated by the fact that the bias part is a piecewise constant function with zero gradient, and hence cannot be directly attacked by gradient-based methods such as FGSM to generate adversarial examples. The existence of the bias classifier is proved, and an effective training method for it is proposed. It is further proved that, by adding a suitable random first-degree part to the bias classifier, one obtains a classifier that is information-theoretically safe against the original-model gradient-based attack, in the sense that the attack produces a completely random direction for generating adversarial examples. This appears to be the first time the concept of an information-theoretically safe classifier has been proposed. Several attack methods against the bias classifier are also proposed, and numerical experiments show that the bias classifier is more robust than DNNs against these attacks in most cases.
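The zero-gradient property can be illustrated with a toy example. A minimal sketch, assuming a one-hidden-layer ReLU network with randomly chosen illustrative weights (not the paper's construction): locally, f(x) = W(x)x + B(x), where the bias part B(x) depends only on the activation pattern and is therefore piecewise constant, so its input gradient vanishes almost everywhere and FGSM-style attacks get no signal from it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network; weights are illustrative only.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def bias_part(x):
    """Bias part B(x) of the local affine map f(x) = W(x) x + B(x).

    The ReLU activation pattern D(x) is locally constant, so
    B(x) = W2 @ (D(x) * b1) + b2 is piecewise constant in x.
    """
    D = (W1 @ x + b1 > 0).astype(float)  # activation pattern
    return W2 @ (D * b1) + b2

x = rng.normal(size=4)
eps = 1e-6
# Finite-difference "gradient" of the bias part w.r.t. the input:
# zero for almost every x, since a tiny step leaves D(x) unchanged.
grad = np.array([(bias_part(x + eps * e) - bias_part(x)) / eps
                 for e in np.eye(4)])
print(np.allclose(grad, 0.0))
```

Away from the measure-zero activation boundaries, the printed value is `True`, which is the sense in which gradient-based attacks on the original model cannot target the bias part directly.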


Related research

- Random Directional Attack for Fooling Deep Neural Networks (08/06/2019)
- Adversarial Attack Based on Prediction-Correction (06/02/2023)
- Resisting Adversarial Attacks by k-Winners-Take-All (05/25/2019)
- Fast Gradient Non-sign Methods (10/25/2021)
- Attacking Adversarial Defences by Smoothing the Loss Landscape (08/01/2022)
- Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias (03/01/2023)
