Defending Against Adversarial Attacks Using Random Forests

06/16/2019
by Yifan Ding, et al.

As deep neural networks (DNNs) have become increasingly important and popular, the robustness of DNNs is key to the safety of both the Internet and the physical world. Unfortunately, recent studies show that adversarial examples, which are hard to distinguish from real examples, can easily fool DNNs and manipulate their predictions. Observing that adversarial examples are mostly generated by gradient-based methods, in this paper we propose a simple yet very effective non-differentiable hybrid model that combines DNNs and random forests, rather than merely hiding gradients from attackers, to defend against such attacks. Our experiments show that our model can successfully and completely defend against white-box attacks, has low transferability, and is quite resistant to three representative types of black-box attacks, while at the same time achieving classification accuracy similar to that of the original DNNs. Finally, we investigate and suggest a criterion for deciding where in a DNN to grow the random forests.
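To make the hybrid design concrete, here is a minimal sketch (not the authors' implementation): a pretrained CNN is truncated at its penultimate layer and used as a fixed feature extractor, and a random forest replaces the differentiable softmax head, so no end-to-end gradient exists for an attacker to follow. The backbone choice (ResNet-18), the cut point, and the forest hyperparameters are illustrative assumptions; the paper's criterion for where to grow the forest is not reproduced here.

    # Hybrid DNN + random forest sketch (illustrative assumptions throughout).
    import torch
    import torchvision.models as models
    from sklearn.ensemble import RandomForestClassifier

    # Pretrained CNN with its final classifier removed; the remaining
    # stack serves as a fixed, frozen feature extractor.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # drop the differentiable head
    backbone.eval()

    @torch.no_grad()
    def extract_features(images):
        """Map a batch of images (N, 3, H, W) to penultimate-layer features."""
        return backbone(images).cpu().numpy()

    # train_images / train_labels are placeholders for the user's own data.
    # The forest is non-differentiable, so gradient-based attacks cannot
    # backpropagate through the classification decision.
    # forest = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    # forest.fit(extract_features(train_images), train_labels)
    # preds = forest.predict(extract_features(test_images))

In this sketch the choice of cut point matters: cutting too early hands the forest low-level features and hurts accuracy, while cutting at the penultimate layer keeps accuracy close to the original DNN, which is the trade-off the paper's growth criterion addresses.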


