DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples

02/22/2017
by Ji Gao, et al.

Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial samples: maliciously perturbed inputs crafted to yield incorrect model outputs. Such attacks can severely undermine DNN systems, particularly in security-sensitive settings. It has been observed that an adversary can easily generate adversarial samples by applying small perturbations along irrelevant feature dimensions that are unnecessary for the current classification task. To overcome this problem, we introduce a defensive mechanism called DeepCloak. By identifying and removing unnecessary features in a DNN model, DeepCloak limits the capacity an attacker can use to generate adversarial samples and therefore increases the model's robustness against such inputs. Compared with other defensive approaches, DeepCloak is easy to implement and computationally efficient. Experimental results show that DeepCloak can improve the performance of state-of-the-art DNN models against adversarial samples.
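The abstract's core idea, removing feature dimensions that an attacker can exploit but that contribute little to the classification task, can be illustrated with a masking layer placed before a network's classifier. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: ranking features by how much their activations shift between clean inputs and adversarially perturbed versions is one plausible way to operationalize "unnecessary features," and the names MaskLayer, fit_mask, the attack callable, and the 2% masking ratio are all hypothetical.

```python
# Minimal sketch of a DeepCloak-style feature mask (illustrative, not the
# authors' code). Assumes a PyTorch model split into a `features` module
# producing flat (batch, dim) vectors and a `classifier` head, plus some
# adversarial-sample generator `attack(x, y)` (e.g. FGSM); all names here
# are assumptions for the sketch.
import torch
import torch.nn as nn

class MaskLayer(nn.Module):
    """Element-wise 0/1 mask applied to the penultimate feature vector."""
    def __init__(self, dim):
        super().__init__()
        self.register_buffer("mask", torch.ones(dim))

    def forward(self, x):
        return x * self.mask

def fit_mask(mask_layer, features, clean_loader, attack, ratio=0.02):
    """Zero out the `ratio` fraction of feature dimensions whose activations
    shift most between clean inputs and their adversarial counterparts."""
    diff = torch.zeros_like(mask_layer.mask)
    for x, y in clean_loader:
        x_adv = attack(x, y)  # hypothetical attack callable
        with torch.no_grad():
            # Accumulate per-dimension activation differences over the batch.
            diff += (features(x) - features(x_adv)).abs().sum(dim=0)
    k = int(ratio * diff.numel())
    _, idx = torch.topk(diff, k)  # most attack-sensitive dimensions
    mask_layer.mask[idx] = 0.0
```

At inference the masked model would run as classifier(mask_layer(features(x))). The intuition is that dimensions carrying little class-relevant signal can be zeroed with only a small cost in clean accuracy, while the attacker loses exactly those dimensions as directions along which to hide a perturbation.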

Related research

05/14/2018 · Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing
Recently, it has been shown that deep neural networks (DNN) are subject ...

11/14/2015 · Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Deep learning algorithms have been shown to perform extremely well on ma...

08/30/2023 · MDTD: A Multi Domain Trojan Detector for Deep Neural Networks
Machine learning models that use deep neural networks (DNNs) are vulnera...

04/14/2022 · Q-TART: Quickly Training for Adversarial Robustness and in-Transferability
Raw deep neural network (DNN) performance is not enough; in real-world s...

01/21/2022 · The Security of Deep Learning Defences for Medical Imaging
Deep learning has shown great promise in the domain of medical image ana...

11/18/2018 · The Taboo Trap: Behavioural Detection of Adversarial Samples
Deep Neural Networks (DNNs) have become a powerful tool for a wide range...

12/13/2019 · Potential adversarial samples for white-box attacks
Deep convolutional neural networks can be highly vulnerable to small per...
