Potential adversarial samples for white-box attacks

12/13/2019
by Amir Nazemi, et al.

Deep convolutional neural networks can be highly vulnerable to small perturbations of their inputs, a potentially major limitation on system robustness when deep networks are used as classifiers. In this paper we propose a low-cost method to explore marginal sample data near trained classifier decision boundaries, thus identifying potential adversarial samples. By finding such adversarial samples, it is possible to reduce the search space of adversarial attack algorithms while maintaining a reasonable successful perturbation rate. In our developed strategy, the potential adversarial samples account for 61% of the adversarial samples produced by iFGSM and 92% of those successfully perturbed by DeepFool on CIFAR10.
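The sketch below illustrates the general idea described in the abstract, not the authors' exact method: samples lying close to a trained classifier's decision boundary are cheap to identify (here approximated by a small top-1 vs. top-2 probability margin), and an attack such as iFGSM can then be restricted to those candidates. The function names (`margin_screen`, `ifgsm`), the `fraction` parameter, and the toy model are illustrative assumptions.

```python
# Hedged sketch: screen for low-margin ("potential adversarial") samples,
# then run iterative FGSM only on those candidates.
import torch
import torch.nn as nn
import torch.nn.functional as F


def margin_screen(model, images, fraction=0.5):
    """Return indices of the `fraction` of samples with the smallest
    top-1 minus top-2 probability gap, i.e. closest to the boundary."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(images), dim=1)
        top2 = probs.topk(2, dim=1).values          # shape (N, 2)
        margins = top2[:, 0] - top2[:, 1]           # small margin = near boundary
    k = max(1, int(fraction * len(margins)))
    return margins.argsort()[:k]                    # candidate indices


def ifgsm(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard iterative FGSM: repeated signed-gradient steps,
    projected back into an L-infinity ball of radius eps."""
    x_adv = images.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, images - eps), images + eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()


if __name__ == "__main__":
    # Toy stand-in for a trained CIFAR-10 classifier and a test batch.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    images = torch.rand(64, 3, 32, 32)
    labels = torch.randint(0, 10, (64,))

    candidates = margin_screen(model, images, fraction=0.5)
    x_adv = ifgsm(model, images[candidates], labels[candidates])

    with torch.no_grad():
        flipped = (model(x_adv).argmax(1) != labels[candidates]).float().mean()
    print(f"attacked {len(candidates)} candidates, success rate {flipped:.2%}")
```

Screening by margin is only one plausible proxy for boundary proximity; the attack then touches a fraction of the data rather than the full test set, which is the search-space reduction the abstract refers to.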


Related research

02/09/2021  Target Training Does Adversarial Training Without Adversarial Samples
  Neural network classifiers are vulnerable to misclassification of advers...

04/01/2019  Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks
  Deep neural networks are vulnerable to adversarial attacks, which can fo...

06/10/2020  Adversarial Attacks on Brain-Inspired Hyperdimensional Computing-Based Classifiers
  Being an emerging class of in-memory computing architecture, brain-inspi...

02/22/2017  DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples
  Recent studies have shown that deep neural networks (DNN) are vulnerable...

06/09/2020  Towards an Intrinsic Definition of Robustness for a Classifier
  The robustness of classifiers has become a question of paramount importa...

12/06/2018  Towards Leveraging the Information of Gradients in Optimization-based Adversarial Attack
  In recent years, deep neural networks demonstrated state-of-the-art perf...

02/15/2020  Hold me tight! Influence of discriminative features on deep network boundaries
  Important insights towards the explainability of neural networks and the...
