AdvJND: Generating Adversarial Examples with Just Noticeable Difference

02/01/2020
by   Zifei Zhang, et al.

Deep neural networks outperform traditional machine learning models, especially on image classification tasks, yet they are vulnerable to adversarial examples: adding small perturbations to an input can cause a well-performing model to misclassify it, even though the perturbed image is indistinguishable from the original to the human eye. Generating adversarial examples involves two competing requirements: a high attack success rate and high image fidelity. Perturbations are usually enlarged to ensure a high attack success rate, but the resulting adversarial examples are poorly concealed. To ease this tradeoff, we propose AdvJND, a method that incorporates visual-model coefficients, namely just noticeable difference (JND) coefficients, into the distortion-function constraint used when generating adversarial examples. In effect, the subjective perception of the human eye is added as prior information that determines the distribution of the perturbations, improving the image quality of the adversarial examples. We tested our method on the FashionMNIST, CIFAR10, and MiniImageNet datasets. Adversarial examples generated by our AdvJND algorithm yield gradient distributions similar to those of the original inputs, so the crafted noise can be hidden in the original inputs, significantly improving attack concealment.
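The idea of weighting an attack's perturbation by a JND map can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an FGSM-style sign step and uses the classical Chou-Li luminance-adaptation JND threshold, whereas the paper's JND model and attack loop may be more elaborate. The helper names (`box_mean`, `chou_li_jnd`, `advjnd_step`) are hypothetical.

```python
import numpy as np

def box_mean(image, k):
    # Simple k x k mean filter (edge-padded) to estimate the local
    # background luminance around each pixel.
    pad = k // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def chou_li_jnd(image):
    # Classical luminance-adaptation JND threshold (Chou & Li, 1995):
    # dark and bright regions tolerate larger invisible changes than
    # mid-gray regions. The paper's JND model may add further terms
    # such as texture masking.
    bg = box_mean(image, 5)
    low = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0
    high = (3.0 / 128.0) * (bg - 127.0) + 3.0
    return np.where(bg <= 127, low, high)

def advjnd_step(image, grad, eps=0.03):
    # FGSM-style sign step modulated by the normalized JND map:
    # pixels that can absorb a larger imperceptible change receive
    # a larger share of the perturbation budget.
    jnd = chou_li_jnd(image)
    jnd = jnd / jnd.max()                        # normalize to [0, 1]
    perturb = eps * 255.0 * jnd * np.sign(grad)  # JND-weighted step
    return np.clip(image + perturb, 0.0, 255.0)

# Toy usage with a random "gradient" standing in for a real model's
# loss gradient with respect to the input image.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 255.0, size=(8, 8))
adv = advjnd_step(img, rng.standard_normal((8, 8)))
```

Because the JND map rescales the step per pixel, the perturbation stays inside the same L-infinity budget as plain FGSM while concentrating the noise where the eye is least likely to notice it.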


