
Fast-UAP: Algorithm for Speeding up Universal Adversarial Perturbation Generation with Orientation of Perturbation Vectors

by Jiazhu Dai, et al.

Convolutional neural networks (CNNs) have become one of the most popular machine learning tools and are applied to a wide variety of tasks. However, CNN models are vulnerable to universal adversarial perturbations, which are usually imperceptible to humans yet cause natural images to be misclassified with high probability. One of the state-of-the-art algorithms for generating universal perturbations is known as UAP. UAP aggregates only the minimal perturbation at every iteration, so the magnitude of the generated universal perturbation cannot grow efficiently, which makes generation slow. In this paper, we propose an optimized algorithm that improves the efficiency of crafting universal perturbations by exploiting the orientation of perturbation vectors. At each iteration, instead of choosing the minimal perturbation vector with respect to each image, we aggregate the current instance of the universal perturbation with the perturbation whose orientation is most similar to it, so that the magnitude of the aggregation rises as much as possible at every iteration. The experimental results show that we obtain universal perturbations in a shorter time and with fewer training images. Furthermore, we observe in experiments that universal perturbations generated by our proposed algorithm achieve an average increase in fooling rate of 8% in white-box attacks and black-box attacks compared with universal perturbations generated by UAP.
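The selection rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the per-image candidate perturbations (which in practice would come from an attack such as DeepFool) are already available, stands them in with random vectors, and selects the candidate by cosine similarity with the current universal perturbation before projecting back into the allowed perturbation ball.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two flattened perturbation vectors.
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na == 0 or nb == 0:
        return 0.0
    return float(np.dot(a, b) / (na * nb))

def pick_aligned_perturbation(v, candidates):
    # Among candidate per-image perturbations, pick the one whose
    # orientation is closest to the current universal perturbation v,
    # so that ||v + delta|| grows as much as possible when aggregated.
    return max(candidates, key=lambda d: cosine_similarity(v, d))

def project_linf(v, eps):
    # Keep the universal perturbation inside the L-infinity ball of radius eps.
    return np.clip(v, -eps, eps)

# Toy demo: random vectors stand in for the per-image perturbations
# a real attack would compute for each misclassification candidate.
rng = np.random.default_rng(0)
v = rng.normal(size=100) * 0.01              # current universal perturbation
candidates = [rng.normal(size=100) * 0.01 for _ in range(5)]

delta = pick_aligned_perturbation(v, candidates)
v_new = project_linf(v + delta, eps=0.04)

# The chosen delta is at least as aligned with v as every other candidate.
assert all(cosine_similarity(v, delta) >= cosine_similarity(v, d)
           for d in candidates)
```

Compared with always taking the minimal perturbation, favoring the aligned candidate avoids aggregating vectors that partially cancel the current universal perturbation, which is the intuition behind the faster magnitude growth reported in the abstract.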




Universal adversarial perturbations

Given a state-of-the-art deep neural network classifier, we show the exi...

Frequency-Tuned Universal Adversarial Attacks

Researchers have shown that the predictions of a convolutional neural ne...

FG-UAP: Feature-Gathering Universal Adversarial Perturbation

Deep Neural Networks (DNNs) are susceptible to elaborately designed pert...

Universal Decision-Based Black-Box Perturbations: Breaking Security-Through-Obscurity Defenses

We study the problem of finding a universal (image-agnostic) perturbatio...

Universal Hard-label Black-Box Perturbations: Breaking Security-Through-Obscurity Defenses

We study the problem of finding a universal (image-agnostic) perturbatio...

Adversarial Turing Patterns from Cellular Automata

State-of-the-art deep classifiers are intriguingly vulnerable to univers...

A study of the effect of JPG compression on adversarial images

Neural network image classifiers are known to be vulnerable to adversari...