Fast-UAP: Algorithm for Speeding up Universal Adversarial Perturbation Generation with Orientation of Perturbation Vectors

11/04/2019
by   Jiazhu Dai, et al.

Convolutional neural networks (CNNs) have become one of the most popular machine learning tools and are applied to a wide range of tasks. However, CNN models are vulnerable to universal perturbations, which are usually imperceptible to humans yet cause natural images to be misclassified with high probability. One of the state-of-the-art algorithms for generating universal perturbations is known as UAP. UAP aggregates only the minimal perturbation at every iteration, so the magnitude of the generated universal perturbation grows inefficiently and generation is slow. In this paper, we propose an optimized algorithm that improves the efficiency of crafting universal perturbations based on the orientation of perturbation vectors. At each iteration, instead of choosing the minimal perturbation vector with respect to each image, we aggregate the current universal perturbation with the perturbation whose orientation is most similar to it, so that the magnitude of the aggregation grows as much as possible at every iteration. Experimental results show that we obtain universal perturbations in a shorter time and with fewer training images. Furthermore, we observe in experiments that universal perturbations generated by our proposed algorithm achieve an average increment in fooling rate of 8% in white-box attacks and black-box attacks compared with universal perturbations generated by UAP.
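To make the orientation-based aggregation step concrete, the sketch below shows one possible implementation of the selection-and-projection idea described in the abstract. It is a minimal illustration under assumptions, not the authors' code: the candidate per-image perturbations are assumed to be precomputed (e.g., by a DeepFool-style attack, as in UAP), and the helper names cosine_similarity, project_lp_ball, and orientation_aggregation_step, as well as the radius xi, are illustrative choices.

```python
import numpy as np


def cosine_similarity(u, w):
    """Cosine of the angle between two flattened perturbation vectors."""
    u, w = u.ravel(), w.ravel()
    denom = np.linalg.norm(u) * np.linalg.norm(w) + 1e-12
    return float(np.dot(u, w) / denom)


def project_lp_ball(v, xi, p):
    """Project v onto the l_p ball of radius xi (p = 2 or np.inf)."""
    if p == 2:
        norm = np.linalg.norm(v.ravel())
        return v if norm <= xi else v * (xi / norm)
    return np.clip(v, -xi, xi)  # p = inf


def orientation_aggregation_step(v, candidate_perturbations, xi=10.0, p=np.inf):
    """
    One aggregation step in the spirit of Fast-UAP: among candidate
    per-image perturbations, pick the one whose orientation is closest
    to the current universal perturbation v, add it to v, and project
    the sum back onto the l_p ball of radius xi.
    """
    if np.linalg.norm(v.ravel()) == 0.0:
        # First iteration, no orientation yet: fall back to the first
        # candidate (UAP's minimal perturbation could also be used here).
        best = candidate_perturbations[0]
    else:
        best = max(candidate_perturbations,
                   key=lambda dv: cosine_similarity(v, dv))
    return project_lp_ball(v + best, xi, p)
```

In this sketch, choosing the candidate with the highest cosine similarity to the current universal perturbation is what lets the magnitude of the aggregated perturbation grow quickly at each iteration, which is the intuition behind the claimed speed-up over UAP's minimal-perturbation choice.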
