Universal, transferable and targeted adversarial attacks

08/29/2019
by   Junde Wu, et al.

Deep neural networks have been shown to be vulnerable in many previous works: well-designed inputs, called adversarial examples, can lead networks to make incorrect predictions. The difficulty of an attack depends on the scenario, the requirements and goals, and the attacker's capabilities. For example, a targeted attack is harder than a non-targeted one, a universal attack is harder than a non-universal one, and a transferable attack is harder than a non-transferable one. The question is: does there exist an attack that can survive in the harshest environment and meet all of these requirements? Although many cheap and effective attacks have been proposed, this question has not been fully answered on large models and large-scale datasets. In this paper, we build a neural network to learn a universal mapping from source images to adversarial examples. These examples fool classification networks into classifying all of them as one targeted class, and they also transfer between different models.
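To illustrate the core idea of a *targeted* attack (as opposed to the paper's generator-based method), here is a minimal sketch on a toy linear softmax classifier: the input is perturbed by gradient descent on the cross-entropy of a chosen target class until the model predicts that class. All names, the random weights, and the step size are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear softmax classifier: logits = W @ x + b (random stand-in weights).
n_classes, n_features = 3, 5
W = rng.normal(size=(n_classes, n_features))
b = rng.normal(size=n_classes)

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    return int(np.argmax(W @ x + b))

def targeted_attack(x, target, steps=200, lr=0.1):
    """Gradient descent on -log p(target | x_adv): each step nudges the
    input so the classifier's probability of `target` increases."""
    x_adv = x.copy()
    onehot = np.eye(n_classes)[target]
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        # For a linear softmax model, d(-log p_target)/dx = W.T @ (p - onehot).
        grad = W.T @ (p - onehot)
        x_adv -= lr * grad
    return x_adv

x = rng.normal(size=n_features)
target = (predict(x) + 1) % n_classes   # pick a class the input is NOT in
x_adv = targeted_attack(x, target)
print(predict(x), target, predict(x_adv))
```

The paper's approach differs in that a single generator network learns this mapping once, so new adversarial examples are produced with one forward pass instead of per-input optimization.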
