CAAD 2018: Generating Transferable Adversarial Examples

09/29/2018
by Yash Sharma, et al.

Deep neural networks (DNNs) are vulnerable to adversarial examples: perturbations carefully crafted to fool the attacked DNN, in both the non-targeted and the targeted setting. In the non-targeted case, the attacker simply aims to induce misclassification; in the targeted case, the attacker aims to induce classification to a specified target class. Moreover, strong adversarial examples have been observed to transfer to unknown models, which raises a serious security concern. The NIPS 2017 competition was organized to accelerate research on adversarial attacks and defenses, in a realistic setting where submitted attacks attempt to transfer to submitted defenses. The CAAD 2018 competition took place under nearly identical rules. Because the NIPS 2017 submissions were required to be open-sourced, participants in CAAD 2018 could build directly on previous solutions and thereby improve the state of the art in this setting. Our team participated in the CAAD 2018 competition and won 1st place in both attack subtracks, non-targeted and targeted adversarial attacks, as well as 3rd place in defense. We outline our solutions and development results in this article, and we hope our results can inform researchers in both generating and defending against adversarial examples.
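The difference between the two attack objectives is easy to see in code. The sketch below is a minimal illustration (not the authors' competition solution) of a single fast gradient sign (FGSM) step in PyTorch; `model`, `x`, `y`, `y_target`, `source_model`, `target_model`, and `eps` are placeholder names, and inputs are assumed to lie in [0, 1].

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps):
        # Non-targeted: step *up* the loss on the true labels y,
        # aiming to induce any misclassification.
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    def fgsm_targeted(model, x, y_target, eps):
        # Targeted: step *down* the loss on the chosen target labels,
        # aiming to induce classification as y_target.
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y_target).backward()
        return (x - eps * x.grad.sign()).clamp(0, 1).detach()

    # Hypothetical usage: craft examples on a white-box source model, then
    # measure transfer as the misclassification rate of an unseen model.
    # x_adv = fgsm(source_model, x, y, eps=16 / 255)
    # transfer_rate = (target_model(x_adv).argmax(dim=1) != y).float().mean()

In practice, strong transfer attacks of the kind the competitions reward use iterative updates and ensembles of source models; the single FGSM step above merely makes the two loss directions concrete.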


Related research

A survey on Adversarial Attacks and Defenses in Text (02/12/2019)
Deep neural networks (DNNs) have shown an inherent vulnerability to adve...

Adversarial Attacks and Defences Competition (03/31/2018)
To accelerate research on adversarial examples and robustness of machine...

Universal, transferable and targeted adversarial attacks (08/29/2019)
Deep Neural Network has been found vulnerable in many previous works. A ...

You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion (07/05/2020)
Code autocompletion is an integral feature of modern code editors and ID...

An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks (04/09/2018)
Deep neural networks (DNNs) are known vulnerable to adversarial attacks....

Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap? (03/17/2021)
We design blackbox transfer-based targeted adversarial attacks for an en...

Transferable Perturbations of Deep Feature Distributions (04/27/2020)
Almost all current adversarial attacks of CNN classifiers rely on inform...
