SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing

06/19/2019
by Haonan Qiu, et al.

Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples: manipulated instances designed to mislead DNNs into making incorrect predictions. Currently, most such adversarial examples aim to guarantee a "subtle perturbation" by bounding their L_p norm. In this paper, we explore the impact of semantic manipulation on DNN predictions by editing the semantic attributes of images to generate "unrestricted adversarial examples". Such semantic-based perturbation is more practical than pixel-level manipulation. In particular, we propose SemanticAdv, an algorithm that leverages disentangled semantic factors to generate adversarial perturbations by altering either a single semantic attribute or a combination of attributes. We conduct extensive experiments showing that these semantic-based adversarial examples not only fool different learning tasks such as face verification and landmark detection, but also achieve a high attack success rate against real-world black-box services such as the Azure face verification service. Such structured adversarial examples with controlled semantic manipulation can shed further light on the vulnerabilities of DNNs as well as on potential defensive approaches.
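As a rough illustration of the attribute-conditional attack idea described above, the sketch below searches for an adversarial example by optimizing a single interpolation weight between an input image and its attribute-edited version, pushing a target classifier toward a chosen label. Every name and design choice here (ToyAttributeGenerator, ToyClassifier, semantic_attack, the scalar weight alpha, pixel-space interpolation) is an illustrative assumption for exposition, not the authors' released SemanticAdv implementation; the abstract only specifies that adversarial perturbations are produced by altering semantic attributes through an attribute-conditional editing model.

```python
# A minimal, self-contained PyTorch sketch of searching for a semantic
# adversarial example via attribute-conditional editing. Every module and
# name here (ToyAttributeGenerator, ToyClassifier, semantic_attack, alpha)
# is a hypothetical placeholder, NOT the authors' SemanticAdv implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyAttributeGenerator(nn.Module):
    """Stand-in for a pretrained attribute-conditional image editor G(x, c)."""
    def __init__(self, num_attrs=8):
        super().__init__()
        self.attr_proj = nn.Linear(num_attrs, 3)
        self.refine = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x, attrs):
        # Broadcast the projected attribute code over the spatial dimensions.
        bias = self.attr_proj(attrs).view(-1, 3, 1, 1)
        return torch.tanh(self.refine(x + bias))


class ToyClassifier(nn.Module):
    """Stand-in for the target model under attack (e.g. a face classifier)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, num_classes))

    def forward(self, x):
        return self.net(x)


def semantic_attack(x, new_attrs, target_label, G, f, steps=200, lr=0.05):
    """Optimize one interpolation weight between the original image and its
    attribute-edited version so that f predicts `target_label`.
    (Interpolating intermediate feature maps of G is a natural variant.)"""
    x_edit = G(x, new_attrs).detach()              # semantically edited image
    alpha = torch.zeros(x.size(0), 1, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([alpha], lr=lr)
    target = torch.full((x.size(0),), target_label, dtype=torch.long)
    for _ in range(steps):
        w = torch.sigmoid(alpha)                   # keep weight in (0, 1)
        x_adv = (1 - w) * x + w * x_edit           # move toward the edit
        loss = F.cross_entropy(f(x_adv), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    w = torch.sigmoid(alpha).detach()
    return (1 - w) * x + w * x_edit


if __name__ == "__main__":
    G, f = ToyAttributeGenerator(), ToyClassifier()
    x = torch.rand(1, 3, 32, 32) * 2 - 1           # toy input image in [-1, 1]
    new_attrs = torch.zeros(1, 8)
    new_attrs[0, 2] = 1.0                          # "turn on" one attribute
    x_adv = semantic_attack(x, new_attrs, target_label=3, G=G, f=f)
    print("prediction on adversarial image:", f(x_adv).argmax(dim=1).item())
```

Because the perturbation is produced by an attribute edit rather than a norm-bounded pixel change, the result stays semantically plausible even when it lies far from the original image in L_p distance.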


Related research:

10/11/2018 · Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
01/06/2020 · Generating Semantic Adversarial Examples via Feature Manipulation
10/11/2018 · Realistic Adversarial Examples in 3D Meshes
01/21/2019 · Generating Textual Adversarial Examples for Deep Learning Models: A Survey
10/27/2019 · Spot Evasion Attacks: Adversarial Examples for License Plate Recognition Systems with Convolution Neural Networks
12/27/2021 · Adversarial Attack for Asynchronous Event-based Data
07/11/2019 · Adversarial Objects Against LiDAR-Based Autonomous Driving Systems
