Generating Adversarial Fragments with Adversarial Networks for Physical-world Implementation

07/09/2019
by Zelun Kong, et al.

Although deep neural networks have been widely applied across many application domains, they are known to be vulnerable to adversarial attacks. A promising set of attack techniques has recently been proposed, but these mainly focus on generating adversarial examples in digital-world settings. Unfortunately, such strategies cannot be implemented in physical-world scenarios such as autonomous driving. In this paper, we present FragGAN, a new GAN-based framework that generates an adversarial image differing from the original input only in a targeted fragment, which is replaced with a visually indistinguishable adversarial counterpart. FragGAN ensures that the resulting image as a whole is effective in attacking. For a physical-world implementation, an attacker can physically print the adversarial fragment and paste it over the original fragment (e.g., onto a roadside sign in an autonomous driving scenario). FragGAN also enables clean-label attacks against image classification, as the resulting attacks can succeed without modifying any essential content of the image. Extensive experiments, including physical-world case studies on state-of-the-art autonomous steering and image classification models, demonstrate that FragGAN is highly effective and superior to simple extensions of existing approaches. To the best of our knowledge, FragGAN is the first approach that can mount effective, clean-label physical-world attacks.
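To make the fragment-replacement idea concrete, below is a minimal PyTorch-style sketch of one plausible training step: a generator proposes a replacement for the targeted fragment, and the objective combines an attack loss on the composited image, a GAN realism loss on the fragment, and a similarity loss keeping the fragment visually close to the original. All function names, network interfaces, region encodings, and loss weights here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a GAN-based fragment-replacement attack step;
# names and loss weights are assumptions, not FragGAN's published design.
import torch
import torch.nn.functional as F

def composite(image, fragment, box):
    """Paste the generated fragment into the image at a rectangular region."""
    x0, y0, x1, y1 = box
    out = image.clone()
    out[:, :, y0:y1, x0:x1] = fragment  # gradients still flow to `fragment`
    return out

def attack_step(generator, discriminator, target_model, image, box,
                target_label, lam_adv=1.0, lam_gan=0.1, lam_sim=10.0):
    x0, y0, x1, y1 = box
    orig_frag = image[:, :, y0:y1, x0:x1]

    # Generator proposes a visually similar replacement fragment.
    adv_frag = generator(orig_frag)
    adv_image = composite(image, adv_frag, box)

    # Attack loss: push the target model toward the attacker-chosen label
    # (target_label is a LongTensor of class indices).
    loss_adv = F.cross_entropy(target_model(adv_image), target_label)

    # GAN loss: the discriminator should judge the fragment as realistic.
    d_out = discriminator(adv_frag)
    loss_gan = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))

    # Similarity loss: keep the fragment visually indistinguishable.
    loss_sim = F.mse_loss(adv_frag, orig_frag)

    return lam_adv * loss_adv + lam_gan * loss_gan + lam_sim * loss_sim
```

In this sketch, only the generated fragment would be printed and pasted over the corresponding physical region (e.g., part of a roadside sign), while the rest of the scene stays untouched; the attack loss on the full composited image is what makes the entire image effective in attacking.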
