Deceiving Image-to-Image Translation Networks for Autonomous Driving with Adversarial Perturbations

01/06/2020
by Lin Wang, et al.

Deep neural networks (DNNs) have achieved impressive performance on computer vision problems; however, they have been found to be vulnerable to adversarial examples. For this reason, adversarial perturbations have recently been studied from several perspectives. Most previous works, however, have focused on image classification, and adversarial perturbations have not been studied for image-to-image (Im2Im) translation, which has shown great success in handling paired and unpaired mapping problems in autonomous driving and robotics. This paper examines different types of adversarial perturbations that can fool Im2Im frameworks for autonomous driving purposes. We propose both quasi-physical and digital adversarial perturbations that make Im2Im models yield unexpected results, and we empirically show that these perturbations generalize well under both paired settings (image synthesis) and unpaired settings (style transfer). We also validate that there exist perturbation thresholds beyond which the Im2Im mapping is disrupted or rendered impossible, revealing crucial weaknesses in Im2Im models. Lastly, we show how these perturbations affect the quality of the outputs, laying the groundwork for improving the robustness of current state-of-the-art networks for autonomous driving.
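As a concrete illustration of the digital attack setting, the sketch below shows a generic one-step (FGSM-style) perturbation crafted against an image-to-image generator: the input is nudged in the signed-gradient direction that maximizes the distance between the translations of the perturbed and clean inputs. This is a minimal sketch assuming a PyTorch generator mapping images in [0, 1]; the function name, the MSE objective, and the epsilon budget are illustrative assumptions, not the paper's exact attack formulation.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb_im2im(generator, x, epsilon=0.03):
    """One-step FGSM-style digital perturbation against an Im2Im generator.

    Nudges the input batch `x` (images in [0, 1]) in the gradient direction
    that maximizes the L2 distance between the generator's translation of
    the perturbed input and its translation of the clean input.
    Note: `epsilon` and the MSE objective are illustrative choices only.
    """
    generator.eval()
    with torch.no_grad():
        y_clean = generator(x)  # reference translation of the clean input

    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.mse_loss(generator(x_adv), y_clean)  # output distortion to maximize
    grad = torch.autograd.grad(loss, x_adv)[0]

    # Ascend the loss by one signed-gradient step, then keep the result
    # in the valid image range.
    return (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()
```

An iterative (PGD-style) variant would repeat the signed-gradient step with projection onto an epsilon-ball, typically degrading the translation more severely; sweeping epsilon is one way to probe the perturbation thresholds discussed above.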

