It's Raining Cats or Dogs? Adversarial Rain Attack on DNN Perception

09/19/2020
by   Liming Zhai, et al.

Rain is a common natural phenomenon and an important factor for many deep neural network (DNN) based perception systems. Rain can pose inevitable threats that must be carefully addressed, especially in safety- and security-sensitive scenarios (e.g., autonomous driving). A comprehensive investigation of the potential risks that rain poses to DNNs is therefore of great importance. Unfortunately, in practice it is often difficult to collect or synthesize rainy images that represent all raining situations that can occur in the real world. To this end, in this paper we start from a new perspective and propose to combine two quite different lines of study, i.e., rainy image synthesis and adversarial attack. We present an adversarial rain attack, with which we can simulate various rainy situations under the guidance of deployed DNNs and reveal the potential threat factors brought by rain, helping to develop more rain-robust DNNs. In particular, we propose a factor-aware rain generation that simulates rain streaks according to the camera exposure process and models learnable rain factors for adversarial attack. With this generator, we further mount the adversarial rain attack against image classification and object detection, where the rain factors are guided by various DNNs. As a result, it enables a comprehensive study of the impact of rain factors on DNNs. Our large-scale evaluation on three datasets, i.e., NeurIPS'17 DEV, MS COCO and KITTI, demonstrates that our synthesized rainy images not only present visually realistic appearances, but also exhibit strong adversarial capability, which builds the foundation for further rain-robust perception studies.
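To make the "factor-aware rain generation" idea concrete, here is a minimal sketch of how rain streaks can be rendered from explicit, tunable rain factors (streak angle, density, and a streak length standing in for camera exposure time) and blended onto an image. This is an illustrative approximation, not the paper's implementation; all function names and the specific blending formula are assumptions, and in the actual attack these factors would be optimized under the guidance of a target DNN.

```python
import numpy as np

def render_rain_streaks(h, w, intensity=0.3, streak_len=8, angle_deg=70, seed=0):
    """Render a simple rain-streak layer in [0, 1].

    `intensity`, `streak_len`, and `angle_deg` play the role of the
    learnable rain factors described in the abstract; `streak_len`
    approximates the camera exposure process (longer exposure ->
    longer streaks). This is a hypothetical sketch, not the paper's model.
    """
    rng = np.random.default_rng(seed)
    # Sparse bright droplets; density controlled by `intensity`.
    layer = (rng.random((h, w)) < intensity * 0.02).astype(np.float32)
    # Smear each droplet along the rain direction to approximate
    # motion blur accumulated during the exposure.
    dy = np.cos(np.deg2rad(angle_deg))
    dx = np.sin(np.deg2rad(angle_deg))
    streaks = np.zeros((h, w), dtype=np.float32)
    for t in range(streak_len):
        sy, sx = int(round(t * dy)), int(round(t * dx))
        streaks[sy:, sx:] += layer[:h - sy, :w - sx]
    return np.clip(streaks / streak_len, 0.0, 1.0)

def add_rain(image, rain_layer, alpha=0.8):
    """Screen-blend the rain layer onto an image in [0, 1]."""
    return np.clip(image + alpha * rain_layer * (1.0 - image), 0.0, 1.0)
```

In an adversarial setting, one would treat `(intensity, streak_len, angle_deg)` as the search space and update them (by gradient or black-box search) to maximize the target DNN's loss, which is the kind of DNN-guided factor optimization the abstract describes.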


