AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems

03/02/2023
by Amira Guesmi, et al.

Vision-based perception modules are increasingly deployed in many applications, especially autonomous vehicles and intelligent robots. These modules are used to acquire information about the surroundings and identify obstacles, so accurate detection and classification are essential for reaching appropriate decisions and taking safe actions at all times. Prior studies have demonstrated that printed perturbations, known as physical adversarial attacks, can successfully mislead perception models such as object detectors and image classifiers. However, most of these physical attacks rely on noticeable, eye-catching patterns, making them identifiable by the human eye or in test drives. In this paper, we propose a camera-based inconspicuous adversarial attack (AdvRain) capable of fooling camera-based perception systems for all objects of the same class. Unlike mask-based fake-weather attacks that require access to the underlying computing hardware or image memory, our attack emulates the effect of a natural weather condition (i.e., raindrops) that can be printed on a translucent sticker placed externally over the camera lens. To accomplish this, we devise an iterative process based on random search that identifies critical raindrop positions, ensuring that the applied transformation is adversarial for a target classifier. The transformation blurs predefined parts of the captured image corresponding to the areas covered by the raindrops. Using only 20 raindrops, we achieve a drop in average model accuracy of more than 45% on VGG19 for ImageNet and 40% on ResNet34 for Caltech-101.
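The abstract does not include the authors' implementation, but the two ingredients it names (a localized blur emulating raindrops, and a random search over drop positions scored against a target classifier) can be sketched as follows. This is a minimal illustration assuming a PyTorch classifier and a Gaussian blur as the raindrop model; the helper names (apply_raindrops, random_search_attack) and parameters (radius, kernel_size, iters) are assumptions for illustration, not the published method.

```python
# Sketch of an AdvRain-style attack: blur circular "raindrop" regions of
# the image and keep only position sets that lower the classifier's
# confidence in the true label. Illustrative, not the authors' code.
import torch
import torchvision.transforms.functional as TF

def apply_raindrops(image, positions, radius=12):
    """Blur a circular patch around each (x, y) position to emulate a
    raindrop on the lens. image: (C, H, W) tensor in [0, 1]."""
    _, h, w = image.shape
    blurred = TF.gaussian_blur(image, kernel_size=21, sigma=6.0)
    ys = torch.arange(h).view(-1, 1)  # (H, 1)
    xs = torch.arange(w).view(1, -1)  # (1, W)
    out = image.clone()
    for (cx, cy) in positions:
        mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2  # (H, W)
        out = torch.where(mask.unsqueeze(0), blurred, out)
    return out

def random_search_attack(model, image, label, num_drops=20, iters=300):
    """Iteratively move one random drop at a time, keeping the new layout
    only if it further reduces the true-class probability."""
    model.eval()
    _, h, w = image.shape
    best = [(torch.randint(0, w, (1,)).item(),
             torch.randint(0, h, (1,)).item()) for _ in range(num_drops)]
    with torch.no_grad():
        best_prob = torch.softmax(
            model(apply_raindrops(image, best).unsqueeze(0)),
            dim=1)[0, label].item()
        for _ in range(iters):
            cand = list(best)
            # relocate one randomly chosen drop to a new random position
            i = torch.randint(0, num_drops, (1,)).item()
            cand[i] = (torch.randint(0, w, (1,)).item(),
                       torch.randint(0, h, (1,)).item())
            prob = torch.softmax(
                model(apply_raindrops(image, cand).unsqueeze(0)),
                dim=1)[0, label].item()
            if prob < best_prob:  # more adversarial: keep this layout
                best_prob, best = prob, cand
    return best, best_prob
```

Because the search only queries the model's output probabilities, a sketch like this is black-box: it needs no gradients or access to the camera's image memory, matching the abstract's claim that the perturbation can then be realized physically as a printed translucent sticker.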

Related research

03/22/2023
State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems
Adversarial attacks can mislead deep learning models to make false predi...

01/21/2020
GhostImage: Perception Domain Attacks against Vision-based Object Classification Systems
In vision-based object classification systems, imaging sensors perceive ...

07/06/2019
Affine Disentangled GAN for Interpretable and Robust AV Perception
Autonomous vehicles (AV) have progressed rapidly with the advancements i...

01/25/2021
They See Me Rollin': Inherent Vulnerability of the Rolling Shutter in CMOS Image Sensors
Cameras have become a fundamental component of vision-based intelligent ...
03/21/2019
Adversarial camera stickers: A physical camera-based attack on deep learning systems
Recent work has thoroughly documented the susceptibility of deep learnin...
