Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks

08/13/2021
by   Federico Nesti, et al.

Deep learning and convolutional neural networks achieve impressive performance in computer vision tasks such as object detection and semantic segmentation (SS). However, recent studies have exposed evident weaknesses of such models against adversarial perturbations. In real-world scenarios such as autonomous driving, more attention should instead be devoted to real-world adversarial examples (RWAEs): physical objects (e.g., billboards and printable patches) optimized to be adversarial to the entire perception pipeline. This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches, crafted with powerful attacks enriched with a novel loss function. First, an investigation on the Cityscapes dataset is conducted by extending the Expectation Over Transformation (EOT) paradigm to cope with SS. Then, a novel attack optimization, called scene-specific attack, is proposed; it leverages the CARLA driving simulator to improve the transferability of the proposed EOT-based attack to a real 3D environment. Finally, a printed physical billboard containing an adversarial patch was tested in an outdoor driving scenario to assess the feasibility of the studied attacks in the real world. Exhaustive experiments revealed that the proposed attack formulations outperform previous work in crafting both digital and real-world adversarial patches for SS. At the same time, the experimental results showed that these attacks are notably less effective in the real world, hence questioning the practical relevance of adversarial attacks on SS models for autonomous/assisted driving.
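The core EOT idea referenced above (optimizing a patch so that it remains adversarial under a distribution of transformations) can be sketched with a deliberately simplified setup. Everything below is an illustrative assumption, not the paper's actual method: the "segmentation model" is a toy per-pixel linear classifier, the transformation set is random translations only, and the step size and loop counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-in for a segmentation network: per-pixel
# 3-class logits are a fixed linear map of the RGB value. A real
# evaluation would use an actual SS model (e.g., on Cityscapes).
W = rng.normal(size=(3, 3))
b = rng.normal(size=3)

def seg_loss_and_grad(img, labels):
    """Mean per-pixel softmax cross-entropy and its gradient w.r.t. img."""
    logits = img @ W.T + b                          # (H, W, 3)
    logits = logits - logits.max(-1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(-1, keepdims=True)
    onehot = np.eye(3)[labels]                      # (H, W, 3)
    n = labels.size
    loss = -(onehot * np.log(p + 1e-12)).sum() / n
    dimg = ((p - onehot) / n) @ W                   # chain rule through the linear map
    return loss, dimg

def eot_patch_attack(background, labels, patch_size=8,
                     steps=30, lr=100.0, n_transforms=8):
    """Gradient *ascent* on the patch pixels, averaging the gradient over
    randomly sampled patch placements (the EOT expectation)."""
    H, Wd, _ = background.shape
    patch = rng.uniform(0.4, 0.6, size=(patch_size, patch_size, 3))
    for _ in range(steps):
        g = np.zeros_like(patch)
        for _ in range(n_transforms):
            # each sampled transformation here is a random translation
            y = rng.integers(0, H - patch_size + 1)
            x = rng.integers(0, Wd - patch_size + 1)
            img = background.copy()
            img[y:y+patch_size, x:x+patch_size] = patch
            _, dimg = seg_loss_and_grad(img, labels)
            g += dimg[y:y+patch_size, x:x+patch_size]
        # ascend the expected loss, then project back to the valid pixel range
        patch = np.clip(patch + lr * g / n_transforms, 0.0, 1.0)
    return patch
```

The structure, not the numbers, is the point: the inner loop approximates the expectation over transformations with Monte Carlo samples, and only the patch region of the gradient is accumulated, mirroring how EOT-based patch attacks are typically set up before being extended with SS-specific losses as described in the paper.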

Related research

01/05/2022
On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving
The existence of real-world adversarial examples (commonly in the form o...

02/10/2021
Enhancing Real-World Adversarial Patches with 3D Modeling Techniques
Although many studies have examined adversarial examples in the real wor...

08/19/2022
Real-Time Robust Video Object Detection System Against Physical-World Adversarial Attacks
DNN-based video object detection (VOD) powers autonomous driving and vid...

06/09/2022
CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models
Adversarial examples represent a serious threat for deep neural networks...

05/22/2023
Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks
Physical world adversarial attack is a highly practical and threatening ...

07/08/2020
SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations
Whilst significant research effort into adversarial examples (AE) has em...

03/14/2022
Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis
This work presents Z-Mask, a robust and effective strategy to improve th...