CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models

06/09/2022
by   Federico Nesti, et al.

Adversarial examples represent a serious threat to deep neural networks in several application domains, and a large body of work has been produced to investigate them and mitigate their effects. Nevertheless, little work has been devoted to the generation of datasets specifically designed to evaluate the adversarial robustness of neural models. This paper presents CARLA-GeAR, a tool for the automatic generation of photo-realistic synthetic datasets that can be used for a systematic evaluation of the adversarial robustness of neural models against physical adversarial patches, as well as for comparing the performance of different adversarial defense/detection methods. The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving. The adversarial patches included in the generated datasets are attached to billboards or the back of a truck, and are crafted with state-of-the-art white-box attack strategies to maximize the prediction error of the model under test. Finally, the paper presents an experimental study evaluating the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world. All the code and datasets used in this paper are available at http://carlagear.retis.santannapisa.it.
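For readers unfamiliar with the CARLA Python API on which the tool is built, the sketch below shows the basic pattern for capturing synthetic frames from a running simulator. It illustrates the underlying API only, not CARLA-GeAR's actual pipeline; the camera placement, resolution, frame count, and output path are arbitrary assumptions.

```python
import queue

import carla

# Connect to a running CARLA server (default port 2000).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Run the simulator in synchronous mode so frames are deterministic.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)

# Spawn an RGB camera at an arbitrary roadside viewpoint.
cam_bp = world.get_blueprint_library().find("sensor.camera.rgb")
cam_bp.set_attribute("image_size_x", "800")
cam_bp.set_attribute("image_size_y", "600")
transform = carla.Transform(carla.Location(x=20.0, y=5.0, z=2.0))
camera = world.spawn_actor(cam_bp, transform)

# Collect frames through a queue and save them to disk.
frames = queue.Queue()
camera.listen(frames.put)
for i in range(10):
    world.tick()
    image = frames.get()
    image.save_to_disk(f"dataset/{i:06d}.png")

camera.destroy()
```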
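The white-box patch attacks mentioned in the abstract share a common scheme: the patch pixels are treated as trainable parameters and optimized by gradient ascent to maximize the task loss of the (frozen) model under test. Below is a minimal PyTorch sketch of that scheme, assuming a classification loss; `apply_patch` is a hypothetical placeholder for the differentiable warp that pastes the patch onto the billboard or truck surface (in CARLA-GeAR this mapping is derived from the scene geometry), and the epoch count, learning rate, and patch size are illustrative values.

```python
import torch
import torch.nn.functional as F

def optimize_patch(model, loader, apply_patch, epochs=5, lr=0.01,
                   patch_shape=(3, 64, 64)):
    # The patch pixels are the only trainable parameters; the model is frozen.
    patch = torch.rand(patch_shape, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    model.eval()
    for _ in range(epochs):
        for images, labels in loader:
            # apply_patch pastes the patch onto the scene surface
            # (billboard / truck back) with a differentiable warp.
            adv = apply_patch(images, patch)
            loss = F.cross_entropy(model(adv), labels)
            optimizer.zero_grad()
            (-loss).backward()             # ascend the task loss
            optimizer.step()
            patch.data.clamp_(0.0, 1.0)    # keep pixels in the valid range
    return patch.detach()
```

A targeted variant would instead descend on the loss toward an attacker-chosen label; segmentation and detection attacks follow the same loop with a task-specific loss.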


Related research

08/13/2021
Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks
Deep learning and convolutional neural networks allow achieving impressi...

08/29/2021
Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution
Recent studies have shown that deep neural networks are vulnerable to in...

01/05/2022
On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving
The existence of real-world adversarial examples (commonly in the form o...

05/18/2023
Adversarial Amendment is the Only Force Capable of Transforming an Enemy into a Friend
Adversarial attack is commonly regarded as a huge threat to neural netwo...

09/02/2021
Real World Robustness from Systematic Noise
Systematic error, which is not determined by chance, often refers to the...

08/20/2021
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier
The adversarial patch attack against image classification models aims to...

11/29/2022
How Important are Good Method Names in Neural Code Generation? A Model Robustness Perspective
Pre-trained code generation models (PCGMs) have been widely applied in n...
