Exploring the Physical World Adversarial Robustness of Vehicle Detection

08/07/2023
by Wei Jiang, et al.

Adversarial attacks can compromise the robustness of real-world detection models. However, evaluating these models under real-world conditions is challenging because physical experiments are resource-intensive. Virtual simulation offers an alternative, but the absence of standardized benchmarks hampers progress. To address this, we propose an innovative instant-level data generation pipeline built on the CARLA simulator. Using this pipeline, we construct the Discrete and Continuous Instant-level (DCI) dataset, enabling comprehensive experiments involving three detection models and three physical adversarial attacks. Our findings highlight diverse model performance under adversarial conditions: Yolo v6 demonstrates remarkable resilience, suffering only a marginal 6.59% average drop in AP, whereas the strongest attack yields a substantial 14.51% average AP reduction, roughly double that of the other algorithms. We also note that static scenes yield higher recognition AP values, and that outcomes remain relatively consistent across varying weather conditions. Intriguingly, our study suggests that advancements in adversarial attack algorithms may be approaching their “limitation”. In summary, our work underscores the significance of adversarial attacks in real-world contexts and introduces the DCI dataset as a versatile benchmark. Our findings provide valuable insights for enhancing the robustness of detection models and offer guidance for future research on adversarial attacks.
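To illustrate the headline metric, the sketch below shows one way an average AP drop across models could be computed. The model names and AP values here are invented for illustration only; they are not the paper's results, and the paper may average over scenes and weather conditions rather than models.

```python
# Hypothetical clean vs. attacked AP values (NOT from the paper).
clean_ap = {"yolov6": 0.82, "faster_rcnn": 0.78, "detr": 0.75}
attacked_ap = {"yolov6": 0.766, "faster_rcnn": 0.62, "detr": 0.60}

def avg_ap_drop(clean: dict, attacked: dict) -> float:
    """Average AP drop under attack, in percentage points, across models."""
    drops = [clean[m] - attacked[m] for m in clean]
    return 100 * sum(drops) / len(drops)

print(f"average AP drop: {avg_ap_drop(clean_ap, attacked_ap):.2f} points")
```

A per-model breakdown of the same drops is what would surface one detector (here, the made-up "yolov6" entry) as markedly more resilient than the others.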


