Shadows Aren't So Dangerous After All: A Fast and Robust Defense Against Shadow-Based Adversarial Attacks

08/18/2022
by Andrew Wang, et al.

Robust classification is essential in tasks like autonomous vehicle sign recognition, where the downsides of misclassification can be grave. Adversarial attacks threaten the robustness of neural network classifiers, causing them to consistently and confidently misidentify road signs. One such class of attack, shadow-based attacks, causes misidentifications by applying a natural-looking shadow to input images, resulting in road signs that appear natural to a human observer but confusing for these classifiers. Current defenses against such attacks use a simple adversarial training procedure, achieving a rather low 25% and 40% robustness on the GTSRB and LISA test sets, respectively. In this paper, we propose a robust, fast, and generalizable method, designed to defend against shadow attacks in the context of road sign recognition, that augments source images with binary adaptive-threshold and edge maps. We empirically show its robustness against shadow attacks, and reformulate the problem to show its similarity to ε-perturbation-based attacks. Experimental results show that our edge defense achieves 78% robustness while maintaining 98% benign test accuracy on the GTSRB test set, with similar results from our threshold defense. A link to our code is in the paper.
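The intuition behind the defense described above is that a shadow mostly rescales pixel intensities locally, so features computed relative to a pixel's neighborhood (a binary adaptive threshold, an edge map) change far less than the raw intensities do. A minimal NumPy-only sketch of the two augmentation maps is below; the exact block size, offset, and edge threshold are illustrative assumptions, not the paper's settings, which use the authors' own pipeline:

```python
import numpy as np

def adaptive_threshold(gray: np.ndarray, block: int = 11, c: float = 2.0) -> np.ndarray:
    """Binary adaptive threshold: each pixel is compared against the mean
    of its (block x block) neighborhood minus a small offset c."""
    pad = block // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (block, block))
    local_mean = windows.mean(axis=(-2, -1))  # same spatial shape as gray
    return (gray > local_mean - c).astype(np.uint8)

def edge_map(gray: np.ndarray, thresh: float = 50.0) -> np.ndarray:
    """Binary edge map from the Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    padded = np.pad(gray.astype(np.float64), 1, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    gx = (windows * kx).sum(axis=(-2, -1))
    gy = (windows * kx.T).sum(axis=(-2, -1))
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

def augment(gray: np.ndarray) -> np.ndarray:
    """Stack the source image with both binary maps as extra channels,
    giving the classifier shadow-insensitive views of the same sign."""
    return np.stack([gray, adaptive_threshold(gray) * 255,
                     edge_map(gray) * 255], axis=-1)
```

Because both maps are computed relative to local statistics, a smooth multiplicative shadow leaves them largely unchanged, which is the property the defense relies on.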


Related research

04/25/2022
A Hybrid Defense Method against Adversarial Attacks on Traffic Sign Classifiers in Autonomous Vehicles
Adversarial attacks can make deep neural network (DNN) models predict in...

06/25/2023
Computational Asymmetries in Robust Classification
In the context of adversarial robustness, we make three strongly related...

06/13/2021
ATRAS: Adversarially Trained Robust Architecture Search
In this paper, we explore the effect of architecture completeness on adv...

09/03/2023
Robust Adversarial Defense by Tensor Factorization
As machine learning techniques become increasingly prevalent in data ana...

06/03/2022
Gradient Obfuscation Checklist Test Gives a False Sense of Security
One popular group of defense techniques against adversarial attacks is b...

08/30/2023
Explainable and Trustworthy Traffic Sign Detection for Safe Autonomous Driving: An Inductive Logic Programming Approach
Traffic sign detection is a critical task in the operation of Autonomous...

03/27/2023
Classifier Robustness Enhancement Via Test-Time Transformation
It has been recently discovered that adversarially trained classifiers e...
