Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks

05/22/2023
by   Simin Li, et al.

Physical world adversarial attacks are highly practical and threatening: they fool real-world deep learning systems with conspicuous, maliciously crafted physical artifacts. In physical world attacks, evaluating naturalness is strongly emphasized, since humans can easily detect and remove unnatural attacks. However, current studies evaluate naturalness in a case-by-case fashion, which suffers from errors, bias, and inconsistency. In this paper, we take the first step toward benchmarking and assessing the visual naturalness of physical world attacks, taking the autonomous driving scenario as a first attempt. First, to benchmark attack naturalness, we contribute the first Physical Attack Naturalness (PAN) dataset with human ratings and gaze data. PAN verifies several insights for the first time: naturalness is (disparately) affected by contextual features (i.e., environmental and semantic variations) and correlates with behavioral features (i.e., gaze signals). Second, to automatically assess attack naturalness in a way that aligns with human ratings, we introduce the Dual Prior Alignment (DPA) network, which embeds human knowledge into the model's reasoning process. Specifically, DPA imitates human reasoning in naturalness assessment through rating prior alignment and mimics human gaze behavior through attentive prior alignment. We hope our work fosters research on improving and automatically assessing the naturalness of physical world attacks. Our code and dataset can be found at https://github.com/zhangsn-19/PAN.
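To make the two alignments concrete, below is a minimal sketch of how a DPA-style assessor could be structured: one head regresses a naturalness score against human ratings (rating prior alignment) and another predicts an attention map supervised by human gaze (attentive prior alignment). The backbone, head designs, module names (`DualPriorAssessor`, `dpa_loss`), and loss weighting are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of a dual-prior naturalness assessor; architecture details are
# assumptions for illustration, not the authors' released model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPriorAssessor(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        # Any CNN backbone producing a spatial feature map would do here.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn_head = nn.Conv2d(feat_dim, 1, 1)  # predicted saliency map
        self.rating_head = nn.Linear(feat_dim, 1)   # predicted naturalness score

    def forward(self, x):
        feats = self.backbone(x)                     # (B, C, H, W)
        attn = torch.sigmoid(self.attn_head(feats))  # (B, 1, H, W)
        pooled = (feats * attn).flatten(2).mean(-1)  # attention-weighted pooling
        score = self.rating_head(pooled).squeeze(-1) # (B,)
        return score, attn

def dpa_loss(score, attn, human_rating, human_gaze, beta=0.5):
    # Rating prior alignment: match human naturalness ratings.
    l_rating = F.mse_loss(score, human_rating)
    # Attentive prior alignment: match human gaze maps
    # (gaze resized to the attention map's spatial resolution).
    gaze = F.interpolate(human_gaze, size=attn.shape[-2:],
                         mode="bilinear", align_corners=False)
    l_attn = F.mse_loss(attn, gaze)
    return l_rating + beta * l_attn  # beta is an assumed trade-off weight
```

Routing the rating prediction through the gaze-supervised attention map ties the two priors together: the score is forced to depend on the regions humans actually look at, rather than on arbitrary features.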

