Isometric 3D Adversarial Examples in the Physical World

10/27/2022
by Yibo Miao, et al.

3D deep learning models have been shown to be as vulnerable to adversarial examples as their 2D counterparts. However, existing attack methods are still far from stealthy and suffer from severe performance degradation in the physical world. Although 3D data is highly structured, it is difficult to bound the perturbations with simple metrics in Euclidean space. In this paper, we propose a novel ϵ-isometric (ϵ-ISO) attack that generates natural and robust 3D adversarial examples in the physical world by considering the geometric properties of 3D objects and the invariance to physical transformations. For naturalness, we constrain the adversarial example to be ϵ-isometric to the original one by adopting Gaussian curvature as a surrogate metric, a choice guaranteed by a theoretical analysis. For invariance to physical transformations, we propose a maxima over transformation (MaxOT) method that actively searches for the most harmful transformations rather than random ones, making the generated adversarial examples more robust in the physical world. Experiments on typical point cloud recognition models validate that our approach significantly improves the attack success rate and the naturalness of the generated 3D adversarial examples compared with state-of-the-art attack methods.
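
The two ideas in the abstract lend themselves to a compact sketch. The snippet below is a minimal, illustrative reconstruction rather than the authors' released code: it pairs a discrete Gaussian-curvature penalty (the standard angle-defect formula, standing in for the ϵ-isometry constraint) with a MaxOT-style inner loop that takes the gradient step against the sampled transformation on which the attack currently performs worst. The names angle_defect_curvature and maxot_attack, the rotation-only transformation family, and the assumption that model consumes a (1, N, 3) tensor of vertex coordinates and that label is the true class index are all hypothetical simplifications.

```python
# Minimal sketch of an eps-ISO-style attack with a MaxOT inner loop.
# All names and the transformation family here are illustrative assumptions.
import math
import torch
import torch.nn.functional as F


def angle_defect_curvature(verts, faces):
    """Discrete Gaussian curvature of a triangle mesh via the angle-defect
    formula: K(v) = 2*pi - sum of triangle angles incident to vertex v."""
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]

    def corner_angle(a, b, c):
        # Angle at vertex a of triangle (a, b, c).
        u = F.normalize(b - a, dim=-1)
        w = F.normalize(c - a, dim=-1)
        return torch.acos((u * w).sum(-1).clamp(-1 + 1e-7, 1 - 1e-7))

    angles = torch.stack(
        [corner_angle(v0, v1, v2), corner_angle(v1, v2, v0), corner_angle(v2, v0, v1)],
        dim=1,
    )  # (F, 3); angle j belongs to vertex faces[:, j]
    defects = torch.full((verts.shape[0],), 2 * math.pi, device=verts.device)
    return defects.index_add(0, faces.reshape(-1), -angles.reshape(-1))


def maxot_attack(model, verts, faces, label, steps=200, lr=1e-3, lam=1.0, n_trans=16):
    """Untargeted attack: maximize the worst-case (over sampled transformations)
    classification loss while keeping the discrete Gaussian curvature close to
    that of the clean mesh (surrogate for the eps-isometry constraint)."""
    delta = torch.zeros_like(verts, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    k_clean = angle_defect_curvature(verts, faces).detach()

    for _ in range(steps):
        adv = verts + delta
        # Stand-in transformation family: random rotations about the z-axis.
        thetas = torch.rand(n_trans, device=verts.device) * 2 * math.pi
        ce_per_transform = []
        for th in thetas:
            c, s = torch.cos(th), torch.sin(th)
            rot = torch.stack([
                torch.stack([c, -s, torch.zeros_like(c)]),
                torch.stack([s, c, torch.zeros_like(c)]),
                torch.tensor([0.0, 0.0, 1.0], device=verts.device),
            ])
            logits = model((adv @ rot.T).unsqueeze(0))  # assumed (1, num_classes)
            ce_per_transform.append(F.cross_entropy(logits, label.view(1)))
        # MaxOT: focus on the transformation where the attack is weakest,
        # i.e., where the true-label loss is smallest.
        worst_case_ce = torch.stack(ce_per_transform).min()
        curvature_penalty = (angle_defect_curvature(adv, faces) - k_clean).abs().mean()
        loss = -worst_case_ce + lam * curvature_penalty
        opt.zero_grad()
        loss.backward()
        opt.step()

    return (verts + delta).detach()
```

The min over sampled transformations is what distinguishes this from an expectation-over-transformation baseline: the gradient step targets the case where the model is still closest to classifying the object correctly, which is the "most harmful transformation" idea described in the abstract.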

Related research

11/27/2020 - Robust and Natural Physical Adversarial Examples for Object Detectors
Recently, many studies show that deep neural networks (DNNs) are suscept...

10/28/2018 - Robust Audio Adversarial Example for a Physical Attack
The success of deep learning in recent years has raised concerns about a...

06/19/2019 - Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield
Many recent works demonstrated that Deep Learning models are vulnerable ...

03/01/2021 - Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World
Deep learning models are vulnerable to adversarial examples. As a more t...

12/05/2019 - Region-Wise Attack: On Efficient Generation of Robust Physical Adversarial Examples
Deep neural networks (DNNs) are shown to be susceptible to adversarial e...

08/20/2021 - Application of Adversarial Examples to Physical ECG Signals
This work aims to assess the reality and feasibility of the adversarial ...

04/13/2023 - Adversarial Examples from Dimensional Invariance
Adversarial examples have been found for various deep as well as shallow...
