Can 3D Adversarial Logos Cloak Humans?

06/25/2020
by Tianlong Chen, et al.

With the rise of adversarial attacks, researchers have attempted to fool trained object detectors in 2D scenes. Among these attacks, an intriguing new form with potential real-world use is to attach adversarial patches (e.g., logos) to images. Nevertheless, much less is known about adversarial attacks from 3D rendering views, which are essential for an attack to remain persistently strong in the physical world. This paper presents a new 3D adversarial logo attack: we construct an arbitrarily shaped logo from a 2D texture image and map it into a 3D adversarial logo via a texture mapping we call the logo transformation. The resulting 3D adversarial logo is treated as an adversarial texture, enabling easy manipulation of its shape and position. This greatly extends the versatility of adversarial training for computer-graphics-synthesized imagery. Unlike the traditional adversarial patch, this new form of attack is mapped into the 3D object world, and gradients back-propagate to the 2D image domain through differentiable rendering. Moreover, and unlike existing adversarial patches, our new 3D adversarial logo is shown to fool state-of-the-art deep object detectors robustly under model rotations, taking one step further toward realistic attacks in the physical world. Our code is available at https://github.com/TAMU-VITA/3D_Adversarial_Logo.
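The pipeline above reduces to an optimization loop over the 2D logo texture. Below is a minimal sketch of that loop, not the authors' implementation: `render_views` and `person_scores` are hypothetical placeholders standing in for a differentiable renderer (e.g., PyTorch3D) and a pretrained person detector, and the loss shown is a simple disappearance objective assumed for illustration.

```python
# A minimal sketch of the 3D adversarial logo optimization loop, NOT the
# authors' implementation. `render_views` and `person_scores` are
# hypothetical placeholders for a differentiable renderer (e.g., PyTorch3D)
# and a pretrained person detector, respectively.
import torch

def render_views(texture, angles):
    """Placeholder: a differentiable renderer that maps the 2D logo texture
    onto the 3D human mesh (via the fixed UV logo transformation) and renders
    one image per viewing angle. Here we just broadcast the texture so the
    sketch runs end-to-end."""
    return texture.expand(len(angles), -1, -1, -1)

def person_scores(images):
    """Placeholder: a detector head returning one 'person' confidence per
    image. A real attack would run a trained detector such as YOLOv2."""
    return torch.sigmoid(images.mean(dim=(1, 2, 3)))

logo_texture = torch.rand(1, 3, 256, 256, requires_grad=True)  # 2D logo image
optimizer = torch.optim.Adam([logo_texture], lr=0.01)

for step in range(500):
    # Render from several viewpoints so the attack stays strong under rotation.
    angles = torch.linspace(0.0, 360.0, 8)
    images = render_views(logo_texture, angles)  # (8, 3, H, W)

    # Disappearance loss: suppress the detector's "person" confidence in
    # every view; gradients flow through the renderer back to the 2D logo.
    loss = person_scores(images).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        logo_texture.clamp_(0, 1)  # keep the texture a valid image
```

The design choice that matters here is end-to-end differentiability: because rendering is differentiable, a detection loss computed on the 2D renderings can update the 2D logo texture directly, which is what lets the logo's shape and position on the mesh be manipulated freely.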


Related research

04/22/2021
Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors
This paper presents a novel patch-based adversarial attack pipeline that...

12/12/2022
HOTCOLD Block: Fooling Thermal Infrared Detectors with a Novel Wearable Design
Adversarial attacks on thermal infrared imaging expose the risk of relat...

06/14/2023
X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail
Object detection models, which are widely used in various domains (such ...

03/07/2023
Patch of Invisibility: Naturalistic Black-Box Adversarial Attacks on Object Detectors
Adversarial attacks on deep-learning models have been receiving increase...

03/18/2022
DTA: Physical Camouflage Attacks using Differentiable Transformation Network
To perform adversarial attacks in the physical world, many studies have ...

02/19/2023
X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection
Adversarial attacks are valuable for evaluating the robustness of deep l...

10/17/2022
Differential Evolution based Dual Adversarial Camouflage: Fooling Human Eyes and Object Detectors
Recent studies reveal that deep neural network (DNN) based object detect...
