Realistic Adversarial Examples in 3D Meshes

10/11/2018
by Dawei Yang et al.

Highly expressive models such as deep neural networks (DNNs) have been widely applied across domains with increasing success. However, recent studies show that such machine learning models are vulnerable to adversarial examples. Adversarial examples have so far been explored mostly for 2D images, while little work has examined the vulnerabilities of 3D objects in the real world, where objects are projected into the 2D domain by photography before being fed to recognition systems. In this paper, we study adversarial behaviors in practical scenarios by manipulating the shape and texture of a given 3D mesh representation of an object. Our goal is to render the optimized "adversarial meshes" to 2D with a photorealistic renderer such that the resulting images still mislead different machine learning models. Extensive experiments show that by adding an unnoticeable 3D adversarial perturbation to the shape or texture of a mesh, the projected 2D instance can either lead classifiers to misclassify the victim object as an arbitrary malicious target or hide a target object within the scene from object detectors. We conduct human studies showing that our optimized adversarial 3D perturbations are highly unnoticeable to human vision. Beyond subtly perturbing a given 3D mesh, we also propose to synthesize a realistic 3D mesh, place it in a scene under similar rendering conditions, and thereby attack different machine learning models. We provide in-depth analyses of transferability across various 3D renderers and of vulnerable mesh regions to help better understand adversarial behaviors in the real world.
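The attack described above can be pictured as gradient-based optimization through a differentiable renderer: perturb the mesh's vertices (shape) and its texture, render the scene to a 2D image, and update the perturbation so a victim model predicts an attacker-chosen label. Below is a minimal sketch of that loop in PyTorch; the render stub, the ResNet-18 victim, the target class index, and the penalty weights are all illustrative assumptions, not the paper's actual renderer, models, or constraints.

import torch
import torch.nn.functional as F
import torchvision.models as models

def render(vertices, texture):
    # Stand-in for a differentiable photorealistic renderer. A real attack
    # would map (mesh, texture, camera, lighting) to an image while staying
    # differentiable w.r.t. shape and texture; this stub merely produces a
    # dummy image that depends on both inputs so gradients flow end to end.
    img = texture + (vertices.mean() * 0.0)   # keeps vertices in the graph
    return img.clamp(0, 1).unsqueeze(0)       # shape (1, 3, 224, 224)

classifier = models.resnet18(weights=None).eval()  # a pretrained victim model in practice
vertices = torch.rand(1000, 3)                     # base mesh vertex positions
texture = torch.rand(3, 224, 224)                  # base texture map
delta_v = torch.zeros_like(vertices, requires_grad=True)  # shape perturbation
delta_t = torch.zeros_like(texture, requires_grad=True)   # texture perturbation
target = torch.tensor([0])                         # attacker-chosen target class (assumed index)
opt = torch.optim.Adam([delta_v, delta_t], lr=1e-2)

for step in range(200):
    image = render(vertices + delta_v, texture + delta_t)
    logits = classifier(image)
    # Targeted misclassification loss, plus L2 penalties that keep the
    # perturbation small and visually unnoticeable (the paper's exact
    # smoothness/perceptual constraints may differ).
    loss = (F.cross_entropy(logits, target)
            + 1e-2 * delta_v.pow(2).sum()
            + 1e-2 * delta_t.pow(2).sum())
    opt.zero_grad()
    loss.backward()
    opt.step()

For the detector-hiding variant mentioned in the abstract, the cross-entropy term would instead be a loss that suppresses the detector's confidence on the victim object.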

Related research

Big but Imperceptible Adversarial Perturbations via Semantic Manipulation (04/12/2019)
Machine learning, especially deep learning, is widely applied to a range...

SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing (06/19/2019)
Deep neural networks (DNNs) have achieved great success in various appli...

Differential Evolution based Dual Adversarial Camouflage: Fooling Human Eyes and Object Detectors (10/17/2022)
Recent studies reveal that deep neural network (DNN) based object detect...

Spatially transformed adversarial examples (01/08/2018)
Recent studies show that widely used deep neural networks (DNNs) are vul...

Adversarial Attack and Defense on Graph Data: A Survey (12/26/2018)
Deep neural networks (DNNs) have been widely applied in various applicat...

Generating Adversarial Examples with an Optimized Quality (06/30/2020)
Deep learning models are widely used in a range of application areas, su...

Taking a machine's perspective: Human deciphering of adversarial images (09/11/2018)
How similar is the human mind to the sophisticated machine-learning syst...
