Messing Up 3D Virtual Environments: Transferable Adversarial 3D Objects

09/17/2021
by   Enrico Meloni, et al.
In the last few years, the scientific community has shown a remarkable and increasing interest in 3D Virtual Environments, training and testing Machine Learning-based models in realistic virtual worlds. On one hand, these environments could also become a means to study the weaknesses of Machine Learning algorithms, or to simulate training settings that allow Machine Learning models to gain robustness to 3D adversarial attacks. On the other hand, their growing popularity might also attract those who aim at creating adversarial conditions to invalidate the benchmarking process, especially in the case of public environments that accept contributions from a large community of people. Most existing Adversarial Machine Learning approaches focus on static images, and little work has been done on 3D environments or on how a 3D object should be altered to fool a classifier that observes it. In this paper, we study how to craft adversarial 3D objects by altering their textures, using a tool chain composed of easily accessible elements. We show that it is possible, and indeed simple, to create adversarial objects using off-the-shelf, limited surrogate renderers that can compute gradients with respect to the parameters of the rendering process, and, to a certain extent, to transfer the attacks to more advanced 3D engines. We propose a saliency-based attack that intersects the two classes of renderers in order to focus the alterations on those texture elements that are estimated to be effective in the target engine, and we evaluate its impact on popular neural classifiers.
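The core idea described above can be sketched with a toy example. The block below is a hypothetical, minimal illustration (not the authors' pipeline): a linear map stands in for the differentiable surrogate renderer, a linear model stands in for the observing classifier, and a single FGSM-style ascent step perturbs the texture, masked to the most salient texels (those with the largest loss-gradient magnitude), echoing the saliency-based attack.

```python
import numpy as np

# Toy sketch of a texture attack through a differentiable surrogate renderer.
# All names, shapes, and the linear "renderer" are illustrative assumptions.

rng = np.random.default_rng(0)

def render(texture, W_render):
    """Toy differentiable surrogate renderer: linear map from texture to image."""
    return W_render @ texture

def logits_fn(image, W_cls):
    """Toy linear classifier observing the rendered image."""
    return W_cls @ image

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(texture, W_render, W_cls, label):
    """Loss of the true label for the rendered, classified texture."""
    p = softmax(logits_fn(render(texture, W_render), W_cls))
    return -np.log(p[label])

def saliency_fgsm_attack(texture, W_render, W_cls, label, eps=0.05, top_k=32):
    """One FGSM-style ascent step on the texture, restricted to the top_k
    texels with the largest gradient magnitude (the 'salient' elements)."""
    p = softmax(logits_fn(render(texture, W_render), W_cls))
    grad_logits = p.copy()
    grad_logits[label] -= 1.0                        # d(loss)/d(logits)
    grad_tex = W_render.T @ (W_cls.T @ grad_logits)  # chain rule back to texture
    mask = np.zeros_like(texture)
    mask[np.argsort(np.abs(grad_tex))[-top_k:]] = 1.0  # keep salient texels only
    return np.clip(texture + eps * np.sign(grad_tex) * mask, 0.0, 1.0)
```

Because the toy loss is convex in the texture, a single masked sign-gradient step cannot decrease it, so the perturbed texture is at least as "adversarial" as the original; in the paper's setting the renderer and classifier are nonlinear, and the attack is additionally transferred from the surrogate renderer to a more advanced 3D engine.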

Related research

09/08/2018 — On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks
Transferability captures the ability of an attack against a machine-lear...

04/24/2023 — Evaluating Adversarial Robustness on Document Image Classification
Adversarial attacks and defenses have gained increasing interest on comp...

02/21/2020 — Adversarial Attacks on Machine Learning Systems for High-Frequency Trading
Algorithmic trading systems are often completely automated, and deep lea...

06/01/2023 — Adversarial-Aware Deep Learning System based on a Secondary Classical Machine Learning Verification Approach
Deep learning models have been used in creating various effective image ...

07/16/2020 — SAILenv: Learning in Virtual Visual Environments Made Simple
Recently, researchers in Machine Learning algorithms, Computer Vision sc...

04/13/2022 — Stealing Malware Classifiers and AVs at Low False Positive Conditions
Model stealing attacks have been successfully used in many machine learn...

02/19/2023 — X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection
Adversarial attacks are valuable for evaluating the robustness of deep l...
