Generating Band-Limited Adversarial Surfaces Using Neural Networks

11/14/2021
by Roee Ben Shlomo, et al.

Generating adversarial examples is the art of crafting noise that, when added to the input of a classifying neural network, changes the network's prediction while keeping the noise as subtle as possible. While the subject is well researched in the 2D regime, it lags behind in the 3D regime, i.e. attacking a classifier that operates on 3D point clouds or meshes and, for example, classifies the pose of people's 3D scans. As of now, the vast majority of papers describing adversarial attacks in this regime rely on optimization. In this technical report we propose a neural network that generates the attacks. The network builds on PointNet's architecture with some alterations. Whereas the previous articles on which we based our work must optimize each shape separately, i.e. tailor an attack from scratch for each individual input without any learning, we attempt to create a unified model that can produce the needed adversarial example with a single forward pass.
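To make the two ingredients of the abstract concrete, here is a minimal NumPy sketch, under assumptions of our own: a toy PointNet-style generator (shared per-point MLP, symmetric max-pooling, random untrained weights standing in for learned ones) that emits a per-point perturbation in a single forward pass, and a `band_limit` helper that projects that perturbation onto the lowest-frequency eigenvectors of a k-NN graph Laplacian as a simple stand-in for the band-limited constraint. All function names, layer sizes, and the graph construction are illustrative, not the paper's actual architecture.

```python
import numpy as np


def pointnet_generator(points, rng):
    """Toy PointNet-style generator: point cloud (N, 3) -> offsets (N, 3).

    Weights are random placeholders; in the actual method they would be
    learned so that one forward pass yields an adversarial perturbation.
    """
    n, d = points.shape
    h = 64
    w1 = rng.standard_normal((d, h)) * 0.1        # shared per-point MLP
    w2 = rng.standard_normal((h, h)) * 0.1
    feat = np.tanh(np.tanh(points @ w1) @ w2)     # (N, h) local features
    global_feat = feat.max(axis=0)                # (h,) symmetric max-pool
    # concatenate local and global features, decode to a 3-D offset
    cat = np.concatenate([feat, np.tile(global_feat, (n, 1))], axis=1)
    w3 = rng.standard_normal((2 * h, d)) * 0.01
    return cat @ w3                               # (N, 3) perturbation


def band_limit(delta, points, knn=8, k=16):
    """Low-pass the perturbation: keep only its components along the k
    lowest-frequency eigenvectors of a k-NN graph Laplacian."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:knn + 1]    # neighbors, skipping self
    n = len(points)
    a = np.zeros((n, n))
    a[np.arange(n)[:, None], idx] = 1.0
    a = np.maximum(a, a.T)                        # symmetrize adjacency
    lap = np.diag(a.sum(1)) - a                   # combinatorial Laplacian
    _, u = np.linalg.eigh(lap)                    # eigenvectors, ascending
    u_low = u[:, :k]
    return u_low @ (u_low.T @ delta)              # spectral projection


rng = np.random.default_rng(0)
cloud = rng.standard_normal((128, 3))
delta = band_limit(pointnet_generator(cloud, rng), cloud)
adversarial = cloud + delta                       # smooth, band-limited attack
```

The projection is what keeps the attack "band-limited": high-frequency spikes in the raw offsets are discarded, so the perturbed surface stays smooth.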


