Adversarial Attacks Beyond the Image Space

11/20/2017
by Xiaohui Zeng, et al.

Generating adversarial examples is an intriguing problem and an important way of understanding the working mechanism of deep neural networks. Recently, it has attracted a lot of attention in the computer vision community. Most existing approaches generate perturbations in image space, i.e., each pixel can be modified independently. However, it remains unclear whether these adversarial examples are authentic, in the sense that they correspond to actual changes in physical properties. This paper explores this question in the contexts of object classification and visual question answering. The baselines are several state-of-the-art deep neural networks that receive 2D input images. We augment these networks with a differentiable 3D rendering layer in front, so that a 3D scene (in physical space) is rendered into a 2D image (in image space) and then mapped to a prediction (in output space). There are two ways of attacking the physical parameters, direct and indirect. The former back-propagates the gradients of the error signal from output space to physical space directly, while the latter first constructs an adversary in image space and then searches for the physical-space configuration whose rendering best matches that image. An important finding is that attacking physical space is much more difficult: compared with attacks in image space, the direct method achieves a much lower success rate and requires heavier perturbations. On the other hand, the indirect method fails to find physical configurations that reproduce the image-space adversaries, suggesting that adversaries generated in image space are inauthentic. By interpreting them in physical space, most of these adversaries can be filtered out, showing promise for defending against adversarial attacks.
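
For intuition, here is a minimal, hypothetical sketch of the direct physical-space attack described in the abstract, written in PyTorch style. It assumes a differentiable renderer `render(params)` that maps physical scene parameters to a 2D image and a pretrained classifier `model(image)`; these names, the optimizer choice, and the perturbation budget `eps` are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch (not the authors' code): a "direct" attack that
# back-propagates the classification loss through a differentiable renderer
# into the physical scene parameters.
import torch
import torch.nn.functional as F

def direct_physical_attack(scene_params, render, model, target_class,
                           steps=100, lr=1e-2, eps=0.05):
    """Perturb physical parameters (e.g., surface normals, illumination,
    material coefficients) so that the rendered image is classified as
    `target_class` (a LongTensor of class indices)."""
    original = scene_params.detach()
    delta = torch.zeros_like(original, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        image = render(original + delta)   # physical space -> image space
        logits = model(image)              # image space  -> output space
        # Targeted attack: minimize cross-entropy toward the target class,
        # letting gradients flow through the renderer into `delta`.
        loss = F.cross_entropy(logits, target_class)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)        # keep the physical perturbation small

    return (original + delta).detach()
```

The indirect method described in the abstract would instead fix an image-space adversary first and then optimize the physical parameters so that the rendered image reconstructs it; the paper reports that this reconstruction generally fails, which is the basis for the defense it suggests.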


Related research

11/27/2020
Robust and Natural Physical Adversarial Examples for Object Detectors
Recently, many studies show that deep neural networks (DNNs) are suscept...

08/08/2018
Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer
Many machine learning image classifiers are vulnerable to adversarial at...

12/24/2020
Exploring Adversarial Examples via Invertible Neural Networks
Adversarial examples (AEs) are images that can mislead deep neural netwo...

09/28/2022
A Survey on Physical Adversarial Attack in Computer Vision
In the past decade, deep learning has dramatically changed the tradition...

02/17/2020
On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples
The increasing use of deep neural networks (DNNs) has motivated a parall...

11/26/2020
Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect
Physical adversarial examples for camera-based computer vision have so f...
