Adversarial Manipulation of Deep Representations

11/16/2015
by Sara Sabour, et al.

We show that the representations of images in a deep neural network (DNN) can be manipulated to mimic those of other natural images, with only minor, imperceptible perturbations to the original image. Previous methods for generating adversarial images focused on image perturbations designed to produce erroneous class labels, whereas we concentrate on the internal layers of DNN representations; in this way, our new class of adversarial images differs qualitatively from others. While the adversary is perceptually similar to one image, its internal representation appears remarkably similar to that of a different image, one from a different class, bearing little if any apparent similarity to the input. Moreover, these adversarial representations appear generic and consistent with the space of natural images. This phenomenon raises questions about DNN representations, as well as the properties of natural images themselves.
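The optimization the abstract describes can be sketched as projected gradient descent: minimize the distance between the perturbed image's internal representation and that of a guide image, while keeping the perturbation inside a small L-infinity ball around the source. A minimal sketch, with the caveat that the toy linear map `phi` below is a hypothetical stand-in for a trained DNN layer (the paper uses representations from real convolutional networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an internal DNN layer: a fixed random linear map.
# (Assumption for illustration; the paper uses layers of a trained CNN.)
W = rng.standard_normal((32, 64))
def phi(x):
    return W @ x

source = rng.uniform(0.0, 1.0, 64)   # image whose pixels we perturb
guide = rng.uniform(0.0, 1.0, 64)    # image whose representation we mimic
delta = 0.1                          # L-infinity budget: perturbation stays imperceptible

target_rep = phi(guide)
x = source.copy()
for _ in range(500):
    # Gradient of ||phi(x) - phi(guide)||^2 with respect to x
    grad = 2.0 * W.T @ (phi(x) - target_rep)
    x = x - 0.001 * grad
    # Project back into the L-infinity ball around the source image
    x = np.clip(x, source - delta, source + delta)

gap_before = np.linalg.norm(phi(source) - target_rep)
gap_after = np.linalg.norm(phi(x) - target_rep)
print(gap_before, gap_after)                 # representation gap shrinks
print(np.abs(x - source).max())              # perturbation respects the budget
```

The key point the sketch illustrates is that the objective lives entirely in representation space, while the constraint lives in pixel space, which is why the result can look like the source image yet encode like the guide.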


Related research

03/16/2018  Semantic Adversarial Examples
Deep neural networks are known to be vulnerable to adversarial examples, ...

03/13/2019  Aesthetics of Neural Network Art
This paper proposes a way to understand neural network artworks as juxta...

10/16/2019  A New Defense Against Adversarial Images: Turning a Weakness into a Strength
Natural images are virtually surrounded by low-density misclassified reg...

06/19/2014  Why are images smooth?
It is a well observed phenomenon that natural images are smooth, in the ...

06/15/2022  Disentangling visual and written concepts in CLIP
The CLIP network measures the similarity between natural text and images...

03/15/2021  Understanding invariance via feedforward inversion of discriminatively trained classifiers
A discriminatively trained neural net classifier achieves optimal perfor...

04/17/2019  Adversarial Defense Through Network Profiling Based Path Extraction
Recently, researchers have started decomposing deep neural network model...
