Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

12/05/2014
by Anh Nguyen, et al.

Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call "fooling images" (more generally, fooling examples). Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
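The abstract names two search procedures for fooling images: evolutionary algorithms and gradient ascent. The sketch below illustrates the gradient-ascent idea only, and is not the authors' code: starting from random static, the pixels themselves are optimized to maximize one class logit of a pretrained ImageNet classifier. The choice of AlexNet, the learning rate, the step count, and the class index 291 ("lion" in the standard ImageNet class ordering) are assumptions for the demo, and input normalization is omitted for brevity.

```python
import torch
import torchvision.models as models

# Pretrained ImageNet classifier (AlexNet is an assumption; the paper
# also studies other networks). Requires torchvision >= 0.13 for `weights`.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

TARGET = 291  # ImageNet class index for "lion" (standard class ordering)

# Start from white-noise static and optimize the image, not the weights.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.SGD([x], lr=0.5)  # demo hyperparameters, not the paper's

for _ in range(200):
    opt.zero_grad()
    logits = model(x)
    (-logits[0, TARGET]).backward()  # ascend the target-class logit
    opt.step()
    with torch.no_grad():
        x.clamp_(0.0, 1.0)  # keep pixels in a valid image range

with torch.no_grad():
    confidence = torch.softmax(model(x), dim=1)[0, TARGET].item()
print(f"DNN confidence that this static is a lion: {confidence:.4f}")
```

The evolutionary variant in the paper works without gradients: it mutates a population of images and keeps those the DNN scores highest for the target class, which is why it also applies to models whose internals are inaccessible.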


Related Research

Comparing deep neural networks against humans: object recognition when the signal gets weaker (06/21/2017)
Human visual object recognition is typically rapid and seemingly effortl...

Sparse Fooling Images: Fooling Machine Perception through Unrecognizable Images (12/07/2020)
In recent years, deep neural networks (DNNs) have achieved equivalent or...

Confusing Deep Convolution Networks by Relabelling (10/23/2015)
Deep convolutional neural networks have become the gold standard for ima...

Deep Neural Networks Abstract Like Humans (05/27/2019)
Deep neural networks (DNNs) have revolutionized AI due to their remarkab...

Image-based Vehicle Analysis using Deep Neural Network: A Systematic Study (01/06/2016)
We address the vehicle detection and classification problems using Deep ...

Generalisation in humans and deep neural networks (08/27/2018)
We compare the robustness of humans and current convolutional deep neura...

Dissociable neural representations of adversarially perturbed images in deep neural networks and the human brain (12/22/2018)
Despite the remarkable similarities between deep neural networks (DNN) a...

Code Repositories

pytorch-cnn-adversarial-attacks

PyTorch implementation of convolutional neural network adversarial attack techniques

