Sparse Fooling Images: Fooling Machine Perception through Unrecognizable Images

12/07/2020
by Soichiro Kumano, et al.

In recent years, deep neural networks (DNNs) have matched or even surpassed human accuracy on various recognition tasks. However, there exist images that lead DNNs to completely wrong decisions even though humans never misinterpret them. Fooling images are one such class: they are not recognizable as natural objects such as dogs or cats, yet DNNs classify them into particular classes with high confidence scores. In this paper, we propose a new class of fooling images, sparse fooling images (SFIs), which are single-color images in which only a small number of pixels are altered. Unlike existing fooling images, which retain some characteristic features of natural objects, SFIs have no local or global features recognizable to humans; nevertheless, in machine perception (i.e., to DNN classifiers), SFIs are recognized as natural objects and assigned to certain classes with high confidence scores. We propose two methods for generating SFIs under different settings (semiblack-box and white-box). We also experimentally demonstrate the vulnerability of DNNs through out-of-distribution detection, and compare three architectures in terms of their robustness against SFIs. This study raises questions about the structure and robustness of CNNs and discusses differences between human and machine perception.
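To make the construction concrete, the following is a minimal sketch of how an SFI candidate could be built and searched for in a query-only (semiblack-box) setting: start from a uniform single-color image, alter a handful of pixels, and keep the candidate that maximizes a classifier's confidence. This is an illustrative assumption, not the paper's actual algorithm; `score_fn` stands in for a DNN classifier's confidence for some target class, and the random-search loop is only one simple way to use such queries.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sfi_candidate(base_color, n_pixels, shape=(32, 32, 3), rng=rng):
    """Build a single-color image with n_pixels altered (an SFI candidate)."""
    img = np.full(shape, base_color, dtype=np.float32)  # uniform background
    h, w, _ = shape
    # pick n_pixels distinct locations and give them random colors
    idx = rng.choice(h * w, size=n_pixels, replace=False)
    ys, xs = np.unravel_index(idx, (h, w))
    img[ys, xs] = rng.uniform(0.0, 1.0, size=(n_pixels, 3)).astype(np.float32)
    return img, (ys, xs)

def search_sfi(score_fn, base_color=0.5, n_pixels=10, iters=200, rng=rng):
    """Semiblack-box-style random search: query score_fn (no gradients) and
    keep the candidate image with the highest confidence score."""
    best_img, best_score = None, -np.inf
    for _ in range(iters):
        img, _ = make_sfi_candidate(base_color, n_pixels, rng=rng)
        s = score_fn(img)
        if s > best_score:
            best_img, best_score = img, s
    return best_img, best_score
```

In practice `score_fn` would wrap a trained classifier (e.g., the softmax probability of a target class); a toy stand-in such as `lambda im: float(im.mean())` suffices to exercise the loop. A white-box variant would instead follow the gradient of the confidence with respect to the chosen pixels.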


Related research

- Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images (12/05/2014)
- Minimal Images in Deep Neural Networks: Fragile Object Recognition in Natural Images (02/08/2019)
- Divergences in Color Perception between Deep Neural Networks and Humans (09/11/2023)
- Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study (05/07/2019)
- The Notorious Difficulty of Comparing Human and Machine Perception (04/20/2020)
- Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects (11/28/2018)
- Differences between human and machine perception in medical diagnosis (11/28/2020)
