Taking a machine's perspective: Humans can decipher adversarial images

09/11/2018
by Zhenglong Zhou, et al.

How similar is the human mind to the sophisticated machine-learning systems that mirror its performance? Models of object categorization based on convolutional neural networks (CNNs) have achieved human-level benchmarks in assigning known labels to novel images. These advances promise to support transformative technologies such as autonomous vehicles and machine diagnosis; beyond this, they also serve as candidate models for the visual system itself -- not only in their output but perhaps even in their underlying mechanisms and principles. However, unlike human vision, CNNs can be "fooled" by adversarial examples -- carefully crafted images that appear as nonsense patterns to humans but are recognized as familiar objects by machines, or that appear as one object to humans and a different object to machines. This seemingly extreme divergence between human and machine classification challenges the promise of these new advances, both as applied image-recognition systems and also as models of the human mind. Surprisingly, however, little work has empirically investigated human classification of such adversarial stimuli: Does human and machine performance fundamentally diverge? Or could humans decipher such images and predict the machine's preferred labels? Here, we show that human and machine classification of adversarial stimuli are robustly related: In eight experiments on five prominent and diverse adversarial image sets, human subjects reliably identified the machine's chosen label over relevant foils. This pattern persisted for images with strong antecedent identities, and even for images described as "totally unrecognizable to human eyes". We suggest that human intuition may be a more reliable guide to machine (mis)classification than has typically been imagined, and we explore the consequences of this result for minds and machines alike.
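
For readers who want a concrete sense of how such stimuli are made, below is a minimal sketch of one common attack, the fast gradient sign method (FGSM) of Goodfellow et al. This is not the procedure behind the paper's five image sets (those come from several previously published attacks); the pretrained model, the epsilon value, and the sample inputs are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative choice: any pretrained ImageNet classifier would do here.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Shift every pixel slightly in the direction that raises the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: one 224x224 RGB image in [0, 1] and an ImageNet class index.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])
x_adv = fgsm_perturb(x, y)
print((x_adv - x).abs().max())  # per-pixel change is bounded by epsilon
```

FGSM yields the small-perturbation variety of adversarial image; some of the image sets studied in the paper instead come from unconstrained optimization, which produces the nonsense-looking "fooling" patterns the abstract describes.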

Related research:

02/22/2018 -- Adversarial Examples that Fool both Human and Computer Vision
Machine learning models are vulnerable to adversarial examples: small ch...

10/18/2022 -- Why do people judge humans differently from machines? The role of agency and experience
People are known to judge artificial intelligence using a utilitarian mo...

05/19/2018 -- Learning Hierarchical Visual Representations in Deep Neural Networks Using Hierarchical Linguistic Labels
Modern convolutional neural networks (CNNs) are able to achieve human-le...

07/12/2017 -- NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles
It has been shown that most machine learning algorithms are susceptible ...

11/14/2022 -- What Images are More Memorable to Machines?
This paper studies the problem of measuring and predicting how memorable...

05/07/2022 -- Ultra-fast image categorization in vivo and in silico
Humans are able to robustly categorize images and can, for instance, det...
