Adversarial Examples that Fool both Human and Computer Vision

by Gamaleldin F. Elsayed et al.

Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we create the first adversarial examples designed to fool humans, by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by modifying models to more closely match the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.
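The small image changes the abstract refers to are typically computed by following the gradient of the model's loss with respect to the input. As a minimal sketch of that idea, the fast gradient sign method (FGSM) applied to a toy logistic-regression "classifier" is shown below; the paper's actual attacks are built on deep vision models, and all weights, dimensions, and the epsilon value here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch: FGSM on a toy logistic-regression model.
# The model, its weights, and eps are illustrative assumptions only.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps in the direction that increases the loss.

    For a logistic model with cross-entropy loss, the gradient of the
    loss with respect to the input is (p - y) * w, so the signed
    perturbation is eps * sign((p - y) * w).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)       # toy "image classifier" weights
b = 0.0
x = rng.normal(size=16)       # toy input "image"

# Attack the model's own predicted label, as in an untargeted attack.
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.25)

# The perturbation is bounded elementwise by eps, and the model's
# confidence in its original label strictly decreases.
print(np.max(np.abs(x_adv - x)))
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

Because this toy model is linear in its input, the step provably lowers the confidence in the original label; the abstract's point is that such perturbations, when crafted to transfer across several deep models, also begin to influence time-limited human judgments.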


