Comparing deep neural networks against humans: object recognition when the signal gets weaker

06/21/2017
by Robert Geirhos et al.

Human visual object recognition is typically rapid and seemingly effortless, as well as largely independent of viewpoint and object orientation. Until very recently, animate visual systems were the only ones capable of this remarkable computational feat. This has changed with the rise of a class of computer vision algorithms called deep neural networks (DNNs) that achieve human-level classification performance on object recognition tasks. Furthermore, a growing number of studies report similarities in the way DNNs and the human visual system process objects, suggesting that current DNNs may be good models of human visual object recognition. Yet there clearly exist important architectural and processing differences between state-of-the-art DNNs and the primate visual system. The potential behavioural consequences of these differences are not well understood. We aim to address this issue by comparing human and DNN generalisation abilities under image degradations. We find the human visual system to be more robust to image manipulations such as contrast reduction, additive noise or novel eidolon distortions. In addition, we find progressively diverging classification error patterns between humans and DNNs when the signal gets weaker, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition. We envision that our findings, as well as our carefully measured and freely available behavioural datasets, provide a new, useful benchmark for the computer vision community to improve the robustness of DNNs, and a motivation for neuroscientists to search for mechanisms in the brain that could facilitate this robustness.
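To make the kind of image degradation mentioned above concrete, the sketch below is a minimal, hypothetical illustration (not the authors' code) of two of the manipulations named in the abstract, contrast reduction and additive uniform noise, applied to a greyscale image before it would be shown to a human observer or fed to a pretrained DNN. The helper name degrade_image, the file name example.png, and the parameter values are assumptions for illustration; the eidolon distortions and the exact stimulus parameters used in the study are described in the full text.

```python
# Minimal sketch (assumed, not the authors' pipeline): contrast reduction and
# additive uniform noise on a greyscale image with pixel values in [0, 1].
import numpy as np
from PIL import Image

def degrade_image(img, contrast=1.0, noise_width=0.0, rng=None):
    """Reduce contrast towards mid-grey and add uniform per-pixel noise.

    img         : greyscale image as a float array in [0, 1]
    contrast    : 1.0 keeps the original contrast, 0.0 collapses to mid-grey
    noise_width : half-width of the uniform noise added to each pixel
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(img, dtype=np.float64)
    x = 0.5 + contrast * (x - 0.5)                                # contrast reduction about mid-grey
    x = x + rng.uniform(-noise_width, noise_width, size=x.shape)  # additive noise
    return np.clip(x, 0.0, 1.0)                                   # keep pixels in the valid range

if __name__ == "__main__":
    # "example.png" is a placeholder image path, not a file from the study.
    img = np.asarray(Image.open("example.png").convert("L"), dtype=np.float64) / 255.0
    for c, w in [(1.0, 0.0), (0.3, 0.0), (1.0, 0.2), (0.3, 0.2)]:
        degraded = degrade_image(img, contrast=c, noise_width=w)
        # The degraded image would then be classified by humans or a pretrained DNN.
        print(f"contrast={c}, noise_width={w}: mean pixel {degraded.mean():.3f}")
```

Weakening the signal corresponds to lowering contrast or widening noise_width; the paper's comparison tracks how human and DNN classification accuracy and error patterns change along such degradation levels.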


