Generalisation in humans and deep neural networks
We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations. First, using three well-known DNNs (ResNet-152, VGG-19, GoogLeNet) we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error patterns between humans and DNNs as the signal gets weaker. Second, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet display extremely poor generalisation abilities when tested on other distortion types. For example, training on salt-and-pepper noise does not imply robustness to uniform white noise, and vice versa. Thus, changes in the noise distribution between training and testing constitute a crucial challenge to deep learning vision systems, one that can be systematically addressed in a lifelong machine learning approach. Our new dataset, consisting of 83K carefully measured human psychophysical trials, provides a useful reference for lifelong robustness against image degradations set by the human visual system.
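To make the train/test distortion mismatch concrete, the following is a minimal sketch (not the authors' code) of applying two of the degradation types named in the abstract, salt-and-pepper noise and additive uniform noise, to an image and classifying the result with a pretrained ResNet-152. It assumes PyTorch and torchvision; the input file `cat.jpg` and the noise parameters `p=0.2` and `width=0.6` are purely illustrative.

```python
# Sketch: classify an image under two of the paper's distortion types.
# Assumes torchvision >= 0.13 (for the ResNet152_Weights API).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def salt_and_pepper(img: torch.Tensor, p: float) -> torch.Tensor:
    """Set a random fraction p of pixel locations to black or white (50/50)."""
    mask = torch.rand(img.shape[-2:])             # one draw per pixel location
    out = img.clone()
    out[..., mask < p / 2] = 0.0                  # pepper (all channels)
    out[..., (mask >= p / 2) & (mask < p)] = 1.0  # salt (all channels)
    return out

def uniform_noise(img: torch.Tensor, width: float) -> torch.Tensor:
    """Add i.i.d. uniform noise in [-width/2, width/2], clipped to [0, 1]."""
    noise = (torch.rand_like(img) - 0.5) * width
    return (img + noise).clamp(0.0, 1.0)

model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1).eval()
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

img = preprocess(Image.open("cat.jpg").convert("RGB"))  # hypothetical input
for name, distorted in [("salt-and-pepper", salt_and_pepper(img, p=0.2)),
                        ("uniform", uniform_noise(img, width=0.6))]:
    with torch.no_grad():
        logits = model(normalize(distorted).unsqueeze(0))
    print(name, "-> top ImageNet class index:", logits.argmax(dim=1).item())
```

A model fine-tuned on images distorted by one of these functions would, per the paper's finding, perform well on that same function at test time yet may still misclassify images distorted by the other.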