How intelligent are convolutional neural networks?

09/18/2017
by Zhennan Yan, et al.

Motivated by the Gestalt pattern theory and the Winograd Challenge for language understanding, we design synthetic experiments to investigate a deep learning algorithm's ability to infer simple (at least for humans) visual concepts, such as symmetry, from examples. Each visual concept is represented by randomly generated positive and negative example images. We then test the ability and speed of algorithms (and humans) to learn the concept from these images. Training and testing proceed progressively over multiple rounds, with each subsequent round deliberately designed to be more complex and confusing than the previous one(s), especially if the learner has not yet grasped the concept; once the concept is understood, all of these deliberate tests become trivially easy. Our experiments show that humans can often infer a semantic concept quickly after seeing only a very small number of examples (this is often referred to as an "aha moment": a moment of sudden realization), and then perform perfectly across all testing rounds (apart from careless mistakes). In contrast, deep convolutional neural networks (DCNNs) can approximate some concepts statistically, but only after seeing far more examples (on the order of x10^4), and they still make obvious mistakes, especially during the deliberate testing rounds or on samples outside the training distribution. This signals a lack of true "understanding", or a failure to arrive at the right "formula" for the semantics. We did find that some concepts are easier for DCNNs than others: for example, simple "counting" is more learnable than "symmetry", while "uniformity" and "conformance" are much harder for DCNNs to learn. To conclude, we propose an "Aha Challenge" for visual perception, calling for focused and quantitative research on Gestalt-style machine intelligence using limited training examples.
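The abstract does not spell out the image-generation protocol, but a minimal sketch of the kind of synthetic data it describes, assuming binary 32x32 images, left-right mirror symmetry as the target concept, and NumPy as the only dependency (function names and parameters here are illustrative, not the authors'), could look like this:

```python
# Illustrative sketch: balanced positive/negative examples of a "symmetry" concept.
# Assumptions not taken from the paper: binary 32x32 images and vertical-axis mirror symmetry.
import numpy as np


def make_symmetry_example(size=32, positive=True, rng=None):
    """Return one binary image: mirror-symmetric about the vertical axis if
    positive, otherwise unconstrained random noise (re-drawn in the rare case
    it is symmetric by chance)."""
    if rng is None:
        rng = np.random.default_rng()
    left = rng.integers(0, 2, size=(size, size // 2))
    if positive:
        right = left[:, ::-1]                              # mirror the left half
    else:
        right = rng.integers(0, 2, size=(size, size // 2))
        while np.array_equal(right, left[:, ::-1]):        # avoid accidental positives
            right = rng.integers(0, 2, size=(size, size // 2))
    return np.concatenate([left, right], axis=1)


def make_dataset(n=20000, size=32, seed=0):
    """Balanced dataset: images of shape (n, size, size), labels 1 = symmetric, 0 = not."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=n)
    images = np.stack([make_symmetry_example(size, bool(y), rng) for y in labels])
    return images.astype(np.float32), labels.astype(np.int64)


if __name__ == "__main__":
    X, y = make_dataset(n=8)
    print(X.shape, y)   # (8, 32, 32) and an array of 0/1 labels
```

A classifier would then be trained on such image/label pairs, while the progressively harder "deliberate" rounds described above would draw on more complex or out-of-distribution variants of the same concept.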
