Compositional diversity in visual concept learning

by Yanli Zhou et al.

Humans leverage compositionality to efficiently learn new concepts, understanding how familiar parts can combine to form novel objects. In contrast, popular computer vision models struggle to make the same types of inferences, requiring more data and generalizing less flexibly than people do. Here, we study these distinctively human abilities across a range of different types of visual composition, examining how people classify and generate “alien figures” with rich relational structure. We also develop a Bayesian program induction model that searches for the best programs for generating the candidate visual figures, utilizing a large program space containing different compositional mechanisms and abstractions. In few-shot classification tasks, we find that people and the program induction model can make a range of meaningful compositional generalizations, with the model providing a strong account of the experimental data as well as interpretable parameters that reveal human assumptions about the factors invariant to category membership (here, rotation and changes in part attachment). In few-shot generation tasks, both people and the models are able to construct compelling novel examples, with people exhibiting additional structured behaviors beyond the model's capabilities, e.g., making choices that complete a set or reconfiguring existing parts in highly novel ways. To capture these additional behavioral patterns, we develop an alternative model based on neuro-symbolic program induction: this model also composes new concepts from existing parts yet, distinctively, it utilizes neural network modules to capture residual statistical structure. Together, our behavioral and computational findings show how people and models can produce a rich variety of compositional behavior when classifying and generating visual objects.
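The Bayesian program induction approach described above can be illustrated with a toy sketch. Everything here is a simplifying assumption for illustration, not the paper's actual model: figures are reduced to sets of (part, rotation) tokens, "programs" are tuples of such tokens, the prior penalizes description length, and rotation invariance is hard-coded rather than inferred. The sketch shows the core loop: enumerate candidate generative programs, score each by prior times likelihood over the few-shot examples, and normalize to a posterior.

```python
import itertools
import math

# Hypothetical primitive inventory for toy "alien figures".
PARTS = ["stem", "loop", "fork"]
ROTATIONS = [0, 90, 180, 270]

def generate(program):
    """Render a program (tuple of (part, rotation) pairs) as a figure.
    Rendering is trivially the set of its tokens in this toy version."""
    return frozenset(program)

def prior(program):
    # Description-length prior: longer programs are exponentially less likely.
    return math.exp(-len(program))

def likelihood(example, program, invariant_to_rotation=True):
    # If the category is assumed rotation-invariant, compare parts only;
    # otherwise require rotations to match as well.
    if invariant_to_rotation:
        strip = lambda fig: frozenset(p for p, _ in fig)
    else:
        strip = lambda fig: fig
    return 1.0 if strip(example) == strip(generate(program)) else 1e-6

def posterior_scores(examples, max_parts=2):
    """Enumerate the program space and return normalized posterior scores."""
    scored = []
    for k in range(1, max_parts + 1):
        for combo in itertools.combinations(PARTS, k):
            for rots in itertools.product(ROTATIONS, repeat=k):
                prog = tuple(zip(combo, rots))
                score = prior(prog)
                for ex in examples:
                    score *= likelihood(ex, prog)
                scored.append((prog, score))
    total = sum(s for _, s in scored)
    return [(prog, s / total) for prog, s in scored]

# One observed example: a stem plus a rotated loop.
examples = [frozenset({("stem", 0), ("loop", 90)})]
best_program, _ = max(posterior_scores(examples), key=lambda ps: ps[1])
```

Under the rotation-invariant likelihood, every rotation variant of the {stem, loop} program explains the example equally well, so the posterior concentrates on that part combination; this is the sense in which interpretable parameters (here, the invariance assumption) shape which generalizations the model makes.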


