Visual Concept-Metaconcept Learning

02/04/2020
by Chi Han, et al.

Humans reason with concepts and metaconcepts: we recognize red and green from visual input; we also understand that they describe the same property of objects (i.e., color). In this paper, we propose the Visual Concept-Metaconcept Learner (VCML) for joint learning of concepts and metaconcepts from images and associated question-answer pairs. The key is to exploit the bidirectional connection between visual concepts and metaconcepts. Visual representations provide grounding cues for predicting relations between unseen pairs of concepts: knowing that red and green describe the same property of objects, we generalize to the fact that cube and sphere also describe the same property of objects, since both categorize shape. Meanwhile, knowledge about metaconcepts empowers visual concept learning from limited, noisy, and even biased data: from just a few examples of purple cubes, we can learn a new color, purple, as resembling the hue of the cubes rather than their shape. Evaluation on both synthetic and real-world datasets validates our claims.
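To make the core idea concrete, here is a minimal illustrative sketch, not the authors' VCML implementation: concepts are represented as embedding vectors, and a metaconcept such as "same_kind" (two concepts describe the same property of objects) is predicted from embedding geometry, so a relation learned for seen pairs like (red, green) can generalize to unseen pairs like (cube, sphere). All names and embedding values below are invented for illustration.

```python
# Illustrative sketch (not the authors' code): predict the metaconcept
# "same_kind" from the geometry of learned concept embeddings.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings: color concepts cluster along the first two axes,
# shape concepts along the last two (values made up for illustration).
embeddings = {
    "red":    [0.9, 0.1, 0.0, 0.0],
    "green":  [0.8, 0.2, 0.0, 0.0],
    "cube":   [0.0, 0.0, 0.9, 0.2],
    "sphere": [0.0, 0.0, 0.7, 0.3],
}

def same_kind(a, b, threshold=0.5):
    """Predict the metaconcept 'a and b describe the same property'."""
    return cosine(embeddings[a], embeddings[b]) > threshold
```

Under this toy geometry, `same_kind("red", "green")` and `same_kind("cube", "sphere")` both hold, while `same_kind("red", "cube")` does not; the point is only that metaconcept relations can be read off from, and in turn constrain, the shared visual embedding space.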

