Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates

11/18/2017
by Guillem Collell, et al.

Spatial understanding is a fundamental problem with wide-reaching real-world applications. The representation of spatial knowledge is often modeled with spatial templates, i.e., regions of acceptability of two objects under an explicit spatial relationship (e.g., "on", "below", etc.). In contrast with prior work that restricts spatial templates to explicit spatial prepositions (e.g., "glass on table"), here we extend this concept to implicit spatial language, i.e., those relationships (generally actions) for which the spatial arrangement of the objects is only implied (e.g., "man riding horse"). Unlike explicit relationships, predicting spatial arrangements from implicit spatial language requires significant common-sense spatial understanding. Here, we introduce the task of predicting spatial templates for two objects under a relationship, which can be seen as a spatial question-answering task with a (2D) continuous output ("where is the man w.r.t. a horse when the man is walking the horse?"). We present two simple neural-based models that leverage annotated images and structured text to learn this task. The good performance of these models reveals that spatial locations are to a large extent predictable from implicit spatial language. Crucially, the models attain similar performance in a challenging generalized setting, where the object-relation-object combinations (e.g., "man walking dog") have never been seen before. Next, we go one step further by presenting the models with unseen objects (e.g., "dog"). In this scenario, we show that leveraging word embeddings enables the models to output accurate spatial predictions, demonstrating that the models acquire solid common-sense spatial knowledge that allows for such generalization.
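The task described above can be sketched in code. This is a minimal illustration, not the authors' exact architecture: it assumes random stand-in word embeddings (a trained system would use pretrained vectors such as GloVe, plus learned weights), and the names `predict_location` and `init_mlp` are hypothetical. The point is the input/output contract: a (subject, relation, object) word triple maps to a continuous 2D point, the predicted position of the subject relative to the object.

```python
import numpy as np

# Toy vocabulary of word embeddings. These are random stand-ins for
# real pretrained vectors; the dimension is chosen arbitrarily.
rng = np.random.default_rng(0)
DIM = 8
vocab = {w: rng.standard_normal(DIM)
         for w in ["man", "horse", "dog", "riding", "walking"]}

def init_mlp(in_dim, hidden, out_dim, rng):
    """Random (untrained) weights for a one-hidden-layer MLP."""
    return {
        "W1": rng.standard_normal((in_dim, hidden)) * 0.1,
        "b1": np.zeros(hidden),
        "W2": rng.standard_normal((hidden, out_dim)) * 0.1,
        "b2": np.zeros(out_dim),
    }

def predict_location(params, subj, rel, obj):
    """Map a (subject, relation, object) triple to a 2D point:
    the predicted center of the subject relative to the object.
    Because inputs are word embeddings, the model can also be
    queried with combinations never seen during training."""
    x = np.concatenate([vocab[subj], vocab[rel], vocab[obj]])
    h = np.tanh(x @ params["W1"] + params["b1"])   # hidden layer
    return h @ params["W2"] + params["b2"]          # (x, y) output

params = init_mlp(3 * DIM, 16, 2, rng)
point = predict_location(params, "man", "walking", "dog")
print(point.shape)  # (2,)
```

With trained weights and real embeddings, the 2D outputs over many queries would trace out the spatial template for a relationship; here the untrained forward pass only demonstrates the shape of the task.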

Related research

- 07/19/2020, Understanding Spatial Relations through Multiple Modalities: Recognizing spatial relations and reasoning about them is essential in m...
- 11/30/2020, Deep Implicit Templates for 3D Shape Representation: Deep implicit functions (DIFs), as a kind of 3D shape representation, ar...
- 08/18/2023, Towards Grounded Visual Spatial Reasoning in Multi-Modal Vision Language Models: With the advances in large scale vision-and-language models (VLMs) it is...
- 11/08/2019, Why Do Masked Neural Language Models Still Need Common Sense Knowledge?: Currently, contextualized word representations are learned by intricate ...
- 09/08/2021, SORNet: Spatial Object-Centric Representations for Sequential Manipulation: Sequential manipulation tasks require a robot to perceive the state of a...
- 05/18/2023, TEPrompt: Task Enlightenment Prompt Learning for Implicit Discourse Relation Recognition: Implicit Discourse Relation Recognition (IDRR) aims at classifying the r...
- 11/04/2019, A Model for Spatial Outlier Detection Based on Weighted Neighborhood Relationship: Spatial outliers are used to discover inconsistent objects producing imp...
