Enabling Robots to Draw and Tell: Towards Visually Grounded Multimodal Description Generation

01/14/2021
by Ting Han, et al.

Socially competent robots should be equipped with the ability to perceive the world that surrounds them and to communicate about it in a human-like manner. Representative skills that exhibit such ability include generating image descriptions and visually grounded referring expressions. In the NLG community, these generation tasks have largely been investigated in non-interactive and language-only settings. In face-to-face interaction, however, humans often deploy multiple modalities to communicate, seamlessly integrating natural language, hand gestures, and other modalities such as sketches. To enable robots to describe what they perceive with speech and sketches/gestures, we propose to model the task of generating natural language together with free-hand sketches/hand gestures to describe visual scenes and real-life objects, namely, visually grounded multimodal description generation. In this paper, we discuss the challenges and evaluation metrics of the task, and how the task can benefit from recent progress in natural language processing and computer vision, where related topics such as visually grounded NLG, distributional semantics, and photo-based sketch generation have been extensively studied.
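To make the proposed task concrete, the snippet below gives a minimal sketch of one plausible architecture: a language decoder and a stroke-based sketch decoder conditioned on a shared, visually grounded scene embedding. This is a hypothetical illustration, not the paper's method; the class name, all dimensions, and the (dx, dy, pen-state) stroke format (borrowed from photo-based sketch-generation work such as Sketch-RNN) are assumptions, and PyTorch is assumed as the framework.

```python
# Hypothetical sketch of a visually grounded multimodal describer:
# one visual encoder conditioning two decoders, one for words and
# one for sketch/gesture strokes. Names and sizes are illustrative.
import torch
import torch.nn as nn

class MultimodalDescriber(nn.Module):
    def __init__(self, vis_dim=512, hid_dim=256, vocab_size=10000, stroke_dim=5):
        super().__init__()
        # Project pooled visual features into a shared scene embedding.
        self.visual_proj = nn.Linear(vis_dim, hid_dim)
        # Language decoder: autoregressive GRU over word embeddings.
        self.word_emb = nn.Embedding(vocab_size, hid_dim)
        self.text_decoder = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.word_head = nn.Linear(hid_dim, vocab_size)
        # Sketch/gesture decoder: predicts (dx, dy, pen-state) tuples,
        # a stroke format assumed from sketch-generation literature.
        self.stroke_decoder = nn.GRU(stroke_dim, hid_dim, batch_first=True)
        self.stroke_head = nn.Linear(hid_dim, stroke_dim)

    def forward(self, visual_feats, word_ids, strokes):
        # visual_feats: (B, vis_dim); word_ids: (B, T_text);
        # strokes: (B, T_sketch, stroke_dim)
        scene = torch.tanh(self.visual_proj(visual_feats)).unsqueeze(0)  # (1, B, H)
        # Both decoders start from the same grounded scene embedding,
        # so words and strokes describe the same visual content.
        text_h, _ = self.text_decoder(self.word_emb(word_ids), scene)
        sketch_h, _ = self.stroke_decoder(strokes, scene)
        return self.word_head(text_h), self.stroke_head(sketch_h)

# Usage with dummy tensors:
model = MultimodalDescriber()
word_logits, stroke_preds = model(
    torch.randn(2, 512),                 # pooled image features
    torch.randint(0, 10000, (2, 12)),    # partial word sequence
    torch.randn(2, 30, 5),               # partial stroke sequence
)
```

Conditioning both decoders on a single grounded embedding is one simple way to keep the verbal and sketched/gestured parts of a description aligned to the same visual content; the paper itself leaves the concrete modeling approach open.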

Related research

06/12/2018
iParaphrasing: Extracting Visually Grounded Paraphrases via an Image
A paraphrase is a restatement of the meaning of a text in other words. P...

04/14/2019
UR-FUNNY: A Multimodal Language Dataset for Understanding Humor
Humor is a unique and creative communicative behavior displayed during s...

10/07/2020
Towards Understanding Sample Variance in Visually Grounded Language Generation: Evaluations and Observations
A major challenge in visually grounded language generation is to build r...

06/30/2011
Grounded Semantic Composition for Visual Scenes
We present a visually-grounded language understanding model based on a s...

07/29/2020
Presentation and Analysis of a Multimodal Dataset for Grounded Language Learning
Grounded language acquisition – learning how language-based interactions...

09/21/2019
Visually Grounded Generation of Entailments from Premises
Natural Language Inference (NLI) is the task of determining the semantic...

02/04/2019
Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality
In collaborative tasks, people rely both on verbal and non-verbal cues s...
