Enabling Robots to Draw and Tell: Towards Visually Grounded Multimodal Description Generation

01/14/2021
by Ting Han, et al.

Socially competent robots should be able to perceive the world around them and communicate about it in a human-like manner. Representative skills that exhibit this ability include generating image descriptions and visually grounded referring expressions. In the NLG community, these generation tasks have largely been investigated in non-interactive, language-only settings. In face-to-face interaction, however, humans often deploy multiple modalities to communicate, seamlessly integrating natural language, hand gestures, and other modalities such as sketches. To enable robots to describe what they perceive with speech and sketches/gestures, we propose to model the task of generating natural language together with free-hand sketches/hand gestures to describe visual scenes and real-life objects, namely, visually grounded multimodal description generation. In this paper, we discuss the challenges and evaluation metrics of the task, and how it can benefit from recent progress in natural language processing and computer vision, where related topics such as visually grounded NLG, distributional semantics, and photo-based sketch generation have been extensively studied.
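To make the proposed task concrete, the following is a minimal sketch of what a multimodal description generator's input/output contract might look like. Everything here (the MultimodalDescription container, the stroke format, the describe_scene stub) is an illustrative assumption for exposition, not the authors' model or any existing library's interface.

```python
# Hypothetical interface for visually grounded multimodal description
# generation: given visual input, jointly produce a natural language
# description, a free-hand sketch, and a word-to-stroke alignment.
# All names and formats below are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple

# A free-hand sketch as a sequence of pen strokes; each stroke is a list
# of (x, y) points in normalized [0, 1] canvas coordinates.
Stroke = List[Tuple[float, float]]

@dataclass
class MultimodalDescription:
    text: str               # the spoken/written description
    sketch: List[Stroke]    # strokes to be drawn while speaking
    alignment: List[Tuple[int, int]]  # (word index, stroke index) pairs
                                      # linking words to the strokes
                                      # they accompany

def describe_scene(image_features: List[float]) -> MultimodalDescription:
    """Toy stand-in for a joint text + sketch generator.

    A real model would condition a language decoder and a stroke decoder
    on a shared visual encoding; this stub just returns a fixed example
    to show the expected output structure.
    """
    # A single closed stroke roughly outlining a box.
    square: Stroke = [(0.2, 0.2), (0.8, 0.2), (0.8, 0.8),
                      (0.2, 0.8), (0.2, 0.2)]
    return MultimodalDescription(
        text="a red box on the table",
        sketch=[square],
        alignment=[(2, 0)],  # the word "box" (index 2) accompanies stroke 0
    )
```

One design point this sketch surfaces is the alignment field: because speech and drawing unfold in parallel in face-to-face interaction, the output must specify not only what to say and draw but when the two streams coincide, which is also where evaluation of such systems becomes harder than for language-only NLG.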
