YouRefIt: Embodied Reference Understanding with Language and Gesture

09/08/2021
by Yixin Chen, et al.

We study the understanding of embodied reference: one agent uses both language and gesture to refer to an object for another agent in a shared physical environment. Notably, this new visual task requires understanding multimodal cues with perspective-taking to identify which object is being referred to. To tackle this problem, we introduce YouRefIt, a new crowd-sourced dataset of embodied reference collected in various physical scenes; the dataset contains 4,195 unique reference clips in 432 indoor scenes. To the best of our knowledge, this is the first embodied reference dataset that allows us to study referring expressions in daily physical scenes to understand referential behavior, human communication, and human-robot interaction. We further devise two benchmarks for image-based and video-based embodied reference understanding. Comprehensive baselines and extensive experiments provide the first machine-perception results on how referring expressions and gestures affect embodied reference understanding. Our results provide essential evidence that gestural cues are as critical as language cues in understanding embodied references.


Related research

08/05/2021: Communicative Learning with Natural Gestures for Embodied Navigation Agents with Human-in-the-Scene
Human-robot collaboration is an essential research topic in artificial i...

09/17/2022: Hand and Arm Gesture-based Human-Robot Interaction: A Review
The study of Human-Robot Interaction (HRI) aims to create close and frie...

05/31/2019: Visual Understanding and Narration: A Deeper Understanding and Explanation of Visual Scenes
We describe the task of Visual Understanding and Narration, in which a r...

03/13/2023: Contextually-rich Human Affect Perception Using Multimodal Scene Information
The process of human affect understanding involves the ability to infer ...

07/07/2022: Finding Fallen Objects Via Asynchronous Audio-Visual Integration
The way an object looks and sounds provide complementary reflections of ...

04/04/2021: Perspective-corrected Spatial Referring Expression Generation for Human-Robot Interaction
Intelligent robots designed to interact with humans in real scenarios ne...

03/30/2021: 3D AffordanceNet: A Benchmark for Visual Object Affordance Understanding
The ability to understand the ways to interact with objects from visual ...
