Gesture-Informed Robot Assistance via Foundation Models

09/06/2023
by Li-Heng Lin, et al.

Gestures are a fundamental and significant mode of non-verbal communication among humans. Deictic gestures (such as pointing toward an object), in particular, offer an efficient way to express intent when language is inaccessible, restricted, or highly specialized. It is therefore essential for robots to comprehend gestures in order to infer human intentions and coordinate with them more effectively. Prior work often relies on a rigid, hand-coded library of gestures and their meanings. However, the interpretation of gestures is often context-dependent, requiring more flexibility and common-sense reasoning. In this work, we propose a framework, GIRAF, for more flexibly interpreting gesture and language instructions by leveraging the power of large language models. Our framework is able to accurately infer human intent and contextualize the meaning of their gestures for more effective human-robot collaboration. We instantiate the framework for interpreting deictic gestures in table-top manipulation tasks and demonstrate that it is both effective and preferred by users, achieving a 70% success rate. We further demonstrate GIRAF's ability to reason about diverse types of gestures by curating a GestureInstruct dataset consisting of 36 different task scenarios; GIRAF achieves an 81% success rate on tasks in GestureInstruct. Website: https://tinyurl.com/giraf23
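The pipeline the abstract describes, resolving a deictic gesture to a scene object and then folding the result into a language instruction for a large language model to reason over, might be sketched as follows. The ray-based target selection and the prompt format here are illustrative assumptions, not the paper's implementation; object names and coordinates are hypothetical.

```python
import math

def select_pointed_object(origin, direction, objects):
    """Pick the object whose center lies closest to the pointing ray.

    origin/direction: 2D tuples for the hand position and pointing vector.
    objects: dict mapping object name -> (x, y) center on the table.
    Returns the name whose bearing deviates least from the ray.
    """
    dnorm = math.hypot(*direction)
    best, best_cos = None, -2.0
    for name, (x, y) in objects.items():
        vx, vy = x - origin[0], y - origin[1]
        vnorm = math.hypot(vx, vy)
        if vnorm == 0 or dnorm == 0:
            continue
        # Cosine of the angle between the pointing ray and the object bearing;
        # larger means better aligned with the gesture.
        cos = (vx * direction[0] + vy * direction[1]) / (vnorm * dnorm)
        if cos > best_cos:
            best, best_cos = name, cos
    return best

def build_prompt(utterance, pointed_at):
    # Contextualize the spoken instruction with the resolved gesture target
    # before handing it to an LLM planner.
    return (f"The user said: '{utterance}' while pointing at the "
            f"{pointed_at}. What should the robot do?")

# Hypothetical table-top scene: the hand at the origin points mostly along +x.
objects = {"red cup": (1.0, 0.0), "blue bowl": (0.0, 1.0)}
target = select_pointed_object((0.0, 0.0), (0.9, 0.1), objects)
print(build_prompt("put that in the bin", target))
```

In practice the pointing ray would come from a hand-pose estimator and the prompt would go to an actual language model; this sketch only shows how the deictic gesture disambiguates the "that" in the instruction.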


