An End-to-End Human Simulator for Task-Oriented Multimodal Human-Robot Collaboration

04/02/2023
by Afagh Mehri Shervedani, et al.

This paper proposes a neural network-based user simulator that provides a multimodal interactive environment for training Reinforcement Learning (RL) agents on collaborative tasks involving multiple modes of communication. The simulator is trained on the existing ELDERLY-AT-HOME corpus and accommodates multiple modalities, including language, pointing gestures, and haptic-ostensive actions. The paper also presents a novel multimodal data augmentation approach that addresses the challenge of working with a limited dataset, since collecting human demonstrations is expensive and time-consuming. Overall, the study highlights the potential of RL and multimodal user simulators for developing and improving domestic assistive robots.
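To make the training setup concrete, the following is a minimal sketch of the interaction loop such a user simulator enables: the simulator emits a multimodal observation each turn, and an RL agent learns to respond appropriately. Everything here is a hypothetical stand-in, not the paper's method: the paper uses a neural simulator trained on the ELDERLY-AT-HOME corpus and presumably deep RL, whereas this toy uses a rule-based simulator and tabular Q-learning purely to illustrate the loop structure; the class and function names are invented.

```python
import random

# The three communication modalities the abstract mentions.
MODALITIES = ("language", "pointing", "haptic")


class ToyUserSimulator:
    """Hypothetical stand-in for the learned neural user simulator.

    Each one-turn episode, the simulated user communicates via one
    modality; the agent is rewarded for responding in kind.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.goal = None

    def reset(self):
        # Observation: which modality the simulated user employed.
        self.goal = self.rng.choice(MODALITIES)
        return self.goal

    def step(self, action):
        # Reward +1 if the agent's response matches the user's modality.
        reward = 1.0 if action == self.goal else 0.0
        done = True  # single-turn episodes for simplicity
        return reward, done


def train(episodes=2000, eps=0.2, lr=0.5, seed=0):
    """Tabular Q-learning against the simulator (an illustrative
    substitute for the deep RL training the paper describes)."""
    rng = random.Random(seed)
    env = ToyUserSimulator(seed)
    q = {(obs, a): 0.0 for obs in MODALITIES for a in MODALITIES}
    for _ in range(episodes):
        obs = env.reset()
        # Epsilon-greedy action selection over modalities.
        if rng.random() < eps:
            action = rng.choice(MODALITIES)
        else:
            action = max(MODALITIES, key=lambda a: q[(obs, a)])
        reward, _ = env.step(action)
        # One-step Q-update (no bootstrapping: episodes end immediately).
        q[(obs, action)] += lr * (reward - q[(obs, action)])
    return q


if __name__ == "__main__":
    q = train()
    # The learned greedy policy should mirror the user's modality.
    policy = {obs: max(MODALITIES, key=lambda a: q[(obs, a)])
              for obs in MODALITIES}
    print(policy)
```

In the paper's actual setup, the simulator's responses come from a trained network rather than a rule, and the observation and action spaces are far richer, but the agent-simulator loop above is the same pattern the abstract describes.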
