Gaze-Informed Multi-Objective Imitation Learning from Human Demonstrations

02/25/2021
by Ritwik Bera, et al.

In the field of human-robot interaction, teaching agents from human demonstrations via supervised learning has been widely studied and successfully applied to domains such as self-driving cars and robot manipulation. However, most work on learning from human demonstrations uses only behavioral information from the demonstrator, i.e., which actions were taken, and ignores other useful signals. In particular, eye-gaze data can reveal where the demonstrator is allocating visual attention, and leveraging such information has the potential to improve agent performance. Previous approaches have studied attention only in simple, synchronous environments, limiting their applicability to real-world domains. This work proposes a novel imitation learning architecture that learns concurrently from human action demonstrations and eye-tracking data to solve tasks where human gaze provides important context. The proposed method is applied to a visual navigation task in which an unmanned quadrotor is trained to search for and navigate to a target vehicle in a real-world, photorealistic simulated environment. Compared to a baseline imitation learning architecture, the proposed gaze-augmented model learns policies that achieve significantly higher task completion rates, with more efficient paths, while simultaneously learning to predict human visual attention. This research highlights the importance of learning visual attention from additional human input modalities and encourages the community to adopt them when training agents from human demonstrations to perform visuomotor tasks.
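The architecture described above learns two objectives at once: imitating the demonstrated actions and predicting the demonstrator's gaze. A minimal sketch of such a multi-objective setup is below, in PyTorch. All layer sizes, the attention-weighted pooling, the KL-divergence gaze loss, and the weighting coefficient `lam` are illustrative assumptions, not the paper's actual design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GazeAugmentedPolicy(nn.Module):
    """Behavioral-cloning policy with an auxiliary gaze-prediction head.

    Illustrative sketch: a shared CNN encoder feeds (a) a spatial gaze head
    and (b) an action head that pools features weighted by predicted gaze.
    """

    def __init__(self, num_actions: int = 4):
        super().__init__()
        # Shared convolutional encoder over the image observation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
        )
        # Gaze head: one 1x1 conv producing a spatial attention logit map.
        self.gaze_head = nn.Conv2d(32, 1, kernel_size=1)
        # Action head: predicts the demonstrated (discretized) control action.
        self.action_head = nn.Linear(32, num_actions)

    def forward(self, img: torch.Tensor):
        feat = self.encoder(img)                  # (B, 32, H', W')
        gaze_logits = self.gaze_head(feat)        # (B, 1, H', W')
        b, _, h, w = gaze_logits.shape
        # Normalize to a probability map over spatial locations.
        gaze_map = F.softmax(gaze_logits.view(b, -1), dim=1).view(b, 1, h, w)
        # Attention-weighted pooling: predicted gaze modulates the features.
        pooled = (feat * gaze_map).sum(dim=(2, 3))  # (B, 32)
        return self.action_head(pooled), gaze_map


def multi_objective_loss(action_logits, gaze_map, action_target, gaze_target,
                         lam: float = 0.5):
    """Joint loss: behavioral cloning + KL divergence to the human gaze map."""
    bc = F.cross_entropy(action_logits, action_target)
    b = gaze_map.shape[0]
    kl = F.kl_div(torch.log(gaze_map.view(b, -1) + 1e-8),
                  gaze_target.view(b, -1), reduction="batchmean")
    return bc + lam * kl
```

Training would minimize `multi_objective_loss` over batches of (image, action, gaze-heatmap) triples from the demonstrations, so the gaze-prediction objective shapes the shared encoder even though only the action head is used at deployment time.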


