Situated Multimodal Control of a Mobile Robot: Navigation through a Virtual Environment

07/13/2020
by Katherine Krajovic et al.

We present a new interface for controlling a navigation robot in novel environments through coordinated gesture and language. The system combines a TurtleBot3 robot equipped with a LIDAR and a camera, an embodied simulation of what the robot has encountered while exploring, and a cross-platform bridge that facilitates generic communication between the components. A human partner can deliver instructions to the robot in spoken English, together with gestures made relative to the simulated environment, to guide the robot through navigation tasks.
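
The abstract does not detail how the cross-platform bridge is implemented. As a rough, non-authoritative sketch, the Python snippet below shows one way a generic bridge could relay navigation commands from a simulation-side interface to a TurtleBot3. The /cmd_vel topic and the Twist message are standard TurtleBot3 ROS conventions; the node name, port number, and newline-delimited JSON wire format are hypothetical illustration, not the paper's actual protocol.

#!/usr/bin/env python
# Hypothetical bridge sketch: relay newline-delimited JSON navigation
# commands from the simulation/interface process to the robot.
# /cmd_vel and Twist are standard TurtleBot3 ROS conventions; the
# port, node name, and message schema below are assumptions.
import json
import socket

import rospy
from geometry_msgs.msg import Twist


def main():
    rospy.init_node('sim_bridge')  # node name is illustrative
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)

    # Accept a single connection from the simulation side.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('0.0.0.0', 9090))  # port is an assumption
    server.listen(1)
    conn, _ = server.accept()

    buf = b''
    while not rospy.is_shutdown():
        data = conn.recv(4096)
        if not data:
            break  # simulation side closed the connection
        buf += data
        while b'\n' in buf:
            line, buf = buf.split(b'\n', 1)
            cmd = json.loads(line)  # e.g. {"linear": 0.2, "angular": 0.0}
            twist = Twist()
            twist.linear.x = cmd.get('linear', 0.0)    # forward velocity (m/s)
            twist.angular.z = cmd.get('angular', 0.0)  # turn rate (rad/s)
            pub.publish(twist)


if __name__ == '__main__':
    main()

A simulation-side client would then send one JSON object per line; keeping the wire format this generic is what would allow a gesture-and-speech front end to run on a different platform from the robot's ROS stack.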

Related research:

- Communicative Learning with Natural Gestures for Embodied Navigation Agents with Human-in-the-Scene (08/05/2021)
- Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots (12/05/2017)
- VGPN: Voice-Guided Pointing Robot Navigation for Humans (04/03/2020)
- Towards the Neuromorphic Computing for Offroad Robot Environment Perception and Navigation (05/05/2023)
- Can a Robot Trust You? A DRL-Based Approach to Trust-Driven Human-Guided Navigation (11/01/2020)
- A Research Platform for Multi-Robot Dialogue with Humans (10/12/2019)
- Lifelong Navigation (07/28/2020)