SFU-Store-Nav: A Multimodal Dataset for Indoor Human Navigation

by Zhitian Zhang, et al.

This article describes a dataset collected in a set of experiments involving human participants and a robot. The experiments were conducted in the Computing Science robotics lab at Simon Fraser University, Burnaby, BC, Canada, with the aim of gathering data on common gestures, movements, and other behaviours that may indicate humans' navigational intent, which is relevant for autonomous robot navigation. The experiment simulates a shopping scenario in which human participants come in to pick up items from their shopping lists and interact with a Pepper robot programmed to help them. We collected visual data and motion capture data from 108 human participants. The visual data contains live recordings of the experiments, and the motion capture data contains the position and orientation of the human participants in world coordinates. This dataset could be valuable to researchers in the robotics, machine learning, and computer vision communities.
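The motion-capture stream described above pairs each timestamped sample with a world-frame position and orientation. As a purely illustrative sketch (the field names, types, and layout here are assumptions for exposition, not the dataset's actual schema), one such record and a simple heading computation might look like:

```python
import math
from dataclasses import dataclass


@dataclass
class MocapSample:
    """One motion-capture sample: world-frame position and orientation.

    Field names are illustrative assumptions, not the dataset's schema.
    """
    timestamp: float                          # seconds since start of the trial
    participant_id: int                       # which of the 108 participants
    position: tuple[float, float, float]      # (x, y, z) in metres, world frame
    orientation: tuple[float, float, float, float]  # unit quaternion (qx, qy, qz, qw)


def heading_yaw(q: tuple[float, float, float, float]) -> float:
    """Yaw (rotation about the vertical axis) from a quaternion (qx, qy, qz, qw)."""
    qx, qy, qz, qw = q
    return math.atan2(2.0 * (qw * qz + qx * qy),
                      1.0 - 2.0 * (qy * qy + qz * qz))


# Identity quaternion: participant facing the world x-axis, yaw = 0.
sample = MocapSample(0.0, 1, (1.2, 0.5, 0.0), (0.0, 0.0, 0.0, 1.0))
print(round(heading_yaw(sample.orientation), 3))  # prints 0.0
```

Extracting a heading angle like this is a common first step when using position-and-orientation tracks to infer navigational intent.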

