Action-conditioned Deep Visual Prediction with RoAM, a new Indoor Human Motion Dataset for Autonomous Robots

06/28/2023
by   Meenakshi Sarkar, et al.

With the increasing adoption of robots across industries, it is crucial to develop algorithms that enable robots to anticipate, comprehend, and plan their actions effectively in collaboration with humans. We introduce the Robot Autonomous Motion (RoAM) video dataset, collected with a custom-built TurtleBot3 Burger robot in a variety of indoor environments, recording various human motions from the robot's ego-centric vision. The dataset also includes synchronized LiDAR scans and all control actions taken by the robot as it navigates around static and moving human agents. This unique dataset provides an opportunity to develop and benchmark new visual prediction frameworks that predict future image frames conditioned on the actions of the recording agent, in partially observable scenarios or in cases where the imaging sensor is mounted on a moving platform. We benchmark the dataset on our novel deep visual prediction framework, ACPNet, in which the predicted future image frames are also conditioned on the actions taken by the robot, and we demonstrate its potential for incorporating robot dynamics into the video prediction paradigm for mobile robotics and autonomous navigation research.
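The core idea of action-conditioned prediction is that the future frame depends not only on past observations but also on the control command the robot issues. A minimal sketch of this conditioning step is shown below; the latent and action dimensions, the linear fusion, and all weights are illustrative assumptions, not the actual ACPNet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only.
LATENT_DIM = 16   # size of an encoded image-frame feature vector
ACTION_DIM = 2    # e.g. (linear velocity, angular velocity) of the robot

# Randomly initialised weights stand in for a trained predictor.
W = rng.standard_normal((LATENT_DIM, LATENT_DIM + ACTION_DIM)) * 0.1
b = np.zeros(LATENT_DIM)

def predict_next_latent(frame_latent, action):
    """Predict the next frame's latent state by concatenating the current
    latent with the robot's action and applying a linear map + tanh."""
    x = np.concatenate([frame_latent, action])
    return np.tanh(W @ x + b)

# One prediction step: current encoded frame plus a velocity command.
z_t = rng.standard_normal(LATENT_DIM)
a_t = np.array([0.2, -0.1])          # forward at 0.2 m/s while turning
z_next = predict_next_latent(z_t, a_t)
print(z_next.shape)
```

In a full model the latent would come from a learned image encoder and be decoded back into a predicted frame; the sketch only shows how an action vector can be fused into the prediction so that different commands yield different imagined futures.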


Related research

- Unsupervised Learning for Physical Interaction through Video Prediction (05/23/2016): A core challenge for an agent learning to interact with the world is ...
- JRDB: A Dataset and Benchmark for Visual Perception for Navigation in Human Environments (10/25/2019): We present JRDB, a novel dataset collected from our social mobile manipu...
- Planning Robot Motion using Deep Visual Prediction (06/24/2019): In this paper, we introduce a novel framework that can learn to make vis...
- Sequential Learning of Movement Prediction in Dynamic Environments using LSTM Autoencoder (10/12/2018): Predicting movement of objects while the action of learning agent intera...
- SFU-Store-Nav: A Multimodal Dataset for Indoor Human Navigation (10/28/2020): This article describes a dataset collected in a set of experiments that ...
- Exploration and Exploitation in Visuomotor Prediction of Autonomous Agents (09/19/2013): This paper discusses various techniques to let an agent learn how to pre...
- RoboChop: Autonomous Framework for Fruit and Vegetable Chopping Leveraging Foundational Models (07/24/2023): With the goal of developing fully autonomous cooking robots, developing ...
