Learning to Assess Danger from Movies for Cooperative Escape Planning in Hazardous Environments

07/27/2022
by Vikram Shree, et al.

There has been a plethora of work on improving robot perception and navigation, yet their application in hazardous environments, such as during a fire or an earthquake, remains at a nascent stage. We identify two key challenges. First, it is difficult to replicate such scenarios in the real world, which is necessary for training and testing. Second, current systems are not able to fully exploit the rich multi-modal data available in such hazardous environments. To address the first challenge, we propose to harness the enormous amount of visual content available in the form of movies and TV shows, and develop a dataset that represents hazardous environments encountered in the real world. The data is annotated with high-level danger ratings for realistic disaster images, along with keywords that summarize the content of each scene. To address the second challenge, we propose a multi-modal danger estimation pipeline for collaborative human-robot escape scenarios. Our Bayesian framework improves danger estimation by fusing information from the robot's camera sensor with language inputs from the human. Furthermore, we augment the estimation module with a risk-aware planner that helps identify safer paths out of the dangerous environment. Through extensive simulations, we demonstrate the advantages of our multi-modal perception framework, which translate into tangible benefits such as a higher success rate in collaborative human-robot missions.
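The Bayesian fusion step described in the abstract can be pictured roughly as follows. This is a minimal sketch, not the authors' implementation: the five-level danger discretization, the likelihood vectors for the vision and language observations, and the risk-weighted edge cost are all illustrative assumptions.

```python
import numpy as np

# Hypothetical discretization of a map cell's danger; the paper only
# says the data carries "high-level danger ratings".
DANGER_LEVELS = np.arange(5)  # 0 = safe ... 4 = extremely dangerous

def bayes_update(belief, likelihood):
    """Standard Bayesian update: posterior is proportional to likelihood * prior."""
    posterior = likelihood * belief
    return posterior / posterior.sum()

# Uniform prior over danger levels for an unvisited cell.
belief = np.full(len(DANGER_LEVELS), 1.0 / len(DANGER_LEVELS))

# Vision observation: e.g., a classifier's softmax over danger levels
# for the current camera frame (values are illustrative).
p_vision = np.array([0.05, 0.10, 0.20, 0.40, 0.25])
belief = bayes_update(belief, p_vision)

# Language observation: a human utterance such as "there is smoke ahead",
# mapped to a likelihood over danger levels (hypothetical mapping).
p_language = np.array([0.02, 0.08, 0.25, 0.35, 0.30])
belief = bayes_update(belief, p_language)

# The fused belief can feed a risk-aware planner, e.g. as an additive
# penalty on edge traversal cost (risk_weight tunes risk aversion).
expected_danger = belief @ DANGER_LEVELS
risk_weight = 2.0
edge_cost = 1.0 + risk_weight * expected_danger
print(f"fused belief: {belief.round(3)}, edge cost: {edge_cost:.2f}")
```

Under this reading, each new camera frame or utterance multiplies into the posterior, so the two modalities reinforce or correct one another, and the planner prefers paths whose cells carry lower expected danger.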


