Multimodal dynamics modeling for off-road autonomous vehicles
Dynamics modeling in outdoor and unstructured environments is difficult because different elements in the environment interact with the robot in ways that can be hard to predict. Leveraging multiple sensors to perceive maximal information about the robot's environment is thus crucial when building a model that predicts the robot's dynamics for motion planning. We design a model capable of long-horizon motion predictions, leveraging vision, lidar, and proprioception, that is robust to arbitrarily missing modalities at test time. We demonstrate in simulation that our model is able to leverage vision to predict traction changes. We then test our model on a challenging real-world dataset of a robot navigating through a forest, making predictions on trajectories unseen during training. We evaluate different modality combinations at test time and show that, while our model performs best when all modalities are present, it still outperforms the baseline when receiving only raw vision input without proprioception, as well as when receiving only proprioception. Overall, our study demonstrates the importance of leveraging multiple sensors when modeling dynamics in outdoor conditions.
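To make the idea of a multimodal dynamics model that tolerates missing modalities concrete, here is a minimal PyTorch sketch. It is not the authors' architecture: the encoder sizes, the masked-mean fusion, the GRU rollout, and all names (MultimodalDynamicsModel, fuse, drop_prob, etc.) are hypothetical choices used only to illustrate per-modality encoding, modality dropout during training, and long-horizon prediction from whichever sensors are available at test time.

import torch
import torch.nn as nn


class MultimodalDynamicsModel(nn.Module):
    """Illustrative sketch: per-modality encoders, masked-mean fusion, recurrent rollout."""

    def __init__(self, vision_dim=512, lidar_dim=256, proprio_dim=12,
                 latent_dim=128, state_dim=6, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.latent_dim = latent_dim
        # One encoder per modality, all mapping into a shared latent space.
        self.encoders = nn.ModuleDict({
            "vision": nn.Sequential(nn.Linear(vision_dim, latent_dim), nn.ReLU()),
            "lidar": nn.Sequential(nn.Linear(lidar_dim, latent_dim), nn.ReLU()),
            "proprio": nn.Sequential(nn.Linear(proprio_dim, latent_dim), nn.ReLU()),
        })
        # Recurrent core rolls the fused latent forward, conditioned on the current state.
        self.core = nn.GRUCell(state_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, state_dim)  # predicts a state delta

    def fuse(self, inputs, drop_prob):
        # Masked mean over whichever modalities are present; random modality
        # dropout during training teaches robustness to missing sensors at test time.
        ref = next(x for x in inputs.values() if x is not None)
        latents = []
        for name, x in inputs.items():
            if x is None:
                continue
            if self.training and torch.rand(()) < drop_prob:
                continue
            latents.append(self.encoders[name](x))
        if not latents:  # everything dropped: fall back to a zero latent
            return torch.zeros(ref.shape[0], self.latent_dim, device=ref.device)
        return torch.stack(latents, dim=0).mean(dim=0)

    def forward(self, inputs, state, drop_prob=0.2):
        # inputs: dict with keys "vision", "lidar", "proprio"; any value may be None.
        # state:  (batch, state_dim) current robot state.
        h = self.fuse(inputs, drop_prob)
        preds = []
        for _ in range(self.horizon):
            h = self.core(state, h)
            state = state + self.decoder(h)  # integrate the predicted delta
            preds.append(state)
        return torch.stack(preds, dim=1)  # (batch, horizon, state_dim)


# Example usage: lidar missing at test time, prediction still runs on the rest.
model = MultimodalDynamicsModel().eval()
obs = {"vision": torch.randn(4, 512), "lidar": None, "proprio": torch.randn(4, 12)}
future = model(obs, state=torch.randn(4, 6))
print(future.shape)  # torch.Size([4, 10, 6])

Averaging encoded modalities (rather than concatenating them) is one simple way to keep the fused latent well defined no matter which subset of sensors is present, which is the property the abstract emphasizes.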