
Planning Robot Motion using Deep Visual Prediction

by Meenakshi Sarkar et al.

In this paper, we introduce a novel framework that learns to make visual predictions about the motion of a robotic agent from raw video frames. Our proposed motion prediction network (PROM-Net) learns in a completely unsupervised manner and efficiently predicts up to 10 frames into the future. Unlike most other motion prediction models, it is lightweight: once trained, it can easily be deployed on mobile platforms with very limited computing capability. To train and test the network, we have created a new robotic data set of LEGO Mindstorms robots moving along various trajectories in three different environments under different lighting conditions. Finally, we introduce a framework that uses the predicted frames from the network as input to a model predictive controller for motion planning in unknown dynamic environments with moving obstacles.
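To make the proposed pipeline concrete, the sketch below shows one common way predicted frames can drive a model predictive controller: candidate action sequences are scored by how much the predicted frames overlap an obstacle region, and the first action of the cheapest sequence is executed. This is a minimal illustration only; the `predict_frames` placeholder (a simple pixel translation) stands in for the learned PROM-Net, and the function names, the random-shooting scheme, and the cost definition are all assumptions, not the paper's actual implementation.

```python
import numpy as np

def predict_frames(frame, action_seq, horizon=10):
    """Stand-in for a learned predictor such as PROM-Net.

    Here each action (dy, dx) simply translates the image; the real
    network would predict the next frames from raw video instead."""
    frames = []
    f = frame
    for a in action_seq[:horizon]:
        f = np.roll(f, shift=(int(a[0]), int(a[1])), axis=(0, 1))
        frames.append(f)
    return frames

def obstacle_cost(frames, obstacle_mask):
    """Sum of predicted robot intensity that overlaps the obstacle region."""
    return sum(float((f * obstacle_mask).sum()) for f in frames)

def mpc_plan(frame, obstacle_mask, candidates, horizon=10):
    """Random-shooting MPC step: score every candidate action sequence
    by its predicted collision cost and return the first action of the
    best sequence, as in receding-horizon control."""
    costs = [obstacle_cost(predict_frames(frame, seq, horizon), obstacle_mask)
             for seq in candidates]
    best = int(np.argmin(costs))
    return candidates[best][0], costs[best]
```

For example, with a robot blob in the top-left of the frame and an obstacle mask covering the right half, a sequence of "move right" actions accumulates collision cost while "move down" stays at zero, so the planner picks the downward action.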

