DeepIPC: Deeply Integrated Perception and Control for Mobile Robot in Real Environments

07/20/2022
by   Oskar Natan, et al.

We propose DeepIPC, an end-to-end multi-task model that handles both perception and control for driving a mobile robot autonomously. The model consists of two main parts: a perception module and a controller module. The perception module takes an RGB image and a depth map to perform semantic segmentation and bird's eye view (BEV) semantic mapping, and also provides their encoded features. The controller module then processes these features, together with GNSS location and angular speed measurements, to estimate waypoints along with latent features. Finally, two different agents translate the waypoints and latent features into a set of navigational controls that drive the robot. The model is evaluated by predicting driving records and performing automated driving under various conditions in a real environment. Based on the experimental results, DeepIPC achieves the best drivability and multi-task performance, even with fewer parameters than the other models.
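The two-stage dataflow described above (perception module → controller module → two control agents) can be sketched in plain Python. This is an illustrative stand-in, not the authors' implementation: all function names, field names, and the toy arithmetic inside each stage are assumptions chosen only to show how data moves between the modules.

```python
# Illustrative sketch of DeepIPC's dataflow (NOT the authors' code).
# Real modules are neural networks; here each stage is a toy function
# with placeholder arithmetic, kept only to show the interfaces.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class PerceptionOutput:
    segmentation: List[List[int]]  # per-pixel class ids (placeholder)
    bev_map: List[List[int]]       # bird's-eye-view semantic grid (placeholder)
    features: List[float]          # encoded features passed to the controller

def perception_module(rgb: List[List[int]], depth: List[List[int]]) -> PerceptionOutput:
    # A real implementation would run CNN encoders on the RGB image and
    # depth map; here we emit fixed-shape placeholders and a toy encoding.
    h, w = len(rgb), len(rgb[0])
    seg = [[0] * w for _ in range(h)]
    bev = [[0] * w for _ in range(h)]
    feats = [float(sum(row) % 7) for row in depth]
    return PerceptionOutput(seg, bev, feats)

def controller_module(
    feats: List[float], gnss: Tuple[float, float], angular_speed: float
) -> Tuple[List[Tuple[float, float]], List[float]]:
    # Estimates waypoints from perception features plus GNSS location and
    # angular speed; also returns latent features for the control agents.
    waypoints = [(gnss[0] + i * 0.5, gnss[1] + i * angular_speed) for i in range(3)]
    latent = [f * 0.1 for f in feats]
    return waypoints, latent

def control_agents(
    waypoints: List[Tuple[float, float]], latent: List[float]
) -> Dict[str, float]:
    # Two agents translate waypoints and latent features into navigational
    # controls (e.g. steering and throttle); purely illustrative formulas.
    steering = waypoints[-1][1] - waypoints[0][1]
    throttle = min(1.0, sum(abs(x) for x in latent) / (len(latent) or 1))
    return {"steering": steering, "throttle": throttle}

# End-to-end pass on dummy inputs
rgb = [[1, 2], [3, 4]]
depth = [[0, 1], [1, 0]]
out = perception_module(rgb, depth)
wps, latent = controller_module(out.features, gnss=(0.0, 0.0), angular_speed=0.1)
controls = control_agents(wps, latent)
```

The key design point carried over from the abstract is the interface between the stages: the controller never sees the raw image, only the encoded features plus GNSS and angular-speed measurements, and the control agents see only waypoints and latent features.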


Related research:

- "Fully End-to-end Autonomous Driving with Semantic Depth Cloud Mapping and Multi-Agent" (04/12/2022): Focusing on the task of point-to-point navigation for an autonomous driv...
- "Rethinking Self-driving: Multi-task Knowledge for Better Generalization and Accident Explanation Ability" (09/28/2018): Current end-to-end deep learning driving models have two problems: (1) P...
- "Virtual-to-Real: Learning to Control in Visual Semantic Segmentation" (02/01/2018): Collecting training data from the physical world is usually time-consumi...
- "Modular Vehicle Control for Transferring Semantic Information to Unseen Weather Conditions using GANs" (07/03/2018): End-to-end supervised learning has shown promising results for self-driv...
- "Situation-Aware Environment Perception Using a Multi-Layer Attention Map" (12/02/2021): Within the field of automated driving, a clear trend in environment perc...
- "NVAutoNet: Fast and Accurate 360° 3D Perception For Self Driving" (03/23/2023): Robust real-time perception of 3D world is essential to the autonomous v...
- "Learning a CNN-based End-to-End Controller for a Formula SAE Racecar" (07/12/2017): We present a set of CNN-based end-to-end models for controls of a Formul...
