MGNet: Monocular Geometric Scene Understanding for Autonomous Driving

06/27/2022
by Markus Schön, et al.

We introduce MGNet, a multi-task framework for monocular geometric scene understanding. We define monocular geometric scene understanding as the combination of two known tasks: panoptic segmentation and self-supervised monocular depth estimation. Panoptic segmentation captures the full scene not only semantically but also at the instance level. Self-supervised monocular depth estimation exploits geometric constraints derived from the camera measurement model to estimate depth from monocular video sequences alone. To the best of our knowledge, we are the first to combine these two tasks in a single model. Our model is designed with a focus on low latency to provide real-time inference on a single consumer-grade GPU. During deployment, it produces dense 3D point clouds with instance-aware semantic labels from single high-resolution camera images. We evaluate the model on two popular autonomous driving benchmarks, Cityscapes and KITTI, and show competitive performance among other real-time capable methods. Source code is available at https://github.com/markusschoen/MGNet.
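To make the deployment output more concrete, the sketch below shows one common way a labeled point cloud can be assembled from a predicted depth map and a panoptic label map via the pinhole camera model. This is a minimal illustration under assumed inputs, not MGNet's actual code; the function name, the arrays depth, panoptic, K, and the example intrinsics are all hypothetical placeholders.

```python
# Minimal sketch: back-project a dense depth map into a camera-frame 3D point
# cloud and attach panoptic labels per point. Illustrative only; not the MGNet API.
import numpy as np

def backproject_to_pointcloud(depth: np.ndarray,
                              panoptic: np.ndarray,
                              K: np.ndarray) -> np.ndarray:
    """Back-project an HxW depth map into 3D points with panoptic labels.

    depth    : (H, W) metric depth in meters
    panoptic : (H, W) integer panoptic IDs (semantic class + instance)
    K        : (3, 3) camera intrinsic matrix
    Returns an (H*W, 4) array of [X, Y, Z, panoptic_id] rows.
    """
    H, W = depth.shape
    # Build homogeneous pixel coordinates [u, v, 1] for every pixel.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    # Ray directions = K^-1 [u, v, 1]^T; scaling by depth gives 3D points.
    rays = pixels @ np.linalg.inv(K).T
    points = rays * depth.reshape(-1, 1)
    labels = panoptic.reshape(-1, 1).astype(np.float64)
    return np.concatenate([points, labels], axis=1)

if __name__ == "__main__":
    # Dummy predictions and made-up intrinsics, just to show the call.
    H, W = 512, 1024
    K = np.array([[1000.0, 0.0, 512.0],
                  [0.0, 1000.0, 256.0],
                  [0.0, 0.0, 1.0]])
    depth = np.full((H, W), 10.0)            # pretend the network predicts 10 m everywhere
    panoptic = np.zeros((H, W), dtype=np.int64)
    cloud = backproject_to_pointcloud(depth, panoptic, K)
    print(cloud.shape)                        # (524288, 4)
```

In practice the depth map would come from the self-supervised depth head and the panoptic IDs from the panoptic head, so the same per-pixel back-projection yields the instance-aware 3D scene representation described above.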


Related research:

09/17/2023  Deep Neighbor Layer Aggregation for Lightweight Self-Supervised Monocular Depth Estimation
With the frequent use of self-supervised monocular depth estimation in r...

08/08/2017  Fast Scene Understanding for Autonomous Driving
Most approaches for instance-aware semantic labeling traditionally focus...

03/17/2023  A Simple Attempt for 3D Occupancy Estimation in Autonomous Driving
The task of estimating 3D occupancy from surrounding view images is an e...

06/03/2020  PLG-IN: Pluggable Geometric Consistency Loss with Wasserstein Distance in Monocular Depth Estimation
We propose a novel objective to penalize geometric inconsistencies, to i...

09/14/2022  DevNet: Self-supervised Monocular Depth Learning via Density Volume Construction
Self-supervised depth learning from monocular images normally relies on ...

03/31/2021  Full Surround Monodepth from Multiple Cameras
Self-supervised monocular depth and ego-motion estimation is a promising...

03/03/2021  Multimodal Scale Consistency and Awareness for Monocular Self-Supervised Depth Estimation
Dense depth estimation is essential to scene-understanding for autonomou...
