Every Pixel Counts ++: Joint Learning of Geometry and Motion with 3D Holistic Understanding

10/14/2018
by   Chenxu Luo, et al.

Learning to estimate 3D geometry from a single frame and optical flow from consecutive frames by watching unlabeled videos via deep convolutional networks has made significant progress recently. Current state-of-the-art (SOTA) methods treat the two tasks independently. One important assumption of the current depth estimation pipeline is that the scene contains no moving objects, a limitation that optical flow can complement. In this paper, we propose to address the two tasks as a whole, i.e., to jointly understand per-pixel 3D geometry and motion. This also eliminates the need for the static-scene assumption and enforces the inherent geometric consistency during learning, yielding significantly improved results for both tasks. We call our method "Every Pixel Counts++" (EPC++). Specifically, during training, given two consecutive frames from a video, we adopt three parallel networks to predict the camera motion (MotionNet), the dense depth map (DepthNet), and the per-pixel optical flow between the two frames (FlowNet), respectively. Comprehensive experiments were conducted on the KITTI 2012 and KITTI 2015 datasets. Performance on five tasks (depth estimation, optical flow estimation, odometry, moving-object segmentation, and scene flow estimation) shows that our approach outperforms other SOTA methods, demonstrating the effectiveness of each module of our proposed method.
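The geometric consistency the abstract refers to ties the three predictions together: given a predicted depth map, camera intrinsics, and the camera motion from MotionNet, a static scene induces a "rigid flow" by back-projecting each pixel to 3D, applying the camera motion, and re-projecting. Pixels where the predicted optical flow disagrees with this rigid flow are candidates for moving objects. The following NumPy sketch illustrates that reprojection step only; the function name `rigid_flow` and the exact formulation are our illustration, not code from the paper.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Optical flow induced purely by camera motion for a static scene.

    depth : (H, W) per-pixel depth of frame 1
    K     : (3, 3) camera intrinsics
    R, t  : rotation (3, 3) and translation (3,) from frame 1 to frame 2
    Returns an (H, W, 2) flow field (dx, dy).
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Homogeneous pixel coordinates, shape 3 x (H*W).
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T.astype(float)
    # Back-project to 3D camera coordinates using the depth map.
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    # Apply the predicted camera motion, then re-project into frame 2.
    cam2 = R @ cam + t[:, None]
    pix2 = K @ cam2
    pix2 = pix2[:2] / pix2[2:3]
    # Rigid flow is the displacement between the reprojection and the source pixel.
    return (pix2 - pix[:2]).T.reshape(H, W, 2)
```

For a zero camera motion (R = I, t = 0) the rigid flow vanishes everywhere, and for a pure x-translation at constant depth it is a uniform horizontal shift of f * tx / depth pixels, which is a quick sanity check of the projection geometry.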



Related research:

- 06/27/2018: Every Pixel Counts: Unsupervised Geometry Learning with Holistic 3D Motion Understanding. Learning to estimate 3D geometry in a single image by watching unlabeled...
- 11/16/2020: EffiScene: Efficient Per-Pixel Rigidity Inference for Unsupervised Joint Learning of Optical Flow, Depth, Camera Pose and Motion Segmentation. This paper addresses the challenging unsupervised scene flow estimation ...
- 03/31/2020: Distilled Semantics for Comprehensive Scene Understanding from Videos. Whole understanding of the surroundings is paramount to autonomous syste...
- 01/07/2019: Learning Independent Object Motion from Unlabelled Stereoscopic Videos. We present a system for learning motion of independently moving objects ...
- 12/30/2019: Video Depth Estimation by Fusing Flow-to-Depth Proposals. We present an approach with a novel differentiable flow-to-depth layer f...
- 09/18/2022: SF2SE3: Clustering Scene Flow into SE(3)-Motions via Proposal and Selection. We propose SF2SE3, a novel approach to estimate scene dynamics in form o...
- 04/18/2018: Superframes, A Temporal Video Segmentation. The goal of video segmentation is to turn video data into a set of concr...
