Exploiting temporal consistency for real-time video depth estimation

08/10/2019 · by Haokui Zhang, et al.

The accuracy of depth estimation from static images has improved significantly in recent years by exploiting hierarchical features from deep convolutional neural networks (CNNs). Compared with static images, video frames carry abundant additional information that can be exploited to improve depth estimation performance. In this work, we focus on exploiting temporal information from monocular videos for depth estimation. Specifically, we take advantage of convolutional long short-term memory (CLSTM) and propose a novel spatial-temporal CLSTM (ST-CLSTM) structure. Our ST-CLSTM structure captures not only spatial features but also the temporal correlations/consistency among consecutive video frames, with a negligible increase in computational cost. Additionally, to maintain temporal consistency among the estimated depth frames, we apply a generative adversarial learning scheme and design a temporal consistency loss. The temporal consistency loss is combined with the spatial loss to update the model in an end-to-end fashion. By taking advantage of the temporal information, we build a video depth estimation framework that runs in real time and generates visually pleasant results. Moreover, our approach is flexible and can be generalized to most existing depth estimation frameworks. Code is available at: https://tinyurl.com/STCLSTM
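The core building block described above is the convolutional LSTM, which replaces the fully-connected gate transforms of a standard LSTM with convolutions, so the recurrent state preserves spatial layout while accumulating temporal context across frames. The following is a minimal NumPy sketch of a single ConvLSTM cell run over a short frame sequence; it is an illustration of the general ConvLSTM recurrence, not the authors' ST-CLSTM implementation, and all sizes (`c_in`, `c_hid`, kernel size, frame dimensions) are arbitrary choices for the demo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d(x, w, b):
    """Naive 'same'-padded 2D convolution.
    x: (C_in, H, W), w: (C_out, C_in, k, k), b: (C_out,) -> (C_out, H, W)."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H, W = x.shape[1:]
    out = np.zeros((c_out, H, W))
    for i in range(H):
        for j in range(W):
            # contract (C_in, k, k) patch against all output filters at once
            out[:, i, j] = np.tensordot(w, xp[:, i:i + k, j:j + k], axes=3) + b
    return out

class ConvLSTMCell:
    """One ConvLSTM cell: LSTM gates computed by a convolution over [x; h]."""
    def __init__(self, c_in, c_hid, k=3, seed=0):
        rng = np.random.default_rng(seed)
        # a single conv produces all four gates (i, f, o, g) stacked channel-wise
        self.w = rng.normal(0.0, 0.1, (4 * c_hid, c_in + c_hid, k, k))
        self.b = np.zeros(4 * c_hid)

    def step(self, x, h, c):
        z = conv2d(np.concatenate([x, h], axis=0), self.w, self.b)
        i, f, o, g = np.split(z, 4, axis=0)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # update cell state
        h = sigmoid(o) * np.tanh(c)                    # new hidden state
        return h, c

# Run the cell over 5 toy "frames"; the hidden state h carries temporal
# context forward while keeping its H x W spatial structure.
c_in, c_hid, H, W = 8, 4, 6, 6
cell = ConvLSTMCell(c_in, c_hid)
h = np.zeros((c_hid, H, W))
c = np.zeros((c_hid, H, W))
frames = np.random.default_rng(1).normal(size=(5, c_in, H, W))
for x in frames:
    h, c = cell.step(x, h, c)
print(h.shape)  # (4, 6, 6)
```

In a depth-estimation setting, `x` would be a CNN feature map of the current frame and `h` would feed the depth decoder; training would then combine a per-frame spatial loss with the paper's adversarial temporal consistency term, roughly L = L_spatial + λ·L_temporal.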

Related research:

- Depth estimation from 4D light field videos (12/05/2020) — Depth (disparity) estimation from 4D Light Field (LF) images has been a ...
- Less is More: Consistent Video Depth Estimation with Masked Frames Modeling (07/31/2022) — Temporal consistency is the key challenge of video depth estimation. Pre...
- Efficient Incremental Penetration Depth Estimation between Convex Geometries (04/14/2023) — Penetration depth (PD) is essential for robotics due to its extensive ap...
- MAMo: Leveraging Memory and Attention for Monocular Video Depth Estimation (07/26/2023) — We propose MAMo, a novel memory and attention framework for monocular v...
- Attention meets Geometry: Geometry Guided Spatial-Temporal Attention for Consistent Self-Supervised Monocular Depth Estimation (10/15/2021) — Inferring geometrically consistent dense 3D scenes across a tuple of tem...
- A geometry-aware deep network for depth estimation in monocular endoscopy (04/20/2023) — Monocular depth estimation is critical for endoscopists to perform spati...
- Temporally Consistent Depth Prediction with Flow-Guided Memory Units (09/16/2019) — Predicting depth from a monocular video sequence is an important task fo...
