Multi-range real-time depth inference from monocular stabilized footage using a Fully Convolutional Neural Network

09/12/2018
by Clément Pinard, et al.

Building on a neural network architecture for depth map inference from monocular stabilized videos, with application to UAV videos of rigid scenes, we propose a multi-range architecture for unconstrained UAV flight that leverages flight data from sensors to produce accurate depth maps in uncluttered outdoor environments. We evaluate our algorithm on both synthetic scenes and real UAV flight data. Quantitative results are given for synthetic scenes with slightly noisy orientation, and show that our multi-range architecture improves depth inference. A video accompanying this article presents our results more thoroughly.
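The abstract's multi-range idea can be illustrated with a minimal sketch: a depth network trained for one nominal camera displacement covers several depth ranges if the current frame is paired with a past frame whose baseline (known from flight sensors) best matches that nominal displacement, and the predicted depth is then rescaled by the actual baseline. All names and the nominal value below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of multi-range depth inference from stabilized footage.
# Assumption: the network expects a fixed apparent motion, i.e. it was trained
# for a nominal camera displacement between the two input frames.
NOMINAL_DISPLACEMENT = 0.3  # metres; illustrative value


def dist(a, b):
    """Euclidean distance between two 3-D positions from flight data."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def select_baseline(positions, target=NOMINAL_DISPLACEMENT):
    """Pick the past frame whose displacement from the newest frame is
    closest to the displacement the network was trained for.
    Returns (index of chosen past frame, its actual displacement)."""
    latest = positions[-1]
    best = min(range(len(positions) - 1),
               key=lambda i: abs(dist(positions[i], latest) - target))
    return best, dist(positions[best], latest)


def rescale_depth(raw_depth, actual_displacement,
                  nominal=NOMINAL_DISPLACEMENT):
    """For a fixed apparent motion, depth is proportional to the baseline,
    so scale the network output by the true/nominal displacement ratio."""
    scale = actual_displacement / nominal
    return [d * scale for d in raw_depth]
```

Selecting the frame pair by sensor-measured displacement, rather than a fixed frame gap, is what lets a single network remain accurate over multiple depth ranges during unconstrained flight.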


