Learning Unsupervised Multi-View Stereopsis via Robust Photometric Consistency

05/07/2019
by Tejas Khot, et al.

We present a learning-based approach for multi-view stereopsis (MVS). While current deep MVS methods achieve impressive results, they crucially rely on ground-truth 3D training data, and acquiring such precise 3D geometry for supervision is a major hurdle. Our framework instead leverages photometric consistency between multiple views as the supervisory signal for learning depth prediction in a wide-baseline MVS setup. However, naively applying photometric consistency constraints is undesirable due to occlusion and lighting changes across views. To overcome this, we propose a robust loss formulation that: a) enforces first-order consistency, and b) for each point, selectively enforces consistency with only some views, thus implicitly handling occlusions. We demonstrate our ability to learn MVS without 3D supervision using a real dataset, and show that each component of our proposed robust loss yields a significant improvement. We qualitatively observe that our reconstructions are often more complete than the acquired ground truth, further demonstrating the merits of this approach. Lastly, our learned model generalizes to novel settings, and our approach allows adaptation of existing CNNs to datasets without ground-truth 3D via unsupervised fine-tuning. Project webpage: https://tejaskhot.github.io/unsup_mvs
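The sketch below illustrates the kind of robust photometric loss the abstract describes; it is a minimal, hypothetical example rather than the authors' implementation. It assumes the source views have already been warped into the reference view using the predicted depth, and the function name `robust_photometric_loss`, the tensor shapes, and the `top_k` hyper-parameter are illustrative assumptions.

```python
# Hypothetical sketch of a robust photometric consistency loss (PyTorch).
# Assumes source images are already warped into the reference view via the
# predicted depth; names, shapes, and constants are illustrative only.
import torch
import torch.nn.functional as F


def image_gradients(img):
    """First-order (gradient) maps of an image batch of shape (B, C, H, W)."""
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]   # horizontal differences
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]   # vertical differences
    # Pad back to the original spatial size so the maps align with the input.
    dx = F.pad(dx, (0, 1, 0, 0))
    dy = F.pad(dy, (0, 0, 0, 1))
    return dx, dy


def robust_photometric_loss(ref, warped, masks, top_k=3):
    """
    ref:    (B, C, H, W)    reference image
    warped: (B, V, C, H, W) source images warped into the reference view
    masks:  (B, V, 1, H, W) 1 where the reprojection is valid, 0 otherwise
    top_k:  number of views kept per pixel (occluded/poorly lit views drop out)
    """
    B, V, C, H, W = warped.shape

    # Per-view, per-pixel intensity error against the reference image.
    photo_err = (warped - ref.unsqueeze(1)).abs().mean(dim=2, keepdim=True)

    # First-order consistency: compare image gradients instead of raw colors,
    # which is more robust to lighting changes across views.
    ref_dx, ref_dy = image_gradients(ref)
    wdx, wdy = image_gradients(warped.view(B * V, C, H, W))
    grad_err = ((wdx - ref_dx.repeat_interleave(V, dim=0)).abs() +
                (wdy - ref_dy.repeat_interleave(V, dim=0)).abs())
    grad_err = grad_err.mean(dim=1, keepdim=True).view(B, V, 1, H, W)

    # Combined per-view error; invalid pixels get a large error so they are
    # never selected among the best views.
    err = (photo_err + grad_err) * masks + (1.0 - masks) * 1e3

    # For each pixel, keep only the k views with the smallest error,
    # implicitly discarding occluded or inconsistent views.
    best_err, _ = torch.topk(err, k=min(top_k, V), dim=1, largest=False)
    return best_err.mean()
```

In this sketch, the per-pixel top-k selection is what "selectively enforces consistency with only some views", and the gradient term supplies the first-order consistency; both design choices follow the description in the abstract.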


