Semi-supervised Deep Multi-view Stereo

07/24/2022
by Hongbin Xu, et al.

Significant progress has been made in learning-based Multi-view Stereo (MVS) under both supervised and unsupervised settings. To combine their respective merits in accuracy and completeness while reducing the demand for expensive labeled data, this paper explores a novel semi-supervised setting of the learning-based MVS problem in which only a small fraction of the MVS data carries dense depth ground truth. However, due to the huge variation of scenarios and the flexible view settings, the semi-supervised MVS problem (Semi-MVS) may break the basic assumption of classic semi-supervised learning, namely that unlabeled data and labeled data share the same label space and data distribution. To handle these issues, we propose a novel semi-supervised MVS framework, namely SE-MVS. For the simple case in which the basic assumption holds for the MVS data, consistency regularization encourages the model predictions on an original sample and a randomly augmented sample to agree via a constraint on their KL divergence. For the more troublesome case in which the basic assumption is violated, we propose a novel style consistency loss to alleviate the negative effect caused by the distribution gap. The visual style of an unlabeled sample is transferred to a labeled sample to shrink the gap, and the model prediction on the generated sample is further supervised with the label of the original labeled sample. Experimental results on the DTU, BlendedMVS, GTA-SFM, and Tanks&Temples datasets show the superior performance of the proposed method. With the same backbone network, our proposed SE-MVS outperforms its fully-supervised and unsupervised baselines.
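
The abstract's consistency regularization can be illustrated with a minimal sketch (not the authors' released code): a KL-divergence loss between the depth probability volumes predicted for an original sample and for a randomly augmented copy. The function and variable names below (consistency_loss, logits_orig, etc.) are hypothetical placeholders under the assumption that the MVS network outputs a per-pixel softmax over depth hypotheses.

```python
# Minimal sketch, assuming the network outputs a per-pixel probability
# distribution over D depth hypotheses (shape B x D x H x W), e.g. the
# softmax of a regularized cost volume. Names are illustrative only.
import torch
import torch.nn.functional as F


def consistency_loss(prob_orig: torch.Tensor, prob_aug: torch.Tensor) -> torch.Tensor:
    """KL(P_orig || P_aug), averaged over batch and pixels."""
    # F.kl_div expects log-probabilities as input and probabilities as target.
    log_p_aug = torch.log(prob_aug.clamp_min(1e-8))
    kl = F.kl_div(log_p_aug, prob_orig, reduction="none").sum(dim=1)  # (B, H, W)
    return kl.mean()


if __name__ == "__main__":
    # Toy example: 1 image, 48 depth hypotheses, 4x4 spatial resolution.
    logits_orig = torch.randn(1, 48, 4, 4)
    logits_aug = logits_orig + 0.1 * torch.randn(1, 48, 4, 4)  # mimic augmentation jitter
    p_orig = F.softmax(logits_orig, dim=1)
    p_aug = F.softmax(logits_aug, dim=1)
    print(consistency_loss(p_orig, p_aug))  # small positive scalar
```

The style consistency loss described for the harder case would be applied analogously: a labeled image is re-rendered in the style of an unlabeled image (e.g. via a style-transfer module), and the prediction on the stylized image is penalized against the original dense depth label with the usual supervised loss.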
