KD-MVS: Knowledge Distillation Based Self-supervised Learning for MVS

07/21/2022
by Yikang Ding, et al.

Supervised multi-view stereo (MVS) methods have achieved remarkable progress in reconstruction quality, but collecting large-scale ground-truth depth remains a challenge. In this paper, we propose a novel self-supervised training pipeline for MVS based on knowledge distillation, termed KD-MVS, which mainly consists of self-supervised teacher training and distillation-based student training. Specifically, the teacher model is trained in a self-supervised fashion using both photometric and featuremetric consistency. We then distill the knowledge of the teacher model to the student model through probabilistic knowledge transfer. With the supervision of validated knowledge, the student model is able to outperform its teacher by a large margin. Extensive experiments on multiple datasets show that our method can even outperform supervised methods.
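As a rough illustration of the self-supervised teacher training, the sketch below warps a source view into the reference view using the predicted depth and camera geometry, then penalizes both photometric and featuremetric discrepancies. This is a minimal PyTorch sketch under our own assumptions; all function names, tensor layouts, and the simple L1 weighting are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def warp_src_to_ref(src, depth, K_ref, K_src, T_ref2src):
    """Warp a source view into the reference view via back-projection.

    src:       (B, C, H, W) source image or feature map
    depth:     (B, 1, H, W) predicted reference-view depth
    K_ref/K_src: (B, 3, 3) camera intrinsics
    T_ref2src: (B, 4, 4) relative pose from reference to source frame
    (Hypothetical helper; names and shapes are our assumptions.)
    """
    B, _, H, W = depth.shape
    device = depth.device
    # Pixel grid of the reference view in homogeneous coordinates.
    y, x = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([x, y, torch.ones_like(x)], dim=0).view(1, 3, -1).expand(B, -1, -1)
    # Back-project to 3D in the reference frame, transform into the source frame.
    cam = torch.linalg.inv(K_ref) @ pix * depth.view(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, cam.shape[-1], device=device)], dim=1)
    cam_src = (T_ref2src @ cam_h)[:, :3]
    # Project into the source image plane.
    pix_src = K_src @ cam_src
    pix_src = pix_src[:, :2] / pix_src[:, 2:].clamp(min=1e-6)
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    gx = 2.0 * pix_src[:, 0] / (W - 1) - 1.0
    gy = 2.0 * pix_src[:, 1] / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).view(B, H, W, 2)
    return F.grid_sample(src, grid, align_corners=True)

def consistency_loss(ref_img, src_img, ref_feat, src_feat, depth, K_ref, K_src, T):
    """Photometric + featuremetric consistency as a simple weighted sum."""
    warped_img = warp_src_to_ref(src_img, depth, K_ref, K_src, T)
    warped_feat = warp_src_to_ref(src_feat, depth, K_ref, K_src, T)
    photo = (ref_img - warped_img).abs().mean()   # photometric term
    feat = (ref_feat - warped_feat).abs().mean()  # featuremetric term
    return photo + feat
```

In practice, self-supervised MVS losses of this kind usually add occlusion masking and structural similarity terms; the L1-only version above is kept deliberately minimal.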
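Likewise, a minimal sketch of probabilistic knowledge transfer: the teacher's (pseudo-label) depth is softened into a distribution over the student's depth hypotheses and the student's probability volume is supervised with a confidence-masked cross-entropy. The Gaussian target, `sigma`, and `conf_thresh` are our illustrative assumptions, not the paper's exact mechanism.

```python
import torch

def probabilistic_distillation_loss(student_prob, teacher_depth, teacher_conf,
                                    depth_hypotheses, sigma=1.0, conf_thresh=0.5):
    """Distill teacher depth into the student's probability volume.

    student_prob:     (B, D, H, W) student softmax over D depth hypotheses
    teacher_depth:    (B, 1, H, W) teacher pseudo-label depth
    teacher_conf:     (B, 1, H, W) teacher confidence for masking
    depth_hypotheses: (B, D, H, W) per-pixel depth hypothesis values
    (Hypothetical signature; sigma/conf_thresh are assumed knobs.)
    """
    # Encode the teacher depth as a soft Gaussian distribution over hypotheses.
    target = torch.exp(-((depth_hypotheses - teacher_depth) ** 2) / (2 * sigma ** 2))
    target = target / target.sum(dim=1, keepdim=True).clamp(min=1e-6)
    # Per-pixel cross-entropy between teacher and student distributions.
    ce = -(target * torch.log(student_prob.clamp(min=1e-6))).sum(dim=1, keepdim=True)
    # Only supervise pixels where the teacher's knowledge is validated/confident.
    mask = (teacher_conf > conf_thresh).float()
    return (ce * mask).sum() / mask.sum().clamp(min=1.0)
```

The confidence mask is what lets the student learn only from validated knowledge, which is the intuition behind the student outperforming its teacher.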


Related research

04/13/2023 · Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning
Self-supervised learning (SSL) has made remarkable progress in visual re...

08/01/2020 · Distilling Visual Priors from Self-Supervised Learning
Convolutional Neural Networks (CNNs) are prone to overfit small training...

01/13/2022 · SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation
Feature regression is a simple way to distill large neural network model...

03/28/2023 · Enhancing Depth Completion with Multi-View Monitored Distillation
This paper presents a novel method for depth completion, which leverages...

05/28/2023 · DPHuBERT: Joint Distillation and Pruning of Self-Supervised Speech Models
Self-supervised learning (SSL) has achieved notable success in many spee...

02/04/2023 · MOMA: Distill from Self-Supervised Teachers
Contrastive Learning and Masked Image Modelling have demonstrated except...

05/22/2023 · EnSiam: Self-Supervised Learning With Ensemble Representations
Recently, contrastive self-supervised learning, where the proximity of r...
