Descriptor Distillation: a Teacher-Student-Regularized Framework for Learning Local Descriptors

09/23/2022
by Yuzhen Liu, et al.

Learning a fast and discriminative patch descriptor is a challenging topic in computer vision. Many recent works train descriptor learning networks by minimizing a triplet loss (or one of its variants), which is expected to decrease the distance between each positive pair and increase the distance between each negative pair. In practice, this expectation is only partially met, because the network optimizer converges to an imperfect local solution. To address this problem, together with the open problem of computational speed, we propose a Descriptor Distillation framework for local descriptor learning, called DesDis, in which a student model acquires knowledge from a pre-trained teacher model and is further enhanced by a designed teacher-student regularizer. This regularizer constrains the difference between the positive (and negative) pair similarities produced by the teacher model and those produced by the student model, and we theoretically prove that a student trained by minimizing a weighted combination of the triplet loss and this regularizer is more effective than its teacher, which is trained by minimizing the triplet loss alone. Under the proposed DesDis, many existing descriptor networks can be embedded as the teacher model, and both equal-weight and light-weight student models can be derived, which outperform their teacher in either accuracy or speed. Experimental results on three public datasets demonstrate that the equal-weight student models, derived from the proposed DesDis framework using three typical descriptor learning networks as teacher models, achieve significantly better performance than their teachers and several other comparative methods. In addition, the derived light-weight models achieve speeds 8 times faster, or more, than the comparative methods under similar patch verification performance.
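The abstract describes the student objective only at a high level: a weighted combination of a triplet loss and a teacher-student regularizer that constrains the gap between the teacher's and the student's positive (and negative) pair similarities. The PyTorch-style sketch below illustrates one plausible reading of that objective; it is an assumption, not the authors' implementation, and all names (student, teacher, margin, lambda_reg) are illustrative.

```python
# Hypothetical sketch of the DesDis student loss described in the abstract.
# Names and the exact regularizer form are assumptions, not the authors' code.
import torch
import torch.nn.functional as F

def triplet_loss(d_pos, d_neg, margin=1.0):
    # Standard margin-based triplet loss on descriptor distances.
    return F.relu(d_pos - d_neg + margin).mean()

def desdis_student_loss(anchor, positive, negative,
                        student, teacher, margin=1.0, lambda_reg=1.0):
    # Student descriptors (trainable).
    a_s, p_s, n_s = student(anchor), student(positive), student(negative)
    # Teacher descriptors (frozen, pre-trained).
    with torch.no_grad():
        a_t, p_t, n_t = teacher(anchor), teacher(positive), teacher(negative)

    # Positive- and negative-pair distances for student and teacher.
    d_pos_s = F.pairwise_distance(a_s, p_s)
    d_neg_s = F.pairwise_distance(a_s, n_s)
    d_pos_t = F.pairwise_distance(a_t, p_t)
    d_neg_t = F.pairwise_distance(a_t, n_t)

    # Triplet loss on the student's own distances.
    l_triplet = triplet_loss(d_pos_s, d_neg_s, margin)

    # Teacher-student regularizer: penalize the difference between the
    # teacher's and the student's positive (and negative) pair distances.
    l_reg = ((d_pos_s - d_pos_t) ** 2).mean() + ((d_neg_s - d_neg_t) ** 2).mean()

    # Weighted combination, as described in the abstract.
    return l_triplet + lambda_reg * l_reg
```

Euclidean distance is used here as the pair (dis)similarity measure; the paper may instead define the regularizer on cosine similarities or another formulation.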

Related research

03/15/2023 · Descriptor Distillation for Efficient Multi-Robot SLAM
Performing accurate localization while maintaining the low-level communi...

02/27/2023 · Leveraging Angular Distributions for Improved Knowledge Distillation
Knowledge distillation as a broad class of methods has led to the develo...

05/25/2023 · Triplet Knowledge Distillation
In Knowledge Distillation, the teacher is generally much larger than the...

01/21/2022 · Image-to-Video Re-Identification via Mutual Discriminative Knowledge Transfer
The gap in representations between image and video makes Image-to-Video ...

09/13/2020 · DistilE: Distiling Knowledge Graph Embeddings for Faster and Cheaper Reasoning
Knowledge Graph Embedding (KGE) is a popular method for KG reasoning and...

08/03/2020 · Teacher-Student Training and Triplet Loss for Facial Expression Recognition under Occlusion
In this paper, we study the task of facial expression recognition under ...

06/08/2021 · SDGMNet: Statistic-based Dynamic Gradient Modulation for Local Descriptor Learning
Modifications on triplet loss that rescale the back-propagated gradients...
