Self-supervised Training Sample Difficulty Balancing for Local Descriptor Learning

03/10/2023
by Jiahan Zhang, et al.

When positive and negative samples are imbalanced, hard negative mining strategies have been shown to help models learn subtler differences between positive and negative samples, thus improving recognition performance. However, applying an overly strict mining strategy to the dataset risks introducing false negative samples. Moreover, the mining strategy distorts the difficulty distribution of samples relative to the real dataset, which may cause the model to overfit these hard samples. In this paper, we therefore investigate how to balance the difficulty of mined samples in order to obtain and exploit high-quality negative samples, and we address the problem from the perspectives of both the loss function and the training strategy. The proposed balance loss provides an effective criterion for the quality of negative samples by incorporating a self-supervised component into the loss function, and applies a dynamic gradient modulation strategy to achieve finer-grained gradient adjustment for samples of different difficulties. The proposed annealing training strategy constrains the difficulty of the samples drawn by negative sample mining, supplying the loss function with data sources of different difficulty distributions, and trains the model on samples of decreasing difficulty. Extensive experiments show that our new descriptors outperform previous state-of-the-art descriptors on patch verification, matching, and retrieval tasks.
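As a rough illustration of the mining-plus-annealing idea described above (not the paper's actual balance loss or schedule), the following PyTorch-style sketch mines the hardest in-batch negative for each anchor while excluding negatives below a difficulty floor that rises over training, so the mined samples become progressively easier. All names (mine_negatives, annealed_floor, training_step) and the plain triplet margin loss are illustrative assumptions.

import torch

def mine_negatives(anchors, positives, min_neg_dist):
    """Hypothetical in-batch hard negative mining with a difficulty floor.

    anchors, positives: (B, D) L2-normalized descriptors, where positives[i]
    corresponds to anchors[i]. Negatives closer to the anchor than
    min_neg_dist are treated as "too hard" (possible false negatives)
    and excluded from mining.
    """
    dist = torch.cdist(anchors, positives)               # (B, B) pairwise distances
    pos_dist = dist.diagonal()                           # matching pairs sit on the diagonal
    neg_mask = ~torch.eye(len(anchors), dtype=torch.bool, device=dist.device)
    admissible = neg_mask & (dist >= min_neg_dist)       # drop negatives harder than the floor
    dist_masked = dist.masked_fill(~admissible, float('inf'))
    neg_dist, _ = dist_masked.min(dim=1)                 # hardest admissible negative per anchor
    return pos_dist, neg_dist

def annealed_floor(epoch, num_epochs, start=0.0, end=0.5):
    """Difficulty floor that rises over training, so mined negatives get easier."""
    return start + (end - start) * epoch / max(num_epochs - 1, 1)

def training_step(anchors, positives, epoch, num_epochs, margin=1.0):
    """One loss evaluation using a plain triplet margin loss as a stand-in."""
    pos_d, neg_d = mine_negatives(anchors, positives, annealed_floor(epoch, num_epochs))
    valid = torch.isfinite(neg_d)                        # anchors with at least one admissible negative
    return torch.clamp(pos_d[valid] - neg_d[valid] + margin, min=0).mean()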

Related research

Bayesian Self-Supervised Contrastive Learning (01/27/2023)
Recent years have witnessed many successful applications of contrastive ...

FALSE: False Negative Samples Aware Contrastive Learning for Semantic Segmentation of High-Resolution Remote Sensing Image (11/15/2022)
The existing SSCL of RSI is built based on constructing positive and neg...

Multi-Sample based Contrastive Loss for Top-k Recommendation (09/01/2021)
The top-k recommendation is a fundamental task in recommendation systems...

Negative Samples are at Large: Leveraging Hard-distance Elastic Loss for Re-identification (07/20/2022)
We present a Momentum Re-identification (MoReID) framework that can leve...

Selectively Hard Negative Mining for Alleviating Gradient Vanishing in Image-Text Matching (03/01/2023)
Recently, a series of Image-Text Matching (ITM) methods achieve impressi...

Dynamical Equations With Bottom-up Self-Organizing Properties Learn Accurate Dynamical Hierarchies Without Any Loss Function (02/04/2023)
Self-organization is ubiquitous in nature and mind. However, machine lea...

SDGMNet: Statistic-based Dynamic Gradient Modulation for Local Descriptor Learning (06/08/2021)
Modifications on triplet loss that rescale the back-propagated gradients...
