No Pairs Left Behind: Improving Metric Learning with Regularized Triplet Objective

10/18/2022
by A. Ali Heydari, et al.

We propose a novel formulation of the triplet objective function that improves metric learning without additional sample mining or overhead costs. Our approach explicitly regularizes the distance between the positive and negative samples in a triplet with respect to the anchor-negative distance. As an initial validation, we show that our method, called No Pairs Left Behind (NPLB), improves upon the traditional and current state-of-the-art triplet objective formulations on standard benchmark datasets. To show the effectiveness and potential of NPLB on complex real-world data, we evaluate our approach on a large-scale healthcare dataset (UK Biobank), demonstrating that the embeddings learned by our model significantly outperform all other current representations on tested downstream tasks. Additionally, we provide a new model-agnostic single-time health risk definition that, when used in tandem with the learned representations, achieves the most accurate prediction of subjects' future health complications. Our results indicate that NPLB is a simple yet effective framework for improving existing deep metric learning models, showcasing the potential of metric learning in more complex applications, especially in the biological and healthcare domains.
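The abstract describes adding a regularizer on the positive-negative distance, measured relative to the anchor-negative distance, on top of a standard triplet loss. A minimal sketch of that idea is below; the exact NPLB objective is given in the paper, so the specific form of the regularizer and the weight `lam` here are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def regularized_triplet_loss(anchor, positive, negative, margin=1.0, lam=0.5):
    """Standard triplet loss plus a penalty tying the positive-negative
    distance to the anchor-negative distance.

    NOTE: the squared-difference regularizer and the weight `lam` are
    assumptions for illustration; see the paper for the exact NPLB objective.
    """
    d_ap = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)   # anchor-negative distance
    d_pn = np.linalg.norm(positive - negative) # positive-negative distance

    # Classic triplet hinge term: pull positives closer than negatives by a margin.
    triplet = max(0.0, d_ap - d_an + margin)

    # Regularize d(p, n) with respect to d(a, n), per the abstract's description.
    reg = (d_pn - d_an) ** 2

    return triplet + lam * reg
```

For example, with `anchor = positive = [0, 0]` and `negative = [2, 0]`, both the hinge term and the regularizer vanish and the loss is zero, since the negative is already far from the anchor and equally far from the positive.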


