Learning Deep Embeddings with Histogram Loss

11/02/2016
by Evgeniya Ustinova, et al.

We suggest a loss for learning deep embeddings. The new loss does not introduce parameters that need to be tuned and results in very good embeddings across a range of datasets and problems. The loss is computed by estimating two distributions of similarities, one for positive (matching) and one for negative (non-matching) sample pairs, and then computing the probability that a positive pair has a lower similarity score than a negative pair under the estimated similarity distributions. We show that these operations can be performed in a simple and piecewise-differentiable manner using 1D histograms with soft assignment operations. This makes the proposed loss suitable for learning deep embeddings using stochastic optimization. In the experiments, the new loss performs favourably compared to recently proposed alternatives.
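To make the idea concrete, below is a minimal sketch of such a loss in PyTorch. It assumes cosine similarity between L2-normalised embeddings, evenly spaced histogram nodes on [-1, 1], and triangular (piecewise-linear) soft assignment; the function and parameter names (`histogram_loss`, `num_bins`) are illustrative and not taken from the authors' code.

```python
import torch
import torch.nn.functional as F


def histogram_loss(embeddings, labels, num_bins=151):
    """Sketch of a histogram-based embedding loss (illustrative, not the reference code)."""
    # Cosine similarity of L2-normalised embeddings lies in [-1, 1].
    x = F.normalize(embeddings, dim=1)
    sims = x @ x.t()

    # Split pairs into positive (same label) and negative (different label),
    # counting each unordered pair once via the upper triangle.
    n = labels.size(0)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    upper = torch.triu(torch.ones(n, n, dtype=torch.bool, device=x.device), diagonal=1)
    pos_sims = sims[same & upper]
    neg_sims = sims[(~same) & upper]

    # Soft assignment to evenly spaced histogram nodes with triangular
    # weights keeps the density estimates piecewise-differentiable.
    nodes = torch.linspace(-1.0, 1.0, num_bins, device=x.device)
    delta = 2.0 / (num_bins - 1)

    def soft_histogram(s):
        # Weight of similarity s_j in bin r: max(0, 1 - |s_j - t_r| / delta).
        w = (1.0 - (s.unsqueeze(0) - nodes.unsqueeze(1)).abs() / delta).clamp(min=0.0)
        h = w.sum(dim=1)
        return h / h.sum()

    h_pos = soft_histogram(pos_sims)   # estimated distribution of matching similarities
    h_neg = soft_histogram(neg_sims)   # estimated distribution of non-matching similarities

    # Estimated probability that a random positive pair is less similar
    # than a random negative pair: sum_r h_neg[r] * CDF_pos[r].
    cdf_pos = torch.cumsum(h_pos, dim=0)
    return (h_neg * cdf_pos).sum()
```

In a training loop this value would be minimised together with the embedding network; each batch must contain at least one matching and one non-matching pair for both histograms to be defined.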

Related research

04/06/2020  Continuous Histogram Loss: Beyond Neural Similarity
Similarity learning has gained a lot of attention from researchers in rec...

01/31/2020  Symmetrical Synthesis for Deep Metric Learning
Deep metric learning aims to learn embeddings that contain semantic simi...

01/11/2019  Retrieving Similar E-Commerce Images Using Deep Learning
In this paper, we propose a deep convolutional neural network for learni...

03/22/2022  Unified Negative Pair Generation toward Well-discriminative Feature Space for Face Recognition
The goal of face recognition (FR) can be viewed as a pair similarity opt...

02/26/2020  A Quadruplet Loss for Enforcing Semantically Coherent Embeddings in Multi-output Classification Problems
This paper describes one objective function for learning semantically co...

02/25/2020  Circle Loss: A Unified Perspective of Pair Similarity Optimization
This paper provides a pair similarity optimization viewpoint on deep fea...

07/04/2017  Learning Deep Energy Models: Contrastive Divergence vs. Amortized MLE
We propose a number of new algorithms for learning deep energy models an...
