Simple and Effective Balance of Contrastive Losses

12/22/2021
by Arnaud Sors, et al.

Contrastive losses have long been a key ingredient of deep metric learning and are now becoming more popular due to the success of self-supervised learning. Recent research has shown the benefit of decomposing such losses into two sub-losses which act in a complementary way when learning the representation network: a positive term and an entropy term. Although the overall loss is thus defined as a combination of two terms, the balance of these two terms is often hidden behind implementation details and is largely ignored and sub-optimal in practice. In this work, we approach the balance of contrastive losses as a hyper-parameter optimization problem, and propose a coordinate descent-based search method that efficiently finds the hyper-parameters that optimize evaluation performance. In the process, we extend existing balance analyses to the contrastive margin loss, include batch size in the balance, and explain how to aggregate loss elements from the batch to maintain near-optimal performance over a larger range of batch sizes. Extensive experiments with benchmarks from deep metric learning and self-supervised learning show that optimal hyper-parameters are found faster with our method than with other common search methods.
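To make the balance the abstract refers to concrete, the sketch below writes an InfoNCE-style contrastive loss as an explicit sum of a positive (alignment) term and an entropy-like log-sum-exp term over the batch, weighted by a balance coefficient. This is a minimal sketch under assumptions, not the paper's formulation: the function name, the `beta` coefficient, and the default temperature are all illustrative placeholders.

```python
# Minimal sketch, assuming a SimCLR-style setup with two augmented views per
# image. Not the paper's implementation: `balanced_contrastive_loss`, `beta`,
# and the default temperature are illustrative placeholders.
import torch
import torch.nn.functional as F

def balanced_contrastive_loss(z1, z2, temperature=0.1, beta=1.0):
    """z1, z2: (N, D) embeddings of two views of the same N images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)

    # Positive term: pulls each matched pair together.
    pos = -(z1 * z2).sum(dim=1) / temperature                      # (N,)

    # Entropy-like term: log-sum-exp of similarities to the rest of the batch,
    # which pushes embeddings apart (self-similarities are masked out).
    sim = z1 @ torch.cat([z1, z2]).t() / temperature               # (N, 2N)
    self_mask = torch.eye(z1.size(0), 2 * z1.size(0),
                          dtype=torch.bool, device=z1.device)
    ent = torch.logsumexp(sim.masked_fill(self_mask, float("-inf")), dim=1)

    # Explicit balance between the two sub-losses, rather than leaving it
    # implicit in temperature and batch-size choices.
    return (pos + beta * ent).mean()
```

In this reading, the balance the paper studies is the effective weight between the positive and entropy terms (exposed here as `beta`), which interacts with the temperature, the batch size, and how loss elements are aggregated over the batch; a coordinate descent search as described in the abstract would then tune one such hyper-parameter at a time against the evaluation metric.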

Related research

Self-Supervised Video Representation Using Pretext-Contrastive Learning (10/29/2020)
Pretext tasks and contrastive learning have been successful in self-supe...

Tuned Contrastive Learning (05/18/2023)
In recent times, contrastive learning based loss functions have become i...

Learning the Relation between Similarity Loss and Clustering Loss in Self-Supervised Learning (01/08/2023)
Self-supervised learning enables networks to learn discriminative featur...

Not All Semantics are Created Equal: Contrastive Self-supervised Learning with Automatic Temperature Individualization (05/19/2023)
In this paper, we aim to optimize a contrastive loss with individualized...

Deep Metric Learning with Spherical Embedding (11/05/2020)
Deep metric learning has attracted much attention in recent years, due t...

Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance (02/24/2022)
In this paper, we study contrastive learning from an optimization perspe...
