In Defense of the Triplet Loss for Visual Recognition

01/24/2019
by   Ahmed Taha, et al.

We employ triplet loss as a space embedding regularizer to boost classification performance. Standard architectures, like ResNet and DenseNet, are extended to support both losses with minimal hyper-parameter tuning. This promotes generality while fine-tuning pretrained networks. Triplet loss is a powerful surrogate for recently proposed embedding regularizers, yet it is often avoided because of its large batch-size requirement and high computational cost. Through our experiments, we re-assess these assumptions. During inference, our network supports both classification and embedding tasks without any computational overhead. Quantitative evaluation highlights how our approach compares favorably to the existing state of the art on multiple fine-grained recognition datasets. Further evaluation on an imbalanced video dataset achieves a significant improvement (>7%). Beyond boosting efficiency, triplet loss brings retrieval and interpretability to classification models.
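The following is a minimal sketch (not the authors' released code) of the general idea: a shared backbone produces an embedding that feeds a classification head, and training combines cross-entropy with a triplet loss on that embedding. The ResNet-50 backbone, embedding dimension, margin, and the weighting factor `lambda_trip` are illustrative assumptions, not values from the paper.

```python
# Sketch: triplet loss as an embedding regularizer alongside cross-entropy.
# Assumed hyper-parameters (embed_dim, margin, lambda_trip) are placeholders.
import torch
import torch.nn as nn
import torchvision.models as models


class ClassifierWithEmbedding(nn.Module):
    """ResNet-50 backbone returning both class logits and the embedding."""

    def __init__(self, num_classes, embed_dim=512):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()              # strip the original classifier
        self.backbone = backbone
        self.embed = nn.Linear(2048, embed_dim)  # embedding used for retrieval
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        feat = self.backbone(x)
        emb = self.embed(feat)
        logits = self.classifier(emb)
        return logits, emb


ce_loss = nn.CrossEntropyLoss()
triplet_loss = nn.TripletMarginLoss(margin=0.2)
lambda_trip = 1.0                                # regularizer weight (assumed)

model = ClassifierWithEmbedding(num_classes=100)


def joint_loss(anchor_x, pos_x, neg_x, anchor_y):
    """Cross-entropy on the anchor plus triplet loss on the three embeddings."""
    logits, emb_a = model(anchor_x)
    _, emb_p = model(pos_x)
    _, emb_n = model(neg_x)
    return ce_loss(logits, anchor_y) + lambda_trip * triplet_loss(emb_a, emb_p, emb_n)
```

At inference time only the classification head (or only the embedding, for retrieval) is used, so the extra loss adds no computational overhead, consistent with the claim in the abstract.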


Related Research

03/01/2017
Incorporating Intra-Class Variance to Fine-Grained Visual Recognition
Fine-grained visual recognition aims to capture discriminative character...

07/13/2021
Deep Ranking with Adaptive Margin Triplet Loss
We propose a simple modification from a fixed margin triplet loss to an ...

04/06/2017
Training Triplet Networks with GAN
Triplet networks are widely used models that are characterized by good p...

10/09/2021
Adversarial Training for Face Recognition Systems using Contrastive Adversarial Learning and Triplet Loss Fine-tuning
Though much work has been done in the domain of improving the adversaria...

09/22/2020
Beyond Triplet Loss: Person Re-identification with Fine-grained Difference-aware Pairwise Loss
Person Re-IDentification (ReID) aims at re-identifying persons from diff...

11/14/2022
Supervised Fine-tuning Evaluation for Long-term Visual Place Recognition
In this paper, we present a comprehensive study on the utility of deep c...

03/04/2021
SVMax: A Feature Embedding Regularizer
A neural network regularizer (e.g., weight decay) boosts performance by ...
