End-to-End Supervised Multilabel Contrastive Learning

by Ahmad Sajedi, et al.

Multilabel representation learning is recognized as a challenging problem that can be attributed to label dependencies between object categories or to data-related issues such as the inherent imbalance of positive/negative samples. Recent advances address these challenges from model- and data-centric viewpoints. In model-centric approaches, label correlation is captured through external model designs (e.g., graph CNNs) that incorporate an inductive bias into training. However, these methods lack an end-to-end training framework, leading to high computational complexity. In contrast, data-centric approaches exploit the realistic nature of the dataset to improve classification while ignoring label dependencies. In this paper, we propose a new end-to-end training framework – dubbed KMCL (Kernel-based Multilabel Contrastive Learning) – to address the shortcomings of both model- and data-centric designs. KMCL first transforms the embedded features into a mixture of exponential kernels in a Gaussian RKHS. It then optimizes an objective loss comprising (a) a reconstruction loss to reconstruct the kernel representation, (b) an asymmetric classification loss to address the inherent imbalance problem, and (c) a contrastive loss to capture label correlation. KMCL models the uncertainty of the feature encoder while maintaining a low computational footprint. Extensive experiments are conducted on image classification tasks to showcase the consistent improvements of KMCL over state-of-the-art (SOTA) methods. A PyTorch implementation is provided at <https://github.com/mahdihosseini/KMCL>.
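The three loss terms named in the abstract can be illustrated with a minimal NumPy sketch. Note this is an assumption-laden outline, not the authors' code: the function names (`asymmetric_loss`, `label_aware_contrastive_loss`, `kmcl_objective`), the mean-squared reconstruction term, the focusing parameters, and the loss weighting `lam` are all illustrative placeholders; the released PyTorch repository defines the actual kernel mixture and loss formulations.

```python
import numpy as np

def asymmetric_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0):
    """Asymmetric focal-style BCE: easy negatives are down-weighted more
    aggressively (gamma_neg > gamma_pos) to counter positive/negative
    imbalance in multilabel data. Hyperparameters here are illustrative."""
    p = 1.0 / (1.0 + np.exp(-logits))
    loss_pos = targets * ((1 - p) ** gamma_pos) * np.log(np.clip(p, 1e-8, 1.0))
    loss_neg = (1 - targets) * (p ** gamma_neg) * np.log(np.clip(1 - p, 1e-8, 1.0))
    return -np.mean(loss_pos + loss_neg)

def label_aware_contrastive_loss(features, targets, temperature=0.1):
    """Supervised contrastive term for multilabel data: samples sharing at
    least one label act as positives for each other (one simple choice of
    positive set; the paper's kernel-based formulation may differ)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity
    log_prob = sim - np.log(np.sum(np.exp(sim), axis=1, keepdims=True))
    pos_mask = (targets @ targets.T > 0).astype(float)
    np.fill_diagonal(pos_mask, 0.0)
    pos_counts = np.maximum(pos_mask.sum(axis=1), 1.0)
    safe_log_prob = np.where(np.isfinite(log_prob), log_prob, 0.0)
    return -np.mean((pos_mask * safe_log_prob).sum(axis=1) / pos_counts)

def kmcl_objective(features, recon, logits, targets, lam=0.3):
    """Combined objective in the spirit of KMCL: reconstruction +
    asymmetric classification + label-aware contrastive terms."""
    recon_loss = np.mean((features - recon) ** 2)
    return (recon_loss
            + asymmetric_loss(logits, targets)
            + lam * label_aware_contrastive_loss(features, targets))
```

The key design point carried over from the abstract is that all three terms are computed from the same encoder output in one forward pass, so the framework trains end-to-end without an external label-correlation model.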




