Multi-network Contrastive Learning Based on Global and Local Representations

06/28/2023
by   Weiquan Li, et al.

The rise of self-supervised learning has made it possible to train models without labeled data, avoiding expensive annotation costs. However, most existing self-supervised contrastive learning methods overlook the combination of global and local feature information. This paper proposes a multi-network contrastive learning framework based on global and local representations. We introduce global and local feature information into self-supervised contrastive learning through multiple networks. The model learns feature information at different scales of an image by contrasting the embedding pairs generated by the multiple networks. The framework also expands the number of samples used for contrast and improves the training efficiency of the model. Linear evaluation results on three benchmark datasets show that our method outperforms several classical self-supervised learning methods.
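The core mechanism described above — contrasting embedding pairs produced by different networks, with pairs from the same image as positives — can be illustrated with a minimal InfoNCE-style loss. This is a hypothetical sketch, not the paper's implementation: the embedding names, shapes, and the noise model simulating a correlated "local" view are all illustrative assumptions.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.5):
    """InfoNCE loss between two batches of embeddings.

    Row i of z_a is the positive pair of row i of z_b; every other
    row in the batch serves as a negative.
    """
    # L2-normalize so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives lie on the diagonal

rng = np.random.default_rng(0)
# Hypothetical embeddings from two branches (e.g. a "global" and a "local"
# network head); the local view is a perturbed copy of the global one.
global_emb = rng.normal(size=(8, 32))
local_emb = global_emb + 0.1 * rng.normal(size=(8, 32))
random_emb = rng.normal(size=(8, 32))

loss_pos = info_nce(global_emb, local_emb)
loss_rand = info_nce(global_emb, random_emb)
```

As a sanity check, the loss between correlated views should be lower than between unrelated embeddings, which is the training signal a multi-network contrastive framework optimizes.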


