Self-Supervised Representation Learning with Cross-Context Learning between Global and Hypercolumn Features

08/25/2023
by Zheng Gao, et al.

Whilst contrastive learning yields powerful representations by matching different augmented views of the same instance, it lacks the ability to capture the similarities between different instances. One popular way to address this limitation is to learn global features (after global pooling) that capture inter-instance relationships via knowledge distillation, where the global features of the teacher guide the learning of the global features of the student. Inspired by cross-modality learning, we extend this framework, which learns only from global features, by encouraging the global features and the intermediate-layer features to learn from each other. This leads to our novel self-supervised framework: cross-context learning between global and hypercolumn features (CGH), which enforces the consistency of instance relations between low- and high-level semantics. Specifically, we stack the intermediate feature maps to construct a hypercolumn representation, so that we can measure instance relations using two contexts (the hypercolumn and the global feature) separately, and then use the relations of one context to guide the learning of the other. This cross-context learning allows the model to learn from the differences between the two contexts. Experimental results on linear classification and downstream tasks show that our method outperforms state-of-the-art methods.
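
Below is a minimal PyTorch sketch of the two ideas described in the abstract: stacking intermediate feature maps into a hypercolumn representation, and letting the instance-relation distributions of the two contexts (global and hypercolumn) guide each other. This is not the authors' implementation; the function names, the pooling scheme, the temperature value, and the symmetric KL-style consistency loss are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's code). Assumes a backbone that
# exposes intermediate feature maps as a list of [B, C_i, H_i, W_i] tensors.
import torch
import torch.nn.functional as F


def build_hypercolumn(feature_maps, out_size=7):
    """Stack intermediate feature maps into a hypercolumn representation.

    Each map is resized to a common spatial size, concatenated along the
    channel dimension, then pooled to one vector per image.
    """
    resized = [F.adaptive_avg_pool2d(f, out_size) for f in feature_maps]
    hypercolumn = torch.cat(resized, dim=1)                    # [B, sum(C_i), S, S]
    return F.adaptive_avg_pool2d(hypercolumn, 1).flatten(1)    # [B, sum(C_i)]


def instance_relations(features, temperature=0.1):
    """Instance-relation distribution within a batch (self-pairs masked out)."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                              # [B, B] similarities
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, -1e4)                          # suppress self-similarity
    return F.log_softmax(sim, dim=1), F.softmax(sim, dim=1)


def cross_context_loss(global_feats, hypercolumn_feats):
    """Relations measured in one context serve as (detached) targets for the other."""
    log_pg, pg = instance_relations(global_feats)
    log_ph, ph = instance_relations(hypercolumn_feats)
    loss_g = F.kl_div(log_pg, ph.detach(), reduction="batchmean")
    loss_h = F.kl_div(log_ph, pg.detach(), reduction="batchmean")
    return loss_g + loss_h


if __name__ == "__main__":
    # Toy example: fake multi-stage feature maps for a batch of 8 images.
    feats = [torch.randn(8, c, s, s) for c, s in [(64, 28), (128, 14), (256, 7)]]
    hyper = build_hypercolumn(feats)
    glob = torch.randn(8, 256)  # stand-in for the pooled global features
    print(cross_context_loss(glob, hyper).item())
```

In this sketch, the `detach()` calls mirror the teacher/student asymmetry mentioned in the abstract: each context's relation distribution acts as a fixed target while the other context is updated towards it.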
