EnSiam: Self-Supervised Learning With Ensemble Representations

05/22/2023
by   Kyoungmin Han, et al.

Recently, contrastive self-supervised learning, in which the proximity of representations is determined by the identities of samples, has made remarkable progress in unsupervised representation learning. SimSiam is a well-known example in this area, noted for its simplicity and strong performance. However, due to its structural characteristics, it is sensitive to changes in training configurations such as hyperparameters and augmentation settings. To address this issue, we focus on the similarity between contrastive learning and the teacher-student framework in knowledge distillation. Inspired by ensemble-based knowledge distillation, the proposed method, EnSiam, aims to improve the contrastive learning procedure using ensemble representations. These provide stable pseudo-labels, which lead to better performance. Experiments demonstrate that EnSiam outperforms previous state-of-the-art methods in most cases, including experiments on ImageNet, which shows that EnSiam is capable of learning high-quality representations.
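
To make the described idea concrete, below is a minimal PyTorch sketch of an ensemble-target, SimSiam-style update. The class name `EnsembleSiamese`, the predictor architecture, and the exact way the ensemble target is formed (averaging the representations of several augmented views and applying a stop-gradient) are illustrative assumptions based on the abstract, not the authors' released implementation.

```python
# Hypothetical sketch of an ensemble-target, SimSiam-style loss.
# Assumption: the "ensemble representation" is the average of the
# representations of K augmented views, used as a detached target.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EnsembleSiamese(nn.Module):
    def __init__(self, encoder: nn.Module, dim: int = 2048, pred_dim: int = 512):
        super().__init__()
        self.encoder = encoder  # backbone + projection MLP, outputs dim-d vectors
        self.predictor = nn.Sequential(  # SimSiam-style prediction head
            nn.Linear(dim, pred_dim),
            nn.BatchNorm1d(pred_dim),
            nn.ReLU(inplace=True),
            nn.Linear(pred_dim, dim),
        )

    def forward(self, views: list) -> torch.Tensor:
        # Encode every augmented view of the same image batch: K tensors of shape (B, dim).
        z = [self.encoder(v) for v in views]
        # Ensemble target: average of all view representations, detached so
        # gradients do not flow into the target (stop-gradient, as in SimSiam).
        target = F.normalize(torch.stack(z).mean(dim=0), dim=1).detach()
        # Negative cosine similarity between each view's prediction and the target.
        loss = 0.0
        for zk in z:
            p = F.normalize(self.predictor(zk), dim=1)
            loss = loss - (p * target).sum(dim=1).mean()
        return loss / len(z)


# Usage (hypothetical): given two or more augmented views of an image batch,
#   model = EnsembleSiamese(backbone_with_projector)
#   loss = model([aug1(images), aug2(images), aug3(images)])
#   loss.backward()
```

Compared with plain SimSiam, which predicts the representation of a single other view, averaging over several views would smooth out augmentation noise in the target, which is one plausible reading of how the ensemble yields more stable pseudo-labels.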

