Mixture of Self-Supervised Learning

07/27/2023
by   Aristo Renaldo Ruslim, et al.

Self-supervised learning is a popular method because it can learn image features without using labels, helping to overcome the limited labeled datasets available to supervised learning. Self-supervised learning works by training the model on a pretext task before applying it to a specific downstream task. Examples of pretext tasks used in self-supervised learning for image recognition include rotation prediction, solving jigsaw puzzles, and predicting the relative positions of image patches. Previous studies have used only one type of transformation as a pretext task, which raises the question of what happens when more than one pretext task is used and a gating network is employed to combine them. We therefore propose the Gated Self-Supervised Learning method for improving image classification, which uses more than one transformation as a pretext task and a Mixture of Experts architecture as a gating network to combine the pretext tasks, so that the model can automatically learn to focus on the augmentations that are most useful for classification. We test the performance of the proposed method in several scenarios, namely CIFAR imbalanced-dataset classification, adversarial perturbations, Tiny-ImageNet classification, and semi-supervised learning. Moreover, Grad-CAM and t-SNE analyses are used to verify that the proposed method identifies the important features that influence image classification and represents the data of each class while separating different classes properly. Our code is available at https://github.com/aristorenaldo/G-SSL
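To make the idea concrete, the sketch below shows one plausible way to wire a shared encoder, several pretext-task heads (rotation, jigsaw, relative position), and a softmax gate that weights the per-task losses. This is not the authors' implementation; the class name, output dimensions, and the exact gating formulation are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class GatedSSLSketch(nn.Module):
    """Hypothetical sketch of gated multi-pretext-task self-supervised learning.

    A shared encoder feeds a supervised classification head and one head per
    pretext task; a small gating network produces per-task weights that combine
    the pretext losses (names and dimensions are illustrative assumptions).
    """

    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int,
                 pretext_out_dims=(4, 24, 8)):
        super().__init__()
        self.encoder = encoder  # shared backbone, e.g. a ResNet trunk
        self.cls_head = nn.Linear(feat_dim, num_classes)
        # one linear head per pretext task: rotation angles, jigsaw permutations,
        # relative patch positions (output sizes are assumed values)
        self.pretext_heads = nn.ModuleList(
            nn.Linear(feat_dim, d) for d in pretext_out_dims
        )
        # gating network: maps shared features to a softmax weight per pretext task
        self.gate = nn.Sequential(
            nn.Linear(feat_dim, len(pretext_out_dims)),
            nn.Softmax(dim=-1),
        )

    def forward(self, x, pretext_inputs, pretext_targets, targets):
        ce = nn.CrossEntropyLoss()
        feats = self.encoder(x)
        cls_loss = ce(self.cls_head(feats), targets)

        # per-task pretext losses, each computed on its own transformed views
        task_losses = []
        for head, xt, yt in zip(self.pretext_heads, pretext_inputs, pretext_targets):
            task_losses.append(ce(head(self.encoder(xt)), yt))
        task_losses = torch.stack(task_losses)   # shape: (num_tasks,)

        # gate weights averaged over the batch decide each task's contribution
        weights = self.gate(feats).mean(dim=0)    # shape: (num_tasks,)
        ssl_loss = (weights * task_losses).sum()
        return cls_loss + ssl_loss
```

In this reading, the gate lets the optimizer down-weight pretext tasks whose augmentations are less informative for the classification objective, rather than summing all pretext losses with fixed coefficients; the paper's actual weighting scheme may differ.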
