Survey on Self-supervised Representation Learning Using Image Transformations

02/17/2022
by   Muhammad Ali, et al.

Deep neural networks need a huge amount of training data, yet in the real world there is a scarcity of data available for training purposes. To address this, self-supervised learning (SSL) methods are used. SSL using geometric transformations (GT) is a simple yet powerful technique for unsupervised representation learning. Although multiple survey papers have reviewed SSL techniques, none focuses exclusively on those that use geometric transformations, and such methods have not been covered in depth in the surveys where they do appear. Our motivation for this work is that geometric transformations have been shown to be powerful supervisory signals in unsupervised representation learning; moreover, many such works have found considerable success but have not gained much attention. We present a concise survey of SSL approaches that use geometric transformations. We shortlist six representative models that use image transformations, including those based on predicting and on autoencoding transformations, and review their architectures as well as their learning methodologies. We also compare the performance of these models on the object recognition task using the CIFAR-10 and ImageNet datasets. Our analysis indicates that AETv2 performs best in most settings, while rotation prediction with feature decoupling also performs well in some settings. We then derive insights from the observed results. Finally, we conclude with a summary of the results and insights, highlight open problems to be addressed, and indicate various future directions.
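To make the transformation-prediction idea concrete, below is a minimal sketch (not taken from any of the surveyed papers) of how a rotation-based pretext task generates its own labels: each image is rotated by 0, 90, 180, or 270 degrees, and the rotation index serves as the pseudo-label a network is then trained to classify. The function name and the use of NumPy here are illustrative assumptions, not part of the survey.

```python
import numpy as np

def make_rotation_batch(images):
    """Given a batch of HxWxC images, return all four rotated copies
    (0, 90, 180, 270 degrees counter-clockwise) together with the
    rotation index k as a self-generated pseudo-label.

    A classifier trained to predict k from the rotated image must
    learn object structure -- the core idea of rotation-prediction SSL.
    """
    rotated, labels = [], []
    for img in images:
        for k in range(4):  # rotate by k * 90 degrees
            rotated.append(np.rot90(img, k=k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

# Example: two random 32x32 RGB "images" (CIFAR-10-sized)
batch = np.random.rand(2, 32, 32, 3)
x, y = make_rotation_batch(batch)
print(x.shape)      # (8, 32, 32, 3)
print(y.tolist())   # [0, 1, 2, 3, 0, 1, 2, 3]
```

Autoencoding-transformation methods such as AET differ in that, instead of classifying a discrete transformation label, the network is trained to reconstruct the (possibly continuous) transformation parameters from the original and transformed image pair.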


