Ablation study of self-supervised learning for image classification

12/04/2021
by Ilias Papastratis, et al.

This project focuses on the self-supervised training of convolutional neural networks (CNNs) and transformer networks for the task of image recognition. A simple siamese network with different backbones is used to maximize the similarity between two augmented views of the same source image. In this way, the backbone learns visual representations without supervision. Finally, the method is evaluated on three image-recognition datasets.
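The siamese objective described above can be sketched as a symmetric negative cosine similarity between each branch's prediction and the other branch's representation (in the style of SimSiam). The function names below are illustrative assumptions, not the project's actual code; in a real training loop the representations `z1`, `z2` would be detached (stop-gradient) before computing the loss.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

def siamese_loss(p1, z2, p2, z1):
    """Symmetric negative cosine similarity loss.

    p1, p2: predictions from each branch of the siamese network
    z1, z2: representations of the two augmented views (treated as
            constants via stop-gradient in the original formulation)
    """
    return -0.5 * (cosine_sim(p1, z2) + cosine_sim(p2, z1))

# Toy example: when each prediction points in the same direction as the
# other view's representation, the loss reaches its minimum of -1.
p1 = z2 = np.array([1.0, 0.0])
p2 = z1 = np.array([0.0, 1.0])
print(siamese_loss(p1, z2, p2, z1))  # → -1.0
```

Minimizing this loss pushes the two augmented views toward the same point in feature space, which is how the backbone picks up visual structure without labels.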

