Multiple Pretext-Task for Self-Supervised Learning via Mixing Multiple Image Transformations

12/25/2019
by Shin'ya Yamaguchi, et al.

Self-supervised learning is one of the most promising approaches to learning representations that capture semantic features of images without any manual annotation cost. To learn useful representations, a self-supervised model solves a pretext-task defined by the data itself. Among the many pretext-tasks, the rotation prediction task (Rotation) produces representations that transfer well to various target tasks despite its simple implementation. However, we found that Rotation can fail to capture semantic features related to image textures and colors. To tackle this problem, we introduce a learning technique called multiple pretext-task for self-supervised learning (MP-SSL), which solves multiple pretext-tasks in addition to Rotation simultaneously. To capture texture and color features, we employ image enhancement transformations (e.g., sharpening and solarizing) as the additional pretext-tasks. MP-SSL trains a model efficiently by leveraging a Frank-Wolfe based multi-task training algorithm. Our experimental results show that MP-SSL models outperform Rotation on multiple standard benchmarks and achieve state-of-the-art performance on Places-205.
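The abstract does not include code, but the two ingredients it describes are straightforward to sketch: (1) self-supervised classification heads that predict which rotation or which image enhancement was applied to an input, and (2) a multi-task weighting of the per-task losses. The sketch below is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the names (MultiPretextNet, ENHANCEMENTS, two_task_min_norm), the toy encoder, and the specific enhancement set are all assumptions. For the multi-task step it uses the closed-form two-task solution of the min-norm subproblem from Sener and Koltun's Frank-Wolfe based MGDA (2018), which is the kind of algorithm the abstract references.

```python
# Hypothetical sketch of the MP-SSL idea; names and architecture are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms.functional as TF

ENHANCEMENTS = [  # assumed label set: "which enhancement was applied?"
    lambda x: x,                            # 0: identity
    lambda x: TF.adjust_sharpness(x, 2.0),  # 1: sharpen
    lambda x: TF.solarize(x, 0.5),          # 2: solarize (float images in [0, 1])
    lambda x: TF.adjust_contrast(x, 2.0),   # 3: contrast boost
]

class MultiPretextNet(nn.Module):
    """Shared encoder with one classification head per pretext-task."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(       # toy backbone; swap in a ResNet etc.
            nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rot_head = nn.Linear(feat_dim, 4)                  # 0/90/180/270 deg
        self.enh_head = nn.Linear(feat_dim, len(ENHANCEMENTS))

def make_rotation_batch(x):
    """Rotate each image by a random multiple of 90 degrees; label = multiple."""
    labels = torch.randint(0, 4, (x.size(0),))
    imgs = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                        for img, k in zip(x, labels)])
    return imgs, labels

def make_enhance_batch(x):
    """Apply a random enhancement to each image; label = which enhancement."""
    labels = torch.randint(0, len(ENHANCEMENTS), (x.size(0),))
    imgs = torch.stack([ENHANCEMENTS[int(k)](img) for img, k in zip(x, labels)])
    return imgs, labels

def two_task_min_norm(g1, g2):
    """Closed-form two-task solution of the Frank-Wolfe / MGDA subproblem:
    the alpha in [0, 1] minimizing ||alpha*g1 + (1 - alpha)*g2||^2."""
    diff = g1 - g2
    alpha = torch.dot(g2, diff) / (diff.dot(diff) + 1e-12)
    return alpha.clamp(0.0, 1.0)

def training_step(model, x, optimizer):
    # Per-task losses computed on the shared encoder.
    rx, ry = make_rotation_batch(x)
    ex, ey = make_enhance_batch(x)
    loss_rot = F.cross_entropy(model.rot_head(model.encoder(rx)), ry)
    loss_enh = F.cross_entropy(model.enh_head(model.encoder(ex)), ey)

    # Gradients of each loss w.r.t. the shared encoder parameters only.
    shared = list(model.encoder.parameters())
    g_rot = torch.cat([g.flatten() for g in
                       torch.autograd.grad(loss_rot, shared, retain_graph=True)])
    g_enh = torch.cat([g.flatten() for g in
                       torch.autograd.grad(loss_enh, shared, retain_graph=True)])

    # Weight the losses by the min-norm combination and take one step.
    alpha = two_task_min_norm(g_rot, g_enh)
    optimizer.zero_grad()
    (alpha * loss_rot + (1 - alpha) * loss_enh).backward()
    optimizer.step()
    return float(loss_rot), float(loss_enh), float(alpha)
```

In such a setup, training_step would be run over an unlabeled image dataset; after pretraining, the per-task heads are discarded and model.encoder is transferred to the target task (e.g., linear evaluation on Places-205).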

