
Multiple Pretext-Task for Self-Supervised Learning via Mixing Multiple Image Transformations

12/25/2019
by   Shin'ya Yamaguchi, et al.
NIPPON TELEGRAPH AND TELEPHONE CORPORATION

Self-supervised learning is one of the most promising approaches to learning representations that capture semantic features in images without any manual annotation cost. To learn useful representations, a self-supervised model solves a pretext task, which is defined by the data itself. Among the many pretext tasks, the rotation prediction task (Rotation) achieves better representations for solving various target tasks despite the simplicity of its implementation. However, we found that Rotation can fail to capture semantic features related to image textures and colors. To tackle this problem, we introduce a learning technique called multiple pretext-task for self-supervised learning (MP-SSL), which solves multiple pretext tasks in addition to Rotation simultaneously. To capture features of textures and colors, we employ image-enhancement transformations (e.g., sharpening and solarizing) as the additional pretext tasks. MP-SSL trains a model efficiently by leveraging a Frank-Wolfe based multi-task training algorithm. Our experimental results show that MP-SSL models outperform Rotation on multiple standard benchmarks and achieve state-of-the-art performance on Places-205.
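To make the two main ingredients concrete, here is a minimal sketch (not the authors' implementation; function names are illustrative) of (a) generating the labeled inputs for the Rotation pretext task and (b) the closed-form two-task case of the min-norm weight step used in Frank-Wolfe style multi-task training, where each task's loss gradient on the shared parameters is combined with weights that minimize the norm of the combined gradient:

```python
import numpy as np

def make_rotation_batch(image):
    """Rotation pretext task: return the four rotated copies of an image
    together with labels 0-3 encoding 0, 90, 180, and 270 degrees.
    A model is then trained to classify which rotation was applied."""
    rotated = [np.rot90(image, k=k) for k in range(4)]
    labels = list(range(4))
    return rotated, labels

def min_norm_weights(g1, g2):
    """Two-task special case of the min-norm solver used in Frank-Wolfe
    (MGDA-style) multi-task training: find alpha in [0, 1] minimizing
    ||alpha * g1 + (1 - alpha) * g2||^2, where g1 and g2 are the
    per-task gradient vectors of the shared parameters."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        # Gradients coincide; any convex combination is optimal.
        return 0.5, 0.5
    alpha = float((g2 - g1) @ g2) / denom
    alpha = min(max(alpha, 0.0), 1.0)  # project onto the simplex edge
    return alpha, 1.0 - alpha
```

With more than two tasks (e.g., Rotation plus several enhancement-prediction tasks), the same min-norm objective is solved iteratively by Frank-Wolfe steps rather than in closed form; the two-task expression above is the inner line search that makes each step cheap.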

12/04/2019

Self-Supervised Learning of Pretext-Invariant Representations

The goal of self-supervised learning from images is to construct image r...
01/09/2020

Semi-supervised Learning via Conditional Rotation Angle Estimation

Self-supervised learning (SlfSL), aiming at learning feature representat...
07/28/2022

Self-supervised learning with rotation-invariant kernels

A major paradigm for learning image representations in a self-supervised...
02/20/2021

Self-Supervised Learning via multi-Transformation Classification for Action Recognition

Self-supervised tasks have been utilized to build useful representations...
05/29/2021

Orienting Novel 3D Objects Using Self-Supervised Learning of Rotation Transforms

Orienting objects is a critical component in the automation of many pack...
06/09/2021

Self-supervised Feature Enhancement: Applying Internal Pretext Task to Supervised Learning

Traditional self-supervised learning requires CNNs using external pretex...
06/22/2020

Don't Wait, Just Weight: Improving Unsupervised Representations by Learning Goal-Driven Instance Weights

In the absence of large labelled datasets, self-supervised learning tech...