Boosting Self-Supervised Learning via Knowledge Transfer

05/01/2018
by Mehdi Noroozi, et al.

In self-supervised learning, one trains a model to solve a so-called pretext task on a dataset without the need for human annotation. The main objective, however, is to transfer this model to a target domain and task. Currently, the most effective transfer strategy is fine-tuning, which restricts one to using the same model, or parts thereof, for both the pretext and target tasks. In this paper, we present a novel framework for self-supervised learning that overcomes limitations in designing and comparing different tasks, models, and data domains. In particular, our framework decouples the structure of the self-supervised model from the final task-specific fine-tuned model. This allows us to: 1) quantitatively assess previously incompatible models, including handcrafted features; 2) show that deeper neural network models can learn better representations from the same pretext task; 3) transfer knowledge learned with a deep model to a shallower one and thus boost its learning. We use this framework to design a novel self-supervised task, which achieves state-of-the-art performance on the common benchmarks in PASCAL VOC 2007, ILSVRC12, and Places by a significant margin. Our learned features shrink the mAP gap between models trained via self-supervised learning and supervised learning from 5.9% to 2.6% in object detection on PASCAL VOC 2007.
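The knowledge-transfer step in point 3 works by re-encoding what the deep pretext-trained model has learned into architecture-agnostic pseudo-labels: the paper clusters the deep model's features on unlabeled images and trains the smaller target model to predict the cluster assignments. Below is a minimal sketch of that idea, assuming PyTorch and scikit-learn; the shapes, the toy linear student, and the random stand-in data are illustrative placeholders, not the authors' implementation.

import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Stand-in data (assumption: real usage would take features from the deep
# pretext-trained network and raw images from an unlabeled dataset).
N, D_FEAT, D_IMG, K = 1024, 512, 3072, 10   # samples, feature dim, image dim, pseudo-classes
deep_features = torch.randn(N, D_FEAT)      # activations of the deep pretext model
images = torch.randn(N, D_IMG)              # flattened "images" fed to the student

# Step 1: cluster the deep features; each cluster index becomes a pseudo-label.
kmeans = KMeans(n_clusters=K, n_init=10).fit(deep_features.numpy())
pseudo_labels = torch.from_numpy(kmeans.labels_).long()

# Step 2: train a shallower student to predict the pseudo-labels from raw
# inputs (in the paper's setting the student is a smaller convnet, e.g. AlexNet).
student = nn.Sequential(nn.Linear(D_IMG, 256), nn.ReLU(), nn.Linear(256, K))
optimizer = torch.optim.SGD(student.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = criterion(student(images), pseudo_labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: pseudo-label loss = {loss.item():.3f}")

Because the pseudo-labels are just integers attached to images, the student's architecture is fully decoupled from the deep model that produced them, which is what lets the framework compare and transfer between otherwise incompatible models.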


Related research

09/19/2021 · A Study of the Generalizability of Self-Supervised Representations
Recent advancements in self-supervised learning (SSL) made it possible t...

03/11/2021 · Self-supervised Text-to-SQL Learning with Header Alignment Training
Since we can leverage a large amount of unlabeled data without any human...

12/16/2022 · Toward Improved Generalization: Meta Transfer of Self-supervised Knowledge on Graphs
Despite the remarkable success achieved by graph convolutional networks ...

10/20/2022 · SS-VAERR: Self-Supervised Apparent Emotional Reaction Recognition from Video
This work focuses on the apparent emotional reaction recognition (AERR) ...

02/06/2018 · Learning Image Representations by Completing Damaged Jigsaw Puzzles
In this paper, we explore methods of complicating self-supervised tasks ...

10/21/2021 · Self-Supervised Visual Representation Learning Using Lightweight Architectures
In self-supervised learning, a model is trained to solve a pretext task,...

06/30/2022 · Improving the Generalization of Supervised Models
We consider the problem of training a deep neural network on a given cla...
