Aggregative Self-Supervised Feature Learning

12/14/2020
by Jiuwen Zhu, et al.

Self-supervised learning (SSL) is an effective approach to the shortage of annotations. The key component of SSL is its proxy task, which defines the supervisory signal and drives learning toward effective feature representations. However, most SSL approaches focus on a single proxy task, which greatly limits the expressive power of the learned features and therefore degrades the network's generalization capacity. In this regard, we propose three aggregation strategies that exploit complementarity in various forms to boost the robustness of self-supervised features. In spatial-context aggregative SSL, we contribute a heuristic SSL method that integrates two ad-hoc proxy tasks with complementary spatial context, modeling global and local contextual features, respectively. We then propose a principled framework of multi-task aggregative self-supervised learning that forms a unified representation, with the intent of exploiting feature complementarity among different tasks. Finally, in self-aggregative SSL, we propose to self-complement an existing proxy task with an auxiliary loss function based on a linear centered kernel alignment (CKA) metric, which explicitly encourages the network to explore feature directions not covered by the proxy task at hand and thereby further boosts modeling capability. Extensive experiments on 2D natural-image and 3D medical-image classification tasks under limited-annotation scenarios confirm that the proposed aggregation strategies consistently improve classification accuracy.
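The self-aggregative strategy builds on linear centered kernel alignment, a standard similarity measure between two sets of features. Below is a minimal sketch of how a CKA-based complementarity term could be combined with a proxy-task loss, assuming PyTorch; the names (linear_cka, feats_main, feats_aux, lam) are illustrative assumptions and do not reflect the paper's actual implementation.

```python
import torch

def linear_cka(X, Y):
    """Linear centered kernel alignment between two feature matrices.

    X: (n, d1) features from the main proxy-task head.
    Y: (n, d2) features from the auxiliary head.
    Returns a similarity in [0, 1]; adding it to the loss penalizes
    redundancy between the two feature sets.
    """
    # Center each feature dimension over the batch.
    X = X - X.mean(dim=0, keepdim=True)
    Y = Y - Y.mean(dim=0, keepdim=True)

    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = (Y.t() @ X).norm(p='fro') ** 2
    norm_x = (X.t() @ X).norm(p='fro')
    norm_y = (Y.t() @ Y).norm(p='fro')
    return cross / (norm_x * norm_y + 1e-8)

def total_loss(proxy_loss, feats_main, feats_aux, lam=0.1):
    # Hypothetical combination: minimizing CKA pushes the auxiliary
    # features to cover directions the main proxy task leaves out.
    return proxy_loss + lam * linear_cka(feats_main, feats_aux)
```

In this sketch, the weight lam trades off fidelity to the original proxy task against the pressure to learn complementary, non-redundant features.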

Related research

03/30/2020
Improving out-of-distribution generalization via multi-task self-supervised pretraining
Self-supervised feature representations have been shown to be useful for...

02/01/2023
Image-Based Vehicle Classification by Synergizing Features from Supervised and Self-Supervised Learning Paradigms
This paper introduces a novel approach to leverage features learned from...

06/10/2020
Embedding Task Knowledge into 3D Neural Networks via Self-supervised Learning
Deep learning highly relies on the amount of annotated data. However, an...

01/02/2020
Video Cloze Procedure for Self-Supervised Spatio-Temporal Learning
We propose a novel self-supervised method, referred to as Video Cloze Pr...

03/06/2021
Imbalance-Aware Self-Supervised Learning for 3D Radiomic Representations
Radiomic representations can quantify properties of regions of interest ...

11/15/2022
A Point in the Right Direction: Vector Prediction for Spatially-aware Self-supervised Volumetric Representation Learning
High annotation costs and limited labels for dense 3D medical imaging ta...

08/19/2021
Concurrent Discrimination and Alignment for Self-Supervised Feature Learning
Existing self-supervised learning methods learn representation by means ...
