Multi-Pretext Attention Network for Few-shot Learning with Self-supervision

03/10/2021
by   Hainan Li, et al.

Few-shot learning is an interesting and challenging problem, which enables machines to learn from a few samples, as humans do. Existing studies rarely exploit auxiliary information from large amounts of unlabeled data. Self-supervised learning has emerged as an efficient method for utilizing unlabeled data. Existing self-supervised learning methods typically rely on combinations of geometric transformations applied to single samples via augmentation, while seriously neglecting the endogenous correlation information among different samples, which is equally important for the task. In this work, we propose Graph-driven Clustering (GC), a novel augmentation-free method for self-supervised learning, which does not rely on any auxiliary samples and utilizes the endogenous correlation information among input samples. In addition, we propose the Multi-pretext Attention Network (MAN), which exploits a specific attention mechanism to combine the traditional augmentation-based methods and our GC, adaptively learning their optimal weights to improve performance and enabling the feature extractor to obtain more universal representations. We evaluate MAN extensively on the miniImageNet and tieredImageNet datasets, and the results demonstrate that the proposed method outperforms the state-of-the-art (SOTA) relevant methods.
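The abstract describes combining several pretext tasks with adaptively learned attention weights, but gives no implementation details. A minimal sketch of that general idea, assuming softmax-normalized weights over per-task losses (the function names and the choice of tasks here are hypothetical, not taken from the paper):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def combined_pretext_loss(task_losses, attention_logits):
    """Combine per-pretext-task losses with softmax attention weights.

    task_losses: one loss per pretext task (e.g. rotation prediction,
        a jigsaw task, and the proposed graph-driven clustering).
    attention_logits: one learnable score per task; in an actual
        training loop these would be produced by an attention module
        and updated by gradient descent rather than fixed by hand.
    Returns the weighted total loss and the attention weights.
    """
    weights = softmax(np.asarray(attention_logits, dtype=float))
    total = float(np.dot(weights, np.asarray(task_losses, dtype=float)))
    return total, weights
```

With equal logits the tasks are weighted uniformly; as training shifts the logits, more informative pretext tasks receive larger weights in the total objective.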
