Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning

03/17/2022
by Yangji He, et al.

This paper presents new hierarchically cascaded transformers that improve data efficiency through attribute surrogate learning and spectral tokens pooling. Vision transformers have recently emerged as a promising alternative to convolutional neural networks for visual recognition, but when training data are insufficient they tend to overfit and deliver inferior performance. To improve data efficiency, we propose hierarchically cascaded transformers that exploit intrinsic image structures through spectral tokens pooling and optimize the learnable parameters through latent attribute surrogates. Spectral tokens pooling uses the intrinsic image structure to reduce the ambiguity between foreground content and background noise, while the attribute surrogate learning scheme is designed to benefit from the rich visual information in image-label pairs rather than the simple visual concepts assigned by their labels. Our Hierarchically Cascaded Transformers, called HCTransformers, are built upon the self-supervised learning framework DINO and are evaluated on several popular few-shot learning benchmarks. In the inductive setting, HCTransformers surpass the DINO baseline by a large margin of 9.7% on miniImageNet, which demonstrates that HCTransformers extract discriminative features efficiently. HCTransformers also show clear advantages over state-of-the-art few-shot classification methods in both 5-way 1-shot and 5-way 5-shot settings on four popular benchmark datasets: miniImageNet, tieredImageNet, FC100, and CIFAR-FS. The trained weights and code are available at https://github.com/StomachCold/HCTransformers.
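The abstract describes spectral tokens pooling only at a high level. The sketch below illustrates the general idea under plain assumptions: patch tokens are grouped by spectral clustering on their pairwise cosine-similarity graph, and tokens within each group are average-pooled into a smaller set of tokens. The function name `spectral_tokens_pooling`, its signature, and the group count are hypothetical and are not taken from the authors' released code.

```python
# Minimal sketch of spectral tokens pooling (not the authors' implementation).
import torch


def spectral_tokens_pooling(tokens: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Pool N patch tokens into `num_groups` tokens via spectral clustering.

    tokens: (N, D) patch embeddings from one image (CLS token excluded).
    Returns: (num_groups, D) pooled tokens.
    """
    n, _ = tokens.shape

    # 1. Affinity matrix from cosine similarity between tokens.
    feats = torch.nn.functional.normalize(tokens, dim=-1)
    affinity = (feats @ feats.t()).clamp(min=0)

    # 2. Symmetric normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}.
    deg = affinity.sum(dim=-1)
    d_inv_sqrt = deg.clamp(min=1e-8).rsqrt()
    laplacian = torch.eye(n) - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]

    # 3. Spectral embedding: eigenvectors of the smallest eigenvalues.
    _, eigvecs = torch.linalg.eigh(laplacian)
    embedding = eigvecs[:, :num_groups]  # (N, num_groups)

    # 4. Simple k-means on the spectral embedding to assign tokens to groups.
    centers = embedding[torch.randperm(n)[:num_groups]]
    for _ in range(10):
        assign = torch.cdist(embedding, centers).argmin(dim=-1)  # (N,)
        for k in range(num_groups):
            mask = assign == k
            if mask.any():
                centers[k] = embedding[mask].mean(dim=0)

    # 5. Average-pool the original tokens within each group.
    pooled = torch.stack([
        tokens[assign == k].mean(dim=0) if (assign == k).any() else tokens.mean(dim=0)
        for k in range(num_groups)
    ])
    return pooled


# Example: pool 196 ViT patch tokens (14x14 grid, dim 384) into 49 tokens.
if __name__ == "__main__":
    patch_tokens = torch.randn(196, 384)
    pooled_tokens = spectral_tokens_pooling(patch_tokens, num_groups=49)
    print(pooled_tokens.shape)  # torch.Size([49, 384])
```

Pooling tokens by groups derived from the image's own similarity structure, rather than by a fixed spatial grid, is one way to keep foreground regions separated from background noise, which is the motivation the abstract gives for this component.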
