Mask-guided Vision Transformer (MG-ViT) for Few-Shot Learning

05/20/2022
by Yuzhong Chen, et al.

Learning from little data is challenging but often unavoidable in application scenarios where labeled data are limited and costly. Recently, few-shot learning (FSL) has gained increasing attention for its ability to generalize prior knowledge to new tasks that contain only a few samples. However, for data-intensive models such as the vision transformer (ViT), current fine-tuning based FSL approaches are inefficient at knowledge generalization and thus degrade downstream task performance. In this paper, we propose a novel mask-guided vision transformer (MG-ViT) to achieve effective and efficient FSL on the ViT model. The key idea is to apply a mask to the image patches to screen out task-irrelevant ones and to guide the ViT to focus on task-relevant and discriminative patches during FSL. Notably, MG-ViT introduces only an additional mask operation and a residual connection, so it inherits the parameters of a pre-trained ViT at no extra cost. To select representative few-shot samples, we also include an active learning based sample selection method that further improves the generalizability of MG-ViT based FSL. We evaluate the proposed MG-ViT on both the Agri-ImageNet classification task and the ACFR apple detection task, using gradient-weighted class activation mapping (Grad-CAM) as the mask. The experimental results show that MG-ViT significantly outperforms general fine-tuning based ViT models, providing novel insights and a concrete approach towards generalizing data-intensive and large-scale deep learning models for FSL.
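
To make the masking idea concrete, below is a minimal PyTorch sketch of Grad-CAM guided patch masking with a residual connection. It is not the authors' implementation: the function name, the keep ratio, and the exact way the saliency map is pooled per patch are illustrative assumptions, and only the general mechanism (screen out low-saliency patches, keep a residual path to the original tokens) follows the description above.

```python
# Hedged sketch of mask-guided patch selection for a ViT (assumed interface).
# patch_tokens: (B, N, D) patch embeddings without the CLS token.
# gradcam_map:  (B, N) one saliency score per patch, e.g. Grad-CAM pooled
#               over each patch's pixels; keep_ratio is a user-chosen fraction.
import torch

def mask_guided_patches(patch_tokens: torch.Tensor,
                        gradcam_map: torch.Tensor,
                        keep_ratio: float = 0.5) -> torch.Tensor:
    B, N, D = patch_tokens.shape
    k = max(1, int(keep_ratio * N))

    # Indices of the k most salient (task-relevant) patches per image.
    topk_idx = gradcam_map.topk(k, dim=1).indices            # (B, k)

    # Binary mask: 1 for task-relevant patches, 0 for screened-out ones.
    mask = torch.zeros(B, N, device=patch_tokens.device)
    mask.scatter_(1, topk_idx, 1.0)                          # (B, N)

    # Zero out task-irrelevant patch tokens.
    masked = patch_tokens * mask.unsqueeze(-1)               # (B, N, D)

    # Residual connection back to the original tokens, so the pre-trained
    # ViT weights can be reused without any other architectural change.
    return masked + patch_tokens
```

In this sketch the masked tokens would then be fed to the standard ViT encoder; only the mask operation and the residual addition are new, which is consistent with the claim that MG-ViT adds no other cost on top of the pre-trained ViT.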

