APPLeNet: Visual Attention Parameterized Prompt Learning for Few-Shot Remote Sensing Image Generalization using CLIP

04/12/2023
by Mainak Singha, et al.

In recent years, the success of large-scale vision-language models (VLMs) such as CLIP has led to their increased use in a variety of computer vision tasks. These models enable zero-shot inference through carefully crafted instructional text prompts, without task-specific supervision. However, the potential of VLMs for generalization tasks in remote sensing (RS) has not been fully realized. To address this research gap, we propose a novel image-conditioned prompt learning strategy called the Visual Attention Parameterized Prompt Learning Network (APPLeNet). APPLeNet emphasizes the importance of multi-scale feature learning for RS scene classification and disentangles visual style and content primitives for domain generalization. To achieve this, APPLeNet combines visual content features obtained from different layers of the vision encoder with style properties obtained from the feature statistics of domain-specific batches. An attention-driven injection module is then introduced to generate visual tokens from this information. Because this visual information is combined with the textual tokens, we also introduce an anti-correlation regularizer to ensure discrimination among the token embeddings. To validate APPLeNet, we curated four available RS benchmarks and introduced experimental protocols and datasets for three domain generalization tasks. Our results consistently outperform the relevant literature, and our code is available at https://github.com/mainaksingha01/APPLeNet.
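The pipeline described above can be sketched in a few lines. The sketch below is purely illustrative and not the authors' implementation: `style_stats` stands in for the batch feature statistics used as style primitives, `attention_inject` for the attention-driven injection module (with random vectors in place of learned queries), and `anti_correlation_loss` for the regularizer that discourages redundant token embeddings. All function names and shapes are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def style_stats(batch_feats):
    """Per-channel mean and std over a domain-specific batch (style primitives)."""
    return batch_feats.mean(axis=0), batch_feats.std(axis=0)

def attention_inject(vectors, n_tokens=4):
    """Attention-driven injection (illustrative): pool multi-scale content and
    style vectors into n_tokens visual tokens via scaled dot-product attention.
    Random queries stand in for the learned parameters of the real module."""
    ctx = np.stack(vectors)                       # (n_ctx, d) context vectors
    d = ctx.shape[1]
    queries = rng.standard_normal((n_tokens, d))  # stand-in for learned queries
    scores = queries @ ctx.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)             # softmax attention weights
    return w @ ctx                                # (n_tokens, d) visual tokens

def anti_correlation_loss(tokens, eps=1e-8):
    """Mean squared off-diagonal cosine similarity among token embeddings:
    zero when the tokens are mutually orthogonal (maximally discriminative)."""
    t = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + eps)
    sim = t @ t.T
    n = tokens.shape[0]
    off = sim - np.diag(np.diag(sim))
    return float((off ** 2).sum() / (n * (n - 1)))
```

In use, the per-layer content features and the batch style statistics would be fed jointly to the injection module, and the resulting visual tokens concatenated with the learnable textual prompt tokens before the text encoder, with the anti-correlation loss added to the training objective.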

