RPLKG: Robust Prompt Learning with Knowledge Graph

04/21/2023
by   Yewon Kim, et al.

Large-scale pre-trained models are known to transfer well and to generalize to unseen datasets. Recently, multimodal pre-trained models such as CLIP have shown significant performance improvements across diverse experiments. However, when labeled data is limited, generalizing to a new dataset or domain remains challenging. To improve generalization in few-shot learning, diverse approaches such as prompt learning and adapters have been proposed. However, current few-shot adaptation methods are not interpretable and require a high computation cost for adaptation. In this study, we propose a new method, Robust Prompt Learning with Knowledge Graph (RPLKG). Based on a knowledge graph, we automatically design diverse, interpretable, and meaningful prompt sets. Our model obtains cached embeddings of the prompt sets after a single forward pass through the large pre-trained model. The model then optimizes the prompt selection process with Gumbel-Softmax, so it can be trained with relatively little memory and learning time. RPLKG also selects the optimal interpretable prompt automatically, depending on the dataset. In summary, RPLKG is i) interpretable, ii) requires small computation resources, and iii) easy to combine with prior human knowledge. To validate RPLKG, we provide comprehensive experimental results on few-shot learning, domain generalization, and new-class generalization settings. RPLKG shows a significant performance improvement over zero-shot learning and competitive performance against several prompt learning methods while using far fewer resources.
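The core mechanism the abstract describes is differentiable selection over a set of cached prompt embeddings via Gumbel-Softmax. The sketch below illustrates that idea in plain Python; it is not the authors' implementation, and the prompt texts, dimensions, and logits are made-up placeholders.

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, rng=random):
    # Sample Gumbel(0,1) noise for each logit (uniform bounded away from 0/1
    # to keep the double log finite), then take a temperature-scaled softmax.
    noise = [-math.log(-math.log(rng.uniform(1e-9, 1.0 - 1e-9)))
             for _ in logits]
    z = [(l + n) / tau for l, n in zip(logits, noise)]
    m = max(z)                       # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Cached prompt embeddings: in RPLKG these come from a single forward pass
# of the frozen pre-trained encoder; here they are hypothetical 4-d vectors.
prompt_embeddings = [
    [0.1, 0.2, 0.3, 0.4],   # e.g. "a photo of a {class}"
    [0.5, 0.1, 0.0, 0.2],   # e.g. "a {class}, a type of object"
    [0.3, 0.3, 0.3, 0.1],   # e.g. a knowledge-graph-derived prompt
]
logits = [0.2, 1.5, -0.3]   # learnable prompt-selection parameters

w = gumbel_softmax(logits, tau=0.5)
# Differentiable "selection": a weighted mixture of the cached embeddings.
# As tau -> 0 the weights approach a one-hot choice of a single prompt.
mixed = [sum(wi * emb[d] for wi, emb in zip(w, prompt_embeddings))
         for d in range(4)]
```

Because only the selection logits are optimized and the pre-trained model is queried once to build the cache, each training step avoids a forward pass through the large backbone, which is what keeps memory and training time low.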

Related research

10/15/2020 - Multi-label Few/Zero-shot Learning with Knowledge Aggregated from Multiple Label Graphs
Few/Zero-shot learning is a big challenge for many classification tasks,...

09/25/2022 - Collaboration of Pre-trained Models Makes Better Few-shot Learner
Few-shot classification requires deep neural networks to learn generaliz...

10/13/2022 - MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting
Large pre-trained models have proved to be remarkable zero- and (prompt-...

05/13/2020 - A Biologically Inspired Feature Enhancement Framework for Zero-Shot Learning
Most of the Zero-Shot Learning (ZSL) algorithms currently use pre-traine...

03/23/2023 - Exploring Visual Prompts for Whole Slide Image Classification with Multiple Instance Learning
Multiple instance learning (MIL) has emerged as a popular method for cla...

11/16/2022 - On Measuring the Intrinsic Few-Shot Hardness of Datasets
While advances in pre-training have led to dramatic improvements in few-...

03/09/2023 - Knowledge-augmented Few-shot Visual Relation Detection
Visual Relation Detection (VRD) aims to detect relationships between obj...
