
Language-Aware Soft Prompting for Vision Language Foundation Models

by   Adrian Bulat, et al.
Queen Mary University of London

This paper is on soft prompt learning for Vision & Language (V&L) models. Similarly to their NLP counterparts, V&L models can be adapted to a downstream task by learning soft continuous prompts from a few training examples. Current methods learn the soft prompts by minimizing a cross-entropy loss, using as class weights the features obtained by passing the prompts plus the class names through the text encoder. Such methods, however, significantly overfit the training data, suffering large accuracy degradation when tested on unseen classes from the same domain. Our main contribution in this paper is a surprisingly simple approach to alleviate this problem: we use a second cross-entropy loss to minimize the distance between the learned soft prompts and a set of hand-engineered manual prompts (obtained by prompt engineering). The proposed loss can be interpreted in multiple ways: as a regularizer, as a means of language-based augmentation, and as a way of learning more discriminative class centroids. Importantly, our formulation is inherently amenable to including, during training, virtual classes, i.e. class names for which no visual samples are available, further increasing the robustness of the learned prompts. Through extensive evaluations on 11 datasets, we show that our approach (a) significantly outperforms all prior works on soft prompting, and (b) matches and surpasses, for the first time, the accuracy on novel classes obtained by hand-crafted prompts and CLIP for the majority of the test datasets. Code will be made available.
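The abstract describes two cross-entropy terms: the usual vision-to-text classification loss over learned-prompt class embeddings, and a second, text-to-text loss that pulls each learned class embedding toward the hand-crafted prompt embedding of the same class. The sketch below illustrates that two-loss structure in PyTorch under a CLIP-style setup; the function name `lasp_losses`, the temperature `tau`, and the tensor shapes are our assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def lasp_losses(image_feats, learned_class_feats, manual_class_feats,
                labels, tau=0.07):
    """Sketch of the two cross-entropy losses for language-aware soft prompting.

    image_feats:         (B, D) L2-normalized image embeddings
    learned_class_feats: (C, D) text embeddings of [soft prompt + class name]
    manual_class_feats:  (C, D) text embeddings of hand-engineered prompts
    labels:              (B,)   ground-truth class indices
    """
    # Standard loss: classify each image against the learned-prompt
    # class embeddings (cosine similarity scaled by a temperature).
    logits_v = image_feats @ learned_class_feats.t() / tau
    loss_vision = F.cross_entropy(logits_v, labels)

    # Second loss: each learned class embedding should score highest
    # against the manual-prompt embedding of the *same* class. This is
    # the term that regularizes the soft prompts toward the
    # hand-crafted ones (and admits virtual classes, since it needs
    # only class names, not images).
    logits_t = learned_class_feats @ manual_class_feats.t() / tau
    class_ids = torch.arange(learned_class_feats.size(0))
    loss_text = F.cross_entropy(logits_t, class_ids)

    return loss_vision, loss_text
```

In training one would minimize a weighted sum of the two returned terms; adding virtual classes amounts to appending extra rows to the two class-embedding matrices without adding any image samples.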
