ConES: Concept Embedding Search for Parameter Efficient Tuning Large Vision Language Models

05/30/2023
by Huahui Yi, et al.

Large pre-trained vision-language models have shown great promise in transferring pre-acquired knowledge to various domains and downstream tasks with appropriate prompting or tuning. Existing prevalent tuning methods can be generally categorized into three genres: 1) prompt engineering by creating suitable prompt texts, which is time-consuming and requires domain expertise; 2) simply fine-tuning the whole model, which is extremely inefficient; and 3) prompt tuning through parameterized prompt embeddings with the text encoder. Nevertheless, all of these methods rely on the text encoder to bridge the modality gap between vision and language. In this work, we question the necessity of the cumbersome text encoder, seeking a more lightweight and efficient tuning paradigm as well as more representative prompt embeddings that lie closer to the image representations. To this end, we propose a Concept Embedding Search (ConES) approach that optimizes prompt embeddings, without the need for the text encoder, to capture the 'concept' of the image modality through a variety of task objectives. By dropping the text encoder, we can significantly speed up the learning process, e.g., from about an hour to just ten minutes in our experiments on personalized text-to-image generation, without impairing the generation quality. Moreover, our proposed approach is orthogonal to existing tuning methods, since the searched concept embeddings can be further utilized in a subsequent stage of fine-tuning the pre-trained large model to boost performance. Extensive experiments show that our approach can beat prompt tuning and textual inversion methods on a variety of downstream tasks, including object detection, instance segmentation, and image generation. Our approach also shows better generalization capability for unseen concepts in specialized domains, such as the medical domain.
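The core idea can be illustrated with a minimal PyTorch sketch: the text encoder is discarded, and a small set of learnable embeddings is optimized directly against the task loss while the pre-trained backbone stays frozen. Names such as frozen_vlm, task_loss, and the prompt_embeddings keyword argument are hypothetical placeholders assumed for illustration, not the authors' actual interface.

import torch
import torch.nn as nn

class ConceptEmbeddings(nn.Module):
    """Learnable 'concept' embeddings that stand in for text-encoder outputs."""

    def __init__(self, num_tokens: int = 8, embed_dim: int = 512):
        super().__init__()
        # Small random init; these are the only trainable parameters.
        self.embeddings = nn.Parameter(torch.randn(num_tokens, embed_dim) * 0.02)

    def forward(self) -> torch.Tensor:
        return self.embeddings


def search_concept_embeddings(frozen_vlm, task_loss, data_loader,
                              steps: int = 1000, lr: float = 1e-3):
    """Optimize prompt embeddings against a task objective.

    The backbone is frozen throughout, so the text encoder is never invoked
    and only the concept embeddings receive gradients.
    """
    concepts = ConceptEmbeddings()
    optimizer = torch.optim.AdamW(concepts.parameters(), lr=lr)

    frozen_vlm.eval()
    for p in frozen_vlm.parameters():
        p.requires_grad_(False)

    step = 0
    while step < steps:
        for images, targets in data_loader:
            # Feed images and the learned embeddings straight into the
            # cross-modal head, bypassing the text encoder entirely.
            # (`prompt_embeddings` is an assumed interface of `frozen_vlm`.)
            outputs = frozen_vlm(images, prompt_embeddings=concepts())
            loss = task_loss(outputs, targets)  # e.g., a detection or diffusion loss

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            step += 1
            if step >= steps:
                break
    return concepts

Because only the embedding matrix is updated, the searched concepts can later be plugged into a full fine-tuning stage, which is what makes the approach orthogonal to existing tuning methods.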

Related research:

07/28/2022
Pro-tuning: Unified Prompt Tuning for Vision Tasks
In computer vision, fine-tuning is the de-facto approach to leverage pre...

08/17/2022
Class-Aware Visual Prompt Tuning for Vision-Language Pre-Trained Model
With the emergence of large pre-trained vision-language model like CLIP, ...

11/25/2022
CLIP-ReID: Exploiting Vision-Language Model for Image Re-Identification without Concrete Text Labels
Pre-trained vision-language models like CLIP have recently shown superio...

05/14/2023
Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity
We present a comprehensive evaluation of Parameter-Efficient Fine-Tuning...

08/18/2023
VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control
As the model size of pre-trained language models (PLMs) grows rapidly, f...

01/05/2023
CiT: Curation in Training for Effective Vision-Language Data
Large vision-language models are generally applicable to many downstream...

05/26/2023
PIP: Parse-Instructed Prefix for Syntactically Controlled Paraphrase Generation
Syntactically controlled paraphrase generation requires language models ...
