Equivariant Similarity for Vision-Language Foundation Models

03/25/2023
by Tan Wang, et al.

This study explores the concept of equivariance in vision-language foundation models (VLMs), focusing on the multimodal similarity function that is not only the major training objective but also the core capability delivered to downstream tasks. Unlike the existing image-text similarity objective, which only categorizes matched pairs as similar and unmatched pairs as dissimilar, equivariance also requires the similarity score to vary faithfully with semantic changes. This allows VLMs to generalize better to nuanced and unseen multimodal compositions. However, modeling equivariance is challenging because ground truth for semantic change is difficult to collect. For example, given an image-text pair about a dog, it is unclear to what extent the similarity should change when the image content is edited from a dog to a cat. To this end, we propose EqSim, a regularization loss that can be computed efficiently from any two matched training pairs and easily plugged into existing image-text retrieval fine-tuning. Meanwhile, to further diagnose the equivariance of VLMs, we present EqBen, a new and challenging benchmark. Compared to existing evaluation sets, EqBen is the first to focus on "visual-minimal change". Extensive experiments reveal the lack of equivariance in current VLMs and validate the effectiveness of EqSim. Code is available at <https://github.com/Wangt-CN/EqBen>.
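The abstract leaves the exact form of the EqSim loss unspecified. The sketch below is one plausible PyTorch formulation of an equivariance regularizer over two matched pairs, not the authors' implementation: the function name eqsim_loss, the smooth-L1 penalty, the L2-normalization step, and the weight lambda_eq are all illustrative assumptions. The idea it instantiates is the one stated above: the similarity change induced by a semantic edit should look consistent no matter which matched pair it is measured against.

```python
import torch
import torch.nn.functional as F


def eqsim_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
    """Equivariance regularizer sketch for two matched pairs (I1, T1), (I2, T2).

    img_emb, txt_emb: (2, d) image/text embeddings; row i belongs to pair i.
    (Hypothetical formulation, not the paper's exact loss.)
    """
    # Cosine similarities: sim[i, j] = s(I_i, T_j)
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    sim = img @ txt.t()

    # If similarity is equivariant, the score change induced by swapping the
    # caption (T1 -> T2) should look the same from both images' viewpoints:
    # s(I1, T1) - s(I1, T2)  vs.  s(I2, T1) - s(I2, T2).
    delta_from_i1 = sim[0, 0] - sim[0, 1]
    delta_from_i2 = sim[1, 0] - sim[1, 1]

    # Penalize the discrepancy with a smooth L1, which tolerates small,
    # harmless deviations while punishing large inconsistencies.
    return F.smooth_l1_loss(delta_from_i1, delta_from_i2)


# Hypothetical fine-tuning step: add the regularizer to the usual
# contrastive objective with an illustrative weight lambda_eq.
# loss = contrastive_loss + lambda_eq * eqsim_loss(img_emb, txt_emb)
```

Because the regularizer needs only the 2x2 similarity matrix of two matched pairs, it can be evaluated on pairs already present in any retrieval fine-tuning batch, which is consistent with the abstract's claim that it is cheap to compute and easy to plug in.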


Related research

06/15/2023 · COSA: Concatenated Sample Pretrained Vision-Language Foundation Model
Due to the limited scale and quality of video-text training corpus, most...

08/29/2022 · Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment
Vision and Language Pretraining has become the prevalent approach for ta...

07/13/2023 · Leveraging Vision-Language Foundation Models for Fine-Grained Downstream Tasks
Vision-language foundation models such as CLIP have shown impressive zer...

11/01/2022 · Training Vision-Language Models with Less Bimodal Supervision
Standard practice in pretraining multimodal models, such as vision-langu...

01/12/2023 · Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks
Foundation models or pre-trained models have substantially improved the ...

06/01/2023 · Differential Diffusion: Giving Each Pixel Its Strength
Text-based image editing has advanced significantly in recent years. Wit...

04/06/2022 · UIGR: Unified Interactive Garment Retrieval
Interactive garment retrieval (IGR) aims to retrieve a target garment im...
