One does not fit all! On the Complementarity of Vision Encoders for Vision and Language Tasks

10/12/2022
by Gregor Geigle, et al.

Current multimodal models, aimed at solving Vision and Language (V+L) tasks, predominantly repurpose Vision Encoders (VE) as feature extractors. While many VEs – of different architectures, trained on different data and objectives – are publicly available, they are not designed for the downstream V+L tasks. Nonetheless, most current work assumes that a single pre-trained VE can serve as a general-purpose encoder. In this work, we evaluate whether the information stored within different VEs is complementary, i.e., whether providing the model with features from multiple VEs improves performance on a target task. We exhaustively experiment with three popular VEs on six downstream V+L tasks and analyze the attention and VE-dropout patterns. Our results and analyses suggest that diverse VEs complement each other, leading to improved downstream V+L task performance, and that the improvements are not due to simple ensemble effects (i.e., performance does not always improve when the number of encoders is increased). We demonstrate that future VEs, which are not repurposed but explicitly designed for V+L tasks, have the potential to improve performance on target V+L tasks.
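
A minimal sketch of the multi-VE feature-fusion idea described above, assuming PyTorch; the class name (MultiVEFusion), the shared projection dimension, and fusion by token-wise concatenation are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the paper's code): fusing features from several pretrained
# Vision Encoders (VEs) before passing them to a multimodal V+L model.
import torch
import torch.nn as nn

class MultiVEFusion(nn.Module):
    def __init__(self, encoders, feature_dims, shared_dim=768):
        super().__init__()
        # `encoders`: frozen, pretrained VEs mapping an image batch to
        # patch/region features of shape (B, N_i, feature_dims[i]).
        self.encoders = nn.ModuleList(encoders)
        # One linear projection per VE into a shared embedding space,
        # so features from different encoders can be combined.
        self.projections = nn.ModuleList(
            nn.Linear(d, shared_dim) for d in feature_dims
        )

    def forward(self, images):
        fused = []
        for enc, proj in zip(self.encoders, self.projections):
            with torch.no_grad():          # VEs are used as frozen feature extractors
                feats = enc(images)        # (B, N_i, d_i)
            fused.append(proj(feats))      # (B, N_i, shared_dim)
        # Concatenate along the token axis; the downstream multimodal
        # transformer can then attend over tokens from all VEs jointly.
        return torch.cat(fused, dim=1)     # (B, sum_i N_i, shared_dim)
```

Concatenating projected tokens (rather than averaging encoder outputs) keeps each VE's features distinct, which matches the paper's framing of providing the model with features from multiple VEs and studying whether they complement one another.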
