How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?

09/19/2022
by Lovisa Hagström, et al.

Current language models have been criticised for learning language from text alone, without any connection between words and their meaning. Consequently, multimodal training has been proposed as a way to create models with better language understanding by providing this missing connection. We focus on pre-trained multimodal vision-and-language (VL) models, for which there already exist some results on language understanding capabilities. An unresolved issue with evaluating the linguistic skills of these models, however, is that there is no established method for adapting them to text-only input without out-of-distribution uncertainty. To find the best approach, we investigate and compare seven possible methods for adapting three different pre-trained VL models to text-only input. Our evaluations on both GLUE and Visual Property Norms (VPN) show that care should be taken when adapting VL models to zero-shot text-only tasks, while the models are less sensitive to how we adapt them to non-zero-shot tasks. We also find that the adaptation methods perform differently for different models, and that unimodal model counterparts perform on par with the VL models regardless of adaptation, indicating that current VL models do not necessarily gain better language understanding from their multimodal training.
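To make the adaptation problem concrete, here is a minimal sketch of one possible way to feed text-only input to a pre-trained VL model: pairing the text with a blank image so the model still receives a well-formed multimodal input. This is only an illustration of the kind of method the paper compares; the specific model (CLIP via HuggingFace Transformers, checkpoint `openai/clip-vit-base-patch32`) and the blank-image trick are assumptions for the example, not necessarily the paper's recommended approach.

```python
# Illustrative sketch (not the paper's method): adapting a VL model
# to text-only input by substituting a blank image for the missing
# visual modality.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# A uniform black image stands in for the visual input the model
# was pre-trained to expect.
blank_image = Image.new("RGB", (224, 224), color=0)
texts = ["A raven is black.", "A raven is green."]

inputs = processor(
    text=texts, images=blank_image, return_tensors="pt", padding=True
)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores; with a blank image, differences between
# the two texts reflect mostly the text encoder, which is what a
# text-only evaluation is trying to probe.
print(outputs.logits_per_image)
```

The risk such a sketch makes visible is exactly the paper's concern: a blank image is out-of-distribution for the model, so scores obtained this way may not faithfully reflect its linguistic skills, which is why the choice of adaptation method matters.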

