Teaching Structured Vision Language Concepts to Vision Language Models

11/21/2022
by Sivan Doveh et al.

Vision and Language (VL) models have demonstrated remarkable zero-shot performance in a variety of tasks. However, some aspects of complex language understanding still remain a challenge. We introduce the collective notion of Structured Vision Language Concepts (SVLC), which includes object attributes, relations, and states that are present in the text and visible in the image. Recent studies have shown that even the best VL models struggle with SVLC. A possible way of fixing this issue is by collecting dedicated datasets for teaching each SVLC type, yet this might be expensive and time-consuming. Instead, we propose a more elegant data-driven approach for enhancing VL models' understanding of SVLCs that makes more effective use of existing VL pre-training datasets and does not require any additional data. While automatic understanding of image structure remains largely unsolved, language structure is much better modeled and understood, allowing for its effective utilization in teaching VL models. In this paper, we propose various techniques based on language structure understanding that can be used to manipulate the textual part of off-the-shelf paired VL datasets. VL models trained with the manipulated data exhibit a significant improvement of up to 15% in their SVLC understanding, with only a mild degradation in their zero-shot capabilities, both when training from scratch and when fine-tuning a pre-trained model.
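The abstract gives no implementation details, but the flavor of such a language-driven text manipulation can be illustrated with a minimal Python sketch. The sketch below assumes a purely rule-based variant: swapping one attribute word in an existing caption to produce a hard negative that no longer matches the paired image. The names COLORS and make_svlc_negative are illustrative assumptions, not identifiers from the paper.

import random
from typing import Optional

# Hypothetical attribute vocabulary; the paper's actual rules would
# cover more SVLC types (attributes, relations, states) extracted
# with language-analysis tools.
COLORS = ["red", "blue", "green", "black", "white", "yellow"]

def make_svlc_negative(caption: str) -> Optional[str]:
    """Swap the first color word in a caption for a different color,
    yielding a text that contradicts the paired image."""
    words = caption.split()
    for i, w in enumerate(words):
        if w.lower() in COLORS:
            words[i] = random.choice([c for c in COLORS if c != w.lower()])
            return " ".join(words)
    return None  # caption contains no attribute this rule can manipulate

print(make_svlc_negative("a red car parked near a blue house"))
# e.g. -> "a green car parked near a blue house"

In training, such a generated negative would presumably be contrasted against the original caption for the same image, pushing the model to attend to the manipulated concept rather than treating the caption as a bag of nouns.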

Related research

05/10/2023 · Incorporating Structured Representations into Pretrained Vision Language Models Using Scene Graphs
Vision and Language (VL) models have demonstrated remarkable zero-shot p...

03/30/2023 · Going Beyond Nouns With Vision Language Models Using Synthetic Data
Large-scale pre-trained Vision Language (VL) models have shown remar...

11/17/2022 · ConStruct-VL: Data-Free Continual Structured VL Concepts Learning
Recently, large-scale pre-trained Vision-and-Language (VL) foundation mo...

08/16/2023 · Painter: Teaching Auto-regressive Language Models to Draw Sketches
Large language models (LLMs) have made tremendous progress in natural la...

06/07/2023 · UniBoost: Unsupervised Unimodal Pre-training for Boosting Zero-shot Vision-Language Tasks
Large-scale joint training of multimodal models, e.g., CLIP, have demons...

02/23/2023 · Teaching CLIP to Count to Ten
Large vision-language models (VLMs), such as CLIP, learn rich joint imag...

09/08/2022 · FETA: Towards Specializing Foundation Models for Expert Task Applications
Foundation Models (FMs) have demonstrated unprecedented capabilities inc...
