SITTA: A Semantic Image-Text Alignment for Image Captioning

07/10/2023
by Fabian Paischer, et al.

Textual and semantic comprehension of images is essential for generating proper captions. This comprehension requires detecting objects, modeling the relations between them, assessing the semantics of the scene, and, finally, representing the extracted knowledge in a language space. To achieve rich language capabilities while ensuring good image-language mappings, pretrained language models (LMs) have been conditioned on pretrained multi-modal (image-text) models that allow for image inputs. This requires aligning the image representation of the multi-modal model with the language representations of a generative LM. However, it is not clear how to best transfer the semantics detected by the vision encoder of the multi-modal model to the LM. We introduce two novel ways of constructing a linear mapping that successfully transfers semantics between the embedding spaces of the two pretrained models. The first aligns the embedding space of the multi-modal language encoder with the embedding space of the pretrained LM via token correspondences. The second leverages additional data consisting of image-text pairs to construct the mapping directly from the vision space to the language space. Using our semantic mappings, we unlock image captioning for LMs without access to gradient information. By using different sources of data, we achieve strong captioning performance on the MS-COCO and Flickr30k datasets. Even with limited data, our method partly exceeds the performance of other zero-shot and even finetuned competitors. Our ablation studies show that even LMs at a scale of merely 250M parameters can generate decent captions when equipped with our semantic mappings. Our approach makes image captioning more accessible for institutions with restricted computational resources.
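The abstract sketches the core mechanism: a linear map that carries representations from the multi-modal encoder's space into the LM's token-embedding space. The snippet below is a minimal illustration of the token-correspondence idea using off-the-shelf CLIP and GPT-2 checkpoints; the specific model names, the raw vocabulary-intersection heuristic, and the plain least-squares solve are assumptions made for brevity, not the authors' exact procedure.

```python
# Hypothetical sketch: fit a linear map from the CLIP text-encoder space to the
# LM's token-embedding space by least squares over tokens shared by both
# vocabularies (an illustrative stand-in for the paper's token-correspondence mapping).
import torch
from transformers import CLIPModel, CLIPTokenizer, GPT2LMHeadModel, GPT2Tokenizer

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
clip_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()
lm_tok = GPT2Tokenizer.from_pretrained("gpt2")

# Tokens whose surface form appears verbatim in both vocabularies give paired
# anchor points. BPE markers ("</w>", "Ġ") are not normalized here, which
# shrinks the overlap; a fuller implementation would strip them first.
shared = sorted(set(clip_tok.get_vocab()) & set(lm_tok.get_vocab()))

with torch.no_grad():
    # CLIP text features for each shared token, treated as a one-word "caption"
    # (batch this loop in practice if the overlap is large).
    clip_inputs = clip_tok(shared, padding=True, return_tensors="pt")
    X = clip.get_text_features(**clip_inputs)                 # (n, d_clip)
    # Corresponding rows of the LM's input-embedding matrix.
    lm_ids = torch.tensor([lm_tok.get_vocab()[t] for t in shared])
    Y = lm.get_input_embeddings().weight[lm_ids]              # (n, d_lm)

# Ordinary least squares: find W of shape (d_clip, d_lm) minimizing ||X W - Y||^2.
W = torch.linalg.lstsq(X, Y).solution

# At inference, a CLIP image embedding z (shape d_clip) mapped as z @ W lands in
# the LM's embedding space and can be fed to the LM via `inputs_embeds` as a
# soft prefix for caption generation.
```

Fitting the map this way touches neither model's weights and needs no gradients through either network, which is consistent with the abstract's claim that captioning can be unlocked for LMs without access to gradient information.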

research
03/06/2023
DeCap: Decoding CLIP Latents for Zero-Shot Captioning via Text-Only Training
Large-scale pre-trained multi-modal models (e.g., CLIP) demonstrate stro...

research
12/22/2016
Understanding Image and Text Simultaneously: a Dual Vision-Language Machine Comprehension Task
We introduce a new multi-modal task for computer systems, posed as a com...

research
02/16/2023
Retrieval-augmented Image Captioning
Inspired by retrieval-augmented language generation and pretrained Visio...

research
03/19/2023
Multi-modal reward for visual relationships-based image captioning
Deep neural networks have achieved promising results in automatic image ...

research
05/24/2023
Exploring Diverse In-Context Configurations for Image Captioning
After discovering that Language Models (LMs) can be good in-context few-...

research
08/16/2023
Pro-Cap: Leveraging a Frozen Vision-Language Model for Hateful Meme Detection
Hateful meme detection is a challenging multimodal task that requires co...

research
07/01/2023
ProbVLM: Probabilistic Adapter for Frozen Vision-Language Models
Large-scale vision-language models (VLMs) like CLIP successfully find co...
