Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning

06/03/2022
by Yujia Xie, et al.

People say, "A picture is worth a thousand words." How, then, can we extract that rich information from an image? We argue that by using visual clues to bridge large pretrained vision foundation models and language models, we can do so without any extra cross-modal training. Exploiting the strong zero-shot capability of foundation models, we first construct a rich semantic representation of the image (e.g., image tags, object attributes and locations, captions) as a structured textual prompt, called visual clues, using a vision foundation model. Based on the visual clues, we use a large language model to produce a series of comprehensive descriptions of the visual content, which are then verified by the vision model again to select the candidate that best aligns with the image. We evaluate the quality of the generated descriptions with quantitative and qualitative measures. The results demonstrate the effectiveness of such a structured semantic representation.
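The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the vision-model and language-model calls are stubbed out with hard-coded data, the clue format is an assumed layout, and the verification step is replaced by a simple term-overlap score standing in for image-text alignment.

```python
# Sketch of the visual-clues pipeline: (1) a vision foundation model emits
# tags, object attributes/locations, and a caption; (2) these are packed
# into a structured textual prompt ("visual clues"); (3) an LLM generates
# candidate paragraphs; (4) the vision model re-scores candidates and the
# best-aligned one is kept. Steps 1 and 3 are stubbed here.

def format_visual_clues(tags, objects, caption):
    """Assemble the structured textual prompt from zero-shot vision outputs.
    The exact layout here is an assumption for illustration."""
    obj_lines = "; ".join(
        f"{o['name']} ({o['attr']}) at {o['box']}" for o in objects
    )
    return (
        f"Tags: {', '.join(tags)}\n"
        f"Objects: {obj_lines}\n"
        f"Caption: {caption}"
    )

def select_best(candidates, reference_terms):
    """Stand-in for the verification step: score each candidate by how many
    visual terms it mentions and keep the best. (The paper instead uses the
    vision model's image-text alignment score.)"""
    def score(text):
        return sum(term in text.lower() for term in reference_terms)
    return max(candidates, key=score)

# Hypothetical vision-model outputs for a single image.
tags = ["dog", "park", "frisbee"]
objects = [{"name": "dog", "attr": "brown", "box": (40, 60, 200, 220)}]
prompt = format_visual_clues(tags, objects, "a dog playing in a park")

# An LLM would generate these from `prompt`; hard-coded for illustration.
candidates = [
    "A brown dog leaps to catch a frisbee in a sunny park.",
    "A cat sleeps on a sofa.",
]
best = select_best(candidates, tags)
print(best)
```

The key design point the sketch preserves is that all cross-modal communication happens through text: the vision model never shares embeddings with the LLM, which is why no extra cross-modal training is needed.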


