CapOnImage: Context-driven Dense-Captioning on Image

04/27/2022
by   Yiqi Gao, et al.

Existing image captioning systems are dedicated to generating narrative captions for images, which are spatially detached from the image in presentation. However, text can also be placed on the image itself as decoration, highlighting key points and increasing the image's attractiveness. In this work, we introduce a new task called captioning on image (CapOnImage), which aims to generate dense captions at different locations of the image based on contextual information. To fully exploit the surrounding visual context and generate the most suitable caption for each location, we propose a multi-modal pre-training model with multi-level pre-training tasks that progressively learn the correspondence between texts and image locations, from easy to difficult. Since the model may generate redundant captions for nearby locations, we further enhance the location embedding with neighboring locations as context. For this new task, we also introduce a large-scale benchmark called CapOnImage2M, which contains 2.1 million product images, each with an average of 4.8 spatially localized captions. Compared with other image captioning model variants, our model achieves the best results in both captioning accuracy and diversity. We will release the code and dataset to facilitate future research.
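The abstract mentions enhancing each location embedding with neighboring locations as context. Below is a minimal, hypothetical PyTorch sketch (not the authors' released code) of one way such a neighbor-aware location embedding could look: each candidate caption location is projected from its normalized box coordinates and then attends to the other locations on the same image. The class name, hidden size, and the use of self-attention as the neighborhood-fusion mechanism are assumptions for illustration only.

```python
# Hypothetical sketch of a neighbor-aware location embedding, assuming
# normalized bounding-box coordinates as input. Not the authors' implementation.
import torch
import torch.nn as nn


class NeighborAwareLocationEmbedding(nn.Module):
    """Embed a caption location and fuse it with nearby locations as context."""

    def __init__(self, hidden_dim: int = 512):
        super().__init__()
        # Project normalized box coordinates (x1, y1, x2, y2) to the model width.
        self.box_proj = nn.Linear(4, hidden_dim)
        # Let each location attend to all locations on the same image,
        # so the embedding absorbs its spatial neighborhood as context.
        self.neighbor_attn = nn.MultiheadAttention(
            hidden_dim, num_heads=8, batch_first=True
        )
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, boxes: torch.Tensor) -> torch.Tensor:
        # boxes: (batch, num_locations, 4), coordinates normalized to [0, 1]
        loc = self.box_proj(boxes)                  # (B, N, D)
        ctx, _ = self.neighbor_attn(loc, loc, loc)  # context from neighboring locations
        return self.norm(loc + ctx)                 # residual fusion of location and context


# Example: three candidate caption locations on one image
boxes = torch.rand(1, 3, 4)
embedder = NeighborAwareLocationEmbedding()
print(embedder(boxes).shape)  # torch.Size([1, 3, 512])
```

The intent of such a design, as described in the abstract, is to give the decoder enough awareness of what nearby locations look like that it avoids generating redundant captions for adjacent regions.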


