Beyond Generic: Enhancing Image Captioning with Real-World Knowledge using Vision-Language Pre-Training Model

08/02/2023
by Kanzhi Cheng, et al.

Current captioning approaches tend to generate correct but "generic" descriptions that lack real-world knowledge, e.g., named entities and contextual information. Considering that Vision-Language Pre-Training (VLP) models master massive amounts of such knowledge from large-scale web-harvested data, it is promising to utilize the generalizability of VLP models to incorporate knowledge into image descriptions. However, using VLP models faces challenges: zero-shot inference suffers from knowledge hallucination, which leads to low-quality descriptions, while the generic bias in downstream-task fine-tuning hinders the VLP model from expressing knowledge. To address these concerns, we propose a simple yet effective method called Knowledge-guided Replay (K-Replay), which enables the retention of pre-training knowledge during fine-tuning. Our approach consists of two parts: (1) a knowledge prediction task on automatically collected replay exemplars to continuously awaken the VLP model's memory of knowledge, thus preventing the model from collapsing into the generic pattern; (2) a knowledge distillation constraint to improve the faithfulness of generated descriptions, thus alleviating knowledge hallucination. To evaluate knowledge-enhanced descriptions, we construct a novel captioning benchmark, KnowCap, containing knowledge of landmarks, famous brands, special foods, and movie characters. Experimental results show that our approach effectively incorporates knowledge into descriptions, outperforming a strong VLP baseline by 20.9 points (78.7 -> 99.6) in CIDEr score and 20.5 percentage points (34.0% -> 54.5%) in knowledge recognition accuracy. Our code and data are available at https://github.com/njucckevin/KnowCap.
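The abstract describes a training objective with three parts: the usual captioning cross-entropy on the downstream data, a knowledge prediction term on replay exemplars, and a distillation constraint against the frozen pre-trained model. The sketch below illustrates how such a combined objective could look; it is a minimal illustration, not the paper's code. The function name k_replay_loss, the loss weights alpha and beta, and the max-over-positions form of the knowledge prediction term are all assumptions made for illustration.

# Illustrative sketch of a K-Replay-style combined objective.
# All names, loss forms, and weights here are hypothetical; see the
# paper's repository (https://github.com/njucckevin/KnowCap) for the
# actual implementation.
import torch
import torch.nn.functional as F

def k_replay_loss(
    caption_logits,   # (B, T, V): fine-tuned model's logits on caption data
    caption_labels,   # (B, T): gold caption token ids
    replay_logits,    # (R, T, V): model's logits on replay exemplars
    knowledge_ids,    # (R, K): token ids of knowledge keywords per exemplar
    teacher_logits,   # (R, T, V): frozen pre-trained model's logits
    alpha=1.0,        # weight of the knowledge prediction term (assumed)
    beta=0.5,         # weight of the distillation constraint (assumed)
):
    # (1) Standard captioning cross-entropy on the downstream data.
    ce = F.cross_entropy(caption_logits.flatten(0, 1), caption_labels.flatten())

    # (2) Knowledge prediction on replay exemplars: push probability mass
    # onto the knowledge keywords (e.g., entity names), here via each
    # keyword's maximum token probability over the sequence.
    probs = replay_logits.softmax(dim=-1)                       # (R, T, V)
    idx = knowledge_ids.unsqueeze(1).expand(-1, probs.size(1), -1)
    kw_probs = probs.gather(-1, idx)                            # (R, T, K)
    knowledge = -kw_probs.max(dim=1).values.clamp_min(1e-8).log().mean()

    # (3) Distillation toward the frozen pre-trained model, to keep
    # generations faithful and curb knowledge hallucination.
    kd = F.kl_div(
        replay_logits.log_softmax(-1),
        teacher_logits.softmax(-1),
        reduction="batchmean",
    )
    return ce + alpha * knowledge + beta * kd

# Toy shapes, just to show the call signature.
B, R, T, V, K = 4, 2, 12, 1000, 3
loss = k_replay_loss(
    torch.randn(B, T, V), torch.randint(0, V, (B, T)),
    torch.randn(R, T, V), torch.randint(0, V, (R, K)),
    torch.randn(R, T, V),
)
print(float(loss))

In this sketch the replay exemplars serve double duty: the knowledge term keeps pre-training knowledge active during fine-tuning, while the distillation term anchors the student's distribution to the pre-trained teacher so that expressed knowledge stays faithful rather than hallucinated.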


