Unpaired Image Captioning by Language Pivoting

by Jiuxiang Gu et al.

Image captioning is a multimodal task involving computer vision and natural language processing, where the goal is to learn a mapping from an image to its natural language description. In general, the mapping function is learned from a training set of image-caption pairs. However, for some languages, a large-scale paired image-caption corpus may not be available. We present an approach to this unpaired image captioning problem by language pivoting. Our method effectively captures the characteristics of an image captioner from the pivot language (Chinese) and aligns it to the target language (English) using a pivot-target (Chinese-English) parallel corpus. We evaluate our method on two image-to-English benchmark datasets: MSCOCO and Flickr30K. Quantitative comparisons against several baseline approaches demonstrate the effectiveness of our method.
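At a high level, captioning by language pivoting composes two learned mappings: an image-to-pivot-language captioner and a pivot-to-target translator. The following is a minimal sketch of that composition; the function names and the toy stand-in models are hypothetical, purely for illustration, and not the paper's actual architecture.

```python
def pivot_caption(image, captioner_zh, translator_zh_en):
    """Caption an image in the target language by pivoting.

    captioner_zh:     maps an image to a caption in the pivot language (Chinese)
    translator_zh_en: maps a Chinese sentence to an English sentence
    """
    zh_caption = captioner_zh(image)        # stage 1: image -> pivot caption
    return translator_zh_en(zh_caption)     # stage 2: pivot -> target caption

# Toy stand-ins for the two learned models (illustration only).
toy_captioner = lambda img: "一只猫坐在垫子上"
toy_translator = {"一只猫坐在垫子上": "a cat sitting on a mat"}.get

print(pivot_caption(None, toy_captioner, toy_translator))
# → a cat sitting on a mat
```

In practice, both stages are trained neural models, and the core difficulty the paper addresses is that the captioner's output distribution differs from the sentences the translator was trained on, so the two stages must be aligned rather than naively chained.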

