
STAIR Captions: Constructing a Large-Scale Japanese Image Caption Dataset

by Yuya Yoshikawa, et al.

In recent years, the automatic generation of image descriptions (captions), i.e., image captioning, has attracted a great deal of attention. In this paper, we consider generating Japanese captions for images. Since most available caption datasets have been constructed for the English language, few datasets exist for Japanese. To address this problem, we construct STAIR Captions, a large-scale Japanese image caption dataset based on images from MS-COCO. STAIR Captions consists of 820,310 Japanese captions for 164,062 images. In experiments, we show that a neural network trained on STAIR Captions generates more natural and higher-quality Japanese captions than those obtained by first generating English captions and then applying English-to-Japanese machine translation.
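Since the dataset is built on MS-COCO images, a natural assumption is that its annotations follow the MS-COCO caption JSON schema (an `images` list and an `annotations` list keyed by `image_id`). The following sketch, using a hypothetical miniature annotation file, shows how captions would be grouped per image under that assumption:

```python
import json
from collections import defaultdict

# Hypothetical miniature of an MS-COCO-style caption annotation file;
# STAIR Captions is assumed here to follow the same schema.
annotations_json = json.dumps({
    "images": [{"id": 1}, {"id": 2}],
    "annotations": [
        {"id": 10, "image_id": 1, "caption": "猫がソファの上で寝ている。"},
        {"id": 11, "image_id": 1, "caption": "茶色い猫が昼寝をしている。"},
        {"id": 12, "image_id": 2, "caption": "男性が自転車に乗っている。"},
    ],
})

def captions_by_image(raw):
    """Group caption strings by their image id."""
    data = json.loads(raw)
    grouped = defaultdict(list)
    for ann in data["annotations"]:
        grouped[ann["image_id"]].append(ann["caption"])
    return dict(grouped)

grouped = captions_by_image(annotations_json)
print(len(grouped))     # number of captioned images
print(len(grouped[1]))  # number of captions for image 1
```

With the full dataset, the same grouping would yield roughly five captions per image (820,310 captions over 164,062 images).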

