Microsoft COCO Captions: Data Collection and Evaluation Server

04/01/2015
by Xinlei Chen, et al.

In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human-generated captions will be provided for each image. To ensure consistency in the evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
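The same metrics can be computed locally before submitting to the evaluation server. The following is a minimal sketch, assuming the publicly available coco-caption toolkit (the pycocotools and pycocoevalcap Python packages) is installed; the annotation and result file paths below are placeholders, and candidate captions are expected as a JSON list of {"image_id", "caption"} records.

# Minimal sketch: score candidate captions against COCO reference captions.
# Assumes pycocotools + pycocoevalcap are installed; file paths are placeholders.
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

# Reference captions (e.g. captions_val2014.json) and candidate results,
# formatted as [{"image_id": 42, "caption": "a cat sitting on a mat"}, ...]
coco = COCO("annotations/captions_val2014.json")
coco_res = coco.loadRes("results/candidate_captions.json")

coco_eval = COCOEvalCap(coco, coco_res)
# Evaluate only the images for which candidate captions were supplied.
coco_eval.params["image_id"] = coco_res.getImgIds()
coco_eval.evaluate()

# Corpus-level scores for BLEU-1..4, METEOR, ROUGE_L, and CIDEr.
for metric, score in coco_eval.eval.items():
    print(f"{metric}: {score:.3f}")

The resulting scores cover the same metrics listed above, so this serves as a sanity check on the result-file format and scores before uploading to the server.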

