Microsoft COCO Captions: Data Collection and Evaluation Server

04/01/2015
by Xinlei Chen, et al.

In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human-generated captions will be provided for each image. To ensure consistency in the evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
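To illustrate the kind of scoring the evaluation server performs, here is a minimal, self-contained sketch of a BLEU-1-style metric (clipped unigram precision with a brevity penalty). This is only an illustration of the core idea; the actual server uses the full BLEU-1..4, METEOR, ROUGE and CIDEr implementations, and the function name `bleu1` and its exact tie-breaking choices are assumptions of this sketch.

```python
import math
from collections import Counter

def bleu1(candidate, references):
    """Illustrative BLEU-1: clipped unigram precision times a brevity penalty.

    candidate: str, the generated caption.
    references: list of str, human reference captions.
    """
    cand = candidate.lower().split()
    refs = [r.lower().split() for r in references]

    # Clip each candidate word count by its maximum count in any reference,
    # so repeating a word cannot inflate precision.
    max_ref = Counter()
    for ref in refs:
        for w, c in Counter(ref).items():
            max_ref[w] = max(max_ref[w], c)
    clipped = sum(min(c, max_ref[w]) for w, c in Counter(cand).items())
    precision = clipped / max(len(cand), 1)

    # Brevity penalty against the reference length closest to the candidate's.
    ref_len = min((len(r) for r in refs),
                  key=lambda rl: (abs(rl - len(cand)), rl))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * precision
```

For example, `bleu1("a man riding a horse", ["a man is riding a horse"])` has perfect clipped unigram precision but is one word shorter than the reference, so the brevity penalty reduces its score below 1.0.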
