Technical Report: Image Captioning with Semantically Similar Images

06/12/2015
by Martin Kolář, et al.

This report presents our submission to the MS COCO Captioning Challenge 2015. The method uses Convolutional Neural Network activations as an embedding to retrieve semantically similar images, and from their captions selects the most typical one based on unigram frequencies. Although the method scores low on automated evaluation metrics and on human-assessed average correctness, it is competitive in the fraction of captions that pass the Turing test and that are judged better than or equal to human captions.
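
The abstract describes the pipeline only at a high level, so the sketch below is one plausible reading of it, not the authors' implementation: CNN activations serve as the image embedding, the k nearest training images are retrieved by cosine similarity, and the caption whose unigrams are most frequent in the pooled candidate set is returned. The function name, the value of k, cosine similarity as the distance, and the mean-frequency typicality score are all assumptions.

```python
import numpy as np
from collections import Counter

def most_typical_caption(query_feat, train_feats, train_captions, k=5):
    """Retrieve the most typical caption for a query image.

    query_feat:     (D,) CNN activation vector for the query image.
    train_feats:    (N, D) activation vectors for the training images.
    train_captions: list of N lists of caption strings.
    """
    # Cosine similarity between the query image and every training image.
    q = query_feat / np.linalg.norm(query_feat)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    neighbours = np.argsort(-(t @ q))[:k]

    # Pool the captions attached to the k nearest training images.
    candidates = [c for i in neighbours for c in train_captions[i]]

    # Unigram frequencies over the pooled candidate captions.
    freqs = Counter(w for c in candidates for w in c.lower().split())

    # "Most typical" caption: highest mean unigram frequency,
    # length-normalised so longer captions are not favoured.
    def typicality(caption):
        words = caption.lower().split()
        return sum(freqs[w] for w in words) / max(len(words), 1)

    return max(candidates, key=typicality)
```

A summed log-frequency or tf-idf weighting would be an equally plausible typicality score; the abstract does not pin this choice down.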

Related research

- Intrinsic Image Captioning Evaluation (12/14/2020)
- Improving Image Captioning Descriptiveness by Ranking and LLM-based Fusion (06/20/2023)
- Improved Image Captioning via Policy Gradient optimization of SPIDEr (12/01/2016)
- Contrastive Semantic Similarity Learning for Image Captioning Evaluation with Intrinsic Auto-encoder (06/29/2021)
- Towards Unique and Informative Captioning of Images (09/08/2020)
- CAPTION: Correction by Analyses, POS-Tagging and Interpretation of Objects using only Nouns (10/02/2020)
- Attention Correctness in Neural Image Captioning (05/31/2016)
