CAPTION: Correction by Analyses, POS-Tagging and Interpretation of Objects using only Nouns

Recently, Deep Learning (DL) methods have shown excellent performance in image captioning and visual question answering. However, despite this performance, DL methods do not learn the semantics of the words used to describe a scene, making it difficult to spot incorrect words in captions or to interchange words with similar meanings. This work proposes a combination of DL methods for object detection and natural language processing to validate image captions. We test our method on the FOIL-COCO dataset, since it provides correct and incorrect captions for various images using only objects represented in the MS-COCO image dataset. Results show that our method achieves good overall performance, in some cases comparable to human performance.
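The core idea named in the title (POS-tag the caption, keep only the nouns, and check them against detected objects) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the toy noun lexicon, function names, and example inputs below are assumptions, and a real system would use a trained POS tagger and an object detector.

```python
# Sketch of noun-based caption validation (hypothetical, not the CAPTION system):
# POS-tag the caption, keep the nouns, and flag any noun that no detected
# object supports. A toy lexicon stands in for a real POS tagger here.

TOY_NOUN_LEXICON = {"dog", "cat", "frisbee", "man", "woman", "bicycle", "park"}

def extract_nouns(caption):
    """Return the caption tokens our toy lexicon marks as nouns."""
    tokens = caption.lower().replace(".", "").split()
    return {t for t in tokens if t in TOY_NOUN_LEXICON}

def find_foil_nouns(caption, detected_objects):
    """Return caption nouns with no matching detected object (the 'foil' words)."""
    return extract_nouns(caption) - set(detected_objects)

# A caption is flagged as incorrect when it mentions an object
# the detector did not find in the image.
foils = find_foil_nouns("A dog catches a frisbee in the park.",
                        ["dog", "park"])
print(sorted(foils))  # → ['frisbee']
```

In FOIL-COCO terms, a caption with an empty foil set would be classified as correct, while a non-empty set both flags the caption and localizes the offending word.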

