Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training

03/30/2017
by Rakshith Shetty, et al.

While strong progress has been made in image captioning in recent years, machine and human captions remain quite distinct. A closer look reveals that this is due to deficiencies in the generated word distribution, limited vocabulary size, and a strong bias in the generators towards frequent captions. Furthermore, humans -- rightfully so -- produce multiple, diverse captions, owing to the inherent ambiguity of the captioning task, which today's systems do not account for. To address these challenges, we change the training objective of the caption generator from reproducing ground-truth captions to generating a set of captions that is indistinguishable from human-generated captions. Instead of handcrafting such a learning target, we employ adversarial training in combination with an approximate Gumbel sampler to implicitly match the generated distribution to the human one. While our method achieves performance comparable to the state of the art in terms of caption correctness, we generate a set of diverse captions that are significantly less biased and match human word statistics better in several respects.
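The approximate Gumbel sampler mentioned above is commonly realized as the Gumbel-softmax relaxation: adding Gumbel noise to the word logits and applying a temperature-scaled softmax yields an approximately one-hot, differentiable sample, so discriminator gradients can flow back into the generator. A minimal sketch of that sampling step (the function name and toy vocabulary are illustrative, not taken from the paper's code):

```python
import numpy as np

def gumbel_softmax(logits, temperature=0.5, rng=None):
    """Draw a differentiable, approximately one-hot sample from the
    categorical distribution defined by `logits` (Gumbel-softmax trick)."""
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise: g = -log(-log(U)), with U ~ Uniform(0, 1)
    u = rng.uniform(low=1e-10, high=1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))
    # Temperature-scaled softmax over the perturbed logits;
    # as temperature -> 0 the output approaches a one-hot sample.
    z = (logits + gumbel) / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Example: logits over a toy 4-word vocabulary
logits = np.array([2.0, 0.5, 0.1, -1.0])
sample = gumbel_softmax(logits, temperature=0.5)
```

At low temperatures the returned vector concentrates most of its mass on one word, approximating discrete sampling while remaining differentiable with respect to the logits.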


Related research

- 08/08/2019: Towards Generating Stylized Image Captions via Adversarial Training
- 04/03/2018: Generating Diverse and Accurate Visual Captions by Comparative Adversarial Learning
- 12/05/2022: Towards Generating Diverse Audio Captions via Adversarial Training
- 08/28/2021: Goal-driven text descriptions for images
- 10/13/2021: Diverse Audio Captioning via Adversarial Training
- 10/20/2022: Communication breakdown: On the low mutual intelligibility between human and neural captioning
- 05/16/2018: Defoiling Foiled Image Captions
