Self-critical Sequence Training for Image Captioning

12/02/2016
by Steven J. Rennie, et al.

Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems on the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a "baseline" to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. With this approach, the need to estimate the reward signal (as actor-critic methods must) and the need to estimate normalization (as REINFORCE algorithms typically do) are both avoided, while the model is harmonized with its test-time inference procedure. Empirically, we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test time is highly effective. Our results on the MSCOCO evaluation server establish a new state of the art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.
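The self-critical baseline described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the function name, array shapes, and the assumption of per-sequence scalar rewards (e.g. CIDEr scores) are all illustrative.

```python
import numpy as np

def scst_loss(sample_logprobs, sample_reward, greedy_reward):
    """Self-critical policy-gradient loss for one sampled caption.

    sample_logprobs: per-token log-probabilities of the sampled caption
    sample_reward:   metric score (e.g. CIDEr) of the sampled caption
    greedy_reward:   metric score of the caption produced by the model's
                     own test-time (greedy) decoding, used as the baseline
                     in place of a learned critic or estimated average reward
    """
    # Advantage relative to the model's own greedy output: samples that
    # beat the greedy caption are reinforced, samples that do worse are
    # suppressed.
    advantage = sample_reward - greedy_reward
    # REINFORCE objective: minimize -advantage * log p(sampled sequence).
    return -advantage * np.sum(sample_logprobs)
```

Because the baseline is the reward of the test-time decoding itself, gradient updates directly push sampled captions to outscore what the model would actually emit at test time, which is the "harmonization" the abstract refers to.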


Related research

06/29/2017 · Actor-Critic Sequence Training for Image Captioning
Generating natural language descriptions of images is an important capab...

04/15/2019 · Self-critical n-step Training for Image Captioning
Existing methods for image captioning are usually trained by cross entro...

04/06/2020 · B-SCST: Bayesian Self-Critical Sequence Training for Image Captioning
Bayesian deep neural networks (DNN) provide a mathematically grounded fr...

09/11/2017 · Stack-Captioning: Coarse-to-Fine Learning for Image Captioning
The existing image captioning approaches typically train a one-stage sen...

03/22/2020 · A Better Variant of Self-Critical Sequence Training
In this work, we present a simple yet better variant of Self-Critical Se...

09/09/2019 · Transfer Reward Learning for Policy Gradient-Based Text Generation
Task-specific scores are often used to optimize for and evaluate the per...

02/28/2019 · Evaluating Rewards for Question Generation Models
Recent approaches to question generation have used modifications to a Se...
