Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

02/10/2015
by Kelvin Xu, et al.

Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
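As a concrete illustration of the deterministic ("soft") attention the abstract describes, here is a minimal PyTorch sketch: at each decoding step the model scores every CNN annotation vector against the decoder's hidden state, normalizes the scores with a softmax, and takes the weighted average as the context vector. The class name, variable names, and the dimensions in the usage snippet are illustrative assumptions, not the authors' released code; the stochastic ("hard") variant would instead sample a single location from the weights and train by maximizing a variational lower bound.

```python
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    """Soft (deterministic) attention: a convex combination of annotation
    vectors. Fully differentiable, so it trains with standard backprop.
    Illustrative sketch only -- not the authors' implementation."""

    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)      # projects each annotation a_i
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)  # projects decoder state h_{t-1}
        self.score = nn.Linear(attn_dim, 1)                 # scalar relevance score per location

    def forward(self, feats, hidden):
        # feats:  (batch, L, feat_dim)  -- L annotation vectors from a CNN grid
        # hidden: (batch, hidden_dim)   -- previous decoder (LSTM) hidden state
        e = self.score(torch.tanh(
            self.feat_proj(feats) + self.hidden_proj(hidden).unsqueeze(1)
        )).squeeze(-1)                          # (batch, L) unnormalized scores
        alpha = torch.softmax(e, dim=1)         # attention weights, sum to 1 over locations
        context = (alpha.unsqueeze(-1) * feats).sum(dim=1)  # (batch, feat_dim) context vector
        return context, alpha

# Hypothetical dimensions: a 14x14 grid of 512-d features, a 1000-d LSTM state.
feats = torch.randn(2, 196, 512)
hidden = torch.randn(2, 1000)
ctx, alpha = SoftAttention(512, 1000, 256)(feats, hidden)  # ctx: (2, 512); alpha: (2, 196)
```

The weights alpha are what the visualizations mentioned in the abstract overlay on the image: they show where the model "fixes its gaze" while emitting each word.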

Related research

06/08/2017 · Image Captioning with Object Detection and Localization
Automatically generating a natural language description of an image is a...

02/19/2019 · Augmentation for small object detection
In recent years, object detection has experienced impressive progress. D...

12/05/2018 · Visual Attention for Behavioral Cloning in Autonomous Driving
The goal of our work is to use visual attention to enhance autonomous dr...

04/02/2021 · Attention Forcing for Machine Translation
Auto-regressive sequence-to-sequence models with attention mechanisms ha...

06/09/2021 · Bayesian Attention Belief Networks
Attention-based neural networks have achieved state-of-the-art results o...

05/06/2022 · Multitask AET with Orthogonal Tangent Regularity for Dark Object Detection
Dark environment becomes a challenge for computer vision algorithms owin...
