Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

02/10/2015 · by Kelvin Xu, et al.

Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques, and stochastically by maximizing a variational lower bound. We also show through visualization how the model learns to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
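The deterministic ("soft") variant mentioned in the abstract computes an expected context vector as a weighted sum of image-region features, which keeps the whole model differentiable and trainable with standard backpropagation. The sketch below illustrates that idea with NumPy; the parameter names (`W_a`, `W_h`, `v`) and the additive scoring function are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def soft_attention(annotations, hidden, W_a, W_h, v):
    """Illustrative soft attention over image regions.

    annotations: (L, D) array of L region feature vectors
    hidden:      (H,) decoder hidden state
    W_a, W_h, v: learned projection parameters (hypothetical shapes:
                 (D, K), (H, K), (K,))
    Returns attention weights alpha (L,) and context vector (D,).
    """
    # Alignment scores e_i = v^T tanh(W_a a_i + W_h h)
    scores = np.tanh(annotations @ W_a + hidden @ W_h) @ v  # (L,)
    # Softmax over regions gives the attention distribution alpha
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    # Soft attention: expected context is the alpha-weighted sum of regions
    context = alpha @ annotations  # (D,)
    return alpha, context
```

Because `alpha` is a proper distribution over regions, the weights can be visualized as the heat maps the abstract refers to; the stochastic ("hard") variant would instead sample one region from `alpha` and train via a variational lower bound.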



Code Repositories

show_attend_and_tell.tensorflow

action-recognition-visual-attention: Action recognition using soft attention based deep recurrent neural networks

nmt-keras: Neural Machine Translation with Keras (Theano/Tensorflow)