Supervising Neural Attention Models for Video Captioning by Human Gaze Data

07/19/2017
by Youngjae Yu, et al.

The attention mechanisms in deep neural networks are inspired by human attention, which sequentially focuses on the most relevant parts of the information over time to generate prediction outputs. The attention parameters in those models are implicitly trained in an end-to-end manner, yet there have been few attempts to explicitly incorporate human gaze tracking as supervision for the attention models. In this paper, we investigate whether attention models can benefit from explicit human gaze labels, especially for the task of video captioning. We collect a new dataset called VAS, consisting of movie clips and multiple corresponding descriptive sentences, along with human gaze tracking data. We propose a video captioning model named Gaze Encoding Attention Network (GEAN) that can leverage gaze tracking information to provide spatial and temporal attention for sentence generation. Through evaluation with language similarity metrics and human assessment via Amazon Mechanical Turk, we demonstrate that spatial attention guided by human gaze data indeed improves the performance of multiple captioning methods. Moreover, we show that the proposed approach achieves state-of-the-art performance for both gaze prediction and video captioning, not only on our VAS dataset but also on standard datasets (e.g., LSMDC and Hollywood2).
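The abstract gives no implementation details, but the core idea, using human gaze maps to guide a soft spatial attention layer, can be sketched. The following is a minimal illustration and not the authors' GEAN implementation: the module, the KL-divergence supervision term, and all names (GazeSupervisedAttention, gaze_supervision_loss, feat_dim, etc.) are assumptions made for exposition.

```python
# Minimal sketch (assumed, not the authors' GEAN code): a soft spatial
# attention layer whose weights can be trained against human gaze maps
# in addition to the captioning loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GazeSupervisedAttention(nn.Module):
    """Soft attention over CNN region features, conditioned on the
    decoder state, producing an attention map that can be compared
    to a human fixation density map."""
    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, hidden_dim)
        self.state_proj = nn.Linear(hidden_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, feats, state):
        # feats: (B, R, feat_dim) region features from a CNN grid
        # state: (B, hidden_dim) decoder RNN hidden state
        e = self.score(torch.tanh(self.feat_proj(feats)
                                  + self.state_proj(state).unsqueeze(1)))
        alpha = F.softmax(e.squeeze(-1), dim=1)          # (B, R) attention map
        context = (alpha.unsqueeze(-1) * feats).sum(1)   # (B, feat_dim)
        return context, alpha

def gaze_supervision_loss(alpha, gaze_map):
    # gaze_map: (B, R) human fixation density, normalized to sum to 1.
    # KL divergence pulls the model's attention toward human gaze.
    return F.kl_div(torch.log(alpha + 1e-8), gaze_map, reduction="batchmean")
```

In training, such a term would be combined with the usual caption cross-entropy, e.g. `loss = caption_nll + lam * gaze_supervision_loss(alpha, gaze_map)`, with `lam` trading off gaze fidelity against caption likelihood. Note this is one common way to use gaze labels; the abstract leaves open whether GEAN supervises attention with an auxiliary loss or feeds predicted gaze into the attention directly.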


Related research

08/18/2016 · Seeing with Humans: Gaze-Assisted Neural Image Captioning
Gaze reflects how humans process visual scenes and is therefore increasi...

11/09/2020 · Generating Image Descriptions via Sequential Cross-Modal Alignment Guided by Human Gaze
When speakers describe an image, they tend to look at objects before men...

06/01/2018 · AGIL: Learning Attention from Human for Visuomotor Tasks
When intelligent agents learn visuomotor behaviors from human demonstrat...

03/24/2018 · Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition
We present a new computational model for gaze prediction in egocentric v...

01/01/2019 · Not All Words are Equal: Video-specific Information Loss for Video Captioning
An ideal description for a given video should fix its gaze on salient an...

07/04/2022 · GazBy: Gaze-Based BERT Model to Incorporate Human Attention in Neural Information Retrieval
This paper is interested in investigating whether human gaze signals can...

03/27/2023 · Gazeformer: Scalable, Effective and Fast Prediction of Goal-Directed Human Attention
Predicting human gaze is important in Human-Computer Interaction (HCI). ...
