Connecting What to Say With Where to Look by Modeling Human Attention Traces

05/12/2021
by Zihang Meng, et al.

We introduce a unified framework to jointly model images, text, and human attention traces. Our work builds on the recent Localized Narratives annotation framework [30], in which each word of a given caption is paired with a mouse trace segment. We propose two novel tasks: (1) predict a trace given an image and caption (i.e., visual grounding), and (2) predict a caption and a trace given only an image. Learning the grounding of each word is challenging due to noise in the human-provided traces and the presence of words that cannot be meaningfully visually grounded. We present a novel model architecture that is jointly trained on dual tasks (controlled trace generation and controlled caption generation). To evaluate the quality of the generated traces, we propose a local bipartite matching (LBM) distance metric that allows the comparison of two traces of different lengths. Extensive experiments show that our model is robust to imperfect training data and outperforms the baselines by a clear margin. Moreover, we demonstrate that our model, pre-trained on the proposed tasks, is also beneficial to the downstream task of guided image captioning on COCO. Our code and project page are publicly available.
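The abstract names the LBM metric but does not spell out its computation here. As a rough illustration of how a local bipartite matching distance between two traces of different lengths could work, the sketch below splits both traces into aligned time windows and solves an optimal point matching within each window with SciPy's Hungarian solver. The window count (`num_segments`), the equal-split segmentation, and the averaging scheme are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def lbm_distance(trace_a, trace_b, num_segments=8):
    """Sketch of a local bipartite matching (LBM) distance.

    trace_a, trace_b: (N, 2) arrays of normalized (x, y) trace points,
    possibly of different lengths N. Each trace is split into
    `num_segments` equal time windows; within each aligned window pair
    we solve an optimal bipartite matching between points and average
    the matched L2 distances. All of these choices are assumptions
    made for illustration.
    """
    def split(trace):
        return np.array_split(np.asarray(trace, dtype=float), num_segments)

    total, matched = 0.0, 0
    for seg_a, seg_b in zip(split(trace_a), split(trace_b)):
        if len(seg_a) == 0 or len(seg_b) == 0:
            continue  # an empty window contributes nothing
        # Pairwise L2 costs between all points of the two local windows.
        cost = np.linalg.norm(seg_a[:, None, :] - seg_b[None, :, :], axis=-1)
        # Hungarian algorithm; rectangular costs yield a partial matching.
        rows, cols = linear_sum_assignment(cost)
        total += cost[rows, cols].mean()
        matched += 1
    return total / max(matched, 1)


# Example: compare two random traces of different lengths.
rng = np.random.default_rng(0)
trace_a = rng.random((120, 2))
trace_b = rng.random((95, 2))
print(lbm_distance(trace_a, trace_b))
```

Because the matching is restricted to local time windows, the metric rewards traces that visit the right regions at roughly the right time, while still tolerating differing sampling rates and lengths.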

