Contrastive Language-Image Pretrained Models are Zero-Shot Human Scanpath Predictors

05/21/2023
by Dario Zanca, et al.

Understanding the mechanisms underlying human attention is a fundamental challenge for both vision science and artificial intelligence. While numerous computational models of free-viewing have been proposed, less is known about the mechanisms underlying task-driven image exploration. To address this gap, we present CapMIT1003, a database of captions and click-contingent image explorations collected during captioning tasks. CapMIT1003 is based on the same stimuli as the well-known MIT1003 benchmark, for which eye-tracking data under free-viewing conditions is available, offering a promising opportunity to study human attention under both tasks concurrently. We make this dataset publicly available to facilitate future research in this field. In addition, we introduce NevaClip, a novel zero-shot method for predicting visual scanpaths that combines contrastive language-image pretrained (CLIP) models with biologically inspired neural visual attention (NeVA) algorithms. NevaClip simulates human scanpaths by aligning the representation of the foveated visual stimulus with the representation of the associated caption, employing gradient-driven visual exploration to generate scanpaths. Our experimental results demonstrate that NevaClip outperforms existing unsupervised computational models of human visual attention in terms of scanpath plausibility, for both captioning and free-viewing tasks. Furthermore, we show that conditioning NevaClip with incorrect or misleading captions leads to random behavior, highlighting the significant impact of caption guidance on the decision-making process. These findings contribute to a better understanding of the mechanisms that guide human attention and pave the way for more sophisticated computational approaches to scanpath prediction that integrate direct top-down guidance from downstream tasks.
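To make the alignment mechanism concrete, below is a minimal sketch of the idea, not the authors' released implementation: fixation coordinates are optimized by gradient descent so that the CLIP embedding of a foveated image moves closer to the embedding of the caption. The Gaussian-mask foveation, average-pooling blur, Adam optimizer, hyperparameters, and the helper names foveate and next_fixation are illustrative assumptions.

import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP package: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
model = model.float()  # keep weights in fp32 so gradients flow cleanly

def foveate(image, fixation, sigma=0.1):
    # Blend the sharp image with a blurred copy through a Gaussian mask centered
    # at the (normalized) fixation point -- a crude stand-in for a foveation model.
    _, _, h, w = image.shape
    ys = torch.linspace(0, 1, h, device=image.device).view(1, 1, h, 1)
    xs = torch.linspace(0, 1, w, device=image.device).view(1, 1, 1, w)
    mask = torch.exp(-((ys - fixation[1]) ** 2 + (xs - fixation[0]) ** 2) / (2 * sigma ** 2))
    blurred = F.avg_pool2d(image, kernel_size=9, stride=1, padding=4)  # peripheral blur
    return mask * image + (1 - mask) * blurred

def next_fixation(image, caption, start=(0.5, 0.5), steps=20, lr=0.05):
    # Gradient-driven exploration: move the fixation so the CLIP embedding of the
    # foveated stimulus aligns with the CLIP embedding of the caption.
    with torch.no_grad():
        text_emb = F.normalize(model.encode_text(clip.tokenize([caption]).to(device)), dim=-1)
    fixation = torch.tensor(start, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([fixation], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        img_emb = F.normalize(model.encode_image(foveate(image, fixation)), dim=-1)
        loss = 1 - (img_emb * text_emb).sum()  # cosine distance to the caption embedding
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            fixation.clamp_(0.0, 1.0)
    return fixation.detach().cpu().tolist()

# Usage (illustrative): `image` is a preprocessed 1x3x224x224 tensor, e.g.
#   image = preprocess(Image.open("stimulus.jpg")).unsqueeze(0).to(device)
#   x, y = next_fixation(image, "a dog playing with a ball in a park")

A full scanpath would be produced by repeating this search from each new fixation; the paper additionally shows that swapping in an incorrect caption removes the guidance signal and yields near-random exploration.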


