Evidence of Human-Like Visual-Linguistic Integration in Multimodal Large Language Models During Predictive Language Processing

08/11/2023
by Viktor Kewenig, et al.

The advanced language processing abilities of large language models (LLMs) have stimulated debate over their capacity to replicate human-like cognitive processes. One differentiating factor between language processing in LLMs and humans is that human language input is typically grounded in more than one perceptual modality, whereas most LLMs process solely text-based information. Multimodal grounding allows humans to integrate, for example, visual context with linguistic information, thereby constraining the space of upcoming words, reducing cognitive load, and improving perception and comprehension. Recent multimodal LLMs (mLLMs) combine visual and linguistic embedding spaces with a transformer-style attention mechanism for next-word prediction. To what extent does predictive language processing based on multimodal input align between mLLMs and humans? To answer this question, 200 human participants watched short audio-visual clips and estimated the predictability of an upcoming verb or noun. The same clips were processed by the mLLM CLIP, with predictability scores derived from a comparison of image and text feature vectors. Eye-tracking was used to estimate which visual features participants attended to, and CLIP's visual attention weights were recorded. We find that human predictability estimates align significantly with CLIP scores, but not with those of a unimodal LLM of comparable parameter size. Further, alignment vanished when CLIP's visual attention weights were perturbed, and when the same input was fed to a multimodal model without an attention mechanism. Analysing attention patterns, we find a significant spatial overlap between CLIP's visual attention weights and human eye-tracking data. These results suggest that comparable processes of integrating multimodal information, guided by attention to relevant visual features, support predictive language processing in mLLMs and humans.
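To make the scoring procedure concrete, the sketch below shows one way an image-text similarity score of the kind described above could be computed with a publicly available CLIP checkpoint. It is not the authors' code: the checkpoint name, the example frame path, and the candidate sentence continuations are illustrative assumptions, and the paper's actual predictability measure may differ in detail.

```python
# Minimal sketch: use CLIP image-text similarity as a proxy for the
# predictability of a candidate continuation, given a video frame.
# All file names and candidate sentences below are hypothetical.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# A representative frame extracted from the audio-visual clip (assumed path).
frame = Image.open("clip_frame.png")

# Hypothetical sentence continuations: a plausible target word vs. an implausible one.
candidates = [
    "She picked up the knife",
    "She picked up the cloud",
]

inputs = processor(text=candidates, images=frame, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the scaled cosine similarities between the frame
# and each candidate text; a softmax turns them into relative scores.
scores = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for text, score in zip(candidates, scores):
    print(f"{score:.3f}  {text}")
```

Under this reading, a higher image-text similarity for a candidate word would correspond to higher multimodal predictability, which is then compared against the human estimates.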

Related research

Multimodal Grounding for Language Processing (06/17/2018)
This survey discusses how recent developments in multimodal processing f...

Multimodal Shannon Game with Images (03/20/2023)
The Shannon game has long been used as a thought experiment in linguisti...

Are words equally surprising in audio and audio-visual comprehension? (07/14/2023)
We report a controlled study investigating the effect of visual informat...

Exploring the Grounding Issues in Image Caption (05/24/2023)
This paper explores the grounding issue concerning multimodal semantic r...

Human-like general language processing (05/19/2020)
Using language makes human beings surpass animals in wisdom. To let mach...

Fixating on Attention: Integrating Human Eye Tracking into Vision Transformers (08/26/2023)
Modern transformer-based models designed for computer vision have outper...

A Visual Tour Of Current Challenges In Multimodal Language Models (10/22/2022)
Transformer models trained on massive text corpora have become the de fa...
