MERLOT: Multimodal Neural Script Knowledge Models

06/04/2021
by Rowan Zellers, et al.

As humans, we understand events in the visual world contextually, performing multimodal reasoning across time to make inferences about the past, present, and future. We introduce MERLOT, a model that learns multimodal script knowledge by watching millions of YouTube videos with transcribed speech, in an entirely label-free, self-supervised manner. By pretraining with a mix of both frame-level (spatial) and video-level (temporal) objectives, our model not only learns to match images to temporally corresponding words, but also to contextualize what is happening globally over time. As a result, MERLOT exhibits strong out-of-the-box representations of temporal commonsense, and achieves state-of-the-art performance on 12 different video QA datasets when finetuned. It also transfers well to the world of static images, allowing models to reason about the dynamic context behind visual scenes. On Visual Commonsense Reasoning, MERLOT answers questions correctly with 80.6% accuracy, outperforming state-of-the-art models of similar size by over 3%, even those that make heavy use of auxiliary supervised data (like object bounding boxes). Ablation analyses demonstrate the complementary importance of: 1) training on videos versus static images; 2) scaling the magnitude and diversity of the pretraining video corpus; and 3) using diverse objectives that encourage full-stack multimodal reasoning, from the recognition to cognition level.
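
The frame-level objective described above, matching each frame to its temporally corresponding transcript words, is in essence a contrastive alignment loss. The sketch below illustrates one plausible form of such an objective in PyTorch; it is not the authors' implementation, and the embedding size, temperature, and random stand-in features are illustrative assumptions.

```python
# Minimal sketch (not MERLOT's actual code) of a frame-to-transcript
# contrastive objective: each frame should be most similar to the
# transcript segment spoken at the same timestamp.
import torch
import torch.nn.functional as F

def frame_transcript_contrastive_loss(frame_emb: torch.Tensor,
                                      text_emb: torch.Tensor,
                                      temperature: float = 0.05) -> torch.Tensor:
    """frame_emb, text_emb: (num_segments, dim) encoder outputs, where
    row i of each tensor comes from the same moment in the video."""
    frame_emb = F.normalize(frame_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = frame_emb @ text_emb.t() / temperature  # pairwise similarities
    targets = torch.arange(logits.size(0))           # aligned pairs sit on the diagonal
    # Symmetric InfoNCE: frames must pick their own transcript segment,
    # and transcript segments must pick their own frame.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Random features standing in for image/text encoder outputs (dim 256 assumed):
frames = torch.randn(8, 256)  # 8 frames sampled from one video
texts = torch.randn(8, 256)   # the 8 time-aligned transcript snippets
loss = frame_transcript_contrastive_loss(frames, texts)
```

The video-level (temporal) objectives the abstract mentions, such as reasoning over the global order of events, would sit on top of these per-segment representations rather than replace this loss.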


Related research

KM-BART: Knowledge Enhanced Multimodal BART for Visual Commonsense Generation (01/02/2021)
We present Knowledge Enhanced Multimodal BART (KM-BART), which is a Tran...

Visual Commonsense Graphs: Reasoning about the Dynamic Context of a Still Image (04/22/2020)
Even from a single frame of a still image, people can reason about the d...

PACS: A Dataset for Physical Audiovisual CommonSense Reasoning (03/21/2022)
In order for AI to be safely deployed in real-world scenarios such as ho...

Hybrid Reasoning Network for Video-based Commonsense Captioning (08/05/2021)
The task of video-based commonsense captioning aims to generate event-wi...

VindLU: A Recipe for Effective Video-and-Language Pretraining (12/09/2022)
The last several years have witnessed remarkable progress in video-and-l...

Procedural Reasoning Networks for Understanding Multimodal Procedures (09/19/2019)
This paper addresses the problem of comprehending procedural commonsense...

Designing Multimodal Datasets for NLP Challenges (05/12/2021)
In this paper, we argue that the design and development of multimodal da...
