Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language

04/01/2022
by Andy Zeng, et al.

Large foundation models can exhibit unique capabilities depending on the domain of data they are trained on. While these domains are generic, they may only barely overlap. For example, visual-language models (VLMs) are trained on Internet-scale image captions, while large language models (LMs) are trained on Internet-scale text with no images (e.g., from spreadsheets to SAT questions). As a result, these models store different forms of commonsense knowledge across different domains. In this work, we show that this model diversity is symbiotic and can be leveraged to build AI systems with structured Socratic dialogue, in which new multimodal tasks are formulated as a guided language-based exchange between different pre-existing foundation models, without additional finetuning. In the context of egocentric perception, we present a case study of Socratic Models (SMs) that can provide meaningful results for complex tasks such as generating free-form answers to contextual questions about egocentric video, by formulating video Q&A as short-story Q&A, i.e., summarizing the video into a short story and then answering questions about it. Additionally, SMs can generate captions for Internet images and are competitive with the state of the art on zero-shot video-to-text retrieval, with 42.8 R@1 on MSR-VTT 1k-A. SMs demonstrate how to compose foundation models zero-shot to capture new multimodal functionalities, without domain-specific data collection. Prototypes are available at socraticmodels.github.io.
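The video Q&A recipe described in the abstract can be sketched as a simple pipeline: a VLM captions sampled frames, an LM condenses those captions into a short story, and the same LM then answers questions about the story. The functions below are hypothetical stand-ins (not the paper's actual code); in practice `caption_frames` and `lm` would wrap real VLM and LM APIs.

```python
def caption_frames(frames):
    """VLM stand-in: produce one caption per sampled video frame (stubbed
    here; a real system would call a visual-language model)."""
    return [f"frame {i}: {f}" for i, f in enumerate(frames)]

def lm(prompt):
    """LM stand-in: returns a deterministic placeholder 'completion' so the
    sketch is runnable; a real system would call a large language model."""
    return f"[LM completion for: {prompt[:40]}...]"

def video_qa(frames, question):
    """Video Q&A as short-story Q&A: caption, summarize, then answer."""
    captions = caption_frames(frames)
    story = lm("Summarize these frame captions into a short story:\n"
               + "\n".join(captions))
    return lm(f"Story: {story}\nQ: {question}\nA:")

answer = video_qa(["a person opens a fridge", "the person pours milk"],
                  "What did the person drink?")
```

The key design point is that every hand-off between models is plain language (captions, story, question), which is why no multimodal finetuning is needed to compose them.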


