Kosmos-2: Grounding Multimodal Large Language Models to the World

06/26/2023
by   Zhiliang Peng, et al.

We introduce Kosmos-2, a Multimodal Large Language Model (MLLM) with new capabilities for perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent referring expressions as links in Markdown, i.e., "[text span](bounding boxes)", where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct a large-scale dataset of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodied AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Data, demo, and pretrained models are available at https://aka.ms/kosmos-2.
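The Markdown-link format above can be illustrated with a small sketch. The abstract does not specify the exact tokenization, so the details here are assumptions: a square grid of location bins (32×32 is used for illustration), `<loc_i>` token names, and the hypothetical helpers `box_to_location_tokens` and `ground_span` are all illustrative, not the model's actual vocabulary or API.

```python
def box_to_location_tokens(box, image_size, num_bins=32):
    """Quantize a pixel-space bounding box (x1, y1, x2, y2) into two
    discrete location tokens (top-left and bottom-right corners) on a
    num_bins x num_bins grid. The <loc_i> names are illustrative only."""
    w, h = image_size
    x1, y1, x2, y2 = box

    def bin_index(x, y):
        # Map a pixel coordinate to a flat grid-cell index, clamped to the grid.
        col = min(int(x / w * num_bins), num_bins - 1)
        row = min(int(y / h * num_bins), num_bins - 1)
        return row * num_bins + col

    return f"<loc_{bin_index(x1, y1)}><loc_{bin_index(x2, y2)}>"


def ground_span(text_span, box, image_size):
    """Render a referring expression as a Markdown-style link,
    i.e. "[text span](location tokens)" as described in the abstract."""
    return f"[{text_span}]({box_to_location_tokens(box, image_size)})"


# A box spanning the full 512x512 image covers bins 0 through 1023.
print(ground_span("a snowman", (0, 0, 512, 512), (512, 512)))
# → [a snowman](<loc_0><loc_1023>)
```

In this scheme a grounded caption is ordinary text interleaved with such links, so grounding reduces to next-token prediction over an extended vocabulary rather than a separate detection head.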


