Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration

06/15/2023
by Chenyang Lyu, et al.

Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on data modalities beyond text has not been fully studied. In this work, we propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual, audio, and textual information. Macaw-LLM consists of three main components: a modality module for encoding multi-modal data, a cognitive module for harnessing pretrained LLMs, and an alignment module for harmonizing the diverse representations. The alignment module bridges multi-modal features to textual features, simplifying the adaptation from the modality modules to the cognitive module. In addition, we construct a large-scale multi-modal instruction dataset of multi-turn dialogues, comprising 69K image instances and 50K video instances. We have made our data, code, and model publicly available, which we hope will pave the way for future research in multi-modal LLMs and expand the capabilities of LLMs to handle diverse data modalities and complex real-world scenarios.
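The abstract describes the three-module design only at a high level. As a concrete illustration, here is a minimal PyTorch sketch of how such a pipeline could be wired together: per-modality encoders, an alignment module that maps modality features into the LLM's textual embedding space, and a pretrained LLM as the cognitive module. All names (AlignmentModule, MacawStyleModel, output_dim), dimensions, the learned-query cross-attention design, and the HuggingFace-style inputs_embeds call are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a Macaw-style multi-modal pipeline, assuming PyTorch.
# Names, dimensions, and design details are assumptions for illustration.
import torch
import torch.nn as nn


class AlignmentModule(nn.Module):
    """Projects modality features into the LLM's textual embedding space."""

    def __init__(self, modality_dim: int, text_dim: int, num_queries: int = 64):
        super().__init__()
        # Learnable queries attend over modality features, yielding a
        # fixed-length sequence of text-space embeddings (one plausible
        # reading of "bridging multi-modal features to textual features").
        self.queries = nn.Parameter(torch.randn(num_queries, text_dim))
        self.attn = nn.MultiheadAttention(
            embed_dim=text_dim, num_heads=8,
            kdim=modality_dim, vdim=modality_dim, batch_first=True,
        )

    def forward(self, modality_feats: torch.Tensor) -> torch.Tensor:
        # modality_feats: (batch, seq_len, modality_dim)
        batch = modality_feats.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        aligned, _ = self.attn(q, modality_feats, modality_feats)
        return aligned  # (batch, num_queries, text_dim)


class MacawStyleModel(nn.Module):
    """Modality encoders -> alignment modules -> pretrained LLM."""

    def __init__(self, encoders: dict, llm: nn.Module, text_dim: int):
        super().__init__()
        # `encoders` maps modality names to encoder modules (e.g. wrappers
        # around vision/audio backbones); `output_dim` is a hypothetical
        # attribute each wrapper is assumed to expose.
        self.encoders = nn.ModuleDict(encoders)
        self.aligners = nn.ModuleDict({
            name: AlignmentModule(enc.output_dim, text_dim)
            for name, enc in encoders.items()
        })
        self.llm = llm  # pretrained causal LM (the "cognitive module")

    def forward(self, inputs: dict, text_embeds: torch.Tensor):
        # Align each modality, then prepend the aligned tokens to the
        # instruction's text embeddings before running the LLM.
        aligned = [self.aligners[name](self.encoders[name](x))
                   for name, x in inputs.items()]
        full = torch.cat(aligned + [text_embeds], dim=1)
        # Assumes a HuggingFace-style LM that accepts `inputs_embeds`.
        return self.llm(inputs_embeds=full)
```

One appeal of the learned-query design in this sketch is that it produces a fixed-length aligned prefix regardless of how long each modality's feature sequence is, which keeps the adaptation to the LLM simple.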


Related research

09/13/2023
Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics
Multi-modal large language models (MLLMs) are trained based on large lan...

08/14/2023
GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text
Large language models have made significant strides in natural language ...

06/11/2023
LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark
Large language models have become a potential pathway toward achieving a...

07/19/2023
(Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs
We demonstrate how images and sounds can be used for indirect prompt and...

05/12/2022
A Generalist Agent
Inspired by progress in large-scale language modeling, we apply a simila...

07/17/2023
BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs
LLMs have demonstrated remarkable abilities at interacting with humans t...

07/05/2022
Multi-modal Robustness Analysis Against Language and Visual Perturbations
Joint visual and language modeling on large-scale datasets has recently ...
