HighMMT: Towards Modality and Task Generalization for High-Modality Representation Learning

03/02/2022
by   Paul Pu Liang, et al.

Learning multimodal representations involves discovering correspondences and integrating information from multiple heterogeneous sources of data. While recent research has begun to explore the design of more general-purpose multimodal models (in contrast to the prior focus on domain- and modality-specific architectures), these methods are still largely focused on a small set of modalities in the language, vision, and audio space. In order to accelerate generalization towards diverse and understudied modalities, we investigate methods for high-modality (a large set of diverse modalities) and partially-observable (each task only defined on a small subset of modalities) scenarios. To tackle these challenges, we design a general multimodal model that enables multitask and transfer learning: multitask learning with shared parameters enables stable parameter counts (addressing scalability), and cross-modal transfer learning enables information sharing across modalities and tasks (addressing partial observability). Our resulting model generalizes across text, image, video, audio, time-series, sensor, table, and set modalities from different research areas, improves the tradeoff between performance and efficiency, transfers to new modalities and tasks, and reveals surprising insights on the nature of information sharing in multitask models. We release our code and benchmarks, which we hope will provide a unified platform for subsequent theoretical and empirical analysis: https://github.com/pliang279/HighMMT.
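To make the parameter-sharing idea concrete, the following is a minimal sketch (not the authors' actual HighMMT architecture; all names, dimensions, and the average-fusion choice are illustrative assumptions). Each modality gets only a lightweight input projection into a shared space, while a single set of fusion parameters is reused by every task, so the shared parameter count stays constant as modalities are added, and tasks defined on different modality subsets (partial observability) can still share the same fusion weights:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # shared representation dimension (illustrative choice)

# One small projection per modality maps raw features into the shared space.
# These are the only parameters that grow with the number of modalities.
modality_dims = {"text": 16, "image": 32, "audio": 24, "time-series": 12}
projections = {m: rng.standard_normal((d, D)) * 0.1
               for m, d in modality_dims.items()}

# Shared fusion parameters, reused by every task regardless of which
# subset of modalities that task observes.
W_fuse = rng.standard_normal((D, D)) * 0.1

def represent(inputs):
    """Fuse a partially observed set of modalities with shared parameters."""
    embedded = [x @ projections[m] for m, x in inputs.items()]
    pooled = np.mean(embedded, axis=0)  # simple average fusion (assumption)
    return np.tanh(pooled @ W_fuse)     # shared fusion layer

# Two tasks defined on disjoint modality subsets (partial observability):
# both reuse the same W_fuse, enabling cross-modal information sharing.
task_a = represent({"text": rng.standard_normal((1, 16)),
                    "image": rng.standard_normal((1, 32))})
task_b = represent({"audio": rng.standard_normal((1, 24)),
                    "time-series": rng.standard_normal((1, 12))})

# The shared parameter count is independent of the number of tasks/modalities.
print(task_a.shape, task_b.shape, W_fuse.size)  # (1, 8) (1, 8) 64
```

Adding a ninth modality here would only add one small projection matrix; the fusion weights `W_fuse` are untouched, which is the sense in which multitask sharing keeps parameter counts stable.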


