Vision+X: A Survey on Multimodal Learning in the Light of Data

10/05/2022, by Ye Zhu, et al.

We perceive and communicate with the world in a multisensory manner: different information sources are processed and interpreted in sophisticated ways by separate parts of the human brain, constituting a complex yet harmonious and unified sensing system. To endow machines with true intelligence, multimodal machine learning, which incorporates data from various modalities, has become an increasingly popular research area with emerging technical advances in recent years. In this paper, we present a survey on multimodal machine learning from a novel perspective that considers not only the purely technical aspects but also the nature of the different data modalities. We analyze the commonness and uniqueness of data formats ranging from vision and audio to text and others, and then present the technical developments categorized by the combination of Vision+X, where the vision data play a fundamental role in most multimodal learning works. We investigate the existing literature on multimodal learning at both the representation learning and downstream application levels, and provide an additional comparison in the light of their technical connections with the nature of the data, e.g., the semantic consistency between image objects and textual descriptions, or the rhythm correspondence between video dance moves and musical beats. Exploiting this alignment, as well as the existing gaps between the intrinsic nature of a data modality and current technical designs, will benefit future research in better addressing the specific challenges of concrete multimodal tasks, and in promoting unified multimodal machine learning frameworks closer to a real human intelligence system.

Related research

11/10/2019 · Multimodal Intelligence: Representation Learning, Information Fusion, and Applications
Deep learning has revolutionized speech recognition, image recognition, ...

07/29/2021 · Multimodal Co-learning: Challenges, Applications with Datasets, Recent Advances and Future Directions
Multimodal deep learning systems which employ multiple modalities like t...

10/18/2022 · MMGA: Multimodal Learning with Graph Alignment
Multimodal pre-training breaks down the modality barriers and allows the...

03/16/2023 · Lessons Learnt from a Multimodal Learning Analytics Deployment In-the-wild
Multimodal Learning Analytics (MMLA) innovations make use of rapidly evo...

02/18/2022 · A Review on Methods and Applications in Multimodal Deep Learning
Deep Learning has implemented a wide range of applications and has becom...

03/10/2021 · What is Multimodality?
The last years have shown rapid developments in the field of multimodal ...

07/29/2020 · Presentation and Analysis of a Multimodal Dataset for Grounded Language Learning
Grounded language acquisition – learning how language-based interactions...
