Beyond Just Vision: A Review on Self-Supervised Representation Learning on Multimodal and Temporal Data

by Shohreh Deldari et al.

Self-Supervised Representation Learning (SSRL) has attracted much attention in the fields of computer vision, speech processing, and natural language processing (NLP), and more recently with other modalities, including time series from sensors. The popularity of self-supervised learning is driven by the fact that traditional models typically require a huge amount of well-annotated data for training, and acquiring annotated data can be a difficult and costly process. Self-supervised methods have been introduced to improve the efficiency of training data through discriminative pre-training of models using supervisory signals that are freely obtained from the raw data. Unlike existing reviews of SSRL, which have predominantly focused upon methods in the fields of CV or NLP for a single modality, we aim to provide the first comprehensive review of multimodal self-supervised learning methods for temporal data. To this end, we 1) provide a comprehensive categorization of existing SSRL methods, 2) introduce a generic pipeline by defining the key components of an SSRL framework, 3) compare existing models in terms of their objective function, network architecture, and potential applications, and 4) review existing multimodal techniques in each category and across various modalities. Finally, we present existing weaknesses and future opportunities. We believe our work develops a perspective on the requirements of SSRL in domains that utilise multimodal and/or temporal data.



