Triangular Contrastive Learning on Molecular Graphs

05/26/2022
by MinGyu Choi, et al.

Recent contrastive learning methods have proven effective across a variety of tasks, learning generalizable representations that are invariant to data augmentation and thereby achieving state-of-the-art performance. Given the multifaceted nature of the large unlabeled datasets used in self-supervised learning, while the majority of real-world downstream tasks consume only a single format of data, a multimodal framework that trains a single modality to learn diverse perspectives from other modalities is an important challenge. In this paper, we propose TriCL (Triangular Contrastive Learning), a universal framework for trimodal contrastive learning. TriCL takes advantage of Triangular Area Loss, a novel intermodal contrastive loss that learns the angular geometry of the embedding space by simultaneously contrasting the areas of positive and negative triplets. Systematic observation of the embedding space in terms of alignment and uniformity shows that Triangular Area Loss addresses the line-collapsing problem by discriminating modalities by angle. Our experimental results also demonstrate that TriCL outperforms baselines on the downstream task of molecular property prediction, suggesting that these advantages of the embedding space indeed benefit downstream performance.
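To make the idea concrete, below is a minimal sketch of what an area-based trimodal contrastive loss along these lines might look like. This is not the authors' implementation: it assumes PyTorch, one positive triplet per sample, the Gram-determinant formula for the area of a triangle in arbitrary dimension, and an InfoNCE-style objective where small positive-triplet area scores high against negative triplets; the names `triangle_area`, `triangular_area_loss`, and `tau` are illustrative.

```python
import torch
import torch.nn.functional as F

def triangle_area(u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Area of the triangle spanned by edge vectors u and v,
    via the Gram determinant: 0.5 * sqrt(|u|^2 |v|^2 - (u.v)^2).
    Valid in any embedding dimension, where a cross product is not."""
    uu = (u * u).sum(-1)
    vv = (v * v).sum(-1)
    uv = (u * v).sum(-1)
    return 0.5 * torch.sqrt((uu * vv - uv ** 2).clamp(min=1e-12))

def triangular_area_loss(za, zb, zc, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrast over triangle areas (illustrative sketch).

    za, zb, zc: (N, d) embeddings of the same N samples from three
    modalities. The positive triplet (za_i, zb_i, zc_i) should span a
    small area; negative triplets (za_i, zb_j, zc_j), j != i, large ones.
    """
    za, zb, zc = (F.normalize(z, dim=-1) for z in (za, zb, zc))
    n = za.size(0)
    # Edge vectors for all anchor/candidate pairs: entry [i, j] pairs
    # anchor za_i with the candidate triplet partners (zb_j, zc_j).
    u = zb.unsqueeze(0) - za.unsqueeze(1)   # (N, N, d)
    v = zc.unsqueeze(0) - za.unsqueeze(1)   # (N, N, d)
    logits = -triangle_area(u, v) / tau     # smaller area -> higher score
    targets = torch.arange(n, device=za.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Usage: three modality encoders would produce za, zb, zc for a batch.
za, zb, zc = (torch.randn(32, 128, requires_grad=True) for _ in range(3))
loss = triangular_area_loss(za, zb, zc)
loss.backward()
```

Because the area of a degenerate (collinear) triangle is zero regardless of how the three points are spread along the line, contrasting positive against negative areas is what penalizes the line-collapsing configuration and forces the modalities apart by angle.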


