Unified Molecular Modeling via Modality Blending

07/12/2023
by Qiying Yu, et al.

Self-supervised molecular representation learning is critical for molecule-based tasks such as AI-assisted drug discovery. Recent studies leverage both 2D and 3D information for representation learning, but rely on straightforward alignment strategies that treat each modality separately. In this work, we introduce a novel "blend-then-predict" self-supervised learning method (MoleBLEND), which blends atom relations from different modalities into one unified relation matrix for encoding, then recovers modality-specific information for both 2D and 3D structures. By treating atom relationships as anchors, seemingly dissimilar 2D and 3D manifolds are organically aligned and integrated at the fine-grained relation level. Extensive experiments show that MoleBLEND achieves state-of-the-art performance across major 2D/3D benchmarks. We further provide theoretical insights from the perspective of mutual-information maximization, demonstrating that our method unifies contrastive, generative (inter-modal prediction) and mask-then-predict (intra-modal prediction) objectives into a single cohesive blend-then-predict framework.
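The core idea of blend-then-predict can be illustrated with a toy sketch. The snippet below is a hypothetical simplification (not the paper's implementation): for each atom pair, one modality's relation value (e.g. graph shortest-path hops for 2D, Euclidean distances for 3D) is randomly selected into a single unified relation matrix, and a mask records which modality was kept so that training can ask the encoder to recover the held-out values. The function name `blend_relations` and the choice of relation features are assumptions for illustration only.

```python
import numpy as np

def blend_relations(rel_2d, rel_3d, p_2d=0.5, seed=0):
    """Toy sketch of relation-level blending: for each atom pair, keep the
    relation value from one randomly chosen modality, producing one unified
    relation matrix plus a mask recording which modality was used."""
    assert rel_2d.shape == rel_3d.shape
    rng = np.random.default_rng(seed)
    take_2d = rng.random(rel_2d.shape) < p_2d   # True -> keep the 2D value
    blended = np.where(take_2d, rel_2d, rel_3d)
    return blended, take_2d

# Toy 3-atom molecule.
rel_2d = np.array([[0., 1., 2.],
                   [1., 0., 1.],
                   [2., 1., 0.]])        # e.g. shortest-path hops in the graph
rel_3d = np.array([[0.0, 1.5, 2.4],
                   [1.5, 0.0, 1.5],
                   [2.4, 1.5, 0.0]])     # e.g. pairwise Euclidean distances

blended, mask = blend_relations(rel_2d, rel_3d)
# A model would encode `blended` and be trained to predict the held-out
# modality for each atom pair: rel_3d where mask is True, rel_2d where False.
```

In this framing, each entry of the unified matrix is observed in exactly one modality, so recovering the other modality's value for that pair is an inter-modal prediction, while recovering the same modality's value plays the role of a mask-then-predict objective.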


Related research

06/11/2021 · ChemRL-GEM: Geometry Enhanced Molecular Representation Learning for Property Prediction
Effective molecular representation learning is of great importance to fa...

06/03/2021 · TVDIM: Enhancing Image Self-Supervised Pretraining via Noisy Text Data
Among ubiquitous multimodal data in the real world, text is the modality...

06/21/2022 · Probing Visual-Audio Representation for Video Highlight Detection via Hard-Pairs Guided Contrastive Learning
Video highlight detection is a crucial yet challenging problem that aims...

03/10/2023 · Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning
Contrastive loss has been increasingly used in learning representations ...

03/09/2020 · Multi-modal Self-Supervision from Generalized Data Transformations
Self-supervised learning has advanced rapidly, with several results beat...

03/28/2022 · S2-Net: Self-supervision Guided Feature Representation Learning for Cross-Modality Images
Combining the respective advantages of cross-modality images can compens...

10/29/2022 · Self-supervised predictive coding and multimodal fusion advance patient deterioration prediction in fine-grained time resolution
In the Emergency Department (ED), accurate prediction of critical events...
