Dual-view Molecule Pre-training

06/17/2021
by Jinhua Zhu, et al.

Inspired by its success in natural language processing and computer vision, pre-training has attracted substantial attention in cheminformatics and bioinformatics, especially for molecule-based tasks. A molecule can be represented either as a graph (where atoms are connected by bonds) or as a SMILES sequence (obtained by applying a depth-first search to the molecular graph with specific rules). Existing works on molecule pre-training use either graph representations only or SMILES representations only. In this work, we propose to leverage both representations and design a new pre-training algorithm, dual-view molecule pre-training (briefly, DMP), that effectively combines the strengths of both types of molecule representations. The DMP model consists of two branches: a Transformer branch that takes the SMILES sequence of a molecule as input, and a GNN branch that takes the molecular graph as input. The training of DMP comprises three tasks: (1) predicting masked tokens in a SMILES sequence with the Transformer branch, (2) predicting masked atoms in a molecular graph with the GNN branch, and (3) maximizing the consistency between the two high-level representations output separately by the Transformer and GNN branches. After pre-training, we can use either the Transformer branch (recommended based on our empirical results), the GNN branch, or both for downstream tasks. DMP is tested on nine molecular property prediction tasks and achieves state-of-the-art performance on seven of them. Furthermore, we test DMP on three retrosynthesis tasks and achieve state-of-the-art results on the USPTO-full dataset. Our code will be released soon.
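To make the three-task objective concrete, below is a minimal PyTorch sketch of how such a dual-branch pre-training loss could be wired up. The encoder interfaces, prediction heads, cosine-similarity consistency term, and equal loss weighting are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Illustrative sketch of a DMP-style dual-view objective (not the official code).
# Assumptions: each encoder returns (per-element representations, global representation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualViewModel(nn.Module):
    def __init__(self, smiles_encoder, graph_encoder, hidden_dim, vocab_size, num_atom_types):
        super().__init__()
        self.smiles_encoder = smiles_encoder          # Transformer branch over SMILES tokens
        self.graph_encoder = graph_encoder            # GNN branch over the molecular graph
        self.token_head = nn.Linear(hidden_dim, vocab_size)      # task 1: masked SMILES tokens
        self.atom_head = nn.Linear(hidden_dim, num_atom_types)   # task 2: masked atoms

    def forward(self, smiles_batch, graph_batch):
        tok_repr, smiles_global = self.smiles_encoder(smiles_batch)  # (B, L, H), (B, H)
        atom_repr, graph_global = self.graph_encoder(graph_batch)    # (B, N, H), (B, H)
        return (self.token_head(tok_repr), self.atom_head(atom_repr),
                smiles_global, graph_global)

def dmp_loss(token_logits, atom_logits, smiles_global, graph_global,
             masked_token_labels, masked_atom_labels, ignore_index=-100):
    # Task 1: masked-token prediction on the SMILES sequence (positions that were
    # not masked carry ignore_index and do not contribute to the loss).
    l_mlm = F.cross_entropy(token_logits.transpose(1, 2), masked_token_labels,
                            ignore_index=ignore_index)
    # Task 2: masked-atom prediction on the molecular graph.
    l_mam = F.cross_entropy(atom_logits.transpose(1, 2), masked_atom_labels,
                            ignore_index=ignore_index)
    # Task 3: consistency between the two global molecule representations;
    # cosine similarity is one plausible choice, the paper's exact loss may differ.
    l_cons = 1.0 - F.cosine_similarity(smiles_global, graph_global, dim=-1).mean()
    # Equal weighting of the three terms is an assumption made for simplicity.
    return l_mlm + l_mam + l_cons
```

After pre-training with such an objective, either branch can be kept as a standalone encoder and fine-tuned on a downstream task such as property prediction.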
