ChemBERTa: Large-Scale Self-Supervised Pretraining for Molecular Property Prediction

10/19/2020
by Seyone Chithrananda, et al.

Graph neural networks (GNNs) and chemical fingerprints are the predominant approaches to representing molecules for property prediction. However, in NLP, transformers have become the de facto standard for representation learning thanks to their strong downstream task transfer. In parallel, the software ecosystem around transformers is maturing rapidly, with libraries like HuggingFace and BertViz enabling streamlined training and introspection. In this work, we make one of the first attempts to systematically evaluate transformers on molecular property prediction tasks via our ChemBERTa model. ChemBERTa scales well with pretraining dataset size, offering competitive downstream performance on MoleculeNet and useful attention-based visualization modalities. Our results suggest that transformers offer a promising avenue of future work for molecular representation learning and property prediction. To facilitate these efforts, we release a curated dataset of 77M SMILES from PubChem suitable for large-scale self-supervised pretraining.
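The pretrain-then-transfer workflow described above can be sketched with the HuggingFace Transformers API. The snippet below is a minimal illustration, not the authors' training code: the Hub checkpoint ID "seyonec/ChemBERTa-zinc-base-v1" and the use of the fill-mask pipeline are assumptions not stated in the abstract. It first probes the masked-SMILES pretraining objective, then reuses the same encoder with a fresh classification head for MoleculeNet-style downstream property prediction.

# Minimal sketch, assuming the transformers library and a ChemBERTa
# checkpoint on the HuggingFace Hub (the ID below is an assumption).
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    pipeline,
)

CHECKPOINT = "seyonec/ChemBERTa-zinc-base-v1"  # assumed Hub ID

# 1) Masked-token probe: the self-supervised objective used for pretraining.
fill_mask = pipeline("fill-mask", model=CHECKPOINT, tokenizer=CHECKPOINT)
masked_smiles = "C1=CC=CC<mask>C1"  # benzene SMILES with one token masked
for pred in fill_mask(masked_smiles, top_k=3):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")

# 2) Downstream transfer: load the pretrained encoder with a new
#    classification head for a binary property-prediction task
#    (e.g., a MoleculeNet classification benchmark), to be fine-tuned
#    on labeled data.
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, num_labels=2
)
inputs = tokenizer("CCO", return_tensors="pt")  # ethanol
logits = model(**inputs).logits
print(logits.shape)  # (batch_size, num_labels)

Attention-based introspection of the same checkpoint, as mentioned in the abstract, can be done by passing the model and tokenizer to BertViz's visualization utilities.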


