Explainability Techniques for Chemical Language Models

05/25/2023
by Stefan Hödl, et al.

Explainability techniques are crucial for gaining insight into the reasons behind the predictions of deep learning models, but they have not yet been applied to chemical language models. We propose an explainable AI technique that attributes the importance of individual atoms to the predictions made by these models. Our method backpropagates relevance information through the model to the chemical input string and visualizes the importance of individual atoms. We focus on self-attention Transformers operating on molecular string representations and leverage a pretrained encoder for finetuning. We showcase the method by predicting and visualizing solubility in water and organic solvents. We achieve competitive model performance while obtaining interpretable predictions, which we use to inspect the pretrained model.
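To make the idea concrete, the sketch below shows one way to score individual input characters of a SMILES string for a Transformer property regressor. It is not the authors' relevance-backpropagation method: it substitutes plain gradient x input saliency, and the model, vocabulary, and solubility head are hypothetical placeholders invented for illustration.

```python
# A minimal sketch of per-atom attribution for a SMILES Transformer
# regressor. NOT the paper's exact relevance-backpropagation method:
# it uses gradient x input saliency as a simple stand-in. The model,
# vocabulary, and solubility head below are illustrative assumptions.
import torch
import torch.nn as nn

SMILES = "CCO"  # ethanol; each character maps to one input token
vocab = {ch: i for i, ch in enumerate(sorted(set(SMILES)))}

class SmilesRegressor(nn.Module):
    def __init__(self, vocab_size, d_model=32, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, 1)  # scalar property, e.g. solubility

    def forward(self, emb):               # takes embeddings, not token ids,
        h = self.encoder(emb)             # so we can differentiate w.r.t. them
        return self.head(h.mean(dim=1))   # mean-pool tokens -> one prediction

model = SmilesRegressor(len(vocab)).eval()
tokens = torch.tensor([[vocab[ch] for ch in SMILES]])

# Embed the tokens outside the forward pass and keep their gradient.
emb = model.embed(tokens)
emb.retain_grad()
model(emb).sum().backward()

# Gradient x input, summed over embedding dims, yields one relevance
# score per input character, which can be mapped back onto the atoms.
relevance = (emb.grad * emb).sum(dim=-1).squeeze(0)
for ch, score in zip(SMILES, relevance.tolist()):
    print(f"{ch}: {score:+.4f}")
```

In practice, character-level scores would be grouped into atom-level scores (multi-character tokens such as "Cl" span several characters) and rendered on the molecular structure, for example with RDKit's similarity-map drawing utilities.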

Related research

05/09/2023 - Language models can generate molecules, materials, and protein binding sites directly in three dimensions as XYZ, CIF, and PDB files
  Language models are powerful tools for molecular design. Currently, the ...

11/23/2022 - Group SELFIES: A Robust Fragment-Based Molecular String Representation
  We introduce Group SELFIES, a molecular string representation that lever...

08/16/2023 - Atom-by-atom protein generation and beyond with language models
  Protein language models learn powerful representations directly from seq...

08/19/2019 - Encoder-Agnostic Adaptation for Conditional Language Generation
  Large pretrained language models have changed the way researchers approa...

07/24/2020 - Named entity recognition in chemical patents using ensemble of contextual language models
  Chemical patent documents describe a broad range of applications holding...

04/05/2021 - Explainability-aided Domain Generalization for Image Classification
  Traditionally, for most machine learning settings, gaining some degree o...

11/07/2022 - How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers
  The attention mechanism is considered the backbone of the widely-used Tr...
