Retrosynthesis Prediction with Conditional Graph Logic Network

01/06/2020 ∙ by Hanjun Dai, et al. ∙ Google ∙ MIT ∙ Georgia Institute of Technology

Retrosynthesis is one of the fundamental problems in organic chemistry. The task is to identify reactants that can be used to synthesize a specified product molecule. Recently, computer-aided retrosynthesis is finding renewed interest from both chemistry and computer science communities. Most existing approaches rely on template-based models that define subgraph matching rules, but whether or not a chemical reaction can proceed is not defined by hard decision rules. In this work, we propose a new approach to this task using the Conditional Graph Logic Network, a conditional graphical model built upon graph neural networks that learns when rules from reaction templates should be applied, implicitly considering whether the resulting reaction would be both chemically feasible and strategic. We also propose an efficient hierarchical sampling to alleviate the computation cost. While achieving a significant improvement of 8.1% over current state-of-the-art methods on the benchmark dataset, our model also offers interpretations for the prediction.




1 Introduction

Retrosynthesis planning is the procedure of identifying a series of reactions that leads to the synthesis of a target product. It was first formalized by E. J. Corey [1] and has since become one of the fundamental problems in organic chemistry. This problem of "working backwards from the target" is challenging due to the size of the search space (the vast number of theoretically possible transformations), and thus requires skill and creativity from experienced chemists. Recently, various computer algorithms [2] have come to work in assistance to experienced chemists, saving them tremendous time and effort.

The simplest formulation of retrosynthesis is to take the target product as input and predict possible reactants (we focus on this "single-step" version of retrosynthesis in this paper). It is essentially the "reverse problem" of reaction prediction. In reaction prediction, the reactants (and sometimes reagents) are given as input, and the desired outputs are possible products. In that case, the atoms of the desired products are a subset of the reactant atoms, since side products are often ignored (see Fig 1). Models are thus essentially designed to identify this subset among the reactant atoms and reassemble it into the product, which can be treated as a deductive reasoning process. In sharp contrast, retrosynthesis must identify a superset of the atoms in the target product; it is therefore an abductive reasoning process that requires "creativity" to solve, making it a harder problem. Although recent advances in graph neural networks have led to superior performance in reaction prediction [3, 4, 5], such advances do not transfer directly to retrosynthesis.

Computer-aided retrosynthesis tools have been deployed over the years since [6]. Some of them are completely rule-based systems [7] that do not scale well, due to high computation cost and incomplete coverage of the rules, especially when rules are expert-defined rather than algorithmically extracted [2]. Despite these limitations, such rules are very useful for encoding chemical transformations and are easy to interpret. Building on them, retrosim [8] uses molecule and reaction fingerprint similarities to select which rules to apply for retrosynthesis. Other approaches have used neural classification models for this selection task [9]. On the other hand, there have recently been attempts to use sequence-to-sequence models to directly predict the SMILES representation of the reactants [10, 11] (and, for the forward prediction problem, the products [12, 13]). Albeit simple and expressive, these approaches ignore rich chemistry knowledge and thus require a huge amount of training data. Such models also lack interpretable reasoning behind their predictions.

The current landscape of computer-aided synthesis planning motivated us to pursue an algorithm that shares the interpretability of template-based methods while exploiting the scalability and expressiveness of neural networks to learn when such rules apply. In this paper, we propose the Conditional Graph Logic Network towards this direction, where chemistry knowledge about reaction templates is treated as logic rules and a conditional graphical model is introduced to tolerate the noise in these rules. In this model, the variables are molecules, while the synthetic relationships to be inferred are defined among groups of molecules. Furthermore, to handle the potentially infinite number of possible molecule entities, we exploit neural graph embeddings in this model.

Our contribution can be summarized as follows:


  • We propose a new graphical model for the challenging retrosynthesis task. Our model brings both the benefit of the capacity from neural embeddings, and the interpretability from tight integration of probabilistic models and chemical rules.

  • We propose an efficient hierarchical sampling method for approximate learning that exploits the structure of the rules. This algorithm not only makes training feasible, but also provides interpretations for the predictions.

  • Experiments on the benchmark datasets show a significant improvement over existing state-of-the-art methods in top-one accuracy.

Other related work:

Recently there have been works using machine learning to enhance rule-based systems. Most of them treat rule selection as multi-class classification [9] or hierarchical classification [14], where similar rules are grouped into subcategories. One potential issue is that the model size grows with the number of rules. Our work directly models the conditional joint probability of both the rules and the reactants using embeddings, so the model size is invariant to the number of rules.

On the other hand, researchers have tried to tackle the even harder problem of multi-step retrosynthesis [15, 16] using single-step retrosynthesis as a subroutine, so improvements in single-step retrosynthesis can transfer directly into improvements in multi-step retrosynthesis [8].

2 Background

Figure 1: Chemical reactions and the retrosynthesis templates. The reaction centers are highlighted in each participant of the reaction. These centers are then extracted to form the corresponding template. Note that the atoms belonging to the reaction side products (the dashed box in the figure) are missing.

A chemical reaction can be seen as a transformation from a set of reactant molecules ℛ = {R_1, …, R_k} to an outcome molecule O. Without loss of generality, we work with single-outcome reactions in this paper, as this is a standard formulation of the retrosynthetic problem, and multi-outcome reactions can be split into multiple single-outcome ones. We refer to the set of atoms changed (e.g., bonds being added or deleted) during the reaction as the reaction centers. Given a reaction, the corresponding retrosynthesis template T is represented by a subgraph pattern rewriting rule, commonly encoded using SMARTS/SMIRKS patterns:

o_T → r_{T,1} + r_{T,2} + … + r_{T,k_T},

where k_T represents the number of reactant subgraphs in the template, as illustrated in Figure 1. Generally we can treat the subgraph pattern o_T as the extracted reaction center from the outcome O, and r_{T,i} as the corresponding pattern inside the i-th reactant, though in practice these patterns include neighboring structures of the reaction centers as well.

We first introduce the notations to represent these chemical entities:


  • Subgraph patterns: we use lowercase letters (e.g., o, r) to represent subgraph patterns.

  • Molecules: we use capital letters to represent molecule graphs. By default, we use O for an outcome molecule, R for a reactant molecule, and M for any molecule in general.

  • Sets: sets are represented by calligraphic letters. We use ℳ to denote the full set of possible molecules, 𝒯 to denote all extracted retrosynthesis templates, and 𝒮 to denote all the subgraph patterns involved in the known templates. We further use 𝒮_o to denote the subgraphs appearing in reaction outcomes, and 𝒮_r to denote those appearing in reactants, with 𝒮 = 𝒮_o ∪ 𝒮_r.

Task: Given a product or target molecule O ∈ ℳ, the goal of one-step retrosynthetic analysis is to identify a set of reactant molecules ℛ ∈ 2^ℳ that can be used to synthesize the target O. Here 2^ℳ is the power set of all molecules ℳ.

3 Conditional Graph Logic Network

Let Match(s, M) be the predicate that indicates whether subgraph pattern s is a subgraph of molecule M. This can be checked via subgraph matching. Then the use of a retrosynthesis template T: o_T → r_{T,1} + … + r_{T,k_T} for reasoning about a reaction can be decomposed into two-step logic. First,

Match(o_T, O),

where the subgraph pattern o_T from the reaction template is matched against the product O, i.e., o_T is a subgraph of the product. Second,

|ℛ| = k_T  ∧  ∃ permutation π such that Match(r_{T,i}, R_{π(i)}) for all i ∈ {1, …, k_T},

where the set of subgraph patterns from the reaction template is matched against the set of reactants ℛ = {R_1, …, R_{|ℛ|}}. The logic is that the size of the reactant set has to match the number of patterns in the reaction template T, and there must exist a permutation of the elements of the reactant set such that each reactant matches its corresponding subgraph pattern in the template.
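The two matching rules above can be sketched directly with RDKit's substructure machinery. Below is a minimal illustration; the SMARTS patterns and molecules are toy examples chosen for clarity (an ester and its acid/alcohol precursors), not templates from the paper's dataset:

```python
from itertools import permutations
from rdkit import Chem

def match(pattern_smarts, mol):
    # Predicate Match(s, M): is subgraph pattern s contained in molecule M?
    return mol.HasSubstructMatch(Chem.MolFromSmarts(pattern_smarts))

def match_reactant_set(reactant_patterns, reactants):
    # Second rule: the reactant set must have as many members as the template
    # has patterns, and some permutation must match them one-to-one.
    if len(reactants) != len(reactant_patterns):
        return False
    return any(all(match(p, m) for p, m in zip(reactant_patterns, perm))
               for perm in permutations(reactants))
```

For methyl acetate as the product, `match('C(=O)OC', product)` checks the first rule, and `match_reactant_set(['C(=O)[OH]', '[CX4][OH]'], [methanol, acetic_acid])` succeeds because a permutation pairs each reactant with its pattern.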

Since there will still be uncertainty about whether the reaction is chemically possible even when the template matches, we want to capture this uncertainty by allowing each template (or logic reasoning rule) to have a different confidence score. More specifically, we use a template score function w₁(T, O) given the product O, and a reactant score function w₂(ℛ, T, O) given the template and the product. The overall probabilistic models for the reaction template and the set of molecules are designed as

I. Match template:  p(T | O) ∝ exp(w₁(T, O)) · 𝟙[Match(o_T, O)],  (4)
II. Match reactants: p(ℛ | T, O) ∝ exp(w₂(ℛ, T, O)) · 𝟙[Match(ℛ, T)],  (5)

where Match(ℛ, T) is shorthand for the reactant-matching rule above (the size and permutation conditions).

Given the above two-step probabilistic reasoning models, the joint probability of a single-step retrosynthetic proposal using reaction template T and reactant set ℛ can be written as

p(ℛ, T | O) = p(T | O) · p(ℛ | T, O).

In this energy-based model, whether the graphical model (GM) is directed or undirected is a design choice. We will present our directed GM design and the corresponding partition function in Sec. 4 shortly. We name our model the Conditional Graph Logic Network (GLN) (Fig. 2), as it is a conditional graphical model defined with logic rules, where the logic variables are graph structures (i.e., molecules and subgraph patterns). In this model, we assume that satisfying the templates is a necessary condition for retrosynthesis, i.e., p(ℛ, T | O) is nonzero only if both indicator functions in (4) and (5) are nonzero. This restriction imposes sparse structure on the model and makes this abductive type of reasoning feasible.
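The template-matching model (Eq. (4)) is a softmax restricted to the matched support: unmatched templates receive zero probability regardless of their score. A tiny numeric sketch, assuming hypothetical scores and match indicators:

```python
import math

def template_posterior(scores, matched):
    # p(T|O) ∝ exp(w1(T, O)) · 1[Match(o_T, O)]: softmax over the templates
    # whose product pattern matches; zero probability elsewhere.
    exps = [math.exp(s) if m else 0.0 for s, m in zip(scores, matched)]
    z = sum(exps)
    return [e / z for e in exps]
```

This is where the sparse structure mentioned above comes from: the normalizer only runs over templates whose indicator is nonzero.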

Reaction type conditional model: In some situations, when performing the retrosynthetic analysis, the human expert may already have a certain type c of reaction in mind. In this case, our model can easily be adapted to incorporate this as well, by restricting the support of the template model to 𝒯_c, where 𝒯_c ⊆ 𝒯 is the set of retrosynthesis templates that belong to reaction type c.

GLN is related to but significantly different from the Markov Logic Network (MLN), which also uses a graphical model to capture uncertainty in logic rules. MLN treats the predicates of logic rules as latent variables, and its inference task is to compute their posterior. In GLN, by contrast, the task is structured prediction, and the predicates are implemented with subgraph matching. We give more details on this connection in Appendix A.

4 Model Design

Figure 2: Retrosynthesis pipeline with GLN. The three dashed boxes from top to bottom represent the set of templates 𝒯, subgraph patterns 𝒮, and molecules ℳ. Different colors represent retrosynthesis routes with different templates. The dashed lines represent potentially possible routes that are not observed. Reaction centers in products are highlighted.

Although the model we defined so far has some nice properties, the design of its components plays a critical role in capturing the uncertainty in retrosynthesis. We first describe a decomposable design of p(T | O) in Sec. 4.1, motivated by learning and sampling efficiency; then in Sec. 4.2 we describe the parameterization of the scoring functions in detail.

4.1 Decomposable design of p(T | O)

Depending on how specific the reaction rules are, the template set 𝒯 could in the extreme case be as large as the total number of reactions. Directly modeling p(T | O) can thus lead to difficulties in learning and inference. Revisiting the first logic rule (the template-product match), we can see the subgraph pattern o_T plays a critical role in choosing the template. Since we represent a template as T = (o_T, {r_{T,i}}), it is natural to decompose the energy function as w₁(T, O) = w(o_T, O) + w({r_{T,i}}, O). Meanwhile, the template matching rule is also decomposable, so we obtain the resulting template probability model:

p(T | O) = p(o_T | O) · p({r_{T,i}} | o_T, O),  with p(o | O) ∝ exp(w(o, O)) · 𝟙[Match(o, O)],

where the partition function of the first factor is defined as:

Z(O) = Σ_{o ∈ 𝒮_o} exp(w(o, O)) · 𝟙[Match(o, O)].

Here we abuse notation a bit to denote the set of subgraph patterns of template T as {r_{T,i}}.

With such a decomposition, we can further speed up both training and inference for p(T | O), since the number of valid reaction centers per molecule and the number of templates per reaction center are much smaller than the total number of templates. Specifically, we can sample a template hierarchically by first sampling a reaction center o and then choosing the subgraph patterns {r_i} for the reactants. In the end we obtain the template represented as T = (o, {r_i}).

In the literature there have been several attempts to model and learn p(T | O), e.g., as multi-class classification [9] or as a multiscale model with a human-defined template hierarchy [14]. The proposed decomposable design follows the template specification naturally, and thus admits a nice graph-structured parameterization and interpretation, as covered in the next subsection.

Finally, the directed graphical model design of the joint distribution is written as

p(ℛ, T | O) = p(o_T | O) · p({r_{T,i}} | o_T, O) · p(ℛ | T, O),

where the partition function of p(ℛ | T, O) sums over all subsets of molecules.
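The hierarchical sampling this factorization enables can be sketched as follows. This is a minimal sketch with hypothetical score dictionaries standing in for the learned energy functions (a real implementation would restrict each softmax to the matched support):

```python
import math
import random

def softmax_sample(weights, rng):
    # Draw a key with probability proportional to exp(weight).
    keys = list(weights)
    probs = [math.exp(weights[k]) for k in keys]
    return rng.choices(keys, weights=probs, k=1)[0]

def sample_template(center_scores, reactant_set_scores, rng=random):
    # Hierarchical sampling of T = (o, {r_i}): first draw a reaction center
    # o ~ p(o|O), then a reactant-pattern set ~ p({r_i}|o, O).
    o = softmax_sample(center_scores, rng)
    rs = softmax_sample(reactant_set_scores[o], rng)
    return o, rs
```

The point of the two-stage draw is that each stage only enumerates the few candidates valid for the given product, rather than the full template set.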

4.2 Graph Neuralization for the Energy Functions

Since the arguments of the energy functions are molecules, which can be represented by graphs, one natural choice is to design the parameterization based on the recent advances in graph neural networks (GNN) [17, 18, 19, 20, 21, 22]. Here we first present a brief review of the general form of GNNs, and then explain how we can utilize them to design the energy functions.

The graph embedding is a function g(·) that maps a graph into a d-dimensional vector. We denote G = (V, E) as the graph representation of some molecule or subgraph pattern, where V is the set of atoms (nodes) and E is the set of bonds (edges). We represent each undirected bond as two directional edges. Generally, the embedding of the graph is computed through node embeddings that are updated in an iterative fashion. Specifically, let h_v^{(0)} = x_v initially, where x_v is a vector of node features (e.g., the atomic number or aromaticity of the corresponding atom). Then the following update operator is applied recursively:

h_v^{(l+1)} = F(x_v, {h_u^{(l)}}_{u ∈ N(v)}).

This procedure repeats for L steps. While there are many design choices for the so-called message passing operator F, we use structure2vec [21] due to its simplicity and its efficient C++ binding with RDKit. Specifically, we have the parameterization

h_v^{(l+1)} = σ(W₁ x_v + W₂ Σ_{u ∈ N(v)} h_u^{(l)}),

where σ is some nonlinear activation function, e.g., relu or tanh, and W₁, W₂ are the learnable parameters. Let the node embedding h_v be the last output of this iteration; the final graph embedding is then obtained by averaging over node embeddings: g(G) = (1/|V|) Σ_{v ∈ V} h_v. Note that attention [23] or other order-invariant aggregations can also be used here.
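The update above can be sketched in a few lines. This is a toy scalar version (scalar features and weights instead of vectors and matrices, relu as σ) meant only to show the recursion and the final averaging, not the paper's implementation:

```python
def structure2vec_embed(node_feats, adj, w1, w2, layers=3):
    # Sketch of h_v <- relu(w1*x_v + w2 * sum_{u in N(v)} h_u), repeated for
    # `layers` steps; the graph embedding is the mean of final node values.
    h = [0.0] * len(node_feats)
    for _ in range(layers):
        h = [max(0.0, w1 * node_feats[v] + w2 * sum(h[u] for u in adj[v]))
             for v in range(len(node_feats))]
    return sum(h) / len(h)
```

With w2 = 0 the recursion ignores neighbors and the embedding reduces to the mean node feature, which makes the role of the message-passing term easy to see.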

With the knowledge of GNN, we introduce the concrete parametrization for each component:


  • Parameterizing w(o, O): Given a molecule O, w(o, O) can be viewed as a scoring function over possible reaction centers inside O. Since the subgraph pattern o is also a graph, we parameterize the score with an inner product, i.e., w(o, O) = ⟨g(o), g(O)⟩. This form can be treated as computing the compatibility between o and O. Note that due to our design choice, it can equivalently be written as w(o, O) = (1/|V_O|) Σ_{v ∈ V_O} ⟨g(o), h_v⟩, which lets us see the contribution of each atom in O to the compatibility.

  • Parameterizing w({r_{T,i}}, O): The size of the set of subgraph patterns varies across templates T. Inspired by DeepSets [24], we use average pooling over the embeddings of the subgraph patterns to represent this set, i.e., w({r_{T,i}}, O) = ⟨(1/k_T) Σ_{i=1}^{k_T} g(r_{T,i}), g(O)⟩.

  • Parameterizing w₂(ℛ, T, O): This energy function also needs to take the set ℛ as input. Following the same average-pooling design, we have w₂(ℛ, T, O) = ⟨(1/|ℛ|) Σ_{R ∈ ℛ} g(R), g(O)⟩.
Note that our GLN framework is not limited to the specific parameterizations above and is compatible with other parameterizations. For example, one can use the condensed graph of reaction [25] to represent the reaction as a single graph. Other chemistry-specialized GNNs [3, 26] can also easily be applied here. For an ablation study on these design choices, please refer to Appendix C.1.
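The per-atom decomposition of w(o, O) is what powers the reaction-center visualizations later in the paper. A toy sketch with plain Python lists in place of learned embeddings:

```python
def center_score(pattern_emb, atom_embs):
    # w(o, O) = <g(o), g(O)> with g(O) the mean of atom embeddings is the same
    # as the mean of per-atom inner products <g(o), h_v>; returning both the
    # total and the per-atom terms exposes each atom's contribution.
    per_atom = [sum(a * b for a, b in zip(pattern_emb, h)) for h in atom_embs]
    return sum(per_atom) / len(per_atom), per_atom
```

The per-atom terms are exactly the quantities plotted in Fig. 5: positive terms mark atoms compatible with the candidate reaction center, negative terms mark incompatible ones.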

5 MLE with Efficient Inference

Given a dataset 𝒟 = {(ℛ_j, T_j, O_j)}_{j=1}^N with N reactions, we denote the parameters of w(o, O), w({r_{T,i}}, O) and w₂(ℛ, T, O) collectively as θ. Maximum likelihood estimation (MLE) is a natural choice for parameter estimation. Since p(ℛ, T | O) = p(T | O) · p(ℛ | T, O), and p(T | O) further decomposes as in Sec. 4.1, we have the MLE optimization

max_θ  L(θ) = (1/N) Σ_{j=1}^N [ log p(T_j | O_j) + log p(ℛ_j | T_j, O_j) ].

The gradient of L w.r.t. θ can be derived as

∇_θ L = (1/N) Σ_j ( ∇_θ w₁(T_j, O_j) − 𝔼_{T ~ p(T|O_j)}[∇_θ w₁(T, O_j)] + ∇_θ w₂(ℛ_j, T_j, O_j) − 𝔼_{ℛ ~ p(ℛ|T_j, O_j)}[∇_θ w₂(ℛ, T_j, O_j)] ),

where the two expectations are taken w.r.t. the current model p(T | O) and p(ℛ | T, O), respectively. (We adopt the convention 0 log 0 = 0 [27], which is justified by continuity since x log x → 0 as x → 0.) With this gradient estimator, we can apply the stochastic gradient descent (SGD) algorithm to optimize the objective.
Efficient inference for gradient approximation: Since 2^ℳ is a combinatorial space, sampling from p(ℛ | T, O) to approximate the gradient would generally require an expensive MCMC algorithm. However, this can be largely accelerated by exploiting the logic structure of the proposed model. Recall that, by design, matching between the template and the reactants is a necessary condition for p(ℛ | T, O) to be nonzero. On the other hand, given O, only a few templates T with reactants ℛ have nonzero p(T | O) and p(ℛ | T, O). We can therefore sample T and ℛ by importance sampling over this restricted support of templates instead of MCMC over 2^ℳ. Rigorously, given O, we denote the matched templates as 𝒯(O) = {T ∈ 𝒯 : Match(o_T, O)}, and the matched reactant sets based on T as ℛ(T, O), i.e., the reactant sets satisfying the reactant-matching rule.

1:  Input: product O, reaction template T and reactant set ℛ.
2:  Construct 𝒯(O) according to O.
3:  Sample T′ ∈ 𝒯(O) in a hierarchical way, as in Sec. 4.1.
4:  Construct ℛ(T′, O) according to T′.
5:  Sample ℛ′ ∈ ℛ(T′, O).
6:  Compute the stochastic gradient approximation with the samples (T′, ℛ′).
Algorithm 1: Importance sampling for the gradient estimate

Then, importance sampling leads to an unbiased gradient approximation, as illustrated in Algorithm 1. To make the algorithm more efficient in practice, we have adopted the following accelerations:


  • 1) Decomposable modeling of p(T | O), as described in Sec. 4.1;

  • 2) Caching the computed 𝒯(O) and ℛ(T, O) in advance.

In our dataset, |𝒯(O)| is about 80 and |ℛ(T, O)| is roughly 10 on average. Therefore, we reduce the actual computational cost to a manageable constant. We further reduce the cost of sampling by generating T and ℛ uniformly from the support. Although these samples only cover the support of the model, we avoid the forward pass of the neural networks during sampling, achieving better computational complexity. In our experiments, this approximation already achieves state-of-the-art results. We expect that recent advances in energy-based models could further boost performance, which we leave as future work.
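The uniform-proposal trick can be illustrated with self-normalized importance sampling: propose uniformly from the restricted support (no network forward pass needed to propose), then reweight each sample by exp(score). This is a generic sketch with hypothetical scores and values, not the paper's training loop:

```python
import math
import random

def snis_expectation(scores, values, n_samples, rng):
    # Approximate E_{T ~ p}[f(T)] for p(T) ∝ exp(score(T)) on a small support:
    # draw indices uniformly, weight by exp(score), and self-normalize.
    idx = [rng.randrange(len(scores)) for _ in range(n_samples)]
    weights = [math.exp(scores[i]) for i in idx]
    return sum(w * values[i] for w, i in zip(weights, idx)) / sum(weights)
```

Because |𝒯(O)| and |ℛ(T, O)| are small on average, even a handful of uniform proposals covers the support well.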

Remark on ℛ(T, O): Note that to get all possible sets of reactants that match the reaction template T and product O, we can efficiently use graph edit tools, without limiting the reactants to those known in the dataset. The procedure works as follows: given a template T: o_T → r_{T,1} + … + r_{T,k_T},

  1. Enumerate all matches between the subgraph pattern o_T and the target product O.

  2. For each match, instantiate a copy of the reactant atoms according to {r_{T,i}}.

  3. Copy over all of the connected atoms and atom properties from O.

This process is a routine in most cheminformatics packages. In our paper, we use runReactants from RDKit, with improved stereochemistry handling, to realize this.
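The three steps above are what `runReactants` performs. A minimal RDKit sketch, using an illustrative ester-to-acid-plus-alcohol disconnection written in product >> reactants direction (a hand-written template for demonstration, not one extracted from USPTO-50k):

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Retro template: ester -> carboxylic acid + alcohol (product >> reactants).
retro = AllChem.ReactionFromSmarts(
    '[C:1](=[O:2])[O:3][C:4]>>[C:1](=[O:2])[OH].[OH:3][C:4]')

product = Chem.MolFromSmiles('CC(=O)OC')  # methyl acetate
candidate_sets = retro.RunReactants((product,))

reactant_smiles = []
for rset in candidate_sets:
    smis = []
    for m in rset:
        Chem.SanitizeMol(m)          # newly built mols need sanitization
        smis.append(Chem.MolToSmiles(m))
    reactant_smiles.append(sorted(smis))
```

Each entry of `candidate_sets` corresponds to one match of o_T against the product, i.e., one element of ℛ(T, O).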

Further acceleration via beam search: Given a product O, prediction involves finding the pair (T, ℛ) that maximizes p(ℛ, T | O). One possibility is to first enumerate T and then ℛ. This is acceptable thanks to the sparse-support property induced by the logic rules.

A more efficient way is to use beam search with beam size k. First we find the reaction centers o with the top-k scores. Next, for each such o, we score the corresponding reactant-pattern sets; at this stage the top-k pairs (i.e., the templates) that maximize p(T | O) are kept. Finally, using these templates, we choose the ℛ that maximizes the total score p(ℛ, T | O). Fig. 2 provides a visual explanation.
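The three-stage search can be sketched as follows, with hypothetical score tables (center, template, and reactant-set scores as nested dicts) standing in for the learned energies:

```python
from heapq import nlargest

def beam_predict(center_scores, template_scores, reactant_scores, k):
    # Stage 1: keep the top-k reaction centers o by their score.
    centers = nlargest(k, center_scores, key=center_scores.get)
    # Stage 2: expand to templates; keep the top-k (o, T) pairs by the
    # running score.
    pairs = [(o, t) for o in centers for t in template_scores.get(o, {})]
    pairs = nlargest(
        k, pairs,
        key=lambda p: center_scores[p[0]] + template_scores[p[0]][p[1]])
    # Stage 3: among reactant sets proposed for the surviving templates,
    # return the one maximizing the total score.
    best = None
    for o, t in pairs:
        for r, s in reactant_scores.get((o, t), {}).items():
            total = center_scores[o] + template_scores[o][t] + s
            if best is None or total > best[0]:
                best = (total, r)
    return best[1] if best else None
```

Note that a larger beam can recover a candidate whose reaction center does not have the single best score, which a greedy (k = 1) search would prune away.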

6 Experiment

Dataset: We mainly evaluate our method on a benchmark dataset named USPTO-50k, which contains 50k reactions of 10 different types from the US patent literature. We use exactly the same training/validation/test splits as Coley et al. [8], which contain 80%/10%/10% of the total 50k reactions. Tables 1 and 2 contain detailed information about the benchmark. Additionally, we also build a dataset from the entire USPTO 1976-2016 to verify the scalability of our method.

Baselines: The baseline algorithms include rule-based methods, neural network-based methods, and combinations of both. expertSys is an expert system based on retrosynthetic reaction rules, where a rule is selected according to the popularity of the corresponding reaction type. seq2seq [10] and transformer [11] are neural sequence-to-sequence learning models [28] implemented with an LSTM [29] or a Transformer [30], respectively. These models encode the canonical SMILES representation of the target compound as input and directly output canonical SMILES of the reactants. We also include data-driven template-based models. retrosim [8] uses direct calculation of molecular similarities to rank the rules and resulting reactants. neuralsym [9] models p(T | O) as multi-class classification using an MLP. All results except those of neuralsym are taken from their original reports, since we use the same experimental setting. Since neuralsym is not open source, we reimplemented its best reported model (ELU512) with the same method for parameter tuning.

Evaluation metric: We use the top-k exact match accuracy, which is commonly used in the literature. This metric checks whether the predicted set of reactants is exactly the same as the ground-truth reactants. The comparison is performed between canonical SMILES strings generated by RDKit.
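The metric reduces to a canonical-SMILES set comparison; a minimal sketch with RDKit:

```python
from rdkit import Chem

def canonical(smi):
    # Round-trip through RDKit to obtain the canonical SMILES string.
    return Chem.MolToSmiles(Chem.MolFromSmiles(smi))

def exact_match(predicted, ground_truth):
    # Exact-match criterion: the predicted reactant set equals the ground
    # truth after canonicalization, independent of SMILES writing order.
    return (sorted(canonical(s) for s in predicted)
            == sorted(canonical(s) for s in ground_truth))
```

Canonicalization matters because chemically identical molecules admit many valid SMILES spellings (e.g., 'OC' and 'CO' both denote methanol).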

Setup of GLN: We use rdchiral [31] to extract the retrosynthesis templates from the training set. After removing duplicates, we obtained 11,647 unique template rules in total for USPTO-50k. These rules cover 93.3% of the test set; that is, for each test instance we apply the rules and check whether any of them gives an exact match. This coverage is thus the theoretical upper bound of a rule-based approach at this particular degree of template specificity, which is high enough for now. For more statistics of these rules, please refer to Table 2.

We train our model for up to 150k updates with a batch size of 64. Training takes about 12 hours on a single GTX 1080Ti GPU. We tune the embedding size, the number of GNN layers, and the GNN aggregation function using the validation set. Our code is publicly released; more details are included in Appendix B.

USPTO-50k: # train: 40,008 | # val: 5,001 | # test: 5,007 | # rules: 11,647 | # reaction types: 10
Table 1: Dataset information.
Rule coverage 93.3%
# unique centers 9,078
Avg. # centers per mol 29.31
Avg. # rules per mol 83.85
Avg. # reactants 1.71
Table 2: Reaction and template set information.
Top-k accuracy (%)

Reaction class unknown:
method           | 1    | 3    | 5    | 10   | 20   | 50
transformer [11] | 37.9 | 57.3 | 62.7 | /    | /    | /
retrosim [8]     | 37.3 | 54.7 | 63.3 | 74.1 | 82.0 | 85.3
neuralsym [9]    | 44.4 | 65.3 | 72.4 | 78.9 | 82.2 | 83.1
GLN              | 52.5 | 69.0 | 75.6 | 83.7 | 89.0 | 92.4

Reaction class given as prior:
method           | 1    | 3    | 5    | 10   | 20   | 50
expertSys [10]   | 35.4 | 52.3 | 59.1 | 65.1 | 68.6 | 69.5
seq2seq [10]     | 37.4 | 52.4 | 57.0 | 61.7 | 65.9 | 70.7
retrosim [8]     | 52.9 | 73.8 | 81.2 | 88.1 | 91.8 | 92.9
neuralsym [9]    | 55.3 | 76.0 | 81.4 | 85.1 | 86.5 | 86.9
GLN              | 64.2 | 79.1 | 85.2 | 90.0 | 92.3 | 93.2

Table 3: Top-k exact match accuracy.

6.1 Main results

We present the top-k exact match accuracy in Table 3, where k ranges from 1 to 50. We evaluate both the reaction-class-unknown and class-conditional settings. Using the reaction class as prior knowledge represents situations where the chemist already has an idea of how they would like to synthesize the product.

In all settings, our proposed GLN outperforms the baseline algorithms. In particular, for top-1 accuracy our model performs significantly better than the second-best method: 8.1% higher accuracy with the reaction class unknown, and 8.9% higher with the reaction class given. This demonstrates the advantage of our method in this difficult setting and its potential applicability in practice.

Moreover, our performance in the reaction-class-unknown setting even exceeds that of expertSys and seq2seq in the reaction-conditional setting. Since the transformer paper did not report top-k performance for k > 5, we leave those entries blank. Meanwhile, Karpov et al. [11] also report results when training on the training+validation set and tuning on the test set. Even with this extra privilege, the top-1 accuracy of the transformer is 42.7%, which is still worse than ours. This shows the benefit of our logic-powered deep neural network model over purely neural models, especially when the amount of data is limited.

Since the theoretical upper bound of this rule-based implementation is 93.3%, the top-50 accuracy of our method in each setting is quite close to this limit. This shows that the probabilistic model we built matches the actual retrosynthesis target well.

6.2 Interpret the predictions

Figure 3: Example successful predictions.
Figure 4: Example failed predictions.
Figure 5: Reaction center prediction visualization. Red atoms indicate positive match scores, while blue ones have negative scores. The darkness of the color shows the magnitude of the score. Green parts highlight the substructure match between molecules and center structures.

Visualizing the predicted synthesis: In Figs. 3 and 4, we visualize the ground-truth reaction and the top-3 predicted reactions (see Appendix C.6 for high-resolution figures). For each reaction, we also highlight the corresponding reaction centers (i.e., the sets of atoms that get changed). This is done by matching the subgraphs from the predicted retrosynthesis template against the target compound and the generated reactants, respectively. Fig. 3 shows that our correct prediction also identifies almost the same reaction centers as the ground truth. In this particular case, the explanation of our prediction aligns with existing reaction knowledge.

Fig 4 shows a failure mode where none of the top-3 predictions matches. In this case we calculated the similarity between the predicted reactants and the ground-truth ones using the Dice similarity from RDKit. We find they are still similar at the molecular-fingerprint level, which suggests that these predictions could be potentially valid routes that are simply unreported in the literature.

Visualizing the reaction center prediction: Here we visualize the probabilistic model of the reaction center. This is done by calculating the inner product of each atom embedding in the target molecule with the subgraph pattern embedding. Fig 5 shows the visualization of scores for the atoms that are part of the reaction center. The top-1 prediction assigns positive scores to these atoms (red), while the bottom-1 prediction (i.e., the prediction with least probability) assigns large negative scores (blue). Note that although the reaction center in the molecule and the corresponding subgraph pattern have the same structure, the matching scores differ a lot. This suggests that the model has learned to predict the activity of substructures inside molecule graphs.

6.3 Study of the performance

In addition to the overall numbers in Table 3, we provide a detailed study of the performance. This includes per-category performance, the accuracy of each module in hierarchical sampling, and the effect of the beam size. Due to the space limit, please refer to Appendix C.

6.4 Large scale experiments on USPTO-full

method | retrosim | neuralsym | GLN
top-1  | 32.8     | 35.8      | 39.3
top-10 | 56.1     | 60.8      | 63.7
Table 4: Top-k accuracy (%) on USPTO-full.

To see how this method scales with dataset size, we create a large dataset from the entire set of reactions in USPTO 1976-2016. There are 1,808,937 raw reactions in total. For reactions with multiple products, we split them into multiple reactions with one product each. After removing duplicates and reactions with incorrect atom mappings, we obtain roughly 1M unique reactions, which are further divided into train/valid/test sets of size 800k/100k/100k.

We train on a single GPU for 3 days and report results with the model having the best validation accuracy. The results are presented in Table 4, where we compare against the two best baselines from the previous sections. Despite the noisiness of the full USPTO set relative to the clean USPTO-50k, our method still outperforms both baselines in top-k accuracy.

7 Discussion

Evaluation: Retrosynthesis usually does not have a single right answer. Evaluation in this work is to reproduce what is reported for single-step retrosynthesis. This is a good, but imperfect benchmark, since there are potentially many reasonable ways to synthesize a single product.

Limitations: We share the limitations of all template-based methods. In our method, the design of the templates, more specifically their specificity, remains an art and is hard to decide beforehand. Also, scalability is still an issue, since we rely on subgraph isomorphism during preprocessing.

Future work: The subgraph isomorphism step can potentially be replaced with a predictive model, while during inference fast inner-product search [32] can be used to reduce the computation cost. Also, actively building templates, or even inducing new ones, could enhance the capacity and robustness of the model.


Acknowledgments

We would like to thank the anonymous reviewers for providing constructive feedback. This project was supported in part by NSF grants CDS&E-1900017 D3SC, CCF-1836936 FMitF, IIS-1841351, and CAREER IIS-1350983 to L.S.


  • Corey [1991] Elias James Corey. The logic of chemical synthesis: multistep synthesis of complex carbogenic molecules (Nobel lecture). Angewandte Chemie International Edition in English, 30(5):455–465, 1991.
  • Coley et al. [2018] Connor W. Coley, William H. Green, and Klavs F. Jensen. Machine learning in computer-aided synthesis planning. Accounts of Chemical Research, 51(5):1281–1289, 2018. doi: 10.1021/acs.accounts.8b00087.
  • Jin et al. [2017] Wengong Jin, Connor Coley, Regina Barzilay, and Tommi Jaakkola. Predicting organic reaction outcomes with weisfeiler-lehman network. In Advances in Neural Information Processing Systems, pages 2607–2616, 2017.
  • Coley et al. [2019] Connor W. Coley, Wengong Jin, Luke Rogers, Timothy F. Jamison, Tommi S. Jaakkola, William H. Green, Regina Barzilay, and Klavs F. Jensen. A graph-convolutional neural network model for the prediction of chemical reactivity. Chemical Science, 10(2):370–377, 2019. doi: 10.1039/C8SC04228D.
  • Bradshaw et al. [2018] John Bradshaw, Matt J Kusner, Brooks Paige, Marwin HS Segler, and José Miguel Hernández-Lobato. A generative model for electron paths. 2018.
  • Corey and Wipke [1969] EJ Corey and W Todd Wipke. Computer-assisted design of complex organic syntheses. Science, 166(3902):178–192, 1969.
  • [7] Sara Szymkuc, Ewa P. Gajewska, Tomasz Klucznik, Karol Molga, Piotr Dittwald, Michał Startek, Michał Bajczyk, and Bartosz A. Grzybowski. Computer-assisted synthetic planning: The end of the beginning. Angewandte Chemie International Edition, 55(20):5904–5937, 2016. doi: 10.1002/anie.201506101.
  • Coley et al. [2017] Connor W Coley, Luke Rogers, William H Green, and Klavs F Jensen. Computer-assisted retrosynthesis based on molecular similarity. ACS Central Science, 3(12):1237–1245, 2017.
  • [9] Marwin H. S. Segler and Mark P. Waller. Neural-symbolic machine learning for retrosynthesis and reaction prediction. Chemistry – A European Journal, 23(25):5966–5971, 2017. doi: 10.1002/chem.201605499.
  • Liu et al. [2017] Bowen Liu, Bharath Ramsundar, Prasad Kawthekar, Jade Shi, Joseph Gomes, Quang Luu Nguyen, Stephen Ho, Jack Sloane, Paul Wender, and Vijay Pande. Retrosynthetic reaction prediction using neural sequence-to-sequence models. ACS Central Science, 3(10):1103–1113, 2017.
  • Karpov et al. [2019] Pavel Karpov, Guillaume Godin, and I Tetko. A transformer model for retrosynthesis. 2019.
  • Schwaller et al. [2018] Philippe Schwaller, Theophile Gaudin, David Lanyi, Costas Bekas, and Teodoro Laino. “found in translation”: predicting outcomes of complex organic chemistry reactions using neural sequence-to-sequence models. Chemical science, 9(28):6091–6098, 2018.
  • [13] Philippe Schwaller, Teodoro Laino, Theophile Gaudin, Peter Bolgar, Costas Bekas, and Alpha A. Lee. Molecular transformer for chemical reaction prediction and uncertainty estimation. doi: 10.26434/chemrxiv.7297379.v1.
  • [14] Javier L. Baylon, Nicholas A. Cilfone, Jeffrey R. Gulcher, and Thomas W. Chittenden. Enhancing retrosynthetic reaction prediction with deep learning using multiscale reaction classification.
  • Segler et al. [2018] Marwin HS Segler, Mike Preuss, and Mark P Waller. Planning chemical syntheses with deep neural networks and symbolic ai. Nature, 555(7698):604, 2018.
  • Schreck et al. [2019] John S Schreck, Connor W Coley, and Kyle JM Bishop. Learning retrosynthetic planning through self-play. arXiv preprint arXiv:1901.06569, 2019.
  • Scarselli et al. [2008] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.
  • Duvenaud et al. [2015] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in neural information processing systems, pages 2224–2232, 2015.
  • Lei et al. [2017] Tao Lei, Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Deriving neural architectures from sequence and graph kernels. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2024–2033. JMLR. org, 2017.
  • Li et al. [2015] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.
  • Dai et al. [2016] Hanjun Dai, Bo Dai, and Le Song. Discriminative embeddings of latent variable models for structured data. In International conference on machine learning, pages 2702–2711, 2016.
  • Hamilton et al. [2017] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034, 2017.
  • Veličković et al. [2017] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
  • Zaheer et al. [2017] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in neural information processing systems, pages 3391–3401, 2017.
  • [25] Frank Hoonakker, Nicolas Lachiche, Alexandre Varnek, and Alain Wagner. Condensed graph of reaction: considering a chemical reaction as one single pseudo molecule.
  • Gilmer et al. [2017] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1263–1272. JMLR. org, 2017.
  • Cover and Thomas [2012] Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.
  • Sutskever et al. [2014] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112, 2014.
  • Hochreiter and Schmidhuber [1997] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
  • Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008, 2017.
  • Coley et al. [c] Connor W Coley, William H Green, and Klavs F Jensen. Rdchiral: An rdkit wrapper for handling stereochemistry in retrosynthetic template extraction and application. Journal of chemical information and modeling, c.
  • Guo et al. [2016] Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski, and David Simcha. Quantization based fast inner product search. In Artificial Intelligence and Statistics, pages 482–490, 2016.
  • Richardson and Domingos [2006] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine learning, 62(1-2):107–136, 2006.
  • Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Xu et al. [2018] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.

Appendix A Connection to Markov Logic Network

Similar to the proposed model, the Markov logic network (MLN) [33] is an alternative way to introduce uncertainty into logic rules. However, there is a significant difference in how the retrosynthetic templates are treated. The proposed model considers the templates as separate variables that are inferred for the target molecules together with the reactions. This explicit probabilistic modeling of templates makes it more straightforward to interpret the prediction. The MLN instead treats the logic rules (the templates) as features in an energy-based model, upon which template inference is not well-defined. Moreover, our model leads to efficient sampling and inference, avoiding the MCMC over a combinatorial space required by the MLN, which accelerates learning.

In summary:


  • GLN is a directed graphical model while MLN is undirected.

  • MLN treats the predicates of logic rules as latent variables, and its inference task is to compute their posterior. In GLN, the task is structured prediction, and the predicates are implemented with subgraph matching.

  • Due to the above two points, GLN can be implemented with efficient hierarchical sampling, whereas MLN generally requires expensive MCMC in a combinatorial space for both training and inference.
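The hierarchical sampling contrasted with MCMC above can be sketched in a few lines of plain Python. This is an illustrative sketch, not the trained model: `score_template` and `score_reactants` are hypothetical stand-ins for GLN's two scoring modules, and the key point is that each softmax is normalized only over a small candidate set (templates matching the product, then reactant sets produced by the chosen template), which is what avoids MCMC over the full combinatorial space.

```python
import math
import random

def softmax(scores):
    """Normalize raw scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def hierarchical_sample(product, templates, reactant_sets,
                        score_template, score_reactants, rng=random):
    """Sample (template, reactants) for a product in two stages:
    first over matched templates, then over the reactant candidate
    sets derived from the chosen template."""
    # Stage 1: sample a template among those whose product side matches.
    t_probs = softmax([score_template(product, t) for t in templates])
    t_idx = rng.choices(range(len(templates)), weights=t_probs)[0]
    template = templates[t_idx]
    # Stage 2: sample reactants among candidates for that template.
    candidates = reactant_sets[template]
    r_probs = softmax([score_reactants(product, template, r)
                       for r in candidates])
    r_idx = rng.choices(range(len(candidates)), weights=r_probs)[0]
    return template, candidates[r_idx]
```

Because both stages touch only the matched candidates, the cost per sample is linear in the candidate set sizes rather than exponential in the rule space.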

Appendix B Details of setup

Dataset information
Figure 6: Distribution of reaction types.

Figure 6 shows the distribution of reactions over the 10 types. We can see that this dataset is highly unbalanced.

Implementation details

The preprocessing of the template and reactant candidate sets is relatively expensive, since the subgraph isomorphism check is NP-hard in theory. However, since the processing is embarrassingly parallelizable, it took about 1 hour on a cluster with 48 CPU cores for the 50k reactions.
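The embarrassingly parallel structure of this preprocessing can be sketched as follows. This is a toy sketch: `template_matches` is a placeholder for the real subgraph-isomorphism check (which in the actual pipeline would call RDKit substructure matching on the template's product-side pattern), and simple string containment stands in for it here.

```python
from concurrent.futures import ThreadPoolExecutor

def template_matches(product, template):
    """Placeholder for the subgraph-isomorphism check; toy string
    containment stands in for real substructure matching."""
    return template in product

def build_match_table(products, templates, workers=4):
    """For each product, precompute the list of applicable templates.
    Each product is independent, so the loop parallelizes trivially
    across workers."""
    def match_one(product):
        return [t for t in templates if template_matches(product, t)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(products, pool.map(match_one, products)))
```

With a real matcher whose heavy lifting happens in C++ (as in RDKit), worker threads or processes both scale close to linearly in the number of cores, which is consistent with the one-hour figure on 48 cores.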

We implement the entire model using PyTorch. The optimizer is Adam [34] with a fixed learning rate and gradient clipping.

In all the experiments, the graph embedding module is implemented using s2v [21]. The best embedding size we found is 256 for representing each molecule or subgraph structure, followed by a nonlinear activation function.

For the aggregation used in the graph embedding, in the DeepSet module used to represent the templates associated with a given molecule, and in the DeepSet module for the reactant molecule set, we tried both sum- and max-pooling and found the performance to be about the same. We use max-pooling since it yields a score for each node embedding within the graph; the visualization in Fig 5 relies on this trick.
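The reason max-pooling provides per-node scores can be made concrete with a small sketch. This is an illustrative implementation, not the paper's code: dimension-wise max aggregation records which node supplied each maximum, and counting those "wins" per node gives a crude importance score of the kind the visualization relies on.

```python
def max_pool_with_attribution(node_embeddings):
    """Aggregate node embeddings into one graph embedding by
    dimension-wise max, recording which node supplied each maximum.
    The per-node win counts serve as node importance scores."""
    dim = len(node_embeddings[0])
    pooled, winners = [], []
    for d in range(dim):
        values = [emb[d] for emb in node_embeddings]
        best = max(range(len(values)), key=values.__getitem__)
        pooled.append(values[best])
        winners.append(best)
    node_scores = [winners.count(i) for i in range(len(node_embeddings))]
    return pooled, node_scores
```

Sum-pooling, by contrast, mixes all nodes into every output dimension, so no comparable per-node attribution falls out of the aggregation itself.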

Appendix C More experiment results

C.1 Ablation study of design choices

Our GLN provides a general graphical model for the retrosynthesis problem and is compatible with many reasonable choices of graph representation. In addition to the 3-layer structure2vec (s2v-3) used in the paper, we provide further ablation studies using different widely used GNNs and different numbers of message-passing layers.

         s2v-3  GGNN  MPNN  GIN   ECFP  s2v-0  s2v-1  s2v-2
top-1    52.6   51.6  50.4  51.8  51.9  40.7   47.0   51.3
top-10   83.1   81.8  83.2  83.3  81.5  78.1   80.4   82.2

Table 5: Ablation study on USPTO-50k with different representations.

The rationale behind these choices is: 1) the GNN should take both atom and bond features into consideration; 2) according to Xu et al. [35], the family of message-passing GNNs has at most the representational power of the WL graph isomorphism test. We adopt s2v in our paper since it satisfies these requirements and comes with an efficient C++ binding of RDKit.

We use 2 layers of GNN by default, or append -k to the name in Table 5 to denote a k-layer design. We can see that most GNN variants achieve similar performance given a sufficient number of message-passing propagations. Based on this, for the experiment on the full USPTO dataset we simply use ECFP-2 as provided by RDKit, since it is a WL-isomorphism-check-based method with enough expressiveness [35] but is faster to run.
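The role of the layer count k can be seen in a parameter-free sketch of neighbor aggregation; this is an illustrative simplification (mean aggregation with a fixed self/neighbor mix), not any of the trained GNNs from Table 5. With k = 0 each node sees only its own features, mirroring the weak s2v-0 column; each additional round widens a node's receptive field by one hop.

```python
def message_passing(adj, feats, layers):
    """Run `layers` rounds of mean-neighbor aggregation over a graph
    given as an adjacency list. Each round averages a node's current
    features with the mean of its neighbors' features."""
    h = [list(f) for f in feats]
    dim = len(feats[0])
    for _ in range(layers):
        new_h = []
        for i, nbrs in enumerate(adj):
            if nbrs:
                agg = [sum(h[j][d] for j in nbrs) / len(nbrs)
                       for d in range(dim)]
            else:
                agg = [0.0] * dim  # isolated node: nothing to aggregate
            # combine self features with aggregated neighbor features
            new_h.append([(h[i][d] + agg[d]) / 2 for d in range(dim)])
        h = new_h
    return h
```

On a path graph 0–1–2 with a feature only on node 2, one round already propagates part of that signal to node 1, and a second round reaches node 0, which is why too few layers cap the representation.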

Besides the choice of GNN, we also compare the choices of compatibility function mentioned in Section 4.2. All of these functions score the compatibility of two embedding vectors. In the paper, we simply used the inner product; here we also study an MLP over the concatenated vectors and a bilinear form. For top-1 accuracy, the inner product, MLP, and bilinear variants achieve 52.6, 52.7, and 53.5, respectively, so our GLN could be further improved with better design choices.
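The compatibility functions compared above can be written out explicitly. These are illustrative stand-ins for the trained modules, not the paper's code; the MLP variant (a small feed-forward network applied to the concatenation of the two vectors) is omitted for brevity.

```python
def inner_prod(u, v):
    """<u, v>: the compatibility function used in the paper."""
    return sum(a * b for a, b in zip(u, v))

def bilinear(u, v, W):
    """u^T W v: a learnable matrix W mixes the two embeddings,
    generalizing the inner product (which is W = identity)."""
    return sum(u[i] * sum(W[i][j] * v[j] for j in range(len(v)))
               for i in range(len(u)))
```

The bilinear form subsumes the inner product, which is consistent with its slightly higher top-1 accuracy (53.5 vs 52.6) at the cost of extra parameters.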

C.2 Per-category performance

Class       1     2     3    4    5    6     7    8    9    10
Fraction %  30.3  23.8  11.3 1.8  1.3  16.5  9.2  1.6  3.7  0.5

Figure 7: Reaction distribution over 10 types.
Figure 8: Top-10 accuracy per each reaction type.

We study the performance for each reaction category. Following the setting of the baseline methods, we report the top-10 accuracy. As shown in Figure 7, the distribution of reaction types is highly unbalanced. From Fig 8 we can see that our performance is better than retrosim in most classes, including the most common ones like classes 1 and 2 as well as rare ones like classes 4 and 8. This shows that our performance is not obtained by overfitting to one particular category of reactions. Such a property is also important because retrosynthesis can involve rare reactions that have not been well studied in the literature.

Per-category performance for the reaction type conditional task, as well as the effect of beam size, is covered in the following subsections.

C.3 Reaction type conditional performance

Figure 9: Top-10 accuracy per reaction class, when the reaction class is given during training.

In Figure 9 we show the per-class performance when the reaction type is given as a prior. As shown in Figure 6, the distribution of reaction types is not uniform; some types account for less than 5% of the total data. In this case, it is important to have a flexible model that can take the reaction type into account; training one model per reaction class is not a good idea here due to the imbalanced distribution.

From Figure 9 we can see that our performance is comparable to retrosim in all classes, while being much better than expertSys and seq2seq. Even in rare classes like 9 or 10, we still obtain the best or second-best performance. This shows the effectiveness and flexibility of our GLN.

C.4 Effect of beam size

Figure 10: Top- accuracy with different beam sizes.
Figure 11: Inference speed with different beam sizes.
Figure 12: Top- accuracy of reaction center and template.

Beam size In Section 6.1 we reported the top-k accuracy with a beam size of 50, since k is at most 50. Here we study the performance of GLN with different beam sizes. Figure 10 shows the top-k accuracy for different k and different beam sizes. Overall the performance gets consistently better with larger beam sizes, for all top-k predictions. We can also see that the top-1 accuracy improves by about 10% from beam size 1 (i.e., greedy inference) to beam size 3. Note that the curve for beam size b flattens after the top-b predictions, since the search generally does not produce more than b predictions.
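The beam search over the two-stage model can be sketched as follows. This is an illustrative sketch under assumed interfaces: `expand` is a hypothetical stand-in for applying a template via RDKit, and `score_t`/`score_r` for the two GLN scoring modules. It also makes the flattening behavior concrete: with beam size b, at most b templates are expanded, so the candidate list rarely grows much beyond b, and b = 1 reduces to greedy inference.

```python
def beam_search(product, templates, expand, score_t, score_r, beam_size):
    """Two-stage beam search: keep the beam_size best templates under
    score_t, expand each into reactant candidates via `expand`, then
    rank all (template, reactants) pairs by the combined score."""
    # Stage 1: keep only the top-scoring templates (the beam).
    beam = sorted(templates, key=lambda t: score_t(product, t),
                  reverse=True)[:beam_size]
    # Stage 2: expand each surviving template; each expansion would be
    # one RDKit call in the real pipeline, hence linear cost in beam size.
    results = []
    for t in beam:
        for r in expand(product, t):
            results.append((score_t(product, t) + score_r(product, t, r),
                            t, r))
    results.sort(key=lambda x: x[0], reverse=True)
    return results
```

Since every template in the beam triggers its own expansion calls, inference time grows linearly with the beam size, matching the timing results reported next.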

We also report the inference speed in Figure 11, averaged over the 5,007 test predictions. The majority of the time is spent applying the templates via calls to RDKit, so the time required grows linearly with the beam size, as the number of RDKit calls does.

Accuracy of template prediction In Figure 12 we show the accuracy of template prediction, which decomposes into the reaction center identification accuracy and the accuracy of selecting a template for that reaction center. Here the beam size is fixed at 50. Predicting the reaction center is relatively easy, and GLN achieves 99% top-20 accuracy. These results indicate that the current performance bottleneck is template selection, which is reasonably good already but can certainly be further improved by capturing more reaction features.

C.5 Generalizing the logic check

The logic function that comes with our GLN can potentially be applied to any rule-based system. For example, when combined with neuralsym [9] (we denote the modified variant as -neuralsym), it further reduces the space of template selection. -neuralsym achieves top-1 accuracies of 46.9% and 57.7% in the reaction type unconditional and conditional cases, respectively, which is about a 2% improvement over its vanilla performance.
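The way a hard logic check shrinks another model's output space can be sketched generically. This is an illustrative sketch, not the neuralsym integration itself: `logic_check` is a placeholder for GLN's subgraph-matching predicate, and the classifier is represented by a plain score dictionary.

```python
def filtered_prediction(product, template_scores, logic_check):
    """Restrict a template classifier's scores to templates that pass
    the hard logic check (i.e., whose product-side pattern actually
    matches the product), then rank the survivors by score. Ranking
    over this reduced set is how the check lifts a base classifier."""
    valid = {t: s for t, s in template_scores.items()
             if logic_check(product, t)}
    return sorted(valid, key=valid.get, reverse=True)
```

Any probability mass the base classifier wasted on inapplicable templates is eliminated outright, which is consistent with the roughly 2% top-1 gain reported above.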

C.6 Visualization results

Figure 13: Example successful predictions.
Figure 14: Example failed predictions.

In Figures 13 and 14 we show examples of successful and failed predictions at higher resolution.