"Found in Translation": Predicting Outcomes of Complex Organic Chemistry Reactions using Neural Sequence-to-Sequence Models

Philippe Schwaller et al. (IBM), 13 November 2017

There is an intuitive analogy between an organic chemist's understanding of a compound and a language speaker's understanding of a word. Consequently, it is possible to introduce the basic concepts of linguistic analysis and to assess their potential impact on the world of organic chemistry. In this work, we cast the reaction prediction task as a translation problem by introducing a template-free sequence-to-sequence model, trained end-to-end and fully data-driven. We propose a novel way of tokenization, which is arbitrarily extensible with reaction information. With this approach, we demonstrate results superior to the state-of-the-art solution by a significant margin on the top-1 accuracy. Specifically, our approach achieves a top-1 accuracy of 80.3% without relying on auxiliary knowledge such as reaction templates. In addition, 65.4% accuracy is reached on a larger and noisier dataset.


1 Introduction

After nearly 200 years of documented research, the synthesis of organic molecules remains one of the most important tasks in organic chemistry. The construction of a target molecule from a set of existing reactants and reagents via chemical reactions is attracting much attention because of its economic implications. Multiple efforts have been made in the past 50 years to rationalize the large number of chemical compounds and reactions identified, which form the large knowledge bases for solving synthetic problems. In 1969, Corey and Wipke Corey1969 demonstrated that both synthesis and retrosynthesis could be performed by a machine. Their pioneering contribution involved the use of handcrafted rules made by experts, which are commonly known as reaction templates. The templates encode the local changes to the atoms' connectivity under certain conditions, accounting for various subtleties of retrosynthesis. A similar algorithm emerged in the late 1970s Salatin1978 , which also requires a set of expert rules. Unfortunately, writing rules is a tedious, time- and labor-intensive task, and the rules may not cover the entire domain of complex organic chemistry problems. In such cases, profound chemical expertise is still required, and the solutions are usually developed by trained organic chemists. However, it can be extremely challenging even for them to synthesize a relatively complex molecule, which may take several reaction steps to construct. In fact, navigating the chemical space of drug-related compounds by relying only on intuition may turn a synthesis into a nearly impossible task, especially if the problem is slightly outside the expert's knowledge.

Other approaches extract reaction templates directly from data Satoh1995 ; Satoh1996 ; Segler2017 ; Coley2017 . In this specific context, candidate products are generated from the templates and then ranked according to their likelihood. Satoh and Funatsu Satoh1995 ; Satoh1996 used various hard-coded criteria to perform the ranking, whereas more recent approaches Segler2017 ; Coley2017 used a deep neural network. However, these types of approaches are fundamentally dependent on the rule-based system component and thus inherit some of its major limitations. In particular, they do not produce sufficiently accurate predictions outside of the training domain.

Nevertheless, the class of algorithms Corey1969 ; Salatin1978 ; Satoh1995 ; Satoh1996 ; Segler2017 ; Coley2017 that is based on rules manually encoded by human experts or automatically derived from a reaction database is not the only way to approach the problem of organic synthesis. A second approach to predicting reactions has been to leverage the advancements in computational chemistry and predict the energy barriers of a reaction based on first-principle calculations Dolbier1996 ; Mondal2013 . Although very accurate predictions can be reached for small systems, this remains a computationally intensive task and is therefore limited to applications of purely academic interest.

One way to view the reaction prediction task is to cast it as a translation problem, where the objective is to map a text sequence that represents the reactants to a text sequence representing the product. Molecules can equivalently be expressed as text sequences in line notation format, such as the simplified molecular-input line-entry system (SMILES) Weininger1988 . Intuitively, there is an analogy between a chemist's understanding of a compound and a language speaker's understanding of a word. No matter how imaginative such an analogy is, it was only very recently that it was formally verified Cadeddu2014 . Cadeddu et al. Cadeddu2014 showed that organic molecules contain fragments whose rank distribution is essentially identical to that of sentence fragments. The immediate consequence of this discovery is that the vocabulary of organic chemistry and human language follow very similar laws, thus introducing the basic concepts and potential impact of linguistics-based analyses to a general chemical audience. It has already been shown that a text representation of molecules is effective in chemoinformatics Gomez-Bombarelli2016 ; Jastrzebski2016 ; Kusner2017 ; Bjerrum2017 ; Segler2017a . This has strengthened our belief that the methods of computational linguistics can have an immense impact on the analysis of organic molecules and reactions.

In this work, we build on the idea of relating organic chemistry to a language and explore the application of state-of-the-art neural machine translation methods, which are sequence-to-sequence (seq2seq) models. A similar approach was recently suggested, but its application was limited to textbook reactions Nam2016 . Here, we intend to solve the forward-reaction prediction problem, where the starting materials are known and the interest is in generating the products. We propose a novel way of tokenization that is arbitrarily extensible with reaction information. The overall network architecture is simple, and the model is trained end-to-end, fully data-driven and without additional external information. With this approach, we outperform current solutions using their own training and test sets Jin2017 , achieving a top-1 accuracy of 80.3%, and set a first benchmark of 65.4% on a noisy single-product reaction dataset extracted from US patents.

2 Related Work

2.1 Template-based reaction prediction

Template-based reaction prediction methods have been widely researched in the past couple of years Wei2016 ; Segler2017 ; Coley2017 . Wei et al. Wei2016 used a graph-convolution neural network proposed by Duvenaud et al. Duvenaud2015 to infer fingerprints of the reactants and reagents. They trained a network on the fingerprints to predict which reaction templates to apply to the reactants. Segler and Waller Segler2017 built a knowledge graph using reaction templates and discovered novel reactions by searching for missing nodes in the graph. Coley et al. Coley2017 generated, for a given set of reactants, all possible product candidates from a set of reaction templates extracted from US patents Lowe2012 and predicted the outcome of the reaction by ranking the candidates with a neural network. One major advancement by Segler and Waller Segler2017 and Coley et al. Coley2017 was to consider alternative products as negative examples. Recently, Segler and Waller Segler2017b introduced a neural-symbolic approach. They extracted reaction rules from the commercially available Reaxys database. Then, they trained a neural network on molecular fingerprints to prioritize rules and combined the network with a Monte Carlo tree search to overcome the scalability issues of other template-based methods. In any case, template-based methods have the limitation that they cannot predict anything outside the space covered by the previously extracted templates.

2.2 Template-free reaction prediction

A first template-free approach was introduced by Kayala et al. Kayala2012 . Using fingerprints and hand-crafted features, they predicted a series of mechanistic steps to obtain one reaction outcome. Owing to the sparsity of data on such mechanistic reaction steps, the dataset was self-generated with a template-based expert system. Recently, Jin et al. Jin2017 introduced a novel approach based on Weisfeiler–Lehman Networks (WLN). They trained two independent networks on a set of 400,000 reactions extracted from US patents. The first WLN scored the reactivity between atom pairs and predicted the reaction center. All possible bond configuration changes were enumerated to generate product candidates. The candidates that were not removed by hard-coded valence and connectivity rules were then ranked by a Weisfeiler–Lehman Difference Network (WLDN). Their method achieved a top-1 accuracy of 74.0% on a test set of 40,000 reactions. Jin et al. Jin2017 claimed to outperform template-based approaches by a margin of 10% after augmenting the model with the unknown products of the initial prediction to reach a product coverage of 100% on the test set. Although the code is not yet public, the dataset with the exact training, validation and test splits has been released (https://github.com/wengong-jin/nips17-rexgen). The complexity of the reaction prediction problem was significantly reduced by removing the stereochemical information.

2.3 Seq2seq models in organic reaction prediction and retrosynthesis

The closest work to ours is that of Nam and Kim Nam2016 , who also used a template-free seq2seq model to predict reaction outcomes. Although their network was trained end-to-end on patent data and self-generated reaction examples, they limited their predictions to textbook reactions. Their model was based on the TensorFlow translate model (v0.10.0) Abadi2016 , from which they took the default values for most of the hyperparameters.

Retrosynthesis is the opposite of reaction prediction: given a product molecule, the goal is to find possible reactants. This is a considerably more difficult task for a seq2seq model and was approached by Liu et al. Liu2017 . They used a set of 50,000 reactions extracted and curated by Schneider et al. Schneider2016 . Although the stereochemical information was included, the reactions were classified, which means that the dataset contained only common reaction types.

Overall, none of the previous works was able to demonstrate the superiority of seq2seq models. What we observe in general is that there is always a tradeoff between coverage and accuracy. In fact, whenever reactions that do not work well with the model are removed under the assumption that they are erroneous, the model's accuracy will improve. This calls for open datasets. The only fair way to compare models is to use datasets to which identical filtering was applied, or on which the reactions that a model is unable to predict are counted as false predictions.

3 Dataset

All the openly available chemical reaction datasets were derived in some form from the patent text-mining work of Daniel M. Lowe Lowe2012 . Lowe's dataset has recently been updated and contains data extracted from US patent grants and applications dating from 1976 to September 2016 Lowe2017 . What makes the dataset particularly interesting is that its quality and noise correspond well to the data a chemical company might own. The portion covering granted patents is made up of 1,808,938 reactions, which are described using SMILES Weininger1988 . Looking at the original patent data, it is surprising that a complex chemical synthesis process consisting of multiple steps, performed over hours or days, can be summarized in a simple string. Such reaction strings are composed of three groups of molecules: the reactants, the reagents, and the products, which are separated by a '>' sign. The process actions and reaction conditions, for example, have been neglected so far.

To date, there is no standard way of filtering duplicates, incomplete or erroneous reactions in Lowe's dataset. We kept the filtering to a minimum to show that our network is able to handle noisy data. We removed 720,768 duplicates by comparing reaction strings without atom mapping, and an additional 780 reactions because the SMILES string could not be canonicalized with RDKit Landrum2017 , as the explicit number of valence electrons for one of the atoms was greater than permitted. We took only single product reactions, corresponding to 92% of the dataset, to have distinct prediction targets. Although this is a current limitation of our training procedure, it could easily be overcome in the future, for example by defining a specific order for the product molecules. Finally, the dataset was randomly split into training, validation and test sets (18:1:1) (https://ibm.box.com/v/ReactionSeq2SeqDataset). Reactions with the same reactants but different reagents and products were kept in the same set.
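The filtering described above (duplicate removal after stripping the atom mapping, and removal of reactions that RDKit cannot canonicalize) could be sketched as follows. This is a minimal, hypothetical illustration rather than the authors' code; the variable reactions is assumed to hold Lowe's atom-mapped reaction SMILES strings.

    from rdkit import Chem

    def strip_and_canonicalize(rxn_smiles):
        # Canonicalize each molecule of a 'reactants>reagents>products' string
        # after removing the atom-map labels; return None if any molecule
        # cannot be sanitized (e.g. a forbidden valence).
        groups = []
        for group in rxn_smiles.split(">"):
            mols = []
            for smi in filter(None, group.split(".")):
                mol = Chem.MolFromSmiles(smi)
                if mol is None:
                    return None
                for atom in mol.GetAtoms():
                    atom.SetAtomMapNum(0)
                mols.append(Chem.MolToSmiles(mol))
            groups.append(".".join(mols))
        return ">".join(groups)

    seen, filtered = set(), []
    for rxn in reactions:                                # 'reactions' assumed loaded beforehand
        canon = strip_and_canonicalize(rxn)
        if canon is not None and canon not in seen:      # drop invalid entries and duplicates
            seen.add(canon)
            filtered.append(canon)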

Reactions in                       train      valid     test        total
Lowe's grants set Lowe2017                                      1,808,938
   without duplicates                                           1,088,170
   with single product           902,581     50,131   50,258    1,002,970
Jin's USPTO set Jin2017          409,035     30,000   40,000      479,035
   with single product           395,496     29,075   38,647      463,218
Table 1: Overview of the datasets used in this work. Jin's set is derived from Lowe's grants dataset.

To compare our model and results with the current state of the art, we used the USPTO set recently published by Jin et al. Jin2017 . It was extracted from Lowe’s grants dataset Lowe2017 and contains 479,035 atom-mapped reactions without stereochemical information. We restricted ourselves to single product reactions, corresponding to 97% of the reactions in Jin’s USPTO set. An overview of the datasets taken as ground truths for this work is shown in Table 1.

Example: entry 23738, Jin's USPTO test set Jin2017 .

1) Original string:
   [Cl:1][c:2]1[cH:3][c:4]([CH3:8])[n:5][n:6]1[CH3:7].[OH:14][N+:15]([O-:16])=[O:17].[S:9](=[O:10])(=[O:11])([OH:12])[OH:13]>>[Cl:1][c:2]1[c:3]([N+:15](=[O:14])[O-:16])[c:4]([CH3:8])[n:5][n:6]1[CH3:7]

2) Reactant and reagent separation:
   [Cl:1][c:2]1[cH:3][c:4]([CH3:8])[n:5][n:6]1[CH3:7].[OH:14][N+:15]([O-:16])=[O:17]>[S:9](=[O:10])(=[O:11])([OH:12])[OH:13]>[Cl:1][c:2]1[c:3]([N+:15](=[O:14])[O-:16])[c:4]([CH3:8])[n:5][n:6]1[CH3:7]

3) Atom-mapping removal and canonicalization:
   Cc1cc(Cl)n(C)n1.O=[N+]([O-])O>O=S(=O)(O)O>Cc1nn(C)c(Cl)c1[N+](=O)[O-]

4) Reactant and product tokenization:
   reactants: C c 1 c c ( Cl ) n ( C ) n 1 . O = [N+] ( [O-] ) O
   reagent:   O=S(=O)(O)O
   product:   C c 1 n n ( C ) c ( Cl ) c 1 [N+] ( = O ) [O-]

5) Reagent tokenization:
   reactants: C c 1 c c ( Cl ) n ( C ) n 1 . O = [N+] ( [O-] ) O
   reagent:   A_O=S(=O)(O)O
   product:   C c 1 n n ( C ) c ( Cl ) c 1 [N+] ( = O ) [O-]

Source: C c 1 c c ( Cl ) n ( C ) n 1 . O = [N+] ( [O-] ) O > A_O=S(=O)(O)O
Target: C c 1 n n ( C ) c ( Cl ) c 1 [N+] ( = O ) [O-]

Table 2: Data preparation steps to obtain source and target sequences. Tokens are separated by a space, individual molecules by a '.' token, and reactants, reagents and products by '>' signs.

3.1 Data preprocessing

To prepare the reactions, we first used the atom mappings to separate reagents from reactants. Input molecules with atoms appearing in the product were classified as reactants, and those without atoms in the product as reagents. Then, we removed the hydrogen atoms and the atom mappings from the reaction string, and canonicalized the molecules. Afterwards, we tokenized reactants and products atom-wise using the following regular expression:

token_regex = "(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\\\|\/|:|~|@|\?|>|\*|\$|\%[0-9]{2}|[0-9])".

As reagent atoms are never mapped onto product atoms, we employed a reagent-wise tokenization using a set of the 76 most common reagents, according to the analysis in Schneider2016 . Reagents belonging to this set were added as distinct tokens after the first '>' sign, ordered by occurrence. Other reagents, which were not in the set, were neglected and removed completely from the reaction string. This separate tokenization would allow us to extend the reaction information and add tokens for reaction conditions without changing the model architecture. The final source sequences were made up of a tokenized “reactants > common reagents” string, and the target sequences of a tokenized “product”. The tokens were separated by space characters. The preprocessing steps are summarized, together with an example, in Table 2. The same preprocessing steps were applied to all datasets.
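As an illustration, the atom-wise tokenization might be implemented as sketched below. Only the regular expression is taken from the text above; the function and variable names are our own, hypothetical choices.

    import re

    SMILES_REGEX = re.compile(
        r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
    )

    def tokenize_smiles(smiles):
        # Split a SMILES string into atom-level tokens separated by spaces.
        tokens = SMILES_REGEX.findall(smiles)
        assert smiles == "".join(tokens), "unrecognized characters in SMILES"
        return " ".join(tokens)

    # Example from Table 2:
    # tokenize_smiles("Cc1cc(Cl)n(C)n1.O=[N+]([O-])O")
    # -> 'C c 1 c c ( Cl ) n ( C ) n 1 . O = [N+] ( [O-] ) O'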

4 Model

To map the sequence of the reactants/reagents to the sequence of the products, we adapted an existing implementation Zhao2017 with minor modifications. Our model architecture consists of two distinct recurrent neural networks (RNN) working together: (1) an encoder that processes the input sequence and emits its context vector c, and (2) a decoder that uses this representation to output a probability over a prediction. For these two RNNs, we rely on specific variants of long short-term memory (LSTM) Hochreiter1997 because they are able to handle long-range relations in sequences. An LSTM consists of units that process the input data sequentially. Each unit at time step t processes an element of the input x_t and the network's previous hidden state h_{t-1}. The output and the hidden-state transition are defined by

    i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)    (1)
    f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)    (2)
    o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)    (3)
    c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)    (4)
    h_t = o_t \odot \tanh(c_t)    (5)

where i_t, f_t and o_t are the input, forget, and output gates; c_t is the cell state vector; W, U and b are model parameters learned during training; \sigma is the sigmoid function and \odot is the entry-wise product. For the encoder, we used a bidirectional LSTM (BLSTM) Graves2005 . A BLSTM processes the input sequence in both directions, so it has context not only from the past but also from the future. It comprises two LSTMs: one that processes the sequence forward and the other backward, with forward and backward hidden states \overrightarrow{h}_t and \overleftarrow{h}_t for each time step. The hidden states of a BLSTM are defined as the concatenation

    h_t = [\overrightarrow{h}_t ; \overleftarrow{h}_t]    (6)

Thus we can formalize our encoder as

    h_t = f(E x_t, h_{t-1})    (7)

where f is a multilayered BLSTM; h_t are the hidden states at time t; x_t is an element of an input sequence X = (x_1, ..., x_T), which is a one-hot encoding of our vocabulary; and E are the learned embedding weights. Generally, the context vector c is a simple concatenation of the encoder's hidden states:

    c = [h_1, ..., h_T]    (8)

The second part of the model – the decoder – predicts the probability of observing a product Y = (y_1, ..., y_{T'}):

    p(Y) = \prod_{t=1}^{T'} p(y_t \mid \{y_1, ..., y_{t-1}\}, c_t)    (9)

and for a single token y_t:

    p(y_t \mid \{y_1, ..., y_{t-1}\}, c_t) = g(y_{t-1}, s_t, c_t)    (10)

where g is a stack of LSTMs, which outputs the probability for a single token; s_t are the decoder's hidden states; and c_t is a different context vector for each target token y_t. Bahdanau et al. Bahdanau2015 and Luong et al. Luong2015 proposed attention mechanisms, i.e., different ways of computing the vector c_t rather than taking the last hidden state h_T of the encoder. We performed experiments using both models and describe Luong's method, which yielded the best overall results.
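For concreteness, a single LSTM step (eqs. 1-5 above), from which both the encoder and the decoder are built, can be sketched in plain numpy as follows; the weight layout and names are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def lstm_step(x_t, h_prev, c_prev, W, U, b):
        # W, U, b are dicts with keys 'i', 'f', 'o', 'c' holding the learned parameters.
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate, eq. (1)
        f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate, eq. (2)
        o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate, eq. (3)
        c_t = f_t * c_prev + i_t * np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])  # eq. (4)
        h_t = o_t * np.tanh(c_t)                                 # new hidden state, eq. (5)
        return h_t, c_t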

4.1 Luong’s Attention Mechanism

To compute the context vector, we first have to compute the attention weights \alpha_{ti}:

    \alpha_{ti} = \frac{\exp(\mathrm{score}(s_t, h_i))}{\sum_{j=1}^{T} \exp(\mathrm{score}(s_t, h_j))}, \quad \mathrm{score}(s_t, h_i) = s_t^{\top} W_a h_i    (11)

and from them the context vector as a weighted sum of the encoder hidden states:

    c_t = \sum_{i=1}^{T} \alpha_{ti} h_i    (12)

The attention vector a_t is then defined by

    a_t = \tanh(W_c [c_t ; s_t])    (13)

Both W_a and W_c are learned weights. Then a_t can be used to compute the probability for a particular token:

    p(y_t \mid \{y_1, ..., y_{t-1}\}, c_t) = \mathrm{softmax}(W_s a_t)    (14)

where W_s are also learned projection weights.
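A minimal numpy sketch of this attention step (eqs. 11-14) is given below; the shapes and names are illustrative and not the authors' code.

    import numpy as np

    def softmax(z):
        z = np.exp(z - z.max())
        return z / z.sum()

    def luong_attention_step(s_t, H, W_a, W_c, W_s):
        # s_t: decoder state (d,); H: encoder hidden states (T, d);
        # W_a: (d, d); W_c: (d, 2d); W_s: (vocab, d).
        scores = H @ (W_a @ s_t)                           # score(s_t, h_i) for every position i
        alpha = softmax(scores)                            # attention weights, eq. (11)
        c_t = alpha @ H                                    # context vector, eq. (12)
        a_t = np.tanh(W_c @ np.concatenate([c_t, s_t]))    # attention vector, eq. (13)
        return softmax(W_s @ a_t), alpha                   # token probabilities, eq. (14)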

4.2 Training Details

During training, all parameters of the network were trained jointly using stochastic gradient descent. The loss function was the cross-entropy

    \mathcal{L} = -\sum_{t=1}^{T'} \log p(y_t \mid \{y_1, ..., y_{t-1}\}, c_t)    (15)

for a particular training sequence. The loss was computed over an entire minibatch and then normalized. The weights were initialized using a random uniform distribution ranging from -0.1 to 0.1. Every 3 epochs, the learning rate was multiplied by a decay factor. The minibatch size was 128. Gradient clipping was applied when the norm of the gradient exceeded 5.0. The teacher forcing method Williams1989 was used during training.
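The per-minibatch loss (eq. 15) under teacher forcing could be computed as sketched below, assuming the decoder has already been run with the ground-truth previous tokens as input; the padding handling and names are our own assumptions.

    import numpy as np

    def minibatch_loss(token_probs, targets, pad_id=0):
        # token_probs: (batch, steps, vocab) probabilities from the teacher-forced decoder;
        # targets: (batch, steps) ground-truth token ids.
        batch, steps, _ = token_probs.shape
        mask = (targets != pad_id).astype(float)                 # ignore padding positions
        picked = token_probs[np.arange(batch)[:, None],
                             np.arange(steps)[None, :],
                             targets]                            # p(y_t | y_<t, X)
        loss = -(np.log(picked + 1e-12) * mask).sum()            # eq. (15), summed over the batch
        return loss / batch                                      # normalized over the minibatch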

5 Architecture & Hyperparameter Search

Finding the best-performing set of hyperparameters for a deep neural network is not trivial. As mentioned in section 4, our model has numerous parameters that influence both its training and its architecture, and depending on those parameters the performance of the model can vary notably. In order to select the best parameters efficiently, we built a framework around scikit-optimize Scikit2017 to perform a gradient-boosted regression tree search over the hyperparameter space defined in Table 3. In total, we trained 100 models for 30 epochs each. Table 4 lists the best set of hyperparameters found with this method. This model was then trained further, up to 80 epochs, to improve its final accuracy.

Parameter                        Possible Values
Number of Units                  128, 256, 512 or 1024
Number of Layers                 2, 4 or 6
Type of Encoder                  LSTM, BLSTM
Output Dropout                   0 - 0.9
State Dropout                    0 - 0.9
Variational Dropout Gal2016      True, False
Learning Rate                    0.1 - 5
Decay Factor                     0.85 - 0.99
Type of Attention                "Luong" or "Bahdanau"
Table 3: Hyperparameter space

Parameter                        Encoder      Decoder
Number of Units                  1024         2048
Number of Layers                 2            2
RNN Cell Type                    BLSTM        LSTM
Output Dropout                   0.7676
State Dropout                    0.5374
Variational Dropout              True
Learning Rate                    0.355
Decay Factor                     0.854
Table 4: Hyperparameters for the best model
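A hedged sketch of such a search with scikit-optimize is shown below; the search space mirrors Table 3, and train_and_score is a hypothetical function that trains a model for 30 epochs and returns one minus its validation accuracy.

    from skopt import gbrt_minimize
    from skopt.space import Real, Categorical

    space = [
        Categorical([128, 256, 512, 1024], name="num_units"),
        Categorical([2, 4, 6], name="num_layers"),
        Categorical(["LSTM", "BLSTM"], name="encoder_type"),
        Real(0.0, 0.9, name="output_dropout"),
        Real(0.0, 0.9, name="state_dropout"),
        Categorical([True, False], name="variational_dropout"),
        Real(0.1, 5.0, name="learning_rate"),
        Real(0.85, 0.99, name="decay_factor"),
        Categorical(["luong", "bahdanau"], name="attention"),
    ]

    # Gradient-boosted regression trees model the objective; 100 evaluations as in the text.
    result = gbrt_minimize(lambda params: train_and_score(*params),
                           dimensions=space, n_calls=100)
    print(result.x, 1.0 - result.fun)   # best hyperparameters and the corresponding accuracy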

6 Experiments

6.1 Reaction prediction

We evaluated our model on two datasets and compared the performance with other state-of-the-art results. After the hyperparameter optimization, we continued to train our best model on the 395,496 reactions in Jin's USPTO training set and tested the fully trained model on Jin's USPTO test set. Additionally, we trained a second model with the same hyperparameters on 902,581 randomly chosen single-product reactions from the more complex and noisy Lowe dataset and tested it on a set of 50,258 reactions. As molecules are discrete data, changing a single character, as in source code or arithmetic expressions, can lead to a completely different meaning or even invalidate the entire string. Therefore we use full-sequence accuracy, the strictest criterion possible, as our evaluation metric: a test prediction is considered correct only if all tokens are identical to the ground truth.

The network has to solve three major challenges. First, it has to memorize the SMILES grammar to predict syntactically correct sequences. Second, because we trained it on canonicalized molecules, the network has to learn the canonical representation. Third, the network has to map the reactants-plus-reagents space to the product space.

Although the training was performed without a beam search, we used a beam width of 10 without length penalty for the inference, so the 10 most probable sequences were kept at every time step. This allowed us to know what probability the network assigned to each of the sequences. We used the top-1 probabilities to analyze the prediction confidence of the network. The final step was to canonicalize the network output. This simple and deterministic reordering of the tokens improved the accuracy by 1.5%: molecules that were correctly predicted, but whose tokens were not enumerated in the canonical order, were still counted as correct. The prediction accuracies of our model on the different datasets are reported in Table 5. For single product reactions, we achieved an accuracy of 83.2% on Jin's USPTO test set and 65.4% on Lowe's test set.

Dataset                          Size      BLEU Papineni2001   ROUGE Lin   top-1   top-2   top-3
Jin's USPTO test set Jin2017     38,648    95.9                96.0        83.2    87.7    89.2
Lowe's test set Lowe2017         50,258    90.3                90.9        65.4    71.8    74.1
Table 5: Scores of our model on different single product datasets, in %.
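The evaluation described above (removing the token spaces, canonicalizing the prediction with RDKit, then requiring an exact full-sequence match) might be implemented as follows; this sketch uses hypothetical names and is not the authors' code.

    from rdkit import Chem

    def is_correct(predicted_tokens, target_smiles):
        # Remove the token spaces, canonicalize with RDKit and compare with the
        # canonical ground-truth product; grammatically invalid SMILES count as wrong.
        mol = Chem.MolFromSmiles(predicted_tokens.replace(" ", ""))
        if mol is None:
            return False
        return Chem.MolToSmiles(mol) == target_smiles

    def topk_accuracy(beam_outputs, targets, k=1):
        # beam_outputs: one list of token strings per reaction, best hypothesis first.
        hits = sum(any(is_correct(hyp, tgt) for hyp in beams[:k])
                   for beams, tgt in zip(beam_outputs, targets))
        return hits / len(targets)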

6.2 Comparison with the state of the art

To the best of our knowledge, no previous work has attempted to predict reactions on the complete US patent dataset of Lowe Lowe2017 . Table 6 shows a comparison with the Weisfeiler–Lehman difference networks (WLDN) of Jin et al. Jin2017 on their USPTO test set. To make the comparison fair, we count all the multiple product reactions in the test set as false predictions for our model, because we trained only on the single product reactions. By achieving 80.3% top-1 accuracy, we outperform their model by a margin of 6.3%, which is even larger than the improvement they report for their augmented setup. As our model does not rank candidates, but was trained to accurately predict the top-1 outcome, it is not surprising that the WLDN beats our model in the top-3 and top-5 accuracies. Decoding the 38,648 USPTO test set reactions takes on average 25 ms per reaction with a beam search. Our model can therefore compete with the state of the art.

Jin’s USPTO test set Jin2017 , accuracies in [%]
Method top-1 top-2 top-3 top-5
WLDN Jin2017 74.0 86.7 89.5
Our model 80.3 84.7 86.2 87.5
Table 6: Comparison with Jin et al. Jin2017 . The 1,352 multiple product reactions (3.4% of the test set) are counted as false predictions for our model.

6.3 Prediction confidence

We analyzed the top-1 beam search probability to obtain information about prediction confidence and to observe how this probability relates to accuracy. Figure 1a illustrates the distribution of the top-1 probability on Lowe's test set in cases where the top-1 prediction is correct (left) and where it is wrong (right). A clear difference can be observed and used to define a threshold below which we consider that the network does not know what to predict. Figure 1b shows the top-1 accuracy and coverage depending on the confidence threshold. For example, with a confidence threshold of 0.83 the model would predict the outcome of 70.2% of the reactions with an accuracy of 83.0%, and for the remaining 29.8% of the reactions it would not venture a prediction.

Figure 1: Top-1 prediction confidence plots for Lowe's test set, inferred with a beam search of 10. (a) Distribution of top-1 probabilities. (b) Coverage/accuracy plot.
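The threshold analysis behind Figure 1b could be reproduced from the per-reaction top-1 probabilities and correctness flags as sketched below (illustrative names, not the authors' code).

    import numpy as np

    def coverage_accuracy(top1_probs, top1_correct, threshold):
        # top1_probs: top-1 beam probabilities; top1_correct: booleans, prediction == ground truth.
        probs = np.asarray(top1_probs)
        correct = np.asarray(top1_correct, dtype=bool)
        answered = probs >= threshold                 # reactions the model is confident about
        coverage = answered.mean()                    # fraction of reactions predicted
        accuracy = correct[answered].mean() if answered.any() else float("nan")
        return coverage, accuracy

    # e.g. a threshold of 0.83 gives roughly 70% coverage at 83% top-1 accuracy on Lowe's test set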

6.4 Attention

Attention is the key to taking into account complex long-range dependencies between multiple tokens. Specific functional groups, solvents or catalysts have an impact on the outcome of a reaction, even if they are far from the reaction center in the molecular graph and therefore also in the SMILES string. Figure 2 shows how the network learned to focus first on the C[O-] molecule, to map the [O-] in the input correctly to the O in the target, and to ignore the Br, which is replaced in the target.

Figure 2: Reaction 120 from Jin's USPTO test set. (a) Attention weights. (b) Reaction plotted with RDKit Landrum2017 . The atom mapping between reactants and product is highlighted. Reaction SMILES: Brc1cncc(Br)c1.C[O-]>CN(C)C=O.[Na+]>COc1cncc(Br)c1

6.5 Limitations

Our model is not without limitations. An obvious disadvantage compared to template-based methods is that the output strings are not guaranteed to be valid SMILES. Incorporating a context-free grammar layer, as was done in Gomez-Bombarelli2016 , could bring minor improvements. Fortunately, only 1.3% of our model's top-1 predictions are grammatically erroneous. Another limitation of the training procedure is multiple product reactions. In contrast to words in a sentence, the exact order in which the molecules in the target string are enumerated does not matter. A viable option would be to include in the training set all possible permutations of the product molecules. Our hyperparameter space during optimization was restricted to a maximum of 1,024 units for the encoder; using more units could have led to improvements. On Jin's USPTO dataset, the training plateaued because an accuracy of 99.9% was reached and the network had memorized almost the entire training set. Even on Lowe's noisier dataset, a training accuracy of 94.5% was observed. A hyperparameter optimization should also be performed on Lowe's dataset.

7 Conclusion

Predicting reaction outcomes is a routine task for many organic chemists, trained to recognize structural and reactivity patterns reported in a wide number of publications. Not only did we show that a seq2seq model with correctly tuned hyperparameters can learn the language of organic chemistry, but our approach also outperformed the current state of the art in patent reaction outcome prediction, achieving a top-1 accuracy of 80.3% on Jin's USPTO dataset and 65.4% on the single product reactions of Lowe's dataset. Compared to previous work, our approach is fully data driven and free of reaction templates. Also worth mentioning is the overall simplicity of our model, which jointly trains the encoder, decoder and attention layers. Our hope is that, with this type of model, chemists can codify and perhaps one day fully automate the art of organic synthesis.

Acknowledgments

We thank Nadine Schneider, Greg Landrum and Roger Sayle for the helpful discussions on RDKit and the datasets. We also would like to acknowledge Marwin Segler and Hiroko Satoh for useful feedback on our approach.

References

  • (1) Corey, E. J. & Wipke, W. T. Computer-Assisted Design of Complex Organic Syntheses. Science 166, 178–192 (1969).
  • (2) Salatin, T. D. et al. Computer-Assisted Mechanistic Evaluation of Organic Reactions. 1. Overview. J. Org. Chem. 45113, 455771–1041 (1978).
  • (3) Satoh, H. & Funatsu, K. SOPHIA, a Knowledge Base-Guided Reaction Prediction System—Utilization of a Knowledge Base Derived from a Reaction Database. J. Chem. Inf. Comput. Sci. 35, 34–44 (1995).
  • (4) Satoh, H. & Funatsu, K. Further development of a reaction generator in the sophia system for organic reaction prediction. knowledge-guided addition of suitable atoms and/or atomic groups to product skeleton. J. Chem. Inf. Comput. Sci. 36, 173–184 (1996).
  • (5) Segler, M. H. S. & Waller, M. P. Modelling Chemical Reasoning to Predict and Invent Reactions. Chem. Eur. J. 23, 6118–6128 (2017).
  • (6) Coley, C. W., Barzilay, R., Jaakkola, T. S., Green, W. H. & Jensen, K. F. Prediction of Organic Reaction Outcomes Using Machine Learning. ACS Cent. Sci. 3, 434–443 (2017).
  • (7) Dolbier, W. R. J., Henryk, K., Houk, K. & Chimin, S. Electronic Control of Stereoselectivities of Electrocyclic Reactions of Cyclobutenes: A Triumph of Theory in the Prediction of Organic Reactions. Acc. Chem. Res. 29, 471–477 (1996).
  • (8) Mondal, D. et al. Stereoretentive chlorination of cyclic alcohols catalyzed by titanium(IV) tetrachloride: evidence for a front side attack mechanism. J. Org. Chem. 78, 2118–27 (2013).
  • (9) Weininger, D. SMILES, a Chemical Language and Information System. 1. Introduction to Methodology and Encoding Rules. J. Chem. Inf. Comput. Sci. 28, 31–36 (1988).
  • (10) Cadeddu, A., Wylie, E. K., Jurczak, J., Wampler-Doty, M. & Grzybowski, B. A. Organic Chemistry as a Language and the Implications of Chemical Linguistics for Structural and Retrosynthetic Analyses. Angew. Chem. Int. Ed. 53, 8108–8112 (2014).
  • (11) Gómez-Bombarelli, R. et al. Automatic chemical design using a data-driven continuous representation of molecules (2016). arXiv:1610.02415.
  • (12) Jastrzȩbski, S., Leśniak, D. & Czarnecki, W. M. Learning to SMILE(S) (2016). arXiv:1602.06289.
  • (13) Kusner, M. J., Paige, B. & Hernández-Lobato, J. M. Grammar Variational Autoencoder. In ICML (2017).
  • (14) Bjerrum, E. J. SMILES Enumeration as Data Augmentation for Neural Network Modeling of Molecules (2017). arXiv:1703.07076.
  • (15) Segler, M. H. S., Kogej, T., Tyrchan, C. & Waller, M. P. Generating Focussed Molecule Libraries for Drug Discovery with Recurrent Neural Networks (2017). arXiv:1701.01329.
  • (16) Nam, J. & Kim, J. Linking the Neural Machine Translation and the Prediction of Organic Chemistry Reactions (2016). arXiv:1612.09529.
  • (17) Jin, W., Coley, C. W., Barzilay, R. & Jaakkola, T. Predicting Organic Reaction Outcomes with Weisfeiler-Lehman Network. In NIPS (2017). arXiv:1709.04555.
  • (18) Wei, J. N., Duvenaud, D. & Aspuru-Guzik, A. Neural Networks for the Prediction of Organic Chemistry Reactions. ACS Cent. Sci. 2, 725–732 (2016).
  • (19) Duvenaud, D. K. et al. Convolutional Networks on Graphs for Learning Molecular Fingerprints. In NIPS (2015).
  • (20) Lowe, D. M. Extraction of chemical structures and reactions from the literature (2012).
  • (21) Segler, M. H. S. & Waller, M. P. Neural-Symbolic Machine Learning for Retrosynthesis and Reaction Prediction. Chem. Eur. J. 23, 5966–5971 (2017).
  • (22) Kayala, M. A. & Baldi, P. ReactionPredictor: Prediction of Complex Chemical Reactions at the Mechanistic Level Using Machine Learning. J. Chem. Inf. Model. 52, 2526–2540 (2012).
  • (23) Abadi, M. et al. TensorFlow: A System for Large-Scale Machine Learning. In OSDI (2016).
  • (24) Liu, B. et al. Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models. ACS Cent. Sci. (2017).
  • (25) Schneider, N., Stiefl, N. & Landrum, G. A. What’s What: The (Nearly) Definitive Guide to Reaction Role Assignment. J. Chem. Inf. Model. 56, 2336–2346 (2016).
  • (26) Lowe, D. Chemical reactions from US patents (1976-Sep2016) (2017). URL https://figshare.com/articles/Chemical_reactions_from_US_patents_1976-Sep2016_/5104873.
  • (27) Landrum, G. et al. Rdkit/Rdkit: 2017_09_1 (Q3 2017) Release (2017). URL https://zenodo.org/record/1004356#.Wd3LDY6l2EI.
  • (28) Zhao, R., Luong, T. & Brevdo, E. Neural Machine Translation (seq2seq) Tutorial (2017). URL https://github.com/tensorflow/nmt.
  • (29) Hochreiter, S. & Schmidhuber, J. Long Short-Term Memory. Neural Comput. 9, 1735–1780 (1997).
  • (30) Graves, A. & Schmidhuber, J. Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures. Neural Networks 18, 602–610 (2005).
  • (31) Bahdanau, D., Cho, K. & Bengio, Y. Neural Machine Translation By Jointly Learning To Align And Translate. In ICLR (2015).
  • (32) Luong, M.-T., Pham, H. & Manning, C. D. Effective Approaches to Attention-based Neural Machine Translation. In EMNLP (2015).
  • (33) Williams, R. J. & Zipser, D. A learning algorithm for continually running fully recurrent neural networks. Neural computation 1, 270–280 (1989).
  • (34) Head, T. et al. Scikit-Optimize (2017). URL http://scikit-optimize.github.io/.
  • (35) Gal, Y. & Ghahramani, Z. A theoretically grounded application of dropout in recurrent neural networks. In NIPS (2016).
  • (36) Papineni, K., Roukos, S., Ward, T. & Zhu, W.-J. BLEU: A Method for Automatic Evaluation of Machine Translation. In ACL (2002).
  • (37) Lin, C.-Y. ROUGE: A Package for Automatic Evaluation of Summaries. In ACL (2004).