Predicting properties of molecules is an area of growing research in machine learning Wu2018-yg ; Mitchell2014-aw , particularly as models for learning from graph-valued inputs improve in sophistication and robustness Scarselli2009-nn ; Gilmer2017-oq . A molecular property prediction problem that has received comparatively little attention during this surge in research activity is building Quantitative Structure-Odor Relationships (QSOR) models (as opposed to Quantitative Structure-Activity Relationships, a term from medicinal chemistry). This is a 70+ year old problem straddling chemistry, physics, neuroscience and machine learning Rossiter1996-nc .
Odor perception in humans is the result of the activation of 300-400 different types of olfactory receptors (ORs), expressed in millions of olfactory sensory neurons (OSNs), embedded in a small patch of tissue called the olfactory epithelium. These OSNs send signals to the olfactory bulb, and then to further structures in the brain Su2009-bv ; McGann2017-ir . Advances in deep learning for vision and audition suggest that we might be able to directly predict the end sensory result of an input stimulus. Progress in deep learning for olfaction would aid in the discovery of new synthetic odorants, thereby reducing the ecological impact of harvesting natural products. Additionally, new representations of molecules derived from a model trained on odor recognition tasks may contribute to our understanding of sensory perception in the brain Yamins2016-nf .
Here, we curated a dataset of molecules associated with expert-labeled odor descriptors (in QSOR, odor descriptors refer to the properties we wish to predict, as opposed to their usage in chemoinformatics, where they refer to the input features of a model). We trained Graph Neural Networks (GNNs) Gilmer2017-oq ; Duvenaud2015-ye to predict these odor descriptors using a molecule’s graph structure alone. We show that our model learned a representation of odor space that clusters molecules based on perceptual similarity rather than purely on structural similarity, on both a global and local scale. Further, we show that this representation is useful for making predictions on related tasks, which is a developing area in chemistry applications of machine learning Altae-Tran2017-yz ; Fare2018-br . These results indicate that our modeling approach has captured a general-purpose representation of the relationship between a molecule’s structure and odor, which we anticipate to be useful for rational molecular design and screening.
2 Prior Work in QSOR: A Decades-Long Pursuit
The problem of QSOR is ancient Sell2019-ti , but emerges in the scientific literature with Amoore, Schiffman and Dyson, among others Amoore1964-ca ; Dyson1937-ym ; schiffman1974physicochemical . Modern attempts to solve this problem in a directly data-driven and statistical manner began a few decades ago Rossiter1996-nc , and even included early applications of neural networks Chastrette1996-rm . However, the number of odor descriptors used in these early studies was small (fewer than ten, usually one), and the number of total stimuli was limited (usually tens, rarely hundreds of molecules) Sigma-Aldrich_Corporation2011-yk ; Dravnieks1985-ek ; Arctander1969-uf . This has remained an open problem for so long due to its difficulty: very small changes in a molecule’s structure can have dramatic effects on its odor, a phenomenon known in medicinal chemistry as an activity cliff Sell2006-ag ; Stumpfe2012-dc . A classic example is Lyral, a commercially successful molecule that smells of muguet (a floral scent often used in dryer sheets). Its structural neighbors are not always perceptual neighbors, and some of its perceptual neighbors share little structural similarity (Figure 1).
Recently, the DREAM Olfactory Challenge spurred applications of traditional machine learning approaches to QSOR prediction Keller2017-vs . This challenge presented a dataset where 49 untrained panelists rated 476 molecules on 21 odor attributes on an analog scale. The winning models of the DREAM challenge primarily relied on either the Dragon molecular features Mauri2006-sp or Morgan fingerprints Rogers2010-uj
as a featurization of molecules. These features were used by random forests to make predictions, an approach with a long track-record of success in chemoinformatics. We use these methods as baselines in this work.
We wish to highlight a few modern machine learning approaches to QSOR. Tran and colleagues Tran2018-nn have revisited the use of neural networks for this task, developing a convolutional neural network that takes as input a custom 3D spatial representation of molecules. Nozaki et al. Nozaki2018-yg used the mass spectra of molecules and natural language processing tools to predict textual descriptions of odor. Gutierrez et al. Gutierrez2018-ob used word embeddings and chemoinformatics representations of molecules to predict odor properties.
2.1 Classic Approaches to Featurizing Molecules and Modeling Their Properties
QSOR has historically used many computational techniques from chemoinformatics and medicinal chemistry. For predicting molecular properties, molecules are typically transformed into fixed-length vectors using hand-crafted features, and fed to a prediction model such as a random forest or fully-connected neural network Mitchell2014-aw ; Svetnik2003-ht . We describe the details of baseline approaches to featurizing molecules below.
2.1.1 Dragon and Mordred Features
There are several available hand-crafted featurizations for molecules, which are popular in the field of olfactory neuroscience. Both Dragon (closed source, Mauri2006-sp ) and Mordred (open source, Moriwaki2018-hv ) are approaches that include many thousands of computed molecular features. They are an agglomeration of several types of molecular information and statistics, such as counts of atom types, graph topology statistics, and acid/base counts. Some of these features are easily interpretable (e.g. number of carbon atoms) and some are not (e.g. spectral moment of order 4 from the distance/detour matrix). We use Mordred in the present work because it is open source, and we found no appreciable difference in predictive performance between it and Dragon (data not shown).
2.1.2 Molecular Fingerprints
Molecular fingerprints encode topological environments of a molecular graph into a fixed-length vector. An environment is a fragment of the molecular graph, and indicates the presence of a single atom type or a functional group, e.g. an alcohol or ester group. This approach to featurizing molecules is popular in the field of medicinal chemistry; traditionally, bit-based Morgan fingerprints have been used in chemoinformatics for retrieving nearest neighbor molecules using Tanimoto similarity Maggiora2014-va . When these environments are atom-centered and constructed via adjacent atoms, they are called Extended-Connectivity Fingerprints, or Morgan fingerprints Rogers2010-wo , and when they are constructed via paths through the graph they are path descriptor fingerprints Randic1999-fn . The more commonly used bit variant records the presence of a given environment (e.g., is there an ester in this molecule?), while the count variant records the number of instances of a given environment (e.g. how many ester groups are there in this molecule?). This information is hashed into a fixed-length vector. There are two tunable parameters: max topological radius and fingerprint vector size. The max topological radius determines the largest fragment which the fingerprint can represent. Fingerprint vector size controls how likely a hash collision is. We tune both of these parameters to maximize predictive performance.
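The hashing scheme can be sketched in a few lines of pure Python. This is an illustrative toy, not RDKit's actual environment enumeration; the fragment strings are hypothetical stand-ins for atom-centered environments.

```python
# Toy sketch of hashed fingerprinting: each "environment" (here, a fragment
# string) is hashed into a fixed-length vector. The bit variant records
# presence; the count variant records multiplicity.

def hashed_fingerprint(environments, n_bits=16, counted=False):
    fp = [0] * n_bits
    for env in environments:
        idx = hash(env) % n_bits  # collisions become likelier as n_bits shrinks
        if counted:
            fp[idx] += 1          # count variant: how many times does it occur?
        else:
            fp[idx] = 1           # bit variant: does it occur at all?
    return fp

# Hypothetical environments, e.g. two alcohol-like fragments and two C-C bonds.
envs = ["C-O-H", "C=O", "C-C", "C-C", "C-O-H"]
bfp = hashed_fingerprint(envs, counted=False)
cfp = hashed_fingerprint(envs, counted=True)
assert sum(cfp) == len(envs)             # counts preserve multiplicity
assert all(b in (0, 1) for b in bfp)     # bits only record presence
```

The count variant retains strictly more information than the bit variant, at the cost of a slightly larger downstream feature space.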
In our baseline experiments, we explicitly compare bit-based path descriptor fingerprints (bFP) and count-based Morgan fingerprints (cFP). The cheminformatics package RDKit was used to generate both types of fingerprints rdkit . Molecular properties are typically predicted using models such as random forests or support-vector machines, so we use random forests as the predictive model for both the bFP and cFP features.
3 Graph Neural Networks
Most machine learning models require regularly-shaped inputs (e.g. a grid of pixels, or a vector of numbers). Recently, Graph Neural Networks (GNNs) have enabled irregularly-shaped inputs, such as graphs, to be used directly in machine learning applications Wu2019 . Fields of use include predicting friendships in social network graphs, citation networks in academic literature, and most germane for this work, classification and regression tasks in chemistry Wu2018-yg .
3.1 Graph Neural Networks for Predicting Molecular Properties
By viewing atoms as nodes, and bonds as edges, we can interpret a molecule as a graph. GNNs are learnable permutation-invariant transformations on nodes and edges, which produce fixed length vectors that are further processed by a fully-connected neural network. GNNs can be considered learnable featurizers specialized to a task, in contrast with expert-crafted general features Gilmer2017-oq ; Duvenaud2015-ye . GNNs have achieved state-of-the-art results in the prediction of biophysical, biological, physical, and electronic quantum properties of molecules Wu2018-yg , and thus we believe their use in QSOR to be promising.
The GNN consists of message passing layers, each followed by a reduce-sum operation, followed by several fully connected layers. Architectural details can be found in the appendix (Hyperparameter Tuning and GNN Architecture). The final fully-connected layer has a number of outputs equal to the number of odor descriptors being predicted. Figure 2 illustrates our model. We implement these GNN models using the TensorFlow software package abadi2016tensorflow .
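The pipeline of message passing, reduce-sum pooling, and fully connected readout can be illustrated with a minimal NumPy sketch. The sum-of-neighbors message function and the layer dimensions here are simplified stand-ins, not the tuned architecture from the appendix.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_forward(adj, node_feats, w_msg, w_out):
    h = node_feats
    for w in w_msg:                              # message passing layers
        messages = adj @ h                       # sum features of neighboring atoms
        h = np.maximum(0.0, (h + messages) @ w)  # update node states with ReLU
    graph_vec = h.sum(axis=0)                    # reduce-sum: permutation-invariant
    return graph_vec @ w_out                     # fully connected readout

# Toy 3-atom "molecule": a path graph 0-1-2, with 4 features per atom.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = rng.normal(size=(3, 4))
w_msg = [rng.normal(size=(4, 4)) for _ in range(2)]
w_out = rng.normal(size=(4, 138))                # one output per odor descriptor
logits = gnn_forward(adj, x, w_msg, w_out)
assert logits.shape == (138,)
```

Because messages are summed over neighbors and node states are summed into one graph vector, relabeling the atoms leaves the output unchanged, which is the permutation invariance noted above.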
3.2 Learned Graph Neural Network Embeddings
All deep neural network architectures build representations of input data at their intermediate layers. The success of deep neural networks in prediction tasks relies on the quality of their learned representations, often referred to as embeddings Bengio2013-gf
. For instance, ImageNet embeddings are often used as-is to make predictions on unrelated image tasks donahue2014decaf ; sharif2014cnn , and with the advent of the BERT model and its cousins, the use of pre-trained embeddings is becoming common in natural language processing Devlin2018-rp . The structure of a learned embedding can lead to insights on the task or problem area, and the embedding can even be an object of study itself Yamins2016-nf ; coenen2019visualizing .
We save the activations of the penultimate fully connected layer as a fixed-dimension “odor embedding”. The GNN model must transform a molecule’s graph structure into a fixed-length representation that is useful for classification. Although the utility of learned neural network embeddings of molecules is still relatively unproven Gomez-Bombarelli2018-ar ; Zhavoronkov2019-pw , we anticipate that a GNN embedding learned on an odor prediction task may encode a semantically meaningful and useful organization of odorant molecules. We explicitly test the utility of this odor embedding in later sections of this work.
4 A Curated QSOR Dataset
We assembled an expert-labeled set of 5030 molecules from two separate sources: the GoodScents perfume materials database noauthor_undated-uu and the Leffingwell PMP 2001 database Leffingwell2005-uv . The datasets share overlapping molecules. Molecules are labeled with one or more odor descriptors by olfactory experts (usually a practicing perfumer), creating a multi-label prediction problem. GoodScents describes a list of 1–15 odor descriptors for each molecule (Figure 3A), whereas Leffingwell uses free-form text. Odor descriptors were canonicalized using the GoodScents ontology, and overlapping molecules inherited the union of both datasets’ odor descriptors. After filtering for odor descriptors with at least 30 representative molecules, 138 odor descriptors remained (Figure 3B), including an odorless descriptor. Some odor descriptors were extremely common, like fruity or green, while others were rare, like radish or bready. This dataset is composed of materials for perfumery, and so is biased away from malodorous compounds. There is also skew in label counts resulting from different levels of specificity, e.g. fruity will always be more common than pineapple.
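The merging and filtering steps above can be sketched as follows; the molecule names, descriptor sets, and the threshold of 2 (the paper uses 30) are toy stand-ins.

```python
from collections import Counter

# Hypothetical miniature versions of the two source databases.
goodscents = {"mol_a": {"fruity", "apple"}, "mol_b": {"musk"}}
leffingwell = {"mol_a": {"fruity", "pear"}, "mol_c": {"green", "fruity"}}

# Overlapping molecules inherit the union of both datasets' descriptors.
merged = {}
for source in (goodscents, leffingwell):
    for mol, labels in source.items():
        merged.setdefault(mol, set()).update(labels)
assert merged["mol_a"] == {"fruity", "apple", "pear"}

# Keep only descriptors with at least `min_count` representative molecules.
min_count = 2
counts = Counter(label for labels in merged.values() for label in labels)
kept = {label for label, c in counts.items() if c >= min_count}
merged = {mol: labels & kept for mol, labels in merged.items()}
assert kept == {"fruity"}
```

In the real pipeline the free-form Leffingwell text is first canonicalized against the GoodScents ontology before this union step.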
There is an extremely strong co-occurrence structure among odor descriptors that reflects a common-sense intuition of which odor descriptors are similar and dissimilar (Figure 3C). For example, there is a dairy cluster that includes the dairy, yogurt, milk, and cheese descriptors, indicating that they often co-occur as descriptors in individual molecules. There is also a fruity cluster with apple, pear, pineapple etc., and a bakery cluster that includes toasted, nutty, and cocoa, among others. Previous approaches in QSOR often train one model per odor descriptor. To take advantage of this correlation structure, we apply GNNs to predict all 138 odor descriptor tasks at once.
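The co-occurrence structure in Figure 3C follows directly from the multi-label matrix; a toy sketch with made-up labels:

```python
import numpy as np

# Y is a molecules x descriptors multi-label indicator matrix; the
# co-occurrence matrix is then simply Y^T Y.
descriptors = ["dairy", "cheese", "fruity", "apple"]
Y = np.array([
    [1, 1, 0, 0],   # a dairy/cheese molecule
    [1, 1, 0, 0],   # another dairy/cheese molecule
    [0, 0, 1, 1],   # a fruity/apple molecule
    [0, 0, 1, 0],   # a molecule labeled only fruity
])
cooc = Y.T @ Y      # cooc[i, j] = number of molecules labeled with both i and j
assert cooc[0, 1] == 2   # dairy co-occurs with cheese twice
assert cooc[0, 2] == 0   # dairy never co-occurs with fruity
```

Blocks of large off-diagonal entries in this matrix are exactly the descriptor clusters described above.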
5 QSOR Prediction Performance Benchmark
We benchmark classification performance for each odor descriptor in our dataset, as a multi-label classification problem. We compare the GNN model against random forest models (RF) and k-nearest neighbor models (KNN) on bit-based RDKit fingerprints (bFP), count-based Morgan fingerprints (cFP), and Mordred features. We report several metrics (Table 1), as each metric can highlight different performance characteristics. For the rest of the analysis, we primarily compare models on mean AUROC, averaged across odor descriptors; AUROC performance by label is shown in Figure 4. We trained non-graph-based fully-connected neural networks on cFP and bFP features, but their performance is indistinguishable from the RF model (data not shown).
Table 1: Classification performance by model. Each cell shows the mean across odor descriptors, with a bootstrapped 95% CI in brackets; the first metric column is AUROC.

| Model | AUROC | | |
|---|---|---|---|
| GNN | 0.894 [0.888, 0.902] | 0.379 [0.351, 0.398] | 0.360 [0.337, 0.372] |
| RF-Mordred | 0.850 [0.838, 0.860] | 0.311 [0.288, 0.333] | 0.306 [0.283, 0.319] |
| RF-bFP | 0.832 [0.821, 0.842] | 0.321 [0.293, 0.339] | 0.295 [0.272, 0.308] |
| RF-cFP | 0.845 [0.835, 0.854] | 0.315 [0.280, 0.332] | 0.295 [0.272, 0.311] |
| KNN-bFP | 0.791 [0.778, 0.803] | 0.328 [0.305, 0.347] | 0.323 [0.299, 0.335] |
| KNN-cFP | 0.796 [0.785, 0.809] | 0.333 [0.307, 0.351] | 0.316 [0.292, 0.327] |
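The macro-averaged AUROC used for model comparison can be computed directly via the rank-sum (Mann-Whitney) formulation; a self-contained sketch with toy labels and scores:

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC as the fraction of (positive, negative) pairs ranked correctly."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    # Ties between a positive and a negative score count as half a win.
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def mean_auroc(Y, S):
    """Macro average over odor descriptors (one AUROC per label column)."""
    return float(np.mean([auroc(Y[:, j], S[:, j]) for j in range(Y.shape[1])]))

# Toy multi-label data: 4 molecules x 2 descriptors.
Y = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
S = np.array([[0.9, 0.2], [0.1, 0.8], [0.8, 0.7], [0.95, 0.1]])
```

Macro averaging weights every descriptor equally, so rare descriptors like radish count as much as common ones like fruity.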
6 Evaluating Odor Embeddings
An odor embedding that reflects common-sense relationships between odors should have structure both globally and locally. Specifically, for global structure, odors that are perceptually similar should be nearby in an embedding. For local structure, individual molecules that have similar odor percepts should cluster together and thus be nearby in the embedding. We examine both of these properties in sequence.
6.1 Examining the Global Structure of a Learned Odor Space
We take our embedding representation of each data point from the penultimate-layer output of a trained GNN model. In the case of our best model, each molecule gets mapped to a 63-dimensional vector. Qualitatively, to visualize this space in 2D we use principal components analysis (PCA) to reduce its dimensionality. The distribution of all molecules sharing a similar label can be highlighted using kernel density estimation (KDE).
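The projection step can be sketched with a plain SVD-based PCA (the per-descriptor KDE overlays are omitted). The 63-dimensional embedding size matches our best model; the data here is random.

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 63))     # 100 molecules x 63-dim odor embedding

# PCA via SVD: center, then project onto the top-2 right singular vectors.
E_centered = E - E.mean(axis=0)
U, s, Vt = np.linalg.svd(E_centered, full_matrices=False)
coords_2d = E_centered @ Vt[:2].T  # (100, 2) coordinates for the 2D plot
assert coords_2d.shape == (100, 2)
```

For the real embedding, molecules sharing a descriptor such as musk would appear as a localized cloud in these 2D coordinates.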
The global structure of the embedding space is illustrated in Figure 5. In this example, we find that individual odor descriptors (e.g. musk, cabbage, lily and grape) tend to cluster in their own specific region. For odor descriptors that co-occur frequently, we find that the embedding space captures a hierarchical structure that is implicit in the odor descriptors. The clusters for odor labels jasmine, lavender and muguet are found inside the cluster for the broader odor label floral. If we examine the pairwise distances between all odors in our learned embedding, we see the block structure apparent in Figure 3C is reflected in the learned GNN embedding, but not in molecular fingerprints (Figure S1). Further, a dimensionally-reduced molecular fingerprint embedding does not share the same degree of organization and interpretability (Figure S3).
6.2 Evaluating the Local Structure of a Learned Odor Space
We tested whether molecules nearby in embedding space share perceptual similarity. Specifically, we asked whether molecules with small cosine distances in our GNN embeddings were perceptually similar. As a baseline, we used Tanimoto distance, which is equivalent to Jaccard distance on bFP features. Tanimoto distance is a commonly used metric for molecular database lookup in chemoinformatics. However, molecules with similar structural features do not always smell the same (Figure 1), so we anticipated that nearest neighbors using bFP features may not be as perceptually similar as nearest neighbors in our embedding space.
We trained a k-nearest neighbors (KNN) classifier to predict odor descriptors from GNN embeddings and bFPs. GNN embeddings (AUROC = 0.818, 95% CI [0.806, 0.830]) outperformed bFP (AUROC = 0.782, 95% CI [0.773, 0.797]). Inspecting the nearest neighbors found by each method (Figure 6) reveals that both methods yield molecules with similar structural features, but retrieval using GNN embeddings yields molecules that are more perceptually similar to the source molecule. This suggests that our representations are better able to cluster molecules by their odor perceptual similarity than bit-based fingerprints. Figure S2 and Table S4 show additional results comparing odor perceptual similarity and embedding distance between molecules using different distance metrics with bFPs and GNN embeddings.
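The two distances compared above can be written compactly; the embedding vectors and bit fingerprints below are arbitrary toy values.

```python
import numpy as np

def cosine_distance(a, b):
    """Angular distance between two continuous embedding vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def tanimoto_distance(fp1, fp2):
    """Jaccard distance on binary fingerprints (= Tanimoto on bit vectors)."""
    fp1, fp2 = np.asarray(fp1, bool), np.asarray(fp2, bool)
    return 1.0 - np.sum(fp1 & fp2) / np.sum(fp1 | fp2)

emb_a, emb_b = [0.2, 0.9, 0.1], [0.1, 0.8, 0.3]  # toy GNN embeddings
bfp_a, bfp_b = [1, 1, 0, 1], [1, 0, 0, 1]        # toy bit fingerprints
d_cos = cosine_distance(emb_a, emb_b)
d_tan = tanimoto_distance(bfp_a, bfp_b)
```

Tanimoto distance only sees which substructure bits overlap, while cosine distance on embeddings can reflect perceptual structure the model has learned beyond substructure overlap.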
We have shown that our embedding space has global and local structure that reflect the common-sense and psychophysical organization of odor descriptors. In the following sections, we show that this organization is useful, and that this embedding can be used to make predictions on adjacent, challenging tasks.
6.3 Transfer Learning to Previously-Unseen Odor Descriptors
An odor descriptor may be newly invented or refined (e.g., molecules with the pear descriptor might later be attributed more specific pear skin, pear stem, pear flesh, or pear core descriptors). A useful odor embedding would be able to transfer learn Pan2010-jk to this new descriptor, using only limited data. To approximate this scenario, we ablated one odor descriptor at a time from our dataset. Using the embeddings trained on the remaining odor descriptors as a featurization, we trained a random forest to predict the previously held-out odor descriptor. We used cFP and Mordred features as a baseline for comparison. The results are shown in Figure 7. GNN embeddings significantly outperform Morgan fingerprints and Mordred features on this task (non-overlapping 95% confidence intervals; see Table 1 for Mordred results), but as expected, still perform slightly worse than a GNN trained on the target odor. This indicates that GNN-based embeddings may generalize to predict new, but related, odors.
6.4 Generalizing to Other Olfaction Tasks: the DREAM Olfaction Prediction Challenge
The DREAM Olfaction Prediction Challenge Keller2017-vs was an open competition to build QSOR models on a dataset collected from untrained panelists. The DREAM dataset has several differences from our own. First, it posed a regression problem: panelists rated the amount that a molecule smelled of a particular odor descriptor on a scale from 1 to 100. Second, it had 476 molecules, compared to the roughly 5,000 in our dataset (although our dataset contains nearly all of the DREAM molecules). Third, the ratings were provided by a large panel of untrained individuals over a short period of time, whereas ours were gleaned from a small set of experts over many years. The DREAM challenge measured model performance as the Pearson correlation of model predictions with the mean reported intensity of each odor descriptor, which we show in Figure 8. Additional statistics and 95% confidence intervals are found in Figures S4, S5.
The winning DREAM model used random forest models with a combination of several sources of features, primarily Dragon and Morgan fingerprints, among other sources of information Keller2017-vs . Using only our embedding with a tuned random forest model, we achieve a mean Pearson's r competitive with this state-of-the-art model. While we achieve better mean performance on 13 tasks, when taking into account confidence intervals, we find the performance of the two models is indistinguishable for both correlation and regression scores (Figures S4, S5).
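The challenge's per-descriptor score is the ordinary Pearson correlation between predicted and mean reported ratings, sketched here with toy values:

```python
import numpy as np

def pearson_r(pred, target):
    """Pearson correlation, the DREAM challenge's per-descriptor score."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    p, t = pred - pred.mean(), target - target.mean()
    return float((p @ t) / (np.linalg.norm(p) * np.linalg.norm(t)))

# A perfectly linear prediction scores r = 1 regardless of scale or offset.
assert abs(pearson_r([1, 2, 3], [2, 4, 6]) - 1.0) < 1e-9
```

The challenge's summary number is then the mean of this score over the 21 odor attributes.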
Overall, this indicates that our QSOR modeling approach can generalize to adjacent perceptual tasks, and captures meaningful and useful structure about human olfactory perception, even when measured in different contexts, with different methodologies.
We assembled a novel and large dataset of expertly-labeled single-molecule odorants, and trained a graph neural network to predict the relationship between a molecule’s structure and its smell. We demonstrated state-of-the-art results on this QSOR task with respect to field-recognized baselines. Further, we showed that the embeddings capture meaningful structure on both a local and global scale. Finally, we showed that the embeddings learned by our model are useful in downstream tasks, which is currently a rare property of modern machine learning models and data in chemistry. Thus, we believe our model and its learned embeddings might be generally useful in the rational design of new odorants.
We thank D. Sculley, Steven Kearnes, Jasper Snoek, Emily Reif, Carey Radebaugh, and David Belanger for support, suggestions and useful discussions on the manuscript. We thank Aniket Zinzuwadia for discussion and who did preliminary work on the DREAM and GoodScents datasets for his senior thesis. We thank Bill Luebke of GoodScents and John Leffingwell of Leffingwell and Associates for their generosity in sharing their data for research use. We thank all of our colleagues in the Google Brain Cambridge office for creating and maintaining such a supportive and stimulating environment.
-  Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. MoleculeNet: a benchmark for molecular machine learning, 2018.
-  John B O Mitchell. Machine learning methods in chemoinformatics. Wiley Interdiscip. Rev. Comput. Mol. Sci., 4(5):468–481, September 2014.
-  Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. Computational capabilities of graph neural networks. IEEE Trans. Neural Netw., 20(1):81–102, January 2009.
-  Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. April 2017.
-  Karen J Rossiter. Structure-odor relationships. Chem. Rev., 96(8):3201–3240, 1996.
-  Chih-Ying Su, Karen Menuz, and John R Carlson. Olfactory perception: receptors, cells, and circuits. Cell, 139(1):45–59, October 2009.
-  John P McGann. Poor human olfaction is a 19th-century myth, 2017.
-  Daniel L K Yamins and James J DiCarlo. Using goal-driven deep learning models to understand sensory cortex, 2016.
-  David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Gómez-Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional Networks on Graphs for Learning Molecular Fingerprints. In Advances in Neural Information Processing Systems, pages 2215–2223, 2015.
-  Han Altae-Tran, Bharath Ramsundar, Aneesh S Pappu, and Vijay Pande. Low data drug discovery with One-Shot learning. ACS Cent Sci, 3(4):283–293, April 2017.
-  Clyde Fare, Lukas Turcani, and Edward O Pyzer-Knapp. Powerful, transferable representations for molecules through intelligent task selection in deep multitask networks. September 2018.
-  Charles Sell. Perfume in the Bible. Royal Society of Chemistry, July 2019.
-  J E Amoore, J W Johnston, Jr, and M Rubin. The stereochemical theory of odor. Sci. Am., 210:42–49, February 1964.
-  G Malcolm Dyson. Raman effect and the concept of odour. Perfum. Essent. Oil Rec, 28:13, 1937.
-  Susan S Schiffman. Physicochemical correlates of olfactory quality. Science, pages 112–117, 1974.
-  M Chastrette, D Cretin, and C el Aïdi. Structure-odor relationships: using neural networks in the estimation of camphoraceous or fruity odors and olfactory thresholds of aliphatic alcohols. J. Chem. Inf. Comput. Sci., 36(1):108–113, January 1996.
-  Sigma-Aldrich Corporation. Aldrich Chemistry 2012-2014: Handbook of Fine Chemicals. 2011.
-  Andrew Dravnieks and ASTM Committee E-18 on Sensory Evaluation of Materials and Products. Section E-18.04.12 on Odor Profiling. Atlas of odor character profiles. Astm Intl, 1985.
-  Steffen Arctander. Perfume and flavor chemicals:(aroma chemicals), volume 2. Allured Publishing Corporation, 1969.
-  C S Sell. On the unpredictability of odor, 2006.
-  Dagmar Stumpfe and Jürgen Bajorath. Exploring activity cliffs in medicinal chemistry. J. Med. Chem., 55(7):2932–2942, April 2012.
-  Andreas Keller, Richard C Gerkin, Yuanfang Guan, Amit Dhurandhar, Gabor Turu, Bence Szalai, Joel D Mainland, Yusuke Ihara, Chung Wen Yu, Russ Wolfinger, Celine Vens, Leander Schietgat, Kurt De Grave, Raquel Norel, DREAM Olfaction Prediction Consortium, Gustavo Stolovitzky, Guillermo A Cecchi, Leslie B Vosshall, and Pablo Meyer. Predicting human olfactory perception from chemical features of odor molecules. Science, 355(6327):820–826, February 2017.
-  Andrea Mauri, Viviana Consonni, Manuela Pavan, and Roberto Todeschini. Dragon software: An easy approach to molecular descriptor calculations. Match, 56(2):237–248, 2006.
-  David Rogers and Mathew Hahn. Extended-connectivity fingerprints. J. Chem. Inf. Model., 50(5):742–754, May 2010.
-  Ngoc Tran, Daniel Kepple, Sergey A Shuvaev, and Alexei A Koulakov. DeepNose: Using artificial neural networks to represent the space of odorants. November 2018.
-  Yuji Nozaki and Takamichi Nakamoto. Predictive modeling for odor character of a chemical using machine learning combined with natural language processing. PLoS One, 13(6):e0198475, June 2018.
-  E Darío Gutiérrez, Amit Dhurandhar, Andreas Keller, Pablo Meyer, and Guillermo A Cecchi. Predicting natural language descriptions of mono-molecular odorants. Nat. Commun., 9(1):4979, November 2018.
-  Günther Ohloff and Wilhelm Pickenhagen. Scent and Chemistry. Wiley, January 2012.
-  Vladimir Svetnik, Andy Liaw, Christopher Tong, J Christopher Culberson, Robert P Sheridan, and Bradley P Feuston. Random forest: a classification and regression tool for compound classification and QSAR modeling. J. Chem. Inf. Comput. Sci., 43(6):1947–1958, November 2003.
-  Hirotomo Moriwaki, Yu-Shi Tian, Norihito Kawashita, and Tatsuya Takagi. Mordred: a molecular descriptor calculator. J. Cheminform., 10(1):4, February 2018.
-  Gerald Maggiora, Martin Vogt, Dagmar Stumpfe, and Jürgen Bajorath. Molecular similarity in medicinal chemistry. J. Med. Chem., 57(8):3186–3204, April 2014.
-  Milan Randić and Subhash C Basak. Optimal molecular descriptors based on weighted path numbers. J. Chem. Inf. Comput. Sci., 39(2):261–266, March 1999.
-  RDKit: Open-source cheminformatics. http://www.rdkit.org.
-  Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. CoRR, abs/1901.00596, 2019.
-  Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016.
-  Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: a review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1798–1828, August 2013.
-  Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In International conference on machine learning, pages 647–655, 2014.
-  Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 806–813, 2014.
-  Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. October 2018.
-  Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, and Martin Wattenberg. Visualizing and measuring the geometry of bert. arXiv preprint arXiv:1906.02715, 2019.
-  Rafael Gómez-Bombarelli, Jennifer N Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Alán Aspuru-Guzik. Automatic chemical design using a Data-Driven continuous representation of molecules. ACS Cent Sci, 4(2):268–276, February 2018.
-  Alex Zhavoronkov, Yan A Ivanenkov, Alex Aliper, Mark S Veselov, Vladimir A Aladinskiy, Anastasiya V Aladinskaya, Victor A Terentiev, Daniil A Polykovskiy, Maksim D Kuznetsov, Arip Asadulaev, Yury Volkov, Artem Zholus, Rim R Shayakhmetov, Alexander Zhebrak, Lidiya I Minaeva, Bogdan A Zagribelnyy, Lennart H Lee, Richard Soll, David Madge, Li Xing, Tao Guo, and Alán Aspuru-Guzik. Deep learning enables rapid identification of potent DDR1 kinase inhibitors. Nat. Biotechnol., 37(9):1038–1040, September 2019.
-  The good scents company - flavor, fragrance, food and cosmetics ingredients information. http://www.thegoodscentscompany.com/. Accessed: 2019-9-4.
-  John C Leffingwell. Leffingwell & associates, 2005.
-  S J Pan and Q Yang. A survey on transfer learning. IEEE Trans. Knowl. Data Eng., 22(10):1345–1359, October 2010.
-  Piotr Szymański and Tomasz Kajdanowicz. A network perspective on stratification of Multi-Label data. April 2017.
Hyperparameter Tuning and GNN Architecture
We consider two types of GNNs: Message Passing Neural Networks (MPNN) and Graph Convolution Networks (GCN). With both variants, we utilize a shared trunk that consists of message passing layers, followed by a reduce-sum operation, followed by several fully connected layers.
For the GCN and MPNN, we optimized the hyperparameters of our model using 5-fold cross-validation in our training set of 4,000 molecules, and tuned 30 hyperparameters (including learning rate, momentum, architecture depth & width, etc.) using 500 trials of random search. Each model fit took less than 1 hour on a Tesla P100. We present results for the model with the highest mean AUROC on the cross-validation set.
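The tuning loop can be sketched as follows. Here `train_and_score` is a placeholder for fitting a GNN on 4 folds and scoring mean AUROC on the fifth, and the search space shown is a hypothetical subset of the 30 hyperparameters tuned.

```python
import random

# Hypothetical search space: each entry maps a name to a sampler.
space = {
    "learning_rate": lambda rng: 10 ** rng.uniform(-4, -2),
    "depth": lambda rng: rng.randint(2, 6),
    "width": lambda rng: rng.choice([16, 32, 64, 128]),
}

def random_search(train_and_score, n_trials=500, seed=0):
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        params = {name: sample(rng) for name, sample in space.items()}
        score = train_and_score(params)  # stand-in for mean CV AUROC
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy stand-in objective that prefers depth 4 and learning rates near 1e-3.
def toy_objective(p):
    return -abs(p["depth"] - 4) - abs(p["learning_rate"] - 1e-3)

best, score = random_search(toy_objective, n_trials=200)
assert best["depth"] == 4
```

Random search is a common default here because, with 30 loosely interacting hyperparameters, it explores the space more evenly per trial than a grid.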
We found that MPNNs and GCNs perform similarly. Both significantly outperform all baseline models. Because the two perform similarly, and GCNs are architecturally simpler, the analysis of GNN results in this work is reported on the GCN model.
| Component | GCN | MPNN |
|---|---|---|
| Message passing layers | Concatenation message type, 4 layers of dim [15, 20, 27, 36], selu activation, max graph pooling | Edge-conditioned matrix multiply message type, 5 layers of dim 43, GRU update at each layer |
| Readout | Global sum pooling with softmax, 175 dim, one per MP layer and summed | Global sum pooling with softmax, 197 dim, one per MP layer with residual connections and summed |
| Fully connected layers | 2 layers of dim [96, 63] with relu, batchnorm, dropout of 0.47 | 3 layers of dim 392 with relu, batchnorm, dropout of 0.12 and l1/l2 regularization |
| Prediction | Multi-headed sigmoid, 138 tasks | Multi-headed sigmoid, 138 tasks |
| Training | Weighted cross-entropy loss, optimized with Adam, learning rate decay with warm restarts, 300 epochs | Weighted cross-entropy loss, optimized with Adam, learning rate decay with warm restarts, 300 epochs |
Performance of the two GNN variants (mean across odor descriptors, bootstrapped 95% CI in brackets; the first metric column is AUROC):

| Model | AUROC | | | |
|---|---|---|---|---|
| MPNN | 0.890 [0.882, 0.898] | 0.379 [0.352, 0.399] | 0.387 [0.366, 0.408] | 0.362 [0.335, 0.375] |
| GCN | 0.894 [0.888, 0.902] | 0.379 [0.351, 0.398] | 0.390 [0.365, 0.412] | 0.360 [0.337, 0.372] |
For our RF baseline methods we tuned an exhaustive space of configurations of fingerprinting methods (bits, radius, counted/binary, RDKit/Morgan) and RF hyperparameters. The RDKit software was used to calculate all features.
For the KNN baseline methods we also tuned fingerprinting options along with the number of neighbors. This resulted in a binary RDKit fingerprint of 4096 bits with radius 6. The optimal k was found with an elbow analysis over values up to 100 using the Jaccard distance. KNN predictions are weighted by distance.
Since our multilabel problem had highly unbalanced labels, we used iterative stratification of the second order to build our train/test/validation splits. Iterative stratification is an iterative procedure for stratified sampling that attempts to preserve label ratios up to a given order. For the second order, this means preserving ratios of pairs of labels in each split.
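A first-order sketch of the greedy procedure follows (the second-order variant used here tracks pairs of labels rather than single labels); the label sets are toy data.

```python
import random
from collections import defaultdict

def iterative_stratification(labels, n_folds=3, seed=0):
    """Greedy first-order iterative stratification.

    labels: one set of labels per example. Returns a fold index per example.
    """
    rng = random.Random(seed)
    n = len(labels)
    # Desired number of examples of each label per fold.
    label_counts = defaultdict(int)
    for ls in labels:
        for l in ls:
            label_counts[l] += 1
    desired = [defaultdict(float) for _ in range(n_folds)]
    for f in range(n_folds):
        for l, c in label_counts.items():
            desired[f][l] = c / n_folds
    assignment = [None] * n
    remaining = set(range(n))
    while remaining:
        counts = defaultdict(list)
        for i in remaining:
            for l in labels[i]:
                counts[l].append(i)
        if not counts:  # label-free examples: assign round-robin
            for j, i in enumerate(sorted(remaining)):
                assignment[i] = j % n_folds
            break
        # Handle the rarest remaining label first, since it constrains most.
        rare = min(counts, key=lambda l: len(counts[l]))
        for i in counts[rare]:
            f = max(range(n_folds), key=lambda f: desired[f][rare])
            assignment[i] = f  # send to the fold that most needs this label
            for l in labels[i]:
                desired[f][l] -= 1
            remaining.discard(i)
    return assignment

labels = [{"a"}, {"a"}, {"a"}, {"b"}, {"b"}, {"b"}]
folds = iterative_stratification(labels, n_folds=3)
assert sorted(folds.count(f) for f in range(3)) == [2, 2, 2]
```

With an ordinary random split, a descriptor with only 30 representative molecules could easily end up absent from the test set; this procedure guards against that.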
Confidence intervals were constructed by bootstrap resampling. We repeatedly resampled the test dataset with replacement, and computed AUROC on each sample. The training set and model remained fixed. We report the [2.5, 97.5] percentile boundaries to construct a 95% CI.
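The procedure can be written compactly; here `metric` is a stand-in (toy accuracy rather than AUROC) and the labels and scores are synthetic.

```python
import numpy as np

def bootstrap_ci(y_true, y_score, metric, n_boot=1000, seed=0):
    """Percentile-bootstrap 95% CI for a metric on a fixed test set."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample test set with replacement
        stats.append(metric(y_true[idx], y_score[idx]))
    return np.percentile(stats, [2.5, 97.5])  # boundaries of the 95% CI

y = np.array([1, 0, 1, 1, 0, 0, 1, 0] * 10)
s = np.array([0.9, 0.2, 0.7, 0.4, 0.45, 0.1, 0.8, 0.3] * 10)
accuracy = lambda yt, ys: np.mean((ys > 0.5) == yt)
lo, hi = bootstrap_ci(y, s, accuracy)
assert lo <= hi
```

Only the test set is resampled; the model and training set stay fixed, so the interval reflects test-set sampling variability rather than training variability.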
Table of Per-Descriptor Results
AUROC and AUPRC performance results by descriptor for the GNN model and the Random Forest model with counting fingerprint features
| Embedding space / distance metric | Kendall's τ |
|---|---|
| GCN embeddings with Euclidean distance | 0.280 |
| GCN embeddings with cosine distance | 0.235 |
| Morgan bit-FP with Jaccard distance | 0.187 |
| Morgan bit-FP with cosine distance | 0.180 |