Introduction
Using machine learning to predict molecular structure properties is a challenging problem [7, 3]. While the governing equations (e.g., the Schrödinger equation) are difficult and computationally expensive to solve, the fact that an underlying model exists is appealing for machine-learning techniques. However, the problem is difficult from a technical point of view: the space of molecules is discrete and non-numerical, so how best to represent molecules and atoms for machine-learning problems is still an open question. Despite the numerous ways to represent molecules, such as the methods introduced in [18, 1], all existing representations suffer from a few shortcomings: 1) they are discrete, 2) they are lengthy, 3) the mapping is non-injective, and 4) they are not machine-readable.
Here, we propose a new method that borrows its main idea from [5] and [12] and overcomes all the aforementioned shortcomings. Our method, which takes the graphical structure of the molecule as input, consists of a variational framework with a side predictor that helps prune the structure of the latent space. An inner-product decoder then transfers samples of the latent space into meaningful adjacency tensors. To compare with the main benchmark, a text-based encoding of molecules [9], we performed two experiments, on the QM9 [16, 15] and ZINC [11] datasets; both show the success of the method. This work presents preliminary results of Graph VAE; further experiments and comparisons are left to future work.
Method
Molecules and Graphs
A molecule can be represented by an undirected graph $G = (V, E)$, with nodes (atoms) and labeled edges (bonds) $(v_i, e, v_j)$, where $e \in \{1, \dots, K\}$ is the edge type. Since we focus on small molecules with four bond types, $K = 4$. The bonds form an adjacency tensor $A \in \{0,1\}^{N \times N \times K}$, and a node-feature matrix $X$ carries additional information about each node. These two tensors together represent a molecular structure.
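As a concrete illustration, a small molecule can be packed into the adjacency tensor and node-feature matrix described above. This is a minimal sketch: the toy molecule, the bond-type encoding, and the one-hot atom vocabulary are illustrative assumptions, not the paper's exact featurization.

```python
import numpy as np

# Illustrative toy molecule: a formaldehyde-like fragment with atoms C, O, H, H.
# Assumed bond-type encoding: 0=single, 1=double, 2=triple, 3=aromatic.
atoms = ["C", "O", "H", "H"]
bonds = [(0, 1, 1),   # C=O double bond
         (0, 2, 0),   # C-H single bond
         (0, 3, 0)]   # C-H single bond

N, K = len(atoms), 4                  # four bond types, as in the paper
A = np.zeros((N, N, K))               # adjacency tensor
for i, j, e in bonds:                 # undirected graph: fill both directions
    A[i, j, e] = A[j, i, e] = 1.0

atom_types = ["C", "N", "O", "H"]     # assumed vocabulary for one-hot features
X = np.zeros((N, len(atom_types)))    # node-feature matrix
for idx, a in enumerate(atoms):
    X[idx, atom_types.index(a)] = 1.0
```

Note that each undirected bond sets two symmetric entries of the tensor, so `A[:, :, e]` is a symmetric adjacency matrix for every edge type.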
Variational Autoencoders
To help ensure that points in the latent space correspond to valid, realistic molecules, and to minimize the dead areas of the latent space, we chose to use a variational autoencoder (VAE). To further ensure that the outputs of the decoder correspond to valid molecules, we employed the open-source cheminformatics suite RDKit to validate the chemical structure of output molecules in terms of atomic valence. All invalid outputs are discarded. It is necessary to mention that the ordering of the nodes is assumed to be fixed.
VAE and Side Prediction
To better learn the graph structure of the molecules, the encoder part of the VAE consists of GCN layers. The same relational update as in [17] has been employed, which can be formulated as

$$h_i^{(l+1)} = \phi\Big(\sum_{e=1}^{K} \sum_{j \in \mathcal{N}_i^e} W_e^{(l)} h_j^{(l)}\Big),$$

where $\mathcal{N}_i^e$ denotes the set of nodes connected to node $i$ through edge type $e$, $W_e^{(l)}$ is the filter parameter of layer $l$, and $\phi$ is the ELU activation function [2]. Since we are focusing on small molecules, we applied three layers of GCN in our encoder model to gather information from the 3-hop neighborhood of each atom. The encoder consists of two three-layer GCNs, one for the mean and one for the covariance, which share the filters of their first two layers. The encoding and sampling scheme can be formulated as

$$\mu = \mathrm{GCN}_\mu(X, A), \qquad \log\sigma = \mathrm{GCN}_\sigma(X, A), \qquad z_i \sim \mathcal{N}\big(\mu_i, \mathrm{diag}(\sigma_i^2)\big),$$

where each $\mathrm{GCN}(X, A) = \tilde{A}\,\phi\big(\tilde{A}\,\phi(\tilde{A} X W_0) W_1\big) W_2$, $\tilde{A}$ is the normalized adjacency tensor, and $W_l$ is the filter parameter of each layer.
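The relational update above can be sketched in a few lines of NumPy. This is a simplified sketch under stated assumptions: no self-loop term, no degree normalization, and random shapes and weights chosen only for illustration.

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU activation, as used in the encoder
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def rgcn_layer(H, A, W):
    """One relational GCN update.
    H: (N, F_in) node features; A: (N, N, K) adjacency tensor;
    W: (K, F_in, F_out), one filter per edge type."""
    out = np.zeros((H.shape[0], W.shape[2]))
    for e in range(A.shape[2]):           # sum messages over edge types
        out += A[:, :, e] @ H @ W[e]      # aggregate neighbors of type e
    return elu(out)

rng = np.random.default_rng(0)
N, K, F_in, F_out = 5, 4, 8, 16
A = rng.integers(0, 2, size=(N, N, K)).astype(float)
H = rng.normal(size=(N, F_in))
H1 = rgcn_layer(H, A, rng.normal(size=(K, F_in, F_out)) * 0.1)
```

Stacking three such layers, as the paper does, lets each atom's embedding aggregate information from its 3-hop neighborhood.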
Finally, as suggested in [12], we use the simplest form of the decoder, which can be seen as a graph deconvolution network. The output of the decoder is simply the inner product between the latent variables:

$$\hat{A} = \mathrm{sigmoid}(Z Z^{\top}).$$

For the side-prediction part, we attach a simple regression model in the form of a multi-layer perceptron (MLP) that predicts the properties from the latent-space representation. The input of the side predictor is a vector obtained through a pooling mechanism over the latent representation:

$$z_{\mathrm{mol}} = W_p Z,$$

where $W_p$ is the pooling weight matrix and $Z$ is the output of the encoder.
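The inner-product decoder and the pooling step can be sketched as follows. The sigmoid-of-inner-product decoder follows [12]; the shape of the pooling weight matrix and the latent dimensions are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
N, D = 6, 16
Z = rng.normal(size=(N, D))          # latent node embeddings from the encoder

# Inner-product decoder: edge probabilities from latent similarity
A_hat = sigmoid(Z @ Z.T)             # (N, N) reconstructed adjacency

# Pooling: a learned weighted sum over nodes yields one molecule-level vector
W_pool = rng.normal(size=(1, N))     # pooling weight matrix (assumed shape)
z_mol = (W_pool @ Z).ravel()         # vector fed to the side predictor
```

Because $Z Z^{\top}$ is symmetric, the reconstructed adjacency is automatically symmetric, matching the undirected molecular graph.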
Finally, the autoencoder is trained jointly on the reconstruction task and a property-prediction task; the joint loss function is the summation of the two losses:

$$\mathcal{L} = \mathcal{L}_{\mathrm{VAE}} + \mathcal{L}_{\mathrm{pred}},$$

where $\mathcal{L}_{\mathrm{VAE}}$ combines the reconstruction and KL terms and $\mathcal{L}_{\mathrm{pred}}$ is the regression loss of the side predictor.
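A minimal sketch of such a joint objective is below: binary cross-entropy reconstruction of the adjacency, the Gaussian KL term, and a squared-error side loss. The specific weighting `lam` and the exact form of each term are assumptions, not the paper's stated hyperparameters.

```python
import numpy as np

def joint_loss(A, A_hat, mu, log_sigma, y_true, y_pred, lam=1.0):
    """VAE loss (reconstruction + KL) plus the side-prediction loss."""
    eps = 1e-9
    # Binary cross-entropy between true and reconstructed adjacency
    recon = -np.mean(A * np.log(A_hat + eps)
                     + (1 - A) * np.log(1 - A_hat + eps))
    # KL(q(z|X,A) || N(0, I)) for a diagonal Gaussian posterior
    kl = -0.5 * np.mean(1 + 2 * log_sigma - mu**2 - np.exp(2 * log_sigma))
    side = np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)
    return recon + kl + lam * side

rng = np.random.default_rng(2)
A = rng.integers(0, 2, size=(5, 5)).astype(float)
A_hat = np.clip(rng.random((5, 5)), 0.05, 0.95)
mu = rng.normal(size=(5, 16))
log_sigma = rng.normal(size=(5, 16)) * 0.1
loss = joint_loss(A, A_hat, mu, log_sigma, y_true=1.2, y_pred=0.9)
```

With a perfect reconstruction, a standard-normal posterior, and an exact side prediction, all three terms vanish, which is a quick sanity check on the implementation.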
Experiments
We performed two experiments to show the usefulness of the continuous representation. In the first experiment, we focus on property prediction and the generation of valid molecules. In the second, we use the continuous representation to propose a new metric for measuring molecular similarity.
Property Prediction
Table 1: Side-property prediction accuracy and percentage of valid generated molecules.

| Side property    | Valid outcome | Sol   | Synt  | Drug-likeness |
|------------------|---------------|-------|-------|---------------|
| Solubility       | 75.3          | 97.03 | 88.7  | 84.2          |
| Synthesizability | 73.0          | 89.8  | 98.21 | 86.3          |
| Drug-likeness    | 74.6          | 91.0  | 90.7  | 95.11         |
Using a subset of the QM9 dataset [15] as the training set, we extract 48,000 molecules covering a broad range of structures; each molecule in the training set has up to 20 atoms. The training objective of the side predictor was set to one of solubility, drug-likeness, and synthesizability. We then employ the continuous representation learned by each network to predict the two properties unseen by it. The performance of each model, along with the percentage of validly generated molecules, is summarized in Table 1. To check the validity of an output, we only check the atomic valence. As shown in Table 1, the accuracy for each property is comparable to the state-of-the-art property predictions reported in [8]. Although Graph VAE does not outperform the predictions of [8], this shows that using a property as a heuristic to prune the latent space can help with predicting other molecular properties.
Molecular Similarity Measure
Numerous similarity and distance measures are widely used to calculate the similarity or dissimilarity between two samples. Since these measures focus on the 2-dimensional representation rather than the 3-dimensional structure, our model, as a 2D structure-aware representation, is well suited to serve as a similarity measure. The metric we define to capture similarity is the normalized Euclidean distance between the latent representations of two molecules after the pooling operation. Here we compare three well-known similarity measures with our technique and also with the method introduced in [9]. That method, which takes the SMILES representation of the molecules as input, employs a VAE with a side predictor; both the encoder and decoder of the VAE are based on an RNN sequence-to-sequence model. Although all the graphical information of a molecule is encoded within its SMILES representation, inferring the graphical structure (e.g., the adjacency tensor) from a SMILES string is a laborious, rule-based process. Despite the numerous techniques built on the SMILES representation of molecules [6, 10, 4, 14, 13], it has been shown that it is more efficient to take advantage of graph structures and employ GCNs to process molecular structures.
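The distance we use can be sketched as follows. This is a sketch: the unit-length normalization, the vector length of 64, and the random vectors standing in for pooled latent representations are assumptions; the conversion from distance to the similarity scores reported in Table 2 is not reproduced here.

```python
import numpy as np

def latent_distance(z_a, z_b):
    """Normalized Euclidean distance between pooled latent vectors.
    Normalizing each vector to unit length first is an assumption."""
    z_a = z_a / (np.linalg.norm(z_a) + 1e-12)
    z_b = z_b / (np.linalg.norm(z_b) + 1e-12)
    return np.linalg.norm(z_a - z_b)

rng = np.random.default_rng(3)
z_aspirin = rng.normal(size=64)   # stand-in for a pooled latent vector
z_other = rng.normal(size=64)
d = latent_distance(z_aspirin, z_other)
```

After normalization the distance is bounded in [0, 2], with 0 for identical latent directions, which makes values comparable across molecule pairs.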
Table 2: Similarity between aspirin and four drugs under different measures.

| Metric         | Amphetamine | Ecstasy (MDMA) | Nicotine | Caffeine |
|----------------|-------------|----------------|----------|----------|
| Tanimoto       | 0.398       | 0.324          | 0.229    | 0.258    |
| Dice           | 0.569       | 0.490          | 0.373    | 0.410    |
| Cosine         | 0.607       | 0.490          | 0.374    | 0.434    |
| Graph VAE      | 0.363       | 0.199          | 0.147    | 0.176    |
| SMILES VAE [9] | 0.724       | 0.489          | 0.340    | 0.321    |
Here, we chose aspirin as a sample drug and compared its similarity to four different drugs under four similarity measures. We compare the performance of our technique with [9], which uses a similar approach but operates on a text representation of molecules. Our experiment shows that the graph-based hidden representation carries more information than text alone. Table 2 summarizes the results of the similarity-measure experiment. As shown in Table 2, our metric aligns well with the other well-known metrics, which is further evidence of the applicability of our model.
Experiment Details
GVAE consists of two GCNs for the encoder, a pooling mechanism, and a multi-layer perceptron for the side prediction. Both GCNs are three-layer networks with filter matrices $W_0$, $W_1$, and $W_2$ of sizes 32×32, 32×32, and 32×16, respectively. The pooling weight matrix is of size 1×64 and outputs a vector of length 64 to represent the whole molecule. A two-layer MLP with 32 and 1 hidden units performs the regression task.
In Table 2, we use our own implementation of the SMILES VAE. Both GVAE and the SMILES VAE are trained on a dataset of 70,000 molecules randomly selected from ZINC.
In Table 2, all measures except the continuous representations are calculated with the same fingerprinting algorithm, which identifies and hashes topological paths in the molecule (e.g., along bonds) and then uses them to set bits in a fingerprint of length 2048.
The set of parameters used by the algorithm is: minimum path size, 1 bond; maximum path size, 7 bonds; number of bits set per hash, 2; target on-bit density, 0.3.
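For reference, the three classical measures in Table 2 can be computed from such bit-vector fingerprints as follows. This is a sketch on random toy bit vectors at roughly the stated on-bit density; the path-based fingerprinting itself is not reproduced here.

```python
import numpy as np

def tanimoto(a, b):
    # |A ∩ B| / |A ∪ B|
    return np.sum(a & b) / np.sum(a | b)

def dice(a, b):
    # 2|A ∩ B| / (|A| + |B|)
    return 2 * np.sum(a & b) / (np.sum(a) + np.sum(b))

def cosine_bits(a, b):
    # |A ∩ B| / sqrt(|A| |B|) for binary vectors
    return np.sum(a & b) / np.sqrt(np.sum(a) * np.sum(b))

rng = np.random.default_rng(4)
fp1 = rng.random(2048) < 0.3      # toy 2048-bit fingerprints at ~0.3 density
fp2 = rng.random(2048) < 0.3
t, d, c = tanimoto(fp1, fp2), dice(fp1, fp2), cosine_bits(fp1, fp2)
```

For any pair of bit vectors, Tanimoto ≤ Dice ≤ Cosine, which explains the consistent ordering of the first three rows in Table 2.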
Conclusion
We proposed a generative model through which we can find a continuous representation for molecules. As shown in the experiments section, this technique can be used in different cheminformatics tasks such as drug design, drug discovery, and property prediction. As future work, one can consider attention-based graph convolutions and more complicated decoders.
References

[1] (2017) SMILES enumeration as data augmentation for neural network modeling of molecules. arXiv preprint arXiv:1703.07076.
[2] (2015) Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289.
[3] (2019) A graph-convolutional neural network model for the prediction of chemical reactivity. Chemical Science 10 (2), pp. 370–377.
[4] (2018) DeepSMILES: an adaptation of SMILES for use in machine-learning of chemical structures.
[5] (2015) Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems 28, pp. 2224–2232.
[6] (2019) Deep learning for molecular design: a review of the state of the art. Molecular Systems Design & Engineering.
[7] (2018) Deep learning for chemical reaction prediction. Molecular Systems Design & Engineering.
[8] (2017) Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 1263–1272.
[9] (2018) Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science 4 (2), pp. 268–276.
[10] (2018) Convolutional neural network based on SMILES representation of compounds for detecting chemical motif. BMC Bioinformatics 19 (19), pp. 526.
[11] (2012) ZINC: a free tool to discover chemistry for biology. Journal of Chemical Information and Modeling 52 (7), pp. 1757–1768.
[12] (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
[13] (2019) SELFIES: a robust representation of semantically constrained graphs with an example application in chemistry. arXiv preprint arXiv:1905.13741.
[14] (2017) DeepCCI: end-to-end deep learning for chemical-chemical interaction prediction. In Proceedings of the 8th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, pp. 203–212.
[15] (2014) Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data 1.
[16] (2012) Enumeration of 166 billion organic small molecules in the chemical universe database GDB-17. Journal of Chemical Information and Modeling 52 (11), pp. 2864–2875.
[17] (2018) Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pp. 593–607.
[18] (2018) MoleculeNet: a benchmark for molecular machine learning. Chemical Science 9 (2), pp. 513–530.