The principal goal of the drug design process is to find new chemical compounds that are able to modulate the activity of a given target (typically a protein) in a desired way.1 However, finding such molecules in the high-dimensional chemical space of all molecules without any prior knowledge is nearly impossible. In silico methods have been introduced to leverage the existing chemical, pharmacological and biological knowledge, thus forming a new branch of science – computer-aided drug design (CADD).2, 3 Computer methods are nowadays applied at every stage of drug design pipelines2 – from the search for new, potentially active compounds,4 through the optimization of their activity and physicochemical profile5 and the simulation of their scheme of interaction with the target protein,6 to assisting in the planning of synthesis and the evaluation of its difficulty.7
The recent advancements in deep learning have encouraged its application in CADD.8 The two main approaches are: virtual screening, which uses discriminative models to screen commercial databases and classify molecules as likely active or inactive; and de novo design, which uses generative models to propose novel molecules that are likely to possess the desired properties. The former application has already proved to give outstanding results.9, 10, 11, 12
The latter use case is rapidly emerging, e.g. long short-term memory (LSTM) network architectures have been applied with some success.13, 14, 15, 16
In the center of our interest are the hit-to-lead and lead optimization phases of the compound design process. Their goals are to optimize the drug-like molecules identified in the previous steps in terms of the desired activity profile (increased potency towards a given target protein and inactivity towards off-target proteins) and of the physicochemical and pharmacokinetic properties. Optimizing a molecule with respect to multiple properties simultaneously remains a challenge.5 Nevertheless, some successful approaches to compound generation and optimization have been proposed.
Variational Autoencoders (VAE)17 can be used for the task of molecule generation. The first such models18 were based on the SMILES representation. Unfortunately, these models can generate invalid SMILES that do not correspond to any molecule. The introduction of grammars into the model improved the success rate of valid SMILES generation.19, 20 Maintaining chemical validity when generating new molecules became possible through VAEs realized directly on molecular graphs.21, 22
Generative Adversarial Networks (GANs)23 are an alternative architecture that has been applied to de novo drug design. GANs, together with Reinforcement Learning (RL), have recently been proposed as models that generate molecules with desired properties while promoting diversity. These models use representations based on SMILES,24 graph adjacency and annotation matrices,25 or graph convolutional policy networks.26 The major obstacle to the practical utility of these approaches is that the generated compounds can be difficult (or even impossible) to synthesize.
To address the problem of generating compounds difficult to synthesize, we introduce Mol-CycleGAN – a generative model based on CycleGAN.27 Given a starting molecule, it generates a structurally similar one with desired characteristics. The similarity between these molecules is important for two reasons. First, it leads to an easier synthesis of generated molecules, and second, such optimization of the selected property is less likely to spoil the previously optimized ones, which is important in the context of multiparameter optimization. We show that our model generates molecules that possess desired properties (note that by a molecular property we also mean binding affinity towards a target protein) while retaining their structural similarity to the starting compound. Moreover, thanks to employing a graph-based representation instead of SMILES, our algorithm always returns valid compounds.
We evaluate the model’s ability to perform structural transformations and molecular optimization. The former indicates that the model is able to do simple structural modifications such as a change in the presence of halogen groups or number of aromatic rings. In the latter, we aim to maximize penalized logP to assess the model’s utility for compound design. Penalized logP is chosen as it is a property often selected as a testing ground for molecule optimization models,22, 26 due to its relevance in the drug design process. In the optimization of penalized logP for drug-like molecules our model significantly outperforms previous results. To the best of our knowledge, Mol-CycleGAN is the first approach to molecule generation that uses the CycleGAN architecture.
2.1 Junction Tree Variational Autoencoder
JT-VAE22 (Junction Tree Variational Autoencoder) is a method based on VAE which operates on graph structures of compounds, in contrast to previous methods, which operate on the SMILES representation of molecules.18, 19, 20
The VAE models used for molecule generation share the encoder-decoder architecture. The encoder is a neural network used to calculate a continuous, high-dimensional representation of a molecule in the so-called latent space, whereas the decoder is another neural network used to decode a molecule from coordinates in the latent space. In VAEs the entire encoding-decoding process is stochastic (has a random component). In JT-VAE both the encoding and decoding algorithms use two components for representing the molecule: a junction-tree scaffold of molecular sub-components (called clusters) and a molecular graph.22 JT-VAE shows superior properties compared to SMILES-based VAEs, such as 100% validity of generated molecules.
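The stochastic encoding step can be illustrated with the standard VAE reparameterization trick. The linear "encoder" below is a toy stand-in, not the JT-VAE architecture; the weights and dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(features, W_mu, W_logvar):
    # Toy linear "encoder": maps a molecule feature vector to the
    # parameters (mean, log-variance) of a Gaussian in latent space.
    return features @ W_mu, features @ W_logvar

def sample_latent(mu, logvar):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
    # which is the random component of the encoding-decoding process.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

features = rng.standard_normal(8)
W_mu = 0.1 * rng.standard_normal((8, 4))
W_logvar = 0.1 * rng.standard_normal((8, 4))

mu, logvar = encode(features, W_mu, W_logvar)
z = sample_latent(mu, logvar)  # a stochastic encoding of the molecule
```

In Mol-CycleGAN (below) the deterministic mean `mu` of this distribution, rather than a random sample, is used as the molecule's latent point.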
Mol-CycleGAN is a novel method of performing compound optimization by learning from sets of molecules with and without the desired molecular property (denoted by the sets $Y$ and $X$). Our approach is to train a model to perform the transformation $G: X \rightarrow Y$ and then use this model to perform optimization of molecules. In the context of compound design, $X$ ($Y$) can be, e.g., the set of inactive (active) molecules.
To represent the sets $X$ and $Y$, our approach requires an embedding of molecules which is reversible, i.e. enables both encoding and decoding of molecules.
For this purpose we use the latent space of JT-VAE – a representation created by the neural network during the training process. This approach has the advantage that the distance between molecules (required to calculate the loss function) can be defined directly in the latent space. Moreover, molecular properties are easier to express on graphs than in the linear SMILES representation.28 One could try to formulate the CycleGAN model directly on the SMILES representation, but this raises the problem of defining a differentiable intermolecular distance, as the standard measures of similarity between molecules (e.g. Tanimoto similarity) are non-differentiable.
Our approach extends the CycleGAN framework27 to molecular embeddings in the latent space of JT-VAE.22 We represent each molecule as a point in the latent space, given by the mean of the variational encoding distribution.17 Our model works as follows (Fig. 1): (i) we start by defining the sets $X$ and $Y$ (e.g., inactive/active molecules); (ii) we introduce the mapping functions $G: X \rightarrow Y$ and $F: Y \rightarrow X$; (iii) we introduce the discriminator $D_X$ (and $D_Y$), which forces the generator $F$ (and $G$) to generate samples from a distribution close to the distribution of $X$ (or $Y$). The components $G$, $F$, $D_X$, and $D_Y$ are modeled by neural networks (see Workflow for technical details). The main idea of our approach to molecule optimization is to: (i) take the prior molecule $x$ without a specified feature (e.g. a specified number of aromatic rings, water solubility, activity) from set $X$, and compute its latent space embedding; (ii) use the generative neural network $G$ to obtain the embedding of molecule $G(x)$, which has this feature (as if the molecule came from set $Y$) but is also similar to the original molecule $x$; (iii) decode the latent space coordinates given by $G(x)$ to obtain the SMILES representation of the optimized molecule. Thereby, the method is applicable in lead optimization processes, as the generated compound remains structurally similar to the input molecule.
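The three steps above can be sketched end-to-end. Here `encode`, `generator_G`, and `decode` are hypothetical stand-ins for the JT-VAE encoder/decoder and the trained generator, kept deliberately trivial so the data flow is visible:

```python
import numpy as np

LATENT_DIM = 4

def encode(molecule_features):
    # Stand-in for the JT-VAE encoder: returns the mean of the
    # variational encoding distribution as the latent point.
    return np.asarray(molecule_features, dtype=float)[:LATENT_DIM]

def generator_G(x):
    # Stand-in for the trained generator G: X -> Y; here a fixed shift.
    return x + np.array([0.5, 0.0, -0.25, 0.1])

def decode(z):
    # Stand-in for the JT-VAE decoder: latent coordinates -> SMILES string.
    return f"SMILES(decoded from {np.round(z, 2)})"

x = encode([0.1, -0.3, 0.7, 0.2])   # (i) embed the starting molecule
gx = generator_G(x)                 # (ii) map it towards set Y
optimized_smiles = decode(gx)       # (iii) decode the optimized molecule
```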
To train the Mol-CycleGAN we use the following loss function:

$$L(G, F, D_X, D_Y) = L_{\mathrm{GAN}}(G, D_Y, X, Y) + L_{\mathrm{GAN}}(F, D_X, Y, X) + \lambda_1 L_{\mathrm{cyc}}(G, F) + \lambda_2 L_{\mathrm{identity}}(G, F)$$
and aim to solve

$$G^*, F^* = \arg\min_{G, F} \max_{D_X, D_Y} L(G, F, D_X, D_Y).$$
We use the adversarial loss introduced in LS-GAN:29

$$L_{\mathrm{GAN}}(G, D_Y, X, Y) = \tfrac{1}{2}\, \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\big[(D_Y(y) - 1)^2\big] + \tfrac{1}{2}\, \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[(D_Y(G(x)))^2\big],$$
which ensures that the generator $G$ (and $F$) generates samples from a distribution close to the distribution of $Y$ (or $X$).
The cycle consistency loss

$$L_{\mathrm{cyc}}(G, F) = \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\big[\lVert G(F(y)) - y \rVert_1\big] + \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\lVert F(G(x)) - x \rVert_1\big]$$
reduces the space of possible mapping functions, such that for a molecule $x$ from set $X$, the GAN cycle brings it back to a molecule similar to $x$, i.e. $F(G(x))$ is close to $x$ (and analogously $G(F(y))$ is close to $y$). The inclusion of the cyclic component acts as a regularization and may also help in the regime of low data, as the model can learn from both directions of the transformation. This component makes the resulting model more robust (cf. e.g. the comparison30 of CycleGAN vs. the non-cyclic IcGAN31). Finally, to ensure that the generated (optimized) molecule is close to the starting one, we use the identity mapping loss27

$$L_{\mathrm{identity}}(G, F) = \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\big[\lVert G(y) - y \rVert_1\big] + \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\lVert F(x) - x \rVert_1\big],$$
which further reduces the space of possible mapping functions and prevents the model from generating molecules that lie far away from the starting molecule in the latent space of JT-VAE.
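On batches of latent-space vectors these loss components reduce to simple averages; a minimal NumPy sketch is given below. The discriminator outputs and the weights 0.3 and 0.1 are illustrative stand-ins, not values taken from the experiments:

```python
import numpy as np

def lsgan_loss(d_real, d_fake):
    # LS-GAN adversarial loss: real samples pushed towards 1, fakes towards 0.
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def cycle_loss(x, F_G_x, y, G_F_y):
    # L1 distance between each molecule and its round-trip reconstruction.
    return np.mean(np.abs(F_G_x - x)) + np.mean(np.abs(G_F_y - y))

def identity_loss(x, F_x, y, G_y):
    # Penalizes the generators for modifying molecules already in the target set.
    return np.mean(np.abs(G_y - y)) + np.mean(np.abs(F_x - x))

# Toy batch: a perfect discriminator and perfect reconstructions give loss 0.
x = np.zeros((3, 4))
y = np.ones((3, 4))
total = (lsgan_loss(d_real=np.ones(3), d_fake=np.zeros(3))
         + 0.3 * cycle_loss(x, x, y, y)      # lambda_1: illustrative value
         + 0.1 * identity_loss(x, x, y, y))  # lambda_2: illustrative value
```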
In all our experiments we use the hyperparameters $\lambda_1$ and $\lambda_2$, whose values were chosen by checking a couple of combinations (for the structural tasks) and verifying that our optimization process: (i) improves the studied property and (ii) generates molecules similar to the starting ones. We have not performed a grid search for the optimal values of $\lambda_1$ and $\lambda_2$, and hence there could be space for improvement. Note that these parameters control the balance between improvement in the optimized property and similarity between the generated and the starting molecule. We show in the Results section that both the improvement and the similarity can be obtained with the proposed model.
We conduct experiments to test whether the proposed model is able to generate molecules that possess desired properties and are close to the starting molecules. Namely, we evaluate the model on tasks related to structural modifications, as well as on tasks related to molecule optimization. For testing molecule optimization we select the octanol-water partition coefficient (logP) penalized by the synthetic accessibility (SA) score. logP describes lipophilicity – a parameter influencing a whole set of other characteristics of compounds such as solubility, permeability through biological membranes, ADME (absorption, distribution, metabolism, and excretion) properties, and toxicity. We use the formulation as reported in the paper on JT-VAE,22 i.e. for a molecule $m$ the penalized logP is given as $\log P(m) - SA(m)$. We use the ZINC-250K dataset used in similar studies,19, 22 which contains 250,000 drug-like molecules extracted from the ZINC database.32 The detailed formulation of the tasks is the following:
Structural transformations We test the model’s ability to perform simple structural transformations of the molecules. To this end, we choose the sets $X$ and $Y$, differing in some structural aspects, and then test if our model can learn the transformation rules and apply them to molecules previously unseen by the model. There are two features by which we divide the sets:
Halogen moieties We split the dataset into two subsets $X$ and $Y$. The set $Y$ consists of molecules which contain at least one of the following SMARTS patterns: ’[!#1]Cl’, ’[!#1]F’, ’[!#1]I’, ’C#N’, whereas the set $X$ consists of molecules which do not contain any of them. The SMARTS patterns chosen in this experiment indicate halogen moieties and the nitrile group. Their presence and position within a molecule can have an immense impact on the compound’s activity.
Aromatic rings Molecules in $X$ have exactly two aromatic rings, whereas molecules in $Y$ have one or three aromatic rings.
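The halogen-moiety split can be illustrated with a crude, string-level approximation. A real implementation would match the SMARTS patterns with a cheminformatics toolkit such as RDKit; plain substring tests on SMILES are only a rough sketch of the idea:

```python
# Crude approximation of the X/Y split on halogen moieties and nitriles.
# The actual experiments use SMARTS matching ('[!#1]Cl', '[!#1]F', '[!#1]I',
# 'C#N'); here we merely look for the corresponding SMILES substrings.
HALOGEN_MARKERS = ("Cl", "F", "I", "C#N")

def in_set_Y(smiles: str) -> bool:
    # Y: molecules that contain a halogen moiety or a nitrile group.
    return any(marker in smiles for marker in HALOGEN_MARKERS)

# Ethanol, fluorobenzene, acetonitrile (illustrative molecules).
molecules = ["CCO", "Fc1ccccc1", "CC#N"]
labels = [in_set_Y(m) for m in molecules]
```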
Constrained molecule optimization We optimize penalized logP, while constraining the degree of deviation from the starting molecule. The similarity between molecules is measured with the Tanimoto similarity on Morgan fingerprints.33 The sets $X$ and $Y$ are random samples from ZINC-250K, where the compounds’ penalized logP values are below and above the median, respectively.
Unconstrained molecule optimization We perform unconstrained optimization of penalized logP. The set $X$ is a random sample from ZINC-250K and the set $Y$ is a random sample from the top-20 molecules with the highest penalized logP in ZINC-250K.
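The Tanimoto similarity used in the constrained task is the ratio of shared to total "on" bits of two fingerprints. Below, plain Python sets of bit indices stand in for the Morgan fingerprints computed with a cheminformatics toolkit in the actual experiments:

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    # Tanimoto similarity: |A ∩ B| / |A ∪ B| over the "on" bits.
    if not fp_a and not fp_b:
        return 1.0  # convention: two empty fingerprints are identical
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Two toy fingerprints sharing 2 of 5 distinct bits -> similarity 0.4.
sim = tanimoto({1, 4, 9, 16}, {1, 4, 25})
```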
2.4 Composition of the datasets
In Table 1 we show the number of molecules in the datasets used for training and testing. In all experiments we use separate sets for training the model ($X_{\mathrm{train}}$ and $Y_{\mathrm{train}}$) and separate, non-overlapping ones for evaluating the model ($X_{\mathrm{test}}$ and $Y_{\mathrm{test}}$). In the constrained and unconstrained molecular optimization experiments no $Y_{\mathrm{test}}$ set is required.
Distribution of the selected properties
In the experiment on halogen moieties, the set $X$ always (i.e., both at train- and test-time) contains molecules without halogen moieties, and the set $Y$ always contains molecules with halogen moieties. In the dataset used to construct the latent space (ZINC-250K), 65% of the molecules do not contain any halogen moiety, whereas the remaining 35% contain one or more halogen moieties.
In the experiment on aromatic rings, the set $X$ always (i.e., both at train- and test-time) contains molecules with two aromatic rings, and the set $Y$ always contains molecules with one or three aromatic rings. The distribution of the number of aromatic rings in the dataset used to construct the latent space (ZINC-250K) is shown in Figure 2, along with the distributions for $X$ and $Y$.
2.5 Architecture of the models
All networks are trained using the Adam optimizer.34 During training we use batch normalization.35 In the structural experiments the models are trained for 100 epochs and in the physicochemical experiments for 300 epochs.
2.5.1 Structural data experiments
Generators are built of one fully connected residual layer, followed by one dense layer. All layers contain 56 units.
Discriminators are built of 6 dense layers of the following sizes: 56, 42, 28, 14, 7, 1 units.
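A minimal NumPy sketch of the structural-experiment generator topology described above. The ReLU activation and the residual wiring output = x + ReLU(Wx + b) are assumptions, as the text only specifies the layer types and widths:

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def residual_dense(x, W, b):
    # Fully connected residual layer (assumed form): out = x + ReLU(Wx + b).
    return x + relu(W @ x + b)

def dense(x, W, b):
    # Plain dense output layer, no activation (assumption).
    return W @ x + b

# One 56-unit residual layer followed by one 56-unit dense layer,
# with small random weights standing in for trained parameters.
rng = np.random.default_rng(1)
W1, b1 = 0.01 * rng.standard_normal((56, 56)), np.zeros(56)
W2, b2 = 0.01 * rng.standard_normal((56, 56)), np.zeros(56)

def generator(x):
    return dense(residual_dense(x, W1, b1), W2, b2)

out = generator(rng.standard_normal(56))
```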
2.5.2 Physicochemical data experiments
Generators are built of four fully connected residual layers. All layers contain 56 units.
Discriminators are built of 7 dense layers of the following sizes: 48, 36, 28, 18, 12, 7, 1 units.
3.1 Structural transformations
In each structural experiment we test the model’s ability to perform simple transformations of molecules in both directions, $X \rightarrow Y$ and $Y \rightarrow X$. Here, $X$ and $Y$ are non-overlapping sets of molecules with a specific structural property. We start with experiments on structural properties because they are easier to interpret and the rules related to transforming between $X$ and $Y$ are well defined. Hence, the present task should be easier for the model, as compared to the optimization of complex molecular properties, for which there are no simple rules connecting $X$ and $Y$.
In Table 2 we show the success rates for the tasks of performing structural transformations of molecules. The task of changing the number of aromatic rings is more difficult than changing the presence of halogen moieties. In the former, the transition between $X$ (with two rings) and $Y$ (with one or three rings) is more than a simple addition/removal transformation, as it is in the latter case (see Fig. 5 for the distributions of the numbers of aromatic rings). This is reflected in the success rates, which are higher for the task of transforming halogen moieties. In the dataset used to construct the latent space (ZINC-250K), 64.9% of the molecules do not contain any halogen moiety, whereas the remaining 35.1% contain one or more halogen moieties. This imbalance might be the reason for the higher success rate in the task of removing halogen moieties ($Y \rightarrow X$). Molecular similarity and drug-likeness are achieved in all experiments.
To confirm that the generated molecules are close to the starting ones, we show in Figure 6 distributions of their Tanimoto similarities (using Morgan fingerprints). For comparison we also include distributions of the Tanimoto similarities between the starting molecule and a random molecule from the ZINC-250K dataset. The high similarities between the generated and the starting molecules show that our procedure is neither a random sampling from the latent space, nor a memorization of the manifold in the latent space with the desired value of the property. In Figure 7 we visualize the molecules, which after transformation are the most similar to the starting molecules.
3.2 Constrained molecule optimization
As our main task we optimize the desired property under the constraint that the similarity between the original and the generated molecule is higher than a fixed threshold. This is a more realistic scenario in drug discovery, where the development of new drugs usually starts with known molecules such as existing drugs.36 Here, we maximize the penalized logP coefficient and use the Tanimoto similarity with the Morgan fingerprint33 to define the similarity threshold, $\delta$. We compare our results with previous similar studies.22, 26
In our optimization procedure each molecule (given by its latent space coordinates $x$) is fed into the generator to obtain the ‘optimized’ molecule $G(x)$. The pair $(x, G(x))$ defines what we call an ’optimization path’ in the latent space of JT-VAE. To be able to make a comparison with the previous research,22 we start the procedure from the 800 molecules with the lowest values of penalized logP in ZINC-250K and then we decode molecules from points along the path from $x$ to $G(x)$ in equal steps.
From the resulting set of molecules we report the molecule with the highest penalized logP score that satisfies the similarity constraint. A modification succeeds if one of the decoded molecules satisfies the constraint and is distinct from the starting one.
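Decoding along an optimization path amounts to sampling equally spaced points on the latent-space segment between $x$ and $G(x)$; each point would then be passed to the JT-VAE decoder. A minimal sketch, with an illustrative number of steps (the exact step count is not stated above):

```python
import numpy as np

def optimization_path(x, gx, n_steps):
    # Equally spaced points on the segment from x to G(x) in latent space.
    # Each point would subsequently be decoded to a SMILES string.
    ts = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - t) * x + t * gx for t in ts]

x = np.array([0.0, 0.0])   # toy latent point of the starting molecule
gx = np.array([1.0, 2.0])  # toy latent point of the 'optimized' molecule
path = optimization_path(x, gx, n_steps=5)
```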
| $\delta$ | Improvement (JT-VAE22) | Similarity (JT-VAE22) | Improvement (GCPN26) | Similarity (GCPN26) | Improvement (Mol-CycleGAN) | Similarity (Mol-CycleGAN) |
|---|---|---|---|---|---|---|
| 0.0 | 1.91 ± 2.04 | 0.28 ± 0.15 | 4.20 ± 1.28 | 0.32 ± 0.12 | 8.30 ± 1.98 | 0.16 ± 0.09 |
| 0.2 | 1.68 ± 1.85 | 0.33 ± 0.13 | 4.12 ± 1.19 | 0.34 ± 0.11 | 5.79 ± 2.35 | 0.30 ± 0.11 |
| 0.4 | 0.84 ± 1.45 | 0.51 ± 0.10 | 2.49 ± 1.30 | 0.47 ± 0.08 | 2.89 ± 2.08 | 0.52 ± 0.10 |
| 0.6 | 0.21 ± 0.75 | 0.69 ± 0.06 | 0.79 ± 0.63 | 0.68 ± 0.08 | 1.22 ± 1.48 | 0.69 ± 0.07 |
In the task of optimizing penalized logP of drug-like molecules, our method significantly outperforms the previous results in the mean improvement of the property (see Table 3). It achieves a comparable mean similarity in the constrained scenario (for $\delta > 0$). The success rates are comparable for the weaker constraints, whereas for the more stringent constraints ($\delta \in \{0.4, 0.6\}$) our model has lower success rates (see Table 4). Note that comparably high improvements of penalized logP can be obtained using reinforcement learning.26
However, the molecules optimized in such a manner are not drug-like, e.g., they have very low quantitative estimate of drug-likeness (QED) scores.37 In our method (as well as in JT-VAE) drug-likeness is achieved ‘by construction’ and is an intrinsic feature of the latent space obtained by training the variational autoencoder on molecules from ZINC (which are drug-like).
3.2.1 Molecular paths from constrained optimization experiments
3.3 Unconstrained molecule optimization
Our architecture is tailor-made for the scenario of constrained molecule optimization. However, as an additional task, we check what happens when we iteratively apply the generator to the molecules being optimized. This should lead to diminishing similarity between the starting molecules and those in consecutive iterations. For the present task the set $X$ needs to be a sample from the entire ZINC-250K, whereas the set $Y$ is chosen as a sample from the top-20 molecules with the highest values of penalized logP. Each molecule is fed into the generator and the latent space representation of the corresponding ‘optimized’ molecule is obtained. The generated latent space representation is then treated as the new input for the generator. The process is repeated $K$ times and the resulting set of molecules is $\{G(x), G(G(x)), \ldots\}$. Here, as in the previous task and as in previous research,22 we start the procedure from the 800 molecules with the lowest values of penalized logP in ZINC-250K.
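The iterative procedure is simply repeated application of the generator to its own output. In the sketch below the generator is a toy latent-space shift, not the trained network:

```python
import numpy as np

def iterate_generator(G, x, K):
    # Repeatedly feed the generator its own output: x, G(x), G(G(x)), ...
    points = [x]
    for _ in range(K):
        points.append(G(points[-1]))
    return points

shift = np.array([0.1, -0.2])   # toy generator: a fixed shift in latent space
G = lambda z: z + shift
trajectory = iterate_generator(G, np.zeros(2), K=10)
```

With a real generator the latent points drift away from the prior distribution over many iterations, which is what degrades drug-likeness in the results below.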
The results of our unconstrained molecule optimization are shown in Figure 12. In Figure 12(a) and (c) we observe that consecutive iterations keep shifting the distribution of the objective (penalized logP) towards higher values, although the improvement from further iterations is decreasing. Interestingly, the maximum of the distribution keeps increasing (in a somewhat random fashion). After 10-20 iterations it reaches very high values of logP observed for molecules which are not drug-like, similarly to those obtained with RL.26 Both in the case of the RL approach and in our case, the molecules with the highest penalized logP after many iterations also become non-drug-like – see Figure 15 for a list of compounds with the maximum values of penalized logP in the iterative optimization procedure. This lack of drug-likeness is related to the fact that after performing many iterations, the distribution of the coordinates of our set of molecules in the latent space moves far away from the prior distribution (multivariate normal) used when training the JT-VAE on ZINC-250K. In Fig. 12(b) we show the evolution of the distribution of Tanimoto similarities between the starting molecules and those obtained after $K$ iterations. We also show the similarity between the starting molecules and random molecules from ZINC-250K. We observe that after 10 iterations the similarity between the starting molecules and the optimized ones is comparable to the similarity of random molecules from ZINC-250K. After around 20 iterations the optimized molecules become less similar to the starting ones than random molecules from ZINC-250K, as the set of optimized molecules moves further away from the space of drug-like molecules.
3.3.1 Molecular paths from unconstrained optimization experiments
In the following section we show examples of the evolution of selected molecules for the unconstrained optimization experiments. Figures 13 and 14 show starting and final molecules, together with all molecules generated during the iteration over the optimization path and their penalized logP values.
3.3.2 Molecules with the highest values of penalized logP
In Figure 12(c) we plot the maximum value of penalized logP in the set of molecules being optimized, as a function of the number of iterations in the unconstrained molecule optimization. In Figure 15 we show the corresponding molecules for iterations 1–24.
In this work, we introduce Mol-CycleGAN – a new model based on CycleGAN which can be used for the de novo generation of molecules. The advantage of the proposed model is the ability to learn transformation rules from sets of compounds with desired and undesired values of the considered property. The model operates in a latent space trained by another model – in our work we use the latent space of JT-VAE. The model can generate molecules with desired properties, as demonstrated for structural and physicochemical properties. The generated molecules are close to the starting ones, and the degree of similarity can be controlled via a hyperparameter. In the task of constrained optimization of drug-like molecules our model significantly outperforms previous results. In future work we plan to extend the approach to multi-parameter optimization of molecules using StarGAN.30 It would also be interesting to test the model on cases where a small structural change leads to a drastic change in the property (e.g. the so-called activity cliffs), which are hard to model.
All code used to produce the reported results can be found online at https://github.com/ardigen/mol-cycle-gan.
We would like to thank Sabina Podlewska for her helpful comments and for fruitful discussions.
- Ratti and Trist 2001 Ratti, E.; Trist, D. Farmaco 2001, 56, 13–19.
- Rao and Srinivas 2011 Rao, V. S.; Srinivas, K. J. Bioinform. Seq. Anal. 2011, 2, 89–94.
- Bajorath 2002 Bajorath, J. Nat. Rev. Drug Discov. 2002, 1, 882–894.
- Lavecchia and Di Giovanni 2013 Lavecchia, A.; Di Giovanni, C. Curr. Med. Chem. 2013, 20, 2839–2860.
- Honório et al. 2013 Honório, K. M.; Moda, T. L.; Andricopulo, A. D. J. Med. Chem. 2013, 9, 163–176.
- de Ruyck et al. 2016 de Ruyck, J.; Brysbaert, G.; Blossey, R.; Lensink, M. F. Adv. Appl. Bioinform. 2016, 9, 1–11.
- Segler et al. 2018 Segler, M. H.; Preuss, M.; Waller, M. P. Nature 2018, 555, 604.
- Chen et al. 2018 Chen, H.; Engkvist, O.; Wang, Y.; Olivecrona, M.; Blaschke, T. Drug Discov. Today 2018,
- Duvenaud et al. 2015 Duvenaud, D. K.; Maclaurin, D.; Iparraguirre, J.; Bombarell, R.; Hirzel, T.; Aspuru-Guzik, A.; Adams, R. P. Adv. Neur. In.; 2015; pp 2224–2232.
- Jastrzębski et al. 2016 Jastrzębski, S.; Leśniak, D.; Czarnecki, W. M. arXiv preprint arXiv:1602.06289 2016,
- Coley et al. 2017 Coley, C. W.; Barzilay, R.; Green, W. H.; Jaakkola, T. S.; Jensen, K. F. J. Chem. Inf. Model. 2017, 57, 1757–1772.
- Pham et al. 2018 Pham, T.; Tran, T.; Venkatesh, S. arXiv preprint arXiv:1801.02622 2018,
- Segler et al. 2017 Segler, M. H.; Kogej, T.; Tyrchan, C.; Waller, M. P. ACS Cent. Sci. 2017, 4, 120–131.
- Bjerrum and Threlfall 2017 Bjerrum, E. J.; Threlfall, R. arXiv preprint arXiv:1705.04612 2017,
- Winter et al. 2019 Winter, R.; Montanari, F.; Noé, F.; Clevert, D.-A. Chem. Sci. 2019,
- Gupta et al. 2018 Gupta, A.; Müller, A. T.; Huisman, B. J.; Fuchs, J. A.; Schneider, P.; Schneider, G. Mol. Inform. 2018, 37, 1700111.
- Kingma and Welling 2013 Kingma, D. P.; Welling, M. arXiv preprint arXiv:1312.6114 2013,
- Gómez-Bombarelli et al. 2018 Gómez-Bombarelli, R.; Wei, J. N.; Duvenaud, D.; Hernández-Lobato, J. M.; Sánchez-Lengeling, B.; Sheberla, D.; Aguilera-Iparraguirre, J.; Hirzel, T. D.; Adams, R. P.; Aspuru-Guzik, A. ACS Cent. Sci. 2018, 4, 268–276.
- Kusner et al. 2017 Kusner, M. J.; Paige, B.; Hernández-Lobato, J. M. arXiv preprint arXiv:1703.01925 2017,
- Dai et al. 2018 Dai, H.; Tian, Y.; Dai, B.; Skiena, S.; Song, L. arXiv preprint arXiv:1802.08786 2018,
- Simonovsky and Komodakis 2018 Simonovsky, M.; Komodakis, N. arXiv preprint arXiv:1802.03480 2018,
- Jin et al. 2018 Jin, W.; Barzilay, R.; Jaakkola, T. arXiv preprint arXiv:1802.04364 2018,
- Goodfellow et al. 2014 Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Adv. Neur. In.; 2014; pp 2672–2680.
- Guimaraes et al. 2017 Guimaraes, G. L.; Sanchez-Lengeling, B.; Outeiral, C.; Farias, P. L. C.; Aspuru-Guzik, A. arXiv preprint arXiv:1705.10843 2017,
- De Cao and Kipf 2018 De Cao, N.; Kipf, T. arXiv preprint arXiv:1805.11973 2018,
- You et al. 2018 You, J.; Liu, B.; Ying, R.; Pande, V.; Leskovec, J. arXiv preprint arXiv:1806.02473 2018,
- Zhu et al. 2017 Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A. A. arXiv preprint 2017,
- Weininger 1988 Weininger, D. J. Chem. Inf. Comp. Sci. 1988, 28, 31–36.
- Mao et al. 2017 Mao, X.; Li, Q.; Xie, H.; Lau, R. Y.; Wang, Z.; Smolley, S. P. 2017 IEEE International Conference on Computer Vision (ICCV); 2017; pp 2813–2821.
- Choi et al. 2017 Choi, Y.; Choi, M.; Kim, M.; Ha, J.-W.; Kim, S.; Choo, J. arXiv preprint arXiv:1711.09020 2017,
- Perarnau et al. 2016 Perarnau, G.; van de Weijer, J.; Raducanu, B.; Álvarez, J. M. arXiv preprint arXiv:1611.06355 2016,
- Sterling and Irwin 2015 Sterling, T.; Irwin, J. J. J. Chem. Inf. Model. 2015, 55, 2324–2337.
- Rogers and Hahn 2010 Rogers, D.; Hahn, M. J. Chem. Inf. Model. 2010, 50, 742–754.
- Kingma and Ba 2014 Kingma, D. P.; Ba, J. arXiv preprint arXiv:1412.6980 2014,
- Ioffe and Szegedy 2015 Ioffe, S.; Szegedy, C. arXiv preprint arXiv:1502.03167 2015,
- Besnard et al. 2012 Besnard, J.; Ruda, G. F.; Setola, V.; Abecassis, K.; Rodriguiz, R. M.; Huang, X.-P.; Norval, S.; Sassano, M. F.; Shin, A. I.; Webster, L. A. Nature 2012, 492, 215.
- Bickerton et al. 2012 Bickerton, G. R.; Paolini, G. V.; Besnard, J.; Muresan, S.; Hopkins, A. L. Nat. Chem. 2012, 4, 90.