Mol-CycleGAN - a generative model for molecular optimization

02/06/2019 · Łukasz Maziarka et al.

Designing a molecule with desired properties is one of the biggest challenges in drug development, as it requires optimization of chemical compound structures with respect to many complex properties. To augment the compound design process we introduce Mol-CycleGAN - a CycleGAN-based model that generates optimized compounds with high structural similarity to the original ones. Namely, given a molecule our model generates a structurally similar one with an optimized value of the considered property. We evaluate the performance of the model on selected optimization objectives related to structural properties (presence of halogen groups, number of aromatic rings) and to a physicochemical property (penalized logP). In the task of optimization of penalized logP of drug-like molecules our model significantly outperforms previous results.


1 Introduction

The principal goal of the drug design process is to find new chemical compounds that are able to modulate the activity of a given target (typically a protein) in a desired way [1]. However, finding such molecules in the high-dimensional chemical space of all molecules without any prior knowledge is nearly impossible. In silico methods have been introduced to leverage the existing chemical, pharmacological and biological knowledge, thus forming a new branch of science: computer-aided drug design (CADD) [2, 3]. Computer methods are nowadays applied at every stage of drug design pipelines [2] - from the search for new, potentially active compounds [4], through optimization of their activity and physicochemical profile [5] and simulation of their interactions with the target protein [6], to assisting in planning the synthesis and evaluating its difficulty [7].

The recent advancements in deep learning have encouraged its application in CADD [8]. The two main approaches are: virtual screening, which uses discriminative models to screen commercial databases and classify molecules as likely active or inactive; and de novo design, which uses generative models to propose novel molecules that are likely to possess the desired properties. The former application has already proved to give outstanding results [9, 10, 11, 12]. The latter use case is rapidly emerging, e.g., long short-term memory (LSTM) network architectures have been applied with some success [13, 14, 15, 16].

In the center of our interest are the hit-to-lead and lead optimization phases of the compound design process. Their goals are to optimize the drug-like molecules identified in the previous steps in terms of the desired activity profile (increased potency towards the given target protein and inactivity towards off-target proteins) as well as the physicochemical and pharmacokinetic properties. Optimizing a molecule with respect to multiple properties simultaneously remains a challenge [5]. Nevertheless, some successful approaches to compound generation and optimization have been proposed.

The Variational Autoencoder (VAE) [17] can be used for the task of molecule generation. The first such models [18] were based on the SMILES representation. Unfortunately, these models can generate invalid SMILES that do not correspond to any molecule. The introduction of grammars into the model improved the success rate of valid SMILES generation [19, 20]. Maintaining chemical validity when generating new molecules became possible through VAEs that operate directly on molecular graphs [21, 22].

Generative Adversarial Networks (GANs) [23] are an alternative architecture that has been applied to de novo drug design. GANs, together with Reinforcement Learning (RL), were recently proposed as models that generate molecules with desired properties while promoting diversity. These models use representations based on SMILES [24] or on graph adjacency and annotation matrices [25], or are based on graph convolutional policy networks [26]. The major obstacle to the practical utility of these approaches is that the generated compounds can be difficult (or even impossible) to synthesize.

To address the problem of generating compounds that are difficult to synthesize, we introduce Mol-CycleGAN - a generative model based on CycleGAN [27]. Given a starting molecule, it generates a structurally similar one with the desired characteristic. The similarity between these molecules is important for two reasons. First, it makes the synthesis of the generated molecules easier, and second, such optimization of the selected property is less likely to spoil the previously optimized ones, which is important in the context of multiparameter optimization. We show that our model generates molecules that possess the desired properties (note that by a molecular property we also mean binding affinity towards a target protein) while retaining their structural similarity to the starting compound. Moreover, thanks to employing a graph-based representation instead of SMILES, our algorithm always returns valid compounds.

We evaluate the model’s ability to perform structural transformations and molecular optimization. The former indicates that the model is able to perform simple structural modifications, such as changing the presence of halogen groups or the number of aromatic rings. In the latter, we aim to maximize penalized logP to assess the model’s utility for compound design. Penalized logP is chosen because it is a property often selected as a testing ground for molecule optimization models [22, 26], due to its relevance in the drug design process. In the optimization of penalized logP for drug-like molecules our model significantly outperforms previous results. To the best of our knowledge, Mol-CycleGAN is the first approach to molecule generation that uses the CycleGAN architecture.

2 Methods

2.1 Junction Tree Variational Autoencoder

JT-VAE [22] (Junction Tree Variational Autoencoder) is a VAE-based method which operates on graph structures of compounds, in contrast to previous methods which work on the SMILES representation of molecules [18, 19, 20].

The VAE models used for molecule generation share the encoder-decoder architecture. The encoder is a neural network used to calculate a continuous, high-dimensional representation of a molecule in the so-called latent space, whereas the decoder is another neural network used to decode a molecule from coordinates in the latent space. In VAEs the entire encoding-decoding process is stochastic (has a random component). In JT-VAE both the encoding and decoding algorithms use two components for representing the molecule: a junction-tree scaffold of molecular sub-components (called clusters) and a molecular graph [22]. JT-VAE shows superior properties compared to SMILES-based VAEs, such as 100% validity of generated molecules.

2.2 Mol-CycleGAN

Mol-CycleGAN is a novel method of performing compound optimization by learning from sets of molecules with and without the desired molecular property (denoted by the sets X and Y). Our approach is to train a model to perform the transformation G: X → Y and then use this model to perform optimization of molecules. In the context of compound design, X (Y) can be, e.g., the set of inactive (active) molecules.

To represent the sets X and Y our approach requires an embedding of molecules which is reversible, i.e., one which enables both encoding and decoding of molecules.

For this purpose we use the latent space of JT-VAE - a representation created by a neural network during the training process. This approach has the advantage that the distance between molecules (required to calculate the loss function) can be defined directly in the latent space. Moreover, molecular properties are easier to express on graphs than in the linear SMILES representation [28]. One could try to formulate the CycleGAN model directly on the SMILES representation, but this would raise the problem of defining a differentiable intermolecular distance, as the standard ways of measuring similarity between molecules (e.g., Tanimoto similarity) are non-differentiable.
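For concreteness, the similarity measure in question can be computed with RDKit as in the sketch below (an illustrative example assuming RDKit is installed, not code from the original work). Because the Morgan fingerprints are binary bit vectors, no gradient can flow through this computation:

```python
# Minimal sketch: Tanimoto similarity on Morgan (ECFP-like) bit fingerprints.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles_a: str, smiles_b: str, radius: int = 2, n_bits: int = 2048) -> float:
    mol_a = Chem.MolFromSmiles(smiles_a)
    mol_b = Chem.MolFromSmiles(smiles_b)
    fp_a = AllChem.GetMorganFingerprintAsBitVect(mol_a, radius, nBits=n_bits)
    fp_b = AllChem.GetMorganFingerprintAsBitVect(mol_b, radius, nBits=n_bits)
    # |A ∩ B| / |A ∪ B| on discrete bits: a non-differentiable operation
    return DataStructs.TanimotoSimilarity(fp_a, fp_b)

print(tanimoto("c1ccccc1O", "c1ccccc1N"))  # phenol vs. aniline
```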

Figure 1: Schematic diagram of our Mol-CycleGAN. X and Y are the sets of molecules with selected values of the molecular property (e.g., active/inactive or with high/low values of logP). G and F are the generators. D_X and D_Y are the discriminators.

Our approach extends the CycleGAN framework [27] to molecular embeddings in the latent space of JT-VAE [22]. We represent each molecule as a point in the latent space, given by the mean of the variational encoding distribution [17]. Our model works as follows (Fig. 1): (i) we start by defining the sets X and Y (e.g., inactive/active molecules); (ii) we introduce the mapping functions G: X → Y and F: Y → X; (iii) we introduce the discriminator D_Y (and D_X) which forces the generator G (and F) to generate samples from a distribution close to the distribution of Y (or X). The components G, F, D_X, and D_Y are modeled by neural networks (see Workflow for technical details). The main idea of our approach to molecule optimization is to: (i) take the prior molecule x without a specified feature (e.g., a specified number of aromatic rings, water solubility, activity) from set X, and compute its latent space embedding; (ii) use the generative neural network G to obtain the embedding of a molecule G(x) that has this feature (as if the molecule came from set Y) but is also similar to the original molecule x; (iii) decode the latent space coordinates given by G(x) to obtain the SMILES representation of the optimized molecule. Thereby, the method is applicable in lead optimization processes, as the generated compound remains structurally similar to the input molecule.
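The optimization step (i)-(iii) can be summarized by the following sketch, in which `jtvae_encode`, `jtvae_decode` and the trained generator `G` are placeholder names for components not specified in the text; only the control flow is taken from the description above:

```python
# Sketch of one optimization pass: encode, map towards Y, decode.
def optimize_molecule(smiles_x: str, G, jtvae_encode, jtvae_decode) -> str:
    z_x = jtvae_encode(smiles_x)   # (i) embed the prior molecule x in the latent space
    z_gx = G(z_x)                  # (ii) obtain the embedding of G(x), close to x
    return jtvae_decode(z_gx)      # (iii) decode G(x) back to a SMILES string
```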

To train the Mol-CycleGAN we use the following loss function:

$$\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y) + \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X) + \lambda_1 \mathcal{L}_{\mathrm{cyc}}(G, F) + \lambda_2 \mathcal{L}_{\mathrm{identity}}(G, F), \tag{1}$$

and aim to solve

$$G^*, F^* = \arg\min_{G,F} \max_{D_X, D_Y} \mathcal{L}(G, F, D_X, D_Y).$$

We use the adversarial loss introduced in LS-GAN [29]:

$$\mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y) = \tfrac{1}{2}\, \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[(D_Y(y) - 1)^2\right] + \tfrac{1}{2}\, \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[D_Y(G(x))^2\right], \tag{2}$$

which ensures that the generator G (and F) generates samples from a distribution close to the distribution of Y (or X).

The cycle consistency loss

$$\mathcal{L}_{\mathrm{cyc}}(G, F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\lVert F(G(x)) - x \rVert_1\right] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[\lVert G(F(y)) - y \rVert_1\right] \tag{3}$$

reduces the space of possible mapping functions, such that for a molecule x from set X, the GAN cycle brings it back to a molecule F(G(x)) similar to x (and analogously G(F(y)) is close to y). The inclusion of the cyclic component acts as a regularization and may also help in the low-data regime, as the model can learn from both directions of the transformation. This component makes the resulting model more robust (cf., e.g., the comparison [30] of CycleGAN vs. the non-cyclic IcGAN [31]). Finally, to ensure that the generated (optimized) molecule is close to the starting one, we use the identity mapping loss [27]

$$\mathcal{L}_{\mathrm{identity}}(G, F) = \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[\lVert G(y) - y \rVert_1\right] + \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\lVert F(x) - x \rVert_1\right], \tag{4}$$

which further reduces the space of possible mapping functions and prevents the model from generating molecules that lie far away from the starting molecule in the latent space of JT-VAE.

In all our experiments we use fixed values of the hyperparameters λ1 and λ2, chosen by checking a couple of combinations (for the structural tasks) and verifying that our optimization process: (i) improves the studied property and (ii) generates molecules similar to the starting ones. We have not performed a grid search for the optimal values of λ1 and λ2, so there could be space for improvement. Note that these parameters control the balance between the improvement in the optimized property and the similarity between the generated and the starting molecule. We show in the Results section that both improvement and similarity can be obtained with the proposed model.
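The losses of Eqs. (1)-(4) translate directly into code. The following PyTorch sketch is one possible reading (assuming G, F, D_X, D_Y are `torch.nn.Module`s acting on batches of JT-VAE latent vectors), not the authors' implementation; the generator part of the LS-GAN loss is written in the practical form used in CycleGAN implementations:

```python
import torch

def gan_loss_g(d, fake):          # generator part of the LS-GAN loss, Eq. (2)
    return 0.5 * torch.mean((d(fake) - 1.0) ** 2)

def gan_loss_d(d, real, fake):    # discriminator part of the LS-GAN loss, Eq. (2)
    return 0.5 * torch.mean((d(real) - 1.0) ** 2) + 0.5 * torch.mean(d(fake) ** 2)

def cycle_loss(G, F, x, y):       # Eq. (3): L1 distances of the round trips
    return torch.mean(torch.abs(F(G(x)) - x)) + torch.mean(torch.abs(G(F(y)) - y))

def identity_loss(G, F, x, y):    # Eq. (4): generators should not move own-domain samples
    return torch.mean(torch.abs(G(y) - y)) + torch.mean(torch.abs(F(x) - x))

def full_generator_loss(G, F, D_X, D_Y, x, y, lambda_1, lambda_2):  # Eq. (1)
    return (gan_loss_g(D_Y, G(x)) + gan_loss_g(D_X, F(y))
            + lambda_1 * cycle_loss(G, F, x, y)
            + lambda_2 * identity_loss(G, F, x, y))
```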

1: Require: α, the learning rate; λ1, the cycle consistency weight; λ2, the identity mapping weight; m, the batch size.
2: Require: θ_{D_X}, θ_{D_Y}, the initial discriminators’ parameters; θ_G, θ_F, the initial generators’ parameters.
3: Encode the training sets with the JT-VAE encoder to obtain the latent space representations X and Y of the molecules from the training dataset.
4: while θ_G, θ_F have not converged do
5:     Sample a batch of molecules x from the dataset X.
6:     Sample a batch of molecules y from the dataset Y.
7:     Calculate the adversarial losses L_GAN(G, D_Y, X, Y) and L_GAN(F, D_X, Y, X), Eq. (2).
8:     Calculate the cycle consistency loss L_cyc(G, F), Eq. (3).
9:     Calculate the identity mapping loss L_identity(G, F), Eq. (4).
10:    Calculate the full loss function L(G, F, D_X, D_Y), Eq. (1).
11:    Calculate the gradients of the loss function with respect to θ_G, θ_F and to θ_{D_X}, θ_{D_Y}.
12:    Minimize the loss function, using the Adam optimizer, with respect to the generators’ parameters θ_G, θ_F.
13:    Maximize the loss function, using the Adam optimizer, with respect to the discriminators’ parameters θ_{D_X}, θ_{D_Y}.
14: end while
15: Decode G(X) and F(Y) with the JT-VAE decoder to obtain the SMILES representations of the molecules returned by the Mol-CycleGAN.
Algorithm 1: Mol-CycleGAN training algorithm. In all experiments in the paper the same values of α, λ1, λ2, and m are used.
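A single iteration of the while-loop in Algorithm 1 could look as follows, building on the loss sketch above (same assumptions); in practice the min-max problem is solved by alternating Adam updates, with the generated samples detached for the discriminator step:

```python
import torch

def train_step(G, F, D_X, D_Y, x, y, opt_gen, opt_disc, lambda_1, lambda_2):
    # Generator update: minimize Eq. (1) w.r.t. the parameters of G and F.
    opt_gen.zero_grad()
    loss_gen = full_generator_loss(G, F, D_X, D_Y, x, y, lambda_1, lambda_2)
    loss_gen.backward()
    opt_gen.step()

    # Discriminator update: the max over D_X, D_Y becomes minimizing the
    # LS-GAN discriminator objective on real vs. generated latent points.
    opt_disc.zero_grad()
    loss_disc = gan_loss_d(D_Y, y, G(x).detach()) + gan_loss_d(D_X, x, F(y).detach())
    loss_disc.backward()
    opt_disc.step()
    return loss_gen.item(), loss_disc.item()
```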

2.3 Workflow

We conduct experiments to test whether the proposed model is able to generate molecules that possess the desired properties and are close to the starting molecules. Namely, we evaluate the model on tasks related to structural modifications, as well as on tasks related to molecule optimization. For testing molecule optimization we select the octanol-water partition coefficient (logP) penalized by the synthetic accessibility (SA) score. logP describes lipophilicity - a parameter influencing a whole set of other compound characteristics such as solubility, permeability through biological membranes, ADME (absorption, distribution, metabolism, and excretion) properties, and toxicity. We use the formulation reported in the JT-VAE paper [22], i.e., for a molecule m the penalized logP is given as logP(m) − SA(m) − cycle(m), where cycle(m) penalizes rings with more than six atoms (a sketch of this objective is given after the task list below). We use the ZINC-250K dataset used in similar studies [19, 22], which contains 250,000 drug-like molecules extracted from the ZINC database [32]. The detailed formulation of the tasks is the following:

  • Structural transformations We test the model’s ability to perform simple structural transformations of the molecules. To this end, we choose the sets X and Y, differing in some structural aspect, and then test whether our model can learn the transformation rules and apply them to molecules previously unseen by the model. The sets are defined by two features:

    • Halogen moieties We split the dataset into two subsets, X and Y. The set Y consists of molecules which contain at least one of the following SMARTS patterns: ’[!#1]Cl’, ’[!#1]F’, ’[!#1]I’, ’C#N’, whereas the set X consists of molecules which do not contain any of them. The SMARTS patterns chosen in this experiment indicate halogen moieties and the nitrile group. Their presence and position within a molecule can have an immense impact on the compound’s activity.

    • Aromatic rings Molecules in X have exactly two aromatic rings, whereas molecules in Y have one or three aromatic rings.

  • Constrained molecule optimization We optimize penalized logP while constraining the degree of deviation from the starting molecule. The similarity between molecules is measured with the Tanimoto similarity on Morgan fingerprints [33]. The sets X and Y are random samples from ZINC-250K, where the compounds’ penalized logP values are below and above the median, respectively.

  • Unconstrained molecule optimization We perform unconstrained optimization of penalized logP. The set X is a random sample from ZINC-250K and the set Y is a random sample from the top-20% of molecules with the highest penalized logP in ZINC-250K.
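The penalized logP objective used above can be computed with RDKit as in the sketch below; `sascorer` is the SA-score script from RDKit’s contrib directory, and the standardization of the three terms by their ZINC statistics, used in some implementations, is omitted here:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors
import sascorer  # RDKit contrib: Contrib/SA_Score/sascorer.py

def penalized_logp(smiles: str) -> float:
    mol = Chem.MolFromSmiles(smiles)
    log_p = Descriptors.MolLogP(mol)            # octanol-water partition coefficient
    sa = sascorer.calculateScore(mol)           # synthetic accessibility score
    # Ring penalty: number of atoms beyond six in the largest SSSR ring.
    ring_sizes = [len(r) for r in mol.GetRingInfo().AtomRings()]
    ring_penalty = max(max(ring_sizes) - 6, 0) if ring_sizes else 0
    return log_p - sa - ring_penalty
```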

2.4 Composition of the datasets

Dataset sizes

In Table 1 we show the number of molecules in the datasets used for training and testing. In all experiments we use separate sets for training the model (X_train and Y_train) and separate, non-overlapping ones for evaluating the model (X_test and Y_test). In the constrained and unconstrained molecular optimization experiments no Y_test set is required.

Dataset    Halogen moieties    Aromatic rings    Constrained optimization    Unconstrained optimization
X_train    75000               80000             80000                       80000
X_test     86899               18220             800                         800
Y_train    75000               80000             80000                       24946
Y_test     12556               43193             -                           -
Table 1: Dataset sizes
Distribution of the selected properties

In the experiment on halogen moieties, the set X always (i.e., both at train- and test-time) contains molecules without halogen moieties, and the set Y always contains molecules with halogen moieties. In the dataset used to construct the latent space (ZINC-250K), 65% of the molecules do not contain any halogen moiety, whereas the remaining 35% contain one or more halogen moieties.

In the experiment on aromatic rings, the set X always (i.e., both at train- and test-time) contains molecules with 2 rings, and the set Y always contains molecules with 1 or 3 rings. The distribution of the number of aromatic rings in the dataset used to construct the latent space (ZINC-250K) is shown in Figure 2, along with the distributions for X and Y.

Figure 2: Number of aromatic rings in ZINC-250K and in the sets used in the experiment on aromatic rings.
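The structural splits described above can be reproduced with RDKit along the following lines (an illustrative sketch; the paper does not publish this exact code):

```python
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

# The SMARTS patterns listed in the halogen-moieties task definition.
HALOGEN_SMARTS = [Chem.MolFromSmarts(s) for s in ("[!#1]Cl", "[!#1]F", "[!#1]I", "C#N")]

def in_set_y_halogen(smiles: str) -> bool:
    """True if the molecule belongs to Y (contains a listed moiety)."""
    mol = Chem.MolFromSmiles(smiles)
    return any(mol.HasSubstructMatch(p) for p in HALOGEN_SMARTS)

def in_set_x_aromatic(smiles: str) -> bool:
    """True if the molecule belongs to X of the aromatic-ring task (exactly 2 rings)."""
    mol = Chem.MolFromSmiles(smiles)
    return rdMolDescriptors.CalcNumAromaticRings(mol) == 2
```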

For the molecule optimization tasks we plot the distribution of the property being optimized (penalized logP) in Figs. 3 (constrained optimization) and 4 (unconstrained optimization).

Figure 3: Distribution of penalized logP in ZINC-250K and in the sets used in the task of constrained molecule optimization. Note that the sets X_train and Y_train are non-overlapping (they are random samples from ZINC-250K split by the median). X_test is the set of 800 molecules from ZINC-250K with the lowest values of penalized logP.
Figure 4: Distribution of penalized logP in ZINC-250K and in the sets used in the task of unconstrained molecule optimization. Note that the set X is a random sample from ZINC-250K, and hence the same distribution is observed for the two sets.

2.5 Architecture of the models

All networks are trained using the Adam optimizer [34]. During training we use batch normalization [35] and the leaky-ReLU activation function. In the structural experiments the models are trained for 100 epochs, and in the physicochemical experiments for 300 epochs.

2.5.1 Structural data experiments

  • Generators are built of one fully connected residual layer, followed by one dense layer. All layers contain 56 units.

  • Discriminators are built of 6 dense layers of the following sizes: 56, 42, 28, 14, 7, 1 units.

2.5.2 Physicochemical data experiments

  • Generators are built of four fully connected residual layers. All layers contain 56 units.

  • Discriminators are built of 7 dense layers of the following sizes: 48, 36, 28, 18, 12, 7, 1 units.
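One plausible PyTorch realization of these specifications is sketched below; the exact residual block design and the leaky-ReLU slope are not given in the text, so those details are assumptions, as is the 56-dimensional input (matching the JT-VAE latent vectors):

```python
import torch.nn as nn

class ResidualDense(nn.Module):
    """Fully connected layer with batch norm, leaky-ReLU and a skip connection."""
    def __init__(self, dim: int):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.LeakyReLU())

    def forward(self, x):
        return x + self.block(x)

def make_generator(latent_dim: int = 56, n_residual: int = 1) -> nn.Module:
    # Structural tasks: 1 residual layer + 1 dense layer; physicochemical: 4 residual layers.
    layers = [ResidualDense(latent_dim) for _ in range(n_residual)]
    if n_residual == 1:
        layers.append(nn.Linear(latent_dim, latent_dim))
    return nn.Sequential(*layers)

def make_discriminator(sizes=(56, 42, 28, 14, 7, 1), in_dim: int = 56) -> nn.Module:
    layers = []
    for out_dim in sizes:
        layers += [nn.Linear(in_dim, out_dim), nn.LeakyReLU()]
        in_dim = out_dim
    return nn.Sequential(*layers[:-1])  # raw scalar output, as used by the LS-GAN loss
```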

3 Results

3.1 Structural transformations

In each structural experiment we test the model’s ability to perform simple transformations of molecules in both directions, X → Y and Y → X. Here, X and Y are non-overlapping sets of molecules with specific structural properties. We start with experiments on structural properties because they are easier to interpret and the rules for transforming between X and Y are well defined. Hence, the present task should be easier for the model than the optimization of complex molecular properties, for which there are no simple rules connecting X and Y.

                Halogen moieties          Aromatic rings
                X → Y       Y → X         X → Y       Y → X
Success rate    0.6429      0.7161        0.5342      0.4216
Non-identity    0.9345      0.9574        0.9082      0.8899
Uniqueness      0.9952      0.9953        0.9957      0.9954
Table 2: Evaluation of models modifying the presence of halogen moieties and the number of aromatic rings. Success rate is the fraction of cases in which the desired modification occurs. Non-identity is the fraction of cases in which the generated molecule differs from the starting one. Uniqueness is the fraction of unique molecules in the set of generated molecules.
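The three metrics can be computed from pairs of starting and generated SMILES as in the sketch below, where `has_property` is the task-specific membership test (e.g., the `in_set_y_halogen` check from the earlier sketch for the X → Y halogen task):

```python
from rdkit import Chem

def canonical(smiles: str) -> str:
    return Chem.MolToSmiles(Chem.MolFromSmiles(smiles))  # canonical form for comparison

def evaluate(pairs, has_property):
    """pairs: list of (starting_smiles, generated_smiles) tuples."""
    starts = [canonical(s) for s, _ in pairs]
    gens = [canonical(g) for _, g in pairs]
    success = sum(has_property(g) for g in gens) / len(gens)          # desired change occurred
    non_identity = sum(g != s for s, g in zip(starts, gens)) / len(gens)
    uniqueness = len(set(gens)) / len(gens)
    return success, non_identity, uniqueness
```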

In Table 2 we show the success rates for the tasks of performing structural transformations of molecules. The task of changing the number of aromatic rings is more difficult than changing the presence of halogen moieties. In the former, the transition between X (with 2 rings) and Y (with 1 or 3 rings, cf. Fig. 5) is more than a simple addition/removal transformation, as it is in the halogen case (see Fig. 5 for the distributions of the aromatic rings). This is reflected in the success rates, which are higher for the transformations of halogen moieties. In the dataset used to construct the latent space (ZINC-250K), 64.9% of the molecules do not contain any halogen moiety, whereas the remaining 35.1% contain one or more halogen moieties. This imbalance might be the reason for the higher success rate in the task of removing halogen moieties (Y → X). Molecular similarity and drug-likeness are achieved in all experiments.

Figure 5: Distributions of the number of aromatic rings in X_test and G(X_test) (left), and in Y_test and F(Y_test) (right). Identity mappings are not included in the figures.

To confirm that the generated molecules are close to the starting ones, we show in Figure 6 the distributions of their Tanimoto similarities (using Morgan fingerprints). For comparison, we also include the distributions of Tanimoto similarities between the starting molecules and random molecules from the ZINC-250K dataset. The high similarities between the generated and the starting molecules show that our procedure is neither a random sampling from the latent space, nor a memorization of the manifold in the latent space with the desired value of the property. In Figure 7 we visualize the molecules which, after transformation, are the most similar to the starting ones.

(a) Halogen moieties
(b) Aromatic rings
Figure 6: Density plots of Tanimoto similarities between molecules from X_test (and Y_test) and their corresponding molecules from G(X_test) (and F(Y_test)). Similarities between molecules from X_test (and Y_test) and random molecules from ZINC-250K are included for comparison. Identity mappings are not included. The distributions of similarities related to the transformations given by G and F show the same trend.
(a) top: X_test; bottom: G(X_test)
(b) top: Y_test; bottom: F(Y_test)
Figure 7: The most similar molecules with a changed number of aromatic rings. In the top row we show the starting molecules, whereas in the bottom row we show the generated molecules. The Tanimoto similarities between the molecules are given below.

3.2 Constrained molecule optimization

As our main task we optimize the desired property under the constraint that the similarity between the original and the generated molecule is higher than a fixed threshold (denoted by δ). This is a more realistic scenario in drug discovery, where the development of new drugs usually starts from known molecules such as existing drugs [36]. Here, we maximize the penalized logP coefficient and use the Tanimoto similarity with the Morgan fingerprint [33] to define the threshold of similarity, δ. We compare our results with previous similar studies [22, 26].

In our optimization procedure each molecule (given by its latent space coordinates x) is fed into the generator to obtain the ‘optimized’ molecule G(x). The pair (x, G(x)) defines what we call an ‘optimization path’ in the latent space of JT-VAE. To be able to make a comparison with the previous research [22], we start the procedure from the 800 molecules with the lowest values of penalized logP in ZINC-250K and then decode molecules from points along the path from x to G(x) in equal steps. From the resulting set of molecules we report the one with the highest penalized logP score that satisfies the similarity constraint. A modification succeeds if one of the decoded molecules satisfies the constraint and is distinct from the starting one.
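The path-decoding procedure can be sketched as follows, reusing the placeholder `jtvae_decode`, the generator `G`, and the `tanimoto` and `penalized_logp` helpers from the earlier sketches (the number of steps is an assumption):

```python
import numpy as np

def best_on_path(z_x, G, jtvae_decode, start_smiles, delta, n_steps=20):
    z_gx = G(z_x)
    best, best_score = None, -np.inf
    for k in range(1, n_steps + 1):
        z = z_x + (k / n_steps) * (z_gx - z_x)       # equal steps from x to G(x)
        smiles = jtvae_decode(z)
        if smiles == start_smiles:                   # must be distinct from the start
            continue
        if tanimoto(start_smiles, smiles) < delta:   # similarity constraint
            continue
        score = penalized_logp(smiles)
        if score > best_score:
            best, best_score = smiles, score
    return best, best_score                          # None if the modification failed
```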

        JT-VAE                       GCPN                         Mol-CycleGAN
δ       Improvement    Similarity    Improvement    Similarity    Improvement    Similarity
0.0     1.91 ± 2.04    0.28 ± 0.15   4.20 ± 1.28    0.32 ± 0.12   8.30 ± 1.98    0.16 ± 0.09
0.2     1.68 ± 1.85    0.33 ± 0.13   4.12 ± 1.19    0.34 ± 0.11   5.79 ± 2.35    0.30 ± 0.11
0.4     0.84 ± 1.45    0.51 ± 0.10   2.49 ± 1.30    0.47 ± 0.08   2.89 ± 2.08    0.52 ± 0.10
0.6     0.21 ± 0.75    0.69 ± 0.06   0.79 ± 0.63    0.68 ± 0.08   1.22 ± 1.48    0.69 ± 0.07
Table 3: Results of the constrained optimization for JT-VAE [22], GCPN [26] and Mol-CycleGAN.
δ       JT-VAE    GCPN      Mol-CycleGAN
0.0     97.5%     100.0%    99.75%
0.2     97.1%     100.0%    93.75%
0.4     83.6%     100.0%    58.75%
0.6     46.4%     100.0%    19.25%
Table 4: Success rates of the constrained optimization for JT-VAE [22], GCPN [26] and Mol-CycleGAN.

In the task of optimizing penalized logP of drug-like molecules, our method significantly outperforms the previous results in the mean improvement of the property (see Table 3). It achieves a comparable mean similarity in the constrained scenario (for δ > 0). The success rates are comparable for δ ∈ {0, 0.2}, whereas for the more stringent constraints (δ ∈ {0.4, 0.6}) our model has lower success rates (see Table 4). Note that comparably high improvements of penalized logP can be obtained using reinforcement learning [26]. However, the molecules optimized in such a manner are not drug-like, e.g., they have very low quantitative estimate of drug-likeness (QED) scores [37], even for the largest values of δ. In our method (as well as in JT-VAE) drug-likeness is achieved ‘by construction’ and is an intrinsic feature of the latent space obtained by training the variational autoencoder on molecules from ZINC (which are drug-like).

Figure 8: Molecules with the highest improvement of the penalized logP for δ ≥ 0.6. In the top row we show the starting molecules, whereas in the bottom row we show the optimized molecules. The numbers in the upper row indicate the Tanimoto similarities between the starting and the final molecule. The improvement in the score is given below the generated molecules.

3.2.1 Molecular paths from constrained optimization experiments

In this section we show examples of the evolution of selected molecules from the constrained optimization experiments. Figures 9-11 show the starting and final molecules, together with all molecules generated along the optimization path and their values of penalized logP.

Figure 9: Evolution of a selected exemplary molecule during constrained optimization. We only include the steps along the path where a change in the molecule is introduced. We show values of penalized logP below the molecules.
Figure 10: Evolution of a selected exemplary molecule during constrained optimization. We only include the steps along the path where a change in the molecule is introduced. We show values of penalized logP below the molecules.
Figure 11: Evolution of a selected exemplary molecule during constrained optimization. We only include the steps along the path where a change in the molecule is introduced. We show values of penalized logP below the molecules.

3.3 Unconstrained molecule optimization

Our architecture is tailor-made for the scenario of constrained molecule optimization. However, as an additional task, we check what happens when we iteratively apply the generator to the molecules being optimized. This should lead to diminishing similarity between the starting molecules and those obtained in consecutive iterations. For the present task the set X needs to be a sample from the entire ZINC-250K, whereas the set Y is chosen as a sample from the top-20% of molecules with the highest values of penalized logP. Each molecule is fed into the generator and the latent space representation of the corresponding ‘optimized’ molecule is obtained. The generated latent space representation is then treated as the new input for the generator. The process is repeated K times and the resulting set of molecules is {G(x), G(G(x)), …, G^K(x)}. Here, as in the previous task and as in previous research [22], we start the procedure from the 800 molecules with the lowest values of penalized logP in ZINC-250K.
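The iterative procedure amounts to the following short loop (placeholders as in the earlier sketches):

```python
def iterate_generator(z_x, G, jtvae_decode, K=24):
    """Feed the generator its own output K times; decode after each step."""
    trajectory, z = [], z_x
    for _ in range(K):
        z = G(z)                           # G(x), G(G(x)), ..., G^K(x)
        trajectory.append(jtvae_decode(z))
    return trajectory
```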

The results of our unconstrained molecule optimization are shown in Figure 12. In Figure 12(a) and (c) we observe that consecutive iterations keep shifting the distribution of the objective (penalized logP) towards higher values, although the improvement from further iterations is decreasing. Interestingly, the maximum of the distribution keeps increasing (albeit in a somewhat random fashion). After 10-20 iterations it reaches the very high values of penalized logP observed for molecules which are not drug-like, similarly to those obtained with RL [26]. Both in the case of the RL approach and in ours, the molecules with the highest penalized logP after many iterations become non-drug-like - see Figure 15 for a list of compounds with the maximum values of penalized logP in the iterative optimization procedure. This lack of drug-likeness is related to the fact that after many iterations the distribution of the latent space coordinates of our set of molecules moves far away from the prior distribution (multivariate normal) used when training the JT-VAE on ZINC-250K. In Fig. 12(b) we show the evolution of the distribution of Tanimoto similarities between the starting molecules and those obtained after consecutive iterations, together with the similarity between the starting molecules and random molecules from ZINC-250K. We observe that after 10 iterations the similarity between the starting molecules and the optimized ones is comparable to the similarity between the starting molecules and random molecules from ZINC-250K. After around 20 iterations the optimized molecules become less similar to the starting ones than random molecules from ZINC-250K are, as the set of optimized molecules moves further away from the space of drug-like molecules.

Figure 12: Results of the iterative procedure of unconstrained optimization. (a) Distribution of the penalized logP in the starting set and after consecutive iterations. (b) Distribution of the Tanimoto similarity between the starting molecules and random molecules from ZINC-250K, as well as between the starting molecules and those generated after consecutive iterations. (c) Plot of the mean value, percentiles (75th and 90th), and the maximum value of penalized logP as a function of the number of iterations.

3.3.1 Molecular paths from unconstrained optimization experiments

In this section we show examples of the evolution of selected molecules from the unconstrained optimization experiments. Figures 13 and 14 show the starting and final molecules, together with all molecules generated during the iterations and their penalized logP values.

Figure 13: Evolution of a selected molecule during consecutive iterations of unconstrained optimization. We show values of penalized logP below the molecules.
Figure 14: Evolution of a selected molecule during consecutive iterations of unconstrained optimization. We show values of penalized logP below the molecules.

3.3.2 Molecules with the highest values of penalized logP

In Figure 12(c) we plot the maximum value of penalized logP in the set of molecules being optimized, as a function of the number of iterations of the unconstrained optimization procedure. In Figure 15 we show the corresponding molecules for iterations 1-24.

Figure 15: Molecules with the highest penalized logP in the set being optimized for iterations 1-24 for unconstrained optimization. We show values of penalized logP below the molecules.

4 Conclusions

In this work, we introduce Mol-CycleGAN - a new model based on CycleGAN which can be used for the de novo generation of molecules. The advantage of the proposed model is the ability to learn transformation rules from sets of compounds with desired and undesired values of the considered property. The model operates in the latent space trained by another model - in our work we use the latent space of JT-VAE. The model can generate molecules with desired properties, as demonstrated for structural and physicochemical properties. The generated molecules are close to the starting ones, and the degree of similarity can be controlled via a hyperparameter. In the task of constrained optimization of drug-like molecules our model significantly outperforms previous results. In future work we plan to extend the approach to multi-parameter optimization of molecules using StarGAN [30]. It would also be interesting to test the model on cases where a small structural change leads to a drastic change in the property (e.g., the so-called activity cliffs), which are hard to model.

All code used to produce the reported results can be found online at https://github.com/ardigen/mol-cycle-gan.

We would like to thank Sabina Podlewska for her helpful comments and for fruitful discussions.

References