1 Introduction
Variational autoencoders (Kingma and Welling, 2013) are autoencoder-based generative models that provide high-quality samples in many data domains, including image generation (Razavi et al., 2019), text generation (Semeniuta et al., 2017), audio synthesis (Hsu et al., 2019), and drug discovery (Zhavoronkov et al., 2019). Variational autoencoders use a stochastic encoder and decoder. The encoder maps an object onto a distribution of latent codes, and the decoder produces a distribution of objects that correspond to a given latent code. In this paper, we analyze the impact of stochastic decoding on VAE models for discrete data and propose deterministic decoders as an alternative.
With complex stochastic decoders, such as PixelRNN (Oord et al., 2016), VAEs tend to ignore the latent codes, since the decoder is flexible enough to produce the whole data distribution without using latent codes at all. Such behavior can damage the representation learning capabilities of a VAE: we would not be able to use its latent codes for downstream tasks. A deterministic decoder, on the contrary, maps each latent code to a single data point, making it harder to ignore the latent codes, as they are the only source of variation.
One application of the latent codes of VAEs is Bayesian optimization of molecular properties. Gómez-Bombarelli et al. (2018) trained a Gaussian process regressor on the latent codes of a VAE and optimized the latent codes to discover molecular structures with desirable properties. With stochastic decoding, a Gaussian process has to account for stochasticity in the target variables, since every latent code corresponds to multiple molecular structures. Deterministic decoding, on the other hand, simplifies the regression task, leading to better predictive quality, as we show in the experiments.
Our contribution is threefold:

We formulate the Deterministic Decoder VAE (DD-VAE) model, derive its evidence lower bound, and propose a convenient approximation with proven convergence to the optimal parameters of the non-relaxed objective;

We show that lossless autoencoding is impossible with full-support proposal distributions and introduce bounded support distributions as a solution;

We provide experiments on multiple datasets (synthetic, MNIST, MOSES, ZINC) to show that DD-VAE yields both a proper generative distribution and useful latent codes.
The code for reproducing the experiments is available at https://github.com/insilicomedicine/DDVAE.
2 Deterministic Decoder VAE (DD-VAE)
In this section, we formulate the deterministic decoder variational autoencoder (DD-VAE). We show the need for bounded support proposals and introduce them in Section 2.1. In Section 2.2, we propose a continuous relaxation of the DD-VAE's ELBO. In Section 2.3, we prove that the optimal solution of the relaxed problem matches the optimal solution of the original problem.
A variational autoencoder (VAE) consists of an encoder q_φ(z | x) and a decoder p_θ(x | z). The model learns a mapping of the data distribution onto a prior distribution of latent codes p(z), which is often a standard Gaussian N(0, I). Parameters θ and φ are learned by maximizing a lower bound L(θ, φ) on the log marginal likelihood log p_θ(x). L(θ, φ) is known as an evidence lower bound (ELBO):

(1)  L(θ, φ) = E_{q_φ(z|x)} [ log p_θ(x | z) ] − KL( q_φ(z | x) ‖ p(z) )

The first term in Eq. 1 is a reconstruction loss, and the second term is a Kullback–Leibler divergence that encourages latent codes to be marginally distributed as p(z).
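As a concrete illustration, the ELBO in Eq. 1 with a diagonal Gaussian proposal splits into a Monte-Carlo reconstruction term and an analytic KL term; below is a minimal NumPy sketch (the function names are ours, not from the paper):

```python
import numpy as np

def kl_gauss_std_normal(mu, sigma):
    """Analytic KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dims."""
    return 0.5 * np.sum(mu ** 2 + sigma ** 2 - 1.0 - np.log(sigma ** 2))

def elbo(log_px_given_z, mu, sigma):
    """ELBO = reconstruction term minus KL term (Eq. 1).
    `log_px_given_z` is a Monte-Carlo estimate of E_q log p(x|z)."""
    return log_px_given_z - kl_gauss_std_normal(mu, sigma)
```

When the proposal equals the prior (mu = 0, sigma = 1), the KL term vanishes and the ELBO reduces to the reconstruction term alone.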
For sequence models, x is a sequence (x_1, ..., x_T), where each token x_t of the sequence is an element of a finite vocabulary V, and T is the length of the sequence. A decoding distribution for sequences is often parameterized as a recurrent neural network that produces a probability distribution over each token x_t given the latent code z and all previous tokens. The ELBO for such a model is:

(2)  L(θ, φ) = E_{q_φ(z|x)} [ Σ_{t=1}^{T} log p_θ(x_t | z, x_{1:t−1}) ] − KL( q_φ(z | x) ‖ p(z) ),

where x_{1:t−1} = (x_1, ..., x_{t−1}).
In deterministic decoders, we decode a sequence x̂ = (x̂_1, ..., x̂_T) from a latent code z by taking the token with the highest score at each iteration:

(3)  x̂_t = argmax_{s ∈ V} p_θ(s | z, x̂_{1:t−1})

To avoid ambiguity, when two tokens have the same maximal probability, x̂_t is equal to a special "undefined" token that does not appear in the data. Such a formulation simplifies the derivations in the remainder of the paper, and we assume it throughout for convenience. After decoding x̂, the reconstruction term of the ELBO is an indicator function which is one if the model reconstructed the correct sequence and zero otherwise:

(4)  p_θ(x | z) = 1[ x = x̂(z) ]

(5)  log p_θ(x | z) = 0 if x = x̂(z), and −∞ otherwise.

The ELBO is −∞ if the model has a nonzero reconstruction error rate, leading us to two questions: is the ELBO finite for some parameters, and how can we optimize it? We answer both questions in the following sections.
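The decoding rule in Eq. 3, including the tie-breaking "undefined" token, can be sketched as follows (token and function names are illustrative):

```python
import numpy as np

UNDEFINED = "<undef>"  # special token for ties; hypothetical name

def deterministic_decode_step(scores, vocab):
    """Pick the single highest-scoring token at one decoding step (Eq. 3).
    If two or more tokens share the maximal score, return the special
    'undefined' token so decoding stays unambiguous."""
    scores = np.asarray(scores, dtype=float)
    best = scores.max()
    if (scores == best).sum() > 1:
        return UNDEFINED
    return vocab[int(scores.argmax())]
```

Running this step autoregressively, feeding each decoded token back into the decoder, yields the deterministic sequence x̂(z).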
2.1 Proposal distributions with bounded support
In this section, we discuss bounded support proposal distributions in VAEs and why they are crucial for deterministic decoders.
Variational autoencoders often use Gaussian proposal distributions

(6)  q_φ(z | x) = N( z | μ_φ(x), Σ_φ(x) ),

where μ_φ(x) and Σ_φ(x) are neural networks modeling the mean and the covariance matrix of the proposal distribution. For a fixed x, the Gaussian density is positive for any z. Hence, a lossless decoder has to decode every x from every z with a positive probability. However, a deterministic decoder can produce only a single data point for a given z, making the reconstruction term of the ELBO minus infinity. To avoid this problem, we propose to use bounded support proposal distributions.
As bounded support proposal distributions, we suggest factorized distributions with marginals defined using a kernel K:

(7)  q_φ(z | x) = Π_i (1 / σ_i(x)) K( (z_i − μ_i(x)) / σ_i(x) ),

where μ_φ(x) and σ_φ(x) are neural networks that model the location and bandwidth of the kernel K; the support of the i-th dimension of q_φ(z | x) is the range [μ_i(x) − σ_i(x), μ_i(x) + σ_i(x)]. We choose a kernel K such that we can compute the KL divergence between q_φ(z | x) and a prior p(z) analytically. If p(z) is factorized, the KL divergence is a sum of one-dimensional divergences:

(8)  KL( q_φ(z | x) ‖ p(z) ) = Σ_i KL( q_φ(z_i | x) ‖ p(z_i) )
In Table 1, we show the KL divergence for some bounded support kernels and illustrate their densities in Figure 3. Note that the form of the divergence is very similar to the one for a Gaussian proposal distribution: they differ only in a constant multiplier of the log-bandwidth term and an additive constant. For sampling, we use rejection sampling from K with a uniform proposal and apply the reparametrization trick to obtain a final sample: z = μ_φ(x) + σ_φ(x) · ε. The acceptance rate of such sampling is a kernel-dependent constant. Hence, to sample a batch of size B, we oversample and repeat sampling until we get at least B accepted samples. We also store a buffer with excess samples and use them in the following batches.
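The sampling procedure above can be sketched as follows, using the Epanechnikov kernel as an example; the oversampling factor is a simplification of the paper's buffered batch scheme:

```python
import numpy as np

def sample_kernel_rejection(kernel, kmax, n, rng):
    """Rejection sampling of eps ~ kernel on [-1, 1] with a uniform
    proposal; `kmax` is the kernel's maximum density (the envelope)."""
    out = []
    while len(out) < n:
        t = rng.uniform(-1.0, 1.0, size=2 * n)   # oversample candidates
        u = rng.uniform(0.0, kmax, size=2 * n)
        out.extend(t[u < kernel(t)].tolist())    # accept with prob k(t)/kmax
    return np.array(out[:n])

def sample_proposal(mu, sigma, kernel, kmax, rng):
    """Reparametrization trick: z = mu + sigma * eps, so the sample lies
    in the bounded support [mu - sigma, mu + sigma]."""
    eps = sample_kernel_rejection(kernel, kmax, mu.size, rng)
    return mu + sigma * eps

# Epanechnikov kernel on [-1, 1], maximum density 0.75
epanechnikov = lambda t: 0.75 * (1.0 - t ** 2)
```

Because eps is drawn from a distribution supported on [-1, 1], every sample z stays inside the proposal's bounded support, which is the property the deterministic decoder relies on.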
With bounded support proposals, we can use a uniform distribution as a prior in the VAE, as long as the support of q_φ(z | x) lies inside the support of the prior distribution. In practice, we ensure this by transforming μ_φ(x) and σ_φ(x) from the encoder so that the resulting support falls inside the prior's support (Eqs. 9 and 10). We report the derived KL divergences for a uniform prior in Table 2.
Table 1. KL divergence between the proposal and a standard Gaussian prior for bounded support kernels: Uniform, Triangular, Epanechnikov, Quartic, Triweight, Tricube, and Cosine; the Gaussian kernel is listed for comparison.

Table 2. KL divergence between the proposal and a uniform prior for the same bounded support kernels: Uniform, Triangular, Epanechnikov, Quartic, Triweight, Tricube, and Cosine.
For discrete data, with bounded support proposals we can ensure that, for a sufficiently flexible encoder and decoder, there exists a set of parameters for which proposals do not overlap for different x, and hence the ELBO is finite. For example, we can enumerate all objects and map the i-th object to a disjoint range of the latent space.
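The constructive argument above can be made concrete with a sketch (our own illustration, not the paper's exact construction) that assigns the i-th of n objects a disjoint one-dimensional support:

```python
def disjoint_interval(i, n):
    """Map the i-th of n objects to a disjoint sub-interval of [0, 1],
    so bounded-support proposals never overlap. Returns the proposal
    location (midpoint) and bandwidth (half-width) of the interval."""
    lo, hi = i / n, (i + 1) / n
    mu = (lo + hi) / 2.0      # proposal location
    bw = (hi - lo) / 2.0      # proposal bandwidth
    return mu, bw
```

With these locations and bandwidths, a kernel proposal for object i is supported exactly on [i/n, (i+1)/n], so proposals for distinct objects are disjoint and a deterministic decoder can reconstruct every object exactly.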
2.2 Approximating ELBO
In this section, we discuss how to optimize the discontinuous ELBO by approximating it with a smooth function. In the next section, we show the convergence of the optimal parameters of the approximated ELBO to the optimal parameters of the original function.
We start by equivalently defining the argmax from Eq. 3 through an indicator function over token scores:

(11)

We approximate Eq. 11 by introducing a smooth relaxation of the indicator function, parameterized with a temperature parameter τ:

(12)

Note that the relaxation converges to the indicator pointwise as τ → 0. In Figure 4, we show the relaxed function for different values of τ. Substituting the indicator with the proposed relaxation, we get the following approximation of the evidence lower bound:

(13)

The proposed relaxed ELBO is finite for τ > 0 and converges to the original ELBO pointwise as τ → 0. In the next section, we formulate a theorem showing that if we gradually decrease the temperature and solve the maximization problem for the relaxed ELBO, we converge to the optimal parameters of the non-relaxed ELBO.
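One possible instantiation of such a relaxation (our illustration; the exact form of Eq. 12 in the paper may differ) is a tempered softmax, which tends to the argmax indicator as the temperature goes to zero:

```python
import numpy as np

def relaxed_indicator(scores, target, tau):
    """Tempered-softmax surrogate for the indicator
    1[target = argmax(scores)]. As tau -> 0 the output tends to 1 when
    `target` is the unique argmax and to 0 otherwise; at high tau the
    surrogate is smooth and nearly uniform, giving usable gradients."""
    s = np.asarray(scores, dtype=float) / tau
    s -= s.max()                          # numerical stability
    p = np.exp(s) / np.exp(s).sum()
    return p[target]
```

Annealing tau during training moves the objective smoothly from an easy-to-optimize surrogate toward the hard indicator of Eq. 11.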
2.3 Convergence of optimal parameters of the relaxed ELBO to optimal parameters of the original ELBO
In this section, we introduce auxiliary functions that are useful for assessing the quality of the model and formulate a theorem on the convergence of the optimal parameters of the relaxed ELBO to the optimal parameters of the original ELBO.
Denote the sequence-wise error rate for a given encoder and decoder:

(14)

For a given encoder, we can find an optimal decoder and the corresponding sequence-wise error rate by rearranging the terms in Eq. 14 and applying importance sampling:

(15)

where the optimal decoder is given by:

(16)

Here, X is the set of all possible sequences. Denote the set of parameters for which the ELBO is finite:

(17)
Theorem 1.
Assume that the set of parameters with finite ELBO is non-empty, the length of sequences in the data is bounded, and the encoder and decoder parameters lie in compact sets of possible values. Assume that the decoder distribution is equicontinuous in total variation for any z and t:

(18)

Let the temperatures and parameters form sequences such that:

(19)
(20)

the sequence of temperatures converges to 0, and for every tolerance there exists an index from which the relaxed objective is within that tolerance of its supremum. Let the limiting parameters be:

(21)

Then the sequence-wise error rate decreases asymptotically as

(22)

and the final parameters solve the optimization problem for the original ELBO:

(23)
Proof.
See Appendix A. ∎
The maximum length of sequences is bounded in the majority of practical applications. The equicontinuity assumption is satisfied for all distributions we considered in Table 1 if μ_φ and σ_φ depend continuously on φ for all x. The set of parameters with finite ELBO is not empty for bounded support distributions when the encoder and decoder are sufficiently flexible, as discussed in Section 2.1.

Eq. 21 suggests that after we finish training the autoencoder, we should fix the encoder and fine-tune the decoder. Since the sequence-wise error rate is zero at the optimum, the optimal stochastic decoder for such an encoder is deterministic: any z corresponds to a single x, except for a zero-probability subset. In theory, we could learn the decoder for a fixed encoder by optimizing the reconstruction term of the ELBO from Eq. 2:

(24)

but since in practice we do not anneal the temperature exactly to zero, we found such fine-tuning optional.
3 Related Work
Autoencoder-based generative models consist of an encoder-decoder pair and a regularizer that forces encoder outputs to be marginally distributed as a prior distribution. This regularizer can take the form of a KL divergence, as in Variational Autoencoders (Kingma and Welling, 2013), or an adversarial loss, as in Adversarial Autoencoders (Makhzani et al., 2016) and Wasserstein Autoencoders (Tolstikhin et al., 2016). Besides autoencoder-based generative models, generative adversarial networks (Goodfellow et al., 2014) and normalizing flows (Dinh et al., 2015, 2017) were shown to be useful for sequence generation (Yu et al., 2017; van den Oord et al., 2018).
Variational autoencoders are prone to posterior collapse, where the encoder outputs the prior distribution and the decoder learns the whole data distribution by itself. Posterior collapse often occurs for VAEs with autoregressive decoders such as PixelRNN (Oord et al., 2016). Multiple approaches were proposed to tackle posterior collapse, including decreasing the weight of the KL divergence (Higgins et al., 2017) or encouraging high mutual information between latent codes and the corresponding objects (Zhao et al., 2019).
Other approaches modify the prior distribution, making it more complex than the proposal: a Gaussian mixture model (Tomczak and Welling, 2018; Kuznetsov et al., 2019), autoregressive priors (Chen et al., 2017), or training a deterministic encoder and obtaining a prior with kernel density estimation (Ghosh et al., 2020). Unlike these approaches, we conform to the standard Gaussian prior and study the properties of the encoder and decoder required to achieve deterministic decoding.

Deep generative models have become a prominent approach in drug discovery as a way to rapidly discover potentially active molecules (Polykovskiy et al., 2018b; Zhavoronkov et al., 2019). Recent works explored feature-based (Kadurin et al., 2016), string-based (Gómez-Bombarelli et al., 2018; Segler et al., 2018), and graph-based (Jin et al., 2018; De Cao and Kipf, 2018; You et al., 2018) generative models for molecular structures. In this paper, we use the simplified molecular-input line-entry system (SMILES) (Weininger, 1970; Weininger et al., 1989) to represent molecules: a system that represents a molecular graph as a string using a depth-first traversal order. Multiple algorithms were proposed to exploit SMILES structure using formal grammars (Kusner et al., 2017; Dai et al., 2018).
4 Experiments
We experiment on four datasets: synthetic and MNIST datasets to visualize the learned manifold structure, the MOSES molecular dataset to analyze the distribution quality of DD-VAE, and the ZINC dataset to see if DD-VAE's latent codes are suitable for goal-directed optimization. We describe model hyperparameters in Appendix B.

4.1 Synthetic data
This dataset provides a proof-of-concept comparison of a standard VAE with a stochastic decoder and a DD-VAE model with a deterministic decoder. The data consist of 6-bit strings; each bit is an independent Bernoulli sample with parameter 0.8. For example, the probability of the string "110101" is the product of its per-bit probabilities.
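A minimal sketch of this string probability, assuming each bit equals 1 with probability 0.8 (the text's exact convention is ambiguous after extraction):

```python
def string_probability(s, p_one=0.8):
    """Probability of a binary string whose bits are i.i.d. Bernoulli
    with P(bit = 1) = p_one (assumed convention)."""
    p = 1.0
    for ch in s:
        p *= p_one if ch == "1" else (1.0 - p_one)
    return p
```

Under this assumption, "110101" (four ones, two zeros) has probability 0.8^4 · 0.2^2.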
In Figure 5, we illustrate the 2D latent codes learned with the proposed model. As an encoder and decoder, we used a 2-layer gated recurrent unit (GRU) (Cho et al., 2014) network with a hidden size of 128. We provide illustrations for the proposed model with a uniform prior and compare uniform and tricube proposals. For a baseline model, we trained a VAE with a Gaussian proposal and prior. We used a small KL weight, as for larger weights we observed posterior collapse; for our model, we used an equivalent setting.

For the baseline model, we observe an irregular decision boundary, which also behaves unpredictably for latent codes that are far from the origin. Both uniform and tricube proposals learn a brick-like structure that covers the whole latent space. During training, we observed that the uniform proposal tends to separate proposal distributions by a small margin to ensure there is no overlap between them. As training continues, the width of the proposals grows until they cover the whole space. For the tricube proposal, we observed similar behavior, although the model tolerates slight overlaps.
4.2 Binary MNIST
To evaluate the model on imaging data, we considered a binarized MNIST (LeCun and Cortes, 2010) dataset obtained by thresholding the original grayscale images. The goal of this experiment is to visualize how DD-VAE learns 2D latent codes on a moderate-size dataset. For this experiment, we trained 4-layer fully-connected encoder and decoder networks. In Figure 6, we show the learned latent space structure for a baseline VAE with Gaussian prior and proposal and compare it to a DD-VAE with uniform prior and proposal. Note that the uniform representation evenly covers the latent space, as all points have the same prior probability. This property is useful for visualization tasks. The learned structure better separates classes, although it was trained in an unsupervised manner: a K-nearest neighbor classifier on the 2D latent codes yields higher accuracy for DD-VAE than for VAE.

Table 3. FCD/Test (lower is better) and SNN/Test (higher is better) on MOSES at 70%, 80%, and 90% sequence-wise reconstruction accuracy; mean ± std over runs.

Method | FCD/Test (↓): 70% / 80% / 90% | SNN/Test (↑): 70% / 80% / 90%
VAE (G) | 0.205±0.005 / 0.344±0.003 / 0.772±0.007 | 0.550±0.001 / 0.525±0.001 / 0.488±0.001
VAE (T) | 0.207±0.004 / 0.335±0.005 / 0.753±0.019 | 0.550±0.001 / 0.526±0.001 / 0.490±0.000
DD-VAE (G) | 0.198±0.012 / 0.312±0.011 / 0.711±0.020 | 0.555±0.001 / 0.531±0.001 / 0.494±0.001
DD-VAE (T) | 0.194±0.001 / 0.311±0.010 / 0.690±0.010 | 0.555±0.000 / 0.532±0.001 / 0.495±0.001
4.3 Molecular sets (MOSES)
In this section, we compare the models on a distribution learning task on the MOSES dataset (Polykovskiy et al., 2018a). MOSES contains approximately 1.9 million molecular structures represented as SMILES strings (Weininger, 1970; Weininger et al., 1989); MOSES also implements multiple metrics, including Similarity to Nearest Neighbor (SNN/Test) and Fréchet ChemNet Distance (FCD/Test) (Preuer et al., 2018). SNN/Test is the average Tanimoto similarity of generated molecules to the closest molecule from the test set; hence, SNN acts as precision and is high if generated molecules lie on the test set's manifold. FCD/Test computes the Fréchet distance between activations of the penultimate layer of ChemNet for the generated and test sets. Lower FCD/Test indicates a closer match between the generated and test distributions.
In this experiment, we monitor the models' behavior at high reconstruction accuracy. We trained GRU encoder and decoder networks with the same latent dimension for both VAE and DD-VAE. We pretrained the models with a small KL weight so that the sequence-wise reconstruction accuracy was high, and monitored FCD/Test and SNN/Test while gradually increasing the weight until the sequence-wise reconstruction accuracy dropped below 70%.
In the results reported in Table 3, DD-VAE outperforms VAE on both metrics. Bounded support proposals have less impact on the target metrics, although they slightly improve both FCD/Test and SNN/Test.
4.4 Bayesian Optimization
Table 4. Reconstruction accuracy, sampling validity, and predictive performance (log-likelihood and RMSE) of a sparse Gaussian process on latent codes, with top-3 scores found by Bayesian optimization.

Method | Reconstruction | Validity | LL | RMSE | top1 | top2 | top3
CVAE | 44.6% | 0.7% | −1.812±0.004 | 1.504±0.006 | 1.98 | 1.42 | 1.19
GVAE | 53.7% | 7.2% | −1.739±0.004 | 1.404±0.006 | 2.94 | 2.89 | 2.80
SD-VAE | 76.2% | 43.5% | −1.697±0.015 | 1.366±0.023 | 4.04 | 3.50 | 2.96
JT-VAE | 76.7% | 100.0% | −1.658±0.023 | 1.290±0.026 | 5.30 | 4.93 | 4.49
VAE (G) | 87.01% | 78.32% | −1.558±0.019 | 1.273±0.050 | 5.76 | 5.74 | 5.67
VAE (T) | 90.3% | 73.52% | −1.562±0.022 | 1.265±0.051 | 5.41 | 5.38 | 5.35
DD-VAE (G) | 89.39% | 63.07% | −1.481±0.020 | 1.199±0.050 | 5.13 | 4.84 | 4.80
DD-VAE (T) | 89.89% | 61.38% | −1.470±0.022 | 1.186±0.053 | 5.86 | 5.77 | 5.64
Table 5. Top-3 scores found during Bayesian optimization for VAE (Gaussian), VAE (Tricube), DD-VAE (Gaussian), and DD-VAE (Tricube).
A standard use case for generative molecular autoencoders is Bayesian optimization (BO) of molecular properties over latent codes (Gómez-Bombarelli et al., 2018). For this experiment, we trained GRU encoder and decoder networks on ZINC. We tuned hyperparameters such that the sequence-wise reconstruction accuracy on the train set was comparable for all our models. The models showed good reconstruction accuracy on the test set and good validity of the samples (Table 4). We explored the latent space using the standard two-step validation procedure proposed by Kusner et al. (2017) to show the advantage of DD-VAE's latent codes. The goal of the Bayesian optimization was to maximize the following score of a molecule m:

(25)  score(m) = logP(m) − SA(m) − RingPenalty(m),

where logP(m) is the water-octanol partition coefficient of the molecule, SA(m) is a synthetic accessibility score (Ertl and Schuffenhauer, 2009) obtained from the RDKit package (Landrum, 2006), and RingPenalty(m) penalizes the largest ring in the molecule if it consists of more than 6 atoms:

(26)  RingPenalty(m) = max(0, LargestRing(m) − 6).
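A sketch of this objective, assuming the standard penalized-logP form with each component standardized on training-set statistics; logP and SA values are taken as precomputed inputs here rather than computed with RDKit:

```python
def ring_penalty(ring_sizes):
    """Penalty for the largest ring if it has more than 6 atoms."""
    largest = max(ring_sizes, default=0)
    return max(0, largest - 6)

def penalized_logp(logp, sa, ring_sizes, stats):
    """score = norm(logP) - norm(SA) - norm(RingPenalty), where each
    component is standardized with (mean, std) pairs in `stats`,
    estimated on the training set. `stats` keys are our own naming."""
    ring = float(ring_penalty(ring_sizes))
    z = lambda v, k: (v - stats[k][0]) / stats[k][1]
    return z(logp, "logp") - z(sa, "sa") - z(ring, "ring")
```

For instance, with identity normalization (mean 0, std 1), a molecule with logP 2.0, SA 1.0, and rings of sizes 5 and 8 scores 2 − 1 − 2 = −1.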
Each component in the score is normalized by subtracting the mean and dividing by the standard deviation estimated on the training set. The validation procedure consists of two steps. First, we train a sparse Gaussian process (Snelson and Ghahramani, 2006) on the latent codes of DD-VAE trained on approximately 250,000 SMILES strings from the ZINC database, and report the predictive performance of the Gaussian process on a ten-fold cross-validation in Table 4. We compare DD-VAE to the following baselines: Character VAE, CVAE (Gómez-Bombarelli et al., 2018); Grammar VAE, GVAE (Kusner et al., 2017); Syntax-Directed VAE, SD-VAE (Dai et al., 2018); Junction Tree VAE, JT-VAE (Jin et al., 2018).

Using the trained sparse Gaussian process, we iteratively sampled latent codes using the expected improvement acquisition function and the Kriging Believer algorithm (Cressie, 1990) to select multiple points per batch. We evaluated the selected points and added the reconstructed objects to the training set. We repeated training and sampling for 5 iterations and report the molecules with the highest scores in Table 4 and Table 5. We also report top molecules for our models in Appendix D.
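The expected improvement acquisition used in the loop above can be sketched as follows for a maximization problem with a Gaussian posterior (a standard formula, not specific to this paper):

```python
import math

def expected_improvement(mu, sigma, best):
    """EI acquisition for maximization: given the GP posterior mean `mu`
    and std `sigma` at a candidate point, and the best score observed so
    far, return the expected amount by which the candidate improves on it."""
    if sigma <= 0:
        return max(mu - best, 0.0)        # degenerate (noise-free) case
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))     # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (mu - best) * cdf + sigma * pdf
```

EI is always non-negative and grows both with the posterior mean (exploitation) and with the posterior uncertainty (exploration), which is why it is a common default for latent-space molecule optimization.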
5 Discussion
The proposed model outperforms the standard VAE model on multiple downstream tasks, including Bayesian optimization of molecular structures. In the ablation studies, we noticed that models with bounded support show lower validity during sampling. We suggest that this is due to regions of the latent space that are not covered by any proposals: the decoder does not visit these areas during training and can behave unexpectedly there. We found a uniform prior suitable for downstream classification and visualization tasks, since latent codes evenly cover the latent space.
DD-VAE introduces an additional hyperparameter, the temperature, that balances the reconstruction and KL terms. Unlike the KL scale, the temperature changes the loss function and its gradients nonlinearly. We found it useful to select starting temperatures such that gradients from the KL term and the reconstruction term have the same scale at the beginning of training. Experimenting with annealing schedules, we found log-linear annealing slightly better than linear annealing.

Acknowledgements
The authors thank Maksim Kuznetsov and Alexander Zhebrak for helpful comments on the paper. Experiments on synthetic data in Section 4.1 were supported by the Russian Science Foundation grant no. 17-71-20072.
References
 Chen et al. (2017) Chen, X., Kingma, D. P., Salimans, T., Duan, Y., Dhariwal, P., Schulman, J., Sutskever, I., and Abbeel, P. (2017). Variational Lossy Autoencoder. International Conference on Learning Representations.
 Cho et al. (2014) Cho, K., van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. Association for Computational Linguistics.
 Cressie (1990) Cressie, N. (1990). The origins of kriging. Mathematical geology, 22(3):239–252.
 Dai et al. (2018) Dai, H., Tian, Y., Dai, B., Skiena, S., and Song, L. (2018). Syntax-directed variational autoencoder for molecule generation. In Proceedings of the International Conference on Learning Representations.
 De Cao and Kipf (2018) De Cao, N. and Kipf, T. (2018). MolGAN: An implicit generative model for small molecular graphs.
 Dinh et al. (2015) Dinh, L., Krueger, D., and Bengio, Y. (2015). NICE: Non-linear Independent Components Estimation. International Conference on Learning Representations Workshop.
 Dinh et al. (2017) Dinh, L., Sohl-Dickstein, J., and Bengio, S. (2017). Density Estimation Using Real NVP. International Conference on Learning Representations.
 Ertl and Schuffenhauer (2009) Ertl, P. and Schuffenhauer, A. (2009). Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions. Journal of Cheminformatics, 1(1):8.
 Ghosh et al. (2020) Ghosh, P., Sajjadi, M. S. M., Vergari, A., Black, M., and Schölkopf, B. (2020). From variational to deterministic autoencoders. In International Conference on Learning Representations.
 Gómez-Bombarelli et al. (2018) Gómez-Bombarelli, R., Wei, J. N., Duvenaud, D., Hernández-Lobato, J. M., Sánchez-Lengeling, B., Sheberla, D., Aguilera-Iparraguirre, J., Hirzel, T. D., Adams, R. P., and Aspuru-Guzik, A. (2018). Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science, 4(2):268–276.
 Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680.
 Higgins et al. (2017) Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. (2017). beta-VAE: Learning basic visual concepts with a constrained variational framework. ICLR, 2(5):6.
 Hsu et al. (2019) Hsu, W.N., Zhang, Y., Weiss, R. J., Zen, H., Wu, Y., Wang, Y., Cao, Y., Jia, Y., Chen, Z., Shen, J., et al. (2019). Hierarchical generative modeling for controllable speech synthesis. International Conference on Learning Representations.

Jin et al. (2018) Jin, W., Barzilay, R., and Jaakkola, T. (2018). Junction tree variational autoencoder for molecular graph generation. In Dy, J. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2323–2332, Stockholmsmässan, Stockholm, Sweden. PMLR.
 Kadurin et al. (2016) Kadurin, A., Aliper, A., Kazennov, A., Mamoshina, P., Vanhaelen, Q., Khrabrov, K., and Zhavoronkov, A. (2016). The cornucopia of meaningful leads: Applying deep adversarial autoencoders for new molecule development in oncology. Oncotarget, 8(7):10883.
 Kingma and Welling (2013) Kingma, D. P. and Welling, M. (2013). AutoEncoding Variational Bayes. International Conference on Learning Representations.
 Kusner et al. (2017) Kusner, M. J., Paige, B., and Hernández-Lobato, J. M. (2017). Grammar variational autoencoder. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 1945–1954. JMLR.org.

Kuznetsov et al. (2019) Kuznetsov, M., Polykovskiy, D., Vetrov, D. P., and Zhebrak, A. (2019). A prior of a googol Gaussians: a tensor ring induced prior for generative models. In Advances in Neural Information Processing Systems, pages 4104–4114.
 Landrum (2006) Landrum, G. (2006). RDKit: Open-source cheminformatics. http://www.rdkit.org.
 LeCun and Cortes (2010) LeCun, Y. and Cortes, C. (2010). MNIST handwritten digit database.
 Makhzani et al. (2016) Makhzani, A., Shlens, J., Jaitly, N., and Goodfellow, I. (2016). Adversarial autoencoders.
 Oord et al. (2016) Oord, A. V., Kalchbrenner, N., and Kavukcuoglu, K. (2016). Pixel recurrent neural networks. In Balcan, M. F. and Weinberger, K. Q., editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1747–1756, New York, New York, USA. PMLR.
 Polykovskiy et al. (2018a) Polykovskiy, D., Zhebrak, A., Sanchez-Lengeling, B., Golovanov, S., Tatanov, O., Belyaev, S., Kurbanov, R., Artamonov, A., Aladinskiy, V., Veselov, M., Kadurin, A., Nikolenko, S., Aspuru-Guzik, A., and Zhavoronkov, A. (2018a). Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models. arXiv preprint arXiv:1811.12823.
 Polykovskiy et al. (2018b) Polykovskiy, D., Zhebrak, A., Vetrov, D., Ivanenkov, Y., Aladinskiy, V., Bozdaganyan, M., Mamoshina, P., Aliper, A., Zhavoronkov, A., and Kadurin, A. (2018b). Entangled conditional adversarial autoencoder for de novo drug discovery. Molecular Pharmaceutics.
 Preuer et al. (2018) Preuer, K., Renz, P., Unterthiner, T., Hochreiter, S., and Klambauer, G. (2018). Fréchet ChemNet distance: A metric for generative models for molecules in drug discovery. J. Chem. Inf. Model., 58(9):1736–1741.
 Razavi et al. (2019) Razavi, A., van den Oord, A., and Vinyals, O. (2019). Generating diverse high-fidelity images with VQ-VAE-2. Advances in Neural Information Processing Systems.
 Segler et al. (2018) Segler, M. H. S., Kogej, T., Tyrchan, C., and Waller, M. P. (2018). Generating focused molecule libraries for drug discovery with recurrent neural networks. ACS Cent Sci, 4(1):120–131.

Semeniuta et al. (2017) Semeniuta, S., Severyn, A., and Barth, E. (2017). A hybrid convolutional variational autoencoder for text generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 627–637, Copenhagen, Denmark. Association for Computational Linguistics.
 Snelson and Ghahramani (2006) Snelson, E. and Ghahramani, Z. (2006). Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems, pages 1257–1264.
 Tolstikhin et al. (2016) Tolstikhin, I., Bousquet, O., Gelly, S., and Schoelkopf, B. (2016). Wasserstein autoencoders.

Tomczak and Welling (2018) Tomczak, J. and Welling, M. (2018). VAE with a VampPrior. In Storkey, A. and Perez-Cruz, F., editors, Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pages 1214–1223, Playa Blanca, Lanzarote, Canary Islands. PMLR.
 van den Oord et al. (2018) van den Oord, A., Li, Y., Babuschkin, I., Simonyan, K., Vinyals, O., Kavukcuoglu, K., van den Driessche, G., Lockhart, E., Cobo, L., Stimberg, F., Casagrande, N., Grewe, D., Noury, S., Dieleman, S., Elsen, E., Kalchbrenner, N., Zen, H., Graves, A., King, H., Walters, T., Belov, D., and Hassabis, D. (2018). Parallel WaveNet: Fast high-fidelity speech synthesis. In Dy, J. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3918–3926, Stockholmsmässan, Stockholm, Sweden. PMLR.
 Weininger (1970) Weininger, D. (1970). SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. 17:1–14.
 Weininger et al. (1989) Weininger, D., Weininger, A., and Weininger, J. L. (1989). SMILES. 2. Algorithm for generation of unique SMILES notation. Journal of Chemical Information and Computer Sciences, 29(2):97–101.
 You et al. (2018) You, J., Ying, R., Ren, X., Hamilton, W., and Leskovec, J. (2018). GraphRNN: Generating realistic graphs with deep autoregressive models. In International Conference on Machine Learning, pages 5694–5703.
 Yu et al. (2017) Yu, L., Zhang, W., Wang, J., and Yu, Y. (2017). SeqGAN: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence.
 Zhao et al. (2019) Zhao, S., Song, J., and Ermon, S. (2019). InfoVAE: Balancing learning and inference in variational autoencoders. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 5885–5892.
 Zhavoronkov et al. (2019) Zhavoronkov, A., Ivanenkov, Y., Aliper, A., Veselov, M., Aladinskiy, V., Aladinskaya, A., Terentiev, V., Polykovskiy, D., Kuznetsov, M., Asadulaev, A., Volkov, Y., Zholus, A., Shayakhmetov, R., Zhebrak, A., Minaeva, L., Zagribelnyy, B., Lee, L., Soll, R., Madge, D., Xing, L., Guo, T., and Aspuru-Guzik, A. (2019). Deep learning enables rapid identification of potent DDR1 kinase inhibitors. Nature Biotechnology, pages 1–4.
References
 Chen et al. (2017) Chen, X., Kingma, D. P., Salimans, T., Duan, Y., Dhariwal, P., Schulman, J., Sutskever, I., and Abbeel, P. (2017). Variational Lossy Autoencoder. International Conference on Learning Representations.
 Cho et al. (2014) Cho, K., van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. Association for Computational Linguistics.
 Cressie (1990) Cressie, N. (1990). The origins of kriging. Mathematical geology, 22(3):239–252.
 Dai et al. (2018) Dai, H., Tian, Y., Dai, B., Skiena, S., and Song, L. (2018). Syntaxdirected variational autoencoder for molecule generation. In Proceedings of the International Conference on Learning Representations.
 De Cao and Kipf (2018) De Cao, N. and Kipf, T. (2018). MolGAN: An implicit generative model for small molecular graphs.
 Dinh et al. (2015) Dinh, L., Krueger, D., and Bengio, Y. (2015). NICE: Nonlinear Independent Components Estimation. International Conference on Learning Representations Workshop.
 Dinh et al. (2017) Dinh, L., SohlDickstein, J., and Bengio, S. (2017). Density Estimation Using Real NVP. International Conference on Learning Representations.
 Ertl and Schuffenhauer (2009) Ertl, P. and Schuffenhauer, A. (2009). Estimation of synthetic accessibility score of druglike molecules based on molecular complexity and fragment contributions. Journal of cheminformatics, 1(1):8.
 Ghosh et al. (2020) Ghosh, P., Sajjadi, M. S. M., Vergari, A., Black, M., and Scholkopf, B. (2020). From variational to deterministic autoencoders. In International Conference on Learning Representations.
 GómezBombarelli et al. (2018) GómezBombarelli, R., Wei, J. N., Duvenaud, D., HernándezLobato, J. M., SánchezLengeling, B., Sheberla, D., AguileraIparraguirre, J., Hirzel, T. D., Adams, R. P., and AspuruGuzik, A. (2018). Automatic chemical design using a datadriven continuous representation of molecules. ACS central science, 4(2):268–276.
 Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680.
 Higgins et al. (2017) Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. (2017). beta-VAE: Learning basic visual concepts with a constrained variational framework. ICLR, 2(5):6.
 Hsu et al. (2019) Hsu, W.-N., Zhang, Y., Weiss, R. J., Zen, H., Wu, Y., Wang, Y., Cao, Y., Jia, Y., Chen, Z., Shen, J., et al. (2019). Hierarchical generative modeling for controllable speech synthesis. International Conference on Learning Representations.

 Jin et al. (2018) Jin, W., Barzilay, R., and Jaakkola, T. (2018). Junction tree variational autoencoder for molecular graph generation. In Dy, J. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2323–2332, Stockholmsmässan, Stockholm, Sweden. PMLR.
 Kadurin et al. (2016) Kadurin, A., Aliper, A., Kazennov, A., Mamoshina, P., Vanhaelen, Q., Khrabrov, K., and Zhavoronkov, A. (2016). The cornucopia of meaningful leads: Applying deep adversarial autoencoders for new molecule development in oncology. Oncotarget, 8(7):10883.
 Kingma and Welling (2013) Kingma, D. P. and Welling, M. (2013). Auto-Encoding Variational Bayes. International Conference on Learning Representations.
 Kusner et al. (2017) Kusner, M. J., Paige, B., and Hernández-Lobato, J. M. (2017). Grammar variational autoencoder. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 1945–1954. JMLR.org.

 Kuznetsov et al. (2019) Kuznetsov, M., Polykovskiy, D., Vetrov, D. P., and Zhebrak, A. (2019). A prior of a googol Gaussians: a tensor ring induced prior for generative models. In Advances in Neural Information Processing Systems, pages 4104–4114.
 Landrum (2006) Landrum, G. (2006). RDKit: Open-source cheminformatics. Online: http://www.rdkit.org (accessed 2012).
 LeCun and Cortes (2010) LeCun, Y. and Cortes, C. (2010). MNIST handwritten digit database.
 Makhzani et al. (2016) Makhzani, A., Shlens, J., Jaitly, N., and Goodfellow, I. (2016). Adversarial autoencoders.
 Oord et al. (2016) Oord, A. V., Kalchbrenner, N., and Kavukcuoglu, K. (2016). Pixel recurrent neural networks. In Balcan, M. F. and Weinberger, K. Q., editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1747–1756, New York, New York, USA. PMLR.
 Polykovskiy et al. (2018a) Polykovskiy, D., Zhebrak, A., Sanchez-Lengeling, B., Golovanov, S., Tatanov, O., Belyaev, S., Kurbanov, R., Artamonov, A., Aladinskiy, V., Veselov, M., Kadurin, A., Nikolenko, S., Aspuru-Guzik, A., and Zhavoronkov, A. (2018a). Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models. arXiv preprint arXiv:1811.12823.
 Polykovskiy et al. (2018b) Polykovskiy, D., Zhebrak, A., Vetrov, D., Ivanenkov, Y., Aladinskiy, V., Bozdaganyan, M., Mamoshina, P., Aliper, A., Zhavoronkov, A., and Kadurin, A. (2018b). Entangled conditional adversarial autoencoder for de novo drug discovery. Molecular Pharmaceutics.
 Preuer et al. (2018) Preuer, K., Renz, P., Unterthiner, T., Hochreiter, S., and Klambauer, G. (2018). Fréchet ChemNet distance: A metric for generative models for molecules in drug discovery. J. Chem. Inf. Model., 58(9):1736–1741.
 Razavi et al. (2019) Razavi, A., Oord, A. v. d., and Vinyals, O. (2019). Generating diverse high-fidelity images with VQ-VAE-2. Advances In Neural Information Processing Systems.
 Segler et al. (2018) Segler, M. H. S., Kogej, T., Tyrchan, C., and Waller, M. P. (2018). Generating focused molecule libraries for drug discovery with recurrent neural networks. ACS Cent Sci, 4(1):120–131.

 Semeniuta et al. (2017) Semeniuta, S., Severyn, A., and Barth, E. (2017). A hybrid convolutional variational autoencoder for text generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 627–637, Copenhagen, Denmark. Association for Computational Linguistics.
 Snelson and Ghahramani (2006) Snelson, E. and Ghahramani, Z. (2006). Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems, pages 1257–1264.
 Tolstikhin et al. (2016) Tolstikhin, I., Bousquet, O., Gelly, S., and Schoelkopf, B. (2016). Wasserstein autoencoders.

 Tomczak and Welling (2018) Tomczak, J. and Welling, M. (2018). VAE with a VampPrior. In Storkey, A. and Perez-Cruz, F., editors, Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pages 1214–1223, Playa Blanca, Lanzarote, Canary Islands. PMLR.
 van den Oord et al. (2018) van den Oord, A., Li, Y., Babuschkin, I., Simonyan, K., Vinyals, O., Kavukcuoglu, K., van den Driessche, G., Lockhart, E., Cobo, L., Stimberg, F., Casagrande, N., Grewe, D., Noury, S., Dieleman, S., Elsen, E., Kalchbrenner, N., Zen, H., Graves, A., King, H., Walters, T., Belov, D., and Hassabis, D. (2018). Parallel WaveNet: Fast high-fidelity speech synthesis. In Dy, J. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3918–3926, Stockholmsmässan, Stockholm, Sweden. PMLR.
 Weininger (1970) Weininger, D. (1970). SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. 17:1–14.
 Weininger et al. (1989) Weininger, D., Weininger, A., and Weininger, J. L. (1989). SMILES. 2. Algorithm for generation of unique SMILES notation. Journal of chemical information and computer sciences, 29(2):97–101.
 You et al. (2018) You, J., Ying, R., Ren, X., Hamilton, W., and Leskovec, J. (2018). GraphRNN: Generating realistic graphs with deep autoregressive models. In International Conference on Machine Learning, pages 5694–5703.
 Yu et al. (2017) Yu, L., Zhang, W., Wang, J., and Yu, Y. (2017). SeqGAN: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence.
 Zhao et al. (2019) Zhao, S., Song, J., and Ermon, S. (2019). InfoVAE: Balancing learning and inference in variational autoencoders. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 5885–5892.
 Zhavoronkov et al. (2019) Zhavoronkov, A., Ivanenkov, Y., Aliper, A., Veselov, M., Aladinskiy, V., Aladinskaya, A., Terentiev, V., Polykovskiy, D., Kuznetsov, M., Asadulaev, A., Volkov, Y., Zholus, A., Shayakhmetov, R., Zhebrak, A., Minaeva, L., Zagribelnyy, B., Lee, L., Soll, R., Madge, D., Xing, L., Guo, T., and Aspuru-Guzik, A. (2019). Deep learning enables rapid identification of potent DDR1 kinase inhibitors. Nature biotechnology, pages 1–4.
Appendix A Proof of Theorem 1
We prove the theorem using five lemmas.
Lemma 1.
converges to pointwise when converges to from the right:
(27) 
Proof.
To prove Eq. 27, we first show that our approximation in Eq. 10 from the main paper converges pointwise to :
(28) 
If is negative, both and converge to , hence converges to zero. If is zero, then which also converges to zero. Finally, for positive we apply L’Hôpital’s rule to compute the limit:
(29) 
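The limit in Eq. 29 is not recoverable from this version of the text. As a hedged illustration only, suppose the smoothed indicator were a tempered sigmoid, i.e. an assumption made here for exposition and not necessarily the paper's actual Eq. 10 approximation. Under that stand-in, the pointwise limit of its logarithm is:

```latex
% Illustrative only: assumes a tempered sigmoid smoothing,
% \sigma_{\tau}(x) = (1 + e^{-x/\tau})^{-1},
% which may differ from the paper's actual Eq. 10.
\lim_{\tau \to 0^{+}} \log \sigma_{\tau}(x)
  = -\lim_{\tau \to 0^{+}} \log\!\left(1 + e^{-x/\tau}\right)
  = \begin{cases}
      0, & x > 0, \\
      -\log 2, & x = 0, \\
      -\infty, & x < 0,
    \end{cases}
```

matching the log-indicator for nonzero arguments. Note that at $x = 0$ this illustrative sigmoid gives the constant $-\log 2$ rather than the behavior described in the text, another reminder that the paper's own approximation differs from this stand-in.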
To prove the theorem, we consider two cases. First, if , then for some , , and ,
(30) 
From the equation above, it follows that for the given parameters the model violates the indicators with positive probability. For such , the smoothed indicator function takes values less than , so the expectation of its logarithm tends to as .
The second case is . Since , the indicators are violated only with probability zero, which does not contribute to the loss in either or . For all , and , consider the distribution of a random variable obtained from the distribution . Let be the maximal value of . We now need to prove that
(31) 
For any , we select such that . For the next step we will use the fact that , where . By selecting small enough such that , we split the integration range of the expectation over into three segments: , , . A lower bound on in each segment is given by its value at the left end: , , . Also, since and is continuous on the compact support of , the density is bounded by some constant . This estimate gives the final lower bound, using the pointwise convergence of :
(32) 
We used , which can be proved by applying L’Hôpital’s rule twice. ∎
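To complement the proof sketch, here is a minimal numeric check of the pointwise convergence, again assuming a tempered sigmoid as a hypothetical stand-in for the paper's smoothed indicator (the actual Eq. 10 is not reproduced in this text):

```python
import math

def smoothed_indicator(x: float, tau: float) -> float:
    """Tempered sigmoid: a *hypothetical* stand-in for the paper's
    smoothed indicator (its Eq. 10 is not recoverable from this text)."""
    return 1.0 / (1.0 + math.exp(-x / tau))

# Pointwise behavior as tau -> 0+:
#   x > 0  ->  sigma_tau(x) -> 1   (so its log -> 0)
#   x = 0  ->  sigma_tau(0)  = 1/2 for every tau
#   x < 0  ->  sigma_tau(x) -> 0   (so its log -> -infinity)
for tau in (1.0, 0.1, 0.01):
    print(f"tau={tau:5.2f}  "
          f"sigma(+0.5)={smoothed_indicator(0.5, tau):.6f}  "
          f"sigma(-0.5)={smoothed_indicator(-0.5, tau):.3e}")
```

Shrinking the temperature sharpens the relaxation toward the hard indicator, which is the mechanism the lemma's pointwise-convergence argument relies on.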
Proposition 1.
For our model, is finite if and only if the sequence-wise reconstruction error rate is zero:
(33) 
Lemma 2.
The sequence-wise reconstruction error rate is continuous.
Proof.
By the equicontinuity in total variation of at for any and the finiteness of , for any there exists such that for any and any such that
(34) 
For parameters and , we estimate the difference in function values
(35) 
Symmetrically, , resulting in being continuous. ∎
Lemma 3.
The sequence-wise reconstruction error rate converges to zero:
(36) 
The convergence rate is .
Proof.
Since is not empty, there exists . From the pointwise convergence of to at the point , for any there exists such that for any :
(37) 
Next, we derive an upper bound on using the fact that if , and if :
(38) 
Combining Eq. 37 and Eq. 38 together we get
(39) 
Adding the definition of , we obtain
(40) 
The right-hand side goes to zero as goes to infinity; hence and with the convergence rate . Since is continuous, . ∎
Lemma 4.
attains its supremum:
(41) 
Proof.
From Lemma 3, . Hence, for a choice of from the theorem statement, . Equivalently, .
Note that since is continuous on a compact set, is a compact set. Also, is constant with respect to on . From the theorem statement, for any such that , there exists such that . Combining all statements together,
(42) 
In , is a continuous function: ,
(43) 
Hence, the continuous function attains its supremum on a compact set at some point , where . ∎
Lemma 5.
The parameters from the theorem statement are optimal:
(44) 
Proof.
Assume that . Since and , and . As a result, from our assumption, .
From the continuity of the divergence, for any , there exists such that if ,
(45) 
From the convergence of to and convergence of to zero, there exists such that for any , .
From the pointwise convergence of at the point to , for any , there exists such that for all ,