Deep generative modeling has seen many successful recent developments, such as producing realistic images from noise (Radford et al., 2015) and creating artwork (Gatys et al., 2016). We find particularly promising the opportunity to leverage deep generative models for search in high-dimensional discrete spaces (Gómez-Bombarelli et al., 2016b; Kusner et al., 2017). Discrete search is at the heart of problems in drug discovery (Gómez-Bombarelli et al., 2016a), natural language processing (Bowman et al., 2016; Guimaraes et al., 2017), and symbolic regression (Kusner et al., 2017).
The application of deep modeling to search involves ‘lifting’ the search from the discrete space to a continuous space, via an autoencoder (Rumelhart et al., 1985). An autoencoder learns two mappings: 1) a mapping from discrete space to continuous space called an encoder; and 2) a reverse mapping from continuous space back to discrete space called a decoder. The discrete space is presented to the autoencoder as a sequence in some formal language — for example, in Gómez-Bombarelli et al. (2016b) molecules are encoded as SMILES strings — and powerful sequential models (e.g., LSTMs (Hochreiter & Schmidhuber, 1997), GRUs (Cho et al., 2014), DCNNs (Kalchbrenner et al., 2014)) are applied to the string representation. However, when employing these models as encoders and decoders, it is possible to generate invalid sequences, and with current techniques this happens frequently. Kusner et al. (2017) aimed to fix this by basing the sequential models on parse tree representations of the discrete structures, where externally specified grammatical rules assist the model in the decoding process. This work boosted the ability of the model to produce valid sequences during decoding, but the performance achieved by this method leaves scope for improvement, and the method requires hand-crafted grammatical rules for each application domain.
In this paper, we propose a generative approach to modeling validity that can learn the validity constraints of a given discrete space. We show how concepts from reinforcement learning may be used to define a suitable generative model, and how this model can be approximated using sequence-based deep learning techniques. To assist in training this generative model we propose two data augmentation techniques. Where no labeled data set of valid and invalid sequences is available, we propose a novel approach to active learning for sequential tasks inspired by classic mutual-information-based approaches (Houlsby et al., 2011; Hernández-Lobato et al., 2014). In the context of molecules, where data sets containing valid molecule examples do exist, we propose an effective data augmentation process based on applying minimal perturbations to known-valid sequences. These two techniques allow us to rapidly learn sequence validity models that can be used as a) generative models, which we demonstrate in the context of Python 3 mathematical expressions, and b) a grammar model for character-based sequences that can drastically improve the ability of deep models to decode valid discrete structures from continuous representations. We demonstrate the latter in the context of molecules represented as SMILES strings.
2 A model for sequence validity
To formalise the problem, we denote by X the set of discrete sequences of length T built from an alphabet C of size |C|. Individual sequences in X are denoted x = (x_1, …, x_T). We assume the availability of a validator v: X → {0, 1}, an oracle which can tell us whether a given sequence is valid. It is important to note that such a validator gives very sparse feedback: it can only be evaluated on a complete sequence. Examples of such validators are compilers for programming languages (which can identify syntax and type errors) and chemo-informatics software for parsing SMILES strings (which identifies violations of valence constraints). Running the standard validity checker on a partial sequence or subsequence (e.g., the first t characters of a computer program) does not in general provide any indication as to whether the complete sequence of length T is valid.
We aim to obtain a generative model for the set X⁺ ⊆ X of valid sequences. To achieve this, we would ideally like to be able to query a more-informative function ṽ which operates on prefixes x_{1:t} of a hypothetical longer sequence and outputs

ṽ(x_{1:t}) = 1 if there exists a suffix s such that v(x_{1:t} ⌢ s) = 1, and 0 otherwise, (1)

where ⌢ concatenates a prefix and a suffix to form a complete sequence. The function ṽ can be used to determine whether a given prefix can ever successfully yield a valid outcome. Note that we are indifferent to how many suffixes yield valid sequences. With access to ṽ, we could create a generative model for X⁺ which constructs sequences from left to right, a single character at a time, using ṽ to provide early feedback as to which of the next character choices will surely not lead to a “dead end” from which no valid sequence can be produced.
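The prefix validator ṽ in (1) can be made concrete with a toy oracle. The sketch below is a hypothetical example — balanced parentheses stand in for a compiler or SMILES parser — and computes ṽ by brute force over all suffixes, which is feasible only for tiny alphabets and lengths:

```python
from itertools import product

# Toy stand-in for the black-box validator v: a sequence over the
# alphabet {'(', ')'} is valid iff its parentheses are balanced.
def v(seq):
    depth = 0
    for ch in seq:
        depth += 1 if ch == '(' else -1
        if depth < 0:
            return 0
    return int(depth == 0)

ALPHABET = ['(', ')']
T = 6  # total sequence length

# Prefix validity (1): v_tilde(prefix) = 1 iff some suffix completes the
# prefix into a valid length-T sequence. Brute force over all suffixes.
def v_tilde(prefix):
    remaining = T - len(prefix)
    return int(any(v(list(prefix) + list(suffix))
                   for suffix in product(ALPHABET, repeat=remaining)))
```

For instance, the prefix "(((((" is rejected because a single remaining character cannot close five open brackets, while "((" can still be completed.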
We frame the problem of modeling ṽ as a Markov decision process (Sutton & Barto, 1998) for which we train a reinforcement learning agent to select characters sequentially in a manner that avoids producing invalid sequences. At time t, the agent is in state s_t = x_{1:t−1} and can take actions a_t ∈ C. At the end of an episode, following the final action a_T, the agent receives a reward of v(x). Since in practice we are only able to evaluate v in a meaningful way on complete sequences, the agent does not receive any reward at any of the intermediate steps t < T. The optimal Q-function (Watkins, 1989), a function Q*(s_t, a_t) of a state and an action, represents the expected reward of an agent which takes action a_t at state s_t and follows an optimal policy thereafter. This optimal Q-function assigns value 1 to actions a_t in state s_t for which there exists a suffix s such that v(x_{1:t} ⌢ s) = 1, and value 0 to all other state/action pairs. This behaviour exactly matches the desired prefix validator in (1), that is, Q*(s_t, a_t) = ṽ(x_{1:t}), and so for the reinforcement learning environment as specified, learning ṽ corresponds to learning the Q-function.
Having access to Q* would allow us to obtain a generative model for X⁺. In particular, an agent following any optimal policy will always generate valid sequences. If we sample uniformly at random across all optimal actions at each time step t, we obtain the joint distribution given by

p(x) = ∏_{t=1}^{T} Q*(s_t, x_t) / Z_t, (2)

where the Z_t = ∑_{a ∈ C} Q*(s_t, a) are the per-timestep normalisation constants. This distribution allows us to sample sequences in a straightforward manner by sequentially selecting characters given the previously selected ones in x.
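The sampling scheme behind (2) can be sketched directly, reusing the hypothetical balanced-parentheses oracle as a stand-in for the optimal Q-function: at each step the agent chooses uniformly among the actions whose extended prefix can still be completed, so every generated sequence is valid.

```python
import random
from itertools import product

ALPHABET = ['(', ')']
T = 6

def v(seq):  # toy validity oracle: balanced parentheses (hypothetical example)
    depth = 0
    for ch in seq:
        depth += 1 if ch == '(' else -1
        if depth < 0:
            return 0
    return int(depth == 0)

def v_tilde(prefix):  # brute-force prefix validity, tiny spaces only
    remaining = T - len(prefix)
    return int(any(v(prefix + list(suffix))
                   for suffix in product(ALPHABET, repeat=remaining)))

# Draw one sample from the joint distribution in (2): at each step, choose
# uniformly among the actions whose extended prefix can still be completed
# into a valid sequence. Every sample produced this way is valid.
def sample_valid(rng=random):
    x = []
    for _ in range(T):
        allowed = [c for c in ALPHABET if v_tilde(x + [c])]
        x.append(rng.choice(allowed))
    return ''.join(x)
```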
In this work we focus on learning an approximation to the distribution in (2). For this, we use recurrent neural networks, which have recently shown remarkable empirical success in the modeling of sequential data, e.g., in natural language processing applications (Sutskever et al., 2014). We approximate the optimal Q-function with a long short-term memory (LSTM) model (Hochreiter & Schmidhuber, 1997) that has one output unit per character in C, with each output unit using a logistic activation function (see figure 1), such that the output lies in the closed interval [0, 1]. We denote by ỹ(x_t | x_{1:t−1}; W) the value at time t of the LSTM output unit corresponding to character x_t when the network weights are W and the input is the sequence x_{1:t−1}. We interpret the neural network output as ỹ(x_t | x_{1:t−1}; W) ≈ Q*(s_t, a_t), that is, as the probability that action a_t = x_t can yield a valid sequence given that the current state is s_t = x_{1:t−1}.
Within our framing, a sequence x will be valid according to our model if every action during the sequence generation process is permissible, that is, if ỹ(x_t | x_{1:t−1}; W) > 1/2 for t = 1, …, T. Similarly, we consider that the sequence will be invalid if at least one action during the sequence generation process is not valid, that is, if ỹ(x_t | x_{1:t−1}; W) ≤ 1/2 at least once for t = 1, …, T. (Note that, once ỹ is zero, all the following values of ỹ in that sequence will be irrelevant to us. Therefore, we can safely assume that a sequence is invalid if ỹ is zero at least once in the sequence.) This specifies the following log-likelihood function given a training set of sequences x⁽¹⁾, …, x⁽ᴺ⁾ and corresponding labels y⁽¹⁾, …, y⁽ᴺ⁾ ∈ {0, 1}:

L(W) = ∑_{n=1}^{N} [ y⁽ⁿ⁾ log y(x⁽ⁿ⁾; W) + (1 − y⁽ⁿ⁾) log(1 − y(x⁽ⁿ⁾; W)) ], (3)
where, following from the above characterisation of valid and invalid sequences, we define the model's probability that a sequence is valid as

y(x; W) = ∏_{t=1}^{T} ỹ(x_t | x_{1:t−1}; W), (4)

according to our model's predictions. The log-likelihood (3) is maximised by any setting of the weights W such that y(x⁽ⁿ⁾; W) = y⁽ⁿ⁾ for all n.
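As a numerical sketch, assuming (as in (4)) that the model's sequence-level validity probability is the product of its per-step outputs, the likelihood (3) can be computed as follows; `step_probs` stands in for the outputs ỹ of a trained network, which we treat as given:

```python
import math

# Model probability that a whole sequence is valid: the product over
# time steps of the per-step outputs (every action must be permissible).
def seq_validity_prob(step_probs):
    p = 1.0
    for y_t in step_probs:
        p *= y_t
    return p

# Bernoulli log-likelihood (3) over a labelled training set:
# labels[n] is 1 if sequence n is valid and 0 otherwise.
def log_likelihood(step_prob_seqs, labels):
    total = 0.0
    for steps, label in zip(step_prob_seqs, labels):
        p = seq_validity_prob(steps)
        total += math.log(p) if label == 1 else math.log(1.0 - p)
    return total
```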
Instead of directly maximising (3), we can follow a Bayesian approach to obtain estimates of uncertainty in the predictions of our LSTM model. For this, we can introduce dropout layers which stochastically zero out units in the input and hidden layers of the LSTM model according to a Bernoulli distribution (Gal & Ghahramani, 2016). Under the assumption of a Gaussian prior over the weights, the resulting stochastic process yields an implicit approximation q(W) to the posterior distribution p(W | D). We do this to obtain uncertainty estimates, allowing us to perform efficient active learning, as described in section 3.1.
3 Online generation of synthetic training data
One critical aspect of learning ỹ as described above is how to generate the training set D in a sensible manner. A naïve approach could be to draw elements from X uniformly at random. However, in many cases X contains only a tiny fraction of valid sequences, and the uniform sampling approach produces extremely unbalanced sets which contain very little information about the structure of valid sequences. While rejection sampling can be used to increase the number of positive samples, the resulting additional cost makes such an alternative infeasible in most practical cases. The problem gets worse as the length T of the sequences considered increases, since |X| always grows as |C|^T, while |X⁺| will typically grow at a lower rate.
We employ two approaches for artificially constructing balanced sets that permit learning these models with far fewer samples than |X|. In settings where we do not have a corpus of known valid sequences, Bayesian active learning can automatically construct the training set D. This method works by iteratively selecting sequences in X that are maximally informative about the model parameters given the data collected so far (MacKay, 1992). When we do have a set of known valid sequences, we use these to seed a process for generating balanced sets by applying random perturbations to valid sequences.
3.1 Active learning
Let x denote an arbitrary sequence and let y be the unknown binary label indicating whether x is valid or not. Our model's predictive distribution for y, that is, p(y | x, D), is given by (4). The amount of information on the weights W that we expect to gain by labeling x and adding it to D can be measured in terms of the expected reduction in the entropy of the posterior distribution p(W | D). That is,

α(x) = H[p(W | D)] − E_{p(y | x, D)} H[p(W | D ∪ {(x, y)})], (5)
where H(·) computes the entropy of a distribution. This formulation of the entropy-based active learning criterion is, however, difficult to approximate, because it requires us to condition on y — effectively retraining the model for every candidate point and label. To obtain a simpler expression we follow Houlsby et al. (2011) and note that (5) is equal to the mutual information between y and W given x and D:

α(x) = H[p(y | x, D)] − E_{p(W | D)} H[p(y | x, W)], (6)
which is easier to work with as the required entropy is now that of Bernoulli predictive distributions, an analytic quantity. Let B(p) denote a Bernoulli distribution with probability p, and with probability mass p^z (1 − p)^{1−z} for values z ∈ {0, 1}. The entropy of B(p) can be easily obtained as

H[B(p)] = −p log p − (1 − p) log(1 − p). (7)
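This Bernoulli entropy is a one-liner; a minimal sketch in nats, with the degenerate endpoints handled explicitly:

```python
import math

# Entropy (7) of a Bernoulli distribution B(p), in nats.
def bernoulli_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0  # degenerate distributions carry no uncertainty
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)
```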
The expectation with respect to p(W | D) can be easily approximated by Monte Carlo. We could attempt to sequentially construct D by optimising (6). However, this optimisation process would still be difficult, as it would require evaluating α(x) exhaustively on all the elements of X. To avoid this, we follow a greedy approach and construct our informative sequence in a sequential manner. In particular, at each time step t, we select x_t by optimising the mutual information between W and the model's prediction ỹ(x_t | x_{1:t−1}; W), where x_{1:t−1} denotes here the prefix already selected at previous steps of the optimisation process. This mutual information quantity is denoted by α(x_t | x_{1:t−1}) and its expression is given by

α(x_t | x_{1:t−1}) = H[B(E_{p(W | D)} ỹ(x_t | x_{1:t−1}; W))] − E_{p(W | D)} H[B(ỹ(x_t | x_{1:t−1}; W))]. (8)
The generation of an informative sequence can then be performed efficiently by sequentially optimising (8), an operation that requires only T × |C| evaluations of ỹ.
To obtain an approximation to (8), we first approximate the posterior distribution p(W | D) with the dropout approximation q(W) and then estimate the expectations in (8) by Monte Carlo using K samples W_1, …, W_K drawn from q(W). The resulting estimator is given by

α̂(x_t | x_{1:t−1}) = H[B(K⁻¹ ∑_{k=1}^{K} ỹ_k)] − K⁻¹ ∑_{k=1}^{K} H[B(ỹ_k)], (9)
where ỹ_k = ỹ(x_t | x_{1:t−1}; W_k) and H[B(·)] is defined in (7). The nonlinearity of H[B(·)] means that our Monte Carlo approximation is biased, but still consistent. We found that reasonable estimates can be obtained even for small K, and that is the regime we use in our experiments.
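The estimator can be sketched as follows: the mutual information is large exactly when the dropout samples disagree with one another. The function below assumes the K sampled predictions are already available as a list of probabilities:

```python
import math

def bernoulli_entropy(p):  # Bernoulli entropy, in nats
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

# Monte Carlo estimate of the mutual information: entropy of the mean
# prediction minus the mean entropy of the individual predictions.
# `samples` holds the K values obtained by running the network under K
# independent dropout masks.
def mutual_information_estimate(samples):
    mean_p = sum(samples) / len(samples)
    mean_entropy = sum(bernoulli_entropy(p) for p in samples) / len(samples)
    return bernoulli_entropy(mean_p) - mean_entropy
```

When all samples agree the score is (numerically) zero; confident disagreement, e.g. predictions near 0 and near 1, gives a score close to log 2.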
The iterative procedure just described is designed to produce a single informative sequence. In practice, we would like to generate a batch of informative and diverse sequences. The reason for this is that, when training neural networks, processing a batch of data is computationally more efficient than individually processing multiple data points. To construct a batch of M informative sequences, we propose to repeat the previous iterative procedure M times. To introduce diversity into the batch-generation process, we “soften” the greedy maximisation operation at each step by injecting a small amount of noise into the evaluation of the objective function (Finkel et al., 2006). Besides introducing diversity, this can also lead to better overall solutions than those produced by the noiseless greedy approach (Cho, 2016). We introduce noise into the greedy selection process by sampling from

p(x_t) ∝ exp( α(x_t | x_{1:t−1}) / τ ) (10)

for each x_t, which is a Boltzmann distribution with sampling temperature τ. By adjusting this temperature parameter, we can trade off the diversity of samples in the batch against their similarity.
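A minimal sketch of sampling from such a Boltzmann distribution over precomputed scores (here the scores stand in for the estimated per-character mutual-information values; the stabilising shift is a standard numerical precaution):

```python
import math
import random

# Sample an index i with probability proportional to exp(score_i / tau).
# Low temperature tau approaches greedy argmax selection; high tau
# approaches uniform sampling, increasing batch diversity.
def boltzmann_sample(scores, tau, rng=random):
    shift = max(s / tau for s in scores)  # stabilise the exponentials
    weights = [math.exp(s / tau - shift) for s in scores]
    total = sum(weights)
    r = rng.random() * total
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r <= cumulative:
            return i
    return len(weights) - 1
```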
3.2 Data augmentation
In some settings, such as the molecule domain we will consider later, we have databases of known-valid examples (e.g. collections of known drug-like molecules), but sets of invalid examples are rarely available. Obtaining invalid sequences may seem trivial, as samples drawn uniformly from X are almost always invalid; however, such samples are almost always so far from any valid sequence that they carry little information about the boundary between valid and invalid sequences. Using just a known data set also carries the danger of overfitting to the subset of X⁺ covered by the data.
We address this by perturbing sequences from a database of valid sequences, such that approximately half of the sequences generated this way are invalid. These perturbed sequences are constructed by setting each x_t to a symbol selected independently and uniformly from C with probability γ, while keeping the original x_t with probability 1 − γ. In expectation this changes γT entries in the sequence. We choose γ such that the resulting synthetic data is approximately 50% valid.
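The perturbation scheme is a few lines; the sketch below assumes string sequences and treats the perturbation rate gamma as a free parameter to be tuned until roughly half the perturbed sequences are invalid:

```python
import random

# Independently resample each position with probability gamma, keeping it
# unchanged otherwise; the resampled symbol is drawn uniformly from the
# alphabet (and may coincide with the original).
def perturb(seq, alphabet, gamma, rng=random):
    return ''.join(rng.choice(alphabet) if rng.random() < gamma else ch
                   for ch in seq)
```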
4 Experiments
We test the proposed validity checker in two environments. First, we look at fixed-length Python 3 mathematical expressions, where we derive lower bounds on the support of our model and compare the performance of active learning with that achieved by a simple passive approach. Second, we look at molecular structures encoded into a string representation, where we utilise existing molecule data sets together with our proposed data augmentation method to learn the rules governing molecule string validity. We test the efficacy of our validity checker on the downstream task of decoding valid molecules from a continuous latent representation given by a variational autoencoder. The code to reproduce these experiments is available online at https://github.com/DavidJanz/molecule_grammar_rnn.
4.1 Mathematical expressions
We illustrate the utility of the proposed validity model and sequential Bayesian active learning in the context of Python 3 mathematical expressions. Here, X consists of all length-25 sequences that can be constructed from the alphabet of numbers and symbols shown in table 1. The validity of any given expression is determined using the Python 3 eval function: a valid expression is one that does not raise an exception when evaluated.
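A sketch of this validity oracle follows. Stripping the builtins is our own precaution, not part of the paper's setup, so that only bare arithmetic is evaluated:

```python
# Validity oracle for the mathematical-expressions domain: evaluate the
# candidate with Python's eval and call it valid iff no exception is
# raised (syntax errors and runtime errors such as division by zero both
# count as invalid).
def is_valid_expression(expr):
    try:
        eval(expr, {"__builtins__": {}}, {})
        return 1
    except Exception:
        return 0
```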
Measuring model performance
Within this problem domain we do not assume the existence of a data set of positive examples. Without a validation data set on which to measure performance, we compare the models in terms of their capability to provide high-entropy distributions over valid sequences. We define a generative procedure to sample from the model and measure the validity and entropy of the samples. To sample stochastically, we use a Boltzmann policy, i.e. a policy which samples next actions according to

π_τ(x_t | x_{1:t−1}) ∝ exp( ỹ(x_t | x_{1:t−1}; W) / τ ),

where τ is a temperature constant that governs the trade-off between exploration and exploitation. Note that this is not the same as the Boltzmann distribution used as a proposal generation scheme during active learning, which was defined not on Q-function values but rather on the estimated mutual information.
We obtain samples for a range of temperatures τ and compute the validity fraction and entropy of each set of samples. These points trace out a curve of the trade-off between validity and entropy that a given model provides. Without a preferred level of sequence validity, the area under this validity-entropy curve (V-H AUC) can be utilised as a metric of model quality. To provide some context for the entropy values, we estimate an information-theoretic lower bound on the fraction of the set X⁺ that our model is able to generate. This translates into an upper bound on the false negative rate of our model.
Experimental setup and results
We train two models using our proposed Q-function method: passive, where training sequences are sampled from a uniform distribution over X, and active, where we use the procedure described in section 3.1 to select training sequences. The two models are otherwise identical.
Both trained models give a diverse output distribution over valid sequences (figure 2). However, as expected, we find that the active method learns a model of sequence validity much more rapidly than sampling uniformly from X, and the corresponding converged model is capable of generating many more distinct valid sequences than the one trained using the passive method. In table 2 we present lower bounds on the support of the two respective models. The details of how this lower bound is computed can be found in appendix A. Note that the overhead of the active learning data generation procedure is minimal: processing 10,000 sequences takes 31s with the passive approach versus 37s with the active one.
Table 2: lower bounds on model support at a range of temperatures, for the passive and active models.
4.2 SMILES molecules
SMILES strings (Weininger, 1970) are one of the most common representations for molecules, consisting of an ordering of atoms and bonds. The representation is attractive for many applications because it maps the graphical representation of a molecule to a sequential one, capturing not just the molecule's chemical composition but also its structure. This structural information is captured by intricate dependencies in SMILES strings based on chemical properties of individual atoms and valid atom connectivities. For instance, the atom bromine can only bond with a single other atom, meaning that it may only occur at the beginning or end of a SMILES string, or within a so-called ‘branch’, denoted by a bracketed expression (Br). We illustrate some of these rules, including a bromine branch, in figure 3, with a graphical representation of a molecule alongside its corresponding SMILES string. There, we also show examples of how a string may fail to form a valid SMILES molecule representation. The full SMILES alphabet is presented in table 3.
Table 3: the SMILES alphabet, grouped as: B C N O S P F I H Cl Br @ | = # / \ 1 2 3 4 5 6 7 8 | - + | ( )
The intricacy of SMILES strings makes them a suitable testing ground for our method. There are two technical distinctions between this experimental setup and the previously considered Python 3 mathematical expressions. First, as there exist databases of SMILES strings, we leverage them by using the data augmentation technique described in section 3.2. The main data source considered is the ZINC data set (Irwin & Shoichet, 2005), as used in Kusner et al. (2017). We also use the USPTO 15k reaction products data (Lowe, 2014) and a set of molecule solubility information (Huuskonen, 2000) as withheld test data. Second, whereas we used fixed-length Python 3 expressions in order to obtain coverage bounds, molecules are inherently of variable length. We deal with this by padding all molecules to a fixed length.
Validating grammar model accuracy
As a first test of the suitability of our proposed validity model, we train it on augmented ZINC data and examine the accuracy of its predictions on a withheld test partition of that same data set, as well as on the two unseen molecule data sets. Accuracy is the ability of the model to recognise which perturbations make a given SMILES string invalid and which leave it valid — effectively, how well the model has captured the grammar of SMILES strings in the vicinity of the data manifold. Recalling that a sequence is invalid if ỹ(x_t | x_{1:t−1}; W) ≤ 1/2 at any step t, we take the model's prediction for a molecule x to be valid if and only if ỹ(x_t | x_{1:t−1}; W) > 1/2 for every t, and compare this to its true label as given by RDKit, a chemical informatics software package. The results are encouraging, with the model achieving 0.998 accuracy on perturbed ZINC (test) and 1.000 accuracy on both perturbed USPTO and perturbed Solubility withheld data. The perturbation rate was selected such that approximately half of the perturbed strings are valid.
Integrating with Character VAE
To demonstrate the model's capability to improve pre-existing generative models for discrete structures, we show how it can be used to improve the results of previous work, a character variational autoencoder (CVAE) applied to SMILES strings (Gómez-Bombarelli et al., 2016b; Kingma & Welling, 2013). Therein, an encoder maps points in X to a continuous latent representation Z, and a paired decoder maps points in Z back to X. A reconstruction-based loss is minimised such that training points mapped to the latent space decode back into the same SMILES strings. The fraction of test points that do is termed the reconstruction accuracy. The loss also features a term that encourages the posterior over z to be close to some prior, typically a normal distribution. A key metric for the performance of variational autoencoder models for discrete structures is the fraction of points sampled from the prior over z that decode into valid molecules. If many points do not correspond to valid molecules, any sort of predictive modeling on that space will likely also mostly output invalid SMILES strings.
The decoder functions by outputting a set of weights w_t(c) for each character c at each position t of the reconstructed sequence, conditioned on a latent point z; the sequence is recovered by sampling from a multinomial according to these weights. To integrate our validity model into this framework, we take the decoder output at each step and mask out the choices that our model predicts cannot give valid sequence continuations. We thus sample characters with weights proportional to w_t(c) · 1[ỹ(c | x_{1:t−1}; W) > 1/2].
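One decoding step of this masked sampling scheme can be sketched as follows; `decoder_weights` and `validity_probs` stand in for the CVAE decoder output and the validity model's per-character predictions, and the fallback branch is our own addition for the edge case where every character is masked out:

```python
import random

# One decoding step with the validity model in the loop: zero out the
# decoder weight of every character whose predicted continuation validity
# is at most 1/2, renormalise, and sample an index from the result.
def masked_decode_step(decoder_weights, validity_probs, rng=random):
    masked = [w if y > 0.5 else 0.0
              for w, y in zip(decoder_weights, validity_probs)]
    total = sum(masked)
    if total == 0.0:  # everything masked: fall back to the raw decoder
        masked, total = list(decoder_weights), sum(decoder_weights)
    r = rng.random() * total
    cumulative = 0.0
    for i, w in enumerate(masked):
        cumulative += w
        if r <= cumulative:
            return i
    return len(masked) - 1
```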
Table 4 contains a comparison of our work to a plain CVAE and to the Grammar VAE approach. We use a Kekulé format of the ZINC data in our experiments, a specific representation of aromatic bonds that our model handled particularly well. Note that the results we quote for the Grammar VAE are taken directly from Kusner et al. (2017) and are computed on non-Kekulé-format data. The CVAE model is trained for 100 epochs, as per previous work; further training improves reconstruction accuracy.
We note that the binary nature of the proposed grammar model means that it does not affect the reconstruction accuracy. In fact, some modest gains are present. The addition of our grammar model to the character VAE significantly improves its ability to decode discrete structures, as seen by the order of magnitude increase in latent sample validity. The action of our model is completely post-hoc and thus can be applied to any pre-trained character-based VAE model where elements of the latent space correspond to a structured discrete sequence.
Table 4:
Model | reconstruction accuracy | sample validity
CVAE + Validity Model | 50.2% | 22.3%
In this work we proposed a modeling technique for learning the validity constraints of discrete spaces. The proposed likelihood makes the model easy to train, is unaffected by the introduction of padding for variable-length sequences and, as its optimum is largely independent of the training data distribution, it allows for the utilisation of active learning techniques. Through experiments we found that it is vital to show the model informative examples of validity constraints being violated. Thus, where no informative data sets exist, we proposed a mutual-information-based active learning scheme that uses model uncertainty to select training sequences, and we used principled approximations to make that learning scheme computationally feasible. Where data sets of positive examples are available, we proposed a simple perturbation method for creating informative examples of validity constraints being broken.
The model showed promise on the Python mathematical expressions problem, especially when combined with active learning. In the context of SMILES molecules, the model was able to learn near-perfectly the validity of independently perturbed molecules. When applied to the variational autoencoder benchmark on SMILES strings, the proposed method beat the previous results by a large margin on prior sample validity – the relevant metric for the downstream utility of the latent space. The model is simple to apply to existing character-based models and is easy to train using data produced through our augmentation method. The perturbations used do not, however, capture every way in which a molecule may be mis-constructed. Correlated changes such as the insertion of matching brackets into expressions are missing from our scheme. Applying the model to a more structured representation of molecules, for example, sequences of parse rules as used in the Grammar VAE, and performing perturbations in that structured space is likely to deliver even greater improvements in performance.
- Bowman et al. (2016) Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Józefowicz, and Samy Bengio. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, (CoNLL), pp. 10–21, 2016.
- Cho (2016) Kyunghyun Cho. Noisy parallel approximate decoding for conditional recurrent language model. arXiv preprint arXiv:1605.03835, 2016.
- Cho et al. (2014) Kyunghyun Cho, Bart van Merriënboer, Çağlar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–1734, Doha, Qatar, October 2014. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/D14-1179.
- Finkel et al. (2006) Jenny Rose Finkel, Christopher D Manning, and Andrew Y Ng. Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pp. 618–626. Association for Computational Linguistics, 2006.
- Gal & Ghahramani (2016) Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 1019–1027, 2016.
- Gatys et al. (2016) Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
- Gómez-Bombarelli et al. (2016a) Rafael Gómez-Bombarelli, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, David Duvenaud, Dougal Maclaurin, Martin A Blood-Forsythe, Hyun Sik Chae, Markus Einzinger, Dong-Gwang Ha, Tony Wu, et al. Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach. Nature Materials, 15(10):1120–1127, 2016a.
- Gómez-Bombarelli et al. (2016b) Rafael Gómez-Bombarelli, David Duvenaud, José Miguel Hernández-Lobato, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science, 10 2016b.
- Guimaraes et al. (2017) Gabriel L. Guimaraes, Benjamin Sanchez-Lengeling, Pedro Luis Cunha Farias, and Alán Aspuru-Guzik. Objective-reinforced generative adversarial networks (organ) for sequence generation models. In arXiv:1705.10843, 2017.
- Hernández-Lobato et al. (2014) José Miguel Hernández-Lobato, Matthew W Hoffman, and Zoubin Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. In Advances in neural information processing systems, pp. 918–926, 2014.
- Hochreiter & Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
- Houlsby et al. (2011) Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745, 2011.
- Huuskonen (2000) Jarmo Huuskonen. Estimation of aqueous solubility for a diverse set of organic compounds based on molecular topology. Journal of Chemical Information and Computer Sciences, 40(3):773–777, 2000.
- Irwin & Shoichet (2005) John J Irwin and Brian K Shoichet. ZINC – a free database of commercially available compounds for virtual screening. Journal of chemical information and modeling, 45(1):177–182, 2005.
- Kalchbrenner et al. (2014) Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188, 2014.
- Kingma & Welling (2013) Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2013.
- Kusner et al. (2017) Matt J Kusner, Brooks Paige, and José Miguel Hernández-Lobato. Grammar variational autoencoder. In International Conference on Machine Learning, 2017.
- Lowe (2014) Daniel Mark Lowe. Patent reaction extraction. Available at https://bitbucket.org/dan2097/patent-reaction-extraction/downloads, 2014.
- MacKay (1992) David JC MacKay. Information-based objective functions for active data selection. Neural computation, 4(4):590–604, 1992.
- Radford et al. (2015) Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv:1511.06434 [cs], November 2015.
- Rumelhart et al. (1985) David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science, 1985.
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112, 2014.
- Sutton & Barto (1998) Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998.
- Watkins (1989) Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, King’s College, Cambridge, 1989.
- Weininger (1970) David Weininger. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. In Proc. Edinburgh Math. SOC, volume 17, pp. 1–14, 1970.
Appendix A Coverage estimation
Ideally, we would like to check that the learned model ỹ assigns positive probability to exactly those prefixes which may lead to valid sequences, but for large discrete spaces this is impossible to compute or even accurately estimate. A simple check for accuracy could be to evaluate whether the model correctly identifies points as valid in a known, held-out validation or test set of real data, relative to randomly sampled sequences (which are nearly always invalid). However, if the validation set is too “similar” to the training data, even showing 100% accuracy in classifying these as valid may simply indicate having overfit to the training data: a discriminator which identifies data as similar to the training data needs to be accurate over a much smaller space than a discriminator which estimates validity over all of X.
Instead, we propose to evaluate the trade-off between accuracy on a validation set and an approximation to the size of the effective support of the model over X. Let X⁺ denote the valid subset of X. Suppose we estimate the valid fraction q = |X⁺| / |X| by simple Monte Carlo, sampling uniformly from X. We can then estimate |X⁺| by q |X|, where |X| = |C|^T, a known quantity. A uniform distribution over N sequences would have an entropy of log N. We denote the entropy of the output of the model by H. If our model were perfectly uniform over the sequences it can generate, it would then be capable of generating e^H distinct sequences. As our model at its optimum is far from uniform over sequences, this is very much a lower bound on coverage.
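The arithmetic behind this bound can be sketched as follows; this is a simplified reading of the argument (with entropy measured in nats), not a reproduction of the exact derivation:

```python
import math

# Sketch of the coverage lower bound. A distribution with entropy H nats
# has support of at least e**H sequences; multiplying by the validity
# fraction of model samples lower-bounds the number of distinct valid
# sequences the model can emit, and dividing by the Monte Carlo estimate
# of |X+| = q_uniform * C**T turns that into a coverage fraction.
def coverage_lower_bound(valid_frac_model, entropy_nats, C, T, q_uniform):
    n_model = valid_frac_model * math.exp(entropy_nats)
    n_valid = q_uniform * (C ** T)
    return n_model / n_valid
```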