
A Model to Search for Synthesizable Molecules
Deep generative models are able to suggest new organic molecules by generating strings, trees, and graphs representing their structure. While such models allow one to generate molecules with desirable properties, they give no guarantees that the molecules can actually be synthesized in practice. We propose a new molecule generation model, mirroring a more realistic real-world process, where (a) reactants are selected, and (b) combined to form more complex molecules. More specifically, our generative model proposes a bag of initial reactants (selected from a pool of commercially available molecules) and uses a reaction model to predict how they react together to generate new molecules. We first show that the model can generate diverse, valid and unique molecules due to the useful inductive biases of modeling reactions. Furthermore, our model allows chemists to interrogate not only the properties of the generated molecules but also the feasibility of the synthesis routes. We conclude by using our model to solve retrosynthesis problems, predicting a set of reactants that can produce a target product.
06/12/2019 ∙ by John Bradshaw, et al.

Counterfactual Fairness
Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
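The criterion described above has a compact formal statement in causal notation, where U denotes background variables and the subscript denotes an intervention on the protected attribute A; the rendering below is a paraphrase of the paper's definition, not a verbatim quote:

```latex
% A predictor \hat{Y} is counterfactually fair if, for every outcome y,
% observed features X = x, actual attribute A = a, and alternative a':
\[
P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big)
  \;=\;
P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big)
\]
```

In words: the distribution of the prediction is unchanged if, holding everything else about the individual fixed, their demographic attribute had counterfactually been different.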
03/20/2017 ∙ by Matt J. Kusner, et al.

Grammar Variational Autoencoder
Deep generative models have been wildly successful at learning coherent latent representations for continuous data such as video and audio. However, generative modeling of discrete data such as arithmetic expressions and molecular structures still poses significant challenges. Crucially, state-of-the-art methods often produce outputs that are not valid. We make the key observation that frequently, discrete data can be represented as a parse tree from a context-free grammar. We propose a variational autoencoder which encodes and decodes directly to and from these parse trees, ensuring the generated outputs are always valid. Surprisingly, we show that not only does our model more often generate valid outputs, it also learns a more coherent latent space in which nearby points decode to similar discrete outputs. We demonstrate the effectiveness of our learned models by showing their improved performance in Bayesian optimization for symbolic regression and molecular synthesis.
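The core mechanism, decoding as a sequence of production-rule choices so that the output is syntactically valid by construction, can be illustrated with a toy context-free grammar. This is an illustrative sketch, not the paper's model: the grammar, symbol names, and termination heuristic are all invented here.

```python
import random

# Toy CFG for arithmetic expressions; keys are nonterminals.
RULES = {
    "S": [["S", "+", "T"], ["T"]],
    "T": [["(", "S", ")"], ["x"], ["1"]],
}

def sample_expression(seed=0, max_expansions=30):
    """Decode by expanding a stack of symbols. Only productions whose
    left-hand side matches the current nonterminal are allowed, so any
    sequence of choices yields a syntactically valid expression."""
    rng = random.Random(seed)
    stack, out, expansions = ["S"], [], 0
    while stack:
        sym = stack.pop()
        if sym not in RULES:          # terminal symbol: emit it
            out.append(sym)
            continue
        expansions += 1
        options = RULES[sym]
        if expansions > max_expansions:
            # over budget: force the rule with fewest nonterminals,
            # which guarantees the decoding terminates
            options = [min(options, key=lambda r: sum(s in RULES for s in r))]
        rhs = rng.choice(options)
        stack.extend(reversed(rhs))   # push right-hand side, leftmost on top
    return "".join(out)
```

Because invalid productions are masked out at every step, every sampled string parses; for this grammar, each one is even a valid Python expression.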
03/06/2017 ∙ by Matt J. Kusner, et al.

GANS for Sequences of Discrete Elements with the Gumbel-softmax Distribution
Generative Adversarial Networks (GAN) have limitations when the goal is to generate sequences of discrete elements. The reason for this is that samples from a distribution on discrete objects such as the multinomial are not differentiable with respect to the distribution parameters. This problem can be avoided by using the Gumbel-softmax distribution, which is a continuous approximation to a multinomial distribution parameterized in terms of the softmax function. In this work, we evaluate the performance of GANs based on recurrent neural networks with Gumbel-softmax output distributions in the task of generating sequences of discrete elements.
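The Gumbel-softmax trick itself is easy to sketch: perturb the log-probabilities with Gumbel(0, 1) noise and pass them through a temperature-controlled softmax, giving a differentiable relaxation of a one-hot categorical sample. A minimal NumPy version (the temperature value here is illustrative):

```python
import numpy as np

def gumbel_softmax(logits, temperature=0.5, rng=None):
    """Differentiable relaxation of a sample from a categorical
    distribution with the given (unnormalized) log-probabilities."""
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise via the inverse-CDF trick
    u = rng.uniform(1e-12, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))
    y = (logits + g) / temperature
    y = y - y.max()                 # stabilize the softmax
    e = np.exp(y)
    return e / e.sum()
```

As the temperature approaches zero the relaxed sample approaches a one-hot vector; at higher temperatures it smooths toward the underlying softmax probabilities, trading fidelity for lower-variance gradients.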
11/12/2016 ∙ by Matt J. Kusner, et al.

Private Causal Inference
Causal inference deals with identifying which random variables "cause" or control other random variables. Recent advances on the topic of causal inference based on tools from statistical estimation and machine learning have resulted in practical algorithms for causal inference. Causal inference has the potential to have significant impact on medical research, prevention and control of diseases, and identifying factors that impact economic changes, to name just a few. However, these promising applications for causal inference are often ones that involve sensitive or personal data of users that need to be kept private (e.g., medical records, personal finances, etc.). Therefore, there is a need for the development of causal inference methods that preserve data privacy. We study the problem of inferring causality using the current, popular causal inference framework, the additive noise model (ANM), while simultaneously ensuring privacy of the users. Our framework provides differential privacy guarantees for a variety of ANM variants. We run extensive experiments, and demonstrate that our techniques are practical and easy to implement.
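The ANM idea can be made concrete: assume y = f(x) + noise with the noise independent of x, fit a regression in both directions, and prefer the direction whose residuals look independent of the regressor. The toy sketch below is generic ANM machinery, not the paper's private algorithm; the dependence score (correlation between squared residuals and the squared regressor) is a crude stand-in for the independence tests used in practice.

```python
import numpy as np

def dependence_score(x, y, degree=3):
    """Regress y on x with a polynomial and return a crude measure of
    dependence between the residuals and the regressor."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    return abs(np.corrcoef(residuals ** 2, x ** 2)[0, 1])

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 2000)
y = x ** 3 + rng.uniform(-0.5, 0.5, 2000)   # true direction: x -> y

forward = dependence_score(x, y)    # residuals ~ the independent noise
backward = dependence_score(y, x)   # residuals inherit structure from y
```

The forward fit recovers the additive noise, so its residuals carry little information about x; the backward fit cannot, which is what lets the ANM identify the causal direction.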
12/17/2015 ∙ by Matt J. Kusner, et al.

Deep Manifold Traversal: Changing Labels with Convolutional Features
Many tasks in computer vision can be cast as a "label changing" problem, where the goal is to make a semantic change to the appearance of an image or some subject in an image in order to alter the class membership. Although successful task-specific methods have been developed for some label changing applications, to date no general purpose method exists. Motivated by this, we propose deep manifold traversal, a method that addresses the problem in its most general form: it first approximates the manifold of natural images, then morphs a test image along a traversal path away from a source class and towards a target class while staying near the manifold throughout. The resulting algorithm is surprisingly effective and versatile. It is completely data driven, requiring only an example set of images from the desired source and target domains. We demonstrate deep manifold traversal on highly diverse label changing tasks: changing an individual's appearance (age and hair color), changing the season of an outdoor image, and transforming a city skyline towards nighttime.
11/19/2015 ∙ by Jacob R. Gardner, et al.

Differentially Private Bayesian Optimization
Bayesian optimization is a powerful tool for fine-tuning the hyperparameters of a wide variety of machine learning models. The success of machine learning has led practitioners in diverse real-world settings to learn classifiers for practical problems. As machine learning becomes commonplace, Bayesian optimization becomes an attractive method for practitioners to automate the process of classifier hyperparameter tuning. A key observation is that the data used for tuning models in these settings is often sensitive. Certain data such as genetic predisposition, personal email statistics, and car accident history, if not properly private, may be at risk of being inferred from Bayesian optimization outputs. To address this, we introduce methods for releasing the best hyperparameters and classifier accuracy privately. Leveraging the strong theoretical guarantees of differential privacy and known Bayesian optimization convergence bounds, we prove that under a GP assumption these private quantities are also near-optimal. Finally, even if this assumption is not satisfied, we can use different smoothness guarantees to protect privacy.
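A standard building block for privately releasing a "best" candidate is the exponential mechanism: sample a candidate with probability proportional to exp(ε·score / (2·sensitivity)). The sketch below is generic differential-privacy machinery, not the paper's specific construction; the hyperparameter values, scores, and a deliberately large ε (so the preference is visible in few draws) are all invented for illustration.

```python
import math
import random

def exponential_mechanism(candidates, scores, epsilon, sensitivity, rng):
    """Privately select one candidate; each candidate's utility is its
    score, and sensitivity bounds how much one user can change a score."""
    m = max(scores)  # shift for numerical stability (does not change the probabilities)
    weights = [math.exp(epsilon * (s - m) / (2.0 * sensitivity)) for s in scores]
    return rng.choices(candidates, weights=weights, k=1)[0]

rng = random.Random(0)
learning_rates = [0.001, 0.01, 0.1, 1.0]   # hypothetical hyperparameter grid
accuracies = [0.81, 0.92, 0.88, 0.60]      # hypothetical validation accuracies
picks = [exponential_mechanism(learning_rates, accuracies,
                               epsilon=100.0, sensitivity=1.0, rng=rng)
         for _ in range(200)]
```

Higher-scoring candidates are exponentially more likely to be released, but every candidate has nonzero probability, which is what yields the differential-privacy guarantee.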
01/16/2015 ∙ by Matt J. Kusner, et al.

Cost-Sensitive Tree of Classifiers
Recently, machine learning algorithms have successfully entered large-scale real-world industrial applications (e.g. search engines and email spam filters). Here, the CPU cost during test time must be budgeted and accounted for. In this paper, we address the challenge of balancing the test-time cost and the classifier accuracy in a principled fashion. The test-time cost of a classifier is often dominated by the computation required for feature extraction, which can vary drastically across features. We decrease this extraction time by constructing a tree of classifiers, through which test inputs traverse along individual paths. Each path extracts different features and is optimized for a specific sub-partition of the input space. By only computing features for inputs that benefit from them the most, our cost-sensitive tree of classifiers can match the high accuracies of the current state-of-the-art at a small fraction of the computational cost.
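The test-time behavior can be sketched as a tree whose internal nodes each extract one feature on demand, so an input only pays for the features along its own path. Everything below (node layout, feature names, costs, thresholds) is a made-up illustration of the idea, not the paper's learned trees.

```python
# Internal node: (feature_name, extraction_cost, threshold, left, right)
# Leaf: ("leaf", label)
TREE = ("age", 1.0, 30,
        ("leaf", "low"),
        ("income", 5.0, 50_000,        # expensive feature, paid only on this path
         ("leaf", "medium"),
         ("leaf", "high")))

def classify(extract, node):
    """Traverse the tree, extracting features lazily; return the
    predicted label and the total feature-extraction cost paid."""
    cost = 0.0
    while node[0] != "leaf":
        feature, feature_cost, threshold, left, right = node
        cost += feature_cost                 # pay for this feature only now
        node = left if extract(feature) <= threshold else right
    return node[1], cost
```

An input routed down the cheap branch never pays for the expensive feature, which is the source of the test-time savings.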
10/09/2012 ∙ by Zhixiang Xu, et al.

Image Data Compression for Covariance and Histogram Descriptors
Covariance and histogram image descriptors provide an effective way to capture information about images. Both excel when used in combination with special purpose distance metrics. For covariance descriptors these metrics measure the distance along the non-Euclidean Riemannian manifold of symmetric positive definite matrices. For histogram descriptors the Earth Mover's distance measures the optimal transport between two histograms. Although more precise, these distance metrics are very expensive to compute, making them impractical in many applications, even for data sets of only a few thousand examples. In this paper we present two methods to compress the size of covariance and histogram datasets with only marginal increases in test error for k-nearest neighbor classification. Specifically, we show that we can reduce data sets to 16% while approximately matching the test error of kNN classification on the full training set. In fact, because the compressed set is learned in a supervised fashion, it sometimes even outperforms the full data set, while requiring only a fraction of the space and drastically reducing test-time computation.
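The Riemannian metric on SPD matrices mentioned above has a compact closed form, d(A, B) = ||log(A^(-1/2) B A^(-1/2))||_F, computable from two eigendecompositions. A NumPy sketch of this affine-invariant distance (not the paper's code):

```python
import numpy as np

def spd_distance(A, B):
    """Affine-invariant Riemannian distance between symmetric
    positive definite matrices A and B."""
    w, V = np.linalg.eigh(A)
    A_inv_sqrt = (V * w ** -0.5) @ V.T   # A^(-1/2) via eigendecomposition
    C = A_inv_sqrt @ B @ A_inv_sqrt      # whitened version of B
    ev = np.linalg.eigvalsh(C)
    return float(np.sqrt(np.sum(np.log(ev) ** 2)))
```

The metric is invariant under congruence, d(MAMᵀ, MBMᵀ) = d(A, B) for any invertible M, which is why it respects the geometry of covariance descriptors where a Euclidean distance does not. Its cost, two eigendecompositions per pair, is also why compressing the data set pays off so directly at test time.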
12/04/2014 ∙ by Matt J. Kusner, et al.

Learning a Generative Model for Validity in Complex Discrete Structures
Deep generative models have been successfully used to learn representations for high-dimensional discrete spaces by representing discrete objects as sequences, for which powerful sequence-based deep models can be employed. Unfortunately, these techniques are significantly hindered by the fact that these generative models often produce invalid sequences: sequences which do not represent any underlying discrete structure. As a step towards solving this problem, we propose to learn a deep recurrent validator model, which can estimate whether a partial sequence can function as the beginning of a full, valid sequence. This model not only discriminates between valid and invalid sequences, but also provides insight as to how individual sequence elements influence the validity of the overall sequence, and the existence of a corresponding discrete object. To learn this model we propose a reinforcement learning approach, where an oracle which can evaluate validity of complete sequences provides a sparse reward signal. We believe this is a key step toward learning generative models that faithfully produce valid sequences which represent discrete objects. We demonstrate its effectiveness in evaluating the validity of Python 3 source code for mathematical expressions, and improving the ability of a variational autoencoder trained on SMILES strings to decode valid molecular structures.
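The notion of prefix validity is easy to make concrete with a toy domain: for bracket sequences, a prefix can be completed to a valid string exactly when no closing bracket is ever unmatched. The paper's validator learns this kind of judgment from a sparse end-of-sequence reward; the hand-written checks below only illustrate what such a learned model approximates.

```python
def prefix_extendable(prefix):
    """True iff some suffix can make `prefix` a balanced bracket string."""
    depth = 0
    for ch in prefix:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:   # a ')' with no matching '(' can never be repaired
                return False
    return True

def valid(sequence):
    """Full-sequence oracle: balanced and fully closed."""
    return (prefix_extendable(sequence)
            and sequence.count("(") == sequence.count(")"))
```

Note the asymmetry that motivates the validator: "((" is a fine prefix even though it is not itself valid, while "())" is already unsalvageable, so a generator that consults the validator can prune dead ends early instead of waiting for the sparse reward at the end.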
12/05/2017 ∙ by David Janz, et al.

Predicting Electron Paths
Chemical reactions can be described as the stepwise redistribution of electrons in molecules. As such, reactions are often depicted using "arrow-pushing" diagrams which show this movement as a sequence of arrows. We propose an electron path prediction model (ELECTRO) to learn these sequences directly from raw reaction data. Instead of predicting product molecules directly from reactant molecules in one shot, learning a model of electron movement has the benefits of (a) being easy for chemists to interpret, (b) incorporating constraints of chemistry, such as balanced atom counts before and after the reaction, and (c) naturally encoding the sparsity of chemical reactions, which usually involve changes in only a small number of atoms in the reactants. We design a method to extract approximate reaction paths from any dataset of atom-mapped reaction SMILES strings. Our model achieves state-of-the-art results on a subset of the USPTO reaction dataset. Furthermore, we show that our model recovers a basic knowledge of chemistry without being explicitly trained to do so.
05/23/2018 ∙ by John Bradshaw, et al.
Matt J. Kusner