Bayesian Additive Adaptive Basis Tensor Product Models for Modeling High Dimensional Surfaces: An application to high-throughput toxicity testing

02/15/2017, by Matthew W. Wheeler, et al.

Many modern data sets are sampled with error from complex high-dimensional surfaces. Methods such as tensor product splines or Gaussian processes are well suited to characterizing a surface in two or three dimensions but may suffer from difficulties when representing higher dimensional surfaces. Motivated by high-throughput toxicity testing, where observed dose-response curves are cross sections of a surface defined by a chemical's structural properties, a model is developed to characterize this surface and to predict untested chemicals' dose-responses. This manuscript proposes a novel approach that models the multidimensional surface as a sum of learned basis functions formed as the tensor product of lower dimensional functions, which are themselves representable by a basis expansion learned from the data. The model is described, a Gibbs sampling algorithm is proposed, and the approach is investigated in a simulation study as well as on data taken from the US EPA's ToxCast high-throughput toxicity testing platform.




1 Introduction

Chemical toxicity testing is vital in determining the public health hazards posed by chemical exposures. However, the number of chemicals far outweighs the resources available to adequately test them all, which leaves knowledge gaps when protecting public health. For example, of the tens of thousands of chemicals in industrial use, only a small fraction have been subject to long-term in vivo studies conducted by the National Toxicology Program, and most of these studies occur only after significant public exposures.

As an alternative to long-term studies, which are expensive and take years to complete, there has been an increased focus on the use of high-throughput bioassays to determine the toxicity of a given chemical. In an effort to understand the utility of these approaches for risk assessment, many agencies have developed rich databases to study this problem. To such an end, the US EPA ToxCast chemical prioritization project (Judson et al., 2010) was created to collect dose-response information on thousands of chemicals for hundreds of in vitro bioassays, and it has been used to develop screening methods that prioritize chemicals for study based upon in vitro toxicity. Though these methods have shown utility in predicting toxicity for many chemicals that pose a risk to the public health, there are many situations where the in vitro bioassay information may not be available (e.g., a chemical may be so new that it has not been studied). In these cases, it would be ideal if toxicity could be estimated in silico from chemical structure-activity relationship (SAR) information. Here the goal is to develop a model based on SAR information that predicts the entire dose-response curve for a given assay. This manuscript is motivated by this problem.

1.1 Quantitative Structure Activity Relationships

There is a large literature on estimating chemical toxicity from SAR information. These approaches, termed Quantitative Structure Activity Relationships (QSAR) (for a recent review of the models and the statistical issues encountered, see Emmert-Streib et al. (2012)), estimate a chemical's toxicity from the chemical's structural properties. Multiple linear regression has played a role in QSAR modeling since its inception (Roy et al., 2015, p. 191), but models where the predictor enters the relationship as a linear function often fail to describe the non-linear nature of the relationship. To address this non-linearity, approaches such as neural networks (Devillers, 1996), regression trees (Deconinck et al., 2005), support vector machines (Czermiński et al., 2001; Norinder, 2003), and Gaussian processes (Burden, 2001) have been applied to the problem with varying levels of success. These approaches have been tailored to scalar responses and, save one instance, have not been used to model the dose-response relationship, which may result in a loss of information.

The only QSAR approach that has addressed the problem of estimating a dose-response curve is the work of Low-Kam et al. (2015). This approach defined a Bayesian regression tree over functions, where the leaves of the tree represent different dose-response curves. It was used to identify chemical properties related to the observed dose-response, and when applied to prediction the approach sometimes performed poorly in a leave-one-out analysis. Further, it is computationally intensive and was estimated to take more than a week to analyze the data in the motivating problem.

1.2 Relevant Literature

Assume that one obtains a p-dimensional vector x ∈ X, and one wants to predict a one-dimensional response curve over d ∈ D from x. Given x (e.g., SAR characteristics in the motivating problem) and d (e.g., doses in the motivating problem), one is interested in estimating an unknown surface f(x, d), where the response curve f(x_i, ·) is a cross section of f at x_i. Figure (1) describes this in the case of the motivating example. Here two 1-dimensional cross sections (black lines) of a 2-dimensional surface are observed, and one is interested in the entire 2-dimensional surface.

Figure 1: Example of the problem for a 2-dimensional surface. Here two dose-response curves (black lines), which are cross sections of a larger surface, are observed, and one is interested in this surface.

One may use a Gaussian process (GP, Rasmussen and Williams (2006)) to characterize the entire surface f(x, d), but there are computational problems that make the use of a GP impractical. In the motivating example, there are thousands of unique (x, d) pairs. GP regression requires inversion of the covariance matrix, which is inherently an O(n³) operation for n observations; inverting such a matrix in each iteration of a Gibbs sampler is challenging, and, though the covariance matrix may be approximated to reduce the computational burden (Quiñonero-Candela and Rasmussen (2005), Banerjee et al. (2013)), it is the author's experience that, in the higher dimensional case of the data example, such approximations become accurate only when the dimension of the approximating matrix approaches that of the matrix it is approximating. This leads to minimal computational savings. Further, if an approximation approach does yield computational benefits, the proposed method can be used in conjunction with such approximations.

As an alternative to GPs, one can use tensor product splines (de Boor (2000), Chapter 17); however, even if one defines a functional basis having only two basis functions for each dimension, the resulting tensor spline basis would have a dimension that grows exponentially in the number of input dimensions, which is often computationally intractable. The proposed model sidesteps these issues by defining a tensor product of learned multi-dimensional surfaces defined on X and D.

One could consider the problem from a functional data perspective (Ramsay, 2006; Morris, 2015) by assuming a functional response over D. Approaches such as the functional linear array model (Brockhaus et al., 2015) and functional additive mixed models (Scheipl et al., 2015) are closest to the proposed approach, as they allow the surface to be modeled with a basis defined on the entire space. For high dimensional spaces where interactions are appropriate, these approaches suffer drawbacks because they do not allow for interactions in all dimensions. Additionally, these approaches assume that for any two observations with x_i ≠ x_j, the basis loadings are drawn from a distribution independent of x. This differs from the proposed approach, which correlates the loadings through x.

Clustering the functional responses is also a possibility. Here, one would model the response using d and cluster using x. There are many functional clustering approaches (see, for example, Hall et al. (2001), Ferraty and Vieu (2006), Zhang and Li (2010), Sprechmann and Sapiro (2010), Delaigle and Hall (2012), Delaigle and Hall (2013), and references therein), but these methods predict group membership based upon observing the functional response, which is the opposite of the problem at hand. In this problem, one observes information on the group (i.e., x) and one wishes to estimate the new response (i.e., f(x, ·)). However different, such approaches can be seen as motivating the proposed method. Instead of clustering the loadings, the proposed approach assumes the loading for each basis function is the value of a continuous function over X. This is similar to clustering, as similarity is induced for any two responses having values of x that are sufficiently close.

As the number of basis functions used in the model is a choice that may impact the model's ability to represent an arbitrary surface, a sufficiently large number of basis functions is included. Parsimony in this basis set is ensured by adapting to the number of components in the model using the multiplicative gamma prior (Bhattacharya and Dunson, 2011). This global shrinkage prior removes components from the sum by stochastically decreasing the prior variance of consecutive loadings toward zero. The sum adapts to the actual number of basis functions needed to model the data, and the choice of the number of basis functions is less important as long as that number is sufficiently large.

In what follows, section 2 defines the model. Section 3 gives the data model for normal responses and outlines a sampling algorithm. Section 4 shows through a simulation study that the method outperforms many traditional machine learning approaches, and section 5 is a data example applying the method to data from the US EPA's ToxCast database.

2 Model

2.1 Basic Approach

Consider modeling the surface f(x, d), where x ∈ X and d ∈ D. Tensor product spline approaches (de Boor, 2000, chapter 17) approximate f as a sum of products of spline functions defined over X and D, i.e.,

f(x, d) = Σ_{k=1}^{K} g_k(x) h_k(d),

for k = 1, …, K. The tensor product spline defines g_k and h_k to be in the span of a spline basis. Assuming g_k and h_k are functions defined as

g_k(x) = Σ_j α_{kj} b_j(x) and h_k(d) = Σ_l β_{kl} c_l(d),

where {b_j} and {c_l} are spline bases, the tensor product spline is

f(x, d) = Σ_j Σ_l θ_{jl} b_j(x) c_l(d),

where θ_{jl} collects the products of coefficients. As this approach typically uses 1- or 2-dimensional spline bases, when the dimension of x is large the tensor product becomes impractical, as the number of functions in the tensor product increases exponentially.

Where tensor product spline models define the basis through a spline basis a priori, many functional data approaches (e.g., Montagna et al. (2012)) model functions using a basis learned from the data. Often the function space is not defined over X × D directly, but is constructed on a smaller dimensional subspace, defined to be D in the present discussion. For cross section i, this approach models f(x_i, ·) to be in the span of a finite basis {ψ_1, …, ψ_K}. That is,

f(x_i, d) = Σ_{k=1}^{K} λ_{ik} ψ_k(d),

where λ_i = (λ_{i1}, …, λ_{iK})′ is a vector of basis coefficients. This effectively ignores x, which may not be reasonable in many applications.

To model f over X × D, the functional data and tensor product approaches can be combined. The idea is to define a basis over X × D where each basis function is the tensor product of two surfaces defined on X and D, respectively. Extending (2), define Λ_1(x), …, Λ_K(x) to be surfaces on X and replace each λ_{ik} with Λ_k(x_i). Now the loadings are continuous functions indexed by x, and a new basis is formed with

f(x, d) = Σ_{k=1}^{K} Λ_k(x) ψ_k(d).   (3)
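As a concrete illustration of the representation in (3), the following minimal Python sketch evaluates the additive tensor product surface on a grid given the K learned functions. The paper's implementation is in R/Rcpp; the names and values here are purely illustrative.

```python
import numpy as np

# Sketch of the additive tensor-product representation in (3):
# f(x_i, d_j) = sum_k Lambda_k(x_i) * psi_k(d_j).
# Lambda and Psi hold the K functions evaluated on grids of x and d
# values (illustrative names, not the paper's code).

def surface_on_grid(Lambda, Psi):
    """Evaluate f on the full x-by-d grid.

    Lambda : (n_x, K) array, k-th column = Lambda_k at the x grid points.
    Psi    : (n_d, K) array, k-th column = psi_k at the d grid points.
    Returns an (n_x, n_d) array with entries sum_k Lambda[i,k]*Psi[j,k].
    """
    return Lambda @ Psi.T

# A K = 2 toy example with hand-checkable numbers.
Lambda = np.array([[1.0, 2.0],
                   [0.5, -1.0]])   # n_x = 2 cross sections
Psi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])       # n_d = 3 dose points

F = surface_on_grid(Lambda, Psi)
# f(x_1, d_3) = 1.0*1.0 + 2.0*1.0 = 3.0
print(F.shape, F[0, 2])
```

Each cross section f(x_i, ·) is then simply row i of the resulting grid, which is the structure exploited by the sampler in section 3.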
As pointed out by a referee, an alternative way to look at (3) is as an ensemble learner (Sollich and Krogh, 1996), which includes techniques such as bagging (Breiman, 1996) and random forests (Breiman, 2001). Ensemble learners describe the estimate as a weighted sum of learners (see Murphy (2012) and references therein). In this way, model (3) can be viewed as a weighted sum over tensor product learners, and is thus related to ensemble based approaches.

To define a tensor product, the functions Λ_k and ψ_k must lie in a linear function space (DeBoor, 2001, pg 293). For flexibility, let

Λ_k(x) ~ GP(0, K_x) and ψ_k(d) ~ GP(0, K_d),

where K_x and K_d are positive definite kernel functions. This places Λ_k and ψ_k in reproducing kernel Hilbert spaces defined by K_x and K_d, respectively, and embeds f in a linear space defined by the tensor product of these functions. By estimating the hyperparameters, this approach learns the basis of each function in (3), which may be preferable to a tensor spline approach that defines the basis a priori.

One may mistake this definition for a GP defined by the tensor product of covariance kernels (e.g., see Bonilla et al. (2007)). Such an approach forms a new covariance kernel over X × D as a product of individual covariance kernels defined on X and D. In the proposed model, it is the modeled functions, not the covariance kernels, that enter the product.

2.2 Selection of K

The number of elements in the basis is determined by K. The larger K, the richer the class of functions the model can entertain. In many cases, one would not expect a large number of functions to contribute to the surface and would prefer as few components as possible. One could place a prior on K, but it is difficult to find efficient sampling algorithms in this case. As an alternative, the multiplicative gamma process (Bhattacharya and Dunson, 2011) is used to define a prior over the loadings that allows the sum to adapt to the necessary number of components. Here, the prior precision of the k-th component is

τ_k = Π_{h=1}^{k} δ_h, with δ_1 ~ Ga(a_1, 1) and δ_h ~ Ga(a_2, 1) for h ≥ 2,

so that the k-th component has prior variance τ_k⁻¹. This is an adaptive shrinkage prior over the functions. If a_2 > 1, the variances are stochastically decreasing, favoring more shrinkage as k increases. The choice of this prior implies the surface defined by Λ_k(x) ψ_k(d) is increasingly close to zero as k increases. As many of the basis functions contribute negligibly to modeling the surface, this induces effective basis selection.
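The shrinkage behavior of the multiplicative gamma process can be sketched in a few lines of Python; the hyperparameter values below are illustrative, not those used in the paper.

```python
import numpy as np

# Multiplicative gamma process of Bhattacharya and Dunson (2011):
# delta_1 ~ Ga(a1, 1), delta_h ~ Ga(a2, 1) for h >= 2, and the k-th
# precision is tau_k = prod_{h<=k} delta_h, so the prior variance
# 1/tau_k shrinks stochastically toward zero when a2 > 1.

rng = np.random.default_rng(0)

def mgp_variances(K, a1=2.0, a2=3.0, rng=rng):
    delta = np.empty(K)
    delta[0] = rng.gamma(a1, 1.0)
    delta[1:] = rng.gamma(a2, 1.0, size=K - 1)
    tau = np.cumprod(delta)     # precisions, stochastically increasing
    return 1.0 / tau            # variances, stochastically decreasing

v = mgp_variances(K=15)
# With a2 > 1 the variances decay rapidly in k, so later basis
# functions contribute negligibly to the surface.
print(v[0], v[-1])
```

In the sampler, components whose variance has been shrunk near zero are effectively removed from the sum, which is why the exact choice of K matters little once it is large enough.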

2.3 Relationships to other Models

Though GPs are used in the specification of the model, one may use alternative approaches such as polynomial spline models or process-convolution approaches (Higdon, 2002). Depending on these choices, (3) can degenerate into other methods. For example, if the Λ_k are defined as white noise processes and the ψ_k are defined to be in the linear span of the same finite basis, the model is identical to the approach of Montagna et al. (2012), which connects the model to functional data approaches. In this way, the additive adaptive tensor product model can be looked at as a functional model with loadings correlated by a continuous stochastic process over X instead of a white noise process.

If the functions Λ_k and ψ_k are defined using spline bases, this approach trivially degenerates to the tensor product spline model. Similarly, let each function ψ_k in (3) be defined using a common basis, with

ψ_k(d) = Σ_l β_{kl} c_l(d),

where {c_l} is a basis used for all k. In this case, model (3) can be re-written as

f(x, d) = Σ_l [Σ_k β_{kl} Λ_k(x)] c_l(d).

Letting Θ_l(x) = Σ_k β_{kl} Λ_k(x), one arrives at a tensor product model with learned basis {Θ_l} and specified basis {c_l}.

2.4 Computational Benefits

When f is observed on a fixed number of points in D that is the same for each cross section, the proposed approach can deliver substantial reductions in the computational resources needed when compared to GP regression. Let n_d be the number of unique points on D and let n_x be the total number of observed cross sections. For a GP, the dimension of the corresponding covariance matrix is n_x n_d; inverting this matrix is an O((n_x n_d)³) operation. For the proposed approach, there are K inversions of a matrix of dimension n_x and K inversions of a matrix of dimension n_d. This results in a computational complexity of O(K(n_x³ + n_d³)), which can be significantly less than a GP based method. In the data example this results in a small fraction of the resources needed as compared to the GP approach. Savings increase as the experiment becomes more balanced: for a balanced design with the same number of observations as in the data example, a GP approach would require orders of magnitude more computing time than the proposed method.
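The cost comparison above can be made concrete with a back-of-envelope calculation. The sizes below are illustrative; the paper's actual n_x, n_d, and K are not reproduced here.

```python
# Matrix-inversion cost comparison: a full GP inverts one covariance
# matrix of dimension n_x * n_d per sweep (cubic cost), while the
# proposed sampler inverts K matrices of size n_x and K of size n_d.

def gp_cost(n_x, n_d):
    return (n_x * n_d) ** 3

def adaptive_tp_cost(n_x, n_d, K):
    return K * (n_x ** 3 + n_d ** 3)

# Illustrative balanced design: 1000 cross sections, 8 doses, K = 15.
n_x, n_d, K = 1000, 8, 15
ratio = gp_cost(n_x, n_d) / adaptive_tp_cost(n_x, n_d, K)
print(f"GP cost is ~{ratio:.1f}x the adaptive TP cost per sweep")
```

Because the GP cost scales with the cube of the product n_x n_d while the proposed approach scales with the sum of cubes, the gap widens quickly as either dimension grows.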

3 Data Model and Sampling Algorithm

3.1 Data Model

A Markov chain Monte Carlo (MCMC) sampling approach is outlined for normal errors. Assume that for cross section i one observes n_i measurements at doses d_{i1}, …, d_{in_i}. For error prone observation y_{ij}, let

y_{ij} = f(x_i, d_{ij}) + ε_{ij},

where ε_{ij} ~ N(0, σ²). Model (3) assumes the surface is centered at zero; here, let the model be centered at a mean μ.

In defining Λ_k and ψ_k, the covariance kernel, along with its hyper-parameters, determines the smoothness of the function. The squared exponential kernel is used to model smooth response surfaces. Let

K_x(x, x′) = exp(−ρ_x ‖x − x′‖²),   (4)
K_d(d, d′) = exp(−ρ_d (d − d′)²),   (5)

where ‖·‖ is the Euclidean norm and ρ_x and ρ_d are scale parameters. In this specification, a separate variance parameter for Λ_k is not necessary; it is equivalently defined through the variance of the GP for ψ_k. To allow for a variance other than one for that process, let ψ_k have prior variance σ_ψ².

Depending on the application, any positive definite kernel may be used. For example, one may wish to use a Matérn kernel, which is frequently used in spatial statistics. As x is described in the application as the location of a chemical in an abstract chemical space, such an approach may be appropriate. In preliminary tests, a Matérn kernel provided results that were qualitatively identical to those of the squared exponential kernel, and it is not considered further.
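A minimal sketch of the squared exponential kernel in (4)-(5) follows; parameter names loosely follow the text, and the code is not the paper's R/Rcpp implementation.

```python
import numpy as np

# Squared exponential kernel: K(x, x') = sigma2 * exp(-rho * ||x - x'||^2).

def sq_exp_kernel(X1, X2, rho=1.0, sigma2=1.0):
    """Gram matrix between rows of X1 (m, p) and X2 (n, p)."""
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return sigma2 * np.exp(-rho * sq_dists)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = sq_exp_kernel(X, X, rho=0.5)
# Diagonal equals sigma2; off-diagonals decay with squared distance:
# K[0,1] = exp(-0.5 * 1) ~ 0.6065, K[0,2] = exp(-0.5 * 4) ~ 0.1353.
print(np.round(K, 4))
```

Note how the scale parameter rho controls how quickly correlation decays with distance, which is exactly what the uniform priors on the scale parameters below allow the model to learn.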

Given these choices, the data model is

y_{ij} = μ + Σ_{k=1}^{K} Λ_k(x_i) ψ_k(d_{ij}) + ε_{ij}.   (6)
Appropriate priors are placed over the hyperparameters of K_x and K_d. For the length parameters of the squared exponential kernels in (4) and (5), uniform distributions are placed over the scale parameters ρ_x and ρ_d. This places equal prior probability over a range of plausible values, allowing the smoothness of the constituent basis to be learned. The lower bound of each uniform distribution is chosen so that correlations between any two points in the space of interest are close to one, and the upper bound is chosen so that the resultant correlation between any two points is approximately zero.

3.2 Sampling Algorithm

Define y to be the vector of N measurements across the observed curves. Likewise, define Λ to be an n_x × K matrix, where each row corresponds to the loadings for one cross section. Let Ψ be an n_d × K matrix, where each row represents the K basis functions evaluated at one unique dose. Using these definitions, (6) is expressible as

y = [(Z_x Λ) ∘ (Z_d Ψ)] 1 + μ 1 + ε,

where ∘ is the Schur product, 1 is a vector of ones, and ε is the vector of error terms.

Let {d₁*, …, d_{n_d}*} be the set of uniquely observed doses across all observations, and define Z_d to be an N × n_d matrix whose rows correspond to the N observations. For each row of Z_d, all entries are set to zero except at one column, which is set to one; this entry corresponds to the unique dose d_l* such that d_{ij} = d_l*. Likewise, define Z_x to be an N × n_x matrix. For each row, each entry is set to zero except at one column, which is set to one; this entry corresponds to the cross section x_i of the observation.
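The indicator (one-hot) design matrices described above can be sketched as follows; the index values are illustrative.

```python
import numpy as np

# Z_x maps each of the N observations to its cross section, and Z_d
# maps each observation to its unique dose.

def one_hot(indices, n_levels):
    """N x n_levels matrix with a single 1 per row."""
    Z = np.zeros((len(indices), n_levels))
    Z[np.arange(len(indices)), indices] = 1.0
    return Z

# Three observations: curves 0, 0, 1 measured at doses 0, 2, 1
# out of n_x = 2 curves and n_d = 3 unique doses.
Z_x = one_hot(np.array([0, 0, 1]), 2)
Z_d = one_hot(np.array([0, 2, 1]), 3)

# With Lambda (n_x, K) and Psi (n_d, K), the observation-level model is
# y = ((Z_x @ Lambda) * (Z_d @ Psi)).sum(axis=1) + mu + eps.
print(Z_x.sum(axis=1), Z_d.shape)
```

These sparse indicator matrices are what let the sampler work with the small n_x × K and n_d × K matrices rather than a full N × N covariance.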

Sample the Λ_k, the ψ_k, and the shrinkage parameters in a series of Gibbs steps as follows:

  1. For each k = 1, …, K, letting E_k = y − μ1 − [(Z_x Λ_{−k}) ∘ (Z_d Ψ_{−k})] 1, where Λ_{−k} and Ψ_{−k} are Λ and Ψ without column k, sample Λ_k at the unique values of x from its multivariate normal full conditional, where the covariance matrix is constructed from K_x, the design matrices above, and an identity matrix scaled by the error variance.

  2. For each k = 1, …, K, letting E_k be as above, sample ψ_k at the unique values of d from its multivariate normal full conditional, where the covariance matrix is formed from K_d, the corresponding design matrices, and the identity matrix.

  3. Sample δ_1 from its full conditional, which depends on the column vectors of Λ and the covariance matrices formed from K_x.

  4. For each h = 2, …, K, sample δ_h from its full conditional, which depends on the column vectors of Λ from column h onward, the covariance matrices formed from K_x, and the current values of the remaining δ.

The other parameters in the model are sampled using Gibbs or Metropolis steps. The algorithm is written in the R programming language (R Core Team, 2015) using the Rcpp C++ extensions (Eddelbuettel, 2013) and the RcppArmadillo (Eddelbuettel and Sanderson, 2014) linear algebra extensions and is available in the supplement.

When the number of unique observations is small, it is possible to develop a block Gibbs sampler for all of the Λ_k and ψ_k jointly. In large problems, however, this requires the inversion of a large matrix, offsetting the benefits of the increased sampling efficiency.

3.3 Predictive Inference

The posterior predictive distribution of unobserved cross sections of f at a new input x_new is estimated through MCMC. Let the vector Λ_k be observed at x₁, …, x_{n_x}, and suppose one is interested in Λ_k evaluated at x_new; the observed and new values are jointly distributed as a multivariate normal with block covariance matrices C₁₁, C₁₂, and C₂₂ given K_x and its hyperparameters. Using properties of the multivariate normal distribution, conditionally on the observed values, Λ_k(x_new) is normal with mean C₂₁ C₁₁⁻¹ Λ_k and covariance C₂₂ − C₂₁ C₁₁⁻¹ C₁₂. For each iteration, this expression is used to draw Λ_k(x_new). Given this draw, as well as the draws of ψ_k, μ, etc., evaluated at the doses of interest, one can estimate the posterior predictive distribution of f(x_new, ·). If new doses are of interest, the same technique can be applied to estimating the ψ_k at those doses. These values can be used with the draws of Λ_k to provide estimates for the full surface.
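The conditional-normal draw used at each iteration can be sketched as follows; the kernel, inputs, and names are illustrative, not taken from the paper's code.

```python
import numpy as np

# Draw a GP's values at new inputs conditionally on its values at
# observed inputs, via the standard multivariate normal conditioning
# formulas: mean = C21 C11^{-1} f, cov = C22 - C21 C11^{-1} C12.

def sq_exp(A, B, rho=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-rho * d2)

def conditional_draw(x_obs, f_obs, x_new, rho=1.0, jitter=1e-8, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    C11 = sq_exp(x_obs, x_obs, rho) + jitter * np.eye(len(x_obs))
    C12 = sq_exp(x_obs, x_new, rho)
    C22 = sq_exp(x_new, x_new, rho)
    A = np.linalg.solve(C11, C12)            # C11^{-1} C12
    mean = A.T @ f_obs                       # conditional mean
    cov = C22 - C12.T @ A + jitter * np.eye(len(x_new))
    L = np.linalg.cholesky(cov)
    return mean, mean + L @ rng.standard_normal(len(x_new))

x_obs = np.array([[0.0], [1.0], [2.0]])
f_obs = np.array([0.0, 1.0, 0.0])
x_new = np.array([[1.0001]])
mean, draw = conditional_draw(x_obs, f_obs, x_new)
# A new point essentially on top of a training point reproduces it.
print(mean[0])
```

Within the sampler, this draw is repeated per iteration for each Λ_k, so predictive uncertainty in the surface propagates through all posterior draws rather than being computed from a single point estimate.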

4 Simulation

This approach was tested on synthetic data. Here the dimension of x was chosen to be 2 or 3, and for a given dimension, multiple synthetic data sets were created. For each data set, a number of cross sections of f were observed at seven dose groups, so that each data set contained the same total number of observations.

To create a data set, the chemical information vector x_i, for chemical i, was sampled uniformly over the unit square/cube. At each x_i, the response was sampled at the seven dose groups from a sigmoidal dose-response model with magnitude M(x_i) and half-maximal dose ED50(x_i), where M(x) is the magnitude of the response and ED50(x) is the dose where the response is at 50% of the maximum. To vary the response over X, a different zero centered Gaussian process was sampled for each simulation, and M and ED50 were defined as functions of this process. This resulted in maximum responses of varying size, with the dose producing the half-maximal response placed closer to zero for steeper dose-responses. Sample data sets used in the simulation are available in the supplement.

In specifying the model, priors were placed over parameters to reflect assumptions about the smoothness of the curve. Each scale parameter was drawn from a discrete uniform distribution over a grid of plausible values. For the multiplicative gamma process, the hyperparameters were set to the choices used in Bhattacharya and Dunson (2011); alternative choices of the parameters for the prior over the δ_h were examined, and the results were nearly identical. The prior specification for the model was completed by placing a prior over the error variance σ².

MCMC samples were obtained, with an initial portion of the chain disregarded as burn in. Trace plots from multiple chains were monitored for convergence. Convergence was fast, usually occurring within a modest number of iterations, with a small number of functions in the learned basis defining the surface (i.e., having posterior variance at or above 1).

To analyze the choice of K, the performance of the model was evaluated for K = 1, 2, 3, and 15.

The estimates produced from these models were compared against bagged multivariate adaptive regression splines (MARS) (Friedman, 1991) and bagged neural networks (Zhou et al., 2002) using the 'caret' package in R (from Jed Wing et al., 2016). Additionally, treed Gaussian processes (Gramacy and Lee, 2008), using the R 'tgp' package (Gramacy et al., 2007), were used in the comparison. All packages were run using their default settings; bagged samples were used for both the MARS and neural network models, and a fixed number of hidden units was used for the neural network model. Posterior predictions from the treed Gaussian process were obtained from the maximum a posteriori (MAP) estimate. This was done because initial tests revealed that estimates sampled from the posterior distribution were no better than the MAP estimate, but sampling from the full posterior dramatically increased computation time, making the full simulation impossible.

For each data set, N = 75, N = 125, and N = 175 curves were randomly sampled and used to train the respective model; the remaining curves were used as a hold out sample for posterior predictive inference, with the true curve being compared against the predictions. All methods were compared using the mean squared predicted error (MSPE).
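The comparison metric can be sketched in a few lines, including the trimmed mean referenced in the table footnote for methods that produced unstable predictions. The trim proportion and error values below are illustrative; the paper's exact settings are not reproduced here.

```python
import numpy as np

# Mean squared prediction error on hold-out curves, plus a trimmed
# mean for summarizing methods with occasional extreme errors.

def mspe(y_true, y_pred):
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

def trimmed_mean(values, prop=0.1):
    v = np.sort(np.asarray(values))
    k = int(len(v) * prop)          # drop k values from each tail
    return v[k:len(v) - k].mean() if k > 0 else v.mean()

errors = [0.9, 1.0, 1.1, 1.0, 50.0, 1.0, 0.8, 1.2, 1.0, 1.1]
print(trimmed_mean(errors, 0.1))    # the outlier 50.0 is dropped
```

Trimming protects the summary from a handful of wildly off predictions, which is why it was needed for the treed GP and bagged MARS at small training sizes.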

                 Adaptive TP
             K=1     K=2     K=3     K=15    Neural Net   MARS     Treed GP
2-dimensions
  N=75       76.2    69.1    69.5    69.8    108.3        —        —
  N=125      56.9    48.5    48.8    48.7    92.4         205.8    —
  N=175      48.5    37.7    38.4    38.3    85.1         198.8    61.1
3-dimensions
  N=75       164.9   162.0   155.4   155.4   185.2        —        —
  N=125      128.6   125.0   121.0   121.0   160.4        223.4    —
  N=175      106.3   102.6   99.7    100.1   150.0        217.7    —
Trimmed mean used, with a portion of the upper and lower tails removed.
Table 1: Mean squared prediction error in the simulation of the adaptive tensor product approach for four values of K as well as treed Gaussian processes, bagged neural networks, and bagged multivariate adaptive regression splines (MARS).

Table (1) describes the average mean squared predicted error across all simulations. For the adaptive tensor product model there is an improvement in prediction when increasing K. For the 2-dimensional case the improvement occurs from K = 1 to K = 2, but the results for K = 3 and K = 15 are almost identical to K = 2. For the 3-dimensional case, improvements are seen up to K = 3, with nearly identical results for K = 15. These results support the assertion that one can make K large and let the model adapt to the number of tensor products in the sum.

This table also gives the MSPE for the other methods. Compared to the other approaches, the adaptive tensor product approach is superior. For N = 175, the treed Gaussian process, which is often the closest competitor, produces mean squared prediction errors that are about 1.6 times greater than those of the adaptive TP approach. For smaller values of N, the treed GP as well as the bagged MARS approach failed to produce realistic predictions; in these cases, the trimmed mean, with a portion of the tails removed, was used to provide a better estimate of center.

Figure 2: Comparison of the predictive performance between the adaptive tensor product and the treed Gaussian process. In the figure, the corresponding model’s root mean squared predicted error is given as a contour plot. The heat map represents the maximum dose response given the coordinate pair; lighter colors represent greater dose-response activity.

Figure (2) shows where the gains in prediction take place. As a surrogate for the intensity of the true dose-response function, the intensity of the dose-response at the maximum dose across X is shown for the 2-dimensional case. Dark gray regions are areas of little or no dose-response activity; lighter regions have steeper dose-responses. Contour plots of each model's root MSPE are overlaid on the heat maps. The top plot shows the performance of the adaptive TP and the bottom plot shows the performance of the treed GP, which was the closest competitor for this data set. The figure shows the adaptive TP is generally better at predicting the dose-response curve across X and that larger gains are made in regions of high dose-response activity. This is most evident in the lower left corner of both plots, where regions of high dose-response activity are located. At these peaks, the root MSPE for the treed Gaussian process is almost double that of the adaptive TP.

5 Data Example

The approach is applied to data released from Phase II of the ToxCast high throughput platform. The AttaGene PXR assay was chosen as it has the highest number of active dose-response curves across the chemicals tested. This assay targets the Pregnane X receptor, which detects the presence of foreign substances and up-regulates proteins involved with detoxification in the body; an increased response for this assay may relate to the relative toxicity of the given chemical.

Chemical descriptors were calculated using Mold2 (Hong et al., 2008), where chemical structure was described from simplified molecular-input line-entry system (SMILES) information (Weininger, 1988). Mold2 computes hundreds of unique chemical descriptors. For this set of descriptors, a principal component analysis was performed across all chemicals; this is a standard technique in the QSAR literature (Emmert-Streib et al., 2012, pg 44). Here, the leading principal components, representing the bulk of the descriptor variability, were used as a surrogate for the chemical descriptor x.

The database was restricted to chemicals having SMILES information available. In the assay, each chemical was tested across a fixed range of doses, with no tests done at exactly a zero dose. Eight doses were used per chemical, with individual chemicals tested at different doses within that range. Most chemicals had one observation per dose; however, some of the chemicals tested had multiple observations per dose. In total, the data set consisted of thousands of data points across the distinct dose-response curves.

A random sample of chemicals was taken and the model was trained on these data; in evaluating posterior predictive performance, the remaining chemicals were used as a hold out sample. In this analysis, d was the log dose, rounded to two significant digits, and the same prior specification as in the simulation was used to train the model. To compare the prediction results, bagged MARS and neural networks were used; treed Gaussian processes were attempted, but the R package 'tgp' crashed after 8 hours during burn-in. The method of Low-Kam et al. (2015) was also attempted, but the code was designed such that each chemical is tested at the same doses with the same number of replications per dose point. As the ToxCast data are not in this format, the method could not be applied.

Figure 3: Four posterior predicted dose-response curves (black line) with corresponding predicted credible intervals (dotted lines) for four chemicals in the hold out sample having repeated measurements per dose. The grey dash-dotted line represents the predicted response from the bagged neural network.

Figures (3) and (4) show the posterior predicted curves (solid black line) with equal tail posterior predicted quantiles (dashed line) for eight chemicals in the hold out sample. Figure (3) describes the predictions for chemicals having multiple measurements per dose group, and figure (4) gives predictions for chemicals having only a single observation per dose group. As compared to the observed data, these figures show the model provides accurate dose-response predictions across a variety of shapes and chemical profiles. Additionally, the grey dash-dotted line represents the prediction using the bagged neural network. These estimates are frequently less smooth and further from the observed data than those of the adaptive tensor product approach. As additional confirmation that the model is predicting dose-response curves based upon the chemical information, one can look at the chemicals from a biological mode of action perspective. For example, in figure (3), it is interesting to note that the dose-response predictions for both Bisphenol A and Hydroxyprogesterone are similar, because both act similarly and are known to bind the estrogen receptor.

Figure 4: Four posterior predicted dose-response curves (black line) with corresponding predicted credible intervals (dotted lines) for four chemicals in the hold out sample having a single observation per dose. The grey dash-dotted line represents the predicted response from the bagged neural network.

In comparison to the other models, the adaptive tensor product approach also had the lowest predicted mean squared error and the lowest predicted mean absolute error for the data in the hold out sample, outperforming both the neural network and MARS on both criteria. These results are well in line with the simulation.

One can also compare the ability of the posterior predictive distribution to predict the observations in the hold out sample. To do this, lower and upper tail cut-points defined by a tail probability α were estimated from the posterior predictive distribution, and the number of observations below the lower cut-point or above the upper cut-point was counted. Under the assumption that the posterior predictive distribution adequately describes the data, this count is Binomial(n_c, 2α), where n_c is equal to the number of observations for that chemical. To test this assumption, the binomial critical value was computed and compared to the count. For the levels of α considered, the vast majority of the posterior predictive distributions produced counts at or below the critical value. This supports the assertion that the model is accurately predicting the dose-response relationship given the chemical descriptor information x.
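The calibration check just described can be sketched as follows; the value of α, the number of observations, and the observed count are illustrative, not those of the data example.

```python
from math import comb

# Under a correct posterior predictive distribution, the number of
# hold-out points outside the central (1 - 2*alpha) predictive band
# is Binomial(n, 2*alpha). The critical value is the smallest c with
# P(X <= c) >= level.

def binom_cdf(c, n, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def critical_value(n, p, level=0.95):
    c = 0
    while binom_cdf(c, n, p) < level:
        c += 1
    return c

n, alpha = 16, 0.05                 # 16 observations, 90% band
c = critical_value(n, 2 * alpha)
outside = 2                         # e.g., 2 of 16 points fall outside
print(c, outside <= c)
```

A chemical whose count exceeds the critical value is evidence that its predictive distribution is miscalibrated; counting how many chemicals pass gives the summary reported above.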

The method was also compared to standard QSAR approaches, which model a single data point. Similar to standard QSAR methodology, a model was fit for each dose in which the response associated with that dose was taken as the scalar observation y_i. This value was then used in the model

y_i = g(x_i) + ε_i,

where g is a Gaussian process with squared exponential covariance kernel. The model was fit on the same observations in the training set, and predictions were made of the response at that dose. Both this QSAR approach and the adaptive tensor product approach were compared to the hold out samples using the correlation coefficient, a standard practice in the QSAR literature. At the doses considered, the proposed approach had a noticeably higher correlation than the standard QSAR approach, and, when prediction was restricted to observations having a chemical within the training set defined as 'close' (i.e., a relative distance between two chemicals less than 2.2), the improvement in the correlation coefficient was even larger. This indicates meaningful improvements in the correlation coefficient using the proposed approach.

6 Conclusions

The proposed approach allows one to model higher dimensional surfaces as a sum of a learned basis, where the effective number of components in the basis adapts to the complexity of the surface. In the simulation and the motivating problem, this method is shown to be superior to competing approaches, and, given the design of the experiment, it is shown to require significantly fewer computational resources than Gaussian process approaches. Though this approach is demonstrated for high throughput data, it is anticipated that it can be used for any multi-dimensional surface.

In terms of the application, this model shows that dose-response curves can be estimated from chemical SAR information, which is a step forward in QSAR modeling. Though such an advance is useful for investigating toxic effects, it can also be applied to therapeutic effects. It is conceivable that such an approach could be used in silico to find chemicals that elicit certain therapeutic effects in certain pathways without eliciting toxic effects in other pathways. Such an approach may be of significant use in drug development as well as chemical toxicity research.

Future research may focus on extending this model to multi-output functional responses. For example, multiple bioassays' dose-responses may be observed, and, because they target similar pathways, they are correlated. In such cases, it may be reasonable to assume their responses are both correlated with each other and related to the secondary input, which is the chemical used in the bioassay. Such an approach may allow lower level in vitro bioassays, like the ToxCast endpoint studied here, to be used to model higher level in vivo responses.


  • Banerjee et al. (2013) Banerjee, A., Dunson, D. B., and Tokdar, S. T. (2013). Efficient Gaussian process regression for large datasets. Biometrika 100, 75.
  • Bhattacharya and Dunson (2011) Bhattacharya, A. and Dunson, D. B. (2011). Sparse Bayesian infinite factor models. Biometrika 98, 291–306.
  • Bonilla et al. (2007) Bonilla, E. V., Chai, K. M., and Williams, C. (2007). Multi-task Gaussian process prediction. In Platt, J., Koller, D., Singer, Y., and Roweis, S., editors, Proc. of the 20th Annual Conference on Neural Information Processing Systems, pages 153–160. Neural Information Processing Systems Conference.
  • Breiman (1996) Breiman, L. (1996). Bagging predictors. Machine learning 24, 123–140.
  • Breiman (2001) Breiman, L. (2001). Random forests. Machine learning 45, 5–32.
  • Brockhaus et al. (2015) Brockhaus, S., Scheipl, F., Hothorn, T., and Greven, S. (2015). The functional linear array model. Statistical Modelling 15, 279–300.
  • Burden (2001) Burden, F. R. (2001). Quantitative structure-activity relationship studies using Gaussian processes. Journal of chemical information and computer sciences 41, 830–835.
  • Czermiński et al. (2001) Czermiński, R., Yasri, A., and Hartsough, D. (2001). Use of support vector machine in pattern classification: Application to QSAR studies. Quantitative Structure-Activity Relationships 20, 227–240.
  • de Boor (2000) de Boor, C. (2000). A practical guide to splines, Revised Edition, volume 27. Springer-Verlag.
  • de Boor (2001) de Boor, C. (2001). A practical guide to splines. Springer.
  • Deconinck et al. (2005) Deconinck, E., Hancock, T., Coomans, D., Massart, D., and Vander Heyden, Y. (2005). Classification of drugs in absorption classes using the classification and regression trees (CART) methodology. Journal of pharmaceutical and biomedical analysis 39, 91–103.
  • Delaigle and Hall (2012) Delaigle, A. and Hall, P. (2012). Achieving near perfect classification for functional data. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 74, 267–286.
  • Delaigle and Hall (2013) Delaigle, A. and Hall, P. (2013). Classification using censored functional data. Journal of the American Statistical Association 108, 1269–1283.
  • Devillers (1996) Devillers, J. (1996). Neural networks in QSAR and drug design. Academic Press.
  • Eddelbuettel (2013) Eddelbuettel, D. (2013). Seamless R and C++ integration with Rcpp. Springer.
  • Eddelbuettel and Sanderson (2014) Eddelbuettel, D. and Sanderson, C. (2014). RcppArmadillo: Accelerating R with high-performance C++ linear algebra. Computational Statistics and Data Analysis 71, 1054–1063.
  • Emmert-Streib et al. (2012) Emmert-Streib, F., Dehmer, M., Varmuza, K., and Bonchev, D. (2012). Statistical modelling of molecular descriptors in QSAR/QSPR. John Wiley & Sons.
  • Ferraty and Vieu (2006) Ferraty, F. and Vieu, P. (2006). Nonparametric functional data analysis: theory and practice. Springer Science & Business Media.
  • Friedman (1991) Friedman, J. H. (1991). Multivariate adaptive regression splines. The Annals of Statistics 19, 1–67.
  • Kuhn et al. (2016) Kuhn, M., with contributions from Wing, J., Weston, S., Williams, A., Keefer, C., Engelhardt, A., Cooper, T., Mayer, Z., Kenkel, B., the R Core Team, Benesty, M., Lescarbeau, R., Ziem, A., Scrucca, L., Tang, Y., Candan, C., and Hunt, T. (2016). caret: Classification and Regression Training. R package version 6.0-73.
  • Gramacy et al. (2007) Gramacy, R. B. et al. (2007). tgp: an R package for Bayesian nonstationary, semiparametric nonlinear regression and design by treed Gaussian process models. Journal of Statistical Software 19, 6.
  • Gramacy and Lee (2008) Gramacy, R. B. and Lee, H. K. H. (2008). Bayesian treed Gaussian process models with an application to computer modeling. Journal of the American Statistical Association 103, 1119–1130.
  • Hall et al. (2001) Hall, P., Poskitt, D. S., and Presnell, B. (2001). A functional data-analytic approach to signal discrimination. Technometrics 43, 1–9.
  • Higdon (2002) Higdon, D. (2002). Space and space-time modeling using process convolutions. In Anderson, C. W., Vic Barnett, P. C., Chatwin, P. C., and El-Shaarawi, A. H., editors, Quantitative methods for current environmental issues, pages 37–54. Springer.
  • Hong et al. (2008) Hong, H., Xie, Q., Ge, W., Qian, F., Fang, H., Shi, L., Su, Z., Perkins, R., and Tong, W. (2008). MOLD2, molecular descriptors from 2D structures for chemoinformatics and toxicoinformatics. Journal of chemical information and modeling 48, 1337–1344.
  • Judson et al. (2010) Judson, R. S., Houck, K. A., Kavlock, R. J., Knudsen, T. B., Martin, M. T., Mortensen, H. M., Reif, D. M., Rotroff, D. M., Shah, I., Richard, A. M., et al. (2010). In vitro screening of environmental chemicals for targeted testing prioritization: the ToxCast project. Environmental health perspectives 118, 485.
  • Low-Kam et al. (2015) Low-Kam, C., Telesca, D., Ji, Z., Zhang, H., Xia, T., Zink, J. I., Nel, A. E., et al. (2015). A Bayesian regression tree approach to identify the effect of nanoparticles’ properties on toxicity profiles. The Annals of Applied Statistics 9, 383–401.
  • Montagna et al. (2012) Montagna, S., Tokdar, S. T., Neelon, B., and Dunson, D. B. (2012). Bayesian latent factor regression for functional and longitudinal data. Biometrics 68, 1064–1073.
  • Morris (2015) Morris, J. S. (2015). Functional regression. Annual Review of Statistics and Its Application 2, 321–359.
  • Murphy (2012) Murphy, K. P. (2012). Machine learning: a probabilistic perspective. MIT press.
  • Norinder (2003) Norinder, U. (2003). Support vector machine models in drug design: applications to drug transport processes and QSAR using simplex optimisations and variable selection. Neurocomputing 55, 337–346.
  • Quiñonero-Candela and Rasmussen (2005) Quiñonero-Candela, J. and Rasmussen, C. E. (2005). A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research 6, 1939–1959.
  • R Core Team (2015) R Core Team (2015). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
  • Ramsay (2006) Ramsay, J. O. (2006). Functional data analysis. Wiley.
  • Rasmussen and Williams (2006) Rasmussen, C. E. and Williams, C. K. (2006). Gaussian processes for machine learning. MIT Press.
  • Roy et al. (2015) Roy, K., Kar, S., and Das, R. N. (2015). Understanding the basics of QSAR for applications in pharmaceutical sciences and risk assessment. Academic Press.
  • Scheipl et al. (2015) Scheipl, F., Staicu, A.-M., and Greven, S. (2015). Functional additive mixed models. Journal of Computational and Graphical Statistics 24, 477–501.
  • Sollich and Krogh (1996) Sollich, P. and Krogh, A. (1996). Learning with ensembles: How overfitting can be useful. Advances in neural information processing systems pages 190–196.
  • Sprechmann and Sapiro (2010) Sprechmann, P. and Sapiro, G. (2010). Dictionary learning and sparse coding for unsupervised clustering. In Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pages 2042–2045. IEEE.
  • Weininger (1988) Weininger, D. (1988). SMILES, a chemical language and information system: 1. introduction to methodology and encoding rules. Journal of chemical information and computer sciences 28, 31–36.
  • Zhang and Li (2010) Zhang, Q. and Li, B. (2010). Discriminative K-SVD for dictionary learning in face recognition. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2691–2698. IEEE.
  • Zhou et al. (2002) Zhou, Z.-H., Wu, J., and Tang, W. (2002). Ensembling neural networks: many could be better than all. Artificial intelligence 137, 239–263.