Cluster-based Kriging Approximation Algorithms for Complexity Reduction

Bas van Stein, et al. (Universiteit Leiden) — 02/04/2017

Kriging or Gaussian Process Regression is applied in many fields as a non-linear regression model as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging, which are cubic and quadratic in the number of data points respectively, become a major bottleneck with more and more data available nowadays. In this paper, we propose a general methodology for complexity reduction, called Cluster Kriging, where the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them. In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods on a well-defined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing ones. Moreover, some practical suggestions are provided for using the proposed algorithms.


I Introduction

Kriging, or Gaussian Process Regression [1], is a popular and elegant kernel-based regression model capable of modeling very complex functions. Kriging is used in many fields, e.g. engineering, mining and geology, as a tool for the analysis of data sets, for prediction purposes and for surrogate model based optimization [2]. Many other regression models exist, such as parametric models, which are easy to interpret but may lack the expressive power to model complex functions. On the other hand, regression tree based methods like Random Forests [3] or Gradient Boosted Decision Trees have more expressive power but lack interpretability [4]. Another method is Linear Model Trees [5], which uses a tree structure with linear models at the leaves of the tree. There are also more complex algorithms like Neural Networks, or Extreme Learning Machines [6], that are able to model very complex functions but are usually not easy to work with in practice. There are also other kernel-based methods such as Support Vector Machines [7] and Radial Basis Functions [8].

The main advantage of Kriging over other regression methods is that Kriging provides not only the estimate of the value of a function, but also the mean squared error of that estimate, the so-called Kriging variance. The Kriging variance can be seen as the uncertainty assessment of the model and has been exploited in surrogate model based optimization and many other applications. Despite the clear advantage of the Kriging variance, Kriging suffers from one major problem: the high training time and space complexity, which are $\mathcal{O}(n^3)$ and $\mathcal{O}(n^2)$ respectively, where $n$ denotes the number of data points. To overcome this complexity problem, Kriging approximation algorithms such as [9] and [10] have been introduced. Unfortunately, these approximation algorithms are usually less accurate than the original Kriging algorithm.

Contributions. An overview of Kriging approximation methods is presented and a novel divide-and-conquer based approach, Cluster Kriging (CK), is introduced. The Cluster Kriging framework consists of three steps: Partitioning, Modeling and Predicting. Each of the steps can be implemented using a wide range of approaches, which are explained in detail in this paper. Using these approaches, four algorithms are implemented and compared against each other and the state of the art. One particularly interesting and novel algorithm that uses the Cluster Kriging methodology is the proposed Model Tree Cluster Kriging (MTCK). MTCK uses a regression tree with a specified number of leaf nodes to partition the data in the objective space. A Kriging model is then built on each partition defined by the tree's leaves. To predict an unseen record, MTCK uses only one of the trained Kriging models, depending on which leaf node the unseen record is assigned to. The proposed algorithms are evaluated and compared to several state-of-the-art Kriging approximation algorithms. A well-defined testing framework for Kriging approximation algorithms [11] is adopted for the comparisons.

II Kriging

Notation. Throughout this paper, we use $n$, $k$ and $d$ to denote the number of data points, the number of clusters and the dimensionality of the input space, respectively. In addition, the regression function is denoted as $f$. In complexity statements we ignore $d$, since Kriging is generally used on low-dimensional data sets. Without loss of generality, the column vector convention is adopted throughout this paper.

Loosely speaking, Kriging is a stochastic interpolation method in which the output value of a stochastic process is predicted as a linear function of the observed output values [12, 13]. In particular, Kriging is the best linear unbiased predictor (BLUP) and the corresponding mean squared error of prediction is used for uncertainty quantification. Kriging originates from the field of spatial analysis/geostatistics and more recently has been widely used in Bayesian optimization and the design and analysis of computer experiments (DACE) [14, 15]. A distinctive feature of the model is that it provides a theoretical uncertainty measurement of its estimations.

When the stochastic process is assumed to be Gaussian, Kriging is equivalent to Gaussian Process Regression (GPR), where the posterior distribution of the regression function (posterior Gaussian process) is inferred through Bayesian statistics. In this paper, we consider this special case and adopt the mathematical treatment of the Gaussian process. Assume that the input data points are summarized in the set $\mathbf{X} = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\} \subset \mathbb{R}^d$ and the corresponding output variables are represented as $\mathbf{y} = (y_1, y_2, \ldots, y_n)^\top$. Specifically, the most commonly used variant of Kriging, Ordinary Kriging, models the regression function as a random process that is the sum of an unknown constant trend $\mu$ and a centered Gaussian process $\varepsilon(\cdot)$. The output variables are considered as "noisy" observations of this process, perturbed by a Gaussian random noise $e_i$:

$$y_i = \mu + \varepsilon(\mathbf{x}_i) + e_i, \qquad e_i \sim \mathcal{N}(0, \sigma_n^2), \qquad i = 1, \ldots, n.$$

Note that the noises (error terms) $e_i$ are assumed to be homoscedastic (identically distributed) and independent, both from each other and from the Gaussian process $\varepsilon(\cdot)$. The centered Gaussian process $\varepsilon(\cdot)$ is a stochastic process which possesses zero mean everywhere, and any finite collection of its random variables has a joint Gaussian distribution [1]. It can be completely specified by providing a covariance function to calculate the pairwise covariance:

$$c(\mathbf{x}, \mathbf{x}') = \operatorname{Cov}\left[\varepsilon(\mathbf{x}), \varepsilon(\mathbf{x}')\right].$$

The covariance function is a kernel function performing the so-called "kernel trick", which computes the inner product in the feature space as a function in the input space. In this paper, the covariance function is chosen to be stationary, meaning that the kernel is a function of $\mathbf{x} - \mathbf{x}'$ and invariant to translations in the input space. Consequently, the variance of the Gaussian process is independent of the input and is denoted as $\sigma^2$ in the following. In practice, a common choice is the Gaussian covariance function (also known as the squared exponential kernel):

$$c(\mathbf{x}, \mathbf{x}') = \sigma^2 \exp\left(-\sum_{i=1}^{d} \theta_i \left(x_i - x_i'\right)^2\right), \tag{1}$$

where the $\theta_i$'s are called hyper-parameters, which are either predetermined or estimated through model fitting, and $\sigma^2$ is inferred by the maximum likelihood method. We omit the likelihood function in this paper. To infer the output value $y(\mathbf{x})$ at an unobserved data point $\mathbf{x}$, the joint distribution of $y(\mathbf{x})$ and the observed outputs $\mathbf{y}$ is derived, conditioning on the input data set $\mathbf{X}$ and the unknown prior mean $\mu$. Such a joint distribution is a multivariate Gaussian and is expressed as follows:

$$\begin{bmatrix} \mathbf{y} \\ y(\mathbf{x}) \end{bmatrix} \;\Bigg|\; \mathbf{X}, \mathbf{x}, \mu \;\sim\; \mathcal{N}\left(\mu \mathbf{1}_{n+1},\; \begin{bmatrix} \mathbf{C} + \sigma_n^2 \mathbf{I}_n & \mathbf{c}(\mathbf{x}) \\ \mathbf{c}(\mathbf{x})^\top & \sigma^2 + \sigma_n^2 \end{bmatrix}\right), \tag{2}$$

where $\mathbf{C} = \left[c(\mathbf{x}_i, \mathbf{x}_j)\right]_{i,j=1}^{n}$ is the covariance matrix of the observed inputs, $\mathbf{c}(\mathbf{x}) = \left(c(\mathbf{x}, \mathbf{x}_1), \ldots, c(\mathbf{x}, \mathbf{x}_n)\right)^\top$, and $\mathbf{1}_{n}$ denotes a column vector of length $n$ that contains only $1$'s. The homogeneous variance $\sigma_n^2$ of the noise can be either determined by the user or estimated through the maximum likelihood method. The posterior distribution of $y(\mathbf{x})$ can be calculated by marginalizing out $\mu$ and conditioning on the observed output variables $\mathbf{y}$ [1]. Without any derivations, the posterior distribution for Ordinary Kriging is again Gaussian [16]:

$$y(\mathbf{x}) \mid \mathbf{X}, \mathbf{x}, \mathbf{y} \;\sim\; \mathcal{N}\left(m(\mathbf{x}), s^2(\mathbf{x})\right), \tag{3}$$

where the posterior mean and variance are expressed as:

$$m(\mathbf{x}) = \hat{\mu} + \mathbf{c}(\mathbf{x})^\top \left(\mathbf{C} + \sigma_n^2 \mathbf{I}_n\right)^{-1} \left(\mathbf{y} - \hat{\mu}\mathbf{1}_n\right), \tag{4}$$
$$s^2(\mathbf{x}) = \sigma^2 + \sigma_n^2 - \mathbf{c}(\mathbf{x})^\top \left(\mathbf{C} + \sigma_n^2 \mathbf{I}_n\right)^{-1} \mathbf{c}(\mathbf{x}). \tag{5}$$

Note that the estimation of the trend, $\hat{\mu}$, is obtained by the maximum a posteriori (MAP) principle. The posterior mean function (Eq. 4) is used as the predictor, while the posterior variance (Eq. 5) is the so-called Kriging variance that measures the uncertainty of the prediction.
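To make the computation concrete, the following is a minimal NumPy sketch (not the authors' implementation) of the Ordinary Kriging predictor of Eq. 4 and 5, assuming the Gaussian kernel of Eq. 1, a plug-in (MAP/GLS) estimate of the constant trend and fixed hyper-parameters; the maximum likelihood optimization of the hyper-parameters is omitted. The Cholesky factorization makes the $\mathcal{O}(n^3)$ bottleneck explicit.

```python
import numpy as np

def gauss_kernel(A, B, theta, sigma2):
    """Gaussian / squared-exponential kernel (Eq. 1)."""
    diff = A[:, None, :] - B[None, :, :]                 # pairwise differences, shape (nA, nB, d)
    return sigma2 * np.exp(-np.einsum('abd,d->ab', diff ** 2, theta))

def ordinary_kriging_predict(X, y, X_new, theta, sigma2, noise=1e-10):
    """Posterior mean and variance (Eq. 4 and 5) with a plug-in estimate of the trend."""
    n = X.shape[0]
    C = gauss_kernel(X, X, theta, sigma2) + noise * np.eye(n)
    L = np.linalg.cholesky(C)                             # O(n^3): the main bottleneck
    solve = lambda b: np.linalg.solve(L.T, np.linalg.solve(L, b))
    ones = np.ones(n)
    mu_hat = ones @ solve(y) / (ones @ solve(ones))       # MAP/GLS estimate of the constant trend
    c_new = gauss_kernel(X, X_new, theta, sigma2)         # cross-covariances, shape (n, n_new)
    mean = mu_hat + c_new.T @ solve(y - mu_hat * ones)
    var = sigma2 + noise - np.einsum('ij,ij->j', c_new, solve(c_new))
    return mean, np.maximum(var, 0.0)                     # clip tiny negative values
```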

III Relevant Research

Despite the theoretically sound development of the Kriging model, it suffers from several issues when applied to large data sets. The major bottleneck is the high time complexity of the model fitting process: the inverse of the covariance matrix needs to be computed for both the posterior mean and variance (Eq. 4 and 5), which has roughly $\mathcal{O}(n^3)$ time complexity (asymptotically faster algorithms for matrix inversion exist, e.g. Strassen's $\mathcal{O}(n^{2.807})$ and Stothers' $\mathcal{O}(n^{2.374})$, but their practical performance is worse than that of some methods with $\mathcal{O}(n^3)$ time complexity). Moreover, when optimizing the hyper-parameters of the kernel function, the log likelihood of those parameters is again calculated through $\left(\mathbf{C} + \sigma_n^2 \mathbf{I}_n\right)^{-1}$, resulting in an $\mathcal{O}(n^3)$ computational cost per optimization iteration. Thus, for a large data set, such a high overhead in model fitting renders Kriging inapplicable in practice. Various attempts have been made to overcome the computational complexity issue of Kriging [1]:

Subset of Data (SoD) [17] is a very simple approach to reducing complexity by using only a subset of the data points, usually chosen at random. Obviously this wastes information, but it can work well if a sufficient number of data points is available.

Subset of Regressors (SoR) [18] approximates Kriging by a linear combination of kernel functions on a set of basis points. The basis points are linearly weighted to construct the predictor. The choice of the basis points does influence the final outcome. As noted in [19], there are only as many degrees of freedom in the model as there are basis points, because the model is degenerate (finite linear-in-the-parameters), which might be too restrictive.

Fully Independent Training Conditional (FITC) [20, 21]. Snelson and Ghahramani proposed what they called Sparse Gaussian Processes using Pseudo-inputs. It uses a more sophisticated likelihood approximation with a richer covariance and is a non-degenerate version of the SoR algorithm. Given a set of basis points (pseudo-inputs), the model is fitted and validated on the training data. As with SoR, the choice of basis points is a problem; usually either a subset of the training data or a uniform distribution over the input space is used.

Gaussian Markov Random Field Kriging [10] approximates the covariance matrix with a sparse precision matrix. It uses Gaussian Markov Random Fields (GMRF) on a reasonably dense grid to exploit the computational benefits of a Markov field while keeping the formula of the Kriging weights. This method reduces the complexity for simple and ordinary Kriging, but might not always be efficient for universal Kriging.

Bayesian Committee Machine (BCM) [9] is an algorithm similar to the one we propose, but developed from a completely different perspective. The basic motivation is to divide a huge training set into several relatively small subsets and then construct a Kriging model on each subset. The benefit of this approach is that the training time on each subset is satisfactory and the training task can be easily parallelized. After training, the prediction is made by a weighted combination of the estimations from all the Kriging models. BCM uses batch prediction to speed up the computation even further. However, BCM does not seem to correct for different hyper-parameters per module, nor for badly fitted modules, which becomes a major problem when the number of modules increases.

Several other attempts have been made to divide the Kriging model into sub-models [22, 23], each tailored to a different domain. In [22], a Bagging [24] method is proposed to increase the robustness of the Kriging algorithm, rather than to speed up the algorithm's training time. In [23], a partitioning method is introduced to separate the data points into local Kriging models and to combine the different models using a distance metric.

All of these approximation algorithms have their advantages and disadvantages and they are compared to our Cluster Kriging algorithms. The algorithms listed above can be divided into three categories:

  1. methods that approximate the covariance matrix (using sparsity),

  2. methods that divide the training data into several clusters and build a model for each cluster,

  3. methods that take only a subset of the training data into account.

For the empirical study, three state-of-the-art algorithms, SoD, FITC and BCM, are selected for comparison with the proposed approaches, as they appear to be the most widely used algorithms in their respective categories.

IV Cluster Kriging

The main idea behind our proposed methodology, Cluster Kriging, is to use a clustering algorithm to partition the data set into several clusters and to build a Kriging model on each cluster. Loosely speaking, if the whole data set is partitioned into $k$ clusters of similar sizes, Cluster Kriging reduces the time complexity to $\mathcal{O}\!\left(k \cdot (n/k)^3\right) = \mathcal{O}\!\left(n^3/k^2\right)$ if the Kriging models are fitted sequentially. When exploiting $k$ CPU processes in parallel, the time complexity is further reduced to $\mathcal{O}\!\left((n/k)^3\right)$. In practice this means that, for instance, taking $k \in \Theta(\sqrt{n})$ makes the sequential algorithm quadratic in time, while with $k \in \Theta(n^{2/3})$ clusters the parallel variant even reaches linear time complexity. For the output value $y(\mathbf{x})$ at an unobserved data point $\mathbf{x}$, each Kriging model provides a (local) prediction. To obtain a global prediction, it is proposed to either combine the predictions from all the Kriging models or select the most appropriate Kriging model for the prediction.

There are many options for the data partitioning, e.g. K-means and Gaussian mixture models (GMM), and the Kriging models on the clusters can also be combined in different manners. By varying the options in each step of Cluster Kriging, many algorithms can be generated. Four of them will be explained in the next section. In this section, the options in each step of the algorithms are introduced gradually.

IV-A Clustering

The first step in the Cluster Kriging methodology is the clustering of the input data (and the output variables) into several smaller data sets. In general, the goal is to obtain a set of $k$ clusters on the input data set $\mathbf{X}$:

$$\mathcal{X} = \{\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_k\}, \qquad \mathbf{X}_i \subseteq \mathbf{X}. \tag{6}$$

In addition, the output values are grouped according to the clustering of $\mathbf{X}$: $\{\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_k\}$. The clustering can be done in many ways, the simplest feasible approach being random clustering. For our framework, however, we introduce three more sophisticated partitioning methods that are used in the experiments later on.

IV-A1 Hard Clustering

Hard clustering splits the data into smaller disjoint data sets:

$$\mathbf{X}_i \cap \mathbf{X}_j = \emptyset \;\; \text{for } i \neq j, \qquad \bigcup_{i=1}^{k} \mathbf{X}_i = \mathbf{X}.$$

This can be achieved by various methods, for instance the K-means algorithm (Eq. 7). K-means clustering minimizes the within-cluster sum of squares, which is expressed as:

$$\operatorname*{arg\,min}_{\mathcal{X}} \sum_{i=1}^{k} \sum_{\mathbf{x} \in \mathbf{X}_i} \left\lVert \mathbf{x} - \mathbf{v}_i \right\rVert^2, \tag{7}$$

where $\mathbf{v}_i$ is the centroid of cluster $\mathbf{X}_i$, calculated as the mean of the points in $\mathbf{X}_i$. The (approximate) minimization of the within-cluster sum of squares takes only $\mathcal{O}(nk)$ execution time per iteration (ignoring $d$).
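As an illustration, such a hard partition can be obtained with a few lines of scikit-learn (a sketch, not the authors' code); each returned pair $(\mathbf{X}_i, \mathbf{y}_i)$ is then handed to its own Kriging model:

```python
from sklearn.cluster import KMeans

def hard_partition(X, y, k, seed=0):
    """Split (X, y) into k disjoint clusters using K-means (Eq. 7)."""
    labels = KMeans(n_clusters=k, random_state=seed).fit_predict(X)
    return [(X[labels == i], y[labels == i]) for i in range(k)]
```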

IV-A2 Soft Clustering

Instead of using a hard clustering approach, a fuzzy clustering algorithm can be used to introduce a slight overlap between the smaller data sets, which might increase the final model accuracy. To incorporate fuzzy clustering, instead of directly applying cluster labels, the probabilities that a point belongs to each cluster are calculated (Eq. 9) and each cluster is assigned the points with the highest membership values, where the number of points per cluster is controlled by a user-defined overlap parameter $\alpha$. $\alpha$ is set between $0$ (no overlap) and $1$ (completely overlapping clusters).

In principle, any fuzzy clustering algorithm can be used for the partitioning. In this paper the Fuzzy C-Means (FCM) [25] clustering algorithm and Gaussian Mixture Models (GMM) [26] are used. FCM is a clustering algorithm very similar to the well-known K-means. It differs from K-means in that it has additional membership coefficients and a fuzzifier. The membership coefficients of a given point give the degrees to which this point belongs to each cluster; these coefficients are normalized so that they sum up to one. The algorithm is fitted on a given data set and returns, for each data point, its coefficients with respect to each cluster. The number of clusters is a user-defined parameter. Fuzzy C-means optimizes the objective function given in Eq. 8 iteratively. In each iteration, the membership coefficients of each point with respect to the clusters are computed using Eq. 9. Subsequently, the centroid of each cluster is computed as the center of mass of all data points, taking the membership coefficients as weights. The objective of fuzzy C-means is to find a set of centroids that minimizes the following function:

$$\sum_{i=1}^{n} \sum_{j=1}^{k} u_{ij}^{\eta} \left\lVert \mathbf{x}_i - \mathbf{v}_j \right\rVert^2, \tag{8}$$

where the $u_{ij}$ are the membership values (see Eq. 9) and $\eta > 1$ is the so-called fuzzifier (fixed to a constant in this paper). The fuzzifier determines the level of cluster fuzziness; the memberships are computed as follows:

$$u_{ij} = \left(\sum_{l=1}^{k} \left(\frac{\lVert \mathbf{x}_i - \mathbf{v}_j \rVert}{\lVert \mathbf{x}_i - \mathbf{v}_l \rVert}\right)^{\frac{2}{\eta - 1}}\right)^{-1}. \tag{9}$$

The other fuzzy clustering procedure used is the Gaussian Mixture Model. GMMs are used together with the expectation-maximization (EM) algorithm for fitting the Gaussian components. The mixture model is fitted on the training data and later used in the weighted combination of the Kriging models by estimating the cluster membership probabilities of unseen data points. The advantage of this clustering technique is that it is fairly robust and that the number of clusters can be specified by the user. For the GMM method one can use the full covariance matrix whenever the dimensionality of the input data is small; when working with high-dimensional data, a diagonal covariance matrix can be used instead. The time complexity of GMM depends on the underlying EM algorithm: each EM iteration takes $\mathcal{O}(nk)$ operations (ignoring $d$) to re-estimate the model parameters.
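The sketch below illustrates a GMM-based soft partition with scikit-learn. The paper describes the overlap parameter $\alpha$ only qualitatively, so the mapping from $\alpha$ to the per-cluster size used here (size $= (1 + \alpha(k-1))\,n/k$, giving disjoint-sized clusters for $\alpha = 0$ and fully overlapping clusters for $\alpha = 1$) is an illustrative assumption, not the authors' exact rule:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def soft_partition(X, y, k, alpha=0.1, seed=0):
    """Overlapping partition: each cluster keeps the points with the highest membership."""
    gmm = GaussianMixture(n_components=k, covariance_type='diag', random_state=seed).fit(X)
    resp = gmm.predict_proba(X)                          # (n, k) membership probabilities
    n = X.shape[0]
    size = int(np.ceil((1 + alpha * (k - 1)) * n / k))   # assumed alpha -> cluster-size rule
    clusters = []
    for i in range(k):
        idx = np.argsort(resp[:, i])[-size:]             # indices of the top members of cluster i
        clusters.append((X[idx], y[idx]))
    return gmm, clusters
```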

IV-A3 Regression Tree Partitioning

The third method is partitioning by means of a Regression Tree [27] built on the complete training set. The regression tree splits the data set recursively at the best splitting point using the variance reduction criterion. Each leaf node of the Regression Tree represents a cluster of data points. The number of leaves (or the number of records per leaf) can be set by the user. By reducing the variance in each leaf node, and therefore in each local data set, the Kriging models can fit the local data sets much better, as will be shown later on. The time complexity of using a Regression Tree for the partitioning is $\mathcal{O}(n \log n)$ (ignoring $d$), given that the depth of the tree or the number of leaf nodes is set by the user.
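A sketch of the regression tree partitioning with scikit-learn (the authors' own implementation may differ); the tree is fitted on the targets with the squared-error/variance-reduction criterion, and every leaf index defines one cluster:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def tree_partition(X, y, max_leaves=None, min_leaf=1):
    """Partition the data with a regression tree; each leaf becomes one cluster."""
    tree = DecisionTreeRegressor(criterion='squared_error',   # variance reduction
                                 max_leaf_nodes=max_leaves,
                                 min_samples_leaf=min_leaf).fit(X, y)
    leaf_ids = tree.apply(X)                                  # leaf index of every record
    clusters = {leaf: (X[leaf_ids == leaf], y[leaf_ids == leaf])
                for leaf in np.unique(leaf_ids)}
    return tree, clusters
```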

Fig. 1: Visualisation of a Model Tree. The top node is the root and the bottom nodes are the leaves with attached models. Each record in the data (on the left) is assigned to a leaf node of the regression tree.

IV-B Modeling

After partitioning the data set into several clusters, a Kriging model is fitted on each of the smaller data sets. The Kriging algorithm is applied to each cluster individually; this way each model is optimized on its own training set and has its own hyper-parameters. For simplicity we assume, in this paper, that the kernel function used on each cluster is the same. For the regression tree approach, the data set, or more precisely the input space, is partitioned by the tree algorithm and, for each leaf node, a Kriging model is computed using the data belonging to this node (Fig. 1). A similar technique has been introduced in the context of combining linear regression models [28, 29, 5]. In general, the predictive (posterior) distribution of the target variable on cluster $i$ is:

$$y(\mathbf{x}) \mid \mathbf{X}_i, \mathbf{x}, \mathbf{y}_i \;\sim\; \mathcal{N}\left(m_i(\mathbf{x}), s_i^2(\mathbf{x})\right), \tag{10}$$

where $m_i(\mathbf{x})$ and $s_i^2(\mathbf{x})$ are again specified by Eq. 4 and 5, except that $\mathbf{X}$ and $\mathbf{y}$ are replaced by $\mathbf{X}_i$ and $\mathbf{y}_i$. Note that building the Kriging models can be easily parallelized, which gives an additional speedup to Cluster Kriging. Another benefit of building each model separately is that each model usually achieves a much better local fit than a single global Kriging model would.
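Since the per-cluster fits are independent, they parallelize trivially. Below is a sketch that uses scikit-learn's Gaussian process regressor as the Kriging model and joblib for the parallel loop; these tooling choices are assumptions for illustration, not the authors' implementation:

```python
from joblib import Parallel, delayed
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, WhiteKernel

def fit_cluster_models(clusters, n_jobs=-1):
    """Fit one Kriging (GP) model per cluster; each fit optimizes its own hyper-parameters."""
    def fit_one(Xc, yc):
        kernel = ConstantKernel() * RBF(length_scale=[1.0] * Xc.shape[1]) + WhiteKernel()
        return GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(Xc, yc)
    return Parallel(n_jobs=n_jobs)(delayed(fit_one)(Xc, yc) for Xc, yc in clusters)
```

For the tree partitioning of Section IV-A3, the values of the returned cluster dictionary would be passed in instead of a list of cluster tuples.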

IV-C Prediction

After training the various Kriging models, predictions have to be made for unseen data points. There are several options for this. The first is an optimally weighted combination of the Kriging models.

IV-C1 Optimal Weighting Procedure

When the input data set is separated by a hard clustering method, the Gaussian processes built on different clusters are independent of each other. In this sense, it is possible to construct a global Gaussian process model as the superposition of the Gaussian processes from all clusters. In addition, a weighting scheme is used to model how much "trust" should be put in the prediction from each cluster. The weighted superposition of all Gaussian processes is [30]:

$$\hat{y}(\mathbf{x}) = \sum_{i=1}^{k} w_i\, m_i(\mathbf{x}), \qquad \hat{s}^2(\mathbf{x}) = \sum_{i=1}^{k} w_i^2\, s_i^2(\mathbf{x}). \tag{11}$$

The overall prediction and its variance depend on the weights used in the equation above. Intuitively, the optimal prediction is achieved when the variance of the estimation is minimal. To obtain such an optimal predictor, the overall Kriging variance is minimized with respect to the weights, resulting in the following optimization task:

$$\min_{w_1, \ldots, w_k} \; \sum_{i=1}^{k} w_i^2\, s_i^2(\mathbf{x}) \quad \text{subject to} \quad \sum_{i=1}^{k} w_i = 1.$$

The optimal weights are obtained by solving the problem above (see the previous work of the authors [30, 31] for details):

$$w_i = \frac{s_i^{-2}(\mathbf{x})}{\sum_{j=1}^{k} s_j^{-2}(\mathbf{x})}, \qquad i = 1, \ldots, k. \tag{12}$$

The optimal weights are then used to construct the optimal predictor, which is the inner product of the model predictions with the optimal weights.
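In code, the optimal weighting of Eq. 12 amounts to inverse-variance weighting of the per-cluster predictions. A sketch, assuming models with a scikit-learn-style predict(..., return_std=True) interface:

```python
import numpy as np

def predict_optimal_weights(models, X_new):
    """Combine per-cluster predictions with the variance-minimizing weights of Eq. 12."""
    means, variances = [], []
    for gp in models:
        m, s = gp.predict(X_new, return_std=True)
        means.append(m)
        variances.append(s ** 2 + 1e-12)                   # guard against division by zero
    M, V = np.array(means), np.array(variances)            # shape (k, n_new)
    W = (1.0 / V) / (1.0 / V).sum(axis=0)                  # w_i proportional to 1 / s_i^2, summing to one
    return (W * M).sum(axis=0), (W ** 2 * V).sum(axis=0)   # Eq. 11 with the optimal weights
```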

IV-C2 Membership Probabilities

For the GMM and other soft clustering approaches, the membership probabilities of unseen records can be used to define the weights for the combination of predictions. For each unseen record, the probabilities that this record belongs to the clusters are calculated and directly used as the weights in the weighted sum of the predictions and variances given by the Kriging models:

$$w_i(\mathbf{x}) = \Pr(c = i \mid \mathbf{x}), \qquad i = 1, \ldots, k, \tag{13}$$

where $c$ is the cluster indicator variable ranging from $1$ to $k$. The rationale behind such a weighting scheme is shown by the following derivation. In general, the goal is to express the predictive distribution of the target $y(\mathbf{x})$, i.e. the conditional density function on the whole data set $(\mathbf{X}, \mathbf{y})$, using the posterior densities from all clusters. By applying the law of total probability with respect to the cluster indicator variable $c$, this density function can be written as [31]:

$$p\left(y(\mathbf{x}) \mid \mathbf{x}, \mathbf{X}, \mathbf{y}\right) = \sum_{i=1}^{k} p\left(y(\mathbf{x}) \mid \mathbf{x}, \mathbf{X}_i, \mathbf{y}_i\right) \Pr(c = i \mid \mathbf{x}). \tag{14}$$

The independence assumption between the Gaussian process models still holds approximately when the amount of overlap between clusters is small. Thus, the density function is approximately equal to Eq. 14. The first term inside the sum in Eq. 14 is the predictive density function obtained from each cluster. The second term is the probability of data point $\mathbf{x}$ belonging to a cluster, which is the weight in Eq. 13. Consequently, the overall predictive density function is a mixture of the predictive distributions of all the Gaussian process models on the clusters. To predict $y(\mathbf{x})$, the expectation of the conditional density function is calculated:

$$\hat{y}(\mathbf{x}) = \mathbb{E}\left[y(\mathbf{x}) \mid \mathbf{x}, \mathbf{X}, \mathbf{y}\right] = \sum_{i=1}^{k} \Pr(c = i \mid \mathbf{x})\, m_i(\mathbf{x}). \tag{15}$$

Note that $m_i(\mathbf{x})$ is the mean function in Eq. 10. Eq. 15 suggests that the overall prediction made on the whole data set can be expressed as a convex combination of the local predictions on each cluster, in which the combination weights are the membership probabilities of the GMM or a similar clustering approach. Furthermore, the variance of the prediction (expectation) above is derived as follows:

$$\hat{s}^2(\mathbf{x}) = \sum_{i=1}^{k} \Pr(c = i \mid \mathbf{x})\left(s_i^2(\mathbf{x}) + m_i^2(\mathbf{x})\right) - \hat{y}(\mathbf{x})^2. \tag{16}$$

Note that $s_i^2(\mathbf{x})$ is again the Kriging variance at point $\mathbf{x}$ from cluster $i$.
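A sketch of the membership-weighted prediction of Eq. 15 and 16, assuming the GMM of Section IV-A2 and a list of per-cluster models ordered consistently with the mixture components:

```python
import numpy as np

def predict_membership_weighted(gmm, models, X_new):
    """Mixture prediction: weights are the GMM membership probabilities of the query points."""
    W = gmm.predict_proba(X_new).T                      # (k, n_new), Pr(c = i | x)
    means, variances = [], []
    for gp in models:                                   # models[i] trained on cluster/component i
        m, s = gp.predict(X_new, return_std=True)
        means.append(m)
        variances.append(s ** 2)
    M, V = np.array(means), np.array(variances)
    mean = (W * M).sum(axis=0)                          # Eq. 15
    var = (W * (V + M ** 2)).sum(axis=0) - mean ** 2    # Eq. 16 (mixture variance)
    return mean, var
```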

IV-C3 Single Model Prediction

The last method for predicting unseen data points uses only one of the local Kriging models. First, the partitioning method is used to determine which cluster the new data point belongs to; then the Kriging model trained on this particular cluster is used to predict the mean and variance at the new data point. In case of the Regression Tree procedure, the targets of new unseen data points are predicted by first deciding, using the Regression Tree, which model needs to be used. The target is then predicted using the specific Kriging model assigned to that leaf node (Figure 1). The main advantage of this method is that no combination of different predictions is required and only one of the local Kriging models needs to provide a prediction, which results in a significant speed-up of the prediction task.
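A sketch of this single-model (MTCK-style) prediction, reusing the fitted tree and a dictionary mapping leaf indices to Kriging models; the names follow the earlier sketches and are hypothetical:

```python
import numpy as np

def predict_single_model(tree, leaf_models, X_new):
    """Route every query point to its leaf and use only that leaf's Kriging model."""
    leaf_ids = tree.apply(X_new)
    mean, std = np.empty(len(X_new)), np.empty(len(X_new))
    for leaf in np.unique(leaf_ids):
        mask = leaf_ids == leaf
        mean[mask], std[mask] = leaf_models[leaf].predict(X_new[mask], return_std=True)
    return mean, std ** 2
```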

V Flavors of Cluster Kriging

Using the three stages and the various components for each stage of the Cluster Kriging methodology, various algorithms can be implemented. In this paper we assess four different flavors of Cluster Kriging:

Optimally Weighted Cluster Kriging (OWCK), which uses a hard (K-means) clustering technique to partition the data into clusters. Subsequently, a Kriging model is trained on each cluster and to predict unseen data points, the predictions and variances of each model are combined using the Optimal Weights Procedure (Section IV-C1).

Optimally Weighted Fuzzy Cluster Kriging (OWFCK), which uses a soft clustering technique (Fuzzy C-Means) to partition the data into overlapping clusters and also uses the Optimal Weights Procedure combining the different predictions (Section IV-C1).

Gaussian Mixture Model Cluster Kriging (GMMCK), which uses Gaussian Mixture Models to partition the data into overlapping clusters; the trained Kriging models are weighted using the membership probabilities assigned to the unseen data by the Gaussian Mixture Model (Section IV-C2).

Model Tree Cluster Kriging (MTCK), the proposed novel algorithm, uses a regression tree with a fixed number of leaf nodes to partition the data in the objective space. A Kriging model is then trained on each partition defined by the tree's leaves. To predict an unseen record, MTCK uses only one of the trained Kriging models (Section IV-C3), depending on which leaf node the unseen record is assigned to.

First, a decision tree regressor is constructed using the complete data set. The tree is generated from the root node by recursively splitting the training data using the target variable and the variance reduction criterion. Once a node contains fewer samples than the minimum needed to split, or the node contains only one record, the splitting stops and the node becomes a leaf. To control the number of clusters, the user can set the maximum number of leaves or the minimum leaf size. Next, each leaf node is assigned a unique index and each record belonging to the leaf is assigned this index. For each leaf, a Kriging model is computed using only the records assigned to it. Each Kriging model is then able to predict a particular region defined by the Regression Tree.

For the prediction of the target for unseen records, the regression tree decides which Kriging model should be used. The final predicted mean and variance are provided by this Kriging model.

VI Experimental Setup and Results

A broad variety of experiments is executed to compare Optimally Weighted Cluster Kriging and its fuzzy and model tree variants to a wide set of other Kriging approximation algorithms. The algorithms included in the test are: Bayesian Committee Machines, both with shared parameters (BCM sh.) and with individual parameters (BCM), Subset of Data (SoD), Fully Independent Training Conditional (FITC), Optimally Weighted Cluster Kriging (OWCK) using K-means clustering, Fuzzy Cluster Kriging using Fuzzy C-means (OWFCK), Fuzzy Cluster Kriging with Gaussian Mixture Models (GMMCK) and, finally, Model Tree Cluster Kriging (MTCK).

The above algorithms are evaluated on three different data sets from the UCI machine learning repository [32]:

  • Concrete Strength [33], a data set with 1,030 records, 8 attributes and one target attribute. The task is to predict the compressive strength of concrete.

  • Combined Cycle Power Plant (CCPP) [34], a data set of 9,568 records, 4 attributes and one target attribute. The target is the hourly electrical energy output and the task is to predict this target.

  • SARCOS [35], a data set from gaussianprocess.org with a training set of 44,484 records, 21 attributes and 7 target attributes. The task is to predict the joint torques of an anthropomorphic robot arm. All attributes are used as training data but only the first target attribute is used as target. The data set comes with a predefined test set of 4,449 records.

In addition, synthetic data sets, each with the same number of records and attributes and one target attribute, are used. The synthetic data sets are generated using benchmark functions from the DEAP Python package [36] that are commonly used in optimization. The functions are Ackley, Schaffer, Schwefel, Rastrigin, H1, Rosenbrock, Himmelblau and Diffpow.

VI-A Hyper-Parameters

Each of the Kriging approximation algorithms has a hyper-parameter that can be tuned by the user to set the number of data points, clusters or inducing points, essentially defining the trade-off between complexity and accuracy. For each algorithm, a wide range of these hyper-parameter values is used to observe the effect and to make a fair comparison between the different algorithms. The overlap for each of the fuzzy algorithms is set to a small fixed percentage, since from empirical experience this works well; higher overlap percentages usually increase accuracy, but the increase is not significant and costs additional training time as well. For the model tree variant, the number of leaves is enforced by setting a minimum number of data points per leaf and an optional maximum number of leaves. For the Concrete Strength data set and all synthetic data sets, FITC is evaluated over an increasing range of inducing-point counts; SoD uses the same range, where the values denote the number of retained data points; BCM, both the shared and non-shared versions, and all Cluster Kriging variants are evaluated over an increasing range of cluster counts. For the Combined Cycle Power Plant data set, different ranges of inducing points, retained data points and clusters are used. Finally, for the SARCOS data set, the range of FITC's inducing points stays the same as for the CCPP data set, while different ranges are used for the number of SoD data points and for the number of clusters of all cluster-based algorithms and the model tree variant.

VI-B Quality Measurements

The quality of the experiments is estimated with the help of cross-validation, except for the SARCOS data set, which uses its predefined test set. The experiments are performed in a test framework similar to the framework proposed by Chalupka et al. [11], i.e. several quality measurements are used to evaluate the performance of each algorithm. The coefficient of determination ($R^2$), the Mean Standardized Log Loss (MSLL) (see [1]) and the Standardized Mean Squared Error (SMSE) are measured for each test run. The Mean Standardized Log Loss is a measurement that takes both the predicted mean and the predicted variance into account, penalizing wrong predictions that have a small predicted variance more heavily than wrong predictions with a large variance:

$$\mathrm{MSLL} = \frac{1}{n_{\text{test}}} \sum_{i=1}^{n_{\text{test}}} \left[ \frac{1}{2}\log\!\left(2\pi\, \hat{s}^2(\mathbf{x}_i)\right) + \frac{\left(y_i - \hat{y}(\mathbf{x}_i)\right)^2}{2\, \hat{s}^2(\mathbf{x}_i)} - \ell_0(\mathbf{x}_i, y_i) \right],$$

where $\hat{s}^2(\mathbf{x}_i)$ is the predicted variance for record $i$ and $\hat{y}(\mathbf{x}_i)$ the predicted mean, and $\ell_0$ is the trivial score simulating a predictor that always predicts the overall mean $\mu_0$ and standard deviation $\sigma_0$ of the training targets:

$$\ell_0(\mathbf{x}_i, y_i) = \frac{1}{2}\log\!\left(2\pi\, \sigma_0^2\right) + \frac{\left(y_i - \mu_0\right)^2}{2\, \sigma_0^2}.$$

For MSLL and SMSE, lower scores are better; for $R^2$, $1$ is the best possible score, meaning a perfect fit, and everything lower is worse.
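For reference, the three measurements can be computed as follows; this is a sketch assuming Gaussian predictive distributions, and with this standardization SMSE $= 1 - R^2$, which is consistent with Tables I and III:

```python
import numpy as np

def r2_smse_msll(y_test, mean_pred, var_pred, y_train):
    """R^2, SMSE and MSLL in the style of Rasmussen & Williams [1]."""
    mse = np.mean((y_test - mean_pred) ** 2)
    smse = mse / np.var(y_test)                         # MSE standardized by the test-target variance
    r2 = 1.0 - smse
    # negative log predictive density of the model ...
    nlpd = 0.5 * np.log(2 * np.pi * var_pred) + (y_test - mean_pred) ** 2 / (2 * var_pred)
    # ... minus that of a trivial predictor using the training mean and variance
    mu0, var0 = np.mean(y_train), np.var(y_train)
    nlpd0 = 0.5 * np.log(2 * np.pi * var0) + (y_test - mu0) ** 2 / (2 * var0)
    msll = np.mean(nlpd - nlpd0)
    return r2, smse, msll
```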

VI-C Results

Dataset SOD OWCK GMMCK OWFCK FITC BCM BCM sh. MTCK
concrete 0.784 0.826 0.839 0.696 0.675 -81.888 -242.459 0.851
CCPP 0.948 0.937 0.968 0.916 0.890 0.220 -24.602 0.968
sarcos 0.964 0.894 0.996 0.570 0.941 -627.280 0.448 0.999
ackley 0.952 0.957 0.951 0.954 0.260 0.921 -0.039 0.981
schaffer 0.321 0.388 0.369 0.406 0.208 0.452 -0.050 0.672
schwefel 0.990 0.973 0.977 0.947 0.006 0.969 -0.043 0.999
rast 0.973 0.947 0.948 0.932 0.322 0.914 -0.043 0.998
h1 0.676 -0.082 0.527 -1.125 0.165 0.657 -0.046 0.977
rosenbrock 0.999 0.997 0.997 0.981 0.000 0.994 -0.050 1.000
himmelblau 0.997 0.995 0.995 0.981 0.291 0.994 -0.044 1.000
diffpow 0.995 0.991 0.991 0.975 0.001 -0.001 -0.001 1.000
TABLE I: Average $R^2$ score per dataset for each algorithm
Dataset SOD OWCK GMMCK OWFCK FITC BCM BCM sh. MTCK
concrete -0.837 -0.946 -1.100 -0.692 -0.629 18.590 68.013 -1.140
CCPP -0.089 -1.438 -1.525 -1.109 -1.165 7.826 69.346 -1.193
sarcos -1.926 -1.371 -3.147 -0.302 -1.463 780.090 507.721 -3.429
ackley -1.622 -1.516 -1.517 -1.462 -0.104 7.352 13.010 -2.012
schaffer 0.477 -0.073 0.081 -0.091 -0.107 16.872 11.707 -0.514
schwefel -2.554 -2.013 -2.162 -1.944 -0.002 -0.144 12.034 -3.278
rast -2.179 -1.686 -1.807 -1.642 -0.193 4.554 11.590 -2.901
h1 -0.766 -0.276 -0.540 -0.060 -0.059 9.018 17.393 -1.967
rosenbrock -3.479 -2.915 -3.074 -2.738 high* 0.612 18.575 -4.054
himmelblau -3.204 -2.646 -2.790 -2.553 -0.193 -1.422 12.826 -3.739
diffpow -3.020 -2.548 -2.666 -2.438 high* high* high* -3.744
TABLE II: Average MSLL score per dataset for each algorithm
Dataset SOD OWCK GMMCK OWFCK FITC BCM BCM sh. MTCK
concrete 0.216 0.174 0.161 0.304 0.325 82.888 243.459 0.149
CCPP 0.052 0.063 0.032 0.084 0.110 0.780 25.602 0.032
sarcos 0.036 0.106 0.004 0.430 0.059 628.280 0.552 0.001
ackley 0.048 0.043 0.049 0.046 0.740 0.079 1.039 0.019
schaffer 0.679 0.612 0.631 0.594 0.792 0.548 1.050 0.328
schwefel 0.010 0.027 0.023 0.053 0.994 0.031 1.043 0.001
rast 0.027 0.053 0.052 0.068 0.678 0.086 1.043 0.002
h1 0.324 1.082 0.473 2.125 0.835 0.343 1.046 0.023
rosenbrock 0.001 0.003 0.003 0.019 1.000 0.006 1.050 0.000
himmelblau 0.003 0.005 0.005 0.019 0.709 0.006 1.044 0.000
diffpow 0.005 0.009 0.009 0.025 0.999 1.001 1.001 0.000
TABLE III: Average SMSE score per dataset for each algorithm

The results of the experiments on the real-world data sets Concrete Strength, CCPP and SARCOS are shown in Figure 2. The results are shown with both objectives, training time and accuracy (x and y axis, respectively), in mind, to show the trade-off and to show that some algorithms perform better in both objectives. The $R^2$ scores per data set and algorithm, averaged over all folds, are shown in Table I. The MSLL scores are provided in Table II and the SMSE scores in Table III. The best results for each data set are shown in bold face.

Fig. 2: Quality measurements of each algorithm, with the hyper-parameters increasing in sample size for FITC and SoD and decreasing in cluster size for the cluster-based algorithms, as explained in Section VI-A. Panels: (a) Concrete, (b) CCPP, (c) Sarcos, (d) H1 (the generated data set for the H1 function). The training time is given on the x axis and the $R^2$ score on the y axis. The dashed green line indicates the non-dominated set.

VI-D Parameter Setting Recommendations

To use the Cluster Kriging algorithms, the minimum cluster size or the number of clusters has to be set as a user-defined parameter. It is recommended to set this parameter such that each individual cluster contains a number of records that is still computationally tractable for Kriging in terms of execution time, yet large enough to fit the Kriging model reliably. Selecting smaller cluster sizes is likely to result in poorly fitted models, while selecting much larger cluster sizes will in most cases not increase accuracy but only increase execution time. These recommendations are purely based on empirical observations and depend highly on the data set one is working with. For MTCK, smaller cluster sizes are usually still fine because of the low variance of the records per leaf, due to the splitting criterion of the Regression Tree.

VII Conclusions and Further Research

A novel Kriging approximation methodology, Cluster Kriging, is proposed, using a combination of smaller Kriging models trained on partitions of the data set. Four different algorithms using this methodology are proposed and explained in detail, and a broad comparison between the novel algorithms and other state-of-the-art Kriging approximation algorithms is carried out. The results of the experiments (as given in Section VI) clearly show that, for each data set, Gaussian Mixture Model Cluster Kriging (GMMCK) and Model Tree Cluster Kriging (MTCK) outperform the other algorithms in all measurements. It can also be observed that the Bayesian Committee Machine algorithms, both with shared parameters and with individual parameters, become very unstable when the number of clusters grows large. This is most likely due to poor recombination of models with different hyper-parameters and the increased chance of a poorly fitted cluster. In terms of training time, Subset of Data is much faster than any of the other algorithms, though it pays for this complexity reduction with a decrease in accuracy. For both SoD and FITC, the training time increases faster than that of the cluster-based algorithms. It is shown that the membership probabilities of the Gaussian Mixture Model can be used as weights in the combination of the various Kriging models' predictions. It is also shown that a Model Tree of Kriging models works very well on high-dimensional problems and requires less prediction time, due to the fact that only one Kriging model per unseen data point is used for prediction.

For future research it would be interesting to automatically determine the cluster sizes for the different algorithms and to optimize the nugget parameter of the Kriging models. The nugget plays an important role in the fitting of the Kriging model, as it determines the amount of regularization.

Acknowledgment

The authors acknowledge support by NWO (Netherlands Organisation for Scientific Research) PROMIMOOC project (project number: 650.002.001).

References

  • [1] C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning, ser. Adaptive computation and machine learning series.   University Press Group Limited, 2006.
  • [2] T. W. Simpson, T. M. Mauery, J. J. Korte, and F. Mistree, “Kriging models for global approximation in simulation-based multidisciplinary design optimization,” AIAA journal, vol. 39, no. 12, pp. 2233–2241, 2001.
  • [3] L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, Oct. 2001.
  • [4] A. D’Ambrosio, M. Aria, and R. Siciliano, “Accurate tree-based missing data imputation and data fusion within the statistical learning paradigm,” Journal of Classification, vol. 29, no. 2, pp. 227–258, 2012.
  • [5] L. Torgo, “Functional models for regression tree leaves,” in ICML, vol. 97.   Citeseer, 1997, pp. 385–393.
  • [6] G. Huang, D. H. Wang, and Y. Lan, “Extreme learning machines: a survey,” International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, May 2011.
  • [7] V. Vapnik, The Nature of Statistical Learning Theory.   Springer Science & Business Media, 2013.
  • [8] M. D. Buhmann, “Radial basis functions: theory and implementations,” Cambridge monographs on applied and computational mathematics, vol. 12, pp. 147–165, 2004.
  • [9] V. Tresp, “A Bayesian committee machine.” Neural computation, vol. 12, no. 11, pp. 2719–2741, 2000.
  • [10] L. Hartman and O. Hössjer, “Fast Kriging of large data sets with Gaussian Markov random fields,” Computational Statistics & Data Analysis, vol. 52, no. 5, pp. 2331–2349, 2008.
  • [11] K. Chalupka, C. K. I. Williams, and I. Murray, “A framework for evaluating approximation methods for gaussian process regression,” J. Mach. Learn. Res., vol. 14, no. 1, pp. 333–350, Feb. 2013. [Online]. Available: http://dl.acm.org/citation.cfm?id=2502581.2502592
  • [12] M. L. Stein, Interpolation of spatial data: some theory for Kriging.   Springer Science & Business Media, 1999.
  • [13] J. P. Kleijnen, “Kriging metamodeling in simulation: a review,” European Journal of Operational Research, vol. 192, no. 3, pp. 707–716, 2009.
  • [14] D. R. Jones, M. Schonlau, and W. J. Welch, “Efficient global optimization of expensive black-box functions,” Journal of Global optimization, vol. 13, no. 4, pp. 455–492, 1998.
  • [15] J. Sacks, W. J. Welch, T. J. Mitchell, and H. P. Wynn, “Design and analysis of computer experiments,” Statistical science, pp. 409–423, 1989.
  • [16] D. Ginsbourger, R. Le Riche, and L. Carraro, “Kriging is well-suited to parallelize optimization,” in Computational Intelligence in Expensive Optimization Problems.   Springer, 2010, pp. 131–162.
  • [17] N. D. Lawrence, “Gaussian process latent variable models for visualisation of high dimensional data,” Advances in neural information processing systems, vol. 16, no. 3, pp. 329–336, 2004.
  • [18] B. W. Silverman, “Some aspects of the spline smoothing approach to non-parametric regression curve fitting,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 47, no. 1, pp. 1–52, 1985.
  • [19] J. Quiñonero-Candela and C. E. Rasmussen, “A unifying view of sparse approximate gaussian process regression,” The Journal of Machine Learning Research, vol. 6, no. 1, pp. 1939–1959, 2005.
  • [20] A. Naish-Guzman and S. Holden, “The generalized FITC approximation,” in Advances in Neural Information Processing Systems, 2007, pp. 1057–1064.
  • [21] E. Snelson and Z. Ghahramani, “Sparse gaussian processes using pseudo-inputs,” in Advances in neural information processing systems, 2005, pp. 1257–1264.
  • [22] T. Chen and J. Ren, “Bagging for gaussian process regression,” Neurocomput., vol. 72, no. 7-9, pp. 1605–1610, Mar. 2009.
  • [23] D. Nguyen-Tuong, M. Seeger, and J. Peters, “Model learning with local Gaussian process regression,” Advanced Robotics, vol. 23, no. 15, pp. 2015–2034, 2009.
  • [24] L. Breiman, “Bagging predictors,” Machine Learning, vol. 24, no. 2, pp. 123–140, 1996.
  • [25] J. C. Dunn, “A fuzzy relative of the isodata process and its use in detecting compact well-separated clusters,” Journal of Cybernetics, vol. 3, no. 3, pp. 32–57, 1973.
  • [26] D. Reynolds, “Gaussian mixture models,” in Encyclopedia of Biometrics.   Springer, 2009, pp. 659–663.
  • [27] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen, Classification and regression trees.   CRC press, 1984.
  • [28] Y. Wang and I. H. Witten, Induction of model trees for predicting continuous classes.   Department of Computer Science, University of Waikato, 1996.
  • [29] N. Landwehr, M. Hall, and E. Frank, “Logistic model trees,” Machine Learning, vol. 59, no. 1-2, pp. 161–205, 2005.
  • [30] B. van Stein, H. Wang, W. Kowalczyk, T. Bäck, and M. Emmerich, “Optimally weighted cluster kriging for big data regression,” in Advances in Intelligent Data Analysis XIV.   Springer, 2015, pp. 310–321.
  • [31] B. van Stein, H. Wang, W. Kowalczyk, M. Emmerich, and T. Bäck, “Fuzzy clustering for optimally weighted cluster kriging,” in Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on.   IEEE, 2016, pp. 939–945.
  • [32] K. Bache and M. Lichman, “UCI machine learning repository,” http://archive.ics.uci.edu/ml, 2013.
  • [33] I.-C. Yeh, “Modeling of strength of high-performance concrete using artificial neural networks,” Cement and Concrete research, vol. 28, no. 12, pp. 1797–1808, 1998.
  • [34] H. Kaya, P. Tüfekci, and S. F. Gürgen, “Local and Global Learning Methods for Predicting Power of a Combined Gas & Steam Turbine,” International Conference on Emerging Trends in Computer and Electronics Engineering (ICETCEE 2012), pp. 13–18, 2012.
  • [35] S. Vijayakumar, A. D’souza, and S. Schaal, “Incremental online learning in high dimensions,” Neural computation, vol. 17, no. 12, pp. 2602–2634, 2005.
  • [36] F.-A. Fortin, F.-M. De Rainville, M.-A. Gardner, M. Parizeau, and C. Gagné, “DEAP: Evolutionary algorithms made easy,” Journal of Machine Learning Research, vol. 13, pp. 2171–2175, Jul. 2012.