For both practical and theoretical reasons, it is often necessary to combine the predictions of multiple learned models. Mixture of experts (MoE), product of experts (PoE) Hinton (2002), and ensemble methods are perhaps the most obvious frameworks for such prediction fusion. However, there are four desirable properties that no existing fusion scheme achieves at the same time: (i) predictions are combined without the need for joint training or training meta-models; (ii) the way predictions are combined depends on the input rather than being fixed; (iii) the combined prediction is a valid probabilistic model; (iv) unreliable predictions are automatically filtered out from the combined model. Property (i) allows individual experts to be trained independently, making the overall model easily scalable via parallelization; property (ii) gives the combined model more expressive power; property (iii) allows uncertainty to be used in subsequent modelling or decision making; and finally, property (iv) ensures that the combined prediction is robust to poor predictions by some of the experts. In this work, we propose a novel scheme called generalized product of experts (gPoE) that achieves all four properties when the individual experts are Gaussian processes, and consequently excels in terms of scalability, robustness and expressiveness of the resulting model.
In comparison, a mixture of experts with fixed mixing probabilities satisfies neither (ii) nor (iv), and because experts and mixing probabilities generally need to be learned together, (i) is not satisfied either. If an input-dependent gating function is used, then the MoE can achieve properties (ii) and (iv), but joint training is still needed, and the ability (iv) to filter out poor predictions crucially depends on that joint training.
Depending on the nature of the expert model, a PoE may or may not need joint training or re-training, but it does not satisfy property (iv): without a gating function to "shut down" bad experts, the combined prediction is easily misled by a single expert that puts low probability on the true label.
Among ensemble methods, bagging Breiman (1996) does not satisfy (ii) and (iv), as it uses fixed equal weights to combine models and does not automatically filter poor predictions, although empirically it is usually robust thanks to equal-weight voting. Boosting and stacking Wolpert (1992) require sequential joint training and training a meta-predictor, respectively, so they do not satisfy (i). Furthermore, boosting does not satisfy (ii) and (iv), while stacking has only a limited, training-dependent ability to achieve (iv).
As we will demonstrate, the proposed gPoE of Gaussian processes not only enjoys the four desired properties of prediction fusion, but also retains some important attributes of PoE: many weak, uncertain predictions can together yield a very sharp combined prediction, and the combination has a closed analytical form as another Gaussian distribution.
2 Generalized Product of Experts
We start by briefly describing the product of experts model, of which our proposed method is a generalization. A PoE models a target probability distribution as the product of multiple densities, each of which is given by one expert. The product is then renormalized to sum to one. In the context of supervised learning, the distributions are conditional: $p(y|x) \propto \prod_i p_i(y|x)$.
In contrast to mixture models, experts in a PoE hold "veto" power, in the sense that a value has low probability under the PoE if any single expert assigns low probability to it. As Hinton pointed out Hinton (2002), training such a model for general experts by maximizing likelihood is hard because of the renormalization term. However, in the special case of Gaussian experts $p_i(y|x) = \mathcal{N}(y \,|\, \mu_i(x), \sigma_i^2(x))$, the product distribution is still Gaussian, with mean and precision
$$m(x) = \sigma^2(x) \sum_i T_i(x)\, \mu_i(x), \qquad \sigma^{-2}(x) = \sum_i T_i(x),$$
where $T_i(x) = \sigma_i^{-2}(x)$ is the precision of the $i$-th Gaussian expert at point $x$. Qualitatively, confident predictions have more influence over the combined prediction than less confident ones. If the predicted variance were always the correct confidence measure, then PoE would have exactly the behavior needed. However, a slight model misspecification can cause an expert to produce an erroneously low predicted variance along with a biased mean prediction. Because of the combination rule, such over-confidence by a single expert about its erroneous prediction is enough to be detrimental to the resulting combined model.
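As an illustrative sketch (not part of the experiments; the means and variances below are made up), the precision-weighted PoE combination and its sensitivity to a single over-confident expert can be written in a few lines of Python:

```python
# PoE fusion of 1D Gaussian predictions:
# combined precision T = sum_i T_i, mean m = (1/T) * sum_i T_i * mu_i.
def poe_fuse(mus, variances):
    precisions = [1.0 / v for v in variances]  # T_i = 1 / sigma_i^2
    T = sum(precisions)
    m = sum(t * mu for t, mu in zip(precisions, mus)) / T
    return m, 1.0 / T  # combined mean and variance

# Three experts roughly agree around y = 1.0.
m, v = poe_fuse([1.0, 1.1, 0.9], [0.5, 0.5, 0.5])  # m = 1.0, v = 1/6

# One over-confident, biased expert (tiny variance, wrong mean) drags
# the combined mean far toward its erroneous prediction.
m_bad, _ = poe_fuse([1.0, 1.1, 0.9, 5.0], [0.5, 0.5, 0.5, 0.01])  # m_bad = 4.77...
```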
Given that PoE has almost the desired behavior, except that an expert's predictive precision is not necessarily the right measure of prediction reliability for weighting, we introduce another measure of reliability that down-weights or ignores bad predictions. Like PoE, the proposed generalized product of experts is a probability model defined by a product of distributions. Here we again focus on the conditional distribution for supervised learning, which takes the form
$$p(y|x) \propto \prod_i p_i(y|x)^{\alpha_i(x)}, \quad (4)$$
where $\alpha_i(x)$ is a measure of the $i$-th expert's reliability at point $x$. We will introduce one particular choice for Gaussian processes in the next subsection; for now, let us first analyze the effect of $\alpha_i(x)$. Raising a density to a power as done in equation (4) has been widely used as a way of annealing distributions in MCMC, or of balancing parts of a probabilistic model that have different degrees of freedom (Urtasun (2006), see 6.1.2 Balanced GPDM). If $\alpha_i(x) = 1$ for all $i$, we recover the PoE as a special case. $\alpha_i(x) > 1$ sharpens the $i$-th distribution in the product (4), whereas $\alpha_i(x) < 1$ broadens it. In the limit $\alpha_i(x) \to \infty$ with the other exponents fixed, the largest mode of the $i$-th distribution dominates the product distribution with arbitrarily large "veto" power; on the other hand, $\alpha_i(x) \to 0$ gives the $i$-th expert arbitrarily small weight in the combined model, effectively ignoring its prediction.
Another interesting property of gPoE is that if each $p_i(y|x)$ is Gaussian, then the resulting $p(y|x)$ is still Gaussian, as in PoE. To see this, it suffices to show that $p_i(y|x)^{\alpha_i(x)}$ is Gaussian, which is apparent with a little algebraic manipulation: $\mathcal{N}(y \,|\, \mu_i(x), \sigma_i^2(x))^{\alpha_i(x)} \propto \mathcal{N}(y \,|\, \mu_i(x), \sigma_i^2(x)/\alpha_i(x))$. The combined gPoE prediction is therefore Gaussian with mean and precision
$$m(x) = \sigma^2(x) \sum_i \alpha_i(x) T_i(x)\, \mu_i(x), \quad (6)$$
$$\sigma^{-2}(x) = \sum_i \alpha_i(x) T_i(x). \quad (7)$$
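A minimal Python sketch of this weighted-precision combination (the numbers are illustrative only, not from the experiments): an expert given weight zero is removed from the product entirely.

```python
# gPoE fusion of 1D Gaussian predictions:
# T = sum_i a_i * T_i, m = (1/T) * sum_i a_i * T_i * mu_i,
# where a_i is the reliability weight of expert i.
def gpoe_fuse(mus, variances, alphas):
    weighted = [a / v for a, v in zip(alphas, variances)]  # a_i * T_i
    T = sum(weighted)
    m = sum(w * mu for w, mu in zip(weighted, mus)) / T
    return m, 1.0 / T  # combined mean and variance

# With alpha = 0, the over-confident fourth expert is ignored entirely,
# and the three sensible experts determine the prediction.
m, v = gpoe_fuse([1.0, 1.1, 0.9, 5.0], [0.5, 0.5, 0.5, 0.01],
                 [1.0, 1.0, 1.0, 0.0])  # m = 1.0
```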
2.3 gPoE for Gaussian processes
Now that we have established how $\alpha_i(x)$ can be used to control the influence of individual experts, there is a natural choice of $\alpha_i(x)$ for Gaussian processes that can reliably detect whether a particular GP expert generalizes well at a given point $x$: the change in entropy from prior to posterior at that point, $\alpha_i(x) = \frac{1}{2}\left(\log \sigma^2_{prior,i}(x) - \log \sigma_i^2(x)\right)$. This takes almost no extra computation, since the posterior variance $\sigma_i^2(x)$ is already computed when the GP expert makes its prediction, and the prior variance is simply $\sigma^2_{prior,i}(x) = k_i(x,x)$, where $k_i$ is the kernel used.
When the entropy change at point $x$ is zero, the $i$-th expert has no information about this point coming from the training observations; it therefore should not contribute to the combined prediction, which is achieved by our model because $\alpha_i(x) = 0$ in (6) and (7). For Gaussian processes, this covers both the case where point $x$ is far away from the training points and the case where the model is misspecified. (For the experiments, we also normalize the weighting factors $\alpha_i(x)$ at each point so that $\sum_i \alpha_i(x) = 1$.)
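A small Python sketch of this weighting (the variances are hypothetical, and `entropy_change_weights` is our illustrative name, not from any library):

```python
from math import log

# alpha_i(x) = 0.5 * (log prior_var_i - log posterior_var_i),
# normalized across experts to sum to one.
def entropy_change_weights(prior_vars, post_vars):
    raw = [0.5 * (log(p) - log(q)) for p, q in zip(prior_vars, post_vars)]
    total = sum(raw)
    return [a / total for a in raw]

prior = [1.0, 1.0, 1.0]        # k(x, x) for each expert
post = [0.25, 0.999999, 0.5]   # posterior variances at x
w = entropy_change_weights(prior, post)
# The expert whose posterior barely moved from the prior (the second one)
# gets nearly zero weight; the most informed expert gets the most.
```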
Other quantities could be used as $\alpha_i(x)$, for example the difference between the prior and posterior variances (instead of half the difference of their logs). We choose the entropy change because it is unit-less in the sense of dimensional analysis in physics, so that the resulting predictive variance in equation (7) carries the correct unit. The same is not true if $\alpha_i(x)$ is the difference of variances, which carries the unit of variance (the squared unit of the variable $y$). The KL divergence between the prior and posterior distributions at point $x$ could also potentially serve as $\alpha_i(x)$, but we find the entropy change to be already effective in our experiments; we will explore the use of KL divergence in future work.
We compare gPoE against bagging, MoE, and PoE on three different datasets: KIN40K (8D feature space, 10K training points), SARCOS (21D, 44484 training points), and the UK apartment price dataset (2D, 64910 training points) used in the SVI-GP work of Hensman et al. (2013). We try three different ways to build individual GP experts: (SoD) a random subset of the data; (local) a local GP around a randomly selected point; (tree) a tree-based construction, in which a ball tree Omohundro (1989) built on the training set recursively partitions the space, and at each level of the tree a random subset of data is drawn to build a GP. On all datasets and for all methods of GP expert construction, we use data points for each expert, and construct GP experts in total. Each GP expert uses a kernel that is the sum of an ARD kernel and a white noise kernel, and all hyperparameters are learned by scaled conjugate gradient.
For MoE, we do not jointly learn experts and gating functions, as this is very time consuming; instead, we use the same entropy change as the gating function (re-normalized to sum to one). Therefore, all experts in all combination schemes can be learned independently in parallel. On a 32-core machine, with the described setup, training the GP experts with independent hyperparameter learning via scaled conjugate gradient on each dataset takes between 20 seconds and just under one minute, including the time for any preprocessing such as fitting the ball tree.
The results show that gPoE consistently outperforms the bagging, MoE and PoE combination rules by a large margin on both scores. Under the tree-based expert construction method, we explore a heuristic variant of gPoE (tree-gPoE), which uses only the experts on the path from the root node to the leaf node containing the test point in the tree. Alternatively, this can be viewed as setting $\alpha_i(x) = 0$ for all experts that are not on the root-to-leaf path. This variant gives a slight further boost to performance across the board.
Another interesting observation is that while gPoE performs consistently well, PoE is almost always poor, especially in SNLP score. This empirically confirms our earlier analysis that misguided over-confidence by experts is detrimental to the resulting PoE, and shows that the correction by entropy change in gPoE is an effective way to fix this problem.
Finally, we would like to note, as a testimony to the expressive power of gPoE, that GP experts trained on only points with very generic kernels can combine to give prediction performance close to, or even superior to, sophisticated sparse Gaussian process approximations such as stochastic variational inference (SVI-GP), as evidenced by the comparison in table 3 for the UK-APT dataset. Note also that, due to parallelization, training in our case took less than 30 seconds on this problem of 64910 training points, although test time for gPoE is much longer than for sparse GP approximations. Results competitive with, or even superior to, the sophisticated FITC Snelson and Ghahramani (2006) and CholQR Cao et al. (2013) approximations are observed on SARCOS and KIN40K as well, but due to space constraints they are deferred to future work rather than included in this extended abstract. We would like to emphasize that this comparison does not suggest gPoE is a silver bullet for beating benchmarks with naive expert GP models; rather, it demonstrates the expressiveness of the resulting model and shows its potential to be used in conjunction with other sophisticated techniques for sparsification and automatic model selection.
[Table 3: RMSE on UK-APT: 0.566, 0.522 +/- 0.018, 0.510 +/- 0.015, 0.503 +/- 0.011, 0.502 +/- 1.012, 0.426, 0.556, 0.419, 0.484, 0.456; column headers not recovered]
4 Discussion and Conclusion
In this work, we proposed a principled way to combine predictions of multiple independently learned GP experts without the need for further training. The combined model takes the form of a generalized product of experts, and the combined prediction is Gaussian and has desirable properties such as increased expressiveness and robustness to poor predictions of some experts.
We showed that the gPoE has many interesting qualities over other combination rules. However, one thing it cannot capture is multi-modality, as in a mixture of experts. In future work it would be interesting to explore a generalized product of mixtures of Gaussian processes, which captures "or" constraints as well as "and" constraints. Another direction for future work is to explore other measures of model reliability for GPs. Finally, while a lack of change in entropy indicates an irrelevant prediction, the converse does not seem to hold: a sufficient change in entropy does not necessarily guarantee a reliable prediction, because of all the potential ways the model could be mis-specified. However, our empirical results suggest that, at least with RBF kernels, the change in entropy is a reliable weighting even when the estimated posterior variance itself is not accurate. Further theoretical work is needed to better understand the converse case.
We thank J. Hensman for providing his dataset for benchmarking, as well as the anonymous reviewer for insightful comments and questions about the reliability issue in the converse case.
- (2) G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, 2002.
- (3) L. Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.
- (4) D. H. Wolpert. Stacked generalization. Neural Networks, 5:241–259, 1992.
- (5) R. Urtasun. Motion Models for Robust 3D Human Body Tracking. PhD Thesis 3541, EPFL, 2006.
- (6) S. M. Omohundro. Five balltree construction algorithms. International Computer Science Institute Technical Report, 1989.
- (7) E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. NIPS 18, pp. 1257–1264, 2006.
- (8) Y. Cao, M. Brubaker, D. J. Fleet, A. Hertzmann. Efficient optimization for sparse Gaussian process regression. NIPS, 2013.
- (9) J. Hensman, N. Fusi, N. D. Lawrence. Gaussian processes for big data. UAI, 2013.