1 Introduction
Algorithmic scalability is an important component of modern machine learning. Making high-quality inference on large, feature-rich datasets under a constrained computational budget is arguably the primary goal of the learning community. This goal, however, comes with significant challenges. On the one hand, the exact computation of linear algebraic quantities, such as the log determinant, may be prohibitively expensive. On the other hand, an analytic expression for the quantity of interest may not exist at all, as is the case for the entropy of a Gaussian mixture model, and approximate methods are often both inefficient and inaccurate. These challenges highlight the need for efficient approximations, especially in solving large-scale machine learning problems.
In this paper, to address this challenge, we propose a novel, robust maximum entropy algorithm that is stable for a large number of moments, surpassing the limits of previous maximum entropy algorithms (Granziol and Roberts, 2017; Bandyopadhyay et al., 2005; Mead and Papanicolaou, 1984). We show that the ability to handle more moment information, which can be calculated cheaply either analytically or with the use of stochastic trace estimation, leads to significantly enhanced performance. We showcase the effectiveness of the proposed algorithm by applying it to log determinant estimation (Han et al., 2015; Dong et al., 2017; Zhang and Leithead, 2007) and to entropy term approximation in information-theoretic Bayesian optimisation (Hernández-Lobato et al., 2014; Wang and Jegelka, 2017; Ru et al., 2018). Specifically, we reformulate log determinant estimation as an eigenvalue spectral estimation problem, so that we can estimate the log determinant of a symmetric positive definite matrix by computing the maximum entropy spectral density of its eigenvalues. Similarly, we learn the maximum entropy spectral density for a Gaussian mixture and then approximate the entropy of the mixture via the entropy of the maximum entropy spectral density, which provides an analytic upper bound. Furthermore, in developing our algorithm, we establish the equivalence between maximum entropy methods and constrained Bayesian variational inference (Fox and Roberts, 2012). The main contributions of the paper are as follows:

We propose a maximum entropy algorithm which is stable and consistent for hundreds of moments, surpassing other off-the-shelf algorithms, which are limited to a small number of moments. Based on this robust algorithm, we develop a new Maximum Entropy Method (MEMe) which improves upon the scalability of existing machine learning algorithms by efficiently approximating computational bottlenecks using maximum entropy and fast moment estimation techniques;

We establish the link between maximum entropy methods and variational inference under moment constraints, hence connecting the former to well-known Bayesian approximation techniques;

We apply MEMe to the problem of estimating the log determinant, crucial to inference in determinantal point processes (Kulesza, 2012), and to that of estimating the entropy of a Gaussian mixture, important to state-of-the-art information-theoretic Bayesian optimisation algorithms.
2 Theoretical Framework
The method of maximum entropy, hereafter referred to as MaxEnt (Pressé et al., 2013), is a procedure for generating the most conservative estimate of a probability distribution consistent with the given information and the most non-committal one with regard to missing information (Jaynes, 1957). Intuitively, in a bounded domain, the most conservative distribution, i.e., the distribution of maximum entropy, is the one that assigns equal probability to all the accessible states. Hence, the method of maximum entropy can be thought of as choosing the flattest, or most equiprobable, distribution satisfying the given moment constraints.
To determine the maximally entropic density p(x), we maximise the entropic functional

S[p] = −∫ p(x) log p(x) dx − Σ_{i=0}^{m} α_i ( ∫ p(x) x^i dx − μ_i ),   (1)

with respect to p(x), where the terms ∫ p(x) x^i dx = μ_i, with μ_i the given moments of some underlying density, are the power moment constraints on the density p(x), the α_i are the corresponding Lagrange multipliers, and m is the number of moments. The first term in Equation (1) is referred to as the Boltzmann–Shannon–Gibbs (BSG) entropy, which has been applied in multiple fields, ranging from condensed matter physics (Giffin et al., 2016) to finance (Neri and Schneider, 2012). For m ≤ 2, the Lagrange multipliers can be calculated analytically; for m ≥ 3, they must be determined numerically.
In this section, we first establish links between the method of MaxEnt and Bayesian variational inference. We then describe fast moment estimation techniques. Finally, we present the proposed MaxEnt algorithm.
2.1 Maximum Entropy as Constrained Bayesian Variational Inference
The work of Bretthorst (2013) makes the claim that the method of maximum entropy (MaxEnt) is fundamentally at odds with Bayesian inference. At the same time, variational inference (Beal et al., 2003) is a widely used approximation technique that falls under the category of Bayesian learning. In this section, we show that the method of maximum relative entropy (Caticha, 2012) is equivalent to constrained variational inference, thus establishing the link between MaxEnt and Bayesian approximation.
2.1.1 Variational Inference
Variational methods (Beal et al., 2003; Fox and Roberts, 2012) in machine learning pose the problem of intractable density estimation from the application of Bayes’ rule as a functional optimisation problem:

p(θ|x) = p(x|θ) p(θ) / p(x),   (2)

where p(θ) and p(θ|x) represent the prior and posterior distributions of the random variable θ, respectively. Variational inference therefore seeks a density q(θ) as an approximation of the posterior p(θ|x), with the benefit that the resulting objective is a strict lower bound on the log evidence. Typically, while the functional form of the joint p(x, θ) is known, calculating the evidence p(x) is intractable. Using Jensen’s inequality, we can show that:

log p(x) = log ∫ q(θ) [p(x, θ)/q(θ)] dθ ≥ ∫ q(θ) log [p(x, θ)/q(θ)] dθ ≡ L(q).   (3)
(3) 
Furthermore, the reverse Kullback–Leibler (KL) divergence between the posterior and the variational distribution, D_KL(q ‖ p(θ|x)), can be written as:

D_KL(q(θ) ‖ p(θ|x)) = ∫ q(θ) log [q(θ)/p(θ|x)] dθ = log p(x) − L(q).   (4)

Hence, maximising the evidence lower bound L(q) is equivalent to minimising the reverse KL divergence between q(θ) and p(θ|x).
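As a quick sanity check of this identity, the following toy example (made-up numbers, not taken from the paper) verifies numerically that the ELBO and the reverse KL divergence sum to the log evidence:

```python
import numpy as np

# Tiny discrete model: latent theta takes two values, observation x is fixed.
p_joint = np.array([0.30, 0.10])           # p(x, theta) for theta in {0, 1}
log_px = np.log(p_joint.sum())             # log evidence, tractable here
posterior = p_joint / p_joint.sum()        # exact posterior p(theta | x)

q = np.array([0.6, 0.4])                   # an arbitrary variational density
elbo = np.sum(q * (np.log(p_joint) - np.log(q)))   # lower bound of Eq. (3)
kl = np.sum(q * (np.log(q) - np.log(posterior)))   # reverse KL of Eq. (4)
# elbo + kl equals log_px, so maximising the ELBO minimises the reverse KL.
```

Since q differs from the exact posterior here, the KL term is strictly positive and the ELBO sits strictly below the log evidence.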
2.1.2 MaxEnt is equivalent to Constrained Variational Inference
Consider minimising the reverse KL divergence between our posterior q(x) and our prior p(x), as is done in variational inference, with respect to q(x), subject to the moment constraints ∫ q(x) x^i dx = μ_i and the normalisation ∫ q(x) dx = 1:

D_KL(q ‖ p) = ∫ q(x) log [q(x)/p(x)] dx = −S(q) − ∫ q(x) log p(x) dx,   (5)

where S(q) denotes the differential entropy of the density q. By the theory of Lagrangian duality, the convexity of the KL divergence, and the affine nature of the moment constraints, we maximise the dual form (Boyd and Vandenberghe, 2009) of Equation (5):

−∫ q(x) log [q(x)/p(x)] dx − Σ_{i=0}^{m} α_i ( ∫ q(x) x^i dx − μ_i )   (6)

with respect to q(x) or, alternatively, we minimise

∫ q(x) log [q(x)/p(x)] dx + Σ_{i=0}^{m} α_i ( ∫ q(x) x^i dx − μ_i ).   (7)
In the field of information physics, the minimisation of Equation (7) is known as the method of relative entropy (Caticha, 2012). It can be derived as the unique functional satisfying the axioms of locality, coordinate invariance, subsystem invariance, and objectivity.
The restriction to a functional is derived by considering the set of all distributions compatible with the constraints and devising a transitive ranking scheme (transitive ranking means that if A ≻ B and B ≻ C, then A ≻ C). Furthermore, it can be shown that Newton’s laws, non-relativistic quantum mechanics, and Bayes’ rule can all be derived under this formalism (Caticha, 2012). In the case of a flat prior, log p(x) is a constant, and we reduce to the method of maximum entropy with moment constraints (Jaynes, 1957). Hence, MaxEnt and constrained variational inference are equivalent.
2.2 Fast Moment Estimation Techniques
As shown in Section 2.1, constrained variational inference requires knowledge of the moments of the density being approximated. In this section, we discuss fast moment estimation techniques, both for the general case and for cases in which analytic expressions exist.
2.2.1 Moments of a Gaussian Mixture
In some problems, there exist analytic expressions which can be used to compute the moments. One popular example is the Gaussian distribution (and the mixture of Gaussians). Specifically, for a one-dimensional Gaussian, the non-central moments admit the closed form:

E[x^n] = Σ_{k=0}^{⌊n/2⌋} (n choose 2k) (2k − 1)!! σ^{2k} μ^{n−2k},   (8)

where μ and σ are the mean and standard deviation of the Gaussian, respectively, and:

(2k − 1)!! = (2k − 1)(2k − 3) ⋯ 3 · 1,   (9)

where !! denotes the double factorial. Hence, for a mixture of Gaussians with means μ_j and standard deviations σ_j, the nth moment is analytic:

E[x^n] = Σ_j w_j Σ_{k=0}^{⌊n/2⌋} (n choose 2k) (2k − 1)!! σ_j^{2k} μ_j^{n−2k},   (10)

where w_j is the weight of the jth Gaussian in the mixture, with w_j ≥ 0 and Σ_j w_j = 1.
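The analytic moment formulas above translate directly into code. The following sketch (our own illustration, not the authors' implementation; function names are assumptions) computes Gaussian and Gaussian-mixture moments along the lines of Equations (8)–(10):

```python
import numpy as np
from math import comb

def gaussian_moment(n, mu, sigma):
    """Analytic n-th non-central moment of N(mu, sigma^2):
    E[x^n] = sum_k C(n, 2k) (2k-1)!! sigma^(2k) mu^(n-2k)."""
    total = 0.0
    for k in range(n // 2 + 1):
        # (2k-1)!! = (2k-1)(2k-3)...3.1, with (-1)!! = 1 by convention
        double_fact = float(np.prod(np.arange(2 * k - 1, 0, -2))) if k else 1.0
        total += comb(n, 2 * k) * double_fact * sigma ** (2 * k) * mu ** (n - 2 * k)
    return total

def mixture_moment(n, weights, mus, sigmas):
    """A mixture's n-th moment is the weighted sum of component moments."""
    return sum(w * gaussian_moment(n, m, s)
               for w, m, s in zip(weights, mus, sigmas))
```

For instance, for a two-component mixture with weights (0.3, 0.7), means (−1, 2), and standard deviations (0.5, 1.5), the first moment is 0.3·(−1) + 0.7·2 = 1.1, and the second is 0.3·(1 + 0.25) + 0.7·(4 + 2.25) = 4.75.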
2.2.2 Moments of the Eigenvalue Spectrum of a Matrix
Stochastic trace estimation is an effective way of cheaply estimating the moments of the eigenvalue spectrum of a given matrix A (Hutchinson, 1990). The essential idea is that we can estimate the non-central moments of the spectrum using only matrix-vector multiplications.
Recall that, for any multivariate random variable z with mean m and covariance Σ, we have:

E[z^T A z] = m^T A m + Tr(AΣ) = Tr(A),   (11)

where for the second equality we have assumed (without loss of generality) that z possesses zero mean and unit covariance. By the linearity of trace and expectation, for any moment order k we have:

E[z^T A^k z] = Tr(A^k) = Σ_{i=1}^{n} λ_i^k,   (12)

where λ_i are the eigenvalues of A. In practice, we approximate the expectation in Equation (12) with a set of probing vectors and a simple Monte Carlo average, i.e., for d random unit vectors z_j:

Tr(A^k) ≈ (n/d) Σ_{j=1}^{d} z_j^T A^k z_j,   (13)

where we take the product of the matrix with the vector a total of k times, updating the vector each time, so as to avoid the costly matrix-matrix multiplication with O(n^3) complexity. This allows us to calculate the non-central moment expectations in O(dkn^2) complexity for dense matrices, or O(dk × nnz) complexity for sparse matrices, where nnz is the number of non-zero entries in the matrix. The random vectors z_j can be drawn from any suitable distribution, such as a Gaussian. The resulting stochastic trace estimation algorithm is outlined in Algorithm 1.
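A minimal sketch of the estimator described above (our own illustration of the idea behind Algorithm 1; function and variable names are assumptions) might look as follows:

```python
import numpy as np

def spectral_moments(A, num_moments, num_probes, seed=0):
    """Estimate the spectral moments E[lambda^k] = Tr(A^k) / n for
    k = 1..num_moments via stochastic trace estimation.
    Only matrix-vector products with A are used; A^k is never formed."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    est = np.zeros(num_moments)
    for _ in range(num_probes):
        z = rng.standard_normal(n)
        z /= np.linalg.norm(z)      # random unit probe vector
        v = z.copy()
        for k in range(num_moments):
            v = A @ v               # v = A^{k+1} z after this step
            est[k] += z @ v         # z^T A^{k+1} z estimates Tr(A^{k+1}) / n
    return est / num_probes
```

For the identity matrix every probe gives z^T I^k z = 1 exactly, so all estimated moments equal 1; for a general symmetric matrix the estimates converge to the true spectral moments as the number of probes grows.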
2.3 The Proposed MaxEnt Algorithm
In this section, we develop an algorithm for determining the maximum relative entropy density given moment constraints. Under the assumption of a flat prior in a bounded domain, this reduces to a generic MaxEnt density, which we use in various examples.
As we discussed previously, in order to obtain the MaxEnt distribution, we maximise the generic objective of Equation (1). In practice, we instead minimise the dual form of this objective (Boyd and Vandenberghe, 2009), which can be written as follows:

F(α) = ∫ p(x) exp(−Σ_{i=0}^{m} α_i x^i) dx + Σ_{i=0}^{m} α_i μ_i.   (14)

Notice that this dual objective admits analytic expressions for both the gradient and the Hessian:

∂F/∂α_j = μ_j − ∫ x^j p(x) exp(−Σ_{i=0}^{m} α_i x^i) dx,   (15)

∂²F/∂α_j ∂α_k = ∫ x^{j+k} p(x) exp(−Σ_{i=0}^{m} α_i x^i) dx.   (16)
For a flat prior in a bounded domain, which we use as a prior for the spectral densities of large matrices and for Gaussian mixture models, p(x) is a constant that can be dropped. With the notation

q_α(x) = exp(−Σ_{i=0}^{m} α_i x^i),   (17)

we obtain the MaxEnt algorithm in Algorithm 2.
The input moments of Algorithm 2 are estimated using fast moment estimation techniques, as explained in Section 2.2.2. In our implementation, we use Python’s SciPy Newton-conjugate gradient (CG) algorithm to solve the minimisation in step 4 (line 6), having first computed the gradient to within a specified tolerance, as well as the Hessian. To make the Hessian better conditioned so as to achieve convergence, we symmetrise it and add a small jitter term along the diagonal. We estimate the required integrals with quadrature on a fixed grid. Empirically, we observe that the algorithm is not overly sensitive to these choices. In particular, we find that, for well-conditioned problems, the need for jitter is obviated and a coarser grid works fine; in the case of worse conditioning, jitter helps improve convergence and the grid becomes more important, where a finer grid leads to a computationally more intensive implementation.
Given that any polynomial sum Σ_i α_i x^i can be rewritten as Σ_i β_i T_i(x), where {T_i} denotes another polynomial basis, whether we choose to use power moments in our constraints or moments in another polynomial basis, such as the Chebyshev or Legendre basis, does not change the entropy or the solution of our objective. However, the performance of optimisers working on these different formulations may vary (Skilling, 1989). For simplicity, we have kept all the formulas in terms of power moments. However, we find vastly improved performance and conditioning when we switch to orthogonal polynomial bases (so that the errors between moment estimations are uncorrelated), as shown in Section 3.2.2. We implement both Chebyshev and Legendre moments in our Lagrangian and find similar performance for both.
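Putting the pieces together, a compact sketch of the MaxEnt fit (a simplified illustration in the spirit of Algorithm 2, not the authors' code; the function name, uniform-grid quadrature on [0, 1], and numerical settings are our own assumptions) could read:

```python
import numpy as np
from scipy.optimize import minimize

def fit_maxent(mu, grid_size=2000, jitter=1e-8):
    """Fit q(x) = exp(-sum_i alpha_i x^i) on [0, 1] to moments mu
    (mu[0] = 1 for normalisation) by minimising the dual objective,
    using its analytic gradient and (symmetrised, jittered) Hessian."""
    m = len(mu)
    x = np.linspace(1e-6, 1.0, grid_size)
    dx = x[1] - x[0]
    P = np.vander(x, m, increasing=True)        # P[:, i] = x**i

    def q(alpha):
        return np.exp(-P @ alpha)

    def dual(alpha):                            # dual objective
        return np.sum(q(alpha)) * dx + alpha @ mu

    def grad(alpha):                            # mu_j - E_q[x^j]
        return mu - P.T @ q(alpha) * dx

    def hess(alpha):                            # E_q[x^{j+k}], made well-conditioned
        H = (P.T * q(alpha)) @ P * dx
        return 0.5 * (H + H.T) + jitter * np.eye(m)

    res = minimize(dual, np.zeros(m), jac=grad, hess=hess,
                   method='Newton-CG', tol=1e-10)
    density = lambda t: np.exp(-np.polynomial.polynomial.polyval(t, res.x))
    return res.x, density
```

At the optimum the gradient vanishes, so the fitted density reproduces the input moments; e.g., feeding in the uniform-density moments μ = (1, 1/2, 1/3) recovers a near-flat density on [0, 1].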
3 Applications
We apply the proposed algorithm to two problems of interest, namely, log determinant estimation and Bayesian optimisation. In both cases, we demonstrate a substantial speed-up and improvement in performance.
3.1 Log Determinant Estimation
Calculation of the log determinant is a common hindrance in machine learning, appearing in Gaussian graphical models (Friedman et al., 2008), Gaussian processes (Rasmussen and Williams, 2006), variational inference (MacKay, 2003), metric and kernel learning (Van Aelst and Rousseeuw, 2009), Markov random fields (Wainwright and Jordan, 2006), and determinantal point processes (Kulesza, 2012). For a large positive definite matrix A, the canonical solution involves the Cholesky decomposition A = LL^T. The log determinant is then trivial to calculate as log det A = 2 Σ_i log L_ii, where L_ii is the ith diagonal entry of L. This computation invokes a computational complexity of O(n^3) and a storage complexity of O(n^2), which becomes prohibitive for large n.
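For reference, the canonical Cholesky route takes a few lines; the sketch below (illustrative only, on a tiny made-up matrix) checks log det A = 2 Σ_i log L_ii:

```python
import numpy as np

# Exact log determinant of a PD matrix via Cholesky: A = L L^T implies
# det A = (prod_i L_ii)^2, hence log det A = 2 * sum_i log(L_ii).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])        # small PD matrix with det(A) = 8
L = np.linalg.cholesky(A)
logdet = 2.0 * np.sum(np.log(np.diag(L)))
```

This is exact but O(n^3), which is precisely the cost the spectral approach below avoids.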
3.1.1 Log Determinant Estimation as a Spectral Estimation Problem using MaxEnt
Any symmetric positive definite (PD) matrix A is diagonalisable by a unitary matrix U, i.e., A = U^T D U, where D is the diagonal matrix of eigenvalues of A. Hence, we can write the log determinant as:

log det A = Σ_{i=1}^{n} log λ_i = n E_p[log λ],   (18)

where we have used the cyclicity of the determinant and E_p denotes the expectation under the spectral measure. The latter can be written as:

E_p[log λ] = ∫_{λ_min}^{λ_max} p(λ) log λ dλ,   (19)

where λ_max and λ_min correspond to the largest and smallest eigenvalues, respectively.
Given that the matrix is PD, we know that λ_min > 0, and we can divide the matrix by an upper bound on the largest eigenvalue, λ_u ≥ λ_max, obtained via the Gershgorin circle theorem (Gershgorin, 1931), such that:

log det A = n E_p[log λ′] + n log λ_u,   (20)

where λ′ = λ/λ_u and λ_u = max_i Σ_j |A_ij|, i.e., the maximum sum of the rows of the absolute values of the matrix A. Hence, we can comfortably work with the transformed measure:

E_p[log λ′] = ∫_0^1 p(λ′) log λ′ dλ′,   (21)

where the spectral density p(λ′) vanishes outside its support [0, 1]. We therefore arrive at the maximum entropy method (MEMe) for log determinant estimation detailed in Algorithm 3.
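The Gershgorin normalisation and the decomposition of Equation (20) are easy to check numerically; the following sketch (our own illustration, on a small random PD matrix) verifies both:

```python
import numpy as np

# Check the Gershgorin bound and the identity
# log det A = n * log(lambda_u) + sum_i log(eigvals(A / lambda_u)).
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)                    # a symmetric PD test matrix
n = A.shape[0]

lambda_u = np.max(np.sum(np.abs(A), axis=1))   # Gershgorin: max absolute row sum
eigs_scaled = np.linalg.eigvalsh(A / lambda_u) # scaled spectrum lies in (0, 1]
logdet = n * np.log(lambda_u) + np.sum(np.log(eigs_scaled))
```

In the full MEMe pipeline, the eigendecomposition above is of course replaced by the MaxEnt spectral density fitted to stochastically estimated moments; the code only confirms that the normalisation step loses no information.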
3.1.2 Experiments
Log Determinant Estimation for Synthetic Kernel Matrix
To specifically compare against the commonly used Chebyshev (Han et al., 2015) and Lanczos (Ubaru et al., 2016) approximations to the log determinant, and to see how their accuracy degrades with worsening condition number, we generate a typical squared exponential kernel matrix K (Rasmussen and Williams, 2006) using the Python GPy package with Gaussian inputs and a variety of realistic uniform lengthscales. We then add noise of fixed variance along the diagonal of K.
We use MEMe with a moderate number of moments and Hutchinson probe vectors (Hutchinson, 1990) for the log determinant estimation, and display the absolute relative estimation error for the different approaches in Table 1. We see that, for low condition numbers, the benefit of framing the log determinant as an optimisation problem is marginal, whereas for large condition numbers the benefit becomes substantial, with orders of magnitude better results than competing methods.
Lengthscale  MEMe  Chebyshev  Lanczos

0.05  0.0014  0.0037  0.0024
0.15  0.0522  0.0104  0.0024
0.25  0.0387  0.0795  0.0072
0.35  0.0263  0.2302  0.0196
0.45  0.0284  0.3439  0.0502
0.55  —  0.4089  0.0646
0.65  —  0.5049  0.0838
0.75  0.0086  0.5049  0.1050
0.85  0.0177  0.5358  0.1199
Log Determinant Estimation on Real Data
The work in Granziol and Roberts (2017) has shown that the addition of an extra moment constraint cannot increase the entropy of the MaxEnt solution. For the log determinant problem, this signifies that the entropy of the spectral approximation should decrease with the addition of every moment constraint. We implement the MaxEnt algorithm proposed in Bandyopadhyay et al. (2005), which we refer to as OMxnt, in the same manner as applied for log determinant estimation in Fitzsimons et al. (2017), and compare it against the proposed MEMe approach. Specifically, we show results on the Ecology dataset (Davis and Hu, 2011), for which the true log determinant can be calculated. For OMxnt, we see that, after the initial decrease, the error (Figure 1b) begins to increase as more moments are added, and the entropy (Figure 1a) likewise starts to increase. For the proposed MEMe algorithm, the performance is vastly improved in terms of estimation error (Figure 1d); furthermore, the error continually decreases with an increasing number of moments, and the entropy (Figure 1c) decreases smoothly. This demonstrates the superiority, both in terms of consistency and of performance, of our novel algorithm over established existing alternatives.
3.2 Bayesian Optimisation
Bayesian optimisation (BO) is a powerful tool for tackling global optimisation challenges. It is particularly useful when the objective function is unknown, non-convex, and very expensive to evaluate (Hennig et al., 2015), and it has been successfully applied in a wide range of applications, including automatic machine learning (Bergstra et al., 2011; Snoek et al., 2012; Thornton et al., 2013; Hoffman et al., 2014), robotics (Lizotte et al., 2007; Martinez-Cantin et al., 2007), and experimental design (Azimi et al., 2012). When applying BO, we need to choose a statistical prior to model the objective function and define an acquisition function which trades off exploration and exploitation to recommend the next query location (Brochu et al., 2010). The generic BO algorithm is presented in Algorithm 6 in Appendix B. In the literature, one of the most popular prior models is the Gaussian process (GP), and the most recent class of acquisition functions demonstrating state-of-the-art empirical performance is the information-theoretic one (Hennig and Schuler, 2012; Hernández-Lobato et al., 2014; Wang and Jegelka, 2017; Ru et al., 2018).
3.2.1 MaxEnt for InformationTheoretic Bayesian Optimisation
Information-theoretic BO methods select the next query point that maximises the information gain about the unknown global optimiser/optimum. Despite their impressive performance, the computation of their acquisition functions involves an intractable Gaussian mixture entropy term, as shown in Equation (22), if we perform a fully Bayesian treatment of the GP hyperparameters. Accurate approximation of this Gaussian mixture entropy term is crucial for the performance of information-theoretic BO, but can be difficult and/or expensive. In this paper, we propose an efficient approximation of the Gaussian mixture entropy using MEMe, which allows for efficient computation of information-theoretic acquisition functions.
As a concrete example, we consider the application of MEMe to Fast Information-Theoretic Bayesian Optimisation (FITBO) (Ru et al., 2018), as described in Algorithm 5, which is a highly practical information-based BO method proposed recently. The FITBO acquisition function has the following form:

α(x) = H[ (1/M) Σ_{j=1}^{M} p(y | D, x, θ^{(j)}) ] − (1/M) Σ_{j=1}^{M} H[ p(y | D, x, θ^{(j)}) ],   (22)

where p(y | D, x, θ^{(j)}) is the predictive posterior distribution for y conditioned on the observed data D, the test location x, and a hyperparameter sample θ^{(j)}. The first entropy term is the entropy of a Gaussian mixture, where M is the number of Gaussian components.
The entropy of a Gaussian mixture does not have a closed-form solution and needs to be approximated. In FITBO, the authors approximate the quantity using numerical quadrature and moment matching (the latter corresponds to the maximum entropy solution for two moment constraints together with the normalisation constraint). In this paper, we develop an effective analytic upper bound on the Gaussian mixture entropy using MEMe, which is derived from the non-negativity of the relative entropy between the true density p and the MaxEnt distribution q (Granziol and Roberts, 2017):

0 ≤ D_KL(p ‖ q) = −H(p) − ∫ p(x) log q(x) dx = −H(p) + H(q),   (23)

hence

H(p) ≤ H(q).   (24)

The last equality in Equation (23) holds because log q(x) is a polynomial whose coefficients are the Lagrange multipliers, and q shares the same moment constraints as p, so that ∫ p(x) log q(x) dx = ∫ q(x) log q(x) dx. Furthermore, the entropy of the MaxEnt distribution can be derived analytically:

H(q) = −∫ q(x) log q(x) dx = Σ_{i=0}^{m} α_i μ_i,   (25)

where μ_i are the moments of the Gaussian mixture, which can be computed analytically using Equation (10), and α_i are the Lagrange multipliers that can be obtained by applying Algorithm 2. The overall algorithm for approximating the Gaussian mixture entropy is summarised in Algorithm 4. In Appendix B.1, we also show that the moments of the Gaussian mixture uniquely determine its density, and the bound becomes tighter with every extra moment constraint: in the limit, the entropy of the MaxEnt distribution converges to the true Gaussian mixture entropy. Hence, the use of a moment-based MaxEnt approximation is justified (Mead and Papanicolaou, 1984).
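Equation (25) can be checked numerically on a simple one-moment example; the sketch below (with an assumed multiplier value, our own illustration) compares the analytic entropy with direct numerical integration:

```python
import numpy as np

# One-moment illustration of Eq. (25): for q(x) = exp(-alpha0 - alpha1 * x)
# on [0, 1], the entropy is analytic: H(q) = alpha0 * mu_0 + alpha1 * mu_1,
# with mu_0 = 1 (normalisation) and mu_1 = E_q[x].
alpha1 = 1.0                                   # hypothetical Lagrange multiplier
x = np.linspace(0.0, 1.0, 400001)
dx = x[1] - x[0]
alpha0 = np.log(np.sum(np.exp(-alpha1 * x)) * dx)   # enforces normalisation
q = np.exp(-alpha0 - alpha1 * x)

mu0 = np.sum(q) * dx                           # ~ 1 by construction
mu1 = np.sum(q * x) * dx                       # first moment E_q[x]
H_analytic = alpha0 * mu0 + alpha1 * mu1       # Eq. (25)
H_numeric = -np.sum(q * np.log(q)) * dx        # direct -int q log q
```

Because −log q(x) is exactly the polynomial α_0 + α_1 x, the two entropy values agree up to quadrature rounding, which is why the MaxEnt bound costs essentially nothing once the multipliers are known.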
3.2.2 Experiments
Entropy of the Gaussian Mixture in Bayesian Optimisation
We first test a diverse set of methods for approximating the entropy of two sets of Gaussian mixtures, which are generated using FITBO with M = 200 and M = 400 hyperparameter samples on a benchmark problem. Specifically, we compare the following approaches for entropy approximation: MaxEnt methods using 10 and 30 power moments (MEMe-10 and MEMe-30), MaxEnt methods using 10 and 30 Legendre moments (MEMe-L10 and MEMe-L30), the variational upper bound (VUB) (Hershey and Olsen, 2007), the method proposed by Huber et al. (2008) with 2 Taylor expansion terms (Huber-2), Monte Carlo with 100 samples (MC-100), and simple moment matching (MM).
We evaluate the performance in terms of the approximation error, i.e., the relative error between the approximated entropy value and the true entropy value, the latter of which is estimated via expensive numerical integration. The mean approximation errors of all methods over 80 different Gaussian mixtures are shown in Table 2 (the version with the standard deviation of the errors is presented as Table 4 in Appendix B.2). We can see clearly that all MEMe approaches outperform the other methods in terms of approximation error. Among the MEMe approaches, the use of Legendre moments, which employ the orthogonal Legendre polynomial basis, outperforms the use of simple power moments.
Methods  M=200  M=400 

MEMe-10  
MEMe-30  
MEMe-L10  
MEMe-L30  
VUB  
Huber-2  
MC-100  
MM
The mean runtimes of all approximation methods over 80 different Gaussian mixtures are shown in Table 3 (the version with the standard deviation is presented as Table 5 in Appendix B.2). To ensure a fair comparison, all the methods are implemented in MATLAB, and all the timing tests are performed on a 2.3 GHz Intel Core i5 processor. As expected, the moment matching technique, which yields an analytic approximation of the Gaussian mixture entropy, is the fastest method. MEMe approaches are significantly faster than the remaining approximation methods. This demonstrates that MEMe approaches are highly efficient in terms of both approximation accuracy and computational speed. Among the MEMe approaches, we choose to apply MaxEnt with 10 Legendre moments in the BO experiments that follow, as it achieves lower approximation error than MaxEnt with a higher number of power moments while preserving the computational benefit of FITBO.
Methods  M=200  M=400 

MEMe-10  
MEMe-30  
MEMe-L10  
MEMe-L30  
VUB  
Huber-2  
MC-100  
MM
InformationTheoretic Bayesian Optimisation
We now test the effectiveness of MEMe for information-theoretic BO. We first illustrate the entropy approximation performance of MEMe using a 1D example. In Figure 2, the top plot shows the objective function we want to optimise (red dashed line) and the posterior distribution of our surrogate model (blue solid line and shaded area). The bottom plot shows the acquisition functions computed based on Equation (22) using the same surrogate model but three different methods for Gaussian mixture entropy approximation: expensive numerical quadrature, or Quad (red solid line), MM (black dashed line), and MEMe using 10 Legendre moments (green dashed line). In BO, the next query location is obtained by maximising the acquisition function; therefore, the location, rather than the magnitude, of the modes of the acquisition function matters most. We can see from the bottom plot that MEMe-L10 results in an approximation that closely matches the true acquisition function obtained by Quad. As a result, MEMe-L10 manages to recommend the same next query location as Quad. In comparison, the loose upper bound of the MM method, though successfully capturing the locations of the peaks, fails to correctly predict the global maximiser of the true acquisition function. MM therefore recommends a query location that differs from the optimal choice. As previously mentioned, the acquisition function in information-theoretic BO represents the information gain about the global optimum from querying at a new location. The suboptimal choice of the next query location thus imposes a penalty on the optimisation performance, as seen in Figure 3.
In the next set of experiments, we evaluate the optimisation performance of three versions of FITBO that use different approximation methods. Specifically, FITBO-Quad denotes the version that uses expensive numerical quadrature to approximate the entropy of the Gaussian mixture, FITBO-MM denotes the one using simple moment matching, and FITBO-MEMe-L10 denotes the one using MEMe with 10 Legendre moments. We test these BO algorithms on two challenging optimisation problems, i.e., the Michalewicz-5D function (Molga and Smutnicki, 2005) and the Hartmann-6D function (Dixon, 1978), and measure the optimisation performance in terms of the immediate regret (IR), i.e., the absolute difference between the true global minimum value and the best guess of the global minimum value by the BO algorithm. The median result over 10 random initialisations for each experiment is shown in Figure 3. It is evident that the MEMe approach (FITBO-MEMe-L10), which better approximates the Gaussian mixture entropy, leads to superior performance of the BO algorithm compared to the BO algorithm using the simple moment matching technique (FITBO-MM).
4 Conclusion
In this paper, we established the equivalence between the method of maximum entropy and Bayesian variational inference under moment constraints, and we proposed a novel maximum entropy algorithm (MEMe) that is stable and consistent for a large number of moments. We applied MEMe to two applications, namely log determinant estimation and Bayesian optimisation, to demonstrate its effectiveness and superiority over state-of-the-art approaches. The proposed algorithm can further benefit a wide range of large-scale machine learning applications where efficient approximation is of crucial importance.
Conceptualization, D.G.; formal analysis, investigation, methodology, software and writing–original draft, D.G. and B.R.; supervision and writing–review and editing, S.Z., X.D., M.O. and S.R.
B.R. would like to thank the Oxford Clarendon Fund and the Oxford-Man Institute of Quantitative Finance; D.G., S.Z. and X.D. would like to thank the Oxford-Man Institute of Quantitative Finance for financial support; S.R. would like to thank the UK Royal Academy of Engineering and the Oxford-Man Institute of Quantitative Finance.
The authors declare no conflict of interest.
The following abbreviations are used in this manuscript:
MEMe  Maximum entropy method 

MaxEnt  Maximum entropy 
PD  Positive definite 
CG  Conjugate gradient 
OMxnt  Old MaxEnt algorithm proposed by Bandyopadhyay et al. (2005) 
BO  Bayesian optimisation 
GP  Gaussian process 
FITBO  Fast information-theoretic Bayesian optimisation 
GM  Gaussian mixture 
VUB  Variational upper bound 
Huber  Method proposed by Huber et al. (2008) for estimating the Gaussian mixture entropy 
MC  Monte Carlo sampling 
MM  Moment matching 
Quad  Numerical quadrature 
IR  Immediate regret 
Appendix A Polynomial Approximations to the Log Determinant
Recent work (Han et al., 2015; Dong et al., 2017; Zhang and Leithead, 2007) has considered incorporating knowledge of the non-central moments (also obtained using stochastic trace estimation) of a normalised eigenspectrum by replacing the logarithm with a finite polynomial expansion,

log(λ) ≈ Σ_{k=0}^{m} c_k λ^k.   (26)

Given that log(λ) is not analytic at λ = 0, it can be seen that, for any spectral density with large mass near the origin, a very large number of polynomial terms, and thus moment estimates, will be required to achieve a good approximation, irrespective of the choice of basis.
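The following sketch (our own illustration, not from the paper) quantifies this breakdown for a truncated Taylor expansion of the logarithm:

```python
import numpy as np

# log(x) is not analytic at x = 0, so a Taylor expansion about x = 1
# degrades badly for eigenvalues near the origin, whatever the order m.
def taylor_log(x, m):
    """m-term Taylor approximation: log(x) ~ -sum_{k=1}^m (1 - x)^k / k."""
    return -sum((1.0 - x) ** k / k for k in range(1, m + 1))

err_mid = abs(taylor_log(0.5, 30) - np.log(0.5))    # eigenvalue away from 0
err_low = abs(taylor_log(0.01, 30) - np.log(0.01))  # eigenvalue near 0
```

With 30 terms the error at x = 0.5 is negligible, while at x = 0.01 the truncated series misses log(x) by a large margin, which is exactly the regime of ill-conditioned matrices.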
A.1 Taylor Approximations are Unsound
In the case of a Taylor expansion about λ = 1, Equation (26) can be written as:

log det A = n E_p[log λ] ≈ −n Σ_{k=1}^{m} E_p[(1 − λ)^k] / k.   (27)

The error in this approximation can be written as the difference of the two sums,

Δ = −Σ_{k=m+1}^{∞} E_p[(1 − λ)^k] / k,   (28)

where we have used the Taylor expansion of log(1 − x) and E_p denotes the expectation under the spectral measure. We begin with complete ignorance about the spectral density p(λ) (other than its domain [0, 1]) and, by some scheme, after seeing the first m non-central moment estimates, we propose a surrogate density q(λ). The error in our approximation can be written as:

Δ′ = ∫_0^1 [p(λ) − q(λ)] log λ dλ.   (29)

For this error to be equal to that of our Taylor expansion, Equation (28), our implicit surrogate density must match the first m moments of (1 − λ) under the true spectral density, with all higher moments equal to 0.
For any PD matrix A whose spectrum is not entirely concentrated at λ = 1 (we ignore the trivial case of a Dirac distribution at λ = 1, which is of no practical interest), for Equation (29) to be equal to Equation (28), we must have:

E_q[(1 − λ)^k] = 0 for all k > m.   (30)

Given that 0 < λ ≤ 1, and that we have removed the trivial case of the spectral density (and, by implication, its surrogate) being a delta function at λ = 1, the terms (1 − λ)^k are manifestly non-negative; forcing their expectations to vanish therefore requires q(λ) < 0 for some λ, which violates the definition of a density.
Appendix B Bayesian Optimisation
The generic algorithm for Bayesian optimisation is summarised in Algorithm 6.
B.1 Are Moments Sufficient to Fully Describe the Problem?
It was shown in Billingsley (2012) that, for a probability measure μ having finite moments α_k of all orders, if the power series Σ_k α_k r^k / k! has a positive radius of convergence, then μ is the only probability measure with the moments α_1, α_2, …. Informally, a Gaussian has finite moments of all orders; therefore, any finite combination of Gaussians must necessarily possess finite moments of all orders, hence the above condition is satisfied. We explicitly show this for the case of a one-dimensional Gaussian mixture as follows.
We take the location parameter to be μ = 0 (a non-zero mean alters the moments by only polynomially growing factors and does not affect the convergence argument) and denote the standard deviation by σ. The 2nth moment of the Gaussian is then given by:

E[x^{2n}] = σ^{2n} (2n − 1)!!,   (31)

where we make use of the fact that all odd central power moments of the Gaussian are 0. Hence, for the Gaussian mixture model we have:

α_{2n} = Σ_j w_j σ_j^{2n} (2n − 1)!!,   (32)

where w_j are the weights of the components, satisfying Σ_j w_j = 1 and w_j ≥ 0. Notice that α_{2n} is upper bounded by replacing every σ_j with σ_max = max_j σ_j.
Furthermore, the following relationship holds: (2n − 1)!! = (2n)! / (2^n n!). Therefore, the terms of the power series built from Equation (32) can be upper bounded as:

α_{2n} r^{2n} / (2n)! ≤ (σ_max r)^{2n} / (2^n n!),   (33)

whose sum over n converges for every r, as can be seen by taking the logarithm of the ratio of successive terms:

2 log(σ_max r) − log(2(n + 1)) → −∞ as n → ∞.   (34)

The power series thus has an infinite radius of convergence, and the moments uniquely determine the Gaussian mixture density.
B.2 Experimental Results on Approximating the Gaussian Mixture Entropy
The mean and standard deviation of the approximation error and of the runtime taken by all approximation methods over 80 different Gaussian mixtures are shown in Table 4 and Table 5, respectively.
Methods  M=200  M=400 

MEMe10  
()  ()  
MEMe30  
( )  ( )  
MEMeL10  
( )  ( )  
MEMeL30  
( )  ( )  
Variational  
Upper Bound  ( )  () 
Huber2  
()  ( )  
MC100  
( )  ()  
Moment  
Matching  ( )  () 
Methods  M=200  M=400 

MEMe10  
()  ()  
MEMe30  
( )  ( )  
MEMeL10  
( )  ( )  
MEMeL30  
( )  ( )  
Variational  
Upper Bound  ( )  () 
Huber2  
()  ( )  
MC100  
( )  ()  
Moment  
Matching  ( )  () 
References
 Granziol and Roberts (2017) Granziol, D.; Roberts, S.J. Entropic determinants of massive matrices. In Proceedings of the 2017 IEEE International Conference on Big Data, Boston, MA, USA, 11–14 December 2017; pp. 88–93.
 Bandyopadhyay et al. (2005) Bandyopadhyay, K.; Bhattacharya, A.K.; Biswas, P.; Drabold, D. Maximum entropy and the problem of moments: A stable algorithm. Phys. Rev. E 2005, 71, 057701.
 Mead and Papanicolaou (1984) Mead, L.R.; Papanicolaou, N. Maximum entropy in the problem of moments. J. Math. Phys. 1984, 25, 2404–2417.
Han et al. (2015) Han, I.; Malioutov, D.; Shin, J. Large-scale log-determinant computation through stochastic Chebyshev expansions. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 908–917.
 Dong et al. (2017) Dong, K.; Eriksson, D.; Nickisch, H.; Bindel, D.; Wilson, A.G. Scalable Log Determinants for Gaussian Process Kernel Learning. In Proceedings of the 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6330–6340.
 Zhang and Leithead (2007) Zhang, Y.; Leithead, W.E. Approximate implementation of the logarithm of the matrix determinant in Gaussian process regression. J. Stat. Comput. Simul. 2007, 77, 329–348.
Hernández-Lobato et al. (2014) Hernández-Lobato, J.M.; Hoffman, M.W.; Ghahramani, Z. Predictive entropy search for efficient global optimization of black-box functions. In Proceedings of the 27th Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 918–926.
Wang and Jegelka (2017) Wang, Z.; Jegelka, S. Max-value Entropy Search for Efficient Bayesian Optimization. arXiv 2017, arXiv:1703.01968.
Ru et al. (2018) Ru, B.; McLeod, M.; Granziol, D.; Osborne, M.A. Fast Information-theoretic Bayesian Optimisation. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 4381–4389.
 Fox and Roberts (2012) Fox, C.W.; Roberts, S.J. A tutorial on variational Bayesian inference. Artif. Intell. Rev. 2012, 38, 85–95.
 Kulesza (2012) Kulesza, A. Determinantal Point Processes for Machine Learning. Found. Trends Mach. Learn. 2012, 5, 123–286.
 Pressé et al. (2013) Pressé, S.; Ghosh, K.; Lee, J.; Dill, K.A. Principles of Maximum Entropy and Maximum Caliber in Statistical Physics. Rev. Mod. Phys. 2013, 85, 1115–1141.
 Jaynes (1957) Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620–630.
Giffin et al. (2016) Giffin, A.; Cafaro, C.; Ali, S.A. Application of the Maximum Relative Entropy method to the Physics of Ferromagnetic Materials. Phys. A Stat. Mech. Appl. 2016, 455, 11–26.
 Neri and Schneider (2012) Neri, C.; Schneider, L. Maximum Entropy Distributions inferred from Option Portfolios on an Asset. Finance Stoch. 2012, 16, 293–318.

Bretthorst (2013) Bretthorst, G.L. The maximum entropy method of moments and Bayesian probability theory. AIP Conf. Proc. 2013, 1553, 3–15.
Beal (2003) Beal, M.J. Variational Algorithms for Approximate Bayesian Inference. Ph.D. Thesis, University College London, London, UK, 2003.

Caticha (2012) Caticha, A. Entropic Inference and the Foundations of Physics (monograph commissioned by the 11th Brazilian Meeting on Bayesian Statistics, EBEB 2012); 2012.
Boyd and Vandenberghe (2009) Boyd, S.P.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2009.
Hutchinson (1990) Hutchinson, M.F. A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Commun. Stat. Simul. Comput. 1990, 19, 433–450.
Skilling (1989) Skilling, J. The eigenvalues of mega-dimensional matrices. In Maximum Entropy and Bayesian Methods; Springer: Berlin, Germany, 1989; pp. 455–466.
 Friedman et al. (2008) Friedman, J.; Hastie, T.; Tibshirani, R. Sparse inverse covariance estimation with the graphical lasso. Biostatistics 2008, 9, 432–441.
 Rasmussen and Williams (2006) Rasmussen, C.E.; Williams, C.K. Gaussian Processes for Machine Learning; The MIT Press: Cambridge, MA, USA, 2006, pp. 715–719.
 MacKay (2003) MacKay, D.J. Information Theory, Inference and Learning Algorithms; Cambridge University Press: Cambridge, UK, 2003.
 Van Aelst and Rousseeuw (2009) Van Aelst, S.; Rousseeuw, P. Minimum volume ellipsoid. Wiley Interdiscip. Rev. Comput. Stat. 2009, 1, 71–82.
 Wainwright and Jordan (2006) Wainwright, M.J.; Jordan, M.I. Logdeterminant relaxation for approximate inference in discrete Markov random fields. IEEE Trans. Signal Process. 2006, 54, 2099–2109.
 Gershgorin (1931) Gershgorin, S.A. Über die Abgrenzung der Eigenwerte einer Matrix. Izvestija Akademii Nauk SSSR Serija Matematika 1931, 6, 749–754.
Ubaru et al. (2017) Ubaru, S.; Chen, J.; Saad, Y. Fast Estimation of tr(f(A)) via Stochastic Lanczos Quadrature. SIAM J. Matrix Anal. Appl. 2017, 38, 1075–1099.
Granziol and Roberts (2017) Granziol, D.; Roberts, S. An Information and Field Theoretic approach to the Grand Canonical Ensemble. arXiv 2017, arXiv:1703.10099.
Fitzsimons et al. (2017) Fitzsimons, J.; Granziol, D.; Cutajar, K.; Osborne, M.; Filippone, M.; Roberts, S. Entropic Trace Estimates for Log Determinants. arXiv 2017, arXiv:1704.07223.
 Davis and Hu (2011) Davis, T.A.; Hu, Y. The University of Florida sparse matrix collection. ACM Trans. Math. Softw. 2011, 38, 1.
 Hennig et al. (2015) Hennig, P.; Osborne, M.A.; Girolami, M. Probabilistic numerics and uncertainty in computations. Proc. R. Soc. A 2015, 471, 20150142.
 Bergstra et al. (2011) Bergstra, J.S.; Bardenet, R.; Bengio, Y.; Kégl, B. Algorithms for hyperparameter optimization. In Proceedings of the 24th International Conference on Neural Information Processing Systems, Granada, Spain, 12–15 December 2011; pp. 2546–2554.
 Snoek et al. (2012) Snoek, J.; Larochelle, H.; Adams, R.P. Practical bayesian optimization of machine learning algorithms. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 2951–2959.
Thornton et al. (2013) Thornton, C.; Hutter, F.; Hoos, H.H.; Leyton-Brown, K. Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, 11–14 August 2013; pp. 847–855.

Hoffman et al. (2014) Hoffman, M.; Shahriari, B.; de Freitas, N. On correlation and budget constraints in model-based bandit optimization with application to automatic machine learning. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, Reykjavik, Iceland, 22–24 April 2014; pp. 365–374.
 Lizotte et al. (2007) Lizotte, D.J.; Wang, T.; Bowling, M.H.; Schuurmans, D. Automatic Gait Optimization with Gaussian Process Regression. IJCAI 2007, 7, 944–949.
Martinez-Cantin et al. (2007) Martinez-Cantin, R.; de Freitas, N.; Doucet, A.; Castellanos, J.A. Active policy learning for robot planning and exploration under uncertainty. Robotics Sci. Syst. 2007, 3, 321–328.
Azimi et al. (2012) Azimi, J.; Jalali, A.; Fern, X. Hybrid batch Bayesian optimization. arXiv 2012, arXiv:1202.5597.
Brochu et al. (2010) Brochu, E.; Cora, V.M.; de Freitas, N. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv 2010, arXiv:1012.2599.
 Hennig and Schuler (2012) Hennig, P.; Schuler, C.J. Entropy search for informationefficient global optimization. J. Mach. Learn. Res. 2012, 13, 1809–1837.

Hershey and Olsen (2007) Hershey, J.R.; Olsen, P.A. Approximating the Kullback-Leibler divergence between Gaussian mixture models. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Honolulu, HI, USA, 15–20 April 2007.
Huber et al. (2008) Huber, M.F.; Bailey, T.; Durrant-Whyte, H.; Hanebeck, U.D. On entropy approximation for Gaussian mixture random vectors. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul, South Korea, 20–22 August 2008; pp. 181–188.
Molga and Smutnicki (2005) Molga, M.; Smutnicki, C. Test functions for optimization needs. Available online: http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf (accessed on 30 May 2005).
 Dixon (1978) Dixon, L.C.W. The global optimization problem. An introduction. Toward Glob. Optim. 1978, 2, 1–15.
 Billingsley (2012) Billingsley, P. Probability and Measure; Wiley: Hoboken, NJ, USA, 2012.