Boltzmann machine learning (BML) Ackley et al. (1985)
has been actively studied in the field of machine learning and also in statistical mechanics. In statistical mechanics, the problem of BML is sometimes referred to as the inverse Ising problem, because a Boltzmann machine is the same as an Ising model, and BML can be regarded as an inverse problem for the Ising model. The framework of the usual BML is as follows. Given a set of observed data points (e.g., spin snapshots), we estimate appropriate values of the parameters, the external fields and couplings, of our Boltzmann machine through maximum likelihood (ML) estimation (cf. Sec. II.1). Because BML involves intractable multiple summations (i.e., evaluation of the partition function), many approximations for it have been proposed from the viewpoint of statistical mechanics Roudi et al. (2009): for example, methods based on mean-field approximations (such as the Plefka expansion Plefka (1982) and the cluster variation method Pelizzola (2005)) Kappen and Rodríguez (1998); Tanaka (1998); Yasuda and Horiguchi (2006); Sessak and Monasson (2009); Yasuda and Tanaka (2009); Ricci-Tersenghi (2012); Furtlehner (2013) and methods based on other approximations Sohl-Dickstein et al. (2011); Yasuda (2015).
In this study, we focus on another type of learning problem. We consider prior distributions of parameters of the Boltzmann machine and assume that the prior distributions are governed by some hyperparameters. The introduction of the prior distributions is strongly connected with the regularized ML estimation (cf. Sec. II.1). As mentioned above, the aim of the usual BML is to optimize the values of the parameters of the Boltzmann machine by using a set of observed data points. Meanwhile, the aim of the problem investigated in this study is the estimation of appropriate values of the hyperparameters from the dataset without estimating specific values of the parameters. One way to allow us to accomplish this from the Bayesian point of view is the empirical Bayes method (or also called type-II ML estimation or evidence approximation) MacKay (1992); Bishop (2006) (cf. Sec. II.2). The schemes of the usual BML and of our problem are illustrated in Fig. 1.
However, the evaluation of the likelihood function in the empirical Bayes method is again intractable, because it involves intractable multiple integrations over the partition function. In this study, we analyze the empirical Bayes method for fully-connected Boltzmann machines, using statistical mechanical techniques based on the replica method Mezard et al. (1987); Nishimori (2001) and the Plefka expansion, to derive an algorithm for it. We consider two cases for the prior distribution of the couplings: a Gaussian prior and a Laplace prior.
The rest of this paper is organized as follows. The formulations of the usual BML and the empirical Bayes method are presented in Sec. II. In Sec. III, we describe our statistical mechanical analysis for the empirical Bayes method. The proposed inference algorithm obtained from our analysis is shown in Sec. III.3 with its pseudocode. In Sec. IV, we examine our proposed method through numerical experiments. Finally, the summary and some discussions are presented in Sec. V.
II Boltzmann Machine and Empirical Bayes Method
II.1 Boltzmann machine and prior distributions
Consider a fully-connected Boltzmann machine with Ising variables Ackley et al. (1985):
where the sum runs over all the distinct pairs of variables, and the partition function is defined by
where the sum runs over all the possible configurations of the Ising variables. The two sets of parameters denote the external fields and the couplings, respectively.
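For intuition about why the partition function is intractable at scale, it can be evaluated exactly for a small machine by enumerating all spin configurations. The following sketch (symbol names such as `h` and `J` are ours, not the paper's notation) uses the standard fully-connected form with the pair sum over distinct pairs i &lt; j:

```python
import itertools
import numpy as np

def partition_function(h, J):
    """Brute-force partition function of a fully-connected Boltzmann
    machine over Ising variables s_i in {-1, +1}; J is used only above
    the diagonal (distinct pairs i < j)."""
    n = len(h)
    Z = 0.0
    for s in itertools.product([-1, 1], repeat=n):
        s = np.array(s, dtype=float)
        pair = sum(J[i, j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
        Z += np.exp(h @ s + pair)
    return Z

rng = np.random.default_rng(0)
n = 4
h = rng.normal(0.0, 0.1, n)
J = rng.normal(0.0, 0.2, (n, n))
Z = partition_function(h, J)

# Sanity check: the Boltzmann probabilities sum to one.
total = 0.0
for s in itertools.product([-1, 1], repeat=n):
    s = np.array(s, dtype=float)
    pair = sum(J[i, j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
    total += np.exp(h @ s + pair) / Z
print(abs(total - 1.0) < 1e-9)  # True
```

The enumeration over all 2^n configurations is exactly what becomes infeasible for realistic n, which is why the mean-field approximations cited above are needed.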
Given the N observed data points, we define the log-likelihood function:
Maximizing the log-likelihood function with respect to the fields and couplings (i.e., the ML estimation) corresponds exactly to BML (or the inverse Ising problem), i.e.,
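The ML condition behind BML is moment matching: at the maximum of the log-likelihood, the model's means and pair correlations equal the sample averages. A minimal sketch of exact gradient ascent for a tiny model (feasible only because the moments can be enumerated; the learning rate and symbols are our illustrative choices, and population moments of a known model stand in for data averages):

```python
import itertools
import numpy as np

def model_moments(h, J):
    """Exact <s_i> and <s_i s_j> of a Boltzmann machine by enumeration.
    J is symmetric with zero diagonal, so 0.5 * s^T J s counts each pair once."""
    n = len(h)
    states = np.array(list(itertools.product([-1, 1], repeat=n)), dtype=float)
    energies = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    w = np.exp(energies - energies.max())
    p = w / w.sum()
    return p @ states, (states * p[:, None]).T @ states

rng = np.random.default_rng(1)
n = 4
h_true = rng.normal(0.0, 0.2, n)
J_true = rng.normal(0.0, 0.3, (n, n))
J_true = np.triu(J_true, 1); J_true = J_true + J_true.T
d1, d2 = model_moments(h_true, J_true)   # stand-ins for data averages

# Gradient ascent: dl/dh_i = d1_i - <s_i>, dl/dJ_ij = d2_ij - <s_i s_j>.
h, J = np.zeros(n), np.zeros((n, n))
for _ in range(5000):
    m1, m2 = model_moments(h, J)
    h += 0.1 * (d1 - m1)
    dJ = 0.1 * (d2 - m2)
    np.fill_diagonal(dJ, 0.0)
    J += dJ

m1, m2 = model_moments(h, J)
print(np.allclose(m1, d1, atol=1e-5) and np.allclose(m2, d2, atol=1e-5))  # True
```

Because the log-likelihood is concave in the fields and couplings, this moment-matching fixed point is the global maximum; the practical difficulty lies in computing the model moments, not in non-convexity.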
Now, we introduce prior distributions for the fields and the couplings, governed by their respective hyperparameters. One of the most important motivations for introducing the prior distributions is the Bayesian interpretation of the regularized ML estimation Bishop (2006). Given the observed dataset, by using the prior distributions, the posterior distribution of the parameters is expressed as
The distribution in the denominator of Eq. (5) is sometimes referred to as the evidence. By using the posterior distribution, the maximum a posteriori (MAP) estimate of the parameters is obtained as
The MAP estimation in Eq. (6) corresponds to the regularized ML estimation, in which the negative log-priors act as penalty terms. For example, (i) when the prior distribution of the couplings is the Gaussian prior, the negative log-prior corresponds to an L2 (ridge) regularization term, and the hyperparameter sets its coefficient; (ii) when the prior distribution of the couplings is the Laplace prior, the negative log-prior corresponds to an L1 (lasso) regularization term, and the hyperparameter again sets its coefficient. The variances of these two prior distributions are taken to be identical. In this study, as a simple test case, we use these two prior distributions for the couplings, and a Dirac-delta prior for the external fields.
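The penalty correspondence can be seen in one dimension, where the MAP problems under the two priors have closed forms (a toy quadratic stands in for the log-likelihood; all names here are illustrative): a Gaussian prior yields uniform shrinkage of the ML estimate, while a Laplace prior yields soft thresholding, setting small couplings exactly to zero.

```python
import numpy as np

def map_l2(a, lam):
    """argmin_x 0.5*(x - a)^2 + 0.5*lam*x^2 : MAP under a Gaussian prior
    (ridge shrinkage of the unregularized optimum a)."""
    return a / (1.0 + lam)

def map_l1(a, lam):
    """argmin_x 0.5*(x - a)^2 + lam*|x| : MAP under a Laplace prior
    (soft thresholding)."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

a = np.array([1.0, -0.2, 0.05])   # toy unregularized ML estimates
print(map_l2(a, 0.5))   # every coordinate shrunk toward zero
print(map_l1(a, 0.1))   # [0.9, -0.1, 0.0]: the smallest coordinate is zeroed
```

The hyperparameter plays exactly the role of the regularization coefficient in both maps, which is the correspondence the text describes.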
II.2 Framework of the empirical Bayes method
Using the empirical Bayes method, we can infer the values of the hyperparameters from the observed dataset. We define the marginal log-likelihood function as
where the average is taken over the prior distributions; i.e.,
We refer to the marginal log-likelihood function as the empirical Bayes likelihood function in this study. From the perspective of the empirical Bayes method, the optimal values of the hyperparameters are obtained by maximizing the empirical Bayes likelihood function; i.e.,
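For a machine small enough that the likelihood can be evaluated exactly, this maximization can be carried out directly. The sketch below does so for a two-spin, zero-field machine with a single coupling drawn from a Gaussian prior: the empirical Bayes likelihood is then a one-dimensional integral over the prior, computed on a grid, and the prior width maximizing it is picked from a candidate list (all names, the data summary, and the grid are our illustrative choices, not the paper's algorithm):

```python
import numpy as np

def marginal_log_lik(k, N, sigma):
    """Empirical Bayes (marginal) log-likelihood for a 2-spin Boltzmann
    machine with zero fields and one coupling J ~ N(0, sigma^2).
    Of the N observed pairs, k have agreeing spins (s1 == s2)."""
    J = np.linspace(-6.0, 6.0, 4001)
    p_agree = np.exp(J) / (2.0 * np.cosh(J))        # P(s1 = s2 | J)
    ll = k * np.log(p_agree) + (N - k) * np.log(1.0 - p_agree)
    prior = np.exp(-0.5 * (J / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    m = ll.max()                                     # log-sum-exp for stability
    dJ = J[1] - J[0]
    return m + np.log((prior * np.exp(ll - m)).sum() * dJ)

k, N = 70, 100        # 70 of 100 observed pairs agree -> implied coupling ~ 0.42
sigmas = [0.1, 0.42, 1.5]
scores = [marginal_log_lik(k, N, s) for s in sigmas]
print(sigmas[int(np.argmax(scores))])  # 0.42: the width matching the coupling
```

On this toy problem the winning prior width tracks the scale of the coupling implied by the data; the body of the paper is about making the analogous maximization tractable when the integral over the priors is high-dimensional.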
The marginal log-likelihood function can be rewritten as
Consider the limit in which the number of observed data points tends to infinity. In this case, by using the saddle-point evaluation, Eq. (13) reduces to
In this limit, the empirical Bayes estimates thus converge to the ML estimates of the hyperparameters of the prior distributions, evaluated at the ML estimates of the parameters (i.e., at the solution to the BML). This indicates that the parameter estimation can be conducted independently of the hyperparameter estimation. In this study, we do not concern ourselves with this trivial case.
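Concretely, in this limit the hyperparameter estimates are just the ML fits of the priors to the BML solution, which are available in closed form for zero-mean Gaussian and Laplace priors (the names below and the stand-in couplings are illustrative):

```python
import numpy as np

# Stand-in for the couplings returned by BML in the large-data limit,
# drawn here from a known scale 0.3 so the fits can be checked.
rng = np.random.default_rng(5)
J_ml = rng.normal(0.0, 0.3, size=1000)

# ML fit of a zero-mean Gaussian prior N(0, v): v_hat = mean(J^2).
v_hat = np.mean(J_ml ** 2)

# ML fit of a zero-mean Laplace prior with scale b: b_hat = mean(|J|).
b_hat = np.mean(np.abs(J_ml))

print(abs(np.sqrt(v_hat) - 0.3) < 0.05)                   # True
print(abs(b_hat - 0.3 * np.sqrt(2.0 / np.pi)) < 0.05)     # True (E|J| = sigma*sqrt(2/pi))
```

These are exactly the "insert the BML solution into the prior's ML estimator" maps described above; the nontrivial regime studied in this paper is the finite-data case, where this decoupling no longer holds.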
III Statistical Mechanical Analysis
The empirical Bayes likelihood function in Eq. (11) involves intractable multiple integrations. In this section, we evaluate the empirical Bayes likelihood function using a statistical mechanical analysis. We consider two types of prior distribution of the couplings: one is the Gaussian prior in Eq. (8), and the other is the Laplace prior in Eq. (9).
III.1 Replica method
The empirical Bayes likelihood function in Eq. (11) can be represented as
Here, the quantities above are the sample averages of the observed data points. We assume that the replica number is a natural number, and therefore Eq. (15) can be expressed as
where the superscripts are replica indices, and each replica carries its own Ising variables on the sites. The sum is taken over all the possible configurations of the replicated system. We evaluate the expression under the assumption that the replica number is a natural number, after which we take the appropriate limit of the replica number in the evaluation result to obtain the empirical Bayes likelihood function (this is the so-called replica trick).
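The device referred to here can be stated compactly. For a random partition function $Z$, the classic zero-limit form of the identity is

```latex
\log Z \;=\; \lim_{n \to 0} \frac{Z^{n} - 1}{n},
\qquad\text{so}\qquad
\mathbb{E}\bigl[\log Z\bigr] \;=\; \lim_{n \to 0} \frac{\mathbb{E}\bigl[Z^{n}\bigr] - 1}{n}.
```

$\mathbb{E}[Z^{n}]$ is evaluated for natural $n$, where it factorizes into $n$ coupled copies (replicas) of the system, and the resulting analytic expression in $n$ is then continued to the required non-integer value. In the present setting the relevant power of the partition function is fixed by the data rather than sent to zero, but the integer-$n$ evaluation followed by analytic continuation is the same device; the equation above is the generic identity, not the paper's specific expression.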
This is the Hamiltonian of the replicated system, where the second sum runs over all the distinct pairs of replicas.
III.2 Plefka expansion
Because the replicated free energy in Eq. (19) includes intractable multiple summations, an approximation is needed to proceed with our evaluation. In this section, we approximate the replicated free energy using the Plefka expansion Plefka (1982). In brief, the Plefka expansion is a perturbative expansion of a Gibbs free energy, which is the dual form of the corresponding Helmholtz free energy.
The Gibbs free energy is obtained as
The derivation of this Gibbs free energy is described in Appendix A. It is noteworthy that this type of expression of the Gibbs free energy implies the replica-symmetric (RS) assumption. To take replica-symmetry breaking (RSB) into account, explicit treatments of the overlaps between different replicas are needed Yasuda et al. (2012). By expanding the Gibbs free energy around the non-interacting point, we obtain
where the negative mean-field entropy term is defined by
III.3 Inference algorithm
From Eq. (31), the solution is immediately obtained by the three-case analysis given there. Here, we ignore the degenerate boundary case, because it hardly occurs in realistic settings. By using Eqs. (30) and (31), we can obtain the solution to the empirical Bayes inference without any iterative processes. The pseudocode of the proposed procedure is shown in Algorithm 1.
In the proposed method, the estimate of the one hyperparameter does not affect the determination of the other. Many mean-field-based methods for BML (e.g., those listed in Sec. I) have similar procedures, in which the fields are determined separately from the couplings. This can be seen as a common property of mean-field-based methods for BML, including the current empirical Bayes problem.
III.4 Evaluation based on the Laplace prior
where . Here, we assume
By using the perturbative approximation,
we obtain the approximation of Eq. (32) as
The right-hand side of this equation coincides with that in Eq. (17). This means that the empirical Bayes inference based on the Laplace prior in Eq. (9) is (approximately) equivalent to that based on the Gaussian prior in Eq. (8) when the assumption of Eq. (33) is justified. Thus, we can also use the algorithm presented in Sec. III.3 for the case of the Laplace prior.
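This approximate equivalence can be checked numerically on a toy model: for a two-spin machine with one coupling, the marginal likelihood under a Laplace prior and under a variance-matched Gaussian prior nearly coincide when the common variance is small, and separate as it grows (the model, symbols, and quadrature grid below are our illustrative choices):

```python
import numpy as np

def log_marginal(k, N, prior_pdf):
    """log of integral prior(J) * likelihood(J) dJ for a 2-spin, zero-field
    Boltzmann machine in which k of N observed spin pairs agree."""
    J = np.linspace(-6.0, 6.0, 6001)
    p = np.exp(J) / (2.0 * np.cosh(J))               # P(s1 = s2 | J)
    ll = k * np.log(p) + (N - k) * np.log(1.0 - p)
    m = ll.max()
    dJ = J[1] - J[0]
    return m + np.log((prior_pdf(J) * np.exp(ll - m)).sum() * dJ)

def gauss_pdf(v):
    return lambda J: np.exp(-0.5 * J ** 2 / v) / np.sqrt(2.0 * np.pi * v)

def laplace_pdf(v):
    b = np.sqrt(v / 2.0)                              # Laplace scale for variance v
    return lambda J: np.exp(-np.abs(J) / b) / (2.0 * b)

k, N = 60, 100
diffs = {}
for v in (0.001, 1.0):
    g = log_marginal(k, N, gauss_pdf(v))
    l = log_marginal(k, N, laplace_pdf(v))
    diffs[v] = abs(g - l)
print(diffs[0.001] < diffs[1.0])  # True: the priors agree when the variance is small
```

This matches the statement above: with the variances matched, the two priors give (approximately) the same empirical Bayes likelihood in the small-variance regime where the perturbative approximation applies.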
IV Numerical Experiments
In this section, we describe the results of our numerical experiments. In these experiments, the observed dataset is generated from a generative Boltzmann machine, which has the same form as Eq. (1), by using annealed importance sampling Neal (2001). The parameters of the generative Boltzmann machine are drawn from the prior distributions in Eqs. (4) and (10). That is, we consider the model-matched case (i.e., the generative and learning models are identical).
In the following, we use the notations defined above. The standard deviations of the Gaussian prior in Eq. (8) and of the Laplace prior in Eq. (9) are then identical. We denote the hyperparameters of the generative Boltzmann machine by separate symbols to distinguish them from those of the learning model.
IV.1 Gaussian prior case
Here, we consider the case in which the prior distribution of the couplings is the Gaussian prior in Eq. (8). In this case, the Boltzmann machine corresponds to the Sherrington-Kirkpatrick (SK) model Sherrington and Kirkpatrick (1975), and it therefore exhibits the spin-glass transition when the width of the coupling distribution exceeds its critical value.
First, we consider the case . We show the scatter plots for the estimation of for various when and in Fig. 2.
The detailed values of the plots for some values are shown in Tab. 1.
When the generative model lies outside the spin-glass phase, our estimates are in good agreement with the true values, whereas the agreement deteriorates in the spin-glass phase. This implies that the validity of our perturbative approximation is lost in the spin-glass phase, as is often the case with many mean-field approximations. Fig. 3 shows the scatter plots for various settings.
A smaller causes to be overestimated and a larger causes it to be underestimated. At least in our experiments, the optimal value of seems to be when . Our method can estimate together with . The results for the estimation of when and are shown in Fig. 4.
Figs. 4(a) and (b) show the average of (i.e., the mean absolute error (MAE)) and the standard deviation of over 300 experiments, respectively. The MAE and standard deviation drastically increase in the region .
Next, we consider the cases . The scatter plots for the estimation of for various values when and are shown in Fig. 5.
The appropriate values of when and “approximately” seem to be and , respectively. The detailed values of these plots for some values are shown in Tabs. 2 and 3. The results for the estimation of when and and when and are shown in Figs. 6 and 7, respectively.
The increases in the MAE and standard deviations occur earlier than for the case in Fig. 4.
One of the largest qualitative differences between the cases and is the scale of . In the case , the optimal was scaled by with respect to (i.e., ). Meanwhile, in the case , the optimal is scaled by with respect to (i.e., ). This change of scale can be understood from a scale evaluation for the terms in the empirical Bayes likelihood function in Eq. (24). The detailed reasoning is given in Appendix C.
IV.2 Laplace prior case
V Summary and Discussions
In this study, we proposed a hyperparameter inference algorithm by analyzing the empirical Bayes likelihood function in Eq. (11) using the replica method and the Plefka expansion. The validity of our method was examined in numerical experiments for the Gaussian and Laplace priors, which demonstrated the existence of an appropriate scaling of the dataset size at which the values of the hyperparameters are accurately recovered.
However, some problems remain. The first is the scaling of the dataset size. In our experiments, we found that an appropriate scaling exists, which differs between the two regimes examined. However, such scales seem unnatural, because they should not appear in the original framework of the empirical Bayes method. As discussed in Sec. II.2, in the large-dataset limit, maximizing the empirical Bayes likelihood function reduces to the ML estimation of the prior distributions at the solution to BML. This must lead to the correct hyperparameters, because the solution to BML becomes exact in this limit. Therefore, such unnatural scales appear due to our approximation, which is also supported by the scale analysis given in Appendix C. An improvement of the approximation (e.g., by evaluating higher-order terms in the Plefka expansion or by using some other approximations) might remove these unnatural behaviors.
The second problem is the optimal setting of the scaling. Empirically, we found an appropriate value in one regime and observed that it decreases as the system grows. As can be seen in the results of our experiments, the solution of our method is robust to this choice in the small regime and sensitive to it in the large regime. Estimating this optimal setting is very important for our method and would make it more practical. This problem is likely to be strongly related to the first problem.
The third problem is the degradation of the estimation accuracy in the spin-glass phase. In our experiments, the estimation accuracies of the hyperparameters were clearly degraded in the spin-glass phase. This means that our Plefka expansion based on the RS assumption loses its validity in the spin-glass phase. In Ref. Yasuda et al. (2012), a Plefka expansion for the one-step RSB was proposed. Employing this expansion instead of the current one could reduce the degradation in the spin-glass phase. These three problems should be addressed in our future studies.
In this study, we used fully-connected Boltzmann machines whose variables are all visible. We are also interested in extending our method to other types of Boltzmann machines, such as those with specific structures or hidden variables. Furthermore, we considered the model-matched case (i.e., the case in which the generative model and the learning model are the same) in the current study, but model-mismatched cases are more practical and important.
Appendix A Gibbs Free Energy
In this appendix, we derive the Gibbs free energy for the replicated (Helmholtz) free energy in Eq. (19).
The replicated free energy is obtained by minimizing the variational free energy, defined by
under the normalization constraint, where the test distribution is defined over the configurations of the replicated system, and the Hamiltonian of the replicated system is defined in Eq. (20).
The Gibbs free energy is obtained by adding new constraints to the minimization of the variational free energy. Here, we add the above relation as the constraint. By using Lagrange multipliers, the Gibbs free energy is obtained as
where “extr” denotes the extremum with respect to the assigned parameters. By performing the extremum operation in Eq. (35), we obtain
The replicated free energy in Eq. (19) coincides with the extremum of this Gibbs free energy; i.e.,
Appendix B Derivation of the Coefficients of the Plefka Expansion
The Plefka expansion considered in this study can be obtained by expanding the Gibbs free energy in Eq. (21) around the non-interacting point.
When , we have
where is defined in Eq. (23).
For the derivations of the coefficients, we decompose the Gibbs free energy in Eq. (21) into two parts:
Coefficient is defined by
The derivative leads to