## 1 Introduction

The Boltzmann machine has received considerable attention, particularly after the publication of the seminal paper by Hinton and Salakhutdinov on autoencoders with stacked restricted Boltzmann machines [21], which led to today's success of, and expectations for, deep learning [54, 13], as well as a wide range of applications of Boltzmann machines such as collaborative filtering [1], classification of images and documents [33], and human choice [45, 47]. The Boltzmann machine is a stochastic (generative) model that can represent a probability distribution over binary patterns and others (see Section 2). The stochastic or generative capability of the Boltzmann machine has not been fully exploited in today's deep learning. For further advancement of the field, it is important to understand the basics of the Boltzmann machine, particularly from probabilistic perspectives. In this paper, we review fundamental properties of Boltzmann machines, with particular emphasis on probabilistic representations that allow intuitive interpretations in terms of probabilities.

A core of this paper is the learning rules based on gradients or stochastic gradients for Boltzmann machines (Sections 3-4). These learning rules maximize the log-likelihood of a given dataset or minimize the Kullback-Leibler (KL) divergence to a target distribution. In particular, Boltzmann machines admit concise mathematical representations for their gradients and Hessians. For example, Hessians can be represented with covariance matrices.

The exact learning rules, however, turn out to be computationally intractable for general Boltzmann machines. We then review approximate learning methods such as Gibbs sampler and contrastive divergence in Section 5.

We also review other models that are related to the Boltzmann machine in Section 6. For example, the Markov random field is a generalization of the Boltzmann machine. We also discuss how to deal with real valued distributions by modifying the Boltzmann machine.

The intractability of exact learning of the Boltzmann machine motivates tractable energy-based learning. Some of the approximate learning methods for the Boltzmann machine may be considered as a form of energy-based learning. As a practical example, we review an energy-based model for face detection in Section 7.

This survey paper is based on a personal note prepared for the first of the four parts of a tutorial given at the 26th International Joint Conference on Artificial Intelligence (IJCAI-17), held in Melbourne, Australia on August 21, 2017. See the tutorial webpage¹ for information about the tutorial. A survey corresponding to the third part of the tutorial (Boltzmann machines for time-series) can be found in [43].

¹ https://researcher.watson.ibm.com/researcher/view_group.php?id=7834

## 2 The Boltzmann machine

A Boltzmann machine is a network of units that are connected to each other (see Figure 1). Let $N$ be the number of units. Each unit takes a binary value (0 or 1). Let $X_i$ be the random variable representing the value of the $i$-th unit for $1 \le i \le N$. We use a column vector $\mathbf{X} = (X_1, \ldots, X_N)^\top$ to denote the random values of the $N$ units. The Boltzmann machine has two types of parameters: bias and weight. Let $b_i$ be the bias for the $i$-th unit for $1 \le i \le N$, and let $w_{i,j}$ be the weight between unit $i$ and unit $j$ for $1 \le i < j \le N$. We use a column vector $\mathbf{b}$ to denote the bias for all units and a matrix $\mathrm{W}$ to denote the weight for all pairs of units. Namely, the $(i,j)$-th element of $\mathrm{W}$ is $w_{i,j}$. We let $w_{i,j} = 0$ for $i \ge j$ (so that $\mathrm{W}$ is strictly upper triangular) and for each pair of units that are disconnected from each other. The parameters are collectively denoted by

$$\theta \equiv (b_1, \ldots, b_N, w_{1,2}, \ldots, w_{N-1,N}), \tag{1}$$

which we also denote as $\theta = (\mathbf{b}, \mathrm{W})$.

The energy of the Boltzmann machine is defined by

$$E_\theta(\mathbf{x}) = -\sum_{i=1}^{N} b_i \, x_i - \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} w_{i,j} \, x_i \, x_j \tag{2}$$

$$\hphantom{E_\theta(\mathbf{x})} = -\mathbf{b}^\top \mathbf{x} - \mathbf{x}^\top \mathrm{W} \, \mathbf{x}, \tag{3}$$

where $\mathbf{x} = (x_1, \ldots, x_N)^\top$ denotes a binary pattern.

From the energy, the Boltzmann machine defines the probability distribution over binary patterns as follows:

$$P_\theta(\mathbf{x}) = \frac{\exp(-E_\theta(\mathbf{x}))}{\sum_{\tilde{\mathbf{x}}} \exp(-E_\theta(\tilde{\mathbf{x}}))}, \tag{4}$$

where the summation with respect to $\tilde{\mathbf{x}}$ is over all of the $2^N$ possible $N$-bit binary values. Namely, the higher the energy of a pattern $\mathbf{x}$, the less likely the $\mathbf{x}$ is generated. For a moment, we do not address the computational aspect of the denominator, which involves a summation of $2^N$ terms. This denominator is also known as the partition function:

$$Z \equiv \sum_{\tilde{\mathbf{x}}} \exp(-E_\theta(\tilde{\mathbf{x}})). \tag{5}$$
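For intuition, (4) and (5) can be evaluated directly by brute-force enumeration when $N$ is small. The following Python sketch (the function names and the toy parameters are our own, not from the paper) computes the exact distribution of a 3-unit machine:

```python
import itertools

import numpy as np

def energy(x, b, W):
    """Energy (3): E(x) = -b.x - x^T W x, with W strictly upper triangular."""
    return -(b @ x) - x @ W @ x

def boltzmann_distribution(b, W):
    """Exact P_theta over all 2^N patterns, normalized by the partition function (5)."""
    N = len(b)
    patterns = np.array(list(itertools.product([0, 1], repeat=N)))
    unnormalized = np.array([np.exp(-energy(x, b, W)) for x in patterns])
    Z = unnormalized.sum()  # partition function: a sum of 2^N terms
    return patterns, unnormalized / Z

# Toy 3-unit machine: units 1 and 2 excite each other; unit 3 has a negative bias.
b = np.array([0.0, 0.0, -1.0])
W = np.zeros((3, 3))
W[0, 1] = 2.0  # w_{1,2} in the paper's 1-based notation
patterns, probs = boltzmann_distribution(b, W)
# The lowest-energy pattern (1, 1, 0) receives the highest probability.
```

This brute-force evaluation costs $O(2^N)$ and is only meant to make the definitions concrete; the rest of the paper is largely about avoiding this enumeration.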

A Boltzmann machine can be used to model the probability distribution, $P_{\rm target}$, of target patterns. Namely, by optimally setting the values of $\theta$, we approximate $P_{\rm target}$ with $P_\theta$. Here, some of the units of the Boltzmann machine are allowed to be hidden, which means that those units do not directly correspond to the target patterns (see Figure 2). The units that directly correspond to the target patterns are called visible. The primary purpose of the hidden units is to allow particular dependency between visible units, which cannot be represented solely with visible units. The visible units may be divided into input and output (see Figure 2). Then the Boltzmann machine can be used to model the conditional distribution of the output patterns given an input pattern.

## 3 Learning a generative model

Now we consider the problem of optimally setting the values of $\theta$ in a way that $P_\theta$ best approximates a given $P_{\rm target}$. Specifically, we seek to minimize the Kullback-Leibler (KL) divergence from $P_\theta$ to $P_{\rm target}$ [2]:

$$\mathrm{KL}(P_{\rm target} \parallel P_\theta) = \sum_{\mathbf{x}} P_{\rm target}(\mathbf{x}) \log \frac{P_{\rm target}(\mathbf{x})}{P_\theta(\mathbf{x})} \tag{6}$$

$$= \sum_{\mathbf{x}} P_{\rm target}(\mathbf{x}) \log P_{\rm target}(\mathbf{x}) - \sum_{\mathbf{x}} P_{\rm target}(\mathbf{x}) \log P_\theta(\mathbf{x}). \tag{7}$$

The first term of (7) is independent of $\theta$. It thus suffices to maximize the negation of the second term:

$$f(\theta) \equiv \sum_{\mathbf{x}} P_{\rm target}(\mathbf{x}) \log P_\theta(\mathbf{x}). \tag{8}$$

A special case of $P_{\rm target}$ is the empirical distribution of the patterns in a given training dataset $\mathcal{D}$:

$$P_{\rm target}(\mathbf{x}) = \frac{1}{|\mathcal{D}|} \sum_{\tilde{\mathbf{x}} \in \mathcal{D}} \mathbb{1}[\tilde{\mathbf{x}} = \mathbf{x}], \tag{9}$$

where $|\mathcal{D}|$ is the number of the patterns in $\mathcal{D}$. Then the objective function (8) becomes

$$f(\theta) = \frac{1}{|\mathcal{D}|} \sum_{\tilde{\mathbf{x}} \in \mathcal{D}} \log P_\theta(\tilde{\mathbf{x}}) \tag{10}$$

$$= \frac{1}{|\mathcal{D}|} \log \prod_{\tilde{\mathbf{x}} \in \mathcal{D}} P_\theta(\tilde{\mathbf{x}}), \tag{11}$$

which is the log-likelihood of $\mathcal{D}$ with respect to $P_\theta$ when multiplied by $|\mathcal{D}|$. By defining

$$P_\theta(\mathcal{D}) \equiv \prod_{\tilde{\mathbf{x}} \in \mathcal{D}} P_\theta(\tilde{\mathbf{x}}), \tag{12}$$

we can represent $f(\theta)$ as follows:

$$f(\theta) = \frac{1}{|\mathcal{D}|} \log P_\theta(\mathcal{D}). \tag{13}$$

To find the optimal values of $\theta$, we take the gradient of $f(\theta)$ with respect to $\theta$:

$$\nabla f(\theta) = \sum_{\mathbf{x}} P_{\rm target}(\mathbf{x}) \, \nabla \log P_\theta(\mathbf{x}). \tag{14}$$

### 3.1 All of the units are visible

We start with the simplest case where all of the units are visible (see Figure 2). Then the energy of the Boltzmann machine is simply given by (3), and the probability distribution is given by (4).

#### 3.1.1 Gradient

We will derive a specific representation of $\nabla \log P_\theta(\mathbf{x})$ to examine the form of $\nabla f(\theta)$ in this case:

$$\nabla \log P_\theta(\mathbf{x}) = \nabla \left( -E_\theta(\mathbf{x}) - \log \sum_{\tilde{\mathbf{x}}} \exp(-E_\theta(\tilde{\mathbf{x}})) \right) \tag{15}$$

$$= -\nabla E_\theta(\mathbf{x}) - \frac{\nabla \sum_{\tilde{\mathbf{x}}} \exp(-E_\theta(\tilde{\mathbf{x}}))}{\sum_{\tilde{\mathbf{x}}} \exp(-E_\theta(\tilde{\mathbf{x}}))} \tag{16}$$

$$= -\nabla E_\theta(\mathbf{x}) + \sum_{\tilde{\mathbf{x}}} \frac{\exp(-E_\theta(\tilde{\mathbf{x}}))}{\sum_{\hat{\mathbf{x}}} \exp(-E_\theta(\hat{\mathbf{x}}))} \, \nabla E_\theta(\tilde{\mathbf{x}}) \tag{17}$$

$$= -\nabla E_\theta(\mathbf{x}) + \sum_{\tilde{\mathbf{x}}} P_\theta(\tilde{\mathbf{x}}) \, \nabla E_\theta(\tilde{\mathbf{x}}), \tag{18}$$

where the summation with respect to $\tilde{\mathbf{x}}$ is over all of the $2^N$ possible binary patterns, similar to the summation with respect to $\hat{\mathbf{x}}$. Here, (18) follows from (4) and (17).

Plugging the last expression into (14), we obtain

$$\nabla f(\theta) = \sum_{\mathbf{x}} P_{\rm target}(\mathbf{x}) \left( -\nabla E_\theta(\mathbf{x}) + \sum_{\tilde{\mathbf{x}}} P_\theta(\tilde{\mathbf{x}}) \, \nabla E_\theta(\tilde{\mathbf{x}}) \right) \tag{19}$$

$$= -\sum_{\mathbf{x}} P_{\rm target}(\mathbf{x}) \, \nabla E_\theta(\mathbf{x}) + \sum_{\tilde{\mathbf{x}}} P_\theta(\tilde{\mathbf{x}}) \, \nabla E_\theta(\tilde{\mathbf{x}}) \tag{20}$$

$$= \sum_{\mathbf{x}} \left( P_\theta(\mathbf{x}) - P_{\rm target}(\mathbf{x}) \right) \nabla E_\theta(\mathbf{x}). \tag{21}$$

The last expression allows an intuitive interpretation of a gradient-based method for increasing the value of $f(\theta)$:

$$\theta \leftarrow \theta + \eta \sum_{\mathbf{x}} \left( P_\theta(\mathbf{x}) - P_{\rm target}(\mathbf{x}) \right) \nabla E_\theta(\mathbf{x}), \tag{22}$$

where $\eta$ is the learning rate (or the step size). Namely, for each pattern $\mathbf{x}$, we compare $P_\theta(\mathbf{x})$ against $P_{\rm target}(\mathbf{x})$. If $P_\theta(\mathbf{x})$ is greater than $P_{\rm target}(\mathbf{x})$, we update $\theta$ in a way that increases the energy $E_\theta(\mathbf{x})$, so that the $\mathbf{x}$ becomes less likely to be generated with $P_\theta$. If $P_\theta(\mathbf{x})$ is smaller than $P_{\rm target}(\mathbf{x})$, we update $\theta$ in a way that decreases $E_\theta(\mathbf{x})$.

We will also write (20) as follows:

$$\nabla f(\theta) = -\mathrm{E}_{\rm target}\!\left[ \nabla E_\theta(\mathbf{X}) \right] + \mathrm{E}_\theta\!\left[ \nabla E_\theta(\mathbf{X}) \right], \tag{23}$$

where $\mathrm{E}_{\rm target}[\cdot]$ is the expectation with respect to $P_{\rm target}$, $\mathrm{E}_\theta[\cdot]$ is the expectation with respect to $P_\theta$, and $\mathbf{X}$ is the vector of random variables denoting the values of the units. Note that the expression of the gradient in (23) holds for any form of energy, as long as the energy is used to define the probability as in (4).

Now we take into account the specific form of the energy given by (3). Taking the derivative with respect to each parameter, we obtain

$$\frac{\partial E_\theta(\mathbf{x})}{\partial b_i} = -x_i \tag{24}$$

$$\frac{\partial E_\theta(\mathbf{x})}{\partial w_{i,j}} = -x_i \, x_j \tag{25}$$

for $1 \le i \le N$ and $1 \le i < j \le N$. From (23), we then find

$$\frac{\partial f(\theta)}{\partial b_i} = \mathrm{E}_{\rm target}[X_i] - \mathrm{E}_\theta[X_i] \tag{26}$$

$$\frac{\partial f(\theta)}{\partial w_{i,j}} = \mathrm{E}_{\rm target}[X_i X_j] - \mathrm{E}_\theta[X_i X_j], \tag{27}$$

where $X_i$ is the random variable denoting the value of the $i$-th unit for each $1 \le i \le N$. Notice that the expected value of $X_i$ is the same as the probability of $X_i = 1$, because $X_i$ is binary. In general, exact evaluation of $\mathrm{E}_\theta[X_i]$ or $\mathrm{E}_\theta[X_i X_j]$ is computationally intractable, but we will not be concerned with this computational aspect until Section 5.

A gradient ascent method is thus to iteratively update the parameters as follows:

$$b_i \leftarrow b_i + \eta \left( \mathrm{E}_{\rm target}[X_i] - \mathrm{E}_\theta[X_i] \right) \tag{28}$$

$$w_{i,j} \leftarrow w_{i,j} + \eta \left( \mathrm{E}_{\rm target}[X_i X_j] - \mathrm{E}_\theta[X_i X_j] \right) \tag{29}$$

for $1 \le i \le N$ and $1 \le i < j \le N$. Intuitively, $b_i$ controls how likely it is that the $i$-th unit takes the value 1, and $w_{i,j}$ controls how likely it is that the $i$-th unit and the $j$-th unit simultaneously take the value 1. For example, when $\mathrm{E}_\theta[X_i]$ is smaller than $\mathrm{E}_{\rm target}[X_i]$, we increase $b_i$ to increase $\mathrm{E}_\theta[X_i]$. This form of learning rule appears frequently in the context of Boltzmann machines. Namely, we compare our prediction $\mathrm{E}_\theta[\cdot]$ against the target $\mathrm{E}_{\rm target}[\cdot]$ and update $\theta$ in a way that $\mathrm{E}_\theta[\cdot]$ gets closer to $\mathrm{E}_{\rm target}[\cdot]$.
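For a machine small enough to enumerate all $2^N$ patterns, the exact updates (28)-(29) can be sketched directly. Everything below (the helper names, the toy dataset, and the learning rate) is an illustrative assumption, not taken from the paper:

```python
import itertools

import numpy as np

def all_patterns(N):
    return np.array(list(itertools.product([0, 1], repeat=N)))

def model_probs(b, W, patterns):
    """P_theta via (4), by enumerating all 2^N patterns."""
    energies = -(patterns @ b) - np.einsum('ki,ij,kj->k', patterns, W, patterns)
    p = np.exp(-energies)
    return p / p.sum()

def moments(probs, patterns):
    """E[X_i] and E[X_i X_j] under a distribution given as a probability vector."""
    return probs @ patterns, np.einsum('k,ki,kj->ij', probs, patterns, patterns)

# Empirical target distribution (the special case (9)) of a tiny dataset.
data = np.array([[1, 1, 0], [1, 1, 0], [1, 1, 1], [0, 0, 0]])
patterns = all_patterns(data.shape[1])
target = np.array([(data == x).all(axis=1).mean() for x in patterns])

b = np.zeros(3)
W = np.zeros((3, 3))  # strictly upper triangular
eta = 0.5
for _ in range(2000):
    mean_t, sec_t = moments(target, patterns)
    mean_m, sec_m = moments(model_probs(b, W, patterns), patterns)
    b += eta * (mean_t - mean_m)          # update (28)
    W += eta * np.triu(sec_t - sec_m, 1)  # update (29)
# After training, the model moments approach the target moments.
```

Because $f$ is concave in this fully visible case (as shown in Section 3.1.4), plain gradient ascent with a small fixed step size steadily drives the model moments toward the target moments.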

#### 3.1.2 Stochastic gradient

We now rewrite (20) as follows:

$$\nabla f(\theta) = \sum_{\mathbf{x}} P_{\rm target}(\mathbf{x}) \left( -\nabla E_\theta(\mathbf{x}) + \mathrm{E}_\theta\!\left[ \nabla E_\theta(\mathbf{X}) \right] \right). \tag{30}$$

Namely, $\nabla f(\theta)$ is given by the expected value of $-\nabla E_\theta(\mathbf{x}) + \mathrm{E}_\theta[\nabla E_\theta(\mathbf{X})]$, where the first $\mathbf{x}$ is distributed with respect to $P_{\rm target}$. Recall that the second term is an expectation with respect to $P_\theta$. This suggests stochastic gradient methods [3, 28, 9, 61, 50]. At each step, we sample a pattern $\mathbf{x}$ according to $P_{\rm target}$ and update $\theta$ according to the stochastic gradient:

$$\theta \leftarrow \theta + \eta \, \mathbf{g}_{\mathbf{x}}, \tag{31}$$

where

$$\mathbf{g}_{\mathbf{x}} \equiv -\nabla E_\theta(\mathbf{x}) + \mathrm{E}_\theta\!\left[ \nabla E_\theta(\mathbf{X}) \right]. \tag{32}$$

When the target distribution is the empirical distribution given by the training data $\mathcal{D}$, we only need to take a sample from $\mathcal{D}$ uniformly at random.

The stochastic gradient method based on (31)-(32) allows an intuitive interpretation. At each step, we sample a pattern $\mathbf{x}$ according to the target distribution (or from the training data) and update $\theta$ in a way that the energy of the sampled pattern is reduced. At the same time, the energy of every pattern is increased, where the amount of the increase is proportional to the probability for the Boltzmann machine with the latest parameter $\theta$ to generate that pattern (see Figure 3).

Taking into account the specific form of the energy given by (3), we can derive the specific form of the stochastic gradient:

$$g_{b_i} = x_i - \mathrm{E}_\theta[X_i] \tag{33}$$

$$g_{w_{i,j}} = x_i \, x_j - \mathrm{E}_\theta[X_i X_j], \tag{34}$$

which suggests a stochastic gradient method of iteratively sampling a pattern $\mathbf{x}$ according to the target probability distribution and updating the parameters as follows:

$$b_i \leftarrow b_i + \eta \left( x_i - \mathrm{E}_\theta[X_i] \right) \tag{35}$$

$$w_{i,j} \leftarrow w_{i,j} + \eta \left( x_i \, x_j - \mathrm{E}_\theta[X_i X_j] \right) \tag{36}$$

for $1 \le i \le N$ and $1 \le i < j \le N$.
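The rules (35)-(36) can be sketched as follows, again computing $\mathrm{E}_\theta[\cdot]$ exactly by enumeration (in practice this expectation is itself intractable and must be approximated; see Section 5). The dataset, the decaying step-size schedule, and the helper names are our own illustrative choices:

```python
import itertools

import numpy as np

def model_moments(b, W):
    """Exact E_theta[X_i] and E_theta[X_i X_j]; tractable only for small N."""
    N = len(b)
    patterns = np.array(list(itertools.product([0, 1], repeat=N)))
    # exp(-E(x)) = exp(b.x + x^T W x)
    p = np.exp((patterns @ b) + np.einsum('ki,ij,kj->k', patterns, W, patterns))
    p /= p.sum()
    return p @ patterns, np.einsum('k,ki,kj->ij', p, patterns, patterns)

data = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
rng = np.random.default_rng(0)
b = np.zeros(3)
W = np.zeros((3, 3))
for step in range(3000):
    eta = 5.0 / (50.0 + step)          # Robbins-Monro style decaying step size
    x = data[rng.integers(len(data))]  # sample a pattern from the training data
    mean, second = model_moments(b, W)
    b += eta * (x - mean)                           # update (35)
    W += eta * np.triu(np.outer(x, x) - second, 1)  # update (36)
```

With a decaying step size, the noisy updates average out and the model moments settle near the empirical moments of the data.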

#### 3.1.3 Giving theoretical foundation for Hebb’s rule

The learning rule of (36) has a paramount importance of providing a theoretical foundation for Hebb's rule of learning in biological neural networks [16]:

> When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

In short, "neurons wire together if they fire together" [37]. A unit of a Boltzmann machine corresponds to a neuron, and $X_i = 1$ means that the $i$-th neuron fires. When two neurons, $i$ and $j$, fire ($x_i \, x_j = 1$), the wiring weight $w_{i,j}$ between the two neurons gets stronger according to (36). Here, notice that we have $\mathrm{E}_\theta[X_i X_j] < 1$ as long as the values of $\theta$ are finite.

The learning rule of the Boltzmann machine also involves a mechanism beyond what is suggested by Hebb's rule. Namely, the amount of the change in $w_{i,j}$ when the two neurons ($i$ and $j$) fire depends on how likely those two neurons are to fire together according to $P_\theta$ at that time. More specifically, if the two neurons are already expected to fire together (i.e., $\mathrm{E}_\theta[X_i X_j] \approx 1$), we increase $w_{i,j}$ only by a small amount (i.e., $\eta \, (1 - \mathrm{E}_\theta[X_i X_j])$) even if the two neurons fire together (i.e., $x_i \, x_j = 1$).

Without this additional term ($\mathrm{E}_\theta[X_i]$ or $\mathrm{E}_\theta[X_i X_j]$) in (35)-(36), all of the parameters would monotonically increase. If $x_i = 1$ with nonzero probability in $P_{\rm target}$, then $b_i$ diverges to $+\infty$ almost surely. Otherwise, $b_i$ stays unchanged from the initial value. Likewise, if $x_i \, x_j = 1$ with nonzero probability in $P_{\rm target}$, then $w_{i,j}$ diverges to $+\infty$ almost surely. Otherwise, $w_{i,j}$ stays unchanged.

What is important is that this additional term is formally derived instead of being introduced in an ad hoc manner. Specifically, the learning rule is derived from a stochastic model (i.e., a Boltzmann machine) and an objective function (i.e., minimizing the KL divergence to the target distribution or maximizing the log-likelihood of training data) by taking the gradient with respect to the parameters.

#### 3.1.4 Hessian

We now derive the Hessian of $f(\theta)$ to examine its landscape. Starting from the expression in (27), we obtain

$$\frac{\partial^2 f(\theta)}{\partial w_{i,j} \, \partial w_{k,\ell}} = -\frac{\partial}{\partial w_{k,\ell}} \mathrm{E}_\theta[X_i X_j] \tag{37}$$

$$= -\sum_{\mathbf{x}} x_i \, x_j \, \frac{\partial P_\theta(\mathbf{x})}{\partial w_{k,\ell}} \tag{38}$$

$$= -\sum_{\mathbf{x}} x_i \, x_j \, P_\theta(\mathbf{x}) \left( x_k \, x_\ell - \mathrm{E}_\theta[X_k X_\ell] \right) \tag{39}$$

$$= -\mathrm{E}_\theta[X_i X_j X_k X_\ell] + \mathrm{E}_\theta[X_i X_j] \, \mathrm{E}_\theta[X_k X_\ell], \tag{40}$$

where the last expression is obtained from (18) and (25). The last expression consists of expectations with respect to $P_\theta$ and can be represented conveniently as follows:

$$\frac{\partial^2 f(\theta)}{\partial w_{i,j} \, \partial w_{k,\ell}} = -\left( \mathrm{E}_\theta[X_i X_j X_k X_\ell] - \mathrm{E}_\theta[X_i X_j] \, \mathrm{E}_\theta[X_k X_\ell] \right) \tag{41}$$

$$= -\mathrm{Cov}_\theta(X_i X_j, X_k X_\ell), \tag{42}$$

where $\mathrm{Cov}_\theta(\cdot, \cdot)$ denotes the covariance between two random variables with respect to $P_\theta$. Likewise, we have

$$\frac{\partial^2 f(\theta)}{\partial b_i \, \partial b_j} = -\mathrm{Cov}_\theta(X_i, X_j) \tag{43}$$

$$\frac{\partial^2 f(\theta)}{\partial b_i \, \partial w_{k,\ell}} = -\mathrm{Cov}_\theta(X_i, X_k X_\ell). \tag{44}$$

Therefore, the Hessian of $f(\theta)$ is the negation of a covariance matrix:

$$\nabla^2 f(\theta) = -\mathrm{Cov}_\theta(\mathbf{S}), \tag{45}$$

where we use $\mathrm{Cov}_\theta(\mathbf{S})$ to denote the covariance matrix, with respect to $P_\theta$, of the vector $\mathbf{S}$ of the random variables $X_i$ and the products $X_i X_j$. When $\theta$ is finite, this covariance matrix is positive semidefinite, and $f(\theta)$ is concave. This justifies (stochastic) gradient based approaches to optimizing $f(\theta)$. This concavity has been known [19], but I am not aware of literature that explicitly represents the Hessian with a covariance matrix.
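The identity (45) is easy to check numerically on a toy machine. Since the energy is linear in $\theta$, the Hessian of $f$ equals $-\nabla^2 \log Z$, so the sketch below (our own construction, not from the paper) compares a finite-difference Hessian of $\log Z$ with the covariance matrix of $\mathbf{S}$:

```python
import itertools

import numpy as np

N = 3
patterns = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]

# Rows of S_mat are the vectors S(x) = (x_1, x_2, x_3, x_1 x_2, x_1 x_3, x_2 x_3).
S_mat = np.array([np.concatenate([x, [x[i] * x[j] for i, j in pairs]])
                  for x in patterns])

def log_Z(theta):
    """log partition function; E(x) = -theta . S(x), so log Z = logsumexp(S theta)."""
    a = S_mat @ theta
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

theta = np.array([0.3, -0.2, 0.1, 0.5, -0.4, 0.2])
p = np.exp(S_mat @ theta - log_Z(theta))     # P_theta over the 8 patterns
mean = p @ S_mat
cov = (S_mat - mean).T * p @ (S_mat - mean)  # Cov_theta(S)

# Finite-difference Hessian of log Z; Hess f = -Hess log Z should equal -cov.
eps, D = 1e-3, len(theta)
H = np.zeros((D, D))
for i in range(D):
    for j in range(D):
        ei, ej = np.eye(D)[i] * eps, np.eye(D)[j] * eps
        H[i, j] = (log_Z(theta + ei + ej) - log_Z(theta + ei - ej)
                   - log_Z(theta - ei + ej) + log_Z(theta - ei - ej)) / (4 * eps**2)
```

The check confirms both (45) and the concavity claim: `H` agrees with `cov` up to finite-difference error, and the eigenvalues of $-\mathrm{Cov}_\theta(\mathbf{S})$ are nonpositive.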

#### 3.1.5 Summary

Consider a Boltzmann machine with parameters $\theta = (\mathbf{b}, \mathrm{W})$. When all of the units of the Boltzmann machine are visible, the Boltzmann machine defines a probability distribution of $N$-bit binary patterns by

$$P_\theta(\mathbf{x}) = \frac{\exp(-E_\theta(\mathbf{x}))}{\sum_{\tilde{\mathbf{x}}} \exp(-E_\theta(\tilde{\mathbf{x}}))}, \tag{46}$$

where the energy is

$$E_\theta(\mathbf{x}) = -\mathbf{b}^\top \mathbf{x} - \mathbf{x}^\top \mathrm{W} \, \mathbf{x}. \tag{47}$$

The KL divergence from $P_\theta$ to $P_{\rm target}$ can be minimized (or the log-likelihood of the target data $\mathcal{D}$ having the empirical distribution $P_{\rm target}$ can be maximized) by maximizing

$$f(\theta) = \sum_{\mathbf{x}} P_{\rm target}(\mathbf{x}) \log P_\theta(\mathbf{x}). \tag{48}$$

The gradient and the Hessian of $f(\theta)$ are given by

$$\nabla f(\theta) = \mathrm{E}_{\rm target}[\mathbf{S}] - \mathrm{E}_\theta[\mathbf{S}] \tag{49}$$

$$\nabla^2 f(\theta) = -\mathrm{Cov}_\theta(\mathbf{S}), \tag{50}$$

where $\mathbf{S}$ denotes the vector of the random variables representing the value of a unit or the product of the values of a pair of units:

$$\mathbf{S} \equiv (X_1, \ldots, X_N, X_1 X_2, X_1 X_3, \ldots, X_{N-1} X_N)^\top. \tag{51}$$

### 3.2 Some of the units are hidden

In this section, we consider Boltzmann machines that have both visible units and hidden units. Let $N$ be the number of visible units and $M$ be the number of hidden units.

#### 3.2.1 Necessity of hidden units

We first study the necessity of hidden units [2]. A Boltzmann machine with $N$ units has

$$N + \frac{N \, (N-1)}{2} \tag{52}$$

parameters. This Boltzmann machine is used to model $N$-bit binary patterns. There are $2^N$ possible $N$-bit binary patterns, and a general distribution of $N$-bit patterns assigns a probability to each of those patterns. We need

$$2^N - 1 \tag{53}$$

parameters to characterize this general distribution.

The number of parameters of the Boltzmann machine is smaller than the number of parameters needed to characterize the general distribution as long as $N \ge 3$. This suggests that the family of probability distributions that can be represented by the Boltzmann machine is limited. One way to extend the flexibility of the Boltzmann machine is the use of hidden units.
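The gap between (52) and (53) is easy to tabulate; the small helper below is our own, not from the paper:

```python
def n_params_bm(N):
    """(52): biases plus pairwise weights of a fully connected Boltzmann machine."""
    return N + N * (N - 1) // 2

def n_params_general(N):
    """(53): free parameters of a general distribution over N-bit patterns."""
    return 2 ** N - 1

for N in (2, 3, 5, 10, 20):
    print(N, n_params_bm(N), n_params_general(N))
```

At $N = 2$ the two counts coincide (3 each); from $N = 3$ on, the Boltzmann machine has strictly fewer parameters, and the gap grows exponentially with $N$.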

#### 3.2.2 Free energy

Let $\mathbf{v}$ denote the visible values (i.e., the values of the visible units), $\mathbf{h}$ denote the hidden values, and $\mathbf{x} = (\mathbf{v}, \mathbf{h})$ denote the values of all units. We write the marginal probability distribution of the visible values as follows:

$$P_\theta(\mathbf{v}) = \sum_{\mathbf{h}} P_\theta(\mathbf{v}, \mathbf{h}), \tag{54}$$

where the summation is over all of the possible binary patterns of the hidden values, and

$$P_\theta(\mathbf{v}, \mathbf{h}) = \frac{\exp(-E_\theta(\mathbf{v}, \mathbf{h}))}{\sum_{\tilde{\mathbf{v}}} \sum_{\tilde{\mathbf{h}}} \exp(-E_\theta(\tilde{\mathbf{v}}, \tilde{\mathbf{h}}))}. \tag{55}$$

Here, we write the energy as follows:

$$E_\theta(\mathbf{v}, \mathbf{h}) = -\sum_{i} b_i \, x_i - \sum_{i} \sum_{j > i} w_{i,j} \, x_i \, x_j \tag{56}$$

$$= -\mathbf{b}^\top \mathbf{x} - \mathbf{x}^\top \mathrm{W} \, \mathbf{x}, \tag{57}$$

where $\mathbf{x}$ is the concatenation of $\mathbf{v}$ and $\mathbf{h}$. Now, we define the free energy as follows:

$$F_\theta(\mathbf{v}) \equiv -\log \sum_{\mathbf{h}} \exp(-E_\theta(\mathbf{v}, \mathbf{h})). \tag{58}$$

We can then represent $P_\theta(\mathbf{v})$ in a way similar to the case where all of the units are visible, replacing energy with free energy:

$$P_\theta(\mathbf{v}) = \frac{\sum_{\mathbf{h}} \exp(-E_\theta(\mathbf{v}, \mathbf{h}))}{\sum_{\tilde{\mathbf{v}}} \sum_{\tilde{\mathbf{h}}} \exp(-E_\theta(\tilde{\mathbf{v}}, \tilde{\mathbf{h}}))} \tag{59}$$

$$= \frac{\exp(-F_\theta(\mathbf{v}))}{\sum_{\tilde{\mathbf{v}}} \sum_{\tilde{\mathbf{h}}} \exp(-E_\theta(\tilde{\mathbf{v}}, \tilde{\mathbf{h}}))} \tag{60}$$

$$= \frac{\exp(-F_\theta(\mathbf{v}))}{\sum_{\tilde{\mathbf{v}}} \exp(-F_\theta(\tilde{\mathbf{v}}))}. \tag{61}$$
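A sketch of (58) and (61) for a toy machine with 2 visible and 1 hidden unit (the names and random parameters are illustrative; the sum over hidden values is enumerated, which is tractable only for few hidden units):

```python
import itertools

import numpy as np

def free_energy(v, b, W, n_hidden):
    """Free energy (58): F(v) = -log sum_h exp(-E(v, h))."""
    total = 0.0
    for h in itertools.product([0, 1], repeat=n_hidden):
        x = np.concatenate([v, h])
        total += np.exp((b @ x) + x @ W @ x)  # exp(-E) with E from (57)
    return -np.log(total)

def visible_marginal(b, W, n_visible, n_hidden):
    """P_theta(v) via (61): normalized exp(-F(v)) over all visible patterns."""
    vs = np.array(list(itertools.product([0, 1], repeat=n_visible)))
    p = np.exp([-free_energy(v, b, W, n_hidden) for v in vs])
    return vs, p / p.sum()

rng = np.random.default_rng(1)
b = rng.normal(size=3)                   # unit order: v_1, v_2, h_1
W = np.triu(rng.normal(size=(3, 3)), 1)  # strictly upper triangular weights
vs, probs = visible_marginal(b, W, n_visible=2, n_hidden=1)
```

Marginalizing the joint distribution (55) over the hidden unit gives exactly the same visible distribution, which is what (59)-(61) assert.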

#### 3.2.3 Gradient

In (20), we simply replace energy with free energy to obtain the gradient of our objective function when some of the units are hidden:

$$\nabla f(\theta) = -\sum_{\mathbf{v}} P_{\rm target}(\mathbf{v}) \, \nabla F_\theta(\mathbf{v}) + \sum_{\tilde{\mathbf{v}}} P_\theta(\tilde{\mathbf{v}}) \, \nabla F_\theta(\tilde{\mathbf{v}}) \tag{62}$$

$$= -\mathrm{E}_{\rm target}\!\left[ \nabla F_\theta(\mathbf{V}) \right] + \mathrm{E}_\theta\!\left[ \nabla F_\theta(\mathbf{V}) \right]. \tag{63}$$

What we then need is the gradient of the free energy:

$$\nabla F_\theta(\mathbf{v}) = -\nabla \log \sum_{\mathbf{h}} \exp(-E_\theta(\mathbf{v}, \mathbf{h})) \tag{64}$$

$$= \sum_{\mathbf{h}} \frac{\exp(-E_\theta(\mathbf{v}, \mathbf{h}))}{\sum_{\tilde{\mathbf{h}}} \exp(-E_\theta(\mathbf{v}, \tilde{\mathbf{h}}))} \, \nabla E_\theta(\mathbf{v}, \mathbf{h}) \tag{65}$$

$$= \sum_{\mathbf{h}} P_\theta(\mathbf{h} \mid \mathbf{v}) \, \nabla E_\theta(\mathbf{v}, \mathbf{h}), \tag{66}$$

where $P_\theta(\mathbf{h} \mid \mathbf{v})$ is the conditional probability that the hidden values are $\mathbf{h}$ given that the visible values are $\mathbf{v}$:

$$P_\theta(\mathbf{h} \mid \mathbf{v}) = \frac{P_\theta(\mathbf{v}, \mathbf{h})}{P_\theta(\mathbf{v})} \tag{67}$$

$$= \frac{P_\theta(\mathbf{v}, \mathbf{h})}{\sum_{\tilde{\mathbf{h}}} P_\theta(\mathbf{v}, \tilde{\mathbf{h}})} \tag{68}$$

$$= \frac{\exp(-E_\theta(\mathbf{v}, \mathbf{h}))}{\sum_{\tilde{\mathbf{h}}} \exp(-E_\theta(\mathbf{v}, \tilde{\mathbf{h}}))} \tag{69}$$

$$= \exp\!\left( F_\theta(\mathbf{v}) - E_\theta(\mathbf{v}, \mathbf{h}) \right). \tag{70}$$

Observe in (66) that the gradient of the free energy is the expected gradient of the energy, where the expectation is with respect to the conditional distribution of the hidden values given the visible values.

We thus obtain

$$\nabla f(\theta) = -\sum_{\mathbf{v}} P_{\rm target}(\mathbf{v}) \sum_{\mathbf{h}} P_\theta(\mathbf{h} \mid \mathbf{v}) \, \nabla E_\theta(\mathbf{v}, \mathbf{h}) + \sum_{\tilde{\mathbf{v}}} P_\theta(\tilde{\mathbf{v}}) \sum_{\tilde{\mathbf{h}}} P_\theta(\tilde{\mathbf{h}} \mid \tilde{\mathbf{v}}) \, \nabla E_\theta(\tilde{\mathbf{v}}, \tilde{\mathbf{h}}) \tag{71}$$

$$= -\mathrm{E}_{\rm target}\!\left[ \nabla E_\theta(\mathbf{V}, \mathbf{H}) \right] + \mathrm{E}_\theta\!\left[ \nabla E_\theta(\mathbf{V}, \mathbf{H}) \right]. \tag{72}$$

The first term in the last expression (except the minus sign) is the expected value of the gradient of the energy, where the expectation is with respect to the distribution defined with $P_{\rm target}$ and $P_\theta$. Specifically, the visible values $\mathbf{V}$ follow $P_{\rm target}$, and given the visible values $\mathbf{V} = \mathbf{v}$, the hidden values $\mathbf{H}$ follow $P_\theta(\cdot \mid \mathbf{v})$. We will write this expectation with $\mathrm{E}_{\rm target}[\cdot]$. The second term is an expectation with respect to $P_\theta$, which we denote with $\mathrm{E}_\theta[\cdot]$. Because the energy (56) has the form equivalent to (47), $\nabla f(\theta)$ can then be represented analogously to (49):

$$\frac{\partial f(\theta)}{\partial b_i} = \mathrm{E}_{\rm target}[X_i] - \mathrm{E}_\theta[X_i] \tag{73}$$

$$\frac{\partial f(\theta)}{\partial w_{i,j}} = \mathrm{E}_{\rm target}[X_i X_j] - \mathrm{E}_\theta[X_i X_j], \tag{74}$$

or, collectively, $\nabla f(\theta) = \mathrm{E}_{\rm target}[\mathbf{S}] - \mathrm{E}_\theta[\mathbf{S}]$, where $\mathbf{V}$ is the vector of the random values of the visible units, $\mathbf{H}$ is the vector of the random values $H_j$ of the hidden units for $1 \le j \le M$, and $\mathbf{S}$ is defined analogously to (51) for all of the (visible or hidden) units:

$$\mathbf{S} \equiv (X_1, \ldots, X_{N+M}, X_1 X_2, \ldots, X_{N+M-1} X_{N+M})^\top, \tag{75}$$

where $X_i = V_i$ for $1 \le i \le N$, and $X_i = H_{i-N}$ for $N < i \le N + M$, where $H_j$ is the random variable denoting the $j$-th hidden value.

#### 3.2.4 Stochastic gradient

The expression with (72) suggests a stochastic gradient analogous to (31)-(32). Observe that $\nabla f(\theta)$ can be represented as

$$\nabla f(\theta) = \sum_{\mathbf{v}} P_{\rm target}(\mathbf{v}) \left( -\mathrm{E}_\theta\!\left[ \nabla E_\theta(\mathbf{v}, \mathbf{H}) \mid \mathbf{v} \right] + \mathrm{E}_\theta\!\left[ \nabla E_\theta(\mathbf{V}, \mathbf{H}) \right] \right), \tag{76}$$

where $\mathrm{E}_\theta[\nabla E_\theta(\mathbf{V}, \mathbf{H})]$ is the expected value of the gradient of the energy when both visible values and hidden values follow $P_\theta$, and $\mathrm{E}_\theta[\nabla E_\theta(\mathbf{v}, \mathbf{H}) \mid \mathbf{v}]$ is the corresponding conditional expectation when the hidden values follow $P_\theta(\cdot \mid \mathbf{v})$ given the visible values $\mathbf{v}$.

A stochastic gradient method is then to sample visible values, $\mathbf{v}$, according to $P_{\rm target}$ and update $\theta$ according to the stochastic gradient:

$$\theta \leftarrow \theta + \eta \left( -\mathrm{E}_\theta\!\left[ \nabla E_\theta(\mathbf{v}, \mathbf{H}) \mid \mathbf{v} \right] + \mathrm{E}_\theta\!\left[ \nabla E_\theta(\mathbf{V}, \mathbf{H}) \right] \right). \tag{77}$$

By taking into account the specific form of the energy, we find the following specific update rule:

$$b_i \leftarrow b_i + \eta \left( \mathrm{E}_\theta[X_i \mid \mathbf{v}] - \mathrm{E}_\theta[X_i] \right) \tag{78}$$

$$w_{i,j} \leftarrow w_{i,j} + \eta \left( \mathrm{E}_\theta[X_i X_j \mid \mathbf{v}] - \mathrm{E}_\theta[X_i X_j] \right), \tag{79}$$

where each unit ($i$ or $j$) may be either visible or hidden. Specifically, let $M$ be the number of hidden units and $N$ be the number of visible units. Then $\mathbf{X} = (X_1, \ldots, X_{N+M})^\top$. Here, $X_i$ denotes the value of the $i$-th unit, which may be visible or hidden. When the $i$-th unit is visible, its conditional expected value is simply $\mathrm{E}_\theta[X_i \mid \mathbf{v}] = v_i$. When both $i$ and $j$ are visible, we have $\mathrm{E}_\theta[X_i X_j \mid \mathbf{v}] = v_i \, v_j$.

Namely, we have

$$b_i \leftarrow b_i + \eta \left( v_i - \mathrm{E}_\theta[X_i] \right) \tag{80}$$

for a visible unit $i$,

$$b_i \leftarrow b_i + \eta \left( \mathrm{E}_\theta[X_i \mid \mathbf{v}] - \mathrm{E}_\theta[X_i] \right) \tag{81}$$

for a hidden unit $i$,

$$w_{i,j} \leftarrow w_{i,j} + \eta \left( v_i \, v_j - \mathrm{E}_\theta[X_i X_j] \right) \tag{82}$$

for a pair of visible units $i, j$,

$$w_{i,j} \leftarrow w_{i,j} + \eta \left( v_i \, \mathrm{E}_\theta[X_j \mid \mathbf{v}] - \mathrm{E}_\theta[X_i X_j] \right) \tag{83}$$

for a pair of a visible unit $i$ and a hidden unit $j$, and

$$w_{i,j} \leftarrow w_{i,j} + \eta \left( \mathrm{E}_\theta[X_i X_j \mid \mathbf{v}] - \mathrm{E}_\theta[X_i X_j] \right) \tag{84}$$

for a pair of hidden units $i, j$.
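Putting (78)-(84) together, here is a toy sketch with $N = 2$ visible and $M = 1$ hidden unit, where both the conditional and unconditional expectations are computed exactly by enumeration (the dataset, step-size schedule, and helper names are our own illustrative choices):

```python
import itertools

import numpy as np

N_VIS, N_HID = 2, 1
N = N_VIS + N_HID
xs = np.array(list(itertools.product([0, 1], repeat=N)))  # visible bits first

def joint_probs(b, W):
    """P_theta(x) over all (visible, hidden) patterns."""
    p = np.exp(xs @ b + np.einsum('ki,ij,kj->k', xs, W, xs))
    return p / p.sum()

def moments(p):
    """E[X_i] and E[X_i X_j] under the distribution p over the rows of xs."""
    return p @ xs, np.einsum('k,ki,kj->ij', p, xs, xs)

data = np.array([[1, 1], [0, 0], [1, 1], [0, 1]])
rng = np.random.default_rng(0)
b = np.zeros(N)
W = np.zeros((N, N))
for step in range(3000):
    eta = 5.0 / (50.0 + step)
    v = data[rng.integers(len(data))]            # sample visible values from the data
    p = joint_probs(b, W)
    cond = p * (xs[:, :N_VIS] == v).all(axis=1)  # keep patterns consistent with v
    cond /= cond.sum()                           # P_theta(h | v), as in (67)-(70)
    mean_c, sec_c = moments(cond)                # conditional expectations
    mean_m, sec_m = moments(p)                   # unconditional expectations
    b += eta * (mean_c - mean_m)                 # updates (80)-(81)
    W += eta * np.triu(sec_c - sec_m, 1)         # updates (82)-(84)
```

With hidden units the objective is no longer concave, so this converges to a local maximum; still, at any stationary point the visible moments of the model match those of the data.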

#### 3.2.5 Hessian

We now derive the Hessian of $f(\theta)$ when some of the units are hidden. From the gradient of $f(\theta)$ in (72), we can write the partial derivatives as follows:

$$\frac{\partial f(\theta)}{\partial b_i} = \sum_{\mathbf{v}} P_{\rm target}(\mathbf{v}) \, \mathrm{E}_\theta[X_i \mid \mathbf{v}] - \mathrm{E}_\theta[X_i] \tag{85}$$

$$\frac{\partial f(\theta)}{\partial w_{i,j}} = \sum_{\mathbf{v}} P_{\rm target}(\mathbf{v}) \, \mathrm{E}_\theta[X_i X_j \mid \mathbf{v}] - \mathrm{E}_\theta[X_i X_j], \tag{86}$$

where recall that $X_i = V_i$ for $1 \le i \le N$ and $X_i = H_{i-N}$ for $N < i \le N + M$.

When all of the units are visible, the first term in (27) is an expectation with respect to the target distribution and is independent of $\theta$. Now that some of the units are hidden, the corresponding first term in (85) depends on $\theta$. Here, the first term is an expectation where the visible units follow the target distribution, and the hidden units follow the conditional distribution, given the visible values, with respect to the Boltzmann machine with parameters $\theta$.
