Bayesian machine learning for Boltzmann machine in quantum-enhanced feature spaces

12/20/2019 · by Yusen Wu, et al.

Bayesian learning is ubiquitous for implementing classification and regression tasks; however, it becomes computationally intractable when the feature space is extremely large. Aiming to solve this problem, we develop a quantum Bayesian learning framework for the restricted Boltzmann machine in quantum-enhanced feature spaces. Our framework provides an encoding phase, which maps the real data and the Boltzmann weights onto the quantum feature space, and a training phase, which learns an optimal inference function. Specifically, the training phase provides a physical quantity that measures the posterior distribution in the quantum feature space, and this measure is utilized to design the quantum maximum a posteriori (QMAP) algorithm and the quantum predictive distribution estimator (QPDE). It is shown that both quantum algorithms achieve an exponential speed-up over their classical counterparts. Furthermore, it is interesting to note that our framework can handle classical Bayesian learning tasks, i.e. processing classical data and outputting the corresponding classical labels. A simulation, performed on an open-source software framework for quantum computing, illustrates that our algorithms achieve almost the same classification performance as their classical counterparts. Since the proposed quantum algorithms use shallow circuits, our work is expected to be implementable on noisy intermediate-scale quantum (NISQ) devices and is one of the promising candidates for achieving quantum supremacy.


Results

Restricted Boltzmann machine via Bayesian machine learning. We first describe the restricted Boltzmann machine (RBM) model and the methodology for training it within the Bayesian framework [33]. The RBM model is a two-layer, bipartite neural network, a "restricted version" of the Boltzmann machine with connections only between the hidden layer and the visible layer. The visible layer encodes the input data $v$ as a binary string, and the hidden layer is composed of stochastic binary variables $h$. Then the joint probability of $(v, h)$ is expressed as $P(v, h) = e^{-E(v,h)}/Z$, where the potential function $E(v, h) = -v^{T} W h$. The Boltzmann weights $W$ compose a symmetric weight matrix, and the normalization factor $Z = \sum_{v,h} e^{-E(v,h)}$.

Given the training data $\mathcal{D} = \{v^{(i)}\}_{i=1}^{M}$, the Boltzmann weights can be estimated by the maximum a posteriori (MAP) principle within the Bayesian learning framework [29]:

$W_{\mathrm{MAP}} = \arg\max_{W} P(W \mid \mathcal{D}) = \arg\max_{W} P(\mathcal{D} \mid W)\, P(W) \qquad (1)$

The probability $P(W)$ is the prior distribution of the Boltzmann weights, and $P(\mathcal{D} \mid W)$ is the likelihood probability distribution, which can be computed as $P(\mathcal{D} \mid W) = \prod_{i=1}^{M} \sum_{h} P(v^{(i)}, h \mid W)$ on the RBM model.
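To make these classical quantities concrete, the following minimal NumPy sketch evaluates the potential $E(v,h) = -v^{T}Wh$, the normalization factor $Z$, the marginal likelihood $P(v \mid W)$, and an unnormalized log-posterior for a toy RBM. The bias-free potential, the Gaussian prior, and all function names are illustrative assumptions rather than the paper's specification.

```python
import itertools
import numpy as np

def rbm_energy(v, h, W):
    """Potential function E(v, h) = -v^T W h (bias terms omitted)."""
    return -np.asarray(v) @ W @ np.asarray(h)

def partition_function(W):
    """Normalization factor Z = sum_{v,h} e^{-E(v,h)}, by brute force (tiny models)."""
    n_v, n_h = W.shape
    return sum(np.exp(-rbm_energy(v, h, W))
               for v in itertools.product([0, 1], repeat=n_v)
               for h in itertools.product([0, 1], repeat=n_h))

def likelihood(v, W):
    """Marginal P(v | W) = sum_h e^{-E(v,h)} / Z, marginalizing the hidden layer."""
    n_h = W.shape[1]
    num = sum(np.exp(-rbm_energy(v, h, W))
              for h in itertools.product([0, 1], repeat=n_h))
    return num / partition_function(W)

def log_posterior(W, data, sigma=1.0):
    """Unnormalized log P(W | D): log-likelihood plus a Gaussian log-prior."""
    log_lik = sum(np.log(likelihood(v, W)) for v in data)
    log_prior = -np.sum(W ** 2) / (2 * sigma ** 2)
    return log_lik + log_prior

# MAP estimation (Eq. 1) amounts to maximizing log_posterior over W.
```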

Meanwhile, in some application scenarios, the predictive distribution of a new data point $\tilde{v}$ prevails over the Boltzmann parameters themselves, and we can directly express the predictive distribution as a sum of $P(\tilde{v} \mid W)$ weighted by the posterior over the parameter $W$:

$P(\tilde{v} \mid \mathcal{D}) = \int P(\tilde{v} \mid W)\, P(W \mid \mathcal{D})\, dW \qquad (2)$

In general, the posterior distribution $P(W \mid \mathcal{D})$ cannot be computed analytically, so researchers often resort to the Markov chain Monte Carlo (MCMC) method to estimate it. Nevertheless, if the machine learning task processes data in a high-dimensional feature space, estimating the posterior distribution and the predictive distribution incurs a computational overhead that is extremely hard for classical computing. We therefore choose the quantum state space as the feature space, and we design a quantum Bayesian framework to handle this issue.
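For comparison with the quantum approach developed below, here is a hedged sketch of the classical MCMC baseline just mentioned: random-walk Metropolis sampling of $P(W \mid \mathcal{D})$, reusing the `log_posterior` helper from the previous snippet. The proposal scale and step count are arbitrary assumptions.

```python
import numpy as np

def metropolis_posterior(data, shape, n_steps=2000, step=0.1, seed=0):
    """Random-walk Metropolis sampling of P(W | D) via log_posterior."""
    rng = np.random.default_rng(seed)
    W = np.zeros(shape)
    lp = log_posterior(W, data)
    samples = []
    for _ in range(n_steps):
        proposal = W + step * rng.standard_normal(shape)
        lp_new = log_posterior(proposal, data)
        if np.log(rng.random()) < lp_new - lp:  # accept with prob min(1, ratio)
            W, lp = proposal, lp_new
        samples.append(W.copy())
    return samples

# Predictive distribution (Eq. 2): average the likelihood over posterior samples,
#   P(v~ | D) ~= mean(likelihood(v_tilde, W_s) for W_s in samples).
```

Each `log_posterior` call here costs time exponential in the layer sizes, which is precisely the classical bottleneck motivating the quantum framework.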
Encoding phase of the Bayesian learning framework. The quantum Bayesian framework contains two fundamental components, namely the encoding phase and the training phase. The first stage of the encoding phase maps the real data onto a quantum feature space specially designed for quantum Bayesian learning.

Figure 1: Circuits generating the feature states $|\Phi(x, y)\rangle$ and $|\Phi(\tilde{x})\rangle$, respectively.

In quantum settings, the feature map is an injective encoding of classical information $x$ into a quantum state on an $n$-qubit register, s.t. $\Phi: x \mapsto |\Phi(x)\rangle$. The family of feature-map circuits can be defined as $\{U_{\Phi(x)}\}$, where the unitary operator

$U_{\Phi(x)} = \exp\left( i \sum_{S \subseteq [n]} \varphi_S(x) \prod_{k \in S} Z_k \right) \qquad (3)$

The notation $S$ indicates a subset of the set $[n] = \{1, \dots, n\}$, the coefficients $\varphi_S(x)$ are non-linear functions of the input data [34], and other choices of feature map are also possible, such as squeezed vacuum states [28]. For supervised classification tasks, each training datum is stored in two parts $(x, y)$, namely the data part and the label part. Then the feature state is defined as:

$|\Phi(x, y)\rangle = |\Phi(x)\rangle \otimes |b(y)\rangle \qquad (4)$

where the label-encoding operator writes the label part into the last qubits, the binary string $b(y)$ indicating the label $y$. For instance, when handling a binary classification task, the label is marked by $|0\rangle$ when $y = +1$, and by $|1\rangle$ otherwise.

On the other hand, the predictive data $\tilde{x}$, which does not have a label, is encoded by the quantum feature map alone:

$\tilde{x} \mapsto |\Phi(\tilde{x})\rangle = U_{\Phi(\tilde{x})} |0\rangle^{\otimes n} \qquad (5)$

Still taking binary classification into consideration, the feature state of the predictive data is prepared in the same way but without a fixed label register. Fig. 1 illustrates the quantum circuits of these two quantum feature maps.
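The following statevector sketch illustrates a diagonal feature map of the form of Eq. (3), restricted to singleton and pairwise subsets $S$. The particular non-linear coefficients $\varphi_S$ used below mirror a common choice in the literature [34] but are an assumption, not necessarily the paper's.

```python
import numpy as np

def z_string_diag(n, subset):
    """Diagonal of prod_{k in subset} Z_k on n qubits (entries are +/-1)."""
    d = np.ones(2 ** n)
    for idx in range(2 ** n):
        parity = sum((idx >> (n - 1 - k)) & 1 for k in subset) % 2
        if parity:
            d[idx] = -1.0
    return d

def feature_state(x):
    """|Phi(x)> = U_{Phi(x)} H^{(x)n} |0...0>, one qubit per input feature."""
    n = len(x)
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)   # H^{(x)n} |0...0>
    phase = np.zeros(2 ** n)
    for k in range(n):                                    # phi_{k}(x) = x_k
        phase += x[k] * z_string_diag(n, [k])
    for j in range(n):                                    # assumed pairwise choice:
        for k in range(j + 1, n):                         # phi_{jk} = (pi-x_j)(pi-x_k)
            phase += (np.pi - x[j]) * (np.pi - x[k]) * z_string_diag(n, [j, k])
    return np.exp(1j * phase) * psi

# Example: the squared overlap |<Phi(x)|Phi(x')>|^2 acts as a quantum kernel.
psi1, psi2 = feature_state([0.3, 1.2]), feature_state([0.5, 0.9])
kernel = abs(np.vdot(psi1, psi2)) ** 2
```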

The second subroutine aims at encoding the parameter $W$ into a density matrix whose structure satisfies the connection rules between the visible nodes and hidden nodes. To achieve this, we propose a parallel hardware-efficient ansatz to represent the prior density matrix of the RBM model (see Methods).
Training phase of the quantum Bayesian framework. We now discuss the training phase of our framework. First, we deliver a quantum algorithm for maximizing the posterior distribution; this quantum algorithm is designed around the measurement of a physical quantity. We find that the overlap between the prior density matrix and the likelihood density matrix can be recognized as a principle for evaluating the posterior distribution in quantum settings. Furthermore, this physical quantity can be efficiently computed by a quantum computer but is hard for a classical computer [35], and this measure reveals the quantum advantage. The validity of choosing this measure, as well as the method for computing the likelihood density matrix, is presented in Methods.

Specifically, the quantum maximum a posteriori (QMAP) algorithm can be regarded as a procedure that adjusts the parameter $\theta$ to achieve

$\theta^{*} = \arg\max_{\theta} \operatorname{Tr}\left( \rho_{\mathrm{prior}}(\theta)\, \rho_{\mathrm{likelihood}} \right) \qquad (6)$

The QMAP algorithm starts by estimating this measure, s.t. $\operatorname{Tr}(\rho_{\mathrm{prior}}(\theta)\rho_{\mathrm{likelihood}})$, via the swap test technique [30]. Utilizing this information, the QMAP algorithm then applies the simultaneous perturbation stochastic approximation (SPSA) approach [31] to find the optimal parameter $\theta^{*}$. At every step $k$ of the optimization, we sample a perturbation $\Delta_k$ from symmetric Bernoulli distributions and use preassigned elements from a sequence converging to zero, s.t. $c_k \to 0$. The gradient at $\theta_k$ is approximated using energy evaluations at $\theta_k \pm c_k \Delta_k$, and is constructed as

$\hat{g}_k(\theta_k) = \frac{F(\theta_k + c_k \Delta_k) - F(\theta_k - c_k \Delta_k)}{2 c_k}\, \Delta_k^{-1} \qquad (7)$

where $F$ denotes the estimated overlap and $\Delta_k^{-1}$ is the element-wise inverse. Note that this gradient approximation requires only two estimations of the energy, regardless of the number of variables in $\theta$. The values $F(\theta_k \pm c_k \Delta_k)$ can be obtained by the swap test technique, and the parameters are then updated as

$\theta_{k+1} = \theta_k + a_k\, \hat{g}_k(\theta_k) \qquad (8)$

The step length $a_k$ is selected based on experience, and in this iterative algorithm the solution converges rapidly after several iterations. The QMAP algorithm thus obtains the optimal parameter $\theta^{*}$, which can be used to classify the predictive data.
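For concreteness, a compact SPSA loop implementing Eqs. (7)-(8) is sketched below. The objective `overlap` stands in for the swap-test estimate of $\operatorname{Tr}(\rho_{\mathrm{prior}}(\theta)\rho_{\mathrm{likelihood}})$, and the gain schedules for $a_k$ and $c_k$ are conventional SPSA choices, not values taken from the paper.

```python
import numpy as np

def spsa_maximize(overlap, theta0, n_iters=200, a=0.2, c=0.1, seed=0):
    """SPSA ascent on F(theta), Eqs. (7)-(8); two evaluations per step."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for k in range(1, n_iters + 1):
        a_k = a / k ** 0.602                 # conventional gain schedules
        c_k = c / k ** 0.101                 # c_k -> 0, as required
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # symmetric Bernoulli
        g = (overlap(theta + c_k * delta) - overlap(theta - c_k * delta)) / (2 * c_k)
        theta = theta + a_k * g * delta      # 1/delta_i = delta_i for +/-1 entries
    return theta

# Example with a toy stand-in objective (a real run would call the swap test):
theta_opt = spsa_maximize(lambda t: -np.sum((t - 1.0) ** 2), np.zeros(4))
```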

The classification rules of the QMAP algorithm are interpreted as follows. After determining the optimal solution $\theta^{*}$, the likelihood distribution of a test data point $\tilde{x}$ can be computed by the QCL subroutine (see Methods), where the training feature state is substituted by the testing feature state $|\Phi(\tilde{x})\rangle$. Then we measure the last registers of the resulting state in the computational basis. Taking binary classification as an example, we select the basis $\{|0\rangle\langle 0|, |1\rangle\langle 1|\}$ to measure the third register. If the probability of the outcome associated with the label $+1$ exceeds $1/2$, then $\tilde{x}$ is assigned to the class $y = +1$, otherwise to $y = -1$.

The QMAP algorithm involves the swap test technique as well as the QCL subroutine proposed in Methods, and its gate complexity is governed by four quantities: the number of utilized qubits (the scale of the quantum system), the error $\epsilon$ incurred by the swap test, the number $M$ of data points in the data set, and the number $K$ of SPSA iterations. Given the real data set and the corresponding feature space, the QMAP algorithm then achieves an exponential speed-up over its classical counterpart under the assumption that the feature-space dimension grows exponentially with the number of qubits.

Figure 2: (a) Experimental classification results of the QMAP algorithm. The blue line chart shows the classification accuracies for different depths; each depth involves training on 400 data points and testing on 40 data points, repeated 10 times, with the mean values of the 10 runs represented by the square-shaped filled dots. The inset blue bars show the outcome probability distribution at depth 10. (b) The accuracy distribution of the 10 experimental runs for each depth.
Figure 3: (a) Illustration of the classification accuracy of QPDE, with the number of sampling times varying from 20 to 80 in steps of 20. The sampling interval of the parameter is fixed, and the depth of the variational quantum circuit generating the prior state equals 5. (b) Illustration of the classification success rate as the layer depth changes from 5 to 10; the success rate rises from 0.849 to 0.854. (c) Success rates of QPDE for 40, 60 and 80 sampling times, respectively.

We then discuss another inference method in our framework. The quantum predictive distribution estimator (QPDE) directly constructs a predictive distribution rather than delivering the Boltzmann weights, and the result naturally corresponds to a quantum state representing that distribution. Given a predictive visible data point $\tilde{v}$ (without a label), the QPDE aims at obtaining the corresponding predictive distribution

$P(\tilde{v} \mid \mathcal{D}) = \int P(\tilde{v} \mid W)\, P(W \mid \mathcal{D})\, dW \qquad (9)$

where $P(\tilde{v} \mid W)$ is the likelihood distribution of the predictive data introduced above. We show in Theorem 2 (see Methods) that traversing all the possible weight states can be achieved by modifying the rotation parameters, with an affordable error. Generally, the predictive distribution cannot be calculated analytically unless the prior and likelihood distributions belong to the same conjugate family. Furthermore, the likelihood density matrix, in which the visible data has been mapped onto the quantum feature space, is computationally hard to obtain by classical means because of the construction of the feature map and the overlap estimation. Thus we also believe that the function (9) may not be estimated directly and efficiently by the Markov chain Monte Carlo (MCMC) method [34].

Figure 4: (a) Example data used for the QMAP algorithm; the colors indicate the data labels (red for the +1 label and blue for the -1 label). (b) Decision boundary provided by the QMAP algorithm with 10 layers of depth. (c) Evolution of the quantum posterior distribution over the iterations of the SPSA algorithm, for the quantum circuit with the same layer depth.

Combining the MCMC method and quantum techniques, the QPDE provides a method to estimate the function (9). We first select and prepare a sampling distribution $q(W)$ which is strictly positive wherever the posterior is; possible choices of $q(W)$ include the uniform distribution, the multivariate Gaussian distribution and the multivariate Laplacian distribution. Then the function (9) can be rewritten as

$P(\tilde{v} \mid \mathcal{D}) = \int P(\tilde{v} \mid W)\, \frac{P(W \mid \mathcal{D})}{q(W)}\, q(W)\, dW \qquad (10)$

This is followed by obtaining a set of samples $\{W^{(j)}\}_{j=1}^{N}$ drawn independently from the distribution $q(W)$, which allows the function (9) to be approximated by a finite sum

$P(\tilde{v} \mid \mathcal{D}) \approx \sum_{j=1}^{N} \omega_j\, P(\tilde{v} \mid W^{(j)}) \qquad (11)$

where

$\omega_j = \frac{P(W^{(j)} \mid \mathcal{D}) / q(W^{(j)})}{\sum_{l=1}^{N} P(W^{(l)} \mid \mathcal{D}) / q(W^{(l)})} \qquad (12)$

Note that QPDE does not directly prepare the predictive state; it merely measures the last qubits and then performs simple calculations on the measurement results. We again take binary classification as an illustrative example. In detail, the weights $\omega_j$ can be calculated by the swap test; then one can measure the last qubit of each likelihood state in the computational basis $\{|0\rangle\langle 0|, |1\rangle\langle 1|\}$, where the probability can be estimated by simply counting the measurement results. Finally $\tilde{v}$ is assigned to the class $+1$ if the weighted probability of the corresponding outcome exceeds $1/2$; otherwise the class $-1$ is assigned.
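The classical skeleton of QPDE's importance-sampling estimate, Eqs. (10)-(12), can be sketched as follows; the quantum pieces (the swap test and the label-qubit measurement) are replaced by black-box stubs, and all names are illustrative.

```python
import numpy as np

def qpde_predict(posterior_weight, label_prob, sample_q, n_samples=80, seed=0):
    """Classify via the importance-sampled predictive sum of Eq. (11).

    posterior_weight(W): swap-test stand-in, proportional to P(W | D) / q(W)
    label_prob(W):       stand-in for the measured probability of |0> on the
                         label qubit, i.e. P(y = +1 | v~, W)
    sample_q(rng):       draws one W from the sampling distribution q
    """
    rng = np.random.default_rng(seed)
    Ws = [sample_q(rng) for _ in range(n_samples)]
    w = np.array([posterior_weight(W) for W in Ws], dtype=float)
    w /= w.sum()                                          # normalized weights, Eq. (12)
    p_plus = float(w @ np.array([label_prob(W) for W in Ws]))
    return +1 if p_plus > 0.5 else -1
```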

Note that QPDE also needs the support of the QCL subroutine, while sampling from the distribution $q(W)$ contributes an additional factor proportional to the number of samples $N$. The overall runtime of QPDE therefore scales with $N$, with the number $M$ of data points in the real dataset, and with the error $\epsilon$ incurred by the swap test.

Figure 5: Wave-function evolutions obtained by running QPDE with different numbers of sampling times on two testing data points, shown on the left (a and b) and on the right (c and d), respectively. There are eight amplitudes in each wavefunction.
Figure 6: Illustration of the quantum restricted Boltzmann machine (QRBM), which is composed of 6 qubits. In detail, one qubit is utilized to encode the hidden node in the form of a superposition, the remaining qubits serve as the visible nodes encoding the visible data via the nonlinear feature map, and the last qubit represents the corresponding label.

Experimental results. The experimental preparation of the QMAP algorithm is as follows. We utilize one hidden node and several visible nodes to construct a QRBM implementing a binary classification task. At first, we encode the classical training data onto the visible qubits by utilizing the quantum feature map, as illustrated in Fig. 6. Then we invoke the QCL subroutine to add the hidden node to the RBM model and compute the likelihood distribution at the same time. Finally, the QMAP algorithm finds the optimal parameter $\theta^{*}$ with the help of the SPSA algorithm and the swap test technique; the evolution of the quantum posterior distribution is illustrated in Fig. 4(c). The optimal parameter can be further examined on the testing data set. With the iteration number fixed in advance, the test results are illustrated in Fig. 2. Our quantum algorithm yields a high success rate on the test data set. The original data distribution and the decision boundary of the QMAP algorithm are illustrated in Fig. 4(a) and (b).

For the experiments on QPDE, the preparation stage obtains a set of samples from a uniform distribution, and the function (9) can then be evaluated as

$P(\tilde{v} \mid \mathcal{D}) \approx \sum_{j=1}^{N} \omega_j\, P(\tilde{v} \mid W^{(j)}) \qquad (13)$

with the weights $\omega_j$ of Eq. (12) specialized to the uniform sampling distribution. We first implement the quantum variational classifier with different numbers of sampling times on the quantum simulation processor, expecting the classification success rate to go up as the number of samplings increases. We utilize training sets with a fixed number of data points per label, and we take 40 testing data points for each class. The wavefunctions of two of the testing data points are shown in Fig. 5 for different numbers of samplings, and the classification success is illustrated in Fig. 3(a). We observe that the upper bound of the accuracy converges, albeit with more optimization steps. The layer depth of the circuit generating the prior distribution is set to 5, and the sampling interval is kept fixed.

Second, we implement the quantum variational circuit for different layer depths, from 5 to 10, on the quantum simulation processor. We observe that the classification accuracy shows a slight rise from 0.849 to 0.854 as the layer depth varies from 5 to 10, and then remains constant regardless of further increases in the layer depth. The sampling interval and the number of sampling times are kept fixed.

Finally, we also run experiments varying the sampling interval while executing different numbers of sampling times on the fixed-depth quantum circuit. It is interesting to note that a larger number of sampling times combined with a larger interval actually yields a higher success rate.

According to the experimental results, the QMAP algorithm has a larger complexity than QPDE but achieves a higher success rate. In practice, we therefore face a tradeoff between complexity and success rate when choosing the more suitable of the two algorithms.

Discussion

In summary, we implement a quantum Bayesian learning framework in exponentially large feature spaces. Our framework provides two variational quantum algorithms, which build upon the realization that the posterior and predictive distributions are hard to estimate classically in quantum-state feature spaces. Both algorithms can handle classical datasets and output classical predictive labels, and it is shown that they can be exponentially faster than their classical counterparts. We hope our work will inspire more Bayesian machine learning algorithms accessible to NISQ devices.

Methods

Parallel hardware-efficient ansatz. In this section, we present a quantum technique to generate the prior probability in the form of a quantum state. To process data in a high-dimensional quantum-enhanced feature space, researchers have proposed the hardware-efficient ansatz to approximate an arbitrary quantum state with an affordable error [21]. To represent the relationship between the visible nodes and hidden nodes, we propose the parallel hardware-efficient ansatz, whose fundamental quantum circuit is illustrated in Fig. 7(a). If the RBM model has hidden nodes, the quantum prior state can be prepared by

$|\psi(\theta)\rangle = U(\theta)\, |0\rangle^{\otimes n} \qquad (14)$

in which $U(\theta)$ is a shallow quantum circuit obtained by appending $d$ layers of single-qubit unitaries and entangling gates, where each layer contains an additional set of entanglers across all the qubits used. The operator $U(\theta)$ is a circuit of repeated entanglers interleaved with layers of local single-qubit rotations: $U(\theta) = U_{\mathrm{loc}}^{(d)}(\theta_d)\, U_{\mathrm{ent}} \cdots U_{\mathrm{ent}}\, U_{\mathrm{loc}}^{(1)}(\theta_1)$. It is interesting to note that $U_{\mathrm{loc}}$ is confined to invoking $R_y$ and $R_z$ rotations, although other ways to construct the rotations exist. The rotation angles are sampled from a classical distribution (such as a uniform, multivariate Gaussian or multivariate Laplacian distribution), and the entangler $U_{\mathrm{ent}}$ comes from the graph model $G = (V, E)$, which is uniquely determined by the structure of the RBM:

$U_{\mathrm{ent}} = \prod_{(i,j) \in E} \mathrm{Ent}(i, j) \qquad (15)$

where $\mathrm{Ent}(i, j)$ denotes a two-qubit entangling gate acting on the edge $(i, j)$.
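A statevector sketch of a hardware-efficient ansatz in the spirit of Eqs. (14)-(15) follows: $d$ layers of local $R_y/R_z$ rotations interleaved with two-qubit entanglers placed on the edges of the RBM graph. The choice of CZ as the entangler and the qubit-ordering convention are assumptions made for illustration.

```python
import numpy as np

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def apply_1q(psi, gate, k, n):
    """Apply a single-qubit gate to qubit k of an n-qubit statevector."""
    psi = np.tensordot(gate, psi.reshape([2] * n), axes=([1], [k]))
    return np.moveaxis(psi, 0, k).reshape(-1)

def apply_cz(psi, i, j, n):
    """Controlled-Z entangler on qubits (i, j)."""
    psi = psi.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[i], idx[j] = 1, 1
    psi[tuple(idx)] *= -1.0
    return psi.reshape(-1)

def ansatz_state(thetas, edges, n):
    """|psi(theta)> from d layers of Rz.Ry rotations and graph entanglers."""
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    for layer in thetas:                    # thetas has shape (d, n, 2)
        for k in range(n):
            psi = apply_1q(psi, rz(layer[k, 1]) @ ry(layer[k, 0]), k, n)
        for (i, j) in edges:                # edges of the RBM graph G = (V, E)
            psi = apply_cz(psi, i, j, n)
    return psi

# Example: 3 qubits, one hidden node (qubit 2) connected to two visible qubits.
state = ansatz_state(np.random.default_rng(0).uniform(0, np.pi, (2, 3, 2)),
                     edges=[(0, 2), (1, 2)], n=3)
```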

The implementation of the parallel hardware-efficient ansatz depends on the Hadamard gate and a controlled unitary, i.e.,

(16)

Therefore the prior probability distribution can be computed by measuring the above state in the corresponding basis, and the oblivious amplitude amplification technique [3] can realize a polynomial acceleration in this measurement step.

Figure 7: (a) Illustration of the quantum circuit of the parallel hardware-efficient ansatz, constructed by introducing ancilla qubits. (b) Variational circuit used for encoding the Boltzmann weights between the hidden nodes and visible nodes; we choose the hardware-efficient ansatz for the variational unitary, with an entanglement layer. (c) The fundamental structure of the entangler.
Density matrix of the likelihood distribution. In the feature space, the posterior distribution of the RBM can be further simplified as

(17)

where $w_j$ denotes the $j$-th column vector of $W$. Note that the quantity appearing in Eq. (17) is actually an overlap, i.e. the inner product, between the weight vectors and the feature states [33]. Given the weight state and the feature state, we attempt to estimate their overlap using a quantum computer. The implementation of the following quantum algorithm depends on two quantum techniques provided in the Supplementary materials. The first technique [38], a combination of amplitude estimation [36] and the swap test, enables the proposed algorithm to calculate that overlap and immediately encode it into a register; the second implements the procedure of the linear combination of unitaries. Drawing support from these two techniques, we can implement the following quantum algorithm, which computes the likelihood distribution in quantum settings.
Subroutine (QCL): Quantum algorithm for computing the likelihood density matrix
Input:
A quantum restricted Boltzmann machine composed of visible nodes and hidden nodes; qubits initialized to $|0\rangle$ to encode the visible nodes; qubits initialized to $|0\rangle$ to encode the hidden nodes; an auxiliary qubit; and the training data set.
Output: The likelihood distribution $\rho_{\mathrm{likelihood}}$

  1. For any training datum, we first construct the feature state in the first system, and then apply a Hadamard gate to the auxiliary qubit (the second system), resulting in the state of the whole system

    (18)

    A controlled operation is then performed on the third system, conditioned on the second register, and we obtain

    (19)
  2. According to the linear combination of unitaries (LCU) technique (a numerical illustration of this identity is given after the subroutine), we have

    (20)
  3. Considering the QRBM model with hidden nodes, take the first hidden node and the corresponding weight state. Construct the controlled operator determined by the previous phase and apply it to the second and third systems. The resulting state becomes

    (21)
  4. Perform a Hadamard gate on the second system; the system then evolves into the following state by invoking the analogue of the swap test technique (see Supplementary material), s.t.

    (22)

    where the amplitudes encode the overlap between the weight state and the feature state.

  5. Perform an exponential operator on the third system, then add an ancillary qubit and apply a controlled rotation procedure, so that the state becomes

    (23)

    after measuring the ancillary qubit. The precision parameter is the number of qubits used to describe the overlap, and the complexity of the controlled rotation is therefore determined by this precision.

  6. Uncomputing steps 3 and 4, we obtain the state

    (24)

    Then, repeating steps 3-5 to add the remaining hidden nodes, we finally obtain the likelihood state of the training data under the quantum restricted Boltzmann machine model:

    (25)

    in which the probability parameter is the likelihood probability of the corresponding datum. We thereby obtain the likelihood density operator $\rho_{\mathrm{likelihood}}$.
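As noted in step 2, the LCU identity underlying the subroutine can be checked numerically in a few lines: with one ancilla, $(H \otimes I)(|0\rangle\langle 0| \otimes A + |1\rangle\langle 1| \otimes B)(H \otimes I)$ maps $|0\rangle|\psi\rangle$ to $\frac{1}{2}|0\rangle(A+B)|\psi\rangle + \frac{1}{2}|1\rangle(A-B)|\psi\rangle$. The Haar-random test unitaries below are arbitrary placeholders.

```python
import numpy as np

def haar_unitary(d, rng):
    """A Haar-random d x d unitary from the QR decomposition."""
    q, _ = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
    return q

n = 2
rng = np.random.default_rng(1)
A, B = haar_unitary(2 ** n, rng), haar_unitary(2 ** n, rng)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2 ** n)
# |0><0| (x) A + |1><1| (x) B, with the ancilla as the leading qubit:
ctrl = np.block([[A, np.zeros_like(A)], [np.zeros_like(B), B]])
circuit = np.kron(H, I) @ ctrl @ np.kron(H, I)

psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0
out = circuit @ np.concatenate([psi, np.zeros_like(psi)])  # ancilla starts in |0>
assert np.allclose(out[:2 ** n], 0.5 * (A + B) @ psi)      # ancilla-|0> branch
assert np.allclose(out[2 ** n:], 0.5 * (A - B) @ psi)      # ancilla-|1> branch
```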

To save qubit resources, one can in fact measure the third system and store each overlap in classical memory without utilizing the amplitude estimation technique, and finally design the unitary operator

(26)

and apply it to the initial state to obtain the state of Eq. (25) when implementing the experiments. Thus the procedure does not necessarily depend on the auxiliary qubit and the amplitude estimation algorithm; furthermore, this scheme does not forfeit the quantum supremacy conferred by the feature state. On the other hand, we should also point out that the number of hidden nodes is a constant that does not depend on the scale of the visible data set. In fact, our experiment uses only one hidden node yet achieves a decent performance.
Theorem 1: Suppose the ansatz can be generated in the form of Eq. (14), and there are enough layers in the quantum circuit that the ansatz can generate an arbitrary state; then the physical quantity $\operatorname{Tr}(\rho_{\mathrm{prior}} \rho_{\mathrm{likelihood}})$ can be recognized as a measure representing the posterior distribution in quantum settings.
Proof: If we expand the physical quantity in the computational basis, then we have

(27)

If we traverse all the possible weight states, this physical quantity can be simplified as:

(28)
(29)

Therefore the physical quantity, s.t. the overlap between the likelihood and prior density matrices, can be recognized as a measure representing the posterior distribution in quantum settings.
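The special case of Theorem 1 in which both density matrices are diagonal in the same basis can be verified numerically: the overlap then reduces to $\sum_W P(W) P(\mathcal{D} \mid W)$, the total unnormalized posterior mass. The toy dimension and distributions below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random(8); p /= p.sum()        # prior P(W) over 8 weight configurations
q = rng.random(8); q /= q.sum()        # likelihood values P(D | W), rescaled
rho_prior = np.diag(p)
rho_lik = np.diag(q)
overlap = np.trace(rho_prior @ rho_lik).real
assert np.isclose(overlap, np.sum(p * q))   # = sum_W P(W) P(D | W), up to rescaling
```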
Theorem 2: If the ansatz can be prepared by Eq. (14), then modifying the rotation parameters can traverse all the possible weight parameters with an affordable error.
Proof: Suppose we perform a sequence of gates $V_1, \dots, V_m$ intended to approximate some other sequence of gates $U_1, \dots, U_m$. It turns out that the error caused by the entire sequence of imperfect gates is at most the sum of the errors in the individual gates:

$E(U_m \cdots U_1,\, V_m \cdots V_1) \le \sum_{l=1}^{m} E(U_l, V_l) \qquad (30)$

Now consider a local single-qubit rotation $R(\theta) \in \{R_y(\theta), R_z(\theta)\}$; the difference between $R(\theta)$ and $R(\theta + d\theta)$ can be measured as

$E\big(R(\theta + d\theta), R(\theta)\big) = \big\| R(\theta + d\theta) - R(\theta) \big\| \qquad (31)$
$= \big\| R(d\theta) - I \big\| \qquad (32)$
$= 2\big|\sin(d\theta/4)\big| \le |d\theta|/2 \qquad (33)$

Combining this with the bound (30), the difference between $U(\theta)$ and $U(\theta + d\theta)$ can be measured as

$E\big(U(\theta + d\theta), U(\theta)\big) \le \sum_{l,k} E\big(R(\theta_{l,k} + d\theta_{l,k}), R(\theta_{l,k})\big) \le \frac{1}{2} \sum_{l,k} |d\theta_{l,k}| \qquad (34)$

Thus the microelement $dW$ induced by $d\theta$ is controlled accordingly, which implies that modifying the rotation parameters can traverse all the possible weight states with an exponentially small error.
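The per-gate bound used in Eqs. (31)-(33) can be checked numerically: for $R \in \{R_y, R_z\}$, the spectral norm of $R(\theta + d\theta) - R(\theta)$ equals $2|\sin(d\theta/4)| \le |d\theta|/2$.

```python
import numpy as np

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

theta, d = 0.7, 1e-2
err = np.linalg.norm(ry(theta + d) - ry(theta), ord=2)   # spectral norm
assert np.isclose(err, 2 * abs(np.sin(d / 4)))           # exact value, Eq. (33)
assert err <= abs(d) / 2                                 # first-order bound
```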

References

Acknowledgments

This work is supported by NSFC (Grant Nos. 61672110, 61671082, 61976024, 61972048) and the Fundamental Research Funds for the Central Universities (Grant No. 2019XD-A01). We thank Bailing Zhang, Shijie Pan and Binbin Cai for discussions on the writing, and we also acknowledge the use of HiQ for this work.

Author contributions

Y.W. contributed to the initiation of the idea. All authors wrote and reviewed the manuscript.