When Machine Learning Meets Quantum Computers: A Case Study

12/18/2020 ∙ by Weiwen Jiang, et al. ∙ IBM ∙ University of Notre Dame

Along with the development of AI democratization, machine learning approaches, in particular neural networks, have been applied to a wide range of applications. In different application scenarios, the neural network is accelerated on a tailored computing platform. The acceleration of neural networks on classical computing platforms, such as CPUs, GPUs, FPGAs, and ASICs, has been widely studied; however, as the scale of applications continues to grow, the memory bottleneck becomes obvious, a problem widely known as the memory wall. In response to this challenge, advanced quantum computing, which can represent 2^N states with N quantum bits (qubits), is regarded as a promising solution. It is therefore pressing to understand how to design quantum circuits for accelerating neural networks. Most recently, initial works have studied how to map neural networks to actual quantum processors. To better understand the state-of-the-art designs and inspire new design methodologies, this paper carries out a case study to demonstrate an end-to-end implementation. On the neural network side, we employ the multilayer perceptron to complete image classification tasks using the standard and widely used MNIST dataset. On the quantum computing side, we target IBM Quantum processors, which can be programmed and simulated using IBM Qiskit. This work targets the acceleration of the inference phase of a trained neural network on a quantum processor. Along with the case study, we demonstrate the typical procedure for mapping neural networks to quantum circuits.

1. Introduction

In the past few years, we have witnessed many breakthroughs in both the machine learning and quantum computing research fields. In machine learning, automated machine learning (AutoML) (Zoph and Le, 2016; Zoph et al., 2018) significantly reduces the cost of designing neural networks, moving toward AI democratization. In quantum computing, the scale of actual quantum computers has been evolving rapidly (e.g., IBM (IBM, 2020) recently announced plans to debut a quantum computer with 1,121 quantum bits (qubits) in 2023). Both research fields, however, have met bottlenecks in applying their theoretical advances in practice. With large-size inputs, the size of machine learning models (i.e., neural networks) significantly exceeds the resources provided by classical computing platforms (e.g., GPU and FPGA); on the other hand, the development of quantum applications lags far behind the development of quantum hardware, that is, killer applications that take full advantage of the high parallelism provided by quantum computers are still lacking. As a result, it is natural to see the emergence of a new research field: quantum machine learning.

As with applying machine learning to classical hardware accelerators, when machine learning meets quantum computers, there are abundant opportunities along with challenges. The development of machine learning on classical hardware accelerators experienced two phases: (1) the design of hardware tailored to neural networks (Zhang et al., 2015; Jiang et al., 2018; Zhang et al., 2018b; Jiang et al., 2019a; Li et al., 2016, 2017), and (2) the co-design of neural networks and hardware accelerators (Jiang et al., 2019c; Jiang et al., 2019b; Yang et al., 2020a; Bian et al., 2020; Jiang et al., 2020a; Ding et al., 2020; Wu et al., 2019; Cai et al., 2018; Tan et al., 2019; Hao et al., 2019b, a; Zeng et al., 2020; Wu et al., 2020). To best exploit the power of quantum computers, it will likewise be essential to co-design neural networks and quantum circuits; however, given the different basic logic gates in quantum and classical circuit designs, it is still unclear how to design a quantum accelerator for neural networks.

In this work, we aim to bridge this gap by providing an open-source design framework. In general, the full acceleration system is divided into three parts: data pre-processing and data post-processing on a classical computer, and the neural network accelerator on the quantum circuit. The quantum circuit further includes the quantum-state preparation and the quantum computing-based neural computation. In the remainder of this paper, we introduce all of the above components in detail and demonstrate the implementation using IBM Qiskit for the quantum circuit design and PyTorch for the machine learning model.

The remainder of the paper is organized as follows. Section 2 presents an overview of the full system. Section 3 presents the case study on the MNIST dataset. Insights are discussed in Section 4. Finally, concluding remarks are given in Section 5.

2. Overview

Figure 1. Illustration of three different types of computing schemes: (a) classical computing “C” based neural computation, where W denotes the weights; (b) quantum computing “Q” based neural computation, where U_P is the quantum-state preparation and U_N is the neural computation; (c) hybrid quantum-classical computing “Q+C” based neural computation.

Figure 1 demonstrates three types of neural network design: (1) the classical hardware accelerator, (2) the pure quantum computing based accelerator, and (3) the hybrid quantum-classical accelerator. All of these accelerators follow the same flow: the data is first pre-processed, then the neural computation is accelerated, and finally the output data goes through post-processing to obtain the final results.

2.1. Classical acceleration

After deep neural networks (e.g., AlexNet (Krizhevsky et al., 2017) and VGGNet (Simonyan and Zisserman, 2014)) succeeded in achieving high accuracy, designing hardware accelerators became a hot topic for accelerating the execution of deep neural networks. On application-specific integrated circuits (ASICs), works (Du et al., 2015; Zhang et al., 2018a; Zhang and Garg, 2018; Zhang et al., 2019b; Chen et al., 2016) studied how to design neural network accelerators using different dataflows, including weight stationary, output stationary, etc. Selecting the dataflow for a dedicated neural computation maximizes data reuse, reducing data movement and accelerating the processing, which led to the co-design of neural networks and ASICs (Yang et al., 2020b).

On FPGAs, work (Zhang et al., 2015) first proposed the tiling-based design to accelerate neural computation, and works (Jiang et al., 2018; Zhang et al., 2018b; Jiang et al., 2019a; Li et al., 2015) gave different designs and extended the implementation to multiple FPGAs. Driven by AutoML, work (Jiang et al., 2019c) proposed the first co-design framework to bring the FPGA implementation into the search loop, so that both software accuracy and hardware efficiency can be maximized. The co-design philosophy has also been applied in other designs (Zhang et al., 2019a; Jiang et al., 2020d; Hao et al., 2019b, a), and in this direction there are many research works that further take model compression into consideration (Lu et al., 2019; Jiang et al., 2020c) and accelerate the search process (Li et al., 2020; Zhang et al., 2020).

2.2. Pure quantum computing

Most recently, emerging works have used quantum circuits to accelerate neural computation. Typical works include (Francesco et al., 2019; Tacchino et al., 2020; Jiang et al., 2020b), among which (Jiang et al., 2020b) first demonstrated the potential quantum advantage that can be achieved with a co-design philosophy. These works encode data either to qubits (Francesco et al., 2019) or to qubit states (Jiang et al., 2020b) and use superconducting quantum computers to run neural networks. These methods have the following limitation: due to the short decoherence times of superconducting quantum computers, conditional logic is not supported in the computing process. This makes it hard to implement functions that are not differentiable at all points, such as the commonly used Rectified Linear Unit (ReLU). However, this approach also has advantages: the design can be directly evaluated on an actual quantum computer, and there is no communication across the quantum-classical interface during the computation.

The quantum circuit design includes two components: U_P for quantum-state preparation and U_N for neural computation, as shown in Figure 1(b). After the component U_N, the qubits are measured to extract the output data, which is further sent to the data post-processing unit to obtain the final results.

2.3. Hybrid quantum-classical computing

To overcome the disadvantages of pure quantum computing and make full use of classical computing, hybrid quantum-classical computing for machine learning tasks has been proposed (Broughton et al., 2020). It establishes a computing paradigm where different neurons can be implemented on either quantum or classical computers, as demonstrated in Figure 1(c). This brings flexibility in implementing functions (e.g., ReLU). At the same time, however, it leads to massive data transfer between quantum and classical computers.

2.4. Our Focus in The Case Study

This work focuses on providing a full workflow, starting from data pre-processing, going through quantum computing acceleration, and ending with data post-processing. We use the MNIST dataset as an example to carry out the case study.

Computing architecture and neural operation can both affect the design. In this work, for the computing architecture, we focus on the pure quantum computing design, since it can easily be extended to the hybrid quantum-classical design by connecting the inputs and outputs of the quantum accelerator to a traditional classical accelerator; for the neural network, we focus on the multi-layer perceptron, whose weighted sum is the basic operation underlying a large number of neural computations, such as convolution.
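To make the mapping concrete, the following is a minimal sketch of the classical counterpart being accelerated. The topology (16 inputs from a 4*4 image, 2 hidden neurons, 2 output neurons) follows the case study below; the class name BinaryMLP, the random binary weights, and the exact scaling factors are illustrative assumptions rather than the paper's trained model.

import torch
import torch.nn as nn

class BinaryMLP(nn.Module):
    # Illustrative 16 -> 2 -> 2 MLP with binary (+1/-1) weights and a
    # quadratic activation: each neuron computes (w . x)^2 / 2^N on a
    # unit-norm input, mirroring what the quantum circuit below realizes.
    def __init__(self):
        super().__init__()
        self.w1 = torch.sign(torch.randn(2, 16))  # hidden-layer binary weights
        self.w2 = torch.sign(torch.randn(2, 2))   # output-layer binary weights

    def forward(self, x):                 # x: (batch, 16), rows with unit L2 norm
        h = (x @ self.w1.t()) ** 2 / 16   # weighted sum, then quadratic function
        y = (h @ self.w2.t()) ** 2 / 2    # simplified second layer; see Section 3.3
        return y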

3. Case Study on MNIST Dataset

In this section, we demonstrate the detailed implementation of the four components of the pure quantum computing based neural computation shown in Figure 1(b): data pre-processing, quantum-state preparation (U_P), neural computation (U_N), and data post-processing.

3.1. Data Pre-Processing

The first step of the whole procedure is to prepare the classical data to be encoded into quantum states. Kindly note that in order to utilize N qubits to represent 2^N data values, there are constraints on the numbers; more specifically, if a vector X of 2^N data values can be arranged as the first column of a unitary matrix U, then, starting from the initial state |0...0⟩ of the circuit, we can obtain the encoded state |x⟩ by conducting |x⟩ = U|0...0⟩, where |0...0⟩ represents the zero state of N qubits.

1import torch
2import numpy as np
3import torchvision.transforms as transforms
4img_size = 4  # resolution of the down-sampled 4*4 input image
5class ToQuantumData(object):
6    def __call__(self, tensor):
7        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
8        data = tensor.to(device)
9        input_vec = data.view(-1)
10        vec_len = input_vec.size()[0]
11        input_matrix = torch.zeros(vec_len, vec_len)
12        input_matrix[0] = input_vec
13        input_matrix = np.float64(input_matrix.transpose(0,1))
14        u, s, v = np.linalg.svd(input_matrix)
15        output_matrix = torch.tensor(np.dot(u, v))
16        output_data = output_matrix[:, 0].view(1, img_size,img_size)
17        return output_data
18# Similarly, we have "class ToQuantumMatrix(object)" which return the output_matrix
19transform = transforms.Compose([transforms.Resize((img_size, img_size)), transforms.ToTensor(), ToQuantumData()])
20# transform = transforms.Compose([transforms.Resize((img_size,img_size)),transforms.ToTensor(),transforms.Normalize((0.1307,), (0.3081,)), ToQuantumData()])
Listing 1: Converting classical data to quantum data

Listing 1 demonstrates the data conversion from classical data to quantum data. We utilize the transforms in torchvision to complete the conversion. More specifically, we create the ToQuantumData class in Line 5. It receives a tensor (the original data) as input (Line 6). We apply the Singular Value Decomposition (SVD) provided by np.linalg to obtain the unitary matrix output_matrix (Line 14), and then extract the first column of output_matrix as the output_data (Line 16), where output_matrix represents the unitary U and output_data represents the encoded vector. After we build the ToQuantumData class, we integrate it into one “transform” variable, which can further include data pre-processing functions, such as image resize (Line 19) and data normalization (Line 20). In creating the data loader, we can apply the “transform” to the dataset (e.g., we can obtain the training data by using “train_data=datasets.MNIST(root=datapath, train=True, download=True, transform=transform)”).
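As a quick usage check (a sketch, not part of the original listing), the converted “quantum data” should be a valid amplitude vector, i.e., it should have unit L2 norm:

import torch

img_size = 4                        # matches the 4*4 resolution in Listing 1
x = torch.rand(1, img_size, img_size)
q = ToQuantumData()(x)              # the class defined in Listing 1
print(q.view(-1).norm())            # expected: 1.0 (up to floating-point error)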

3.2. U_P: Quantum State Preparation

Theoretically, with the unitary matrix U, we can directly operate the oracle U_P on the quantum circuit to change the states from the zero state |0...0⟩ to the encoded state |x⟩. This process is widely known as quantum-state preparation. The efficiency of quantum-state preparation significantly affects the complexity of the whole circuit; therefore, it is quite important to improve the efficiency of this process. In general, there are two typical ways to perform quantum-state preparation: (1) the quantum random access memory (qRAM) (Lvovsky et al., 2009) based approach (Allcock et al., 2020; Kerenidis and Prakash, 2016) and (2) the computing-based approach (Sanders et al., 2019; Grover, 2000; Bausch, 2020). Let us first look at the qRAM-based approach, where the vector X is stored in a binary-tree based structure in qRAM, which can be queried in quantum superposition to generate the states efficiently. IBM Qiskit provides an initialization function to perform quantum-state preparation, based on the method in (Shende et al., 2006).

1from qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister
2from qiskit.extensions import XGate, UnitaryGate
3from qiskit import Aer, execute
4import qiskit
5# Input: a 4*4 matrix (data) holding 16 input data
6inp = QuantumRegister(4,"in_qbit")
7circ = QuantumCircuit(inp)
8data_matrix = ToQuantumMatrix()(data.flatten())  # ToQuantumMatrix from Listing 1
9circ.append(UnitaryGate(data_matrix, label="Input"), inp[0:4])
10# Using StatevectorSimulator from the Aer provider
11simulator = Aer.get_backend('statevector_simulator')
12result = execute(circ, simulator).result()
13statevector = result.get_statevector(circ)
14print(statevector)
Listing 2: Quantum-State Preparation in IBM Qiskit

In Listing 2, we give the code to initialize the quantum states using the unitary matrix converted from the original data in Listing 1 (see Line 18 there). In this code snippet, we first create a 4-qubit QuantumRegister “inp” (Line 6) and the quantum circuit (Line 7). Then, we convert the input data to data_matrix (Line 8), which is employed to initialize the circuit using the UnitaryGate function from qiskit.extensions (Line 9). Finally, from Line 10 to Line 14, we output the states of all qubits to verify correctness.
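As an alternative to appending the full 16*16 unitary, the initialization function mentioned above can also be given the amplitude vector directly. The following is a minimal sketch, assuming data_matrix from Listing 2 is available:

import numpy as np
from qiskit import QuantumRegister, QuantumCircuit

inp2 = QuantumRegister(4, "in_qbit")
circ2 = QuantumCircuit(inp2)
amplitudes = np.asarray(data_matrix)[:, 0]  # first column of the unitary, i.e., the encoded data
circ2.initialize(amplitudes, inp2)          # Qiskit synthesizes the preparation circuit internally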

3.3. U_N: Neural Computation

Now we have encoded the image data (16 inputs) onto 4 qubits. The next step is to perform the neural computation, that is, the weighted sum followed by a quadratic activation function, using the given binary (+1/-1) weights. Neural computation is the key component in a quantum machine learning implementation. To clearly introduce this component, we first consider the computation of the hidden layer, which can be further divided into two stages: (1) multiplying inputs and weights, and (2) applying the quadratic function to the weighted sum. We then present the computation of the output layer to obtain the final results.

Computation of one neuron in the hidden layer

Stage 1: multiplying inputs and weights. Since the weights are given, they are pre-determined, and we can use fixed quantum gates to apply the weights to the inputs. The quantum gates applied here include the X gate and the controlled-Z gate with 3 control qubits (the cccz function below). The function of such a 3-controlled-Z gate is to flip the sign of the state |1111⟩, and the function of the X gate is to swap the amplitude of one state to another state.

For example, suppose the weight for the state |0011⟩ is -1. We apply it to the input in three steps. First, we swap the amplitude of state |0011⟩ to state |1111⟩ using two X gates on the first two qubits. Second, we apply the 3-controlled-Z gate to flip the sign of state |1111⟩. Third, we swap the amplitude of state |1111⟩ back to state |0011⟩ using two X gates on the first two qubits. We can therefore traverse all weights and apply these three steps to flip the signs of the corresponding states. Kindly note that since the non-linear function is quadratic, if the number of -1 weights is larger than the number of +1 weights, we can flip the signs of all weights to minimize the number of gates placed in the circuit.

1def cccz(circ, q1, q2, q3, q4, aux1, aux2):
2    # Apply a Z gate to q4, controlled by the 3 qubits q1, q2, q3
3    circ.ccx(q1, q2, aux1)
4    circ.ccx(q3, aux1, aux2)
5    circ.cz(aux2, q4)
6    # cleaning the aux bits
7    circ.ccx(q3, aux1, aux2)
8    circ.ccx(q1, q2, aux1)
9    return circ
10def neg_weight_gate(circ,qubits,aux,state):
11    for idx in range(len(state)):
12        if state[idx]=='0':
13            circ.x(qubits[idx])
14    cccz(circ,qubits[0],qubits[1],qubits[2],qubits[3],aux[0],aux[1])
15    for idx in range(len(state)):
16        if state[idx]=='0':
17            circ.x(qubits[idx])
18# input: weight vector, weight_1_1
19aux = QuantumRegister(2,"aux_qbit")
20circ.add_register(aux)
21if weight_1_1.sum()<0:
22    weight_1_1 = weight_1_1*-1
23for idx in range(weight_1_1.flatten().size()[0]):
24    if weight_1_1[idx]==-1:
25        state = "{0:b}".format(idx).zfill(4)
26        neg_weight_gate(circ,inp,aux,state)
27        circ.barrier()
28print(circ)
Listing 3: Multiplying inputs and weights on quantum

Listing 3 demonstrates the procedure of multiplying inputs and weights. In the listing, the function cccz uses basic quantum logic gates to realize the controlled-Z gate with 3 control qubits. The basic gates involved are the Toffoli gate (i.e., CCX) and the controlled-Z gate (i.e., CZ). Since this function needs auxiliary (a.k.a. ancilla) qubits, we include 2 additional qubits (i.e., aux) in the quantum circuit (i.e., circ), as shown in Lines 19-20.

The function neg_weight_gate flips the sign of the given state, applying the 3-step process described above. Lines 11-13 complete the first step, swapping the amplitude of the given state to the state |1111⟩. Then, the cccz gate is applied to complete the second step (Line 14). Finally, from Line 15 to Line 17, the amplitude is swapped back to the given state.

With the above two functions, we traverse the weights and assign the sign to each state in Lines 21-27. Kindly note that after this operation, the state vector has changed from the initial encoded state to one in which each state's amplitude carries the sign of its weight.
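As a sanity check (a sketch; it assumes circ, weight_1_1, and the registers from Listings 2-3 are in scope), one can simulate the statevector after Stage 1 and confirm that each amplitude's sign now matches its weight:

from qiskit import Aer, execute

state = execute(circ, Aer.get_backend('statevector_simulator')).result().get_statevector(circ)
# The two ancilla qubits are restored to |0>, so the first 16 entries hold the
# input-register amplitudes; each sign should match the corresponding weight,
# up to Qiskit's qubit-ordering convention for the state index.
for idx in range(16):
    print(idx, state[idx])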

Stage 2: applying a quadratic function to the weighted sum. This stage also takes 3 steps. In the first step, we apply Hadamard (H) gates on all qubits to accumulate all states into the zero state. In the second step, we swap the amplitudes of the zero state |0000⟩ and the one state |1111⟩ using X gates. Finally, the last step applies the N-controlled-X gate to extract the amplitude onto one output qubit, whose probability of being measured as |1⟩ is equal to the square of the weighted sum.

In the first step, the H gates can be applied to accumulate the amplitudes of the states because the first row of H^{⊗N} is [1, 1, ..., 1]/sqrt(2^N), and applying H^{⊗N} performs the multiplication between this matrix and the state vector. As a result, the amplitude of the zero state |0...0⟩ becomes the weighted sum with a coefficient of 1/sqrt(2^N).
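The following short numerical check (an illustration, not from the original text) confirms this property for N = 4:

import numpy as np

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H4 = H1
for _ in range(3):
    H4 = np.kron(H4, H1)                 # the 4-qubit Hadamard H x H x H x H
print(H4[0])                             # first row: every entry is 1/4 = 1/sqrt(2^4)

amps = np.random.randn(16)
amps /= np.linalg.norm(amps)             # a normalized, already-weighted state
print((H4 @ amps)[0], amps.sum() / 4.0)  # the two values coincide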

1def ccccx(circ, q1, q2, q3, q4, q5, aux1, aux2):
2    circ.ccx(q1, q2, aux1)
3    circ.ccx(q3, q4, aux2)
4    circ.ccx(aux2, aux1, q5)
5    # cleaning the aux bits
6    circ.ccx(q3, q4, aux2)
7    circ.ccx(q1, q2, aux1)
8    return circ
9# input: circ after stage 1
10hidden_neuron = QuantumRegister(1,"out_qbit")
11circ.add_register(hidden_neuron)
12circ.h(inp)
13circ.x(inp)
14ccccx(circ,inp[0],inp[1],inp[2],inp[3],hidden_neuron,aux[0],aux[1])
Listing 4: Applying quadratic function on the weighted sum

Listing 4 demonstrates the implementation of the quadratic function on the weighted sum in Qiskit. In the listing, the function ccccx builds on the basic Toffoli gate (i.e., CCX) to implement a 4-controlled-X gate; together with the X gates in Line 13, which swap the amplitudes of the zero state |0000⟩ and the one state |1111⟩, it extracts the accumulated amplitude onto the output qubit. In Line 14, hidden_neuron is an additional output qubit in the quantum circuit (i.e., circ) that holds the result of the neural computation; it is added in Lines 10-11.

For a neural network with M neurons in the hidden layer, there are M sets of weights. We can apply the above neural computation to each set of weights to obtain M output qubits, as sketched below.
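The sketch below illustrates this repetition. The helper names apply_stage1 and apply_stage2 are hypothetical wrappers around the code in Listings 3 and 4, and weights is a hypothetical list of the M weight vectors; each neuron receives its own freshly prepared copy of the inputs, since the H gates in Stage 2 consume the input encoding.

M = 2  # hidden-layer width in this case study
hidden_neurons = QuantumRegister(M, "hidden_qbits")
circ.add_register(hidden_neurons)
for n in range(M):
    inp_n = QuantumRegister(4, "in_qbit_%d" % n)
    circ.add_register(inp_n)
    circ.append(UnitaryGate(data_matrix, label="Input"), inp_n[0:4])  # U_P, as in Listing 2
    apply_stage1(circ, inp_n, aux, weights[n])         # sign flips, as in Listing 3
    apply_stage2(circ, inp_n, aux, hidden_neurons[n])  # quadratic function, as in Listing 4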

Computation of one neuron in the output layer

With these output qubits, we have two choices: (1) measure them, go back to the classical computer, encode the measured outputs onto qubits again, and repeat the above computations for the next layer to obtain the final results; or (2) continue to use these qubits to directly compute the outputs. In the latter case, the fundamental computation needs to be changed to multiplication between random variables, because the data associated with each qubit represents the probability of that qubit being in the |1⟩ state.

In the following, we demonstrate the implementation of the second choice (for the fundamental details, please refer to (Jiang et al., 2020b; Tacchino et al., 2020)). In this example, we follow the network structure with 2 neurons in the hidden layer. In addition, we consider only one parameter for the normalization function, using one additional qubit for each output neuron. Let hidden_neurons be the outputs of the 2 neurons in the hidden layer; let weight_2_1 be the weights for the first neuron in the output layer; and let norm_flag_1 and norm_para_1 be the normalization-related parameters for that output neuron. Then, we have the following implementation.

1# Additional registers
2inter_q_1 = QuantumRegister(1,"inter_q_1_qbits")
3norm_q_1 = QuantumRegister(1,"norm_q_1_qbits")
4out_q_1 = QuantumRegister(1,"out_q_1_qbits")
5circ.add_register(inter_q_1,norm_q_1,out_q_1)
6circ.barrier()
7# Input and weight multiplication
8if weight_2_1.sum()<0:
9    weight_2_1 = weight_2_1*-1
10idx = 0
11for idx in range(weight_2_1.flatten().size()[0]):
12    if weight_2_1[idx]==-1:
13        circ.x(hidden_neurons[idx])
14circ.h(inter_q_1)
15circ.cz(hidden_neurons[0],inter_q_1)
16circ.x(inter_q_1)
17circ.cz(hidden_neurons[1],inter_q_1)
18circ.x(inter_q_1)
19# quadratic function on weighted sum
20circ.h(inter_q_1)
21circ.x(inter_q_1)
22circ.barrier()
23# normalization for two cases
24norm_init_rad = float(norm_para_1.sqrt().arcsin()*2)
25circ.ry(norm_init_rad,norm_q_1)
26if norm_flag_1:
27    circ.cx(inter_q_1,out_q_1)
28    circ.x(inter_q_1)
29    circ.ccx(inter_q_1,norm_q_1,out_q_1)
30else:
31    circ.ccx(inter_q_1,norm_q_1,out_q_1)
32# Recover the inputs for the next neuron's computation
33for idx in range(weight_2_1.flatten().size()[0]):
34    if weight_2_1[idx]==-1:
35        circ.x(hidden_neurons[idx])
Listing 5: Implementation of the second layer neural computation without measurement after the first layer

The above listing follows the same 2-stage pattern as the computation in the hidden layer. If we change all sub-indices from "_1" to "_2", we obtain the quantum circuit for the second output neuron.

3.4. Data Post-Processing

After all outputs are computed and stored in the out_q_1 and out_q_2 qubits, we can measure the output qubits, run a simulation or execute on an IBM Q processor, and finally obtain the classification results as follows.

1from qiskit import IBMQ, Aer, execute
2from qiskit.tools.monitor import job_monitor
3def fire_ibmq(circuit,shots,Simulation = False,backend_name='ibmq_essex'):
4    count_set = []
5    if not Simulation:
6        provider = IBMQ.get_provider('ibm-q-academic')
7        backend = provider.get_backend(backend_name)
8    else:
9        backend = Aer.get_backend('qasm_simulator')
10    job_ibm_q = execute(circuit, backend, shots=shots)
11    job_monitor(job_ibm_q)
12    result_ibm_q = job_ibm_q.result()
13    counts = result_ibm_q.get_counts()
14    return counts
15
16def analyze(counts):
17    mycount = {}
18    for i in range(2):
19        mycount[i] = 0
20    for k,v in counts.items():
21        bits = len(k)
22        for i in range(bits):
23            if k[bits-1-i] == "1":
24                if i in mycount.keys():
25                    mycount[i] += v
26                else:
27                    mycount[i] = v
28    return mycount,bits
29
30qc_shots=8192
31counts = fire_ibmq(circ,qc_shots,True)
32(mycount,bits) = analyze(counts)
33class_prob=[]
34for b in range(bits):
35    class_prob.append(float(mycount[b])/qc_shots)
36class_prob.index(max(class_prob))
Listing 6: Extract the classification results

Listing 6 demonstrates the above three tasks. The fire_ibmq function executes the constructed circuit either in simulation or on a given IBM Q processor backend. The parameter “shots” defines the number of times the circuit is executed; the counts for each state are then returned. In this implementation, the probability of each qubit (instead of each state) gives the probability of choosing the corresponding class. Therefore, we create the “analyze” function to obtain the probability for each qubit. Finally, we obtain the classification result by extracting the index of the maximum probability in the “class_prob” list.

Kindly note that Listing 6 can also be applied to hybrid quantum-classical computing.
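For illustration, here is a small worked example (with hypothetical counts) of how the analyze function aggregates per-state counts into per-qubit counts:

counts = {'101': 100, '011': 50}  # hypothetical measurement counts over 3 qubits
mycount, bits = analyze(counts)   # analyze as defined in Listing 6
# qubit 0 (rightmost bit) is '1' in both '101' and '011' -> 150
# qubit 1 is '1' only in '011'                           -> 50
# qubit 2 is '1' only in '101'                           -> 100
print(mycount, bits)              # {0: 150, 1: 50, 2: 100} 3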

4. Insights

From this study of implementing neural networks on quantum circuits, we draw several insights on achieving quantum advantages, listed as follows.

  • Data encoding: this case study encodes 2^N data values onto N qubits, which provides the opportunity to achieve quantum advantage when conducting inference for each input. An alternative is to encode N data values onto N qubits; however, considering that every data value needs to be operated on during the neural computation, such an encoding approach can hardly achieve quantum advantage.

  • Quantum-state preparation: by encoding 2^N data values onto N qubits, we can achieve quantum advantage only if the quantum-state preparation can be conducted efficiently, i.e., with a complexity that is polynomial in the number of qubits N rather than linear in the amount of data 2^N.

  • Quantum computing-based neural computation: neural computation can also become the performance bottleneck. Using the design in Listing 3, which flips one sign at a time, up to O(2^N) groups of gates are required in the worst case. To overcome this, (Jiang et al., 2020b) proposed a co-design approach that reduces the number of gates to be polynomial in N.

5. Conclusion

This work demonstrates a framework for implementing neural networks on quantum circuits. It is composed of three main components: data pre-processing, neural computation acceleration, and data post-processing. In this workflow, the data is first encoded into quantum states and then operated on to complete the operations of a neural network. The source code can be found at https://github.com/weiwenjiang/QML_tutorial.

Acknowledgements

This work is partially supported by IBM and University of Notre Dame (IBM-ND) Quantum program, and in part by the IBM-ILLINOIS Center for Cognitive Computing Systems Research.

References