A review of Quantum Neural Networks: Methods, Models, Dilemma

09/04/2021
by Renxin Zhao, et al.

The rapid development of quantum computing hardware has laid the foundation for the realization of QNN. Due to quantum properties, QNN offers higher storage capacity and computational efficiency than its classical counterparts. This article reviews the development of QNN over the past six years in three parts: implementation methods, quantum circuit models, and difficulties faced. The first part, implementation methods, mainly covers the underlying algorithms and theoretical frameworks for constructing QNN models, such as VQA. The second part introduces several quantum circuit models of QNN, including QBM, QCVNN, and so on. The third part describes some of the main open problems currently encountered. In short, this field is still in the exploratory stage, full of fascination and practical significance.


I Introduction

Since Feynman first proposed the concept of quantum computers [1], technology giants and startups such as Google, IBM, and Microsoft have competed with each other, eager to make it a reality. In 2019, IBM launched a quantum processor with 53 qubits that can be programmed by external researchers. In the same year, Google announced that its 53-qubit chip called Sycamore had successfully achieved "Quantum Supremacy". According to the report, Sycamore completed in 200 seconds a calculation that would take the world's fastest supercomputer, IBM Summit, 10,000 years; that is, quantum computers can complete tasks that are almost impossible on traditional computers [2]. However, this claim was quickly questioned by competitors, including IBM. IBM stated bluntly that, under the strict definition of Quantum Supremacy, which means surpassing the computing power of all traditional computers, Google's goal had not been achieved. IBM therefore published an article arguing that Google's claim that traditional computers would need 10,000 years is wrong [3]; by IBM's own estimate the task would take only 2.5 days, and IBM commented that Google had intensified the excessive hype around the current state of quantum technology [3]. There are still nearly ten years until 2030, often called the first year of commercial quantum computing, and the quantum computers that can be produced at this stage all fall within the scope of NISQ. NISQ refers to the fact that recent quantum processors offer relatively few qubits and quantum control that is susceptible to noise, yet they already provide stable computing power and the ability to suppress decoherence [4]. In short, quantum computers are the hardware foundation for the development of QNN.

QNN was first proposed in [5] and has been widely used in image processing [6]-[8], speech recognition [9][10], disease prediction [11][12], and other fields. There is no unified definition of QNN in the academic community. QNN is a promising computing paradigm that combines quantum computing and neuroscience [13]. Specifically, QNN establishes a connection between the neural network model and quantum theory through the analogy between two-level qubits, the basic unit of quantum computing, and the active/resting states in the complex signal-transmission process of nerve cells [14]. At the current stage, it can also be defined as a sub-category of VQA, consisting of quantum circuits containing parameterized gate operations [15][16]. Obviously, the definition of QNN can be completely different depending on the construction method [17]-[22]. In order to further clarify the precise meaning of QNN, [23] puts forward the following three preconditions: (1) the initial state of the system can encode any binary string; (2) the calculation process reflects the computing principles of neural networks; (3) the evolution of the system is based on quantum effects and conforms to the basic theories of quantum mechanics. However, most currently proposed QNN models are discussed only at the level of mathematical calculation, and suffer from problems such as unclear physical feasibility, evolution that does not follow quantum effects, or a lack of genuine neural network characteristics. As a result, a true QNN has not yet been realized [23].

From the perspective of historical development, QNN has roughly gone through two main stages: the early stage and the near-term quantum processor stage. In the early days, QNN could not be implemented on quantum computers due to hardware limitations. Most models were proposed based on physical processes related to quantum computing and did not describe concrete quantum processor structures such as qubits and quantum circuits. Typical representatives are QNN based on the many-worlds interpretation [24], QNN based on interacting quantum dots [25], QNN based on simple dissipative quantum gates [26], QNN analogous to CNN [27], and so on. Compared with earlier results, the recently proposed QNN has a broader meaning: the term is more often used to refer to a computational model with a networked structure and trainable parameters that is realized by a quantum circuit or quantum system [28]. In addition, recent research on QNN models also emphasizes physical feasibility. For the near-term quantum processor stage, emerging models such as QBM [29]-[31], QCVNN [32]-[34], QGAN [35][36], QGNN [37]-[39], QRNN [40][41], QTNN [42], QP [43]-[49], etc. [50]-[60] will be introduced in subsequent sections.

QNN enjoys surprising quantum advantages. But at this stage, the contradiction between the dissipative dynamics of neural computing and the unitary dynamics of quantum computing is still a topic worthy of in-depth study [61]-[67]. Furthermore, current QNNs can only be trained on small, low-dimensional samples, and their prediction accuracy and generalization performance on large data sets remain open problems [68][69]. In addition, the barren plateau phenomenon easily arises in the exponentially large parameter space of QNN [70]-[73].

Finally, the main work of this article is summarized as follows. Section II introduces the construction methods of QNN, giving readers a preliminary understanding of how QNNs are formed. Section III introduces the QNN quantum circuit models of the past six years. Section IV introduces some open issues and related attempts.

II Some Construction Methods of QNN

Many related reviews are enthusiastic about QNN models, but they do not systematically explain how to build a QNN, which is an interesting topic in itself. In fact, it is extremely difficult to systematically summarize all the methods from 1995 to 2021, so this section mainly reviews the relatively mainstream methods of the past six years.

II-A VQA

VQC is a quantum circuit of rotation gates with free parameters, used to approximate, optimize, and classify in various numerical tasks. An algorithm based on VQC is called a VQA, which is a classical-quantum hybrid algorithm because the parameter optimization process usually takes place on a classical computer [15].

The similarity between VQC and CNN is that both approximate an objective function by learning parameters, but VQC has quantum characteristics. That is, all quantum gate operations are reversible linear operations, and quantum circuits use entanglement layers instead of activation functions to achieve multilayer structures. Therefore, VQC has been used to replace existing CNNs [16][33][41].

A typical example: [16] defines QNN as a subset of VQA and gives a general expression of the QNN quantum circuit (see Fig. 1).

Fig. 1: QNN based on VQA framework modified from [16].

In Fig. 1, the first step is a quantum feature map that encodes the input information, usually classical data, into the S-qubit Hilbert space. This step realizes the transition from a classical state to a quantum state. Subsequently, the VQA containing parameterized gate operations optimized for the specific task evolves the obtained quantum state into a new state, in a way similar to the classical machine learning process [16]. After the action of the VQA, the final output of the QNN is extracted by quantum measurement. Before the information is sent to the loss function, the measurement results are usually converted into corresponding labels or predictions through classical post-processing. The purpose of this is to find the VQA parameters that minimize the loss function.
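To make this pipeline concrete, here is a minimal numpy sketch (not the construction of [16]) of a one-qubit QNN in the VQA style: the input is encoded by a rotation, a parameterized rotation serves as the variational layer, a Pauli-Z expectation is the measurement, and the parameter-shift rule supplies gradients for a classical optimizer. The toy task and all names are illustrative assumptions.

```python
import numpy as np

# Single-qubit Ry gate as a 2x2 unitary
def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

Z = np.diag([1.0, -1.0])

def qnn_output(x, theta):
    """Encode input x by Ry(x), apply variational layer Ry(theta),
    and return the Pauli-Z expectation value in [-1, 1]."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return state @ Z @ state  # real for real-amplitude states

def loss(xs, ys, theta):
    return np.mean([(qnn_output(x, theta) - y) ** 2 for x, y in zip(xs, ys)])

# Toy data with labels in {-1, +1} (illustrative assumption)
xs = np.array([0.1, 0.5, 2.0, 2.5])
ys = np.array([1.0, 1.0, -1.0, -1.0])

theta, lr = 0.3, 0.2
for step in range(100):
    # Parameter-shift rule: d<Z>/dtheta = (f(theta+pi/2) - f(theta-pi/2)) / 2
    grad = np.mean([2 * (qnn_output(x, theta) - y)
                    * (qnn_output(x, theta + np.pi / 2)
                       - qnn_output(x, theta - np.pi / 2)) / 2
                    for x, y in zip(xs, ys)])
    theta -= lr * grad
print("trained theta:", theta, "loss:", loss(xs, ys, theta))
```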

The VQA framework is one of the mainstream methods for designing QNN. But it also inevitably inherits some of VQA's own shortcomings. For example, the QNN framework proposed by [16] faces, in some cases, the barren plateau problem, for which [16] gives no specific solution. Additionally, [16] does not investigate the measurement efficiency of the quantum circuits. Therefore, QNN design under the VQA framework is still worth exploring.

II-B CV

The idea of CV comes from [28]. CV is a method for encoding quantum information with continuous degrees of freedom, and its concrete form is a VQC with a layered structure of continuously parameterized gates. This structure has two outstanding features: the affine transformation realized by Gaussian gates and the nonlinear activation function realized by non-Gaussian gates. Based on this special structure of the CV framework, highly nonlinear transformations can be encoded while fully retaining unitarity. The QNN framework based on CV is shown in Fig. 2.

Fig. 2: A single layer QNN based on CV framework modified from [28].

Fig. 2 shows the l-th layer of the QNN model based on the CV framework. The universal N-port linear optical interferometers U1 and U2 contain rotation gates as well as beamsplitters; S denotes the squeeze operators, the collective displacement is marked D, and non-Gaussian gates such as cubic phase or Kerr gates are represented by Φ. The collective gate variables are the free parameters of the network, some of which can be held at fixed values. The first interferometer U1, the local squeeze gates S, the second interferometer U2, and the local displacements D realize the affine transformation, and the final local non-Gaussian gate Φ realizes the nonlinear transformation.
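As a rough illustration of this layer structure, the following numpy sketch tracks only the vector of phase-space mean values of the modes: the Gaussian part (interferometer, squeezing, interferometer, displacement) acts as an affine map, and tanh stands in for the non-Gaussian gate. This is an assumption-laden caricature of [28], not a full CV simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n):
    """Orthogonal matrix standing in for a passive linear interferometer."""
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

def cv_layer(x, U1, r, U2, d):
    """One CV-style layer on a vector of quadrature means:
    interferometer U1 -> squeezing (per-mode scaling by e^r) ->
    interferometer U2 -> displacement d -> nonlinearity (non-Gaussian stand-in)."""
    S = np.diag(np.exp(r))           # squeezing as per-mode scaling
    affine = U2 @ S @ U1 @ x + d     # the Gaussian part is affine
    return np.tanh(affine)           # stand-in for the non-Gaussian gate

n = 4
x = rng.normal(size=n)               # toy "input modes"
out = cv_layer(x, random_orthogonal(n), rng.normal(size=n) * 0.1,
               random_orthogonal(n), rng.normal(size=n))
print(out)
```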

Being able to handle continuous variables is the bright spot of the CV-based QNN model, but one difficulty is how to realize the non-Gaussian gate while ensuring sufficient determinism and tunability. In this regard, [28] does not give any further explanation. Moreover, [28] only performs numerical experiments; there is no practical application case yet.

II-C Swap Test and Phase Estimation

[17] and [18] both suggest building QNN via the swap test and phase estimation. In the implementation scheme of [17], a single qubit controls the entire input information of the neuron during the swap test, which is not conducive to physical realization. Unlike [17], the quantum neuron in [18] adopts a design of multi-phase estimation and a multi-qubit swap test.

Swap Test The swap test is first proposed in [19] (see Fig. 3).

Fig. 3: Quantum circuit of a swap test modified from [18].

The purpose of this circuit is to obtain the square of the inner product of the states |ψ⟩ and |φ⟩ by measuring the probability that the ancilla qubit is found in the state |0⟩ or |1⟩. Fig. 3 has two Hadamard gates and a controlled swap operator. The initial state |0⟩|ψ⟩|φ⟩ changes to (1/√2)(|0⟩+|1⟩)|ψ⟩|φ⟩ after the first Hadamard gate. Subsequently, it is transformed into (1/√2)(|0⟩|ψ⟩|φ⟩+|1⟩|φ⟩|ψ⟩) under the action of the controlled swap operator. Finally, applying the Hadamard gate again yields (1/2)[|0⟩(|ψ⟩|φ⟩+|φ⟩|ψ⟩)+|1⟩(|ψ⟩|φ⟩−|φ⟩|ψ⟩)]. Performing a projective measurement on the ancilla qubit, the probabilities of |0⟩ and |1⟩ are P(0) = (1+|⟨ψ|φ⟩|²)/2 and P(1) = (1−|⟨ψ|φ⟩|²)/2, respectively. Therefore, the square of the inner product of |ψ⟩ and |φ⟩ can be expressed as |⟨ψ|φ⟩|² = 2P(0)−1, where P(0) is the probability of finding the ancilla qubit in the state |0⟩.
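The swap-test identity above can be checked directly with a small numpy simulation of the three-qubit circuit (ancilla first); the code below is a self-contained sketch rather than code from [18] or [19].

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_qubit():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

psi, phi = rand_qubit(), rand_qubit()

# Full 3-qubit state |0>|psi>|phi>, ancilla first (8 amplitudes)
state = np.kron(np.array([1, 0]), np.kron(psi, phi)).astype(complex)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I4 = np.eye(4)
H_anc = np.kron(H, I4)               # Hadamard on the ancilla

# Controlled-SWAP: swap the two data qubits when the ancilla is |1>
SWAP = np.eye(4)[[0, 2, 1, 3]]
CSWAP = np.block([[I4, np.zeros((4, 4))], [np.zeros((4, 4)), SWAP]])

state = H_anc @ CSWAP @ H_anc @ state
p0 = np.sum(np.abs(state[:4]) ** 2)  # probability ancilla measured |0>

print(2 * p0 - 1)                     # equals |<psi|phi>|^2
print(abs(np.vdot(psi, phi)) ** 2)
```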

Phase Estimation Assume that |u⟩ is an eigenvector of the unitary operator U with eigenvalue e^{2πiφ}, where the phase φ is unknown. The goal is to obtain an estimate of φ through the phase estimation algorithm. As can be seen from Fig. 4, there are two quantum registers.

Fig. 4: Quantum phase estimation modified from [18].

The first register contains t qubits all initialized to |0⟩, and the second starts in the state |u⟩. The phase estimation algorithm is implemented in three steps. First, the circuit applies Hadamard transformations to the first register and then applies controlled-U gates to the second register, where U is raised to successive powers of two. Second, the inverse quantum Fourier transform, denoted IQFT in Fig. 4, is applied to the first register. Third, the state of the first register is read out by measuring it in the computational basis.
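A minimal numpy sketch of these three steps (using phase kickback to write the phase onto the first register, then an inverse Fourier matrix) is given below; the choice t = 5 and an exactly representable phase are illustrative assumptions.

```python
import numpy as np

t = 5                      # counting qubits
phi = 0.34375              # true phase (= 11/32, exactly representable with t=5)

# Step 1: Hadamards + controlled-U^(2^j) on eigenstate |u> leave the first
# register in (1/sqrt(2^t)) * sum_k exp(2*pi*i*phi*k) |k>  (phase kickback).
N = 2 ** t
register = np.exp(2j * np.pi * phi * np.arange(N)) / np.sqrt(N)

# Step 2: inverse quantum Fourier transform on the first register.
k = np.arange(N)
IQFT = np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)
register = IQFT @ register

# Step 3: measure in the computational basis; the most likely outcome
# encodes the best t-bit approximation of phi.
probs = np.abs(register) ** 2
estimate = np.argmax(probs) / N
print("estimated phase:", estimate, "true phase:", phi)
```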

The QNN framework based on the swap test and phase estimation proposed by [18] is shown in Fig. 5. This framework converts a numerical sample into a quantum superposition state, obtains the inner product of the sample and the weights through the swap test, and then maps the obtained inner product to the output of the quantum neuron through phase estimation. Reportedly, since the framework does not need to record or store any measurement results, it does not waste classical computing resources. Although the model is relatively feasible to implement, with multiple inputs the input space grows exponentially; whether a barren plateau appears is a question for further analysis.

Fig. 5: QNN based on swap test and phase estimation framework modified from [18].

II-D RUS

[20] uses the RUS quantum circuit [21] to achieve an approximate threshold activation function on the phase of a qubit while fully maintaining the unitary character of quantum evolution (see Fig. 6).

Fig. 6: Repeat-until-success (RUS) circuit for realizing a rotation with angle q(θ), modified from [20].

A classical neuron can be seen as a function that takes n input variables x1, …, xn and maps them to the output value o = f(w·x + b), where w and b are the synaptic weights and bias, respectively, and f is a nonlinear activation function such as a step, sigmoid, or tanh function. We constrain the output value o to lie between -1 and 1, that is, o ∈ [-1, 1].

In order to map the above to the quantum framework, [20] introduces some necessary notions:

1. the qubit state |ψ(θ)⟩ = cos θ|0⟩ + sin θ|1⟩, where θ is a scalar;

2. the rotation Ry(2θ), a quantum operation generated by the Pauli Y operator.

Extreme cases such as o = -1 and o = 1 are mapped to the quantum states |0⟩ and |1⟩, respectively, and |ψ(θ)⟩ is regarded as a quantum neuron superposed from |0⟩ and |1⟩. Taking the input register as the control, a rotation Ry(2w_i) is applied to the ancilla qubit conditioned on the i-th input qubit, followed by Ry(2b) on the ancilla; this is equivalent to applying Ry(2θ) with θ = w·x + b to the ancilla, conditioned on the state of the input neurons [20].

Fig. 6 depicts a circuit that carries out Ry(2q(θ)), where q(θ) = arctan(tan²θ) acts as a nonlinear activation function. The measurement result of the ancilla qubit reveals the effect of the RUS circuit on the output qubit. If the measurement returns 0, the rotation Ry(2q(θ)) has been successfully applied to the output qubit. If instead it returns 1, a rotation Ry(π/2) has been applied to the output qubit, and Ry(-π/2) is used to offset this rotation. The circuit then repeats until 0 is detected on the ancilla qubit, which is why it is called RUS.
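The following numpy sketch simulates one common form of this RUS block (Ry(2θ) on the ancilla, a controlled iY on the output, Ry(-2θ), then measurement); sign and gate conventions vary between presentations, so treat this as an assumption-laden illustration of the repeat-until-success logic rather than the exact circuit of [20].

```python
import numpy as np

rng = np.random.default_rng(2)

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

iY = np.array([[0.0, 1.0], [-1.0, 0.0]])   # i * Pauli-Y (real matrix)
I2 = np.eye(2)

def controlled(U):
    """Two-qubit gate applying U on the target when the control is |1>."""
    return np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), U]])

def rus_rotation(theta, out_state):
    """Repeat-until-success: returns the output qubit state once the
    ancilla is measured as 0, plus the number of attempts."""
    attempts = 0
    while True:
        attempts += 1
        state = np.kron(np.array([1.0, 0.0]), out_state)   # ancilla |0>
        U = (np.kron(ry(-2 * theta), I2) @ controlled(iY)
             @ np.kron(ry(2 * theta), I2))
        state = U @ state
        p0 = np.sum(state[:2] ** 2)
        if rng.random() < p0:                     # ancilla 0: success
            return state[:2] / np.sqrt(p0), attempts
        out_state = state[2:] / np.sqrt(1 - p0)   # ancilla 1: stray Ry(pi/2)
        out_state = ry(-np.pi / 2) @ out_state    # undo it, then repeat

theta = 0.4
final, n = rus_rotation(theta, np.array([1.0, 0.0]))
q = np.arctan(np.tan(theta) ** 2)                 # effective rotation angle
# Success implements a rotation by 2*q (up to a global sign / convention).
print(final, "vs", ry(-2 * q) @ np.array([1.0, 0.0]), "attempts:", n)
```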

The highlight of [20] is the use of quantum circuits to approximate nonlinear functions, that is, solving nonlinear problems by linear means. This unifies nonlinear neural computing and linear quantum computing and meets the basic requirements of [23]. It is also worth mentioning that RUS is a flexible approach: quantum neurons constructed with RUS have the potential to support various machine learning paradigms, including supervised learning, reinforcement learning, and so on.

II-E Quantum Generalization

[22] puts forward a quantum generalization method for CNN, in which each perceptron is reformulated as a reversible, unitary transformation in the QNN. Numerical simulations demonstrate that gradient descent can train the network on a given objective function, where the purpose of training is to minimize the difference between the expected output and the output of the quantum circuit. One feasible physical approach is quantum photonics.

Although the theory of [22] is universal, it ignores the nonlinear problem when discussing the quantum generalization of QNN.

III Advances of QNN Models for Near-term Quantum Processors

III-A QBM

[29] provides a new idea for the realization of QBM: the input is represented by quantum states, quantum gates are employed for data training and parameter updating, and by modeling the quantum circuits of the visible and hidden layers, the globally optimal solution can be found by the QRBM (see Fig. 7).

Fig. 7: The quantum circuit of QRBM modified from [29].

In Fig. 7, the visible layer variables are expressed as a quantum superposition of the form

(1)

and the hidden layer variables are denoted analogously, as

(2)

The quantum register has p qubits. The Hadamard gate in Fig. 7 is used for preprocessing [29]. The coefficients of the visible layer variables are adjusted through the phases of a series of quantum rotation gates. The quantum state of each variable is switched through CNOT gates, and the entire visible layer is summed into one qubit [29]. After passing through the Hadamard gate again, the quantum state of a qubit in the hidden layer is obtained to represent the output.

In recent years, QBM models based on variational quantum algorithms have also been proposed. [30] proposes a variational imaginary-time simulation on NISQ devices to realize BM learning. Unlike previous approaches that prepare a thermal-equilibrium state, it uses a pure state whose distribution mimics the thermal-equilibrium distribution, demonstrating that NISQ devices have the potential to be used effectively for BM learning. [31] prepares the Gibbs state and evaluates the analytical gradient of the loss function based on variational quantum imaginary-time evolution; numerical simulations and experiments on IBM Q demonstrate the quality of the variational Gibbs-state approximation. Compared with [30] and [31], [29] is a pioneering effort to realize QBM with quantum gates, exploring the appropriate number of hidden layers and testing gearbox pattern-recognition performance with different hidden layers.

III-B QCVNN

QCVNN has received extensive attention in the past three years. The term QCVNN is first mentioned in [32], where the input information is represented by qubits, trained under the CVNN framework, and the probability of a certain characteristic state is obtained by measurement. But [32] puts forward only a theoretical model, without a feasible quantum circuit realization.

[33] designs a quantum circuit model of QCVNN, which implements convolution and pooling transformations similar to CVNN for processing one-dimensional quantum data. The quantum circuit structure of QCVNN is shown in Fig. 8, comprising several repeated convolutional and pooling layers, as well as a fully connected layer.

Fig. 8: The structure of QCVNN modified from [33].

The convolutional layer imposes a finite-depth, quasi-local unitary transformation, and every unitary in the same convolutional layer shares the same parameters. The pooling layer measures some of the qubits and applies classically controlled unitary transformations on adjacent qubits, with the parameters of each unitary depending on the measurement results of the adjacent qubits. After multiple layers of convolution and pooling, when few qubits remain, a unitary transformation is applied as the fully connected layer, and designated qubits are measured to obtain the judgment of the network. Because each convolutional and pooling layer shares the unitary transformation of the same parameters, for n input qubits the network has only on the order of O(log n) parameters, so it can be efficiently trained and deployed on near-term quantum computers. In addition, the pooling layer reduces the dimensionality of the data and introduces a certain nonlinearity through partial measurement. It is worth noting that as the convolutional layers deepen, the number of qubits spanned by a convolution operation also grows; therefore, deploying the QCVNN model on a quantum computer requires the ability to implement two-qubit gates and projective measurements at various distances.
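To make the layer structure concrete, here is a hedged numpy sketch of a 4-qubit toy in this spirit: a shared two-qubit unitary plays the convolution, pooling measures one qubit of each pair and conditionally applies X to its neighbor, and a final two-qubit unitary acts as the fully connected layer. Random unitaries stand in for trained parameters; nothing here reproduces the circuits of [33].

```python
import numpy as np

rng = np.random.default_rng(3)

def apply_gate(state, gate, targets, n):
    """Apply a (2^k x 2^k) gate to the target qubits of an n-qubit state."""
    k = len(targets)
    psi = np.moveaxis(state.reshape([2] * n), targets, list(range(k)))
    psi = (gate @ psi.reshape(2 ** k, -1)).reshape([2] * n)
    return np.moveaxis(psi, list(range(k)), targets).reshape(-1)

def measure(state, qubit, n):
    """Sample a computational-basis measurement of one qubit; return
    (outcome, collapsed renormalized state)."""
    psi = np.moveaxis(state.reshape([2] * n), qubit, 0).reshape(2, -1).copy()
    p0 = np.sum(np.abs(psi[0]) ** 2)
    outcome = 0 if rng.random() < p0 else 1
    psi[1 - outcome] = 0
    psi /= np.sqrt(p0 if outcome == 0 else 1 - p0)
    return outcome, np.moveaxis(psi.reshape([2] * n), 0, qubit).reshape(-1)

def random_unitary(dim):
    q, r = np.linalg.qr(rng.normal(size=(dim, dim))
                        + 1j * rng.normal(size=(dim, dim)))
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

n = 4
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0

U_conv = random_unitary(4)        # shared convolution unitary (trainable)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Convolution layer: the SAME two-qubit unitary on neighboring pairs.
for pair in [(0, 1), (1, 2), (2, 3)]:
    state = apply_gate(state, U_conv, list(pair), n)

# Pooling layer: measure qubits 0 and 2; conditionally flip their neighbors.
for measured, neighbor in [(0, 1), (2, 3)]:
    m, state = measure(state, measured, n)
    if m == 1:
        state = apply_gate(state, X, [neighbor], n)

# "Fully connected" readout on the remaining active qubits (1 and 3).
state = apply_gate(state, random_unitary(4), [1, 3], n)
m_out, _ = measure(state, 3, n)
print("classification outcome:", m_out)
```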

Fig. 9: The quantum circuit for the convolutional layer in deep QCVNN model modified from [34].

[34] introduces a deep QCVNN model for image recognition (see Fig. 9). The general working principle is that the input image is encoded into quantum states with basis encoding, and these quantum states then undergo a series of parameter-dependent unitary transformations realizing the quantum evolution process. Unlike CVNN, this model omits the pooling layer, retaining only the quantum convolutional layer and the quantum classification layer, and increases the convolution stride to perform sub-sampling. This removes the intermediate measurements and thereby quickly reduces the dimension of the generated feature maps. Finally, the corresponding classification label is acquired through quantum measurement. As for the details: in the quantum convolutional layer, quantum preparation of the input image and the related convolution kernels is performed by the QRAM algorithm; the quantum inner-product operation over the kernel's working area requires the support of a quantum multiplier and controlled Hadamard rotation operations; the conversion between amplitude encoding and basis encoding, as well as the nonlinear mapping, is handled by the quantum phase estimation algorithm; finally, by separating the desired state from the intermediate states, uncomputation is employed to obtain the input state of the next layer.

The deep QCVNN model demonstrates feasibility and efficiency in multi-class image recognition. However, the model restricts the input image to a limited size range; outside this range the image needs additional scaling, and for this image-scaling problem [34] does not give a good quantum solution. Furthermore, the choices of stride and kernel still need further investigation.

Comparing [33] and [34]: [33] only gives a QCVNN model for low-dimensional input, and although it mentions that the proposed model is extensible, it does not elaborate on the expansion steps. Relatively speaking, the input dimension discussed by the deep QCVNN is higher.

III-C QGAN

The concept of quantum generative adversarial learning can be traced back to [35], which discusses its operating efficiency in a variety of situations, such as whether the training data is classical or quantum, and whether the discriminator and generator run on quantum processors. Reportedly, when the training data is quantum, the quantum adversarial network may show an exponential advantage over its classical counterpart. However, [35] gives neither a specific quantum circuit scheme nor a rigorous mathematical derivation.

A quantum circuit version of GAN is proposed by [36]. The schematic diagram is shown in Fig. 10 and the structure diagram is shown in Fig. 11.

Fig. 10: The schematic diagram of QGAN modified from [36].
Fig. 11: The general structure diagram of QGAN modified from [36].

[36] assumes that there is a data source (S) which, given a label, outputs a density matrix on a register comprising n subsystems, namely

(3)

In [36], the essence of the Generator (G) is a VQC whose gates are parameterized by a vector of angles. Taking the label and an additional input state as inputs, G generates a quantum state

(4)

In (4), the generated state occupies a register containing n subsystems, similar to the real data [36]. The additional input state plays a dual role: on the one hand, it can be seen as an unstructured noise source that supplies entropy to the distribution of the generated data; on the other hand, it can be regarded as a control for G.

The training signal of G is given by the Discriminator (D), an independent quantum circuit parameterized by another vector of angles. The function of D is to judge whether a given input state comes from S or from G. Meanwhile, G continuously tries to deceive D into believing that its output is real (True, denoted T; "Fake" denoted F). If the input is from S, the output register of D should output T, otherwise F. D can also perform operations in an internal working area. To compel G to comply with the provided label, D is also given a copy of the unchanged label.

The optimization goal of QGAN can be described as the adversarial task

(5)

After clarifying G, D, and the optimization goal, [36] gives the general structure of QGAN, as shown in Fig. 11. The initial states are defined on the registers labeled Label RG, Out RG, and Bath RG, to which either S or the parameterized G can be applied. The initial resource states defined on the Out D, Bath D, and Label D registers, together with the information from S, are available to D. D announces whether the result is T or F in the Out D register, and the expected value of this output is proportional to the probability that D outputs T.

[36] proves the feasibility of QGAN's explicit quantum circuits through a simple numerical experiment. QGAN has broader representational capability than the classical version; for example, it can learn to generate encrypted data.
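The adversarial loop itself can be illustrated with a deliberately tiny numpy toy: a one-parameter generator state, a one-parameter measurement-based discriminator, and alternating finite-difference updates on the objective P(T|real) − P(T|fake). This is only to make the roles of G and D concrete; such min-max dynamics can oscillate, and the whole setup is an assumption, not the circuit of [36].

```python
import numpy as np

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

ket0 = np.array([1.0, 0.0])
real_state = ry(1.1) @ ket0          # fixed "source" state (toy stand-in for S)

def p_true(state, td):
    """Discriminator: rotate by Ry(td); call measurement outcome 0 'True'."""
    return (ry(td) @ state)[0] ** 2

def value(tg, td):
    """Adversarial objective: P(T | real) - P(T | generated)."""
    return p_true(real_state, td) - p_true(ry(tg) @ ket0, td)

tg, td, lr, eps = 0.0, 0.3, 0.1, 1e-4
for step in range(2000):
    # D ascends the objective, G descends it (tries to fool D).
    grad_d = (value(tg, td + eps) - value(tg, td - eps)) / (2 * eps)
    grad_g = (value(tg + eps, td) - value(tg - eps, td)) / (2 * eps)
    td += lr * grad_d
    tg -= lr * grad_g

print("P(T|real):", p_true(real_state, td))
print("P(T|fake):", p_true(ry(tg) @ ket0, td))  # ideally indistinguishable
```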

III-D QGNN

Quantum Spatial Graph Convolutional Neural Network (QSGCNN), a deep learning architecture that can learn end-to-end, was proposed by [37] to classify graphs of arbitrary size. The main idea is to transform each graph into a fixed-size vertex grid structure through transitive alignment between graphs, and to propagate the vertex features of the grid with the proposed quantum spatial graph convolution operation. Reportedly, the QSGCNN model not only retains the original graph features but also bridges the gap between the spatial graph convolutional layer and the traditional convolutional neural network layer, and can better distinguish different structures [37].

Quantum Walk Neural Network (QWNN), a graph neural network based on quantum random walks, constructs diffusion operators by learning quantum walks on the graph and then applies them to graph-structured data [38]. QWNN can adapt over the space of the entire graph or over the duration of the walk. The final diffusion matrix can be cached once learning has converged, enabling fast forward passes through the network. However, due to the repeated shift and coin operations in the learning process, this model is significantly slower than other models, and its space complexity is cited as a problem worthy of continued analysis.

The above models are theoretical frameworks. The quantum circuit model of QGNN is studied in [39], as shown in Fig. 12.

Fig. 12: A QGCN quantum circuit model modified from [39].

Quantum state preparation, quantum graph convolutional layers, quantum pooling layers, and quantum measurements constitute the quantum circuit model of Fig. 12. In the state preparation stage, the input data is efficiently encoded into a quantum state by the amplitude encoding method. A normalized classical vector can be represented by a quantum state as follows:

(6)

In the same way, a classical matrix satisfying the corresponding normalization condition can be encoded by expanding the Hilbert space accordingly. In the quantum graph convolutional layer, the constructed two-qubit unitary operation U realizes local connectivity. The number of quantum graph convolutional layers indicates the order of node aggregation, and the unitary operations of the same layer share the same parameters, reflecting parameter sharing. In the quantum pooling layer, quantum measurement is introduced to reduce the feature dimension, achieving the same effect as a classical pooling layer. Note, however, that not all qubits are measured, only a part of them; based on the measurement results, it is decided whether to perform unitary transformations on adjacent qubits. Finally, after multiple layers of convolution and pooling, the designated qubits are measured and the expected value is obtained. The results show that this structure can effectively capture node connectivity and learn hidden-layer representations of node features.
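Amplitude encoding as used around (6) amounts to treating a normalized 2^n-dimensional vector as the amplitudes of an n-qubit state, as in this minimal sketch (the feature vector is an illustrative assumption):

```python
import numpy as np

x = np.array([3.0, 1.0, 2.0, 1.0, 0.0, 1.0, 2.0, 0.0])  # 2^3 node features
state = x / np.linalg.norm(x)        # amplitude-encoded 3-qubit state

# Each amplitude corresponds to one computational basis state |i>;
# measuring returns |i> with probability x_i^2 / ||x||^2.
probs = state ** 2
print(probs, probs.sum())
```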

The model proposed by [39] can effectively deal with graph-level classification. Its four major structures effectively capture the connectivity of nodes, but currently only node information is used, and the characteristics of edges are not studied.

III-E QRNN

[40] constructs a parameterized quantum circuit similar to the RNN structure. Some qubits in the circuit are used to memorize past data, while the other qubits are measured and re-initialized at each time step to output predictions and encode new input data. The specific structure is shown in Fig. 13.

Fig. 13: Structure of QRNN for a single time step modified from [40].

In Fig. 13, there are two groups of qubits, denoted A and B. The qubits of group A are never measured; they retain past information. The qubits of group B are measured at every time step t and simultaneously re-initialized, to output a prediction and to input new data into the system. A time step consists of three parts: encoding, evolution, and measurement. In the encoding part, group B starts from its initial state and the training datum for step t is encoded into the quantum state of the group-B qubits, while the information about earlier inputs is kept in group A as the density matrix produced in the previous step. In the evolution part, a parameterized unitary operator acts on all the qubits, transferring information from group B to group A; the evolved reduced density matrices then describe groups A and B, respectively. In the measurement part, the expected values of a set of observables on group B are first measured to obtain

(7)

Then the expected value is passed through some function g to obtain the prediction. [40] points out that g can be chosen arbitrarily; for example, g can be a linear combination of the measured expectation values. Finally, the qubits in group B are re-initialized. After repeating these three parts over many time steps, the sequence of predictions is obtained, and the cost function L, which quantifies the difference between the training data and the predictions of the QRNN, is calculated. The parameters are optimized by a standard optimizer running on a classical computer to minimize L.
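A compact numpy sketch of one pass over a sequence, with one memory qubit (group A), one input/output qubit (group B), a fixed random unitary standing in for the trained evolution, and the function g omitted, might look as follows; this is an illustration of the recurrence, not the circuit of [40].

```python
import numpy as np

rng = np.random.default_rng(4)

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def random_unitary(dim):
    q, r = np.linalg.qr(rng.normal(size=(dim, dim))
                        + 1j * rng.normal(size=(dim, dim)))
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def ptrace_B(rho):
    """Trace out qubit B (second subsystem) of a two-qubit density matrix."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

U = random_unitary(4)                # parameterized evolution (fixed here)
rho_A = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # memory qubit

inputs = [0.2, 0.7, 1.3, 0.5]
predictions = []
for x in inputs:
    # Encoding: write x into group B, keep group A's density matrix.
    psi_B = ry(x) @ np.array([1.0, 0.0])
    rho = np.kron(rho_A, np.outer(psi_B, psi_B.conj()))
    # Evolution: entangle A and B, moving information into the memory.
    rho = U @ rho @ U.conj().T
    # Measurement: expectation of Z on B gives the raw prediction (g omitted).
    predictions.append(np.real(np.trace(np.kron(I2, Z) @ rho)))
    # Group B is measured & re-initialized; A keeps the reduced state.
    rho_A = ptrace_B(rho)

print(predictions)
```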

The QRNN model proposed by [40] is a parameterized quantum circuit with a recurrent structure. The performance of the circuit is determined by three factors: (1) the data encoding unit, (2) the structure of the parameterized quantum circuit, and (3) the optimizer used to train the circuit. For the first, a simple gate is used as a demonstration in the article; for the second, further exploration is possible; for the third, methods from VQA can be borrowed. An unresolved question is whether QRNN outperforms its classical counterpart; answering it requires establishing suitable metrics for further analysis and experiment.

III-F QTNN

[42] is the first to explore quantum tensor neural networks for near-term quantum processors (see Fig. 14).

Fig. 14: Quantum tensor networks modified from [42]. (a) The discriminative network; (b) the generative network.

The authors propose a QNN with a tensor network structure that can be used for both discriminative and generative tasks. The model has a fixed tree-structured quantum circuit in which the parameters of the unitary transformations are initially free, and the training algorithm is a quantum-classical hybrid, that is, suitable parameters are sought with the aid of a classical algorithm.

In Fig. 14 (a), the discriminative model encodes the input data as a product state of n qubits, namely the input quantum state

(8)

The quantum circuit of the discriminative model resembles a multilayer tree structure. After each unitary transformation, half of the qubits are discarded, and the remaining qubits continue to participate in the next layer of nodes. Finally, one or more qubits serve as output qubits, and the most probable measurement result is taken as the network's judgment of the input data. During training, a classical algorithm compares the discrimination result with the true label and updates the circuit parameters according to the error.
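A density-matrix numpy sketch of this tree, with random unitaries standing in for trained parameters and "discarding" realized as a partial trace, is given below as an assumption-laden illustration of the discriminative architecture.

```python
import numpy as np

rng = np.random.default_rng(5)

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def random_unitary(dim):
    q, r = np.linalg.qr(rng.normal(size=(dim, dim))
                        + 1j * rng.normal(size=(dim, dim)))
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def discard(rho, qubit, n):
    """Trace out one qubit of an n-qubit density matrix."""
    t = np.moveaxis(rho.reshape([2] * (2 * n)), (qubit, n + qubit), (0, n))
    d = 2 ** (n - 1)
    return np.trace(t, axis1=0, axis2=n).reshape(d, d)

# Encode a 4-dimensional data point as a product state of 4 qubits.
x = [0.3, 1.2, 0.8, 2.1]
psi = np.array([1.0])
for xi in x:
    psi = np.kron(psi, ry(xi) @ np.array([1.0, 0.0]))
rho = np.outer(psi, psi.conj())

# Tree layer 1: a two-qubit unitary on each pair (0,1) and (2,3); the random
# unitaries stand in for trained parameters. Then drop one qubit per pair.
U1 = np.kron(random_unitary(4), random_unitary(4))
rho = U1 @ rho @ U1.conj().T
rho = discard(rho, 0, 4)      # drop qubit 0, keeping qubit 1 of pair (0,1)
rho = discard(rho, 1, 3)      # drop (original) qubit 2 of pair (2,3)

# Root layer: a unitary on the two remaining qubits, then read one qubit out.
U2 = random_unitary(4)
rho = U2 @ rho @ U2.conj().T
P1 = np.kron(np.eye(2), np.diag([0.0, 1.0]))   # projector: last qubit is 1
print("P(output = 1):", np.real(np.trace(P1 @ rho)))
```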

The generative model adopts a structure almost opposite to the discriminative model, as shown in Fig. 14 (b). Newly added qubits are combined with the original qubits to participate in the computation of the unitary transformation nodes below. After the required number of qubits has been generated, the data produced by the QNN model is obtained through measurement. When training the network, the quantum circuit parameters are likewise adjusted by classical algorithms according to the error between the generated results and the labels.

Tensor networks provide a natural hierarchy of increasingly complex quantum states, which can reduce the number of qubits required to process high-dimensional data with the support of dedicated algorithms. They can also alleviate problems related to random initial parameters and appear to offer machine learning algorithms some resilience to noise. Overall, the tensor network is a very promising framework because it strikes a careful balance between expressive power and computational efficiency, backed by rich theoretical understanding and powerful optimization algorithms.

III-G QP

QP is one of the relatively mature models. The smallest unit of a QP is the quantum neuron [20]. [43] establishes a neuron model and concludes that a single quantum neuron can perform the XOR function, which a classical neuron cannot, and has the same computing power as a two-layer perceptron. Quantum neurons also have variants such as the feedback quantum neuron [44] and the artificial spiking quantum neuron [45].

In view of the central position of neurons in the multilayer perceptron, [46] proposes an artificial neuron that can be run on a real quantum processor. The circuit it provides is as follows.

Fig. 15: Scheme of the quantum algorithm for the implementation of the artificial neuron model on a quantum processor, modified from [46].

Fig. 15 outlines the principle. The system starts from the state |0⟩⊗N and, after passing through the unitary matrix Ui, is transformed into the input quantum state |ψi⟩. |ψi⟩ then undergoes the transformation Uw, the result is transferred to an ancilla qubit, and finally a quantum measurement on the ancilla assesses the activation state of the perceptron.

More specifically, the input vector i and the weight vector w are restricted to binary values ±1. Given any input and weight vectors, the m = 2^N coefficients required to define the general wave function of N qubits are used to encode the m-dimensional vectors.

(9)

Next, two quantum states are defined

(10)

where the factor 1/√m encodes the m-dimensional classical vectors as uniformly weighted superpositions over the complete computational basis, as described in (10).

First, the input values are encoded to prepare the state |ψi⟩. Assuming that the initial state of the qubits is |0⟩⊗N, a unitary transformation Ui is performed such that

(11)

In principle, any unitary matrix whose first column equals the vector of input coefficients can be used for this purpose. More generally, preparing the input state from a blank register could be replaced by directly calling a previously stored quantum memory.

In the second step, the quantum register is used to compute the inner product between the weight state |ψw⟩ and |ψi⟩. By defining a unitary transformation Uw that rotates the weight state to

(12)

, the task can be performed effectively: any unitary matrix whose last row equals the conjugated weight coefficients meets this condition. If Uw is applied after Ui, the overall N-qubit quantum state becomes

(13)

According to (12), the scalar product between two quantum states is

(14)

According to the definition in (10), the scalar product of the input and weight vectors is i·w = m⟨ψw|ψi⟩. Therefore, the desired result is contained in the coefficient of the final state on the basis state |m−1⟩, up to a normalization factor.

To extract this information, an ancilla qubit (a) initially in the state |0⟩ is used. A multi-controlled NOT gate between the N encoding qubits and the target a leads to

(15)

The nonlinearity required of the perceptron's threshold output is obtained immediately by performing a quantum measurement: measuring the ancilla qubit in the computational basis produces the output 1 (i.e., an active perceptron) with probability equal to the squared modulus of the coefficient that stores the inner product. It is important to note that once the inner-product information is stored on the ancilla, refined threshold functions can be applied. We also note that both parallel and antiparallel vectors produce perceptron activation, while orthogonal vectors always cause the ancilla to be measured in the |0⟩ state.
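The inner-product mechanics can be mirrored classically: for ±1 vectors encoded with the 1/√m factor of (10), the activation probability equals (i·w/m)², as the following sketch (with hypothetical random vectors) checks.

```python
import numpy as np

m = 16                                  # m = 2^N basis states (N = 4 qubits)
rng = np.random.default_rng(6)
i_vec = rng.choice([-1, 1], size=m)     # binary input vector
w_vec = rng.choice([-1, 1], size=m)     # binary weight vector

psi_i = i_vec / np.sqrt(m)              # states of (10): sign patterns
psi_w = w_vec / np.sqrt(m)              # on a uniform superposition

# The circuit stores <psi_w|psi_i> in the coefficient of |m-1>; after the
# multi-controlled NOT, measuring the ancilla gives 1 with probability:
p_active = abs(np.dot(psi_w, psi_i)) ** 2
print(p_active, (np.dot(i_vec, w_vec) / m) ** 2)   # identical by construction

# Parallel (or antiparallel) vectors activate maximally; orthogonal never do.
print(abs(np.dot(psi_i, psi_i)) ** 2)   # 1.0
```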

This method has been experimentally verified on IBM quantum computers, a solid step from theory to practice. However, constrained by the limited number of input qubits, the behavior with many qubits remains unclear. In addition, as the number of input bits increases, the demand on quantum gates grows rapidly, which may cause unpredictable problems.

In addition to neuron models, a large number of perceptron models have also been proposed in recent years.

[47] proposes a simple QNN with a periodic activation function (see Fig. 16) that requires only a modest number of qubits and quantum gates, where n is the number of input parameters and k is the number of weights applied to these parameters. The corresponding quantum circuit is drawn as follows.

Fig. 16: The proposed QNN with 1-output and n-input parameters, modified from [47].

In this circuit with one output and n input parameters, the input register is initialized to an equal superposition state, so that the system qubits act with equal effect on the first qubit, which produces the output. Initially, the input of the circuit is determined by

(16)

where the index j runs over the vectors of the standard basis. A Hadamard gate and a controlled evolution operator are applied to the first qubit, and the state becomes

(17)

The accumulated parameter represents the phase of the j-th eigenvalue of the controlled operator. After passing through the second Hadamard gate, the final state reads:

(18)

If the first qubit is measured, the probabilities of obtaining 0 and 1 follow from (18) as

(19)
(20)

If a threshold function is applied to the output, then

(21)

After multiple measurements, the output z is estimated as the success probability of the expected outcome.

An advantage of the model in [47] is that a bias term can be added to adjust the threshold of the model. In addition, the model can be generalized to multiple inputs in various ways, for example by applying the controlled operators sequentially.

Since QP often assumes discrete binary inputs, [48] extends the input to a continuous mode. In order to construct a more complex QP, [49] designs a Multi-Dimensional Input QP (MDIQP) and implements it using ancilla-controlled operations combined with phase estimation and learning algorithms. MDIQP can process quantum information and classify multi-dimensional data that may not be linearly separable.

III-H Others

QCPNN D. Ventura introduces the idea of competitive learning into QNN and proposes a quantum competitive learning system based on the Hamming distance metric [50]. The competitive learning network completes classification tasks by comparing the input pattern with the pattern prototypes encoded in the network weights: the prototype most similar to the input (according to some metric) is determined, and the class associated with that prototype is output as the class of the input pattern. Building on the classical Hamming neural network algorithm, [51] incorporates quantum theory to obtain a new quantum Hamming neural network based on competitive ideas. This kind of network does not rely on a complete model: even incomplete patterns can still be trained effectively, increasing the probability of successful pattern recognition, and such patterns can be further employed as new samples for training. [52] uses an entanglement measure after the unitary operator to make neurons compete for the winning category on a winner-takes-all basis.

QSONN SONN is an artificial neural network that adopts an unsupervised competitive learning mechanism, discovering the internal regularities of the input data by self-organizing adjustments of the network parameters and structure. [53] is an early quantum version of SONN that performs self-organization and automatic pattern classification without explicitly storing the given patterns, by modifying the values of the quantum registers corresponding to the classification. In order to enhance the clustering ability of QSONN, [54] projects the cluster samples and the weights of the competitive layer onto qubits on the Bloch sphere; the winning node is found by computing the spherical distance from sample to weight, and the samples on the Bloch sphere are updated iteratively according to the weights of the winning node and its neighborhood until convergence. In addition, following the classical parallel bidirectional self-organizing neural network, [55] proposes its quantum version.

CELL [56] introduces the CELL model in 1996. The model is constructed from coupled quantum-dot cells in a cellular architecture that uses physical neighbor connections instead of copying Boolean logic [56]. In the proposal of [57], a quantum cellular automaton serves as the core neuron cell, and a two-layer quantum cellular automata array forms a three-dimensional CELL with the structure of an A clone template, a B clone template, and a threshold; the validity of the model is demonstrated in image processing. [58] proposes a fractional-order image-encryption CELL model, which uses deformed fractional Fourier transforms to remedy insufficient nonlinearity. The principle is as follows: the input image is processed by the first chaotic random phase mask and the first deformed fractional Fourier transform, and the encrypted image is then generated by the second chaotic random phase mask and the second deformed fractional Fourier transform in sequence. The cryptographic system shows strong resistance to a variety of potential attacks.

QWLNN [59] mentions QWLNN in 2008. [60] defines a QWLNN architecture-learning algorithm based on quantum superposition. The architecture and parameters of this model depend on many factors, such as the number of training patterns and the structure of the selector.

IV Challenges and Outlook

At this stage, although large-scale general-purpose quantum computers have not yet been realized, the recent maturation of quantum processor technology provides the conditions for simple verification of various quantum algorithms. In recent years, thanks to commercial quantum computers developed by companies such as IBM, researchers can remotely manipulate dozens of qubits over the Internet, build simple quantum circuits, and realize small-scale quantum network systems. On the one hand, this provides a simple experimental verification platform for various QNN models and learning algorithms. On the other hand, it also imposes a strict systematic framework on QNN theory research: the QNN model and its learning algorithm must be oriented toward real quantum circuits and strictly designed within the formalism of quantum mechanics. In this sense, the research on QNN still has a long way to go, and the following key scientific issues urgently require further study.

IV-A Linear and Non-linear

The activation function (such as the sigmoid or tanh function), one of the core elements of neural networks, is nonlinear. Its presence makes the collective dynamics dissipative and attractor-based, and makes it easier for neural networks to capture highly non-trivial patterns [61]-[63]. But this is also the point of divergence from the linear unitary dynamics of quantum computing. Therefore, one core question of QNN is whether it is possible to design a framework that unifies the nonlinear dynamics of CNN with the unitary dynamics of QNN.

In order to address this problem, the following suggestions can serve as reference: (1) use simple dissipative quantum gates; (2) explore the connection between quantum measurements and activation functions; (3) use quantum circuits to approximate or fit nonlinear functions.

Dissipative quantum gates [26] introduces a nonlinear, irreversible, dissipative operator. This operator can be intuitively regarded as a contraction operator that evolves a general state into a single (stable) state, with nonlinearity depending only on the amplitude and not on the phase. When designing a QNN, an irreversible operator follows the reversible unitary operator. This method has a certain theoretical feasibility but is very difficult at the implementation level.

Quantum measurements [64] designs a QNN model based on quantum measurement, attempting to integrate the reversible unitary structure of quantum evolution with the irreversible nonlinear dynamics of neural networks. The authors use an open quantum walk to replace the step or sigmoid activation function with quantum measurement, seeking a quantum form that captures the two main characteristics of the Hopfield network: dissipation and nonlinearity.

Quantum circuits Interpreting nonlinear activation functions through quantum circuits is currently a popular practice, especially the application of RUS techniques to the nonlinear activation function problem [20][65]-[67].

IV-B Verification of Quantum Superiority

Limited by the current level of quantum computing hardware, QNN can only be tested on low-dimensional, small-sample problems, making it difficult to verify its advantages over CNN. In response, it is necessary to establish unified quantitative metrics and computational models that accurately compare the operating complexity and resource requirements of QNN and CNN, and to rigorously prove the superiority of quantum computing over classical computing. It is also necessary to rigorously verify the prediction accuracy and generalization performance of QNN on large benchmark data sets. At present, related studies are few; [68] and [69] discuss in depth the superiority of quantum optimization algorithms on near-term quantum processors over classical optimization algorithms, and may serve as inspiration.

IV-C Barren Plateau

The barren plateau phenomenon means that when the number of qubits is comparatively large, current QNN frameworks cannot be trained effectively: the objective function becomes very flat, making the gradient difficult to estimate [70]. The root cause is that, for objective functions constructed from current quantum circuits (satisfying a t-design), the mean of the gradient with respect to the circuit parameters (certain rotation angles) is 0, and its variance decreases exponentially as the number of qubits increases [70].
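This trend can be probed numerically. The following numpy sketch estimates the variance of one gradient component, via the parameter-shift rule, for randomly initialized layered circuits of increasing width (Ry rotations plus CZ entanglers, a local ⟨Z⟩ cost); the ansatz and all settings are illustrative assumptions, and one typically observes the variance shrinking as the qubit count grows.

```python
import numpy as np

rng = np.random.default_rng(7)

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def apply_1q(state, gate, q, n):
    psi = np.moveaxis(state.reshape([2] * n), q, 0).reshape(2, -1)
    return np.moveaxis((gate @ psi).reshape([2] * n), 0, q).reshape(-1)

def cz_chain(state, n):
    """Apply CZ between neighbors: flip sign where both qubits are 1."""
    psi = state.reshape([2] * n).copy()
    for q in range(n - 1):
        idx = [slice(None)] * n
        idx[q] = 1
        idx[q + 1] = 1
        psi[tuple(idx)] *= -1
    return psi.reshape(-1)

def cost(thetas, n, layers):
    """C = <Z on qubit 0> after layers of Ry rotations + CZ entanglers."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    t = iter(thetas)
    for _ in range(layers):
        for q in range(n):
            state = apply_1q(state, ry(next(t)), q, n)
        state = cz_chain(state, n)
    psi = state.reshape(2, -1)
    return np.sum(psi[0] ** 2) - np.sum(psi[1] ** 2)

layers, samples = 8, 200
for n in [2, 4, 6, 8]:
    grads = []
    for _ in range(samples):
        th = rng.uniform(0, 2 * np.pi, size=n * layers)
        plus, minus = th.copy(), th.copy()
        plus[0] += np.pi / 2
        minus[0] -= np.pi / 2                     # parameter-shift rule
        grads.append((cost(plus, n, layers) - cost(minus, n, layers)) / 2)
    print(f"n={n}: Var[dC/dtheta_0] ~ {np.var(grads):.5f}")
```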

[71] extends the barren plateau theorem from a single 2-design circuit to arbitrary parameterized quantum circuits, and gives reasonable assumptions under which certain integrals can be expressed as ZX-diagrams and computed using the ZX-calculus. The results show that barren plateaus exist for the hardware-efficient ansatz and the MPS-inspired ansatz, while for the QCVNN ansatz and the tree tensor network ansatz there is no barren plateau [71].

VQA is a commonly used method of constructing QNN, optimizing the parameters of a parameterized quantum circuit so as to minimize a cost function C. Considering its connection to the barren plateau, [72] points out that even if the circuit is very shallow, defining C with a global observable results in a barren plateau; however, as long as the circuit depth is O(log n), defining C with a local observable leads to at worst a polynomially vanishing gradient, thus establishing a connection between locality and trainability.

To mitigate the barren plateau problem, approaching it from the perspective of initialization appears to be a good choice. In the scheme proposed by [73], some initial parameter values are selected randomly and the remaining ones are then chosen so that the circuit is a sequence of shallow blocks, each evaluating to the identity. This controls the effective depth of the circuit for a parameter update, so that training does not begin inside a barren plateau.

The above references are only useful first attempts: the barren plateau problem has not been solved completely and remains a problem worthy of study.

Acknowledgment

We would like to thank all the reviewers who provided valuable suggestions and Chen Zhaoyun, Ph.D., Department of Physics, University of Science and Technology of China.

References

  • [1] R. P. Feynman, “Simulating physics with computers,” International Journal of Theoretical Physics, vol. 21, no. 6, pp. 467-488, 1982.
  • [2] F. Arute et al., ”Quantum supremacy using a programmable superconducting processor,” Nature, vol. 574, no. 7779, pp. 505-510, 2019/10/01 2019.
  • [3] E. Pednault, J. Gunnels, D. Maslov, and J. Gambetta, “On quantum supremacy,” IBM Research Blog, vol. 21, 2019.
  • [4] J. Preskill, “Quantum computing in the NISQ era and beyond,” Quantum, vol. 2, pp. 79, 2018.
  • [5] S. C. Kak, ”Quantum Neural Computing,” Advances in Imaging and Electron Physics, P. W. Hawkes, ed., pp. 259-313: Elsevier, 1995.
  • [6] R. Parthasarathy and R. Bhowmik, ”Quantum Optical Convolutional Neural Network: A Novel Image Recognition Framework for Quantum Computing,” IEEE Access, pp. 1-1, 2021.
  • [7] D. Yumin, M. Wu, and J. Zhang, ”Recognition of Pneumonia Image Based on Improved Quantum Neural Network,” IEEE Access, vol. 8, pp. 224500-224512, 2020.
  • [8] G. Liu, W.-P. Ma, H. Cao, and L.-D. Lyu, ”A quantum Hopfield neural network model and image recognition,” Laser Physics Letters, vol. 17, no. 4, p. 045201, 2020/02/27 2020.
  • [9] L. Fu, and J. Dai, ”A Speech Recognition Based on Quantum Neural Networks Trained by IPSO.” pp. 477-481, 2009.
  • [10]

    C. H. H. Yang et al., ”Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition,” in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 6-11 June 2021 2021.

  • [11] P. Kairon, and S. Bhattacharyya, ”COVID-19 Outbreak Prediction Using Quantum Neural Networks,” Intelligence Enabled Research: DoSIER 2020, S. Bhattacharyya, P. Dutta and K. Datta, eds., pp. 113-123, Singapore: Springer Singapore, 2021.
  • [12] E. El-shafeiy, A.-E. Hassanien, K.-M. Sallam et al., “Approach for Training Quantum Neural Network to Predict Severity of COVID-19 in Patients,” Computers, Materials & Continua, vol. 66, no. 2, pp. 1745-1755, 2021.
  • [13] A. A. Ezhov, and D. Ventura, ”Quantum Neural Networks,” Future Directions for Intelligent Systems and Information Sciences: The Future of Speech and Image Technologies, Brain Computers, WWW, and Bioinformatics, N. Kasabov, ed., pp. 213-235, Heidelberg: Physica-Verlag HD, 2000.
  • [14] V. Raul, T. Beatriz, and M. Hamed, ”A Quantum NeuroIS Data Analytics Architecture for the Usability Evaluation of Learning Management Systems,” Quantum-Inspired Intelligent Systems for Multimedia Data Analysis, B. Siddhartha, ed., pp. 277-299, Hershey, PA, USA: IGI Global, 2018.
  • [15] J. R. McClean, J. Romero, R. Babbush et al., “The theory of variational hybrid quantum-classical algorithms,” New Journal of Physics, vol. 18, no. 2, pp. 023023, 2016/02/04, 2016.
  • [16] A. Abbas, D. Sutter, C. Zoufal et al., “The power of quantum neural networks,” Nature Computational Science, vol. 1, no. 6, pp. 403-409, 2021/06/01, 2021.
  • [17] J. Zhao, Y.-H. Zhang, C.-P. Shao et al., “Building quantum neural networks based on a swap test,” Physical Review A, vol. 100, no. 1, pp. 012334, 07/23/, 2019.
  • [18] P. Li, and B. Wang, “Quantum neural networks model based on swap test and phase estimation,” Neural Networks, vol. 130, pp. 152-164, 2020/10/01/, 2020.
  • [19] H. Buhrman, R. Cleve, J. Watrous et al., “Quantum Fingerprinting,” Physical Review Letters, vol. 87, no. 16, pp. 167902, 09/26/, 2001.
  • [20] Y. Cao, G. G. Guerreschi, and A. Aspuru-Guzik, “Quantum neuron: an elementary building block for machine learning on quantum computers,” arXiv preprint arXiv:1711.11240, 2017.
  • [21] A. Paetznick, and K. Svore, “Repeat-until-success: non-deterministic decomposition of single-qubit unitaries,” Quantum Inf. Comput., vol. 14, pp. 1277-1301, 2014.
  • [22] K. H. Wan, O. Dahlsten, H. Kristjansson et al., “Quantum generalisation of feedforward neural networks,” NPJ QUANTUM INFORMATION, vol. 3, SEP 14, 2017.
  • [23] M. Schuld, I. Sinayskiy, and F. Petruccione, “The quest for a Quantum Neural Network,” Quantum Information Processing, vol. 13, no. 11, pp. 2567-2586, 2014/11/01, 2014.
  • [24] T. Menneer, and A. Narayanan, “Quantum-inspired neural networks,” Tech. Rep. R329, 1995.
  • [25] E. C. Behrman, L. Nash, J. E. Steck et al., “Simulations of quantum neural networks,” Information Sciences, vol. 128, no. 3-4, pp. 257-269, 2000.
  • [26] S. Gupta, and R. K. P. Zia, “Quantum Neural Networks,” Journal of Computer and System Sciences, vol. 63, no. 3, pp. 355-383, 2001/11/01/, 2001.
  • [27] M. V. Altaisky, “Quantum neural network,” arXiv preprint quant-ph/0107012, 2001.
  • [28] N. Killoran, T. R. Bromley, J. M. Arrazola et al., “Continuous-variable quantum neural networks,” Physical Review Research, vol. 1, no. 3, pp. 033063, 10/31/, 2019.
  • [29]

    P. Zhang, S. Li, and Y. Zhou, “An Algorithm of Quantum Restricted Boltzmann Machine Network Based on Quantum Gates and Its Application,” Shock and Vibration, vol. 2015, pp. 756969, 2015/09/15, 2015.

  • [30] Y. Shingu, Y. Seki, S. Watabe et al., “Boltzmann machine learning with a variational quantum algorithm,” arXiv preprint arXiv:2007.00876, 2020.
  • [31] C. Zoufal, A. Lucchi, and S. Woerner, “Variational quantum Boltzmann machines,” Quantum Machine Intelligence, vol. 3, no. 1, pp. 7, 2021.
  • [32] G. Chen, Y. Liu, J. Cao et al., “Learning Music Emotions via Quantum Convolutional Neural Network,” Brain Informatics, pp. 49-58, 2017.
  • [33] I. Cong, S. Choi, and M. D. Lukin, “Quantum convolutional neural networks,” Nature Physics, vol. 15, no. 12, pp. 1273-1278, 2019.
  • [34] Y. Li, R.-G. Zhou, R. Xu et al., “A quantum deep convolutional neural network for image recognition,” Quantum Science and Technology, vol. 5, no. 4, pp. 044003, 2020.
  • [35] S. Lloyd, and C. Weedbrook, “Quantum Generative Adversarial Learning,” Physical Review Letters, vol. 121, no. 4, pp. 040502, 2018.
  • [36] P.-L. Dallaire-Demers, and N. Killoran, “Quantum generative adversarial networks,” Physical Review A, vol. 98, no. 1, pp. 012324, 2018.
  • [37] L. Bai, Y. Jiao, L. Rossi et al., “Graph Convolutional Neural Networks based on Quantum Vertex Saliency,” arXiv preprint arXiv:1809.01090, 2018.
  • [38] S. Dernbach, A. Mohseni-Kabir, S. Pal et al., “Quantum Walk Neural Networks for Graph-Structured Data,” Complex Networks and Their Applications VII, pp. 182-193.
  • [39] J. Zheng, Q. Gao, and Y. Lv, “Quantum Graph Convolutional Neural Networks,” arXiv preprint arXiv:2107.03257, 2021.
  • [40] Y. Takaki, K. Mitarai, M. Negoro et al., “Learning temporal data with a variational quantum recurrent neural network,” Physical Review A, vol. 103, no. 5, pp. 052414, 2021.
  • [41] J. Bausch, “Recurrent quantum neural networks,” arXiv preprint arXiv:2006.14619, 2020.
  • [42] W. Huggins, P. Patil, B. Mitchell et al., “Towards quantum machine learning with tensor networks,” Quantum Science and Technology, vol. 4, no. 2, pp. 024001, 2019.
  • [43] L. Fei, and Z. Baoyu, “A study of quantum neural networks,” pp. 539-542, 2003.
  • [44] L. Fei, Z. Shengmei, and Z. Baoyu, “Feedback Quantum Neuron and Its Application,” pp. 867-871, 2005.
  • [45] L. B. Kristensen, M. Degroote, P. Wittek et al., “An artificial spiking quantum neuron,” npj Quantum Information, vol. 7, no. 1, pp. 1-7, 2021.
  • [46] F. Tacchino, C. Macchiavello, D. Gerace et al., “An artificial neuron implemented on an actual quantum processor,” npj Quantum Information, vol. 5, no. 1, pp. 1-8, 2019.
  • [47] A. Daskin, “A Simple Quantum Neural Net with a Periodic Activation Function,” pp. 2887-2891, 2018.
  • [48] M. Maronese, and E. Prati, “A continuous Rosenblatt quantum perceptron,” International Journal of Quantum Information, pp. 2140002, 2021.
  • [49] A. Y. Yamamoto, K. M. Sundqvist, P. Li, and H. R. Harris, “Simulation of a Multidimensional Input Quantum Perceptron,” Quantum Information Processing, vol. 17, no. 6, 2018.
  • [50] D. Ventura, “Implementing competitive learning in a quantum system,” pp. 462-466, vol. 1, 1999.
  • [51] M. Zidan, A. Sagheer, and N. Metwally, “An autonomous competitive learning algorithm using quantum Hamming neural networks,” pp. 1-7, 2015.
  • [52] M. Zidan, A.-H. Abdel-Aty, M. El-shafei, M. Feraig, Y. Al-Sbou, H. Eleuch, and M. Abdel-Aty, “Quantum Classification Algorithm Based on Competitive Learning Neural Network and Entanglement Measure,” Applied Sciences, vol. 9, no. 7, 2019.
  • [53] Z. Rigui, Z. Hongyuan, J. Nan, and D. Qiulin, “Self-Organizing Quantum Neural Network,” pp. 1067-1072, 2006.
  • [54] Z. Li, and P. Li, “Clustering algorithm of quantum self-organization network,” Open Journal of Applied Sciences, vol. 5, no. 6, pp. 270, 2015.
  • [55] D. Konar, S. Bhattacharyya, B. K. Panigrahi, and M. K. Ghose, “Chapter 5 - An efficient pure color image denoising using quantum parallel bidirectional self-organizing neural network architecture,” Quantum Inspired Computational Intelligence, S. Bhattacharyya, U. Maulik and P. Dutta, eds., pp. 149-205, Boston: Morgan Kaufmann, 2017.
  • [56] G. Toth, C. S. Lent, P. D. Tougaw, Y. Brazhnik, W. Weng, W. Porod, R.-W. Liu, and Y.-F. Huang, “Quantum cellular neural networks,” Superlattices and Microstructures, vol. 20, no. 4, pp. 473-478, 1996/12/01/, 1996.
  • [57] S. Wang, L. Cai, H. Cui, C. Feng, and X. Yang, “Three-dimensional quantum cellular neural network and its application to image processing,” pp. 411-415, 2017.
  • [58] X. Liu, X. Jin, and Y. Zhao, “Optical Image Encryption Using Fractional-Order Quantum Cellular Neural Networks in a Fractional Fourier Domain,” pp. 146-154, 2018.
  • [59] W. R. de Oliveira, A. J. Silva, T. B. Ludermir, A. Leonel, W. R. Galindo, and J. C. C. Pereira, “Quantum Logical Neural Networks,” pp. 147-152, 2008.
  • [60] A. J. da Silva, W. R. de Oliveira, and T. B. Ludermir, “Weightless neural network parameters and architecture selection in a quantum computer,” Neurocomputing, vol. 183, pp. 13-22, 2016/03/26/, 2016.
  • [61] M. I. Rabinovich, P. Varona, A. I. Selverston, and H. D. I. Abarbanel, “Dynamical principles in neuroscience,” Reviews of Modern Physics, vol. 78, no. 4, pp. 1213-1265, 2006.
  • [62] J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proceedings of the National Academy of Sciences, vol. 79, no. 8, pp. 2554-2558, 1982.
  • [63] G. E. Hinton, and R. R. Salakhutdinov, “Reducing the Dimensionality of Data with Neural Networks,” Science, vol. 313, no. 5786, pp. 504-507, 2006.
  • [64] M. Zak, and C. P. Williams, “Quantum Neural Nets,” International Journal of Theoretical Physics, vol. 37, no. 2, pp. 651-684, 1998.
  • [65] W. Hu, “Towards a real quantum neuron,” Natural Science, vol. 10, no. 3, pp. 99-109, 2018.
  • [66] F. M. de P. Neto, T. B. Ludermir, W. R. de Oliveira, and A. J. da Silva, “Implementing Any Nonlinear Quantum Neuron,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 9, pp. 3741-3746, 2020.
  • [67] S. Yan, H. Qi, and W. Cui, “Nonlinear quantum neuron: A fundamental building block for quantum neural networks,” Physical Review A, vol. 102, no. 5, pp. 052421, 2020.
  • [68] E. Farhi, and A. W. Harrow, “Quantum supremacy through the quantum approximate optimization algorithm,” arXiv preprint arXiv:1602.07674, 2016.
  • [69] L. Zhou, S.-T. Wang, S. Choi, H. Pichler, and M. D. Lukin, “Quantum Approximate Optimization Algorithm: Performance, Mechanism, and Implementation on Near-Term Devices,” Physical Review X, vol. 10, no. 2, pp. 021067, 2020.
  • [70] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, “Barren plateaus in quantum neural network training landscapes,” Nature Communications, vol. 9, no. 1, pp. 4812, 2018.
  • [71] C. Zhao, and X. Gao, “Analyzing the barren plateau phenomenon in training quantum neural network with the ZX-calculus,” Quantum, vol. 5, pp. 466, 2021.
  • [72] M. Cerezo, A. Sone, T. Volkoff, L. Cincio, and P. J. Coles, “Cost function dependent barren plateaus in shallow parametrized quantum circuits,” Nature Communications, vol. 12, no. 1, pp. 1791, 2021.
  • [73] E. Grant, L. Wossnig, M. Ostaszewski, and M. Benedetti, “An initialization strategy for addressing barren plateaus in parametrized quantum circuits,” Quantum, vol. 3, pp. 214, 2019.