Introduction
In the last several decades, artificial neural networks have become indispensable for signal processing, computer vision, and pattern recognition, where they have achieved remarkable accuracy. Deep neural networks, in their traditional form, are among the most effective and far-reaching artificial neural networks currently in use [1, 2]. Existing deep learning algorithms are often efficient for training deep neural networks; however, deep neural networks remain limited in scalability, adaptability, and usability [3, 4]. Moreover, traditional deep neural networks require massive quantities of data to construct their intricate mappings through a slow training phase, which impairs their ability to retrain and adapt to new input [5]. These problems are exacerbated by the end of Moore's law, as traditional computers approach physical constraints (e.g., high computational complexity and energy consumption) that will stifle performance gains in the coming decades. Furthermore, the sharp rise of artificial intelligence applications throughout various research domains comes at the cost of a massive carbon footprint [6]. As a result, an emerging issue is finding alternative types of neural networks and computing platforms for training them, including neuromorphic [7, 8, 9, 10] or near-term quantum hardware [11, 12].

Random Neural Networks (RNNs) [13, 14, 15], based on a mathematical model and implemented using probabilistic MOS (PMOS) circuitry, provide a promising pathway to tackle this challenge. RNNs mimic information processing in a biophysical neural network, which communicates and processes sparse, asynchronous binary signals in a massively parallel way. As in biological brain networks, inputs to an RNN are stochastic spike trains, which are lossless and energy-efficient. RNNs are a preferred method for many machine-learning applications [16, 17, 18, 19] because they are highly energy efficient, use low power, and enable rapid inference and event-driven data processing. Moreover, the classical and widely employed RNN architectures [19, 20] have small memory requirements based on limiting approximations that can compact thousands of cells into equivalent transfer functions [16], restricting training to a small number of parameters compared with classical deep neural network architectures [21, 1]. Numerous studies have examined RNN applications in the domains of the Internet of Things and smart cities [22, 23], cybersecurity [24], video streaming system optimization [25], and computer vision [26]. RNNs are composed of neurons that receive stochastic spike signals from external sources such as sensory inputs or other neurons. These stochastic spike signals arise from separate Poisson processes, with positive rates for excitatory spike signals and negative rates for inhibitory spike signals. The resulting neuronal states are integer-valued and hence offer an unbounded range of excitation levels [20], unlike classical artificial neural networks, whose activations are binary or limited to a bounded region. Over the last few decades, a variety of classical training methods for RNNs have been proposed, including reinforcement learning, constrained training prior to conversion, spiking variants of backpropagation, relaxation based on a cost function in Hopfield networks, and auto-associative learning for RNNs [27, 25, 26]. Gradient descent learning for RNNs has also been suggested and studied in various applications [29, 19, 28]. Despite recent progress, a significant disadvantage of classical RNNs lies in the fact that their accuracy on standard pattern recognition benchmarks falls short of that of other machine learning methods [1, 26, 30], owing to the stochastic nature of the random spiking neurons. Another roadblock is the dearth of training algorithms that exploit the inherent characteristics of random spiking neurons, including time-efficient codes and the ability to cope with noisy input and output data.

We tackle these challenges by employing concepts from quantum information science (QIS), which comes with the promise of an exponential advantage for a variety of computational tasks. Recently, quantum computing (QC) has been leveraged for machine learning with the hope that the uncertainty in QC can be a great advantage for probability-based modelling, inspiring new research for Noisy Intermediate-Scale Quantum (NISQ) devices. Quantum supremacy experiments have shown that specific tasks that are otherwise extremely lengthy or even unattainable on classical computers can be solved by quantum computing in just a few seconds [12], saving enormous amounts of energy, a unique "green" feature compared to classical supercomputers. In comparison to high-performance computing, quantum neural networks are still in their infancy, owing to the lack of sufficient qubits on quantum hardware platforms and to noise processes that limit the size of realizable quantum neural networks. Variational quantum circuits (VQCs) [31] have emerged in recent years as one of the most effective approaches to deep quantum learning on NISQ devices.
With the advent of available quantum computing devices and quantum variational algorithms [31], quantum machine learning research has begun to focus on hybrid classical-quantum algorithms that can be executed on near-term NISQ devices. Despite the limited number of qubits, these algorithms are paving the way for the eventual practical use of NISQ devices in machine learning applications. In fact, classical neural network models incorporating quantum circuits as subroutines can already perform quantum information processing [32]. VQCs may be trained using classical optimization approaches, and they are believed to offer certain expressibility advantages over typical neural network topologies [33, 34]. Examples include quantum state-based Hopfield networks [35] and their classical implementations using quantum-accelerated matrix inversion, Recurrent Quantum Neural Networks [36], Quantum Convolutional Neural Networks [37], and Spiking Quantum Neural Networks [38, 39].
However, concrete deep quantum neural networks for widespread real-world applications such as pattern recognition have not yet been studied, due to the limited number of qubits available in quantum simulators and the lack of fault-tolerant systems [40, 41]. Hybrid classical-quantum algorithms using VQCs [42, 43, 44, 45] can be readily implemented and trained on classical hardware. We employ this workflow to demonstrate the potential quantum advantages that can be achieved on NISQ devices, even with a limited number of qubits.
In this paper we present a novel classical-quantum framework for RNNs. These Random Quantum Neural Networks (RQNNs) turn a limitation of NISQ devices into an advantage: they leverage the inherent noise of NISQ devices, which stems from the limited number of qubits available to encode high-dimensional classical data. The significant contributions of the presented work are summarized below.

We demonstrate the feasibility of the RQNN using a VQC in the dressed quantum layer, a hybrid classical-quantum circuit whose gate parameters are optimized during training. This differs significantly from existing classical random neural network models [18, 14] in that training is performed with random quantum spiking neurons suited to near-term quantum devices.

Furthermore, the dressed quantum layer in the proposed RQNN model, realized as a VQC, can be trained with many parameters, which helps flatten local minima, as demonstrated by the proposed RQNN model's convergence.

Using the PennyLane quantum simulator, the proposed RQNN model is tested on the MNIST, FashionMNIST, and KMNIST datasets under noisy conditions. It outperforms classical RNNs [18], classical Spiking Neural Networks (SNNs) [30], and a classical CNN model (AlexNet [46]). We thereby introduce a unique and novel approach to daunting challenges in computer vision and pattern recognition.
The remaining sections of the manuscript are organized as follows. In the Results section, we provide details on the datasets, experimental settings, and experimental outcomes. In the Discussion section, we assess the results achieved with our proposed RQNN framework and compare them with those of classical neural networks; based on the superior accuracy achieved with RQNNs, we provide insight into future paths of quantum machine learning research for NISQ devices. In the Methods section, we provide technical details of our RQNN framework, explaining the key concepts implemented, such as our hybrid classical-quantum algorithms, RNNs, and VQCs. Finally, the convergence of RQNNs is demonstrated in the Appendix.
Results
0.1 Data Set.
The MNIST dataset [21] has been widely used as a benchmark for various computer vision tasks, particularly character recognition with classical RNNs [18, 14]. It comprises 28×28 grayscale images of handwritten digits organized into ten categories, with 60,000 training images and 10,000 test images. FashionMNIST [47] (https://github.com/zalandoresearch/FashionMNIST) is a dataset of fashion objects that uses the same image size, data format, and training/testing split as the original MNIST dataset [21]. KMNIST [48] (https://github.com/roiscodh/kmnist), derived from the Kuzushiji cursive character collection, can likewise be used as a drop-in replacement for MNIST, the most well-known dataset in the machine learning community. We reduced each of the above datasets to four classes for training in our experiments due to the limited number of qubits offered by the PennyLane quantum simulator. Images are corrupted with salt-and-pepper noise at several probability levels, Gaussian noise at several standard deviations, Rayleigh noise at several scales, uniform noise over several ranges, and Perlin noise at several resolutions. The Sneaker, Ankle-Boot, Bag, and Shirt classes are used for training on the FashionMNIST [47] dataset, whereas four digit classes are considered for the MNIST dataset [21]. The datasets used for noisy image classification, MNIST [21], FashionMNIST [47], and KMNIST [48], are listed in Table 1.

0.2 Experimental Settings.
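As a concrete illustration of the noisy test conditions, the two most common corruptions can be sketched in a few lines. This is a minimal sketch with illustrative noise parameters, not the exact levels used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(42)

def salt_and_pepper(img, prob):
    """Corrupt a grayscale image in [0, 1] with salt-and-pepper noise:
    each pixel is replaced by 0 or 1 with probability `prob`."""
    out = img.copy()
    mask = rng.random(img.shape) < prob
    out[mask] = rng.integers(0, 2, size=mask.sum()).astype(float)
    return out

def gaussian(img, sigma):
    """Add zero-mean Gaussian noise and clip back to [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

img = rng.random((28, 28))  # stand-in for a 28x28 grayscale image
noisy = gaussian(salt_and_pepper(img, 0.05), 0.1)
print(noisy.shape)  # (28, 28)
```

The same pattern extends to the Rayleigh, uniform, and Perlin corruptions by swapping the noise generator.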
Numerical experiments have been conducted using our proposed RQNN model, a classical RNN [18], a classical SNN [30], and AlexNet [46] to classify grayscale images in noisy situations. Due to the limited number of qubits available in the PennyLane quantum simulator (https://pennylane.ai/), the MNIST, FashionMNIST, and KMNIST datasets have been reduced to four classes for training. Hence, we consider all the models (the proposed RQNN, classical RNN [18], classical SNN [30], and AlexNet [46]) on four-class classification problems, and the resulting architectures differ from the standard classical RNN, SNN, and AlexNet models. Testing is carried out on noisy versions of unseen test images to demonstrate the suggested model's robustness in dealing with noise. We conducted the experiments on the high-performance computing facility of the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) (Hemera cluster), which comprises an Nvidia Tesla GPU cluster with nodes built around eight-core Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.1 GHz processors. The proposed RQNN, classical RNN [18], and AlexNet [46] have been implemented using PyTorch. In Figure 1, we illustrate sample noisy input images from the KMNIST, MNIST, and FashionMNIST datasets. Seven hundred eighty-four (784) input features from the noisy input images (FashionMNIST, MNIST, and KMNIST) are applied to the linear input layer of the multi-layer classical RNN, RQNN, and classical SNN [30], and output features are generated after temporal pooling over the spikes produced by the spiking version of the ReLU activation function. The pre-input layer of the proposed RQNN framework receives the input features obtained after the temporal pooling layer acts on features from the preceding layer of the multi-layer traditional RNN architecture [18]. The VQC layer employs a fixed number of qubits. The fully connected (FC) layers of the classical RNN [18], SNN [30], and AlexNet [46] models, and the parametrized quantum layers of the proposed RQNN model, are rigorously trained using the Adam optimizer with a fixed maximum number of epochs, initial learning rate, and mini-batch size. Figures 3 and 4 show how the accuracy and loss of the proposed RQNN model improve when it is trained using k-fold cross-validation.

0.3 Image Classification with RQNN.
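The temporal pooling step used in this architecture, which turns binary spike trains into rate features, can be sketched as a simple time average. This is a minimal sketch with an assumed spike-train shape of (time steps, features):

```python
import numpy as np

def temporal_pooling(spikes):
    """Average binary spike trains over the time axis to obtain
    firing-rate features: shape (T, n_features) -> (n_features,)."""
    return spikes.mean(axis=0)

# Toy example: 8 time steps, 4 features, binary spikes.
rng = np.random.default_rng(0)
spikes = (rng.random((8, 4)) < 0.5).astype(float)
rates = temporal_pooling(spikes)
print(rates.shape)  # (4,)
```

The pooled rates are what the pre-input layer passes on toward the quantum encoding.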
The uncertainty inherent in quantum computing can be an asset for probability-based modelling in machine learning, inspiring a line of research at the intersection of quantum computing and neuromorphic computing. In the proposed RQNN architecture, a classical RNN based on a multi-layer perceptron model is used, as shown in Figure 2, followed by a dressed quantum circuit (VQC) acting on a small number of qubits; the dressed quantum layers (VQC layers) have a fixed depth. The RNN's classical layers are based on the random spiking neural networks of [18]. In the proposed RQNN model, input grayscale image features are first fed into the classical layers of the RNN for image recognition. The random spikes are averaged across time by the temporal pooling layer, followed by quantum encoding in the dressed quantum layers. After the four-class classification, the quantum data is converted back into classical bits by a quantum measurement. We note that the proposed RQNN model is restricted to four-class classification problems in this work due to the limited number of qubits available; its architecture, shown in Figure 2, therefore differs from a full-scale RQNN. We employed a hybrid classical-quantum framework to evaluate the proposed RQNN model for noisy image classification on the PennyLane quantum simulator (in place of real quantum hardware). At the classification layers, we employ the cross-entropy loss. The loss function $L$ is determined by the RQNN model's hyperparameters $\theta$:

$$L(\theta) = -\sum_{i} y_i \log \hat{y}_i(\theta), \qquad (1)$$

which serves as a proxy for the accuracy of the RQNN. For a given input angle, the fully connected (FC) layer is expected to produce the prediction $\hat{y}$ with target outcome $y$ in terms of the set of network hyperparameters $\theta$. The training accuracy and loss of this RQNN model on the MNIST [21], FashionMNIST [47], and KMNIST [48] datasets are shown in Figure 3 and Figure 4, respectively.
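The cross-entropy criterion used at the classification layers can be computed directly. This is a minimal sketch for a four-class problem, matching the four-class setup used here:

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy loss between one-hot targets and predicted
    class probabilities; clipping avoids log(0)."""
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.sum(y_true * np.log(y_pred))

# Four-class toy example.
y_true = np.array([0.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.1, 0.7, 0.1, 0.1])
loss = cross_entropy(y_true, y_pred)
print(round(loss, 4))  # 0.3567, i.e. -log(0.7)
```

Lower loss corresponds to higher predicted probability on the correct class, which is why it proxies accuracy during training.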
0.4 Experimental Results.
In the current experimental setup, the suggested RQNN, the classical RNN model [18], the classical SNN [30], and AlexNet [46] are applied, and the numerical results are reported together with a statistical analysis. In extensive tests, large sets of the MNIST [21], FashionMNIST [47], and KMNIST [48] datasets have been used as training, validation, and test sets. The proposed RQNN, the classical RNN model [18], the classical SNN [30], and AlexNet [46] are trained on images from MNIST [21], FashionMNIST [47], and KMNIST [48]. They are, however, evaluated on unseen noisy versions of the test images to show that the suggested model is more resilient than its classical equivalent and other classical neural network models. Table 2 summarizes the numerical results obtained on test datasets affected by salt-and-pepper, Gaussian, Rayleigh, uniform, and Perlin noise at the levels described above, using the suggested RQNN, classical RNN [18], classical SNN [30], and AlexNet [46] architectures. The presented work employs the mean accuracy (ACC), dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity (SS) as evaluation measures. The numerical data reported in Table 2 show that the proposed RQNN is superior to the classical RNN model [18], the classical SNN [30], and the AlexNet model [46] in handling all types of noise except Perlin noise. Furthermore, we have conducted a two-sided paired Wilcoxon signed-rank test [49] at a fixed significance threshold to show that the proposed RQNN model outperforms the classical RNN model [18], classical SNN [30], and AlexNet [46].
Using the two-sided paired Wilcoxon signed-rank test, a non-parametric statistical hypothesis test, the proposed RQNN is compared with the state-of-the-art neural networks (classical RNN model [18], classical SNN [30], and AlexNet [46]). The suggested RQNN model produces statistically significant outcomes in noisy settings, as evidenced by the Wilcoxon signed-rank test. In the case of Perlin noise, the classical RNN and classical SNN marginally outperform the proposed RQNN due to the inherently repeatable pseudo-random values of the noise within a specified range. However, the overall accuracy of the RQNN is significantly greater than that of the other state-of-the-art methods, as shown in Table 2. Figures 5 and 6 illustrate the mean accuracy of noisy image recognition using the proposed RQNN and its classical counterparts at varied noise intensity levels; they indicate that the proposed hybrid classical-quantum neural network (RQNN) outperforms its classical counterpart (RNN [18]) for most noise types.

Table 2: Performance of the proposed RQNN and the classical RNN [18], SNN [30], and AlexNet [46] models on noisy test images, in terms of mean accuracy (ACC), dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity (SS); blank cells indicate values not available.

| Model   | Dataset | Noise    | ACC   | DSC   | PPV   | SS    |
|---------|---------|----------|-------|-------|-------|-------|
| RQNN    | MNIST   | SnP      | 0.965 | 0.939 | 0.985 | 0.888 |
|         |         | Gaussian | 0.969 | 0.953 | 0.990 | 0.924 |
|         |         | Rayleigh | 0.961 | 0.904 | 0.996 | 0.852 |
|         |         | Uniform  | 0.972 | 0.950 | 0.988 | 0.901 |
|         |         | Perlin   | 0.841 | 0.813 |       |       |
|         | FMNIST  | SnP      | 0.915 | 0.851 | 0.927 | 0.775 |
|         |         | Gaussian | 0.968 | 0.967 | 0.962 | 0.960 |
|         |         | Rayleigh | 0.971 | 0.942 | 0.934 | 0.922 |
|         |         | Uniform  | 0.975 | 0.938 | 0.974 | 0.908 |
|         |         | Perlin   |       |       |       |       |
|         | KMNIST  | SnP      | 0.973 | 0.710 | 0.885 | 0.608 |
|         |         | Gaussian | 0.979 | 0.891 | 0.855 |       |
|         |         | Rayleigh | 0.965 | 0.291 | 0.966 |       |
|         |         | Uniform  | 0.976 | 0.783 | 0.928 |       |
|         |         | Perlin   |       |       |       |       |
| RNN     | MNIST   | SnP      | 0.959 |       |       |       |
|         |         | Gaussian | 0.971 | 0.956 |       |       |
|         |         | Rayleigh | 0.903 |       |       |       |
|         |         | Uniform  | 0.986 | 0.891 |       |       |
|         |         | Perlin   | 0.933 | 0.840 | 0.893 |       |
|         | FMNIST  | SnP      |       |       |       |       |
|         |         | Gaussian |       |       |       |       |
|         |         | Rayleigh |       |       |       |       |
|         |         | Uniform  | 0.973 |       |       |       |
|         |         | Perlin   |       |       |       |       |
|         | KMNIST  | SnP      | 0.971 | 0.882 |       |       |
|         |         | Gaussian | 0.979 | 0.934 | 0.853 |       |
|         |         | Rayleigh | 0.943 | 0.968 |       |       |
|         |         | Uniform  | 0.781 | 0.684 |       |       |
|         |         | Perlin   | 0.931 | 0.572 |       |       |
| SNN     | MNIST   | SnP      |       |       |       |       |
|         |         | Gaussian | 0.950 | 0.927 |       |       |
|         |         | Rayleigh |       |       |       |       |
|         |         | Uniform  |       |       |       |       |
|         |         | Perlin   | 0.929 | 0.838 | 0.811 |       |
|         | FMNIST  | SnP      |       |       |       |       |
|         |         | Gaussian |       |       |       |       |
|         |         | Rayleigh |       |       |       |       |
|         |         | Uniform  | 0.971 | 0.940 | 0.972 |       |
|         |         | Perlin   | 0.858 | 0.721 | 0.770 | 0.679 |
|         | KMNIST  | SnP      |       |       |       |       |
|         |         | Gaussian |       |       |       |       |
|         |         | Rayleigh | 0.938 | 0.292 | 0.184 |       |
|         |         | Uniform  | 0.780 | 0.681 |       |       |
|         |         | Perlin   | 0.379 | 0.271 |       |       |
| AlexNet | MNIST   | SnP      |       |       |       |       |
|         |         | Gaussian |       |       |       |       |
|         |         | Rayleigh |       |       |       |       |
|         |         | Uniform  |       |       |       |       |
|         |         | Perlin   |       |       |       |       |
|         | FMNIST  | SnP      |       |       |       |       |
|         |         | Gaussian |       |       |       |       |
|         |         | Rayleigh |       |       |       |       |
|         |         | Uniform  |       |       |       |       |
|         |         | Perlin   |       |       |       |       |
|         | KMNIST  | SnP      |       |       |       |       |
|         |         | Gaussian |       |       |       |       |
|         |         | Rayleigh |       |       |       |       |
|         |         | Uniform  |       |       |       |       |
|         |         | Perlin   |       |       |       |       |
Discussion
We developed a novel class of supervised RQNNs based on a hybrid classical-quantum algorithm. The RQNN exhibits superior accuracy over conventional neural networks in noisy environments: the suggested RQNN model outperforms its classical counterpart (the RNN) [18], classical SNNs [30], and the AlexNet model [46]. This is because the classical RNNs [18], classical SNN [30], and CNNs (AlexNet) [46] struggle to adapt their network weights in noisy settings. Due to the intrinsically probabilistic character of qubits, the dressed quantum layer in the proposed RQNN efficiently models the noise together with the original pixel values, resulting in improved classification accuracy. Furthermore, the dressed quantum layer in the proposed RQNN model can introduce entanglement between the inputs and outputs of the random spiking neurons, so the input state can be influenced in highly non-trivial ways through measurement back-action from typical output measurements.
More specifically, the RQNN is trained on smaller datasets, optimizing each parameter in the VQC across epochs, whereas all the parameters in classical deep learning models can be updated in parallel at every single epoch. While the RQNN yields superior performance over the classical RNN, SNN, and AlexNet on smaller datasets with similar training sets, it is limited by the inherent challenges in the scalability of VQCs. Nevertheless, we demonstrated here that our proposed RQNN framework yields superior accuracy despite the current technical limitations on the number of available qubits. Furthermore, we devised an efficient optimization method for random spiking quantum neurons that leverages the intrinsic uncertainty of qubits through hybrid classical-quantum algorithms. Experiments employing the proposed RQNN model on the noisy MNIST, FashionMNIST, and KMNIST datasets indicate that it outperforms its classical equivalents and deep learning models (AlexNet). The RQNN achieves high recognition accuracy on the standard problem of visual character recognition on very large datasets, and the quality of the observed results appears to improve as the network grows with the number of qubits. Hence, the proposed RQNN model appears to be a promising candidate for various computer vision applications and has the potential to reshape quantum machine learning research. However, it remains to be investigated how the proposed RQNN architecture performs in other computer vision applications. The robust experimental findings on handling noise with the RQNN, and its superior performance with respect to state-of-the-art classical RNN models, stimulate future work in areas such as pattern recognition, optimization, learning, and associative memory. Furthermore, the proposed RQNN model incorporates hybrid classical-quantum algorithms, which improve network accuracy in noisy environments.
This novel hybrid classical-quantum spiking random neural network is therefore suitable for immediate implementation on near-term quantum devices.
Finally, in this study the hyperparameters of the proposed RQNN model were not fine-tuned to attain accuracy similar to CNN models for image classification without noise. Despite this, the RQNN outperformed the RNN [18], classical SNN [30], and AlexNet [46] in terms of accuracy (ACC), dice similarity (DSC), positive predictive value (PPV), and sensitivity (SS) on unseen noisy test images. The suggested model is statistically significant and could be a viable candidate among deep learning algorithms for other computer vision challenges in the future. It is interesting to note that the proposed RQNN model requires fewer parameters than its classical equivalents, thanks to the parametrized VQC in the dressed quantum layer, which decreases the number of training parameters. Nonetheless, we aim to build an RQNN model that optimizes hybrid classical-quantum circuits with millions of parameters relatively efficiently (a serious problem for classical RNNs). It is also worth noting that, in this work, the hybrid classical-quantum circuit is the leading contender for unitary random neural networks.
Methods
0.5 Hybrid Classical-Quantum Random Neural Network Architecture.
The proposed hybrid classical-quantum RQNN model consists of a classical RNN [18, 14], a dressed quantum layer for classification (whose qubits are subsequently measured so that they collapse to classical bits), and a final classical post-processing layer for determining the results. In addition, as illustrated in Figure 2, a classical pre-processing layer connects the temporal pooling layer and the dressed quantum layer. The classical multi-layer random neural network processes the input data, whose dimension exceeds the number of available qubits, and reduces the high-dimensional images to low-dimensional feature representations. One of the major challenges of quantum machine learning in the NISQ era is to efficiently encode high-dimensional classical bits into the qubits of the dressed quantum layer. In our hybrid classical-quantum architecture, the classical RNN's [18, 14] intermediate outputs are encoded as quantum states and sent to the adjacent VQC. A brief introduction to the previously introduced classical RNN models [18, 14] is provided in the following section.
0.6 Classical Random Neural Networks (RNNs).
Conventional RNNs [18, 14] are composed of neurons that receive excitatory (positive) and inhibitory (negative) spike signals from external sources, sensory sources, or other neurons. At time $t$, the internal state of neuron $i$ is denoted $k_i(t)$, a non-negative integer. Spike signals can arrive from other neurons, since each excited neuron, i.e., each neuron whose internal state satisfies $k_i(t) > 0$, fires after an exponentially distributed random time with rate parameter $r_i$; alternatively, spikes are generated by independent external Poisson processes, with rate $\Lambda_i$ for excitatory spike signals and rate $\lambda_i$ for inhibitory spike signals arriving at neuron $i$. If $k_i(t) = 0$, a negative spike arriving at neuron $i$ at time $t$ has no effect, since the internal state cannot become negative:

$$k_i(t^{+}) = \max(k_i(t) - 1,\, 0). \qquad (2)$$

The arrival of an excitatory spike, on the other hand, always increases the neuron's internal state by one:

$$k_i(t^{+}) = k_i(t) + 1. \qquad (3)$$

When neuron $i$ "fires", which occurs after an exponentially distributed delay with parameter $r_i$, the emitted signal either travels to neuron $j$ as a positive signal with probability $p^{+}(i,j)$, travels to neuron $j$ as a negative signal with probability $p^{-}(i,j)$, or exits the network with probability $d(i)$. The transition probabilities of the Markov chain reflecting the flow of signals between neurons satisfy

$$p(i,j) = p^{+}(i,j) + p^{-}(i,j) \qquad (4)$$

and

$$\sum_{j=1}^{n} p(i,j) + d(i) = 1, \qquad (5)$$

where $n$ is the number of neurons in the RNN model. If neuron $i$ is excited (i.e., $k_i(t) > 0$), it fires after an exponentially distributed time with parameter $r_i$ and its internal state decreases by one ($k_i \to k_i - 1$). It delivers a positive (excitatory) spike to neuron $j$ with probability $p^{+}(i,j)$, resulting in $k_j \to k_j + 1$; alternatively, it transmits a negative (inhibitory) spike to neuron $j$ with probability $p^{-}(i,j)$, resulting in $k_j \to k_j - 1$ if $k_j > 0$ and leaving $k_j = 0$ unchanged otherwise. RNNs can also contain "triggers", which allow firing to propagate simultaneously through several cells.
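The spiking dynamics described above can be simulated as a continuous-time Markov chain in which competing exponential clocks drive external arrivals and firings. The following is a minimal sketch with a hypothetical 3-neuron network and illustrative rates (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-neuron random network with illustrative rates.
n = 3
Lam_plus = np.array([0.6, 0.3, 0.3])    # external excitatory Poisson rates
Lam_minus = np.array([0.1, 0.1, 0.1])   # external inhibitory Poisson rates
r = np.array([1.0, 1.0, 1.0])           # firing rates r_i when excited
P_plus = np.array([[0, .3, .3], [.3, 0, .3], [.3, .3, 0]])   # p+(i,j)
P_minus = np.array([[0, .1, .1], [.1, 0, .1], [.1, .1, 0]])  # p-(i,j)
d = 1.0 - P_plus.sum(1) - P_minus.sum(1)  # departure probabilities d(i)

k = np.zeros(n, dtype=int)  # internal states k_i(t)
for _ in range(5000):
    # Competing exponential clocks: external arrivals and firings.
    rates = np.concatenate([Lam_plus, Lam_minus, r * (k > 0)])
    event = rng.choice(3 * n, p=rates / rates.sum())
    i = event % n
    if event < n:                        # external excitatory arrival
        k[i] += 1
    elif event < 2 * n:                  # external inhibitory arrival
        k[i] = max(k[i] - 1, 0)
    else:                                # neuron i fires and routes a spike
        k[i] -= 1
        probs = np.concatenate([P_plus[i], P_minus[i], [d[i]]])
        dest = rng.choice(2 * n + 1, p=probs)
        if dest < n:
            k[dest] += 1                              # excitatory spike
        elif dest < 2 * n:
            k[dest - n] = max(k[dest - n] - 1, 0)     # inhibitory spike
        # otherwise the signal leaves the network

print(k)  # non-negative integer internal states
```

A neuron only fires when excited (its firing clock has rate zero when $k_i = 0$), so the internal states never become negative, matching Equations (2) and (3).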
0.7 Variational Quantum Circuit (VQC) of RQNN.
The major goal of this research is to present a universal RQNN model that employs qubits to provide reasonably accurate and resilient classification of noisy images in hybrid classical-quantum settings. Quantum information processing on qubits using various quantum gate operations is detailed below. A general quantum system of $n$ qubits can be written as [50]

$$|\psi\rangle = \sum_{i=0}^{2^{n}-1} c_i |i\rangle, \qquad (6)$$

where the $c_i$ are the complex amplitudes of the basis states of the quantum system, subject to the normalization condition

$$\sum_{i=0}^{2^{n}-1} |c_i|^{2} = 1. \qquad (7)$$
A qubit system of dimension $2^{n}$ [51] can be generated by combining a collection of base states $|\phi_i\rangle$ as

$$|\psi\rangle = \sum_{i} \omega_i |\phi_i\rangle, \qquad (8)$$

where $\omega_i$ designates the probability amplitude of base state $|\phi_i\rangle$ and $\sum_i |\omega_i|^{2} = 1$. In contrast to conventional bits, a coherent quantum state exists in a superposition of the eigenstates $|0\rangle$ and $|1\rangle$. The Born rule [52] determines the probability $|\omega_i|^{2}$ of finding the system in the eigenstate $|\phi_i\rangle$. We incorporate an amplitude encoding scheme to transform the classical bits into qubits [53]:

$$|x\rangle = \frac{1}{\lVert x \rVert} \sum_{i=1}^{2^{n}} x_i |i\rangle. \qquad (9)$$
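Amplitude encoding reduces, classically, to L2-normalizing the feature vector so its squared entries form a valid probability distribution. A minimal sketch:

```python
import numpy as np

def amplitude_encode(x):
    """L2-normalize a classical feature vector so it can serve as the
    amplitude vector of a quantum state (2^n amplitudes for n qubits)."""
    x = np.asarray(x, dtype=float)
    norm = np.linalg.norm(x)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return x / norm

# Four classical features -> amplitudes of a 2-qubit state.
state = amplitude_encode([3.0, 1.0, 1.0, 1.0])
print(np.sum(state ** 2))  # 1.0 (Born-rule normalization)
```

This is the sense in which $n$ qubits suffice to hold a $2^{n}$-dimensional classical feature vector.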
The VQC employed in this framework is divided into three sections: encoding, variational, and measurement. The encoding part consists of the Hadamard gate and the single-qubit rotation gates $R_y$ and $R_z$, representing rotations about the y-axis and z-axis, respectively. The Hadamard gate is introduced to produce an unbiased starting state. The input state of the VQC in the dressed quantum layer of the proposed RQNN is the $n$-qubit state

$$|\psi_0\rangle = |0\rangle^{\otimes n}, \qquad (10)$$

onto which the classical information from the classical layers of the RNN is encoded. Each qubit is sent through a Hadamard gate $H$ to create an equal superposition of the qubit states [50]:

$$H|0\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right). \qquad (11)$$

The encoding phase, which comprises $R_y$ and $R_z$ rotations, is then applied to this initial quantum state in the proposed VQC of the dressed quantum layer of the RQNN, and CNOT gates are employed to create complementary quantum states. The $R_y(\theta)$ and $R_z(\theta)$ gates represent single-qubit rotations through an angle $\theta$ (radians) about the y and z axes of the Bloch sphere, respectively. The angle of rotation is determined by the inputs from the classical layer of the RNN, as shown in Figure 2. Subsequently, the encoded state is processed by variational quantum circuits with parameter optimization. The rotation gates are

$$R_y(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \qquad (12)$$

and

$$R_z(\theta) = \begin{pmatrix} e^{-i\theta/2} & 0 \\ 0 & e^{i\theta/2} \end{pmatrix}. \qquad (13)$$
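These gates are small unitary matrices, so the encoding stage can be checked numerically. Below is a minimal one-qubit sketch in the spirit of the circuit described here (Hadamard for an unbiased start, a data-encoding rotation, a trainable rotation, then a Pauli-Z expectation); the specific angles are illustrative, not from the paper:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y axis of the Bloch sphere."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(theta):
    """Single-qubit rotation about the z axis of the Bloch sphere."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
Z = np.array([[1, 0], [0, -1]], dtype=complex)               # Pauli-Z

def circuit_expectation(x, alpha):
    """Toy circuit: H, data rotation Ry(x), trainable Ry(alpha), <Z>."""
    psi = ry(alpha) @ ry(x) @ H @ np.array([1.0, 0.0], dtype=complex)
    return float((psi.conj() @ Z @ psi).real)

# Rotations are unitary for any angle, as required of quantum gates.
U = ry(0.7) @ rz(1.3)
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True
print(round(circuit_expectation(0.4, 0.9), 4))  # -0.9636 = -sin(0.4 + 0.9)
```

Since rotations about the same axis compose, the output equals $-\sin(x + \alpha)$, which makes the circuit easy to sanity-check against the matrix definitions in Equations (12) and (13).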
The variational component of the proposed VQC includes CNOT gates, which entangle the quantum states of individual qubits, together with the $\alpha$ and $\beta$ parameters of the single-qubit unitary gates to be learned. These circuit parameters play the role of the weights in classical random neural networks. In the VQC of the RQNN circuit, a single quasi-local unitary ($U$) gate is used in a translationally invariant form of finite depth. The retrieved data are then processed classically using Softmax to calculate the probability of each potential class, and the result is obtained through quantum observations, or measurements, on the qubits. In the RQNN model, each qubit has a starting state (for example, an evenly balanced superposition of the two values $|0\rangle$ and $|1\rangle$), and the rotation gates adjust the likelihood of measuring $|0\rangle$ or $|1\rangle$. An $n$-qubit register probabilistically represents the whole space of all $2^{n}$ combinations of incoming spikes to neurons at a particular instant. In terms of space, encoding classical data in the amplitudes of a superposition is the most economical method, using $n$ qubits to encode $2^{n}$-dimensional incoming spikes. Applying the rotation operator to all qubits amounts to running a parallel search for the best state in this space. Compared with classical deep spiking neural network models, a number of parameters polynomial in the qubit count characterizes the qubit-based dressed quantum layers of the RQNN, exponentially lowering the number of parameters [18]. The training of the VQC relies on quantum backpropagation algorithms. Here, the number of VQC layers is chosen by optimizing the circuits for maximal expressibility of the encoded qubits in Hilbert space, which aims to obviate the "barren plateau" problem of local minima [53]. The VQC's convergence study is provided in the Appendix.

Appendix
0.8 Convergence Analysis of Random Quantum Neural Networks.
The loss function drives the convergence of the dressed quantum layer of the RQNN, and the classification accuracy is obtained after the network stabilizes. The RQNN uses the cross-entropy loss function for classification, evaluated as

$$L(\theta) = -\sum_{i} y_i \log \hat{y}_i(\theta). \qquad (14)$$

Here $y$ is the true output, and $\hat{y}$ is the output of the Fully Connected (FC) layers on the input for the network hyperparameter set $\theta$. For the rotation gates $R_y$ and $R_z$ of the VQC in the RQNN, the rotation angles (variational parameters) are $\alpha$ and $\beta$, respectively. The rotation gates $R_y$ and $R_z$ act on the qubits $q_\alpha$ and $q_\beta$, respectively.
$$q_\alpha(t) = R_y(\alpha(t))\,|q\rangle \qquad (15)$$

$$q_\beta(t) = R_z(\beta(t))\,|q\rangle \qquad (16)$$

where

$$\Delta\alpha(t) = \alpha(t) - \alpha(t-1) \qquad (17)$$

and

$$\Delta\beta(t) = \beta(t) - \beta(t-1). \qquad (18)$$

Here, Equations 17 and 18 measure the phase changes, or angle updates, $\Delta\alpha(t)$ and $\Delta\beta(t)$ of the dressed quantum layer in the RQNN at epoch $t$. Consider

$$\alpha^{*} = \arg\min_{\alpha} L(\alpha, \beta) \qquad (19)$$

$$\beta^{*} = \arg\min_{\beta} L(\alpha, \beta) \qquad (20)$$

and

$$\lim_{t \to \infty} \alpha(t) = \alpha^{*} \qquad (21)$$

$$\lim_{t \to \infty} \beta(t) = \beta^{*}. \qquad (22)$$

Here, the optimal phases, or angles, for the rotation gates $R_y$ and $R_z$ are $\alpha^{*}$ and $\beta^{*}$, respectively. With respect to $\alpha$ and $\beta$, the loss function is differentiated as follows.
$$\frac{\partial L}{\partial \alpha} = -\sum_{i} \frac{y_i}{\hat{y}_i} \frac{\partial \hat{y}_i}{\partial \alpha} \qquad (23)$$

$$\frac{\partial L}{\partial \beta} = -\sum_{i} \frac{y_i}{\hat{y}_i} \frac{\partial \hat{y}_i}{\partial \beta} \qquad (24)$$

The gradients with respect to the VQC parameters $\alpha$ and $\beta$ are computed using parameter-shift methods, as shown in [54]:

$$\frac{\partial \langle q_\alpha \rangle}{\partial \alpha} = \frac{1}{2}\left[\langle q_\alpha \rangle_{\alpha + \pi/2} - \langle q_\alpha \rangle_{\alpha - \pi/2}\right] \qquad (25)$$

and

$$\frac{\partial \langle q_\beta \rangle}{\partial \beta} = \frac{1}{2}\left[\langle q_\beta \rangle_{\beta + \pi/2} - \langle q_\beta \rangle_{\beta - \pi/2}\right]. \qquad (26)$$

Here, $q_\alpha$ and $q_\beta$ are the measured qubits with rotation angles $\alpha$ and $\beta$, respectively, and $\Delta\alpha$ and $\Delta\beta$ denote the changes in phase, or angle, of the rotation gates involved in updating the qubits. Finally, the rotation angles are updated as

$$\alpha(t+1) = \alpha(t) - \eta_\alpha \frac{\partial L}{\partial \alpha} \qquad (27)$$

$$\beta(t+1) = \beta(t) - \eta_\beta \frac{\partial L}{\partial \beta}, \qquad (28)$$

where $\eta_\alpha$ and $\eta_\beta$ are the learning rates of the gradient descent process for updating the rotation angles.
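The parameter-shift rule and the gradient-descent update can be verified numerically on a one-qubit toy circuit. In this sketch the observable after an $R_y(\alpha)$ rotation of $|0\rangle$ is analytically $\cos\alpha$, so the shifted-evaluation gradient can be checked against $-\sin\alpha$ exactly (the angle and learning rate are illustrative):

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation(alpha):
    """<Z> after applying Ry(alpha) to |0>; analytically cos(alpha)."""
    psi = ry(alpha) @ np.array([1.0, 0.0])
    return psi[0] ** 2 - psi[1] ** 2

def parameter_shift(alpha, shift=np.pi / 2):
    """Gradient of the expectation via the parameter-shift rule:
    half the difference of two shifted circuit evaluations."""
    return (expectation(alpha + shift) - expectation(alpha - shift)) / 2

alpha = 0.8
grad = parameter_shift(alpha)
print(np.isclose(grad, -np.sin(alpha)))  # True: d/da cos(a) = -sin(a)

# One gradient-descent step on the rotation angle (learning rate 0.1),
# mirroring the angle-update rule described above.
alpha = alpha - 0.1 * grad
```

Unlike finite differences, the parameter-shift rule is exact for gates of this form, which is why it is the standard way to train VQC parameters from circuit evaluations alone.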
0.9 Data Availability
The MNIST [21], FashionMNIST [47], and KMNIST [48] datasets can be found at the following links: http://yann.lecun.com/exdb/mnist/, https://github.com/zalandoresearch/FashionMNIST, and https://github.com/roiscodh/kmnist, respectively.
0.10 Code Availability
The PyTorch source code for executing the RQNN is made available on GitHub (https://github.com/darthsimpus/RQNN) to reproduce the results reported in the manuscript.
References
 [1] Y. LeCun, Y. Bengio and G. Hinton, “Deep learning," Nature, vol. 521, pp. 436–444, 2015, doi: https://doi.org/10.1038/nature14539.
 [2] K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016, doi: 10.1109/CVPR.2016.90.
 [3] S. Madan, T. Henry, J. Dozier, et al., “When and how convolutional neural networks generalize to out-of-distribution category–viewpoint combinations," Nat. Mach. Intell., vol. 4, pp. 146–153, 2022, doi: https://doi.org/10.1038/s42256-021-00437-5.

 [4] A. Laborieux, M. Ernoult, T. Hirtzlin and D. Querlioz, “Synaptic metaplasticity in binarized neural networks," Nat. Commun., vol. 12, no. 2549, 2021, doi: https://doi.org/10.1038/s41467-021-22768-y.
 [5] G. Karunaratne, M. Schmuck, M. Le Gallo, et al., “Robust high-dimensional memory-augmented neural networks," Nat. Commun., vol. 12, no. 2468, 2021, doi: https://doi.org/10.1038/s41467-021-22364-0.
 [6] P. Dhar, “The carbon impact of artificial intelligence," Nat. Mach. Intell., vol. 2, pp. 423–425, 2020, doi: https://doi.org/10.1038/s42256-020-0219-9.

 [7] G. Indiveri, E. Chicca and R. Douglas, “A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity," IEEE Transactions on Neural Networks, vol. 17, no. 1, pp. 211–221, 2006, doi: 10.1109/TNN.2005.860850.
 [8] J. Göltz, L. Kriener, A. Baumbach, et al., “Fast and energy-efficient neuromorphic deep learning with first-spike times," Nat. Mach. Intell., vol. 3, pp. 823–835, 2021, doi: https://doi.org/10.1038/s42256-021-00388-x.
 [9] K. Roy, A. Jaiswal and P. Panda, “Towards spike-based machine intelligence with neuromorphic computing," Nature, vol. 575, pp. 607–617, 2019, doi: https://doi.org/10.1038/s41586-019-1677-2.
 [10] P. Merolla, et al., “A million spiking-neuron integrated circuit with a scalable communication network and interface," Science, vol. 345, pp. 668–673, 2014, doi: 10.1126/science.1254642.
 [11] N. Grzesiak, R. Blümel, K. Wright, et al., “Efficient arbitrary simultaneously entangling gates on a trapped-ion quantum computer," Nat. Commun., vol. 11, pp. 2963, 2020, doi: https://doi.org/10.1038/s41467-020-16790-9.
 [12] F. Arute, K. Arya, R. Babbush, et al., “Quantum supremacy using a programmable superconducting processor," Nature, vol. 574, pp. 505–510, 2019, doi: https://doi.org/10.1038/s41586-019-1666-5.
 [13] E. Gelenbe, “Random neural networks with negative and positive signals and product form solution," Neural Comput., vol. 1, no. 4, pp. 502–511, 1989, doi: 10.1162/neco.1989.1.4.502.
 [14] E. Gelenbe, “Stability of the random neural network model," Neural Comput., vol. 2, no. 2, pp. 239–247, 1990, doi: 10.1162/neco.1990.2.2.239.
 [15] H. Sompolinsky, A. Crisanti and H. J. Sommers, “Chaos in Random Neural Networks," Phys. Rev. Lett., vol. 61, no. 3, pp. 259–262, 1988, doi: 10.1103/PhysRevLett.61.259.
 [16] Y. Yin and E. Gelenbe, “Single-cell based random neural network for deep learning,” In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 86–93, 2017, doi: 10.1109/IJCNN.2017.7965840.
 [17] W. Serrano, E. Gelenbe and Y. Yin, “The Random Neural Network with Deep Learning Clusters in Smart Search,” Neurocomputing, vol. 396, pp. 394–405, 2020, doi: https://doi.org/10.1016/j.neucom.2018.05.134.
 [18] E. Gelenbe and Y. Yin, “Deep learning with random neural networks,” 2016 International Joint Conference on Neural Networks (IJCNN), pp. 1633–1638, 2016, doi: 10.1109/IJCNN.2016.7727393.
 [19] E. Gelenbe and K. F. Hussain, “Learning in the multiple class random neural network,” IEEE Transactions on Neural Networks, vol. 13, no. 6, pp. 1257–1267, 2002, doi: 10.1109/TNN.2002.804228.
 [20] E. Gelenbe, Z. H. Mao and Y. D. Li, “Function approximation with spiked random networks,” IEEE Transactions on Neural Networks, vol. 10, no. 1, pp. 3–9, 1999, doi: 10.1109/72.737488.
 [21] Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, “Gradient-based learning applied to document recognition," In Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998, doi: 10.1109/5.726791.
 [22] A. Javed, H. Larijani, A. Ahmadinia, R. Emmanuel, M. Mannion and D. Gibson, “Design and Implementation of a Cloud Enabled Random Neural Network-Based Decentralized Smart Controller With Intelligent Sensor Nodes for HVAC," IEEE Internet of Things Journal, vol. 4, no. 2, pp. 393–403, 2017, doi: 10.1109/JIOT.2016.2627403.
 [23] A. Javed, H. Larijani, A. Ahmadinia and D. Gibson, “Smart Random Neural Network Controller for HVAC Using Cloud Computing Technology," IEEE Transactions on Industrial Informatics, vol. 13, no. 1, pp. 351–360, 2017, doi: 10.1109/TII.2016.2597746.
 [24] W. Serrano, “The blockchain random neural network for cybersecure IoT and 5G infrastructure in smart cities," Journal of Network and Computer Applications, vol. 175, no. 102909, 2021, doi: https://doi.org/10.1016/j.jnca.2020.102909.
 [25] S. Mohamed and G. Rubino, “A study of real-time packet video quality using random neural networks," IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 12, pp. 1071–1083, 2002, doi: 10.1109/TCSVT.2002.806808.
 [26] E. Gelenbe, Y. Feng, and K. R. R. Krishnan, “Neural network methods for volumetric magnetic resonance imaging of the human brain,” Proc. IEEE, vol. 84, no. 10, pp. 1488–1496, 1996, doi: 10.1109/5.537113.
 [27] E. Gelenbe, V. Koubi, and F. Pekergin, “Dynamical random neural network approach to the traveling salesman problem,” in Proc. IEEE Symp. Syst., Man, Cybern., pp. 630–635, 1993, doi: 10.1109/ICSMC.1993.384945.
 [28] S. Basterrech, S. Mohammed, G. Rubino and M. Soliman, “Levenberg–Marquardt Training Algorithms for Random Neural Networks," The Computer Journal, vol. 54, no. 1, pp. 125–135, 2011, doi: 10.1093/comjnl/bxp101.
 [29] E. Gelenbe, “Learning in the recurrent random neural network,” Neural Comput., vol. 5, no. 1, pp. 154–164, 1993, doi: 10.1162/neco.1993.5.1.154.
 [30] S. R. Kheradpisheh, M. Ganjtabesh, S. J. Thorpe, and T. Masquelier, “STDP-based spiking deep convolutional neural networks for object recognition," Neural Networks, vol. 99, pp. 55–67, 2018, doi: https://doi.org/10.1016/j.neunet.2017.12.005.
 [31] M. Cerezo, A. Arrasmith, R. Babbush, et al., “Variational quantum algorithms," Nat. Rev. Phys., vol. 3, pp. 625–644, 2021, doi: https://doi.org/10.1038/s42254-021-00348-9.
 [32] L. Gyongyosi and S. Imre, “Training Optimization for Gate-Model Quantum Neural Networks," Scientific Reports, vol. 9, no. 1, pp. 12679, 2019, doi: https://doi.org/10.1038/s41598-019-48892-w.
 [33] A. Abbas, D. Sutter, C. Zoufal, et al., “The power of quantum neural networks," Nat. Comput. Sci., vol. 1, pp. 403–409, 2021, doi: https://doi.org/10.1038/s43588-021-00084-1.
 [34] M. Benedetti, E. Lloyd, S. Sack, and M. Fiorentini, “Parameterized quantum circuits as machine learning models," Quantum Sci. Technol., vol. 4, 2019, doi: https://doi.org/10.1088/2058-9565/ab4eb5.
 [35] P. Rebentrost, T. R. Bromley, C. Weedbrook, and S. Lloyd, “Quantum Hopfield neural network," In: Physical Review A, vol. 98, no. 4, pp. 042308, 2018, doi: 10.1103/PhysRevA.98.042308.
 [36] L. Behera, I. Kar, A. C. Elitzur, “Recurrent Quantum Neural Network and its Applications," In: The Emerging Physics of Consciousness. The Frontiers Collection. Springer, 2006, doi: https://doi.org/10.1007/3-540-36723-3_9.
 [37] I. Cong, S. Choi, and M. D. Lukin, “Quantum convolutional neural networks," Nature Physics, vol. 15, pp. 1273–1278, 2019, doi: https://doi.org/10.1038/s41567-019-0648-8.
 [38] L. B. Kristensen, M. Degroote, P. Wittek, et al., “An artificial spiking quantum neuron," npj Quantum Inf., vol. 7, no. 59, 2021, doi: https://doi.org/10.1038/s41534-021-00381-7.
 [39] Y. Sun, Y. Zeng, T. Zhang, “Quantum superposition inspired spiking neural network," iScience, vol. 24, no. 8, pp. 102880, 2021, doi: https://doi.org/10.1016/j.isci.2021.102880.
 [40] V. Aggarwal and A. R. Calderbank, “Boolean Functions, Projection Operators, and Quantum Error Correcting Codes," IEEE Trans. on Inf. Th., vol. 54, no. 4, pp. 1700–1707, 2008, doi: 10.1109/TIT.2008.917720.
 [41] A. M. Souza, J. Zhang, C. Ryan, and R. Laflamme, “Experimental magic state distillation for fault-tolerant quantum computing," Nat. Commun., vol. 2, no. 169, 2011, doi: https://doi.org/10.1038/ncomms1166.
 [42] Y. Liang, W. Peng, Zhu-Jun Zheng, O. Silvén, and G. Zhao, “A hybrid quantum–classical neural network with deep residual learning," Neural Networks, vol. 143, pp. 133–147, 2021, doi: https://doi.org/10.1016/j.neunet.2021.05.028.
 [43] S. Yen-Chi Chen, Chih-Min Huang, Chia-Wei Hsing, and Ying-Jer Kao, “An end-to-end trainable hybrid classical-quantum classifier," Machine Learning: Science and Technology, vol. 2, no. 4, 2021, doi: https://doi.org/10.1088/2632-2153/ac104d.
 [44] J. Liu, K. H. Lim, K. L. Wood, et al., “Hybrid quantum-classical convolutional neural networks," Sci. China Phys. Mech. Astron., vol. 64, no. 290311, 2021, doi: https://doi.org/10.1007/s11433-021-1734-3.

 [45] A. Mari, T. R. Bromley, J. Izaac, M. Schuld, and N. Killoran, “Transfer learning in hybrid classical-quantum neural networks," Quantum, vol. 4, pp. 340, 2020, doi: https://doi.org/10.22331/q-2020-10-09-340.
 [46] A. Krizhevsky, I. Sutskever and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks," Advances in Neural Information Processing Systems, vol. 25, 2012.
 [47] H. Xiao, K. Rasul and R. Vollgraf, “Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms," ArXiv, 2017, arXiv:1708.07747v2.
 [48] T. Clanuwat, M. Bober-Irizar, A. Kitamoto, A. Lamb, K. Yamamoto and D. Ha, “Deep Learning for Classical Japanese Literature," In Proc. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada, 2018, arXiv:1812.01718v1.
 [49] W. J. Conover, “Practical nonparametric statistics (3rd ed.)," John Wiley & Sons, Inc., pp. 350, 1999, ISBN: 0471160687.
 [50] D. Konar, S. Bhattacharyya, B. K. Panigrahi and E. C. Behrman, “Qutrit-Inspired Fully Self-Supervised Shallow Quantum Learning Network for Brain Tumor Segmentation," IEEE Transactions on Neural Networks and Learning Systems, pp. 1–12, 2021, doi: 10.1109/TNNLS.2021.3077188.
 [51] D. Konar, S. Bhattacharya, B. K. Panigrahi, and K. Nakamatsu, “A quantum bidirectional self-organizing neural network (QBDSONN) architecture for binary object extraction from a noisy perspective," Applied Soft Computing, vol. 46, pp. 731–752, 2016, doi: https://doi.org/10.1016/j.asoc.2015.12.040.
 [52] R. P. Feynman, R. B. Leighton, and M. Sands, “The Feynman Lectures on Physics," Pearson Addison-Wesley, 1965.

 [53] V. Havlíček, A. D. Córcoles, K. Temme, et al., “Supervised learning with quantum-enhanced feature spaces," Nature, vol. 567, pp. 209–212, 2019, doi: https://doi.org/10.1038/s41586-019-0980-2.
 [54] K. Mitarai, M. Negoro, M. Kitagawa and K. Fujii, “Quantum circuit learning," Physical Review A, vol. 98, no. 3, pp. 032309, 2018, doi: 10.1103/PhysRevA.98.032309.
Acknowledgements
This work was partially supported by the Center of Advanced Systems Understanding (CASUS), financed by Germany’s Federal Ministry of Education and Research (BMBF) and by the Saxon state government out of the state budget approved by the Saxon State Parliament.
Author contributions statement
D. Konar, S. Bhandary and A. D. Sarma conceived the main methodology, implementations, experiments and analyzed the results, and prepared the original draft. E. Gelenbe developed the RNN, provided the theoretical framework and edited the manuscript. A. Cangi analyzed the results and edited the manuscript.
Competing interests
The authors declare that they have no competing financial or personal interests that could affect their objectivity.