1 Introduction
Successful deep learning (DL) applications, such as automatic speech recognition (ASR) [17], natural language processing (NLP) [14], and computer vision [31], rely heavily on the hardware breakthrough of the graphical processing unit (GPU). However, the advancement of large DL models, such as GPT [3] and BERT [10], is largely attributed to massive computing capabilities that are available only to big companies owning numerous costly, industrial-level GPUs. With recent years witnessing the rapid development of noisy intermediate-scale quantum (NISQ) devices [23, 12, 13], quantum computing hardware is expected to further speed up classical DL algorithms through novel quantum machine learning (QML) models, such as quantum neural networks (QNN), designed for NISQ devices
[4, 16, 11, 15]. Unfortunately, two obstacles prevent NISQ devices from supporting QNNs in practice. For one, classical DL models cannot be directly deployed on a NISQ device without conversion to a quantum tensor format [26, 5]. For another, NISQ devices admit only a few physical qubits, such that insufficient qubits can be spared for quantum error correction [2, 12, 13]; more significantly, the representation power of QML is quite limited due to the small number of currently available qubits [25].

To deal with the first challenge, a variational algorithm, namely the variational quantum circuit (VQC), was proposed to enable QNNs to be simulated on NISQ devices; it has attained advantages, in some cases exponential, over DL counterparts on many tasks such as ASR [33, 24], NLP [34], and reinforcement learning
[6]. As for the second challenge, distributed quantum machine learning systems, which consist of different quantum machines with limited quantum capabilities, can be set up to increase the overall quantum computing power. One particular distributed architecture is quantum federated learning (QFL), a decentralized computing architecture derived from classical federated learning (FL).

The FL framework builds on advances in hardware, making tiny on-device DL systems practically powerful. For example, a cloud-based speech recognition system can transmit a global acoustic model to a user's cell phone and then send updating information back to the cloud without collecting the user's private data on a centralized computing server. This methodology helps to build privacy-preserving DL systems and inspires us to leverage quantum computing to expand machine learning capabilities. As shown in Figure 1, our proposed QFL differs from FL in the models utilized: QFL is composed of VQC models rather than the classical DL models of FL. More specifically, the QFL system consists of a global VQC model placed on the cloud and local VQC models deployed on user devices. The training process of QFL involves three key steps: (1) the global VQC parameters are transmitted to local participants' devices; (2) each local VQC is adaptively trained on the participant's data, and the model gradients are uploaded back to the cloud; (3) the uploaded gradients are aggregated into a centralized gradient that updates the global model parameters.
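The three-step training loop above can be sketched in miniature. The toy scalar "model" and the function names `local_gradient` and `qfl_round` below are illustrative stand-ins, not the paper's VQC implementation:

```python
import numpy as np

def local_gradient(theta, target):
    # Step (2): each participant computes a gradient on its own data;
    # here the "loss" is simply (theta - target)^2 as a placeholder.
    return 2.0 * (theta - target)

def qfl_round(theta_global, participants, lr=0.1):
    # Step (1): the global parameter is broadcast to every participant.
    grads = [(n_k, local_gradient(theta_global, t_k)) for t_k, n_k in participants]
    # Step (3): local gradients are aggregated (weighted by local data size)
    # into one centralized gradient for the global update.
    total = sum(n for n, _ in grads)
    agg = sum(n * g for n, g in grads) / total
    return theta_global - lr * agg

participants = [(1.0, 100), (3.0, 300)]  # (local optimum, local data size)
theta = 0.0
for _ in range(200):
    theta = qfl_round(theta, participants)
# theta converges to the data-weighted mean (100*1 + 300*3)/400 = 2.5
```

The weighting by local data size anticipates the aggregation rule used later in the FQNGD update.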
Besides, a significant inherent obstacle of QFL is the communication overhead among the different VQC models. To reduce the communication cost, a more efficient training algorithm is needed to accelerate the convergence rate so that fewer global model updates are required. Therefore, in this work we put forth a federated quantum learning algorithm for training QFL, namely federated quantum natural gradient descent (FQNGD). FQNGD is derived from quantum natural gradient descent (QNGD), which has demonstrated more efficient performance than other stochastic gradient descent (SGD) methods in the training of a single VQC.
1.1 Main Results
Our main contributions to this work are summarized as follows:

We design the FQNGD algorithm for the QFL system by extending the QNGD method for training a single VQC. The QNGD algorithm is developed by approximating the Fubini-Study metric tensor for a specific VQC model structure.

We compare our FQNGD algorithm with conventional SGD algorithms and highlight the theoretical performance advantages of FQNGD over other SGD methods.

An empirical study on handwritten digit classification is conducted to corroborate our theoretical results. The experimental results demonstrate the performance advantages of FQNGD for QFL.
2 Related Work
Konečnỳ et al. [19] first proposed FL strategies to improve the communication efficiency of a distributed computing system, and McMahan et al. [22] set up FL systems with concerns about the use of big data and large-scale cloud-based DL [28]. Chen et al. [7] demonstrate a QFL architecture built on the classical FL paradigm, where the central node holds a global VQC and receives the trained VQC parameters from participants' local quantum devices. The QNGD algorithm was discussed as an efficient training method for a single VQC [29]. In particular, Stokes et al. [29] first showed that the Fubini-Study metric tensor can be used for QNGD.
This work proposes the FQNGD algorithm by extending QNGD to the FL setting and highlights the learning efficiency of FQNGD in a QFL architecture. Besides, compared with the work in [7], the gradients of the VQCs, rather than the VQC parameters of the local devices, are uploaded to the global model, such that the updated gradients can be collected without accessing the local VQC parameters as in [7].
3 Results
3.1 Preliminaries
In this section, we present the necessary preliminaries that compose the algorithmic foundations of FQNGD. More specifically, we first introduce the VQC framework in detail and then explicitly explain the steps of QNGD.
3.1.1 Variational Quantum Circuit
An illustration of the VQC is shown in Figure 2, where the VQC model consists of three components: (a) tensor product encoding (TPE); (b) a parametric quantum circuit (PQC); (c) measurement. The TPE initializes the input quantum state from the classical inputs x_1, x_2, …, x_U, the PQC operator transforms the input quantum state into an output quantum state, and the measurement produces the expected observables ⟨O_1⟩, ⟨O_2⟩, …, ⟨O_U⟩.
In more detail, the TPE model was first proposed in [30] and aims at converting a classical vector x into a quantum state |x⟩ by building up a one-to-one mapping:

|x⟩ = ⊗_{i=1}^{U} R_Y(2x_i) |0⟩ = ⊗_{i=1}^{U} ( cos(x_i)|0⟩ + sin(x_i)|1⟩ ),   (1)

where R_Y(2x_i) refers to a single-qubit quantum gate rotating about the Y axis, and each x_i is constrained to the domain [0, π/2), which results in a reversible one-to-one conversion between x and |x⟩.
Moreover, the PQC is equipped with CNOT gates for quantum entanglement and trainable quantum gates R_X(α_i), R_Y(β_i), and R_Z(γ_i), where the qubit angles α_i, β_i, and γ_i are adjustable during the training process. The PQC block in the green dashed square is repeatedly copied to set up a deep model, and the number of copies is called the depth of the VQC. The measurement operation outputs classical expected observations ⟨O_1⟩, ⟨O_2⟩, …, ⟨O_U⟩ from the quantum output states. The expected outputs are used to calculate the loss value and the gradients [27] that update the VQC model parameters via the back-propagation algorithm [32].
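As a rough illustration of the TPE step, one common cos/sin single-qubit encoding can be written in a few lines of NumPy. The function name `tpe_encode` and the assumed [0, π/2] feature range are ours, offered as a sketch rather than the paper's exact circuit:

```python
import numpy as np

def tpe_encode(x):
    """Tensor product encoding sketch: each classical feature x_i (assumed
    in [0, pi/2]) becomes a single-qubit state cos(x_i)|0> + sin(x_i)|1>,
    and the full register state is their Kronecker product."""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi), np.sin(xi)])
        state = np.kron(state, qubit)
    return state

psi = tpe_encode([0.0, np.pi / 4])
# two qubits -> 4 amplitudes; the |00> amplitude is cos(0)*cos(pi/4) = 0.7071...
```

Because each single-qubit factor is normalized, the encoded register state is automatically a unit vector.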
3.1.2 Quantum Natural Gradient Descent
As shown in Eq. (2), at step t, the standard gradient descent minimizes a loss function L(θ) with respect to the parameters θ in a Euclidean space:

θ_{t+1} = θ_t − η ∇_θ L(θ_t).   (2)
The standard gradient descent algorithm performs each optimization step in a Euclidean geometry on the parameter space. However, since the form of parameterization is not unique, different parameterizations are likely to distort distances within the optimization landscape. A better alternative is to perform gradient descent in the distribution space, namely natural gradient descent [1], which is dimension-free and invariant with respect to parameterization. In doing so, each optimization step chooses the optimal step for the update of the parameters θ, regardless of the choice of parameterization. Mathematically, the standard gradient descent is modified as follows:
θ_{t+1} = θ_t − η F^{−1}(θ_t) ∇_θ L(θ_t),   (3)

where F denotes the Fisher information matrix, which acts as a metric tensor transforming the steepest descent in the Euclidean parameter space into the steepest descent in the distribution space.
Since the standard Euclidean geometry is suboptimal for the optimization of quantum variational algorithms, a quantum analog has the following form:
θ_{t+1} = θ_t − η g^{+}(θ_t) ∇_θ L(θ_t),   (4)

where g^{+}(θ_t) refers to the pseudo-inverse of the metric tensor g(θ_t), which is associated with the specific architecture of the quantum circuit. The tensor g(θ_t) can be calculated using the Fubini-Study metric and reduces to the Fisher information matrix in the classical limit [21].
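The metric-preconditioned update of Eq. (4) can be sketched numerically. The quadratic loss and the diagonal metric below are illustrative placeholders, not an actual VQC metric:

```python
import numpy as np

def natural_gradient_step(theta, grad, metric, lr=0.05):
    # Precondition the Euclidean gradient by the pseudo-inverse of the
    # metric tensor, then take a plain descent step.
    return theta - lr * np.linalg.pinv(metric) @ grad

theta = np.array([1.0, 1.0])
metric = np.array([[4.0, 0.0], [0.0, 0.25]])  # anisotropic metric tensor
grad = 2.0 * theta                            # gradient of ||theta||^2
theta_new = natural_gradient_step(theta, grad, metric)
# the step is rescaled per direction: larger along the small-metric axis
```

Using the pseudo-inverse rather than a plain inverse keeps the update defined even when the metric is singular, which is why Eq. (4) is stated with g^{+}.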
3.2 Theoretical Results
3.2.1 Quantum Natural Gradient Descent for the VQC
Before employing FQNGD for a quantum federated learning system, we concentrate on the use of QNGD for a single VQC. For simplicity, we leverage a block-diagonal approximation of the Fubini-Study metric tensor to compose QNGD for training the VQC on NISQ quantum hardware.
Given an initial quantum state |ψ_0⟩ and a PQC with L layers, for ℓ ∈ {1, 2, …, L} we separately denote V_ℓ and W_ℓ(θ_ℓ) as the unitary matrices associated with the non-parameterized quantum gates and the parameterized ones. Consider a variational quantum circuit of the form:

|ψ_L⟩ = W_L(θ_L) V_L ⋯ W_2(θ_2) V_2 W_1(θ_1) V_1 |ψ_0⟩.   (5)
Furthermore, any unitary parametric gate can be rewritten as W_ℓ(θ_ℓ) = e^{−i θ_ℓ K_ℓ}, where K_ℓ refers to the Hermitian generator of the gate W_ℓ. The block-diagonal approximation of the Fubini-Study metric tensor admits that, for each parametric layer ℓ in the variational quantum circuit, the corresponding block-diagonal submatrix is calculated by

g^{(ℓ)}_{ij} = Re⟨ψ_ℓ| K_i K_j |ψ_ℓ⟩ − ⟨ψ_ℓ| K_i |ψ_ℓ⟩⟨ψ_ℓ| K_j |ψ_ℓ⟩,   (6)
where

|ψ_ℓ⟩ = V_ℓ W_{ℓ−1}(θ_{ℓ−1}) V_{ℓ−1} ⋯ W_1(θ_1) V_1 |ψ_0⟩   (7)

denotes the quantum state before the application of the parameterized layer ℓ. Figure 4 illustrates a simplified version of the VQC, where V_1, V_2 are associated with non-parametric gates, and W_1(θ_1) and W_2(θ_2) correspond to the parametric gates with adjustable parameters θ_1 and θ_2, respectively. Since there are two layers, each of which owns two free parameters, the block-diagonal approximation is composed of two 2×2 matrices, g^{(1)} and g^{(2)}. More specifically, g^{(1)} and g^{(2)} can be separately expressed as:
g^{(1)}_{ij} = Re⟨ψ_1| K_i K_j |ψ_1⟩ − ⟨ψ_1| K_i |ψ_1⟩⟨ψ_1| K_j |ψ_1⟩,  i, j ∈ {1, 2},   (8)

and

g^{(2)}_{ij} = Re⟨ψ_2| K_i K_j |ψ_2⟩ − ⟨ψ_2| K_i |ψ_2⟩⟨ψ_2| K_j |ψ_2⟩,  i, j ∈ {1, 2}.   (9)
The blocks g^{(1)} and g^{(2)} then compose the approximate metric tensor g as:

g ≈ diag( g^{(1)}, g^{(2)} ).   (10)

Then, we employ Eq. (4) to update the VQC parameters θ.
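A minimal numerical sketch of one element of a block submatrix, assuming the variance/covariance form of the Fubini-Study block in Eq. (6) with Hermitian generators; the single-qubit state and Pauli generators below are illustrative:

```python
import numpy as np

def fs_metric_element(psi, Ki, Kj):
    # Re<psi|Ki Kj|psi> - <psi|Ki|psi><psi|Kj|psi> for Hermitian Ki, Kj.
    exp_ij = np.vdot(psi, Ki @ Kj @ psi)
    exp_i = np.vdot(psi, Ki @ psi)
    exp_j = np.vdot(psi, Kj @ psi)
    return np.real(exp_ij) - np.real(exp_i) * np.real(exp_j)

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([1, 0], dtype=complex)  # the state |0>
# For |0>: <Z> = 1 and <ZZ> = 1, so the Z-Z element vanishes;
# <X> = 0 and <XX> = 1, so the X-X element equals 1.
```

The diagonal elements are simply the variances of the generators in the pre-layer state, which is why a generator whose rotation leaves the state unchanged contributes zero to the metric.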
3.2.2 Federated Quantum Natural Gradient Descent
A QFL system can be built by setting up VQC models in an FL manner, where the QNGD algorithm is applied to each VQC and the uploaded gradients of all VQCs are aggregated to update the model parameters of the global VQC. Mathematically, FQNGD can be summarized as:

θ_{t+1} = θ_t − η Σ_{k=1}^{K} (D_k / D) g^{+}(θ_t^{(k)}) ∇_θ L(θ_t^{(k)}),   (11)

where θ_t and θ_t^{(k)} separately correspond to the model parameters of the global VQC and the k-th local VQC model at epoch t, D_k represents the amount of training data stored by participant k, and the sum of the participants' data is D = Σ_{k=1}^{K} D_k.

Compared with the SGD counterparts used for QFL, the FQNGD algorithm admits adaptive learning rates for the gradients, such that the convergence rate can be accelerated according to the status of each VQC model.
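The weighted aggregation of locally preconditioned gradients in Eq. (11) can be sketched as follows; the function name `fqngd_update` and all numbers are illustrative assumptions:

```python
import numpy as np

def fqngd_update(theta, local_grads, local_metrics, data_sizes, lr=0.1):
    # Each participant's gradient is preconditioned by the pseudo-inverse
    # of its own metric tensor, then the natural gradients are combined
    # with weights D_k / D before the single global update.
    total = float(sum(data_sizes))
    agg = np.zeros_like(theta)
    for g, m, d in zip(local_grads, local_metrics, data_sizes):
        agg += (d / total) * (np.linalg.pinv(m) @ g)
    return theta - lr * agg

theta = np.array([1.0, 1.0])
grads = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]
metrics = [np.eye(2), 2.0 * np.eye(2)]
theta_new = fqngd_update(theta, grads, metrics, data_sizes=[100, 300])
```

Note that the preconditioning happens per participant, since each local VQC has its own parameters and hence its own metric tensor at epoch t.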
3.2.3 Empirical Results
To demonstrate the FQNGD algorithm for QFL, we perform binary and ternary classification tasks on subsets of digits from the standard MNIST dataset [9]; the training and test data are partitioned separately for the binary and ternary tasks. As for the setup of QFL in our experiments, the QFL system consists of identical local VQC participants, each of which owns the same amount of training data. The test data are stored at the global node and are used to evaluate classification performance. The experiments compare our proposed FQNGD algorithm with three other optimizers: the naive SGD optimizer, the Adagrad optimizer [20], and the Adam optimizer [18]. The Adagrad optimizer is a gradient descent optimizer with a past-gradient-dependent learning rate in each dimension. The Adam optimizer refers to gradient descent with an adaptive learning rate as well as adaptive first and second moments.
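As a concrete reference for the description above, a bare-bones Adagrad step looks like the following; this is illustrative only, and the experiments rely on standard library implementations:

```python
import numpy as np

def adagrad_step(theta, grad, accum, lr=0.5, eps=1e-8):
    # The per-dimension learning rate is scaled down by the square root
    # of the accumulated squared past gradients in that dimension.
    accum = accum + grad ** 2
    theta = theta - lr * grad / (np.sqrt(accum) + eps)
    return theta, accum

theta = np.array([1.0, 1.0])
accum = np.zeros(2)
grad = np.array([10.0, 0.1])  # very different gradient scales per dimension
theta, accum = adagrad_step(theta, grad, accum)
# on the first step both coordinates move by roughly lr,
# despite the 100x gap in gradient magnitude
```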
Accuracy (%), binary classification:  SGD 98.48 | Adagrad 98.81 | Adam 98.87 | FQNGD 99.32

Accuracy (%), ternary classification: SGD 97.86 | Adagrad 98.63 | Adam 98.71 | FQNGD 99.12
As shown in Figure 5, our simulation results suggest that the proposed FQNGD method achieves the fastest convergence rate among the compared optimization approaches. This means that FQNGD can lower the communication cost while maintaining baseline performance on both binary and ternary classification on the MNIST dataset. Moreover, we evaluate QFL performance in terms of classification accuracy: FQNGD outperforms the other counterparts with the highest accuracy values. In particular, FQNGD is designed for the VQC model and attains better empirical results than the Adam and Adagrad methods, despite their adaptive learning rates over epochs.
4 Conclusion and Discussion
This work focuses on the design of the FQNGD algorithm for a QFL system in which multiple local VQC models are applied. FQNGD is derived from training a single VQC with QNGD, which relies on a block-diagonal approximation of the Fubini-Study metric tensor for the VQC architecture. Compared with SGD optimization approaches such as the Adagrad and Adam optimizers, our experiments on MNIST classification tasks demonstrate that FQNGD attains better empirical results while exhibiting a faster convergence rate. This implies that FQNGD is capable of lowering the communication cost while maintaining baseline empirical performance.
Although this work focuses on optimization methods for the QFL system, the decentralized deployment of high-performance QFL systems adapted to large-scale datasets is left for future investigation. In particular, it is essential to consider how to defend against malicious attacks from adversaries and to boost the robustness and integrity of the information shared among local participants. Besides, other quantum neural networks, such as quantum convolutional neural networks (QCNN) [8], are worth further study as constituents of a QFL system.

5 Method
In our study, the Fubini-Study metric tensor is typically defined in the quantum-mechanical notation of mixed states. To explicitly relate this notation to homogeneous coordinates, let

|ψ⟩ = Σ_k Z_k |e_k⟩,   (12)

where {|e_k⟩} is a set of orthonormal basis vectors for the Hilbert space and the Z_k are complex numbers; [Z_0 : Z_1 : ⋯ : Z_n] is the standard notation for a point in projective space in homogeneous coordinates. Then, given two points |ψ⟩ and |φ⟩ in the space, the distance between |ψ⟩ and |φ⟩ is

γ(ψ, φ) = arccos √( ⟨ψ|φ⟩⟨φ|ψ⟩ / (⟨ψ|ψ⟩⟨φ|φ⟩) ).   (13)
The Fubini-Study metric is the natural metric for the geometrization of quantum mechanics, and much of the peculiar behavior of quantum mechanics, including quantum entanglement, can be attributed to the peculiarities of the Fubini-Study metric.
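Assuming the standard arccos form of the Fubini-Study distance, the metric can be evaluated directly on state vectors; the function name `fubini_study_distance` is ours:

```python
import numpy as np

def fubini_study_distance(psi, phi):
    # arccos sqrt( <psi|phi><phi|psi> / (<psi|psi><phi|phi>) ),
    # valid for not-necessarily-normalized state vectors.
    overlap = abs(np.vdot(psi, phi)) ** 2
    norm = np.real(np.vdot(psi, psi) * np.vdot(phi, phi))
    return np.arccos(np.sqrt(overlap / norm))

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
# orthogonal states are pi/2 apart; |0> and |+> are pi/4 apart
```

Because the expression is invariant under rescaling of either vector, it is well defined on the projective space of homogeneous coordinates described above.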
References
 [1] (1998) Natural Gradient Works Efficiently in Learning. Neural Computation 10 (2), pp. 251–276.
 [2] (2021) Real-Time Error Correction for Quantum Computing. Physics 14, pp. 184.
 [3] (2020) Language Models Are Few-Shot Learners. Advances in Neural Information Processing Systems 33, pp. 1877–1901.
 [4] (2021) Variational Quantum Algorithms. Nature Reviews Physics 3 (9), pp. 625–644.

 [5] (2021) An End-to-End Trainable Hybrid Classical-Quantum Classifier. Machine Learning: Science and Technology 2 (4), pp. 045021.
 [6] (2020) Variational Quantum Circuits for Deep Reinforcement Learning. IEEE Access 8, pp. 141007–141024.
 [7] (2021) Federated Quantum Machine Learning. Entropy 23 (4), pp. 460.
 [8] (2019) Quantum Convolutional Neural Networks. Nature Physics 15 (12), pp. 1273–1278.
 [9] (2012) The MNIST Database of Handwritten Digit Images for Machine Learning Research. IEEE Signal Processing Magazine 29 (6), pp. 141–142.
 [10] (2018) BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805.
 [11] (2021) Learnability of Quantum Neural Networks. PRX Quantum 2 (4), pp. 040337.
 [12] (2021) Fault-Tolerant Control of an Error-Corrected Qubit. Nature 598 (7880), pp. 281–286.
 [13] (2021) Testing a Quantum Error-Correcting Code on Various Platforms. Science Bulletin 66 (1), pp. 29–35.
 [14] (2015) Advances in Natural Language Processing. Science 349 (6245), pp. 261–266.
 [15] (2022) Quantum Advantage in Learning from Experiments. Science 376 (6598), pp. 1182–1186.
 [16] (2021) Power of Data in Quantum Machine Learning. Nature Communications 12 (1), pp. 1–9.
 [17] (2014) A Historical Perspective of Speech Recognition. Communications of the ACM 57 (1), pp. 94–103.
 [18] (2015) Adam: A Method for Stochastic Optimization. In Proc. International Conference on Learning Representations.
 [19] (2016) Federated Learning: Strategies for Improving Communication Efficiency. arXiv preprint arXiv:1610.05492.
 [20] (2019) Adagrad—An Optimizer for Stochastic Gradient Descent. Int. J. Inf. Comput. Sci. 6 (5), pp. 566–568.
 [21] (2019) Variational Ansatz-Based Quantum Simulation of Imaginary Time Evolution. NPJ Quantum Information 5 (1), pp. 1–6.
 [22] (2017) Communication-Efficient Learning of Deep Networks from Decentralized Data. In Artificial Intelligence and Statistics, pp. 1273–1282.
 [23] (2018) Quantum Computing in the NISQ Era and Beyond. Quantum 2, pp. 79.

 [24] (2021) Classical-to-Quantum Transfer Learning for Spoken Command Recognition Based on Quantum Neural Networks. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing.
 [25] (2022) Theoretical Error Performance Analysis for Variational Quantum Circuit Based Functional Regression. arXiv preprint arXiv:2206.04804.
 [26] (2022) QTN-VQC: An End-to-End Learning Framework for Quantum Neural Networks. In NeurIPS 2021 Workshop on Quantum Tensor Networks in Machine Learning.
 [27] (2016) An Overview of Gradient Descent Optimization Algorithms. arXiv preprint arXiv:1609.04747.
 [28] (2015) Privacy-Preserving Deep Learning. In Proc. ACM SIGSAC Conference on Computer and Communications Security, pp. 1310–1321.
 [29] (2020) Quantum Natural Gradient. Quantum 4, pp. 269.
 [30] (2016) Supervised Learning with Tensor Networks. Advances in Neural Information Processing Systems 29.
 [31] (2018) Deep Learning for Computer Vision: A Brief Review. Computational Intelligence and Neuroscience 2018.
 [32] (1990) Backpropagation Through Time: What It Does and How to Do It. Proceedings of the IEEE 78 (10), pp. 1550–1560.

 [33] (2021) Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6523–6527.
 [34] (2022) When BERT Meets Quantum Temporal Convolution Learning for Text Classification in Heterogeneous Computing. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing.