I Introduction
Quantum entanglement serves as a fundamental concept of long-distance quantum networks, the quantum Internet, and future quantum communications [1, 2, 3, 4, 7]. Since the no-cloning theorem makes it impossible to use the "copy-and-resend" mechanisms of traditional repeaters [7], in a quantum networking scenario the quantum repeaters have to transmit correlations in a different way [1, 2, 3, 4, 5]. The main task of quantum repeaters is to distribute quantum entanglement between distant points; this entanglement then serves as a fundamental base resource for quantum teleportation and other quantum protocols [1]. Since in an experimental scenario [15, 16, 17, 18, 19, 20, 21] the quantum links between nodes are noisy and entanglement fidelity decreases as hop distance increases, entanglement purification is applied to improve the entanglement fidelity between nodes [1, 3, 4, 5, 6]. Quantum nodes also perform internal quantum error correction, which is a requirement for reliability and for storage in quantum memories [1, 5, 6, 8]. Both the entanglement purification and the quantum error correction steps in the local nodes are high-cost tasks that require significant minimization [1, 3, 4, 5, 6, 15, 16, 19, 20, 21].
The shared entangled connections between nodes form entangled links. Significant attributes of these entangled links are the entanglement fidelity and the correlation, in terms of the relative entropy of entanglement (for a definition, see Section A-A). Entanglement fidelity is a crucial parameter: it serves as the primary objective function in our model and is the subject of maximization. Maximizing the relative entropy of entanglement is the secondary objective function. Minimizing the cost of the classical communications required by the entanglement optimization method is also considered, as an auxiliary objective function.
Besides these attributes, the entangled links are characterized by the entanglement throughput, which identifies the number of transmittable entangled systems per second at a particular fidelity. In our model, the nodes are associated with an incoming entanglement throughput [1], which serves as a resource for the nodes to maximize the entanglement fidelity and the relative entropy of entanglement. The nodes receive and process the incoming entangled states. Each node performs purification and internal quantum error correction, and it stores the entangled systems in local quantum memories. The amount of input entangled systems in a node is therefore connected to the achievable maximal entanglement fidelity and correlation of the entangled states associated with that node. The objective of the proposed model is to reveal this connection and to define a framework for entanglement optimization in the quantum nodes of an arbitrary quantum network. The required input information for the optimization, without loss of generality, is the number of nodes, the number of fidelity types of the received entangled states, and the node characteristics. In a realistic setting, these cover the incoming entanglement throughput of a node and the costs of internal entanglement purification steps, internal quantum error corrections, and quantum memory usage.
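The inputs listed above can be collected in a simple per-node record. The following sketch is illustrative only: all field names are assumptions, since the paper fixes the kinds of inputs (incoming throughput; purification, error-correction, and memory costs; per-fidelity-type state counts) but not an interface.

```python
from dataclasses import dataclass, field

@dataclass
class NodeResources:
    """Hypothetical container for the per-node inputs of the optimization."""
    throughput: float         # incoming entangled states per second
    purification_cost: float  # cost of an internal purification step
    ec_cost: float            # cost of an internal error-correction step
    memory_cost: float        # cost of storing one entangled system
    states_per_type: dict = field(default_factory=dict)  # fidelity type -> count

# Illustrative node: 1000 states/s, split between two fidelity types.
node = NodeResources(throughput=1000.0, purification_cost=0.2,
                     ec_cost=0.3, memory_cost=0.05,
                     states_per_type={1: 600, 2: 400})
total_states = sum(node.states_per_type.values())  # utilizable resource
```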
In this work, an optimization framework for quantum networks is defined. The method aims to maximize the achievable entanglement fidelity and correlation of entangled systems, in parallel with the minimization of the cost of entanglement purification and quantum error correction steps in the quantum nodes of the network. The problem model is therefore defined as a multi-objective optimization. This paper aims to provide a model that utilizes the realistic parameters of the internal mechanisms of the nodes and the physical attributes of entanglement transmission. The proposed framework integrates the results of quantum Shannon theory, the theory of evolutionary multi-objective optimization algorithms [9, 10], and the mathematical modeling of seismic wave propagation [9, 10, 11, 12, 13, 14].
Inspired by the statistical distribution of seismic events and the modeling of wave propagation in nature, the model utilizes a Poisson distribution framework to find optimal solutions in the objective space. In the theory of earthquake analysis and spatial connection theory [9, 10, 11, 12, 13, 14], Poisson distributions are crucial in finding new epicenters. Motivated by these findings, a Poisson model is proposed to find new solutions in the objective space defined by the multi-objective optimization problem. The solutions in the objective space are represented by epicenters, each with several locations around it that also represent solutions in the feasible space [9, 10]. The epicenters have a magnitude and seismic power operators that determine the distributions of the locations and the fitness [9, 10] of the locations around the epicenters. Epicenters with low magnitude generate high seismic power in their locations, whereas epicenters with high magnitude generate low seismic power in their locations. Epicenters are generated randomly in the feasible space, and each epicenter is weighted; from these weights the magnitude and power are derived. By a general assumption, epicenters with lower magnitude produce more locations, because the locations are closer to the epicenter. The locations are placed within a certain magnitude around the epicenters in the feasible space. The optimization framework evolves a set of solutions toward the Pareto optimal front by combining the concept of Pareto dominance with seismic wave propagation. The new epicenters are determined by a Poisson distribution, in analogy to prediction theory in earthquake models. The mathematical model of epicenters allows us to find new solutions iteratively and to find a global optimum. The framework has low complexity, which allows an efficient practical implementation to solve the defined multi-objective optimization problem.

The multi-objective optimization problem model considers the fidelity and the correlation of entanglement of the entangled states available in the quantum nodes. The resources of the nodes are the incoming entangled states from the quantum links and the entangled quantum systems already stored in the local quantum memories.
In the optimization procedure, both memory consumption and environmental effects, i.e., the required entanglement purification and quantum error correction steps, are considered in developing the cost functions. In particular, the amount of resource, in terms of the number of available entangled systems, is a coefficient that can be improved by increasing the number of incoming entangled systems, i.e., the incoming entanglement throughput of a node. In the proposed model, the incoming entanglement fidelity is further divided into classes, which allows us to differentiate the resources in the nodes with respect to their fidelity types. The fidelity type therefore serves as a quality index for the optimization procedure. The optimization aims to find the optimal incoming entanglement throughput for all nodes that leads to a maximization of the entanglement fidelity and of the correlation of entangled states with respect to the relative entropy of entanglement, for all entangled connections in the quantum network.
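The division of the incoming fidelity into classes can be sketched as a simple quantizer. The class boundaries below are assumptions, as the paper only states that fidelity types serve as a quality index:

```python
import bisect

def fidelity_type(f, boundaries=(0.6, 0.8, 0.95)):
    """Map a fidelity value in [0, 1] to a discrete fidelity class.

    The boundaries are illustrative; class 0 is the lowest-quality class,
    class len(boundaries) the highest.
    """
    return bisect.bisect_right(boundaries, f)

cls = fidelity_type(0.9)  # class 2 under the illustrative boundaries
```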
The novel contributions of this paper are as follows:

A nature-inspired, multi-objective optimization framework is conceived for quantum repeater networks.

The model considers the physical attributes of entanglement transmission and quantum memories to provide a realistic setting (realistic objective functions and cost functions).

The method fuses the results of quantum Shannon theory and the theory of evolutionary multi-objective optimization algorithms.

The model maximizes the entanglement fidelity and relative entropy of entanglement for all entangled connections of the network. It minimizes the cost functions to reduce the costs of entanglement purification, error correction, and quantum memory usage.

The optimization framework allows a low-complexity implementation.
II Problem Statement
The problem statement is as follows. For a given quantum network, the entanglement fidelity and the relative entropy of entanglement of all entangled connections are maximized for all nodes, while the cost of optimal purification and quantum error correction and the cost of memory usage are minimized for all nodes.
The network model is as follows. Let be the incoming number of received entangled states (incoming entanglement throughput) of a given quantum node, measured in the number of dimensional entangled states per second at a particular entanglement fidelity [1, 3, 4].
Let be the number of nodes in the network, and let be the number of fidelity types of the entangled states in the quantum network.
Let be the number of incoming entangled states of a node from a given fidelity type. In our model, this quantity represents the utilizable resources of a particular node. Thus, the task is to determine this value for all nodes of the quantum network to maximize the fidelity and the relative entropy of the shared entanglement for all entangled connections.
Let be an matrix
(1) 
The matrix describes the number of entangled states of each fidelity type for all nodes of the network.
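Such a matrix of per-node, per-type state counts can be represented directly; the node and type counts and the value range below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_types = 4, 3  # illustrative network sizes
# X[i, j] = number of entangled states of fidelity type j held by node i,
# a sketch of the matrix in (1).
X = rng.integers(low=0, high=50, size=(n_nodes, n_types))
per_node_resources = X.sum(axis=1)  # utilizable resources of each node
```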
II-A Objective Functions
For a given node, let the primary objective function identify the cumulative entanglement fidelity (a sum of entanglement fidelities) after an entanglement purification and an optimal quantum error correction. In our framework, this function is defined for a node as
(2) 
where is the quadratic regression coefficient, is the simple regression coefficient, is a constant, and is defined as
(3) 
where is an initialization value for in a particular node .
Then let the secondary objective function refer to the expected amount of cumulative relative entropy of entanglement (a sum of relative entropies of entanglement) in a node, defined as
(4) 
where , , and are regression coefficients, by definition.
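Both objective functions (2) and (4) are quadratic regressions of the node's resource variable, so they can be sketched directly; the coefficient values used below are illustrative:

```python
def cumulative_fidelity(x, a2, a1, a0):
    """Quadratic-regression form of the primary objective (2): a2 is the
    quadratic regression coefficient, a1 the simple regression
    coefficient, a0 a constant; x derives from the node's resources."""
    return a2 * x**2 + a1 * x + a0

def cumulative_rel_entropy(x, b2, b1, b0):
    """Secondary objective (4), also a quadratic regression by definition."""
    return b2 * x**2 + b1 * x + b0

f = cumulative_fidelity(2.0, a2=1.0, a1=0.0, a0=1.0)
r = cumulative_rel_entropy(1.0, b2=0.5, b1=0.5, b0=0.0)
```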
Therefore, the aim is to find the values of for all and in (1), such that and are maximized for all .
Assuming that the fidelity of entanglement changes dynamically and evolves over time, a quantum memory coefficient is introduced for the storage of entangled states of a given fidelity type in a node, as follows:
(5) 
where and are coefficients that describe the storage characteristic of entangled states of the given fidelity type.
II-B Cost Functions
The cumulative entanglement fidelity (2) and the cumulative relative entropy of entanglement (4) of a particular node are associated with a cost of entanglement purification and a cost of optimal quantum error correction, where is the cost function.
Then let be the total cost function for all of the fidelity types and for all of the nodes, as follows:
(6) 
where is a total cost of purification and error correction associated with the fidelity type of entangled states.
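A minimal sketch of the total cost function (6), assuming the per-type cost decomposes into a per-state cost multiplied by the number of stored states of that type (an illustrative decomposition; the paper leaves the per-type cost abstract):

```python
import numpy as np

def total_cost(X, c):
    """Sketch of the total cost function (6).

    X[i, j]: number of entangled states of fidelity type j in node i.
    c[j]:    purification + error-correction cost of one state of
             fidelity type j (illustrative decomposition).
    Returns the cost summed over all fidelity types and all nodes.
    """
    X, c = np.asarray(X, dtype=float), np.asarray(c, dtype=float)
    return float((X * c).sum())

c_total = total_cost([[1, 2], [3, 4]], [1.0, 0.5])
```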
Let be a critical fidelity on the received quantum states. The entangled states are then decomposable into two sets and with fidelity bounds and as
(7) 
and
(8) 
For the quantum systems of the first set, the highest fidelity is below the critical amount, while for the second set, the lowest fidelity is at least the critical amount. Then let and identify the sets of nodes for which condition (7) or (8) holds, respectively.
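The decomposition (7)-(8) can be sketched as a partition by the critical fidelity bound; the node names and fidelity lists below are illustrative:

```python
def split_by_critical_fidelity(node_fidelities, f_crit):
    """Partition nodes per (7)-(8): the low set holds nodes whose highest
    fidelity is below f_crit, the high set holds nodes whose lowest
    fidelity is at least f_crit. Nodes straddling the bound belong to
    neither set in this sketch."""
    low = {n for n, fs in node_fidelities.items() if max(fs) < f_crit}
    high = {n for n, fs in node_fidelities.items() if min(fs) >= f_crit}
    return low, high

low, high = split_by_critical_fidelity(
    {"n1": [0.5, 0.55], "n2": [0.9, 0.95], "n3": [0.5, 0.9]}, f_crit=0.8)
```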
Let be the cost of quantum memory usage in node , defined as
(9) 
where is a constant, is a quality coefficient that identifies set (7) or (8) for a given node , and is the capacity coefficient of the quantum memory.
The main components of the network model are depicted in Fig. 1.
II-C Multi-Objective Optimization
The optimization problem is as follows: the entanglement fidelity and the relative entropy of entanglement of the stored entanglement, for all fidelity types and all nodes, are maximized, while the cost of entanglement purification and quantum error correction and the cost of memory usage (required storage time) are minimized. These requirements define a multi-objective optimization problem [9, 10].
Utilizing functions (2) and (4), the function that is the subject of maximization, yielding maximal entanglement fidelity and maximal relative entropy of entanglement in all nodes of the network, is defined via the main objective function :
(10) 
Function should be maximized while cost functions (6) and (9) are minimized via functions and :
(11) 
and
(12) 
with the problem constraints [9, 10] , , and for all and . Constraint is defined as
(13) 
where is a cumulative lower bound on the required entanglement fidelity for all nodes, while is
(14) 
and constraint is
(15) 
where is an upper bound on the total cost function , while is
(16) 
For constraint , let be a differentiation of the storage characteristic of entangled states of a given fidelity type:
(17) 
where
(18) 
Then, is defined as
(19) 
where is an upper bound on the storage characteristic of entangled states of a given fidelity type, while is evaluated via (17) as
(20) 
III Poisson Model for Entanglement Optimization
This section defines the Poisson entanglement optimization method and applies it to the multi-objective optimization problem of Section II.
III-A Poisson Operators
The attributes of the Poisson operator are as follows:
III-A1 Dispersion
The dispersion coefficient of an epicenter (a solution in the feasible space) determines the number of affected locations around the epicenter. The random locations around an epicenter also represent solutions in the feasible space; they help to increase the diversity of the population (the set of candidate solutions) so that a global optimum can be found. The diversity increment is therefore a tool to avoid early convergence to a local optimum [9, 10].
The dispersion operator for an epicenter is defined as
(21) 
where is a control parameter, is an individual (epicenter) among the individuals (epicenters) of population , is the size of the population, function is the fitness value (see Section A-B1), is the maximum objective value among the individuals, and is a residual quantity.
Without loss of generality, assuming epicenters, the total number of locations is
(22) 
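The dispersion operator described above matches the spark-allocation rule of fireworks-style evolutionary algorithms: epicenters with better (lower) fitness receive more locations. A minimal sketch under that assumption, with the total location budget playing the role of the control parameter and `eps` the residual quantity:

```python
def dispersion(fitness, s_total, eps=1e-12):
    """Sketch of the dispersion operator (21): allocate a share of
    s_total locations to each epicenter, proportional to how far its
    fitness lies below the worst (maximum) fitness in the population."""
    y_max = max(fitness)
    weights = [y_max - f + eps for f in fitness]
    total = sum(weights)
    return [s_total * w / total for w in weights]

# Best epicenter (fitness 1.0) gets the most locations.
counts = dispersion([1.0, 3.0, 2.0], s_total=12)
```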
III-A2 Seismic Power and Magnitude
Assume that is a random location around . The Euclidean distance between an epicenter and the projection point of a location point onto the ellipsoid around the epicenter is as follows:
(23) 
where is the dimension of , and
(24) 
where coefficients and define the shape of the ellipse around epicenter (see Fig. 2), while is an angle:
(25) 
The seismic power operator of an epicenter at a location point is defined as
(26) 
where and are regression coefficients, is the standard deviation [14], is the seismic magnitude in a location , and is the projection of onto the ellipsoid around the epicenter [14].
III-A3 Cumulative Magnitude
Let be the location point where the seismic power is maximal for a given epicenter . Let be the maximal seismic power,
(28) 
Assuming that epicenters exist in the system, let the epicenter with the maximal seismic power among them be identified as
(29) 
with magnitude , where is the location point in which the seismic power yielded for the epicenter is maximal.
Then the cumulative magnitude for an epicenter is defined as
(30) 
where is the highest seismic power epicenter with magnitude , is the minimum objective value among the epicenters, and is a control parameter defined as
(31) 
where provides the maximal seismic power for an epicenter, functions and are the fitness values (see Section A-B1) of the current epicenter and of the highest seismic power epicenter, and is a residual quantity.
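The cumulative magnitude can be sketched analogously to the dispersion operator, with the roles reversed: epicenters close to the best (minimum-fitness) epicenter get a small magnitude, so their locations concentrate nearby, while poor epicenters get a large magnitude. The additive form and the parameter names below are assumptions:

```python
def cumulative_magnitude(fitness, m_best, m_scale, eps=1e-12):
    """Sketch of the cumulative magnitude (30)-(31): each epicenter's
    magnitude grows with its distance-in-fitness from the population's
    best (minimum) fitness. m_best is the magnitude of the highest
    seismic power epicenter, m_scale a control parameter; both values
    are illustrative."""
    y_min = min(fitness)
    weights = [f - y_min + eps for f in fitness]
    total = sum(weights)
    return [m_best + m_scale * w / total for w in weights]

# Best epicenter (fitness 1.0) keeps roughly the base magnitude.
mags = cumulative_magnitude([1.0, 3.0, 2.0], m_best=0.1, m_scale=2.0)
```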
III-B Distribution of Epicenters
Assume that is a current epicenter (solution) and that and are two random reference points around it. Using the cumulative seismic magnitude (30) of an epicenter, a new epicenter is generated as follows:
Let be a Poisson range identifier function [12, 13] for using and as random reference points:
(32) 
where is a current epicenter, and are random reference points, is the Euclidean distance function, and are weighting coefficients between epicenters and and between and , and is the angle between lines and :
(33) 
Without loss of generality, using (32), a Poissonian distance function for finding the new epicenter is defined via a Poisson distribution [12, 13] as follows:
(34) 
where
(35) 
with mean
(36) 
Therefore, the resulting new epicenter is a Poisson random epicenter with a Poisson range identifier .
For a large set of reference points, only those reference points that lie within a radius around the current solution are selected for the determination of the new solution . This radius is defined as
(37) 
where is the average magnitude,
(38) 
and are constants, and is a normalization term. Motivated by the corresponding seismologic relations of the Dobrovolsky-Megathrust radius formula [13], the constants in (37) are selected as and .
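Since the paper's constants and normalization term are elided here, the following sketch of the radius-based reference-point selection substitutes the classic Dobrovolsky preparation-zone relation r = 10^(0.43 M) as an illustrative stand-in for (37):

```python
import math

def select_reference_points(current, points, m_avg, c=0.43):
    """Keep only the reference points within radius r of the current
    solution. The radius uses the Dobrovolsky-style relation
    r = 10**(c * M) with the average magnitude M; the constant c and
    the omitted normalization term are assumptions."""
    r = 10.0 ** (c * m_avg)
    return [p for p in points if math.dist(current, p) <= r]

# With m_avg = 1.0, r is about 2.69: the near point survives.
kept = select_reference_points((0.0, 0.0), [(1.0, 0.0), (5.0, 0.0)], m_avg=1.0)
```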
III-C Population Diversity
III-C1 Hypocentral
The hypocentral of an epicenter aims to increase the diversity of the population via randomization.
Let be a randomly selected dimension and let be a current epicenter. The hypocentral provides a random displacement [12, 13] of the epicenter using (see (30)):
(39) 
where is a uniform random number that yields the displacement , is the magnitude, and is the location point in which is maximal for the given epicenter.
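The hypocentral displacement can be sketched as follows; the uniform range [-1, 1] is an illustrative choice, as the paper only states that a uniform random number yields the displacement:

```python
import random

def hypocentral_displacement(x, magnitude, dim=None, rng=random):
    """Sketch of the hypocentral operator (39): displace one randomly
    selected coordinate of epicenter x by a uniform random fraction of
    its cumulative magnitude."""
    x = list(x)
    d = rng.randrange(len(x)) if dim is None else dim
    x[d] += rng.uniform(-1.0, 1.0) * magnitude
    return x
```

Calling it repeatedly on members of the population injects randomized variants without moving every coordinate at once, which is what keeps the diversity increment cheap.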
III-C2 Poisson Randomization
To generate random locations around , a Poisson distribution is also used to increase the diversity of the population. A random location in the dimension around is generated as follows:
(41) 
where
(42) 
is a Poisson random number with distribution coefficients and . Since some locations randomly generated via (41) may fall outside the feasible space , a normalization operator is defined to keep the new locations around the epicenter within the feasible space, as follows [9, 10]:
(43) 
where and are lower and upper bounds on the coordinates of the locations in a given dimension, and is a modular arithmetic function. The procedure is repeated for the randomly selected dimensions of the epicenter.
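Steps (41)-(43) can be sketched together: a Poisson jitter per coordinate, followed by the modular normalization that maps out-of-bounds coordinates back into the feasible interval. Centering the Poisson draw at its mean is an illustrative choice so that the jitter has zero expectation:

```python
import numpy as np

def poisson_location(x, lam, lo, hi, rng=None):
    """Sketch of the Poisson randomization (41) with the modular
    normalization operator (43): jitter each coordinate of x by a
    centered Poisson random number, then wrap any out-of-bounds
    coordinate back into [lo, hi) with modular arithmetic."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    x = x + (rng.poisson(lam, size=x.shape) - lam)
    return lo + np.mod(x - lo, hi - lo)

loc = poisson_location([0.5, 9.5], lam=3.0, lo=0.0, hi=10.0)
```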
III-D Iterative Convergence
The convergence of solutions in the Poisson optimization is summarized in Method 1.
An epicenter and the generation of a new solution in the objective space are depicted in Fig. 2. The ellipsoid around the epicenter and the projection point of the reference location serve to determine the power function in the reference location.
III-E Framework
The algorithmic framework that utilizes the Poisson entanglement optimization method for the problem statement of Section II is defined in Algorithm 1.
Subprocedure 1 of step 5 is discussed in the Appendix.
III-E1 Optimization of Classical Communications
To achieve the minimization of the classical communications required by the entanglement optimization, the metric (or hypervolume indicator) is integrated; it is a quality measure for a solution set, or for the contribution of a single solution to a solution set [9, 10]. By definition, this metric identifies the size of the dominated space (the size of the space covered).
By theory, the metric for a solution set is as follows:
(44) 
where is a Lebesgue measure, the notation means that dominates (or that is dominated by ), and is a reference point dominated by all valid solutions in the solution set [9, 10].
For a given solution , the metric identifies the size of space dominated by but not dominated by any other solution, without loss of generality as:
(45) 
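For two objectives, the exclusive hypervolume contribution (45) of each solution on a nondominated front has a simple closed form. A minimal sketch for minimization, with a reference point dominated by (i.e., worse than) all solutions:

```python
def hv_contribution_2d(points, ref):
    """Exclusive hypervolume (S-metric) contribution of each point of a
    2-D front under minimization: the area dominated by that point but
    by no other. Assumes the points are mutually nondominated; `ref`
    must be dominated by all of them."""
    pts = sorted(set(points))  # ascending in f1, hence descending in f2
    contrib = {}
    for i, (x, y) in enumerate(pts):
        x_next = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        y_prev = pts[i - 1][1] if i > 0 else ref[1]
        contrib[(x, y)] = (x_next - x) * (y_prev - y)
    return contrib

# Staircase front; each point exclusively dominates a unit square.
contrib = hv_contribution_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)],
                             ref=(4.0, 4.0))
```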
In the optimization of classical communications, the existence of two objective functions is assumed. The first objective function, , is associated with the minimization of the cost of the first type of classical communications, related to the reception and storage of entangled systems in the quantum nodes (it covers the classical communications related to the entanglement throughput required by the nodes, the fidelity of the received entanglement, the number of stored entangled states, and the fidelity parameters). Thus,
(46) 
where is the cost associated with the first type of classical communications related to a .
The second objective function, , is associated with the cost of the second type of classical communications that is related to entanglement purification:
(47) 
where is the cost associated with the second type of classical communications with respect to .
Assuming objective functions and , the of a particular solution is as follows:
(48) 
Given that the metric is calculated for the solutions, a set of nearest neighbors that restricts the space can be determined. Since the volume of this space can be quantified by the hypervolume, the solutions that satisfy objectives and can be found by utilizing (48).
III-F Computational Complexity
The computational complexity of the Poissonian optimization method is derived as follows. Given that epicenters are generated in the search space and that the number of locations of an epicenter is determined by the dispersion operator, the resulting computational complexity at a total number of locations (see (22)) is
(49) 
since after a sorting process the locations for a given epicenter can be calculated with complexity , where is the number of objectives.
Considering that in our setting , the total complexity is
(50) 
IV Problem Resolution
The resolution of the problem shown in Section II using the Poissonian entanglement optimization framework of Section III is as follows:
Let be the set of nodes for which condition (7) holds for the fidelity of the received entangled states, and let be the set of nodes for which condition (8) holds for the fidelity of the received entanglement.
Then let and be the cardinality of and , respectively.
Specifically, function (10) for the type nodes is rewritten as
(51) 
where is the entanglement fidelity function of a type node, and is the expected relative entropy of entanglement in a type node.