A Poisson Model for Entanglement Optimization in the Quantum Internet

03/06/2018
by Laszlo Gyongyosi, et al.
University of Southampton

A Poisson model for entanglement optimization in quantum repeater networks is defined in this paper. The nature-inspired multiobjective optimization framework fuses the fundamental concepts of quantum Shannon theory with the theory of evolutionary algorithms. The optimization model aims to maximize the entanglement fidelity and the relative entropy of entanglement for all entangled connections of the quantum network. The cost functions are subject to minimization and are defined to cover the physical attributes of entanglement transmission, purification, and the storage of entanglement in quantum memories. The method can be implemented with low complexity, which allows a straightforward application in future quantum Internet and quantum networking scenarios.



I Introduction

Quantum entanglement serves as a fundamental concept of long-distance quantum networks, the quantum Internet, and future quantum communications [1, 2, 3, 4, 7]. Since the no-cloning theorem makes it impossible to use the “copy-and-resend” mechanisms of traditional repeaters [7], in a quantum networking scenario the quantum repeaters have to transmit correlations in a different way [1, 2, 3, 4, 5]. The main task of quantum repeaters is to distribute quantum entanglement between distant points, which then serves as a fundamental base resource for quantum teleportation and other quantum protocols [1]. Since in an experimental scenario [15, 16, 17, 18, 19, 20, 21] the quantum links between nodes are noisy and entanglement fidelity decreases as the hop distance increases, entanglement purification is applied to improve the entanglement fidelity between nodes [1, 3, 4, 5, 6]. Quantum nodes also perform internal quantum error correction, which is a requirement for reliability and for storage in quantum memories [1, 5, 6, 8]. Both the entanglement purification and quantum error correction steps in the local nodes are high-cost tasks whose costs require significant minimization [1, 3, 4, 5, 6, 15, 16, 19, 20, 21].

The shared entangled connections between nodes form entangled links. Significant attributes of these entangled links are the entanglement fidelity and the correlation, in terms of the relative entropy of entanglement (for a definition, see Section A-A). Entanglement fidelity is a crucial parameter. It serves as the primary objective function in our model and is subject to maximization. Maximizing the relative entropy of entanglement is the secondary objective function. Minimizing the cost of the classical communications required by the entanglement optimization method is also considered as an auxiliary objective function.

Besides these attributes, the entangled links are characterized by the entanglement throughput, which identifies the number of transmittable entangled systems per second at a particular fidelity. In our model, the nodes are associated with an incoming entanglement throughput [1], which serves as a resource for the nodes to maximize the entanglement fidelity and the relative entropy of entanglement. The nodes receive and process the incoming entangled states. Each node performs purification and internal quantum error correction, and it stores the entangled systems in local quantum memories. The number of input entangled systems in a node is therefore connected to the achievable maximal entanglement fidelity and correlation of the entangled states associated with that node. The objective of the proposed model is to reveal this connection and to define a framework for entanglement optimization in the quantum nodes of an arbitrary quantum network. The required input information for the optimization, without loss of generality, comprises the number of nodes, the number of fidelity types of the received entangled states, and the node characteristics. In a realistic setting, these cover the incoming entanglement throughput in a node and the costs of internal entanglement purification steps, internal quantum error corrections, and quantum memory usage.

In this work, an optimization framework for quantum networks is defined. The method aims to maximize the achievable entanglement fidelity and correlation of entangled systems, in parallel with the minimization of the cost of entanglement purification and quantum error correction steps in the quantum nodes of the network. The problem model is therefore defined as a multiobjective optimization. This paper aims to provide a model that utilizes the realistic parameters of the internal mechanisms of the nodes and the physical attributes of entanglement transmission. The proposed framework integrates the results of quantum Shannon theory, the theory of evolutionary multiobjective optimization algorithms [9, 10], and the mathematical modeling of seismic wave propagation [9, 10, 11, 12, 13, 14].

Inspired by the statistical distribution of seismic events and the modeling of wave propagation in nature, the model utilizes a Poisson distribution framework to find optimal solutions in the objective space. In the theory of earthquake analysis and spatial connection theory [9, 10, 11, 12, 13, 14], Poisson distributions are crucial in finding new epicenters. Motivated by these findings, a Poisson model is proposed to find new solutions in the objective space defined by the multiobjective optimization problem. The solutions in the objective space are represented by epicenters with several locations around them that also represent solutions in the feasible space [9, 10]. The epicenters have magnitude and seismic power operators that determine the distributions of the locations and the fitness [9, 10] of the locations around the epicenters. Epicenters with low magnitude generate high seismic power in the locations, whereas epicenters with high magnitude generate low seismic power in the locations. Epicenters are generated randomly in the feasible space, and each epicenter is weighted, from which the magnitude and power are derived. By a general assumption, epicenters with lower magnitude produce more locations because the locations are closer to the epicenter. The locations are placed within a certain magnitude around the epicenters in the feasible space. The optimization framework evolves a set of solutions toward the Pareto-optimal front by combining the concept of Pareto dominance with seismic wave propagation. The new epicenters are determined by a Poisson distribution, in analogy to prediction theory in earthquake models. The mathematical model of epicenters allows us to find new solutions iteratively and to reach a global optimum. The framework has low complexity, which allows an efficient practical implementation to solve the defined multiobjective optimization problem.

The multiobjective optimization problem model considers the fidelity and the correlation of entanglement of the entangled states available in the quantum nodes. The resources for the nodes are the incoming entangled states from the quantum links and the entangled quantum systems already stored in the local quantum memories. In the optimization procedure, both memory consumption and environmental effects, such as entanglement purification and quantum error correction steps, are considered to develop the cost functions. In particular, the amount of resource, in terms of the number of available entangled systems, is a coefficient that can be improved by increasing the incoming number of entangled systems, i.e., the incoming entanglement throughput of a node. In the proposed model, the incoming entanglement fidelity is further divided into several classes, which allows us to differentiate the resources in the nodes with respect to their fidelity types. Therefore, the fidelity type serves as a quality index for the optimization procedure. The optimization aims to find the optimal incoming entanglement throughput for all nodes that leads to a maximization of the entanglement fidelity and the correlation of entangled states, with respect to the relative entropy of entanglement, for all entangled connections in the quantum network.

The novel contributions of this paper are as follows:

  • A nature-inspired, multiobjective optimization framework is conceived for quantum repeater networks.

  • The model considers the physical attributes of entanglement transmission and quantum memories to provide a realistic setting (realistic objective functions and cost functions).

  • The method fuses the results of quantum Shannon theory and the theory of evolutionary multiobjective optimization algorithms.

  • The model maximizes the entanglement fidelity and relative entropy of entanglement for all entangled connections of the network. It minimizes the cost functions to reduce the costs of entanglement purification, error correction, and quantum memory usage.

  • The optimization framework allows a low-complexity implementation.

This paper is organized as follows. Section II presents the problem statement. Section III details the optimization method. Section IV provides the problem resolution. Section V proposes numerical evidence. Finally, Section VI concludes the paper. Supplemental material is included in the Appendix.

II Problem Statement

The problem statement is as follows. For a given quantum network with nodes, for all nodes , , the entanglement fidelity and relative entropy of entanglement for all entangled connections are maximized, and the cost of optimal purification and quantum error correction and the cost of memory usage for all nodes are minimized.

The network model is as follows: Let be the incoming number of received entangled states (incoming entanglement throughput) in a given quantum node , measured in the number of -dimensional entangled states per sec at a particular entanglement fidelity [1, 3, 4].

Let be the number of nodes in the network, and let be the number of fidelity types , of the entangled states in the quantum network.

Let be the number of incoming entangled states in a node , , from fidelity type . In our model, represents the utilizable resources in a particular node . Thus, the task is to determine this value for all nodes in the quantum network to maximize the fidelity and the relative entropy of the shared entanglement for all entangled connections.

Let be an matrix

(1)

The matrix describes the number of entangled states of each fidelity type for all nodes in the network, for all and .
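As an illustration, the matrix in (1) can be sketched as a simple node-by-fidelity-type table of entangled-state counts. The variable names below (`nodes`, `fidelity_types`, `X`) are assumptions for the sketch, since the paper's own symbols are not reproduced in this text.

```python
# Illustrative sketch only: entry X[i][j] counts the incoming entangled
# states of fidelity type j at node i; all names here are assumptions.
nodes, fidelity_types = 3, 2

# Build the (nodes x fidelity_types) matrix of (1), initially empty.
X = [[0 for _ in range(fidelity_types)] for _ in range(nodes)]

X[0][1] = 5   # node 0 received five entangled states of fidelity type 1
X[2][0] = 3   # node 2 received three entangled states of fidelity type 0
```

The optimization task of the section then amounts to choosing these entries for all nodes and fidelity types.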

II-A Objective Functions

For a given node , let be the primary objective function that identifies the cumulative entanglement fidelity (a sum of entanglement fidelities in ) after an entanglement purification and an optimal quantum error correction in . In our framework, for a node is defined as

(2)

where is the quadratic regression coefficient, is the simple regression coefficient, is a constant, and is defined as

(3)

where is an initialization value for in a particular node .
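Since (2) is described as a quadratic regression, the primary objective can be sketched as a plain quadratic in the node's utilizable resource. The coefficient names `a` (quadratic regression coefficient), `b` (simple regression coefficient), `c` (constant), and the resource variable `x` are assumptions, as the paper's own symbols do not survive in this text.

```python
# Hedged sketch of the quadratic form of the primary objective (2);
# the names a, b, c, x are illustrative assumptions.

def cumulative_fidelity(x, a, b, c):
    """Cumulative entanglement fidelity of a node, modeled as a
    quadratic regression in the node's utilizable resource x."""
    return a * x ** 2 + b * x + c

# With a < 0 the objective is concave, so an interior maximum exists
# at x = -b / (2 * a).
a, b, c = -0.01, 0.8, 0.5
x_opt = -b / (2 * a)
```

The concave case illustrates why an optimal incoming throughput exists: beyond it, additional resources no longer improve the cumulative fidelity.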

Then let be the secondary objective function that refers to the expected amount of cumulative relative entropy of entanglement (a sum of relative entropy of entanglement) in node , defined as

(4)

where , , and are regression coefficients.

Therefore, the aim is to find the values of for all and in (1), such that and are maximized for all .

Assuming that the fidelity of entanglement is dynamically changing and evolves over time, the quantum memory coefficient is introduced for the storage of entangled states from the fidelity type in a node as follows:

(5)

where and are coefficients that describe the storage characteristic of entangled states with the fidelity type.

II-B Cost Functions

The cumulative entanglement fidelity (2) and the cumulative relative entropy of entanglement (4) in a particular node are associated with a cost of entanglement purification and a cost of optimal quantum error correction in , where is the cost function.

Then let be the total cost function for all of the fidelity types and for all of the nodes, as follows:

(6)

where is a total cost of purification and error correction associated with the fidelity type of entangled states.

Let be a critical fidelity on the received quantum states. The entangled states are then decomposable into two sets and with fidelity bounds and as

(7)

and

(8)

For the quantum systems of , the highest fidelity is below the critical amount , and for set , the lowest fidelity is at least . Then let and identify the set of nodes for which condition (7) or (8) holds, respectively.

Let be the cost of quantum memory usage in node , defined as

(9)

where is a constant, is a quality coefficient that identifies set (7) or (8) for a given node , and is the capacity coefficient of the quantum memory.

The main components of the network model are depicted in Fig. 1.

Fig. 1: Illustration of the network model components. The quantum nodes and are associated with current input values and (blue and green arrows), where and identify the fidelity types of received entangled states. The nodes have several entangled connections (depicted by gray lines) in the network. The nodes are associated with subject functions , , and , . The maximum of the received entanglement fidelity in the nodes allows the classification of the nodes to sets and : node belongs to set , whereas node belongs to set (depicted by dashed frames).

II-C Multiobjective Optimization

The optimization problem is as follows: the entanglement fidelity and the relative entropy of entanglement for all fidelity types of the stored entanglement are maximized for all nodes, while the cost of entanglement purification and quantum error correction and the memory usage cost (required storage time) are minimized. These requirements define a multiobjective optimization problem [9, 10].

Utilizing functions (2) and (4), the function subject to maximization, which yields the maximal entanglement fidelity and the maximal relative entropy of entanglement in all nodes of the network, is defined via the main objective function :

(10)

Function should be maximized while cost functions (6) and (9) are minimized via functions and :

(11)

and

(12)

with the problem constraints [9, 10] , , and for all and . Constraint is defined as

(13)

where is a cumulative lower bound on the required entanglement fidelity for all nodes, while is

(14)

and constraint is

(15)

where is an upper bound on the total cost function , while is

(16)

For constraint , let be a differentiation of storage characteristic of entangled states from the fidelity type:

(17)

where

(18)

Then, is defined as

(19)

where is an upper bound on the storage characteristic of entangled states from the fidelity type, while is evaluated via (17) as

(20)

III Poisson Model for Entanglement Optimization

This section defines the Poisson entanglement optimization method, which is then applied to the solution of the multiobjective optimization problem of Section II.

III-A Poisson Operators

The attributes of the Poisson operator are as follows:

III-A1 Dispersion

The dispersion coefficient of an epicenter (a solution in the feasible space ) determines the number of affected locations , , around an epicenter . The random locations around an epicenter also represent solutions in that help to increase the diversity of the population (the set of possible solutions) to find a global optimum. The diversity increment is therefore a tool to avoid early convergence to a local optimum [9, 10].

The dispersion operator for an epicenter is defined as

(21)

where is a control parameter, is an individual (epicenter) from the individuals (epicenters) in population , is the size of population , function is the fitness value (see Section A-B1), is a maximum objective value among the individuals, and is a residual quantity.

Without loss of generality, assuming epicenters, the total number of locations is given as

(22)
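The dispersion operator (21) can be sketched as the fitness-proportional allocation rule the text describes: each epicenter receives a share of the total location budget proportional to the gap between the worst (maximum) objective value and its own, with a small residual to avoid division by zero. The names `fitness`, `m_total`, and `eps` are illustrative assumptions.

```python
# Hedged sketch of the dispersion operator (21); all names are assumptions.

def dispersion(fitness, m_total, eps=1e-12):
    """Number of locations per epicenter, proportional to the gap between
    the maximum objective value in the population and the epicenter's own
    value (minimization: better epicenters get more locations)."""
    f_max = max(fitness)                       # worst objective value
    gaps = [f_max - f + eps for f in fitness]  # residual eps avoids zero division
    total = sum(gaps)
    return [m_total * g / total for g in gaps]

counts = dispersion([1.0, 2.0, 4.0], m_total=30)
```

Here the best epicenter (objective value 1.0) receives the most locations, matching the text's assumption that lower-magnitude epicenters produce more locations.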

III-A2 Seismic Power and Magnitude

Assume that is a random location around . For , the Euclidean distance between an epicenter and the projection point of a location point , on the ellipsoid around is as follows:

(23)

where is the dimension of , and

(24)

where coefficients and define the shape of the ellipse around epicenter (see Fig. 2), while is an angle:

(25)

The seismic power operator for an epicenter in a location point , is defined as

(26)

where and are regression coefficients, is the standard deviation [14], is the seismic magnitude in a location , and is the projection of onto the ellipsoid around [14].

Thus, at a given with (23), from (see (26)), the magnitude between epicenter and location is evaluated as

(27)

III-A3 Cumulative Magnitude

Let be the location point where the seismic power is maximal for a given epicenter . Let be the maximal seismic power,

(28)

Assuming that epicenters exist in the system, let identify the epicenter with maximal seismic power among them as

(29)

with magnitude , where is the location point at which the seismic power yielded for is maximal.

Then the cumulative magnitude for an epicenter is defined as

(30)

where is the highest seismic power epicenter with magnitude , is the minimum objective value among the epicenters, and is a control parameter defined as

(31)

where provides the maximal seismic power for an epicenter , functions and are the fitness values (see Section A-B1) for the current epicenter and for the highest seismic power epicenter , and is a residual quantity.

III-B Distribution of Epicenters

Assume that is a current epicenter (solution) and that and are two random reference points around . Using the cumulative seismic magnitude (30) of an epicenter , the generation of a new epicenter is as follows:

Let be a Poisson range identifier function [12, 13] for using and as random reference points:

(32)

where is a current epicenter, and are random reference points, is the Euclidean distance function, and are weighting coefficients between epicenters and and between and , and is the angle between lines and :

(33)

Without loss of generality, using (32), a Poissonian distance function for finding a new epicenter is defined via a Poisson distribution [12, 13] as follows:

(34)

where

(35)

with mean

(36)

Therefore, the resulting new epicenter is a Poisson random epicenter with a Poisson range identifier .

For a large set of reference points, only those reference points that are within the radius around the current solution are selected for the determination of the new solution . This radius is defined as

(37)

where is the average magnitude,

(38)

and are constants, and is a normalization term. Motivated by the corresponding seismologic relations of the Dobrovolsky-Megathrust radius formula [13], the constants in (37) are selected as and .

In the relevance range of (37), the weights of reference points are determined by the seismic power function (26).
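The two steps of this subsection can be sketched as follows: a range identifier combines the weighted Euclidean distances from the current epicenter to two random reference points (cf. (32)), and that value is then used as the mean of a Poisson draw that yields the new epicenter's distance (cf. (34)-(36)). All variable names (`c`, `A`, `B`, `w1`, `w2`, `lam`) are illustrative assumptions.

```python
# Hedged sketch of the Poissonian distance step; names are assumptions.
import math
import random

def range_identifier(c, A, B, w1=0.5, w2=0.5):
    """Weighted combination of the Euclidean distances from the current
    epicenter c to the reference points A and B (cf. (32))."""
    return w1 * math.dist(c, A) + w2 * math.dist(c, B)

def poisson_sample(lam, rng=random.Random(1)):
    """Knuth-style Poisson sampler with mean lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

c, A, B = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
lam = range_identifier(c, A, B)   # mean of the Poissonian distance function
step = poisson_sample(lam)        # Poisson random distance for the new epicenter
```

The angle term of (32) and the relevance radius (37) are omitted here; the sketch only shows how a Poisson random variable with a distance-derived mean drives the placement of the new epicenter.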

III-C Population Diversity

III-C1 Hypocentral

The hypocentral of an epicenter aims to increase the diversity of the population via randomization.

Let be a randomly selected dimension and be a current epicenter , . The hypocentral provides a random displacement [12, 13] of using (see (30)):

(39)

where is a uniform random number from the range of to yield the displacement , is the magnitude, and is a location point where is maximal for .

The locations around the cumulative magnitude of are generated by (39) through all the randomly selected dimensions, where is as follows [9, 10]:

(40)

The process is repeated for all .
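The hypocentral displacement of (39) can be sketched as a uniform random shift, bounded by the cumulative magnitude, applied to a randomly selected subset of the epicenter's coordinates. The names `M` (cumulative magnitude bound), `n_dims`, and `rng` are illustrative assumptions.

```python
# Hedged sketch of the hypocentral displacement (39); names are assumptions.
import random

def hypocentral_displacement(epicenter, M, n_dims, rng=random.Random(7)):
    """Displace n_dims randomly chosen coordinates of the epicenter
    by a uniform random amount drawn from (-M, M)."""
    x = list(epicenter)
    dims = rng.sample(range(len(x)), n_dims)   # randomly selected dimensions
    for d in dims:
        x[d] += rng.uniform(-M, M)             # uniform random displacement
    return x

x_new = hypocentral_displacement([0.0, 0.0, 0.0], M=0.5, n_dims=2)
```

Untouched coordinates keep their values, so the displacement explores the neighborhood of the epicenter without discarding it.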

III-C2 Poisson Randomization

To generate random locations around , a Poisson distribution is also used to increase the diversity of the population. A random location in the dimension around is generated as follows:

(41)

where

(42)

is a Poisson random number with distribution coefficients and . Since some locations randomly generated via (41) may fall outside the feasible space , a normalization operator is defined to keep the new locations around within , as follows [9, 10]:

(43)

where and are lower and upper bounds on the boundaries of the locations in a dimension, and is a modular arithmetic function. The procedure is repeated for the randomly selected dimensions of , for .
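The modular-arithmetic mapping of (43) can be sketched per coordinate: an in-bounds value is kept, and an out-of-bounds value is wrapped back into the feasible interval. The names `x`, `lb`, and `ub` are illustrative assumptions.

```python
# Hedged sketch of the normalization operator (43); names are assumptions.

def normalize(x, lb, ub):
    """Map a coordinate into [lb, ub] using a modular-arithmetic wrap:
    in-bounds values are unchanged, out-of-bounds values are wrapped."""
    if lb <= x <= ub:
        return x
    return lb + abs(x) % (ub - lb)

# A coordinate of 12.3 with bounds [0, 10] wraps to 2.3; -3.5 wraps to 3.5.
wrapped_high = normalize(12.3, 0.0, 10.0)
wrapped_low = normalize(-3.5, 0.0, 10.0)
```

This keeps every randomized location inside the feasible space without clipping all violations to the boundary.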

III-D Iterative Convergence

The method of convergence of solutions in the Poisson optimization is summarized in Method 1.

Step 1. Generate epicenters , with random locations around a given epicenter . Step 2. Select an epicenter , and determine the seismic operators , , . Step 3. Determine the Poisson distance function using references and to yield a new solution . Step 4. Repeat steps 1–3 until the epicenter closest to the optimal epicenter is found or another stopping criterion is met.
Method 1 Convergence of Solutions

An epicenter and the generation of a new solution with an in the objective space are depicted in Fig. 2. The ellipsoid around and the projection point of the reference location serve to determine the power function in the reference location .

A new epicenter is determined via the Poisson function . Locations with low power function (26) values have high magnitudes (27) from the epicenter, whereas locations with high power function values have low magnitudes from the epicenter.

Fig. 2: Iteration step of the Poisson optimization model in the objective space . An epicenter, (depicted by the red dot), with a projected point of random reference location . Reference locations and (blue dots) identify locations and , respectively. The power in is (see (26)), while the magnitude is (see (27)). Notation refers to the dimension of , and coefficients and define the shape of the ellipse (yellow) around epicenter . The hypocentral of is determined via the range of the cumulative magnitude (depicted by the green circle). The new epicenter (depicted by the green dot) is determined by the Poisson distance function using and , with angle between lines and .

III-E Framework

The algorithmic framework that utilizes the Poisson entanglement optimization method for the problem statement of Section II is defined in Algorithm 1.

Step 0. In an initial phase, a random population of feasible solutions is generated [9, 10]. Let be an upper bound on the number of generations, . Step 1. For each epicenter in , define random locations around . For a diversity increment, determine the hypocentral displacement function (39) for , for . Step 2. Determine the seismic power operator via (26) for an epicenter in a location point , . Determine , the location point where the seismic power is maximal for a given epicenter , via (28). Step 3. Determine the epicenter with maximal seismic power via (29). Compute the seismic magnitude via (27), and determine the sum of all seismic magnitudes via (31). Step 4. Compute the dispersion via (21) and the cumulative seismic magnitude via (30). Select non-dominated solutions from the population set into the set of non-dominated solutions. Identify as , where is a location around . Update with the non-dominated solutions. Step 5. Create the set of epicenters by selecting feasible solutions from using the selection probability as . Apply Sub-procedure 1. Step 6. If , then stop the iteration; otherwise, repeat steps 1–5.
Algorithm 1 Poisson Entanglement Optimization for Quantum Networks

Sub-procedure 1 of step 5 is discussed in the Appendix.

III-E1 Optimization of Classical Communications

To achieve the minimization of the classical communications required by the entanglement optimization, the -metric (or hypervolume indicator) is integrated, which is a quality measure for a solution set or for the contribution of a single solution in a solution set [9, 10]. By definition, this metric identifies the size of the dominated space (the size of the space covered).

By theory, the -metric for a solution set is as follows:

(44)

where is a Lebesgue measure, notation means dominates (or is dominated by ), and is a reference point dominated by all valid solutions in the solution set [9, 10].

For a given solution , the -metric identifies the size of space dominated by but not dominated by any other solution, without loss of generality as:

(45)

In the optimization of classical communications, the existence of two objective functions is assumed. The first objective function, , is associated with the minimization of the cost of the first type of classical communications related to the reception and storage of entangled systems in the quantum nodes (it covers the classical communications related to the required entanglement throughput by the nodes, fidelity of received entanglement, number of stored entangled states, and fidelity parameters). Thus,

(46)

where is the cost associated with the first type of classical communications related to a .

The second objective function, , is associated with the cost of the second type of classical communications that is related to entanglement purification:

(47)

where is the cost associated with the second type of classical communications with respect to .

Assuming objective functions and , the of a particular solution is as follows:

(48)

Given that the -metric is calculated for the solutions, a set of nearest neighbors that restrict the space can be determined. Since the volume of this space can be quantified by the hypervolume, the solutions that satisfy objectives and can be found by utilizing (48).
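For the two-objective case described here, the hypervolume of (44) and the exclusive contribution of (45) can be sketched with a simple sweep over a non-dominated front. The front, the reference point, and all names below are illustrative assumptions; a full implementation would first filter the population to its non-dominated set.

```python
# Hedged sketch of the S-metric (hypervolume) and the exclusive
# contribution of a single solution, for two minimized objectives.

def hypervolume_2d(front, ref):
    """Hypervolume dominated by a 2-D non-dominated front with respect
    to the reference point ref (both objectives minimized)."""
    pts = sorted(front)                  # ascending in the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # rectangle gained by this point
        prev_f2 = f2
    return hv

def contribution(front, ref, s):
    """Exclusive hypervolume of solution s: the space dominated by s
    but by no other solution (cf. (45))."""
    rest = [p for p in front if p != s]
    return hypervolume_2d(front, ref) - hypervolume_2d(rest, ref)

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
ref = (5.0, 5.0)
```

Solutions with the smallest exclusive contribution are the natural candidates to drop when trimming the classical-communication cost of maintaining the solution set.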

III-F Computational Complexity

The computational complexity of the Poissonian optimization method is derived as follows: Given that epicenters are generated in the search space and that the number of locations for an epicenter is determined by the dispersion operator , the resulting computational complexity at a total number of locations (see (22)) is

(49)

since after a sorting process the locations for a given epicenter can be calculated with complexity , where is the number of objectives.

Considering that in our setting , the total complexity is

(50)

IV Problem Resolution

The resolution of the problem shown in Section II using the Poissonian entanglement optimization framework of Section III is as follows:

Let be the set of nodes for which condition (7) holds for the fidelity of the received entangled states in the nodes, and let be the set of nodes for which condition (8) holds for the fidelity of the received entanglement.

Then let and be the cardinality of and , respectively.

Specifically, function (10) for the -type nodes is rewritten as

(51)

where is the entanglement fidelity function for an -type node , , and is the expected relative entropy of entanglement in an -type node.

Similarly, for the -type nodes, function (10) is as follows:

(52)

From (51) and (52), a cumulative is defined as