## 1 Introduction

The problem of analytical calculation of the characteristics of a given information channel is of great importance in information theory due to the practically significant applications of optical fibers. These characteristics are as follows: the conditional probability density function P[Y|X], i.e., the probability density to detect the output signal Y(t) when the transmitted input signal is X(t); the distribution of the output signal; the distribution of the recovered input signal; the output and conditional entropies; the mutual information; the optimal input signal distribution; the channel capacity; and others. In our work we consider the problem of signal propagation in a noisy information channel where the propagation is governed by the stochastic nonlinear Schrödinger equation (NLSE) with additive white Gaussian noise, for the case of small dispersion and large signal-to-noise power ratio (SNR). We consider the complex signal ψ(z,t), the envelope of the electric field in the optical fiber, with ω₀ and v being the carrier frequency and the group velocity, respectively. The propagation of the signal is described by the NLSE with additive white Gaussian noise, see [1, 2, 3, 4]:

∂_z ψ(z,t) + iβ ∂²_t ψ(z,t) − iγ |ψ(z,t)|² ψ(z,t) = η(z,t),    (1)

here β is the second dispersion coefficient, γ is the Kerr nonlinearity coefficient, and η(z,t) is the additive complex white noise with zero mean,

⟨η(z,t)⟩ = 0,

and correlation function

⟨η(z,t) η̄(z′,t′)⟩ = Q δ(z − z′) δ(t − t′),    ⟨η(z,t) η(z′,t′)⟩ = 0,    (2)

where the bar means complex conjugation and Q is the power of the white Gaussian noise per unit length and per unit frequency interval. The input condition for the signal is ψ(0,t) = X(t) and the output one is ψ(L,t) = Y(t), where Y(t) is determined by both the input condition and the noise impact in Eq. (1). The frequency bandwidth W_noise of the noise is assumed to be much greater than the frequency bandwidth W of the input signal X(t): W_noise ≫ W.
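As an illustration of the channel model (1)-(2), the stochastic NLSE can be integrated numerically by the standard split-step Fourier method with noise injected at each step. This is only a sketch in arbitrary units, not the analytical method of the paper; the function name and grid parameters are our own choices.

```python
import numpy as np

def ssfm_step(psi, dz, dt, beta, gamma, Q, rng):
    """One split-step Fourier step of the stochastic NLSE (1):
    d_z psi + i*beta*d_t^2 psi - i*gamma*|psi|^2 psi = eta."""
    n = psi.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)
    # Linear (dispersion) half step in the frequency domain: d_t^2 -> -omega^2,
    # so d_z psi = -i*beta*d_t^2 psi becomes multiplication by exp(i*beta*omega^2*dz/2).
    psi = np.fft.ifft(np.fft.fft(psi) * np.exp(1j * beta * omega**2 * dz / 2))
    # Nonlinear (Kerr) full step: d_z psi = i*gamma*|psi|^2 psi.
    psi = psi * np.exp(1j * gamma * np.abs(psi)**2 * dz)
    # Second linear half step.
    psi = np.fft.ifft(np.fft.fft(psi) * np.exp(1j * beta * omega**2 * dz / 2))
    # Additive complex white noise with <eta(z,t) conj(eta(z',t'))> = Q*delta*delta:
    # on the grid this is variance Q*dz/dt per point, split between Re and Im parts.
    noise = rng.normal(size=n) + 1j * rng.normal(size=n)
    return psi + np.sqrt(Q * dz / (2 * dt)) * noise
```

For zero noise and zero dispersion the scheme reproduces the exact solution ψ(z,t) = X(t) e^{iγz|X(t)|²}, which is a convenient sanity check of the signs in Eq. (1).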

There are different approaches to analyzing Eq. (1) and finding the information channel characteristics, primarily the conditional probability density function P[Y|X], which is one of the most important characteristics. The field theory approach is based on the path-integral formulation of this quantity [5, 6]. We have performed some estimations for the channel with a small dispersion parameter via this approach [7]. Now we proceed with the small-dispersion analysis of Eq. (1) by exploiting the stochastic approximation of the noisy equation (1) within the detector averaging procedure. Firstly, we perform the linearization of Eq. (1) using the fact that for large SNR the solution of Eq. (1) cannot significantly deviate from the solution of this equation with zero noise and with the input condition X(t); we denote this noiseless solution as Φ(z,t). Thus we consider the linear equation for the difference δψ = ψ − Φ, see Eq. (8) below in Sec. 3. Secondly, the averaging action of the output signal detector allows us to reduce this linear equation to one that can be treated with perturbation theory both in the small noise parameter and in the small dispersion parameter: see Eq. (11) below in Sec. 3. The detector at z = L performs the frequency filtering with bandwidth W′, which obeys the inequality W ≪ W′ ≪ W_noise. Therefore we have the separation of the time scales for the input signal function X(t) (a “slowly varying function”), the averaged function ⟨δψ⟩, and the noise η. This separation enables us to develop the perturbation theory in the small dispersion parameter. We demonstrate how the channel information characteristics depend on the procedure of the output signal detection (frequency filtering) and on the algorithm of the input signal recovery on the basis of the filtered output signal, see Sec. 4. Finally, we present the correlators of the input signal recovered from the noisy channel and the conditional probability density function up to the first corrections in the dispersion parameter, see Sec. 5.

## 2 Input signal model

We assume the following simple coding model for the input signal X(t):

X(t) = Σ_k C_k f(t − kT₀),    (3)

where the whole (large) time interval T is divided into M subintervals of the duration T₀ = T/M, and we have (almost) nonoverlapping envelope functions f(t − kT₀), each with bandwidth of order 1/τ, in every subinterval. For simplicity we choose Gaussian functions as the envelope shape:

f(t) = exp(−t²/(2τ²)) / (πτ²)^{1/4}.    (4)

If the sparsity T₀/τ is large, the overlapping of the neighboring envelopes is small. For our estimations we choose T₀/τ large enough that the overlapping is negligible. The function f(t) is normalized by the condition ∫ dt |f(t)|² = 1. When recovering the input signal, the coefficients can be found as the projections of the signal on the basis functions:

C_k = ∫ dt X(t) f̄(t − kT₀).    (5)

In our model we suppose the coefficients C_k to have random amplitudes with a given distribution, C_k = √P ρ_k e^{iφ_k}, where we separate the dimensionless random real value ρ_k and the phase φ_k. We suppose that the phases φ_k are constant over each subinterval and assume discrete values from a given finite set. In this representation the dimensional parameter P is the average power over the given input signal distribution:

P = (1/T) ∫₀ᵀ dt ⟨|X(t)|²⟩.    (6)

In our calculations we present the input signal in the form X(t) = ρ(t) e^{iφ(t)}, where ρ(t) ≥ 0, and the phase φ(t), constant over the subintervals, is a smoothed function in such a way that all its time derivatives (φ̇(t), φ̈(t)) are localized on the borders of the subintervals; if one neglects the overlapping, then φ̇ and φ̈ may be considered as zero.
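The coding model (3)-(5) is easy to exercise numerically. The following sketch assumes the unit L²-normalization of the Gaussian envelope stated above and illustrative parameter values of our own choosing; it builds X(t) from random coefficients and recovers them by projection.

```python
import numpy as np

rng = np.random.default_rng(1)
tau, T0, M = 1.0, 10.0, 8          # envelope width, slot duration, number of slots (illustrative)
dt = 0.01
t = np.arange(-T0, (M + 1) * T0, dt)   # time grid with margins around all slots

def f(t):
    """Gaussian envelope (4), normalized so that int |f(t)|^2 dt = 1."""
    return np.exp(-t**2 / (2 * tau**2)) / (np.pi * tau**2) ** 0.25

# Random coefficients C_k = sqrt(P) * rho_k * exp(i*phi_k) with discrete phases.
P = 2.0
rho = rng.uniform(0.5, 1.5, M)
phi = rng.choice([0.0, np.pi / 2, np.pi, 3 * np.pi / 2], M)
C = np.sqrt(P) * rho * np.exp(1j * phi)

# Input signal (3): a sum of (almost) nonoverlapping envelopes.
X = sum(C[k] * f(t - k * T0) for k in range(M))

# Projection recovery (5): C_k = int X(t) * conj(f(t - k*T0)) dt.
C_rec = np.array([(X * np.conj(f(t - k * T0))).sum() * dt for k in range(M)])
```

With the sparsity T₀/τ = 10 the overlap integral between neighboring envelopes is exp(−T₀²/4τ²) ≈ 10⁻¹¹, so the projections return the coefficients essentially exactly.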

We will use the dimensionless parameter γL|X(t)|² characterizing the impact of nonlinearity in the phase evolution for the NLSE (1), see [8, 9]. The dimensionless parameter γLP is the average nonlinearity parameter. We use the dimensionless dispersion parameter ε = βLW², where W = 1/τ is referred to as the frequency bandwidth of the input signal, see Eq. (4). In our perturbative considerations the parameter ε is assumed to be small: ε ≪ 1.

## 3 Output signal filtering and stochastic NLSE solution

The concrete form of the conditional probability density function depends essentially on the signal detection procedure: this includes both the detection of the output signal and the reconstruction of the input signal on the basis of the detected output signal. Let us first consider the procedure of the output signal detection.

### 3.1 Linearization

We present the NLSE solution in the form

ψ(z,t) = Φ(z,t) + δψ(z,t),    (7)

where Φ(z,t) is the solution of the NLSE equation (1) with zero noise, with nonzero β, and with the input boundary condition Φ(0,t) = X(t). It means that Y(t) = Φ(L,t) + δψ(L,t). The phase μ(z,t) = φ(t) + γzρ²(t) corresponds to the phase of the solution X(t) e^{iγz|X(t)|²}, which is the NLSE solution with zero noise and zero β and with the same input boundary condition, see [8]. For nonzero β the solution Φ has the following form in the ε expansion: Φ = e^{iμ}(Λ₀ + εΛ₁ + ε²Λ₂ + …). The quantities Λₙ are polynomials in the nonlinear parameter γzρ² up to the factors e^{iμ}. The function Λ₀ corresponds to the leading order in small ε, Λ₁ corresponds to the next-to-leading order, and so on; the first correction Λ₁ is expressed through the time derivatives of the amplitude ρ(t) and the phase φ(t). In the ε expansion of the NLSE equation (1) with zero noise it is easy to find Λₙ for arbitrary n.

We assume that the function δψ is of order of √(QL) (i.e., we consider only such output signals Y(t) that |Y(t) − Φ(L,t)| ∼ √(QL)) and therefore |δψ| ≪ |Φ| in Eq. (7). We can obtain from Eq. (1) the linear equation for δψ up to terms of order Q and ε²:

∂_z δψ + iβ ∂²_t δψ − iγ (2|Φ|² δψ + Φ² δψ̄) = η(z,t),    (8)

where in the coefficients Φ we have taken into account only the first correction in ε. Here η(z,t) is the noise with the same statistics as in equation (2); since these statistics are invariant under a phase rotation, we can replace η with the phase-rotated noise e^{−iμ}η.
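The linearization (7)-(8) can be checked directly in the zero-dispersion case, where the noiseless solution Φ(z,t) = X(t) e^{iγz|X(t)|²} is known exactly. The sketch below (our own illustration, with a small deterministic seed standing in for the noise) integrates the linearized equation with the standard Kerr linearization 2|Φ|²δψ + Φ²δψ̄ and compares the result with the difference of two exact solutions; the mismatch should be of second order in the perturbation.

```python
import numpy as np

gamma, L, nz = 1.0, 1.0, 20000
dz = L / nz
t = np.linspace(-3, 3, 61)
X = np.exp(-t**2 / 2).astype(complex)     # sample input signal
d0 = 1e-3 * np.exp(-t**2)                 # small initial perturbation (stands in for noise)

# Exact zero-dispersion, zero-noise solutions at z = L: the amplitude is conserved
# and the phase rotates by gamma*z*|psi|^2.
psi_exact = (X + d0) * np.exp(1j * gamma * L * np.abs(X + d0)**2)
Phi_L = X * np.exp(1j * gamma * L * np.abs(X)**2)

# Linearized equation (8) at beta = 0:
#   d_z dpsi = i*gamma*(2|Phi|^2 dpsi + Phi^2 conj(dpsi)),
# integrated by the Euler method with a frozen exact Phi(z,t).
dpsi = d0.astype(complex)
for n in range(nz):
    Phi = X * np.exp(1j * gamma * n * dz * np.abs(X)**2)
    dpsi = dpsi + dz * 1j * gamma * (2 * np.abs(Phi)**2 * dpsi + Phi**2 * np.conj(dpsi))

err = np.max(np.abs((Phi_L + dpsi) - psi_exact))   # should be O(|d0|^2)
```

The residual `err` is of order |d0|², i.e., much smaller than the perturbation itself, confirming that the linear equation captures the first order in the noise.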

### 3.2 Averaging of the detector

For simplicity we assume that the output signal detector performs the following averaging:

⟨Y(t)⟩ = (1/T′) ∫_{t−T′/2}^{t+T′/2} dt′ Y(t′),    (9)

where the averaging time parameter T′ is connected with the output frequency filtering bandwidth W′ = 1/T′. To generalize this procedure, we can introduce ⟨Y(t)⟩ = ∫ dt′ F(t − t′) Y(t′), where the filtering function F(t) truncates the output signal up to the given bandwidth W′ (the Fourier transform F̃(ω) is localized on the bandwidth W′).

For sufficiently small ε we can choose the averaging time T′ in a wide region in such a way that the parameter W/W′ is small:

W ≪ W′ ≪ W_noise.    (10)

These relations (10) allow us to perform the averaging of the equation (8). From this equation one can see that δψ has the time scale of order of 1/W_noise: the frequency bandwidth of δψ is determined by the bandwidth of the noise η in the right-hand side. The frequency bandwidth of the averaged solution ⟨δψ⟩, see Eq. (9), is W′ = 1/T′. The last relation in Eq. (10), i.e., W′ ≪ W_noise, implies that T′ ≫ 1/W_noise.
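The role of the scale separation (10) is easy to see numerically: a flat moving average of duration T′ passes a component of bandwidth W ≪ 1/T′ almost unchanged while strongly suppressing broadband noise. The sketch below uses illustrative parameters of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.001
t = np.arange(0.0, 20.0, dt)
Tprime = 0.2                                         # averaging time T', 1/W_noise << T' << 1/W
slow = np.cos(0.5 * t)                               # slowly varying signal, bandwidth W ~ 0.5
noise = 0.1 * rng.normal(size=t.size) / np.sqrt(dt)  # broadband noise (bandwidth ~ 1/dt)

Nwin = int(round(Tprime / dt))
window = np.ones(Nwin) / Nwin                        # flat window of Eq. (9)
avg = np.convolve(slow + noise, window, mode="same")

inner = slice(1000, -1000)                           # discard edge effects of mode="same"
resid = avg[inner] - slow[inner]                     # distortion + residual noise after filtering
```

The residual noise is reduced by the factor √(T′/dt) relative to the raw noise, while the distortion of the slow component is only of order (W T′)².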

The functions ρ(t), φ(t), and their time derivatives can be considered as slowly varying functions in comparison with both δψ and the averaged solution ⟨δψ⟩: we generally denote them as s(t). This means that we can average the equation (8) and obtain the approximate equation

∂_z ⟨δψ⟩ + iβ ∂²_t ⟨δψ⟩ − iγ (2|Φ|² ⟨δψ⟩ + Φ² ⟨δψ̄⟩) = ⟨η⟩,    (11)

where the angular brackets denote the averaging (9), and we have used that for a slowly varying function s(t) with the bandwidth W one has, at least with the accuracy (WT′)²:

⟨s(t)⟩ = s(t) + (T′²/24) s̈(t) + O((WT′)⁴ s) ≈ s(t).    (12)

It is worth noting that from the practical and numerical points of view the averaging (frequency filtering) of the detector can affect the transmitted information (distort the output signal) much more strongly than both the noise effects and the effects of small dispersion. Therefore, to guarantee the validity of our calculation method, we should keep strict watch over the hierarchy of our approximations when linearizing, see Eq. (8), and when obtaining Eq. (11) from (10) and (12). It means that for the appropriateness of the derivation of Eq. (11) we require, e.g., for the first approximation in Eq. (12), that (WT′)² ≪ 1, and so on.
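For the flat window (9) the first correction to the average of a slowly varying function is (T′²/24)s̈(t), with the next term of relative order (WT′)⁴; this coefficient can be verified by a quick numerical check on a pure tone (illustrative values of W, T′, and t₀ are our own).

```python
import numpy as np

W, Tp = 0.7, 0.3      # signal bandwidth W and averaging time T' (W*T' small)
t0 = 1.234            # an arbitrary observation time

def window_average(s, t, Tp, n=20001):
    """Flat-window average (9) computed by the trapezoidal rule."""
    u = np.linspace(-Tp / 2, Tp / 2, n)
    v = s(t + u)
    return (v.sum() - 0.5 * (v[0] + v[-1])) / (n - 1)

s = lambda t: np.cos(W * t)            # slowly varying test function
s2 = lambda t: -W**2 * np.cos(W * t)   # its second derivative

exact = window_average(s, t0, Tp)
approx = s(t0) + (Tp**2 / 24) * s2(t0)   # two-term expansion of the window average
```

The two-term expansion agrees with the exact window average to order (WT′)⁴, while dropping the correction term leaves an error of order (WT′)².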

For the averaged and multiplied by L noise η̃(t) we can obtain from Eq. (2) the following statistics (the star denotes complex conjugation):

⟨η̃(t)⟩ = 0,    ⟨η̃(t) η̃*(t′)⟩ = QL G(t − t′),    ⟨η̃(t) η̃(t′)⟩ = 0,    (13)

where our averaging procedure (9) results in the following form of the function G:

G(t) = (T′ − |t|)/T′² for |t| ≤ T′, and G(t) = 0 otherwise,    (14)

and for the frequency domain we have G̃(ω) = |F̃(ω)|², where for our procedure (9) F̃(ω) = sin(ωT′/2)/(ωT′/2), so that G̃(ω) = [sin(ωT′/2)/(ωT′/2)]². Note that in the limit T′ → 0 (from the relations (10) one can see that this limit is acceptable for the dispersionless case) G(t) → δ(t) and G̃(ω) → 1.
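The triangular form of the filtered-noise correlator follows from convolving the flat filter function with itself, G(s) = ∫ du F(u) F(u − s); a discrete check (our own illustration, with arbitrary grid parameters):

```python
import numpy as np

dt, Tp = 0.001, 0.25
N = int(round(Tp / dt))
F = np.full(N, 1.0 / Tp)                 # flat filter function F(t) = 1/T' on the window
G = np.convolve(F, F[::-1]) * dt         # discrete G(s) = int du F(u) F(u - s)
s = (np.arange(G.size) - (N - 1)) * dt   # lag axis

# Triangular shape: G(s) = (T' - |s|)/T'^2 for |s| <= T', zero otherwise.
G_theory = np.where(np.abs(s) <= Tp, (Tp - np.abs(s)) / Tp**2, 0.0)
```

The discrete correlator matches the triangle exactly on the grid, and its unit area reflects the delta-function limit G(t) → δ(t) as T′ → 0.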

### 3.3 Solution of the averaged equation (11)

Note that the time scale of both the solution ⟨δψ⟩ and the noise term in the r.h.s. of equation (11) is T′ = 1/W′. The dispersion term in equation (11) is then of relative order βW′²L (since the time scale of ⟨δψ⟩ is T′). The other terms are of relative order ε (since there are no time derivatives of ⟨δψ⟩ in these terms). This means that we can solve equation (11) perturbatively in these small parameters. To find the solution we use the Laplace transformation over the evolution coordinate z and the Fourier transformation over the “fast” time, i.e., all slowly varying functions ρ(t) and φ(t) are treated as constants (“freezing” of the coefficients), but then their time derivatives emerge as corrections:

(15)

Here we have explicitly emphasized the dependence of our solution on the Laplace and Fourier variables. We will omit this dependence in what follows. To find the solution components we have to solve the following algebraic system resulting from Eq. (11):

(16)

where in the leading (zero) order in ε we have from the r.h.s. of Eq. (11)

(17)

And we easily find the leading-order contribution from the system (16). This solution is an even function of the frequency ω, i.e., it depends on ω² only.

In the next-to-leading order we have the contributions from the dispersion term in the r.h.s. of Eq. (11) and the contributions from the second time derivatives of the leading-order solution, see the second term in the r.h.s. of the following equality

(18)

Thus we have

(19)

Note that the solution of the system (16) with the r.h.s. (19) is an odd function of ω.

In the next-to-next-to-leading order we have two different sources for these corrections: the second-order corrections in the first small parameter and the first-order corrections in the second one. However, we can separate these two types of corrections additively. The r.h.s. resulting in the corrections of the first order in the second parameter reads

(20)

The Laplace transformation replaces the multiplication by z with the corresponding operator. It is obvious that this r.h.s. is proportional to the first or second time derivative of a slowly varying function (ρ(t) or φ(t)). The r.h.s. of the system (16) resulting in the corrections of the second order has the same structure as (19). Fortunately, these corrections are not essential for the further calculations.

Now we present the result of our calculation in the first order in the noise and in the leading and next-to-leading orders in the dispersion parameter ε:

(21)

(22)

In the further consideration we need only the expansion of these solutions in the frequency ω. The reason is the subsequent projection of these functions on the basis (4) with the frequency support W: it reduces all powers of ω to powers of W, see the detailed explanation in Sec. 5.

In the leading order in ε we have the following result presented as the expansion in ω:

(23)

(24)

The first correction in ε contains the following terms:

(25)

(26)

Finally, we have found the first corrections in the second expansion parameter. Here we present the expansion of this result in the frequency ω:

(27)

(28)

Note that the function presented in the unexpanded form allows one to calculate the averaged output signal ⟨Y(t)⟩ and then to find all correlators of the output signal.

## 4 Input signal recovery: backward NLSE propagation

The correlation function (14) describes the noise impact in our channel when we measure the output signal filtered with the bandwidth W′. Now we describe the procedure of the input signal recovery from the output signal. This procedure is not unique and depends on the output signal filter and on the input signal model, see Sec. 2.

On the basis of the measured (with the filtering process described above) output signal we can restore the input signal by the backward NLSE evolution. We denote this analytically restored signal as the result of applying the inverted evolution operator of the NLSE (1) with zero noise (backward NLSE propagation) to the filtered output signal. In the case of small ε we know the explicit form of this operator in the perturbation theory, and one obtains the restored signal in the leading and next-to-leading orders in ε.
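In the leading order in the dispersion parameter (β → 0) the noiseless evolution is the pure Kerr phase rotation, and the backward operator is its exact inverse. The following sketch (our own illustration with arbitrary units, not the paper's perturbative operator) shows the recovery in this limiting case:

```python
import numpy as np

gamma, L = 1.0, 1.0
t = np.linspace(-5, 5, 201)
X = 1.5 * np.exp(-t**2 / 2) * np.exp(1j * 0.3)      # sample input signal X(t)

# Forward noiseless evolution at zero dispersion: pure Kerr phase rotation.
Y = X * np.exp(1j * gamma * L * np.abs(X)**2)
# Backward NLSE propagation: the inverted zero-noise evolution operator.
# Since |Y(t)| = |X(t)|, the inverse phase rotation restores X exactly.
X_rec = Y * np.exp(-1j * gamma * L * np.abs(Y)**2)
```

With noise and nonzero dispersion the inversion is only approximate, which is precisely why the recovered-signal correlators acquire the corrections discussed in the following sections.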
