BGD-based Adam algorithm for time-domain equalizer in PAM-based optical interconnects

08/12/2019 ∙ by Haide Wang, et al. ∙ Sun Yat-sen University

To the best of our knowledge, we propose for the first time an adaptive moment estimation (Adam) algorithm based on batch gradient descent (BGD) to design a time-domain equalizer (TDE) for PAM-based optical interconnects. The Adam algorithm has been widely applied in the field of artificial intelligence. For a TDE, the BGD-based Adam algorithm can obtain globally optimal tap coefficients without being trapped in locally optimal ones. Therefore, fast and stable convergence with low mean square error can be achieved by the BGD-based Adam algorithm. Meanwhile, the BGD-based Adam algorithm is implemented by parallel processing, which is more efficient than conventional serial algorithms such as the least mean square and recursive least squares algorithms. The experimental results demonstrate that the BGD-based Adam feed-forward equalizer works well in 120-Gbit/s PAM8 optical interconnects. In conclusion, the BGD-based Adam algorithm shows great potential for converging the tap coefficients of TDEs in future optical interconnects.


I Introduction

Owing to the emergence of cloud computing and a variety of web applications, large-scale data centers nowadays resort to optical interconnects to meet the explosive growth of network traffic [1]. To achieve higher capacity, eight-level pulse-amplitude modulation (PAM8) is a potential modulation format for future optical interconnects, although it is sensitive to inter-symbol interference (ISI) and noise [2, 3]. In general, a time-domain equalizer (TDE) can be employed to compensate the ISI in PAM8 systems. The recursive least squares (RLS) and least mean squares (LMS) algorithms are two common adaptive algorithms for converging the tap coefficients of a TDE. However, with the increase of data rate and modulation level, a TDE using the RLS algorithm has high computational complexity, and a TDE using the LMS algorithm requires a large number of training samples, which may not be well-suited for future optical interconnects [4].

In recent years, machine learning (ML) algorithms have been widely applied in the field of artificial intelligence (AI) [5]. Since ML algorithms require large-scale training data sets, many efficient adaptive algorithms have been proposed to achieve fast and stable convergence when minimizing the error function of ML algorithms [6, 7, 8]. Because it is believed that more data beats a better algorithm in the AI field [9], AI scientists often use the distinguished adaptive moment estimation (Adam) algorithm for stochastic optimization without reducing the scale of the training set. Generally, there is a trade-off between the accuracy and the number of training samples in the training process [10]. It is very possible to optimize a TDE with a traditional structure by using these adaptive algorithms. However, different from ML, in real communication systems we should use as few training samples and as low computational complexity as possible to obtain the optimal tap coefficients.

In this paper, inspired by these advances in AI, we first propose a batch gradient descent (BGD)-based Adam algorithm to achieve fast and stable convergence of the tap coefficients in a feed-forward equalizer (FFE), a major type of TDE. As is known, the conventional RLS and LMS algorithms employ serial processing to converge the tap coefficients. In contrast, the BGD-based Adam algorithm for the FFE can be implemented by effective parallel processing. Meanwhile, the BGD-based Adam algorithm has an adaptive step size, realizing precise convergence with low mean square error (MSE). The experimental results of 120-Gbit/s PAM8 optical interconnects demonstrate that the BGD-based Adam FFE can effectively obtain the optimal tap coefficients using fewer training samples and iterations, which shows great potential for future optical interconnects.

II Principle of BGD-based Adam algorithm for FFE

The BGD-based Adam algorithm for the FFE requires training samples to update the tap coefficients in the training process. At the receiver, the training samples are received and stored to construct a training matrix $\mathbf{R}$ for parallel processing. During the training procedure, all the samples need to be stored. The structure of the training matrix $\mathbf{R}$ can be expressed as

$$\mathbf{R} = \begin{bmatrix} r_1 & r_2 & \cdots & r_L \\ r_2 & r_3 & \cdots & r_{L+1} \\ \vdots & \vdots & \ddots & \vdots \\ r_N & r_{N+1} & \cdots & r_{N+L-1} \end{bmatrix} \qquad (1)$$

where $r_1, r_2, \ldots, r_{N+L-1}$ are the received training samples. Obviously, the dimension of $\mathbf{R}$ is $N$-by-$L$, where $N$ is the number of training samples and $L$ is the number of taps in the FFE. The transmitted training vector is

$$\mathbf{Y} = [y_1, y_2, \ldots, y_N]^T \qquad (2)$$

where $(\cdot)^T$ denotes matrix transpose. The error function used in the BGD-based Adam algorithm for the FFE is the MSE, which can be expressed as

$$J(\mathbf{W}) = \frac{1}{N}\,\|\mathbf{Y} - \mathbf{R}\mathbf{W}\|^2 \qquad (3)$$

where $\mathbf{W}$ is the tap coefficient vector of the FFE. The gradient $\mathbf{G}$ is the partial derivative of $J(\mathbf{W})$ with respect to $\mathbf{W}$, which can be calculated as

$$\mathbf{G} = \frac{\partial J(\mathbf{W})}{\partial \mathbf{W}} = \frac{2}{N}\,\mathbf{R}^T(\mathbf{R}\mathbf{W} - \mathbf{Y}) \qquad (4)$$
Require: $\mathbf{Y}$ {Training vector}
Require: $\mathbf{R}$ {Received training matrix}
Require: $T$ {Total iteration number}
Require: $\alpha$ {Step size}
Initialize: $\mathbf{W}_0 \leftarrow \mathbf{0}$ {Initialize tap coefficients}
Initialize: $\mathbf{m}_0 \leftarrow \mathbf{0}$ {Initialize first moment vector}
Initialize: $\mathbf{v}_0 \leftarrow \mathbf{0}$ {Initialize second moment vector}
Initialize: $t \leftarrow 0$ {Iteration initialization}
1:  while $t < T$ do
2:    $t \leftarrow t + 1$
3:    $\mathbf{G}_t \leftarrow \frac{2}{N}\mathbf{R}^T(\mathbf{R}\mathbf{W}_{t-1} - \mathbf{Y})$ {Get gradients}
4:    $\mathbf{m}_t \leftarrow \beta_1 \mathbf{m}_{t-1} + (1-\beta_1)\,\mathbf{G}_t$
5:    $\mathbf{v}_t \leftarrow \beta_2 \mathbf{v}_{t-1} + (1-\beta_2)\,\mathbf{G}_t^2$
6:    $\hat{\mathbf{m}}_t \leftarrow \mathbf{m}_t/(1-\beta_1^t)$ {Bias-corrected operation}
7:    $\hat{\mathbf{v}}_t \leftarrow \mathbf{v}_t/(1-\beta_2^t)$ {Bias-corrected operation}
8:    $\mathbf{W}_t \leftarrow \mathbf{W}_{t-1} - \alpha\,\hat{\mathbf{m}}_t/(\sqrt{\hat{\mathbf{v}}_t}+\epsilon)$ {Update tap coefficients}
9:  end while
10: return $\mathbf{W}_T$ {Return tap coefficients}
Algorithm 1: BGD-based Adam algorithm for FFE
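
As a concrete illustration, the following NumPy sketch implements the training procedure of Algorithm 1. It is a minimal sketch under stated assumptions: the function name, the default hyper-parameters, and the sliding-window construction of the training matrix are illustrative choices, not the exact experimental settings of this paper.

```python
import numpy as np

def train_adam_ffe(r, y, num_taps, num_iter=100, alpha=0.01,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """Sketch of BGD-based Adam training for an FFE, following Eqs. (1)-(10).

    r: received training samples (length >= N + num_taps - 1)
    y: transmitted training symbols (length N)
    """
    N = len(y)
    L = num_taps
    # Eq. (1): N-by-L training matrix R built from sliding windows of r.
    R = np.array([r[i:i + L] for i in range(N)])
    w = np.zeros(L)   # tap coefficient vector W
    m = np.zeros(L)   # first moment vector
    v = np.zeros(L)   # second moment vector
    for t in range(1, num_iter + 1):
        # Eqs. (3)-(4): gradient of the MSE over the whole batch.
        g = (2.0 / N) * (R.T @ (R @ w - y))
        # Eqs. (6)-(7): biased first and second moment estimates.
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        # Eqs. (8)-(9): bias-corrected operations.
        m_hat = m / (1 - beta1**t)
        v_hat = v / (1 - beta2**t)
        # Eq. (10): update tap coefficients with the adaptive step size.
        w -= alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w
```

Note that the two matrix products per iteration dominate the cost, and both are straightforward to parallelize.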

The conventional BGD method updates $\mathbf{W}$ in the opposite direction of the gradient $\mathbf{G}$, which can be expressed as

$$\mathbf{W}_t = \mathbf{W}_{t-1} - \alpha\,\mathbf{G}_t \qquad (5)$$

where $\alpha$ is a fixed step size and the subscript $t$ denotes the $t$-th iteration. Generally speaking, when the step size is too large, the method may fail to converge or may even diverge, whereas it needs a great number of iterations when the step size is too small [11]. However, the BGD-based Adam algorithm is much less sensitive to the step size than the conventional BGD method, because it computes adaptive step sizes from estimates of the biased first and second moments of the gradients. The BGD-based Adam algorithm for the FFE in the training process is illustrated in Algorithm 1. The biased first and second moment estimates $\mathbf{m}_t$ and $\mathbf{v}_t$ of $\mathbf{G}_t$ are initialized as zero vectors and updated as

$$\mathbf{m}_t = \beta_1 \mathbf{m}_{t-1} + (1-\beta_1)\,\mathbf{G}_t \qquad (6)$$

$$\mathbf{v}_t = \beta_2 \mathbf{v}_{t-1} + (1-\beta_2)\,\mathbf{G}_t^2 \qquad (7)$$

where $\beta_1$ and $\beta_2$ are set to 0.9 and 0.999, respectively. The bias-corrected operations keep the first and second moment estimates from being biased towards zero at the beginning of the iterations, which can be expressed as

$$\hat{\mathbf{m}}_t = \mathbf{m}_t/(1-\beta_1^t) \qquad (8)$$

$$\hat{\mathbf{v}}_t = \mathbf{v}_t/(1-\beta_2^t) \qquad (9)$$

A relatively small value $\epsilon$ is used to prevent a zero-division error, and the tap coefficients are updated as

$$\mathbf{W}_t = \mathbf{W}_{t-1} - \alpha\,\hat{\mathbf{m}}_t/(\sqrt{\hat{\mathbf{v}}_t} + \epsilon) \qquad (10)$$

The BGD-based Adam algorithm calculates the error function after scanning all training samples and then updates the parameters. It is acknowledged that BGD-based Adam, which requires all training samples in every iteration, is guaranteed to converge to the globally optimal solution for a convex error function such as the MSE function. In contrast, other gradient descent methods that do not use all training samples in every iteration are more likely to be trapped in locally optimal coefficients, and their frequent updating may result in drastic fluctuation of the error function [12]. Moreover, the BGD-based Adam algorithm also updates the parameters less frequently than the LMS and RLS algorithms, which use only one training sample per iteration and run serially. However, it should be noted that the BGD-based method needs extra memory to store all training samples in the training process. It is worth noting that a significant feature of the BGD-based Adam algorithm is that its basic operations are based on matrices and vectors. This means that it runs much faster in parallel in some computing environments, such as MATLAB and Numerical Python in the AI field, and field-programmable gate arrays in the industrial field [13, 14].
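
The contrast between the two run modes can be sketched as follows. The dimensions match the experimental settings later in the paper, while the data and the LMS step size `mu` are assumed illustrative values:

```python
import numpy as np

N, L = 300, 181
rng = np.random.default_rng(0)
R = rng.standard_normal((N, L))   # stand-in received training matrix
y = rng.standard_normal(N)        # stand-in transmitted training vector

# Parallel (batch) mode: the whole gradient is two matrix products,
# which BLAS libraries, GPUs, or FPGAs can evaluate in parallel.
w = np.zeros(L)
g = (2.0 / N) * (R.T @ (R @ w - y))

# Serial mode (LMS-style): each update depends on the previous one,
# so the loop over training samples cannot be vectorized.
mu = 1e-3  # assumed step size, for illustration only
w = np.zeros(L)
for n in range(N):
    e = y[n] - R[n] @ w
    w += mu * e * R[n]
```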

Further, after converging to the globally optimal tap coefficients with the BGD-based Adam algorithm, the extra memory is no longer needed and the system equalizes the received signals serially. After equalization, a simple post filter is used to suppress the amplified high-frequency noise. The output of the post filter can be expressed as

$$z(n) = x(n) + c\,x(n-1) \qquad (11)$$

where $x(n)$ is the output of the FFE and $c$ is the tap coefficient of the post filter. The post filter unavoidably introduces a known ISI, but it can be eliminated by the maximum likelihood sequence detection (MLSD) algorithm [15].
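
A minimal sketch of this post filter, with $x$, $c$, and $z$ as the assumed notation for Eq. (11):

```python
import numpy as np

def post_filter(x, c):
    """Two-tap post filter of Eq. (11): z(n) = x(n) + c * x(n-1)."""
    z = x.astype(float)
    z[1:] += c * x[:-1]
    return z
```

The known ISI term $c\,x(n-1)$ introduced here is exactly what the subsequent MLSD stage removes.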

III Experimental setups

Fig. 1: Experimental setup. EML, electro-absorption modulator integrated laser; DAC, digital-to-analog converter; EA, electrical amplifier; DC bias, direct current bias; SSMF, standard single-mode fiber; VOA, variable optical attenuator; PD, photodiode; RTO, real-time oscilloscope. Inset (a) is the eye diagram of the received PAM8 signals. Inset (b) is the eye diagram of the equalized PAM8 signals.
Fig. 2: MSE curves of the FFE applied to 120-Gbit/s PAM8 optical interconnects when (a) the BGD-based Adam, (b) LMS, and (c) RLS algorithms are employed, respectively.

Fig. 1 shows the experimental setup of the 120-Gbit/s PAM8 system using the BGD-based Adam FFE. At the transmitter, the digital frames of PAM8 signals are uploaded into a digital-to-analog converter (DAC) with an 86-GSa/s sampling rate and a 16-GHz 3-dB bandwidth to generate electrical PAM8 frames. The symbol rate of the electrical PAM8 frames is set to 43 GBaud. After being amplified by an electrical amplifier (EA), the electrical PAM8 frames are modulated by a 40-Gbit/s electro-absorption modulator integrated laser (EML), to which an appropriate direct current (DC) bias is applied, to generate the optical PAM8 frames. Subsequently, the generated optical PAM8 signals are launched into 2-km standard single-mode fiber (SSMF). At the receiver, a variable optical attenuator (VOA) is employed to adjust the received optical power (ROP) of the signals. Then a photodiode (PD) converts the received optical signals into electrical signals. The electrical signals are converted into digital signals by a real-time oscilloscope (RTO) with a sampling rate of 80 GSa/s and a 3-dB bandwidth of 36 GHz. Finally, off-line processing is implemented to deal with the digital signals, including re-sampling, synchronization, the BGD-based Adam FFE, the post filter, MLSD, PAM8 de-mapping, and bit error rate (BER) calculation. When the lengths of the training samples and the total samples in one frame are set to 300 and 164480, respectively, the net rate of the electrical PAM8 frames is approximately 120 Gbit/s ($43 \times 3 \times (164480-300)/164480/1.07 \approx 120.3$ Gbit/s). The eye diagrams of the received PAM8 signals and the equalized PAM8 signals are shown as Insets (a) and (b), respectively. Apparently, the serious ISI is effectively compensated after equalization.
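
The quoted net rate can be checked with a one-line calculation; the 7% FEC overhead is inferred from the FEC limit used in Sec. IV, so this is a sketch of the arithmetic rather than the authors' exact accounting:

```python
baud = 43e9                          # symbol rate: 43 GBaud
bits_per_symbol = 3                  # PAM8 carries 3 bits per symbol
payload = (164480 - 300) / 164480    # training overhead per frame
net = baud * bits_per_symbol * payload / 1.07   # assumed 7% FEC overhead
print(f"{net / 1e9:.1f} Gbit/s")     # -> 120.3 Gbit/s
```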

IV Results and discussion

The MSE curves of the FFE with the BGD-based Adam, LMS, and RLS algorithms are shown in Fig. 2. As shown in Fig. 2(a), the MSE of the BGD-based Adam algorithm has obviously converged after 100 iterations. Although the computational complexity of each iteration of the BGD-based Adam algorithm is higher than that of the other two algorithms, it updates the tap coefficients with all training samples in every iteration, less frequently and more steadily. As shown in Figs. 2(b) and 2(c), it is clear that the LMS and RLS algorithms cannot converge even after 200 iterations, and their MSE curves fluctuate much more than that of the BGD-based Adam algorithm, because the LMS and RLS algorithms update the tap coefficients frequently, giving rise to an MSE with high variance. The BER performances of the above algorithms undoubtedly improve with the increase of iterations.

Fig. 3: BER performances of 120-Gbit/s PAM8 optical interconnects versus ROP for (a) BTB and (b) 2-km transmission with FFE, post filter, and MLSD. The FFE employs BGD-based Adam with 300 training samples (green rhombus), LMS with 1200 (red triangle) or 300 (yellow circle) training samples, and RLS with 300 training samples (blue square), respectively.

As shown in Fig. 3(a), for the 120-Gbit/s PAM8 system after back-to-back (BTB) transmission, the BGD-based Adam algorithm using 300 training samples has almost the same performance as the RLS algorithm with 300 training samples and the LMS algorithm with 1200 training samples. However, the LMS algorithm does not converge the tap coefficients well when only 300 training samples are used. Therefore, the BGD-based Adam algorithm is more efficient than the LMS algorithm. As shown in Fig. 3(b), the BGD-based Adam algorithm using 300 training samples achieves good and stable performance for the 120-Gbit/s PAM8 system after 2-km SSMF transmission, which is comparable to the RLS algorithm using 300 training samples and better than the LMS algorithm using 1200 training samples. The required ROP of the 120-Gbit/s PAM8 system with the BGD-based Adam algorithm is approximately 1 dB lower than that with the LMS algorithm at the 7% FEC limit. Therefore, the BGD-based Adam algorithm is more robust against the limited bandwidth and noise than the LMS algorithm, even when using fewer training samples. The tap number of the FFE using the above algorithms is set to 181.

The computational complexities of the training processes can be calculated as [16]

$$C_{\text{Adam}} \propto T(N-L+1)L \qquad (12)$$

$$C_{\text{LMS}} \propto NL \qquad (13)$$

$$C_{\text{RLS}} \propto NL^{2} \qquad (14)$$

where $N$ is the length of the training sequence, $L$ is the number of taps, and $T$ is the iteration number of the BGD-based Adam algorithm. $T$ and $L$ are set to 120 and 181, respectively. The computational complexity of the BGD-based Adam algorithm is a concave quadratic function of the tap number $L$, while the computational complexity of the LMS algorithm is proportional to $L$ and that of the RLS algorithm is proportional to $L^{2}$.
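
Plugging in the experimental settings makes the ordering concrete. The constant factors are dropped, so these are proportional counts in the sense of Eqs. (12)-(14) rather than exact operation counts:

```python
N, L, T = 300, 181, 120   # training length, tap number, Adam iterations

c_adam = T * (N - L + 1) * L   # Eq. (12): concave quadratic in L
c_lms = N * L                  # Eq. (13): linear in L
c_rls = N * L**2               # Eq. (14): quadratic in L

# Ascending order: C_LMS < C_Adam < C_RLS
print(c_lms, c_adam, c_rls)    # 54300 2606400 9828300
```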

Table I shows the comparisons of the number of training samples, computational complexity, and run mode of the BGD-based Adam, LMS, and RLS algorithms in the training process. Owing to the precise convergence of batch training, the number of training samples of the BGD-based Adam algorithm is the same as that of the RLS algorithm and 75% less than that of the LMS algorithm. In general, the computational complexities of the mentioned adaptive algorithms in the training process are, in ascending order, $C_{\text{LMS}} < C_{\text{Adam}} < C_{\text{RLS}}$: the computational complexity of the BGD-based Adam algorithm is higher than that of the LMS algorithm but less than that of the RLS algorithm. However, thanks to the adaptive step size, the tap coefficients converge rapidly and steadily with the BGD-based Adam algorithm. Therefore, the iteration number of the BGD-based Adam algorithm is usually smaller than those of the LMS and RLS algorithms. Furthermore, parallel computation techniques greatly speed up the BGD-based Adam algorithm.

Algorithm         Training samples   Computational complexity   Run mode
BGD-based Adam    300                $\propto T(N-L+1)L$        Parallel
LMS               1200               $\propto NL$               Serial
RLS               300                $\propto NL^{2}$           Serial
TABLE I: Comparisons of the number of training samples, computational complexity, and run mode of the BGD-based Adam, LMS, and RLS algorithms in the training process.

V Conclusion

In conclusion, for the first time, we propose a BGD-based Adam TDE for PAM-based optical interconnects. The experimental results of 120-Gbit/s PAM8 optical interconnects over 2-km transmission demonstrate that the BGD-based Adam TDE can effectively and efficiently obtain the optimal tap coefficients using fewer training samples and iterations while achieving good performance. The BGD-based Adam algorithm achieves better performance than the LMS algorithm. Furthermore, the computational complexity of the BGD-based Adam algorithm is lower than that of the RLS algorithm, and it can be accelerated owing to its parallel matrix operations. The BGD-based Adam algorithm is robust against the limited bandwidth and serious noise, showing great potential for future optical interconnects.

Funding.

The Science and Technology Planning Project of Guangdong Province (2017B010123005, 2018B010114002); Local Innovation and Research Teams Project of Guangdong Pearl River Talents Program (2017BT01X121); National Natural Science Foundation of China (NSFC) (61525502); The Fundamental Research Funds for the Central Universities (21619309).


References

  • [1] Q. Cheng, M. Bahadori, M. Glick, S. Rumley, and K. Bergman, "Recent advances in optical technologies for data centers: a review," Optica 5, 1354-1370 (2018).
  • [2] G. Chen, J. Du, L. Sun, W. Zhang, K. Xu, X. Chen, G. T. Reed, and Z. He, "Nonlinear distortion mitigation by machine learning of SVM classification for PAM-4 and PAM-8 modulated optical interconnection," J. Light. Technol. 36, 650-657 (2018).
  • [3] N.-P. Diamantopoulos, W. Kobayashi, H. Nishi, K. Takeda, T. Kakitsuka, and S. Matsuo, "Amplifierless PAM-4/PAM-8 transmissions in O-band using a directly modulated laser for optical data-center interconnects," Opt. Lett. 44, 9-12 (2019).
  • [4] Q. Zhang, N. Stojanovic, C. Prodaniuc, C. Xie, M. Koenigsmann, and P. Laskowski, "Single-lane 180 Gbit/s PAM-4 signal transmission over 2 km SSMF for short-reach applications," Opt. Lett. 41, 4449-4452 (2016).
  • [5] M. I. Jordan and T. M. Mitchell, "Machine learning: trends, perspectives, and prospects," Science 349, 255-260 (2015).
  • [6] J. Duchi, E. Hazan, and Y. Singer, "Adaptive subgradient methods for online learning and stochastic optimization," J. Mach. Learn. Res. 12, 2121-2159 (2011).
  • [7] M. D. Zeiler, "ADADELTA: an adaptive learning rate method," arXiv preprint arXiv:1212.5701 (2012).
  • [8] D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," arXiv preprint arXiv:1412.6980 (2014).
  • [9] P. M. Domingos, "A few useful things to know about machine learning," Commun. ACM 55, 78-87 (2012).
  • [10] L. Bottou and O. Bousquet, "The tradeoffs of large scale learning," in Advances in Neural Information Processing Systems (2008), pp. 161-168.
  • [11] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, 2016).
  • [12] S. Ruder, "An overview of gradient descent optimization algorithms," arXiv preprint arXiv:1609.04747 (2016).
  • [13] S. Van Der Walt, S. C. Colbert, and G. Varoquaux, "The NumPy array: a structure for efficient numerical computation," Comput. Sci. & Eng. 13, 22 (2011).
  • [14] Y. Dou, S. Vassiliadis, G. K. Kuzmanov, and G. N. Gaydadjiev, "64-bit floating-point FPGA matrix multiplication," in Proceedings of the 2005 ACM/SIGDA 13th International Symposium on Field-Programmable Gate Arrays (ACM, 2005), pp. 86-95.
  • [15] K. Zhong, X. Zhou, J. Huo, C. Yu, C. Lu, and A. P. T. Lau, "Digital signal processing for short-reach optical communications: a review of current technologies and future trends," J. Light. Technol. 36, 377-400 (2018).
  • [16] J. Zhou, Y. Qiao, X. Huang, C. Yu, Q. Cheng, X. Tang, M. Guo, W. Liu, and Z. Li, "Joint FDE and MLSD algorithm for 56-Gbit/s optical FTN-PAM4 system using 10G-class optics," J. Light. Technol. 37, 3343-3350 (2019).