I Introduction
The focusing nonlinear Schrödinger equation (NLSE)
(1) $\mathrm{i}q_t+\tfrac{1}{2}q_{xx}+|q|^2q=0$
is a universal model in nonlinear science, arising in deep water waves RW_tank , nonlinear optics prw ; rw2010 ; Dudley2014 , Bose-Einstein condensates becrw and even finance yanfrw . The rogue wave (RW) solution, or Peregrine soliton, is a typical exact solution of this model and is closely related to the modulational instability (MI) Kharif2009 ; Onorato . The NLSE is an integrable model and possesses many exact solutions (solitons, multi-solitons, breathers, rogue waves), which have been observed in physical experiments. Many rogue wave patterns have been discovered and analyzed based on analytic solutions Kedziora2013 ; YangY21 . However, it is crucial to consider the general initial data problem for the NLSE, since many different waves can be generated by MI on a plane wave background.
Optical rogue waves have been observed in integrable turbulence for the NLSE Walczak15 , which is related to random initial data. Recent studies also show that RW patterns and integrable turbulence arise in soliton gases, for which statistics of the kinetic and potential energies and other quantities are used to characterize integrable turbulence Gelash1819 . Starting from a stochastic perturbation on a nonzero background, the maximum peaks and the probability density functions of the intensity have been analyzed numerically SotoCrespo16 , which explains the probability of the appearance of rogue waves in a chaotic wave state. In these previous studies, the turbulence is measured by statistics. If we start from arbitrary initial data on a plane wave background, chaotic structures can be observed in numerical simulations due to the development of MI MI1967 ; Baronio ; MInonlinearMI , in which weak units similar to Peregrine solitons can be detected clearly. Thus, it is natural to measure the RW pattern and compute the density of RWs by counting them. However, a bijection between RWs and the scattering data has not been established, in contrast to the soliton density in a soliton gas, which is well established by inverse scattering theory Gelash1819 . The machine learning method
Jordan15 was used to analyze extreme events in optical fiber MI Narhi18 , in which the intensity and spectral intensity were analyzed by supervised and unsupervised learning. Machine learning algorithms, including nearest neighbors, support vector machines and artificial neural networks, have also been used to predict the amplitude of chaotic laser pulses Amil19 . Extreme events are predicted by deep neural networks in a truncated Korteweg-de Vries (tKdV) statistical framework Qi20 . Other related statistical learning approaches to extreme events can be found in Mohamad18 ; Dematteis18 ; Majda19 ; Salmela20 . Object detection based on deep learning is widely applied in various scientific fields; e.g., Tao et al. combined microscopy observation with artificial neural networks (ANNs) to study starch gelatinization Tao . A numerical method based on neural networks can be used for long-time simulation of rogue waves and other nonlinear waves exhibiting MI WangLZF21 . Recently developed machine learning methods thus open the possibility of measuring the RW pattern and computing the density of RWs.

In this work, we propose a deep neural network (DNN) model to automatically and accurately detect the RWs in images. We have designed the related dataset, which contains RW images with bounding box annotations for each RW unit. We further introduce a novel metric, the DRW, to characterize the evolution of Gaussian perturbations, and finally give statistics on them.
II Preliminary
It has been shown that local weak perturbations can generate RWs Zhaoling ; GaoZYLY20 . We therefore consider initial data consisting of a plane wave plus a perturbation (white noise or a weak localized perturbation) to investigate the evolution patterns containing RWs. By solving the NLSE with the integrating-factor method Yang10 , we see that such initial data can generate many RWs during the time evolution. Interestingly, each of them has a shape similar to the fundamental RW (Peregrine breather), which has the form Peregrine1983 ; Akhmediev2009a :
(2) $q_{\mathrm{PS}}(t,x)=\left(1-\dfrac{4(1+2\mathrm{i}t)}{1+4x^2+4t^2}\right)\mathrm{e}^{\mathrm{i}t},$
whose temporal-spatial pattern is shown in FIG. 1(a). However, there has been no systematic way to quantify these results, which has remained an open problem up to now. In this paper, we address this problem with artificial neural networks. We choose initial data with the usual Gaussian-form perturbation Zhaoling
(3) 
to demonstrate our results. The Lax spectrum of this initial data was found to be located on the pure imaginary axis Biondini18 . High-order RWs can be generated by multi-Gaussian perturbations on a plane wave background GaoZYLY20 . To measure the RW pattern evolved from the Gaussian perturbation, we define the density of rogue waves (DRW) as follows:
(4) 
where the numerator denotes the number of RWs appearing during the time period considered; it is computed from the number of RW bounding boxes in the region, with incomplete boxes counted by the proportion of their area lying inside the region (for instance, half of a box counts as one half). The denominator is the area of the triangular region. It should be pointed out that GT is the time interval between the beginning moment and the time when the first RW peak appears. FIG. 1(b) shows the details of the terms defined in this paper. Next, we introduce our DNN model to detect the RW pattern.

III RWDNet
Image recognition and object detection by deep learning methods have developed greatly in recent years. The Residual Network (ResNet) was designed to learn the residual representation between layers, which not only makes the network easier to train but also improves accuracy considerably He2016 . ResNet remains one of the most fundamental feature extraction networks in computer vision. Lin et al. proposed the Feature Pyramid Network (FPN), which improves the accuracy of object detection, especially for weak objects, without adding extra computation, by extracting and fusing multi-scale feature maps of RW images He2017b . Based on ResNet and FPN, Lin et al. proposed RetinaNet with the novel focal loss replacing the traditional cross-entropy loss to address the imbalance between positive and negative samples, which improves both accuracy and speed He2017a . Goldman et al. improved detection in densely packed scenes by adding a Soft-IoU layer and an Expectation-Maximization Merge (EM-Merge) unit to RetinaNet
Goldman2019 .

Inspired by the studies above, we propose a new methodology, the Rogue Wave Detection Network (RWDNet), developed from the RetinaNet variant of Goldman et al. Goldman2019 , which is designed to detect the RW regions in images and obtain their number and distribution so as to measure the rogue wave pattern. The architecture of the network is shown in Fig. 2; it mainly consists of three parts: the feature extractor (backbone), the detector and a post-processing unit. The backbone is composed of ResNet and FPN and produces feature maps of RW images at different scales. Each location in a feature map is covered with prior boxes, called anchors, of different sizes and shapes, which are sent to the subsequent subnets (the class, box and Soft-IoU subnets) to be trained.
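As an illustration of how anchors tile a feature map, the following sketch generates the prior boxes for one pyramid level; the stride, sizes and aspect ratios are illustrative values, not the ones used in RWDNet:

```python
import numpy as np

def generate_anchors(fmap_h, fmap_w, stride, sizes=(32, 64), ratios=(0.5, 1.0, 2.0)):
    """Tile (cx, cy, w, h) prior boxes over an fmap_h x fmap_w feature map.

    Every spatial location gets len(sizes) * len(ratios) anchors, centred
    on the corresponding stride x stride cell of the input image."""
    anchors = []
    for i in range(fmap_h):
        for j in range(fmap_w):
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride  # cell centre in image coords
            for s in sizes:
                for r in ratios:
                    # equal-area boxes: w * h == s * s for every aspect ratio r
                    anchors.append((cx, cy, s * np.sqrt(r), s / np.sqrt(r)))
    return np.array(anchors)

anchors = generate_anchors(4, 4, stride=16)  # 4*4 locations, 6 anchors each
```

Each anchor is later scored by the class subnet and refined by the box subnet.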
Mathematically, let the pixel values of a RW image be denoted by a matrix, with the width, height and number of channels of the image, and let the training data be the set of images together with the offsets and labels of the anchors covering each image. Our goal is to predict the bounding box matrices of each image. A RW image I is first fed into the backbone, consisting of ResNet and the Feature Pyramid Network (FPN) and parameterized by its weights, to produce a feature map, where C is the number of feature channels. The feature vector at each spatial location of the feature map represents the extracted feature of the corresponding anchor on the image I under the ResNet+FPN model. After the feature extraction, there are three subnets in our proposed RWDNet: the class subnet for RW/background classification of each anchor feature, the box subnet for the bounding box regression and the Soft-IoU subnet for predicting the Soft-IoU between the predicted boxes and the ground truth.

In the class subnet, we define the confidence level that an anchor instance covers a RW region as:
(5) 
where the parameters are those of the class subnet; we then minimize the focal loss on the training data with annotations:
(6) 
where the label annotation of an anchor is obtained directly from the ground truth and the sum is taken over the set of all anchors covering the image; the two focal-loss hyperparameters are fixed in our implementation. The loss function for training the class subnet over the whole training set is defined by
(7) 
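The per-anchor focal loss can be sketched as follows; the hyperparameter values shown are the common defaults of Lin et al. He2017a , assumed here for illustration:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Per-anchor focal loss: p is the predicted RW probability, y the
    label in {0, 1}. The (1 - p_t)**gamma factor down-weights anchors
    already classified well, countering the positive/negative imbalance."""
    p_t = p if y == 1 else 1.0 - p            # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

easy = focal_loss(0.99, 1)  # confidently correct anchor: tiny loss
hard = focal_loss(0.10, 1)  # badly misclassified anchor: dominates
```

The easy anchor contributes orders of magnitude less loss than the hard one, which is exactly why the focal loss copes with the many background anchors.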
We now introduce the box subnet module of our RWDNet. The model predicts the offset between each positive anchor and its associated ground truth box as
(8) 
where the parameters are those of the box subnet. Then we use the smooth L1 loss function smoothL1 on the images with box annotations to learn the box subnet:
(9) 
where
(10) $\operatorname{smooth}_{L_1}(x)=\begin{cases}0.5x^{2}, & |x|<1,\\ |x|-0.5, & \text{otherwise},\end{cases}$
and the regression target is the annotated offset of each anchor. Now we can write down the loss function on the training data:
(11) 
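The smooth L1 loss of the box subnet has the standard piecewise form, which can be sketched as:

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Smooth L1 loss for box regression: quadratic for |x| < beta and
    linear beyond, so outlier offsets do not dominate the gradient."""
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * ax ** 2 / beta, ax - 0.5 * beta)
```

The two branches join continuously at |x| = beta, giving a loss that is differentiable everywhere yet robust to large regression errors.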
The Intersection over Union (IoU), defined as
(12) $\mathrm{IoU}=\dfrac{S_{I}}{S_{U}},$
is used as the evaluation metric in the detection, where $S_{I}$ and $S_{U}$ denote the intersection and union areas of the prediction and the ground truth, respectively. The class subnet gives the confidence, and the Soft-IoU subnet gives the Soft-IoU, which is the IoU predicted by the model rather than the real IoU. The Soft-IoU between the predicted boxes and their associated ground truth is defined as
(13) 
where the parameters are those of the Soft-IoU subnet. Then we minimize the cross-entropy loss function on each image:
(14) 
where the target is the IoU between the anchor at each location and its associated ground truth. The whole loss of the Soft-IoU subnet over the training data is defined as:
(15) 
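The IoU of (12), which the Soft-IoU subnet learns to predict, reduces to a few lines for axis-aligned boxes:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)       # S_I / S_U
```

Disjoint boxes give 0, identical boxes give 1, and partial overlaps fall strictly in between.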
Finally, we write down the overall loss function for training our RWDNet:
(16) 
All parameters are jointly optimized during network training. The optimized parameters are obtained by
(17) 
We minimize the overall loss function by the stochastic gradient descent method.
Given a testing image, whether an anchor at a given location covers a RW region is determined by:
(18) 
If the anchor is classified as positive, we further perform the bounding box regression for it:
(19) 
With the classification and regression results above, we obtain the prediction. In addition, RWDNet also predicts the Soft-IoU between each positive anchor and the ground truth:
(20) 
which is then used by the expectation-maximization merge (EM-Merge) unit to obtain the final predicted boxes. The details of the EM-Merge unit can be found in Goldman2019 .
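Since the full EM-Merge procedure is given in Goldman2019 , we only sketch the idea with a simple greedy suppression as a stand-in; the real unit fits a Gaussian mixture to each cluster of overlapping detections instead of discarding them:

```python
import numpy as np

def iou(a, b):
    # IoU of two (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

def greedy_merge(boxes, scores, iou_thr=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it above
    iou_thr, repeat; a simplified stand-in for the EM-Merge unit."""
    order = np.argsort(scores)[::-1]              # indices by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        order = np.array([j for j in order[1:]
                          if iou(boxes[i], boxes[j]) <= iou_thr], dtype=int)
    return keep
```

For densely packed RW units this hard suppression can delete true detections, which is precisely the failure mode EM-Merge is designed to avoid.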
IV Experimental results
In this work, we provide a large dataset, termed the Rogue Wave Dataset-10K (RWD10K), containing 10,191 images of rogue wave patterns. We propose an efficient semi-automatic method, called the Peak Search method, to achieve fast and accurate pre-detection of rogue waves instead of manual labeling. It determines the approximate location of the peak of each rogue wave by filtering the local maximum points of the numerical solution matrices, so that one peak point on the matrix corresponds to one rogue wave on the image. All of these peak points are then expanded into bounding boxes. It should be noted that the sizes and locations of these bounding boxes are manually refined because of errors caused by the dimensional difference between the numerical solution matrices and the image matrices. By Peak Search and additional minor corrections, we can efficiently build the RWD10K dataset. The details of Peak Search are given in Appendix A.
We vary the two parameters of the initial data (3) over uniform grids to generate images from the NLSE (1). Additionally, to study the variation of DRW and the other quantities, we generate further images over a larger range of the two parameters to complete the following statistical experiments. We then obtain pseudo annotations by the Peak Search algorithm and refine them manually, assembling our benchmark RWD10K. Each image corresponds to a parameter two-tuple of the initial data. We focus on this setting for two reasons. First, every image in our dataset has a physical meaning. Second, by detecting RWs in these images we can capture their pattern similarity from the computer vision point of view and obtain statistical results about the distribution of the rogue waves (e.g., how the number of rogue waves changes with the parameters of the initial data (3)).
In our detection experiments, the RWD10K dataset is partitioned into training, validation and test splits. Training uses 60% of the images and their associated bounding boxes; a further portion of the images is used for validation (with their bounding boxes), and the rest are used for testing. Images were selected randomly, ensuring that the same RW from the same image does not appear in more than one subset. Our RWD10K dataset is open for public study; more details are available at https://github.com/ZouLiwen1999/RogueWave.
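A sketch of such an image-level split follows; the 60% training fraction is from the text, while the 20%/20% validation/test fractions are assumed for illustration:

```python
import numpy as np

def split_indices(n, seed=0, frac=(0.6, 0.2, 0.2)):
    """Random train/val/test split over n image indices. Splitting at the
    image level automatically keeps all bounding boxes of one image in a
    single subset."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_tr, n_va = int(frac[0] * n), int(frac[1] * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

tr, va, te = split_indices(10191)  # dataset size from the text
```

Splitting by image rather than by box is what prevents a RW from one simulation leaking across subsets.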
The details of our experimental settings are given in Appendix C. After 20 epochs of training, we obtain an average precision (AP) of 99.29% in the RW detection experiment, which shows that we successfully capture the rogue wave pattern similarity from the computer vision perspective. Fig. 3 shows the detection results for images whose parameters are randomly chosen from the test dataset; almost every rogue wave is detected. Additionally, it is easy to observe that the distributions of the rogue wave patterns under different initial data are distinguishable, which prompts the further statistics of the next section.

V Measuring the rogue wave pattern
Now we measure the RW pattern under different Gaussian initial data with the trained RWDNet. We use the DRW defined in (4) to quantify the RW pattern, and we also obtain the variation of GT and of the region angle from the detection results of our RWDNet model. The results are shown in FIG. 4; the initial data is given in (3).
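As a sketch, the DRW of (4) can be computed from detected bounding boxes as follows; for simplicity the region is taken rectangular here, while the paper uses a triangular region, to which the same box-clipping idea applies:

```python
def drw(boxes, region, area):
    """Density of rogue waves: sum the fraction of each RW bounding box
    (t0, t1, x0, x1) lying inside the spatio-temporal window `region`,
    then divide by the window area `area`. Half a box counts as 0.5."""
    t0, t1, x0, x1 = region
    n = 0.0
    for bt0, bt1, bx0, bx1 in boxes:
        overlap_t = max(0.0, min(t1, bt1) - max(t0, bt0))
        overlap_x = max(0.0, min(x1, bx1) - max(x0, bx0))
        n += overlap_t * overlap_x / ((bt1 - bt0) * (bx1 - bx0))  # box fraction inside
    return n / area
```

One box fully inside the window counts as 1, a box half inside as 0.5, matching the counting rule stated after (4).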
FIG. 4(a) shows the distribution of the DRW with respect to the two parameters of the initial data when the final time in (4) is fixed at 15. The value of DRW decreases from the bottom left to the top right of the image, which means the DRW in the spatio-temporal region declines as the two parameters increase. This behavior can be understood from the fact that each localized wave is closer to the Peregrine RW for initial conditions with larger parameter values, since such an initial perturbation approaches the resonant condition with the background more closely Zhaoling ; Lingzhao2017 .
In FIG. 4(b), we show the variation of DRW with time when the two parameters are fixed at different constant values. Since initial data with different parameters yield different GT, the accessible time range differs for the fixed image size. The figure shows that, for fixed parameters, DRW decreases along a smooth curve as time increases. Moreover, if we vary only one parameter with the other fixed, the corresponding DRW values decrease in turn as it increases; the same downward trend holds when the roles of the two parameters are exchanged. These results are in line with those of FIG. 4(a).
In FIG. 4(c), we show the relation between GT and the two parameters, and we use an exponential function and a logarithmic function, respectively, to fit the curves of GT as each parameter varies. In Appendix B, we compare different fitting functions and list the results, which show that the following two forms are the most reasonable. With one parameter fixed, the relation between GT and the other is
(21) 
where the coefficients are fitting parameters. With the roles of the parameters exchanged, the relation between GT and the remaining parameter is
(22) 
where the coefficients are again fitting parameters. The fitted values are given in Tables 1 and 2. In FIG. 4(c), we can see that the fitting curves indeed agree well with the numerical results.
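The exponential fit of (21) can be obtained by linear least squares on log-transformed data; the sketch below uses synthetic data, and the coefficients 1.5 and 0.8 are illustrative, not the paper's fitted values in Tables 1 and 2:

```python
import numpy as np

# Synthetic data obeying GT = c1 * exp(c2 * a) exactly.
a = np.linspace(0.5, 3.0, 6)
gt = 1.5 * np.exp(0.8 * a)

# log GT = log c1 + c2 * a is linear in a, so a degree-1 least-squares
# polynomial fit recovers both coefficients at once.
c2, log_c1 = np.polyfit(a, np.log(gt), 1)
c1 = np.exp(log_c1)
```

The logarithmic relation (22) can be fitted the same way by exchanging the roles of the abscissa and the transformed ordinate.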
FIG. 4(d) demonstrates the variation of the region angle with respect to each of the two parameters. With the amplitude fixed, the angle stays almost steady within a narrow range. But if we fix the width, the angle decreases as the amplitude increases. These results show that the amplitude parameter affects the angle. Additional statistics are given in Appendix B.
VI Conclusions
In this paper, we propose an automatic, fast and accurate method to measure the RW pattern by deep learning. Recently, Guo et al. Guo2021 described an automated classification and positioning system for identifying localized excitations in atomic Bose-Einstein condensates with deep convolutional neural networks, eliminating the need for human image examination. They implement the detection by CNN-based image classification and least-squares-fitting-based position regression. In contrast, we focus on the efficient detection of RWs in images of numerical NLSE solutions entirely through an end-to-end deep learning framework that captures the pattern similarity from the computer vision perspective. Besides, we also obtain statistics of these RW patterns from the detection results, so that we are able to predict the time interval GT and the region angle in real cases. Our proposed DRW metric can intuitively measure the Gaussian perturbations in our experiments. Our model can be generalized to other integrable systems with MI Manakov ; Zhao2012 ; Chabchoub14 ; Baronio121318 ; Chen2013 ; Zhao2019 ; Tikan17 ; Li2013 ; Kartashov19 ; ZhangLY21 ; CheFL21 ; FengLT20 ; MoLZ21 .

For multi-Gaussian perturbations or other weak localized forms of the initial data (3) (i.e., when there are two RW valleys in the images), can we still classify the rogue wave patterns and obtain their corresponding distributions? We leave this problem to future work. We also hope that this work will initiate more innovative efforts in this field.
Acknowledgement
Delu Zeng is supported by the National Science Foundation of China (61571005), the Fundamental Research Program of Guangdong, China (No. 2020B1515310023), and the Science and Technology Research Program of Guangzhou, China (No. 201804010429). Liming Ling is supported by the National Natural Science Foundation of China (No. 11771151) and the Guangzhou Science and Technology Program (No. 201904010362). Li-Chen Zhao is supported by the National Natural Science Foundation of China (Contract No. 12022513, 11775176) and the Major Basic Research Program of Natural Science of Shaanxi Province (Grant No. 2018KJXX094).
Appendix
In the Appendix, we give the outline of our proposed Peak Search method, the description of our RWD10K dataset and the details about the setting and results of our rogue wave detection experiments.
VI.1 Peak Search
Peak Search is our fast, cluster-based algorithm for generating rogue wave images with pseudo annotations. Through our observations, selecting points with larger modulus from the numerical solution matrix filters out the crests of rogue waves well. We therefore propose Algorithm 1 to pre-detect the rogue waves in the images. It should be pointed out that the Peak Search method only works with rogue wave matrices, which means it cannot replace our RWDNet for detecting rogue wave patterns when only images, without numerical solution matrices, are available.
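A minimal sketch of the Peak Search idea follows; the threshold and box half-width are illustrative, not the tuned values of Algorithm 1:

```python
import numpy as np

def peak_search(u, thr=2.0, half=3):
    """Points of the |solution| matrix u that exceed the background
    threshold and are maxima of their 3x3 neighbourhood become RW peak
    candidates; each is expanded into a (row0, row1, col0, col1) box."""
    rows, cols = u.shape
    boxes = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            patch = u[i - 1:i + 2, j - 1:j + 2]
            if u[i, j] >= thr and u[i, j] == patch.max():
                boxes.append((max(i - half, 0), min(i + half, rows - 1),
                              max(j - half, 0), min(j + half, cols - 1)))
    return boxes
```

Applied to a unit background with a single Peregrine-like peak, this returns exactly one pseudo bounding box around the peak.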
VI.2 Additional statistics
We compare different fitting functions for GT; the variation curves are shown in FIG. 5(a) and the fitting parameters in Tables 1 and 2. Besides, FIG. 5(b) gives the curves showing how the number of rogue waves in the images changes with the two parameters. Lastly, the variation curves of DRW with respect to the two parameters at different times are given in FIG. 5(c) and 5(d).
Table 1. Fitting parameters of the exponential relation (21); each row corresponds to one fixed parameter value.
fixed value | coefficient 1 | coefficient 2
10 | 1.581 | 0.804
30 | 2.136 | 0.766
50 | 2.563 | 0.819
Table 2. Fitting parameters of the logarithmic relation (22); each row corresponds to one fixed parameter value.
fixed value | coefficient 1 | coefficient 2
20 | 0.682 | 1.900
50 | 0.890 | 2.760
100 | 1.042 | 3.382
VI.3 Experimental details
In this part, we give some details and results of our RWDNet detection experiment. We use the pretrained ResNet101 weights obtained from https://github.com/keras-team/keras-applications/releases/tag/resnet as the initial weights. The learning rate is fixed, the batch size is set to 4, and the number of epochs is set to 20. The whole training experiment runs on a machine with an NVidia TITAN RTX GPU with 11GB of GDDR6 memory.
We record the overall loss on the training set during the training process, as shown in FIG. 6(a). At the end of training, the overall loss of the model reaches 0.071 on the training split. We then evaluate the trained model on the test split, obtaining 99.29% AP at an IoU threshold of 0.5; the Precision-Recall curve is shown in FIG. 6(b).
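The AP reported above is the area under the precision-recall curve; a minimal sketch of its computation from ranked detections:

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """Rank detections by confidence, accumulate true/false positives,
    and integrate precision over the recall increments."""
    order = np.argsort(scores)[::-1]
    hits = np.asarray(is_tp, dtype=float)[order]   # 1 = true positive at IoU >= 0.5
    tp, fp = np.cumsum(hits), np.cumsum(1.0 - hits)
    recall = tp / n_gt
    precision = tp / (tp + fp)
    return float(np.sum(precision * np.diff(np.concatenate(([0.0], recall)))))
```

A detector that finds every ground-truth box before any false positive attains AP = 1.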
References
 (1) A. Chabchoub, N. P. Hoffmann, and N. Akhmediev, Phys. Rev. Lett. 106, 204502 (2011).
 (2) D. R. Solli, C. Ropers, P. Koonath, B. Jalali, Nature (London), 450, 1054-1057 (2007).
 (3) B. Kibler, J. Fatome, C. Finot, G. Millot, F. Dias, G. Genty, N. Akhmediev, and J. M. Dudley, Nature Phys. 6, 790 (2010).
 (4) J. M. Dudley, F. Dias, M. Erkintalo, and G. Genty, Nature Photon. 8, 755 (2014).
 (5) Yu. V. Bludov, V. V. Konotop, and N. Akhmediev, Phys. Rev. A, 80, 033610 (2009); Z. Yan, V. V. Konotop, and N. Akhmediev, Phys. Rev. A, 82, 036610 (2010).
 (6) V. G. Ivancevic, Cogn. Comput. 2, 17 (2010); Z. Yan, Commun. Theor. Phys. 54, 947 (2010); Z. Yan, Phys. Lett. A 375, 4274 (2011).
 (7) C. Kharif and E. Pelinovsky, Eur. J. Mech. B/Fluids, 22, 603 (2003); P. K. Shukla, et al., Phys. Rev. Lett. 97, 094501 (2006); C. Kharif, E. Pelinovsky, and A. Slunyaev, Rogue Waves in the Ocean (Springer, New York, 2009); M. Onorato, et al., Phys. Rep. 528, 47 (2013).
 (8) M. Onorato, A. R. Osborne, and M. Serio, Phys. Rev. Lett. 96, 014503 (2006).
 (9) D. J. Kedziora, A. Ankiewicz, and N. Akhmediev, Phys. Rev. E 88, 013207 (2013).
 (10) B. Yang, J. Yang, Physica D: Nonlinear Phenomena, 419, 132850 (2021).
 (11) P. Walczak, S. Randoux, and P. Suret, Phys. Rev. Lett. 114, 143903 (2015).
 (12) A. A. Gelash, Physical Review E, 97(2), 022208 (2018); A. A. Gelash, D. S. Agafontsev, Physical Review E, 98(4), 042210 (2018); A. A. Gelash, D. S. Agafontsev, V. Zakharov, et al., Physical Review Letters, 123(23), 234102 (2019).
 (13) J. M. SotoCrespo, N. Devine, N. Akhmediev, Phys. Rev. Lett. 116(10), 103901 (2016).
 (14) T. B. Benjamin, Proc. R. Soc. A 299, 59-75 (1967); T. B. Benjamin and J. E. Feir, J. Fluid Mech. 27, 417-430 (1967).
 (15) F. Baronio, M. Conforti, A. Degasperis, S. Lombardo, M. Onorato, and S. Wabnitz, Phys. Rev. Lett. 113, 034101 (2014).
 (16) V. E. Zakharov and A. A. Gelash, Phys. Rev. Lett. 111, 054101 (2013); G. Biondini, and D. Mantzavinos, Phys. Rev. Lett. 116, 043902 (2016).
 (17) M. I. Jordan, T. M. Mitchell, Science, 349(6245), 255-260 (2015).
 (18) M. Närhi, L. Salmela, J. Toivonen, C. Billet, J. M. Dudley, and G. Genty, Nature Communications, 9(1), 1-11 (2018).
 (19) P. Amil, M. C. Soriano, and C. Masoller, Chaos: An Interdisciplinary Journal of Nonlinear Science, 29(11), 113111 (2019).
 (20) D. Qi, A. J. Majda, Proceedings of the National Academy of Sciences, 117(1), 52-59 (2020).
 (21) M. A. Mohamad, T. P. Sapsis, Proceedings of the National Academy of Sciences, 115(44), 11138-11143 (2018).
 (22) G. Dematteis, T. Grafke, E. Vanden-Eijnden, Proceedings of the National Academy of Sciences, 115(5), 855-860 (2018).
 (23) A. J. Majda, M. N. J. Moore, D. Qi, Proceedings of the National Academy of Sciences, 116(10), 3982-3987 (2019).
 (24) L. Salmela, C. Lapre, J. M. Dudley, G. Genty, Scientific Reports, 10(1), 1-8 (2020).
 (25) J. X. Tao, J. B. Huang, L. Yu, Z. K. Li, H. S. Liu, B. Yuan, D. L. Zeng, Food Hydrocolloids, 74, 151-158 (2017).
 (26) R. Q. Wang, L. M. Ling, D. L. Zeng and B. F. Feng, Communications in Nonlinear Science and Numerical Simulation, 101, 105896 (2021).
 (27) L. C. Zhao and L. M. Ling, J. Opt. Soc. Am. B 33, 850-856 (2016).
 (28) P. Gao, L. C. Zhao, Z. Y. Yang, X. H. Li, and W. L. Yang, Opt. Lett. 45, 2399-2402 (2020).
 (29) J. Yang, Nonlinear Waves in Integrable and Nonintegrable Systems (Society for Industrial and Applied Mathematics, 2010).
 (30) D. Peregrine, J. Aust. Math. Soc. B, Appl. Math. 25, 16 (1983).
 (31) N. Akhmediev, A. Ankiewicz, and J. SotoCrespo, Phys. Rev. E 80, 026601 (2009).
 (32) G. Biondini, X. Luo, Phys. Lett. A 382(37), 2632-2637 (2018).
 (33) K. He, X. Zhang, S. Ren and J. Sun, 2016 IEEE Conference on Computer Vision and Pattern Recognition, 770-778 (2016).
 (34) T. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan and S. Belongie, 2017 IEEE Conference on Computer Vision and Pattern Recognition, 936-944 (2017).
 (35) T. Lin, P. Goyal, R. Girshick, K. He and P. Dollar, 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2999-3007 (2017).
 (36) E. Goldman, R. Herzig, A. Eisenschtat, O. Ratzon, I. Levi, J. Goldberger and T. Hassner, 2019 IEEE Conference on Computer Vision and Pattern Recognition, 5227-5236 (2019).
 (37) R. Girshick, 2015 IEEE International Conference on Computer Vision, 1440-1448 (2015).
 (38) S. Ren, K. He, R. Girshick, et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6), 1137-1149 (2017).
 (39) J. Tao, J. Huang, L. Yu, Z. Li, H. Liu, B. Yuan, et al., Food Hydrocolloids, 74, 151-158 (2017).
 (40) L. M. Ling, L. C. Zhao, Z. Y. Yang, B. Guo, Phys. Rev. E 96, 022211 (2017).
 (41) S. Guo, A. R. Fritsch, C. Greenberg, et al., Machine Learning: Science and Technology, 2, 035020 (2021).
 (42) S. V. Manakov, Zh. Eksp. Teor. Fiz. 67, 543 (1974) [Sov. Phys. JETP, 38, 248 (1974)].
 (43) B. L. Guo and L. M. Ling, Chin. Phys. Lett. 28, 110202 (2011); L. C. Zhao and J. Liu, J. Opt. Soc. Am. B 29, 3119 (2012); L. C. Zhao and J. Liu, Phys. Rev. E 87, 013201 (2013).
 (44) A. Chabchoub and M. Fink, Phys. Rev. Lett. 112, 124101 (2014); A. Przadka, S. Feat, P. Petitjeans, V. Pagneux, A. Maurel, and M. Fink, Phys. Rev. Lett. 109, 064501 (2012).
 (45) F. Baronio, A. Degasperis, M. Conforti, and S. Wabnitz, Phys. Rev. Lett. 109, 044102 (2012); F. Baronio, M. Conforti, A. Degasperis, and S. Lombardo, Phys. Rev. Lett. 111, 114101 (2013); S. Chen, Y. Ye, J. M. SotoCrespo, Ph. Grelu, and F. Baronio, Phys. Rev. Lett. 121, 104101 (2018).
 (46) S. Chen and L.Y. Song, Phys. Rev. E 87, 032910 (2013); S. Chen, Phys. Lett. A 378, 2851 (2014); S. Chen and D. Mihalache, J. Phys. A 48, 215202 (2015); S. Chen, X. M. Cai, P. Grelu, J. SotoCrespo, S. Wabnitz, and F. Baronio, Opt. Express 24, 5886 (2016).
 (47) L. C. Zhao, L. Duan, P. Gao, Z. Y. Yang, EuroPhys. Lett. 125, 40003 (2019).
 (48) A. Tikan, C. Billet, G. El, A. Tovbis, M. Bertola, T. Sylvestre, F. Gustave, S. Randoux, G. Genty, P. Suret, and J. M. Dudley, Phys. Rev. Lett. 119, 033901 (2017).
 (49) L. Li, Z. Wu, L. Wang, and J. He, Ann. Phys. 334, 198 (2013); J. He, H. Zhang, L. Wang, K. Porsezian, and A. Fokas, Phys. Rev. E 87, 052914 (2013).
 (50) Y. V. Kartashov, V. V. Konotop, M. Modugno, and E. Ya. Sherman, Phys. Rev. Lett. 122, 064101 (2019).
 (51) G. Q. Zhang, L.M. Ling, Z.Y. Yan, Journal of Nonlinear Science, 31(5), 152 (2021).
 (52) Y. R. Chen, B. F. Feng, L. M. Ling, Physica D: Nonlinear Phenomena, 424, 132954 (2021).
 (53) B. F. Feng, L. M. Ling, D. A. Takahashi, Studies in Applied Mathematics, 144(1), 46-101 (2020).
 (54) Y. F. Mo, L. M. Ling and D. L. Zeng, Phys. Lett. A, 127739 (2021).