Measuring the rogue wave pattern triggered by Gaussian perturbations by deep learning

Weak Gaussian perturbations on a plane wave background can trigger many rogue waves (RWs), owing to modulational instability. Numerical simulations show that these RWs appear to share a similar unit structure. However, to the best of our knowledge, no existing result proves that these RWs share similar patterns across different perturbations, partly because it is hard to measure the RW pattern automatically. In this work, we address these problems from the perspective of computer vision using deep neural networks. We propose a Rogue Wave Detection Network (RWD-Net) model to automatically and accurately detect RWs in images, which directly indicates that they share similar visual patterns. For this purpose, we also design the related dataset, termed Rogue Wave Dataset-10K (RWD-10K), which contains 10,191 RW images with bounding-box annotations for each RW unit. In our detection experiments, we achieve 99.29% average precision on the test split of the RWD-10K dataset. Finally, we derive a novel metric, the density of RW units (DRW), to characterize the evolution of Gaussian perturbations, and report statistical results on them.

I Introduction

The focusing nonlinear Schrödinger equation (NLSE)

(1)   i q_t + \frac{1}{2} q_{xx} + |q|^2 q = 0

is a universal model in nonlinear science, arising in deep water waves RW_tank , nonlinear optics prw ; rw2010 ; Dudley2014 , Bose-Einstein condensates bec-rw and even finance yanfrw . The rogue wave (RW) solution, or Peregrine soliton, is a typical exact solution of this model, and it is related to modulational instability (MI) Kharif2009 ; Onorato . The NLSE is an integrable model and possesses many exact solutions (solitons, multi-solitons, breathers, rogue waves), which have been observed in physical experiments. Many rogue wave patterns have been discovered and analyzed based on analytic solutions Kedziora2013 ; YangY-21 . However, it is crucial to consider the general initial-data problem for the NLSE, since many different waves can be generated by MI on a plane wave background.

Optical rogue waves have been observed in integrable turbulence for the NLSE Walczak-15 , which corresponds to random initial data. Recent studies also show that RW patterns and integrable turbulence arise in soliton gases, for which statistics of the kinetic and potential energies and other features are used to characterize integrable turbulence Gelash-18-19 . Starting from stochastic perturbations on a non-zero background, the maximum peaks and the probability density functions of the intensity have been analyzed numerically Soto-Crespo-16 , which explains the probability of the appearance of rogue waves in a chaotic wave state. In these previous studies, the turbulence is measured by statistics. If we instead start from arbitrary initial data on the plane wave background, chaotic structures can be observed in numerical simulations due to the development of MI MI-1967 ; Baronio ; MI-nonlinear-MI , in which weak units similar to Peregrine solitons can be clearly detected. It is therefore natural to measure the RW pattern and compute the density of RWs from their number. However, a bijection between RWs and the scattering data has not been established, in contrast to the soliton density in a soliton gas, which is well established by inverse scattering theory Gelash-18-19 .

Machine learning methods Jordan-15 have been used to analyze extreme events in optical fiber MI Narhi-18 , where the intensity and spectral intensity were analyzed by supervised and unsupervised learning. Machine learning algorithms, including k-nearest neighbors, support vector machines and artificial neural networks, have also been used to predict the amplitude of chaotic laser pulses Amil-19 . Extreme events have been predicted by deep neural networks in a truncated Korteweg-de Vries (tKdV) statistical framework Qi-20 . Other statistical learning approaches to extreme events appear in the literature Mohamad-18 ; Dematteis-18 ; Majda-19 ; Salmela-20 . Object detection based on deep learning is widely applied across scientific fields; e.g., Tao et al. combined microscopy observation with artificial neural networks (ANNs) for the study of starch gelatinization Tao . A numerical method based on neural networks can be used for long-time simulation of rogue waves and other nonlinear waves exhibiting MI WangLZF-21 . These recently developed machine learning methods make it possible to measure the RW pattern and compute the density of RWs.

In this work, we propose a deep neural network (DNN) model to automatically and accurately detect RWs in images. We have designed the related dataset, which contains RW images with bounding-box annotations for each RW unit. We further derive our novel metric, the DRW, to characterize the evolution of Gaussian perturbations, and finally report statistical characterizations of them.

II Preliminary

It has been shown that local weak perturbations can generate RWs Zhaoling ; GaoZYLY-20 . We therefore consider initial data (where is a white noise or a weak localized perturbation) to investigate the evolution patterns containing RWs. By using the th-order integrating-factor method for solving the NLSE Yang-10 , we see that such initial data generate many RWs during the time evolution. Interestingly, each of them has a shape similar to the fundamental RW (the Peregrine breather), which has the form Peregrine1983 ; Akhmediev2009a :

(2)   q(x,t) = \left[ 1 - \frac{4(1+2it)}{1+4x^2+4t^2} \right] e^{it},

whose temporal-spatial pattern is shown in FIG. 1(a). However, there is no systematic evidence to support these observations, which has remained an open problem up to now. In this paper, we solve this problem by means of artificial neural networks. We choose initial data with the usual Gaussian-form perturbation Zhaoling

(3)

to demonstrate our results. The corresponding Lax spectrum of this initial data was found to be located on the pure imaginary axis Biondini-18 . High-order RWs can be generated by multi-Gaussian perturbations on a plane wave background GaoZYLY-20 . To measure the RW pattern evolved from the Gaussian perturbation, we define the density of rogue waves (DRW) as follows:

(4)

where denotes the number of RWs that appear during the period from to . We compute from the number of RW bounding boxes in ; incomplete boxes are counted by the fraction of their areas inside (for instance, half of a box counts as ). is the area of the triangular region. It should be pointed out that is the time interval between the initial moment and the time when the first RW peak appears. Fig. 1(b) illustrates some of the terms defined in this paper. Next, we introduce our DNN model to detect the RW pattern.
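As a concrete illustration, the fractional box-counting rule above can be sketched in a few lines of Python. The rectangular region here stands in for the paper's triangular spatiotemporal region (whose area is supplied separately by the caller); all names and values are illustrative, not taken from the paper's implementation.

```python
def box_fraction_inside(box, region):
    """Fraction of a bounding box's area lying inside `region`.

    Boxes and regions are (x1, y1, x2, y2) tuples. A box half inside the
    region contributes 0.5 to the count, as described in the text.
    """
    bx1, by1, bx2, by2 = box
    rx1, ry1, rx2, ry2 = region
    iw = max(0.0, min(bx2, rx2) - max(bx1, rx1))
    ih = max(0.0, min(by2, ry2) - max(by1, ry1))
    area = (bx2 - bx1) * (by2 - by1)
    return (iw * ih) / area if area > 0 else 0.0

def drw(boxes, region, region_area):
    """Density of RW units: fractional box count divided by the area of the
    spatiotemporal region (the triangular area in eq. (4))."""
    n = sum(box_fraction_inside(b, region) for b in boxes)
    return n / region_area
```

For example, a box lying half inside the region contributes 0.5 to the count before dividing by the region's area.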

Figure 1: (a) Rogue waves on rogue wave pattern images; (b) Some terms defined in this paper.

III RWD-Net

Image recognition and object detection by deep learning have advanced greatly in recent years. The Residual Network (ResNet) was designed to learn the residual representation between layers, which not only makes the network easier to train but also improves accuracy considerably He2016 . ResNet remains one of the most fundamental feature-extraction networks in computer vision. Lin et al. proposed the Feature Pyramid Network (FPN), which improves the accuracy of object detection, especially for small objects, without additional computation, by extracting and fusing multi-scale feature maps He2017b . Based on ResNet and FPN, Lin et al. proposed RetinaNet, whose novel focal loss replaces the traditional cross-entropy loss to address the imbalance between positive and negative samples, improving both accuracy and speed He2017a . Goldman et al. improved detection in densely packed scenes by adding a Soft-IoU layer and an Expectation-Maximization Merge (EM-Merge) unit to RetinaNet Goldman2019 .

Inspired by the studies above, we propose a new model, the Rogue Wave Detection Network (RWD-Net), developed from the RetinaNet variant of Goldman et al. Goldman2019 , which is designed to detect RW regions in the images and obtain their number and distribution in order to measure the rogue wave pattern. The architecture of the network is shown in Fig. 2 and mainly consists of three parts: a feature extractor (backbone), a detector and a post-processing unit. The backbone is composed of ResNet and FPN, which produce feature maps of RW images at different scales. Each location in the feature map is covered with a priori boxes, called anchors, of different sizes and shapes, which are sent to the subsequent subnets (the class, box and Soft-IoU subnets) for training.

Figure 2: Structure of our RWD-Net. (a) ResNet-101; (b) FPN; (c) The box, class and the Soft-IoU subnets; (d) EM-Merge unit; (e) Detection results.

Mathematically, let the pixel values of an RW image be denoted by , where denotes the width, height and length of the image respectively, and let the training data be denoted by , where is the set of offsets and labels of each anchor covering and denotes the number of anchors in image . Our goal is to predict the bounding-box matrices of image . An RW image I is first fed into the backbone, consisting of ResNet and the Feature Pyramid Network (FPN) and parameterized by , to produce a feature map , where C is the number of feature channels. The feature vector at each spatial location of the feature map represents the feature that the ResNet+FPN model extracts for the corresponding anchor on the image I. After feature extraction, our proposed RWD-Net has three subnets: the class subnet for RW/background classification of each anchor feature , the box subnet for bounding-box regression, and the Soft-IoU subnet for predicting the Soft-IoU between the predicted boxes and the ground truth.

In the class subnet, we define the confidence level that an anchor covers an RW region as:

(5)

where is the parameter of the class subnet. We then minimize the focal loss on the training data with annotations:

(6)

where is the label annotation of anchor obtained directly from , and is the set of all anchors covering the image . We set and in our implementation. The loss function for training the class subnet over the whole training set is defined by

(7)
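For readers unfamiliar with the focal loss, a minimal per-anchor sketch follows. The modulating factor (1 − p_t)^γ down-weights easy, well-classified anchors; α = 0.25 and γ = 2 are the defaults popularized by RetinaNet and are assumptions here, since the values used by the paper are not shown above.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Per-anchor focal loss.

    p is the predicted probability that the anchor covers an RW; y is the
    label (1 for RW, 0 for background). alpha and gamma are illustrative
    defaults, not the paper's values.
    """
    p_t = p if y == 1 else 1.0 - p              # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # (1 - p_t)^gamma shrinks the loss of easy, well-classified anchors
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

Summing this loss over all anchors of an image gives the per-image class loss, and averaging over the training set gives the class-subnet loss in (7).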

We now introduce the box subnet module of our RWD-Net. The model predicts the offsets between positive anchors and their associated ground-truth boxes as

(8)

where is the parameter of the box subnet. We then use the smooth L1 loss function smoothL1 on the image with box annotations to learn the box subnet:

(9)

where

(10)

and is the annotation of the anchors' offsets. Now we can write down the loss function on the training data :

(11)
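The smooth L1 penalty of (10) is the standard robust regression loss from the citation smoothL1; a one-line sketch makes its two regimes explicit:

```python
def smooth_l1(x):
    """Smooth L1 loss of a single regression residual x: quadratic near zero
    (stable gradients), linear for |x| >= 1 (robust to outlier residuals)."""
    ax = abs(x)
    return 0.5 * x * x if ax < 1.0 else ax - 0.5
```

The two branches meet at |x| = 1, where both equal 0.5, so the loss and its gradient are continuous there.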

The Intersection over Union (IoU) defined as

(12)

is used as the evaluation metric in detection, where and denote the intersection and union areas between the prediction and the ground truth, respectively. The class subnet outputs the confidence, and the Soft-IoU subnet outputs the Soft-IoU, i.e., the IoU predicted by the model rather than the real IoU. The Soft-IoU between the predicted boxes and their associated ground-truth boxes is defined as

(13)

where is the parameter of the Soft-IoU subnet. We then minimize the cross-entropy loss function on the image :

(14)

where is the ground-truth IoU between the anchor at location and its associated ground-truth box. The whole loss of the Soft-IoU subnet for the training data is defined as:

(15)
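The IoU of eq. (12) is a direct computation on box coordinates. A sketch with boxes in (x1, y1, x2, y2) form:

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```

IoU is 1 for identical boxes, 0 for disjoint ones, and the Soft-IoU subnet learns to predict this quantity for each positive anchor.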

Finally, we write down the overall loss function for training our RWD-Net:

(16)

All parameters are jointly optimized during network training. The optimized parameters are obtained by

(17)

We minimize the overall loss function by the stochastic gradient descent method.

Given a testing image , whether its anchor at location covers an RW region is determined by:

(18)

If , then we further perform bounding-box regression for the anchor at :

(19)

With and above, we obtain the prediction . In addition, RWD-Net predicts the Soft-IoU between each positive anchor and the ground truth:

(20)

which is used by the expectation-maximization merge (EM-Merge) unit to obtain the final predicted boxes. Details of the EM-Merge unit can be found in Goldman2019 .
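The regression step in (19) maps predicted offsets back to image coordinates. Since the paper does not spell out the parameterization, the sketch below assumes the standard R-CNN offset encoding (center shifts scaled by anchor size, log-scaled width and height); the function name is illustrative.

```python
import math

def decode_box(anchor, offsets):
    """Map predicted offsets (tx, ty, tw, th) for an anchor (cx, cy, w, h)
    to a box (x1, y1, x2, y2), assuming the standard R-CNN encoding."""
    cx, cy, w, h = anchor
    tx, ty, tw, th = offsets
    px, py = cx + tx * w, cy + ty * h            # shift center, scaled by anchor size
    pw, ph = w * math.exp(tw), h * math.exp(th)  # log-scaled width and height
    return (px - pw / 2, py - ph / 2, px + pw / 2, py + ph / 2)
```

Zero offsets recover the anchor itself, so a well-placed anchor needs only small corrections.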

IV Experimental results

In this work, we provide our large dataset, termed Rogue Wave Dataset-10K (RWD-10K), containing 10,191 images of rogue wave patterns. We propose an efficient semi-automatic method, called Peak Search, to achieve fast and accurate pre-detection of rogue waves instead of manual labeling. It determines the approximate location of the peak of each rogue wave by filtering out local maximum points on the numerical solution matrices, such that one peak point on the matrix corresponds to one rogue wave in the image. All of these peak points are then expanded into bounding boxes. It should be noted that the sizes and locations of these bounding boxes are manually refined afterwards, owing to errors caused by the dimensional difference between the numerical solution matrices and the image matrices. With Peak Search and minor manual corrections, we can efficiently build the RWD-10K dataset. Details of Peak Search are given in Appendix A.

We vary the parameters (from to with interval ) and (from to with interval ) in the initial data (3) to generate images. Additionally, to study the variation of DRW and other quantities with and , we generate further images with larger value ranges of and for the following statistical experiments. We then obtain pseudo-annotations with the Peak Search algorithm and refine them manually, assembling our benchmark RWD-10K. Each image corresponds to a parameter pair (, ) in the initial data. We focus on such settings for two reasons. First, every image in our dataset has a physical meaning. Second, by detecting RWs in these images we can capture their pattern similarity in the sense of computer vision and obtain statistical results about the distribution of these rogue waves (e.g., how the number of rogue waves changes with the parameters in the initial data (3)).

In our detection experiment, the RWD-10K dataset is partitioned into train, validation and test splits. Training uses 60% of the images ( images) with their associated bounding boxes; of the images ( images) are used for validation (with their bounding boxes); the remaining images ( bounding boxes) are used for testing. Images were selected randomly, ensuring that the same RW from the same image does not appear in more than one subset. We make our RWD-10K dataset publicly available; more details can be found at https://github.com/ZouLiwen-1999/RogueWave.
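The image-level split described above can be sketched as follows; shuffling whole images (rather than individual boxes) guarantees that annotations of one image never straddle two subsets. The helper name and seed are illustrative.

```python
import random

def split_dataset(n_images, seed=0):
    """60/20/20 train/validation/test split at the image level, so that all
    bounding boxes of one image stay in a single subset."""
    idx = list(range(n_images))
    random.Random(seed).shuffle(idx)       # deterministic shuffle for reproducibility
    n_train = int(0.6 * n_images)
    n_val = int(0.2 * n_images)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

For the 10,191 images of RWD-10K this yields subsets of 6,114, 2,038 and 2,039 images.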

Details of our experiment settings are given in Appendix C. After 20 epochs of training, we obtain an average precision (AP) of 99.29% in the RW detection experiment, which shows that we successfully capture the rogue wave pattern similarity from the perspective of computer vision. Fig. 3 shows detection results for images whose parameters are randomly chosen from the test set; almost every rogue wave is detected. Additionally, it is easy to observe that the distributions of these rogue wave patterns under different initial data are discriminative, which prompts the further statistics in the next section.

Figure 3: The detection results of the images we randomly selected from the test splits of RWD-10K using our RWD-Net.

V Measuring the rogue wave pattern

Now we measure the RW pattern under different Gaussian initial data using the trained RWD-Net. We use the DRW defined in (4) to quantify the RW pattern, and we also obtain the variations of GT and based on the detection results of our RWD-Net model. The results are shown in Fig. 4; the initial data is given in (3).

FIG. 4(a) shows the distribution of the DRW with respect to the parameters and when in (4) is fixed at 15. It is easy to see that the value of the DRW decreases from the bottom left to the top right of the image, which means that the corresponding DRW declines in the spatiotemporal regions as and increase. These characteristics can be understood from the fact that each localized wave is closer to the Peregrine RW for initial conditions with larger and , since an initial perturbation with much larger and approaches the resonance condition with the background more closely Zhaoling ; Lingzhao2017 .

In FIG. 4(b), we show the variation of DRW as varies while and are fixed at different constant values. Since initial data with different parameters and yield different , the range of differs for the fixed image size. The figure shows that when and are fixed, DRW decreases along a smooth curve as increases. Meanwhile, if we only consider the variation of for fixed , we see that as increases from to , the corresponding DRW value decreases in turn. Likewise, if we only consider the variation of for fixed , the DRW value still shows a downward trend as increases from to . These results are consistent with those of FIG. 4(a).

In FIG. 4(c), we show the relation between and ( ), and we use an exponential function with base and a logarithmic function, respectively, to fit the curves of GT against and . In Appendix B, we compare different fitting functions and list results showing that the following two functions are the most reasonable. For fixed , we have the relation between GT and as

(21)

where and are the fitting parameters. For fixed , we have the relation between GT and as

(22)

where and are the fitting parameters. The related fitting parameters are given in Tables 1 and 2. In Fig. 4(c), we can see that the fitting curves indeed agree well with the numerical results.

FIG. 4(d) demonstrates the variation of with respect to the parameter or . With the parameter fixed, the angle varies almost steadily between and . But if we fix the parameter , the angle decreases as increases, ranging approximately between and . These results show that the amplitude parameter affects the angle . Additional statistics are given in Appendix B.

Figure 4: Results of measuring the rogue wave pattern. (a) The map of DRW as and change, for ; (b) The variations of DRW for different ; (c) The relation between GT and or , in which the solid blue lines are the fitting curves and the cross or circle points are the data points; (d) The values for different or values.

VI Conclusions

In this paper, we have proposed an automatic, fast and accurate method to measure the RW pattern by deep learning. Recently, Guo et al. Guo2021 described an automated classification and positioning system for identifying localized excitations in atomic Bose-Einstein condensates with deep convolutional neural networks, eliminating the need for human image examination. They implement detection by CNN-based image classification and least-squares-fitting-based position regression. In contrast, we focus on the efficient detection of RWs in images of numerical solutions of the NLSE, entirely within an end-to-end deep learning framework, to capture their pattern similarity in the sense of computer vision. Moreover, we obtain statistics of these RW patterns based on the detection results, so that we can predict the time interval and region angle in real cases. Our proposed metric, DRW, intuitively measures the Gaussian perturbations in our experiments. Our model can be generalized to other integrable systems with MI Manakov ; Zhao2012 ; Chabchoub14 ; Baronio121318 ; Chen2013 ; Zhao2019 ; Tikan17 ; Li2013 ; Kartashov19 ; ZhangLY-21 ; CheFL-21 ; FengLT-20 ; MoLZ-21 .

For multi-Gaussian perturbations or other weak localized forms in the initial data (3) (i.e., when there are two RW valleys in the images), can we still classify these rogue wave patterns and obtain their corresponding distributions? We leave this problem for future work. We also hope that this work will initiate more innovative efforts in this field.

Acknowledgement

Delu Zeng is supported by the National Science Foundation of China (61571005), the Fundamental Research Program of Guangdong, China (No. 2020B1515310023), and the Science and Technology Research Program of Guangzhou, China (No. 201804010429). Liming Ling is supported by the National Natural Science Foundation of China (No. 11771151) and the Guangzhou Science and Technology Program (No. 201904010362). Li-Chen Zhao is supported by the National Natural Science Foundation of China (Contract Nos. 12022513, 11775176) and the Major Basic Research Program of Natural Science of Shaanxi Province (Grant No. 2018KJXX-094).

Appendix

In this Appendix, we give an outline of our proposed Peak Search method, a description of our RWD-10K dataset, and details of the settings and results of our rogue wave detection experiments.

A Peak Search

Peak Search is our proposed fast, cluster-based algorithm for generating rogue wave images with pseudo-annotations. Through our observations, selecting points with larger modulus from the numerical solution matrix can effectively filter out the crests of rogue waves. Therefore, we propose Algorithm 1 to pre-detect the rogue waves in images. It should be pointed out that the Peak Search method only works with numerical solution matrices, which means it cannot replace our RWD-Net for detecting rogue wave patterns when only images, without any numerical solution matrices, are available.

Input: The numerical solution matrix, ; The initial location map of the peak points, where at first;
Output: The final location map of the peak points, ;
1 If , then we set , where represents the peak factor, a constant specified in advance (we set in our experiment), and represents the level-set value corresponding to the level ground plane.
2 For each , if is the highest value point within the distance , then we set , where denotes the comparison radius we specified in advance (we set in our experiment).
3 We map the numerical matrix coordinates of each peak point satisfying into a weak bounding box on the image, and then zoom them to a size of ;
return ;
Algorithm 1 The procedure of Peak Search algorithm.
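A minimal Python sketch of the two filtering steps of Algorithm 1 (thresholding against the background level, then keeping only local maxima within a comparison radius) follows. The background level, peak factor and radius are illustrative placeholders, since the paper's constants are not shown above, and the final box-expansion step is omitted.

```python
def peak_search(M, background=1.0, peak_factor=2.0, radius=2):
    """Sketch of Peak Search on a 2D list M of wave amplitudes.

    Points below peak_factor * background are discarded; a surviving point
    is kept only if it is the highest value within `radius` grid steps.
    """
    rows, cols = len(M), len(M[0])
    threshold = peak_factor * background
    peaks = []
    for i in range(rows):
        for j in range(cols):
            if M[i][j] < threshold:
                continue
            # keep (i, j) only if no neighbor within the radius is higher
            neighborhood_max = all(
                M[a][b] <= M[i][j]
                for a in range(max(0, i - radius), min(rows, i + radius + 1))
                for b in range(max(0, j - radius), min(cols, j + radius + 1))
                if (a, b) != (i, j)
            )
            if neighborhood_max:
                peaks.append((i, j))
    return peaks
```

Each returned point would then be expanded into a weak bounding box on the corresponding image, as in step 3 of Algorithm 1.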

B Additional statistics

We compare different fitting functions of GT; the variation curves are shown in Figs. 5(a) and 5(b) and the fitting parameters in Tables 1 and 2. Besides, we give curves showing how the numbers of rogue waves in the images change with and in Figs. 5(c) and 5(d). Lastly, the variation curves of DRW with and under different are given in Figs. 5(e) and 5(f).

        a        b
10      1.581    -0.804
30      2.136    -0.766
50      2.563    -0.819
Table 1: The parameters of the fitting function (21).

        c        d
20      0.682    1.900
50      0.890    2.760
100     1.042    3.382
Table 2: The parameters of the fitting function (22).
Figure 5: (a-b) Variation of GT with or when the other is fixed; (c-d) Variation of the number of rogue waves in the images with or when the other is fixed; (e) Variation of DRW with under different ; (f) Variation of DRW with under different .

C Experimental details

In this part, we give some details of our RWD-Net detection experiment. We use the pre-trained weights of ResNet-101, obtained from https://github.com/keras-team/keras-applications/releases/tag/resnet, as the initial weights. The learning rate is set to , the batch size to 4 and the number of epochs to 20. The whole training experiment runs on a machine with an NVidia TITAN RTX GPU with 11GB of GDDR6 memory.

Figure 6: (a) Evolution of the overall loss ; (b) The Precision-Recall curve of detection in test splits.

We record the overall loss on the training set during the training process, as shown in FIG. 6(a). At the end of training, the overall loss of the model reaches 0.071 on the train split. We then evaluate the trained model on the test split. Finally, we get 99.29% AP at an IoU threshold of 0.5 on the test split, and the precision-recall curve is shown in Fig. 6(b).
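The AP reported above is the area under the precision-recall curve at an IoU threshold of 0.5. A minimal sketch of this computation from ranked detections follows; the function name and the simple non-interpolated summation rule are illustrative (detection benchmarks often use interpolated variants).

```python
def average_precision(scores, matched, n_gt):
    """Area under the precision-recall curve from ranked detections.

    scores: detection confidences; matched: 1 if the detection matches a
    ground-truth box at IoU >= 0.5, else 0; n_gt: number of ground-truth boxes.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap = prev_recall = 0.0
    for i in order:
        if matched[i]:
            tp += 1
        else:
            fp += 1
        recall = tp / n_gt
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # rectangle under the PR curve
        prev_recall = recall
    return ap
```

For example, three detections with confidences 0.9, 0.8, 0.7, of which the first and third match two ground-truth boxes, give an AP of 5/6.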

References

  • (1) A. Chabchoub, N. P. Hoffmann, and N. Akhmediev, Phys. Rev. Lett. 106, 204502 (2011).
  • (2) D. R. Solli, C. Ropers, P. Koonath, B. Jalali, Nature (London), 450, 1054-1057 (2007).
  • (3) B. Kibler, J. Fatome, C. Finot, G. Millot, F. Dias, G. Genty, N. Akhmediev, and J. M. Dudley, Nature Phys. 6, 790 (2010).
  • (4) J. M. Dudley, F. Dias, M. Erkintalo, and G. Genty, Nature Photon. 8, 755 (2014).
  • (5) Yu. V. Bludov, V. V. Konotop, and N. Akhmediev, Phys. Rev. A, 80, 033610 (2009); Z. Yan, V. V. Konotop, and N. Akhmediev, Phys. Rev. A, 82, 036610 (2010).
  • (6) V. G. Ivancevic, Cogn. Comput. 2, 17 (2010); Z. Yan, Commun. Theor. Phys. 54, 947 (2010); Z. Yan, Phys. Lett. A 375, 4274 (2011).
  • (7) C. Kharif and E. Pelinovsky, Eur. J. Mech. B/Fluids, 22, 603 (2003); P. K. Shukla, et al., Phys. Rev. Lett. 97, 094501 (2006); C. Kharif, E. Pelinovsky, and A. Slunyaev, Rogue Waves in the Ocean (Springer, New York, 2009); M. Onorato, et al., Phys. Rep. 528, 47 (2013).
  • (8) M. Onorato, A. R. Osborne, and M. Serio, Phys. Rev. Lett. 96, 014503 (2006).
  • (9) D. J. Kedziora, A. Ankiewicz, and N. Akhmediev, Phys. Rev. E 88, 013207 (2013).
  • (10) B. Yang, J. Yang, Physica D: Nonlinear Phenomena, 419, 132850 (2021).
  • (11) P. Walczak, S. Randoux, and P. Suret, Phys. Rev. Lett. 114, 143903 (2015).
  • (12) A. A. Gelash, Physical Review E, 97(2), 022208 (2018); A. A. Gelash, D. S. Agafontsev, Physical Review E, 98(4), 042210 (2018); A. A. Gelash, D. S. Agafontsev, V. Zakharov, et al., Physical Review Letters, 123(23), 234102 (2019).
  • (13) J. M. Soto-Crespo, N. Devine, N. Akhmediev, Phys. Rev. Lett. 116(10), 103901 (2016).
  • (14) T. B. Benjamin, Proc. R. Soc. A 299, 59-75 (1967); T. B. Benjamin, and J. E. Feir, J. Fluid Mech. 27, 417-430 (1967).
  • (15) F. Baronio, M. Conforti, A. Degasperis, S. Lombardo, M. Onorato, and S. Wabnitz, Phys. Rev. Lett. 113, 034101 (2014).
  • (16) V. E. Zakharov and A. A. Gelash, Phys. Rev. Lett. 111, 054101 (2013); G. Biondini, and D. Mantzavinos, Phys. Rev. Lett. 116, 043902 (2016).
  • (17) M. I. Jordan, T. M. Mitchell, Science, 349(6245), 255-260 (2015).
  • (18) M. Närhi, L. Salmela, J. Toivonen, C. Billet, J.M. Dudley, and G. Genty, Nature communications, 9(1), 1-11 (2018).
  • (19) P. Amil, M. C. Soriano, and C. Masoller, Chaos: An Interdisciplinary Journal of Nonlinear Science, 29(11), 113111 (2019).
  • (20) D. Qi, A. J. Majda, Proceedings of the National Academy of Sciences, 117(1), 52-59 (2020).
  • (21) M. A. Mohamad, T. P. Sapsis, Proceedings of the National Academy of Sciences, 115(44), 11138-11143 (2018).
  • (22) G. Dematteis, T. Grafke, E. Vanden-Eijnden, Proceedings of the National Academy of Sciences, 115(5), 855-860 (2018).
  • (23) A. J. Majda, M. N. J. Moore, D. Qi, Proceedings of the National Academy of Sciences, 116(10), 3982-3987 (2019).
  • (24) L. Salmela, C. Lapre, J. M. Dudley, G. Genty, Scientific Reports, 10(1), 1-8 (2020).
  • (25) J. X. Tao, J. B. Huang, L. Yu, Z. K. Li, H. S. Liu, B. Yuan, D. L. Zeng, Food Hydrocolloids, 74(28), 151-158 (2017).
  • (26) R. Q. Wang, L. M. Ling, D. L. Zeng and B. F. Feng, Communications in Nonlinear Science and Numerical Simulation, 101, 105896 (2021).
  • (27) L. C. Zhao and L. M. Ling, J. Opt. Soc. Am. B 33, 850-856 (2016).
  • (28) P. Gao, L. C. Zhao, Z. Y. Yang, X. H. Li, and W. L. Yang, Opt. Lett. 45, 2399-2402 (2020)
  • (29) J. Yang, Nonlinear waves in integrable and nonintegrable systems. Society for Industrial and Applied Mathematics (2010).
  • (30) D. Peregrine, J. Aust. Math. Soc. B, Appl. Math. 25, 16 (1983).
  • (31) N. Akhmediev, A. Ankiewicz, and J. Soto-Crespo, Phys. Rev. E 80, 026601 (2009).
  • (32) G. Biondini, X. Luo, Phys. Lett. A 382(37), 2632-2637 (2018).
  • (33) K. He, X. Zhang, S. Ren, and J. Sun, 2016 IEEE Conference on Computer Vision and Pattern Recognition, 770-778 (2016).

  • (34) T. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan and S. Belongie, 2017 IEEE Conference on Computer Vision and Pattern Recognition, 936-944 (2017).
  • (35) T. Lin, P. Goyal, R. Girshick, K. He and P. Dollar, 2017 IEEE International Conference on Computer Vision, 2999-3007 (2017).
  • (36) E. Goldman, R. Herzig, A. Eisenschtat, O. Ratzon, I. Levi, J. Goldberger and T. Hassner, 2019 IEEE Conference on Computer Vision and Pattern Recognition, 5227-5236 (2019).
  • (37) R. Girshick, 2015 IEEE international conference on computer vision, 1440-1448 (2015).
  • (38) S. Ren, K. He, R. Girshick, et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6), 1137-1149 (2017).
  • (39) J. Tao, J. Huang, L. Yu, Z. Li, H. Liu, B. Yuan, et al., Food Hydrocolloids, 74, 151-158 (2017).
  • (40) L. M. Ling, L. C. Zhao, Z. Y. Yang, B. Guo, Phys. Rev. E 96, 022211 (2017).
  • (41) S. Guo, A. R. Fritsch, C. Greenberg, et al., Machine Learning: Science and Technology, 2, 035020 (2021).
  • (42) S. V. Manakov, Zh. Eksp. Teor. Fiz. 67, 543 (1974) [Sov. Phys. JETP, 38, 248 (1974)].
  • (43) B. L. Guo and L. M. Ling, Chin. Phys. Lett. 28, 110202 (2011); L. C. Zhao and J. Liu, J. Opt. Soc. Am. B 29, 3119 (2012); L. C. Zhao and J. Liu, Phys. Rev. E 87, 013201 (2013).
  • (44) A. Chabchoub and M. Fink, Phys. Rev. Lett. 112, 124101 (2014); A. Przadka, S. Feat, P. Petitjeans, V. Pagneux, A. Maurel, and M. Fink, Phys. Rev. Lett. 109, 064501 (2012).
  • (45) F. Baronio, A. Degasperis, M. Conforti, and S. Wabnitz, Phys. Rev. Lett. 109, 044102 (2012); F. Baronio, M. Conforti, A. Degasperis, and S. Lombardo, Phys. Rev. Lett. 111, 114101 (2013); S. Chen, Y. Ye, J. M. Soto-Crespo, Ph. Grelu, and F. Baronio, Phys. Rev. Lett. 121, 104101 (2018).
  • (46) S. Chen and L.Y. Song, Phys. Rev. E 87, 032910 (2013); S. Chen, Phys. Lett. A 378, 2851 (2014); S. Chen and D. Mihalache, J. Phys. A 48, 215202 (2015); S. Chen, X. M. Cai, P. Grelu, J. Soto-Crespo, S. Wabnitz, and F. Baronio, Opt. Express 24, 5886 (2016).
  • (47) L. C. Zhao, L. Duan, P. Gao, Z. Y. Yang, EuroPhys. Lett. 125, 40003 (2019).
  • (48) A. Tikan, C. Billet, G. El, A. Tovbis, M. Bertola, T. Sylvestre, F. Gustave, S. Randoux, G. Genty, P. Suret, and J. M. Dudley, Phys. Rev. Lett. 119, 033901 (2017).
  • (49) L. Li, Z. Wu, L. Wang, and J. He, Ann. Phys. 334, 198 (2013); J. He, H. Zhang, L. Wang, K. Porsezian, and A. Fokas, Phys. Rev. E 87, 052914 (2013).
  • (50) Y. V. Kartashov, V. V. Konotop, M. Modugno, and E. Ya. Sherman, Phys. Rev. Lett. 122, 064101 (2019).
  • (51) G. Q. Zhang, L.M. Ling, Z.Y. Yan, Journal of Nonlinear Science, 31(5), 1-52 (2021).
  • (52) Y. R. Chen, B. F. Feng, L. M. Ling, Physica D: Nonlinear Phenomena, 424, 132954 (2021).
  • (53) B. F. Feng, L. M. Ling, D. A. Takahashi, Studies in Applied Mathematics, 144(1), 46-101 (2020).
  • (54) Y. F. Mo, L. M. Ling and D. L. Zeng, Phys. Lett. A, 127739 (2021).