Sound source ranging using a feed-forward neural network with fitting-based early stopping

04/01/2019 · Jing Chi et al. · University of California, San Diego; Ocean University of China

When a feed-forward neural network (FNN) is trained for source ranging in an ocean waveguide, it is difficult to evaluate the ranging accuracy of the FNN on unlabeled test data. A fitting-based early stopping (FEAST) method is introduced to evaluate the ranging error of the FNN on test data for which the source range is unknown. Based on FEAST, training is stopped when the evaluated ranging error of the FNN reaches its minimum on the test data, which improves the ranging accuracy of the FNN on the test data. FEAST is demonstrated on simulated and experimental data.


I. Introduction

Matched field processing (MFP) Bucker_1 ; Tolstoy_2 ; Baggeroer_3 ; Gingras_a1 ; Mecklenbrauker_a2 ; Debever_a3 for source localization has been developed for many years. Its performance can be limited by its sensitivity to the mismatch between model-generated replica fields and measurements. With the development of machine learning, source localization methods based on machine learning have been revived Niu_8 ; Niu_23 ; Wang_24 ; Ferguson_25 ; Huang_9 . As early as 1991, Steinberg et al. Steinberg_7 applied perceptrons to source localization in a homogeneous medium. Recently, Niu et al. Niu_8 ; Niu_23 performed ship ranging using a feed-forward neural network (FNN) trained on experimental data. A generalized regression neural network (NN) Wang_24 and a convolutional NN Ferguson_25 have also been trained on experimental data for underwater source ranging. Although a NN can be trained on experimental data, ocean acoustic experimental data with range labels are difficult to obtain in quantity, so training a NN on experimental data for source ranging in an ocean waveguide is cumbersome. Considering the scarcity of experimental data, Huang et al. Huang_9 combined simulated data from similar environments to train a deep NN for source localization. However, because the ocean waveguide environment varies in space and time, the test data often differ from the training data even when the training data include both simulated and experimental data. Therefore, the NN with the minimum ranging error on the training data may not achieve the minimum ranging error on the test data. If the source range of part of the test data were known, that part could be used as validation data with the source range as labels, and early stopping Garvesh_20 ; Prechelt_21 could be used to improve the ranging accuracy of the NN on the test data.

Early stopping is a form of regularization based on choosing when to stop an iterative training algorithm; it is usually used to enhance the generalization performance of a NN and to combat overfitting Garvesh_20 ; Prechelt_21 . Generalization performance means a small error on examples not seen during training. In early stopping, the validation error, i.e., the average error of the NN computed on labeled validation data, is used as the criterion for stopping training Garvesh_20 ; Prechelt_21 : training is stopped when the validation error reaches its minimum, so the error on the validation data is reduced. Generally, however, test data do not contain labels and cannot be used as validation data, so early stopping cannot be used to improve the ranging accuracy of the NN on the test data. If the ranging error of the NN on the test data can be evaluated, it can instead be used as the stopping criterion, so as to optimize the ranging accuracy of the NN on the test data.
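To make the standard criterion concrete, the following is a minimal sketch (not from the paper) of validation-based early stopping; the helper names train_one_epoch, validation_error, and the get_weights/set_weights accessors are hypothetical placeholders for whatever training framework is used.

```python
import numpy as np

def early_stopping_train(model, train_one_epoch, validation_error,
                         max_epochs=1000, patience=20):
    """Stop training when the error on labeled validation data stops decreasing."""
    best_err, best_weights, wait = np.inf, None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model)                 # one pass over the training data
        err = validation_error(model)          # average error on labeled validation data
        if err < best_err:                     # validation error still improving
            best_err, best_weights, wait = err, model.get_weights(), 0
        else:
            wait += 1
            if wait >= patience:               # no improvement for `patience` epochs: stop
                break
    model.set_weights(best_weights)            # keep the weights with minimum validation error
    return model, best_err
```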

In this paper, a FNN is trained on simulated data for source ranging in an ocean waveguide. Different from Niu_8 ; Niu_23 ; Ferguson_25 ; Huang_9 and from the standard early stopping method, the evaluated ranging error of the FNN on test data for which the source range is unknown is used as the criterion for stopping training. To evaluate this ranging error, a method called fitting-based early stopping (FEAST) is introduced. Assuming that the track of an underwater source satisfies a known parameterized function, FEAST evaluates the ranging error of the FNN by parameter fitting. FEAST is demonstrated on simulated and experimental data.

II. Simulation data preparation, FNN architecture and learning parameters

Figure 1: (Color online) (a) Sound speed profile (SSP). (b) Seabed parameters. (c) Architecture of the FNN with 1024 neurons in each hidden layer, 462 neurons in the input layer and 201 in the output layer.

It is useful to introduce the parameters used for the simulation. Let E1 represent a range-independent ocean waveguide used for modeling the training data. The parameters of E1 are taken from the S5 event of the SWellEx-96 experiment website_12 . The sound speed profile (SSP) and the seabed parameters of E1 are shown in Fig. 1 (a) and (b). The vertical line array (VLA) had 21 elements spanning 94.125–212.25 m in depth. Let E2 represent a range-independent ocean waveguide used for modeling the test data. Except for the SSP [see Fig. 1 (a)], the parameters of E2 are the same as those of E1.

The simulated training and test data sets are prepared as follows. A domain $\mathcal{A}$ is selected in E1, spanning 1100–5000 m in range from the VLA and 1–30 m below the sea surface, and a training set of $N$ samples is constructed by choosing source locations uniformly in $\mathcal{A}$. Let $\mathbf{p}_n$ denote the acoustic signal received by the VLA when a 232 Hz point source is at the $n$th location, computed with Kraken kraken . The input of the FNN is constructed by vectorizing the normalized sample covariance matrix (SCM) $\mathbf{C}_n$ of $\mathbf{p}_n$, see Niu_8 . Considering the Hermitian symmetry of the complex matrix $\mathbf{C}_n$, the real and imaginary parts of its diagonal and upper triangular entries yield $21\times 22 = 462$ real numbers, which make up the input vector $\mathbf{x}_n$ of the FNN. The label $\mathbf{t}_n$ in the training set is obtained by dividing the 1100–5000 m range interval uniformly into 201 parts and encoding the range information in a 201-dimensional vector: if a source lies in the $k$th part, the $k$th element of $\mathbf{t}_n$ is 1 and all others are 0. The test set is generated by a moving 232 Hz point source positioned 9 m below the sea surface that moves away from the VLA in E2 at uniform velocity; see the black solid line in Fig. 2 (b). The VLA records data every 10 s and records 80 sets of data; while recording, the moving point source is treated as static. The 80 test samples are then constructed in the same way as the training data, except that the test data contain no labels. The differences between the training and test data are mainly caused by the environmental differences.
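As an illustration of this preprocessing, the sketch below forms a normalized SCM from narrowband 232 Hz pressure snapshots on the 21-element VLA (assumed to be already available), vectorizes its upper triangle into the 462-dimensional input, and one-hot encodes the range into 201 bins. The exact normalization follows the general recipe of Niu_8 and may differ in detail from the authors' code.

```python
import numpy as np

L = 21           # number of VLA elements
N_CLASSES = 201  # number of range bins
R_MIN, R_MAX = 1100.0, 5000.0   # range interval of the training domain (m)

def scm_input(snapshots):
    """Vectorize the normalized SCM into the 462-dimensional FNN input.

    snapshots : complex array, shape (n_snapshots, L), narrowband pressure on the VLA.
    """
    # normalize each snapshot so the input is insensitive to the source level
    p = snapshots / np.linalg.norm(snapshots, axis=1, keepdims=True)
    C = (p.T @ p.conj()) / p.shape[0]       # averaged SCM, shape (L, L), Hermitian
    iu = np.triu_indices(L)                 # upper triangle including the diagonal
    return np.concatenate([C[iu].real, C[iu].imag])   # 2 * L*(L+1)/2 = 462 real numbers

def range_label(r):
    """One-hot encode a source range into one of 201 uniform range bins."""
    k = int((r - R_MIN) / (R_MAX - R_MIN) * N_CLASSES)
    k = min(max(k, 0), N_CLASSES - 1)
    t = np.zeros(N_CLASSES)
    t[k] = 1.0
    return t
```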

A four-hidden-layer FNN with 1024 neurons in each hidden layer is used, see Fig. 1 (c). The input layer has 462 neurons and the output layer has 201 neurons. The sigmoid function is used as the activation function of the neurons in the hidden layers and the softmax function in the output layer. The FNN is trained in TensorFlow with a learning rate of 0.0005, and the cross-entropy loss function is chosen to optimize the FNN. The cross-entropy loss function is

$$f_1(E) = -\frac{1}{N}\sum_{n=1}^{N}\left\{\mathbf{t}_n^{\mathrm{T}}\ln\!\left[f_E(\mathbf{x}_n)\right] + (\mathbf{1}-\mathbf{t}_n)^{\mathrm{T}}\ln\!\left[\mathbf{1}-f_E(\mathbf{x}_n)\right]\right\}, \qquad (1)$$

where $E$ denotes the epoch, a measure of the number of iterations in training, $N$ is the number of training samples, $n$ indexes the training samples, $\mathbf{x}_n$ is the input to the FNN, $\mathbf{t}_n$ is the label of $\mathbf{x}_n$, $f_E$ represents the FNN trained for $E$ epochs, the superscript $\mathrm{T}$ denotes the transpose, and $\mathbf{1}$ is a vector with all elements 1. The $k$th element of $f_E(\mathbf{x}_n)$ represents the probability of a source being in the $k$th part of the range interval. The maximum of $f_E(\mathbf{x}_n)$ indicates the likely source position, and the corresponding range bin gives the predicted source–VLA range $r^{\mathrm{pre}}(E)$. When $f_E(\mathbf{x}_n)=\mathbf{t}_n$ for all $n$, $f_1(E)$ attains its minimum of 0. Fig. 2 (a) shows $f_1(E)$.
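A minimal TensorFlow/Keras sketch of this network and training setup is given below; the optimizer (Adam), the batch size, and the use of Keras's built-in binary cross-entropy (which matches Eq. (1) up to a constant factor) are assumptions, since the paper only specifies the architecture, activations, learning rate, and loss type.

```python
import numpy as np
import tensorflow as tf

R_MIN, R_MAX, N_CLASSES = 1100.0, 5000.0, 201   # range interval and number of output bins

def build_fnn():
    """FNN of Fig. 1(c): 462 inputs, four 1024-neuron sigmoid hidden layers, 201 softmax outputs."""
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(1024, activation="sigmoid", input_shape=(462,)))
    for _ in range(3):
        model.add(tf.keras.layers.Dense(1024, activation="sigmoid"))
    model.add(tf.keras.layers.Dense(N_CLASSES, activation="softmax"))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),  # optimizer choice assumed
                  loss="binary_crossentropy")   # elementwise cross-entropy, as in Eq. (1)
    return model

def predict_range(model, X):
    """Map the most probable output bin to the centre of the corresponding range bin."""
    k = np.argmax(model.predict(X, verbose=0), axis=1)
    return R_MIN + (k + 0.5) * (R_MAX - R_MIN) / N_CLASSES
```

The network would then be trained with model.fit on the simulated inputs and one-hot labels, and predict_range converts its output into the predicted source–VLA range used below.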

Figure 2: (Color online) Simulated data. (a) The loss functions $f_1(E)$ (solid) and $f_2(E)$ (dashed); $f_2(E)$ reaches its minimum at the FEAST stopping epoch. (b) Predicted ranges with FEAST stopping (diamonds) and without FEAST stopping (circles), the range from MFP (crosses), and the real range of the source (solid). (c) Relative mean square error (RMSE) as a function of $E$. (d)-(f) Predicted ranges (diamonds) and the fitted track (circles) at three different epochs.

III. Basic idea of FEAST

Although the test data do not contain labels, the performance of a FNN on the test data can be evaluated in some situations. If the expected output of the FNN on the test data satisfies a known parameterized function $g(\boldsymbol{\theta}, c_m)$, where $\boldsymbol{\theta}$ represents the unknown fitting parameters, $m$ indexes the test data and $c_m$ is a known parameter, then the fitting residual $\min_{\boldsymbol{\theta}} \frac{1}{M}\sum_{m=1}^{M}\left[r_m^{\mathrm{pre}}(E) - g(\boldsymbol{\theta}, c_m)\right]^2$ can be used to evaluate the performance on the test data, where $M$ is the number of test data. Take a source moving away from a VLA as the example in this paper. Generally, the distance between a moving source and the VLA is a simple curve in the time-distance plane, which can be fitted by polynomials of finite order. For example, if the source moves away from the VLA at constant speed, the function is a first-order polynomial in time, and if the source moves away from the VLA at constant acceleration, the function is a second-order polynomial. For simplicity, only a source moving at constant speed is considered, thus $g(\boldsymbol{\theta}, c_m) = \theta_1 c_m + \theta_0$, where $c_m$ is the time instant of the $m$th test sample and $\boldsymbol{\theta} = (\theta_0, \theta_1)$. Define the loss function

$$f_2(E) = f_1(E) + \lambda\, f_3(E), \qquad (2)$$

where $f_1(E)$ is defined in Eq. (1),

$$f_3(E) = \min_{\boldsymbol{\theta}} \frac{1}{M}\sum_{m=1}^{M}\left[r_m^{\mathrm{pre}}(E) - g(\boldsymbol{\theta}, c_m)\right]^2, \qquad (3)$$

and $\lambda$ is a regularization parameter. Here

$$\lambda = \frac{\max_E f_1(E)}{\max_E f_3(E)} \qquad (4)$$

to make the maximum values of the two terms on the right side of Eq. (2) equal to each other; generally, both terms reach their maximum values at small $E$ (they do so in the early epochs in this paper). The first term on the right side of Eq. (2) is the loss function defined on the training data, which prevents the criterion from being satisfied merely by the initialization of the FNN; the second term computes the difference between the predicted ranges $r_m^{\mathrm{pre}}(E)$ and the parametric model of known form $g(\boldsymbol{\theta}, c_m)$ and thus evaluates the ranging error of the FNN on the test data. When $f_2(E)$ reaches its minimum or converges, training is stopped. Note that the training process of the FNN is driven by optimizing $f_1(E)$; $f_2(E)$ only indicates when to stop. Because $f_2(E)$ must be calculated by fitting parameters and it reaches its minimum before $f_1(E)$ does, this method is called fitting-based early stopping (FEAST). FEAST is not restricted to FNNs; it can also be used with other types of neural networks to improve the ranging accuracy on test data.
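The following sketch illustrates one way to implement the FEAST stopping rule of Eqs. (2)-(4) for the constant-speed track, reusing the hypothetical predict_range helper from the previous sketch; the per-epoch loop, batch size, and the bookkeeping of weights are implementation assumptions rather than the authors' code.

```python
import numpy as np

def fitting_error(r_pred, t):
    """Eq. (3): least-squares fit of a first-order polynomial (constant speed) to the
    predicted ranges over time, returning the mean squared residual."""
    a, b = np.polyfit(t, r_pred, deg=1)        # fitted track g(theta, t) = a*t + b
    return np.mean((r_pred - (a * t + b)) ** 2)

def feast_train(model, X_train, T_train, X_test, t_test, predict_range, max_epochs=300):
    """Train with the cross-entropy of Eq. (1), then stop at the epoch minimizing Eq. (2)."""
    f1, f3, weights = [], [], []
    for epoch in range(max_epochs):
        hist = model.fit(X_train, T_train, epochs=1, batch_size=32, verbose=0)
        f1.append(hist.history["loss"][0])                             # f1(E): training loss
        f3.append(fitting_error(predict_range(model, X_test), t_test)) # f3(E): fitting error
        weights.append(model.get_weights())
    lam = max(f1) / max(f3)                    # Eq. (4): equalize the maxima of the two terms
    f2 = np.array(f1) + lam * np.array(f3)     # Eq. (2)
    best = int(np.argmin(f2))                  # FEAST stopping epoch
    model.set_weights(weights[best])           # roll the FNN back to that epoch
    return model, best
```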

To demonstrate FEAST, the test data prepared in Sec. II are used to calculate $f_2(E)$. Fig. 2 (a) shows $f_2(E)$, which has its minimum at the FEAST stopping epoch. To facilitate the understanding of FEAST, Fig. 2 (d)-(f) show the predicted ranges $r_m^{\mathrm{pre}}(E)$ and the fitted track $g(\boldsymbol{\theta}, c_m)$ at different epochs; one finds that they are similar when $f_2(E)$ reaches its minimum. Fig. 2 (b) indicates that the predicted ranges at the FEAST stopping epoch are close to the true range of the test data. For comparison, Fig. 2 (b) also shows the predicted ranges without FEAST stopping and the range from MFP, where the ocean waveguide environment used in MFP is E1. Except for the points near 620 s, the range from MFP and the FEAST result are almost the same and deviate slightly from the true source range, which is caused by the difference between E1 and E2. The ranging results without FEAST stopping, however, show larger deviations. Define the relative mean square error (RMSE) for ranging:

$$\mathrm{RMSE}(E) = \sqrt{\frac{1}{M}\sum_{m=1}^{M}\left[\frac{r_m^{\mathrm{pre}}(E) - r_m^{\mathrm{gt}}}{r_m^{\mathrm{gt}}}\right]^2}, \qquad (5)$$

where $r_m^{\mathrm{pre}}(E)$ and $r_m^{\mathrm{gt}}$ are the predicted range and the ground-truth range of the $m$th test sample. Fig. 2 (c) gives the RMSE as a function of $E$. One finds that at the FEAST stopping epoch the RMSE (0.0252) is close to its minimum (0.0251), which verifies FEAST.
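For reference, Eq. (5) can be computed with a few lines of numpy; r_pred and r_true stand for the predicted and ground-truth ranges of the test samples.

```python
import numpy as np

def relative_rmse(r_pred, r_true):
    """Relative mean square error of Eq. (5): RMS of the relative range errors."""
    return np.sqrt(np.mean(((r_pred - r_true) / r_true) ** 2))
```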

IV. Experimental results

Figure 3: (Color online) Experimental data. (a) The loss function $f_2(E)$ reaches its minimum at the FEAST stopping epoch. (b) Predicted ranges with FEAST stopping (diamonds) and without FEAST stopping (circles), the range from MFP (crosses), and the range from GPS (solid line). (c) RMSE as a function of $E$. (d)-(f) Predicted ranges (diamonds) and the fitted track (circles) at three different epochs.

FEAST is demonstrated with the experimental data from SWellEx-96 Event S5 website_12 . Only the 232 Hz shallow source, towed at a depth of about 9 m, is considered. An 800 s segment of the data recorded by the VLA is selected to prepare the experimental test data; every 10 s of data is used to construct one test sample, so the experimental test set contains 80 samples. Fig. 3 (a) shows $f_2(E)$ computed from the experimental test data, which reaches its minimum at the FEAST stopping epoch. Fig. 3 (d)-(f) show the predicted ranges and the fitted track at different epochs, and one again finds that they are similar when $f_2(E)$ reaches its minimum. Fig. 3 (b) indicates that the predicted ranges at the FEAST stopping epoch are close to the GPS range. For comparison, Fig. 3 (b) also shows the ranging results without FEAST stopping and the range from MFP, where the ocean waveguide environment used in MFP is E1. Except for the points at 20, 60 and 70 s, the range from MFP and the FEAST result are almost the same and deviate slightly from the GPS range of the moving source, which is caused by the difference between E1 and the experimental environment. The ranging results without FEAST stopping, however, show larger deviations from the GPS range. Fig. 3 (c) gives the RMSE as a function of $E$. At the FEAST stopping epoch the RMSE (0.0577) is close to its minimum (0.0528), which verifies FEAST again.

V. Conclusion

A method called FEAST is introduced to evaluate the ranging error of a FNN for source ranging on a test data set. FEAST is demonstrated on simulated and experimental data. Because FEAST requires that the trajectory of the moving sound source satisfies a known parameterized function, it is suited to data post-processing rather than real-time processing. The results indicate that FEAST improves the ranging accuracy of the FNN on the test data. FEAST is used for source ranging in this paper, but it can be applied to other problems in which the expected output satisfies a known parameterized function.

Acknowledgements.
This work is supported by the National Natural Science Foundation of China under Grant Nos. 11674294 and 11704359, the Fundamental Research Funds for the Central Universities under Grant No. 201861011 and Qingdao National Laboratory for Marine Science and Technology Foundation under Grant No. QNLM2016ORP0106. The authors also thank Ning Wang and Ruichun Tang for their useful suggestions for this paper.

References

  • (1) H. P. Bucker, “Use of calculated sound fields and matched field detection to locate sound source in shallow water,” J. Acoust. Soc. Am. 59, 368–373 (1976).
  • (2) A. Tolstoy, Matched Field Processing for Underwater Acoustics (World Scientific, Singapore, 1993).
  • (3) A. B. Baggeroer, W. A. Kuperman, and P. N. Mikhalevsky, “An overview of matched field methods in ocean acoustics,” IEEE J. Ocean. Eng. 18, 401–424 (1993).
  • (4) D. F. Gingras and P. Gerstoft, “Inversion for geometric and geoacoustic parameters in shallow water: Experimental results,” J. Acoust. Soc. Am. 97, 3589–3598 (1995).
  • (5) C. F. Mecklenbräuker and P. Gerstoft, “Objective functions for ocean acoustic inversion derived by likelihood methods,” J. Comput. Acoust. 8, 259–270 (2000).
  • (6) C. Debever and W. A. Kuperman, “Robust matched-field processing using a coherent broadband white noise constraint processor,” J. Acoust. Soc. Am. 122, 1979–1986 (2007).
  • (7) H. Niu, E. Reeves, and P. Gerstoft, “Source localization in an ocean waveguide using supervised machine learning,” J. Acoust. Soc. Am. 142, 1176–1188 (2017).
  • (8) H. Niu, E. Ozanich, and P. Gerstoft, “Ship localization in Santa Barbara Channel using machine learning classifiers,” J. Acoust. Soc. Am. 142, EL455–EL460 (2017).
  • (9) Y. Wang and H. Peng, “Underwater acoustic source localization using generalized regression neural network,” J. Acoust. Soc. Am. 143, 2321–2331 (2018).
  • (10) E. L. Ferguson, R. Ramakrishnan, S. B. Williams, and C. T. Jin, “Convolutional neural networks for passive monitoring of a shallow water environment using a single sensor,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (2017), pp. 2657–2661.
  • (11) Z. Q. Huang, J. Xu, Z. Gong, H. Wang, and Y. Yan, “Source localization using deep neural networks in a shallow water environment,” J. Acoust. Soc. Am. 143, 2922–2932 (2018).
  • (12) B. Z. Steinberg, M. J. Beran, S. H. Chin, and J. H. Howard, “A neural network approach to source localization,” J. Acoust. Soc. Am. 90, 2081–2090 (1991).
  • (13) G. Raskutti, M. J. Wainwright, and B. Yu, “Early stopping and non-parametric regression: An optimal data-dependent stopping rule,” Journal of Machine Learning Research 15, 335–366 (2014).
  • (14) L. Prechelt, “Automatic early stopping using cross validation: quantifying the criteria,” Neural Networks 11, 761–767 (1998).
  • (15) J. Murray and D. Ensberg, “The SWellEx-96 experiment,” http://swellex96.ucsd.edu/ (Last viewed April 29, 2003).
  • (16) M. B. Porter, “The KRAKEN normal mode program,” Naval Research Lab, Washington, DC (1992).