
A Machine Learning-based Characterization Framework for Parametric Representation of Nonlinear Sloshing

The growing interest in creating a parametric representation of liquid sloshing inside a container stems from its practical applications in modern engineering systems. Resonant excitation, however, can cause unstable and nonlinear water waves, resulting in chaotic motions and non-Gaussian signals. This paper presents a novel machine learning-based framework for nonlinear liquid sloshing representation learning. The proposed method is a parametric modeling technique based on sequential learning and sparse regularization. The dynamics are decomposed into two parts: linear evolution and nonlinear forcing. The former advances the dynamical system in time on an embedded manifold, while the latter causes divergent behaviors in temporal evolution, such as bursting and switching. The proposed framework's merit is demonstrated using an experimental dataset of liquid sloshing in a tank under horizontal excitation with a wide frequency range and various vertical slat screen settings.



1 Introduction

Sloshing liquid in a partially filled tank is a source of concern in many engineering fields ibrahim2005liquid; Faltinsen2009sloshing. When the external excitation amplitude is large, or the excitation frequency is close to the natural sloshing frequencies, an unrestrained free liquid surface is prone to violent wave motions. Excessive water motions escalate the hydrodynamic forces acting on the tank walls, resulting in severe structural damage, for instance, an oil leak or gas explosion hatayama2008lessons. Vertical slat screens are one of the devices developed to suppress resonantly forced sloshing. These perforated plate-based screens provide efficient wave damping by promoting vortices in the water motion and dissipating energy via the turbulence cascade. Understanding and mastering the fundamental principles governing liquid-plate interactions is a necessary first step toward the development of the next generation of offshore platforms, gas containers, and other naval systems rebouillat2010fluid; maleki2008sloshing; hasheminejad2012sloshing; yu2019experimental; yu2020experimental.

Partial differential equations (PDEs) are frequently used to describe the principles that govern physical systems in the modern world. These specified PDEs, also known as governing equations, aid in the establishment of scientific fields and the rapid advancement of technology. In the context of liquid sloshing, potential flow theory was used to simulate small-amplitude sloshing motion in a regular-configuration container faltinsen1974nonlinear; molin2013experimental. The Laplace equation was later used to analyze nonlinear sloshing problems in two/three dimensions for more turbulent scenarios. When solved using the finite element method or the boundary element method, the computed results can capture free-surface sloshing motions in various liquid storage tanks faitinsen1978numerical; cho2005finite; biswal2006non. Another possibility is to use the Navier–Stokes equations to solve the problem. To simulate viscous liquid sloshing in a baffled tank with possibly broken free surfaces, several turbulence models, including Reynolds-averaged Navier–Stokes (RANS) and large-eddy simulation (LES), were evaluated xue2011numerical; liu2009three; liu2016comparison. However, when large-amplitude sloshing occurs, traditional finite element-based methods frequently fail to provide accurate results. To address this issue, advanced numerical methods such as arbitrary Lagrangian-Eulerian elements yong2017simulation; tang2019improved and smoothed particle hydrodynamics iglesias2004simulation; liu2010smoothed have been introduced, where the former focuses on maintaining a high-quality mesh throughout an analysis when large deformation occurs and the latter is meshless and uses a collection of pseudo-particles to represent a given body. While knowing these equations and using advanced numerical methods allow us to simulate liquid sloshing in a variety of tanks, reliable and high-fidelity PDE-governed models remain elusive in many tank and baffle settings.

Fortunately, the unprecedented availability of data today promises a renaissance in the analysis and understanding of physical systems. This begs the question of whether we can extract governing equations from data bongard2007automated; schmidt2009distilling; brunton2016discovering. Historically, scientists have attempted to discover the underlying governing equations by strictly adhering to first principles such as conservation laws, geometric Brownian motion assumptions, and knowledge-based deductions hughes2005isogeometric; bathe2006finite. However, these classical approaches are limited in their capacity to establish explicit analytical governing equations and leave a large number of complex systems unexplored, as in climate science, neuroscience, and epidemiology, to name only a few examples. On the other hand, machine learning (ML) techniques have demonstrated great success in a wide range of scientific and engineering applications, paving the way for automated discovery of governing equations and characterization of dynamical systems such as liquid sloshing jordan2015machine; duraisamy2019turbulence; brunton2020machine.

Recently, substantial emphasis has been placed on the possibility of combining machine learning with traditional numerical and experimental methods. Xie and Zhao xie2021sloshing, for example, integrated deep reinforcement learning and expert knowledge into the current system control method for reducing tank sloshing using two active-controlled horizontal baffles. Zhang et al. zhang2021machine used neural networks to simulate the influence of viscous dissipation on the sloshing process, adaptively adjusting the corresponding damping coefficient defined in the numerical simulator. Teja et al. teja2021identification suggested a convolutional neural network-based approach for classifying the forms of sloshing noise generated by vehicle brakes, namely, hit and splash. Ahn et al. ahn2019database used artificial neural networks to forecast the severity of sloshing loads under a variety of operational and environmental conditions. Nonetheless, very few studies have been reported in the literature regarding nonlinear sloshing characterization using ML techniques. In zhang2015proper, proper orthogonal decomposition was applied to approximate the dynamics of wave impacts in a sloshing tank. It is noted that a rigorous numerical model is still needed when violent wave motions and long-time predictions are involved. Very recently, Moya et al. moya2019learning; moya2020physically examined the learning ability of locally linear embedding and topological data analysis in constructing a reduced-order model for describing wave motions. This perspective relies on the wealth of information from previous measurements to guide forecasting, which can be data-intensive.

We herein address limitations in current ML-based sloshing characterization methods by introducing an integrated learning framework that is capable of accurately detecting chaotic dynamical behaviors, for instance, signal bursting and switching, hidden in noisy experimental measurements. Specifically, a parsimonious model combining Takens's embedding theorem and sparse learning is developed. In summary, the proposed approach (1) embeds each sloshing sequence into a higher-dimensional space via a gradient-based delay embedding method, (2) performs spectral decomposition of the embedded data, and (3) builds a parameterized model based on the decomposed coordinates. As a result, the nonlinear sloshing dynamics is split into a linear term and a nonlinear excitation term. Like a first-order Markov model, it allows for fast estimation and prediction of the underlying dynamics.

The remainder of the paper is organized as follows: Section 2 presents the computational procedures of the proposed characterization framework. Next, Section 3 presents the experimental validation of the proposed model. Finally, Section 4 summarizes the conclusions.

2 Methodology

This section discusses the details of the proposed framework for parametric characterization. The learning algorithm is divided into four primary parts, which are data collection, cluster analysis, delay embedding, and sequential sparse learning.

2.1 Step 1: data collection

Consider a series of measurements of some physical quantities that represent the dynamics of liquid sloshing in a partially filled container. A wave probe was used in this study to measure the free-surface elevation near the left wall of a rectangular water tank yu2019experimental; yu2020experimental. Without loss of generality, time histories of the free surface elevations can be expressed in the following form:

(1)

with representing the probe measurement at time and containing the experimental parameters, for example, the number and positions of baffles. The purpose here is to characterize observed dynamics and then predict sloshing waves, i.e., predicting future free-surface elevation states by analyzing present and prior states. We conducted a variety of trials encompassing a wide range of frequencies and baffle designs in order to develop a reliable data-driven model yu2019experimental; yu2020experimental. Based on Eq. 1, these experimental data can be synthesized into a data matrix in the following manner:

(2)

where denotes the number of experiments.
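As a concrete sketch, the stacking in Eq. 2 can be implemented as follows; the function name and the truncation-to-shortest-trial convention are illustrative assumptions, since the paper does not specify how unequal-length records are aligned.

```python
import numpy as np

def assemble_data_matrix(trials):
    """Stack recorded free-surface elevation sequences (Eq. 2), one
    experiment per row; sequences are truncated to the shortest trial so
    the result is rectangular (an illustrative convention)."""
    n = min(len(tr) for tr in trials)
    return np.vstack([np.asarray(tr[:n], dtype=float) for tr in trials])
```

Each row then corresponds to one experimental configuration (excitation frequency and baffle setting), ready for the cluster analysis of Step 2.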

2.2 Step 2: cluster analysis

The underlying dynamics of our experimental findings are then classified using cluster analysis kaufman2009finding; aghabozorgi2015time; liao2005clustering, a powerful method for exploratory pattern analysis and data mining. The objective of the cluster analysis is to group similar sequences in into the same group:

(3)

where is the number of desired clusters. Approaches typically used to cluster time series data fall into three categories: shape-based, feature-based, and model-based kaufman2009finding; aghabozorgi2015time; liao2005clustering; li2017cluster; zhang2020machine; luo2021dynamic. Among them, feature-based approaches convert the raw time-ordered sequences into a feature vector of lower dimension kaufman2009finding; aghabozorgi2015time. This method is efficient in analyzing high-dimensional nonlinear dynamics and is therefore utilized in this paper.

1:Input: and
2:Compute the feature vector of each sequence
3:Encode the feature vector to a latent space with lower dimension by proper orthogonal decomposition:
4:Assign initial values for
5:while  and is smaller than the maximum number of iterations do
6:   for  do
7:      Assign each encoded vector to the cluster which has the closest mean
8:      Calculate new mean for each cluster
9:   end for
10:   Update iteration number
11:end while
12:Output: clustered data
Algorithm 1 Compressed features-based clustering

The extracted features dictate the performance of feature-based techniques. It is worth noting that features that work well on one sequence may not work well on another. While expanding the feature map by adding more features might improve its generalization capability, it also increases computational complexity linearly aghabozorgi2015time; liao2005clustering. To balance computational efficiency and model performance, we present a compressed features-based clustering technique (CFC) for quick cluster analysis of embedded sloshing sequences. The computational procedure of the proposed CFC algorithm is detailed in Algorithm 1.
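A minimal sketch of Algorithm 1, assuming an illustrative feature map (mean, standard deviation, extrema, mean absolute increment) and a plain k-means loop; the paper's exact features, POD dimension, and initialization scheme are not reproduced here.

```python
import numpy as np

def cfc_cluster(sequences, n_clusters, n_pod=4, max_iter=100, seed=0):
    """Compressed features-based clustering (CFC) sketch for an
    (n_sequences, n_samples) array of sloshing records."""
    # Step 2: a simple per-sequence feature vector (illustrative features)
    feats = np.column_stack([
        sequences.mean(axis=1), sequences.std(axis=1),
        sequences.max(axis=1), sequences.min(axis=1),
        np.abs(np.diff(sequences, axis=1)).mean(axis=1),
    ])
    # Step 3: encode features to a lower-dimensional latent space via POD (SVD)
    feats = feats - feats.mean(axis=0)
    U, S, Vt = np.linalg.svd(feats, full_matrices=False)
    z = feats @ Vt[: min(n_pod, Vt.shape[0])].T   # compressed features
    # Steps 4-11: plain k-means on the compressed features
    rng = np.random.default_rng(seed)
    centers = z[rng.choice(len(z), n_clusters, replace=False)]
    for _ in range(max_iter):
        d = np.linalg.norm(z[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([
            z[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
            for k in range(n_clusters)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels
```

Because the k-means loop operates on the few compressed coordinates rather than the full feature map, the per-iteration cost stays low even as more raw features are added.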

2.3 Step 3: delay embedding

Many studies have demonstrated that the state of a deterministic dynamical system contains the information required to faithfully characterize the evolution of the system takens1981detecting. In this context, it means a free-surface elevation sequence recorded as a vector can be used to gain information about the dynamics of the entire state space of the system. However, the intrinsic dynamics associated with each cluster are highly nonlinear, even chaotic, consisting of various complex dynamic interactions, including standing waves, broken free surfaces, overturning waves, etc. ibrahim2005liquid; Faltinsen2009sloshing. To better characterize the sloshing sequence of interest, we consider the application of time delay embedding, i.e., structuring the data prior to analysis by stacking the state variable at different time instances:

(4)

where is the delay step controlling the amount of temporal separation between two adjacent elements in and is the embedding dimension determining the spread-out projection . In practice, a reasonable choice of the embedding parameters and is critical, as they both directly affect the representation accuracy of the embedded model. Many methods have been developed to select these two parameters broomhead1986extracting; kennel1992determining; kim1999nonlinear; grassberger2004measuring. In this paper, the so-called average mutual information (AMI) kennel1992determining is adopted to determine the delay step , and the false nearest neighbors (FNNs) method grassberger2004measuring is utilized to determine the embedding dimension .
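The stacking in Eq. 4 can be sketched as follows; `delay_embed` is a hypothetical helper name, with the delay step and embedding dimension supplied by the AMI and FNN analyses described above.

```python
import numpy as np

def delay_embed(x, tau, m):
    """Stack a scalar time series x into delay vectors
    [x(t), x(t - tau), ..., x(t - (m - 1) tau)] (Eq. 4); each row of the
    returned matrix is one embedded state."""
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (tau, m)")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
```

For a toy series, `delay_embed(np.arange(10.0), tau=2, m=3)` yields a 6-by-3 matrix whose first row is `[0, 2, 4]`.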

2.4 Step 4: sequential sparse learning

With the embedded vector containing enriched dynamics, we can transform a time series into a matrix of time-dependent chunks of data:

(5)

The objective is to develop a model for that is both descriptive and predictive. To accomplish this, we investigate a recently presented model dubbed Hankel's alternative view of Koopman (HAVOK) brunton2017chaos; li2021novel. To characterize the dynamics, the HAVOK model first factorizes the embedded data matrix using spectral decomposition methods. Singular value decomposition (SVD) is used in this case because elimination-based approaches such as LU decomposition are numerically unstable when working with a nonsquare matrix golub1971singular. The SVD takes the form:

(6)

where left singular vectors are defined as the columns of an orthogonal matrix and represent eigenvectors of , right singular vectors are returned as the columns of and represent eigenvectors of , and nonnegative singular values are the entries of the diagonal matrix with . Such normalization and orthogonality properties of the singular vectors allow an efficient way of modeling the dynamics using a finite number of SVD modes taira2017modal; kutz2016dynamic:

(7)

with denoting the truncation number. In Eq. 7, the left-hand-side term can be computed by standard numerical differentiation methods such as the fourth-order central finite difference method, and the right-hand-side operator contains regression coefficients that need to be determined. A closed linear model as stated in Eq. 7, however detailed, may not be sufficient to capture and reproduce the characteristics of a highly nonlinear dynamical system. In brunton2017chaos; khodkar2021data, it is suggested to add a forcing term to the linear model to better capture the nonlinear dynamics:

(8)

where is split into two parts, i.e., the linear part and the nonlinear forcing part . The next goal is to identify the regression coefficients by sparse learning. To compute the optimal coefficients, the least-squares (LS) method is first considered. In principle, the LS method minimizes the residual sum of squares:

(9)

However, Eq. 9 is ill-conditioned and numerically unstable in the context of embedded sloshing dynamics for at least two reasons. First, the matrix is overdetermined where , indicating Eq. 9 may not produce a unique solution. Second, the inevitable noise embedded in the experimental data intensifies the numerical round-off error when inverting to evaluate the coefficients . To approximate the nonlinear dynamics more accurately, a shrinkage method tibshirani1996regression is utilized to estimate the optimal coefficients:

(10)

where is the regularization parameter that controls the shrinkage: as , and as . Often referred to as ridge regression, Eq. 10 places quadratic constraints on the model coefficients . The analytical ridge solution of this minimization problem takes the form:

(11)
1:Input: , , ,
2:Begin
3:while  do
4:   Initialize the set
5:   Evaluate the coefficient vector via Eq. 11
6:   for  do
7:      if  then
8:         
9:      else
10:         Add iterative counter to set
11:      end if
12:   end for
13:   Update the right-hand side matrix
14:   Update the coefficient vector via Eq. 11
15:end while
16:Output: coefficients matrix
Algorithm 2 Sequential coefficient optimization

with denoting an identity matrix of . By adding positive quantities to the diagonal entries, the non-full-rank condition of the sparse learning problem encountered in the LS method is partially alleviated, and the coefficient estimator becomes nonsingular tibshirani1996regression. To further enhance the uniqueness of the optimized , the ridge regression is implemented in a sequential thresholded fashion, where columns in corresponding to insignificant coefficients are automatically deleted rudy2017data. Steps to implement this sequential optimization algorithm are summarized in Algorithm 2.
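Putting Eqs. 6-11 and Algorithm 2 together, a compact sketch of the HAVOK-style regression might look as follows; the truncation rank, regularization parameter, and threshold are illustrative placeholders rather than the paper's values, and a second-order central difference stands in for the fourth-order scheme mentioned above.

```python
import numpy as np

def havok_fit(H, r, dt, lam=1e-3, thresh=1e-2, max_sweeps=10):
    """Sketch of the HAVOK regression: SVD of the delay-embedded matrix H
    (time along rows), then a sequentially thresholded ridge fit of
    dv/dt = v @ Xi, whose last column plays the role of nonlinear forcing."""
    # Spectral decomposition of the embedded data (Eq. 6), truncated to r modes;
    # with time along the rows, the left singular vectors are the temporal
    # eigen-series of Eq. 6
    U, S, Vt = np.linalg.svd(H, full_matrices=False)
    V = U[:, :r]
    # Second-order central differences for the left-hand side of Eq. 7
    dV = (V[2:] - V[:-2]) / (2.0 * dt)
    A = V[1:-1]                        # regressors aligned with dV

    def ridge(Amat, b, lam):
        # Analytical ridge solution (Eq. 11)
        return np.linalg.solve(Amat.T @ Amat + lam * np.eye(Amat.shape[1]),
                               Amat.T @ b)

    # Sequentially thresholded ridge regression (Algorithm 2)
    Xi = ridge(A, dV, lam)
    active = np.ones_like(Xi, dtype=bool)
    for _ in range(max_sweeps):
        new_small = active & (np.abs(Xi) < thresh)
        if not new_small.any():
            break
        active &= ~new_small           # delete insignificant coefficients
        Xi = np.zeros_like(Xi)
        for j in range(Xi.shape[1]):   # refit each output on its active terms
            keep = active[:, j]
            if keep.any():
                Xi[keep, j] = ridge(A[:, keep], dV[:, j], lam)
    return Xi
```

The returned coefficient matrix collects the linear evolution operator; in the split of Eq. 8 its last column would be treated as the forcing term rather than as part of the closed linear model.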

2.5 Review of framework

Figure 1: Overview of the machine learning model for modeling of liquid sloshing.

The preceding four steps are the integral elements of the proposed machine learning framework. Fig. 1 depicts the data flow across the framework graphically. There are two advantages to the proposed characterization framework that should be discussed. First, the overall framework is very adaptable, allowing one or more elements to be replaced with other data-driven models. Different off-the-shelf SVD solvers, for example, can be simply integrated within the framework. Second, no prior knowledge of data structure or topology is required. The framework is implemented automatically, making it accessible to both domain scientists and non-experts. As a result, this framework has high computational efficiency and significant deployment flexibility.

3 Results and Discussion

This section analyzes the results of the proposed machine learning framework. More precisely, we examine the characterization findings in light of the framework's four stages.

3.1 Part 1: data collection results

The experimental data for this work was provided by the Laboratory of Vibration Test and Liquid Sloshing at Hohai University in China yu2019experimental; yu2020experimental. A six-degree-of-freedom motion simulation platform, also known as a hexapod, was used to generate the liquid tank's motions based on a specified input of time histories (see Fig. 2). A rectangular plexiglass tank with internal dimensions of (length height width) is employed. It is thick and partially filled with water, with the liquid depth-to-tank-length ratio set at . Shallow water sloshing is suppressed by vertical slat screens composed of perforated steel plates. The slat screens have three different solidity ratios (, and ) and two different slot diameters ( and ), yielding six possible testing setups.

Figure 2: Experimental set-up: shake table with shallow water sloshing.

Furthermore, a horizontal sinusoidal motion defined by is applied to the platform. Note with denoting the length of the tank. The excitation frequency is set by a control computer, and a wide range of frequencies has been tested. The displacement of the motion platform was recorded by a displacement sensor. A wave probe (WH200) located at a distance of from the left vertical wall is used to track the evolution of the free-surface elevations near the tank wall. As a result, a total of sequences, where each sequence corresponds to a specific experimental configuration, have been recorded for analysis. A more detailed description of the experimental setup is available in yu2019experimental; yu2020experimental.

3.2 Part 2: clustering results

Direct cluster analysis is used as a baseline to demonstrate the efficiency of the proposed compressed features-based clustering algorithm (see Algorithm 1). The eigenvalues of the feature matrix are first arranged in decreasing magnitude so that the cumulative proportion of variance explained by the estimated eigenvalues can be examined. The cumulative eigenvalues are shown in Fig. 3 (a). It can be seen that retaining the first four POD compressed features provides a good representation of the extracted features with an adequate level of accuracy, accounting for of the variance.

Then, Silhouette analysis is used to determine the best cluster number for both approaches rousseeuw1987silhouettes. The Silhouette value , in particular, is a measure of the separation distance between the clustering results, and it has a range of . To illustrate, the computed Silhouette values have been normalized to , with indicating that the given dataset is well clustered because each sequence is clearly separated from the neighboring clusters, and indicating that the selected clustering algorithm fails to assign each sequence due to ambiguous boundaries between two clusters. In both cases, the computed results presented in Fig. 3 (b.1) and (b.2) imply that the optimal cluster number is . In direct clustering, and produce similar Silhouette values, whereas outperforms other values in the clustering evaluation using POD compressed features. As a result, the number of clusters is set to .
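For reference, the Silhouette value used above can be computed directly from its definition, s(i) = (b - a) / max(a, b); this sketch assumes datasets small enough that the full pairwise distance matrix fits in memory.

```python
import numpy as np

def silhouette_values(z, labels):
    """Per-sample Silhouette value s(i) = (b - a) / max(a, b), where a is the
    mean intra-cluster distance and b is the mean distance to the nearest
    other cluster (the separation measure used in the analysis above)."""
    z = np.asarray(z, dtype=float)
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=2)
    s = np.zeros(len(z))
    for i in range(len(z)):
        own = labels == labels[i]
        if own.sum() < 2:
            continue                  # convention: singleton clusters score 0
        a = d[i, own].sum() / (own.sum() - 1)
        b = min(d[i, labels == c].mean() for c in set(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b)
    return s
```

Averaging `silhouette_values` over all sequences for each candidate cluster count reproduces the selection curve of Fig. 3 (b).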

Figure 3: Clustering results: (a) the ratio of the cumulative sum of eigenvalues over the total sum; (b) Silhouette analysis for k-means clustering on sequence data with ; and (c) the visualization of the clustered data.

Following that, the uncompressed and compressed feature matrices are partitioned into clusters. Because the quality of the initial centroid points has a direct impact on the clustering results, a widely used heuristic approach, the k-means++ algorithm, is used to improve the quality of the center initialization in this NP-hard problem setting aghabozorgi2015time; liao2005clustering. The maximum number of iterations for the k-means++ algorithm is , and the Minkowski distance with is used to compare the similarity of two different sequences. The computed results are summarized in Fig. 3 (c.1) and (c.2). It can be seen that the POD feature-based clustering produces more visually distinct results. The direct method tends to mix data from the second and third clusters, whereas the proposed method provides a clear boundary between clusters 2 and 3.

Wavelet transforms are applied to the clustered sequences to further validate the clustering results. Three sequences are chosen at random from each cluster for demonstration. Fig. 4 summarizes the wavelet results as well as the experimental parameters for the chosen sequences. Three critical properties should be discussed here. First, frequencies close to the first natural frequency are found to maximize the computed wavelet scalogram yu2019experimental. When compared to the second and third clusters, selected sequences from the first cluster have a higher dominant frequency, that is, while and . This is because the excitation frequency used is close to the peak frequency of sloshing in the tank without baffles (0.54 Hz according to the experiments). When the tank is outfitted with porous baffles, violent sloshing is reduced, the liquid is separated into zones, and the moving amplitude is suppressed yu2020experimental. Second, several frequencies are significant for sequences chosen from the first cluster (see Fig. 4 (a.1) and (a.2)). Furthermore, as sloshing evolves, the scalograms of the first cluster tend to exhibit more complicated nonlinear characteristics within the limits of the transformed resolution. This observation agrees with the experimental finding presented in yu2019experimental, which states that when passive control devices (e.g., internal baffles) are used to suppress the maximum free-surface elevations resonated at the first natural frequency, more sloshing phenomena occur at new frequencies. Third, the second cluster sequences have a higher excitation frequency, whereas the other two clusters have higher harmonic periods. Although cluster 2 and cluster 3 sequences have only one dominant frequency band, the bandwidth is strongly influenced by the external excitation frequency (see Fig. 4 (B) and (C)).
Overall, the wavelet-based time-frequency representation provides some insight into the timing and frequencies at which a sloshing sequence intensifies. More importantly, it confirms that the proposed clustering algorithm successfully grouped sloshing sequences with similar dynamics into the same cluster.

Figure 4: Wavelet analysis results of clustered free-surface elevation sequences.

3.3 Part 3: embedding results

First, the delay step is estimated by computing the AMI value. Different initialization parameters have been substituted into Eq. 1. The probability distributions involved in the computation of AMI are estimated using histograms on and , where the intervals and are divided into and sub-intervals kennel1992determining. The probabilities are estimated by counting the number of values within the , , and sub-intervals and dividing each of these counts by .
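The histogram-based AMI estimate described above can be sketched as follows; the bin count and the first-minimum selection rule are illustrative choices.

```python
import numpy as np

def average_mutual_info(x, tau, n_bins=16):
    """Histogram-based AMI between x(t) and x(t + tau); the joint and
    marginal probabilities come from a 2-D histogram, as described above."""
    a, b = x[:-tau], x[tau:]
    joint, _, _ = np.histogram2d(a, b, bins=n_bins)
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return float(np.sum(joint[mask] *
                        np.log(joint[mask] / (pa[:, None] * pb[None, :])[mask])))

def first_minimum(x, taus, n_bins=16):
    """Pick the delay at the first local minimum of the AMI curve."""
    ami = [average_mutual_info(x, t, n_bins) for t in taus]
    for i in range(1, len(ami) - 1):
        if ami[i] < ami[i - 1] and ami[i] <= ami[i + 1]:
            return taus[i]
    return taus[int(np.argmin(ami))]
```

Sweeping `first_minimum` over each sequence in a cluster yields the per-cluster delay distributions visualized in Fig. 5.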

Fig. 5 shows the computed results for each cluster. For each candidate , the plot vertically displays the distribution of the computed AMI values for sequences within that cluster. The minimum AMI in an average sense occurs at and for clusters and , respectively. Especially for cluster , the AMI value increases rapidly from to . This can be explained by the dominant frequencies of the first cluster. As shown in the first row of Fig. 4, sequences in this cluster have multiple controlling frequencies. As surpasses the intrinsic frequency, redundant information at other frequencies begins to dominate; and are no longer sufficiently independent, giving rise to the increase in AMI value. Therefore, the delay step is set as the candidate corresponding to the actual first minimum of AMI, namely, 13, 18, and 18.

Figure 5: Selection of delay step .

Next, the embedding dimension can be selected by the FNNs method. The relation curve of the embedding dimension and the FNN percentage is presented in Fig. 6. Similarly, the dots associated with a specific dimension represent the distribution of the FNN percentage of different sequences in a cluster. It can be seen that these percentages drop by when , , and for clusters , , and , respectively, indicating the dimension where the sloshing dynamics of interest can be smoothly unfolded. Meanwhile, it should be stressed that the percentage of a few sequences starts to increase (e.g., the tail parts of Fig. 6 (b)). This phenomenon is often referred to as the noise-to-signal effect, where experimental noise overpowers the FNN statistics kim1999nonlinear; grassberger2004measuring. In many applications, this turning point serves as an indicator of the level of signal contamination, providing a way to distinguish chaotic dynamics from noise perturbations. In the following analysis, the embedding dimension is set to , 38, and 43 for clusters , , and , respectively.
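A minimal sketch of the FNN percentage computation, using the standard ratio test with an illustrative threshold; production implementations typically add a second, absolute-distance criterion and a fast neighbor search.

```python
import numpy as np

def fnn_percentage(x, tau, m, rtol=15.0):
    """False nearest neighbors percentage for embedding dimension m: the
    fraction of nearest-neighbor pairs in dimension m that separate strongly
    once the (m + 1)-th delay coordinate is added."""
    def embed(x, tau, m):
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    E = embed(x, tau, m)
    E1 = embed(x, tau, m + 1)
    E = E[: len(E1)]                   # align lengths
    false = total = 0
    for i in range(len(E1)):
        d = np.linalg.norm(E - E[i], axis=1)
        d[i] = np.inf                  # exclude the point itself
        j = int(np.argmin(d))
        if d[j] == 0:
            continue
        # Distance gained by the extra coordinate, relative to the old distance
        gain = abs(E1[i, -1] - E1[j, -1]) / d[j]
        total += 1
        false += gain > rtol
    return 100.0 * false / max(total, 1)
```

Plotting `fnn_percentage` against increasing m reproduces the drop-off curves of Fig. 6, with the knee marking the unfolding dimension.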

Figure 6: Selection of embedding dimension .

Based on the selected delay step and embedding dimension , we can assemble the input matrix for sparse learning (see Eq. 5). Here, the dynamics of the embedded state variable at each time instance is plotted in three-dimensional space, where the singular vectors , , and that pack most of the energy contained in the embedded expression are selected as the three major axes (see Eq. 6). The selected sample trajectories from the first cluster have a horse-saddle structure (see Fig. 7). The resulting attractor is skew-symmetric and diffuses from the inside out. When , the trajectory asymptotically converges to the saddle contour. The reader is referred to the web version of this paper to better observe the convergence (green states to yellow states). Meanwhile, samples from the second cluster have a truncated spinning-top structure: a point initialized at the apex spins spirally from top to bottom and gradually expands the boundaries. It can be seen from Fig. 7 that the attractor moves from a dark brown region to a brownish orange region. The third cluster yields a squeezed sponge structure. In the beginning, the attractor oscillates chaotically in space; later, it reaches a periodic orbit and traces ovals like a car on a hyper track (see Fig. 7).

Figure 7: Embedding attractors of different clusters.

3.4 Part 4: sparse learning results

The attractor is first reconstructed using the decomposed eigen components. We plot the data in the reconstructed phase space with the most energetic components , and as , and for the first time instances, as shown in Fig. 8. The remaining time instances ranging from to are used to assess prediction performance. Shown in Fig. 8, the optimally learned reconstructed attractor is qualitatively similar to the ground truth, and the trajectories of the reconstructed attractors are clearly smoother and less chaotic when fewer components are used. In practice, a large number of components will suffice to provide the necessary information for the reconstruction, and the least-squares estimates provided by Eq. 9 are unbiased. The variances of predicted values obtained by the fitted regression model, on the other hand, increase. As a result, the overfitting problem arises. For example, the number of components used in sparse learning for Fig. 8 (b), (c), and (d) are , , and , respectively. To select an appropriate number of components for sparse learning, one can use hard threshold learning methods such as asymptotic mean squared error-based singular value thresholding donoho2013optimal and optical flow-based denoising abarbanel1993analysis in addition to trial-and-error learning.
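As an example of the hard-threshold alternative mentioned above, the rule of donoho2013optimal can be sketched using the cubic approximation of its threshold coefficient; this is one possible rank selector under an additive white-noise assumption, not necessarily the one used here.

```python
import numpy as np

def optimal_svd_rank(X):
    """Hard-threshold truncation rank via the Gavish-Donoho rule
    (donoho2013optimal): keep singular values above omega(beta) times the
    median singular value, with omega given by their cubic approximation."""
    m, n = sorted(X.shape)            # aspect ratio beta = m / n <= 1
    beta = m / n
    omega = 0.56 * beta**3 - 0.95 * beta**2 + 1.82 * beta + 1.43
    s = np.linalg.svd(X, compute_uv=False)
    return int(np.sum(s > omega * np.median(s)))
```

Applied to the embedded matrix of Eq. 5, such a selector replaces the trial-and-error choice of the truncation number used in Fig. 8.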

Figure 8: Reconstructed embedded attractor results from the third cluster.

Fig. 9 shows the first linear component, , as well as the last nonlinear component, , of the SVD results of the embedded sloshing data , to investigate the functional roles played by linear and nonlinear components in the process of time-series analysis and attractor reconstruction. According to the computed results from sparse learning stated in the previous section, is 12, 14, and 7 for the sample from the first, second, and third clusters, respectively. The dynamics observed in the nonlinear component, , is associated with intermittent bursts, lobe switching, and other chaotic points on the embedded attractor in many applications brunton2017chaos; khodkar2021data. A threshold value of is defined to identify the locations where the nonlinear component is active. When values exceed , the corresponding regions are isolated, classified as forcing active, and denoted by red dots.

Figure 9: The linear and nonlinear (forcing) components of the sparse regression model.
Figure 10: The embedded attractor with forcing active area.

As previously stated in abarbanel1993analysis; platt1993off, the presence of on-off intermittency in an unstable reconstructed attractor can be revealed explicitly in a tuple of transformed coordinates. It is difficult to visualize the bursting/switching behavior in Fig. 9 directly using a single component, . To help understand the on-off chaos, Fig. 10 shows the underlying geometry of the calculated trajectories in three dimensions, with the forcing-active regions highlighted. The red points in Fig. 10(a) indicate that lobe switching is about to happen: there are two groups of intermittent points, one in the basin of the left lobe and the other in the basin of the right lobe. This switching behavior is consistent with the complex wave motions observed in a partially filled tank, where the free-surface elevation first rises and then falls. When the traveling waves hit the vertical slat screens or the tank's sidewalls, they are pushed back in the opposite direction, and the resulting nonlinear hydrodynamics causes the lobe switching on the embedded attractor. Meanwhile, as illustrated in Fig. 8(a), the initial states are more chaotic and less constrained by the embedded attractor. The evolution of the predicted trajectory reflects the same trend: the highlighted locations in Fig. 10(c) show strong nonlinearities at first, and these nonlinear effects subside once the skeleton of the attractor is formed.

Model | Fitted parameters | Result | p-value
Normal | | Rejected |
Beta | | Rejected |
Exponential | | Rejected |
EV | | Rejected |
Gamma | | Rejected |
GEV | | Pass |
Lognormal | | Rejected |
Student t | | Rejected |
Weibull | | Rejected |
  • EV denotes extreme value distribution.

  • GEV denotes generalized extreme value distribution.

Table 1: Kolmogorov–Smirnov results of the selected sample from the first cluster.

The statistics of the nonlinear component are essential to characterize the aforestated instability and intermittent phenomena. In practice, rare/extreme events in climate and ocean waves tend to exhibit non-Gaussian statistics takens1981detecting; broomhead1986extracting. Based on this property, a set of hypothesized probability models is considered to describe the probability distribution of , and maximum likelihood estimation (MLE) is applied to determine the values of the fitted models' parameters. The one-sample Kolmogorov-Smirnov (K-S) test is performed to examine the goodness of fit and select the best hypothesized distribution model massey1951kolmogorov. Specifically, the K-S test rejects the null hypothesis when the computed p-value is smaller than a predefined significance level, that is, in this study. The test results for the samples selected from the first, second, and third clusters are outlined in Table 1, Table 2, and Table 3, respectively. The tables report the optimally fitted model parameters obtained via MLE and the decisions taken by comparing the test statistic with the critical value. Overall, the K-S test results indicate that the nonlinear component does not follow a normal distribution: the computed p-value is small, casting doubt on the validity of the Gaussian hypothesis.

Table 1 shows that a generalized extreme value (GEV) distribution, developed within extreme value theory, provides the best performance in characterizing the sample from the first cluster. Meanwhile, Student's t-distribution is demonstrated to be the best statistical model for the other two samples (see Table 2 and Table 3).
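The fit-then-test procedure behind these tables can be sketched with `scipy.stats`. The candidate list, sample, and significance level below are illustrative assumptions; note also that running a standard K-S test with MLE-fitted parameters biases the p-value upward (a Lilliefors-type correction would be stricter).

```python
import numpy as np
from scipy import stats

def best_fit(data, candidates, alpha=0.05):
    """Fit each candidate distribution by MLE and run a one-sample
    K-S test; return {name: (params, p_value, passed)}."""
    results = {}
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(data)                       # MLE estimates
        _, p = stats.kstest(data, name, args=params)  # goodness of fit
        results[name] = (params, p, p >= alpha)
    return results

# surrogate forcing sample drawn from a GEV, standing in for the real data
rng = np.random.default_rng(1)
sample = stats.genextreme.rvs(c=-0.1, size=1500, random_state=rng)
report = best_fit(sample, ["norm", "expon", "genextreme", "t"])
```

On this surrogate, the GEV hypothesis passes while a clearly mismatched model such as the exponential is rejected, mirroring the structure of Tables 1 to 3.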

Model | Fitted parameters | Result | p-value
Normal | | Rejected |
Beta | | Rejected |
Exponential | | Rejected |
EV | | Rejected |
Gamma | | Rejected |
GEV | | Rejected |
Lognormal | | Rejected |
Student t | | Pass |
Weibull | | Rejected |
Table 2: Kolmogorov–Smirnov results of the selected sample from the second cluster.
Model | Fitted parameters | Result | p-value
Normal | | Rejected |
Beta | | Rejected |
Exponential | | Rejected |
EV | | Rejected |
Gamma | | Rejected |
GEV | | Rejected |
Lognormal | | Rejected |
Student t | | Pass |
Weibull | | Rejected |
Table 3: Kolmogorov–Smirnov results of the selected sample from the third cluster.

Fig. 11 illustrates the histograms obtained from the calibrated and the probability density function of the best hypothesized probabilistic model identified by the K-S test. The length of the decomposed eigenvector is 1500, and the values have been shifted to the positive region to ensure the numerical stability and performance of the MLE algorithm; this is realized by adding the absolute value of the minimum of to all the elements of . As expected, the non-Gaussian statistics are displayed in the fitted model's long tails compared with the normal distribution.
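The positivity shift described above is a one-line transformation; the toy vector below is purely illustrative.

```python
import numpy as np

def shift_positive(v):
    """Shift a series into the nonnegative range by adding the
    absolute value of its minimum, as done before the MLE fit."""
    return v + abs(v.min())

v = np.array([-0.3, 0.1, 0.5])
v_pos = shift_positive(v)  # minimum lands exactly at zero
```

The shift changes only the location of the sample, so the shape of the fitted distribution (and hence its tails) is unaffected.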

Figure 11: Optimally fitted probability density function (PDF) of the forcing term.

Once the model is fully trained, it is important to validate its generalization ability by examining its performance on data points not used in the training phase. Therefore, a test dataset is created to provide an unbiased evaluation of a model fit on the training dataset. In particular, an embedded sloshing sequence containing time instances is split into a training set and a test set . The multi-step prediction performance of the well-trained model on the selected samples is given in Fig. 12. The model accurately captures the evolution pattern and the envelope structure of the ground truth. For all selected samples, the difference between prediction and data gradually grows as time propagates.
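Multi-step prediction with a trained linear evolution model can be sketched as a simple rollout. The rotation matrix `A` below stands in for the fitted system and ignores the intermittent forcing term, so this is only an illustrative sketch under those assumptions, not the paper's model.

```python
import numpy as np

def multistep_predict(A, x0, n_steps):
    """Roll a learned linear model x_{k+1} = A @ x_k forward from x0,
    returning the predicted trajectory with shape (n_steps + 1, dim)."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        traj.append(A @ traj[-1])
    return np.array(traj)

# hypothetical 2-D rotation-like dynamics standing in for the fitted model
theta = 0.05
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pred = multistep_predict(A, x0=[1.0, 0.0], n_steps=200)
```

In the actual framework the rollout would additionally inject the identified forcing term whenever it is active, which is what lets the model reproduce bursting and switching.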

Furthermore, we evaluate the accuracy of the prediction results using the root mean square error (RMSE), the mean absolute error (MAE), and the variance of the absolute error (VAE). To statistically present the estimation errors of experiments with different configurations, all samples, each containing time instances, are used to compute the evolution of the error distribution. Fig. 13(a) presents the trend of the predefined error criteria, and Fig. 13(b), (c), and (d) show the histograms of the different errors at three selected time instances, namely, , , and . Because VAE emphasizes large differences while RMSE and MAE serve more as measures of dispersion, the evolution of VAE oscillates less rapidly than those of RMSE and MAE. The mean errors of the three histograms indicate that the prediction error increases slightly as time propagates. Compared with an isotropic Gaussian distribution, which is often used to model errors in machine learning khodkar2021data; luo2020bayesian, the histograms are more concentrated near zero with a small shift of the center.
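The three error criteria can be computed per time step across samples as below. Taking VAE to be the variance of the absolute error is an assumption on our part (the metric's exact formula is not spelled out above); the synthetic truth/prediction arrays are illustrative.

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """Per-time-step RMSE, MAE, and VAE across samples.
    VAE is taken here as the variance of the absolute error
    (an assumption; see the lead-in)."""
    err = y_pred - y_true                     # shape (n_samples, n_steps)
    rmse = np.sqrt(np.mean(err ** 2, axis=0))
    mae = np.mean(np.abs(err), axis=0)
    vae = np.var(np.abs(err), axis=0)
    return rmse, mae, vae

# synthetic stand-in: 50 samples, 100 prediction steps
rng = np.random.default_rng(2)
truth = rng.normal(size=(50, 100))
pred = truth + 0.1 * rng.normal(size=(50, 100))
rmse, mae, vae = error_metrics(truth, pred)
```

Under this definition the three metrics are linked by the identity VAE = RMSE^2 - MAE^2, so VAE isolates the spread of the absolute errors while RMSE and MAE track their magnitude.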

Figure 12: Validation of prediction performance on both training and testing sets.
Figure 13: Evolution of the prediction error criteria and histograms of the errors at selected time instances.

4 Concluding remarks

This paper proposes a framework for characterizing the dynamics of nonlinear and non-Gaussian liquid sloshing. In comparison to traditional system identification methods such as wavelet analysis, the proposed machine learning framework provides approximate linear representations of nonlinear dynamics. To address the rank deficiency of the data, spectral decomposition is applied to time-delay coordinates of a given sequence of wave elevations. Then, using a set of decomposed eigenvectors, sequential sparse regression is utilized to identify an intermittently forced linear system. Compared with traditional approaches, the model's transparent data-driven structure and fast machine-learning computation make it an excellent candidate for online feedback control tasks. In future work, it is worthwhile to incorporate well-established control strategies into the current framework.

Through the use of a wide range of experimental data, we demonstrated that the model is capable of accurately representing embedded sloshing sequences. Meanwhile, a series of statistical tests shows that the model captures chaotic behaviors such as bursting and switching in the embedded sequence. Various performance metrics are used to demonstrate the accuracy of the multi-step prediction results. Admittedly, liquid sloshing can cause complex nonlinearities such as out-of-plane sloshing, spinning, irregular beat vibration, and pseudo-periodic motions, which are far more complex than the wave elevation dynamics observed here. Extending the methodology described here to other types of moving water waves, using either high-fidelity CFD simulation data or more delicate experimental measurements, is a desirable next step toward developing a more robust and general data-driven simulation strategy for sloshing dynamics.

Acknowledgments

This work is supported by the U.S. Department of Energy (DOE), Office of Science, Advanced Scientific Computing Research under Award Number DE-SC-0012704.

References