Aircraft wings are optimized to produce maximum lift and minimum drag. The design ensures that there is a high-pressure zone below the wing and a low-pressure zone above. Owing to this pressure gradient, air from below the wing is drawn around the wingtip into the region above the wing, causing a vortex to trail from each wingtip. These wake vortices (WVs) are stable under calm atmospheric conditions and can remain present in the free atmosphere for a very long time, retaining their shape and energy [holzapfel2003analysis, holzapfel2003strategies, breitsamter2011wake]. Furthermore, at very low altitudes, they might rebound from the ground and linger in the flight path corridor, posing significant risks to other aircraft that might encounter them.
Small general aviation aircraft create vortices that are almost undetectable by a trailing aircraft of similar size. Large jetliners, however, leave vortices that can exceed 150 mph in rotational velocity and remain detectable over distances of 20 miles. The strongest and most dangerous vortices are generated by aircraft that are heavy, in a clean (gear and flaps up) configuration, and flying at low speeds, such as those of landing approaches. WVs can cause violent rolling motions and even flip a small aircraft upside down when a pilot trailing a large aircraft flies into them [luckner2004hazard]. The power of these dangerous spinning vortices can render an aircraft uncontrollable. In almost all cases, control of the aircraft can be restored if there is sufficient altitude. Some aircraft have sustained serious structural damage when encountering WVs, but were landed safely. There are, however, cases in which smaller trailing aircraft, during a climb-out after take-off or during a landing approach, have crashed after entering WVs because they were too close to the ground for the pilots to recover full control.
Because of the hazards posed by the WVs left behind by an aircraft taking off or landing, serious precautions must be taken. The operational minimum aircraft separation for different weight-class configurations, used by Air Traffic Control (ATC), varies from 2.5 to 6 nautical miles. However, when deciding the separation distance following those guidelines, the weather conditions and the associated transport and decay of WVs are not taken into account. This was not a serious issue a couple of decades ago, but with the significant increase in air traffic and a push for remote towers for cost-effective and safe operation, major airports around the world are feeling the pressure. In this regard, digital twins of airports appear to be a potential solution; however, realizing such a digital twin will require an accurate description of the prevailing conditions at the airport as well as their evolution in the near future. In the current context, there is a need to develop a more efficient wake turbulence separation scheme, consisting of time-based minima between different aircraft types, which takes into account the dynamic meteorological factors along with the variation in the wake generation mechanism associated with different classes of aircraft. Such information will enable air traffic controllers to deliver consistent and safe spacing between aircraft, leading to increased airport capacity, enhanced safety, reduced fuel consumption, improved predictability, and increased resilience.
Current solutions range from actively modifying/dissipating the wake vortices using physical devices [anton2017noo, holzaepfel2020plr] to accurately estimating the strengths of the vortices using LIDARs and RADARs [gerz2002commercial, holzapfel2003strategies]. One shortfall of the two approaches is that neither of them predicts the future evolution of the vortices. This gap is being filled by advanced computational fluid dynamics modeling, which involves solving the highly nonlinear Navier-Stokes equations at varying levels of approximation. However, owing to their computationally demanding nature, such simulations have been limited to offline studies geared towards developing a better understanding of WV dynamics. At the moment, most of the fast WV models that are state-of-the-art in WV predictive systems use physics-based empirical parameterizations to mimic vortex transport and decay. Unfortunately, the computational efficiency of the fast WV models comes at the expense of accuracy. A good overview of these models can be found in [hallock2018aro].
To alleviate the problems associated with the existing WV models, data-driven machine learning methods might appear attractive at first glance, but their limited interpretability, owing to their black-box nature, makes them a misfit for the kind of safety-critical application under consideration. To this end, building upon our recent work on the hybrid analysis and modeling (HAM) framework [pawar2020data, pawar2020evolve, ahmed2020long], we present a data assimilation-empowered approach that utilizes machine learning to fuse computationally-light physics-based models with available real-time measurement data, providing more accurate and reliable predictions of wake-vortex transport and decay. In particular, we build a surrogate reduced order model (ROM) by combining proper orthogonal decomposition (POD) for basis construction [berkooz1993proper, chatterjee2000introduction, liang2002proper, rathinam2003new, kerschen2005method, volkwein2011model] and Galerkin projection to model the dynamical evolution on the corresponding low-order subspace [rowley2004model, rapun2010reduced, lorenzi2016pod, kunisch2002galerkin, noack2011reduced, huang2016exploration, rezaian2020impact, kunisch2001galerkin, xu2020pod]. Although ROMs based on Galerkin projection (denoted as GROMs in the present study) have traditionally been considered the standard approach for reduced order modeling, they often become inaccurate and unstable for long-term predictions of convection-dominated flows with strong nonlinearity [kalashnikova2010stability, grimberg2020stability, lassila2014model, rempfer2000low, noack2003hierarchy].
Ideas like closure modeling [sirisup2004spectral, san2014proper, san2014basis, protas2015optimal, cordier2013identification, osth2014need, kalb2007intrinsic, xie2018data, mohebujjaman2019physically, akhtar2012new, balajewicz2012stabilization, amsallem2012stabilization, san2015stabilized, gunzburger2019evolve, wang2011two, iliescu2014variational, xie2017approximate, xie2018numerical, rahman2019dynamic, rezaian2020hybrid, rezaian2020multi, imtiaz2020nonlinear] and Petrov-Galerkin projection [carlberg2011efficient, carlberg2017galerkin, wentland2019closure, choi2019space, lozovskiy2017evaluation, collins2020petrov, parish2020adjoint, xiao2013non, fang2013non] have been investigated to address this deficiency. Alternatively, we exploit the nudging method [lakshmivarahan2013nudging] as a data assimilation (DA) framework, which works by relaxing the model state toward observations by adding correction (or nudging) terms proportional to the difference between observations and model state, known as the innovation in the DA context. In classical DA nudging, this proportionality is assumed to be linear, and the proportionality constants (or weights) are empirically tuned. Instead, we introduce the hybridization at this stage, using a simple long short-term memory (LSTM) architecture to generalize this relation to nonlinear mappings between the innovation and the nudging terms.
In other words, we utilize an LSTM to combine the possibly defective model prediction with noisy measurements to “nudge” the model’s solution towards the true states [pawar2020long, ahmed2020reduced]. We apply the proposed LSTM nudging framework (denoted LSTM-N) to the reduced order modeling of the two-dimensional wake vortex problem in order to accurately predict the transport and decay of wake vortices for ATC applications. Moreover, we suppose that both inputs (i.e., the physics-based model and the data) are imperfect, thus avoiding biases in predictions. GROMs are inherently imperfect due to the modal truncation and intrinsic nonlinearity. We also perturb the initial conditions to further mimic erroneous state estimates in practice. Meanwhile, we realize that, more often than not, sensor signals are noisy. So, we intentionally inject some noise into the synthesized observation data (using a twin-experiment approach). We test the performance of LSTM-N with various levels of measurement noise, initial field perturbation, and sensor-signal sparsity.
2 Wake-vortices and decay prediction system
Every aircraft generates a wake of turbulent air as it flies. This disturbance is caused by a pair of tornado-like counter-rotating vortices (called wake vortices) that trail from the tips of the wings [spalart1998airplane]. Relatively turbulent weather conditions and rough terrain can help dissipate these vortices. A faster wake decay was observed with increasing terrain roughness (as in [tabib2016]): due to the higher shear generated by the rougher terrain, a secondary vortex (SV) gets established more rapidly around the periphery of the primary wake vortex (WV), and the subsequent interactions between the SV and the WV create a higher-turbulence state which destroys the associated vorticity and both vortices. Phenomena like WV rebound and the generation of omega-shaped hairpin vortices also take place during this SV-WV interaction. These complex wake-decay phenomena are also applicable to wake vortices emanating from aircraft. Such facts have also been observed and exploited to artificially destroy wakes close to the ground using plates [stephan2017numerical]. Therefore, understanding the complex dynamics of these wake vortices from generation to decay is important in order to ensure flight safety, increase airport capacity, and test new methods for destroying WVs and mitigating their effects.
Air traffic control can potentially benefit from the emerging concept of a digital twin (DT). A DT is the virtual replica of a physical system, where both are able to actively communicate with each other [rasheed2020digital, ganguli2020digital, tao2018digital]. Given the WV and associated airport traffic case, a DT would receive meteorological data concerning present as well as possible weather conditions. Inputs should also include airport traffic status, leading as well as trailing aircraft characteristics (e.g., weight, size) and flight mode (e.g., take-off or landing). Topography and geographic location can be factors, too. The DT should then process this data stream and assess a set of possible scenarios corresponding to different potential choices. Based on those assessments, an informed decision can be made with regard to separation distance and flight scheduling, for instance. Considering the WV problem, numerical modeling based on the Navier-Stokes equations, if accurate, can be a cost-effective and easily employable pursuit for wake analysis. In Figure 1, an aircraft is permitted to land safely, based on the decay of wake vortices from a leading aircraft. These wake vortices are generated using the aircraft information and an analytical function on a set of two-dimensional (2D) planes (also called gates) perpendicular to the flight path [fuchs2016wake]. An alternative and more realistic initialization would be one using LIDAR data, if available in real time. The initialized wake vortices then decay and get transported under the influence of a background wind and turbulence field. Once the flight corridor is clear and free of any influence of the wake vortices left behind by the leading aircraft, the following aircraft can land or take off safely.
Direct, full order numerical simulations require large discretized systems for adequate approximation and are not practical for real-time wake prediction, which is an essential ingredient for feasible DT technologies. Therefore, reduced order modeling (ROM) arises as a natural choice for the successful implementation of digital twin applications. ROM represents a family of protocols that aim at emulating the relevant system’s dynamics with minimal computational burden. Typical ROM approaches usually consist of two major steps: (1) tailor a low-order subspace, where the flow trajectory can be sufficiently approximated to live (see Section 3.2), and (2) build a surrogate model to cheaply evolve this trajectory in time (see Section 3.3). Traditionally, building surrogate models to evolve on a reduced manifold has relied on the projection of the full order model (FOM) operators onto a reduced subspace (e.g., using Galerkin-type techniques) to structure a reduced order model (ROM). Those FOM operators are usually the outcome of the numerical discretization of well-established governing equations, derived from first principles and conservation laws. Such ROMs are attractive due to their reasonable interpretability and generalizability, as well as the existence of robust techniques for stability and uncertainty analysis. However, a Galerkin ROM (GROM) can be expensive to solve for turbulent and advection-dominated flows. GROM also might suffer from inaccuracies and instabilities for long-time predictions. Meanwhile, in the digital twin context, the availability of a rich stream of data and measurements opens new avenues for further ROM development. One way to utilize this abundance of data is the purely data-driven nonintrusive ROM (NIROM) approach. NIROMs have largely benefited from the widespread availability of open-source, cutting-edge, and easy-to-use machine learning (ML) libraries and cheap computational infrastructure, allowing them to rely solely on data in order to build models that are stable and accurate compared to their GROM counterparts [kutz2017deep, brunton2019machine, brenner2019perspective, xie2019non, jian2019flowfield, pawar2019deep, san2019artificial, rahman2019nonintrusive, maulik2020time, renganathan2020machine, maulik2020probabilistic]. However, purely data-driven tools often lack human interpretability and generalizability, and sometimes become prohibitively “data-hungry”.
Alternatively, hybrid approaches can be pursued, where data-driven tools only assist the physics-based models whenever data are available, rather than replacing them entirely. Data assimilation (DA) is a framework which can efficiently achieve this objective. DA generally refers to the discipline of intelligently fusing theory and observations to yield an optimal estimate of the system’s evolution [ghil1991data, kalnay2003atmospheric, lewis2006dynamic, lorenc1991meteorological, derber1989global]. Measurements are usually sparse (both in time and space) and noisy, while dynamical models are often imperfect due to the underlying assumptions and approximations introduced during either model derivation (e.g., neglecting minor source terms) or numerical solution of the resulting model (e.g., truncation error). DA algorithms have a rich history in numerical weather prediction and are utilized on a daily basis to provide reliable forecasts. In this paper, we suppose that our dynamical model is the truncated GROM and we aim at utilizing live measurements to correct the GROM trajectory. Specifically, we exploit the nudging method as our data assimilation framework, which works by relaxing the model state toward observations by adding correction (or nudging) terms to mitigate the discrepancy between observations and model state [di2020synchronization]. We employ LSTM mappings to account for this nudging term based on a combination of GROM predictions and available measurement data (see Section 3.4).
In this section, we first give an overview of the full order model used to simulate the wake-vortex transport problem. Then, we present the reduced order formulations adopted in this study. In particular, we utilize proper orthogonal decomposition (POD) as a data-driven tool to extract the flow’s coherent structures and build a reduced order subspace that best approximates the flow fields of interest. Then, we perform a Galerkin projection of the full order model operators onto that reduced space to build a “physics-constrained” reduced order model.
3.1 Vorticity Transport Equation
Here, we consider the two-dimensional (2D) vorticity transport equation as our full order model (FOM) that resolves the wake-vortex transport and decay. It corresponds to the 2D Navier-Stokes equations in vorticity-streamfunction formulation as follows [Gun89],
\begin{equation}
\frac{\partial \omega}{\partial t} + \frac{\partial \psi}{\partial y}\frac{\partial \omega}{\partial x} - \frac{\partial \psi}{\partial x}\frac{\partial \omega}{\partial y} = \frac{1}{\mathrm{Re}}\nabla^2 \omega,
\end{equation}
where $\omega$ and $\psi$ denote the vorticity and streamfunction fields, respectively, and $\mathrm{Re}$ is the dimensionless Reynolds number, defined as the ratio of inertial effects to viscous effects. Equation 1 is complemented by the kinematic relationship between vorticity and streamfunction, defined as
\begin{equation}
\nabla^2 \psi = -\omega.
\end{equation}
In order to mimic the wake-vortex problem, several models have been investigated [gerz2002commercial, hallock2018review, ahmad2014review, holzapfel2003analysis], including the Gaussian vortex [lugan2007simulation], Rankine vortex [rossow1977convective, aboelkassem2005viscous], Lamb-Oseen vortex [lamb1924hydrodynamics, holzapfel2000wake], Proctor vortex [proctor1998nasa, proctor2000wake], etc. In the present study, we initialize the flow with a pair of counter-rotating Gaussian vortices with equal strengths centered at $(x_1, y_1)$ and $(x_2, y_2)$ as follows,
\begin{equation}
\omega(x, y, 0) = \exp\left(-\rho\left[(x - x_1)^2 + (y - y_1)^2\right]\right) - \exp\left(-\rho\left[(x - x_2)^2 + (y - y_2)^2\right]\right),
\end{equation}
where $\rho$ is an interaction parameter that controls the mutual interactions between the two vortical motions.
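As a rough illustration of such an initialization, the following sketch builds a counter-rotating Gaussian pair on a uniform grid; the domain size, vortex centers, and sharpness parameter below are illustrative assumptions rather than the exact values used here.

```python
import numpy as np

def gaussian_vortex_pair(nx=128, ny=128, lx=10.0, ly=10.0,
                         c1=(5.0, 4.0), c2=(5.0, 6.0), rho=np.pi):
    """Initial vorticity for a counter-rotating Gaussian vortex pair.

    The centers c1/c2, the domain extents, and the sharpness parameter
    rho are illustrative placeholders, not the paper's exact values.
    """
    x = np.linspace(0.0, lx, nx)
    y = np.linspace(0.0, ly, ny)
    X, Y = np.meshgrid(x, y, indexing="ij")
    # Two Gaussians of equal strength and opposite sign (counter-rotating).
    w0 = (np.exp(-rho * ((X - c1[0])**2 + (Y - c1[1])**2))
          - np.exp(-rho * ((X - c2[0])**2 + (Y - c2[1])**2)))
    return w0
```

Because the two vortices have equal and opposite strengths, the net circulation of the initial field is (numerically) zero.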
3.2 Proper Orthogonal Decomposition
The first step for building a projection-based reduced order model is to tailor a low-order subspace that is capable of capturing the essential features of the system of interest. In the fluid mechanics community, proper orthogonal decomposition (POD) is one of the most popular techniques in this regard [taira2017modal, taira2020modal, rowley2017model]. Starting from a collection of the system’s realizations (called snapshots), POD provides a systematic algorithm to construct a set of orthonormal basis functions (called POD modes) that best describes that collection of snapshot data (in the $\ell_2$ sense). More importantly, those bases are sorted based on their contributions to the system’s total energy, making the modal selection a straightforward process. This is a significant advantage compared to other modal decomposition techniques like dynamic mode decomposition, where a further sorting and selection criterion has to be carefully defined. Usually, the method of snapshots [sirovich1987turbulence] is followed to perform POD efficiently and economically, especially for high-dimensional systems. However, we demonstrate the singular value decomposition (SVD) based approach here for the sake of simplicity and brevity of presentation.
Suppose we have a collection of $N$ system realizations, denoted as $\{\mathbf{u}^{(1)}, \mathbf{u}^{(2)}, \dots, \mathbf{u}^{(N)}\}$; we build a snapshot matrix as $\mathbf{A} = [\mathbf{u}^{(1)}, \mathbf{u}^{(2)}, \dots, \mathbf{u}^{(N)}] \in \mathbb{R}^{M \times N}$, where $\mathbf{u}^{(k)}$ is the $k$th snapshot reshaped into a column vector, $M$ is the number of spatial locations, and $N$ is the number of snapshots.
Then, a thin (reduced) SVD is performed on $\mathbf{A}$,
\begin{equation}
\mathbf{A} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^T,
\end{equation}
where $\mathbf{U}$ is a matrix whose orthonormal columns are the left singular vectors of $\mathbf{A}$, which represent the spatial basis, while the columns of $\mathbf{V}$ are the right singular vectors of $\mathbf{A}$, representing the temporal basis. The singular values of $\mathbf{A}$ are stored in descending order as the entries of the diagonal matrix $\boldsymbol{\Sigma}$. For dimensionality reduction purposes, only the first $R$ columns of $\mathbf{U}$, the first $R$ columns of $\mathbf{V}$, and the upper-left $R \times R$ sub-matrix of $\boldsymbol{\Sigma}$ are considered, corresponding to the $R$ largest singular values. Specifically, the first $R$ columns of $\mathbf{U}$ represent the most effective POD modes, denoted as $\{\phi_k\}_{k=1}^{R}$ from now on.
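The SVD-based POD construction described above can be sketched in a few lines of NumPy; the variable names are illustrative.

```python
import numpy as np

def pod_basis(A, r):
    """Thin-SVD POD of a snapshot matrix A (rows: grid points, cols: snapshots).

    Returns the first r POD modes (columns of Phi), the r leading singular
    values, and the fraction of total "energy" (squared singular values)
    that those modes capture.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # thin SVD: A = U @ diag(s) @ Vt
    Phi = U[:, :r]                                    # spatial POD modes
    energy = (s[:r]**2).sum() / (s**2).sum()          # retained energy ratio
    return Phi, s[:r], energy
```

The modal coefficients of any snapshot are then obtained by projection, e.g. `a = Phi.T @ A`, exploiting the orthonormality of the columns of `Phi`.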
The vorticity field is thus approximated as a linear superposition of the contributions of the first $R$ modes, which can be mathematically expressed as
\begin{equation}
\omega(x, y, t) \approx \sum_{k=1}^{R} a_k(t) \phi_k(x, y),
\end{equation}
where $\phi_k$ are the spatial modes, $a_k$ are the time-dependent modal coefficients (also known as generalized coordinates), and $R$ is the number of retained modes in the ROM approximation (i.e., the ROM dimension). We note that the POD basis functions are orthonormal by construction, as
\begin{equation}
\langle \phi_i ; \phi_j \rangle = \delta_{ij},
\end{equation}
where the angle brackets stand for the standard inner product in Euclidean space (i.e., the dot product) and $\delta_{ij}$ is the Kronecker delta.
Meanwhile, since the vorticity and streamfunction fields are related by Eq. 2, they can share the same modal coefficients ($a_k$). Moreover, the basis functions for the streamfunction (denoted as $\theta_k$) can be derived from those of the vorticity as follows,
\begin{equation}
\nabla^2 \theta_k = -\phi_k,
\end{equation}
and the ROM approximation of the streamfunction can be written as
\begin{equation}
\psi(x, y, t) \approx \sum_{k=1}^{R} a_k(t) \theta_k(x, y).
\end{equation}
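Deriving the streamfunction basis functions from the vorticity modes via the kinematic relation (Eq. 2) amounts to solving one Poisson problem per mode. A minimal sketch using an FFT-based solver, assuming a periodic domain (an illustrative simplification; other boundary conditions require a different Poisson solver):

```python
import numpy as np

def streamfunction_mode(phi, lx, ly):
    """Solve the Poisson problem  laplacian(theta) = -phi  for one mode.

    FFT-based solver, which assumes a periodic domain; this is an
    illustrative simplification of the actual boundary treatment.
    """
    nx, ny = phi.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=lx / nx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=ly / ny)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                       # guard the zero (mean) wavenumber
    theta_hat = np.fft.fft2(phi) / k2    # -k^2 theta_hat = -phi_hat
    theta_hat[0, 0] = 0.0                # fix the gauge: zero-mean solution
    return np.real(np.fft.ifft2(theta_hat))
```

This spectral solve is exact for fields resolved on the grid, which makes it convenient for the offline basis-construction stage.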
3.3 Galerkin Projection
After constructing a set of POD basis functions, an orthogonal Galerkin projection can be performed to obtain the Galerkin ROM (GROM). To do so, the ROM approximations (Eq. 7-10) are substituted into the governing equation (Eq. 1). Noting that the POD bases are only spatial functions (i.e., independent of time) and the modal coefficients are independent of space, we get the following set of ordinary differential equations (ODEs) representing the tensorial GROM,
\begin{equation}
\frac{d a_k}{d t} = \sum_{i=1}^{R} \mathfrak{L}_{ik} a_i + \sum_{i=1}^{R} \sum_{j=1}^{R} \mathfrak{N}_{ij}^{k} a_i a_j,
\end{equation}
where $\mathfrak{L}$ and $\mathfrak{N}$ are the matrix and tensor of predetermined model coefficients corresponding to the linear and nonlinear terms, respectively. Those are precomputed during an offline stage as
\begin{equation}
\mathfrak{L}_{ik} = \left\langle \frac{1}{\mathrm{Re}} \nabla^2 \phi_i ; \phi_k \right\rangle, \qquad
\mathfrak{N}_{ij}^{k} = \left\langle -\frac{\partial \theta_i}{\partial y}\frac{\partial \phi_j}{\partial x} + \frac{\partial \theta_i}{\partial x}\frac{\partial \phi_j}{\partial y} ; \phi_k \right\rangle.
\end{equation}
Equation 11 can be rewritten in compact form as
\begin{equation}
\frac{d \mathbf{a}}{d t} = f(\mathbf{a}),
\end{equation}
where the (continuous-time) model map $f$ is defined as follows,
\begin{equation}
f_k(\mathbf{a}) = \sum_{i=1}^{R} \mathfrak{L}_{ik} a_i + \sum_{i=1}^{R} \sum_{j=1}^{R} \mathfrak{N}_{ij}^{k} a_i a_j.
\end{equation}
Alternatively, Eq. 12 can be given in discrete-time form as
\begin{equation}
\mathbf{a}^{n+1} = G(\mathbf{a}^{n}),
\end{equation}
where $G$ is the discrete-time map obtained by any suitable temporal integration technique. Here, we use the fourth-order Runge-Kutta (RK4) method as follows,
\begin{align}
\mathbf{q}_1 &= f(\mathbf{a}^{n}), \\
\mathbf{q}_2 &= f\!\left(\mathbf{a}^{n} + \tfrac{\Delta t}{2}\, \mathbf{q}_1\right), \\
\mathbf{q}_3 &= f\!\left(\mathbf{a}^{n} + \tfrac{\Delta t}{2}\, \mathbf{q}_2\right), \\
\mathbf{q}_4 &= f\!\left(\mathbf{a}^{n} + \Delta t\, \mathbf{q}_3\right).
\end{align}
Thus the discrete-time map $G$ defining the transition from time $t_n$ to time $t_{n+1}$ is written as
\begin{equation}
G(\mathbf{a}^{n}) = \mathbf{a}^{n} + \frac{\Delta t}{6}\left(\mathbf{q}_1 + 2\mathbf{q}_2 + 2\mathbf{q}_3 + \mathbf{q}_4\right).
\end{equation}
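The tensorial GROM and its RK4 time stepping can be sketched as follows; the index convention used for the precomputed coefficient arrays is an assumption for illustration.

```python
import numpy as np

def grom_rhs(a, L, N):
    """Tensorial GROM right-hand side for the modal coefficients a (length R).

    L is the (R, R) linear-term matrix and N the (R, R, R) nonlinear-term
    tensor, both assumed precomputed offline; the (illustrative) convention
    here is rhs_k = sum_i L[k, i] a_i + sum_ij N[k, i, j] a_i a_j.
    """
    return L @ a + np.einsum("kij,i,j->k", N, a, a)

def rk4_step(a, dt, L, N):
    """Advance the GROM ODE system one step with fourth-order Runge-Kutta."""
    q1 = grom_rhs(a, L, N)
    q2 = grom_rhs(a + 0.5 * dt * q1, L, N)
    q3 = grom_rhs(a + 0.5 * dt * q2, L, N)
    q4 = grom_rhs(a + dt * q3, L, N)
    return a + (dt / 6.0) * (q1 + 2.0 * q2 + 2.0 * q3 + q4)
```

Note that evaluating the quadratic term costs O(R^3) operations per step, which is the scaling discussed in Section 3.4.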
3.4 Long Short-Term Memory Nudging
Due to the quadratic nonlinearity in the governing equation (and consequently the GROM), the online computational cost of solving Eq. 11 is $O(R^3)$ (i.e., it scales cubically with the number of retained modes $R$). Therefore, $R$ has to be kept as low as possible for feasible implementation of ROM in digital twin applications that require near real-time responses. However, this is often not an easy task for systems with a slow decay of the Kolmogorov n-width. Examples include advection-dominated flows with strong nonlinear interactions among a wide range of modes. As a result, the resulting GROM is an intrinsically imperfect model. That is, even with the true initial conditions and in the absence of numerical errors, the GROM might give inaccurate or false predictions.
Moreover, in most realistic cases, proper specification of the initial state, boundary conditions, and/or model parameters is rarely attainable. This uncertainty in problem definition, in conjunction with model imperfection, poses challenges for accurate predictions. In this study, we put forth a nudging-based methodology that fuses the prior model forecast (using imperfect initial condition specification and an imperfect model) with the available Eulerian sensor measurements to provide a more accurate posterior prediction. Relating our setting to realistic applications, we build our framework on the assumption that measurements are noisy and sparse both in space and time. Nudging has a long history in data assimilation, being a simple and unbiased approach. The idea behind nudging is to penalize the dynamical model evolution with the discrepancy between the model’s predictions and observations [anthes1974data, lei2015nudging, stauffer1993optimal]. In other words, the forward model given in Eq. 13 is supplied with a nudging (or correction) term and rewritten in the following form,
\begin{equation}
\mathbf{a}^{n+1} = G(\mathbf{a}^{n}) + \mathbf{K}\left(\mathbf{z}^{n} - h(\mathbf{a}^{n})\right),
\end{equation}
where $\mathbf{K}$ is called the nudging (gain) matrix and $\mathbf{z}$ is the set of measurements (observations), while $h(\cdot)$ is a mapping from model space to observation space. For example, $h(\cdot)$ can be a reconstruction map from ROM space to FOM space. In other words, $h(\mathbf{a}^{n})$ represents the “model forecast for the measured quantity”, while $\mathbf{z}^{n}$ is the “actual” observation. Despite the simplicity of Eq. 16, the specification/definition of the gain matrix $\mathbf{K}$ is not as simple [zou1992optimal, vidard2003determination, auroux2005back, lakshmivarahan2013nudging].
In the proposed framework, we utilize a recurrent neural network, namely the long short-term memory (LSTM) variant, to define the nudging map. In particular, Eq. 16 implies that each component of $\mathbf{a}$ (i.e., $a_k$) is corrected using a linear superposition of the components of the innovation, weighted by the gain matrix. Here, we relax this linearity assumption and generalize it to a possibly nonlinear mapping as,
\begin{equation}
\mathbf{a}^{n+1} = G(\mathbf{a}^{n}) + \mathcal{C}\left(\mathbf{z}^{n} - h(\mathbf{a}_b^{n})\right),
\end{equation}
where the map $\mathcal{C}$ is learnt (or inferred) using an LSTM neural network, and $\mathbf{a}_b^{n}$ is the prior model prediction computed using the imperfect model and/or imperfect initial conditions (called the background in data assimilation terminology), defined as $\mathbf{a}_b^{n} = G(\mathbf{a}_b^{n-1})$. Thus, Eq. 17 can be rewritten as follows,
\begin{equation}
\mathbf{a}^{n+1} = G(\mathbf{a}_b^{n}) + \mathcal{C}\left(\mathbf{z}^{n} - h(\mathbf{a}_b^{n})\right).
\end{equation}
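The structure of a single nudged update can be sketched generically; here the correction map is an arbitrary callable standing in for either the linear gain of classical nudging or a trained LSTM.

```python
import numpy as np

def nudged_step(a, model_step, correction, z, h):
    """One nudged update: a_next = G(a) + C(z - h(a)).

    model_step is the discrete-time model map G, h maps model state to
    observation space, and correction is the nudging map C. With
    correction = lambda d: K @ d this reduces to classical linear nudging;
    a learned network generalizes C to a nonlinear map of the innovation.
    """
    return model_step(a) + correction(z - h(a))
```

For instance, with a persistence model and a linear gain, repeated nudged updates relax the state toward the observations, which is the basic synchronization property nudging relies on.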
In order to learn the map $\mathcal{C}$, we consider the case with an imperfect model, defective initial conditions, and noisy observations. Moreover, we suppose that sensors are sparse in space and measurement signals are sparse in time, too. Specifically, we use sensors located at a few equally-spaced grid points, but a generalization to off-grid or adaptive sensor placement is possible. Also, we assume sensors send measurement signals every $\tau$ time units. In order to mimic sensor measurements and noisy initial conditions, we run a twin experiment as follows,
Step 1: Solve the FOM equation (i.e., Eq. 1) and sample the true field data ($\omega_{true}$) every $\tau$ time units. In other words, store $\omega_{true}(t_n)$ at $t_n = n\tau$ for $n = 0, 1, \dots, T/\tau$, where $T$ is the total (maximum) time and $\tau$ is the time window over which measurements are collected.
Step 2: Define the erroneous initial field estimate as $\omega_b(t_0) = \omega_{true}(t_0) + \zeta_b$, where $\zeta_b$ stands for the noise in the initial state estimate, assumed to be white Gaussian noise with zero mean and covariance matrix $\mathbf{B}$ (i.e., $\zeta_b \sim \mathcal{N}(0, \mathbf{B})$).
Step 3: Define the sparse and noisy measurements at the sensor locations as $\mathbf{z}(t_n) = \omega_{true}(t_n)\big|_{sensors} + \zeta_m$, for $n = 1, 2, \dots, T/\tau$. Similarly, $\zeta_m$ stands for the measurement noise, assumed to be white Gaussian noise with zero mean and covariance matrix $\mathbf{R}$ (i.e., $\zeta_m \sim \mathcal{N}(0, \mathbf{R})$).
Step 4: For the LSTM training data, we project the erroneous field estimates (from Step 2) onto the POD basis functions to get the erroneous POD modal coefficients (i.e., $a_{b,k}(t_0) = \langle \omega_b(t_0) ; \phi_k \rangle$), for $k = 1, 2, \dots, R$. Then, we integrate those erroneous coefficients for $\tau$ time units to get the background prediction $\mathbf{a}_b(t_n)$, for $n = 1, 2, \dots, T/\tau$.
Step 5: Then, we train the LSTM using $\mathbf{a}_b(t_n)$ and $\mathbf{z}(t_n)$ as inputs, and set the target as the correction $\mathbf{a}_{true}(t_n) - \mathbf{a}_b(t_n)$, for $n = 1, 2, \dots, T/\tau$. The true modal coefficients ($\mathbf{a}_{true}$) are obtained by projecting the true field data (from Step 1) onto the POD bases, where the projection is defined via the inner product as $a_{true,k}(t_n) = \langle \omega_{true}(t_n) ; \phi_k \rangle$. In order to enrich the training data set, Step 2 and Step 3 are repeated several times, giving an ensemble of erroneous state estimates and noisy measurements at every time instant of interest. Each member of those ensembles represents one training sample. This also helps the LSTM network handle a wider range of noise.
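The data-synthesis part of the twin experiment (perturbing the initial state and generating sparse, noisy observations) can be sketched as below; all parameter names are illustrative.

```python
import numpy as np

def twin_experiment_data(w_true, sigma_b, sigma_m, stride, rng):
    """Synthesize twin-experiment inputs from a 'true' vorticity field.

    Returns (i) an erroneous initial estimate: truth plus white Gaussian
    noise with covariance sigma_b**2 * I, and (ii) noisy observations taken
    at sensors placed every `stride` grid points in each direction, with
    measurement-noise covariance sigma_m**2 * I.
    """
    w_b = w_true + sigma_b * rng.standard_normal(w_true.shape)
    sensors = w_true[::stride, ::stride]            # equidistant sensor grid
    z = sensors + sigma_m * rng.standard_normal(sensors.shape)
    return w_b, z
```

Calling this repeatedly with a fresh random generator produces the ensemble of erroneous states and noisy measurements used to enrich the training set.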
We emphasize that the proposed LSTM-N approach not only cures model imperfection (i.e., it provides model closure and accounts for any missing physical processes), but also treats uncertainties in initial state estimates. Such uncertainties might be caused by the selection of an inaccurate wake vortex model, or by the idealizations embedded in this model compared to reality. Moreover, the field measurements (i.e., the nudging data) are assumed to be sparse and noisy to mimic real-life situations.
In order to test and verify the proposed ideas, we consider a square 2D domain with a side length of . Flow is initiated using a pair of Gaussian vortices as given in Eq. 5 centered at and with an interaction parameter of . Results in this section are shown at . For FOM simulations, a regular Cartesian grid resolution of is considered (i.e., ), with a time-step of . Snapshots of vorticity fields are collected every 100 time-steps for , totalling 300 snapshots. The evolution of the wake vortex problem is depicted in Figure 2, demonstrating the convective and interactive mechanisms affecting the transport and development of the two vortices.
For ROM computations, 6 modes are retained in the reduced order approximation (i.e., $R = 6$) and a time step of $\Delta t$ is adopted for the temporal integration of the GROM equations. In order to implement the LSTM-N approach, we begin at an erroneous initial condition defined as $\omega_b(x, y, 0) = \omega(x, y, 0) + \zeta_b$, where $\omega(x, y, 0)$ is defined with Eq. 5, and $\zeta_b$ is white Gaussian noise with zero mean and covariance matrix $\mathbf{B}$. For simplicity, we assume $\mathbf{B} = \sigma_b^2 \mathbf{I}$, where $\sigma_b$ is the standard deviation in the “background” estimate of the initial condition and $\mathbf{I}$ is the identity matrix. We note that this formulation implies that the errors in our estimates of the initial vorticity field at different spatial locations are uncorrelated. As nudging field data, we locate sensors to measure the vorticity field every 64 grid points (i.e., 9 sensors in each direction), and collect measurements every 10 time steps. To account for noisy observations, white Gaussian noise with zero mean and covariance matrix $\mathbf{R}$ is added to the true vorticity field obtained from the FOM simulation at the sensor locations. Similar to $\mathbf{B}$, we set $\mathbf{R} = \sigma_m^2 \mathbf{I}$, where $\sigma_m$ is the standard deviation of the measurement noise. This assumes that the noise in the sensor measurements is uncorrelated across sensors, and that all sensors have similar quality (i.e., add similar amounts of noise to the measurements). As a base case, we fix $\sigma_b$ and $\sigma_m$ at reference values.
The procedure presented in Sec. 3.4 is applied using the numerical setup described above, and compared against the reference case of GROM with the erroneous initial condition and inherent model imperfections due to modal truncation (denoted as the background forecast). In Fig. 3, the temporal evolution of the POD modal coefficients is shown for the true projection, background, and LSTM-N results. The true projection results are obtained by projecting the true FOM field at different time instants onto the corresponding basis functions (i.e., via the inner product, $a_k = \langle \omega_{FOM} ; \phi_k \rangle$). The background trajectory is the reference solution obtained by the standard GROM using the erroneous initial condition, without any closure or corrections. It can be seen that the background trajectory departs from the true trajectory with time as a manifestation of model imperfection. Also, note that the background solution does not begin from the same point as the true projection, due to the noise in the initial condition. On the other hand, the LSTM-N predictions almost perfectly match the true projection solution, implying that the approach is capable of blending noisy observations with a prior estimate to gain more accurate predictions.
In order to better visualize the predictive capabilities of the LSTM-N methodology, we compute the reconstructed vorticity field using Eq. 7. The final field reconstruction is shown in Figure 4, comparing the true projection, background, and LSTM-N results. Note that the field obtained from true projection at any time instant can be computed as $\omega_{TP} = \sum_{k=1}^{R} \langle \omega_{FOM} ; \phi_k \rangle \phi_k$, and represents the optimal reduced-rank approximation that can be obtained using a linear subspace spanned by $R$ bases. Comparing the true projection results from Figure 4 against the FOM at final time from Figure 2 reveals that, for this particular case, 6 modes are qualitatively capable of capturing most of the relevant features of the flow field. The LSTM-N outputs closely match the projection of the FOM field, while the background forecasts show some visible deviations.
For further quantitative assessment, the root mean-squares error (RMSE) of the reconstructed field with respect to the FOM solution is calculated as a function of time as follows,
\begin{equation}
\mathrm{RMSE}(t) = \sqrt{\frac{1}{M} \sum_{m=1}^{M} \left(\omega_{FOM}(x_m, y_m, t) - \omega_{ROM}(x_m, y_m, t)\right)^2},
\end{equation}
where $\omega_{FOM}$ is the true vorticity field obtained from solving the FOM equation, $\omega_{ROM}$ is the reduced order approximation computed through the true projection, the background (reference) solution, or the LSTM-N method, and $M$ is the number of grid points. The RMSE at different times is plotted in Figure 5, demonstrating the capability of the LSTM-N framework to efficiently recover the optimal reconstruction given a few sparse measurements.
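The RMSE metric amounts to a one-line computation; a minimal sketch:

```python
import numpy as np

def rmse(w_fom, w_rom):
    """Root-mean-square error between the FOM field and a ROM reconstruction."""
    return float(np.sqrt(np.mean((w_fom - w_rom)**2)))
```

Evaluating this at each stored snapshot produces the error-versus-time curves compared in the figures.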
4.1 Effect of Noise
Next, we explore the effect of noise on the LSTM-N results. In other words, we investigate how much noise the framework can tolerate. We note that we keep the same LSTM, trained with the base level of noise, while testing it using different levels of noise. First, we gradually increase the standard deviation of the measurement noise $\sigma_m$ to 5 times and then 10 times its base value. In Figure 6, we plot the temporal RMSE metrics as well as the field reconstruction at final time. We find that performance deteriorates slightly with the increase in measurement noise, although the results are still significantly better than the background forecast (starting from the same initial conditions).
For testing the effect of initial state perturbation, we increase the perturbation standard deviation from its base value to progressively higher levels. Figure 7 displays the effect of these levels of initial field perturbation on the background forecasts. Despite that, the LSTM-N performs very well even at these high levels of initial perturbation. This is even clearer from the RMSE plots, which begin at relatively large values and quickly decay to the level of the true projection once measurements become available. From Figure 6 and Figure 7, we can deduce that the influence of the measurement noise level on LSTM-N performance is more prominent than that of the initial field perturbation. We reiterate that in both cases, the LSTM is trained with the base noise and perturbation levels and tested for different values.
4.2 Effect of Measurement Sparsity
Finally, we consider the effect of measurement sparsity on the accuracy of the presented approach. This is crucial for the trade-off between the quality and quantity of sensors, since it has been shown in Section 4.1 that measurement noise significantly affects the LSTM-N output. For the base case, sensors are placed at equidistant grid points. Now, we place sensors every 32 grid points, representing a denser case, as well as at coarser spacings, representing scarcer sensors. We find that the framework is quite robust, providing very good results, as illustrated in Figure 8. We note, however, that the same original LSTM cannot be reused when testing with varying sparsity, because the sensor spacing controls the size of the input vector; a new LSTM therefore has to be trained for each case with the corresponding number of measurements. We also emphasize that compressed sensing techniques could be adopted for optimized sensor placement, rather than the simple collocated equidistant arrangement followed in the present study.
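The collocated equidistant arrangement can be sketched as simple strided sampling of the field. The grid size below is a placeholder, not the study's resolution; the point is that the stride fixes the LSTM input-vector length.

```python
import numpy as np

def equidistant_sensors(field, stride):
    """Sample a 1D field at every `stride`-th grid point, mimicking a
    collocated equidistant sensor arrangement."""
    return field[::stride]

n = 1024                                  # hypothetical grid size
field = np.sin(np.linspace(0.0, 2.0 * np.pi, n))
readings = equidistant_sensors(field, 32)
# The number of readings (hence the LSTM input size) changes with the
# stride, which is why a new network must be trained per sparsity level.
```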
Regarding temporal sparsity, we collect measurements at progressively larger intervals of time steps, compared to the reference case. We can see from Figure 9 that all cases yield very good predictions. Furthermore, the RMSE plots provide valuable insight into the capability of LSTM-N to effectively fuse measurements with the background forecast to produce more accurate state estimates. For example, when measurement signals are collected at the largest interval, the LSTM-N directly adopts the GROM prediction, without correction, until the next measurement becomes available. This is evident in Figure 9c, where the red curve starts along the orange curve before a sharp reduction of the error is observed. This behavior is repeated as the red curve departs from the blue one (corresponding to the true projection) before each new correction is applied. On the other hand, when more frequent measurement signals are available, deviation from the true projection results is barely observed, as shown in Figure 9a.
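The sawtooth pattern described above (error growing between corrections, dropping when a measurement arrives) can be mimicked with a deliberately simplified toy model; the growth and residual parameters are illustrative inventions, not fitted to the paper's results.

```python
import numpy as np

def sawtooth_error(n_steps, every, growth=0.1, residual=0.01):
    """Toy error trajectory: the background error grows each time step
    and is reset toward the true-projection level whenever a
    measurement-based correction is applied (every `every` steps)."""
    err = np.empty(n_steps)
    e = residual
    for k in range(n_steps):
        e += growth                     # model drift between corrections
        if (k + 1) % every == 0:        # measurement available: correct
            e = residual
        err[k] = e
    return err

frequent = sawtooth_error(128, 4)       # frequent corrections (cf. Fig. 9a)
sparse = sawtooth_error(128, 32)        # sparse corrections (cf. Fig. 9c)
```

More frequent corrections keep the trajectory pinned near the true-projection error, while sparse corrections let the background drift accumulate between resets.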
5 Concluding Remarks
We demonstrate hybrid analysis and modeling (HAM) as an enabler for the digital twin application of an airport. Specifically, we investigate the problem of wake-vortex transport and decay as a key factor in determining the separation distance between consecutive aircraft. Reduced order modeling based on Galerkin projection and proper orthogonal decomposition is adopted to provide computationally light models. We develop a methodology that exploits machine learning to cure model deficiencies through online measurement data, adopting ideas from dynamic data assimilation. Specifically, an LSTM architecture is trained to nudge prior predictions toward optimal state values using a combination of background information and sparse, noisy observations. The proposed framework is distinguished from previous studies in that it is built on the assumption that all the computing ingredients are intrinsically imperfect, including a truncated GROM, erroneous initial conditions, and defective sensors.
We study the effects of measurement noise and initial condition perturbation on LSTM-N behavior. The framework works sufficiently well over a wide range of noise and perturbation levels. Nonetheless, numerical experiments indicate a relatively stronger dependence of performance on measurement quality (noise). Meanwhile, we find that sensor sparsity has minimal effect on the results. We emphasize that the proposed framework represents a way of merging human knowledge, physics-based models, measurement information, and data-driven tools to maximize their benefits rather than discarding any of them. The presented framework paves the way for viable digital twin applications to enhance airport capacity by regulating air traffic without compromising the safety of consecutive aircraft. Nonetheless, the scalability of the approach has yet to be tested using different vortex models and taking into account other effective factors (e.g., wind).
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research under Award Number DE-SC0019290. O.S. gratefully acknowledges the U.S. DOE Early Career Research Program support. The work of A.R. and M.T. was supported by funding from the EU SESAR (Single European Sky ATM Research) program.
Disclaimer: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.