GMLS-Nets: A framework for learning from unstructured data

by Nathaniel Trask, et al.

Data fields sampled on irregularly spaced points arise in many applications in the sciences and engineering. For regular grids, Convolutional Neural Networks (CNNs) have been used successfully to gain benefits from weight sharing and invariances. We generalize CNNs by introducing methods for data on unstructured point clouds based on Generalized Moving Least Squares (GMLS). GMLS is a non-parametric technique for estimating linear bounded functionals from scattered data, and has recently been used in the literature for solving partial differential equations. By parameterizing the GMLS estimator, we obtain learning methods for operators with unstructured stencils. In GMLS-Nets the necessary calculations are local and readily parallelizable, and the estimator is supported by a rigorous approximation theory. We show how the framework may be used on unstructured physical data sets to perform functional regression, identify associated differential operators, and regress quantities of interest. The results suggest these architectures to be an attractive foundation for data-driven model development in scientific machine learning applications.




1 Introduction

Many scientific and engineering applications require processing data sets sampled on irregularly spaced points: consider, e.g., GIS data associating geospatial locations with measurements, LIDAR data characterizing object geometry via point clouds, or scientific simulations with unstructured meshes. This need is amplified by the recent surge of interest in scientific machine learning (SciML) DOEReportAI2018, targeting the application of data-driven techniques to the sciences. In this setting, data typically takes the form of synthetic simulation data from meshes, or measurements from sensors associated with data sites evolving under unknown or partially known dynamics. Such data is often scarce or highly constrained, and it has been proposed that successful SciML strategies will leverage prior knowledge to enhance the information gained from it Atzberger_PosPaper_2018; DOEReportAI2018. One may exploit physical properties and invariances such as transformation symmetries and conservation structure, or mathematical knowledge such as solution regularity BrennerFEM2008; BruntonKutz2016; Atzberger_PosPaper_2018. This new application space necessitates ML architectures capable of utilizing such knowledge.


For data sampled on regular grids, Convolutional Neural Networks (CNNs) are widely used to exploit translation invariance and hierarchical structure to extract features from data. Here we generalize this technique to the SciML setting by introducing GMLS-Nets based on the scattered data approximation theory underlying generalized moving least squares (GMLS). Similar to how CNNs learn stencils which benefit from weight-sharing, GMLS-Nets operate by using local reconstructions to learn operators between function spaces. The resulting architecture is similarly interpretable and serves as an effective generalization of CNNs to unstructured data, while providing mechanisms to incorporate knowledge of underlying physics.

In this work we show how GMLS-Nets may be used in a SciML setting. Our results show GMLS-Nets are an effective tool to discover partial differential equations (PDEs), which may be used as a foundation to construct data-driven models while preserving physical invariants like conservation principles. We also show they may be used to improve traditional scientific components, such as time integrators, and to regress engineering quantities of interest from scientific simulation data. Finally, we briefly show that GMLS-Nets can perform reasonably relative to ConvNets on traditional computer vision benchmarks. These results indicate the promise of GMLS-Nets for supporting data-driven modeling efforts in SciML applications. Implementations in TensorFlow and PyTorch are available.

1.1 Generalized Moving Least Squares (GMLS)

Generalized Moving Least Squares (GMLS) is a non-parametric functional regression technique that constructs approximations of linear, bounded functionals from scattered samples of an underlying field by solving local least-squares problems. On a Banach space $\mathbb{B}$ with dual space $\mathbb{B}^*$, we aim to recover an estimate of a given target functional $\tau_{\tilde{x}}[u] \in \mathbb{B}^*$ acting on $u \in \mathbb{B}$, where $\tilde{x}$ and $x_j$ denote associated locations in a compactly supported domain $\Omega \subset \mathbb{R}^d$. We assume $u$ is characterized by an unstructured collection of sampling functionals, $\Lambda(u) := \{\lambda_j(u)\}_{j=1}^{N} \subset \mathbb{B}^*$.

To construct this estimate, we consider a finite-dimensional subspace $V \subset \mathbb{B}$ and seek an element $p^* \in V$ which provides an optimal reconstruction of the samples in the following weighted-$\ell_2$ sense:

$$p^* = \underset{p \in V}{\operatorname{argmin}} \; \sum_{j=1}^{N} \big(\lambda_j(u) - \lambda_j(p)\big)^2 \, \omega(\lambda_j, \tau_{\tilde{x}}). \qquad (1)$$

Here $\omega(\lambda_j, \tau_{\tilde{x}})$ is a positive, compactly supported kernel function establishing spatial correlation between the target functional and the sampling set. If one associates locations $x_j$ with the sampling functionals $\lambda_j$, then one may consider radial kernels $\omega = W_\epsilon(|x_j - \tilde{x}|)$ with support $\epsilon$.

Assuming the basis $V = \operatorname{span}\{\phi_1, \dots, \phi_{\dim V}\}$, and denoting $\Phi(x) = \big(\phi_1(x), \dots, \phi_{\dim V}(x)\big)^\intercal$, the optimal reconstruction may be written in terms of an optimal coefficient vector $a(u)$:

$$p^*(x) = \Phi(x)^\intercal a(u), \qquad a(u) = \big(\Lambda^\intercal W \Lambda\big)^{-1} \Lambda^\intercal W\, \Lambda(u), \qquad (2)$$

where $\Lambda_{ji} = \lambda_j(\phi_i)$ and $W = \operatorname{diag}\big(\omega(\lambda_j, \tau_{\tilde{x}})\big)$. Provided one has knowledge of how the target functional acts on $V$, the final GMLS estimate may be obtained by applying the target functional to the optimal reconstruction:

$$\tau^h_{\tilde{x}}[u] := \tau_{\tilde{x}}[p^*] = \sum_i \tau_{\tilde{x}}[\phi_i]\, a_i(u). \qquad (3)$$
Sufficient conditions for the existence of solutions to Eqn. 1 depend only upon the unisolvency of $\Lambda$ over $V$, the distribution of the samples, and mild conditions on the domain $\Omega$; they are independent of the choice of the target functional $\tau_{\tilde{x}}$. For theoretical underpinnings and recent applications, we refer readers to wendland2004scattered; trask2017high; trask2019conservative; Atzberger_GMLS_Manifold_2019.
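To make the estimate concrete, the following is a minimal 1D sketch of the procedure in Eqns. 1-3, assuming point-evaluation sampling functionals, a quadratic monomial basis, and a compactly supported polynomial kernel; the function name and signature are our own illustration, not the released implementation. It estimates $d^2u/dx^2$ by applying the target functional to the weighted least-squares reconstruction.

```python
import numpy as np

def gmls_estimate(x_samples, u_samples, x_target, eps=1.0, order=2):
    """GMLS point estimate of d^2u/dx^2 at x_target (1D sketch).

    Builds the local weighted least-squares polynomial reconstruction
    p*(x) and applies the target functional (here: second derivative)
    to it.  Point-evaluation sampling functionals are assumed.
    """
    d = x_samples - x_target                      # local coordinates about x_target
    # Vandermonde matrix Lambda_{ji} = lambda_j(phi_i), monomial basis {1, d, d^2, ...}
    P = np.vander(d, order + 1, increasing=True)
    # compactly supported kernel omega = (1 - |d|/eps)_+^2
    w = np.clip(1.0 - np.abs(d) / eps, 0.0, None) ** 2
    W = np.diag(w)
    # normal equations: a = (P^T W P)^{-1} P^T W u
    a = np.linalg.solve(P.T @ W @ P, P.T @ W @ u_samples)
    # apply the target functional to the basis: d^2/dx^2 of d^k at d=0 gives 2*a[2]
    return 2.0 * a[2]

# polynomial reproduction: for u(x) = x^2 the estimate of u'' is 2 up to round-off
rng = np.random.default_rng(0)
xs = rng.uniform(-0.5, 0.5, size=15)
print(gmls_estimate(xs, xs**2, 0.0))  # ~2.0
```

Because the basis contains quadratics, the reconstruction reproduces any quadratic field exactly, so the derivative estimate is exact on such fields regardless of the scattered point locations.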

GMLS has primarily been used to obtain point estimates of differential operators to develop meshfree discretizations of PDEs. The abstraction of GMLS, however, provides a mathematically rigorous approximation-theory framework which may be applied to a wealth of problems, whereby one may tailor the choice of $V$, $\omega$, and $\Lambda$ to a given application. In the current work, we will assume the action of $\tau_{\tilde{x}}$ on $V$ is unknown, and introduce a parameterization $\tau_{\tilde{x},\xi}$, where $\xi$ denotes hyperparameters to be inferred from data. Classically, GMLS is restricted to linear bounded target functionals; we will also consider a novel nonlinear extension by considering estimates of the form

$$\tau^h_{\tilde{x},\xi}[u] = q_{\xi}\big(a(u)\big), \qquad (4)$$

where $q_{\xi}$ is a family of nonlinear operators parameterized by $\xi$ acting upon the GMLS reconstruction. Where unambiguous, we will drop the dependence on $\tilde{x}$ and simply write, e.g., $\tau_{\xi}$. We have recently used related non-linear variants of GMLS to develop solvers for PDEs on manifolds in Atzberger_GMLS_Manifold_2019.

For simplicity, in this work we specialize as follows. Let $\lambda_j$ be point evaluations on a set of sample points $\{x_j\}$; let $V$ be $\pi_m(\mathbb{R}^d)$, the space of $m$-th order polynomials; and let $\omega(\lambda_j, \tau_{\tilde{x}}) = \big(1 - |x_j - \tilde{x}|/\epsilon\big)_+^p$, where $(\cdot)_+$ denotes the positive part of a function and $p$ is a positive integer. We stress, however, that this framework supports much broader applications. Consider, e.g., learning from flux data related to $H(\mathrm{div})$-conforming discretizations, where one may select face fluxes as the sampling functionals, or consider the physical constraints that may be imposed by selecting $V$ to be divergence-free or to satisfy a differential equation.

We illustrate now the connection between GMLS and convolutional networks in the case of a uniform grid $X_h \subset h\mathbb{Z}^d$. Consider point-evaluation sampling functionals $\lambda_j(u) = u(x_j)$, and assume the linear parameterization $q_{\xi}(a) = \xi^\intercal a$. Then the GMLS estimate is given explicitly at a point $\tilde{x}$ by

$$\tau^h_{\tilde{x},\xi}[u] = \xi^\intercal \big(\Lambda^\intercal W \Lambda\big)^{-1} \Lambda^\intercal W\, \Lambda(u).$$

Contracting the terms involving $\xi$ and the fixed geometric factors, we may write $\tau^h_{\tilde{x},\xi}[u] = \sum_j c_j(\xi)\, u(x_j)$. The collection of stencil coefficients at $\tilde{x}$ is $\{c_j(\xi)\}$. Therefore, one application of GMLS is to build stencils similar to those of convolutional networks. A major distinction is that GMLS can handle scattered data sets, and a judicious selection of $V$, $\omega$, and $\Lambda$ can be used to inject prior information. Alternatively, one may interpret the regression over $V$ as an encoding in a low-dimensional space well-suited to characterize common operators. For continuous functions, for example, an operator's action on the space of polynomials is often sufficient to obtain a good approximation. We also remark that, unlike CNNs, there is often less need to handle boundary effects: GMLS-Nets are capable of learning one-sided stencils.
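The contraction into stencil coefficients can be checked numerically. The helper below is our own illustrative sketch: it forms $c(\xi) = \xi^\intercal (\Lambda^\intercal W \Lambda)^{-1} \Lambda^\intercal W$ on a three-point uniform neighborhood, and with $\xi$ chosen to apply the second derivative to the monomial basis, the classic central-difference Laplacian stencil is recovered.

```python
import numpy as np

def gmls_stencil(offsets, xi, eps, order=2):
    """Contracted stencil coefficients c(xi) = xi^T (P^T W P)^{-1} P^T W.

    offsets: neighbor positions relative to the target point.
    xi: learnable weights acting on the polynomial coefficient vector a.
    """
    P = np.vander(offsets, order + 1, increasing=True)   # monomials about the target
    W = np.diag(np.clip(1.0 - np.abs(offsets) / eps, 0.0, None) ** 2)
    return xi @ np.linalg.solve(P.T @ W @ P, P.T @ W)

h = 0.1
# xi applies d^2/dx^2 to the basis {1, d, d^2}: only 2*a_2 survives
c = gmls_stencil(np.array([-h, 0.0, h]), np.array([0.0, 0.0, 2.0]), eps=2 * h)
print(c * h**2)   # approximately [1, -2, 1]
```

With three points and three basis functions the weighted fit is interpolatory, so the weights drop out and the familiar finite-difference stencil $[1, -2, 1]/h^2$ emerges; on scattered points the same formula produces generalized, possibly one-sided, stencils.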

1.2 GMLS-Nets

From an ML perspective, GMLS estimation consists of two parts: (i) data is encoded via the coefficient vector $a(u)$, providing a compression of the data in terms of $V$; (ii) the operator is regressed over the coefficients, which is equivalent to finding a function $q_{\xi}$ mapping $a(u)$ to the operator's action. We propose GMLS-Layers encoding this process in Figure 1.

Figure 1: GMLS-Nets. Scattered data inputs are processed by learnable operators parameterized via GMLS estimators. A local reconstruction is built about each data point and encoded as a coefficient vector via equation 2. The coefficient mapping of equation 4 provides the learnable action of the operator. GMLS-Layers can be stacked to obtain deeper architectures and combined with other neural network operations to perform classification and regression tasks (inset, SD: scattered data, MP: max-pool, MLP: multi-layer perceptron).


This architecture accepts input channels indexed by $k$, which consist of components of the data field $[f]_k$ sampled over the scattered points $X_h$. We allow for different sampling points for each channel, which may be helpful for heterogeneous data. Each of these input channels is then used to obtain an encoding of the input field as the coefficient vector $a([f]_k)$ identifying the optimal representer $p^*$ in $V$.

We next select our parameterization of the functional via $q_{\xi}$, which may be any family of functions trainable by back-propagation. We will consider two cases in this work, appropriate for linear and non-linear operators. In the linear case we consider $q_{\xi}(a) = \xi^\intercal a$, which is sufficient to exactly reproduce differential operators. For the nonlinear case we parameterize $q_{\xi}$ with a multi-layer perceptron (MLP). Note that in the case of a linear activation function, the single-layer MLP model reduces to the linear model.

Nonlinearity may thus be handled within a single nonlinear GMLS-Layer, or by stacking multiple linear GMLS-Layers with intermediate ReLUs, the latter mapping more directly onto traditional CNN construction. We next introduce pooling operators applicable to unstructured data, whereby for each point $\tilde{x}_i$ in a given target point cloud the output is $P_{\{j : |x_j - \tilde{x}_i| < \epsilon\}}\, f(x_j)$. Here $P$ represents the pooling operator (e.g. max, average, etc.). With this collection of operators, one may construct architectures similar to CNNs by stacking GMLS-Layers together with pooling layers and other NN components. Strided GMLS-Layers generalizing strided CNN stencils may be constructed by choosing target sites on a second, smaller point cloud.
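The forward pass of a single linear GMLS-Layer can be sketched as follows, assuming (as in the specialization above) point-evaluation sampling, a 1D monomial basis, and the linear parameterization $q_\xi(a) = \xi^\intercal a$. The code and its names are our own illustration, not the released TensorFlow/PyTorch implementation; note that points near the boundary automatically receive one-sided stencils.

```python
import numpy as np

def gmls_layer(x, u, xi, eps, order=2):
    """Forward pass of a linear GMLS-Layer on scattered 1D data.

    For each point x_i: (i) encode neighbors within eps into the local
    polynomial coefficient vector a(u) (equation 2), then (ii) apply the
    learnable linear functional q_xi(a) = xi^T a (equation 4).
    """
    out = np.empty_like(u)
    for i, xt in enumerate(x):
        nbr = np.abs(x - xt) < eps                       # compact support
        d = x[nbr] - xt                                  # local coordinates
        P = np.vander(d, order + 1, increasing=True)
        W = np.diag((1.0 - np.abs(d) / eps) ** 2)
        a = np.linalg.solve(P.T @ W @ P, P.T @ W @ u[nbr])
        out[i] = xi @ a                                  # learnable action
    return out

x = np.linspace(0.0, 1.0, 21)
u = x**2
# xi reads off 2*a_2, i.e. the GMLS estimate of d^2u/dx^2
lap = gmls_layer(x, u, xi=np.array([0.0, 0.0, 2.0]), eps=0.2)
print(lap)   # ~2.0 everywhere, including the one-sided boundary stencils
```

In a trainable setting `xi` would be a learned weight vector (one per output channel), and the per-point solves are independent, reflecting the locality and parallelizability noted in the introduction.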

1.3 Relation to other work.

Many recent works aim to generalize CNNs beyond the limitations of data on regular grids SpectralCNNs_Bruna_Lecun_2014; Bronstein_BeyondEuc_2017. This includes work on handling inputs in the form of directed and undirected graphs GraphNNs_Scarselli_2009, processing graphical data sets in the form of meshes and point clouds Guibas_PointNet_PlusPlus_2017; DeepSets2017, and handling scattered sub-samplings of images SplineCNN_Fey_2018; SpectralCNNs_Bruna_Lecun_2014. Broadly, these works either (i) use the spectral theory of graphs to generalize convolution in the frequency domain SpectralCNNs_Bruna_Lecun_2014, or (ii) develop localized notions similar to convolution operations and kernels in the spatial domain KPConv_Hugues_2019. GMLS-Nets are most closely related to the second approach.

The closest works include SplineCNNs SplineCNN_Fey_2018, MoNet Monti_GeometricDL_2016; KipfWelling_Kernel_CNNs_2016, KP-Conv KPConv_Hugues_2019, and SpiderCNN SpiderCNN_Xu_2018. In each of these methods a local spatial convolution kernel is approximated by a parameterized family of functions: open/closed B-splines SplineCNN_Fey_2018, a Gaussian correlation kernel Monti_GeometricDL_2016; KipfWelling_Kernel_CNNs_2016, or a kernel function based on a learnable combination of radial ReLUs KPConv_Hugues_2019. SpiderCNNs share many similarities with GMLS-Nets, using a kernel based on a learnable degree-three Taylor polynomial taken in product with a learnable radial piecewise-constant weight function SpiderCNN_Xu_2018. A key distinction of GMLS-Nets is that operators are regressed directly over the dual space without constructing shape/kernel functions. Both approaches provide ways to approximate the action of a processing operator that aggregates over scattered data.

We also mention other meshfree learning frameworks, PointNet Guibas_PointNet_2017; Guibas_PointNet_PlusPlus_2017 and Deep Sets DeepSets2017, but these are aimed primarily at set-based data and geometric processing tasks for segmentation and classification. Additionally, Radial Basis Function (RBF) networks are built upon similar approximation theory RBF_Early_Broomhead_1988; RBF_NNs_Poggio_1990.

Related work on operator regression in a SciML context includes Karniadakis_PINNs_2019; Karniadakis_Raissi_HiddenPhys_2018; PDENet_Long2018; BruntonKutz2016; RudyKutz2017; Lagaris_PDE_ODE_NN_1998; BrennerDataDrivenPDE2019; Patel2018. In PINNs Karniadakis_PINNs_2019; Karniadakis_Raissi_HiddenPhys_2018, a versatile framework based on DNNs is developed to regress both linear and non-linear PDE models while exploiting physics knowledge. In BrennerDataDrivenPDE2019 and PDE-Nets PDENet_Long2018, CNNs are used to learn stencils to estimate operators. In BruntonKutz2016; RudyKutz2017, dictionary learning is used along with sparse optimization methods to identify dynamical systems and infer physical laws associated with time-series data. In Patel2018, regression is performed over a class of nonlinear pseudodifferential operators, formed by composing neural-network-parameterized Fourier multipliers and pointwise functionals.

GMLS-Nets can be used in conjunction with the above methods. GMLS-Nets have the distinction of not relying on CNNs defined on regular grids, of no longer needing moment conditions to impose accuracy and interpretability of filters for estimating differential operators PDENet_Long2018, and of not requiring strong assumptions about the particular form of the PDE or a pre-defined dictionary as in Karniadakis_PINNs_2019; RudyKutz2017. We expect that prior knowledge exploited globally in PINNs methods may also be incorporated into GMLS-Layers. In particular, the ability to regress natively over solver degrees of freedom will be particularly useful for SciML applications.

2 Results

2.1 Learning differential operators and identifying governing equations.

Figure 2: Regression of Differential Operators. GMLS-Nets can accurately learn both linear and non-linear operators; shown are the cases of the 1D/2D Laplacian and Burgers' equation. Inhomogeneous operators can also be learned by including the location $x$ as one of the input channels. Training and test data consist of random input functions sampled at nodes in 1D and 2D, each drawn from a Gaussian distribution, with target outputs computed by applying the operators with spectral accuracy.

Many data sets arising in the sciences are generated by processes for which there are expected governing laws expressible in terms of ordinary or partial differential equations. GMLS-Nets provide natural features to regress such operators from observed state trajectories or responses to fluctuations. We consider the two settings

$$\frac{\partial u}{\partial t} = \mathcal{L}[u] \qquad \text{and} \qquad f = \mathcal{L}[u].$$

The $\mathcal{L}$ can be a linear or non-linear operator. When the data are snapshots $\{u^n\}$ of the system state at discrete times $\{t^n\}$, we use estimators based on

$$\frac{u^{n+1} - u^n}{\Delta t} = \mathcal{L}_{\xi}\big[(1-\theta)\, u^n + \theta\, u^{n+1}\big].$$

In the case that $\theta = 1$, this corresponds to using an Implicit Euler scheme to model the dynamics. Many other choices are possible, and later we shall discuss estimators with conservation properties. The learning capabilities of GMLS-Nets to regress differential operators are shown in Fig. 2. As we shall discuss in more detail, this can be used to identify the underlying dynamics and obtain governing equations.
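As a hedged illustration of the implicit estimator, the sketch below assumes a uniform periodic 1D grid, where the learnable GMLS functional reduces to a shared three-point stencil. The residual is then linear in the stencil weights, so ordinary least squares (standing in for the gradient descent used with general architectures) recovers the diffusion stencil from implicit-Euler snapshot pairs. All names and values are illustrative.

```python
import numpy as np

N, h, dt, nu = 64, 1.0 / 64, 1e-3, 0.05
# true operator: periodic diffusion stencil nu * [1, -2, 1] / h^2
A = nu / h**2 * (np.roll(np.eye(N), -1, 1) - 2 * np.eye(N) + np.roll(np.eye(N), 1, 1))

rng = np.random.default_rng(1)
u0 = rng.standard_normal((5, N))                  # five random initial snapshots
u1 = np.linalg.solve(np.eye(N) - dt * A, u0.T).T  # implicit Euler: (I - dt A) u1 = u0

# estimator residual: (u1 - u0)/dt = sum_k xi_k * shift_k(u1), linear in xi
feats = np.stack([np.roll(u1, s, axis=1).ravel() for s in (1, 0, -1)], axis=1)
target = ((u1 - u0) / dt).ravel()
xi, *_ = np.linalg.lstsq(feats, target, rcond=None)
print(xi * h**2 / nu)   # approximately [1, -2, 1]
```

Because the implicit-Euler relation holds exactly for the generated pairs, the least-squares fit reproduces the true stencil to round-off; with noisy or nonlinear data one would instead minimize the same residual by gradient descent over a GMLS-Layer.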

2.2 Long-time integrators: discretization for native data-driven modeling.

Figure 3: Top: Advection-diffusion solution for a resolving timestep; the true-model solution and the regressed solution both agree with the analytic solution. Bottom: Solution for under-resolved dynamics with a large timestep. The implicit integrator causes the FDM/FVM with the true operator to be overly dissipative, while the regressed operators match the analytic solution well, capturing the phase almost exactly.
Δt     FDM (exact)   FDM (learned)   FVM (exact)   FVM (learned)
0.1    0.00093       0.00015         0.00014       0.00010
1      0.0011        0.00093         0.0011        0.00011
10     0.0083        0.0014          0.0083        0.00035

Table 1: The $\ell_2$-error for the data-driven finite difference model (FDM) and finite volume model (FVM) for the advection-diffusion equation, compared against classical discretizations using the exact operators. For the conservative data-driven finite volume model, there is an order of magnitude better accuracy for large-timestep integration.

The GMLS framework provides useful ways to target and sample arbitrary functionals. In a data-transfer context, this has been leveraged to couple heterogeneous codes. For example, one may sample the flux degrees of freedom of a Raviart-Thomas finite element space and target cell-integral degrees of freedom of a finite volume code to perform native data transfer. This avoids the need to perform intermediate projections/interpolations kuberry2018virtual. Motivated by this, we demonstrate that GMLS may be used to learn discretization-native data-driven models, whereby dynamics are learned in the natural degrees of freedom for a given model. This provides access to structure-preserving properties such as conservation, e.g., conservation of mass in a physical system.

We take as a source of training data an analytic solution to the 1D unsteady advection-diffusion equation, with given advection and diffusion coefficients, on an interval.


To construct a finite difference model (FDM), we assume a node set $\{x_i\}$. To construct a finite volume model (FVM), we construct a set of cells $\{c_i\}$, with associated cell measures $|c_i|$ and a set of oriented boundary faces. We then assume, for a uniform timestep $\Delta t$, the Implicit Euler update for the FDM given by

$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = \mathcal{L}_{\xi}[u^{n+1}]_i.$$

To obtain conservation we use the FVM update

$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = -\frac{1}{|c_i|} \sum_{f \in \partial c_i} \int_f F_{\xi}[u^{n+1}] \cdot \mathrm{d}A.$$

For the advection-diffusion equation, in the limit of vanishing $\Delta t$ the learned operator and flux recover their exact counterparts. By construction, for any choice of hyperparameters the FVM will be locally conservative. In this sense, the physics of mass conservation is enforced strongly via the discretization, and we parameterize only an empirical closure for the fluxes; GMLS naturally enables such native flux regression.
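The conservation property of the finite-volume update can be seen in a small sketch. The code below is our own illustration, simplified to an explicit update with a hand-written advective-diffusive flux standing in for the learned closure (the paper trains an implicit update): whatever flux is supplied, the telescoping of face fluxes conserves total mass exactly.

```python
import numpy as np

def fvm_step(u, flux, dx, dt):
    """One conservative finite-volume update on a periodic 1D mesh.

    u[i] is the cell average; flux(u)[i] is the flux at the right face
    of cell i.  Regardless of the flux closure, the telescoping sum of
    face fluxes conserves the total mass sum(u) * dx exactly.
    """
    F = flux(u)                                # flux at each face i+1/2
    return u - dt / dx * (F - np.roll(F, 1))   # F_{i+1/2} - F_{i-1/2}

# illustrative closure: upwind advective flux plus diffusive flux
a, nu, dx, dt = 1.0, 0.01, 1.0 / 50, 1e-4
flux = lambda u: a * u - nu * (np.roll(u, -1) - u) / dx

u = np.exp(-100 * (np.linspace(0, 1, 50, endpoint=False) - 0.5) ** 2)
mass0 = u.sum() * dx
for _ in range(100):
    u = fvm_step(u, flux, dx, dt)
print(abs(u.sum() * dx - mass0))   # ~0, conserved to machine precision
```

In the data-driven setting the lambda would be replaced by a GMLS-Layer regressing the face fluxes, and conservation still holds for any learned parameters, exactly as stated above.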

We use a single linear GMLS-Net layer to parameterize both the FDM operator and the FVM flux, and train over a single timestep by using the analytic solution to evaluate the exact time increment in the FDM/FVM updates. We perform gradient descent to minimize the RMS of the residual with respect to $\xi$. For the FDM and FVM we use a cubic and quartic polynomial space, respectively. Recall that resolving the diffusive and advective timescales would require a far smaller timestep.

After regressing the operator, we solve the extracted scheme to advance from the initial to the final time. As Implicit Euler is unconditionally stable, one may select a large timestep at the expense of introducing numerical dissipation, "smearing" the solution. We compare the learned FDM/FVM dynamics to those obtained with standard discretizations of the exact operators. From Fig. 3 we observe that for a resolving timestep both the regressed and reference models agree well with the analytic solution. However, for large timesteps, we see that while the reference models are overly dissipative, the regressed models match the analytic solution. Inspection of the error norms in Table 1 indicates that, as expected, the classical solutions converge as $\Delta t \to 0$. The regressed FDM is consistently more accurate than the exact operator. Most interestingly, the error of the regressed FVM is roughly independent of $\Delta t$, providing an order-of-magnitude improvement in accuracy over the classical model. This preliminary result suggests that GMLS-Nets offer promise as a tool to develop non-dissipative implicit data-driven models. We suggest that this is due to the ability of GMLS-Nets to regress higher-order differential-operator corrections to the discrete-time dynamics, similar to, e.g., Lax-Friedrichs/Lax-Wendroff schemes.

2.3 Data-driven modeling from molecular dynamics.

Figure 4: GMLS-Nets can be trained with molecular-level data to infer continuum dynamical models. Data are simulations of Brownian motion with periodic boundary conditions (top-left, unconstrained trajectory). Starting with an initial density given by a Heaviside function, we construct histograms over time to estimate the particle density (upper-right, solid lines) and perform further filtering to remove sampling noise (upper-right, dashed lines). A GMLS-Net is trained using the FVM estimator described above. A predictive continuum model is obtained for the density evolution, and long-term agreement is found between the particle-level simulation (bottom, solid lines) and the inferred continuum model (bottom, dashed lines).

In science and engineering applications, there are often high-fidelity descriptions of the physics based on molecular dynamics. One would like to extract continuum descriptions to allow for predictions over longer time/length-scales or to reduce computational costs. Coarse-grained modeling efforts have similar aims while retaining molecular degrees of freedom. Each seeks lower-fidelity models able to accurately predict important statistical moments of the high-fidelity model over longer timescales. As an example, consider a mean-field continuum model derived by coarse-graining a molecular dynamics simulation. Classically, one may pursue homogenization analysis to carefully derive such a continuum model, but such techniques are typically problem-specific and can become technical. We illustrate here how GMLS-Nets can be used to extract a conservative continuum PDE model from particle-level simulation data.

Brownian motion has as its infinitesimal generator the unsteady diffusion equation karatzas1998brownian. As a basic example, we will extract a 1D diffusion equation to predict the long-term density of a cloud of particles undergoing pseudo-1D Brownian motion. We consider a periodic domain, and generate a collection of particles with initial positions drawn from a uniform distribution over one half of the domain, so that the initial density approximates a Heaviside function.
Due to this initialization and the domain geometry, the particle density is statistically one-dimensional. We estimate the density field along the first dimension by constructing a collection of uniform-width cells $\{c_i\}$ and building a histogram,

$$\rho_i = \frac{1}{N |c_i|} \sum_{k=1}^{N} \mathbb{1}_{c_i}(x_k).$$

Here $\mathbb{1}_{c_i}$ is the indicator function, taking unit value for $x_k \in c_i$ and zero otherwise.
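The histogram estimator can be sketched directly; the helper, bin count, and particle number below are our own illustrative choices. Normalizing by the particle count and cell width makes the estimate integrate to one, matching the indicator-function form above.

```python
import numpy as np

def density_histogram(x, n_bins, length=1.0):
    """Histogram estimate of a 1D particle density on [0, length].

    rho[i] = (1 / (N * |c_i|)) * sum_k indicator(x_k in cell c_i),
    so the estimated density integrates to one.
    """
    counts, edges = np.histogram(x, bins=n_bins, range=(0.0, length))
    dx = edges[1] - edges[0]
    return counts / (len(x) * dx), edges

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 0.5, size=10_000)        # Heaviside-like initial density
rho, edges = density_histogram(x, n_bins=25)
print(rho[:5])   # left half of the domain: density ~2
```

The sampling noise visible in such estimates motivates the low-pass filtering step described next, before the FVM-based flux regression is applied.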

We evolve the particle positions under 2D Brownian motion (the density remains statistically 1D as the particles evolve). In the limit of a large number of particles, the particle density satisfies a diffusion equation, and we can scale the Brownian-motion increments to obtain a unit diffusion coefficient in this limit.

As the number of particles per cell is finite, there is substantial noise in the extracted density field. We obtain a low-pass filtered density, $\bar{\rho}$, by convolving $\rho$ with a Gaussian kernel of width twice the histogram bin width.

We use the FVM scheme in the same manner as in the previous section. In particular, we regress a flux that matches the increments of the filtered density over a later time window; regression starting at the initial time is ineffective, as the density there approximates a Heaviside function. Such near-discontinuities are poorly represented with polynomials and subsequently are not expected to train well. Additionally, we train over a time interval spanning multiple steps, which in general can help mollify high-frequency temporal noise.

To show how the GMLS-Net's inferred operator can be used to make predictions, we evolve the regressed FVM for one hundred timesteps and compare to the density field obtained from the particle solver. We apply Dirichlet boundary conditions and initial conditions matching the histogram at the start of the prediction window. Again, the FVM is by construction conservative: it is easily shown that the total mass $\sum_i |c_i|\, \rho_i^n$ is constant for all $n$. A time series summarizing the evolution of density in both the particle solver and the regressed continuum model is provided in Fig. 4. While this is a basic example, it illustrates the potential of GMLS-Nets in constructing continuum-level models from molecular data. These techniques could also have an impact on data-driven approaches for numerical methods, such as projective integration schemes.

2.4 Image processing: MNIST benchmark.

Figure 5: MNIST Classification. GMLS-Layers are substituted for convolution layers in a basic two-layer architecture (Conv2d + ReLU + MaxPool + Conv2d + ReLU + MaxPool + FC). The Conv-2L test uses all Conv-Layers, Hybrid-2L has a GMLS-Layer followed by a Conv-Layer, and GMLS-2L uses all GMLS-Layers. GMLS-Nets used a polynomial basis of monomials. The filters in GMLS are by design more limited than a general Conv-Layer and correspond here to estimated derivatives of the data set (top-right). Despite these restrictions, the GMLS-Net still performs reasonably well on this basic classification task (bottom table).

While image processing is not the primary application area we intend, GMLS-Nets can be used for tasks such as classification. For the common MNIST benchmark task, we compare the use of GMLS-Nets with CNNs in Figure 5. The CNNs use small convolution kernels, zero-padding, max-pool reduction, and a fully connected (FC) linear map to a soft-max prediction over the ten digit categories. The GMLS-Nets use the same architecture, with the convolutions replaced by GMLS-Layers using a polynomial basis of monomials.

We find that despite the features extracted by GMLS-Nets being more restricted than those of a general CNN, there is only a modest decrease in accuracy on the basic MNIST task. We do expect larger differences on more sophisticated image tasks. This basic test illustrates how GMLS-Nets with a polynomial basis extract features closely associated with taking derivatives of the data field. We emphasize that for other choices of the basis $V$ and sampling functionals $\lambda_j$, other features may be extracted. For polynomials with terms in dictionary order, coefficients are shown in Fig. 5. Notice the clear trends and directional dependence on increases and decreases in the image intensity, indicating derivative-like responses. Given the history of PDE modeling, we expect such derivative-based features extracted by GMLS-Nets to be useful for many classification and regression tasks arising in the sciences and engineering.

2.5 GMLS-Net on unstructured fluid simulation data.

We consider the application of GMLS-Nets to unstructured data sets representative of scientific machine learning applications. Many hydrodynamic flows can be experimentally characterized using velocimetry measurements. While velocity fields can be estimated even for complex geometries, such measurements often do not give direct access to fields such as the pressure. However, integrated quantities of interest, such as the drag, are fundamental for performing engineering analysis and depend upon both the velocity and the pressure. This limits the level of characterization that can be accomplished using velocimetry data alone. We construct GMLS-Net architectures that allow for prediction of the drag directly from unstructured fluid velocity data, without any direct measurement of the pressure.

We illustrate the ideas using flow past a cylinder of radius $R$. This provides a well-studied canonical problem whose drag is fully characterized experimentally in terms of the Reynolds number, $Re = \rho U D / \mu$ with diameter $D = 2R$. For incompressible flow past a cylinder, one may apply dimensional analysis to relate the drag to the Reynolds number via the drag coefficient $C_d$:

$$C_d = \frac{F_d}{\tfrac{1}{2}\, \rho\, U^2 A}.$$

Here $U$ is the free-stream velocity, $A$ is the frontal area of the cylinder, and $F_d$ is the drag force. Such analysis requires, in practice, engineering judgement to identify the relevant dimensionless groups. After such considerations, this allows one to collapse the relevant experimental parameters onto a single curve $C_d(Re)$.
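These standard relations can be written out directly; the helper functions and the numerical values below are purely illustrative (they are not data from the paper), with the frontal area taken per unit span for the 2D cylinder.

```python
def reynolds_number(rho, U, radius, mu):
    """Re = rho * U * D / mu, with cylinder diameter D = 2 * radius."""
    return rho * U * 2.0 * radius / mu

def drag_coefficient(F_d, rho, U, radius, span=1.0):
    """C_d = F_d / (1/2 * rho * U^2 * A), frontal area A = 2 * radius * span."""
    A = 2.0 * radius * span
    return F_d / (0.5 * rho * U**2 * A)

# hypothetical values for illustration only
Re = reynolds_number(rho=1.0, U=2.0, radius=0.5, mu=1e-3)   # ~2000
Cd = drag_coefficient(F_d=1.2, rho=1.0, U=2.0, radius=0.5)  # ~0.6
```

The point of the experiment below is that the network receives only the velocity samples, never $\mu$ or $U$ as explicit features, yet recovers the collapse onto the $C_d(Re)$ curve.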

Figure 6: GMLS-Nets are trained on a CFD data set of flow velocity fields. Top: Training set of the drag coefficient plotted as a function of Reynolds number (small black dots). The GMLS-Net predictions for a test set (large red dots). Bottom: Flow velocity fields corresponding to the smallest (left) and largest (right) Reynolds numbers in the test set.

For the purposes of training a GMLS-Net, we construct a synthetic data set by solving the Reynolds-averaged Navier-Stokes (RANS) equations with a steady-state finite volume code, varying the free-stream velocity and viscosity over a range of Reynolds numbers. We consider a turbulence model with inlet conditions consistent with a prescribed turbulence intensity and a mixing length corresponding to the inlet size. From the solution, we extract the velocity field at cell centers to obtain an unstructured point cloud $X_h$. We compute $C_d$ directly from the simulations. We then obtain an unstructured data set of velocity features over $X_h$, with associated labels $C_d$. We emphasize that although the velocity and viscosity are used to generate the data, they are not included as features, and the Reynolds number is therefore hidden.

We remark that this turbulence model is well known to perform poorly for flows with strong curvature, such as recirculation zones. In our proof-of-concept demonstration, we treat the RANS solution as ground truth for simplicity, despite its shortcomings, and acknowledge that a more physical study would consider ensemble averages of LES/DNS data in 3D. We aim here only to illustrate the potential utility of GMLS-Nets in a scientific setting for processing such unstructured data sets.

As an architecture, we provide two input channels for the two velocity components to three stacked GMLS-Layers. The first layer acts on the cell centers, and intermediate pooling layers down-sample to random subsets of the point cloud. We conclude with a linear activation layer to extract the drag coefficient as a single scalar output. We randomly split the samples into a training set and a test set. We quantify performance using the root-mean-square (RMS) error, which we find to be small on the test set.

The excellent predictive capability demonstrated in Fig. 6 highlights GMLS-Nets' ability to provide an effective means of regressing engineering quantities of interest directly from velocity flow data; the GMLS-Net architecture is able to identify a latent low-dimensional parameter space that is typically found by hand using dimensional analysis. This similarity relationship across Reynolds numbers is identified despite the network not having direct access to the viscosity parameter. These initial results indicate some of the potential of GMLS-Nets in processing unstructured data sets for scientific machine learning applications.

3 Conclusions

We have introduced GMLS-Nets for processing scattered data sets, leveraging the framework of GMLS. GMLS-Nets allow for generalizing convolutional networks to scattered data, while still benefiting from underlying translational invariances and weight sharing. The GMLS-Layers provide feature extractors that are particularly natural for regressing differential operators, developing dynamical models, and predicting quantities of interest associated with physical systems. GMLS-Nets were demonstrated to be capable of obtaining dynamical models for long-time integration beyond the limits of traditional CFL conditions, of making predictions of the density evolution of molecular systems, and of predicting quantities of interest in fluid mechanics directly from flow data. These initial results indicate some promising capabilities of GMLS-Nets for use in data-driven modeling in scientific machine learning applications.

Appendix A Derivation of Gradients of the Operator $\tau_{\tilde{x},\xi}$

A.1 Parameters of the operator $\tau_{\tilde{x},\xi}$

We give here some details on the derivation of the gradients for the learnable GMLS operator and intermediate steps. This can be used in implementations for back-propagation and other applications.

GMLS works by mapping data to a local polynomial fit in a region around each reference point. To find the optimal polynomial fit to the function $u$, we consider the case of point-evaluation sampling functionals and a weight function $\omega$. In a region around a reference point $\tilde{x}$, the optimization problem can be expressed parametrically in terms of the coefficients $a$ as

$$a^* = \underset{a}{\operatorname{argmin}} \; \sum_j \big(u(x_j) - \Phi(x_j)^\intercal a\big)^2\, \omega(x_j, \tilde{x}).$$

We write $\Phi(x)$ for short, where the basis elements in fact depend on $\tilde{x}$; typically, for polynomials we use monomials centered at $\tilde{x}$. This is important in the case we want to take derivatives with respect to the input values of the expressions.

We can compute the derivative in $a$ to obtain the stationarity condition

$$\frac{\partial}{\partial a} \sum_j \big(u(x_j) - \Phi(x_j)^\intercal a\big)^2\, \omega(x_j, \tilde{x}) = 0.$$

This implies that we can rewrite the coefficients as the solution of the linear system

$$\big(\Lambda^\intercal W \Lambda\big)\, a = \Lambda^\intercal W u.$$

This is sometimes written more explicitly for analysis and computations as

$$a = \big(\Lambda^\intercal W \Lambda\big)^{-1} \Lambda^\intercal W u, \qquad \Lambda_{ji} = \phi_i(x_j), \quad W = \operatorname{diag}\big(\omega(x_j, \tilde{x})\big).$$
We can represent a general linear operator using the representation

$$\tau_{\tilde{x},\xi}[u] = \xi(\tilde{x})^\intercal a(u).$$

Typically, the weights will not be spatially dependent. Throughout, we shall denote them simply as $\xi$ and assume there is no spatial dependence, unless otherwise indicated.

A.2 Derivatives of $\tau_{\tilde{x},\xi}$ in the data and parameters

The derivative of the coefficient vector in the data $u$ is given by the linear map

$$\frac{\partial a}{\partial u} = \big(\Lambda^\intercal W \Lambda\big)^{-1} \Lambda^\intercal W.$$

In the notation, we denote the basis $\Phi(x)$, where the basis elements in fact can depend on the particular reference point $\tilde{x}$.
For the linear operator $\tau_{\tilde{x},\xi}[u] = \xi(\tilde{x})^\intercal a(u)$, the derivatives follow from the chain rule. The full derivative in the data can be expressed as

$$\frac{\partial \tau_{\tilde{x},\xi}[u]}{\partial u} = \xi(\tilde{x})^\intercal \big(\Lambda^\intercal W \Lambda\big)^{-1} \Lambda^\intercal W.$$

In the constant case $\xi(\tilde{x}) = \xi$, the derivative simplifies accordingly. The derivatives of the other terms follow more readily. For the derivative of the linear operator in the coefficients $a$, we have $\partial \tau / \partial a = \xi^\intercal$; for the derivatives in the mapping coefficient values $\xi$, we have $\partial \tau / \partial \xi = a(u)^\intercal$.

In the case of nonlinear operators there are further dependencies beyond just $a$ and $\xi$, and less explicit expressions. For example, when using MLPs there may be a hierarchy of trainable weights. The derivatives of the non-linear operator $q_{\xi}(a)$ can then be obtained by back-propagation. Similarly, given the generality of $q_{\xi}$, for derivatives in the data and sample locations one can use back-propagation methods together with the chain rule and the expressions derived in the linear case for the coefficient dependencies.