1 Introduction
Over the past few years, there has been a revolution in the successful application of Artificial Neural Networks (ANN), also commonly referred to as Deep Neural Networks (DNN) and Deep Learning (DL), in various fields including image classification, handwriting recognition, speech recognition and translation, and computer vision. These ANN approaches have led to a sea change in the performance of search engines, autonomous driving, e-commerce, and photography (see
Bishop (2006); LeCun et al. (2015); Goodfellow et al. (2016) for a review). In engineering and science, ANNs have been applied to an increasing number of areas, including geosciences Yoon et al. (2015); Bergen et al. (2019); DeVries et al. (2018); Kong et al. (2018); Ren et al. (2019), materials science Pilania et al. (2013); Butler et al. (2018); Shi et al. (2019); Brunton and Kutz (2019), fluid mechanics Brenner et al. (2019); Brunton et al. (2020), genetics Libbrecht and Noble (2015), and infrastructure health monitoring Rafiei and Adeli (2017); Sen et al. (2019), to name a few examples. In the solid and geomechanics community, deep learning has been used primarily for material modeling, in an attempt to replace classical constitutive models with ANNs Ghaboussi and Sidarta (1998); Kalidindi et al. (2011); Mozaffar et al. (2019). In these applications, training of the network, i.e., evaluation of the network parameters, is carried out by minimizing the norm of the distance between the network output (prediction) and the true output (training data). In this paper, we will refer to ANNs trained in this way as "data-driven." A different class of ANNs, known as Physics-Informed Neural Networks (PINN), was introduced recently Rudy et al. (2019); Raissi et al. (2019); Han et al. (2018); Bar-Sinai et al. (2019); Zhu et al. (2019)
. This new concept of ANNs was developed to endow the network model with known equations that govern the physics of a system. The training of PINNs is performed with a cost function that, in addition to data, includes the governing equations, initial and boundary conditions. This architecture can be used for solution and discovery (finding parameters) of systems of ordinary differential equations (ODEs) and partial differential equations (PDEs). While solving ODEs and PDEs with ANNs is not a new topic, e.g.,
Meade and Fernandez (1994); Lagaris et al. (1998, 2000), the success of these new studies can be broadly attributed to: (1) the choice of network architecture, i.e., the set of inputs and outputs of the ANN, so that one can impose governing equations on the network; (2) algorithmic advances, including graph-based automatic differentiation for accurate differentiation of ANN functionals and for error backpropagation; and (3) the availability of advanced machine-learning software with CPU and GPU parallel processing capabilities, including Theano
Bergstra et al. (2010) and TensorFlow
Abadi et al. (2016). This framework has been used for solution and discovery of the Schrödinger, Allen–Cahn, and Navier–Stokes equations Raissi et al. (2019); Rudy et al. (2019). It has also been used for the solution of high-dimensional stochastic PDEs Han et al. (2018). As pointed out in Han et al. (2018)
, this approach can be considered a class of Reinforcement Learning
Lange et al. (2012), where learning consists of maximizing an incentive or minimizing a loss rather than training directly on data. If the network prediction does not satisfy a governing equation, the cost increases, and therefore the learning traverses a path that minimizes that cost. Here, we focus on the novel application of PINNs to solution and discovery in solid mechanics. We focus on linear elasticity, but the proposed framework may be applied to other linear and nonlinear problems of solid mechanics. Since the parameters of the governing PDEs can also be defined as trainable parameters, the framework inherently allows us to perform parameter identification (model inversion). We validate the framework on synthetic data generated from low-order and high-order Finite Element Methods (FEM) and from Isogeometric Analysis (IGA) Hughes et al. (2005); Cottrell et al. (2009). These datasets satisfy the governing equations with different orders of accuracy, where the error can be considered as noise in the data. We find that the training converges faster on more accurate datasets, pointing to the importance of higher-order numerical methods for pre-training ANNs. We also find that, if the data is preprocessed properly, the training converges to the correct solution and correct parameters even on data generated with a coarse mesh and low-order FEM, an important result that illustrates the robustness of the proposed approach. Finally, we find that, due to the imposition of the physics constraints, the training converges on a very sparse dataset, a crucial property in practice given that the installation of a dense network of sensors can be very costly.
Parameter estimation (identification) of complex models is a challenging task that requires a large number of forward simulations, depending on model complexity and the number of parameters. As a result, most inversion techniques have been applied to simplified models. The use of PINNs, however, allows us to perform identification simultaneously with fitting the ANN model on data
Raissi et al. (2019). This property highlights the potential of this approach compared with classical methods. We explore the application of PINN models to the identification of multiple datasets generated with different parameters. Similar to transfer learning, where a pretrained model is used as the initial state of the network Taylor and Stone (2009), we perform retraining on new datasets, starting from a network previously trained on a different dataset (with different parameters). We find that the retraining and identification on other datasets take far less time. Since a successfully trained PINN model must also satisfy the physics constraints, it is in effect a surrogate model that can be used for extrapolation on unexplored data. To test this property, we train a network on four datasets with different parameters and then test it on a wide range of new parameter sets, and find that the results remain relatively accurate. This property points to the applicability of PINN models to sensitivity analysis, where classical approaches typically require an exceedingly large number of forward simulations.

2 Physics-Informed Neural Networks: Linear Elasticity
In this section, we review the equations of linear elastostatics with emphasis on PINN implementation.
2.1 Linear elasticity
The equations expressing momentum balance, the constitutive model and the kinematic relations are, respectively,

(1)  $\sigma_{ij,j} + f_i = 0, \qquad \sigma_{ij} = \lambda\,\delta_{ij}\,\varepsilon_{kk} + 2\mu\,\varepsilon_{ij}, \qquad \varepsilon_{ij} = \tfrac{1}{2}\left(u_{i,j} + u_{j,i}\right).$

Here, $\sigma_{ij}$ denotes the Cauchy stress tensor. For the two-dimensional problems considered here, $i, j = 1, 2$ (or $x, y$). We use the summation convention, and a subscript comma denotes a partial derivative. The function $f_i$ denotes a body force, $u_i$ represents the displacements, $\varepsilon_{ij}$ is the infinitesimal strain tensor, and $\delta_{ij}$ is the Kronecker delta. The Lamé parameters $\lambda$ and $\mu$ are the quantities to be inferred using PINN.

2.2 Introduction to Physics-Informed Neural Networks
In this section, we provide an overview of the Physics-Informed Neural Network (PINN) architecture, with emphasis on its application to model inversion. Let $\mathcal{N}$ be an $L$-layer neural network with input vector $\mathbf{x}$, output vector $\mathbf{y}$, and network parameters $\mathbf{W}$, $\mathbf{b}$. This network is a feed-forward network, meaning that each layer creates data for the next layer through the following nested transformations:

(2)  $\mathbf{z}^{l} = \sigma^{l}\left(\mathbf{W}^{l}\,\mathbf{z}^{l-1} + \mathbf{b}^{l}\right), \qquad l = 1, \dots, L,$

where $\mathbf{z}^{0} = \mathbf{x}$ and $\mathbf{z}^{L} = \mathbf{y}$ are the inputs and outputs of the model, and $\mathbf{W}^{l}$, $\mathbf{b}^{l}$ are the parameters of each layer $l$, known as weights and biases, respectively. The functions $\sigma^{l}$ are called activation functions and make the network nonlinear with respect to the inputs. For instance, an ANN functional of some field variable, such as displacement $u$, with three hidden layers and with $\tanh$ as the activation function for all layers except the last can be written as

(3)  $\hat{u}(\mathbf{x}) = \mathbf{W}^{4}\tanh\left(\mathbf{W}^{3}\tanh\left(\mathbf{W}^{2}\tanh\left(\mathbf{W}^{1}\mathbf{x} + \mathbf{b}^{1}\right) + \mathbf{b}^{2}\right) + \mathbf{b}^{3}\right) + \mathbf{b}^{4}.$
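As a concrete illustration, the nested transformations of Eqs. (2)–(3) can be sketched in NumPy. This is a minimal stand-alone sketch; the layer widths and random initialization below are hypothetical, not the networks used later in the paper.

```python
import numpy as np

def mlp_tanh(x, weights, biases):
    """Feed-forward network of Eq. (3): tanh hidden layers, linear output layer.

    x : (n_features,) input vector
    weights, biases : lists of weight matrices and bias vectors, one pair per layer
    """
    z = x
    for W, b in zip(weights[:-1], biases[:-1]):
        z = np.tanh(W @ z + b)            # hidden layers: nonlinear tanh activation
    return weights[-1] @ z + biases[-1]   # last layer is linear

# a tiny instance: 2 inputs (x, y), three hidden layers of width 5, 1 output u
rng = np.random.default_rng(0)
shapes = [(5, 2), (5, 5), (5, 5), (1, 5)]
weights = [rng.standard_normal(s) * 0.1 for s in shapes]
biases = [np.zeros(s[0]) for s in shapes]
u = mlp_tanh(np.array([0.3, 0.7]), weights, biases)
```

With all weights and biases set to zero the output is exactly zero, which makes the nesting easy to check by hand.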
This model can be considered as an approximate solution for the field variable of a partial differential equation.
In the PINN architecture, the network inputs (also known as features) are the space and time variables, i.e., $(x, y, z, t)$ in Cartesian coordinates, which makes it meaningful to differentiate the network's output with respect to any of the input variables. Classical implementations based on finite-difference approximations are not accurate when applied to deep networks (see Baydin et al. (2017) for a review). Thanks to the modern graph-based implementation of the feed-forward network (e.g., Theano Bergstra et al. (2010), TensorFlow Abadi et al. (2016), MXNet Chen et al. (2015)), this can be carried out using automatic differentiation at machine precision, therefore allowing for many hidden layers to represent nonlinear response. Hence, the evaluation of a partial differential operator $\mathcal{P}$ acting on $\hat{u}$ is achieved naturally with graph-based differentiation, and can then be incorporated in the cost function along with initial and boundary conditions as:
(4)  $\mathcal{L} = \left|u - u^{*}\right|_{\partial\Omega} + \left|u - u_{0}^{*}\right|_{t=t_{0}} + \left|\mathcal{P}u - p^{*}\right|_{\Omega},$

where $\partial\Omega$ is the domain boundary, $u_{0}^{*}$ is the initial condition at $t = t_{0}$, and $p^{*}$ indicates the expected (true) value for the differential relation at any given training point. The norm $\left|\square\right|_{\Omega}$ of a generic quantity $\square$ defined in $\Omega$ denotes $\left|\square\right|_{\Omega} = \frac{1}{N}\sum_{i=1}^{N}\left(\square(\mathbf{x}_{i})\right)^{2}$, where the $\mathbf{x}_{i}$'s are the spatial points where the data is known. The dataset is then fed to the neural network and an optimization is performed to evaluate all the parameters of the model, including the parameters of the PDE.
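The machine-precision property of automatic differentiation noted above can be contrasted with finite differences using a toy implementation. The sketch below is a minimal forward-mode (dual-number) automatic differentiation; it is only an illustration of the principle, as graph-based frameworks such as TensorFlow instead apply reverse-mode differentiation over the computational graph.

```python
import math

class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers:
    carries a value and the derivative of that value w.r.t. one chosen input."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def tanh(z):
    t = math.tanh(z.val)
    return Dual(t, (1.0 - t * t) * z.dot)   # chain rule: d tanh = (1 - tanh^2) dz

# derivative of f(x) = tanh(3x) + x^2 at x = 0.3, exact to machine precision
x = Dual(0.3, 1.0)                          # seed dx/dx = 1
f = tanh(3 * x) + x * x
analytic = 3 * (1 - math.tanh(0.9) ** 2) + 2 * 0.3
```

Unlike a finite-difference quotient, whose accuracy is limited by the step size and floating-point cancellation, the propagated derivative `f.dot` agrees with the analytic value to machine precision.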
2.3 Training PINN
Different algorithms can be used to train a neural network. Among the choices available in Keras
Chollet and others (2015), we use the Adam optimization scheme Kingma and Ba (2014), which we have found to outperform other choices, such as Adagrad Duchi et al. (2011), for this task. Several algorithmic parameters affect the rate of convergence of the network training. Here we adopt the terminology in Keras Chollet and others (2015), but the terminology in other modern machine-learning packages is similar. The algorithmic parameters include batch size, epochs, shuffle, and patience. The batch size controls the number of samples from a dataset used to evaluate one gradient update. A batch size of 1 would be associated with a full stochastic gradient descent optimization. One epoch is one round of training on a dataset. If a dataset is shuffled, then a new round of training (epoch) results in an updated parameter set because the batched gradients are evaluated on different batches. It is common to reshuffle a dataset many times and perform the backpropagation updates, mainly because we are dealing with nonconvex optimization and we need to test the training from different starting points and in different directions to build confidence in the parameters evaluated from minimization of the cost function on a dataset. The optimizer may, however, stop earlier if it finds that new rounds of epochs are not improving the cost function. That is where the last keyword, patience, comes in: patience is the parameter that controls when the optimizer should stop the training.
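The role of patience can be sketched with a minimal early-stopping loop in pure Python. The per-epoch loss values below are a hypothetical sequence standing in for what a real optimizer would produce; this is an illustration of the stopping rule, not Keras's actual implementation.

```python
def train_with_patience(losses, patience):
    """Stop once `patience` consecutive epochs pass without improving the best loss.

    losses : iterable of per-epoch cost values
    Returns the epoch at which training stopped and the best loss seen.
    """
    best, wait = float("inf"), 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, wait = loss, 0       # improvement: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch, best     # early stop: patience exhausted
    return len(losses) - 1, best

# losses improve, then stagnate; epochs 3, 4, 5 exhaust patience=3 at epoch 5
stop_epoch, best = train_with_patience([1.0, 0.8, 0.5, 0.6, 0.55, 0.52, 0.51], 3)
```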
There are three ways to train the network: (1) generate a sufficiently large number of datasets and perform a one-epoch training on each dataset, (2) work on one dataset over many epochs by reshuffling the data, and (3) a combination of these. When dealing with synthetic data, all approaches are feasible to pursue. However, strategy (1) is usually impossible to apply in practice, especially in space, where sensors are installed at fixed and limited locations. In the original work on PINN Raissi et al. (2019), approach (1) was used to train the model, with datasets generated on random space discretizations at each epoch. Here, we follow approach (2) so as to use training data that we could realistically have in practice. For all examples, unless otherwise noted, we use a batch size of 64, a limit of 10,000 epochs with shuffling, and a patience of 500 to perform the training.
3 Illustrative example and discussions
In this section, we use the PINN architecture on an illustrative linear elasticity problem.
3.1 Problem setup
To illustrate the application of the proposed approach, we consider an elastic plane-strain problem on the unit square (Fig. 1), subject to the boundary conditions depicted in the figure. The body forces are:

(5)  $f_x = \lambda\left[4\pi^2\cos(2\pi x)\sin(\pi y) - \pi\cos(\pi x)\,Q\,y^3\right] + \mu\left[9\pi^2\cos(2\pi x)\sin(\pi y) - \pi\cos(\pi x)\,Q\,y^3\right],$
     $f_y = \lambda\left[-3\sin(\pi x)\,Q\,y^2 + 2\pi^2\sin(2\pi x)\cos(\pi y)\right] + \mu\left[-6\sin(\pi x)\,Q\,y^2 + 2\pi^2\sin(2\pi x)\cos(\pi y) + \pi^2\sin(\pi x)\,Q\,y^4/4\right].$

The exact solution of this problem is

(6)  $u_x(x, y) = \cos(2\pi x)\sin(\pi y),$
(7)  $u_y(x, y) = \sin(\pi x)\,Q\,y^4/4,$

which is plotted in Fig. 2, for parameter values $\lambda = 1$, $\mu = 0.5$, and $Q = 4$.
3.2 Neural Network Setup
Due to the symmetry of the stress and strain tensors, the quantities of interest for a two-dimensional problem are $u_x$, $u_y$, $\varepsilon_{xx}$, $\varepsilon_{yy}$, $\varepsilon_{xy}$, $\sigma_{xx}$, $\sigma_{yy}$, $\sigma_{xy}$. There are a few potential architectures that we can use to design our network. The input features (variables) are the spatial coordinates $(x, y)$ for all the network choices. For the outputs, a potential design is to have a densely connected network with two outputs, $(u_x, u_y)$. Another option is to have two densely connected independent networks with only one output each, associated with $u_x$ and $u_y$, respectively (Fig. 3). Then, the remaining quantities of interest, i.e., the strains and stresses, can be obtained through differentiation. Alternatively, we may have the displacements and stresses as outputs of one network or of multiple independent networks. As can be seen from Fig. 3, these choices affect the number of parameters of the network and how different quantities of interest are correlated. Equation (3) shows that the feed-forward neural network imposes a special functional form on the network that may not necessarily follow the cross-dependence between variables in the governing equations (1). Our results show that using separate networks for each variable is a far more effective strategy. Therefore, we choose independent ANNs for each variable as our architecture (see Fig. 4), i.e.,

(8)  $u_x \approx \hat{u}_x(x, y), \quad u_y \approx \hat{u}_y(x, y), \quad \sigma_{xx} \approx \hat{\sigma}_{xx}(x, y), \quad \sigma_{yy} \approx \hat{\sigma}_{yy}(x, y), \quad \sigma_{xy} \approx \hat{\sigma}_{xy}(x, y).$
The cost function is defined as

(9)  $\mathcal{L} = \left|u_x - u_x^{*}\right| + \left|u_y - u_y^{*}\right| + \left|\sigma_{xx} - \sigma_{xx}^{*}\right| + \left|\sigma_{yy} - \sigma_{yy}^{*}\right| + \left|\sigma_{xy} - \sigma_{xy}^{*}\right| + \left|\sigma_{xx,x} + \sigma_{xy,y} + f_x^{*}\right| + \left|\sigma_{xy,x} + \sigma_{yy,y} + f_y^{*}\right| + \left|(\lambda + 2\mu)\,\varepsilon_{xx} + \lambda\,\varepsilon_{yy} - \sigma_{xx}\right| + \left|(\lambda + 2\mu)\,\varepsilon_{yy} + \lambda\,\varepsilon_{xx} - \sigma_{yy}\right| + \left|2\mu\,\varepsilon_{xy} - \sigma_{xy}\right|.$

The quantities with asterisks represent given data. We will train the networks so that their output values are as close as possible to the data, which may be real field data or, in this paper, synthetic data from the exact solution to the problem or the result of a high-fidelity simulation. The values without an asterisk represent either direct outputs of the networks (e.g., $u_x$ or $\sigma_{xy}$; see Eq. (8)) or quantities obtained through automatic graph-based differentiation Baydin et al. (2017) of the network outputs (e.g., $\varepsilon_{xx}$ or $\sigma_{xx,x}$). In Eq. (9), $f_x^{*}$ and $f_y^{*}$ represent data on the body forces, obtained as $f_i^{*} = -\sigma_{ij,j}^{*}$.
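Evaluating a cost of this form can be sketched in NumPy, taking the $\left|\square\right|$ norm as the mean of squares over the sample points, as in Eq. (4). The dictionary layout and key names below are hypothetical conveniences for the sketch; starred entries are data, unstarred entries stand in for network outputs and their graph-based derivatives.

```python
import numpy as np

def sq_norm(q):
    """Discrete norm used in the cost: mean of squared values over the data points."""
    return np.mean(np.asarray(q, dtype=float) ** 2)

def elasticity_cost(fields, lam, mu):
    """Eq. (9)-style cost from a dict of sampled fields (hypothetical layout)."""
    d = {k: np.asarray(v, dtype=float) for k, v in fields.items()}
    data_terms = (sq_norm(d["ux"] - d["ux*"]) + sq_norm(d["uy"] - d["uy*"])
                  + sq_norm(d["sxx"] - d["sxx*"]) + sq_norm(d["syy"] - d["syy*"])
                  + sq_norm(d["sxy"] - d["sxy*"]))
    momentum = (sq_norm(d["sxx,x"] + d["sxy,y"] + d["fx*"])       # sigma_ij,j + f_i = 0
                + sq_norm(d["sxy,x"] + d["syy,y"] + d["fy*"]))
    constitutive = (sq_norm((lam + 2 * mu) * d["exx"] + lam * d["eyy"] - d["sxx"])
                    + sq_norm((lam + 2 * mu) * d["eyy"] + lam * d["exx"] - d["syy"])
                    + sq_norm(2 * mu * d["exy"] - d["sxy"]))
    return data_terms + momentum + constitutive
```

A useful sanity check is that the cost vanishes when the sampled fields exactly match the data and satisfy both the momentum balance and the constitutive law.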
The different terms in the cost function represent measures of the error in the displacement and stress fields, the momentum balance, and the constitutive law. This cost function can be used for deep-learning-based solution of PDEs as well as for identification of the model parameters. For the solution of PDEs, $\lambda$ and $\mu$ are treated as fixed numbers in the network. For parameter identification, $\lambda$ and $\mu$ are treated as network parameters that change during the training phase (see Fig. 4). In TensorFlow Abadi et al. (2016), this can be accomplished by defining $\lambda$ and $\mu$ as Constant (PDE solution) or Variable (parameter identification) objects, respectively. We set up the problem using the SciANN Haghighat and Juanes (2019) framework, a high-level Keras Chollet and others (2015) wrapper for physics-informed deep learning and scientific computations. Experimenting with all of the previously mentioned network choices can be done in SciANN with minimal coding.¹

¹ The code for some of the examples solved here is available at: https://github.com/sciann/examples.
3.3 Identification of model parameters: PINN trained on the exact solution
Here, we use PINN to identify the model parameters $\lambda$ and $\mu$. Our data corresponds to the exact solution with parameter values $\lambda = 1$, $\mu = 0.5$, and $Q = 4$. Our default dataset consists of $100 \times 100$ sample points, uniformly distributed. We study how the accuracy and the efficiency of the identification process depend on the architecture and functional form of the network; the available data; and whether we use one or several independent networks for the different quantities of interest. To study the impact of the architecture and functional form of the ANN, we use four different networks with either 5 or 10 hidden layers, and either 20 or 50 neurons per layer; see Table 1. The role of the network functional form is studied by comparing the performance of the two most widely used activation functions, i.e., $\tanh$ and ReLU, where $\mathrm{ReLU}(x) = \max(0, x)$ Bishop (2006). Studying the impact of the available data on the identification process is crucial because we are interested in identifying the model parameters with as little data as possible. We undertake the analysis considering two scenarios:

Stress-complete data: In this case, we have data at a set of points for the displacements and the stresses (which, for the synthetic data, follow from the first-order derivatives of the displacements), that is, $u_x$, $u_y$, $\sigma_{xx}$, $\sigma_{yy}$, $\sigma_{xy}$. Because our cost function (9) also involves data that depends on the stress derivatives ($f_x^{*}$ and $f_y^{*}$), this approach relies on an additional algorithmic procedure for differentiation of the stresses. In this section we compute the stress derivatives using second-order central finite-difference approximations.

Force-complete data: In this scenario, we have data at a set of points for the displacements, their first derivatives, and their second derivatives. The availability of the displacement second derivatives allows us to determine data for the body forces $f_x^{*}$ and $f_y^{*}$ using the momentum balance equation, without resorting to any differentiation algorithm.
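The force-complete construction can be sketched numerically: assuming the plane-strain relations of Eq. (1), the body forces follow from second derivatives of the displacements. The polynomial displacement field below is a hypothetical stand-in with a known closed form, not the solution of Section 3.1; central differences are exact for it, which makes the sketch easy to verify.

```python
import numpy as np

def body_forces(ux, uy, h, lam, mu):
    """Body forces from the momentum balance, f_i = -sigma_ij,j, with plane-strain
    stresses expressed through displacement derivatives (Eq. (1)).

    ux, uy : displacement components sampled on a uniform grid with spacing h
    """
    d = lambda f, a: np.gradient(f, h, axis=a, edge_order=2)  # central differences
    ux_x, ux_y = d(ux, 0), d(ux, 1)
    uy_x, uy_y = d(uy, 0), d(uy, 1)
    sxx = (lam + 2 * mu) * ux_x + lam * uy_y
    syy = (lam + 2 * mu) * uy_y + lam * ux_x
    sxy = mu * (ux_y + uy_x)
    fx = -(d(sxx, 0) + d(sxy, 1))
    fy = -(d(sxy, 0) + d(syy, 1))
    return fx, fy

# check on a field with a known closed form: ux = x^2 y, uy = x y^2,
# for which fx = -2 y (2 lam + 3 mu) and fy = -2 x (2 lam + 3 mu)
x = np.linspace(0.0, 1.0, 41)
X, Y = np.meshgrid(x, x, indexing="ij")
fx, fy = body_forces(X**2 * Y, X * Y**2, x[1] - x[0], lam=1.0, mu=0.5)
```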
Network  Layers  Neurons  Parameters (independent networks)  Parameters (single network)
i        5       20       12,336                             1,893
ii       5       50       72,816                             10,713
iii      10      20       27,036                             3,993
iv       10      50       162,066                            23,463
In Fig. 5 we compare the evolution of the cost function for stresscomplete data (Fig. 5a) and forcecomplete data (Fig. 5b). Both figures show a comparison of the four network architectures that we study; see Table 1. We find that training on the forcecomplete data performs slightly better (lower loss) at a given epoch.
The convergence of the model identification is shown in Fig. 6. The training converges to the true values of the parameters, i.e., $\lambda = 1$ and $\mu = 0.5$, for all cases. We find that the optimization converges very quickly on the parameters, while it takes far more epochs to fit the network on the field variables. Additionally, we observe that deeper networks produce less accurate parameters. We attribute the loss of accuracy as we increase the ANN complexity to overfitting Bishop (2006); Goodfellow et al. (2016). Convergence of the individual terms in the loss function (9) is shown in Fig. 7 for Net ii (see Table 1). We find that all terms in the loss, i.e., data-driven and physics-informed, show oscillations during the optimization. Therefore, no individual term is solely responsible for the oscillations in the total loss (Fig. 5). The impact of the ANN functional form can be examined by comparing the data in Figs. 5b and 8a, which show the evolution of the cost function using the activation functions $\tanh$ and ReLU, respectively. The function ReLU has discontinuous derivatives, which explains its poor performance for physics-informed deep learning, whose effectiveness relies heavily on accurate evaluation of derivatives.
A comparison of Figs. 5b and 8b shows that using independent networks for displacements and stresses is more effective than using a single network. We find that the single network leads to less accurate elastic parameters, because the cross-dependencies of the network outputs through the kinematic and constitutive relations may not be adequately represented by the activation functions.
Fig. 9 analyzes the effect of the availability of data on the training. We computed the exact solution on four uniform grids of decreasing resolution and carried out the parameter identification process on each. We performed the comparison using force-complete data and a network with 10 layers and 20 neurons per layer (network iii). The training process found good approximations to the parameters for all cases, including the coarsest grid with only 100 data points. The results show that fewer data points require many more epoch cycles, but the overall computational cost is far lower.
3.4 PINN models trained on the FEM solution
Here, we generate synthetic data from FEM solutions, and then perform the training. The domain is discretized with a structured mesh of quadrilateral elements. Four datasets are prepared using bilinear, biquadratic, bicubic, and biquartic Lagrange elements with the commercial FEM software COMSOL. We evaluate the FEM displacements, strains, stresses and stress derivatives at the center of each element. Then, we map the data to a uniform training grid using SciPy's griddata module with cubic interpolation. This step is performed as a data-augmentation procedure, which is a common practice in machine learning Bishop (2006). To analyze the importance of data satisfying the governing equations of the system, we focus our attention on network ii and study the cases with stress-complete and force-complete data. The results of training are presented in Fig. 10. As can be seen there, the bilinear element performs poorly in the learning and identification. The performance of training on the other elements is good, comparable to that using the analytical solution. Further analysis shows that this is indeed expected, as FEM differentiation of bilinear elements provides a poor approximation of the body forces. The error in the body forces is shown in Fig. 11, which indicates a high error for bilinear elements. We conclude that standard bilinear elements are not suitable for generating numerical data for deep learning in this problem. Fig. 10(a2) confirms that preprocessing the data can remove the error that was present in the numerical solution with bilinear elements, and enable the optimization to successfully complete the identification.
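The mapping step described above can be sketched with SciPy's griddata. The 40 x 40 layout of sample locations and the linear test field below are hypothetical stand-ins for the actual FEM output; a linear field is reproduced essentially exactly by the cubic (Clough–Tocher) interpolant, which makes the sketch easy to verify.

```python
import numpy as np
from scipy.interpolate import griddata

# sample locations standing in for element centers of a hypothetical 40 x 40 mesh
xc = (np.arange(40) + 0.5) / 40.0
Xc, Yc = np.meshgrid(xc, xc, indexing="ij")
points = np.column_stack([Xc.ravel(), Yc.ravel()])
values = 2.0 * points[:, 0] + 3.0 * points[:, 1]   # a field with a known closed form

# map onto a finer uniform training grid, kept inside the convex hull of the samples
xt = np.linspace(0.05, 0.95, 100)
Xt, Yt = np.meshgrid(xt, xt, indexing="ij")
mapped = griddata(points, values, (Xt, Yt), method="cubic")
```

Points outside the convex hull of the samples would be filled with NaN, which is why the target grid here stays strictly inside the sampled region.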
3.5 PINN models trained on the IGA solution
Since the loss was lowest when training on the analytical solution, we decided to study the influence of the global continuity of the numerical solution. We generated a globally smooth dataset using Isogeometric Analysis Bazilevs et al. (2010), analyzing the system with IGA elements on a grid of the same dimension as before. The data are then mapped onto a uniform training grid and used to train the PINN models. The training results are shown in Fig. 12. The outputs are very similar to those of the high-order FEM datasets.
3.6 Identification using transfer learning
Here we explore the applicability of our PINN framework to transfer learning: a pretrained neural network is used to perform identification on a new dataset. The expectation is that, since the initial state of the neural network is no longer randomly chosen, training should converge faster to the solution and parameters of the data. This is crucial for many practical applications, including adaptation to new data for online search or purchase history Taylor and Stone (2009), or in geosciences, where we can train a representative PINN in highly instrumented regions and use it at other locations with limited observational datasets. To this end, we use the pretrained model of Net iii (Fig. 5), which was trained on a dataset with $\lambda = 1$ and $\mu = 0.5$, and then explore how the loss evolves and the training converges when data is generated with different values of $\mu$.

In Fig. 13 we show the convergence of the model on the different datasets. Note that the loss is normalized by the initial value of the loss of the pretrained network (Fig. 5). As can be seen, retraining on new datasets costs only a few hundred epochs, with a smaller initial value of the loss. This points to an advantage of deep learning and PINN: retraining on similar data is much less costly than with classical methods that rely on forward simulations.
3.7 Application to sensitivity analysis
Performing sensitivity analysis is an expensive task when an analytical solution is not available, since it requires many forward numerical simulations. Alternatively, if we can construct a surrogate model that is a function of the parameters of interest, then performing sensitivity analysis becomes tractable. However, the construction of such a surrogate model is itself an expensive task within classical frameworks. Within PINN, however, this is naturally possible. Let us suppose that the parameter of interest is the shear modulus $\mu$. Consider an ANN model with $(x, y, \mu)$ as inputs and $(u_x, u_y, \sigma_{xx}, \sigma_{yy}, \sigma_{xy})$ as outputs. We can, therefore, use a similar framework to construct a model that is a function of $\mu$ in addition to the space variables. Again, PINN can constrain the model to adapt to the physics of interest, and therefore less data is needed to construct such a model.
Here, we explore whether a PINN model trained on multiple datasets generated with various material parameters, i.e., different values of $\mu$, can be used as a surrogate model to perform sensitivity analysis. The network in Fig. 4 is now slightly adapted to carry $\mu$ as an extra input (in addition to $x$ and $y$). The training set is prepared based on $\lambda = 1$ and four values of $\mu$. Note that there is no identification in this case, and therefore the parameters $\lambda$ and $\mu$ are known at any given training point. The results of the analysis are shown in Fig. 14. For a wide range of values of $\mu$, the model performs very well in terms of displacements; it is less accurate, but still very useful, in terms of stresses, with a maximum error for near-incompressible conditions.
4 Conclusions
We study the application of a new class of deep learning, known as Physics-Informed Neural Networks (PINN), to solution and discovery in solid mechanics. In this work, we formulate and apply the framework to a linear elastostatics problem. We study the sensitivity of the proposed framework to noise in data coming from different numerical techniques. We find that the optimizer performs much better on data from high-order classical finite elements, or from methods with enhanced continuity such as Isogeometric Analysis. We analyze the impact of the size and depth of the network, and of the size of the dataset from uniform sampling of the numerical solution, an aspect that is important in practice given the cost of a dense monitoring network. We find that the proposed PINN approach is able to converge to the solution and identify the parameters quite efficiently with as few as 100 data points.
We also explore transfer learning, that is, the use of a pretrained neural network to perform training on new datasets with different parameters. We find that training converges much faster in this case. Lastly, we study the applicability of the model as a surrogate model for sensitivity analysis. To this end, we introduce the shear modulus $\mu$ as an input variable to the network. When training on only four values of $\mu$, we find that the network predicts the solution quite accurately over a wide range of values of $\mu$, a feature that is indicative of the robustness of the approach.
Despite the success exhibited by the PINN approach, we have found that it faces challenges when dealing with problems with discontinuous solutions. The network architecture is less accurate on problems with localized high gradients arising from discontinuities in the material properties or boundary conditions. We find that, in those cases, the results are artificially diffuse where they should be sharp. We speculate that the underlying reason for this behavior is the particular architecture of the network, where the input variables are only the spatial dimensions ($x$ and $y$), rendering the network unable to produce the variability needed for gradient-based optimization to capture solutions with high gradients. Addressing this extension is an exciting avenue for future work in machine-learning applications to solid mechanics.
References
TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, pp. 265–283.
Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116 (31), pp. 15344–15349.
Automatic differentiation in machine learning: a survey. The Journal of Machine Learning Research 18 (1), pp. 5595–5637.
Isogeometric analysis using T-splines. Computer Methods in Applied Mechanics and Engineering 199 (5–8), pp. 229–263.
Machine learning for data-driven discovery in solid earth geoscience. Science 363 (6433).
Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), Vol. 4.
Pattern Recognition and Machine Learning. Springer-Verlag, Berlin, Heidelberg.
Perspective on machine learning for advancing fluid mechanics. Physical Review Fluids 4 (10), pp. 100501.
Methods for data-driven multiscale model discovery for materials. Journal of Physics: Materials 2 (4), pp. 044002.
Machine learning for fluid mechanics. Annual Review of Fluid Mechanics 52 (1), pp. 477–508.
Machine learning for molecular and materials science. Nature 559 (7715), pp. 547–555.
MXNet: a flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274.
Keras.
Isogeometric Analysis: Toward Integration of CAD and FEA. John Wiley & Sons.
Deep learning of aftershock patterns following large earthquakes. Nature 560 (7720), pp. 632–634.
Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12 (Jul), pp. 2121–2159.
New nested adaptive neural networks (NANN) for constitutive modeling. Computers and Geotechnics 22 (1), pp. 29–52.
Deep Learning. MIT Press.
SciANN: a Keras wrapper for scientific computations and physics-informed deep learning using artificial neural networks. https://sciann.com.
Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences 115 (34), pp. 8505–8510.
Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Computer Methods in Applied Mechanics and Engineering 194 (39), pp. 4135–4195.
Microstructure informatics using higher-order statistics and efficient data-mining protocols. JOM 63 (4), pp. 34–41.
Adam: a method for stochastic optimization. arXiv:1412.6980.
Machine learning in seismology: turning data into insights. Seismological Research Letters 90 (1), pp. 3–14.
Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks 9 (5), pp. 987–1000.
Neural-network methods for boundary value problems with irregular boundaries. IEEE Transactions on Neural Networks 11 (5), pp. 1041–1049.
Reinforcement Learning. Adaptation, Learning, and Optimization, Vol. 12, Springer Berlin Heidelberg, Berlin, Heidelberg.
Deep learning. Nature 521 (7553), pp. 436–444.
Machine learning applications in genetics and genomics. Nature Reviews Genetics 16 (6), pp. 321–332.
The numerical solution of linear ordinary differential equations by feedforward neural networks. Mathematical and Computer Modelling 19 (12), pp. 1–25.
Deep learning predicts path-dependent plasticity. Proceedings of the National Academy of Sciences 116 (52), pp. 26414–26420.
Accelerating materials property predictions using machine learning. Scientific Reports 3, pp. 1–6.
A novel machine learning-based algorithm to detect damage in high-rise building structures. The Structural Design of Tall and Special Buildings 26 (18), pp. 1–11.
Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 378, pp. 686–707.
Machine learning reveals the state of intermittent frictional dynamics in a sheared granular fault. Geophysical Research Letters 46 (13), pp. 7395–7403.
Data-driven identification of parametric partial differential equations. SIAM Journal on Applied Dynamical Systems 18 (2), pp. 643–660.
Data-driven semi-supervised and supervised learning algorithms for health monitoring of pipes. Mechanical Systems and Signal Processing 131, pp. 524–537.
Deep elastic strain engineering of bandgap through machine learning. Proceedings of the National Academy of Sciences 116 (10), pp. 4117–4122.
Transfer learning for reinforcement learning domains: a survey. Journal of Machine Learning Research 10 (Jul), pp. 1633–1685.
Earthquake detection through computationally efficient similarity search. Science Advances 1 (11), pp. e1501057.
Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. Journal of Computational Physics 394, pp. 56–81.