PyDEns: a Python Framework for Solving Differential Equations with Neural Networks

09/25/2019 ∙ by Alexander Koryagin, et al.

Recently, many papers have proposed to use neural networks to approximately solve partial differential equations (PDEs). Yet there has been a lack of flexible frameworks for convenient experimentation. In an attempt to fill the gap, we introduce PyDEns, an open-source module available on GitHub. Coupled with the capabilities of BatchFlow, an open-source framework for convenient and reproducible deep learning, PyDEns allows one to (1) solve partial differential equations from a large family, including the heat equation and the wave equation, (2) easily search for the best neural-network architecture among a zoo that includes ResNet and DenseNet, and (3) fully control the process of model training by testing different point-sampling schemes. With that in mind, our main contribution is the implementation of a ready-to-use and open-source numerical solver of PDEs of a novel format, based on neural networks.


1 Introduction

It is widely known that neural networks are universal approximators of functions (Cybenko1989). Lagaris1998 proposed to use this property of neural networks to solve partial differential equations. In doing so, they formed a trial function using a neural network and ran an optimization procedure so that the trial function satisfies the differential equation as closely as possible. To form the loss function, the authors manually computed high-order derivatives (corresponding to the order of the equation) of the network output w.r.t. its inputs and weights. Later, Lagaris2000 extended this approach to domains with more complex boundaries. The main idea was to form a trial function using two neural networks: the first one solves the equation inside the domain, while the second one enforces the boundary conditions.

After these papers, research in this area paused for a long time. In recent years, however, following the development of convenient frameworks for automatic differentiation (e.g. TensorFlow and PyTorch), more and more papers (Berg2018, Nabian2018, Aradi2018) attempt to solve differential equations using neural networks. To our understanding, all of the modern approaches can be traced back to the Deep Galerkin method introduced by Sirignano2018. Essentially, Sirignano2018 rediscovered the approach of Lagaris1998 and used the power of modern automatic differentiation to test it at a larger scale, using deep neural networks to solve high-dimensional problems.

In this paper, we wrap up the progress in the domain of solving PDEs with neural networks in PyDEns, a Python module. The module is available to the deep learning community on GitHub for usage and further improvement. Our goal is to simplify experimentation in this emerging area of research. In doing so, we rely upon several pillars. First of all, a user should have the opportunity to set up complex problems in several lines of clear code. To that end, PyDEns allows one to set up a PDE problem from a wide class, including but not limited to the heat equation and the wave equation. Secondly, it is important not to impose constraints on the choice of neural-network architecture. With PyDEns, a user can either (i) choose a network from a zoo of implemented architectures, including ResNet, DenseNet, MobileNet and others, or (ii) build a complex neural network from scratch in a line of code, using convolutional and fully connected layers. Lastly, a user has complete control over the point-sampling scheme. One can form batches of training data using a large family of probabilistic distributions on the domain of the PDE, e.g. truncated Gaussian, uniform and exponential distributions, or even mixtures of those three.

The rest of the paper is organized as follows: in the next section we briefly review the Deep Galerkin model as it was introduced in the paper of Sirignano2018 and present our modifications to the algorithm. In Section 3 we explain in detail how to set up a PyDEns model for solving a particular PDE problem. In Section 4 we give a few recommendations regarding (i) the choice of architecture, (ii) sampling schemes and (iii) batch size. We conclude by discussing further development of our approach and explaining how our work relates to the oil and gas industry.

2 Original Deep Galerkin and our modifications

2.1 Original Deep Galerkin

Given an evolution PDE of the form

$$\frac{\partial u}{\partial t}(t, x) + \mathcal{L}u(t, x) = 0, \quad (t, x) \in [0, T] \times \Omega, \tag{1}$$

$$u(t, x) = g(t, x), \quad (t, x) \in [0, T] \times \partial\Omega, \tag{2}$$

$$u(0, x) = u_0(x), \quad x \in \Omega, \tag{3}$$

Sirignano2018 seek to approximate the solution using a neural network $f(t, x; \theta)$. In doing so, they minimize the following objective:

$$J(\theta) = \left\|\frac{\partial f}{\partial t} + \mathcal{L}f\right\|^2_{[0,T]\times\Omega,\,\nu_1} + \left\|f - g\right\|^2_{[0,T]\times\partial\Omega,\,\nu_2} + \left\|f(0,\cdot) - u_0\right\|^2_{\Omega,\,\nu_3} \tag{4}$$

In other words, the neural network is fitted to validate all components (1)-(3) of the PDE: both the equation itself and the boundary/initial conditions. Importantly, each term of objective (4) is given by an integral over its respective domain w.r.t. some measure $\nu_i$. Naturally, these integrals are rarely tractable. Hence, Sirignano2018 propose to replace the integrals with their sample counterparts and use stochastic gradient descent. All in all, the optimization process looks as follows:

Fit $f(\cdot; \theta)$ to approximately solve the PDE.

1: procedure DG($f$, $\theta$)
2:     repeat
3:         Form batches of points $S_1$, $S_2$, $S_3$ from $[0,T]\times\Omega$, $[0,T]\times\partial\Omega$, $\Omega$. Draw from distributions $\nu_1$, $\nu_2$, $\nu_3$.
4:         Estimate the objective (4) using the sampled batches: $\hat{J}(\theta; S_1, S_2, S_3)$.
5:         Take a step of gradient descent: $\theta \leftarrow \theta - \alpha \nabla_\theta \hat{J}$.
6:     until convergence
7: end procedure
Algorithm 1: Deep Galerkin algorithm

Note that this algorithm binds the initial and boundary conditions only approximately, by incorporating additional terms into objective (4).
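To make the sampling step concrete, below is a minimal NumPy sketch of one Monte-Carlo estimate of objective (4) for a one-dimensional problem on $[0, 1] \times [0, 1]$. This is our own illustration, not part of the PyDEns API; residual, boundary_error and initial_error are assumed to be vectorized callables returning pointwise errors of the network.

    import numpy as np

    def estimate_objective(residual, boundary_error, initial_error,
                           n_int=128, n_bnd=64, n_ini=64):
        # interior points (t, x) drawn uniformly from [0, T] x Omega
        t_int = np.random.uniform(size=n_int)
        x_int = np.random.uniform(size=n_int)
        # boundary points: random t, with x on the spatial boundary {0, 1}
        t_bnd = np.random.uniform(size=n_bnd)
        x_bnd = np.random.choice([0.0, 1.0], size=n_bnd)
        # initial points: t = 0, random x in Omega
        x_ini = np.random.uniform(size=n_ini)
        # sample means replace the three integrals of objective (4)
        return (np.mean(residual(t_int, x_int) ** 2)
                + np.mean(boundary_error(t_bnd, x_bnd) ** 2)
                + np.mean(initial_error(x_ini) ** 2))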

2.2 Problem description

PyDEns targets a wide class of PDEs. Still, the model focuses on evolution equations with time derivatives up to the second order:

$$F\left(u, \frac{\partial u}{\partial t}, \frac{\partial^2 u}{\partial t^2}, \frac{\partial u}{\partial x_1}, \ldots, \frac{\partial^2 u}{\partial x_n^2}, t, x\right) = 0, \tag{5}$$

where $F$ is an arbitrary function composed from well-known operations, including arithmetic operations and elementary functions such as $\sin$, $\cos$ and $\exp$, as well as the differential operators $\partial/\partial t$ and $\partial/\partial x_i$. The domain of the problem is given by an $n$-dimensional rectangle $\Omega$:

$$\Omega = [a_1, b_1] \times \dots \times [a_n, b_n].$$

To form a well-posed equation of evolution one also needs boundary conditions:

$$u(t, x) = g(x), \quad x \in \partial\Omega, \; t \in [0, T], \tag{6}$$

and an initial state, along with the evolution rate of the system (the latter is needed only if $\partial^2 u / \partial t^2$ is present in the equation):

$$u(0, x) = u_0(x), \quad x \in \Omega, \tag{7}$$
$$\frac{\partial u}{\partial t}(0, x) = u_1(x), \quad x \in \Omega. \tag{8}$$

2.3 Introducing ansatz: binding boundary and initial conditions

When working on PDE problems posed on rectangular domains, the out-of-the-box approach of the PyDEns module uses an ansatz for exact binding of initial/boundary conditions (the original algorithm from Sirignano2018 can also be easily implemented):

$$\hat{u}(t, x; \theta) = B(t, x) + A(t, x)\, f(t, x; \theta). \tag{9}$$

In other words, the solution to the PDE is approximated by a transformation of the neural-network output rather than by the output itself. Form (9) ensures exact binding of the initial/boundary conditions (6)-(8) whenever the following holds: $B$ itself satisfies conditions (6)-(8), while the multiplier $A$ vanishes on $[0, T] \times \partial\Omega$ and at $t = 0$ (together with $\partial A/\partial t(0, \cdot) = 0$ whenever condition (8) is imposed), so that the network term does not disturb the conditions.
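As an illustration of such a construction (our own minimal example, not code from the module): for a one-dimensional heat problem on $x \in [0, 1]$ with zero boundary values and a compatible initial state $u_0$ (i.e. $u_0(0) = u_0(1) = 0$), one valid ansatz is:

    # Illustrative ansatz (9): B(t, x) = u0(x), A(t, x) = t * x * (1 - x).
    # The multiplier vanishes at t = 0 and at x in {0, 1}, so the initial
    # and boundary conditions hold exactly for any network output f.
    def ansatz(f, u0, t, x):
        return u0(x) + t * x * (1 - x) * f(t, x)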

In all, the modified algorithm of the PyDEns module is given in Algorithm 2 below (compare with Algorithm 1):

Fit $\hat{u}(\cdot; \theta)$ to approximately solve the PDE.

1: procedure DG($\hat{u}$, $\theta$)
2:     repeat
3:         Form a batch of points $S$ from $[0,T]\times\Omega$. Draw from distribution $\nu$.
4:         Estimate the objective (4) using the sampled batch: $\hat{J}(\theta; S)$ (only the first term remains, as the conditions are bound exactly).
5:         Take a step of gradient descent: $\theta \leftarrow \theta - \alpha \nabla_\theta \hat{J}$.
6:     until convergence
7: end procedure
Algorithm 2: Modified Deep Galerkin algorithm

3 Problem setup and configuration

3.1 BatchFlow and config-dictionaries

In order to overcome many obstacles related to the training of neural networks, PyDEns relies on the BatchFlow framework. The main purpose of BatchFlow is to allow for the creation of reproducible pipelines for deep learning. Most importantly, BatchFlow allows one to set up a neural-network model for training in a simple way, using a configuration dictionary:

    config = {"loss": "mse",
              "optimizer": "Adam",
              "body": {…}}

The go-to approach of BatchFlow for creating a neural-network architecture from scratch is to use the complex block conv_block. The block uses a string layout to define the network as a sequence of layers:

    body = {"layout": "faR fa fa+ f",
            "units": [10, 25, 10, 1],
            "activation": [tf.nn.tanh, tf.nn.tanh, tf.nn.tanh]}

In the code section above, the layout "faR fa fa+ f" stands for a fully connected network with three hidden layers, hyperbolic-tangent activations and one ResNet-like skip connection: the symbols R and + mark the start and the end of the connection respectively. In the same manner, PyDEns can be configured to solve a particular PDE problem via the pde-key of the configuration dict; the next section explains in detail how to correctly set it up. Before that, note that the same layout convention scales to richer topologies.
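For instance, a variation under the same convention (our own illustration; the full set of supported letters is documented in BatchFlow) defines four hidden layers with a skip connection around the two middle ones:

    # Four hidden layers; R opens a skip connection after the second
    # layer and + closes it with a summation after the fourth.
    body = {"layout": "fa faR fa fa+ f",
            "units": [20, 20, 20, 20, 1],
            "activation": [tf.nn.tanh] * 4}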

3.2 Configuring the pde-key

We start filling up the pde-key by specifying the dimensionality of the equation:

    pde = {"n_dims": 2}

The next step is to define the equation itself using the key form. This key sets up the differential operator of equation (5) as a Python callable. To define the callable, we use the language of mathematical tokens. The list of tokens includes function names like sin, cos and exp, as well as the differentiation token D. The first step is to add the set of tokens to the current namespace via the add_tokens function:

    from pydens import add_tokens
    add_tokens()

We can now go on to define the equation. For purposes of demonstration, let us set up the equation

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 5\sin(\pi(x + y)),$$

known as the Poisson equation:

    import numpy as np

    # Right-hand side and the residual form of the Poisson equation
    Q = lambda x, y: 5 * sin(np.pi * (x + y))
    form = lambda u, x, y: D(D(u, x), x) + D(D(u, y), y) - Q(x, y)
    pde.update(form=form)
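The D token can be chained to produce higher-order derivatives. A small illustration of our own, not needed for the Poisson example:

    # Chaining D: a fourth-order derivative d^4u/dx^4
    fourth_order = lambda u, x: D(D(D(D(u, x), x), x), x)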

As you can see, the usage of tokens is rather straightforward; as illustrated above, the D token can be chained, allowing for the creation of higher-order derivatives. The default domain of the equation is the unit $n$-dimensional rectangle $[0, 1]^n$. To change it, one can pass the 'domain' key in the PDE setup dictionary:

    pde.update(domain=[[0, 1], [0, 1]])

To finish configuration of the PDE-problem, we must supply the boundary condition:

    pde.update({"boundary_condition": 1})

It is not difficult to define a more complex boundary condition:

    pde.update({"boundary_condition": lambda x, y: exp(y) * sin(x) + cos(y)})

3.3 Configuring the rest of the model

We go on with configuring our model and define a simple feed-forward architecture with three hidden layers to solve the PDE at hand:

    body = {"layout": "fa fa fa f",
            "units": [15, 25, 15, 1],
            "activation": [tf.nn.tanh]*3}

We can now assemble the model-configuration dictionary using the previously defined pde and body subconfigs, adding the mse loss function:

    model_config = {"pde": pde,
                    "body": body,
                    "loss": "mse"}

Finally, we need to specify the strategy for generating points from the domain. In this example we simply use the uniform distribution over the unit square in $\mathbb{R}^2$:

    from pydens import NumpySampler
    sampler = NumpySampler("uniform", dim=2)

To learn more about creating complex distributions, check out the sampler tutorial: https://github.com/analysiscenter/batchflow/blob/master/examples/tutorials/07_sampler.ipynb.
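As a hedged sketch of what such a composition might look like (the operator for products and the exact distribution names are assumptions here; the tutorial above defines the actual algebra):

    # Assumed sketch of sampler composition; consult the BatchFlow
    # sampler tutorial for the real operator set and distribution names.
    s_x = NumpySampler("uniform")                      # 1D sampler along x
    s_y = NumpySampler("normal", loc=0.5, scale=0.1)   # concentrated along y
    sampler_xy = s_x & s_y    # assumed product operator: joint 2D sampler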

All that is left now is to train the configured model to minimize the loss. For that, we only need to wrap the model config into a model instance and run the fitting procedure:

    from pydens import DGSolver
    dg = DGSolver(model_config)
    dg.fit(sampler=sampler, batch_size=200, n_iters=1000)
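To inspect the result, e.g. to produce plots like Figures 1 and 2, one evaluates the trained network on a grid of points. The snippet below is only a sketch: the dg.solve call is our assumption about the solver's evaluation method, so consult the PyDEns documentation for the actual API.

    import numpy as np

    # Hypothetical evaluation on a 50 x 50 grid; dg.solve is an assumed,
    # not confirmed, PyDEns method.
    xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
    points = np.stack([xs.ravel(), ys.ravel()], axis=1)
    approximation = dg.solve(points)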

The loss (4) during training is shown in Figure 3. The fact that it approaches zero demonstrates that the neural network approximates the solution of the given PDE on the desired domain. From the plots of the approximation, shown in Figures 1 and 2, we can easily verify that the boundary conditions are satisfied.

In the next section we demonstrate how one can configure PyDEns to solve a more complex problem.

Figure 1, Figure 2: Neural network approximation (two views)
Figure 3: Loss against training time

3.4 Heat equation

Let's turn our attention to a heat equation of the form

$$\frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} = 5xy(1 - x)(1 - y)\cos(\pi(x + y)),$$

$$u(0, x, y) = xy(1 - x)(1 - y).$$

To communicate this problem to the model, we use the following syntax:

    form = lambda u, x, y, t: (D(u, t) - D(D(u, x), x) - D(D(u, y), y)
                               - 5 * x * y * (1 - x) * (1 - y) * cos(np.pi * (x + y)))
    ic = lambda x, y: x * y * (1 - x) * (1 - y)
    # the right-hand side is folded into `form`, so no separate key is needed
    pde = {"form": form,
           "initial_condition": ic}

Since the problem at hand is not as simple as the previous one, we need a more sophisticated neural network as the approximator:

    body = {"layout": "faR fa fa+ f",
            "units": [10, 25, 10, 1],
            "activation": [tf.nn.tanh] * 3}
Figure 4: Heat equation. Loss against training time

The letter 'R' in the layout convention marks the beginning of a residual connection, while the plus sign marks its end, where the summation occurs. The rest of the solving pipeline is much the same: uniform sampling over the domain and a batch size of 200 points are used. Results are shown in Figures 4 and 5.

Figure 5: Heat equation. Neural network approximation

4 Choice of hyperparameters

In this section we discuss the choice of important hyperparameters: model architecture, the number of points in each training batch, and the sampling scheme.

4.1 Model architecture

In the previous examples we almost exclusively used fully connected layers and residual connections. For harder equations, especially those with fast-changing solutions, it is recommended to use gated connections, such as the ones introduced in hochreiter1997long and cho2014learning. The number of layers and their respective sizes depend mostly on the dimensionality of the PDE.

4.2 Batch size

The number of points in each training batch is of utmost importance. If it is not large enough, the estimate of the true gradient is too inaccurate, which leads to poor performance. On the other hand, if there are enough points to cover a large part of the domain, then each training step does not account for any region-specific information, which also leads to a low-quality model. The optimal middle ground can be found by looking at the graph of the loss function during training: large oscillations usually indicate too small a batch size, while an unsatisfactory loss value after plateauing can be viewed as a sign of too many points in the batch, provided that the neural network is expressive enough.
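In practice this advice can be applied by refitting the model with several batch sizes and comparing the loss curves; a minimal sketch reusing model_config and sampler from Section 3:

    # Probe several batch sizes: heavy loss oscillation suggests too few
    # points per batch; an early plateau at a poor loss suggests too many.
    for batch_size in (50, 200, 800):
        dg = DGSolver(model_config)
        dg.fit(sampler=sampler, batch_size=batch_size, n_iters=1000)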

4.3 Sampling schemes

Another way of communicating region-specific information to the model is to carefully choose the sampling strategy. Analogous to classical methods of solving PDEs, we can sample points near the provided boundary or initial conditions during the first few iterations of model fitting. In the same manner, we can use different samplers throughout training: each of them concentrates on some sufficiently small region of the initial domain, so that the model is trained to work better in this exact part of $\Omega$. This translates to code in a straightforward manner:

    sampler_1 = NumpySampler(…)
    sampler_2 = NumpySampler(…)
    dg.fit(sampler=sampler_1, …)
    dg.fit(sampler=sampler_2, …)

In applications, we usually need the solution only in a narrow region; this is often the case for hydrodynamic modelling in the oil and gas industry. Subsequent usage of

  • an initial sampler concentrated near the boundary or initial conditions,

  • a domain-wide sampler, or a combination of region-specific samplers, to achieve a small loss value on the domain as a whole,

  • a sampler concentrated around the region of interest

allows one to fine-tune the model to perform best at the desired location; a sketch follows the list.
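A sketch of this three-stage curriculum (our own illustration: the distribution names and keyword arguments passed to NumpySampler are assumptions, shown only to convey the staged fit calls):

    # Hypothetical three-stage schedule over a 2D domain; the loc/scale
    # keyword arguments are assumed, not confirmed, NumpySampler options.
    near_conditions = NumpySampler("normal", dim=2, loc=0.0, scale=0.05)
    domain_wide = NumpySampler("uniform", dim=2)
    region_focus = NumpySampler("normal", dim=2, loc=0.8, scale=0.05)
    dg.fit(sampler=near_conditions, batch_size=200, n_iters=300)
    dg.fit(sampler=domain_wide, batch_size=200, n_iters=500)
    dg.fit(sampler=region_focus, batch_size=200, n_iters=300)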

5 Conclusion

We have presented PyDEns, an open-source Python module based on the work of Sirignano2018. The framework allows one to train neural networks to approximately solve second-order evolution PDEs. In order to set up a PDE problem, one only has to (i) communicate the problem itself in a Python dictionary, (ii) set up a neural-network architecture using easy-to-comprehend layouts, and (iii) prepare a point-sampling scheme using an algebra of samplers, combining base NumPy distributions into mixtures and product distributions. In all, the framework allows for more convenient experimentation in the emerging domain of solving PDEs with neural networks.

In further work we plan to focus on (i) incorporating uncertainty into equation inputs (for instance, the equation's coefficients) in the spirit of Aradi2018 and (ii) making the coefficients of the equation trainable. This is of special importance to the oil and gas industry, as it sets a path for a drastically new approach to (i) predicting the evolution of oil and gas fields under uncertain geological properties and (ii) solving the filtration problem (and other inverse-modeling problems).

References