Machine learning methods have shown promising performance in various scientific fields, ranging from computer vision to disease diagnosis and personalized therapy. One potential use of these methods is modeling the behavior of a physical system without the classical mathematical equations. Classical models frequently suffer from very costly solution processes; a data-driven modeling approach has the capability of resolving such issues. Many studies have recently been conducted in this area, covering a wide range of problems and purposes, such as equation solving, parameter analysis, and uncertainty quantification. For example, Vlachas et al. extend long short-term memory (LSTM) networks for data-driven forecasting; weather forecasting is a potential application of this work. Raissi et al. introduced physics-informed neural networks, which add the governing equations to the loss function to force the training process to obey the physical laws. These methods are particularly effective when only a small amount of data is available.
Given the size of each data set in typical simulation applications, efficient training and inference become very important. For instance, training a neural network for weather prediction can be very costly due to the large number of measurement points, both spatially and temporally. There are various approaches in the literature to improve the efficiency of the training, i.e., to reduce the time-to-train while preserving the learning quality. These approaches can generally be divided into data parallelization and model parallelization schemes. The first approach divides the data among different processes, while the second approach shares all data among processes but distributes the computation. Both approaches require data communication for synchronization. For instance, Vivian et al. presented a data parallel approach, where the available training data are split into smaller chunks. Each chunk is given to a network and one training step is applied. Through a global reduction operation, the networks resulting from different training data chunks share their weights. The weights are averaged and constitute a new network, which is shared among all individual MPI ranks. This procedure is repeated until all available data are fed into the network and the training is completed. This approach is able to reduce the training time. However, it alters the learning algorithm, resulting in decreased learning quality. In addition, the global reduction operations are potential performance bottlenecks.
In the current study, we propose a novel parallelization scheme for the training of deep neural networks (DNNs). The proposed method primarily targets simulations in different scientific areas, but can be generalized for other fields as well. In simulation applications, DNNs can be used either to replace classical physics-based models (data-driven approach) or to enhance or dynamically adapt them (data-integrated approach). To optimize the time-efficiency of the training and, thus, maximize the benefit of such an approach, we propose to decompose each individual training data set into smaller spatial sections, instead of distributing complete data sets among processes. Thus, each network learns the data for its subdomain and the training phase is communication-free. For inference, the method requires only boundary data exchange, which can be done in a parallel point-to-point way similar to classical simulations.
The rest of the paper is organized as follows: In Sec. II, we present the architecture of the neural network used for each subdomain. Sec. III explains the MPI parallelization of the training and the inference. In Sec. IV, we showcase an aeroacoustics problem, where the linearized Euler equations are learned. We discuss both the accuracy of the prediction and the scalability of the presented scheme. Finally, Sec. V summarizes and concludes the paper.
II Architecture of Neural Network
The current work aims to improve the efficiency of DNNs in simulation science. To preserve the spatial connectivity that exists in this sort of application, we use convolutional (CNN) layers within the network. However, the proposed parallelization scheme can be combined with other types of layers as well. CNN layers incorporate a kernel with learnable elements to map a multi-dimensional input to an output (see Fig. 1). This can be used for both regression and classification, depending on the final layer of the network. In the rest of this section, we summarize the configuration of the neural network used in the current work.
Layers: We use four CNN layers, with the input and output channels presented in Table I.
As we intend to mimic the solution of the two-dimensional linearized Euler equations, the network input and output must have four channels, corresponding to the training data types: pressure, density, velocity in x-direction and velocity in y-direction.
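As an illustration, such a four-layer, four-channel subdomain network can be sketched in PyTorch as follows. The hidden channel counts and the 3×3 kernel size in this sketch are assumptions; the actual channel configuration is the one listed in Table I.

```python
# Sketch of a four-layer CNN mapping the four physical fields
# (pressure, density, x-velocity, y-velocity) at time t to time t+1.
# Hidden channel counts (32) and the 3x3 kernel are illustrative
# assumptions; the paper's values are given in Table I.
import torch
import torch.nn as nn


class SubdomainNet(nn.Module):
    def __init__(self, channels=(4, 32, 32, 32, 4), kernel_size=3):
        super().__init__()
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            # padding keeps the spatial dimensions of each layer's output
            layers.append(nn.Conv2d(c_in, c_out, kernel_size,
                                    padding=kernel_size // 2))
            layers.append(nn.LeakyReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, 4 fields, ny, nx) -> same shape at the next time step
        return self.net(x)
```

Because of the padding, the output retains the spatial shape of the input, so the prediction can be compared directly with the target snapshot.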
To define a neural network, we need to select a non-linear activation function, which transforms the input of a neuron to its output. There are plenty of candidate functions with different properties and use cases. Glorot et al. showed that rectified linear units (ReLUs) perform better for training NNs than sigmoid and hyperbolic tangent functions. Since 2017, they have therefore been the most commonly used activation function in deep learning. In the simplest case, the ReLU activation is given by
$$f(x) = \max(0, x).$$
Although the gradient is zero for all negative inputs, it does not vanish for large positive inputs (as it does, e.g., for the sigmoid activation). A vanishing gradient stops the network's learning, as the weights are not updated any more. At $x = 0$, the gradient is undefined, but in practice this case occurs very rarely. Nonetheless, a value for this unlikely case should be selected, e.g., zero. The problem for negative inputs can be fixed with leaky ReLUs,
$$f(x) = \begin{cases} x & x > 0, \\ \alpha x & x \le 0, \end{cases}$$
where $\alpha$ is some small positive value. Alternatively, $\alpha$ can be considered as a parameter that must be learned alongside the weights. We use a constant $\alpha$ in the current work.
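A minimal sketch of the leaky ReLU defined above; the slope value 0.01 used here is only illustrative, not necessarily the constant used in the paper:

```python
import numpy as np


def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: identity for positive inputs, slope alpha otherwise.

    The alpha default is illustrative; any small positive constant works.
    """
    return np.where(x > 0, x, alpha * x)
```

Note that `x = 0` falls into the second branch, i.e., the output (and, in autograd frameworks, the subgradient) at zero is simply chosen from the negative side.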
Optimizer: The most important factors for creating well-performing neural networks are the entries of the weight matrices $W$. During training, we essentially solve an optimization problem minimizing the loss for the given training data. The choice of the optimizer heavily contributes to the result. All common optimizers work in the same fashion: they begin with some initial parameter estimate $w_0$ and update this estimate according to some rule $w_{k+1} = w_k + \eta_k d_k$. Their main difference is the choice of the search direction $d_k$ and the learning rate (or step size) $\eta_k$ for step $k$. The parameters are updated until either a maximum number of iterations is reached or some convergence criterion is fulfilled. After trying different available options, we found the ADAM optimizer to have the best performance in our case.
ADAM is based on the well-known stochastic gradient descent method (SGD), but it uses a concept called momentum to further improve the convergence of the optimization. Momentum is motivated by a common problem of gradient-descent-based optimizers: in cases where the eigenvalues of the Hessian of the loss function with respect to the weights vary strongly, gradient descent oscillates between the slopes while only slowly converging towards the minimum. Momentum aims to improve the convergence by adding a fraction of the previous search direction to the current one,
$$m_k = \beta_1\, m_{k-1} + (1 - \beta_1)\, \nabla_w L(w_k), \qquad (3)$$
where $m_k$ is the (first) momentum, $L$ is the loss function, and $w$ denotes the weight matrix entries. This speeds up the optimization towards the bottom of the ravine and thus the minimum. In addition to the first momentum in equation (3), ADAM uses the second momentum,
$$v_k = \beta_2\, v_{k-1} + (1 - \beta_2)\, \big(\nabla_w L(w_k)\big)^2,$$
where $m_0 = v_0 = 0$, which leads to a bias towards $0$, in particular during the first few steps. The bias-corrected moments
$$\hat{m}_k = \frac{m_k}{1 - \beta_1^k}, \qquad \hat{v}_k = \frac{v_k}{1 - \beta_2^k}$$
counteract this effect through the exponentially decaying adjustment of the moments; $\beta_1^k$ is the $k$-th power of $\beta_1$ (the same holds for $\beta_2$). The ADAM update rule is then
$$w_{k+1} = w_k - \eta\, \frac{\hat{m}_k}{\sqrt{\hat{v}_k} + \epsilon}.$$
Kingma et al. suggest a global learning rate of $\eta = 10^{-3}$ and a smoothing value of $\epsilon = 10^{-8}$. Empirical evidence shows that ADAM outperforms other optimizers in terms of convergence speed and quality of the found solution in many cases.
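The update rules above can be condensed into a short, self-contained sketch. This is plain Python minimizing a one-dimensional quadratic for illustration, not the paper's training code; the step count and learning rate are chosen only for the demo.

```python
import math


def adam_minimize(grad, w0, eta=1e-3, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=5000):
    """Minimal ADAM loop implementing the update rules above."""
    w = float(w0)
    m = v = 0.0
    for k in range(1, steps + 1):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g          # first momentum, Eq. (3)
        v = beta2 * v + (1 - beta2) * g * g      # second momentum
        m_hat = m / (1 - beta1 ** k)             # bias correction
        v_hat = v / (1 - beta2 ** k)
        w -= eta * m_hat / (math.sqrt(v_hat) + eps)
    return w


# minimize L(w) = (w - 3)^2, whose gradient is 2(w - 3)
w_opt = adam_minimize(lambda w: 2 * (w - 3), w0=0.0, eta=0.05)
```

The normalization by $\sqrt{\hat{v}_k}$ makes the effective step size roughly $\eta$ regardless of the gradient magnitude, which is what damps the oscillations described above.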
Loss function: The most commonly used loss function for neural networks is the mean squared error (MSE). We have observed that the mean absolute percentage error (MAPE) is better suited for our specific application. This is due to the fact that the measured values have different orders of magnitude and the mean squared error penalizes deviations on the larger data points much more. The MAPE is given by
$$\mathrm{MAPE} = \frac{100}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|,$$
where $y_i$ are the target values and $\hat{y}_i$ are the corresponding predictions. In contrast to the mean squared error, the MAPE loss is relative to the magnitude of each data point, so deviations on small and large values are penalized comparably.
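The MAPE loss above can be sketched as:

```python
import numpy as np


def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent.

    Assumes the targets are nonzero, since each deviation is
    normalized by the corresponding target value.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```

A 10% deviation contributes the same whether the target is 0.01 or 1000, which is the relative-error property motivating its use here.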
III Training and Inference Parallelization
We present a novel parallel scheme for efficient and scalable training and inference of neural networks. The goal is to efficiently exploit multi-core machines (CPU or GPU) in order to reduce the training and inference time while preserving the learning quality.
Training: We decompose each individual training data set into smaller sections and feed each subsection into an independent neural network. Suppose we have a set of 1000 training data sets, each representing a two-dimensional square domain with 100 data points. We propose the following steps for training:
Split each data set into smaller sections, for example 4 smaller subdomains, each with only 25 data points.
Feed each section into an individual network (see Fig. 2).
Use MPI to parallelize the training by assigning an MPI rank to each network.
Use an individual cost function and optimization process for each network.
Train each network for the specific part of the domain and use it to predict only its own section.
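The decomposition step can be sketched as follows. The 2×2 split and the array shapes are illustrative; in the actual scheme, each tile would be owned by one MPI rank running its own network and optimizer.

```python
import numpy as np


def split_into_subdomains(field, px, py):
    """Split a (channels, ny, nx) data set into px*py equal tiles.

    Each tile (one per MPI rank) is fed to that rank's independent
    network; no inter-rank communication is needed during training.
    """
    c, ny, nx = field.shape
    assert ny % py == 0 and nx % px == 0
    sy, sx = ny // py, nx // px
    tiles = []
    for j in range(py):
        for i in range(px):
            tiles.append(field[:, j * sy:(j + 1) * sy,
                                  i * sx:(i + 1) * sx])
    return tiles


# e.g. a 10x10 domain with 100 points split into 4 tiles of 25 points
tiles = split_into_subdomains(np.zeros((4, 10, 10)), px=2, py=2)
```

With mpi4py, rank `r` would simply keep `tiles[r]` and discard the rest, so each process trains only on its own subdomain.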
Since applying a CNN layer with a $k \times k$ convolution kernel reduces the size of a two-dimensional input data set by $k-1$ rows and $k-1$ columns, the network output cannot be directly compared to the target data (the input and target data have the same dimensions). For the first layer, we increase the input dimension so that its output dimensions match the target data. This means that the input data for neighboring processes overlap, which helps both to match the dimensions and to preserve the spatial connectivity between neighbouring processes. If only a single CNN layer is used, the larger input removes the mismatch between output and target data dimensions. For more than one layer, the following approaches can solve the dimension mismatch issue:
Padding the input data with zeros, to achieve the desired size of the output,
Padding the input data with data from neighboring subdomains,
Comparing only the inner data points of the target data to the network output,
Adding de-convolutional layers, i.e., transpose convolutions.
Comparing only the inner data points would limit the usability of the output data as a substitute for the actual simulation (data at subdomain interfaces would be missing). Therefore, we currently use only the first two approaches. Applying the de-convolution is currently under investigation.
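The dimension bookkeeping behind these approaches can be made concrete with a short sketch: an unpadded $k \times k$ convolution removes $k-1$ rows and columns per layer, so the enlarged, overlapping input must provide exactly that many extra points.

```python
def output_size(n, k, layers, padding=0):
    """Spatial size after `layers` convolutions with k x k kernels."""
    for _ in range(layers):
        n = n - (k - 1) + 2 * padding
    return n


def required_overlap(k, layers):
    """Extra rows/columns the enlarged input needs so that the final
    output matches the target dimensions without any padding."""
    return layers * (k - 1)
```

For example, with four 3×3 layers the input needs 8 extra rows and columns, taken from the neighboring subdomains.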
Since each network is responsible only for a single subsection, there is no need for data exchange between processes. The training data are fed into the network directly from memory. This avoids possible bottlenecks due to data communication.
Inference: Each network can be used for the inference of the subdomain it was trained for. Therefore, the inference can also be done in parallel. The network receives the input at time $t$ and predicts the output at time $t+1$. For predictions over more than one time step, the output at time $t+1$ can be used as the input for the prediction at time $t+2$, and so on. However, the output cannot be directly fed back into the network, since its dimension is smaller than the required input. The extra data points must be received from the neighboring processes.
For this purpose, we use MPI communication: each process sends its boundary data to the corresponding neighbors. To avoid a communication bottleneck, we incorporate fully parallel point-to-point communication. Each process communicates directly with its neighbors and no central instance is used. This is particularly important when massively parallel machines are targeted.
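A sketch of this boundary handling is given below. The helper functions and the rank layout are hypothetical names for illustration; the actual transport (indicated in the comment) would use point-to-point MPI, e.g., `Sendrecv` in mpi4py.

```python
import numpy as np


def boundary_strips(field, width):
    """Extract the strips a rank must send to its four neighbours."""
    return {
        "left":   field[:, :, :width].copy(),
        "right":  field[:, :, -width:].copy(),
        "bottom": field[:, :width, :].copy(),
        "top":    field[:, -width:, :].copy(),
    }


def pad_with_halos(field, left, right):
    """Attach received left/right halo strips to a rank's own field;
    the vertical direction works analogously."""
    return np.concatenate([left, field, right], axis=2)


# The transport itself would be fully point-to-point, e.g. with mpi4py
# (hypothetical rank variables; no central instance involved):
#   comm.Sendrecv(sendbuf=strips["right"], dest=right_rank,
#                 recvbuf=left_halo, source=left_rank)
```

Each rank thus receives exactly the overlap computed above from its neighbors before the next inference step.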
IV Numerical Performance and Accuracy Analysis
In this section, we numerically investigate the performance and accuracy of the proposed scheme. We showcase the learning of the linearized Euler equations, which are widely used in aeroacoustics simulations.
IV-A Test Case: Linearized Euler Equation for Gaussian Pressure Pulse Investigation
For our numerical tests, we focus on the linearized Euler equations, i.e., the linearization of the nonlinear Euler equations around a constant background state. The equations determine the perturbations (marked with a prime) given a known constant background (denoted with a subscript $0$):
$$\frac{\partial \rho'}{\partial t} + u_0 \frac{\partial \rho'}{\partial x} + v_0 \frac{\partial \rho'}{\partial y} + \rho_0 \left( \frac{\partial u'}{\partial x} + \frac{\partial v'}{\partial y} \right) = 0,$$
$$\frac{\partial u'}{\partial t} + u_0 \frac{\partial u'}{\partial x} + v_0 \frac{\partial u'}{\partial y} + \frac{1}{\rho_0} \frac{\partial p'}{\partial x} = 0,$$
$$\frac{\partial v'}{\partial t} + u_0 \frac{\partial v'}{\partial x} + v_0 \frac{\partial v'}{\partial y} + \frac{1}{\rho_0} \frac{\partial p'}{\partial y} = 0,$$
$$\frac{\partial p'}{\partial t} + u_0 \frac{\partial p'}{\partial x} + v_0 \frac{\partial p'}{\partial y} + \gamma p_0 \left( \frac{\partial u'}{\partial x} + \frac{\partial v'}{\partial y} \right) = 0,$$
where $\rho$, $p$ and $(u, v)$ are the density, pressure and velocity components, respectively. Products of different perturbations are neglected, as they are insignificantly small and would result in even smaller values.
Boundary condition: At all four boundaries, outflow conditions are considered, i.e., the pressure perturbation is set to zero, while all other quantities (density and velocity) have homogeneous Neumann boundary conditions.
Initial condition: Initially, the fluid is at rest and the density perturbation is set to zero. The background pressure $p_0$ and background density $\rho_0$ are set to constant values, and the background velocity is zero in both directions. A pressure perturbation corresponding to the mentioned Gaussian pulse is prescribed: we locate a Gaussian pressure pulse with prescribed half width and amplitude in the center of our square domain, at P(0.0, 0.0).
IV-B Numerical Accuracy Analysis
In this section, we present the network outputs and compare them with validation data. The network receives the domain information, including pressure, density and velocities, at time step $t$ and is expected to predict the domain state at the next time step. The training and validation data are produced using a numerical simulation as explained in Sec. IV-A. We have produced 1500 training and validation data sets in total by running a single simulation. We use the first 1000 time steps for the training and the remaining ones for the validation.
Comparing the network prediction with the target solution for the validation data sets demonstrates a very good agreement, especially for density and pressure. There are small discrepancies in the velocities, which are probably due to the padding at the inner layers. Note that the accuracy drops after one predicted time step. This is because DNNs consisting of only CNN layers are not capable of fully capturing the temporal connectivity. We always train the network with a single time step, i.e., time step $t$ is the input and $t+1$ is the output. With this setup, the network can predict a single time step accurately. However, if the output is used as a new input for the next time step prediction, the accumulated error decreases the accuracy.
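The multi-step prediction loop just described can be sketched as follows; `model` and `get_halos` are placeholders for the trained subdomain network and the MPI boundary exchange, and the toy stand-ins below exist only to make the sketch runnable.

```python
import numpy as np


def rollout(model, state, get_halos, steps):
    """Autoregressive multi-step prediction for one subdomain.

    `model` maps the halo-padded state at time t to the subdomain
    state at t+1; `get_halos` stands in for the MPI boundary exchange.
    Errors accumulate over the steps, which is the effect discussed
    above.
    """
    trajectory = [state]
    for _ in range(steps):
        left, right = get_halos(state)           # neighbour boundary data
        padded = np.concatenate([left, state, right], axis=2)
        state = model(padded)
        trajectory.append(state)
    return trajectory


# toy stand-ins: a "model" that crops the padding again, and "halos"
# copied from the subdomain's own edges
toy_model = lambda x: x[:, :, 1:-1]
toy_halos = lambda s: (s[:, :, :1], s[:, :, -1:])
traj = rollout(toy_model, np.zeros((4, 8, 8)), toy_halos, steps=3)
```

Each iteration feeds the previous prediction back in, so any single-step error compounds over the trajectory.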
To address this problem, the authors are considering the incorporation of more complex layers, such as recurrent and LSTM layers. For these layers, the data must be fed into the network as time series. This increases the training cost, but enables the network to capture the temporal connectivity more accurately.
IV-C Numerical Performance Analysis
Each training data set consists of grid-type data points, 256 in each direction, accumulating to a total of 65,536 points. For each point, the networks receive density, pressure, and velocity in $x$- and $y$-direction. We gather the training data as a list of three-dimensional Python numpy arrays, where the $x$- and $y$-directions correspond to the domain and the $z$-direction accounts for the various data types (density, pressure and velocities). We convert this list into standard PyTorch tensors to feed them into the network. For the parallelization, we decompose the domain into smaller subdomains as described above and use an individual network for each subdomain.
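This data preparation can be sketched as follows. The shapes follow the description above; the demo call uses a small stand-in size instead of the full 1500 snapshots of 256×256 points, and the PyTorch conversion is indicated in a comment.

```python
import numpy as np


def assemble_training_data(snapshots):
    """Stack per-time-step (ny, nx, 4) grids into one array.

    The four entries of the z-direction are density, pressure and the
    two velocity components; the result is reordered to the
    channels-first layout (time, fields, ny, nx) expected by a CNN.
    """
    return np.transpose(np.stack(snapshots), (0, 3, 1, 2))


# small stand-in for the paper's 1500 snapshots of 256 x 256 points
demo = assemble_training_data([np.zeros((8, 8, 4)) for _ in range(5)])
# torch.from_numpy(demo).float() would then yield the PyTorch tensor.
```

Each rank would apply the subdomain split from Sec. III to this array before training its local network.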
We present a strong scalability analysis for the proposed scheme up to 64 CPU cores in Fig. 4. We observe an almost perfect strong scaling, where the training time reduces as the number of CPU cores is increased. This behavior is expected, as the parallelization reduces the size of the training data per process and thus the training time. In addition, avoiding communication during training contributes to the observed efficiency. Moreover, since we do the prediction for only a single time step, the inference time is very small compared to the training time.
V Summary and Conclusions

A parallel scheme for training and inference of deep neural networks has been introduced. The proposed method is directly applicable to time-dependent scenarios in simulation science and can be generalized for other fields. The proposed scheme is based on dividing the prediction domain into smaller subdomains and assigning each to a single MPI rank running an independent network. Since each subdomain is assigned to an individual network, no data communication is required during training, and training data are fed to the network directly from memory. For the inference, however, neighbouring subdomains must communicate the boundary data to preserve the spatial connectivity. To avoid any performance drop, we use fully parallel point-to-point communication.
The neural network for each subdomain consists of four CNN layers with multiple channels. We showcase the learning of the linearized Euler equations, with an application to aeroacoustics simulations. Accordingly, the input data have four channels, corresponding to pressure, density and velocities in $x$- and $y$-direction. To preserve the output dimension of each layer, we incorporate padding. The inputs of each neuron are mapped to its output using a leaky ReLU activation function. The MAPE loss function is used along with the ADAM optimization method to calculate the network's weights.
A first numerical accuracy analysis demonstrated very good agreement between network prediction and simulation data. In addition, the strong scalability analysis showed a very good reduction in training time with an increasing number of CPU cores. The prediction accuracy can be further improved by incorporating more complex layers, such as LSTMs, and by enhancing the training data to account for the temporal connectivity. This topic is currently under investigation by the authors.
We thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for supporting this work by funding EXC2075 – 390740016 under Germany's Excellence Strategy. We acknowledge the support by the Stuttgart Center for Simulation Science (SimTech). The current work was also financially supported by the priority program 1648 "Software for Exascale Computing" (www.sppexa.de) of the Deutsche Forschungsgemeinschaft.
- T. Ben-Nun and T. Hoefler (2019) Demystifying parallel and distributed deep learning: an in-depth concurrency analysis. ACM Computing Surveys 52(4), pp. 1–43.
- X. Glorot, A. Bordes, and Y. Bengio (2011) Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 315–323.
- I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep Learning. MIT Press. http://www.deeplearningbook.org
- D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- N. Qian (1999) On the momentum term in gradient descent learning algorithms. Neural Networks 12(1), pp. 145–151.
- M. Raissi, P. Perdikaris, and G. E. Karniadakis (2019) Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 378, pp. 686–707.
- P. Ramachandran, B. Zoph, and Q. V. Le (2017) Searching for activation functions. arXiv preprint arXiv:1710.05941.
- (2012) An adaptable simulation framework based on a linearized octree. In High Performance Computing on Vector Systems 2011, M. Resch, X. Wang, W. Bez, E. Focht, H. Kobayashi, and S. Roller (Eds.), pp. 93–105.
- E. F. Toro (2009) Riemann Solvers and Numerical Methods for Fluid Dynamics: A Practical Introduction. Springer.
- (2019) Ateles source code. https://osdn.net/projects/apes/scm/hg/ateles/ [Online; accessed 26-08-2019]
- (2019) Deep learning at scale. In 2019 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), pp. 124–131.
- P. R. Vlachas, W. Byeon, Z. Y. Wan, T. P. Sapsis, and P. Koumoutsakos (2018) Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 474(2213), 20170844.