# Multilevel Monte Carlo Simulation of the Eddy Current Problem With Random Parameters

The multilevel Monte Carlo method is applied to an academic example in the field of electromagnetism. The method exhibits a reduced variance by assigning the samples to multiple models with a varying spatial resolution. For the given example it is found that the main costs of the method are spent on the coarsest level.


## 1 Introduction

The two best-known approaches to propagate uncertainties in the field of uncertainty quantification rely on generalized Polynomial Chaos (gPC) expansions [1] or use the classical Monte Carlo (MC) method. When the number of random parameters is small and the solution depends smoothly on these parameters, a gPC approach is known to have a superior convergence rate [1]. However, as the number of uncertain parameters increases, the method becomes inefficient. This is often called the curse of dimensionality (see e.g. [2, 3]). For a large number of random variables and a lack of regularity one has to rely on MC methods, since their convergence rate does not depend on the number of parameters. However, the error of classical MC converges slowly, at a rate of $\mathcal{O}(N^{-1/2})$, with $N$ the number of samples.

Figure 1: Cross-sectional view of the conducting wire (region I) in a steel tube (region III). Region II is filled with air.

Many efforts have been undertaken to overcome the slow convergence rate. One way is to use quasi-MC methods, see e.g. [3]. Another way out is using multilevel Monte Carlo (MLMC) [4]. In this paper every level refers to a spatial discretization level. The gain in cost of MLMC with respect to MC originates from the fact that a large part of the uncertainty can be captured by models with a coarse spatial discretization. This means that the number of evaluations on the finest grid can be reduced drastically. The idea of this paper is to apply the MLMC method to an academic example in the field of electromagnetism, together with a study of the computational costs. More details on MLMC in the context of (elliptic) partial differential equations can be found in [5].

## 2 Application

Let us consider a conducting wire centered in a steel tube, Fig. 1. The region between the wire and the tube is filled with air. The steel pipe has a relative permeability $\mu_{\mathrm{III}}$ and a conductivity $\sigma$. The wire and the air region are modeled with vacuum permeability $\mu_0$ but without conductivity. The wire carries a time-harmonic source current density $\vec{J}_s$, determined by the magnitude of the current $I_0$, the radius of the wire and the angular frequency $\omega$. As a test case we consider three random variables: $r_1(\theta)$, which is the inner radius of the steel tube, $I_0(\theta)$ and $\mu_{\mathrm{III}}(\theta)$. They are uniformly distributed

$$
\begin{aligned}
r_1(\theta) &= \bar{r}_1 + X(\theta), & X(\theta) &\sim \mathcal{U}(-0.1\,\mathrm{m},\, 0.1\,\mathrm{m}), & (1)\\
I_0(\theta) &= \bar{I}_0 + Y(\theta), & Y(\theta) &\sim \mathcal{U}(-10\,\mathrm{A},\, 10\,\mathrm{A}), & (2)\\
\mu_{\mathrm{III}}(\theta) &= \bar{\mu}_{\mathrm{III}} + Z(\theta), & Z(\theta) &\sim \mathcal{U}(-400,\, 400), & (3)
\end{aligned}
$$

where $\theta$ denotes the random outcome of a quantity. The nominal values are defined by

$$
\begin{aligned}
\bar{r}_1 &= \mathbb{E}[r_1(\theta)] = 0.5\,\mathrm{m}, & (4)\\
\bar{I}_0 &= \mathbb{E}[I_0(\theta)] = 100\,\mathrm{A}, & (5)\\
\bar{\mu}_{\mathrm{III}} &= \mathbb{E}[\mu_{\mathrm{III}}(\theta)] = 1000, & (6)
\end{aligned}
$$

where $\mathbb{E}[\,\cdot\,]$ denotes the expectation value. The problem can be treated by the magnetoquasistatic approximation of Maxwell’s equations.
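The three uniform inputs in (1)–(6) are straightforward to sample. The following is a minimal sketch (not part of the paper) using Python with NumPy; all variable names are chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_parameters(n):
    """Draw n realizations of the three uniform random inputs (1)-(3)."""
    r1 = 0.5 + rng.uniform(-0.1, 0.1, n)          # inner tube radius [m]
    I0 = 100.0 + rng.uniform(-10.0, 10.0, n)      # current magnitude [A]
    mu3 = 1000.0 + rng.uniform(-400.0, 400.0, n)  # relative permeability [-]
    return r1, I0, mu3

r1, I0, mu3 = sample_parameters(100_000)
# the sample means approach the nominal values (4)-(6)
print(r1.mean(), I0.mean(), mu3.mean())
```

Each FEM solve then receives one such parameter triple; the sampling itself is negligible in cost compared to the field solve.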

Modeling the source current density as a time harmonic function one retrieves

$$
\nabla \times \left( \frac{1}{\mu(\theta)} \nabla \times \vec{A}(\theta) \right) + j\omega\sigma \vec{A}(\theta) = \vec{J}_s(\theta), \qquad (7)
$$

where $\vec{A}(\theta)$ is the magnetic vector potential. Imposing Dirichlet boundary conditions on the outer radius of the pipe, one can solve (7) for $\vec{A}(\theta)$. This allows the calculation of the energy

$$
W(\theta) = \sum_{i \in \{\mathrm{I},\mathrm{II},\mathrm{III}\}} \int_{V^{(i)}} \frac{\mu_i \, \vec{H}^{(i)}(\theta) \cdot \underline{\vec{H}}^{(i)}(\theta)}{2} \, \mathrm{d}V, \qquad (8)
$$

where $\vec{H}$ is the magnetic field, $\underline{\vec{H}}$ denotes its complex conjugate and $i$ refers to the regions.

Since the configuration is assumed to be invariant along the wire axis, (7) is solved in 2D and lowest-order finite elements are used. The discretization is done on a triangular grid [6].

## 3 Multilevel Monte Carlo

The main principles of MLMC are recalled from [4] in the following. Let $W$ be the quantity of interest and $W_\ell$ the energy computed on level $\ell$, with $\ell = 0$ and $\ell = L$ referring to the coarsest and finest level, respectively. Also, let $Y$ denote an approximation to $\mathbb{E}[W]$, i.e. the MC or MLMC estimator. For the mean square error (MSE) there holds

$$
\mathrm{MSE} = \mathbb{V}[Y] + \left( \mathbb{E}[Y - W] \right)^2. \qquad (9)
$$

Hence, an overall MSE of $\varepsilon^2$ can be achieved by reducing both the squared weak error $(\mathbb{E}[Y-W])^2$ and the variance $\mathbb{V}[Y]$ below $\varepsilon^2/2$. In MLMC, the weak error is dominated by the spatial resolution of the finest level and can be controlled accordingly.

Figure 3: $Y$ evaluated for different error bounds $\varepsilon$ over computational costs.
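The error split in (9) can be verified numerically on a toy estimator (a sketch with illustrative numbers, not part of the paper's experiments): a deliberately biased sample mean of Gaussian draws, whose empirical MSE should match variance plus squared bias.

```python
import numpy as np

rng = np.random.default_rng(1)

true_W = 2.0     # exact quantity of interest
bias = 0.05      # deliberate bias, so E[Y - W] = 0.05
n_samples = 50   # samples averaged per estimate, so V[Y] = 1/50
n_repeat = 200_000

# Y is the sample mean of draws from N(true_W + bias, 1)
draws = rng.normal(true_W + bias, 1.0, size=(n_repeat, n_samples))
Y = draws.mean(axis=1)

mse_empirical = np.mean((Y - true_W) ** 2)
mse_predicted = 1.0 / n_samples + bias**2  # V[Y] + (E[Y - W])^2, cf. (9)
print(mse_empirical, mse_predicted)  # both close to 0.0225
```

The decomposition justifies treating the two error sources separately: the bias is fixed by the finest discretization, the variance by the number of samples.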

The variance is reduced by adding more and more samples to the different levels. More precisely, by decomposing $\mathbb{E}[W_L]$ as

$$
\mathbb{E}[W_L] = \mathbb{E}[W_0] + \sum_{\ell=1}^{L} \mathbb{E}[W_\ell - W_{\ell-1}], \qquad (10)
$$

the conventional MC method is used with $N_\ell$ samples for each term on the right-hand side of (10). The associated variance of the estimator $Y$ reads

$$
\mathbb{V}[Y] = N_0^{-1} \mathbb{V}[W_0] + \sum_{\ell=1}^{L} N_\ell^{-1} \mathbb{V}[W_\ell - W_{\ell-1}]. \qquad (11)
$$

For a fixed computational budget, there exists an optimal choice of the $N_\ell$ minimizing the variance given by (11), see [4]. This optimum is readily computable if asymptotic bounds on the level variances $\mathbb{V}[W_\ell - W_{\ell-1}]$ and the cost per level can be determined. The coefficients in these bounds are problem dependent and are determined numerically in this work.
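Putting (10) and (11) together, the estimator with the near-optimal allocation $N_\ell \propto \sqrt{\mathbb{V}[W_\ell - W_{\ell-1}]/C_\ell}$ (cf. Giles, Acta Numer. 2015) can be sketched as follows. The FEM model is not reproduced here; a cheap surrogate with a geometrically shrinking level error stands in for $W_\ell$, and all names, costs and sample budgets are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def W(level, theta):
    """Surrogate for the level-dependent quantity of interest:
    exact value theta**2 plus a 'discretization' error halving per level."""
    return theta**2 + 2.0 ** (-level) * np.sin(5.0 * theta)

def mlmc(L, N):
    """Telescoping estimator (10): E[W_0] plus corrections E[W_l - W_{l-1}],
    each term estimated by plain MC with N[l] independent samples."""
    est = 0.0
    for lvl in range(L + 1):
        theta = rng.uniform(-1.0, 1.0, N[lvl])
        if lvl == 0:
            est += W(0, theta).mean()
        else:
            est += (W(lvl, theta) - W(lvl - 1, theta)).mean()
    return est

L = 4
cost = 4.0 ** np.arange(L + 1)  # assumed 4x cost growth per level (2D grid)

# pilot run: estimate the per-level variances entering (11)
V = []
for lvl in range(L + 1):
    theta = rng.uniform(-1.0, 1.0, 1000)
    sample = W(0, theta) if lvl == 0 else W(lvl, theta) - W(lvl - 1, theta)
    V.append(sample.var())
V = np.array(V)

# allocation minimizing (11) at fixed budget: N_l proportional to sqrt(V_l/C_l)
N = np.maximum((1e4 * np.sqrt(V / cost)).astype(int), 10)
print(N)           # most samples land on the coarsest level
print(mlmc(L, N))  # close to E[theta^2] = 1/3 for theta ~ U(-1, 1)
```

Because the correction variances decay with the level while the per-sample cost grows, the allocation concentrates samples on the coarse levels, mirroring the cost distribution reported for the eddy current example.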

## 4 Results

The variance $\mathbb{V}[W_\ell]$ is nearly constant over all levels, as shown in Fig. 2(a), and thus independent of the number of degrees of freedom (DoF). This observation confirms that the MSE can be divided into an error determined by the variance of the estimator and an error determined by the FEM approximation. In Fig. 2(b) the number of samples $N_\ell$ is plotted for different error bounds $\varepsilon$. As anticipated, the lower the bound, the more samples are needed. Taking more samples also implies an increase in the overall computational cost on every level. It is found that the biggest computational cost is spent on the coarsest level. The reason can be seen by looking at Fig. 2(a): more samples are needed on the coarsest level for the convergence of the variance, whereas fewer samples are needed on the finer levels since $\mathbb{V}[W_\ell - W_{\ell-1}]$ is small. In Fig. 3 the costs of MLMC and MC are compared in dependence of the demanded accuracy $\varepsilon$. One clearly sees that for small error bounds $\varepsilon$ the costs of MLMC are lower than the ones of MC.

## 5 Conclusion

The cost of MLMC is determined by the costs on the coarsest level, since the reduction of the variance is dominating there. Adding more levels does not alter the variance much, but helps reducing the weak error. Comparing the costs of MLMC with classical MC, one sees that for sufficiently small error bounds $\varepsilon$ the costs of MLMC are lower than the ones of MC.

## Acknowledgment

This work is supported by the German BMBF in the context of the SIMUROM project, by the “Excellence Initiative” of the German Federal and State Governments and by the Graduate School of CE at TU Darmstadt.

## References

• [1] D. Xiu and G.E. Karniadakis, “The Wiener–Askey Polynomial Chaos for Stochastic Differential Equations,” SIAM J. Sci. Comput., vol. 24, no. 2, pp. 619–644, 2002.
• [2] F.Y. Kuo and I.H. Sloan, “Lifting the curse of dimensionality,” Notices Amer. Math. Soc., vol. 52, no. 11, pp. 1320–1328, 2005.
• [3] J. Dick, F.Y. Kuo and I.H. Sloan, “High-dimensional integration: the quasi-Monte Carlo way,” Acta Numer., vol. 22, pp. 133–288, 2013.
• [4] M.B. Giles, “Multilevel Monte Carlo methods,” Acta Numer., vol. 24, pp. 259–328, 2015.
• [5] A.L. Teckentrup, R. Scheichl, M.B. Giles and E. Ullmann, “Further analysis of multilevel Monte Carlo methods for elliptic PDEs with random coefficients,” Numer. Math., vol. 125, no. 3, pp. 569–600, 2013.
• [6] S.J. Salon, Finite Element Analysis of Electrical Machines, Kluwer, 1995.