Deep Eikonal Solvers

by Moshe Lichtenstein, et al.

A deep learning approach to numerically approximate the solution to the Eikonal equation is introduced. The proposed method is built on the fast marching scheme, which comprises two components: a local numerical solver and an update scheme. We replace the formulaic local numerical solver with a trained neural network to provide highly accurate estimates of local distances for a variety of different geometries and sampling conditions. Our learning approach generalizes not only to flat Euclidean domains but also to curved surfaces, enabled by the incorporation of certain invariant features in the neural network architecture. We show a considerable gain in performance, validated by smaller errors and higher orders of accuracy for the numerical solutions of the Eikonal equation computed on different surfaces. The proposed approach leverages the approximation power of neural networks to enhance the performance of numerical algorithms, thereby connecting the somewhat disparate themes of numerical geometry and learning.



1 Introduction

Fast and accurate computation of distances is fundamental to innumerable problems in computer science. Specifically, in the sub-fields of computer vision, geometry processing and robotics, distance computation is imperative for applications like robot navigation [14, 10, 13], video object segmentation [28], image segmentation [3] and shape matching [30]. A distance function has a gradient with unit magnitude at every point in the domain; therefore, distance functions are computed by estimating the viscosity solutions of a Hamilton-Jacobi-type non-linear PDE called the Eikonal equation. Fast marching methods are the prominent numerical schemes for estimating distance functions on discretized Euclidean domains as well as on triangulated curved surfaces, and are known to have a computational complexity of $O(N \log N)$ for a discrete domain consisting of $N$ sample points. Different versions of the fast marching algorithm have been developed, yielding different orders of accuracy: first order [22], second order [23] and third order [1] for 2D Cartesian grids, and first order for triangulated surfaces [12], where the order is measured with respect to the resolution $h$ of the discretization. A fast marching method comprises a local numerical solver and an update step. The local numerical solver approximates the gradient locally to estimate the distance of a point on the advancing unit speed wavefront. Similar to the strategy employed in the celebrated Dijkstra's algorithm [6], the update step involves selecting the least distant point in order to propagate the wavefront. The accuracy of the fast marching scheme relies heavily on the local solver, which is responsible for the approximations to the gradient at points on the advancing front [26, 22].

The success of deep learning has shown that neural networks can be utilized as powerful function approximators of complex attributes governing various visual and auditory phenomena. The availability of large amounts of data and computational power, coupled with parallel streaming architectures and improved optimization techniques, has led to computational frameworks that efficiently exploit their representational power. However, the use of neural networks to provide accurate solutions to complex PDEs is still a nascent topic in computational research. Notable efforts include methods like [24], which attempts to solve high-dimensional parabolic partial differential equations, whereas [18] demonstrates the learning of differential operators from data as convolution kernels.

The main contribution of this paper is to develop a deep-learning infrastructure that enables accurate distance computation. Using the identical computational schematic of the fast marching method, the basic premise of this paper is that one can learn the local numerical solver by training a neural network and use it in conjunction with the upwind update scheme. Experiments in sections 3.2 and 4.1 demonstrate lower errors and higher order of accuracy in the solutions. Importantly, we show that our method generalizes to different geometries and sampling conditions.

2 Background

2.1 The Eikonal Equation

Given a domain $\Omega \subset \mathbb{R}^n$ and a curve or surface $S \subset \Omega$, we wish to find the distance, denoted by $u(x)$, for each $x \in \Omega$ from the given $S$. The distance function satisfies

$$|\nabla u(x)| = 1, \quad u(x) = 0 \;\; \forall x \in S, \qquad (1)$$

where $\nabla$ refers to the gradient operator. Equation (1) can be generalized to arbitrary boundary conditions on $S$, and also to weighted domains where the local metric at a point $x$ is defined by a positive scalar function $f(x)$. As shown in [4, 8, 9], the viscosity solution for Equation (1) is unique.

Next, we consider curved manifolds, specifically, surfaces embedded in $\mathbb{R}^3$. Let $(\mathcal{M}, g)$ be a Riemannian manifold with metric tensor $g$. The geodesic distance function $u : \mathcal{M} \to \mathbb{R}$ satisfies

$$|\nabla_g u(x)| = 1, \quad u(x) = 0 \;\; \forall x \in S, \qquad (2)$$

where $\nabla_g$ represents the gradient operator defined on the Riemannian manifold.

2.2 Numerical Approximation

Consider the task of numerically approximating the function $u$ in Equation (1). Let $u_{ij} = u(ih, jh)$, where $h$ defines the distance between neighboring grid points along the $x$ and $y$ axes. The following numerical approximation is shown [19, 21] to pick the unique viscosity solution of Equation (1),

$$\max\left(D^{-x}_{ij} u,\, -D^{+x}_{ij} u,\, 0\right)^2 + \max\left(D^{-y}_{ij} u,\, -D^{+y}_{ij} u,\, 0\right)^2 = 1, \qquad (5)$$

where $D^{\pm x}_{ij}$ and $D^{\pm y}_{ij}$ are the forward and backward difference operators for point $(i, j)$ along the $x$ and $y$ directions, respectively. Based on approximation (5), the solution of $u_{ij}$, given the values at its four neighbors, requires the solution of the quadratic equation

$$(u_{ij} - a)^2 + (u_{ij} - b)^2 = h^2, \quad a = \min(u_{i-1,j}, u_{i+1,j}), \;\; b = \min(u_{i,j-1}, u_{i,j+1}).$$
Algorithm 2.2: Eikonal Estimation on a Discretized Domain

1: Input: sampled domain $P$, source points $S \subset P$;
2: $u(s) \leftarrow 0 \;\forall s \in S$, tag $s$ as Visited;
3: $u(x) \leftarrow \infty \;\forall x \in P \setminus S$, tag $x$ as Unvisited;
4: while there is a non-Visited point do
       Denote the points adjacent to the newly Visited points as WaveFront
5:     for all $x \in$ WaveFront do
6:         Estimate $u(x)$ based on the Visited points.
7:         Tag $x$ as WaveFront.
8:     Tag the least distant WaveFront point as Visited.
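Algorithm 2.2 can be sketched in code. The following is a minimal illustration, not the paper's implementation: it pairs Dijkstra-style ordering (via a min-heap) with the classical first-order upwind local solver of Equation (5) on an $n \times n$ grid. Replacing the `update` function with a trained network yields the scheme proposed in Section 3.

```python
import heapq
import math

def fast_marching(n, sources, h=1.0):
    """Fast marching on an n x n grid for |grad u| = 1 with u = 0 on the
    given source indices. A minimal sketch: the heap realizes 'tag the
    least distant point as Visited', and `update` is the first-order
    upwind local solver that the paper replaces with a neural network."""
    INF = float("inf")
    u = [[INF] * n for _ in range(n)]
    visited = [[False] * n for _ in range(n)]
    heap = []
    for i, j in sources:
        u[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        # Smallest known neighbor value along each axis.
        a = min(u[i - 1][j] if i > 0 else INF, u[i + 1][j] if i < n - 1 else INF)
        b = min(u[i][j - 1] if j > 0 else INF, u[i][j + 1] if j < n - 1 else INF)
        if abs(a - b) >= h:            # the front arrives from one axis only
            return min(a, b) + h
        # Solve (u - a)^2 + (u - b)^2 = h^2, taking the larger root.
        return 0.5 * (a + b + math.sqrt(2.0 * h * h - (a - b) ** 2))

    while heap:
        d, i, j = heapq.heappop(heap)
        if visited[i][j]:
            continue
        visited[i][j] = True           # Visited: u(i, j) is final
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and not visited[ni][nj]:
                new = update(ni, nj)   # local numerical solver (step 6)
                if new < u[ni][nj]:    # re-estimate a WaveFront point
                    u[ni][nj] = new
                    heapq.heappush(heap, (new, ni, nj))
    return u
```

On a 5x5 grid with a single corner source, values along the axes come out exact (e.g. $u = 4$ four steps away), while diagonal values exhibit the well-known first-order overshoot of this local solver.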

Typically, the approximation in Equation (5) is based on a first order Taylor series expansion of $u$. However, more sophisticated approximations, such as those using two points [23] or three points [1] away from $(i, j)$ in each direction, or using diagonal neighbors [7], yield more accurate solutions. Unlike Euclidean domains, which enjoy the benefit of regular sampling, developing local approximations similar to Equation (5) for curved manifolds is evidently more challenging due to the lack of a universal regular sampling strategy.

2.3 Fast Eikonal Solvers

Fast Eikonal solvers estimate distance functions in linear or quasilinear time. They typically comprise two computational parts: a local numerical solver and an ordering/update method. The local solver provides an approximation for the distance function at a desired point and the ordering method chooses the next point to approximate. Fast marching methods [26, 22] are the most prominent subclass of fast Eikonal solvers; they use approximations similar to Equation (5) for computing the distance function at a point using the known distances of its neighbors. By employing Dijkstra's ordering strategy, fast marching schemes display an upwind nature, where the solution can be seen to grow outward from the source, equivalent to the propagation of a unit speed wavefront (see Figure 2.2). The other prominent subclass of fast Eikonal solvers is the family of fast sweeping methods [32, 33, 11, 29], iterative schemes that use alternating sweeping ordering. Some variants [17] of fast sweeping establish their local solver on finite element techniques; however, most of them use the same upwind difference local solver as fast marching.

Kimmel and Sethian [12] extended the fast marching scheme to approximate geodesic distances on triangulated surfaces by treating the sampled manifold as piecewise planar. In order to estimate $u(x)$ for some mesh point $x$, each triangle involving $x$ is processed independently. Given a triangle comprising the points $x$, $x_1$, $x_2$, the distance is estimated based on $u(x_1)$ and $u(x_2)$ by approximating Equation (1) on the triangle's plane. The minimum of the solutions computed on all the triangles involving $x$ is chosen as the final distance assigned to $x$. This algorithm is analogous to the first-order fast marching method that operates on regular grids and therefore has an accuracy of $O(h)$.

3 Deep Eikonal Solver

We present a fast Eikonal solver estimating the geodesic distance $u(x)$ for a discrete set of points $x \in P$ from a subset of source points $S \subset P$. The local solver (step 6 in Algorithm 2.2) in our scheme is represented by a neural network. The process of choosing subsequent points for evaluation is similar to Dijkstra's algorithm [6], according to which each point is tagged into one of three disjoint sets,

  1. Visited: points $x$ where $u(x)$ is already computed (black points in Figure 2.2).

  2. WaveFront: points $x$ where the calculation of $u(x)$ is in process (red points in Figure 2.2).

  3. Unvisited: points $x$ where $u(x)$ is yet to be determined (green points in Figure 2.2).

Since we use the same update method used in the fast marching scheme, the computational complexity of our method is $O(N \log N)$ for $N$ sample points. In Section 3.1 we describe the general philosophy behind training the network. In Sections 3.2 and 4.1 we elaborate on the exact procedure and architecture of the network specific to the corresponding domain.

3.1 Training the Local Solver

Our basic premise is that the local solver can be learned using a neural network. The input to the local solver in action at a certain point $x$ is the set of points in its neighborhood, denoted by $\mathcal{N}(x)$, and their corresponding distances $\{u(p) : p \in \mathcal{N}(x)\}$. Based on the local geometry of the Visited points in $x$'s neighborhood, an estimate of the local distance $u(x)$ is output by the local solver. See Figures 2.2 and 3 for a visual schematic.

Following a supervised training procedure, we train the local solver with examples containing the ground truth distances. By ground truth we refer to the most accurate available solution for the distance (elaborated further in Sections 3.2 and 4). For a given point $x$, denote this distance as $\tilde{u}(x)$. The network inputs the neighborhood information $\{u(p) : p \in \mathcal{N}(x)\}$ and is trained to minimize the difference between its output and the corresponding ground truth distance $\tilde{u}(x)$ at the point $x$.

To simulate the propagating unit speed wavefront in various representative scenarios, we employ the following strategy. We develop a variety of different sources and construct a dataset of local neighborhood patches at different locations relative to the sources. We allow a patch point $p \in \mathcal{N}(x)$ to participate in the prediction of $u(x)$ according to the following rule:

$$\tilde{u}(p) < \tilde{u}(x). \qquad (7)$$

Therefore, given a point $x$ and a patch of points constituting its neighborhood $\mathcal{N}(x)$, the network inputs the patch data, where all points that are declared Visited according to rule (7) input their respective ground truth distance $\tilde{u}(p)$. For points that do not satisfy condition (7), we simulate that the wavefront starting from the source arrives at their location later than it arrives at $x$. Hence, such points are not considered Visited and they input a constant value $c$ to the network,

$$u(p) \leftarrow c \quad \text{for } p \text{ not satisfying (7)}. \qquad (8)$$
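As an illustration of rules (7) and (8), the following sketch builds one training pair under our own assumptions: a patch point contributes its ground-truth distance only if the front reaches it before the center point, and masked points receive a placeholder constant (`pad_value` is our choice here, not fixed by the text).

```python
def make_training_example(center, patch, gt_dist, pad_value=-1.0):
    """Build one (features, target) pair for the local solver.
    `gt_dist` maps a point to its ground-truth distance from the source.
    A patch point contributes its distance only if the front reaches it
    before the center (the rule analogous to (7)); otherwise it is masked
    with `pad_value` (analogous to (8)), which is our assumption."""
    target = gt_dist(center)
    features = [gt_dist(p) if gt_dist(p) < target else pad_value
                for p in patch]
    return features, target
```

For a point source at the origin with `gt_dist` the Euclidean distance, a patch point closer to the source than the center keeps its distance, while a farther one is masked.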


By employing rules (7) and (8) stated above, we create a rich database of such local patches coupled with the desired outputs. The parameters of the network $\mathcal{F}$ are optimized to minimize the MSE loss

$$L = \sum_{x} \big( \mathcal{F}(\mathcal{N}(x)) - \tilde{u}(x) \big)^2$$

over a dataset comprising a variety of points and their corresponding neighborhoods.

The intuition behind our approach is twofold. First, we expect that using a larger local support leads to a more accurate estimate of the solution to the differential equation. Second, by training the network to follow a ground truth distance, we avoid the need to develop a sophisticated formula for locally approximating the gradient with high accuracy. Rather, we expect the network to learn the necessary relationship between the local patch and the ground truth distance. As demonstrated in our results section, our learning-based approach generalizes to different scenarios including different shapes and different datasets.







3.2 Deep Eikonal Solver for Cartesian Grids

We begin by evaluating our scheme on a Euclidean domain sampled regularly. As the network's input patch for a point $x$, we choose all the points located within a fixed radius of $x$ (yellow points in Figure 2.2(b)). We use a multilayer perceptron (henceforth abbreviated as MLP) with four fully connected layers, applying a ReLU non-linearity after each linear stage. Since the grid spacing is uniform in the Cartesian case, $h$ encodes all necessary spatial information regarding the neighborhood $\mathcal{N}(x)$. Accordingly, we input only the grid spacing $h$ to the network instead of the explicit coordinates of the patch points.
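A minimal sketch of such an MLP forward pass follows. The layer widths and the input dimension (the patch distances concatenated with the grid spacing $h$) are our assumptions, since the excerpt does not fix them, and the weights are random placeholders rather than trained parameters.

```python
import numpy as np

def mlp_forward(x, sizes=(13, 128, 64, 32, 1), seed=0):
    """Forward pass through four fully connected layers with a ReLU after
    each linear stage, as described above. Weights are random
    placeholders; the widths in `sizes` are assumptions."""
    rng = np.random.default_rng(seed)
    h = np.asarray(x, dtype=float)
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.standard_normal((n_in, n_out)) * 0.1  # placeholder weights
        h = np.maximum(h @ W, 0.0)                    # linear stage + ReLU
    return h
```

The network maps a 13-dimensional input (under our assumed sizes) to a single scalar, the local distance estimate.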

Figure: Order of Accuracy. The slope of each error plot is the order of accuracy of the corresponding scheme.

We trained our network by generating 10,000 synthetic examples constructed as per the strategy enumerated in Section 3.1. We design a variety of different source configurations such as points, circles and arbitrary curves, and construct different local patches using the ground truth distance from these sources. For the Cartesian case, we evaluate the ground truth distance as the exact Euclidean distance to the source,

$$\tilde{u}(x) = \min_{s \in S} \| x - s \|_2.$$
We pipeline each example by subtracting a bias from each distance and by scaling the example to mean magnitude 0.5, thereby reducing the diversity of inputs prior to feeding them into the network. Note that the subtraction is equivalent to solving Equation (1) with a correspondingly shifted initial condition, and the scaling is equivalent to solving Equation (1) on correspondingly scaled coordinates. Before proceeding with the update step, the network's output is re-scaled and the bias is added back to obtain the distance estimate.
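The pre- and post-processing pipeline above can be sketched as follows; the choice of the patch minimum as the bias is our assumption, the text only states that a bias is subtracted and later added back.

```python
def preprocess(distances):
    """Subtract a bias (here: the patch minimum, our assumption) and
    scale to mean magnitude 0.5; return the transform parameters so the
    network output can be mapped back afterwards."""
    bias = min(distances)
    shifted = [d - bias for d in distances]
    mean_mag = sum(abs(d) for d in shifted) / len(shifted)
    scale = 0.5 / mean_mag if mean_mag > 0 else 1.0
    return [d * scale for d in shifted], bias, scale

def postprocess(output, bias, scale):
    """Invert the transform on the network output: re-scale, add bias."""
    return output / scale + bias
```

For the patch distances [1, 2, 3] the bias is 1, the scale is 0.5, and a network output of 0.5 maps back to the distance 2.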

We test our method using the benchmark shown in [23], comprising two point sources. The learned scheme shows a more accurate approximation of the distance function compared to the fast marching local solvers (see Figure 2.2(b)). We measure the order of accuracy of the proposed scheme by evaluating the error as a function of the grid spacing [15]. Let $\tilde{u}$ be the ground truth and $u_h$ represent the solution of a numerical scheme on a grid with spacing $h$. The error then behaves as

$$E(h) = \| u_h - \tilde{u} \| \approx C h^k,$$

where $C$ is a constant and $k$ is the order of accuracy. We evaluated our scheme for a range of grid sizes and plotted $\log E(h)$ as a function of $\log h$. The slope of this line gives the order of accuracy $k$. From Figure 3.2 we can see that our local solver has a higher order of accuracy compared to the classical fast marching local solvers, in addition to lower errors.
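The order-of-accuracy measurement above reduces to a log-log line fit, which can be sketched as:

```python
import numpy as np

def order_of_accuracy(spacings, errors):
    """Estimate k from E(h) ~ C h^k by fitting a line to log E versus
    log h; the slope of the fitted line is the order of accuracy."""
    k, _log_c = np.polyfit(np.log(spacings), np.log(errors), 1)
    return k
```

For instance, errors that halve whenever the spacing halves (E = 2h) yield a slope of 1, i.e. a first-order scheme.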

4 Deep Eikonal Solver for Triangulated Meshes

For triangulated meshes, we chose the second-ring neighbors of a point $x$ to constitute the local patch $\mathcal{N}(x)$ (see Figure 3). In Euclidean domains, the regular sampling allows us to encode the spatial information of a patch using only the grid spacing $h$. However, the unordered nature of the sampled points within a mesh patch demands an explicit description of the spatial position associated with each point in a given patch. For each point $p \in \mathcal{N}(x)$, we construct a vector comprising four values: the 3D coordinates of $p$ along with the distance $u(p)$, namely,

$$v_p = \big(p_x,\; p_y,\; p_z,\; u(p)\big).$$

The architecture we employ (see Figure 3) has the following structure. It comprises input vectors, each encoding the information about a single point in the neighborhood of some patch, as shown in Figure 3. Each vector is processed independently by linear layers, which produce a 1024-dimensional feature vector for each point. These 1024-dimensional feature vectors are max-pooled to generate a single vector of the same dimension, which is finally passed through an MLP that generates the desired output distance $u(x)$. Our architecture is motivated by previous deep learning frameworks for point clouds like [20]. This scheme of learning functions over non-Euclidean domains was analytically validated in [31].
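A minimal sketch of this max-pooling architecture with random placeholder weights (all layer widths other than the 1024-dimensional feature are assumed, and the final MLP is collapsed to a single linear layer) demonstrates the key property: invariance to the ordering of the patch points.

```python
import numpy as np

def predict_distance(patch, seed=0):
    """Lift each per-point feature vector (x, y, z, u(p)) to a
    1024-dimensional feature with a shared point-wise layer, max-pool
    over the patch, and map the pooled vector to a scalar distance.
    Weights are random placeholders, not trained parameters."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((4, 1024)) * 0.05   # shared point-wise layer
    w2 = rng.standard_normal(1024) * 0.05        # final linear layer
    feats = np.maximum(np.asarray(patch, dtype=float) @ W1, 0.0)
    pooled = feats.max(axis=0)                   # order-invariant pooling
    return float(pooled @ w2)
```

Because max-pooling is symmetric in its inputs, permuting the patch points leaves the prediction unchanged, which is exactly why this design suits unordered mesh neighborhoods.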



Figure 3: Deep Eikonal Solver for triangulated meshes: (a) shows a local patch on a surface, the red line corresponds to the advancing WaveFront passing from Visited (green) to Unvisited (white). (b) shows the network architecture (described in section 4.1) for processing the local information.

4.1 Experimental results

We create training examples from 8 TOSCA shapes [2] using the methodology described in Section 3.1. In order to train for rotation invariance, we augment the data by multiplying each example by a random rotation matrix. As a pre-processing stage, we apply the same pipeline described in Section 3.2.

We compare our Deep Eikonal solver to two different axiomatic approaches: our direct competitor, the fast marching method, and a recent geodesic approximation, the heat method [5]. The heat method uses solutions of the heat equation to estimate geodesics on surfaces, motivated by Varadhan's formula [27], which connects the two. As shown in Figures 4.1 and 4.1, the Deep Eikonal solver approximates the geodesic distance more accurately than the axiomatic approaches. Fast marching and the heat method yield smooth but inaccurate approximations in comparison to the polyhedral scheme [25], which is known to be the most accurate scheme for computing distances on surfaces. Tables 4.1 and 4.1 show quantitative results on the TOSCA [2] and SHREC [16] databases, respectively. The error at each point was computed relative to the polyhedral scheme solution, taken as the ground truth. To evaluate our method's order of accuracy as described in Section 3.2, we use different resolutions of the unit sphere, since the geodesic distance on a sphere can be computed analytically. Figure 4 demonstrates that the order of accuracy of the Deep Eikonal solver is comparable with that of the polyhedral scheme. Finally, we test the robustness of our method to noise in the sampled manifold. Figure 5 demonstrates that the isolines produced by our method remain robust to noise in comparison to fast marching.

Figure: Intra-Dataset Generalization. Iso-contours shown for (left to right) the polyhedral scheme, heat method, fast marching and our method. The network in our scheme was trained on patches extracted from 8 TOSCA shapes. The evaluation was conducted on a separate test set.

Table: Intra-Dataset Generalization. Quantitative evaluation conducted on TOSCA; the errors were computed relative to the polyhedral scheme (two error measures per method).

Model    | Heat   | FMM    | Ours   | Heat   | FMM    | Ours
---------|--------|--------|--------|--------|--------|-------
Cat      | 0.2237 | 0.0060 | 0.0011 | 0.7775 | 0.0493 | 0.0352
Centaur  | 0.1185 | 0.0131 | 0.0042 | 0.3504 | 0.1622 | 0.1153
Dog      | 0.0562 | 0.0121 | 0.0022 | 0.3393 | 0.2291 | 0.1034
Michael  | 0.0625 | 0.0082 | 0.0015 | 0.4196 | 0.2855 | 0.1042
Victoria | 0.1129 | 0.0087 | 0.0015 | 0.6174 | 0.1885 | 0.0764
Wolf     | 0.0254 | 0.0164 | 0.0054 | 0.2721 | 0.1465 | 0.0763

Figure: Inter-Dataset Generalization. Iso-contours shown for (left to right) the polyhedral scheme, heat method, fast marching and our method. The evaluation was conducted on SHREC whereas the network was trained with TOSCA.

Table: Inter-Dataset Generalization. Quantitative evaluation conducted on SHREC; the errors were computed relative to the polyhedral scheme (two error measures per method).

Model    | Heat   | FMM    | Ours   | Heat   | FMM    | Ours
---------|--------|--------|--------|--------|--------|-------
Male 1   | 0.0288 | 0.0105 | 0.0027 | 0.2570 | 0.0981 | 0.0681
Male 2   | 0.0152 | 0.0122 | 0.0026 | 0.1553 | 0.1694 | 0.0296
Male 3   | 0.0274 | 0.0105 | 0.0048 | 0.1926 | 0.1479 | 0.0762
Female 1 | 0.0236 | 0.0089 | 0.0034 | 0.3184 | 0.1535 | 0.0504
Female 2 | 0.0229 | 0.0083 | 0.0019 | 0.1350 | 0.0970 | 0.0383
Female 3 | 0.0410 | 0.0087 | 0.0026 | 0.4205 | 0.2070 | 0.0671

Figure 4: Order of Accuracy. The plots show the impact of edge resolution on the error; each scheme's order of accuracy corresponds to the slope of its plot. We construct unit sphere meshes with various edge lengths using Loop's subdivision scheme.
Figure 5: Robustness to noise. Equidistant isolines drawn on the cat model and its noisy version. The black isolines were computed on the noisy cat while the colored isolines were computed on the original cat.

5 Conclusion

In this paper, we introduce a new methodology by which we train a neural network to numerically solve the Eikonal equation on regularly and non-regularly sampled domains. We develop a numerical scheme that uses a data-centric, learning-based computation in conjunction with a well-established axiomatic update method. In comparison to its axiomatic counterparts, we demonstrate that our hybrid scheme yields a considerable gain in performance, measured by lower errors and higher orders of accuracy on a variety of different settings. Our approach advocates combining the best of both worlds: the approximation power of neural networks with meaningful axiomatic numerical computations, in order to achieve better performance as well as a better understanding of computational algorithms.


  • [1] Ahmed, S., Bak, S., McLaughlin, J., Renzi, D.: A third order accurate fast marching method for the eikonal equation in two dimensions. SIAM Journal on Scientific Computing 33(5), 2402–2420 (2011)
  • [2] Bronstein, A.M., Bronstein, M.M., Kimmel, R.: Numerical geometry of non-rigid shapes. Springer Science & Business Media (2008)
  • [3] Chen, D., Cohen, L.D.: Fast asymmetric fronts propagation for image segmentation. Journal of Mathematical Imaging and Vision 60(6), 766–783
  • [4] Crandall, M.G., Lions, P.L.: Viscosity solutions of hamilton-jacobi equations. Transactions of the American mathematical society 277(1), 1–42 (1983)
  • [5] Crane, K., Weischedel, C., Wardetzky, M.: Geodesics in heat: A new approach to computing distance based on heat flow. ACM Transactions on Graphics (TOG) 32(5),  152 (2013)
  • [6] Dijkstra, E.W.: A note on two problems in connexion with graphs. Numerische mathematik 1(1), 269–271 (1959)
  • [7] Hassouna, M.S., Farag, A.A.: Multistencils fast marching methods: A highly accurate solution to the eikonal equation on cartesian domains. IEEE transactions on pattern analysis and machine intelligence 29(9) (2007)
  • [8] Ishii, H.: Uniqueness of unbounded viscosity solution of hamilton-jacobi equations. Indiana University Mathematics Journal 33(5), 721–748 (1984)
  • [9] Ishii, H.: A simple, direct proof of uniqueness for solutions of the hamilton-jacobi equations of eikonal type. Proceedings of the American Mathematical Society pp. 247–251 (1987)
  • [10] Kimmel, R., Kiryati, N., Bruckstein, A.M.: Multivalued distance maps for motion planning on surfaces with moving obstacles. IEEE Transactions on Robotics and Automation 14(3), 427–436 (1998)
  • [11] Kimmel, R., Maurer, R.P.: Method of computing sub-pixel euclidean distance maps. US Patent 7,113,617 (Sep 26, 2006)
  • [12] Kimmel, R., Sethian, J.A.: Computing geodesic paths on manifolds. Proceedings of the national academy of Sciences 95(15), 8431–8435 (1998)
  • [13] Kimmel, R., Sethian, J.A.: Optimal algorithm for shape from shading and path planning. Journal of Mathematical Imaging and Vision 14(3), 237–244 (2001)
  • [14] Lee, S.K., Fekete, S.P., McLurkin, J.: Structured triangulation in multi-robot systems: Coverage, patrolling, voronoi partitions, and geodesic centers. The International Journal of Robotics Research 35(10), 1234–1260 (2016)
  • [15] LeVeque, R.J.: Finite difference methods for differential equations
  • [16] Li, B., Lu, Y., Li, C., Godil, A., Schreck, T., Aono, M., Burtscher, M., Chen, Q., Chowdhury, N.K., Fang, B., et al.: A comparison of 3d shape retrieval methods based on a large-scale benchmark supporting multimodal queries. Computer Vision and Image Understanding 131, 1–27 (2015)
  • [17] Li, F., Shu, C.W., Zhang, Y.T., Zhao, H.: A second order discontinuous galerkin fast sweeping method for eikonal equations. Journal of Computational Physics 227(17), 8191–8208 (2008)
  • [18] Long, Z., Lu, Y., Ma, X., Dong, B.: Pde-net: Learning pdes from data. arXiv preprint arXiv:1710.09668 (2017)
  • [19] Osher, S., Sethian, J.A.: Fronts propagating with curvature-dependent speed: algorithms based on hamilton-jacobi formulations. Journal of computational physics 79(1), 12–49 (1988)
  • [20] Qi, C.R., Su, H., Mo, K., Guibas, L.J.: Pointnet: Deep learning on point sets for 3d classification and segmentation. Proc. Computer Vision and Pattern Recognition (CVPR), IEEE 1(2), 4 (2017)
  • [21] Rouy, E., Tourin, A.: A viscosity solutions approach to shape-from-shading. SIAM Journal on Numerical Analysis 29(3), 867–884 (1992)
  • [22] Sethian, J.A.: A fast marching level set method for monotonically advancing fronts. Proceedings of the National Academy of Sciences 93(4), 1591–1595 (1996)
  • [23] Sethian, J.A.: Level set methods and fast marching methods: evolving interfaces in computational geometry, fluid mechanics, computer vision, and materials science, vol. 3. Cambridge university press (1999)
  • [24] Sirignano, J., Spiliopoulos, K.: Dgm: A deep learning algorithm for solving partial differential equations. Journal of Computational Physics 375, 1339–1364 (2018)
  • [25] Surazhsky, V., Surazhsky, T., Kirsanov, D., Gortler, S.J., Hoppe, H.: Fast exact and approximate geodesics on meshes. In: ACM transactions on graphics (TOG). vol. 24, pp. 553–560. Acm (2005)
  • [26] Tsitsiklis, J.N.: Efficient algorithms for globally optimal trajectories. IEEE Transactions on Automatic Control 40(9), 1528–1538 (1995)
  • [27] Varadhan, S.R.S.: On the behavior of the fundamental solution of the heat equation with variable coefficients. Communications on Pure and Applied Mathematics 20(2), 431–455 (1967)
  • [28] Wang, W., Shen, J., Porikli, F.: Saliency-aware geodesic video object segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3395–3402 (2015)
  • [29] Weber, O., Devir, Y.S., Bronstein, A.M., Bronstein, M.M., Kimmel, R.: Parallel algorithms for approximation of distance maps on parametric surfaces. ACM Transactions on Graphics (TOG) 27(4),  104 (2008)
  • [30] Younes, L.: Spaces and manifolds of shapes in computer vision: An overview. Image and Vision Computing 30(6-7), 389–397 (2012)
  • [31] Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R.R., Smola, A.J.: Deep sets. In: Advances in Neural Information Processing Systems. pp. 3391–3401 (2017)
  • [32] Zhao, H.K., Osher, S., Merriman, B., Kang, M.: Implicit and nonparametric shape reconstruction from unorganized data using a variational level set method. Computer Vision and Image Understanding 80(3), 295–314 (2000)
  • [33] Zhao, H.: A fast sweeping method for eikonal equations. Mathematics of computation 74(250), 603–627 (2005)