# A High-Order Scheme for Image Segmentation via a Modified Level-Set Method

The method is based on an adaptive "filtered" scheme recently introduced by the authors. The main feature of the scheme is the possibility of stabilizing an a priori unstable high-order scheme via a filter function, which combines a high-order scheme in the regions where the solution is regular with a monotone scheme near singularities. The filtered scheme considered in this paper uses the local Lax-Friedrichs scheme as the monotone scheme and the Lax-Wendroff scheme as the high-order scheme, but other couplings are possible. Moreover, we also introduce a modified velocity function for the level-set model used in segmentation; this velocity yields more accurate results than other velocities proposed in the literature. Numerical tests on synthetic and real images confirm the accuracy of the proposed method and the advantages given by the new velocity.


## 1 Introduction

The Level-Set (LS) method was introduced by Osher and Sethian in the 1980s [24, 21] and has since been used in several applications, e.g. front propagation, computer vision, and computational fluid dynamics (see the monographs by Sethian [25] and by Osher and Fedkiw [20] for several interesting examples). This method is nowadays very popular for its simplicity and its capability to deal with topological changes. In fact, the main advantage of the level-set method is the possibility to easily describe time-varying objects and follow shapes that change topology, for example when a shape splits in two, develops holes, or the reverse of these operations. A shape that changes its topology, possibly splitting into two objects, is difficult to describe numerically in order to follow its evolution; it is much easier to consider the evolution in time of the shape through its level-set function. In $\mathbb{R}^2$, the level-set method represents a closed curve $\Gamma$ (such as the shape boundary in our case) by means of an auxiliary function $v$, called the level-set function: $\Gamma$ is the zero-level set of $v$, i.e. $\Gamma = \{(x,y) : v(x,y) = 0\}$, and the level-set method manipulates $\Gamma$ implicitly through the function $v$. This function takes negative values inside the region delimited by the curve and positive values outside.

Here, we are interested in following the evolution of a front that moves in its normal direction with a velocity $c(x,y)$. Since the velocity function depends only on the point $(x,y)$, this problem can be written in terms of a first-order evolutive Hamilton-Jacobi equation of eikonal type

$$\begin{cases} v_t + c(x,y)\,|\nabla v| = 0, & (t,x,y) \in (0,T)\times\mathbb{R}^2,\\ v(0,x,y) = v_0(x,y), & (x,y) \in \mathbb{R}^2, \end{cases} \tag{1}$$

where $v_0$ must be a proper representation of the initial front $\Gamma_0$, i.e. if we identify the first configuration of the front as the $0$-level set of the viscosity solution, then we require

$$\begin{cases} v_0(x,y) < 0, & (x,y) \in \Omega_0,\\ v_0(x,y) = 0, & (x,y) \in \Gamma_0,\\ v_0(x,y) > 0, & (x,y) \in \mathbb{R}^2\setminus\overline{\Omega}_0, \end{cases} \tag{2}$$

where $\Omega_0$ is the region delimited by $\Gamma_0$.
The front at time $t$ is defined as the $0$-level set of the solution $v$ of (1), i.e.

$$\Gamma_t := \{(x,y) : v(t,x,y) = 0\}. \tag{3}$$

The velocity $c$ cannot change sign during the evolution; its orientation depends on the type of evolution (outward for an expansion and inward for a shrinking). In general, the level-set method can handle velocities depending on physical quantities in order to describe several phenomena, e.g.:

• $c = c(t,x,y)$, isotropic growth with time-varying velocity;

• $c = c(x,y,\mathbf{n})$, anisotropic growth, dependent on the normal direction $\mathbf{n}$;

• $c = -\kappa$, Mean Curvature Motion, with $\kappa$ the mean curvature of the front at time $t$;

• $c = c(x,y,v)$, velocity dependent on the level set.

Depending on the choice of the velocity, the front evolution will be described by first- or second-order PDEs. Here we will consider only the first-order problem (1) and velocities depending only on the position.
In order to set our contribution into perspective, let us mention that the segmentation problem has been solved by various techniques which mainly rely on two different approaches: variational methods and active-contour methods. For the first approach the interested reader can look at [7, 6] and the references therein; for the link between the two classes of methods, see [9]. As already said, we will apply the level-set method based on (1), looking for an accurate numerical method. High-order methods have been proposed for (1), and most of them are based on non-oscillatory local interpolation techniques that avoid spurious oscillations around discontinuities of the solution and/or jumps in the gradient. These techniques were originally developed for conservation laws (see the seminal paper [13] and the references therein); the research activity on essentially non-oscillatory (so-called ENO) methods has been rather effective, and a number of improvements have been proposed, e.g. in [17, 15, 3], so that ENO/WENO techniques are now rather popular in many applications (see [27] for a recent survey). We should also mention that these techniques were later successfully applied to the numerical solution of Hamilton-Jacobi equations [22], opening the way to other applications. The reader interested in image processing will find in [1, 2] an application to image compression and in [26] an application to image segmentation. However, a general convergence theorem for ENO/WENO schemes is still missing and their application is a rather delicate issue. These limitations have motivated further investigations, and a new class of high-order methods for evolutive Hamilton-Jacobi equations has been proposed following a different approach based on "filtered schemes", introduced in this framework by Lions and Souganidis [16]. Filtered schemes are based on a simple coupling of a monotone scheme with a high-order accurate scheme. Monotone schemes converge to the weak (viscosity) solution but are known to be at most first-order accurate, whereas high-order schemes give higher accuracy but in general are not stable. The crucial point is the coupling between the two schemes, obtained via a filter function which selects the scheme to be applied at each node of the grid in order to guarantee (under appropriate assumptions) global convergence. The construction of these schemes is rather simple, as explained by Oberman and Salvador [19], because one can couple various numerical methods and let the filter function decide the switch between the two schemes.
A general convergence result has been proved by Bokanowski, Falcone and Sahu in [5] and recently improved by Falcone, Paolucci and Tozza [11] with an adaptive and automatic choice of the parameter governing the switch. Note that the adaptation of the parameter depends on some regularity indicators in every cell; these indicators are computed at every iteration, and this guarantees convergence.

Our contribution here is to construct a modified level-set approach for the segmentation problem, where the velocity is slightly modified with respect to the standard velocity in order to obtain a regular evolution (see the details in Section 2) and to benefit from the high-order accuracy of the filtered scheme. Moreover, we apply the filtered scheme to the segmentation problem in two dimensions and give some hints on how it should be implemented to take the modified velocity into account efficiently (Section 4). For the reader's convenience, a short presentation of the filtered scheme is contained in Section 3 (more details and the convergence result are given in [12, 11]). The performance of this new method for the segmentation problem is illustrated in Section 5, where we compare it with a classical monotone scheme (first-order accurate) on several synthetic and real images, presenting a detailed error analysis of the numerical results.

## 2 Image segmentation via a modified LS method

The key idea behind the use of the LS technique for the image segmentation problem is that the boundaries of one (or more) object(s) inside a given image $I$ are characterized by an abrupt change of the intensity values of the image, so that the magnitude of the image gradient $\nabla I$ can be used as an indication of the edges. In this context, the definition of the velocity $c$ in (1) plays an important role: it must be close to zero when the front is close to an edge, since there the evolution should stop. The velocity will be positive or negative depending on the case, expanding or shrinking respectively, without the possibility of changing sign during the evolution.

Several possible definitions of the velocity function have been proposed in the literature. Typical examples are the following:

$$c_1(x,y) = \frac{1}{1 + |\nabla(G * I(x,y))|^{\mu}}, \qquad \mu \ge 1, \tag{4}$$

where $G$ is a smoothing kernel and the exponent $\mu$ is used to give more weight to the changes in the gradient, if necessary. This function has been proposed in [8] and, with a different choice of $\mu$, in [18]. According to this definition, the velocity takes values in $(0,1]$, with values closer to zero in regions of high image gradient and values closer to unity in regions of relatively constant intensity.
Another possible choice, defined in [18], has the form

$$c_2(x,y) = 1 - \frac{|\nabla(G * I(x,y))| - M_2}{M_1 - M_2}, \tag{5}$$

where $M_1$ and $M_2$ are the maximum and minimum values of $|\nabla(G * I)|$. This latter velocity has properties similar to the previous one, taking values in $[0,1]$, being close to $0$ where the magnitude of the image gradient is close to its maximal value, and close to $1$ otherwise. It is clear that both definitions have the desired properties, but with slightly different features. More precisely, in the first case the velocity depends more heavily on the changes in the magnitude of the gradient, thus giving an easier detection of the edges but also possibly producing false edges inside the object (e.g. in the presence of specularity effects). On the other hand, the latter velocity is smoother inside the objects, being less dependent on the relative changes in the gradient, but might have problems detecting all the edges if at least one of them is "more marked".
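As a concrete illustration, the two velocities (4) and (5) can be computed from an image with a few lines of NumPy/SciPy. This is a minimal sketch, not the authors' code: the smoothing kernel $G$ is taken here to be a Gaussian, and the parameters `sigma` and `mu` are our illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_velocities(I, sigma=1.0, mu=1.0):
    """Edge-stopping velocities (4) and (5) for a (non-constant) image I."""
    G = gaussian_filter(I.astype(float), sigma)   # G * I: smoothed image
    gy, gx = np.gradient(G)                       # image gradient (rows, cols)
    mag = np.hypot(gx, gy)                        # |grad(G * I)|
    c1 = 1.0 / (1.0 + mag**mu)                    # (4): in (0, 1], small at edges
    M1, M2 = mag.max(), mag.min()                 # (5): rescaled gradient magnitude
    c2 = 1.0 - (mag - M2) / (M1 - M2)
    return c1, c2
```

On a synthetic step image, `c1` stays close to 1 in flat regions and drops along the edge, while `c2` is exactly 0 where the gradient magnitude is maximal and exactly 1 where it is minimal.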

### 2.1 Extension of the velocity function

As also observed in [18], the edge-stopping function (defined by choosing one of the velocities previously introduced) has meaning only on the front $\Gamma_t$, since it was designed precisely to force the $0$-level set to stop in the proximity of an edge. Consequently, it derives its meaning not from the geometry of the level-set function but only from the configuration of the front. Using one of the classical velocity functions introduced before, as will become clear from the numerical tests in Sect. 5 (especially the first one, concerning a synthetic rhombus), high-order schemes produce unstable results, since the representative function develops oscillations near the edges where the front should stop. This problem can be solved by adding a limiter [5]. Here, we are interested in avoiding the use of any limiter while still producing stable results. That is why we need to extend the image-based velocity function to all the level sets of the representation, in order to give a physical motivation also to the speed used to make all the other (infinitely many) level sets evolve.

In that spirit, we follow the idea discussed in [18], proposing a new and simple way to extend the velocity function which depends only on the initial condition and avoids the heavy computations required by the first solution proposed in [18], while still producing stable results. Recalling that approach, the first property the velocity has to satisfy is:

###### Property 2.1.

Any external (image-based) speed function that is used in the equation of motion written for the function $v$ should not cause the level sets to collide and cross each other during the evolutionary process.

As we previously said, the appropriate extension depends on the choice of the initial representation. To present the basic idea, let us consider the distance (to the initial $0$-level set) function, that is

$$v_0(x,y) = \mathrm{dist}\{(x,y),\,\Gamma_0\}. \tag{6}$$
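For instance, a distance-type representation of a circular front $\Gamma_0$ can be built directly on the grid. The sketch below uses the signed variant (negative inside, as in the convention (2)); the grid and radius are arbitrary illustrative choices.

```python
import numpy as np

def signed_distance_circle(X, Y, cx, cy, r0):
    """Signed distance to a circle of center (cx, cy) and radius r0:
    negative inside, zero on Gamma_0, positive outside, as in (2)."""
    return np.hypot(X - cx, Y - cy) - r0

# sample grid on [-1, 1]^2 with a circle of radius 0.5
x = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, x)
v0 = signed_distance_circle(X, Y, 0.0, 0.0, 0.5)
```

Away from the center, this representation satisfies $|\nabla v_0| = 1$, the property exploited below when choosing $d(v) = v$.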

Therefore, with this choice we can define the velocity extension as follows:

###### Property 2.2.

The value of the extended speed function at a point lying on the level set $\{v = C\}$ is exactly the value of $c$ at the point $(x_0, y_0)$ on the $0$-level set such that the given point is at distance $C$ from $(x_0, y_0)$.

Note that the point $(x_0, y_0)$ is uniquely determined whenever the normal direction is well defined; in fact, $(x_0, y_0) = (x,y) - C\,\mathbf{n}(x,y)$, where $\mathbf{n}$ is the outgoing normal.
In order to apply this construction, the authors of [18] propose a simple but computationally heavy procedure to track the point on the $0$-level set associated to each point of any level set. These computations clearly lead to the necessity of some modifications, such as reinitialization for stability purposes and a narrow-band approach to reduce the computational cost.

In this work, we avoid such problems in tracking the $0$-level set by making use of our knowledge of the evolution and of the initial condition. The idea is straightforward, and it is based on the fact that the evolution is oriented in the direction normal to the front. Hence, if the reciprocal disposition of the level sets is also known (which is why we must choose the initial condition wisely) and we make all the points in the normal direction to the $0$-level set evolve according to the same law, then it is reasonable to expect that all such points will keep their relative distance unchanged as time flows.

In order to present our modification, let us still consider the distance to $\Gamma_0$ (6) as initial condition. Then, by construction, all the $C$-level sets are at a distance $C$ from the $0$-level set, as stated by Property 2.2. Hence, if we consider a generic point $(x_c, y_c)$ on a $C$-level set, then it is reasonable to assume that the closest point on the $0$-level set should be

$$(x_0, y_0) = (x_c, y_c) - v(t, x_c, y_c)\,\frac{\nabla v(t, x_c, y_c)}{|\nabla v(t, x_c, y_c)|}. \tag{7}$$

Therefore, it seems natural to define the extended velocity as

$$\tilde c(x, y, v, v_x, v_y) = c\!\left(x - v\,\frac{v_x}{|\nabla v|},\; y - v\,\frac{v_y}{|\nabla v|}\right), \tag{8}$$
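On a grid, (8) can be approximated by backtracking each node along the normal and interpolating the image-based velocity at the foot point. The following is a minimal sketch under our own implementation choices (bilinear interpolation, a uniform spacing `dx`, and a clamp of the gradient norm at extrema), not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extended_velocity(c, v, dx):
    """Evaluate c_tilde of (8) on the grid: sample c at the foot point
    (x, y) - v * grad(v)/|grad(v)|, which approximates the closest point
    on the 0-level set when v is a signed distance function."""
    vy, vx = np.gradient(v, dx)                 # grad v (rows = y, cols = x)
    norm = np.hypot(vx, vy)
    norm[norm == 0] = 1.0                       # avoid division by zero at extrema
    rows, cols = np.indices(v.shape)
    # foot point in grid-index coordinates (displacement v/dx along -grad v/|grad v|)
    r0 = rows - (v / dx) * vy / norm
    c0 = cols - (v / dx) * vx / norm
    return map_coordinates(c, [r0, c0], order=1, mode='nearest')
```

With a signed-distance `v` and a radially symmetric `c`, the result at any node approximates the value of `c` at the radially closest point of the front, as Property 2.2 requires.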

which coincides with $c$ on the $0$-level set, as needed. The same approach can be applied as long as the initial distance between the level sets is known. In that case, if we want higher regularity of the evolving surface, which would be preferable for high-order schemes such as the Adaptive Filtered scheme used in the numerical tests, we can define an appropriate initial condition, for example, by simply rotating a regular function in one space dimension. More precisely, let us consider a regular function $\bar v_0$ such that $\bar v_0(r_0) = 0$, where $r_0$ is the radius of the initial circle (e.g. the right branch of a parabola centered at the origin), and let us define $v_0$ by rotating its profile, that is

$$v_0(x,y) = \bar v_0\!\left(\sqrt{x^2 + y^2}\right). \tag{9}$$

Then, it is clear that the $C$-level sets of $v_0$ are located at a distance

$$d(C) := \bar v_0^{-1}(C) - r_0, \qquad \text{with } \bar v_0^{-1}(C) \ge 0, \tag{10}$$

from the $0$-level set and, according to our previous remarks, they should keep this property as time evolves. Consequently, also in this case we can define

$$\tilde c(x, y, v, v_x, v_y) = c\!\left(x - d(v)\,\frac{v_x}{|\nabla v|},\; y - d(v)\,\frac{v_y}{|\nabla v|}\right). \tag{11}$$

More details on the function $d$ will be given in Section 2.2. For simplicity, in the last construction we assumed the representation function to be centered at the origin, but it is straightforward to extend the same procedure to more general situations. Note also that if we have only one object to segment (or we are considering shrinking from the frame of the picture), we can always use a representation function centered at the origin, since we can freely choose the domain of integration, given by the pixels of the image.

### 2.2 Motivations of the new velocity function

Since the idea behind the modification of the velocity into $\tilde c$ defined in (11), with $d(v) = v$ when the initial condition is the distance function (6), is to follow the evolution of the $0$-level set and then to define the evolution of the other level sets accordingly, we can see the new definition as a characteristic-based velocity. Consequently, in order to justify our approach, as a first step we analyze the characteristics of the equation, assuming the regularity necessary for the computations (smoothness of the velocity and of the solution in space and time), although the original problem does not in general satisfy these requirements. Let us assume that the characteristics do not cross each other during the evolution.

In order to simplify the presentation, we introduce the vector notations $z = (x, y)$ and $p = (p_1, p_2) = \nabla v$. We will use these notations only in this section; afterwards, the gradient will be denoted by the usual notation $\nabla v$.
With this vectorial notation, the Hamiltonian in our case is

$$H(z, v, p) = \tilde c(z, v, p)\,|p|. \tag{12}$$

Let us introduce the method of characteristics, writing the usual system

$$\begin{cases} \dot z(s) = \nabla_p H,\\ \dot v(s) = \nabla_p H \cdot p - H,\\ \dot p(s) = -\nabla_z H - H_v\, p, \end{cases} \tag{13}$$

where the dot denotes the derivative with respect to the variable $s$. In our case, denoting for brevity by $(\xi, \zeta)$ the backtracked point at which $c$ is evaluated in (11), we compute

$$\begin{aligned} \frac{\partial H}{\partial p_1} &= \frac{\partial \tilde c}{\partial p_1}\,|p| + \tilde c\,\frac{\partial |p|}{\partial p_1} = \left(\frac{\partial c}{\partial \xi}\frac{\partial \xi}{\partial p_1} + \frac{\partial c}{\partial \zeta}\frac{\partial \zeta}{\partial p_1}\right)|p| + \tilde c\,\frac{\partial |p|}{\partial p_1} \\ &= -\,d(v)\,\frac{\partial c}{\partial \xi}\,\frac{|p| - p_1^2/|p|}{|p|^2}\,|p| - d(v)\,\frac{\partial c}{\partial \zeta}\left(-\frac{p_1 p_2}{|p|^3}\right)|p| + \tilde c\,\frac{p_1}{|p|} \\ &= -\,d(v)\,\frac{\partial c}{\partial \xi}\,\frac{p_2^2}{|p|^2} + d(v)\,\frac{\partial c}{\partial \zeta}\,\frac{p_1 p_2}{|p|^2} + \tilde c\,\frac{p_1}{|p|}, \end{aligned} \tag{14}$$

and analogously for $\partial H/\partial p_2$, so that we obtain

$$\nabla_p H = \begin{pmatrix} \dfrac{d(v)\,p_2}{|p|^2}\left(p_1\dfrac{\partial c}{\partial \zeta} - p_2\dfrac{\partial c}{\partial \xi}\right) + \tilde c\,\dfrac{p_1}{|p|}\\[3mm] \dfrac{d(v)\,p_1}{|p|^2}\left(p_2\dfrac{\partial c}{\partial \xi} - p_1\dfrac{\partial c}{\partial \zeta}\right) + \tilde c\,\dfrac{p_2}{|p|} \end{pmatrix} \;\Longrightarrow\; \nabla_p H \cdot p = \tilde c(z, v, p)\,|p|. \tag{15}$$

Therefore, the system (13) becomes in our case the following:

$$\begin{cases} \dot z(s) = \nabla_p H,\\ \dot v(s) = \tilde c(z, v, p)\,|p| - \tilde c(z, v, p)\,|p| = 0,\\ \dot p(s) = -\nabla \tilde c(z, v, p)\,|p| + d'(v)\,\nabla \tilde c(z, v, p)\,|p|^2 = \nabla \tilde c(z, v, p)\,|p|\,\big(d'(v)\,|p| - 1\big). \end{cases} \tag{16}$$

Now, if we define $d$ such that

$$d'(v) = |p|^{-1}, \tag{17}$$

we have the final system

$$\begin{cases} \dot z(s) = \nabla_p H,\\ \dot v(s) = 0,\\ \dot p(s) = 0, \end{cases} \tag{18}$$

which states that, as long as the solution remains smooth enough, the characteristics are essentially directed in the normal direction and, along them, both the height $v$ and the gradient $p$ are preserved. These last two properties remain valid even when the solution is no longer smooth. Looking at the third relation of (18) and at the definition (17), since $p$ is constant along the characteristics, we can simply choose

$$d'(v) = |\nabla v_0|^{-1}, \tag{19}$$

which gives the trivial case $d(v) = v$ when $v_0$ is the distance function (for which $|\nabla v_0| = 1$), and which also holds for $d$ given by the previous definition (10). In fact, using the inverse function theorem, we have

$$d'(v) = \frac{d}{dv}\left(\bar v_0^{-1}(v)\right) = \frac{1}{\bar v_0'(w)}, \tag{20}$$

with $w$ such that $\bar v_0(w) = v$. Moreover, recalling the definition (9), we can compute

$$\begin{aligned} |\nabla v_0(x,y)| &= \left|\nabla\, \bar v_0\!\left(\sqrt{x^2 + y^2}\right)\right| = \left|\left(\bar v_0'\!\left(\sqrt{x^2+y^2}\right)\frac{x}{\sqrt{x^2+y^2}},\; \bar v_0'\!\left(\sqrt{x^2+y^2}\right)\frac{y}{\sqrt{x^2+y^2}}\right)\right| \\ &= \bar v_0'\!\left(\sqrt{x^2+y^2}\right)\frac{\sqrt{x^2+y^2}}{\sqrt{x^2+y^2}} = \bar v_0'\!\left(\sqrt{x^2+y^2}\right), \end{aligned} \tag{21}$$

and then $d$ defined by (10) satisfies $d'(v) = 1/\bar v_0'(w) = |\nabla v_0|^{-1}$, as required by (19).

###### Remark 2.3.

From a numerical point of view, equations (17) and (19) give two different ways to compute the velocity at each time step. If we prefer to compute the function $d$ analytically, through the knowledge of the initial condition $v_0$, we have to use (19), whereas if we prefer to compute $d$ independently of $v_0$ we can use, for example, a numerical integration for

$$d(v) = \int \frac{1}{|\nabla v|}\, dx, \tag{22}$$

where the integral is taken along the projected characteristic. The latter choice would probably produce an even more stable scheme, assuming one is able to compute an accurate approximation of (22).

Thanks to the previous computations, we have reached a good understanding of the nature of the evolution given by (1)-(11), but we still have not justified the main motivation that led us to define (11), namely making all the level sets of $v$ evolve according to the same law. More precisely, we have to show that, if we consider the evolution of two points on the same characteristic but on two different level sets, say the $0$-level set and a generic $\ell$-level set, then their relative distance (along the characteristic) does not change during the evolution. This fact would imply that, if we choose the level sets of $v_0$ to be such that

$$z_0(0) = z_\ell(0) - d(v_0)\,\frac{\nabla v_0}{|\nabla v_0|}, \tag{23}$$

then the points $\underline{z}(s) := z_\ell(s) - d(v)\,p/|p|$ are always on the $0$-level set of $v$. In order to prove this last statement, let us proceed by a simple differentiation:

$$\begin{aligned} \dot{\underline{z}}(s) &= \dot z_\ell(s) - \frac{d}{ds}\left(d(v)\,\frac{p}{|p|}\right) \\ &= \dot z_\ell(s) - \left[d'(v)\,\dot v(s)\,\frac{p}{|p|} + \frac{d(v)}{|p|^2}\left(\dot p(s)\,|p| - \frac{d}{ds}\big(|p(s)|\big)\,p\right)\right]. \end{aligned} \tag{24}$$

Recalling the relations in the system (18) for $\dot v$ and $\dot p$, we can write

$$\dot{\underline{z}}(s) = \dot z_\ell(s) + \frac{d(v)}{|p|^2}\left(\frac{p \cdot \dot p(s)}{|p|}\right)p = \dot z_\ell(s). \tag{25}$$

This last computation states that all the level sets evolve according to the same law along characteristics. As a consequence, if (23) holds, then $\underline{z}(s) = z_0(s)$ as long as the characteristics do not cross, as we wanted.

One of the main consequences of this property, which will be very useful in the numerical implementation, is that the points $\underline{z}$ lie on the $0$-level set of $v$ as long as the gradient is preserved. Then, assuming we have a coherent way to recover the point on the associated characteristic, we can approximate the problem by first computing the points $\underline{z}$ and then updating the solution considering the simplified problem with (locally in time) isotropic velocity

$$v_t + c(\underline{z})\,|\nabla v| = 0, \qquad (t,x,y) \in (t_n, t_{n+1})\times\mathbb{R}^2. \tag{26}$$

This can be done in a very simple and direct manner, with only a slight increase in the computational cost, as we will see in Section 5. Otherwise, we should consider the full problem (1)-(11) and treat numerically all the dependences of $\tilde c$.

## 3 The Adaptive Filtered Scheme

In this section we introduce and illustrate the adaptive "filtered" (AF) scheme that we will use to approximate the viscosity solution of problem (1). For more details on the AF scheme, see [12]. We assume that the hamiltonian and the initial data are Lipschitz continuous functions in order to ensure the existence and uniqueness of the viscosity solution [10]. For a detailed presentation of uniqueness and existence results for viscosity solutions, we refer the reader to [10] and [4].

Now, let us define a uniform grid in space with nodes $(x_j, y_i)$ and steps $\Delta x$, $\Delta y$, and a uniform grid in time $t_n = n\Delta t$, with time step $\Delta t > 0$. Then, we compute the numerical approximation with the simple formula

$$u^{n+1}_{i,j} = S^{AF}(u^n)_{i,j} := S^M(u^n)_{i,j} + \phi^n_{i,j}\,\varepsilon^n\,\Delta t\; F\!\left(\frac{S^A(u^n)_{i,j} - S^M(u^n)_{i,j}}{\varepsilon^n\,\Delta t}\right), \tag{27}$$

where $S^M$ and $S^A$ are, respectively, the monotone and the high-order scheme (both depending on the two space variables), $F$ is the filter function needed to switch between the two schemes, $\varepsilon^n$ is the switching parameter at time $t_n$, and $\phi^n_{i,j}$ is the smoothness indicator function at the node $(x_j, y_i)$ and time $t_n$, based on the 2D smoothness indicators defined in [23]. The AF scheme introduced here is convergent, as proven in [11].
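In code, one step of (27) is just a filtered blend of the two schemes' outputs. The following is a minimal NumPy sketch (array names are ours), hard-wiring the discontinuous filter (37) used later:

```python
import numpy as np

def filtered_update(SM, SA, phi, eps, dt):
    """One step of the filtered update (27): start from the monotone value
    and add a filtered correction toward the high-order value.
    SM, SA : arrays S^M(u^n), S^A(u^n); phi : smoothness indicators (0 or 1);
    eps : switching parameter epsilon^n; dt : time step."""
    r = (SA - SM) / (eps * dt)
    F = np.where(np.abs(r) <= 1.0, r, 0.0)   # discontinuous filter (37)
    return SM + phi * eps * dt * F
```

Where `phi = 1` and `|SA - SM| <= eps * dt`, the update reproduces the high-order value exactly; elsewhere it falls back to the monotone value.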

The two schemes composing the AF scheme can be freely chosen, provided that they satisfy the following assumptions:
Assumptions on $S^M$: The scheme is consistent, monotone, and can be written in differenced form

$$u^{n+1}_{i,j} = S^M(u^n)_{i,j} := u^n_{i,j} - \Delta t\; h^M\!\left(x_j, y_i, D^-_x u^n_{i,j}, D^+_x u^n_{i,j}, D^-_y u^n_{i,j}, D^+_y u^n_{i,j}\right) \tag{28}$$

for a Lipschitz continuous function $h^M$, where $D^\pm_x u_{i,j}$ and $D^\pm_y u_{i,j}$ denote the usual one-sided first-order finite differences in the $x$- and $y$-direction.
Assumptions on $S^A$: The scheme has high-order consistency and can be written in differenced form

$$\begin{aligned} u^{n+1}_{i,j} = S^A(u^n)_{i,j} := u^n_{i,j} - \Delta t\; h^A\big(&x_j, y_i, D^-_{k,x}u^n_{i,j}, \dots, D^-_x u^n_{i,j}, D^+_x u^n_{i,j}, \dots, D^+_{k,x}u^n_{i,j},\\ &D^-_{k,y}u^n_{i,j}, \dots, D^-_y u^n_{i,j}, D^+_y u^n_{i,j}, \dots, D^+_{k,y}u^n_{i,j}\big), \end{aligned} \tag{29}$$

for a Lipschitz continuous function $h^A$, where the arguments $D^\pm_{k,x}u$, $D^\pm_{k,y}u$ denote finite differences on a wider stencil of $k$ nodes per side.

###### Example 3.1.

As an example of a monotone scheme in differenced form satisfying the hypotheses stated above, we can consider the simple numerical hamiltonian

$$h^M(p^-, p^+, q^-, q^+) := \sqrt{\max\{p^-, -p^+, 0\}^2 + \max\{q^-, -q^+, 0\}^2} \tag{30}$$

for the eikonal equation

$$v_t + \sqrt{v_x^2 + v_y^2} = 0, \tag{31}$$

or, for more general equations also depending on the space variables, we can use the 2D version of the local Lax-Friedrichs hamiltonian

$$\begin{aligned} h^M(x, y, p^-, p^+, q^-, q^+) :=\;& H\!\left(x, y, \frac{p^+ + p^-}{2}, \frac{q^+ + q^-}{2}\right)\\ &- \frac{\alpha^x(p^-, p^+)}{2}\,(p^+ - p^-) - \frac{\alpha^y(q^-, q^+)}{2}\,(q^+ - q^-), \end{aligned} \tag{32}$$

with

$$\alpha^x(p^-, p^+) := \max_{x,\,y,\,q,\; p\,\in\, I(p^-,\,p^+)} |H_p(x, y, p, q)|, \qquad \alpha^y(q^-, q^+) := \max_{x,\,y,\,p,\; q\,\in\, I(q^-,\,q^+)} |H_q(x, y, p, q)|, \tag{33}$$

where $I(a, b)$ denotes the interval between $a$ and $b$. This scheme is monotone under suitable CFL-type restrictions on the ratios $\Delta t/\Delta x$ and $\Delta t/\Delta y$.
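For the eikonal-type hamiltonian $H = c(x,y)\,|\nabla v|$ used in this paper one has $|H_p|, |H_q| \le c$, so the bounds (33) can be taken as constants $\alpha^x = \alpha^y = \max c$. A minimal sketch of (32) under this assumption (the choice of constant bounds is ours, for illustration):

```python
import numpy as np

def llf_hamiltonian(c, pm, pp, qm, qp, ax, ay):
    """Local Lax-Friedrichs hamiltonian (32) for H(x, y, p, q) = c * |(p, q)|:
    central evaluation plus numerical dissipation weighted by ax, ay."""
    pbar = 0.5 * (pm + pp)
    qbar = 0.5 * (qm + qp)
    H = c * np.hypot(pbar, qbar)          # H at the averaged gradient
    return H - 0.5 * ax * (pp - pm) - 0.5 * ay * (qp - qm)
```

When the one-sided differences agree (smooth solution), the dissipation terms vanish and the hamiltonian is exact; at a kink the dissipation dominates and keeps the scheme monotone.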

###### Example 3.2.

An example of a numerical hamiltonian satisfying the required assumptions is the Lax-Wendroff hamiltonian

$$\begin{aligned} h^A(x, y, D^\pm_x u, D^\pm_y u) :=\;& H(x, y, D_x u, D_y u)\\ &- \frac{\Delta t}{2}\Big[H_p\,\big(H_p\, D^2_x u + H_x\big) + H_q\,\big(H_q\, D^2_y u + H_y\big) + 2\,H_p H_q\, D^2_{xy} u\Big], \end{aligned} \tag{34}$$

where all the derivatives of $H$ are evaluated at $(x, y, D_x u, D_y u)$; here $D^\pm_x u$, $D_x u$ and $D^2_x u$ are, respectively, the usual one-sided and centered one-dimensional finite-difference approximations of the first and second derivatives in the $x$-direction (analogously for the $y$-direction), whereas for the mixed derivative we use

$$D^2_{xy} u_{i,j} := \frac{u_{i+1,j+1} - u_{i-1,j+1} - u_{i+1,j-1} + u_{i-1,j-1}}{4\,\Delta x\,\Delta y}. \tag{35}$$

Note that the derivatives of $H$ can be computed either analytically or by some second-order numerical approximation. In particular, to compute the derivative $H_x$, we can simply use

$$(H_x)_{i,j} := \frac{H(x_{j+1}, y_i, D_x u_{i,j}, D_y u_{i,j}) - H(x_{j-1}, y_i, D_x u_{i,j}, D_y u_{i,j})}{2\,\Delta x}, \tag{36}$$

and analogously for $H_y$.
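The finite-difference stencils entering (34)-(35) are all standard centered differences; a vectorized sketch evaluating them on the interior of a grid (boundary handling is left to the caller, an implementation choice of ours):

```python
import numpy as np

def centered_stencils(u, dx, dy):
    """Centered stencils used in the Lax-Wendroff hamiltonian (34):
    first derivatives, second derivatives, and the mixed derivative (35),
    evaluated at interior nodes (a one-cell halo is assumed)."""
    Dx   = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dx)
    Dy   = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dy)
    D2x  = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2
    D2y  = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dy**2
    D2xy = (u[2:, 2:] - u[:-2, 2:] - u[2:, :-2] + u[:-2, :-2]) / (4 * dx * dy)
    return Dx, Dy, D2x, D2y, D2xy
```

On the bilinear function $u = xy$ these stencils are exact: $D_x u = y$, $D^2_x u = 0$ and $D^2_{xy} u = 1$.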

For more details on the construction of $h^M$ and $h^A$ and other examples of possible numerical hamiltonians, see [23, 11].

In our approach, in order to couple the two schemes, we need to define three key quantities:

1. The filter function $F$, which must satisfy:

   1. $F(r) = r$ for $|r| \le 1$, so that the scheme (27) reduces to the high-order scheme $S^A$ if $|S^A - S^M| \le \varepsilon^n \Delta t$ and $\phi^n_{i,j} = 1$;

   2. $F(r) = 0$ for $|r| > 1$, so that the scheme (27) reduces to the monotone scheme $S^M$ if $|S^A - S^M| > \varepsilon^n \Delta t$ or $\phi^n_{i,j} = 0$.

Several choices for $F$ are possible, differing in their regularity properties. In this paper, we will consider the discontinuous filter already used in [5] and defined as follows:

$$F(r) := \begin{cases} r & \text{if } |r| \le 1,\\ 0 & \text{otherwise}, \end{cases} \tag{37}$$

which is clearly discontinuous at $|r| = 1$ and trivially satisfies the two required properties.
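The discontinuous filter (37) is a one-liner; the sketch below makes the two properties above explicit:

```python
def discontinuous_filter(r):
    """Filter (37): identity on [-1, 1] (high-order regime), zero outside
    (fallback to the monotone scheme)."""
    return r if abs(r) <= 1.0 else 0.0
```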

2. If we want the scheme (27) to switch to the high-order scheme when some regularity is detected, we have to choose $\varepsilon^n$ such that

$$\left|\frac{S^A(v^n)_{i,j} - S^M(v^n)_{i,j}}{\varepsilon^n\,\Delta t}\right| = \left|\frac{h^A(\cdot,\cdot) - h^M(\cdot,\cdot)}{\varepsilon^n}\right| \le 1, \qquad \text{for } (\Delta t, \Delta x, \Delta y) \to 0, \tag{38}$$

in the region of regularity at time $t_n$, that is

$$R^n := \{(x_j, y_i) : \phi^n_{i,j} = 1\}. \tag{39}$$

Proceeding by Taylor expansion of the monotone and high-order hamiltonians, from (38) we arrive at a lower bound for $\varepsilon^n$. The simplest numerical approximation of that lower bound is the following:

$$\varepsilon^n = \max_{(x_j, y_i) \in R^n} K\,\Big|\,\frac{\Delta t}{2}\Big[H_p\big(H_x + H_p D^2_x u^n\big) + H_q\big(H_y + H_q D^2_y u^n\big) + 2\,H_p H_q D^2_{xy} u^n\Big] + \big(\tilde h^M_{p^+} - \tilde h^M_{p^-}\big) + \big(\tilde h^M_{q^+} - \tilde h^M_{q^-}\big)\Big|, \tag{40}$$

in which we have used the usual notation for the gradient and

$$\tilde h^M_{p^\pm} := h^M\big(x, y, D_x u^n, D^\pm_x u^n, D_y u^n, D_y u^n\big). \tag{41}$$

The definition of $\tilde h^M_{q^\pm}$ follows from (41) in an analogous way. All the derivatives of $H$ and the finite-difference approximations are computed at the point $(x_j, y_i)$, using $u^n$. See [23] for more details.

3. For the definition of the function $\phi^n$, needed to detect the region $R^n$, we require

$$\phi^n_{i,j} := \begin{cases} 1 & \text{if the solution } u^n \text{ is regular in } I_{i,j},\\ 0 & \text{if } I_{i,j} \text{ contains a point of singularity}, \end{cases} \tag{42}$$

with $I_{i,j}$ a neighborhood of the node $(x_j, y_i)$. In order to proceed with the construction, we split the cell $I_{i,j}$ into four subcells, denoted by the superscripts $\vartheta_1\vartheta_2$, with $\vartheta_1, \vartheta_2 \in \{-, +\}$, according to the shift with respect to the center $(x_j, y_i)$. Then, we measure the regularity of the solution inside each subcell by first computing the smoothness coefficients

$$\begin{aligned} \beta^{\vartheta_1\vartheta_2}_k = \frac{1}{\Delta x\,\Delta y}\Big[& u_{[3,1]}^2 + u_{[1,3]}^2 + u_{[2,2]}^2 + \frac{17}{12}\big(u_{[3,2]}^2 + u_{[2,3]}^2\big) + \frac{317}{720}\,u_{[3,3]}^2 + u_{[3,1]}u_{[3,2]}\\ & + u_{[1,3]}u_{[2,3]} - \frac{1}{6}\big(u_{[3,1]}u_{[3,3]} + u_{[1,3]}u_{[3,3]}\big) - \frac{1}{12}\big(u_{[3,2]}u_{[3,3]} + u_{[2,3]}u_{[3,3]}\big)\Big] \end{aligned} \tag{43}$$

for $k = 0, 1$, where we have dropped the dependence on the time step for brevity and used the shorter notation $u_{[r,s]}$ to denote the multivariate undivided difference of $u$ of the indicated orders in the two directions. Note that the previous formula can be used to obtain all the needed quantities, as long as the following ordered stencils are used to compute the undivided differences:

• , ;

• , ;

• , ;

• , .

Since these coefficients are such that

• $\beta^{\vartheta_1\vartheta_2}_k$ is small if the solution is smooth in the corresponding stencil;

• $\beta^{\vartheta_1\vartheta_2}_k$ is of order one if there is a singularity in the stencil,

according to the usual WENO procedure we weight the obtained information and focus on the 'inner' stencil, denoted by the subscript '$0$', by computing

$$\omega^{\vartheta_1\vartheta_2} = \frac{\alpha^{\vartheta_1\vartheta_2}_0}{\alpha^{\vartheta_1\vartheta_2}_0 + \alpha^{\vartheta_1\vartheta_2}_1}, \tag{44}$$

where $\alpha^{\vartheta_1\vartheta_2}_k$ is, as in the classical WENO construction, an inverse function of $\beta^{\vartheta_1\vartheta_2}_k$, and represents the measure of smoothness of the solution in each subcell. Once we have computed the four indicators, we couple the information by defining

$$\omega = \min\{\omega^{--},\, \omega^{+-},\, \omega^{-+},\, \omega^{++}\}. \tag{45}$$

At this point, in order to reduce spurious oscillations in regular regions, we use the mapping first introduced in [14] to propose a modification of the original WENO procedure, called M-WENO, that is

 ω∗=g(ω)=4ω(34−3<