# Well-posedness of a nonlinear integro-differential problem and its rearranged formulation

We study the existence and uniqueness of solutions of a nonlinear integro-differential problem which we reformulate introducing the notion of the decreasing rearrangement of the solution. A dimensional reduction of the problem is obtained and a detailed analysis of the properties of the solutions of the model is provided. Finally, a fast numerical method is devised and implemented to show the performance of the model when typical image processing tasks such as filtering and segmentation are performed.


## 1 Introduction

$$\partial_t u(t,x)=\int_\Omega K_h\big(u(t,y)-u(t,x)\big)\big(u(t,y)-u(t,x)\big)\,dy+\lambda\big(u_0(x)-u(t,x)\big), \tag{1}$$

$$u(0,x)=u_0(x), \tag{2}$$

for $(t,x)\in(0,T)\times\Omega$. Here, $\Omega\subset\mathbb{R}^d$ denotes an open and bounded set, $\lambda\geq 0$, and $u_0$ is the initial datum. The range kernel $K_h$ is given as a rescaling of a kernel satisfying the usual properties of nonnegativity and smoothness. We shall give the precise assumptions in Section 3. We shall refer to problem (1)-(2) as problem P. The main results contained in this article are:

• Theorem 1. The well-posedness of problem P, the stability property of its solutions with respect to the initial datum, and the time invariance of the level set structure of its solutions.

• Theorem 2. The equivalence between solutions of problem P and those of the one-dimensional problem P∗, posed in $\Omega_*=(0,|\Omega|)$, where $u_*$ is the decreasing rearrangement of $u$; see Section 2 for definitions.

• Theorem 3. The asymptotic behavior of the solution of problem P with respect to the window size parameter, $h$, as a shock filter.

Problem P is related to some problems arising in Image Analysis, Population Dynamics and other disciplines. The general formulation in (1) includes, for example, a time-continuous version of the Neighborhood filter (NF) operator:

$$\mathrm{NF}_h u(x)=\frac{1}{C(x)}\int_\Omega e^{-|u(x)-u(y)|^2/h^2}\,u(y)\,dy,$$

where $h$ is a positive constant, and $C(x)$ is a normalization factor. In terms of the notation introduced for problem P, the NF is recovered setting $K_h(s)=e^{-s^2/h^2}$ and $\lambda=0$. This well-known denoising filter is usually employed in the image community through an iterative scheme,

$$u^{(n+1)}(x)=\frac{1}{C_n(x)}\int_\Omega K_h\big(u^{(n)}(x)-u^{(n)}(y)\big)\,u^{(n)}(y)\,dy, \tag{3}$$

with $C_n(x)=\int_\Omega K_h\big(u^{(n)}(x)-u^{(n)}(y)\big)\,dy$. It is the simplest particular case of other related filters involving nonlocal terms, notably the Yaroslavsky filter [31, 32], the Bilateral filter [27, 29], and the Nonlocal Means filter [8].
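A minimal numerical sketch of the iterative scheme (3) on a one-dimensional signal may help fix ideas; the Gaussian range kernel, the synthetic signal, and all names below are illustrative choices, not taken from the paper:

```python
import numpy as np

def nf_step(u, h):
    """One iteration u^(n) -> u^(n+1) of the Neighborhood filter (3)."""
    diff = u[None, :] - u[:, None]        # u(y) - u(x) for all pairs (x, y)
    K = np.exp(-(diff / h) ** 2)          # Gaussian range kernel K_h
    C = K.sum(axis=1)                     # normalization factor C_n(x)
    return (K * u[None, :]).sum(axis=1) / C

rng = np.random.default_rng(0)
u0 = np.linspace(0.0, 1.0, 64) + 0.05 * rng.standard_normal(64)  # noisy ramp
u = u0.copy()
for _ in range(5):                        # a few filtering iterations
    u = nf_step(u, h=0.2)
```

Each output value is a convex combination of the input values, so the iterates remain within the range of the initial signal.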

These methods have been introduced in the last decades as efficient alternatives to local methods such as those expressed in terms of nonlinear diffusion partial differential equations (PDEs), among which are the pioneering nonlinear anti-diffusive model of Perona and Malik [19], the theoretical approach of Álvarez et al. [1], and the celebrated ROF model of Rudin et al. [22]. We refer the reader to [9] for a review comparing these local and non-local methods.

Another image processing task encapsulated by problem P is histogram prescription, used for image contrast enhancement: given an initial image $u_0$, find a companion image $u$ such that $u$ and $u_0$ share the same level sets structure, and the histogram distribution of $u$ is given by a prescribed function. A widely used choice is a uniform prescription, implying that $u$ has a uniform histogram distribution; in this case, the prescription is related to the image size and its dynamic range, see Sapiro and Caselles [23] for the formulation and analysis of the problem. Nonlinear integro-differential equations of the form

$$\partial_t u(t,x)=\int_\Omega\big(u(t,y)-u(t,x)\big)\,w(x-y)\,dy \tag{4}$$

and other nonlinear variations of it have also been recently used (Andreu et al. [6]) to model diffusion processes in Population Dynamics and other areas. More precisely, if $u(t,x)$ is thought of as a density at the point $x$ at time $t$, and $w(x-y)$ is thought of as the probability distribution of jumping from location $y$ to location $x$, then $\int_\Omega u(t,y)\,w(x-y)\,dy$ is the rate at which individuals are arriving at position $x$ from all other places, and $u(t,x)\int_\Omega w(x-y)\,dy$ is the rate at which they are leaving location $x$. In the absence of external or internal sources, this consideration leads immediately to the fact that the density satisfies equation (4).

This kind of equation is called a nonlocal diffusion equation since the diffusion of the density at a point $x$ and time $t$ depends not only on $u(t,x)$ but also on the values of $u$ in a set determined (and weighted) by the space kernel $w$. A thorough study of this problem may be found in the monograph by Andreu et al. [6]. Observe that in problem P the space kernel is taken as $w\equiv 1$, meaning that the influence of nonlocal diffusion is spread to the whole domain.

As noticed by Sapiro and Caselles [23] for the histogram prescription problem, and later by Kindermann et al. [16] for the iterative Neighborhood filter (3), or by Andreu et al. [6] for continuous time problems like (4), these formulations may be deduced from variational considerations. For instance, in [16], the authors consider the functional

$$J(u)=\int_{\Omega\times\Omega} g\big(u(x)-u(y)\big)\,w(x-y)\,dx\,dy, \tag{5}$$

with an appropriate spatial kernel $w$, and a differentiable filter function $g$. Then, the authors formally deduce the equation for the critical points of $J$. These critical points coincide with the fixed points of the nonlocal filters they study. For instance, for suitable choices of $g$ and $w$, the critical points satisfy

$$u(x)=\frac{1}{C(x)}\int_\Omega K_h\big(u(x)-u(y)\big)\,u(y)\,dy,$$

which can be solved through a fixed point iteration mimicking the iterative Neighborhood filter scheme (3). On the other hand, choosing $g(s)=s^2$ (or some suitable nonlinear variant) and considering a gradient descent method to approximate the stationary solution, equation (4) is deduced. Similarly, other choices of $g$ and $w$ lead to the histogram prescription problem.
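The uniform-prescription case mentioned above is, in the discrete setting, classical histogram equalization. The following sketch (function names, synthetic data, and quantization choices are ours, purely illustrative) shows the monotone gray-level remapping that preserves the level-sets structure:

```python
import numpy as np

def equalize(img, levels=256):
    """Remap integer gray levels so the output histogram is roughly uniform."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size               # empirical CDF of the levels
    lut = np.floor((levels - 1) * cdf).astype(np.int64)
    return lut[img]                                # monotone map: level sets kept

rng = np.random.default_rng(1)
img = rng.integers(0, 64, size=(32, 32))           # low-contrast synthetic image
eq = equalize(img, levels=256)
```

Because the lookup table is nondecreasing, two pixels sharing a gray level in `img` still share one in `eq`, which is exactly the level-sets invariance exploited throughout the paper.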

Although the functional (5) is not convex in general, Kindermann et al. prove that when the kernel is Gaussian the addition to $J$ of a convex fidelity term, e.g.

$$\tilde J(u;u_0)=J(u)+\lambda\,\|u-u_0\|^2_{L^2(\Omega)},$$

gives, for $\lambda$ large enough, a convex functional $\tilde J$, see [16, Theorem 3.1].

Thus, the functional $\tilde J$ may be seen as the starting point for the deduction of problem P, representing the continuous gradient descent formulation of the minimization problem modeling Gaussian image denoising. Notice that although the convexity of $\tilde J$ is only ensured for $\lambda$ large enough, the results obtained in this article are independent of such value, and only the usual non-negativity condition on $\lambda$ is assumed.

The outline of the article is as follows. In Section 2, we introduce some basic notation and the definition of the decreasing rearrangement of a function. This is later used to show the equivalence between the general problem P and the reformulation P∗, a problem with an identical structure but defined in a one-dimensional space domain. This technique was already used in [12] for dealing with the time-discrete version of problem P, in the form of the iterative scheme (3). See also [13, 14] for the problem with non-uniform spatial kernel. In Section 3, we state our main results. Then, in Section 4, we introduce a discretization scheme for the efficient approximation of solutions of problem P, and demonstrate its performance with some examples. In Section 5, we provide the proofs of our results, and finally, in Section 6, we give our conclusions.

## 2 The decreasing rearrangement

Given an open and bounded (measurable) set $\Omega\subset\mathbb{R}^d$, let us denote by $|\Omega|$ its Lebesgue measure, and set $\Omega_*=(0,|\Omega|)$. For a Lebesgue measurable function $u:\Omega\to\mathbb{R}$, the function $m_u(q)=|\{x\in\Omega: u(x)>q\}|$ is called the distribution function corresponding to $u$. The function $m_u$ is, by definition, non-increasing and therefore admits a unique generalized inverse, called its decreasing rearrangement. This inverse takes the usual pointwise meaning when the function $u$ has no flat regions, i.e. when $|\{x\in\Omega: u(x)=q\}|=0$ for any $q\in\mathbb{R}$. In general, the decreasing rearrangement $u_*:[0,|\Omega|]\to\mathbb{R}$ is given by:

$$u_*(s)=\begin{cases}\operatorname*{ess\,sup}\{u(x):x\in\Omega\} & \text{if } s=0,\\ \inf\{q\in\mathbb{R}: m_u(q)\leq s\} & \text{if } s\in\Omega_*,\\ \operatorname*{ess\,inf}\{u(x):x\in\Omega\} & \text{if } s=|\Omega|.\end{cases}$$

Notice that since $u_*$ is non-increasing in $[0,|\Omega|]$, it is continuous except on at most a countable subset of $\Omega_*$. In particular, it is right-continuous for all $s\in[0,|\Omega|)$.

The notion of rearrangement of a function is classical and was introduced by Hardy, Littlewood and Pólya [15]. Applications include the study of isoperimetric and variational inequalities [20, 7, 17, 18], comparison of solutions of partial differential equations [28, 3, 30, 10, 11, 4], and others. We refer the reader to the monograph [21] for an extensive treatment of this topic.

Two of the most remarkable properties of the decreasing rearrangement are the equi-measurability property

$$\int_\Omega f(u(y))\,dy=\int_0^{|\Omega|} f(u_*(s))\,ds, \tag{6}$$

for any Borel function $f$ for which the integrals are defined, and the contractivity

$$\|u_*-v_*\|_{L^p(\Omega_*)}\leq\|u-v\|_{L^p(\Omega)}, \tag{7}$$

for $u,v\in L^p(\Omega)$, $1\leq p\leq\infty$.
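In the discrete setting these two properties are easy to visualize: for samples of $u$ on a uniform grid, the decreasing rearrangement is simply the decreasingly sorted vector of values. The following sketch (data and names are illustrative) checks (6) for $f(s)=s^2$ and (7) for $p=2$:

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.random(1000)                         # samples of u on a uniform grid
v = u + 0.1 * rng.standard_normal(1000)      # a perturbation of u

u_star = np.sort(u)[::-1]                    # discrete decreasing rearrangement
v_star = np.sort(v)[::-1]

equi_lhs = np.sum(u ** 2)                    # "integral" of f(u), f(s) = s^2
equi_rhs = np.sum(u_star ** 2)               # same values, only reordered

dist = np.linalg.norm(u - v)                 # ||u - v||_2
dist_star = np.linalg.norm(u_star - v_star)  # ||u_* - v_*||_2, never larger
```

The contractivity here is the classical rearrangement inequality: sorting both vectors the same way can only decrease their Euclidean distance.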

For the extension of the decreasing rearrangement to families of functions depending on a parameter, e.g. $u:[0,T]\times\Omega\to\mathbb{R}$, we first consider, for $t$ fixed, the function $u(t):\Omega\to\mathbb{R}$ given by $u(t)(x)=u(t,x)$, for any $x\in\Omega$. Then we define $u_*(t,s)=\big(u(t)\big)_*(s)$.

## 3 Main results

Our first result ensures the well-posedness of problem P for initial data with bounded total variation. In addition, we show that the level sets structure of the solution is time invariant. Before stating our results, we collect here the main assumptions on the problem data, to which we shall refer as (H):

• $\Omega\subset\mathbb{R}^d$ is an open, bounded, and connected set ($d\geq 1$).

• The final time, $T$, which represents the time horizon of the diffusion process, is a real, positive fixed number.

• The parameter $\lambda$ is a real, nonnegative fixed number.

• The kernel $K_h\in C^1(\mathbb{R})$ is nonnegative.

• The initial datum $u_0\in BV(\Omega)$ is assumed to be, without loss of generality, non-negative.

Basic facts but also advanced results about the space of functions of bounded variation, $BV(\Omega)$, can be found in the book by Ambrosio et al. [5]. Notice that, depending on the space dimension, we have the continuous injections $BV(\Omega)\hookrightarrow L^p(\Omega)$, for $1\leq p\leq d/(d-1)$. When $d=1$, we have $BV(\Omega)\subset L^\infty(\Omega)$.

###### Theorem 1

Assume (H). Then there exists a unique solution $u$ of problem P. In addition, if $u_1$ and $u_2$ are the corresponding solutions to problems P with initial data $u_{01}$ and $u_{02}$, then

$$\|u_1-u_2\|_{L^\infty(0,T;L^2(\Omega))}\leq C\,\|u_{01}-u_{02}\|_{L^2(\Omega)}, \tag{8}$$

for some constant $C>0$.

Finally, suppose that $u_0(x_1)=u_0(x_2)$ for some $x_1,x_2\in\Omega$. Then $u(t,x_1)=u(t,x_2)$ for all $t\in[0,T]$.

###### Remark 1

The existence and stability results of Theorem 1 may be extended to more general zero-order terms in the equation (1) of problem P. For instance, we can consider a zero-order term whose regularity coincides with that initially obtained for the integral term of equation (1) in the approximation procedure used to construct the solution. In addition, if the zero-order term preserves equality of values at points where the initial datum coincides, then the time invariance of level sets holds.

Replacing the set $\Omega$ by $\Omega_*$ and the initial data $u_0$ by $u_{0*}$, Theorem 1 ensures the existence of a solution of problem P∗. Observe that $u_{0*}$ is bounded because $u_0$ is bounded (assumption (H)), and this, together with the monotonicity of $u_{0*}$, implies $u_{0*}\in L^\infty(\Omega_*)$ and $u_{0*}\in BV(\Omega_*)$.

In the following result we obtain some properties of solutions of the one-dimensional problem P∗. Although the corollary is valid for any interval in $\mathbb{R}$, we keep the notation $\Omega_*$ for simplicity. The corresponding result for the discrete-time version, with $\lambda=0$, of problem P∗ may be found in [12].

###### Corollary 1

Assume (H), and let $u_*$ be the solution of problem P∗ for some nonincreasing initial datum $u_{0*}$. Then

1. $u_*(t,\cdot)$ is nonincreasing a.e. in $\Omega_*$, for all $t\in[0,T]$.

2. For all $t\in[0,T]$, $\operatorname*{ess\,inf} u_{0*}\leq u_*(t,s)\leq\operatorname*{ess\,sup} u_{0*}$ for a.e. $s\in\Omega_*$.

3. If $s\mapsto sK_h(s)$ is odd then, for $\lambda=0$, the mean value is conserved: $\int_{\Omega_*}u_*(t,s)\,ds=\int_{\Omega_*}u_{0*}(s)\,ds$, for $t\in[0,T]$.

4. If $u_*$ and $v_*$ are solutions corresponding to initial data $u_{0*}$ and $v_{0*}$, with $u_{0*}\leq v_{0*}$ a.e. in $\Omega_*$, then $u_*\leq v_*$ a.e. in $(0,T)\times\Omega_*$.

5. If $\lambda=0$ and $K_h$ is such that $\inf K_h>0$ on the range of the initial datum, then $u_*(t,\cdot)$ converges to a constant as $t\to\infty$.

###### Remark 2

The condition in point 3 is a natural symmetry condition for convolution kernels and it is satisfied, for instance, by the Gaussian kernel. The condition in point 5 is also satisfied by the Gaussian kernel if $h$ is large enough.

The next result establishes the connection between problems P and P∗.

###### Theorem 2

Assume (H). Then, $u$ is a solution of P if and only if $u_*$ is a solution of P∗.

Theorem 2 implies that the solution of the multi-dimensional problem P may be constructed by solving the one-dimensional problem P∗. Indeed, using the level sets invariance asserted in Theorem 1, we deduce

$$u(t,x)=u_*(t,s)\quad\text{for a.e. } x\in\{y\in\Omega: u_0(y)=u_{0*}(s)\},$$

for all $t\in[0,T]$. When image processing applications are considered, by property 1 of Corollary 1, the solution to P may be understood as a contrast change of the initial image $u_0$.

Indeed, this property also implies that if, initially, $u_0$ has no flat regions, and therefore $u_{0*}$ is decreasing, then the solution of P∗ verifies this property for all $t\in[0,T]$. Then, Theorem 1 implies that the solution of P has no flat regions for all $t\in[0,T]$.

The last theorem is an extension of a result given in [12] for the discrete-time formulation with $\lambda=0$. In it, we deduce the asymptotic behavior of the solution of problem P∗ (and thus of the solution of problem P) in terms of the window size parameter, $h$. Although we state it for the Gaussian kernel, more general choices are possible, see [12, Remark 2].

###### Theorem 3

Assume (H), with $u_0$ having no flat regions. Suppose, in addition, that $K_h$ is the Gaussian kernel. Then, for all $t$, there exist positive constants $\alpha_1,\alpha_2$ independent of $h$ such that the solution of P∗ satisfies

$$\partial_t u_*(t,s)=\lambda\big(u_{0*}(s)-u_*(t,s)\big)+\alpha_1\,\tilde k_h(t,s)\,h^2-\alpha_2\,\frac{\partial^2_{ss}u_*(t,s)}{|\partial_s u_*(t,s)|^3}\,h^3+O(h^{7/2}), \tag{9}$$

with

$$\tilde k_h(t,s)=K_h\big(u_*(t,|\Omega|)-u_*(t,s)\big)\,\big|\partial_s u_*(t,|\Omega|)\big|-K_h\big(u_*(t,0)-u_*(t,s)\big)\,\big|\partial_s u_*(t,0)\big|, \tag{10}$$

and with $\alpha_1$ and $\alpha_2$ explicit positive constants.

Two interesting effects captured by (9) are the following:

1. The border effect (range shrinking). The function $\tilde k_h$ is active only when $s$ is close to the boundaries $s=0$ and $s=|\Omega|$. For $s$ near $0$, $\tilde k_h<0$ contributes to the decrease of the largest values of $u_*$, while for $s$ near $|\Omega|$ we have $\tilde k_h>0$, increasing the smallest values of $u_*$. Therefore, this term tends to flatten $u_*$. In image processing terms, a loss of contrast is induced.

2. The term

$$-\frac{\partial^2_{ss}u_*(t,s)}{|\partial_s u_*(t,s)|^3}$$

is anti-diffusive, inducing large gradients on $u_*$ in a neighborhood of inflexion points. In this sense, the expansion (9) is related to the shock filter introduced by Álvarez and Mazorra [2]

$$v_t+F\big(G_\sigma * v_{xx},\,G_\sigma * v_x\big)\,v_x=0, \tag{11}$$

where $G_\sigma$ is a smoothing kernel and the function $F$ satisfies a sign condition ensuring the anti-diffusive behavior. Indeed, neglecting the fidelity, the border and the lower order terms, and defining $F$ accordingly, we render (9) to the form (11).

This property can be exploited to produce a partition of the image so the model can be interpreted as a tool for fast segmentation and classification. An example is proposed in the numerical experiments where a time-continuous version of the NF is implemented.

## 4 Discretization and numerical examples

For the discretization of problem P we take advantage of the equivalence result stated in Theorem 2. Thus, we first calculate a numerical approximation, $\tilde u_{0*}$, to the decreasing rearrangement of the initial datum and consider the corresponding problem P∗. Then, we discretize this one-dimensional problem and compute a numerical approximation, $v$. By Theorem 2, $v$ is, in fact, an approximation to $u_*$, where $u$ is a solution to problem P. Then, we finally recover an approximation, $\tilde u$, to $u$ by defining

$$\tilde u(t,x)=v(t,s)\quad\text{for a.e. } x\in\{y\in\Omega:\tilde u_0(y)=\tilde u_{0*}(s)\}. \tag{12}$$

Inspired by the image processing application of problem P, we consider a piecewise constant approximation to its solutions. Let $\Omega$ be, for simplicity, a rectangular domain, and consider a uniform mesh on $\Omega$ enclosing square elements (pixels) of unit area. Given a discrete initial image, we consider its piecewise constant interpolator, $\tilde u_0$, taking on each pixel the corresponding image value.

The interpolator $\tilde u_0$ has a finite number, $Q$, of quantized levels, which we denote by $q_1>q_2>\cdots>q_Q$. That is, $\tilde u_0=\sum_{j=1}^Q q_j\chi_{E_j}$, where $E_j$ are the level sets of $\tilde u_0$,

$$E_j=\{x\in\Omega:\tilde u_0(x)=q_j\},\qquad j=1,\ldots,Q.$$

Since $\tilde u_0$ is piecewise constant, the decreasing rearrangement of $\tilde u_0$ is piecewise constant too, and given by

$$\tilde u_{0*}(s)=\sum_{j=1}^Q q_j\,\chi_{I_j}(s), \tag{13}$$

with $I_j=[s_{j-1},s_j)$ for $j=1,\ldots,Q$, where $s_0=0$ and $s_j=s_{j-1}+|E_j|$.

Let $v$ be a candidate to solve problem P∗. Due to the time-invariance of the level sets structure of the solution to this problem, see Theorem 1, we may express $v$ as

$$v(t,s)=\sum_{j=1}^Q c_j(t)\,\chi_{I_j}(s), \tag{14}$$

with $c_j(0)=c_{0j}=q_j$, for $j=1,\ldots,Q$. Substituting in equation (1), we get, for $t\in(0,T)$ and $j=1,\ldots,Q$,

$$c_j'(t)=\sum_{k=1}^Q K_h\big(c_k(t)-c_j(t)\big)\big(c_k(t)-c_j(t)\big)\mu_k+\lambda\big(c_{0j}-c_j(t)\big), \tag{15}$$

with $\mu_k=|I_k|=|E_k|$. Since, by assumptions (H), the right hand side of (15) is Lipschitz continuous, the existence and uniqueness of a smooth solution satisfying (15) and the initial condition follows.
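The reduced dynamics (15) is just a system of $Q$ coupled ODEs for the level values. A minimal sketch (the Gaussian kernel and all names are illustrative choices) evaluates its right-hand side and performs one explicit Euler step:

```python
import numpy as np

def rhs(c, c0, mu, lam, h):
    """Right-hand side of the level-value system (15)."""
    diff = c[None, :] - c[:, None]              # c_k - c_j for all pairs (j, k)
    K = np.exp(-(diff / h) ** 2)                # illustrative Gaussian K_h
    nonlocal_term = (K * diff * mu[None, :]).sum(axis=1)
    return nonlocal_term + lam * (c0 - c)

Q = 8
c0 = np.linspace(1.0, 0.0, Q)                   # decreasing level values q_j
mu = np.full(Q, 1.0 / Q)                        # level-set measures mu_k
c = c0 + 0.1 * rhs(c0, c0, mu, lam=0.0, h=0.5)  # one explicit Euler step
```

With $\lambda=0$ and a symmetric kernel, the antisymmetry of the summand conserves the weighted mean $\sum_j \mu_j c_j$ exactly.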

For the time discretization, we take a uniform mesh of the interval $[0,T]$ of size $\tau$, and use the notation $c^n_j$ for the approximation of $c_j(t_n)$, with $t_n=n\tau$ and $c^0_j=c_{0j}$. Then, we consider the following implicit time discretization of problem (15): for $n\geq 1$ and $j=1,\ldots,Q$, solve

$$c^n_j=c^{n-1}_j+\tau\sum_{k=1}^Q K_h\big(c^n_k-c^n_j\big)\big(c^n_k-c^n_j\big)\mu_k+\tau\lambda\big(c_{0j}-c^n_j\big). \tag{16}$$

Since problem (16) is a nonlinear algebraic system of equations, we use a fixed point argument to approximate its solution, $c^n=(c^n_1,\ldots,c^n_Q)$, at each discrete time $t_n$, from the previous approximation $c^{n-1}$. Let $c^{n,0}=c^{n-1}$. Then, for $m\geq 1$, the problem is to find $c^{n,m}$ solving the linear system

$$c^{n,m}_j=c^{n-1}_j+\tau\sum_{k=1}^Q K_h\big(c^{n,m-1}_k-c^{n,m-1}_j\big)\big(c^{n,m}_k-c^{n,m}_j\big)\mu_k+\tau\lambda\big(c_{0j}-c^{n,m}_j\big), \tag{17}$$

for $j=1,\ldots,Q$. We choose the stopping criterion $\|c^{n,m}-c^{n,m-1}\|_\infty<\mathrm{tol}$, for values of $\mathrm{tol}$ chosen empirically, and then set $c^n=c^{n,m}$.
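One time step of (16), with the inner fixed-point linearization (17), can be sketched as follows; freezing the kernel weights at the previous iterate makes each inner problem a linear system (the Gaussian kernel and all names are illustrative, not taken from the paper's implementation):

```python
import numpy as np

def implicit_step(c_prev, c0, mu, tau, lam, h, tol=1e-10, max_iter=50):
    """One semi-implicit step (16) via the fixed-point iteration (17)."""
    c = c_prev.copy()
    for _ in range(max_iter):
        diff = c[None, :] - c[:, None]
        W = np.exp(-(diff / h) ** 2) * mu[None, :]   # frozen kernel weights
        # (1 + tau*lam + tau*sum_k W_jk) c_j - tau*sum_k W_jk c_k = b_j
        A = np.diag(1.0 + tau * lam + tau * W.sum(axis=1)) - tau * W
        b = c_prev + tau * lam * c0
        c_new = np.linalg.solve(A, b)
        if np.max(np.abs(c_new - c)) < tol:          # stopping criterion
            return c_new
        c = c_new
    return c

Q = 16
c0 = np.linspace(1.0, 0.0, Q)     # decreasing initial level values
mu = np.full(Q, 1.0 / Q)          # level-set measures
c1 = implicit_step(c0, c0, mu, tau=0.5, lam=0.1, h=0.3)
```

The matrix is strictly diagonally dominant with nonpositive off-diagonal entries, so each linear solve is well behaved and the computed values stay within the range of the data.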

Finally, using formula (12), the expression of the initial datum (13), and the definition (14), we recover a piecewise constant approximation to the solution of the original problem P, taking

$$\tilde u(t,x)=c^n_j\quad\text{if } t\in[t_n,t_{n+1}),\; x\in\{y\in\Omega:\tilde u_0(y)=q_j\}.$$
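This recovery step is a simple remapping of the initial quantized levels to their evolved values; a minimal sketch (array contents and names are ours, purely illustrative) is:

```python
import numpy as np

def recover(img0, levels, c):
    """Map each pixel's initial level q_j to the evolved value c_j."""
    out = np.empty(img0.shape, dtype=float)
    for j, q in enumerate(levels):
        out[img0 == q] = c[j]        # constant on the level set E_j
    return out

img0 = np.array([[3, 1], [2, 3]])    # quantized initial image
levels = np.array([3, 2, 1])         # its decreasing levels q_1 > q_2 > q_3
c = np.array([0.9, 0.5, 0.2])        # evolved level values c_j at time t_n
img_t = recover(img0, levels, c)     # evolved image, same level sets as img0
```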

### 4.1 Example. Histogram based image segmentation

As an application, we consider a Grand Challenge in Biomedical Image Analysis: a computer vision problem in biomedicine which consists of segmenting overlapping cells and detecting the subcellular nucleus and cytoplasm.

The data set is composed of real and synthetic images containing two or more cells with different degrees of overlapping, contrast, and texture. The phantom images allow the quantitative analysis of segmentation procedures through their ground-truth, which is carried out by using the Dice similarity coefficient, $DC$: for two sets (images) $A$ and $B$,

$$DC=\frac{2\,|A\cap B|}{|A|+|B|}.$$

Observe that values of $DC$ close to one indicate high coincidence of the images, that is, of the ground-truth segmentation and the segmentation obtained with our method.
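For binary masks, the coefficient is a one-liner; the masks below are synthetic examples of ours, not from the challenge dataset:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True       # ground truth
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True   # predicted mask
score = dice(gt, pred)   # 2*9 / (16 + 16) = 0.5625
```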

For running our algorithm, that is, providing an approximation, $c^n$, of the solution of (15), we consider the usual number of image quantization levels. The fidelity term is ignored ($\lambda=0$), and the range window parameter, $h$, is set empirically, with different values for nucleus detection and for cytoplasm detection. The tolerance in the fixed point loop (17) is also chosen empirically. As a stopping criterion, we consider a combination of a maximum number of time iterations (1000) and an energy stabilization criterion,

$$|J(c^n)-J(c^{n-1})|<10^{-10},$$

where $J$ is the discrete version of the functional given by (5) for the corresponding choices of $g$ and $w$. Finally, we implement a variable time step, $\tau(n)$, inspired by the proof of existence of solutions and given by, for $n\geq 2$,

$$\tau(n)=\frac{\tau(0)}{|J(c^{n-1})-J(c^{n-2})|},$$

with $\tau(0)$ given. In the experiments, we observed that $\tau(n)$ grows by several orders of magnitude from the first iterations to just before convergence.

We summarize our results for the test90 dataset in Table 1, where we show the $DC$ values for some specific samples, and the mean over the ninety samples contained in the dataset. We may check that the $DC$ values are very high for the segmentation of both regions of interest (cytoplasm and nucleus), always above the range obtained in [26, 25]. The execution times are given for a Matlab implementation of the algorithm, running on a standard laptop (Intel Core i7-2.80 GHz processor, 8GB RAM).

In Figure 1, we show the segmentation process for the two regions of interest. The first column corresponds to the initial image, the second column to the background extraction, and the third column to the nucleus segmentation. Thus, the cytoplasm is the difference between the images shown in the third and second columns. Finally, the fourth column shows the difference between the ground-truth nucleus segmentation and the one obtained with our method.

## 5 Proofs

Proof of Theorem 1. We divide the proof into several steps.

Step 1. Existence of a local in time solution to an auxiliary problem with smooth data.

We first assume that the initial datum is smooth, and consider the following auxiliary problem, obtained using the change of unknown $w(t,x)=e^{-\mu t}u(t,x)$ in (1), for some positive constant $\mu$ to be fixed:

$$\partial_t w(t,x)=\int_\Omega K_h\big(e^{\mu t}(w(t,y)-w(t,x))\big)\big(w(t,y)-w(t,x)\big)\,dy+\lambda\big(w_0(x)-w(t,x)\big)-\mu w(t,x), \tag{18}$$

for $(t,x)\in(0,T_0)\times\Omega$, and for the initial data $w_0=u_0$. Here, $T_0$ will be fixed later.

Time discretization. Let $\tau>0$ be the time step, and $t_j=j\tau$. Assume that $w_j$ is given and consider the functional $A$ given by

$$A(\varphi)(x)=\frac{1}{1+\tau(\lambda+\mu)}\Big(w_j(x)+\tau\int_{\Omega}K_h\big(e^{\mu t_j}(w_j(y)-w_j(x))\big)\big(\varphi(y)-\varphi(x)\big)\,dy+\tau\lambda w_0(x)\Big),$$

for continuous $\varphi$. Observe that if $A$ has a fixed point $\varphi$, then we may define $w_{j+1}=\varphi$ to get the following semi-implicit version of (18):

$$w_{j+1}(x)=w_j(x)+\tau\int_\Omega K_h\big(e^{\mu t_j}(w_j(y)-w_j(x))\big)\big(w_{j+1}(y)-w_{j+1}(x)\big)\,dy+\tau\lambda\big(w_0(x)-w_{j+1}(x)\big)-\tau\mu w_{j+1}(x). \tag{19}$$

We have,

$$|A(\varphi)(x)-A(\psi)(x)|\leq\frac{\tau}{1+\tau\mu}\int_\Omega K_h\big(e^{\mu t_j}(w_j(y)-w_j(x))\big)\,\big|\varphi(y)-\psi(y)-(\varphi(x)-\psi(x))\big|\,dy\leq\frac{2\tau|\Omega|}{1+\tau\mu}\,\|K_h\|_\infty\,\|\varphi-\psi\|_\infty.$$

Therefore, for $\tau$ small enough, the mapping $A$ is contractive in the space of continuous functions on $\bar\Omega$, and a unique fixed point $w_{j+1}$ verifying (19) does exist.

We have the following uniform estimates for $w_{j+1}$. On the one hand, from (19) we obtain

$$\|w_{j+1}\|_\infty\leq\frac{1}{1+\tau\big(\lambda+\mu-2|\Omega|\,\|K_h\|_\infty\big)}\Big(\|w_j\|_\infty+\tau\lambda\|w_0\|_\infty\Big),$$

which gives (recall that $\mu$ may be chosen with $\mu>2|\Omega|\,\|K_h\|_\infty$) the uniform estimate

$$\|w_{j+1}\|_\infty\leq M_0, \tag{20}$$

with $M_0$ depending only on the problem data.

On the other hand, since the initial datum is smooth, we deduce from (19) that $w_{j+1}$ is differentiable. This regularity allows us to differentiate in (19) with respect to the $i$-th component of $x$ to obtain, for a.e. $x\in\Omega$, an identity

with

$$F_1(x)=1+\tau\Big(\lambda+\mu+\int_\Omega K_h\big(e^{\mu t_j}(w_j(y)-w_j(x))\big)\,dy\Big),\qquad F_2(x)=1-\tau e^{\mu t_j}\int_\Omega K_h'\big(e^{\mu t_j}(w_j(y)-w_j(x))\big)\big(w_{j+1}(y)-w_{j+1}(x)\big)\,dy,$$

from which we deduce a difference inequality for the gradient norms.

Solving this difference inequality, we find that, by redefining $M_0$ if necessary, we obtain a uniform estimate for the gradients, with a bound depending only on the problem data. This choice of $\mu$ is possible by restricting $T_0$ to satisfy

$$T_0<\frac{1}{\mu}\,\log\frac{\mu}{2\,|\Omega|\,\|K_h'\|_\infty\,M_0}. \tag{21}$$

Time interpolators and passing to the limit $\tau\to 0$. We define, for $t\in(t_j,t_{j+1}]$, the piecewise constant and piecewise linear interpolators

$$w^{(\tau)}(t,x)=w_{j+1}(x),\qquad \tilde w^{(\tau)}(t,x)=w_{j+1}(x)+\frac{t_{j+1}-t}{\tau}\,\big(w_j(x)-w_{j+1}(x)\big).$$

Using the uniform estimates on $w_j$ and its gradient, we deduce the corresponding uniform estimates for $w^{(\tau)}$ and $\tilde w^{(\tau)}$, implying the existence of $w$ and $\tilde w$ such that, at least in a subsequence (not relabeled), as $\tau\to 0$,

$$w^{(\tau)}\to w\quad\text{weakly* in } L^\infty(0,T_0;W^{1,\infty}(\Omega)),\qquad \tilde w^{(\tau)}\to\tilde w\quad\text{weakly* in } W^{1,\infty}(Q_{T_0}). \tag{22}$$

In particular, by compactness

$$\tilde w^{(\tau)}\to\tilde w\quad\text{uniformly in } C([0,T_0]\times\bar\Omega).$$

Since, for $t\in(t_j,t_{j+1}]$,

$$\big|w^{(\tau)}(t,x)-\tilde w^{(\tau)}(t,x)\big|=\Big|\frac{(j+1)\tau-t}{\tau}\big(w_j(x)-w_{j+1}(x)\big)\Big|\leq\tau\,\|\partial_t\tilde w^{(\tau)}\|_{L^\infty(Q_{T_0})},$$

we deduce both $w=\tilde w$ and

$$w^{(\tau)}\to w\quad\text{uniformly in } C([0,T_0]\times\bar\Omega). \tag{23}$$

Considering the shift operator in time, and introducing the corresponding piecewise constant approximation of the kernel argument, we may rewrite (19) as

 ∂t~w(τ)(t,