# Improved PDE Models for Image Restoration through Backpropagation

In this paper we focus on learning optimized partial differential equation (PDE) models for image filtering. In this approach, grey-scale images are represented by a vector field of two real-valued functions and the image restoration problem is modelled by an evolutionary process such that the restored image at any time satisfies an initial-boundary-value problem of cross-diffusion with reaction type. The coupled evolution of the two components of the image is determined by a nondiagonal matrix that depends on those components. A critical question when designing a well-performing filter lies in the selection of the optimal coefficients and influence functions which define the cross-diffusion matrix. We propose the use of deep learning techniques to optimize the parameters of the model. In particular, we use a backpropagation technique to minimize a cost function related to the quality of the denoising process, while ensuring stability during the learning procedure. Consequently, we obtain improved image restoration models with solid mathematical foundations. The learning framework and resulting models are presented along with related numerical results and image comparisons.

11/23/2021


## 1 Introduction

Nonlinear diffusion processes are well known and widely used for image noise removal. Roughly speaking, the idea is to combine effective noise reduction by diffusion with the preservation of edges and other important image features. Amongst the better-known approaches are those where the diffusion coefficient depends inversely on the image gradient [15, 18].

The use of nonlinear complex diffusion filters (NCDF), where the image is represented by a complex function and the filtering process is governed by a diffusion equation with a complex-valued diffusion coefficient, was investigated in [12]. Those filters, which bring the advantage of using the imaginary part of the solution as an edge detector, avoiding the computation of the gradient to control the diffusion coefficient, can be successfully applied to denoising, in particular to despeckling in medical imaging [6, 16].

A complex diffusion equation can be written as a cross-diffusion system for the real and imaginary parts. The use of general cross-diffusion systems for image processing, which encompasses the complex-diffusion equations as particular cases, was investigated in [2]. The theoretical studies of the corresponding initial-boundary-value problems were presented in [1] and [2] for the linear and nonlinear cases, respectively, where well-posedness and some scale-space representation properties were derived under hypotheses on the cross-diffusion coefficient matrix. However, beyond these conditions, there is not much insight into the form that these coefficients should take in practical applications. In [3], the same authors applied the cross-diffusion model to several examples related to the problem of reducing the speckle noise in optical coherence tomography (OCT) images. The conclusion is that the performance of the model is greatly influenced by the choice of the cross-diffusion coefficient matrix.

All these methods based on diffusion filtering are highly dependent on some crucial parameters (e.g. [2, 17]). The parameters that lead to the most effective methods vary depending on the acquisition methods, the nature of the images and the associated noise profile. Furthermore, diffusion filters require the specification of a stopping time. To circumvent this issue, a reaction term can be introduced which keeps the steady-state solution close to the original image [18].

The challenge lies in the formulation of robust models with optimal parameters corresponding to each architecture. The use of neural networks for image processing tasks has set the pace of current research in this area. However, the general intrinsic “black box” nature of such algorithms is a drawback, particularly in the context of medical applications. In [10] the authors use a learning framework inspired by nonlinear reaction-diffusion processes where the filters and the influence functions are simultaneously learned from training data. The results are promising. There are, however, some important questions remaining: the numerical solution is not related to the original reaction-diffusion problem and, more importantly, the qualitative properties of the computed solution are not derived. Moreover, the method does not cover promising models such as the one based on cross-diffusion.

We propose the use of deep learning techniques, such as backpropagation [13] and the Adam algorithm [14], to optimize the parameters of a system of PDEs for a specific task. In this work, we focus on nonlinear cross-diffusion with reaction filters in order to optimize the reaction parameter and the influence functions of the cross-diffusion matrix for the removal of Gaussian noise.

The optimization process we suggest includes the stability of the underlying numerical method, which is crucial for obtaining successful results. We first direct our attention to synthetic images in order to develop our learning strategy. The methodology consists of the parametrization of the functions that govern the cross-diffusion system, the computation of the gradient of a cost function with respect to these parameters, and the application of the backpropagation technique, widely used in neural network supervised learning, to learn the optimal parameters. Experiments on real noisy images show that the training process significantly improves the performance of the filters.

Our contributions can be summarized as follows:

• we derive the stability conditions for a broad class of cross-diffusion models;

• we build a recurrent neural network equivalent to a cross-diffusion system where its neurons are the points of the spatial discretization and the weights are the non-linear diffusion coefficients;

• we obtain improved stable cross-diffusion models for image denoising through backpropagation.

The article is organized as follows. In Section 2 the fully discrete cross-diffusion with reaction model is presented. In Section 3 the stability conditions are derived. Section 4 is devoted to the learning model. We present the numerical experiments in Section 5. We end the paper with a section dedicated to conclusions.

## 2 Cross-diffusion reaction model

In this section we present a fully discrete cross-diffusion reaction model for image restoration. The image is represented by a two-component vector field, $w=(u,v)$, and the restoration process is governed by the nonlinear cross-diffusion reaction system

$$\begin{cases} u_t=\nabla\cdot\big(d_1(w)\nabla u+d_2(w)\nabla v\big)-\lambda(u-u_0) & \text{in } \Omega\times\mathbb{R}^+,\\ v_t=\nabla\cdot\big(d_3(w)\nabla u+d_4(w)\nabla v\big) & \text{in } \Omega\times\mathbb{R}^+,\\ u(x,0)=u_0(x),\ v(x,0)=v_0(x) & \text{in } \Omega,\\ \dfrac{\partial u}{\partial\eta}=0,\ \dfrac{\partial v}{\partial\eta}=0 & \text{on } \Gamma\times\mathbb{R}^+, \end{cases}\tag{1}$$

where $\Omega$ is the domain of interest, $u_0$ and $v_0$ are the given initial conditions for $u$ and $v$, and $\eta$ denotes the outward normal vector to the boundary $\Gamma$. The cross-diffusion matrix of the model is given by

$$D(w)=\begin{pmatrix} d_1(w) & d_2(w)\\ d_3(w) & d_4(w)\end{pmatrix}.\tag{2}$$

$\lambda$ is a time-dependent non-negative parameter.

The domain $\bar\Omega=[a_1,b_1]\times[a_2,b_2]$ is discretized by the points $x_j=(x_{j_1},x_{j_2})$, where

$$x_{j_1}=a_1+h_1 j_1,\qquad x_{j_2}=a_2+h_2 j_2,\qquad j_k=0,1,\dots,N_k,\qquad h_k=\frac{b_k-a_k}{N_k},\quad k=1,2,$$

for two given integers $N_1$ and $N_2$. This spatial mesh on $\bar\Omega$ is denoted by $\bar\Omega_h$, and its restriction to $\Gamma$ by $\Gamma_h$. Points halfway between two adjacent grid points are denoted by $x_{j\pm(1/2)e_k}$, $k=1,2$, where $\{e_1,e_2\}$ is the canonical basis, that is, $e_k$ is the standard basis unit vector in the $k$th direction.

For the discretization in time, we consider a uniform mesh with time step $\Delta t$, $t_m=m\Delta t$, where $m=0,1,2,\dots$.

We denote by $Z_j$ the value of a mesh function $Z$ at the point $x_j$. For the formulation of the finite difference approximations, we use the centered finite difference quotients in the $k$th spatial direction, for $k=1,2$,

$$\delta_k Z_j=\frac{Z_{j+(1/2)e_k}-Z_{j-(1/2)e_k}}{h_k},\qquad \delta_k Z_{j+(1/2)e_k}=\frac{Z_{j+e_k}-Z_j}{h_k}.$$

In order to formulate the discrete cross-diffusion restoration problem, let $U^0$ be a discrete real-valued function standing for the grey level values on $\bar\Omega_h$ of the noisy image to be restored. From $U^0$, an initial distribution $W^0=(U^0,V^0)$ for the cross-diffusion is required. This is given by two real-valued functions $U^0$, $V^0$, which can be selected following different criteria. A simple choice, which will be considered in the experiments presented later in this paper, consists of taking $U^0$ equal to the noisy image data and $V^0=0$. A more detailed discussion on the initial data can be seen in [1, 2].

Let $\theta\in[0,1]$ and, for a mesh function $Z$, let $Z^{m+\theta}=(1-\theta)Z^m+\theta Z^{m+1}$. Given the initial solution $W^0=(U^0,V^0)$, the numerical solution of (1) at the time $t_{m+1}$ is obtained by considering the following finite difference scheme

$$\frac{U^{m+1}_j-U^m_j}{\Delta t}=\sum_{k=1}^{2}\delta_k\Big(d_1(W^m)_j\,\delta_k U^{m+\theta}_j+d_2(W^m)_j\,\delta_k V^{m+\theta}_j\Big)-\lambda^{m+\theta}\big(U^{m+\theta}_j-U^0_j\big),\tag{3}$$

$$\frac{V^{m+1}_j-V^m_j}{\Delta t}=\sum_{k=1}^{2}\delta_k\Big(d_3(W^m)_j\,\delta_k U^{m+\theta}_j+d_4(W^m)_j\,\delta_k V^{m+\theta}_j\Big),\tag{4}$$

where

$$D(W^m)_{j+(1/2)e_k}=\frac{D(W^m_j)+D(W^m_{j+e_k})}{2}$$

and $W^m=(U^m,V^m)$. If $\theta=0$ then (3)-(4) is an explicit method. If $\theta=1$ then method (3)-(4) is semi-implicit and its solution is obtained by solving a system of linear equations. In the next section we will derive the stability properties of both the explicit and the semi-implicit methods. Later, in Sections 4.2 and 4.3, we will discuss their role in our learning strategy: to optimize the cross-diffusion model we will use the explicit scheme ($\theta=0$); to denoise the images we will use the semi-implicit discretization ($\theta=1$) of the optimized model.

Equations (3) and (4), for $j$ such that $j_k\in\{0,N_k\}$, are defined using points of the form $x_{j-e_k}$ and $x_{j+e_k}$, which do not belong to $\bar\Omega_h$ and are not yet defined. For those points we consider the solution values at $x_{j+e_k}$ and $x_{j-e_k}$, respectively. This corresponds to the usual discretization of the homogeneous Neumann boundary conditions on $\Gamma$ with central finite differences.
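To make the scheme concrete, the following sketch implements one explicit ($\theta=0$) iteration of (3)-(4) with zero-flux boundaries. It is an illustration under stated assumptions (uniform grid, coefficients averaged at half-points as above, boundary fluxes set to zero, which is equivalent to the ghost-point treatment); it is not the authors' implementation, and all names are ours.

```python
import numpy as np

def cross_diffusion_step(U, V, coeffs, lam, U0, dt, h=(1.0, 1.0)):
    """One explicit (theta = 0) iteration of scheme (3)-(4) on a 2D grid.

    `coeffs(U, V)` returns the arrays d1, d2, d3, d4 (one value per pixel).
    The flux through boundary half-points is set to zero, matching the
    ghost-point discretization of the homogeneous Neumann conditions.
    """
    d1, d2, d3, d4 = coeffs(U, V)

    def take(c, sl, axis):
        idx = [slice(None), slice(None)]
        idx[axis] = sl
        return c[tuple(idx)]

    def divergence(da, db, A, B):
        # sum_k delta_k( da * delta_k A + db * delta_k B ), with the
        # coefficients averaged at the half-integer points as in the scheme.
        out = np.zeros_like(A)
        for axis, hk in enumerate(h):
            dA = np.diff(A, axis=axis) / hk          # delta_k A at half-points
            dB = np.diff(B, axis=axis) / hk
            avg = lambda c: 0.5 * (take(c, slice(1, None), axis)
                                   + take(c, slice(None, -1), axis))
            flux = avg(da) * dA + avg(db) * dB       # interior half-point fluxes
            pad = [(0, 0), (0, 0)]
            pad[axis] = (1, 1)
            flux = np.pad(flux, pad)                 # zero flux at the boundary
            out += np.diff(flux, axis=axis) / hk     # delta_k of the flux
        return out

    U_new = U + dt * (divergence(d1, d2, U, V) - lam * (U - U0))
    V_new = V + dt * divergence(d3, d4, U, V)
    return U_new, V_new
```

With zero-flux boundaries and $\lambda=0$ the update conserves the mean grey value, a quick sanity check for a divergence-form discretization.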

## 3 Stability of the numerical scheme

We will now investigate the stability of the finite difference scheme (3)-(4). The approach generalizes the strategies in [4] used to study the stability of the particular case of complex-diffusion equations written as a cross-diffusion system.

For each $j$, we define the rectangle $\square_j=[x_{j_1},x_{j_1+1}]\times[x_{j_2},x_{j_2+1}]$ and denote by $|\square_j|$ the measure of $\square_j$. We consider the discrete inner products

$$(U,V)_h=\sum_{\square_j\subset\Omega}\frac{|\square_j|}{4}\Big(U_{j_1,j_2}V_{j_1,j_2}+U_{j_1+1,j_2}V_{j_1+1,j_2}+U_{j_1,j_2+1}V_{j_1,j_2+1}+U_{j_1+1,j_2+1}V_{j_1+1,j_2+1}\Big),$$

$$(U,V)_{h^*_1}=\sum_{\square_j\subset\Omega}\frac{|\square_j|}{2}\Big(U_{j_1+1/2,j_2}V_{j_1+1/2,j_2}+U_{j_1+1/2,j_2+1}V_{j_1+1/2,j_2+1}\Big),$$

and

$$(U,V)_{h^*_2}=\sum_{\square_j\subset\Omega}\frac{|\square_j|}{2}\Big(U_{j_1,j_2+1/2}V_{j_1,j_2+1/2}+U_{j_1+1,j_2+1/2}V_{j_1+1,j_2+1/2}\Big).$$

The corresponding norms are denoted by $\|\cdot\|_h$, $\|\cdot\|_{h^*_1}$ and $\|\cdot\|_{h^*_2}$, respectively. For $W=(U,V)$ we define $\|W\|_h^2=\|U\|_h^2+\|V\|_h^2$.

To simplify the notation, and where it is clear from the context, we write in what follows $d_\ell$ instead of $d_\ell(W^m)$ or $d_\ell(W^m)_{j+(1/2)e_k}$, for $\ell=1,\dots,4$.

Multiplying both members of equations (3) and (4) by $U^{m+\theta}$ and $V^{m+\theta}$, respectively, according to the discrete inner product $(\cdot,\cdot)_h$, and using summation by parts, we get

$$\Big(\frac{U^{m+1}-U^m}{\Delta t},U^{m+\theta}\Big)_h+\Big(\frac{V^{m+1}-V^m}{\Delta t},V^{m+\theta}\Big)_h+\sum_{k=1}^{2}\Big(\|d_1^{1/2}\delta_k U^{m+\theta}\|^2_{h^*_k}+\|d_4^{1/2}\delta_k V^{m+\theta}\|^2_{h^*_k}+(d_2\delta_k V^{m+\theta},\delta_k U^{m+\theta})_{h^*_k}+(d_3\delta_k U^{m+\theta},\delta_k V^{m+\theta})_{h^*_k}\Big)=-\lambda^{m+\theta}\big(U^{m+\theta}-U^0,U^{m+\theta}\big)_h.$$

We can write

$$W^{m+\theta}=\frac{W^{m+1}+W^m}{2}+\Big(\theta-\frac12\Big)\Delta t\,\frac{W^{m+1}-W^m}{\Delta t}$$

and then

$$\frac{\|W^{m+1}\|^2_h-\|W^m\|^2_h}{2\Delta t}+\Big(\theta-\frac12\Big)\Delta t\,\Big\|\frac{W^{m+1}-W^m}{\Delta t}\Big\|^2_h+\sum_{k=1}^{2}\Big(\|d_1^{1/2}\delta_k U^{m+\theta}\|^2_{h^*_k}+\|d_4^{1/2}\delta_k V^{m+\theta}\|^2_{h^*_k}+(d_2\delta_k V^{m+\theta},\delta_k U^{m+\theta})_{h^*_k}+(d_3\delta_k U^{m+\theta},\delta_k V^{m+\theta})_{h^*_k}\Big)\le\lambda^{m+\theta}\|U^{m+\theta}-U^0\|_h\|U^{m+\theta}\|_h.\tag{5}$$

For any $\epsilon>0$ we have that

$$\lambda^{m+\theta}\|U^{m+\theta}-U^0\|_h\|U^{m+\theta}\|_h\le\lambda^{m+\theta}\big(\|U^{m+\theta}\|_h+\|U^0\|_h\big)\|U^{m+\theta}\|_h\le\lambda^{m+\theta}\|U^{m+\theta}\|^2_h+\frac{(\lambda^{m+\theta})^2}{4\epsilon}\|U^0\|^2_h+\epsilon\|U^{m+\theta}\|^2_h.\tag{6}$$
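The last step in (6) is an application of Young's inequality; a minimal derivation, assuming the standard form $ab\le \epsilon b^2 + a^2/(4\epsilon)$:

```latex
% Young's inequality with parameter \epsilon > 0:
%   ab \le \epsilon b^2 + \frac{a^2}{4\epsilon},
% applied with a = \lambda^{m+\theta}\|U^0\|_h and b = \|U^{m+\theta}\|_h:
\lambda^{m+\theta}\|U^0\|_h\,\|U^{m+\theta}\|_h
  \le \epsilon\,\|U^{m+\theta}\|_h^2
    + \frac{(\lambda^{m+\theta})^2}{4\epsilon}\,\|U^0\|_h^2 .
```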

### 3.1 Stability of the semi-implicit scheme

We start by considering the case where $\theta=1$. With the assumption that

$$\sum_{k=1}^{2}\Big(\|d_1^{1/2}\delta_k U^{m+1}\|^2_{h^*_k}+\|d_4^{1/2}\delta_k V^{m+1}\|^2_{h^*_k}+(d_2\delta_k V^{m+1},\delta_k U^{m+1})_{h^*_k}+(d_3\delta_k U^{m+1},\delta_k V^{m+1})_{h^*_k}\Big)>0,\tag{7}$$

which corresponds to the natural assumption that the diffusion matrix is uniformly positive definite, from (5) and (6) we get

$$\big(1-2\Delta t\,\lambda^{m+1}-2\Delta t\,\epsilon\big)\|W^{m+1}\|^2_h\le\|W^m\|^2_h+2\Delta t\,\frac{(\lambda^{m+1})^2}{4\epsilon}\|U^0\|^2_h.$$

Assuming that

$$\lambda^{m+1}\le\lambda_{\max},\qquad 0<\zeta<1-2\Delta t\,\lambda^{m+1}-2\Delta t\,\epsilon,\qquad m=1,2,\dots\tag{8}$$

for some $\epsilon>0$, where $\zeta\in(0,1)$ is an arbitrarily chosen constant, we get

$$\|W^{m+1}\|^2_h\le\Big(1+2\Delta t\,(\lambda_{\max}+\epsilon)\zeta^{-1}\Big)\|W^m\|^2_h+\Delta t\,\frac{\lambda_{\max}^2}{2\epsilon\zeta}\|U^0\|^2_h.$$

If (8) holds, using Duhamel's principle ([9], Lemma 4.1 in Appendix B) we get

$$\|W^{m+1}\|^2_h\le e^{\big(1+2(\lambda_{\max}+\epsilon)\zeta^{-1}\big)t_{m+1}}\Big(1+t_{m+1}\frac{\lambda_{\max}^2}{2\epsilon\zeta}\Big)\|W^0\|^2_h$$

and we conclude that the method is stable.

### 3.2 Stability of the explicit scheme

We now consider the case where $\theta=0$. Applying the norm $\|\cdot\|_h$ on both sides of equations (3) and (4), and using the inequalities $(a+b)^2\le(1+\eta)a^2+(1+\eta^{-1})b^2$, for $\eta>0$, and $\|\delta_k Z\|_{h^*_k}\le\frac{2}{h_k}\|Z\|_h$, we obtain, respectively,

$$\Big\|\frac{U^{m+1}-U^m}{\Delta t}\Big\|^2_h\le\sum_{k=1}^{2}\frac{8(1+\eta)}{h_k^2}\Big(\|d_1\delta_k U^m\|^2_{h^*_k}+\|d_2\delta_k V^m\|^2_{h^*_k}+2(d_1\delta_k U^m,d_2\delta_k V^m)_{h^*_k}\Big)+2(1+\eta^{-1})(\lambda^m)^2\Big(\|U^m\|^2_h+\|U^0\|^2_h\Big),\tag{9}$$

for any $\eta>0$, and

$$\Big\|\frac{V^{m+1}-V^m}{\Delta t}\Big\|^2_h\le\sum_{k=1}^{2}\frac{8}{h_k^2}\Big(\|d_3\delta_k U^m\|^2_{h^*_k}+\|d_4\delta_k V^m\|^2_{h^*_k}+2(d_3\delta_k U^m,d_4\delta_k V^m)_{h^*_k}\Big).\tag{10}$$

Combining the inequalities (6), (9) and (10) with (5) leads to

$$\frac{\|W^{m+1}\|^2_h-\|W^m\|^2_h}{2\Delta t}+\sum_{k=1}^{2}\Big(\|d_1^{1/2}\delta_k U^m\|^2_{h^*_k}+\|d_4^{1/2}\delta_k V^m\|^2_{h^*_k}+(d_2\delta_k V^m,\delta_k U^m)_{h^*_k}+(d_3\delta_k U^m,\delta_k V^m)_{h^*_k}-\frac{4\Delta t}{h_k^2}\Big((1+\eta)\big(\|d_1\delta_k U^m\|^2_{h^*_k}+\|d_2\delta_k V^m\|^2_{h^*_k}+2(d_1\delta_k U^m,d_2\delta_k V^m)_{h^*_k}\big)+\|d_3\delta_k U^m\|^2_{h^*_k}+\|d_4\delta_k V^m\|^2_{h^*_k}+2(d_3\delta_k U^m,d_4\delta_k V^m)_{h^*_k}\Big)\Big)\le\big(\lambda^m+\epsilon+\Delta t(1+\eta^{-1})(\lambda^m)^2\big)\|U^m\|^2_h+\Big(\frac{(\lambda^m)^2}{4\epsilon}+\Delta t(1+\eta^{-1})(\lambda^m)^2\Big)\|U^0\|^2_h.$$

Let $\eta=\epsilon$. In order to obtain a stable scheme we impose that

$$\sum_{k=1}^{2}\Big(\|d_1^{1/2}\delta_k U^m\|^2_{h^*_k}+\|d_4^{1/2}\delta_k V^m\|^2_{h^*_k}+(d_2\delta_k V^m,\delta_k U^m)_{h^*_k}+(d_3\delta_k U^m,\delta_k V^m)_{h^*_k}-\frac{4\Delta t}{h_k^2}\Big((1+\epsilon)\big(\|d_1\delta_k U^m\|^2_{h^*_k}+\|d_2\delta_k V^m\|^2_{h^*_k}+2(d_1\delta_k U^m,d_2\delta_k V^m)_{h^*_k}\big)+\|d_3\delta_k U^m\|^2_{h^*_k}+\|d_4\delta_k V^m\|^2_{h^*_k}+2(d_3\delta_k U^m,d_4\delta_k V^m)_{h^*_k}\Big)\Big)\ge 0,\tag{11}$$

for some $\epsilon>0$.

If (11) holds, then

$$\|W^{m+1}\|^2_h\le\|W^m\|^2_h+2\Delta t\big(\lambda^m+\epsilon+\Delta t(1+\epsilon^{-1})(\lambda^m)^2\big)\|U^m\|^2_h+2\Delta t\Big(\frac{(\lambda^m)^2}{4\epsilon}+\Delta t(1+\epsilon^{-1})(\lambda^m)^2\Big)\|U^0\|^2_h.\tag{12}$$

We now take $a_\epsilon=2\big(\lambda_{\max}+\epsilon+\Delta t(1+\epsilon^{-1})\lambda_{\max}^2\big)$ and $b_\epsilon=\frac{\lambda_{\max}^2}{2\epsilon}+2\Delta t(1+\epsilon^{-1})\lambda_{\max}^2$. From (12), and using Duhamel's principle, we get

$$\|W^{m+1}\|^2_h\le e^{a_\epsilon t_{m+1}}\big(1+t_{m+1}b_\epsilon\big)\|W^0\|^2_h$$

and we conclude that the method is stable.

Note that we can rewrite (11) as

$$\sum_{k=1}^{2}\big(\delta_k W^m\big)^{\top} M\,\delta_k W^m\ge 0,\tag{13}$$

where $M$ is a square matrix of dimension $2$. In order for (13) to hold we require the symmetric part of $M$, $\frac12(M+M^\top)$, to be positive semi-definite, that is, the eigenvalues of $\frac12(M+M^\top)$ are all non-negative. Using the Gershgorin theorem, a sufficient condition for the method to be stable is to impose that the influence functions satisfy, for some $\epsilon>0$,

$$d_1-\frac{4\Delta t}{h_k^2}\big((1+\epsilon)d_1^2+d_3^2\big)\ge\Big|\frac12(d_2+d_3)-\frac{4\Delta t}{h_k^2}\big((1+\epsilon)d_1 d_2+d_3 d_4\big)\Big|,\tag{14}$$

$$d_4-\frac{4\Delta t}{h_k^2}\big((1+\epsilon)d_2^2+d_4^2\big)\ge\Big|\frac12(d_2+d_3)-\frac{4\Delta t}{h_k^2}\big((1+\epsilon)d_1 d_2+d_3 d_4\big)\Big|.$$
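As an illustration, condition (14) can be checked numerically on sampled values of the influence functions. The helper below is hypothetical (the function name and the default for $\epsilon$ are ours), assuming the coefficients are supplied as scalars or arrays over the edge-detector range:

```python
import numpy as np

def explicit_stable(d1, d2, d3, d4, dt, h, eps=1e-3):
    """Sufficient stability check (14) for the explicit scheme.

    The two Gershgorin-type bounds must hold for every spatial step h_k
    and for every sampled value of the influence functions d1..d4.
    """
    d1, d2, d3, d4 = map(np.asarray, (d1, d2, d3, d4))
    ok = True
    for hk in np.atleast_1d(h):
        c = 4.0 * dt / hk ** 2
        off = np.abs(0.5 * (d2 + d3) - c * ((1 + eps) * d1 * d2 + d3 * d4))
        ok &= bool(np.all(d1 - c * ((1 + eps) * d1 ** 2 + d3 ** 2) >= off))
        ok &= bool(np.all(d4 - c * ((1 + eps) * d2 ** 2 + d4 ** 2) >= off))
    return ok
```

For purely diagonal diffusion ($d_2=d_3=0$) the check reduces to the familiar explicit time-step restriction $\Delta t\lesssim h_k^2/(4d)$.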

## 4 Learning model

Our goal is to optimize the parameters of our model using deep learning techniques. In order to adapt the cross-diffusion scheme (3)-(4) to a neural network architecture, we concatenate $U$ and $V$ into a single column vector $w$ with $2N_1N_2$ entries:

$$w=(u,v)^\top,\qquad u=\begin{bmatrix}U_{1,1}\\ U_{1,2}\\ \vdots\\ U_{N_1,N_2}\end{bmatrix},\qquad v=\begin{bmatrix}V_{1,1}\\ V_{1,2}\\ \vdots\\ V_{N_1,N_2}\end{bmatrix}.$$

The $m$-th iteration of the scheme (3)-(4) can now be written in the vector formulation

$$\frac{w^{m+1}-w^m}{\Delta t}=\big(K_x^l D_{x,m}^L K_x^{rL}+K_y^l D_{y,m}^L K_y^{rL}+K_x^l D_{x,m}^R K_x^{rR}+K_y^l D_{y,m}^R K_y^{rR}\big)w^{m+\theta}-\lambda^{m+\theta}\big(u^{m+\theta}-u^0\big),\tag{15}$$

with $w^{m+\theta}=(1-\theta)w^m+\theta w^{m+1}$, and where the first difference matrices $K_x^{rL}$, $K_x^{rR}$, $K_y^{rL}$ and $K_y^{rR}$ are block matrices:

$$K_x^{rL}=\begin{bmatrix}k_x^r & 0\\ 0 & k_x^r\end{bmatrix},\quad K_x^{rR}=\begin{bmatrix}0 & k_x^r\\ k_x^r & 0\end{bmatrix},\quad K_y^{rL}=\begin{bmatrix}k_y^r & 0\\ 0 & k_y^r\end{bmatrix},\quad K_y^{rR}=\begin{bmatrix}0 & k_y^r\\ k_y^r & 0\end{bmatrix},$$

where $k_x^r$ and $k_y^r$ denote the backward difference operators with respect to $x$ and $y$, respectively. The matrices $D_{x,m}^L$ and $D_{y,m}^L$ are diagonal matrices. Each entry of $D_{x,m}^L$ and $D_{y,m}^L$ depends on the function $d_1$ for $u$ and on the function $d_4$ for $v$, at the time $t_m$. The entries of $D_{x,m}^R$ and $D_{y,m}^R$ follow the same pattern, replacing $d_1$ and $d_4$ by $d_2$ and $d_3$, respectively. It remains to define the difference matrices $K_x^l$ and $K_y^l$:

$$K_x^l=\begin{bmatrix}k_x^l & 0\\ 0 & k_x^l\end{bmatrix},\qquad K_y^l=\begin{bmatrix}k_y^l & 0\\ 0 & k_y^l\end{bmatrix},$$

where $k_x^l$ and $k_y^l$ denote the forward difference operators with respect to $x$ and $y$, respectively.

We reiterate that the scheme (15) is equivalent to the cross-diffusion scheme (3)-(4).
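For concreteness, the block matrices in (15) can be assembled with sparse Kronecker products. The sketch below is ours, assuming row-major flattening of an $N_1\times N_2$ image; for brevity it uses plain one-sided differences without the Neumann boundary correction described in Section 2:

```python
from scipy.sparse import identity, kron, diags, bmat

def difference_blocks(N1, N2, h1=1.0, h2=1.0):
    """Assemble K_x^l, K_x^{rL}, K_x^{rR} of (15) for a flattened image."""
    def backward(N, h):   # k^r: backward difference operator
        return diags([1.0 / h, -1.0 / h], [0, -1], shape=(N, N))
    def forward(N, h):    # k^l: forward difference operator
        return diags([-1.0 / h, 1.0 / h], [0, 1], shape=(N, N))
    kxr = kron(backward(N1, h1), identity(N2))   # x acts on the first index
    kxl = kron(forward(N1, h1), identity(N2))
    KxrL = bmat([[kxr, None], [None, kxr]])      # keeps (u, v) blocks apart
    KxrR = bmat([[None, kxr], [kxr, None]])      # swaps the two components
    Kxl = bmat([[kxl, None], [None, kxl]])
    return Kxl, KxrL, KxrR
```

The $y$-direction matrices follow by swapping the Kronecker factors. The diagonal matrices $D$ carry the sampled influence functions, so a semi-implicit step amounts to solving one sparse linear system for $w^{m+1}$.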

### 4.1 Parameterization of the influence functions

An important aspect of our learning strategy consists of the parametrization of arbitrary influence functions. We parameterize the influence functions of the cross-diffusion matrix (2) through Gaussian radial basis functions (RBFs) [8]. Here we consider that they depend only on the second component $v$ of the vector $w$, i.e., on the component which plays the role of edge detector. Each of these functions has the expression

$$d_\ell(v)=\sum_{i=1}^{P}\delta_{\ell,i}\,\phi\Big(\frac{|v-\mu_i|}{2\nu}\Big),\qquad\phi(z)=e^{-\|z\|^2},\tag{16}$$

where $\mu_i$, for $i=1,\dots,P$, are equidistant points in the range of values of the edge detector, $\nu$ is a positive scalar (the so-called scale of the RBF), and $\delta_{\ell,i}$ are, for each $\ell$, the interpolation coefficients of $d_\ell$. A good approximation requires a careful balance between $P$ and $\nu$.

In the learning model, we consider in (15) $\lambda^{m+\theta}=\lambda$, where $\lambda$ is a positive scalar.
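A direct way to evaluate the parameterization (16) is as a small linear layer over a fixed Gaussian basis. The sketch below is ours (names illustrative), assuming the argument $|v-\mu_i|/(2\nu)$, one plausible reading of (16):

```python
import numpy as np

def influence(v, deltas, mus, nu):
    """Evaluate an influence function d_ell(v) as in (16).

    `deltas` are the P learnable coefficients delta_{ell,i}, `mus` the P
    equidistant RBF centers, `nu` the scale of the Gaussian basis.
    """
    v = np.asarray(v, dtype=float)[..., None]       # broadcast over centers
    phi = np.exp(-((np.abs(v - mus) / (2.0 * nu)) ** 2))
    return phi @ np.asarray(deltas, dtype=float)    # linear in the deltas
```

Because $d_\ell$ is linear in the $\delta$ coefficients, the gradient of the loss with respect to these parameters, needed by backpropagation, is straightforward to compute.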

### 4.2 Constrained optimization

In order to optimize the influence functions of the cross-diffusion matrix and also the scalar $\lambda$ which characterizes the reaction term, a training cost function and a training dataset are required. We will use a set of gray-scale images which will serve as the basis for our training set. We fix the time step $\Delta t$ and a number of steps $M$, which will result in a cross-diffusion stopping time of $T=M\Delta t$. From the learning viewpoint, we will have an $M$-layered convolutional neural network. We aim to minimize the loss function

$$L\big(\Theta,w^M_1,\dots,w^M_B,w^{nl}_1,\dots,w^{nl}_B\big)=\sum_{i=1}^{B}l\big(\Theta,w^M_i,w^{nl}_i\big)=\sum_{i=1}^{B}\frac12\big\|u^M_i-u^{nl}_i\big\|^2,\tag{17}$$

where $w^M_i$ and $w^{nl}_i$ are, respectively, the $M$-th iteration of (15) for the $i$-th corrupted image and the corresponding non-corrupted image, $B$ is the learning batch size, and $\Theta$ is the set of parameters we want to optimize. As such, the set of parameters is

$$\Theta=\{\lambda,\delta_{1,1},\dots,\delta_{1,P},\delta_{4,1},\dots,\delta_{4,P},\delta_{2,1},\dots,\delta_{2,P},\delta_{3,1},\dots,\delta_{3,P}\},$$

and we must now obtain the gradient $\nabla_\Theta L$ to solve the minimization problem.

We need to guarantee that the learning procedure ends up with a stable scheme. For that, we will make use of the stability conditions derived in Section 3. Although the use of the explicit method (15) with $\theta=0$ can bring some advantages in terms of computational effort, the stability conditions are much more restrictive when compared to the semi-implicit method (15) with $\theta=1$. Moreover, since we need to impose the stability conditions over the values of $\Theta$, the nonlinearities in the stability condition (14) corresponding to the explicit scheme raise some issues that we want to avoid here. For that reason our goal is to learn a semi-implicit scheme for image denoising.

To obtain a stable semi-implicit scheme, instead of minimizing the loss function (17) we seek the solution of the constrained optimization problem

$$\begin{aligned}\min_{\Theta}\quad & L\big(\Theta,w^M_1,\dots,w^M_B,w^{nl}_1,\dots,w^{nl}_B\big) &&\text{(18a)}\\ \text{s.t.}\quad & d_1(x)\ge\tfrac12|d_2(x)+d_3(x)|,\quad\forall x\in A, &&\text{(18b)}\\ & d_4(x)\ge\tfrac12|d_2(x)+d_3(x)|,\quad\forall x\in A. &&\text{(18c)}\end{aligned}$$

These constraints ensure that (7) holds. We first note that (18b)-(18c) are not in the standard formulation over the values of $\Theta$. As the radial basis functions (16) are strictly positive, we replace (18b)-(18c) with

$$\begin{aligned}c_{1,i}(\Theta)&=\delta_{1,i}-\tfrac12(\delta_{2,i}+\delta_{3,i})\ge 0, & i=1,\dots,P, &&\text{(19a)}\\ c_{2,i}(\Theta)&=\delta_{1,i}+\tfrac12(\delta_{2,i}+\delta_{3,i})\ge 0, & i=1,\dots,P, &&\text{(19b)}\\ c_{3,i}(\Theta)&=\delta_{4,i}-\tfrac12(\delta_{2,i}+\delta_{3,i})\ge 0, & i=1,\dots,P, &&\text{(19c)}\\ c_{4,i}(\Theta)&=\delta_{4,i}+\tfrac12(\delta_{2,i}+\delta_{3,i})\ge 0, & i=1,\dots,P,&&\text{(19d)}\end{aligned}$$
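During training, feasibility of (19a)-(19d) can be verified directly on the RBF coefficients. The helper below is a hypothetical check (the dictionary layout is ours), using the fact that (19a) and (19b) together are equivalent to $\delta_{1,i}\ge\frac12|\delta_{2,i}+\delta_{3,i}|$, and likewise for $\delta_{4,i}$:

```python
import numpy as np

def constraints_hold(deltas, tol=0.0):
    """Check (19a)-(19d) for coefficient arrays deltas[ell], ell = 1..4."""
    d1, d2, d3, d4 = (np.asarray(deltas[k], dtype=float) for k in (1, 2, 3, 4))
    s = 0.5 * np.abs(d2 + d3)            # (19a)+(19b) <=> d1 >= s, etc.
    return bool(np.all(d1 - s >= -tol) and np.all(d4 - s >= -tol))
```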

using the fact that (19a)-(19d) implies (18b)-(18c). To solve the inequality constrained problem we define the augmented Lagrangian as

$$\mathcal{L}(\Theta,\mu,\rho)=L(\Theta)+\frac{\rho}{2}\sum_{i=1}^{P}\sum_{\ell=1}^{4}\max\Big(0,\frac{\mu_{\ell,i}}{\rho}-c_{\ell,i}(\Theta)\Big)^{2}$$