# Regularization by architecture: A deep prior approach for inverse problems

The present paper studies the so-called deep image prior (DIP) technique in the context of inverse problems. DIP networks have been introduced recently for applications in image processing; also first experimental results for applying DIP to inverse problems have been reported. This paper aims at discussing different interpretations of DIP and at obtaining analytic results for specific network designs and linear operators. The main contribution is to introduce the idea of viewing these approaches as the optimization of Tikhonov functionals rather than optimizing networks. Besides theoretical results, we present numerical verifications for an academic example (integration operator) as well as for the inverse problem of magnetic particle imaging (MPI). The reconstructions obtained by deep prior networks are compared with state-of-the-art methods.


## 1 Introduction

Deep image priors (DIP) have been recently introduced as a machine learning approach for some tasks in image processing ulyanov2017dip . Usually, such machine learning approaches utilize large sets of training data; hence, it was somewhat surprising that deep image priors are based on a single data point. The task of DIP is to train a network φW with parameters W and fixed input z by minimizing the simple loss function

 ∥AφW(z)−yδ∥2. (1)

The minimization is with respect to W; the input z is kept fixed. After training, the solution of the problem is approximated by φW(z). In image processing, typical choices for A are the identity operator (denoising) or a projection operator to a subset of the image domain (inpainting). For these applications it has been observed that minimizing the functional iteratively by gradient descent methods in combination with a suitable stopping criterion leads to amazing results ulyanov2017dip .

This approach is remarkable for several reasons. First of all, the parametrization W of a neural network is typically determined during a training phase; afterwards it is fixed and only the input z varies in applications. This is based on the assumption that the set of suitable solutions to any of the problems mentioned above obeys some probability distribution and that the neural network approach is capable of reproducing this probability distribution by varying the inputs z of a trained network bora2017compressed . The DIP approach, however, uses a fixed input and aims at scanning suitable solutions by varying the parametrization W while z is kept fixed. Secondly, as said above, DIP approaches do not use large sets of supervised training data, but rely on a single unsupervised data point. Thirdly, one might assume that this is only possible if the network architecture is fine-tuned to the specific task. This is partially true; nevertheless the presented numerical results perform well with somewhat generic network architectures such as autoencoder networks or even general convolutional feedforward networks.

In this paper we are interested in the analysis of DIP approaches and in particular in proving some convergence properties for iteratively minimizing (1). We will do so in the context of inverse problems, which are modeled by a non-linear or linear operator A between Hilbert spaces X and Y. Contrary to the applications in image processing mentioned above, we assume that the range of A is not closed, which implies that the inversion or any generalized type of inversion is ill-posed louis ; engl ; rieder . Typical examples are compact linear operators or parameter-to-state mappings for partial differential equations.

Naive applications of neural networks fail for even the most simple inverse problems, see maass2018trivial , but there is a growing number of very convincing numerical experiments using suitable network designs for some of the toughest inverse problems such as photo-acoustic tomography hauptmann2018model or X-ray tomography with very few measurements jin2017deep ; adler2018learned . Concerning networks based on deep prior approaches for inverse problems, first experimental investigations have been reported, see ulyanov2017dip ; van2018compressed . Similar to the above mentioned tasks in image processing, DIP for inverse problems is based on two ingredients:

1. A suitable network design, which leads to our phrase “regularization by architecture”.

2. Training algorithms for iteratively minimizing Expression (1) with respect to W in combination with a suitable stopping criterion.

In this paper we present different mathematical interpretations of DIP approaches (Section 2) and we analyze three network designs in the context of inverse problems in more detail.

Our paper is organized as follows. In the next section we discuss some interpretations of DIP approaches and we add a first mathematical result for a trivial network design, which yields a connection to Landweber iterations, which are known to converge for general inverse problems with arbitrary A. Readers solely interested in the mathematical part of the paper may jump directly to Section 3, where we consider a fully connected feedforward network with identical layers. This leads to the notion of analytic deep prior networks, where one can strictly analyze the convergence properties. The key to the theoretical findings is a change of view, which allows us to interpret DIP approaches as optimizing families of Tikhonov functionals. We will exemplify our theoretical findings with numerical examples for the standard linear integration operator (fully connected feedforward network) as well as for the challenging inverse problem posed by magnetic particle imaging (U-Net with skip connections, Section 4).

## 2 Deep inverse priors and their interpretations

We start with a description of the state of the art of deep image priors before addressing their generalization to inverse problems. In general, a feedforward neural network is an algorithm that starts with an input z=x0, computes iteratively

 xk+1=ϕ(Wkxk+bk)

for k=0,…,L−1 and outputs

 φW(z)=xL .

The parameters of this system are denoted by

 W={W0,..,WL−1,b0,..,bL−1}

and ϕ denotes a non-linear activation function.
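For concreteness, the generic recursion above can be sketched in a few lines of NumPy. This is an illustrative toy only (the layer sizes, random weights and the ReLU activation are assumptions, not the architectures used later in the paper):

```python
import numpy as np

def phi(x):
    # ReLU as an example activation function
    return np.maximum(x, 0.0)

def feed_forward(z, weights, biases):
    # x_{k+1} = phi(W_k x_k + b_k), output phi_W(z) = x_L
    x = z
    for W, b in zip(weights, biases):
        x = phi(W @ x + b)
    return x

rng = np.random.default_rng(0)
L, n = 3, 4  # assumed depth and width
weights = [rng.standard_normal((n, n)) for _ in range(L)]
biases = [rng.standard_normal(n) for _ in range(L)]
z = rng.standard_normal(n)  # the (fixed) input
out = feed_forward(z, weights, biases)
print(out.shape)  # (4,)
```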

In order to highlight one of the special features of deep image priors, let us first refer to classical generative networks, which e.g. are trained on large sets of “naturally-looking” images. There, W is fixed after training and one expects that changing the input z will generate different naturally looking images as well bora2017compressed . Hence, after training, the distribution of naturally looking images is parametrized by z. In contrast to that, deep image priors keep z fixed and aim at parametrizing the distribution of images with W.

In general, this is based on the underlying assumption that complex distributions, e.g. the distribution of natural images, can be obtained by transforming a simpler distribution. This may be the distribution of the coefficients of a deep network ulyanov2017dip or the coefficients of a wavelet representation, see e.g. fista ; unser2007fista ; moreover, it has been argued that “naturally-looking” images allow a sparse representation in such a basis. In a recent paper that makes use of DIP for compressed sensing van2018compressed , the parameter distribution of the network is additionally modeled by some Gaussians that are optimized to fit the distribution of some previously obtained image parametrizations.

To some extent, the success of deep image priors is based on a carefully designed network architecture. E.g. ulyanov2017dip used a U-Net-like “hourglass” architecture with skip connections. In their results they show that such an architecture can capture the statistics of natural images. However, this is still not enough to explain the amazing results they show. In general, the DIP learning process may converge towards noisy images or bad reconstructions. The whole success relies on a combination of the network architecture with a suitable stopping rule for the optimization method. The authors of ulyanov2017dip claim that the architecture has an impact on how the solution space is searched during the iterative optimization of the parameters that minimizes (1). In their experiments, ulyanov2017dip observed that the training process descends quickly to “naturally-looking” images while requiring many more steps to produce noisy images. Note that this can be somewhat supported by the theoretical findings of saxe2013exact .

This is also in line with the observations of zhang2016understanding , which shows that deep networks are able to fit noise very well, but need more training time to do so. They consider the CIFAR-10 krizhevsky2009learning classification task with different noise models and different common architectures like szegedy2016rethinking ; krizhevsky2012imagenet and a vanilla multilayer perceptron. Another paper that hints in this direction is anonymous2019on , which analyzes whether neural networks could have a bias towards approximating low frequencies.

### 2.1 A trivial remark

We now switch to deep priors for solving ill-posed inverse problems. For a given operator A, the general task in inverse problems is to recover an approximation for x† from measured noisy data

 yδ=Ax†+τ,

where τ, with ∥τ∥≤δ, describes the noise in the measurement.

The deep image prior approach to inverse problems asks to train a network φW with parameters W and fixed input z by minimizing (1). After training, a final run of the network computes φW(z) as an approximation to x†.

We start with an observation for the training process of a trivial network, which simply outputs its parameter, i.e. φW(z)=W. Hence, the approximate solution to the inverse problem is given by x=W. Thus W can be identified with an element in X, and training this network by gradient descent of (1) with respect to W is equivalent to the classical Landweber iteration, which is a gradient descent method for ∥Ax−yδ∥2 with respect to x.

Despite the obvious trivialization of the neural network approach, this shows that there is potential for training networks with a single data point. Moreover, Landweber iterations converge from rather arbitrary starting points, which indicates that the choice of z in the general case is indeed of minor importance.
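The equivalence can be checked numerically. The following sketch (a toy problem with an assumed random matrix) runs gradient descent on the trivial-network loss and the classical Landweber iteration side by side and confirms that the iterates coincide:

```python
import numpy as np

# Toy check: training the trivial network phi_W(z) = W by gradient descent
# on ||A W - y||^2 is exactly the Landweber iteration
# x_{k+1} = x_k - eta * A^T (A x_k - y).
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
y = rng.standard_normal(20)
eta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size below 2 / ||A||^2

W = np.zeros(10)  # the network "parameters", identified with an element of X
x = np.zeros(10)  # the Landweber iterate
for _ in range(200):
    W = W - eta * A.T @ (A @ W - y)  # gradient step on the DIP loss
    x = x - eta * A.T @ (A @ x - y)  # classical Landweber step

print(np.allclose(W, x))  # True: the two iterations coincide
```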

### 2.2 Two Perspectives Based on Regression

In this subsection we present two different perspectives on solving inverse problems with the DIP via the minimization of a functional as discussed in the subsection above. The first perspective is based on a reinterpretation of the minimization of the functional

 ∥AφW(z)−yδ∥2 (2)

in the finite, real setting, i.e. X=Rn, Y=Rm. This setting lets us write

 minW ∥AφW(z)−yδ∥2 = minx∈R(φ⋅(z)) ∥Ax−yδ∥2 (3)
                   = minx∈R(φ⋅(z)) ∑i=1m (x∗ai−yδi)2 (4)

where R(φ⋅(z)) denotes the range of the network with regard to W for a fixed z, ai denotes the rows of the matrix A and yδi the entries of the vector yδ.

This allows for the interpretation that we are solving a linear regression, parametrized by W, which is constrained by a deep learning hypothesis space and given by data pairs of the form (ai,yδi).

The second perspective is based on a rewriting of the optimization problem via the method of Lagrange multipliers. We start by considering the constrained optimization problem

 minx∈X,W∥Ax−yδ∥2 s.t. ∥x−φW(z)∥2=0. (5)

If we now assume that φW(z) has continuous first partial derivatives with regard to W, the Lagrange functional

 L(W,x,λ)=∥Ax−yδ∥2+λ∥x−φW(z)∥2, (6)

with the correct Lagrange multiplier λ, has a stationary point at each minimum of the original constrained optimization problem. This gives us a direct connection to unconstrained variational approaches like Tikhonov functionals.

### 2.3 The Bayesian point of view

The Bayesian approach to inverse problems focuses on computing MAP (maximum a posteriori probability) estimators, i.e. one aims for

 ^x=argmaxx∈Xp(x|yδ), (7)

where p(x|yδ) is a conditional PDF. From standard Bayesian theory we obtain

 ^x=argminx∈X{−log[p(yδ|x)]−log[p(x)]} . (8)

The setting for inverse problems, i.e. yδ=Ax+τ with Gaussian noise τ, yields

 ^x = argminx∈X ∥Ax−yδ∥2−λlog[p(x)] .

We now decompose x∈X into x=xN+x⊥, xN∈N(A), x⊥∈N(A)⊥, where PN, resp. P⊥, denotes the orthogonal projection onto N(A), resp. N(A)⊥. Identifying p(x) with the joint density p(xN,x⊥) yields

 ^x = argminx∈X ∥Ax⊥−yδ∥2−λlog p(xN,x⊥)
    = argminx∈X ∥Ax⊥−yδ∥2−λlog p(x⊥)−λlog p(xN|x⊥)
    = argminx∈X [∥Ax⊥−yδ∥2−λlog p(x⊥)](I) + [−λlog p(xN|x⊥)](II).

The data yδ only contains information about x⊥, which in classical regularization is exploited by restricting any reconstruction to N(A)⊥.

However, if available, p(xN|x⊥) is a measure on how to extend x⊥ with an xN to a suitable x. The classical regularization of inverse problems simply uses an extension by zero, i.e. xN=0, which is not necessarily optimal. If we accept the interpretation that a network can be a meaningful parametrization of the set of suitable solutions, then p(x)=0 for all x not in the range of the network, and optimizing the network will indeed yield a non-trivial completion xN. More precisely, (I) can be interpreted as a deep prior on the measurement and (II) as a deep prior on the nullspace part of the problem.

## 3 Deep inverse priors and Tikhonov functionals

### 3.1 A general setting for deep priors for inverse problems

In this section we consider linear operators A and aim at rephrasing DIP, i.e. the minimization of (1) with respect to W, as an approach for learning optimized Tikhonov functionals. This change of view, i.e. regarding deep inverse priors as an optimization of functionals rather than networks, opens the way for analytic investigations.

We now follow the well-known approach of LISTA lista (learned iterative soft-thresholding algorithm) or learned PG (proximal gradient) methods as stated in Appendix I. Note that our loss function is identical with the loss functions used in LISTA or learned PG methods, but uses only a single data point. More precisely, we use the particular architecture of a fully connected feedforward network with layers of identical size defined by

 φW(z)=xL, (9)

where

 xk+1=ϕ(Wxk+b) (10)

for k=0,…,L−1 and x0=z. I.e., the affine linear map x↦Wx+b is the same for all layers. Moreover, the activation function of the network is chosen as the proximal mapping of a regularizing functional R, we restrict the matrix W by enforcing W=I−λB∗B (I denotes the identity operator) for some operator B, and the bias is determined via b=λB∗yδ.

###### Remark 1

This kind of generalized ISTA or PG scheme converges for rather arbitrary starting points, i.e. the particular choice of z in the deep prior context is indeed of minor importance.

###### Remark 2

Restricting activation functions to be proximal mappings is not as severe as it might look at first glance. E.g. ReLU is the proximal mapping for the indicator function of positive real numbers and soft shrinkage is the proximal mapping for the modulus function.
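Both identities in the remark can be verified numerically. The following sketch (a brute-force grid minimization, purely illustrative) compares the prox objective with the closed forms ReLU and soft shrinkage:

```python
import numpy as np

# prox_f(x) = argmin_u 1/2 (u - x)^2 + f(u), approximated on a fine grid.
def prox_numeric(f, x, grid):
    vals = 0.5 * (grid - x) ** 2 + f(grid)
    return grid[np.argmin(vals)]

grid = np.linspace(-5, 5, 200001)
lam = 0.7
for x in [-2.3, -0.1, 0.4, 3.0]:
    # indicator of the positive half-line -> prox is ReLU
    indicator = lambda u: np.where(u >= 0, 0.0, np.inf)
    assert abs(prox_numeric(indicator, x, grid) - max(x, 0.0)) < 1e-3
    # lam * |u| -> prox is soft shrinkage
    soft = np.sign(x) * max(abs(x) - lam, 0.0)
    assert abs(prox_numeric(lambda u: lam * np.abs(u), x, grid) - soft) < 1e-3
print("prox identities verified")
```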

In this setting, the output of the network is identical to the L-th iterate of a proximal gradient descent method for minimizing

 JB(x)=12∥Bx−yδ∥2+αR(x), (11)

see Appendix I. Hence, updating W or B changes the discrepancy term in this Tikhonov functional.
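As a sanity check, the layer map can be iterated numerically. The sketch below is a toy instance (random B, dimensions and parameters are assumptions) with R(x)=½∥x∥², whose prox is z↦z/(1+λα); iterating xk+1=Prox(Wxk+b) with W=I−λB∗B and b=λB∗yδ drives the network output to the minimizer of JB:

```python
import numpy as np

# Layered network = proximal gradient iteration for
# J_B(x) = 1/2 ||B x - y||^2 + alpha/2 ||x||^2 (i.e. R(x) = 1/2 ||x||^2).
rng = np.random.default_rng(2)
n = 8
B = rng.standard_normal((n, n))
y = rng.standard_normal(n)
alpha = 0.5
lam = 1.0 / np.linalg.norm(B, 2) ** 2  # step size of the PG scheme

W = np.eye(n) - lam * B.T @ B          # W = I - lam * B^T B
b = lam * B.T @ y                      # b = lam * B^T y

x = rng.standard_normal(n)             # the (arbitrary) fixed input z
for _ in range(5000):                  # many identical layers, L -> infinity
    x = (W @ x + b) / (1.0 + lam * alpha)   # prox of lam*alpha*R is scaling

x_closed = np.linalg.solve(B.T @ B + alpha * np.eye(n), B.T @ y)
print(np.allclose(x, x_closed, atol=1e-6))  # True: output is the minimizer of J_B
```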

In this section we neglect the difference between the L-th iterate and the limit of the iteration and assume that the PG scheme has fully converged, i.e.

 φW(z)=argminJB(x). (12)

This leads to the definition of analytic deep priors.

###### Definition 1

Consider a fully connected neural network with L layers, whose activation function is a proximal mapping with respect to a convex functional R, i.e.

 φW(z)=xL, (13)

where

 xk+1=ProxαλR(Wxk+b) (14)

and x0=z. Further assume that W can be decomposed as W=I−λB∗B with a bounded operator B and that the bias satisfies b=λB∗yδ. We define the associated Tikhonov functional JB and assume that a unique minimizer x(B) exists. We call this setting an analytic deep prior if W, resp. B, is trained from the single data point yδ by gradient descent applied to

 minB∥Ax(B)−yδ∥2. (15)

We now examine the training process for computing W, resp. B, in the setting of such analytic deep prior models. This can be either regarded as training a neural network or as determining an optimized Tikhonov functional. In the following we focus on the minimization with respect to B and then set x(B)=φW(z), which yields a more convenient notation. Hence, the training of the network for given data yδ is achieved by a gradient descent method with respect to B for the loss function

 F(B)=12∥Ax(B)−yδ∥2\ \ s.t.\ \ x(B)=argminx JB(x). (16)

The stationary points are characterized by ∂F(B)=0, and gradient descent iterations with stepsize η are given by

 Bℓ+1=Bℓ−η∂F(Bℓ). (17)

Hence we need to compute the derivative of x(B) with respect to B.

###### Lemma 1

Consider an analytic deep prior network. We define

 x(B)=argmin JB(x), (18)
 ψ(x,B)=ProxλαR(x−λB∗(Bx−yδ))−x. (19)

Then

 ∂F(B)=∂x(B)∗A∗(Ax(B)−yδ) (20)

with

 ∂x(B)=−ψx(x(B),B)−1ψB(x(B),B), (21)

 Bℓ+1=Bℓ−η∂F(Bℓ). (22)
###### Proof

F is a functional which maps operators to real numbers; hence, its derivative is given by

 ∂F(B)=∂x(B)∗A∗(Ax(B)−yδ),

which follows from classical variational calculus, see e.g. engl .

The derivative of x(B) with respect to B can be computed using the fixed point condition for a minimizer of JB:

 ProxλαR(x(B)−λB∗(Bx−yδ))−x(B)=0,

which is equivalent to

 ψ(x(B),B)=0.

We apply the implicit function theorem and obtain the derivative

 ∂x(B)=−ψx(x(B),B)−1ψB(x(B),B).

Combining both expressions yields the required result.

This lemma allows us to obtain an explicit description of the gradient descent for W or B, which in turn leads to an iteration of functionals JBℓ and minimizers x(Bℓ). We will now exemplify this derivation for a rather academic example, which however highlights in particular the differences between a classical Tikhonov minimizer, i.e.

 xT=argminJA(x)=argmin ½∥Ax−yδ∥2+αR(x),

and the solution of the DIP approach.

#### 3.1.1 Example

In this example we examine analytic deep priors for linear inverse problems, i.e. linear operators A, and

 R(x)=½∥x∥2. (23)

The rather abstract characterization of the previous section can be made explicit for this setting. We consider the classical Tikhonov regularization, i.e.

 xT=(A∗A+αI)−1A∗yδ. (24)

This is equivalent to the solution obtained by the analytic deep prior approach with B=A without any iteration. We then take B0=A as a starting point. First of all, we compute one step of gradient descent for minimizing F with respect to B and see how the resulting x(B1) differs from xT.

The proximal mapping corresponding to the functional above is given by

 ProxλαR(z)= z/(1+λα) . (25)

We use the explicit description of the iteration

 Bℓ+1=Bℓ−η∂F(Bℓ) (26)

with

 ∂F(B)=∂x(B)∗A∗(Ax(B)−yδ) (27)

In this case x(B), which is a mapping from L(X,Y) to X, can be computed explicitly as

 x(B)=argminJB(x)=(B∗B+αI)−1B∗yδ. (28)

The derivative of x(B) with respect to B is a linear map ∂x(B): L(X,Y)→X. For a perturbation δB we obtain

 ∂x(B)(δB)=−(B∗B+αI)−2(δB∗B+B∗δB)B∗yδ+(B∗B+αI)−1δB∗yδ. (29)

The adjoint operator [∂x(B)]∗ is a mapping from X to L(X,Y), which can be derived from the defining relation

 ⟨∂x(B)(δB),z⟩X=⟨δB,[∂x(B)]∗z⟩L(X,Y) . (30)

Hence,

 [∂x(B)]∗z=−BB∗yδz∗(B∗B+αI)−2−B(B∗B+αI)−2z(yδ)∗B+yδz∗(B∗B+αI)−1. (31)

Here, an expression of the form yδz∗ denotes the rank-one linear map which maps an x∈X to ⟨z,x⟩yδ.

First of all, we now aim at determining explicitly

 ∂F(B)=∂x(B)∗A∗(Ax(B)−yδ) (32)

at the starting point of our iteration, i.e. with B0=A. I.e., we initialize the iteration with x(B0)=x(A)=xT, where xT is the classical Tikhonov regularization. Inserting this yields

 A∗(Ax(A)−yδ)=−α(A∗A+αI)−1A∗yδ. (33)

From this follows a rather lengthy expression for the gradient:

 ∂F(A) = ∂x(A)∗A∗(Ax(A)−yδ) (34)
       = αAA∗yδ(yδ)∗A(A∗A+αI)−3 + αA(A∗A+αI)−3A∗yδ(yδ)∗A − αyδ(yδ)∗A(A∗A+αI)−2. (35)

and we can compute the update

 B1=A−η∂F(A) (36)

as well as the output of the analytic deep prior approach after one iteration of updating B (assuming a suitably chosen stepsize η)

 x(B1)=(B∗1B1+αI)−1B∗1yδ. (37)

This expression nicely collapses if B∗ commutes with A∗. For illustration we assume the rather unrealistic case that yδ=(σ+δ)v, where v is a singular function for A with singular value σ. The dual singular function is denoted by u, i.e. Au=σv and A∗v=σu, and we further assume that the measurement noise in yδ is in the direction of this singular function. In this case, the problem is indeed one-dimensional and we obtain an iteration restricted to the span of u, resp. the span of v.

A lengthy computation exploiting Au=σv and A∗v=σu shows that the singular value βℓ of Bℓ obeys the iteration

 βℓ+1 = βℓ − η σ(σ+δ)² (α+βℓ²−σβℓ)(βℓ²−α) / (βℓ²+α)³ ,

i.e.

 Bℓ+1=Bℓ−cℓvu∗ (38)

with

 cℓ = c(α,δ,σ,η) = η σ(σ+δ)² (α+βℓ²−σβℓ)(βℓ²−α) / (βℓ²+α)³ .

This iteration has a contracting fixed point at β∗=√α, which yields

 x(B∗) = (σ+δ)/(2√α) u

as opposed to the Tikhonov minimizer xT = σ(σ+δ)/(σ²+α) u.

For fixed α, the value β∗=√α is the maximizer of β/(β²+α) for varying β. Our artificial setting is one-dimensional and hence well-posed, i.e. we do not want to regularize or diminish the reconstruction. In this sense, DIP iterations converge to the optimal value among Tikhonov-type regularizers for this one-dimensional case.
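The one-dimensional iteration can be reproduced numerically. The sketch below (with assumed parameter values σ=1, δ=0.1, α=0.64, so that σ²&lt;4α) performs gradient descent on the reduced loss F(β)=½(σ·ξ(β)−(σ+δ))², where ξ(β)=β(σ+δ)/(β²+α) is the coefficient of x(B) along the singular function, and recovers the fixed point β=√α:

```python
import numpy as np

# Reduced 1-D model of the analytic deep prior iteration (illustrative values).
sigma, delta, alpha, eta = 1.0, 0.1, 0.64, 0.5
beta = sigma  # start at B_0 = A, i.e. beta_0 = sigma

for _ in range(5000):
    xi = beta * (sigma + delta) / (beta**2 + alpha)  # coefficient of x(B)
    r = sigma * xi - (sigma + delta)                 # residual A x(B) - y^delta
    dxi = (sigma + delta) * (alpha - beta**2) / (beta**2 + alpha) ** 2
    beta -= eta * r * sigma * dxi                    # gradient step on F

print(beta)  # converges to sqrt(alpha) = 0.8
print(beta * (sigma + delta) / (beta**2 + alpha))  # (sigma+delta)/(2*sqrt(alpha)) = 0.6875
```

Note that the limit 0.6875 exceeds the Tikhonov value σ(σ+δ)/(σ²+α) ≈ 0.67, illustrating the difference between the two minimizers in this toy case.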

### 3.2 Some numerical experiments

We now use the analytic deep inverse prior approach for solving an inverse problem with the following integration operator

 (Ax)(t)=∫₀ᵗ x(s) ds. (39)

A is linear and compact, hence the inverse problem is ill-posed. We let the matrix A be a discretization of the operator and choose x† to be one of its singular vectors. Then we set the noisy data yδ=Ax†+τ with Gaussian noise τ, whose magnitude δ equals a small fraction of the largest coefficient of Ax†, see Figure 1.
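A minimal discretization sketch (assumed uniform grid on [0,1], not necessarily the paper's exact setup) shows the typical symptom of ill-posedness, namely rapidly decaying singular values, and how a classical Tikhonov reconstruction of a singular-vector phantom is formed:

```python
import numpy as np

# (Ax)(t) = int_0^t x(s) ds discretized as a scaled lower-triangular matrix.
n = 100
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))

# Compactness of the continuous operator shows up as decaying singular values.
U, s, Vt = np.linalg.svd(A)
print(s[0], s[-1])  # smallest singular value is tiny compared to the largest

# Ground truth: one of the singular vectors (index 5 is an assumption).
rng = np.random.default_rng(3)
x_true = Vt[5]
y = A @ x_true
y_noisy = y + 0.01 * np.abs(y).max() * rng.standard_normal(n)  # assumed noise level

# Classical Tikhonov reconstruction x_T = (A^T A + alpha I)^{-1} A^T y^delta.
alpha = 1e-4
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_noisy)
print(np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true))  # relative error
```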

We aim to recover x† from yδ considering the setting established in Def. 1 for L→∞ and a fixed value of α. That means the solution is parametrized by the weight matrix B of the network, i.e. x(B)=φW(z) with W=I−λB∗B and some arbitrary fixed input z. Solving the inverse problem is now equivalent to training the network φW, i.e. finding an optimal B, to minimize the loss function (1) for the single data point yδ. Afterwards, the reconstruction is obtained by computing x(B).

In order to properly update B by back-propagation, the network was implemented taking special care to ensure that (12) holds. For more details please refer to Appendix II.

In Figure 2 some reconstruction results are shown. The first plot contains the true solution x†, the standard Tikhonov solution xT and the reconstruction x(B̂) obtained with the analytic deep inverse approach after the training seemed to have converged; in this case that means B̂ is the resulting matrix after training. Figure 2 provides some additional reconstructions and plots depicting:

• The true error of the network’s output x(Bℓ) after each update of B, on a logarithmic scale.

• The Frobenius norm of Bℓ after each update of B.

• The final matrix B̂.

For all choices of α, the training of B converges to a matrix B̂ such that x(B̂) has a smaller true error than xT. As can be observed in the last plot, B̂ contains some patterns that reflect what was previously obtained in Equation 38. The analytic deep inverse prior approach seems in principle to behave differently than the Tikhonov approach and shows some promising results.

So far, we considered the regularization parameter α to be fixed. In a real application one needs to choose it by a technique such as the L-curve hansen1993lcurve or the discrepancy principle. However, these usually involve finding reconstructions for many different values of α. In our case, that would mean retraining the network each time, which would amount to high computational costs. This motivates an adaptive choice of α during the optimization, which could be achieved by letting α also be a trainable weight of the network. The results for the same example and different starting values of α are shown in Figure 3.

Since we are minimizing (1), it now becomes important to choose when to stop the training, otherwise α will likely converge to 0 and x(B) will converge to an unregularized solution, which would result in bad reconstructions. We did that by monitoring the trajectory/change of α over the course of the training. We stopped at an iteration chosen to be close to the corner of the trajectory depicted in Figure 3, i.e. when α starts decreasing very slowly. We discuss this further in Appendix II.

## 4 Deep priors for magnetic particle imaging

In the remainder of this paper we apply a deep inverse prior network for solving the reconstruction problem in magnetic particle imaging (MPI). MPI is an imaging modality based on injecting ferromagnetic nanoparticles, which are subsequently transported by the blood flow. Reconstructing the resulting spatial distribution c of those nanoparticles is based on exploiting the nonlinear magnetization behavior of ferromagnetic nanoparticles Gleich2005 .

More precisely, one applies a magnetic field, which is a superposition of a static gradient field, which generates a field-free point (FFP), and a highly dynamic spatially homogeneous field, which moves the FFP in space. The mean magnetic moment of the nanoparticles in the neighborhood of the FFP generates an electromagnetic field, whose voltages can be measured by so-called receive coils. The time-dependent measurements v(t) in the receive coils constitute the data for the inversion process, i.e. for reconstructing c.

MPI benefits from a high temporal resolution and a potentially high spatial resolution which makes it suitable for several in-vivo applications, such as imaging blood flow weizenecker2009three ; khandhar2017evaluation , instrument tracking haegele2012magnetic and guidance Salamon:2016cz , flow estimation franke2017 , cancer detection Yu2017 and treatment by hyperthermia murase2015usefulness .

Due to the nonmagnetic coating of the nanoparticles, which largely suppresses particle-particle interactions, MPI is usually modeled by a linear Fredholm integral equation of the first kind describing the relationship between particle concentration and the measured voltage. After removing the voltage induced by the applied magnetic field, one obtains the measured signal of the ℓ-th receive coil as

 vδℓ(t)=vℓ(t)+τ

where

 vℓ(t) = (Sℓc)(t) := ∫Ω c(x) sℓ(x,t) dx,

with some noise τ and sℓ the kernel of the linear operator Sℓ. Combining the measurements of all receive coils yields – after discretization and applying the Fourier transform – a linear system of equations. Typically, the rows of the system matrix are normalized, resulting in the final form of the linearized inverse problem denoted by

 Ac=yδ. (40)

This is a coarse simplification of the physical setup, which neglects non-linear magnetization effects of the nanoparticles as well as the non-homogeneity of the spatial sensitivity of the receive coils and also the small, but non-negligible particle-particle interactions. Hence, this is a perfect setup for exploiting the potential of neural networks for matching complex and high-dimensional non-linear models.

For a more detailed introduction to MPI, details on data preprocessing as well as on the implementation using Tensorflow

tensorflow2015-whitepaper , see Appendix III. In this section we just report some numerical results.

We test the capability of the deep image prior approach to improve image reconstructions obtained by standard Tikhonov regularization. For the experiments we use the dataset (D1) and, in Appendix III, also (D2), both generated by the Bruker preclinical MPI system at the University Medical Center Hamburg-Eppendorf, see Appendix III. We use the deep image prior network introduced by ulyanov2017dip , specifically their U-Net architecture. For further details on this architecture we refer to Appendix III and ulyanov2017dip .

The measured voltages are given as a sequence of complex numbers, hence we split up our loss function into the form

 ∥AϕW(z)−yδ∥2 =∥R(A)ϕW(z)−R(yδ)∥2 (41) +∥I(A)ϕW(z)−I(yδ)∥2, (42)

where R(⋅) and I(⋅) denote the real and imaginary parts, respectively.
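The splitting is exact whenever the reconstruction ϕW(z) is real-valued, as a quick numerical check (random complex A and yδ, purely illustrative) confirms:

```python
import numpy as np

# For real-valued x, ||Ax - y||^2 over the complex numbers equals the sum of
# the squared residuals of the real and imaginary parts.
rng = np.random.default_rng(4)
A = rng.standard_normal((30, 10)) + 1j * rng.standard_normal((30, 10))
y = rng.standard_normal(30) + 1j * rng.standard_normal(30)
x = rng.standard_normal(10)  # real-valued reconstruction

full = np.linalg.norm(A @ x - y) ** 2
split = (np.linalg.norm(A.real @ x - y.real) ** 2
         + np.linalg.norm(A.imag @ x - y.imag) ** 2)
print(np.isclose(full, split))  # True
```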

For comparison to our deep inverse prior MPI reconstructions, we also compute sparse and classical Tikhonov reconstructions. We produce the Tikhonov reconstruction, usually associated with the minimization of the functional

 ∥Ac−yδ∥2+λ∥c∥2, (43)

via the algebraic reconstruction technique (Kaczmarz), as generalized to allow for a non-negativity constraint by dax1993row . We produce the sparsity reconstruction, usually associated with the minimization of the functional

 ∥Ac−yδ∥2+λ∥c∥1, (44)

via simply implementing this functional in Tensorflow and minimizing it via gradient descent. In the end we project each entry onto the non-negative reals.
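A minimal NumPy sketch of such a sparsity reconstruction (dimensions, phantom and parameters are assumptions; for stability we use proximal gradient / ISTA steps, i.e. gradient steps on the data term combined with the soft-shrinkage prox of the ℓ1 term, followed by the final non-negativity projection):

```python
import numpy as np

def soft(x, t):
    # soft shrinkage, the proximal mapping of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 60))
c_true = np.zeros(60)
c_true[[3, 17, 42]] = [1.0, 2.0, 0.5]  # sparse, non-negative phantom
y = A @ c_true + 0.01 * rng.standard_normal(40)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
c = np.zeros(60)
for _ in range(2000):
    c = soft(c - step * A.T @ (A @ c - y), step * lam)  # ISTA step
c = np.maximum(c, 0.0)  # final projection onto c >= 0

print(np.linalg.norm(c - c_true))  # small reconstruction error
```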

We start by presenting direct comparisons of the Kaczmarz, sparsity and DIP reconstructions in Table 1. Beneath each image we state the parameters we used for the reconstruction, where ∥⋅∥F denotes the Frobenius norm, λ is the regularization parameter as used in Equations 43 or 44 and lr the learning rate used for Adam. For DIP we always used early stopping after a fixed number of optimization steps; the images started to deteriorate slowly for more iterations.

As one can see, the results presented in Table 1 are roughly on par with, if not even better than, the classical regularization methods. In particular we want to point out the reconstruction of the “2mm” phantom for which only the DIP approach achieves a separation of the two different lines.

We now want to compare the DIP-based reconstruction process with a Landweber reconstruction with early stopping in more detail. Here we use a phantom in the shape of a “one” provided in (D1). Specifically, we compare a DIP reconstruction obtained with Adam kingma2014adam with a DIP reconstruction obtained with gradient descent and with the previously mentioned Landweber reconstruction, see Table 2. To compare them we present the following quantities:

1. Error: The quantity we are minimizing over the iterations of the optimization, i.e. ∥Ac(ℓ)−yδ∥2 (y-axis) over ℓ (x-axis), where ℓ is the iteration index and c(ℓ) the reconstruction at iteration ℓ.

2. L-curve: The path of the points (∥Ac(ℓ)−yδ∥, ∥c(ℓ)∥) over ℓ, where the residual norm is on the x-axis and ∥c(ℓ)∥ on the y-axis. Note that these paths tend to start in the bottom right corner and move gradually towards the upper left corner.

3. Change: The change in c(ℓ) that happens over the course of the optimization, separately displayed for the spaces N(A) and N(A)⊥, i.e. we plot ∥PN(A)(c(ℓ+1)−c(ℓ))∥ (y-axis) and ∥PN(A)⊥(c(ℓ+1)−c(ℓ))∥ (y-axis) over ℓ (x-axis). Here PV denotes the orthogonal projection onto the space V.

4. Errors per Singular Value: For each iteration ℓ we plot the normalized errors associated with each of the one-dimensional linear subspaces spanned by the singular vectors, ordered by the size of their associated singular values, starting with the biggest singular value. I.e., for the singular value decomposition of A, where the singular values form an ordered diagonal matrix, we plot the error component in each singular subspace over the subspace index and the iteration index ℓ.

5. Final Reconstruction: The reconstruction at the end of the optimization, i.e. c(L), where L is the total number of iterations.

As one can see, the error curves of the three different methods displayed in Table 2 look quite similar, although the error curve of Adam shows, as one would expect from Adam, minor disruptions. Interestingly, these disruptions line up very well with the disruptions of the change curve of Adam. We found that a DIP reconstruction tends to produce good results when the choice of the optimizer and its learning rate leads the changes in the null space to be roughly of the same order of magnitude as the changes in the orthogonal complement of the null space. This is the case for our two DIP examples in the table. For the plain Landweber method we used the L-curve to decide when to stop the optimization. We also present the L-curves for the DIP reconstructions. For DIP with gradient descent, the L-curve starts to stagnate in the upper left corner, and we could not observe any “exploding” behavior. For DIP with Adam, one can see that the L-curve starts to form a “blob”. In our experiments this blob extended slightly further upward if one used further iterations. The visually best results were usually reached shortly after the “blob” formation started (in this case, for example, at ca.  iterations). Table 2 also presents the “errors per singular value”. We show them since one often uses so-called filter functions engl , defined on the singular values, to describe the properties of regularization methods for linear inverse problems. These filter functions specify how the regularization methods deal differently with the minimization of the errors associated with different singular values – more precisely, with the different one-dimensional subspaces spanned by the singular vectors. We would like to point out that the DIP reconstructions allow for much bigger errors in subspaces associated with large singular values.
This hints at the DIP being influential in these subspaces, since for the plain Landweber approach one can clearly see a flat region of small errors for the large singular values at later stages of the optimization.
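The per-singular-value error analysis described above can be sketched as follows. This is an illustrative reconstruction of the idea, not the paper's code: `A`, `x_rec`, and `x_true` are placeholder names, and the toy operator and reconstruction stand in for the actual experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.tril(np.ones((n, n))) / n                 # toy integration-like forward operator
x_true = np.sin(np.linspace(0, np.pi, n))        # toy ground truth
x_rec = x_true + 0.05 * rng.standard_normal(n)   # stand-in for a DIP reconstruction

# A = U diag(s) V^T with s in decreasing order; the rows of Vt span the
# one-dimensional subspaces associated with the singular values.
U, s, Vt = np.linalg.svd(A)

# Error component of the reconstruction in the i-th singular subspace.
err_per_sv = np.abs(Vt @ (x_rec - x_true))
```

Since V is orthogonal, the Euclidean norm of `err_per_sv` equals the total reconstruction error; plotting it against the singular value index (and over iterations) yields the "errors per singular value" curves discussed above.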

## 5 Summary and conclusion

In this paper we investigated the concept of deep inverse priors / regularization by architecture. To our knowledge this is the only deep learning method for solving inverse problems that does not require massive amounts of data, which one usually can only acquire once the inverse problem has already been solved. In fact, the method requires only a single measurement. We started out by giving different qualitative interpretations of what a regularization is and, specifically, of how regularization by architecture fits into this context.

We followed up with the introduction of the analytic deep inverse prior, specifically showing how proximal gradient descent based architectures, not unlike LISTA, allow for a somewhat transparent regularization by architecture. In particular, we showed that their results can be interpreted as solutions of optimized Tikhonov functionals. This was further investigated with an academic example, where we implemented the analytic deep inverse prior and tested its numerical applicability. The experiments confirmed our mathematical findings and produced promising results.

To conclude, we applied a deep image prior to the real-world problem of computing reconstructions in magnetic particle imaging. We found that this type of regularization compares very well to established and widely used regularization methods, in some cases even surpassing them.

There is obviously, as in deep learning in general, still much work to be done in order to gain a good understanding of deep inverse priors, but we see much potential in the idea of using deep architectures to regularize inverse problems, especially since an enormous part of deep learning research is already concerned with the understanding of deep architectures.

###### Acknowledgements.

The authors would like to thank P. Szwargulski and T. Knopp from the University Medical Center Hamburg-Eppendorf for their support in conducting the experiments for the dataset DS1. S. Dittmer and T. Kluth acknowledge the support by the Deutsche Forschungsgemeinschaft (DFG) within the framework of GRK 2224/1 “π³: Parameter Identification – Analysis, Algorithms, Applications”. D. Otero Baguer acknowledges the financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Projektnummer 276397488 – SFB 1232, sub-project “P02 – Heuristic, Statistical and Analytical Experimental Design”. Peter Maass acknowledges funding by the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 765374, sub-project “Data driven model adaptations of coil sensitivities in MR systems”.

## References

• (1) Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., Zheng, X.: TensorFlow: Large-scale machine learning on heterogeneous systems (2015). Software available from tensorflow.org
• (2) Adler, J., Öktem, O.: Learned primal-dual reconstruction. IEEE transactions on medical imaging 37(6), 1322–1332 (2018)
• (3) Anonymous: On the spectral bias of neural networks. In: Submitted to International Conference on Learning Representations (2019). Under review
• (4) Bathke, C., Kluth, T., Brandt, C., Maass, P.: Improved image reconstruction in magnetic particle imaging using structural a priori information. International Journal on Magnetic Particle Imaging 3(1), ID 1703015, 10 pages (2017).
• (5) Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009).
• (6) Bora, A., Jalal, A., Price, E., Dimakis, A.G.: Compressed sensing using generative models. In: Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 537–546 (2017).
• (7) Chollet, F., et al.: Keras. https://keras.io (2015)
• (8) Combettes, P., Wajs, V.: Signal recovery by proximal forward-backward splitting. Multiscale Modeling & Simulation 4(4), 1168–1200 (2005).
• (9) Dax, A.: On row relaxation methods for large constrained least squares problems. SIAM Journal on Scientific Computing 14(3), 570–584 (1993)
• (10) Engl, H.W., Hanke, M., Neubauer, A.: Regularization of inverse problems, Mathematics and its Applications, vol. 375. Kluwer Academic Publishers Group, Dordrecht (1996)
• (11) Erb, W., Weinmann, A., Ahlborg, M., Brandt, C., Bringout, G., Buzug, T.M., Frikel, J., Kaethner, C., Knopp, T., März, T., Möddel, M., Storath, M., Weber, A.: Mathematical analysis of the 1D model and reconstruction schemes for magnetic particle imaging. Inverse Problems 34(5), 055012, 21 pp. (2018)
• (12) Franke, J., Heinen, U., Lehr, H., Weber, A., Jaspard, F., Ruhm, W., Heidenreich, M., Schulz, V.: System characterization of a highly integrated preclinical hybrid mpi-mri scanner. IEEE Transactions on Medical Imaging 35(9), 1993–2004 (2016).
• (13) Franke, J., Lacroix, R., Lehr, H., Heidenreich, M., Heinen, U., Schulz, V.: Mpi flow analysis toolbox exploiting pulsed tracer information – an aneurysm phantom proof. International Journal on Magnetic Particle Imaging 3(1) (2017).
• (14) Gleich, B., Weizenecker, J.: Tomographic imaging using the nonlinear response of magnetic particles. Nature 435(7046), 1214–1217 (2005)
• (15) Gregor, K., LeCun, Y.: Learning fast approximations of sparse coding. In: ICML 2010 - Proceedings, 27th International Conference on Machine Learning, pp. 399–406 (2010)
• (16) Haegele, J., Rahmer, J., Gleich, B., Borgert, J., Wojtczyk, H., Panagiotopoulos, N., Buzug, T., Barkhausen, J., Vogt, F.: Magnetic particle imaging: visualization of instruments for cardiovascular intervention. Radiology 265(3), 933–938 (2012)
• (17) Hammernik, K., Klatzer, T., Kobler, E., Recht, M.P., Sodickson, D.K., Pock, T., Knoll, F.: Learning a variational network for reconstruction of accelerated mri data. Magnetic resonance in medicine 79(6), 3055–3071 (2018)
• (18) Hansen, P., O’Leary, D.: The use of the l-curve in the regularization of discrete ill-posed problems. SIAM Journal on Scientific Computing 14(6), 1487–1503 (1993)
• (19) Hauptmann, A., Lucka, F., Betcke, M., Huynh, N., Adler, J., Cox, B., Beard, P., Ourselin, S., Arridge, S.: Model-based learning for accelerated, limited-view 3-d photoacoustic tomography. IEEE transactions on medical imaging 37(6), 1382–1393 (2018)
• (20) Jin, K.H., McCann, M.T., Froustey, E., Unser, M.: Deep convolutional neural network for inverse problems in imaging. IEEE Transactions on Image Processing 26(9), 4509–4522 (2017)
• (21) Khandhar, A., Keselman, P., Kemp, S., Ferguson, R., Goodwill, P., Conolly, S., Krishnan, K.: Evaluation of peg-coated iron oxide nanoparticles as blood pool tracers for preclinical magnetic particle imaging. Nanoscale 9(3), 1299–1306 (2017)
• (22) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
• (23) Kluth, T.: Mathematical models for magnetic particle imaging. Inverse Problems 34(8), 083001 (2018).
• (24) Kluth, T., Jin, B., Li, G.: On the degree of ill-posedness of multi-dimensional magnetic particle imaging. Inverse Problems 34(9), 095006 (2018).
• (25) Kluth, T., Maass, P.: Model uncertainty in magnetic particle imaging: Nonlinear problem formulation and model-based sparse reconstruction. International Journal on Magnetic Particle Imaging 3(2), ID 1707004, 10 pages (2017).
• (26) Knopp, T.: Github mdf. Accessed: 2018-11-16
• (27) Knopp, T.: Open mpi data. Accessed: 2018-11-16
• (28) Knopp, T., Buzug, T.M.: Magnetic Particle Imaging: An Introduction to Imaging Principles and Scanner Instrumentation. Springer, Berlin/Heidelberg (2012)
• (29) Knopp, T., Gdaniec, N., Möddel, M.: Magnetic particle imaging: from proof of principle to preclinical applications. Physics in Medicine and Biology 62(14), R124 (2017)
• (30) Knopp, T., Hofmann, M.: Online reconstruction of 3D magnetic particle imaging data. Physics in Medicine and Biology 61(11), N257–67 (2016).
• (31) Knopp, T., Rahmer, J., Sattel, T.F., Biederer, S., Weizenecker, J., Gleich, B., Borgert, J., Buzug, T.M.: Weighted iterative reconstruction for magnetic particle imaging. Phys. Med. Biol. 55(6), 1577–1589 (2010)
• (32) Knopp, T., Viereck, T., Bringout, G., Ahlborg, M., von Gladiss, A., Kaethner, C., Neumann, A., Vogel, P., Rahmer, J., Möddel, M.: Mdf: Magnetic particle imaging data format. ArXiv e-prints 1602.06072v6, 1–15 (2018). URL http://arxiv.org/abs/1602.06072v6. Article, MDF
• (33) Konkle, J., Goodwill, P., Hensley, D., Orendorff, R., Lustig, M., Conolly, S.: A convex formulation for magnetic particle imaging x-space reconstruction. PLoS ONE 10(10), e0140137 (2015)
• (34) Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images. Tech. rep., Citeseer (2009)
• (35) Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp. 1097–1105 (2012)
• (36) Lampe, J., Bassoy, C., Rahmer, J., Weizenecker, J., Voss, H., Gleich, B., Borgert, J.: Fast reconstruction in magnetic particle imaging. Physics in Medicine and Biology 57(4), 1113–1134 (2012).
• (37) Louis, A.K.: Inverse und schlecht gestellte Probleme. Vieweg+Teubner Verlag, Wiesbaden (1989)
• (38) Maass, P.: Simple networks for trivial inverse problems. submitted for publication (2018)
• (39) März, T., Weinmann, A.: Model-based reconstruction for magnetic particle imaging in 2D and 3D. Inv. Probl. Imag. 10(4), 1087–1110 (2016)
• (40) Meinhardt, T., Möller, M., Hazirbas, C., Cremers, D.: Learning proximal operators: Using denoising networks for regularizing inverse imaging problems. In: IEEE International Conference on Computer Vision, pp. 1781–1790 (2017)
• (41) Murase, K., Aoki, M., Banura, N., Nishimoto, K., Mimura, A., Kuboyabu, T., Yabata, I.: Usefulness of magnetic particle imaging for predicting the therapeutic effect of magnetic hyperthermia. Open Journal of Medical Imaging 5(02), 85 (2015)
• (42) Nesterov, Y.: Lectures on Convex Optimization. Springer Optimization and Its Applications. Springer International Publishing (2019).
• (43) Rahmer, J., Weizenecker, J., Gleich, B., Borgert, J.: Analysis of a 3-D system function measured for magnetic particle imaging. IEEE Transactions on Medical Imaging 31(6), 1289–1299 (2012)
• (44) Rieder, A.: Keine Probleme mit inversen Problemen: eine Einführung in ihre stabile Lösung. Vieweg, Wiesbaden (2003)
• (45) Salamon, J., Hofmann, M., Jung, C., Kaul, M.G., Werner, F., Them, K., Reimer, R., Nielsen, P., vom Scheidt, A., Adam, G., Knopp, T., Ittrich, H.: Magnetic particle/magnetic resonance imaging: In-vitro MPI-guided real time catheter tracking and 4D angioplasty using a road map and blood pool tracer approach. PloS ONE 11(6), e0156899–14 (2016)
• (46) Saxe, A.M., McClelland, J.L., Ganguli, S.: Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120 (2013)
• (47) Storath, M., Brandt, C., Hofmann, M., Knopp, T., Salamon, J., Weber, A., Weinmann, A.: Edge preserving and noise reducing reconstruction for magnetic particle imaging. IEEE Transactions on Medical Imaging 36(1), 74–85 (2017).
• (48) Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826 (2016)
• (49) Them, K., Kaul, M.G., Jung, C., Hofmann, M., Mummert, T., Werner, F., Knopp, T.: Sensitivity enhancement in magnetic particle imaging by background subtraction. IEEE Trans. Med. Imag. 35(3), 893–900 (2016).
• (50) Ulyanov, D., Vedaldi, A., Lempitsky, V.S.: Deep image prior. CoRR abs/1711.10925 (2017). URL http://arxiv.org/abs/1711.10925
• (51) Van Veen, D., Jalal, A., Price, E., Vishwanath, S., Dimakis, A.G.: Compressed sensing with deep image prior and learned regularization. arXiv preprint arXiv:1806.06438 (2018)
• (52) Vonesch, C., Unser, M.: A fast iterative thresholding algorithm for wavelet-regularized deconvolution - art. no. 67010d. Wavelets Xii, Pts 1 And 2 6701, D7010–D7010 (2007)
• (53) Weizenecker, J., Gleich, B., Rahmer, J., Dahnke, H., Borgert, J.: Three-dimensional real-time in vivo magnetic particle imaging. Physics in Medicine and Biology 54(5), L1 (2009)
• (54) Yu, E.Y., Bishop, M., Zheng, B., Ferguson, R.M., Khandhar, A.P., Kemp, S.J., Krishnan, K.M., Goodwill, P.W., Conolly, S.M.: Magnetic particle imaging: A novel in vivo imaging platform for cancer detection. Nano Letters 17(3), 1648–1654 (2017).
• (55) Zhang, C., Bengio, S., Hardt, M., Recht, B., Vinyals, O.: Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530 (2016)

## Appendix I: A reminder on minimization of Tikhonov functionals and the LISTA approach

In this section we consider only linear operators, and we review the well-known theory for the Iterative Soft-Thresholding Algorithm (ISTA) as well as the slightly more general Proximal Gradient (PG) method combettes2005splitting ; nesterov04convex for minimizing Tikhonov functionals of the type

 J(x) = \tfrac{1}{2}\|Ax - y^\delta\|^2 + \alpha R(x). \qquad (45)

We recapitulate the main steps in deriving ISTA and PG, as far as we need them for our motivation. The necessary first-order condition for a minimizer is given by

 0 \in A^*(Ax - y^\delta) + \alpha\,\partial R(x). \qquad (46)

Multiplying with an arbitrary positive real number λ, adding x on both sides, and rearranging yields

 x - \lambda A^*(Ax - y^\delta) \in x + \lambda\alpha\,\partial R(x). \qquad (47)

For convex R, the term on the right-hand side is inverted by the (single-valued) proximal mapping of R, which yields

 \mathrm{Prox}_{\lambda\alpha R}\bigl(x - \lambda A^*(Ax - y^\delta)\bigr) = x. \qquad (48)

Hence this is a fixed point condition, which is a necessary condition for all minimizers of J. Turning the fixed point condition into an iteration scheme yields the PG method

 x_{k+1} = \mathrm{Prox}_{\lambda\alpha R}\bigl(x_k - \lambda A^*(Ax_k - y^\delta)\bigr) \qquad (49)
         = \mathrm{Prox}_{\lambda\alpha R}\bigl((I - \lambda A^*A)\,x_k + \lambda A^* y^\delta\bigr). \qquad (50)

This structure is also the motivation for LISTA lista approaches, where fully connected networks with internal layers of identical size are used. Moreover, in some versions of LISTA, the affine maps between the layers are assumed to be identical. The values at the k-th layer are denoted by x_k; hence,

 x_{k+1} = \varphi(Wx_k + b). \qquad (51)

LISTA then trains on some given training data. More precisely, it trains two matrices W and S such that

 x_{k+1} = \varphi(Wx_k + Sy^\delta). \qquad (52)

This derivation can be rephrased as follows.

###### Lemma 2

Let Φ denote a fully connected network of type (51) with input x_0 and L internal layers. Further assume that the activation function φ is identical to a proximal mapping Prox_{λαR} for a convex functional R. Assume W is restricted such that I − W is positive definite, i.e. there exists a matrix B such that

 I - W = \lambda B^*B. \qquad (53)

Furthermore, we assume that the bias term is fixed as b = λB^*y^δ. Then the network output x_L is the L-th iterate of an ISTA scheme with starting value x_0 for minimizing

 J_B(x) = \tfrac{1}{2}\|Bx - y^\delta\|^2 + \alpha R(x). \qquad (54)
###### Proof

Follows directly from equation (49).

Following these arguments, one can rephrase LISTA as a concept for learning the discrepancy term in a classical Tikhonov functional. This point of view opens further connections to variational approaches in deep learning, see e.g. hammernik2018learning ; meinhardt2017learning .
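The PG iteration (49)–(50) can be sketched in a few lines. This is a minimal illustrative implementation, assuming R(x) = ‖x‖₁ (whose proximal mapping is soft thresholding); all names are placeholders, not the paper's code.

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t*||.||_1: shrink each entry towards zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, alpha, n_iter=500):
    """ISTA / proximal gradient for 0.5*||Ax - y||^2 + alpha*||x||_1."""
    lam = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/||A||^2 ensures descent
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # x_{k+1} = Prox_{lam*alpha*R}( x_k - lam * A^T (A x_k - y) ), cf. (49)
        x = soft_threshold(x - lam * A.T @ (A @ x - y), lam * alpha)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]      # sparse toy signal
y = A @ x_true
x_hat = ista(A, y, alpha=0.1)
```

In the notation of Lemma 2, each loop iteration is exactly one network layer with W = I − λAᵀA, b = λAᵀy, and φ the soft-thresholding activation.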

## Appendix II: Numerical experiments

In this section we provide details about the implementation of the analytic deep inverse prior and the academic example.

Discretizing the integration operator yields a lower triangular matrix, with one constant value on the main diagonal, another constant value everywhere below the main diagonal, and zeros above. In our experiments we use a fixed discretization size.
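A matrix of the described structure can be built as follows. The specific entry values h and h/2 (with h the grid spacing) are an illustrative assumption matching a simple quadrature rule, since the paper's exact constants are not reproduced here.

```python
import numpy as np

def integration_matrix(n):
    """Lower triangular discretization of (Ax)(t) = integral of x from 0 to t.

    Assumed entries: h/2 on the main diagonal, h strictly below it, zeros above,
    with grid spacing h = 1/n. These values are illustrative, not the paper's.
    """
    h = 1.0 / n
    A = np.tril(np.full((n, n), h), k=-1)   # h strictly below the diagonal
    np.fill_diagonal(A, h / 2)              # h/2 on the diagonal
    return A

A = integration_matrix(100)
# Applying A to the constant function 1 approximates the ramp t -> t.
ramp = A @ np.ones(100)
```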

The analytic deep inverse prior network is implemented using Python and Tensorflow tensorflow2015-whitepaper . Initially, the weight matrix is created and then fully connected layers are added to the network, all having the same weight matrix, bias, and activation function given by (25). That means the total number of parameters of the network (the number of components in the weight matrix) is independent of the number of layers. For the experiments shown in the paper, the input is randomly initialized with small norm, and the step size is chosen as the reciprocal of the biggest eigenvalue of the corresponding normal matrix.

In order to guarantee that assumption (12) holds, the network should in principle have thousands of layers, because of the slow convergence of the PG method. However, this is prohibitive from the implementation point of view. During the training iterations, in which we update the weights, we therefore consider only a reduced network with a small number of layers, but we set the input to be the network’s output after the previous iteration. This is equivalent to adding new identical layers at the end of an implicit network which grows by a fixed number of layers at a time. After each iteration, we have implicitly created a deeper network; however, in the next iteration the weights are updated considering only back-propagation over the last layers.
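The "implicit growing network" trick above can be sketched as follows: instead of unrolling thousands of PG layers, keep a short unrolled block of `depth` layers and feed its output back in as the next input. All names are illustrative placeholders (the paper's implementation uses TensorFlow), and soft thresholding stands in for the activation (25).

```python
import numpy as np

def pg_layer(x, B, y, lam, alpha):
    """One proximal-gradient layer with soft-thresholding activation."""
    z = x - lam * B.T @ (B @ x - y)
    return np.sign(z) * np.maximum(np.abs(z) - lam * alpha, 0.0)

def run_block(x, B, y, lam, alpha, depth=10):
    """A short unrolled block of `depth` identical PG layers."""
    for _ in range(depth):
        x = pg_layer(x, B, y, lam, alpha)
    return x

rng = np.random.default_rng(0)
B = rng.standard_normal((20, 20)) / 5
y = rng.standard_normal(20)
lam, alpha = 1.0 / np.linalg.norm(B, 2) ** 2, 0.01

x = np.zeros(20)
for _ in range(50):                  # 50 restarts of a 10-layer block ...
    x = run_block(x, B, y, lam, alpha)
# ... are equivalent to one implicit 500-layer network applied to x_0 = 0,
# while back-propagation would only ever see the last 10 layers.
```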

Let the output of this implicit network after each iteration be as depicted in Figure 4. In order to properly update the weights by back-propagation, we need this output to be a good approximation of the corresponding minimizer. We checked this empirically by evaluating both quantities and computing their difference at each iteration. The results are shown in Figure 5. As can be observed, the error becomes considerably low after the first iterations.

For the adaptive choice of the step size, we set it as a variable in Tensorflow, i.e. at each training iteration it is updated by gradient descent together with the weights. The behavior expected from the setup in Def. 1 was indeed observed in the experiments, see Figure 3. What is interesting is that it first decays fast and then continues to decrease slowly. Moreover, the turning point corresponds to an iteration that is quite close to the optimal one when looking at the true error plot in Figure 3. Based on that, we used it as a stopping criterion.

## Appendix III: Magnetic particle imaging (MPI)

In this appendix we summarize the state of the art concerning analytic models for MPI and give a detailed description of the MPI experiments using deep inverse prior networks.

Precisely modeling MPI, i.e. formulating a physically accurate integral kernel for image reconstruction, is still an unsolved problem. Various modeling aspects, e.g. the magnetization dynamics and particle-particle interactions, make it a challenging task, such that the integral kernel is commonly determined in a time-consuming calibration procedure. For further information on the modeling aspects, the interested reader is referred to the survey paper Kluth2018b as well as to the review article Knopp2017 for further details on the MPI methodology.

A first step towards a theoretical understanding of this problem, while excluding the temporal dependence, can be found in maerz2015modelbased . A one-dimensional problem setup considering the time-dependency of the measurements was analyzed in ErbWeinmannAhlborg:2017 for one particular trigonometric excitation. Furthermore, the multi-dimensional imaging problem for different dynamic magnetic field patterns was analyzed in the context of inverse problems regarding the best possible degree of ill-posedness Kluth2018a . The theoretical investigations so far (based on the simplified equilibrium model Kluth2018b ) conclude that the MPI standard setup, which uses a superposition of trigonometric dynamic field excitations and linear gradient fields, is severely ill-posed.

The concentration reconstruction problem is typically solved by applying Tikhonov regularization weizenecker2009three ; Knopp2010e ; rahmer2012analysis ; Lampe_Fast_2012 . For MPI this is preferably solved by using the algebraic reconstruction technique Knopp2010e ; KnoppOnline2016 combined with a non-negativity constraint weizenecker2009three . More recently, further sophisticated regularization techniques such as fused lasso regularization, directional total variation, or other gradient-based methods have been applied to the full-calibration-based MPI problem storath2016edge ; Bathke2017 ; Konkle2015a . A total-least-squares approach with respect to operator uncertainty combined with standard Tikhonov regularization as well as a sparsity-promoting penalty term was used to improve reconstruction performance Kluth2017 .
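The algebraic reconstruction technique with a non-negativity constraint mentioned above can be sketched as a row-action (Kaczmarz-type) iteration. This is a hedged illustration of the general technique, not the cited implementations; all names and sizes are placeholders.

```python
import numpy as np

def art_nonneg(S, v, n_sweeps=30, relax=1.0):
    """Kaczmarz/ART sweeps for S c = v with c >= 0 enforced after each update."""
    m, n = S.shape
    c = np.zeros(n)
    row_norms = np.sum(S * S, axis=1)
    for _ in range(n_sweeps):
        for i in range(m):
            # Project the current iterate onto the hyperplane of row i ...
            c += relax * (v[i] - S[i] @ c) / row_norms[i] * S[i]
            # ... and enforce the non-negativity constraint.
            np.maximum(c, 0.0, out=c)
    return c

rng = np.random.default_rng(0)
S = rng.standard_normal((120, 40))        # stand-in for a calibrated system matrix
c_true = np.abs(rng.standard_normal(40))  # nonnegative particle concentration
c_hat = art_nonneg(S, S @ c_true)
```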

### 5.1 How Magnetic Particle Imaging works

For the description of the MPI setup we adapt the general notation in Kluth2018a ; Kluth2018b . MPI is inherently a 3D problem, in the sense that vector-valued functions remain 3D even if the domain of the spatial variable is a subset of a lower-dimensional affine subspace. It is further assumed that the support of the concentration function is a subset of the domain. A lower-dimensional setup can be constructed by assuming that the concentration is a δ-distribution with respect to the orthogonal complement of the affine subspace. Let Ω be a bounded domain with a (strong) Lipschitz boundary. Further, let T > 0 denote the maximal data acquisition time and I = (0, T) the time interval during which the measurement process takes place. The temporal derivative of a function is denoted by a superposed dot.

The measured voltage signals v_ℓ(t), ℓ = 1, …, L, obtained at the L receive coils, are given by

 v_\ell(t) = \int_\Omega c(x) \underbrace{\int_I -a_\ell(t-t')\,\mu_0\, p_\ell(x)^T\, \dot{\bar{m}}(x,t')\,dt'}_{=\,s_\ell(x,t)}\,dx \;-\; \underbrace{\int_I \int_{\mathbb{R}^3} a_\ell(t-t')\,\mu_0\, p_\ell(x)^T\, \dot{H}(x,t')\,dx\,dt'}_{=\,v_{E,\ell}(t)}, \qquad (55)

where the superscript T denotes the transpose of a vector, c is the concentration of the magnetic nanoparticles, and s_ℓ, ℓ = 1, …, L, represent the system functions characterizing the mean magnetic moment behavior of the nanoparticles. This relation follows from Faraday’s law and the law of reciprocity Knopp2012 . The positive constant μ_0 is the magnetic permeability in vacuum. The scalar functions a_ℓ, ℓ = 1, …, L, are the analog filters in the signal acquisition chain; in practice, they are commonly band stop filters adapted to the excitation frequencies of the drive field. Note that the analog filters are included in the system functions in this formulation, which can differ in the literature. The functions p_ℓ, ℓ = 1, …, L, denote the sensitivity profiles of the receive coil units.

In the following it is assumed that the applied magnetic field H and the filters a_ℓ are chosen in a way such that all excitation signals v_{E,ℓ} vanish, ℓ = 1, …, L. This assumption on the excitation signals is commonly made but may not be fulfilled in MPI applications Them2016 ; Kluth2017 . Under this assumption, the inverse problem is to find the concentration c from the measured signals v_ℓ:

 v_\ell(t) = \int_\Omega c(x)\, s_\ell(x,t)\,dx.
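Discretizing this reduced forward model on grids for x and t stacks the sampled system functions into a matrix, giving a linear system v = Sc. The sketch below illustrates this together with a Tikhonov-regularized reconstruction, the standard baseline discussed above; sizes and the random stand-in for the calibrated system matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_voxels = 200, 64
S = rng.standard_normal((n_time, n_voxels))   # stand-in for the calibrated system matrix
c = np.abs(rng.standard_normal(n_voxels))     # nonnegative particle concentration
v = S @ c                                     # simulated voltage measurements

# Tikhonov-regularized reconstruction: minimize ||Sc - v||^2 + alpha*||c||^2,
# solved via the regularized normal equations.
alpha = 0.1
c_hat = np.linalg.solve(S.T @ S + alpha * np.eye(n_voxels), S.T @ v)
```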