1 Introduction
We study the stable solution of inverse problems of the form
(1.1) $y_\delta = \mathbf{F}(x) + \xi_\delta$
Here $\mathbf{F} \colon \mathcal{D}(\mathbf{F}) \subseteq X \to Y$ is a possibly nonlinear operator between Banach spaces $X$ and $Y$ with domain $\mathcal{D}(\mathbf{F})$. We thereby allow a possibly infinite-dimensional function space setting, but clearly the approach and results apply to a finite-dimensional setting as well. The element $\xi_\delta$ models the unknown data error (noise), which is assumed to satisfy the estimate $\|\xi_\delta\| \le \delta$ for some noise level $\delta \ge 0$. We focus on the ill-posed (or ill-conditioned) case where, without additional information, the solution of (1.1) is either highly unstable, highly undetermined, or both. Many inverse problems in biomedical imaging, geophysics, engineering sciences, and elsewhere can be written in such a form (see, for example, [12, 29, 34]). For its stable solution one has to employ regularization methods, which are based on approximating (1.1) by neighboring well-posed problems that enforce stability and uniqueness.
1.1 NETT regularization
Any method for the stable solution of (1.1) uses, either implicitly or explicitly, a-priori information about the unknowns to be recovered. Such information can be that $x$ belongs to a certain set of admissible elements or that $x$ has a small value of a regularizer (or regularization functional) $\mathcal{R}$. In this paper we focus on the latter situation, and assume that the regularizer takes the form
(1.2) $\mathcal{R}(x) := \psi(\mathbf{\Phi}_\theta(x))$
Here $\psi$ is a scalar functional and $\mathbf{\Phi}_\theta$ a neural network of depth $L$, where $\theta \in \Theta$, for some vector space $\Theta$, contains free parameters that can be adjusted to available training data (see Section 2.1 for a precise formulation).
With the regularizer (1.2), we approach (1.1) via
(1.3) $\mathcal{T}_{\alpha, y_\delta}(x) := \mathcal{D}(\mathbf{F}(x), y_\delta) + \alpha \mathcal{R}(x) \to \min_{x \in \mathcal{D}(\mathbf{F})}$
where $\alpha > 0$ is a regularization parameter and $\mathcal{D}$ is an appropriate similarity measure in the data space enforcing data consistency. One may take the squared norm distance $\mathcal{D}(\mathbf{F}(x), y_\delta) = \|\mathbf{F}(x) - y_\delta\|^2$,
but also other distance measures, such as the Kullback-Leibler divergence (which, among others, is used in emission tomography), are reasonable choices. Optimization problem (1.3) can be seen as a particular instance of generalized Tikhonov regularization for solving (1.1) with a neural network as regularizer. We therefore name (1.3) the network Tikhonov (NETT) approach for inverse problems.
In this paper, we show that under reasonable assumptions, the NETT approach (1.3) is stably solvable. As $\delta \to 0$, the regularized solutions are shown to converge to $\mathcal{R}$-minimizing solutions of $\mathbf{F}(x) = y$. Here and below, $\mathcal{R}$-minimizing solutions are defined as any element
(1.4) $x_+ \in \operatorname{arg\,min} \{ \mathcal{R}(x) : x \in \mathcal{D}(\mathbf{F}) \text{ and } \mathbf{F}(x) = y \}$
Additionally, we derive convergence rates (quantitative error estimates) between $\mathcal{R}$-minimizing solutions and regularized solutions. As a consequence, (1.3) provides a stable solution scheme for (1.1) that enforces data consistency and encodes a-priori knowledge via neural networks.
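To make the structure of (1.3) concrete, the following minimal Python sketch evaluates a toy NETT functional with a linear forward matrix, a squared-norm data term, and simple stand-ins for the network and the scalar functional. All names and choices here (`nett_objective`, the tanh "network", the absolute-value functional) are illustrative assumptions, not the trained networks studied later in the paper.

```python
import numpy as np

def nett_objective(x, A, y_delta, alpha, phi, psi):
    """NETT functional in the spirit of (1.3): squared-norm data
    consistency plus alpha times the regularizer psi(phi(x))."""
    data_fit = 0.5 * np.linalg.norm(A @ x - y_delta) ** 2
    return data_fit + alpha * psi(phi(x))

# Toy stand-ins (illustrative only):
A = np.array([[1.0, 0.0], [1.0, 1.0]])      # linear forward operator
y_delta = np.array([1.0, 2.0])               # noisy data
phi = lambda x: np.tanh(x)                   # "network" part
psi = lambda z: np.sum(np.abs(z))            # scalar functional
val = nett_objective(np.zeros(2), A, y_delta, alpha=0.1, phi=phi, psi=psi)
```

In a realistic setting, `phi` would be the trained multi-layer network of (1.2) and the minimization over `x` would be carried out by an iterative optimization method.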
1.2 Possible regularizers
The network regularizer can either be user-specified or a trained network whose free parameters are adjusted on appropriate training data. Some examples are as follows.


Nonconvex regularizer: A simple user-specified instance of the regularizer (1.2) is the convex regularizer $\mathcal{R}(x) = \sum_{\lambda \in \Lambda} v_\lambda |\langle \varphi_\lambda, x \rangle|^q$ with $q \ge 1$. Here $(\varphi_\lambda)_{\lambda \in \Lambda}$ is a prescribed basis or frame and $(v_\lambda)_{\lambda \in \Lambda}$ are weights. In this case, the neural network is simply given by the analysis operator $x \mapsto (\langle \varphi_\lambda, x \rangle)_{\lambda \in \Lambda}$, and NETT regularization reduces to sparse $\ell^q$ regularization [11, 16, 17, 26, 31]. This form of the regularizer can also be combined with a training procedure by adjusting the weights to a class of training data.
In this paper, we in particular study a nonconvex extension of $\ell^q$ regularization, where the regularizer takes the form
(1.5) $\mathcal{R}(x) = \sum_{\lambda \in \Lambda} v_\lambda |\varphi_\lambda(x)|^q$,
with the coefficient functionals $\varphi_\lambda$ given by a possibly nonlinear neural network with multiple layers. In Section 3.4 we present convergence results for this nonconvex generalization of $\ell^q$ regularization.
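A minimal sketch of a regularizer of this form, assuming an abstract coefficient map `phi` (which may be linear or a nonlinear network) and illustrative weights:

```python
import numpy as np

def lq_regularizer(x, phi, weights, q=1.5):
    """Regularizer of the form (1.5): weighted sum of |phi_lambda(x)|^q,
    where phi may be a possibly nonlinear multi-layer network."""
    coeffs = phi(x)                       # analysis coefficients
    return np.sum(weights * np.abs(coeffs) ** q)

# With phi a linear analysis operator (here an orthonormal basis given
# by the identity), this reduces to classical sparse regularization.
phi_linear = lambda x: np.eye(3) @ x
r = lq_regularizer(np.array([1.0, -2.0, 0.0]), phi_linear,
                   weights=np.ones(3), q=1.0)
```

Replacing `phi_linear` by a trained nonlinear network yields the nonconvex variant analyzed in Section 3.4.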

CNN regularizer: The neural network regularizer in (1.2) may also be defined by a convolutional neural network (CNN) containing free parameters that can be adjusted on appropriate training data. The CNN can be trained in such a way that the regularizer takes small values for elements in a class of desirable phantoms and large values on a class of undesirable phantoms. In Section 4, we present a possible regularizer design using an encoder-decoder scheme together with a strategy for training the CNN. In Section 5, we present numerical results demonstrating that our approach performs well in practice on a sparse tomographic data problem.
1.3 Comparison to previous work
Very recently, several deep learning approaches for inverse problems have been developed (see, for example, [1, 3, 8, 23, 24, 39, 21, 36, 40, 41]). In all these approaches, a reconstruction network is trained to map measured data to the desired output image.
Most reconstruction networks take the form of a fixed backprojection (with no free parameters), which maps the data to the reconstruction space, followed by a convolutional neural network (CNN) whose free parameters are adjusted to the training data. This basic form allows the use of well-established CNNs for image reconstruction [14] and has already demonstrated impressive results. Another class of reconstruction networks learns free parameters in iterative schemes. In such approaches, the reconstruction network can be written in the form
where the iteration starts from an initial guess, the individual updates are CNNs that can be trained, and each iterative update is based on the forward operator and the data. The iterative updates may, for example, be defined by a gradient step with respect to the given inverse problem. The free parameters are adjusted to available training data.
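The structure of such a trained iterative scheme can be sketched as follows; this is a toy illustration in which each "network" is replaced by the identity map, so the scheme degenerates to plain Landweber iteration (all names and parameter choices are assumptions for illustration):

```python
import numpy as np

def learned_iterative_recon(A, y, nets, step_size=0.1):
    """Sketch of a trained iterative scheme: each update applies a
    gradient step on the data-fit term 0.5*||Ax - y||^2, followed by a
    learned correction network."""
    x = np.zeros(A.shape[1])              # initial guess
    for net in nets:
        grad = A.T @ (A @ x - y)          # gradient of the data fit
        x = net(x - step_size * grad)     # learned update
    return x

# Illustrative "networks": the identity map, i.e. no trained parameters.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
y = np.array([2.0, 1.0])
x_rec = learned_iterative_recon(A, y, nets=[lambda z: z] * 50)
```

In the trained schemes discussed above, each element of `nets` would be a CNN with parameters fitted to training data, rather than the identity.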
Trained iterative schemes repeatedly make use of the forward operator, which might yield increased data consistency compared to the first class of methods. Nevertheless, in all existing approaches, no provable nontrivial estimates bounding the data consistency term are available; data consistency can only be guaranteed for the training data for which the parameters of the neural network are optimized. This may result in instability and degraded reconstruction quality if the unknown to be recovered is not similar enough to the class of employed training data. The proposed NETT bounds the data consistency term also for data outside the training set. We expect the combination of the forward problem and a neural network via (1.3) (or, for the noiseless case, (1.4)) to increase reconstruction quality, especially in the case of limited access to a large amount of appropriate training data. Note, further, that the formulation of NETT separates the noise characteristics from the a-priori information about the unknowns. This allows us to incorporate knowledge of the data-generating mechanism, e.g. Poisson noise or Gaussian noise, by choosing the corresponding log-likelihood as the data consistency term, and also simplifies the training process, as it to some extent avoids the impact of noise. Meanwhile, this enhances the interpretability of the resulting approach: on the one hand we require fidelity to the data, and on the other we penalize unfavorable features (e.g. artifacts in tomography).
The results in this paper are a first main step towards regularization with neural networks. We propose a new framework in the form of Tikhonov regularization with a neural network (NETT) and present a complete convergence analysis under reasonable assumptions (see Condition 2.2). Many further issues can be addressed in future work, including the design of appropriate CNN regularizers, the development of efficient algorithms for minimizing (1.3), and the consideration of other regularization strategies for (1.4). The focus of the present paper is on the theoretical analysis of NETT and on demonstrating the feasibility of our approach; a detailed comparison with other methods in terms of reconstruction quality, computational performance, and applicability to real-world data is beyond our scope here and will be addressed in future work.
1.4 Outline
The rest of this paper is organized as follows. In Section 2, we describe the proposed NETT framework for solving inverse problems. We show its stability and derive convergence in the weak topology (see Theorem 2.3). To obtain strong convergence of NETT, we introduce a new notion of total nonlinearity of nonconvex functionals. For totally nonlinear regularizers, we show norm convergence of NETT (see Theorem 2.9). Convergence rates (quantitative error estimates) for NETT are derived in Section 3. Among other results, we derive a convergence rate in terms of the absolute Bregman distance (see Proposition 3.3). A framework for learning the regularizer using an encoder-decoder strategy is developed in Section 4 and applied to a sparse data problem in photoacoustic tomography in Section 5. The paper concludes with a short summary and outlook in Section 6.
2 NETT regularization
In this section, we introduce the proposed NETT and analyze its well-posedness (existence, stability, and weak convergence). We introduce a new property (total nonlinearity), which is applied to establish convergence of NETT with respect to the norm.
2.1 The NETT framework
Our goal is to solve (1.1). For that purpose, we consider minimizing the NETT functional (1.3), where the regularizer in (1.2) is defined by a neural network of the form
(2.1) $\mathbf{\Phi}_\theta(x) := (\sigma_L \circ \mathbb{V}_L \circ \cdots \circ \sigma_1 \circ \mathbb{V}_1)(x)$
Here $L$ is the depth of the network (the number of layers after the input layer), and $\mathbb{V}_\ell \colon X_{\ell-1} \to X_\ell$ are affine linear operators between Banach spaces $X_{\ell-1}$ and $X_\ell$; we take $X_0 = X$. The operators $\mathbb{A}_\ell$ are the linear parts and $b_\ell$ the so-called bias terms. The operators $\sigma_\ell$ are possibly nonlinear, and the functionals are possibly nonconvex.
As is common in machine learning, the affine mappings $\mathbb{V}_\ell$
depend on free parameters that can be adjusted in the training phase, whereas the nonlinearities $\sigma_\ell$ are fixed. Therefore the two are treated separately, and only the affine part is indicated in the notation of the neural network regularizer. Throughout our theoretical analysis, we assume the network to be given and all free parameters to be trained before the minimization of (1.3). In Section 4, we present a possible framework for training a neural network regularizer based on an encoder-decoder strategy.
Remark 2.1 (CNNs in Banach space setting).
A typical instance of the neural network in NETT (1.2) is a deep convolutional neural network (CNN). In a possible infinite-dimensional setting, such CNNs can be written in the form (2.1), where the involved spaces are function spaces over index sets $\Lambda_\ell$, with each $\Lambda_\ell$ being an at most countable set that specifies the number of different filters (depth) of the $\ell$th layer. The linear operators are taken as
(2.2) $\mathbb{A}_\ell(x)_\lambda = \sum_{\mu \in \Lambda_{\ell-1}} K_{\lambda, \mu} \ast x_\mu \quad \text{for } \lambda \in \Lambda_\ell,$
where the $K_{\lambda, \mu} \ast (\cdot)$ are convolution operators.
We point out that, in the existing machine learning literature, only finite-dimensional settings have been considered so far, where the involved spaces are finite-dimensional. In such a finite-dimensional setting, one can identify each space with a stack of discrete images and interpret its elements accordingly. In typical CNNs, either the dimensions of the base space are progressively reduced and the number of channels increased, or vice versa. While we are not aware of any general infinite-dimensional formulation of CNNs, our proposed formulation (2.1), (2.2) is the natural infinite-dimensional Banach space version of CNNs, which reduces to standard CNNs [14] in the finite-dimensional setting.
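In the finite-dimensional setting, the channel-summed convolution structure of (2.2) can be sketched as follows; the 1-D signals and kernels are illustrative assumptions chosen only to show the indexing over input and output channels:

```python
import numpy as np

def conv_layer(x, kernels):
    """Linear part of a CNN layer in the spirit of (2.2): output channel
    lam is the sum over input channels mu of kernels[lam][mu]
    convolved with x[mu]."""
    out = []
    for row in kernels:                   # one row per output channel
        acc = np.zeros(len(x[0]))
        for mu, k in enumerate(row):
            acc = acc + np.convolve(x[mu], k, mode="same")
        out.append(acc)
    return out

# One input channel, two output channels: an identity filter and a
# three-tap averaging filter (purely illustrative kernels).
x = [np.array([1.0, 2.0, 3.0, 4.0])]
kernels = [[np.array([1.0])],
           [np.array([0.25, 0.5, 0.25])]]
y = conv_layer(x, kernels)
```

Standard deep learning frameworks implement exactly this channel summation (in 2-D or 3-D) as their convolution layers.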
Basic convex regularizers are sparse $\ell^q$ penalties. In this case one may take (2.1) as a single-layer neural network with $L = 1$ and $\mathbb{V}_1$ the analysis operator of some frame. The functional $\psi$ is a weighted $\ell^q$-norm. The frame may be a prescribed wavelet or curvelet basis [7, 10, 28] or a trained dictionary [2, 19]. In Section 3.4, we analyze a nonconvex version of $\ell^q$ regularization, where the linear coefficient functionals are replaced by nonlinear ones.
2.2 Wellposedness and weak convergence
For the convergence analysis of NETT regularization, we use the following assumptions on the regularizer and the data consistency term in (1.3).
Condition 2.2 (Convergence of NETT regularization).




$\mathbb{V}_\ell$ are affine operators of the form $\mathbb{V}_\ell(x) = \mathbb{A}_\ell x + b_\ell$;

are bounded linear and for some , we have ;

$\sigma_\ell$ are weakly continuous and coercive, that is, $\|\sigma_\ell(x)\| \to \infty$ as $\|x\| \to \infty$;

The functional is lower semicontinuous and coercive.


The data consistency term satisfies the following:


For some we have ;

;

;

The functional is sequentially lower semicontinuous.

In CNNs, the spaces $X_\ell$ are function spaces (see Remark 2.1), and a standard operation for $\sigma_\ell$ is the ReLU (rectified linear unit) $\operatorname{ReLU}(x) = \max\{x, 0\}$, which is applied componentwise. The plain form of the ReLU is not coercive. However, the slight modification $x \mapsto \max\{x, a x\}$ for some $a \in (0,1)$, named leaky ReLU, is coercive; see [27, 22]. Another coercive standard operation for $\sigma_\ell$ in CNNs is max pooling, which takes the maximum value within clusters of transform coefficients.
Theorem 2.3 (Well-posedness of CNN regularization).
Let Condition 2.2 be satisfied. Then the following assertions hold true:

Existence: For all $\alpha > 0$ and $y_\delta \in Y$, there exists a minimizer of the NETT functional (1.3);

Stability: If and , then weak accumulation points of exist and are minimizers of .

Convergence: Let , , satisfy for some sequence with , suppose , and let the parameter choice satisfy
(2.3) Then the following holds:


Weak accumulation points of are minimizing solutions of ;

has at least one weak accumulation point ;

Any weakly convergent subsequence satisfies ;

If the minimizing solution of is unique, then .

Proof.
According to [15, 34], it is sufficient to show that the NETT functional is weakly sequentially lower semicontinuous and that its sublevel sets are sequentially weakly precompact. By the Banach-Alaoglu theorem, the latter condition is satisfied if the functional is coercive. The coercivity, however, is directly implied by Condition 2.2. Also from Condition 2.2 it follows that the functional is sequentially lower semicontinuous. ∎
Remark 2.4.
The boundedness of the linear operators $\mathbb{A}_\ell$ seems restrictive, but it is somewhat indispensable. In fact, suppose for the moment that we drop the boundedness requirement and only ensure that the first layer is lower semicontinuous. The lower semicontinuity of the first layer requires the weights to all be positive, since a certain order has to be preserved. This positive-weight assumption, in turn, leads to the convexity of the regularizer, which, in practice, is often not the case.
2.3 Absolute Bregman distance and total nonlinearity
For convex regularizers, the notion of Bregman distance is a powerful concept [4, 34]. For nonconvex regularizers, the standard definition of the Bregman distance can take negative values. In this paper, we therefore use the notion of the absolute Bregman distance. To the best of our knowledge, the absolute Bregman distance has not been used in regularization theory so far.
Definition 2.5 (Absolute Bregman distance).
Let $\mathcal{R} \colon X \to \mathbb{R}$ be Gâteaux differentiable at $x_0 \in X$. The absolute Bregman distance $\mathcal{B}_{\mathcal{R}}(\cdot, x_0) \colon X \to [0, \infty)$ with respect to $\mathcal{R}$ at $x_0$ is defined by
(2.4) $\mathcal{B}_{\mathcal{R}}(x, x_0) := \left| \mathcal{R}(x) - \mathcal{R}(x_0) - \mathcal{R}'(x_0)(x - x_0) \right|$.
Here $\mathcal{R}'(x_0)$ denotes the Gâteaux derivative of $\mathcal{R}$ at $x_0$.
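The definition can be illustrated numerically; the following sketch, under the assumption of a smooth regularizer with known gradient, shows why the absolute value matters for nonconvex functionals:

```python
import numpy as np

def absolute_bregman(reg, grad_reg, x, x0):
    """Absolute Bregman distance in the sense of Definition 2.5: the
    absolute value of the first-order Taylor remainder of reg at x0."""
    return abs(reg(x) - reg(x0) - grad_reg(x0) @ (x - x0))

# Nonconvex example reg(x) = -||x||^2: the ordinary Bregman distance is
# negative here (-1), while the absolute Bregman distance stays >= 0.
reg = lambda x: -np.dot(x, x)
grad_reg = lambda x: -2.0 * x
x0 = np.zeros(2)
d = absolute_bregman(reg, grad_reg, np.array([1.0, 0.0]), x0)
```

For a convex `reg`, the absolute value is inactive and (2.4) coincides with the classical Bregman distance.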
From Theorem 2.3 we can conclude convergence of the regularized solutions to the exact solution in the absolute Bregman distance. Below we show that this implies strong convergence under an additional assumption on the regularization functional. For this purpose we introduce the new notion of total nonlinearity, which has not been studied before.
Definition 2.6 (Total nonlinearity).
Let $\mathcal{R}$ be Gâteaux differentiable at $x_0 \in X$. We define the modulus of total nonlinearity of $\mathcal{R}$ at $x_0$ as $\nu(x_0, \cdot) \colon [0, \infty) \to [0, \infty]$,
(2.5) $\nu(x_0, t) := \inf \left\{ \mathcal{B}_{\mathcal{R}}(x, x_0) : \|x - x_0\| = t \right\}$.
The function $\mathcal{R}$ is called totally nonlinear at $x_0$ if $\nu(x_0, t) > 0$ for all $t \in (0, \infty)$.
The notion of total nonlinearity is similar to total convexity [6] for convex functionals. As opposed to total convexity, we do not assume convexity of the functional and use the absolute Bregman distance instead of the standard Bregman distance. For convex functionals, total nonlinearity reduces to total convexity, as the Bregman distance is always nonnegative in the convex case. For a Gâteaux differentiable function, total nonlinearity essentially requires that its second derivative at $x_0$ is bounded away from zero. The functional with is totally nonlinear at every if .
We have the following result, which generalizes [32, Proposition 2.2] (see also [34, Theorem 3.49]) from the convex to the nonconvex case.
Proposition 2.7 (Characterization of total nonlinearity).
For and any the following assertions are equivalent:


The function is totally nonlinear at ;

.
Proof.
The proof of the implication (2) ⇒ (1) is the same as in [32, Proposition 2.2]. For the implication (1) ⇒ (2), let (1) hold, let satisfy , and suppose for the moment. For any , by the continuity of , there exist with such that for sufficiently large
This leads to , which contradicts the total nonlinearity of at . The assertion then follows by considering subsequences of . ∎
2.4 Strong convergence of NETT regularization
For totally nonlinear regularizers we can prove convergence of NETT with respect to the norm topology.
Theorem 2.9 (Strong convergence of NETT).
Let Condition 2.2 hold and assume additionally that has a solution, is totally nonlinear at minimizing solutions, and satisfies (2.3). Then for every sequence with where and every sequence , there exist a subsequence and a minimizing solution with . If the minimizing solution is unique, then with respect to the norm topology.
Proof.
It follows from Theorem 2.3 that there exists a subsequence weakly converging to some minimizing solution such that . From the weak convergence of and the convergence of it follows that . Thus it follows from Proposition 2.7 that . If is the unique minimizing solution, the strong convergence to again follows from Theorem 2.3 and Proposition 2.7. ∎
3 Convergence rates
In this section, we derive convergence rates for NETT in terms of general error measures under certain variational inequalities. We discuss instances where the variational inequality is satisfied for the absolute Bregman distance. Additionally, we consider a nonconvex generalization of regularization.
3.1 General convergence rates result
We study convergence rates in terms of a general functional measuring closeness in the space . For convex , let denote the Fenchel conjugate of defined by .
Theorem 3.1 (Convergence rates for NETT).
Suppose , let and assume that there exist a concave, continuous and strictly increasing function with and a constant such that for all and with we have
(3.1) 
Additionally, let Condition 2.2 hold, let and satisfy and write for the Fenchel conjugate of the inverse function . Then the following assertions hold true:

For sufficiently small and , we have
(3.2) 
If , then as .
3.2 Rates in the absolute Bregman distance
We next derive conditions under which a variational inequality of the form (3.1) holds for the absolute Bregman distance as the error measure.
Proposition 3.3 (Rates in the absolute Bregman distance).
Proof.
Let satisfy that . Then . Note that if and otherwise. This yields
with the constant , and concludes the proof. ∎
Remark 3.4.
Proposition 3.3 shows that a variational inequality of the form (3.1) with and follows from a classical source condition . By Theorem 3.1, it further implies that if . Moreover, we point out that the additional assumption (3.3) is rather weak and follows from the classical source condition if the regularizer is convex; see [16]. It is clear that a sufficient condition for (3.3) is
which resembles a tangential-cone condition.
3.3 General regularizers
So far we derived wellposedness, convergence and convergence rates for regularizers of the form (1.2). These results can be generalized to Tikhonov regularization
(3.4) 
where the regularization term is not necessarily defined by a neural network. These results are derived by replacing Condition 2.2 with the following one.
Condition 3.5 (Convergence for general regularizers).


The functional is sequentially lower semicontinuous.

The set is sequentially precompact for all and .

The data consistency term satisfies the corresponding items of Condition 2.2.
Then we have the following:
Theorem 3.6 (Results for general Tikhonov regularization).
Proof.
All assertions are shown as in the special case of the regularizer (1.2). ∎
3.4 Nonconvex regularization
We now analyze a special instance of NETT regularization (1.2) that generalizes $\ell^q$ regularization. More precisely, we consider the following nonconvex Tikhonov functional
(3.5) $\mathcal{T}_{\alpha, y_\delta}(x) := \|\mathbf{A} x - y_\delta\|^2 + \alpha \sum_{\lambda \in \Lambda} v_\lambda |\varphi_\lambda(x)|^q$.
Here $\Lambda$ is a countable set and the $\varphi_\lambda$ are possibly nonlinear functionals. The regularizer is a particular instance of NETT (1.2) if we take $\mathbf{\Phi}(x) = (\varphi_\lambda(x))_{\lambda \in \Lambda}$ and $\psi$ as a weighted $\ell^q$-norm. However, in (3.5) also more general choices for $\varphi_\lambda$ are allowed (see Condition 3.7).
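A toy evaluation of such a functional, assuming a linear forward matrix and two illustrative nonlinear coefficient functionals (the specific choices below are not from the paper):

```python
import numpy as np

def nonconvex_tikhonov(x, A, y, alpha, phis, q=1.0):
    """Toy evaluation of a functional of the form (3.5): squared-norm
    data fit plus alpha times the sum of |phi_lambda(x)|^q."""
    fit = np.linalg.norm(A @ x - y) ** 2
    reg = sum(abs(phi(x)) ** q for phi in phis)
    return fit + alpha * reg

# Possibly nonlinear coefficient functionals (illustrative choices):
phis = [lambda x: np.tanh(x[0]), lambda x: x[1] ** 3]
A = np.eye(2)
y = np.array([0.0, 1.0])
val = nonconvex_tikhonov(np.array([0.0, 1.0]), A, y, alpha=0.5, phis=phis)
```

With linear `phis` given by inner products with frame elements, this reduces to classical sparse Tikhonov regularization.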
We assume the following:
Condition 3.7.


$\mathbf{A} \colon X \to Y$ is a bounded linear operator between Hilbert spaces $X$ and $Y$.

$\varphi_\lambda$ is Gâteaux differentiable for every $\lambda \in \Lambda$.

There is a minimizing solution with ;

There exist constants such that for all with , it holds that
Here for , for , and otherwise.
Proposition 3.8.
Let Condition 3.7 hold, suppose that is such that , and let . Choosing , we then have
Proof.
Remark 3.9.
Consider the case that for an orthonormal basis of . It is known that , see e.g. [34]. Then Proposition 3.8 gives , which reproduces the result of [34, Theorem 3.54]. This rate can be improved to if we further assume sparsity of and restricted injectivity of . This can be shown using Theorem 3.1, because in such a situation (3.1) holds with and ; see [16] for details.
4 NETT regularization using autoencoders
In this section we present a framework for constructing a trained neural network regularizer of the form (2.1). The proposed network has the form of an autoencoder. Additionally, we develop a strategy for network training and for minimizing the NETT functional.
4.1 A trained regularizer
For the regularizer, we propose a network of the form (2.1) that itself is a network of encoder-decoder type,
(4.1) $\mathbf{\Phi} = \mathbf{D} \circ \mathbf{E}$.
Here $\mathbf{E}$ can be interpreted as the encoding network and $\mathbf{D}$ as the decoding network. Any network with at least one hidden layer can be written in the form (4.1). Training of the network is performed such that the regularizer is small for artifact-free images and large for images with artifacts. A possible training strategy is presented below.
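A minimal sketch of an encoder-decoder regularizer, using tiny random linear layers with a leaky ReLU as stand-ins for the trained CNNs; the weight matrices and the choice of penalizing the squared norm of the network output are illustrative assumptions, not the trained architecture of this section:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny linear encoder/decoder pair (illustrative stand-ins for the
# CNNs in (4.1); weights here are random, not trained).
W_enc = rng.standard_normal((4, 8))
W_dec = rng.standard_normal((8, 4))

def encode(x):
    z = W_enc @ x
    return np.maximum(0.1 * z, z)          # leaky ReLU (coercive)

def decode(z):
    return W_dec @ z

def regularizer(x):
    """Encoder-decoder regularizer: penalizes the energy of the network
    output, which training should make small on artifact-free images."""
    return np.linalg.norm(decode(encode(x))) ** 2

val = regularizer(np.zeros(8))
```

After training as described below, images resembling the artifact-free training class would receive small regularizer values, and artifact-laden images large ones.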
[Figure 4.1: Encoder-decoder training scheme, with the regularizer, encoder, and decoder networks.]
For suitable network training of the encoder-decoder scheme (4.1), we propose the following strategy (compare Figure 4.1). We choose a set of training phantoms, from which we construct backprojection images for the first training examples, and set for the last training images. From this we define the training data, where
(4.2)  
(4.3) 
The free parameters in (4.1) are adjusted in such a way that for any training pair. This is achieved by minimizing the error function
(4.4) 
where $d$ is a suitable distance measure (or loss function) that quantifies the error made by the network function on the $i$th training sample. Typical choices for $d$ are the mean absolute error or the mean squared error.
Given an arbitrary unknown