1 Introduction
The classical inverse problem is described by an operator equation of the form
(1) $F(u) = v,$
where $F : \mathcal{D}(F) \subseteq \mathcal{U} \to \mathcal{V}$ is a linear or nonlinear operator between Hilbert or Banach spaces $\mathcal{U}$ and $\mathcal{V}$. In the case of ill-posedness, we resort to regularization methods for approximating the true solution $u^\dagger$. The most developed and widely used method for solving ill-posed inverse problems is Tikhonov regularization, see [35, 36]. Some of the classical results on Tikhonov regularization can be found in [4, 9, 15, 17, 22, 23, 30, 33]. Here, the regularized solution is defined as the minimizer of the Tikhonov functional
(2) $T_\alpha^\delta(u) := \|F(u) - v^\delta\|^2 + \alpha\,\mathcal{R}(u),$
which consists of a discrepancy term and a regularization term $\mathcal{R}$ (also called penalty term), balanced by the regularization parameter $\alpha > 0$. Through the regularization term we are able to include a priori knowledge about the true solution.
In recent years, the concept of sparsity has become a powerful tool, especially in applications, see for instance [5, 8, 15, 22, 29]. In this case the true solution $u^\dagger$ has a sparse representation in the given basis or frame for the parameter space $\mathcal{U}$, i.e., only a few coefficients are different from zero. It turns out that in many applications one has to choose between classical and sparse regularization. A new challenge, resulting from real-world applications, is to allow some deviations in the data $v^\delta$. In [11], Tikhonov functionals incorporating tolerances in the discrepancy term were studied for the solution of inverse problems. The authors proposed an altered Tikhonov functional of the form
(3) $T_{\alpha}^{\delta,\varepsilon}(u) := d_\varepsilon\big(F(u), v^\delta\big)^2 + \alpha\,\mathcal{R}(u),$
where $d_\varepsilon$ denotes the $\varepsilon$-insensitive distance. This approach makes sense, e.g., in production engineering. In the case of surface treatment, tolerances for the quality of the end product or for the measurement accuracy are often specified. These methods have been successfully applied to the problem of process design in micro production and to applications in image processing. In addition to the original reference, we refer the reader to [12] and [13]. For linear operators, a particular choice of the discrepancy and penalty terms yields a generalization of Support Vector Regression (SVR), which can be used for treating ill-posed inverse problems, see for instance [34]. Furthermore, in [24] a rigorous analysis incorporating discrepancy terms with tolerance for solving linear integral equations was presented, under a semidiscrete setting in reproducing kernel Hilbert spaces (RKHS). Inspired by the great potential of such approaches in applications, in our work we examine the effect of tolerances in the regularization term of Tikhonov functionals. Including tolerances inside the penalty term means that the solution will eventually lie inside a confidence interval. An application of interest is the development of new structural materials. In this case, the goal is to find appropriate values for a set of production parameters, like chemical composition, heating or cooling, to finally obtain materials satisfying certain properties. The desired properties of the new materials are given in the form of intervals, or in the form of a so-called performance profile; for further reading refer to [27].
1.1 Regularization functional with tolerances
As discussed in the introduction, the $\varepsilon$-insensitive function comes from the theory of SVR, for further reading see [24, 32, 37], and was first introduced by Cortes and Vapnik in [7]. For a given $\varepsilon > 0$ the function $|\cdot|_\varepsilon : \mathbb{R} \to [0,\infty)$ is defined as
(4) $|x|_\varepsilon := \max\{0,\, |x| - \varepsilon\} = \begin{cases} 0, & |x| \le \varepsilon, \\ |x| - \varepsilon, & |x| > \varepsilon. \end{cases}$
In Figure 1(a), $|\cdot|_\varepsilon$ as given in (4) is plotted in comparison to the absolute value function, while Figure 1(b) shows their subdifferentials. In the following, we often use the term tolerance function when referring to the $\varepsilon$-insensitive function. Two analogous definitions are used within this work, which differ in the argument being a sequence or a function. We follow the definition in [11, Definition 1] and define the $\varepsilon$-insensitive modulus.
Definition 1 ($\varepsilon$-insensitive modulus).
For a sequence $u = (u_k)_{k \in \mathbb{N}}$ we define the $\varepsilon$-insensitive modulus componentwise as
(5) $|u|_\varepsilon := \big(|u_k|_\varepsilon\big)_{k \in \mathbb{N}}.$
For a function $u : \Omega \to \mathbb{R}$, with $\Omega \subseteq \mathbb{R}^d$, we define the $\varepsilon$-insensitive modulus function by
(6) $|u|_\varepsilon(x) := |u(x)|_\varepsilon, \quad x \in \Omega.$
For simplicity of notation, we write $|u|_\varepsilon$ in all cases. In both definitions given in (5) and (6) the equation (4) is applied pointwise. Analogously, using Definition 1 pointwise in the induced norm, we obtain a distance function in the space $L^p(\Omega)$.
Definition 2 ($\varepsilon$-insensitive measure).
Let $\Omega \subset \mathbb{R}^d$ be bounded and closed and let $u \in L^p(\Omega)$, $1 \le p < \infty$. The $\varepsilon$-insensitive measure is defined via
(7) $\|u\|_{\varepsilon,p}^p := \int_\Omega |u(x)|_\varepsilon^p \,\mathrm{d}x.$
Our definition agrees with the one given in [11]; in the sequence setting of (5) the integral is replaced by a sum, and we further have to assume that this sum is bounded. For notational simplicity of our subsequent analysis, $\|u\|_{\varepsilon,p}$ will often be denoted by $\|u\|_\varepsilon$.
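To make the definitions concrete, the following minimal sketch (an illustration in a discretized setting with assumed sample values, not part of the original analysis) applies the $\varepsilon$-insensitive modulus (4)–(5) entrywise to a coefficient vector:

```python
import numpy as np

def eps_modulus(u, eps):
    """Componentwise epsilon-insensitive modulus: |u_k|_eps = max(0, |u_k| - eps)."""
    return np.maximum(0.0, np.abs(u) - eps)

# Entries inside the tolerance band [-eps, eps] are mapped to zero;
# entries outside the band are shrunk toward zero by eps.
u = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(eps_modulus(u, eps=1.0))  # [1. 0. 0. 0. 1.]
```

Entries within the tolerance band are treated as exact matches; this flat region is precisely what later produces the dead zone in the subdifferential formulas.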
In regularization methods we often assume a reference solution $u_0$ which is included in the penalty term as a priori information on the true solution of the problem. Denoting with $u_0$ the reference solution and including the tolerance $\varepsilon$, our penalty term is of the form
(8) $\mathcal{R}_\varepsilon(u) := \|u - u_0\|_{\varepsilon,p}^p = \int_\Omega |u(x) - u_0(x)|_\varepsilon^p \,\mathrm{d}x,$
where $1 \le p < \infty$ and $\Omega$ is a bounded set in $\mathbb{R}^d$. Since $u_0$ does not affect our theoretical analysis, for simplicity, we assume it to be zero and we only consider it later in our numerical results.
The functional $\mathcal{R}_\varepsilon$ is weakly lower semicontinuous and fulfills the following inequalities
(9) $\|u\|_{L^p}^p \le 2^{p-1}\big(\mathcal{R}_\varepsilon(u) + \varepsilon^p\,|\Omega|\big),$
(10) $\mathcal{R}_\varepsilon(u) \le \|u\|_{L^p}^p,$
which have been proved in [11]. Furthermore, $\mathcal{R}_\varepsilon$ is continuous and convex for all $p \ge 1$, whereas for $p > 1$ the integrand $x \mapsto |x|_\varepsilon^p$ is strictly convex outside the tolerance interval $[-\varepsilon, \varepsilon]$. By (10) it is obvious that $\mathcal{R}_\varepsilon(u) \le \|u\|_{L^p}^p < \infty$ and, therefore, $\mathcal{R}_\varepsilon$ is well defined.
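The two-sided bounds (9) and (10) can be checked numerically in a discretized setting (a sum in place of the integral, with $|\Omega|$ replaced by the number of components); the sketch below is illustrative and uses assumed random data:

```python
import numpy as np

def penalty(u, eps, p):
    """Discrete epsilon-insensitive penalty: sum_k max(0, |u_k| - eps)^p."""
    return np.sum(np.maximum(0.0, np.abs(u) - eps) ** p)

rng = np.random.default_rng(0)
for p in (1.0, 2.0):
    for _ in range(200):
        u = rng.normal(size=50)
        eps = rng.uniform(0.1, 2.0)
        norm_p = np.sum(np.abs(u) ** p)
        # upper bound: R_eps(u) <= ||u||_p^p
        assert penalty(u, eps, p) <= norm_p + 1e-12
        # lower bound: ||u||_p^p <= 2^(p-1) * (R_eps(u) + N * eps^p),
        # which follows from |x| <= |x|_eps + eps and convexity of t -> t^p
        assert norm_p <= 2 ** (p - 1) * (penalty(u, eps, p) + u.size * eps ** p) + 1e-9
```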
Proposition 3.
Let $\varepsilon > 0$ and $1 \le p < \infty$. The regularization functional $\mathcal{R}_\varepsilon$ given by (8) is coercive.
Proof.
This follows directly from the inequality (9), since taking $\|u\|_{L^p} \to \infty$ leads to the conclusion that $\mathcal{R}_\varepsilon(u) \to \infty$.∎
1.2 Tikhonov functional with tolerance in regularization term
Assuming $u \in L^p(\Omega)$ over a bounded set $\Omega$ and $\mathcal{V}$ to be a reflexive Banach space, we consider an altered Tikhonov functional including the tolerance function described in the previous section in the regularization term, that is,
(11) $T_{\alpha,\varepsilon}^\delta(u) := \|F(u) - v^\delta\|^2 + \alpha\,\mathcal{R}_\varepsilon(u).$
Here $F$ is a nonlinear operator between $L^p(\Omega)$ and $\mathcal{V}$, and the noisy data $v^\delta$ are created by additive noise with noise level $\delta > 0$, i.e., they are such that $\|v - v^\delta\| \le \delta$. The regularization term for $u_0 = 0$ includes the tolerance $\varepsilon$ and is given by (8). We aim at investigating the analytical properties of minimizers $u_{\alpha,\varepsilon}^\delta$ of $T_{\alpha,\varepsilon}^\delta$. Moreover, we examine the connection between tolerances in parameter space and sparsity regularization. The following assumption remains valid throughout the paper.
Assumption 4.

(i) Let $F$ be weakly sequentially closed with respect to the weak topologies on $L^p(\Omega)$ and $\mathcal{V}$.
(ii) The set $\mathcal{D}(F)$ is nonempty. Note that this assumption implies that $T_{\alpha,\varepsilon}^\delta$ is proper.
Furthermore, in the proofs of convergence and convergence rates of the minimizers of $T_{\alpha,\varepsilon}^\delta$, we use the concept of an $\mathcal{R}_\varepsilon$-minimizing solution.
Definition 5 ($\mathcal{R}_\varepsilon$-minimizing solution).
The element $u^\dagger \in \mathcal{D}(F)$ is called an $\mathcal{R}_\varepsilon$-minimizing solution if $F(u^\dagger) = v$ and $\mathcal{R}_\varepsilon(u^\dagger) = \min\{\mathcal{R}_\varepsilon(u) : u \in \mathcal{D}(F),\ F(u) = v\}$.
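For intuition on the functional studied here, consider a discretized sketch (the operator, data and parameter values below are illustrative assumptions, not the paper's numerical example): with the identity as forward operator, a data value lying inside the tolerance band is reproduced exactly, since the penalty is inactive there.

```python
import numpy as np

def tikhonov_eps(u, F, v_delta, alpha, eps, p=1):
    """Discretized functional (11): ||F u - v||^2 + alpha * sum_k max(0, |u_k| - eps)^p."""
    r = F @ u - v_delta
    return r @ r + alpha * np.sum(np.maximum(0.0, np.abs(u) - eps) ** p)

# 1-D grid search with F = identity: minimize (u - 0.4)^2 + alpha * |u|_eps.
F = np.eye(1)
grid = np.linspace(-3.0, 3.0, 6001)
vals = [tikhonov_eps(np.array([x]), F, np.array([0.4]), alpha=1.0, eps=0.5) for x in grid]
u_star = grid[int(np.argmin(vals))]
print(round(u_star, 3))  # 0.4: the data value itself, since it lies inside the band
```

The grid search is only for illustration; in the finite-dimensional experiments one would minimize with a (sub)gradient or proximal method instead.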
2 Well-posedness
We begin with the existence of minimizers of $T_{\alpha,\varepsilon}^\delta$. Then, we continue with results on the stability of minimizers, i.e., we prove that the minimizer depends continuously on the data. In the following results we use the next lemma, which can be found in [15].
Lemma 6.
Let $1 \le p < \infty$. Assume that $\varepsilon > 0$ is fixed, $(u_n)_{n \in \mathbb{N}}$ is a bounded sequence in $L^p(\Omega)$ and that there exist $(v_n)_{n \in \mathbb{N}} \subset \mathcal{V}$ and $C > 0$ such that $\|F(u_n) - v_n\| \le C$ for all $n \in \mathbb{N}$. Then, there exist $\bar{u} \in L^p(\Omega)$ and a subsequence $(u_{n_k})_{k \in \mathbb{N}}$ such that $u_{n_k} \rightharpoonup \bar{u}$ and $F(u_{n_k}) \rightharpoonup F(\bar{u})$.
Proof.
The proof of this lemma is omitted, as it follows with similar steps as in [15, Lemma 4]. ∎
In the following theorems, we closely follow the concepts in [15] and [23] and prove the corresponding results for our Tikhonov functional with tolerances incorporated in the regularization term.
Theorem 7 (Existence).
Assume that $\varepsilon > 0$ is fixed. For $\alpha > 0$ and for every $v^\delta \in \mathcal{V}$ the functional $T_{\alpha,\varepsilon}^\delta$ has a minimizer in $\mathcal{D}(F)$.
Proof.
Let $(u_n)_{n \in \mathbb{N}}$ be a minimizing sequence, i.e., $T_{\alpha,\varepsilon}^\delta(u_n) \to \inf_u T_{\alpha,\varepsilon}^\delta(u)$. From Lemma 6, there exists a subsequence $(u_{n_k})_{k \in \mathbb{N}}$ weakly converging to some $\bar{u}$ such that $F(u_{n_k}) \rightharpoonup F(\bar{u})$. From the weak lower semicontinuity of the norm and of $\mathcal{R}_\varepsilon$ and the fact that $F$ is weakly sequentially closed it follows that
$T_{\alpha,\varepsilon}^\delta(\bar{u}) \le \liminf_{k\to\infty} T_{\alpha,\varepsilon}^\delta(u_{n_k}) = \inf_u T_{\alpha,\varepsilon}^\delta(u).$
Therefore, $T_{\alpha,\varepsilon}^\delta(\bar{u}) \le T_{\alpha,\varepsilon}^\delta(u)$ for any $u \in \mathcal{D}(F)$, which means that $\bar{u}$ is a minimizer of $T_{\alpha,\varepsilon}^\delta$. ∎
Notation.
If any of the ingredients $\alpha$, $\varepsilon$, $v^\delta$ is taken as a (sub)sequence, the functional will be denoted including the respective (sub)sequence in its shorthand notation, e.g., given a sequence of noisy data $(v_n)_{n \in \mathbb{N}}$, we will write $T_{\alpha,\varepsilon}^{v_n}$ for denoting the functional $u \mapsto \|F(u) - v_n\|^2 + \alpha\,\mathcal{R}_\varepsilon(u)$.
The next theorem concerns the stability of minimizers of $T_{\alpha,\varepsilon}^\delta$, namely, for fixed $\alpha$ and $\varepsilon$ we prove that the minimizer depends continuously on the data.
Theorem 8 (Stability for fixed $\varepsilon$).
Assume $\alpha > 0$ and $\varepsilon > 0$ fixed. Let $(v_n)_{n \in \mathbb{N}}$ converge to some $v^\delta \in \mathcal{V}$ and let
$u_n \in \operatorname*{arg\,min}_u T_{\alpha,\varepsilon}^{v_n}(u).$
Then, there exists a subsequence $(u_{n_k})_{k \in \mathbb{N}}$ which converges weakly to a minimizer $\bar{u}$ of the functional $T_{\alpha,\varepsilon}^\delta$. Moreover, we have that
$\lim_{k\to\infty} \mathcal{R}_\varepsilon(u_{n_k}) = \mathcal{R}_\varepsilon(\bar{u}).$
Proof.
Since $(u_n)_{n \in \mathbb{N}}$ is a sequence of minimizers of $T_{\alpha,\varepsilon}^{v_n}$, it holds that $T_{\alpha,\varepsilon}^{v_n}(u_n) \le T_{\alpha,\varepsilon}^{v_n}(u)$ for any $u \in \mathcal{D}(F)$. From Lemma 6, there exists a subsequence $(u_{n_k})_{k \in \mathbb{N}}$ weakly converging to some $\bar{u}$ such that $F(u_{n_k}) \rightharpoonup F(\bar{u})$. Moreover, from the weak lower semicontinuity of the norm and of $\mathcal{R}_\varepsilon$ there holds
(12) $\|F(\bar{u}) - v^\delta\|^2 \le \liminf_{k\to\infty} \|F(u_{n_k}) - v_{n_k}\|^2 \quad\text{and}\quad \mathcal{R}_\varepsilon(\bar{u}) \le \liminf_{k\to\infty} \mathcal{R}_\varepsilon(u_{n_k}).$
Combining the above, we get
(13) $T_{\alpha,\varepsilon}^\delta(\bar{u}) \le \liminf_{k\to\infty} T_{\alpha,\varepsilon}^{v_{n_k}}(u_{n_k}).$
On the other hand, for any $u \in \mathcal{D}(F)$, we see that
(14) $\limsup_{k\to\infty} T_{\alpha,\varepsilon}^{v_{n_k}}(u_{n_k}) \le \limsup_{k\to\infty} T_{\alpha,\varepsilon}^{v_{n_k}}(u) = T_{\alpha,\varepsilon}^\delta(u).$
From (13) and (14) we conclude that $T_{\alpha,\varepsilon}^\delta(\bar{u}) \le T_{\alpha,\varepsilon}^\delta(u)$ for any $u \in \mathcal{D}(F)$, that is, $\bar{u}$ is a minimizer of $T_{\alpha,\varepsilon}^\delta$. Moreover, the weak lower semicontinuity of the norm and of $\mathcal{R}_\varepsilon$ implies that $\lim_{k\to\infty} \mathcal{R}_\varepsilon(u_{n_k}) = \mathcal{R}_\varepsilon(\bar{u})$. ∎
Remark 9.
In [15, Proposition 6], the authors additionally prove that $u_{n_k} \to \bar{u}$ in norm for their functional. In our case, such a result cannot be inferred, as weak convergence is not preserved under the nonlinearity of $F$. That is, assuming $u_{n_k} \rightharpoonup \bar{u}$ we cannot prove that $F(u_{n_k}) \to F(\bar{u})$. In order to obtain norm convergence, one can further assume $F$ to be weakly continuous. However, we choose not to make this additional assumption, as it is quite restrictive.
Theorem 10 (Weak convergence for fixed $\varepsilon$).
Let $\varepsilon > 0$ be fixed. Assume that $F(u) = v$ attains a solution in $\mathcal{D}(F)$ and that $\alpha : (0,\infty) \to (0,\infty)$ satisfies
$\alpha(\delta) \to 0 \quad\text{and}\quad \frac{\delta^2}{\alpha(\delta)} \to 0 \quad\text{as } \delta \to 0.$
Let $(\delta_n)_{n \in \mathbb{N}}$ converge to $0$ and let $(v_n)_{n \in \mathbb{N}}$ satisfy $\|v - v_n\| \le \delta_n$. Moreover, let $\alpha_n := \alpha(\delta_n)$ and
$u_n \in \operatorname*{arg\,min}_u T_{\alpha_n,\varepsilon}^{v_n}(u).$
Then, there exist an $\mathcal{R}_\varepsilon$-minimizing solution $u^\dagger$ of $F(u) = v$ and a subsequence $(u_{n_k})_{k \in \mathbb{N}}$ with $u_{n_k} \rightharpoonup u^\dagger$.
Proof.
Let $\tilde{u}$ be any solution of $F(u) = v$. From the definition of $u_n$ it follows that
$\|F(u_n) - v_n\|^2 + \alpha_n \mathcal{R}_\varepsilon(u_n) \le \|F(\tilde{u}) - v_n\|^2 + \alpha_n \mathcal{R}_\varepsilon(\tilde{u}) \le \delta_n^2 + \alpha_n \mathcal{R}_\varepsilon(\tilde{u}).$
It can be easily seen that $\|F(u_n) - v_n\|^2 \le \delta_n^2 + \alpha_n \mathcal{R}_\varepsilon(\tilde{u})$ and, together with the assumptions on $(\delta_n)$ and $(\alpha_n)$, we conclude that $\|F(u_n) - v_n\| \to 0$. For the penalty term we have $\mathcal{R}_\varepsilon(u_n) \le \frac{\delta_n^2}{\alpha_n} + \mathcal{R}_\varepsilon(\tilde{u})$, which yields
(15) $\limsup_{n\to\infty} \mathcal{R}_\varepsilon(u_n) \le \mathcal{R}_\varepsilon(\tilde{u})$
when using the definition of the limit superior. Let $c > \mathcal{R}_\varepsilon(\tilde{u})$; from the previous inequality there exists $n_0 \in \mathbb{N}$ such that
$\mathcal{R}_\varepsilon(u_n) \le c \quad\text{for all } n \ge n_0.$
Therefore, Lemma 6 guarantees the existence of a subsequence $(u_{n_k})_{k \in \mathbb{N}}$ and some $u^\dagger$ such that $u_{n_k} \rightharpoonup u^\dagger$ and $F(u_{n_k}) \rightharpoonup F(u^\dagger)$. Since
$\|F(u^\dagger) - v\| \le \liminf_{k\to\infty} \|F(u_{n_k}) - v_{n_k}\| = 0,$
it follows that $F(u^\dagger) = v$, i.e., $u^\dagger$ is a solution. From the weak lower semicontinuity of $\mathcal{R}_\varepsilon$ and the fact that (15) holds for any $\tilde{u}$ solving $F(u) = v$, we conclude
$\mathcal{R}_\varepsilon(u^\dagger) \le \liminf_{k\to\infty} \mathcal{R}_\varepsilon(u_{n_k}) \le \mathcal{R}_\varepsilon(\tilde{u}).$
This shows that $u^\dagger$ is an $\mathcal{R}_\varepsilon$-minimizing solution of $F(u) = v$ and $u_{n_k} \rightharpoonup u^\dagger$. ∎
2.1 Stability and convergence for vanishing tolerances
In the previous results we always assumed a positive constant tolerance $\varepsilon > 0$. In this section, we consider a nonnegative tolerance sequence $(\varepsilon_n)_{n \in \mathbb{N}}$ such that $\varepsilon_n \to 0$. When the limit point of $(\varepsilon_n)$ is $0$, we observe that $|x|_0 = |x|$ gives
(16) $T_{\alpha,0}^\delta(u) = \|F(u) - v^\delta\|^2 + \alpha\,\|u\|_{L^p}^p.$
Therefore, we obtain minimizers of the generalized Tikhonov functional. For that reason, the minimizer of $T_{\alpha,0}^\delta$ is denoted by $u_\alpha^\delta$.
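The vanishing-tolerance limit can also be observed numerically in a toy problem (an illustrative sketch with the identity as forward operator and $p = 2$; all values below are assumptions for the demonstration): as the tolerance decreases, the minimizer approaches the classical Tikhonov minimizer $v/(1+\alpha)$.

```python
import numpy as np

def grid_minimizer(v, alpha, eps, grid):
    """Grid-search minimizer of (u - v)^2 + alpha * max(0, |u| - eps)^2."""
    vals = (grid - v) ** 2 + alpha * np.maximum(0.0, np.abs(grid) - eps) ** 2
    return grid[int(np.argmin(vals))]

v, alpha = 1.0, 1.0
grid = np.linspace(-2.0, 2.0, 400001)
for eps in (0.5, 0.1, 0.01, 0.0):
    # closed form for v > eps: u* = (v + alpha*eps) / (1 + alpha) -> v / (1 + alpha)
    # so u* is 0.75, 0.55, 0.505, 0.5 for the tolerances above
    print(eps, round(grid_minimizer(v, alpha, eps, grid), 3))
```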
Theorem 11 (Stability for $\varepsilon_n \to 0$).
Assume $\alpha > 0$. Let $(v_n)_{n \in \mathbb{N}}$ converge to $v^\delta$, let $(\varepsilon_n)_{n \in \mathbb{N}}$ be a tolerance sequence converging to $0$ and let
$u_n \in \operatorname*{arg\,min}_u T_{\alpha,\varepsilon_n}^{v_n}(u).$
Then, there exist a subsequence $(u_{n_k})_{k \in \mathbb{N}}$ and a minimizer $\bar{u}$ of the functional $T_{\alpha,0}^\delta$ such that $u_{n_k} \rightharpoonup \bar{u}$.
Proof.
The minimizing property of $u_n$ gives that $T_{\alpha,\varepsilon_n}^{v_n}(u_n) \le T_{\alpha,\varepsilon_n}^{v_n}(u)$ for all $u \in \mathcal{D}(F)$. Lemma 6 guarantees the existence of a subsequence of $(u_n)_{n \in \mathbb{N}}$, denoted by $(u_{n_k})_{k \in \mathbb{N}}$, which converges weakly to some $\bar{u}$ and is such that $F(u_{n_k}) \rightharpoonup F(\bar{u})$. From the weak lower semicontinuity of the norm and of the penalty and the fact that $\varepsilon_{n_k} \to 0$, we have that
(17) $T_{\alpha,0}^\delta(\bar{u}) \le \liminf_{k\to\infty} T_{\alpha,\varepsilon_{n_k}}^{v_{n_k}}(u_{n_k}).$
On the other hand, since $\mathcal{R}_{\varepsilon_n}(u) \le \|u\|_{L^p}^p$, for any $u \in \mathcal{D}(F)$ we have
$\limsup_{k\to\infty} T_{\alpha,\varepsilon_{n_k}}^{v_{n_k}}(u_{n_k}) \le \limsup_{k\to\infty} T_{\alpha,\varepsilon_{n_k}}^{v_{n_k}}(u) \le T_{\alpha,0}^\delta(u).$
Hence, based on the notation in (16), we obtain
$T_{\alpha,0}^\delta(\bar{u}) \le T_{\alpha,0}^\delta(u)$
for all $u \in \mathcal{D}(F)$, implying that $\bar{u}$ is a minimizer of $T_{\alpha,0}^\delta$. Moreover, combining the last two displays with $u = \bar{u}$, and due to the fact that both the discrepancy and the penalty are weakly lower semicontinuous, it follows that $\lim_{k\to\infty} \|u_{n_k}\|_{L^p} = \|\bar{u}\|_{L^p}$. Then, with the use of [15, Lemma 2] we conclude that $u_{n_k} \to \bar{u}$. ∎
Theorem 12 (Convergence for $\varepsilon_n \to 0$).
Let $(\varepsilon_n)_{n \in \mathbb{N}}$ be a tolerance sequence converging to $0$. We assume that $F(u) = v$ attains a solution in $\mathcal{D}(F)$ and that $\alpha : (0,\infty) \to (0,\infty)$ satisfies
$\alpha(\delta) \to 0 \quad\text{and}\quad \frac{\delta^2}{\alpha(\delta)} \to 0 \quad\text{as } \delta \to 0.$
Let $(\delta_n)_{n \in \mathbb{N}}$ converge to $0$ and let $(v_n)_{n \in \mathbb{N}}$ satisfy $\|v - v_n\| \le \delta_n$. Moreover, let $\alpha_n := \alpha(\delta_n)$ and
$u_n \in \operatorname*{arg\,min}_u T_{\alpha_n,\varepsilon_n}^{v_n}(u).$
Then, there exist a $\|\cdot\|_{L^p}^p$-minimizing solution $u^\dagger$ of $F(u) = v$ and a subsequence $(u_{n_k})_{k \in \mathbb{N}}$ with $u_{n_k} \rightharpoonup u^\dagger$.
Proof.
Let $\tilde{u}$ be any solution of $F(u) = v$. The minimizing property of $u_n$ implies
$\|F(u_n) - v_n\|^2 + \alpha_n \mathcal{R}_{\varepsilon_n}(u_n) \le \delta_n^2 + \alpha_n \mathcal{R}_{\varepsilon_n}(\tilde{u}).$
Therefore, it follows that $\|F(u_n) - v_n\|^2 \le \delta_n^2 + \alpha_n \mathcal{R}_{\varepsilon_n}(\tilde{u})$. Then, taking the limit for $n \to \infty$ yields $\|F(u_n) - v_n\| \to 0$, since we assumed that $\delta_n \to 0$ and $\alpha_n \to 0$ as $n \to \infty$. In a similar way, for the penalty term we have
$\mathcal{R}_{\varepsilon_n}(u_n) \le \frac{\delta_n^2}{\alpha_n} + \mathcal{R}_{\varepsilon_n}(\tilde{u}) \le \frac{\delta_n^2}{\alpha_n} + \|\tilde{u}\|_{L^p}^p,$
that is, the penalty values remain bounded. Taking the limit superior as $n \to \infty$ we obtain
(18) $\limsup_{n\to\infty} \mathcal{R}_{\varepsilon_n}(u_n) \le \|\tilde{u}\|_{L^p}^p,$
which is true for any solution $\tilde{u}$ of $F(u) = v$.
With $c > \|\tilde{u}\|_{L^p}^p$ and the previous calculation, there exists a constant $n_0 \in \mathbb{N}$ such that
$\mathcal{R}_{\varepsilon_n}(u_n) \le c \quad\text{for all } n \ge n_0.$
From Lemma 6, there exists a subsequence $(u_{n_k})_{k \in \mathbb{N}}$ weakly convergent to some $u^\dagger$ such that $F(u_{n_k}) \rightharpoonup F(u^\dagger)$. Since
(19) $\|F(u^\dagger) - v\| \le \liminf_{k\to\infty} \|F(u_{n_k}) - v_{n_k}\| = 0,$
it follows that
$F(u^\dagger) = v.$
From the weak lower semicontinuity of the norm, the Minkowski inequality $\|u\|_{L^p} \le \||u|_\varepsilon\|_{L^p} + \varepsilon\,|\Omega|^{1/p}$, the fact that $\varepsilon_{n_k} \to 0$ and (18), we obtain that
$\|u^\dagger\|_{L^p} \le \liminf_{k\to\infty} \|u_{n_k}\|_{L^p} \le \limsup_{k\to\infty} \Big(\mathcal{R}_{\varepsilon_{n_k}}(u_{n_k})^{1/p} + \varepsilon_{n_k}|\Omega|^{1/p}\Big) \le \|\tilde{u}\|_{L^p}$
for all $\tilde{u}$ such that $F(\tilde{u}) = v$. Using the notation in (16), we conclude that $\|u^\dagger\|_{L^p}^p \le \|\tilde{u}\|_{L^p}^p$ for all $\tilde{u}$ such that $F(\tilde{u}) = v$. Hence, $u^\dagger$ is a $\|\cdot\|_{L^p}^p$-minimizing solution of $F(u) = v$. Due to $u_{n_k} \rightharpoonup u^\dagger$ and the fact that $\|u_{n_k}\|_{L^p} \to \|u^\dagger\|_{L^p}$, and using [15, Lemma 2], we further conclude that $u_{n_k} \to u^\dagger$. ∎
3 Convergence rates
In this section we present results on the convergence rates of minimizers of the functional (11). Since we assume the parameter space to be a Banach space, we adopt the standard approach in Banach space settings and use the Bregman distance to estimate the difference between the regularized solution $u_{\alpha,\varepsilon}^\delta$ and the ground truth $u^\dagger$. Some standard results on convergence rates are found in [6, 10, 15, 22, 25], while in [14, 17, 33] convergence rates results using the Bregman distance are given. Moreover, for estimating the distance between $F(u_{\alpha,\varepsilon}^\delta)$ and $v^\delta$, we use the usual norm of the Banach space $\mathcal{V}$. The definition of the Bregman distance for $\mathcal{R}_\varepsilon$ requires the subdifferential of the functional at an element $\tilde{u}$, which is given by
$\partial\mathcal{R}_\varepsilon(\tilde{u}) := \big\{\xi \in \mathcal{U}^* : \mathcal{R}_\varepsilon(u) \ge \mathcal{R}_\varepsilon(\tilde{u}) + \langle \xi, u - \tilde{u} \rangle \ \text{for all } u \in \mathcal{U}\big\},$
where $\mathcal{U}^*$ denotes the dual space of $\mathcal{U}$ and $\langle \cdot, \cdot \rangle$ the dual pairing between $\mathcal{U}^*$ and $\mathcal{U}$. Particularly for finite-dimensional problems, like the numerical example presented in the next section, the $\varepsilon$-insensitive measure appearing in the regularization functional is defined by $\|u\|_{\varepsilon,p}^p = \sum_{k=1}^N |u_k|_\varepsilon^p$. Using the classical subdifferential rules, for $p = 1$ we compute the subdifferential
(20) $\partial\|u\|_{\varepsilon,1} = \big\{\xi \in \mathbb{R}^N : \xi_k \in \partial|u_k|_\varepsilon,\ k = 1,\dots,N\big\},$
with $k$-th sum component given by
(21) $\partial|u_k|_\varepsilon = \begin{cases} \{-1\}, & u_k < -\varepsilon, \\ [-1,\,0], & u_k = -\varepsilon, \\ \{0\}, & |u_k| < \varepsilon, \\ [0,\,1], & u_k = \varepsilon, \\ \{1\}, & u_k > \varepsilon. \end{cases}$
Similarly, for $p = 2$ we have
(22) $\partial\|u\|_{\varepsilon,2}^2 = \big\{\nabla\|u\|_{\varepsilon,2}^2\big\},$
with $k$-th sum component computed as
(23) $\big(\nabla\|u\|_{\varepsilon,2}^2\big)_k = 2\,|u_k|_\varepsilon\operatorname{sgn}(u_k) = \begin{cases} 2(u_k + \varepsilon), & u_k < -\varepsilon, \\ 0, & |u_k| \le \varepsilon, \\ 2(u_k - \varepsilon), & u_k > \varepsilon. \end{cases}$
Note that the tolerance function is applied in a componentwise sense for computing the above subdifferentials. The previous computations are confirmed by the subdifferential formula for separable sums,
(24) $\partial\Big(\sum_{k=1}^N |u_k|_\varepsilon\Big) = \partial|u_1|_\varepsilon \times \dots \times \partial|u_N|_\varepsilon,$
where each factor $\partial|u_k|_\varepsilon$ is determined by (21).
It is worth noting that if the tolerance is not scalar but is given as a vector $\varepsilon \in \mathbb{R}^N$ with positive entries, then instead of $\varepsilon$ the component $\varepsilon_k$ appears in all of the above calculations. Given the subdifferential of $\mathcal{R}_\varepsilon$, we proceed with the Bregman distance and the convergence rates.
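The componentwise formulas (21) and (23) translate directly into code; the following sketch (illustrative, with an arbitrary single-valued selection at the kinks for $p = 1$) checks the smooth $p = 2$ case against central finite differences away from the nondifferentiable points $|u_k| = \varepsilon$:

```python
import numpy as np

def subgrad_p1(u, eps):
    """One subgradient of sum_k max(0, |u_k| - eps) (p = 1); 0 is chosen at the kinks."""
    g = np.zeros_like(u)
    g[u > eps] = 1.0
    g[u < -eps] = -1.0
    return g

def grad_p2(u, eps):
    """Gradient of sum_k max(0, |u_k| - eps)^2 (p = 2): 2 * |u_k|_eps * sign(u_k)."""
    return 2.0 * np.maximum(0.0, np.abs(u) - eps) * np.sign(u)

rng = np.random.default_rng(1)
u = rng.normal(scale=3.0, size=20)
eps, h = 1.0, 1e-6
f = lambda x: np.sum(np.maximum(0.0, np.abs(x) - eps) ** 2)
fd = np.array([(f(u + h * e) - f(u - h * e)) / (2 * h) for e in np.eye(u.size)])
mask = np.abs(np.abs(u) - eps) > 1e-3  # stay away from the kinks
assert np.allclose(fd[mask], grad_p2(u, eps)[mask], atol=1e-5)
```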
Definition 13 (Bregman distance).
Let $\mathcal{U}$ be a Banach space. Also, let $\mathcal{R} : \mathcal{U} \to \mathbb{R} \cup \{+\infty\}$ be a convex and proper functional with subdifferential $\partial\mathcal{R}$. Considering an element $\tilde{u} \in \mathcal{U}$ and $\xi \in \partial\mathcal{R}(\tilde{u})$, the Bregman distance of $\mathcal{R}$ at $\tilde{u}$ is defined by
(25) $D_\xi(u, \tilde{u}) := \mathcal{R}(u) - \mathcal{R}(\tilde{u}) - \langle \xi, u - \tilde{u} \rangle$
for $u \in \mathcal{U}$, and it is only defined in the Bregman domain
$\mathcal{D}_B(\mathcal{R}) := \{\tilde{u} \in \mathcal{U} : \partial\mathcal{R}(\tilde{u}) \ne \emptyset\}.$
For notational simplicity, we use the usual inner product notation for the dual pairing. Since we work in Banach spaces, there should not be any confusion with the notation of inner products in Hilbert spaces. Moreover, when writing $D(u, \tilde{u})$ for $u \in \mathcal{U}$ and $\tilde{u} \in \mathcal{D}_B(\mathcal{R})$, we mean that there exists $\xi \in \partial\mathcal{R}(\tilde{u})$ such that $D(u, \tilde{u}) = D_\xi(u, \tilde{u})$.
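For the differentiable case $p = 2$, the Bregman distance (25) of the discrete penalty can be computed explicitly; below is a minimal sketch (an illustrative discrete setting with assumed random data) that also checks the nonnegativity guaranteed by convexity:

```python
import numpy as np

def R_eps(u, eps):
    """Discrete penalty sum_k max(0, |u_k| - eps)^2 (p = 2)."""
    return np.sum(np.maximum(0.0, np.abs(u) - eps) ** 2)

def grad_R_eps(u, eps):
    return 2.0 * np.maximum(0.0, np.abs(u) - eps) * np.sign(u)

def bregman(u, u_tilde, eps):
    """Bregman distance D_xi(u, u_tilde), with the gradient as the unique xi."""
    xi = grad_R_eps(u_tilde, eps)
    return R_eps(u, eps) - R_eps(u_tilde, eps) - xi @ (u - u_tilde)

rng = np.random.default_rng(2)
for _ in range(500):
    u, ut = rng.normal(size=10), rng.normal(size=10)
    assert bregman(u, ut, eps=0.5) >= -1e-12  # convexity implies D >= 0
```

Note that $D$ can vanish for $u \ne \tilde{u}$, e.g., when both points lie inside the tolerance band, so the Bregman distance of this penalty is not a metric.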
The classical process for proving convergence rates requires an additional assumption on the smoothness of $F$ (a restriction of its nonlinearity), as well as a source condition (in [18, 30] general source conditions are discussed) which allows the estimation of the duality pairing appearing in the Bregman distance. Both are included in the following assumption.
Assumption 14 (Smoothness of $F$ and source condition).
Assume that the following hold:
(i) The operator $F$ is Gâteaux differentiable at $u^\dagger$ and $F'(u^\dagger)$ denotes its Gâteaux derivative.
(ii) There exists a constant $\eta > 0$, such that
$\|F(u) - F(u^\dagger) - F'(u^\dagger)(u - u^\dagger)\| \le \eta\,\|F(u) - F(u^\dagger)\|$
for all $u \in \mathcal{D}(F)$ with $\|u - u^\dagger\| \le \rho$, with a sufficiently large $\rho > 0$.
(iii) There exists $\omega \in \mathcal{V}^*$, such that $\xi = F'(u^\dagger)^*\omega$ with $\xi \in \partial\mathcal{R}_\varepsilon(u^\dagger)$.
Theorem 15 (Convergence rates).
Let $\alpha > 0$, $\varepsilon > 0$ and set $c_1 := (1 + \eta)\,\|\omega\|$. Moreover, we consider that Assumptions 4 and 14 hold. Assume noisy data $v^\delta$ such that $\|v - v^\delta\| \le \delta$ and that there exists an $\mathcal{R}_\varepsilon$-minimizing solution $u^\dagger$ of (1) in the Bregman domain $\mathcal{D}_B(\mathcal{R}_\varepsilon)$. For the minimizer $u_{\alpha,\varepsilon}^\delta$ of (11), we prove the following estimates:
(i) If $\alpha$ is sufficiently small such that $\alpha c_1 \le \|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\|$,
$D_\xi(u_{\alpha,\varepsilon}^\delta, u^\dagger) \le \frac{\delta^2}{\alpha} + c_1\delta.$
(ii) If $\alpha c_1 > \|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\|$,
$D_\xi(u_{\alpha,\varepsilon}^\delta, u^\dagger) \le \frac{\delta^2}{\alpha} + c_1\delta + \frac{\alpha c_1^2}{2}.$
Moreover, we have:
(iii) For $\delta \to 0$ and the choice $\alpha \sim \delta$ with fixed constant of proportionality,
$D_\xi(u_{\alpha,\varepsilon}^\delta, u^\dagger) = \mathcal{O}(\delta).$
(iv) For $\delta \to 0$ and the choice $\alpha \sim \delta$,
$\|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\| = \mathcal{O}(\delta).$
Proof.
We start by comparing the functional values $T_{\alpha,\varepsilon}^\delta(u_{\alpha,\varepsilon}^\delta)$ and $T_{\alpha,\varepsilon}^\delta(u^\dagger)$. From the minimizing property of $u_{\alpha,\varepsilon}^\delta$, we obtain
$\|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\|^2 + \alpha\,\mathcal{R}_\varepsilon(u_{\alpha,\varepsilon}^\delta) \le \delta^2 + \alpha\,\mathcal{R}_\varepsilon(u^\dagger).$
Then, by reordering and gathering terms we use the Bregman distance $D_\xi(u_{\alpha,\varepsilon}^\delta, u^\dagger)$, which yields
$\|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\|^2 + \alpha\,D_\xi(u_{\alpha,\varepsilon}^\delta, u^\dagger) \le \delta^2 + \alpha\,\langle \xi, u^\dagger - u_{\alpha,\varepsilon}^\delta \rangle.$
In the next step we employ the source condition (iii) of Assumption 14 for rewriting the last term, which results into
(26) $\|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\|^2 + \alpha\,D_\xi(u_{\alpha,\varepsilon}^\delta, u^\dagger) \le \delta^2 + \alpha\,\langle \omega, F'(u^\dagger)(u^\dagger - u_{\alpha,\varepsilon}^\delta) \rangle.$
Now, we focus on the dual pairing of the last term, for which we have
$\langle \omega, F'(u^\dagger)(u^\dagger - u_{\alpha,\varepsilon}^\delta) \rangle \le \|\omega\|\,\|F'(u^\dagger)(u^\dagger - u_{\alpha,\varepsilon}^\delta)\|.$
Adding and subtracting $F(u_{\alpha,\varepsilon}^\delta) - F(u^\dagger)$ inside the last term and using the triangle inequality, yields
$\|F'(u^\dagger)(u^\dagger - u_{\alpha,\varepsilon}^\delta)\| \le \|F(u_{\alpha,\varepsilon}^\delta) - F(u^\dagger) - F'(u^\dagger)(u_{\alpha,\varepsilon}^\delta - u^\dagger)\| + \|F(u_{\alpha,\varepsilon}^\delta) - F(u^\dagger)\|.$
Furthermore, we use the smoothness assumption on $F$ defined in (ii) of Assumption 14 to write
$\|F'(u^\dagger)(u^\dagger - u_{\alpha,\varepsilon}^\delta)\| \le (1 + \eta)\,\|F(u_{\alpha,\varepsilon}^\delta) - F(u^\dagger)\|,$
and by defining the constant $c_1$ such that $c_1 := (1 + \eta)\,\|\omega\|$, we further obtain
(27) $\langle \omega, F'(u^\dagger)(u^\dagger - u_{\alpha,\varepsilon}^\delta) \rangle \le c_1\,\|F(u_{\alpha,\varepsilon}^\delta) - F(u^\dagger)\|.$
In addition, we can estimate the term $\|F(u_{\alpha,\varepsilon}^\delta) - F(u^\dagger)\|$. We add and subtract $v^\delta$ and use the triangle inequality to conclude
(28) $\|F(u_{\alpha,\varepsilon}^\delta) - F(u^\dagger)\| \le \|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\| + \|v^\delta - F(u^\dagger)\| \le \|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\| + \delta.$
Substituting the estimates (27), (28) into (26), we have
(29) $\|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\|^2 + \alpha\,D_\xi(u_{\alpha,\varepsilon}^\delta, u^\dagger) \le \delta^2 + \alpha c_1\,\|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\| + \alpha c_1\,\delta.$
For case (i), rearranging (29) yields
$\|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\|\,\big(\|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\| - \alpha c_1\big) + \alpha\,D_\xi(u_{\alpha,\varepsilon}^\delta, u^\dagger) \le \delta^2 + \alpha c_1\,\delta.$
For sufficiently small $\alpha$ such that $\alpha c_1 \le \|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\|$, the first term is nonnegative. Moreover, the second term is nonnegative by assumption since $\xi \in \partial\mathcal{R}_\varepsilon(u^\dagger)$. Therefore, we can derive the following estimates
$D_\xi(u_{\alpha,\varepsilon}^\delta, u^\dagger) \le \frac{\delta^2}{\alpha} + c_1\delta \quad\text{and}\quad \alpha\,D_\xi(u_{\alpha,\varepsilon}^\delta, u^\dagger) \le \delta^2 + \alpha c_1\delta.$
Choosing $\alpha \sim \delta$ with fixed constant of proportionality, we obtain
$D_\xi(u_{\alpha,\varepsilon}^\delta, u^\dagger) = \mathcal{O}(\delta).$
For case (ii), we have $\alpha c_1 > \|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\|$ and the first term above may become negative. Applying Young's inequality with the conjugate exponents $2$ and $2$, i.e., $\alpha c_1\,\|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\| \le \frac{1}{2}\|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\|^2 + \frac{\alpha^2 c_1^2}{2}$, yields
$\frac{1}{2}\|F(u_{\alpha,\varepsilon}^\delta) - v^\delta\|^2 + \alpha\,D_\xi(u_{\alpha,\varepsilon}^\delta, u^\dagger) \le \delta^2 + \alpha c_1\delta + \frac{\alpha^2 c_1^2}{2},$
from which the estimate in (ii) and, for the choice $\alpha \sim \delta$, the rates in (iii) and (iv) follow. ∎