1 Introduction
The goal of image reconstruction is to obtain an image of the object of interest from indirectly measured, and typically noisy, data. Mathematically, image reconstruction problems are commonly formulated as inverse problems that can be written in the form of operator equations

(1.1) $Au = f,$

where $u \in U$ is the unknown image, $f \in F$ is the measurement and $A \colon U \to F$ is a forward operator that models the data acquisition. In this paper, we consider linear forward operators and assume that equation (1.1) with exact data $f$ and operator $A$ has a unique solution that we denote by $\bar u$.
In practice, not only the right-hand side $f$ is noisy, but the operator $A$ is often not exact either, as it contains errors that come from imperfect calibration measurements.
Uncertainty in the operator and in the data may be characterised by the inclusions $A \in \mathcal{A}$ and $f \in \mathcal{F}$ for some sets $\mathcal{A}$ and $\mathcal{F}$. These sets may be referred to as uncertainty sets – a concept widely used in robust optimisation [Bertsimas_rob_opt_2011]. Given $\mathcal{A}$ and $\mathcal{F}$, we would like to find a subset $\Phi \subset U$, called the feasible set, that contains the exact solution $\bar u$. Depending on the particular form of the uncertainty sets $\mathcal{A}$ and $\mathcal{F}$ and on available additional a priori information about $\bar u$, the inclusion $\bar u \in \Phi$ can be proven for different feasible sets. Two main considerations affect the choice of a particular feasible set: its size (smaller feasible sets are preferred) and the availability of efficient numerical algorithms for optimisation problems involving the feasible set (therefore, convex feasible sets are preferred).
For ill-posed problems, available feasible sets are, in general, too large and contain elements arbitrarily far from $\bar u$. An exception is the case when a compact set that contains the exact solution is known a priori [TGSYag]; in this case, a feasible set of finite diameter can be obtained. In the general case, an appropriate regularisation functional needs to be minimised over the feasible set to find a stable approximation to $\bar u$.
This is the idea behind the residual method [IvanovVasinTanana, GrasmairHalmeierScherzer2011]. Operating in normed spaces, one can define the uncertainty sets as follows:

(1.2) $\mathcal{A} = \{A' \colon \|A' - A_h\| \le h\}, \qquad \mathcal{F} = \{f' \colon \|f' - f_\delta\| \le \delta\}$

for an approximate right-hand side $f_\delta$, approximate forward operator $A_h$ and approximation errors $h$ and $\delta$ [IvanovVasinTanana]. Using the information in (1.2), one can define a feasible set as follows [IvanovVasinTanana]:

(1.3) $\Phi_{\mathrm{res}} = \{u \in U \colon \|A_h u - f_\delta\| \le h\|u\| + \delta\}.$
This set contains all elements of $U$ that are consistent with (1.1) within the tolerances given by (1.2). The inclusion $\bar u \in \Phi_{\mathrm{res}}$ can be easily verified. Unless $h = 0$ (i.e. the forward operator is exact), the set $\Phi_{\mathrm{res}}$ is non-convex and the residual method results in a non-convex optimisation problem even for convex regularisation functionals.
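The non-convexity can be seen already in a small numerical example. The following sketch (a toy illustration with made-up numbers, not taken from this paper) implements the membership test $\|A_h u - f_\delta\| \le h\|u\| + \delta$ for (1.3) and exhibits two feasible points whose midpoint is infeasible:

```python
import numpy as np

# Membership test for the norm-based feasible set of the residual method:
# u is feasible iff ||A_h u - f_delta|| <= h ||u|| + delta.
def in_residual_set(u, A_h, f_delta, h, delta):
    return np.linalg.norm(A_h @ u - f_delta) <= h * np.linalg.norm(u) + delta

# Deliberately extreme toy instance: the approximate operator carries
# all of the uncertainty (A_h = 0, h = 1) while the data are exact.
A_h = np.zeros((2, 2))
f_delta = np.array([1.0, 0.0])
h, delta = 1.0, 0.0

u1 = np.array([1.0, 0.0])
u2 = np.array([-1.0, 0.0])
mid = 0.5 * (u1 + u2)                    # the origin

feasible_u1 = in_residual_set(u1, A_h, f_delta, h, delta)    # True
feasible_u2 = in_residual_set(u2, A_h, f_delta, h, delta)    # True
feasible_mid = in_residual_set(mid, A_h, f_delta, h, delta)  # False
```

Here the feasible set is the complement of the open unit ball, which is clearly non-convex.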
An alternative approach to modelling uncertainty in $A$ and $f$ using partially ordered spaces was proposed in [Kor_IP:2014, Kor_Yag_IP:2013]. Assume that $U$ and $F$ are Banach lattices, i.e. Banach spaces with a partial order “$\le$”, and that $A$ is a regular operator [Schaefer]. Then, uncertainties in $A$ and $f$ can be characterised using intervals in appropriate partial orders, i.e.

(1.4) $A^l \le A \le A^u \text{ in } \mathcal{L}^r(U,F), \qquad f^l \le f \le f^u \text{ in } F,$

where $\mathcal{L}^r(U,F)$ is the space of regular operators $U \to F$. Assuming positivity of the exact solution, $\bar u \ge 0$, and using the inequalities in (1.4), we can show that the exact solution is contained in the following feasible set [Kor_IP:2014]:

(1.5) $\Phi_{\mathrm{po}} = \{u \in U \colon u \ge 0, \; A^l u \le f^u, \; A^u u \ge f^l\}.$
In contrast to (1.3), the set in (1.5) is convex, and minimising a convex regularisation functional $R$ over this set results in a convex optimisation problem:

(1.6) $\min_{u \in \Phi_{\mathrm{po}}} R(u).$
Using the relationship between partial orders and norms in Banach lattices, one can prove the inclusion of the partial-order-based feasible set (1.5) in the norm-based feasible set (1.3) for appropriately chosen $A_h$, $f_\delta$, $h$ and $\delta$. We briefly review the partial-order-based approach in Section 2.
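In finite dimensions the membership test for the partial-order-based set (1.5) reduces to elementwise inequalities, which also makes its convexity easy to verify numerically. A short sketch (our own illustration; the matrices `A_lo`, `A_up` and vectors `f_lo`, `f_up` play the roles of the operator and data bounds in (1.4)):

```python
import numpy as np

rng = np.random.default_rng(0)

# Elementwise membership test for (1.5): u >= 0, A_lo u <= f_up, A_up u >= f_lo.
def in_po_set(u, A_lo, A_up, f_lo, f_up):
    return (u >= 0).all() and (A_lo @ u <= f_up).all() and (A_up @ u >= f_lo).all()

# Random instance with a known exact solution u_bar and exact data f.
A = rng.uniform(0.0, 1.0, size=(5, 4))
u_bar = rng.uniform(0.0, 1.0, size=4)
f = A @ u_bar
A_lo, A_up = A - 0.1, A + 0.1            # operator interval
f_lo, f_up = f - 0.2, f + 0.2            # data interval

# The exact solution is always feasible.
exact_feasible = in_po_set(u_bar, A_lo, A_up, f_lo, f_up)

# Convexity: a segment between two feasible points stays feasible.
u2 = np.clip(u_bar - 0.01, 0.0, None)    # a nearby feasible point
segment_feasible = all(
    in_po_set(t * u_bar + (1 - t) * u2, A_lo, A_up, f_lo, f_up)
    for t in np.linspace(0.0, 1.0, 11)
)
```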
While convergence of the minimisers of (1.6) to the exact solution $\bar u$ can be guaranteed [Kor_IP:2014], it is not clear whether the solution to (1.6) actually corresponds to a particular pair $(A, f)$ within the bounds (1.4). It seems more natural to look for approximate solutions in the following set:

(1.7) $\tilde\Phi = \{u \in U \colon u \ge 0 \text{ and } \exists (A, f) \text{ satisfying (1.4) such that } Au = f\},$

i.e. to assume that, while the exact operator and noise-free measurements are unknown, there has to exist at least one pair $(A, f)$ within the uncertainty bounds (1.4) that exactly explains the solution.
It is not clear a priori whether the set (1.7) is convex. In this paper we show that the sets (1.5) and (1.7) in fact coincide, which implies convexity of $\tilde\Phi$ (see Section 3) and shows that the convex problem (1.6) actually implements the natural formulation (1.7).
It is tempting to add an additional constraint on the operator $A$ in (1.7), reflecting additional a priori information about $A$. For instance, if $A$ is a blurring operator, then (after finite-dimensional approximation) the rows of the matrix $A$ should sum up to one, i.e. we should have that

$A \mathbb{1}_n = \mathbb{1}_m$

for two vectors of ones of appropriate lengths. More generally, one can define an additional linear constraint

$Av = g$

for a fixed pair $(v, g)$ to obtain

(1.8) $\tilde\Phi_{v,g} = \{u \in U \colon u \ge 0 \text{ and } \exists (A, f) \text{ satisfying (1.4) such that } Au = f \text{ and } Av = g\}.$
Unfortunately, even such a simple constraint breaks the convexity of the feasible set. We demonstrate this in Section 4 by explicitly describing the set (1.8) in finite dimensions in a special case. We also argue that the additional constraint can still be useful to tighten the bounds if they were not carefully chosen initially.
Since the set of pairs $(A, f)$ satisfying (1.4) and $Av = g$ is convex (in $(A, f)$), the analysis of Section 4 shows that convexity of an additional constraint set does not guarantee convexity of the corresponding feasible set in $u$. Because of this negative result, we confine ourselves to the set (1.5) in our numerical experiments.
In Section 5 we consider an application in image deblurring with uncertainty in the blurring kernel. In many applications, such as astronomy or fluorescence microscopy, the blurring kernel (often referred to as the point spread function) is obtained experimentally by recording light from reference stars [Telescope_PSF] or by imaging subresolution fluorescent particles [Shaw_PSF_91]. Such blurring kernels inevitably contain errors that can significantly impact the reconstruction. Blind deconvolution [Chan_Wong_blind_deconvolution_98, Perrone_Favaro_blind_deconvolution_16] aims at reconstructing both the blurring kernel and the image simultaneously, but suffers from severe ill-posedness and non-convexity. The approach we propose takes into account the errors in the available blurring kernel (without attempting to obtain a better estimate of it) while staying within the convex setting.
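One simple way to obtain operator bounds as in (1.4) for deblurring — assuming only a pointwise error bound on the measured kernel, which is our illustrative assumption rather than a construction taken from this paper — is to bracket the kernel pointwise and form the corresponding convolution matrices:

```python
import numpy as np

def conv_matrix(kernel, n):
    """Dense matrix of 1-D circular convolution with the given kernel."""
    k = len(kernel)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(k):
            A[i, (i + j - k // 2) % n] = kernel[j]
    return A

# Measured kernel and a pointwise error bound (hypothetical numbers).
k_meas = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
err = 0.05
k_lo = np.clip(k_meas - err, 0.0, None)   # blurring kernels are nonnegative
k_up = k_meas + err

n = 16
A_lo, A_up = conv_matrix(k_lo, n), conv_matrix(k_up, n)

# For any admissible kernel between k_lo and k_up and any image u >= 0,
# the bounds A_lo u <= A u <= A_up u hold elementwise.
rng = np.random.default_rng(1)
k_true = rng.uniform(k_lo, k_up)
u = rng.uniform(0.0, 1.0, size=n)
Au = conv_matrix(k_true, n) @ u
bracketed = (A_lo @ u <= Au + 1e-12).all() and (Au <= A_up @ u + 1e-12).all()
```

Since the matrices of the bracketing kernels dominate each other entrywise, the operator interval is valid for every nonnegative image.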
2 Brief overview of the partial-order-based approach
Lebesgue spaces, endowed with the partial order relation “$u \le v$ iff $u(x) \le v(x)$ almost everywhere”, become Banach lattices, i.e. partially ordered Banach spaces with well-defined suprema and infima of each pair of elements and a monotone norm [Schaefer]. If $U$ and $F$ are two Banach lattices, then the partial orders in $U$ and $F$ induce a partial order in a subspace of the space of linear operators acting from $U$ to $F$, namely in the space of regular operators $\mathcal{L}^r(U,F)$. A linear operator is called regular if it can be represented as a difference of two positive operators. Positivity of an operator is defined as follows: $A \ge 0$ iff $Au \ge 0$ for all $u \ge 0$. The partial order in the space of regular operators is introduced as follows: $A_1 \le A_2$ iff $A_2 - A_1$ is a positive operator. Every regular operator acting between two Banach lattices is continuous [Schaefer].
The framework of partially ordered function spaces allows quantifying uncertainty in the data and forward operator of the inverse problem (1.1) by means of order intervals (1.4). Approximate solutions to (1.1) are the minimisers of (1.6). Convergence of these minimisers to the exact solution of (1.1) is studied as the uncertainty in the data and forward operator diminishes. This is formalised using monotone sequences of lower and upper bounds (1.4):

(2.1) $A^l_n \le A^l_{n+1} \le A \le A^u_{n+1} \le A^u_n, \quad \|A^u_n - A^l_n\| \to 0, \qquad f^l_n \le f^l_{n+1} \le f \le f^u_{n+1} \le f^u_n, \quad \|f^u_n - f^l_n\| \to 0.$
If a sequence of lower (upper) bounds is not monotone, it can always be made monotone by successively taking the supremum (infimum) of each element in the sequence with the preceding one.
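In finite dimensions, with the elementwise order, this monotonisation is simply a cumulative maximum (for lower bounds) or cumulative minimum (for upper bounds) along the sequence; a small sketch with made-up numbers:

```python
import numpy as np

# Three successive lower and upper bounds for a two-dimensional data vector.
f_lo_seq = np.array([[0.0, 0.3], [0.2, 0.1], [0.1, 0.4]])
f_up_seq = np.array([[1.0, 1.2], [1.1, 0.9], [0.8, 1.0]])

# Replace each bound by its supremum (infimum) with all preceding ones.
f_lo_mono = np.maximum.accumulate(f_lo_seq, axis=0)
f_up_mono = np.minimum.accumulate(f_up_seq, axis=0)

# The new sequences are monotone and remain valid bounds: a supremum of
# lower bounds of f is itself a lower bound of f, and similarly for infima.
```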
Convergence of the corresponding sequence of minimisers of (1.6) to $\bar u$ is guaranteed by the following theorem [Kor_IP:2014]:
Theorem 2.1.
Let $U$ and $F$ be order complete^1 Banach lattices (^1 a Banach lattice is called order complete if every majorised set in it has a supremum). Suppose that the regulariser $R$ satisfies the following assumptions:

– $R$ is bounded from below on $U$;

– $R$ is lower semicontinuous;

– nonempty sublevel sets $\{u \in U \colon R(u) \le C\}$ are strongly sequentially compact.

Then the minimisers of (1.6) corresponding to the bounds (2.1) converge to $\bar u$ strongly in $U$.
The assumptions on the regulariser in Theorem 2.1 are rather standard. They are satisfied, for example, if $U = L^2(\Omega)$ and $R(u) = \mathrm{TV}(u) + \|u\|_{L^2}^2$. The term $\|u\|_{L^2}^2$ can be dropped if boundedness of the $L^2$ norm can be guaranteed for all $u$ in (1.5). The term $\mathrm{TV}(u)$ can be replaced by any topologically equivalent seminorm, such as $\mathrm{TGV}$ [bredies2009tgv] (see [l1tgv] for a proof of topological equivalence) or $\mathrm{TV}\text{-}L^p$ [Burger_TVLp_2016].
Strong compactness of the sublevel sets of $R$ can be replaced by weak compactness if $R$ has the Radon–Riesz property, i.e. if for any sequence $\{u_n\}$ weak convergence $u_n \rightharpoonup u$ along with convergence of the values $R(u_n) \to R(u)$ implies strong convergence $u_n \to u$. With this modification, Theorem 2.1 admits norms in reflexive Banach spaces as regularisers.
The constraint $u \ge 0$ in (1.5) is important. It can be relaxed to $u \ge c$ for some constant $c$ (not necessarily $c = 0$), with some modifications in the formulae [Kor_Yag_IP:2013], provided that the exact solution satisfies this constraint. If the exact solution may be unbounded from below, the method won’t work.
Let us briefly discuss the inclusion of the partial-order-based feasible set (1.5) in the norm-based feasible set (1.3). Let us choose

(2.2) $A_h = \frac{A^l + A^u}{2}, \quad f_\delta = \frac{f^l + f^u}{2}, \quad h = \frac{\|A^u - A^l\|}{2}, \quad \delta = \frac{\|f^u - f^l\|}{2}.$

It is easy to verify that $\|A - A_h\| \le h$, $\|f - f_\delta\| \le \delta$ and $h, \delta \to 0$ as $n \to \infty$. Indeed, we note that, since $A^l \le A \le A^u$, we get

$-\frac{A^u - A^l}{2} \le A - A_h \le \frac{A^u - A^l}{2}$

and therefore

(2.3) $|A - A_h| \le \frac{A^u - A^l}{2}.$

Since the space of regular operators $\mathcal{L}^r(U,F)$ with order complete $F$ is a Banach lattice under the so-called regular norm [Abramovich] and the regular norm is always greater than or equal to the operator norm [Schaefer], (2.3) implies

$\|A - A_h\| \le \left\|\frac{A^u - A^l}{2}\right\| = h.$

The inequality $\|f - f_\delta\| \le \delta$ can be shown analogously, and the convergence $h, \delta \to 0$ follows from (2.1). The proof of the inclusion of (1.5) in (1.3) with $A_h$, $f_\delta$, $h$ and $\delta$ as defined in (2.2) can be found in [Kor_IP:2014, Thm. 2]. Therefore, the partial-order-based feasible set (1.5) is contained in the norm-based feasible set (1.3) if the approximate operator, the approximate right-hand side and the approximation errors are chosen as in (2.2).
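This inclusion is easy to check numerically in finite dimensions. The sketch below (our own sanity check, using the sup-norm on vectors and its induced matrix norm, so that the norms in (2.2) are easy to evaluate) samples points from the partial-order-based set (1.5) and verifies that they satisfy the residual inequality of (1.3) with the midpoint choices (2.2):

```python
import numpy as np

rng = np.random.default_rng(2)

# Interval bounds around an exact operator A and exact data f = A u_bar.
A = rng.uniform(0.0, 1.0, size=(6, 5))
A_lo, A_up = A - 0.05, A + 0.05
u_bar = rng.uniform(0.0, 1.0, size=5)
f = A @ u_bar
f_lo, f_up = f - 0.1, f + 0.1

# Midpoint choices (2.2); the matrix infinity-norm is the operator norm
# induced by the sup-norm on vectors.
A_h = 0.5 * (A_lo + A_up)
f_d = 0.5 * (f_lo + f_up)
h = 0.5 * np.linalg.norm(A_up - A_lo, ord=np.inf)
delta = 0.5 * np.linalg.norm(f_up - f_lo, ord=np.inf)

def in_po_set(u):
    return (u >= 0).all() and (A_lo @ u <= f_up).all() and (A_up @ u >= f_lo).all()

def in_residual_set(u):
    return (np.linalg.norm(A_h @ u - f_d, ord=np.inf)
            <= h * np.linalg.norm(u, ord=np.inf) + delta)

# Sample around u_bar; every point of the partial-order set must also
# satisfy the residual inequality.
samples = u_bar + rng.uniform(-0.1, 0.1, size=(500, 5))
po_points = [u_bar] + [u for u in samples if in_po_set(u)]
included = all(in_residual_set(u) for u in po_points)
```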
3 Equivalence of $\Phi_{\mathrm{po}}$ and $\tilde\Phi$
Theorem 2.1 guarantees convergence of approximate solutions chosen from the partial-order-based feasible set (1.5) by minimising a regulariser over it as in (1.6). It is not clear, however, whether the minimisers solve (1.1) for any particular pair $(A, f)$ within the bounds (1.4). In this section, we give a positive answer to this question for regular integral operators [Abramovich] acting between two spaces $U$ and $F$ of $\mu$- and $\nu$-measurable functions, respectively, where $T$ and $S$ are sets, $\Sigma_T$ and $\Sigma_S$ are $\sigma$-algebras over these sets and $\mu$ and $\nu$ are measures. A linear operator $A \colon U \to F$ is called an integral operator if there exists a jointly measurable function $a(s,t)$ such that for each $u \in U$ we have

$(Au)(s) = \int_T a(s,t)\, u(t)\, d\mu(t)$

for almost all $s \in S$. An integral operator is regular if and only if the operator

$(|A|u)(s) = \int_T |a(s,t)|\, u(t)\, d\mu(t)$

has range in $F$ [Abramovich, Thm. 5.11].
Theorem 3.1.

Let $A^l, A^u$ be regular integral operators and let $u \in \Phi_{\mathrm{po}}$. Then there exists a pair $(A, f)$ satisfying (1.4) such that $Au = f$.
Proof.
Since $A^l$ and $A^u$ are integral operators, there exist jointly measurable functions $a^l(s,t)$ and $a^u(s,t)$ such that

$(A^l u)(s) = \int_T a^l(s,t)\, u(t)\, d\mu(t), \qquad (A^u u)(s) = \int_T a^u(s,t)\, u(t)\, d\mu(t)$

for almost all $s \in S$, and by [Abramovich, Thm. 5.5] we have $a^l(s,t) \le a^u(s,t)$ for almost all $(s,t) \in S \times T$.
Note that, as an immediate consequence of [Abramovich, Thm. 5.9], every operator $A$ that satisfies $A^l \le A \le A^u$ is an integral operator and therefore there exists a jointly measurable function $a(s,t)$ such that

$(Au)(s) = \int_T a(s,t)\, u(t)\, d\mu(t)$

and

$a^l(s,t) \le a(s,t) \le a^u(s,t)$

for almost all $(s,t) \in S \times T$.
Let us choose a measurable function $\lambda \colon S \to \mathbb{R}$ such that $0 \le \lambda(s) \le 1$ a.e. and define

$a(s,t) = (1 - \lambda(s))\, a^l(s,t) + \lambda(s)\, a^u(s,t).$

Note that such a choice of $a$ does not capture all measurable functions such that $a^l \le a \le a^u$ (a choice of a jointly measurable $\lambda(s,t)$ would do that), but it will suffice for our existence proof. Obviously, such a choice of $a$ defines an integral operator $A$ that satisfies $A^l \le A \le A^u$.
Fix $u \in \Phi_{\mathrm{po}}$ and define

$\varphi^l(s) = (A^l u)(s), \qquad \varphi^u(s) = (A^u u)(s).$

Our goal is to find $\lambda$ such that $f^l \le Au \le f^u$, i.e.

$f^l(s) \le (1 - \lambda(s))\, \varphi^l(s) + \lambda(s)\, \varphi^u(s) \le f^u(s)$

for almost all $s \in S$. If $\varphi^l = \varphi^u$ almost everywhere, the inequalities above are trivially satisfied. Otherwise, we rewrite them as follows:

$\lambda(s)\, (\varphi^u(s) - \varphi^l(s)) \ge f^l(s) - \varphi^l(s), \qquad \lambda(s)\, (\varphi^u(s) - \varphi^l(s)) \le f^u(s) - \varphi^l(s),$

or

(3.1) $\sup\left(0, \frac{f^l(s) - \varphi^l(s)}{\varphi^u(s) - \varphi^l(s)}\right) \le \lambda(s) \le \inf\left(1, \frac{f^u(s) - \varphi^l(s)}{\varphi^u(s) - \varphi^l(s)}\right).$

This system has a solution if the first operand in the supremum in (3.1) is $\le 1$ and the first operand in the infimum is $\ge 0$ a.e. in $S$. This is indeed the case as a consequence of the conditions (1.4) and the inequalities $A^u u \ge f^l$ and $A^l u \le f^u$. Therefore, we can always find a measurable $\lambda$ satisfying (3.1), for example, by choosing

$\lambda(s) = \sup\left(0, \frac{f^l(s) - \varphi^l(s)}{\varphi^u(s) - \varphi^l(s)}\right),$

which is a supremum of two measurable functions and therefore measurable. In the special case $f^l = f^u = f$ we get a unique solution

$\lambda(s) = \frac{f(s) - \varphi^l(s)}{\varphi^u(s) - \varphi^l(s)}.$
Hence, for an arbitrary $u \in \Phi_{\mathrm{po}}$ we have found a pair $(A, f)$ within the bounds (1.4) such that $Au = f$. ∎
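In finite dimensions the construction in the proof is completely explicit and can be sketched directly (our own translation of the proof to matrices; the mixing weight is chosen per row exactly as in (3.1)):

```python
import numpy as np

rng = np.random.default_rng(3)

# Given u with u >= 0, A_lo u <= f_up and A_up u >= f_lo, build a matrix
# A = (1 - lam_i) A_lo + lam_i A_up (row by row) with A_lo <= A <= A_up
# such that f := A u satisfies f_lo <= f <= f_up.
def construct_pair(u, A_lo, A_up, f_lo, f_up):
    phi_lo, phi_up = A_lo @ u, A_up @ u
    gap = phi_up - phi_lo                 # >= 0 since u >= 0
    lam = np.zeros_like(gap)
    nz = gap > 0
    lam[nz] = np.maximum(0.0, (f_lo[nz] - phi_lo[nz]) / gap[nz])
    A = (1.0 - lam)[:, None] * A_lo + lam[:, None] * A_up
    return A, A @ u

A_lo = rng.uniform(0.0, 1.0, size=(5, 4))
A_up = A_lo + rng.uniform(0.0, 0.5, size=(5, 4))
u = rng.uniform(0.0, 1.0, size=4)
# Data bounds compatible with membership of u in the feasible set.
f_lo = 0.2 * (A_lo @ u) + 0.8 * (A_up @ u)
f_up = f_lo + 0.01

A, f = construct_pair(u, A_lo, A_up, f_lo, f_up)
within_bounds = ((A_lo <= A + 1e-12).all() and (A <= A_up + 1e-12).all()
                 and (f_lo <= f + 1e-12).all() and (f <= f_up + 1e-12).all())
```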
Theorem 3.1 proves the inclusion $\Phi_{\mathrm{po}} \subset \tilde\Phi$, where $\tilde\Phi$ is as defined in (1.7). The opposite inclusion holds as well, since for any $u \in \tilde\Phi$ with the corresponding pair $(A, f)$ from (1.4) we have

$A^l u \le Au = f \le f^u, \qquad A^u u \ge Au = f \ge f^l$

due to the positivity of $u$, and hence $A^l u \le f^u$ and $A^u u \ge f^l$. Therefore, we have proven the following
Theorem 3.2.

The sets (1.5) and (1.7) coincide: $\Phi_{\mathrm{po}} = \tilde\Phi$.
An immediate consequence of this result is the convexity of the set $\tilde\Phi$, since the set $\Phi_{\mathrm{po}}$ is, obviously, convex. An advantage of the formulation (1.5) is the ease of implementation in an optimisation algorithm. On the other hand, the formulation (1.7) allows us to easily include a priori information on the operator $A$ as additional constraints, cf. (1.8).
4 Imposing further constraints on the operator
It is a natural question to ask whether, under some additional constraints on $A$, the feasible set (1.7) remains convex (in $u$). In this section we answer this question negatively in the case when the additional constraint is linear. We restrict ourselves to the finite-dimensional case when $U = \mathbb{R}^n$, $F = \mathbb{R}^m$, and $A$ is an $m \times n$ matrix. Note that in the finite-dimensional case the partial order in the space of regular operators coincides with the elementwise partial order for matrices, i.e. $A_1 \le A_2$ iff $(A_1)_{ij} \le (A_2)_{ij}$ for all $i, j$. Without loss of generality, we also restrict ourselves to a special case.
Fix a pair $(v, g)$ such that $v > 0$ and

(4.1) $A^l v \le g \le A^u v,$

and consider the set

(4.2) $\tilde\Phi_{v,g} = \{u \in \mathbb{R}^n \colon u \ge 0 \text{ and } \exists (A, f) \text{ satisfying (1.4) such that } Au = f \text{ and } Av = g\}.$
As noted in the introduction, the additional constraint $Av = g$ can be useful, for example, if the exact forward operator is a convolution operator, i.e. all rows of the matrix $A$ sum up to one. This additional constraint allows us to further restrict the feasible set while still preserving the inclusion $\bar u \in \tilde\Phi_{v,g}$. Intuitively, a tighter feasible set provides more information about the exact solution and can be expected to improve the reconstructions.
While the inclusion of (4.2) in the convex set (1.5) obviously still holds, the opposite inclusion does not hold any more. In what follows, we derive an explicit description of the set (4.2) and argue that this set is not convex. Therefore, the advantages in reconstruction quality offered by using a tighter feasible set come at the price of a significant increase in computational complexity.
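The non-convexity can already be seen in a tiny explicit instance of (4.2) (our own, with hypothetical numbers): take $m = 1$, $n = 2$, operator bounds $A^l = (0, 0)$ and $A^u = (1, 1)$, constraint pair $v = (1, 1)$, $g = 1$ and exact data $f^l = f^u = 1$, so that the admissible rows are $a = (t, 1 - t)$ with $t \in [0, 1]$:

```python
import numpy as np

# For a row a = (t, 1 - t) with t in [0, 1], the attainable values of
# a . u form the interval [min(u), max(u)], so u >= 0 belongs to the
# constrained feasible set iff this interval contains the datum f = 1.
def in_constrained_set(u):
    return (u >= 0).all() and min(u) <= 1.0 <= max(u)

u1 = np.array([0.0, 1.5])
u2 = np.array([1.5, 0.0])
mid = 0.5 * (u1 + u2)                    # (0.75, 0.75)

nonconvex = (in_constrained_set(u1) and in_constrained_set(u2)
             and not in_constrained_set(mid))

# The midpoint still belongs to the convex set (1.5) in this instance:
# (0, 0) . mid = 0 <= 1 and (1, 1) . mid = 1.5 >= 1.
```

The constrained set here is $\{u \ge 0 \colon \min(u_1, u_2) \le 1 \le \max(u_1, u_2)\}$, which is visibly non-convex, while the midpoint remains in the unconstrained convex set, so the inclusion of (4.2) in (1.5) is strict.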
The structure of $\tilde\Phi_{v,g}$
Every matrix $A$, $A^l \le A \le A^u$, can be written as

$A = A^l + E$

with $0 \le E \le D := A^u - A^l$. Fix $u \ge 0$. The constraints $f^l \le Au \le f^u$ and $Av = g$ can be written as

$f^l_i - (A^l u)_i \le \langle e_i, u \rangle \le f^u_i - (A^l u)_i, \qquad \langle e_i, v \rangle = g_i - (A^l v)_i$

for each row $e_i$ of $E$. In what follows we will drop the subscript $i$ and consider this system for each row separately:

$f^l - \langle a^l, u \rangle \le \langle e, u \rangle \le f^u - \langle a^l, u \rangle, \qquad \langle e, v \rangle = g - \langle a^l, v \rangle, \qquad 0 \le e \le d,$

or, equivalently,

(4.3) $\langle e, v \rangle = g - \langle a^l, v \rangle, \qquad \langle e, u \rangle \le f^u - \langle a^l, u \rangle, \qquad \langle e, u \rangle \ge f^l - \langle a^l, u \rangle, \qquad 0 \le e \le d.$

The matrix and the right-hand side in (4.3) have nonnegative entries due to (1.5), (4.1) and the inequality $u \ge 0$. Our goal is to find conditions on $u$ under which this system has a solution $0 \le e \le d$. We will use Farkas’ lemma [Rockafellar] to find out when the system (4.3) has a positive solution. To find out when it has a solution $e \le d$, we reformulate (4.3) in terms of $e' = d - e$, which gives us the following system

(4.4) $\langle e', v \rangle = \langle a^u, v \rangle - g, \qquad \langle e', u \rangle \le \langle a^u, u \rangle - f^l, \qquad \langle e', u \rangle \ge \langle a^u, u \rangle - f^u, \qquad 0 \le e' \le d,$

which also has a positive right-hand side due to (1.5) and (4.1). Combining these two systems, we get:

(4.5) $\begin{aligned} \langle e, v \rangle &= g - \langle a^l, v \rangle, \qquad & \langle e, u \rangle &\le f^u - \langle a^l, u \rangle, \\ \langle e', v \rangle &= \langle a^u, v \rangle - g, \qquad & \langle e', u \rangle &\le \langle a^u, u \rangle - f^l, \\ e + e' &= d. & & \end{aligned}$

The last lines in this system enforce the constraint $e + e' = d$, which guarantees that we find conditions under which the system (4.3) has a solution that is simultaneously $\ge 0$ and $\le d$. The system (4.3) has a solution $0 \le e \le d$ if and only if the system (4.5) has a solution $e, e' \ge 0$.
Our goal is to find the conditions on $u$ under which the system (4.5) has a nonnegative solution. Farkas’ lemma gives us the following alternative: either (4.5) has a nonnegative solution or there exists a dual vector $y$ such that
(4.6) 
and
(4.7)  
We can rewrite these conditions equivalently as follows:
(4.8)  
(4.9)  
(4.10) 
The proof will be based on considering the various combinations of signs of the components of $y$ corresponding to the two equality constraints separately. First we prove the following

Lemma 4.1.

The two components of $y$ corresponding to the equality constraints in (4.5) cannot have opposite signs.

Proof.
The coefficients of these two components in both equations are positive. Therefore, if the components have opposite signs, one of the above equations is violated. ∎
Due to Lemma 4.1, we only need to consider two combinations: both components nonnegative and both components nonpositive.
Let $I$ be an index set and $\bar I$ be the complement of $I$. Inequality (4.10) requires that we choose the remaining components of $y$ as small as possible, which is
Substituting this into (4.10), we get
(4.11)  
Define the following function:

(4.12) 
Lemma 4.2.
The function defined in (4.12) has the following properties:

– it is piecewise linear;
