 # A parametric level-set method for partially discrete tomography

This paper introduces a parametric level-set method for tomographic reconstruction of partially discrete images. Such images consist of a continuously varying background and an anomaly with a constant (known) grey-value. We represent the geometry of the anomaly using a level-set function, which in turn is represented using radial basis functions. We pose the reconstruction problem as a bi-level optimization problem in terms of the background and the coefficients of the level-set function. To constrain the background reconstruction we impose smoothness through Tikhonov regularization. The bi-level optimization problem is solved in an alternating fashion; in each iteration we first reconstruct the background and subsequently update the level-set function. We test our method on numerical phantoms and show that we can successfully reconstruct the geometry of the anomaly, even from limited data. On these phantoms, our method outperforms Total Variation reconstruction, DART and P-DART.


## 1 Introduction

The need to reconstruct (quantitative) images of an object from tomographic measurements appears in many applications. At the heart of many of these applications is a projection model based on the Radon transform. Characterizing the object under investigation by a function u defined on a domain D, tomographic measurements are modeled as

 p_i = ∫_D u(x) δ(s_i − n(θ_i) ⋅ x) dx,

where s_i denotes the shift, θ_i denotes the angle and n(θ_i) denotes the corresponding unit normal. The goal is to retrieve u from a number, m, of such measurements for various shifts and directions.

If the shifts and angles are regularly sampled, the transform can be inverted directly by filtered back-projection or Fourier reconstruction. A common approach for dealing with non-regularly sampled or missing data is to express u in terms of a basis

 u(x) = ∑_{j=1}^{n} u_j b(x − x_j),

where b are piecewise-polynomial basis functions and the x_j form a regular (pixel) grid. This leads to a system of m linear equations in n unknowns

 p=Wu,

with u = (u_1, …, u_n)^T and W the matrix containing the projections of the basis functions. Due to noise in the data or errors in the projection model, the system of equations is inconsistent, so a solution may not exist. Furthermore, because the system is underdetermined, there may be many solutions that fit the observations equally well. A standard approach to mitigate these issues is to formulate a regularized least-squares problem

 min_u 1/2 ∥Wu − p∥_2^2 + λ/2 ∥Ru∥_2^2,

where R is a regularization operator and λ > 0 controls the trade-off between the data fit and the regularization. Such a formulation is popular mainly because very efficient algorithms exist for solving it. Depending on the choice of R, however, this formulation forces the solution to have certain properties which may not reflect the truth. For example, setting R to be the discrete Laplace operator will produce a smooth reconstruction, whereas setting R to be the identity matrix forces the individual coefficients u_j to be small. In many applications such quadratic regularization terms do not reflect the characteristics of the object we are reconstructing. For example, if we expect u to be piecewise constant, we could use a Total Variation regularization term ∥Ru∥_1, where R is a discrete gradient operator. Recently, a lot of progress has been made in developing efficient algorithms for solving such non-smooth optimization problems. If the object under investigation is known to consist of only two distinct materials, the regularization can be formulated in terms of a non-convex constraint that requires each coefficient u_j to attain one of two known grey-values. The latter leads to a combinatorial optimization problem, solutions to which can be approximated using heuristic algorithms.
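For the quadratic choices of R above, the regularized problem can be solved as one stacked least-squares system. The sketch below is illustrative only: W is a random stand-in for the projection matrix (a real W would come from a Radon discretization) and R is the identity.

```python
import numpy as np
from scipy.sparse import identity, vstack, random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
m, n, lam = 80, 60, 1e-1

# Random stand-in for the projection matrix W and synthetic data p.
W = sparse_random(m, n, density=0.1, format="csr", random_state=0)
p = W @ rng.standard_normal(n)

# R = I penalizes coefficient size; a discrete Laplacian would enforce smoothness.
R = identity(n, format="csr")

# min_u 1/2||Wu - p||^2 + lam/2 ||Ru||^2, solved as one stacked least-squares problem.
u = lsqr(vstack([W, np.sqrt(lam) * R]),
         np.concatenate([p, np.zeros(n)]))[0]

print(u.shape)  # (60,)
```

Swapping in a different R only changes the lower block of the stacked system; the solver is unchanged.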

In this paper, we consider tomographic reconstruction of partially discrete objects that consist of a region of constant density embedded in a continuously varying background. In this case, neither the quadratic, Total Variation, nor non-convex constraints by themselves are suitable. We therefore propose the following parametrization

 u(x) = { u0(x)   if x ∈ Ω,
        { u1      otherwise.

The inverse problem now consists of finding u0, u1 and the set Ω. We can subsequently apply suitable regularization to u0 separately. To formulate a tractable optimization algorithm, we represent the set Ω using a level-set function ϕ such that

 Ω={x|ϕ(x)>0}.

In the following sections, we discuss how to formulate a variational problem to reconstruct u0 and Ω, based on a parametric level-set representation of ϕ and assuming we know u1.

The outline of the paper is as follows. In Section 2 we discuss the parametric level-set method and propose some practical heuristics for choosing the various parameters that occur in the formulation. A joint background-anomaly reconstruction algorithm for partially discrete tomography is discussed in Section 3. Results on a few moderately complicated numerical phantoms are presented in Section 4. We provide some concluding remarks in Section 5.

## 2 Level-set methods

In terms of the level-set function ϕ, we can express u as

 u(x)=(1−h(ϕ(x)))u0(x)+h(ϕ(x))u1,

where h is the Heaviside function and the latter term represents the anomaly.

Level-set methods have received much attention in geometric inverse problems, interface tracking, segmentation and shape optimization, owing to their ability to handle topological changes. The classical level-set method, introduced by Osher and Sethian, solves the Hamilton-Jacobi equation, also known as the level-set equation,

 ∂ϕ/∂t + v |∇ϕ| = 0, (1)

where ϕ denotes the level-set function as a time-dependent quantity representing the shape, and v denotes the normal velocity. In the inverse-problems setting, the velocity is often derived from the gradient of the cost function with respect to the model parameter u. There are various numerical issues associated with solving the level-set equation, e.g., reinitialization of the level-set function. We refer the interested reader to a seminal paper on the level-set method and its application to computed tomography.

Instead of taking this classical level-set approach, we employ a parametric level-set approach, first introduced by Aghasi et al. In this method, the level-set function is parametrized using radial basis functions (RBFs):

 ϕ(x) = ∑_{j=1}^{n′} α_j Ψ(β_j ∥x − χ_j∥_2),

where Ψ is a radial basis function, the α_j and χ_j are the amplitudes and nodes respectively, and the parameters β_j control the widths. Introducing the kernel matrix A(χ, β) with elements

 a_ij = Ψ(β_j ∥x_i − χ_j∥_2),

we can now express u as

 u=(1−h(A(χ,β)α))⊙u0+h(A(χ,β)α)u1, (2)

where h is applied element-wise to the vector A(χ, β)α and ⊙ denotes the element-wise (Hadamard) product. By choosing the parameters appropriately we can represent any (smooth) shape. To simplify matters and make the resulting optimization problem more tractable, we consider a fixed regular grid of nodes χ and a fixed width β, chosen in accordance with the grid spacing of the nodes via a tuning parameter that determines the influence of each RBF on its neighbors.

#### 2.0.1 Example

To show that a level-set can be reconstructed with finitely many radial basis functions, we consider the level-set shown in Figure 1(a). With RBFs, it is possible to reconstruct a smooth shape discretized on a pixel grid. Figure 1: Any (sufficiently) smooth level-set can be reconstructed from radial basis functions. (a) The level-set to be reconstructed is denoted by the green line. The initial level-set (dash-dotted line) is generated by some positive RBFs (red plusses) near the center and negative RBFs all around (blue dots). (b) Initial level-set function and the 0-level plane. (c) Reconstructed level-set (dash-dotted line) with the corresponding positive and negative RBFs. (d) Final level-set function.

Finally, the discretized reconstruction problem for determining the shape is now formulated as

 min_α { f(α) = ∥W[(u1 − u0) ⊙ h_ϵ(Aα)] − (p − Wu0)∥_2^2 }, (3)

where h_ϵ is a smooth approximation of the Heaviside function. The gradient and Gauss-Newton Hessian of f are given by

 ∇f(α) = A^T D_α^T W^T r(α),   H_GN(f(α)) = A^T D_α^T W^T W D_α A, (4)

where the diagonal matrix D_α and the residual vector r(α) are given by

 D_α = diag((u1 − u0) ⊙ h′_ϵ(Aα)),   r(α) = W[(u1 − u0) ⊙ h_ϵ(Aα)] − (p − Wu0).

Using a Gauss-Newton method, the level-set parameters are updated as

 α^(k+1) = α^(k) − μ^(k) (H_GN(f(α^(k))))^{−1} ∇f(α^(k)),

where μ^(k) is a suitable stepsize and α^(0) is a given initial estimate of the shape.

From equation (4), it can be observed that the ability to update the level-set parameters depends on two main factors: 1) the difference between u0 and u1, and 2) the derivative of the Heaviside function. Hence, the support and smoothness of h_ϵ play a crucial role in the sensitivity. More details on the choice of h_ϵ are discussed in Section 2.1.
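The expressions in (4) can be checked numerically. The sketch below assembles the gradient and Gauss-Newton Hessian for a random stand-in problem (taking f as ½∥r∥² and a tanh-based Heaviside, both our own choices) and verifies the gradient against a central finite difference.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 20, 30, 5                    # measurements, pixels, RBF coefficients

W = rng.standard_normal((m, n))        # stand-in projection matrix
A = rng.standard_normal((n, k))        # stand-in RBF kernel matrix
u0 = rng.standard_normal(n)            # fixed background
u1, p = 1.0, rng.standard_normal(m)    # anomaly grey-value and data

h  = lambda t: 0.5 * (1 + np.tanh(t))  # smooth Heaviside stand-in
dh = lambda t: 0.5 / np.cosh(t) ** 2   # its derivative

def residual(alpha):
    return W @ ((u1 - u0) * h(A @ alpha)) - (p - W @ u0)

def grad_and_gn(alpha):
    D = np.diag((u1 - u0) * dh(A @ alpha))   # D_alpha from eq. (4)
    J = W @ D @ A                            # Jacobian of the residual
    return J.T @ residual(alpha), J.T @ J    # gradient, Gauss-Newton Hessian

alpha = rng.standard_normal(k)
g, H = grad_and_gn(alpha)

# Finite-difference check of the first gradient component of f = 0.5*||r||^2.
f = lambda a: 0.5 * np.sum(residual(a) ** 2)
e = np.zeros(k); e[0] = 1e-6
fd = (f(alpha + e) - f(alpha - e)) / 2e-6
print(abs(fd - g[0]) < 1e-4 * max(1.0, abs(g[0])))  # True
```

Note that wherever u0 ≈ u1 or h′_ϵ vanishes, the rows of D are zero and the corresponding pixels contribute nothing to the update, which is exactly the sensitivity issue discussed above.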

#### 2.0.2 Example

We demonstrate the parametric level-set method on a (binary) discrete tomography problem. We consider the model described in Figure 2(a). For a full-angle case with a large number of samples, Figure 2(c) shows that it is possible to accurately reconstruct a complex shape. Figure 2: Parametric level-set method for a discrete tomography problem. (a) True model (n = 256×256). (b) RBF grid (n′ = 27×27) with the initial level-set denoted by the green line; positive and negative RBFs are denoted by red plusses and blue dots respectively. (c) Final level-set denoted by the green line, with the corresponding positive and negative RBFs. (d) Initial level-set function. (e) Level-set function after 10 iterations. (f) Final level-set function after 25 iterations.

### 2.1 Approximation to Heaviside function

The update of the level-set function primarily depends on the Heaviside function. Various approximations have been proposed in the literature. These approximations suffer from a strongly varying Dirac-delta function near its peak, which amplifies the gradient disproportionately. This sometimes results in poor updates of the level-set parameter α, ruining the reconstruction. To address this issue, we propose a new formulation of the Heaviside function, constructed from the piecewise Dirac-delta function shown in equation (5):

 δ(x) = ⎧ 0,  x ≤ −ϵ,
        ⎪ 1/(4(1−μ)ϵ) · (1 + (x + (1−μ)ϵ)/(μϵ) + (1/π) sin(π(x + (1−μ)ϵ)/(μϵ))),  −ϵ < x < −(1−2μ)ϵ,
        ⎨ 1/(2(1−μ)ϵ),  |x| ≤ (1−2μ)ϵ,
        ⎪ 1/(4(1−μ)ϵ) · (1 + ((1−μ)ϵ − x)/(μϵ) + (1/π) sin(π((1−μ)ϵ − x)/(μϵ))),  (1−2μ)ϵ < x < ϵ,
        ⎩ 0,  x ≥ ϵ. (5)

This new approximation is plotted in Figure 3. The above formulation provides three main benefits: 1) constant sensitivity in the boundary region, controlled by the parameter μ, 2) a smooth transition region, and 3) compact support.
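A possible implementation of such a compact delta (flat central region of height 1/(2(1−μ)ϵ), raised-sine ramps, support [−ϵ, ϵ]; the exact constants reflect our reading of equation (5)) together with a numerical check that it integrates to one:

```python
import numpy as np

def delta_compact(x, eps=1.0, mu=0.2):
    """Compact Dirac-delta approximation: zero outside [-eps, eps], flat over
    |x| <= (1-2*mu)*eps, raised-sine ramps in between (our reading of eq. (5))."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    flat = np.abs(x) <= (1 - 2 * mu) * eps
    out[flat] = 1.0 / (2 * (1 - mu) * eps)
    ramp = (np.abs(x) < eps) & ~flat
    s = ((1 - mu) * eps - np.abs(x[ramp])) / (mu * eps)   # s runs from 1 to -1
    out[ramp] = (1 + s + np.sin(np.pi * s) / np.pi) / (4 * (1 - mu) * eps)
    return out

xs = np.linspace(-2, 2, 40001)
mass = (delta_compact(xs) * (xs[1] - xs[0])).sum()   # Riemann-sum integral
print(delta_compact(np.array([0.0]))[0])  # flat level, 1/(2*(1-mu)*eps)
print(abs(mass - 1.0) < 1e-3)             # unit mass, as a delta should have
```

The flat region is what gives the constant sensitivity mentioned above: every pixel whose level-set value falls inside it receives the same weight in the gradient.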

###### Definition 2.1.

In accordance with the compact approximation of the Heaviside function with width ϵ, the level-set boundary is defined as the set of all points x satisfying |ϕ(x)| < ϵ.

Figure 3(c) shows a graphical representation of level-set boundary.

###### Lemma 2.1.

For any smooth and compact approximation of the Heaviside function with finite width ϵ, there exists a relation between the level-set boundary and the gradient of the level-set function, given by w = 2ϵ/∥∇ϕ(x_0)∥, where w denotes the width of the level-set boundary along the normal direction and x_0 is a point on the zero level-set.

###### Proof.

From a Taylor series expansion of ϕ near the level-set point x_0, we get

 ϕ(x) = ϕ(x_0) + (x − x_0)^T ∇ϕ(x_0) + O(∥x − x_0∥^2).

Since ϕ(x_0) = 0, the point x lies in the level-set boundary if and only if |ϕ(x)| < ϵ. Neglecting higher-order terms, we get |(x − x_0)^T ∇ϕ(x_0)| < ϵ, so along the normal direction the boundary extends a distance ϵ/∥∇ϕ(x_0)∥ on either side of x_0. This implies the above relation.

From Lemma 2.1, it follows that the Heaviside width ϵ must be chosen such that the level-set boundary is resolved on the model grid. For simplicity, we crudely approximate the gradient of the level-set function using upper and lower bounds on ϕ. Hence, the Heaviside width is represented by

 ϵ = κ (max(ϕ(x)) − min(ϕ(x)))/Δx = κ (max(Aα) − min(Aα))/Δx, (6)

where κ controls the number of gridpoints the level-set boundary spans. This formulation of ϵ resolves the re-initialization issue associated with the level-set method. It also handles the steepness of the level-set function in the level-set boundary, as it adapts the boundary to global changes in the level-set function. Figure 3: New formulation for approximating the Heaviside function. The Heaviside functions (a) and corresponding Dirac-delta functions (b) with ϵ = 1 and μ = 0.2. The global approximation is constructed from the inverse tangent function, while the compact one is composed of linear and sinusoid functions. (c) Level-set boundary (orange region) around the zero level-set (blue line); n represents the normal direction at x_0.

## 3 Joint reconstruction algorithm

Reconstructing both the shape parameters α and the background u0 can be cast as a bi-level optimization problem

 min_{u0, α} { f(α, u0) := 1/2 ∥W[(1 − h(Aα)) ⊙ u0 + h(Aα) u1] − p∥_2^2 + λ/2 ∥Lu0∥_2^2 }, (7)

where L = (L_x^T, L_y^T)^T, with L_x and L_y second-order finite-difference operators in the x and y directions respectively. This optimization problem is separable: it is quadratic in u0 and non-linear in α. In order to exploit the fact that the problem has a closed-form solution in u0 for each α, we introduce the reduced objective

 f̄(α) = min_{u0} f(α, u0).

The gradient and Hessian of this reduced objective are given by

 ∇f̄(α) = ∇_α f(α, ū0), (8)
 ∇²f̄(α) = ∇²_α f − ∇²_{α,u0} f (∇²_{u0} f)^{−1} ∇²_{u0,α} f, (9)

where ū0 = arg min_{u0} f(α, u0).

Using a modified Gauss-Newton method to find a minimizer of the reduced objective leads to the following alternating algorithm:

 u0^(k+1) = arg min_{u0} f(α^(k), u0), (10)
 α^(k+1) = α^(k) − μ^(k) (H_GN(f(α^(k))))^{−1} ∇_α f(α^(k), u0^(k+1)), (11)

where the expressions for the gradient and Gauss-Newton Hessian are given by (4). Convergence of this alternating approach to a local minimum of (7) is guaranteed as long as the step-length μ^(k) satisfies the strong Wolfe conditions.

The reconstruction algorithm based on this iterative scheme is presented in Algorithm 1.

We use the LSQR method in step 3, with a pre-defined maximum number of iterations and tolerance. A trust-region method is applied to compute the level-set update in step 4, restricting the conjugate-gradient iterations to 10.
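A compact sketch of the alternating scheme (10)-(11) on a random stand-in problem: W, A and the tanh Heaviside are placeholders of our own, L is the identity instead of the second-order operator, and a simple step-halving line search stands in for the trust-region method and Wolfe conditions.

```python
import numpy as np
from scipy.sparse import identity, vstack, csr_matrix
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
m, n, k, lam = 40, 25, 6, 1e-2

W = rng.standard_normal((m, n))      # stand-in projection matrix
A = rng.standard_normal((n, k))      # stand-in RBF kernel matrix
u1 = 1.0                             # known anomaly grey-value

h  = lambda t: 0.5 * (1 + np.tanh(t))      # smooth Heaviside stand-in
dh = lambda t: 0.5 / np.cosh(t) ** 2

compose = lambda u0, a: (1 - h(A @ a)) * u0 + h(A @ a) * u1
p = W @ compose(0.1 * rng.standard_normal(n), rng.standard_normal(k))

L = identity(n, format="csr")        # placeholder for the smoothing operator
obj = lambda u0, a: (0.5 * np.sum((W @ compose(u0, a) - p) ** 2)
                     + 0.5 * lam * np.sum((L @ u0) ** 2))

u0, alpha = np.zeros(n), np.zeros(k)
obj_init = obj(u0, alpha)

for it in range(15):
    # step (10): the u0-subproblem is linear least squares -> LSQR on a stacked system
    hm = h(A @ alpha)
    M = vstack([csr_matrix(W * (1 - hm)), np.sqrt(lam) * L])
    u0 = lsqr(M, np.concatenate([p - W @ (hm * u1), np.zeros(n)]))[0]
    # step (11): damped Gauss-Newton update of alpha, cf. eq. (4)
    r = W @ ((u1 - u0) * h(A @ alpha)) - (p - W @ u0)
    D = (u1 - u0) * dh(A @ alpha)
    J = W @ (D[:, None] * A)
    step = np.linalg.solve(J.T @ J + 1e-8 * np.eye(k), J.T @ r)
    mu = 1.0                         # step-halving in lieu of Wolfe conditions
    while obj(u0, alpha - mu * step) > obj(u0, alpha) and mu > 1e-6:
        mu /= 2
    alpha = alpha - mu * step

obj_final = obj(u0, alpha)
print(obj_final < obj_init)  # True
```

Since the u0-step minimizes the objective exactly for fixed α and the line search never accepts an increase, the objective is non-increasing across iterations.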

## 4 Numerical Experiments

The numerical experiments are performed on the 4 phantoms shown in Figure 4. The anomaly in each phantom has a constant grey-value of 1. For the first two phantoms, the background varies from 0 to 0.5, while for the last two it varies from 0 to 0.8. In order to avoid an inverse crime, the data is generated using a line kernel, while the forward model uses a Joseph kernel. We use the ASTRA toolbox to compute the forward and backward projections. First, we show the results on noiseless full-view data; later, we compare various methods to the proposed method in the limited-data case with additive Gaussian noise of 10 dB SNR. Figure 4: Phantoms for the simulations. All models have a resolution of 256×256 pixels.

For the parametric level-set method, we use compactly supported radial basis functions of the form

 Ψ(r) = (1 − r)_+^8 (32r^3 + 25r^2 + 8r + 1).
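This compactly supported, Wendland-type function can be transcribed directly; the truncation (·)_+ is implemented with a mask.

```python
import numpy as np

def wendland(r):
    """Compactly supported RBF used in the text:
    Psi(r) = (1-r)^8_+ (32 r^3 + 25 r^2 + 8 r + 1)."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 1,
                    (1 - r) ** 8 * (32 * r**3 + 25 * r**2 + 8 * r + 1),
                    0.0)

print(wendland(0.0))   # 1.0 at the center
print(wendland(1.5))   # 0.0 outside the unit support
```

The compact support means each RBF influences only nearby pixels, which makes the kernel matrix A sparse.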

The RBF nodes are placed on a rectangular grid with a gridspacing of 5 times the computational (model) gridspacing. The grid extends two points beyond the model grid to compensate for the background effects. The Heaviside width parameter is set to 0.01 and its inclination parameter to 0.1.

The level-set parameter α is optimized using the fminunc function (trust-region algorithm) in MATLAB. A total of 50 iterations are performed for estimating α, while 200 LSQR iterations are performed for estimating the background u0 at each step.

### 4.1 Regularization parameter selection

The reconstruction with the proposed algorithm is influenced by the parameter λ of the Tikhonov regularization. In general, there are various strategies to choose this parameter. As our problem formulation involves a non-linear dependence on the level-set parameter, it is not straightforward to apply these strategies. Instead, we analyze various residuals, introduced below, with respect to the regularization parameter.

We define three measures (all in the least-squares sense) to quantify the residuals: 1) the data residual (DR) measures the misfit between the true and reconstructed data, 2) the model residual (MR) measures the misfit between the reconstructed and true model, and 3) the shape residual (SR) measures the misfit between the reconstructed and true anomaly shape. In practice, only the data residual is available for determining the regularization parameter λ. From Figure 5, it is evident that there exists a sizable region of λ for which the reconstructions stay almost constant. This region is easily identifiable from the data residual plot for various λ. Figure 5: Variation of the residuals with the Tikhonov regularization parameter. An appropriate region for choosing λ exists between 3.79×10^5 and 6.95×10^6. (a) Behavior of DR, MR and SR over λ for model A with noisy limited-angle data; the noise amplitude is denoted by the green dotted line. (b)-(e) Reconstructions for various λ values.
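The three measures can be computed as follows; this is a sketch with our own function name and random stand-ins, where the anomaly shapes are passed as binary masks.

```python
import numpy as np

def residuals(W, p, u_rec, u_true, shape_rec, shape_true):
    """Least-squares data (DR), model (MR) and shape (SR) residuals."""
    dr = np.linalg.norm(W @ u_rec - p)                      # data fit
    mr = np.linalg.norm(u_rec - u_true)                     # model fit
    sr = np.linalg.norm(shape_rec.astype(float)
                        - shape_true.astype(float))         # anomaly-shape fit
    return dr, mr, sr

# Toy usage with random stand-ins for the operator and models.
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 8))
u_true = rng.standard_normal(8)
p = W @ u_true
u_rec = u_true + 0.01 * rng.standard_normal(8)
dr, mr, sr = residuals(W, p, u_rec, u_true, u_rec > 0, u_true > 0)
print(dr >= 0 and mr >= 0 and sr >= 0)  # True
```

Sweeping λ and plotting DR against it reproduces the kind of curve shown in Figure 5(a); MR and SR require the true model and are only available for synthetic phantoms.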

### 4.2 Benchmark test

For the full-view (benchmark) case, the projection data is generated with 256 detectors and 180 equally spaced projections. The noise is assumed to be zero in this case. The results for the phantoms with full-view data are shown in Figure 6. The anomaly geometries in all of these models are reconstructed almost perfectly with the proposed method, although the background has been smoothed out by the Tikhonov regularization. Figure 6: Benchmark tests: reconstructions from full-view noiseless data, with the regularization parameter λ shown below each reconstruction.

### 4.3 Limited-angle test

In this case, we use only 5 projections over a restricted angular range. The data is thus reduced to a small fraction of that used in the benchmark test. We also add Gaussian noise of 10 dB SNR to this synthetic data. To assess the performance of the proposed method, we compare it to the Total Variation method, DART and its modified version for partially discrete tomography, P-DART. A total of 200 iterations were performed with the regularization parameter λ determined from the shape residual curve. In DART, the background was modeled using 20 discrete grey-values between its bounds for models A and B, and 30 discrete grey-values for models C and D; 40 DART iterations were performed in each case. For P-DART, a total of 150 iterations were performed. Figure 7: Reconstructions with noisy limited data. The first column shows the true models, while the last 4 columns show the reconstructions with the various methods. The residuals are shown below each reconstructed model.

The results for the noisy limited-angle data are presented in Figure 7. The proposed method is able to capture most of the fine details in the phantoms (evident from the shape residual), even with very limited data and moderate noise. The P-DART method achieves the smallest data residual in all cases, but fails to capture the complete geometry of the anomaly.

## 5 Conclusions and Discussion

We discussed a parametric level-set method for partially discrete tomography. We model such objects as a constant-valued shape embedded in a continuously varying background. The shape is represented using a level-set function, which in turn is represented using radial basis functions. The reconstruction problem is posed as a bi-level optimization problem for the background and level-set parameters. This reconstruction problem can be efficiently solved using a variable projection approach, where the shape is iteratively updated and each iteration requires a full reconstruction of the background. The algorithm includes some practical heuristics for choosing the various parameters introduced as part of the parametric level-set method. Numerical experiments on a few phantoms show that the proposed approach can outperform other popular methods for (partially) discrete tomography in terms of reconstruction error. As the proposed algorithm requires repeated full reconstructions, future research is directed at making the method more efficient.

Acknowledgments. This work is part of the Industrial Partnership Programme (IPP) ‘Computational sciences for energy research’ of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). This research programme is co-financed by Shell Global Solutions International B.V. The second and third authors are financially supported by the Netherlands Organisation for Scientific Research (NWO) as part of research programmes 613.009.032 and 639.073.506 respectively.

## References

•  A. Aghasi, M. Kilmer, and E. L. Miller. Parametric level set methods for inverse problems. SIAM Journal on Imaging Sciences, 4(2):618–650, 2011.
•  A. Y. Aravkin and T. Van Leeuwen. Estimating nuisance parameters in inverse problems. Inverse Problems, 28(11):115016, 2012.
•  K. J. Batenburg and J. Sijbers. DART: A practical reconstruction algorithm for discrete tomography. IEEE Transactions on Image Processing, 20(9):2542–2553, 2011.
•  F. Bleichrodt, T. van Leeuwen, W. J. Palenstijn, W. van Aarle, J. Sijbers, and K. J. Batenburg. Easy implementation of advanced tomography algorithms using the ASTRA toolbox with Spot operators. Numerical Algorithms, 71(3):673–697, 2016.
•  M. Burger. A level set method for inverse problems. Inverse problems, 17(5):1327, 2001.
•  A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
•  O. Dorn and D. Lesselier. Level set methods for inverse scattering. Inverse Problems, 22(4):R67, 2006.
•  A. Kadu, T. Van Leeuwen, and W. A. Mulder. Salt reconstruction in full waveform inversion with a parametric level-set method. IEEE Transactions on Computational Imaging, 2016.
•  A. C. Kak and M. Slaney. Principles of computerized tomographic imaging. SIAM, 2001.
•  E. Klann, R. Ramlau, and W. Ring. A Mumford-Shah level-set approach for the inversion and segmentation of SPECT/CT data. Inverse Problems and Imaging, 5(1):137–166, 2011.
•  S. Osher and R. Fedkiw. Level set methods and dynamic implicit surfaces, volume 153. Springer Science & Business Media, 2006.
•  S. Osher and J. A. Sethian. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. Journal of Computational Physics, 79(1):12–49, 1988.
•  T. Roelandts, K. Batenburg, E. Biermans, C. Kübel, S. Bals, and J. Sijbers. Accurate segmentation of dense nanoparticles by partially discrete electron tomography. Ultramicroscopy, 114:96–105, 2012.
•  E. Y. Sidky and X. Pan. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Physics in medicine and biology, 53(17):4777, 2008.
•  A. M. Thompson, J. C. Brown, J. W. Kay, and D. M. Titterington. A study of methods of choosing the smoothing parameter in image restoration by regularization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(4):326–339, 1991.
•  S. Wright and J. Nocedal. Numerical optimization. Springer Science, 35:67–68, 1999.