1 Introduction
In this section, we briefly recall the Region of Interest (ROI) tomography problem and review the related work.
1.1 Region of Interest Tomography
Region of Interest (ROI) tomography, also called local tomography, naturally arises when imaging objects that are too large for the detector field of view (FOV), as depicted in Figure 1. It notably occurs in medical imaging, where only a small part of the body is imaged. Local tomography can also be motivated by radiation dose concerns in medical imaging.
Since the projection data does not cover the entire object, it is said to be truncated with respect to a scan that would cover the entire object. The aim is then to reconstruct the ROI from this “truncated” data.
However, due to the nature of the tomography acquisition, the acquired data is in general not sufficient to reconstruct the ROI: for each angle, rays go through the entire object, not only the ROI. Thus, the data does not only contain information on the ROI, but also contributions from the parts of the object external to the ROI. For example, in Figure 1, the detector gets data from parts of the object located to the left of the ROI. These contributions from the external parts preclude exact reconstruction of the ROI from the acquired data in general.
The problem of reconstructing the interior of an object from truncated data is referred to as the interior problem. It is well known that the interior problem does not have a unique solution in general. If $P$ denotes the projection operator, $d$ the acquired data and $f$ a solution of the problem $P f = d$, then $f$ is defined up to a set of ambiguity functions $u$ such that $P u = 0$ on the measured lines. An example is given in [local_tomoreconst21] where $u$ is nonzero in the ROI, but $P u = 0$ in the detector zone corresponding to the ROI: two solutions differing by $u$ would produce identical interior data. In [local_meaning], it is emphasized that the ambiguity is an infinitely differentiable function whose variation increases when going outside the ROI. The nonuniqueness of the solution of the interior problem prevents quantitative analysis of the reconstructed slices.
Methods tackling the ROI tomography problem can mainly be classified into two categories. Methods in the first category aim at completing the data by extrapolating the sinogram. They are often oriented toward easy and practical use, although they have no theoretical guarantees. The second category of methods relies on prior knowledge of the object. Many theoretical efforts have been devoted to these methods, providing for example uniqueness and stability results.
Other works use wavelets to localize the Radon transform [local_wavelet_multires], [local_convolution_backproj], or focus on the detection of discontinuities, the best known being probably Lambda tomography [local_lambda_fan].
1.2 Sinogram extrapolation methods
In a classical tomography acquisition, the whole object is imaged. If nothing surrounds the object, the rays are not attenuated outside the object; thus, for each angle, the sinogram values go to zero on the left and right parts (after taking the negative logarithm of the normalized intensity). In a local tomography acquisition, however, the data is “truncated” with respect to what a whole scan would have been. The incompleteness of the data induces artifacts in the reconstructed image. The first obvious artifact is visible as a bright rim on the exterior of the image. This bright rim is the result of the abrupt transition in the truncated sinogram: the filtering step suffers from a Gibbs phenomenon. Another artifact is referred to as the cupping effect: an unwanted background appears in the reconstructed image, which makes further analysis like segmentation challenging. These two artifacts occur simultaneously, but they have different causes. The bright rim comes from the truncation, while the cupping comes from the contribution of the external part.
Figures 2 and 3 illustrate these artifacts. A synthetic slice is projected, and the resulting sinogram is truncated to simulate a ROI tomography setup. The filtering step enhances the transition between the ROI and the truncated part, which is set to zero. The difference between the filtered whole sinogram and the filtered truncated sinogram also shows the cupping effect, which appears as a low-frequency bias.
Sinogram extrapolation methods primarily aim at eliminating the bright rim resulting from the truncation by ensuring a smooth transition between the ROI and the external part. Besides, efforts have been put into the estimation of the missing data in order to reduce the cupping effect. These techniques are referred to as sinogram extrapolation methods: the external part is estimated from the truncated data with some extrapolating function.
The extrapolating function can be, for example, constant (the outermost left/right values are replicated) or polynomial. In [local_mixed_extrapolations], a mixture of exponential and quadratic functions is used to estimate the external part, possibly iteratively. The projection of a circle, for which a closed-form formula is known, can also be used [local_thesis_truncated_projs]. A common approach is using the values of the left/right borders of the sinogram to estimate the external part, that is, replicating the border values.
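As a minimal illustration (using NumPy; the array sizes and padding width here are arbitrary), constant extrapolation amounts to replicating the border values of each truncated sinogram row:

```python
import numpy as np

def extrapolate_constant(sino_trunc, pad):
    """Extend each truncated sinogram row by replicating its border values.

    sino_trunc: 2D array of shape (n_angles, n_bins), the truncated projections.
    pad: number of detector bins to add on each side.
    """
    # mode="edge" replicates the outermost left/right values of each row
    return np.pad(sino_trunc, ((0, 0), (pad, pad)), mode="edge")

# Toy truncated sinogram with two angles and three detector bins
sino = np.array([[0.2, 0.5, 0.3],
                 [0.1, 0.6, 0.4]])
ext = extrapolate_constant(sino, pad=2)
# Each row now starts with copies of its first value and ends with copies of its last
```

This only removes the abrupt transition; it does not model the actual external contribution, which is why the cupping effect generally remains.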
In general, sinogram extrapolation methods do not take into account the theoretical properties of the sinogram. For example, given an object that is nonzero only inside a circle of a given radius, the sinogram decreases to zero at the left and right boundaries. More generally, a sinogram $p(\theta, s)$ of complete measurements satisfies the Helgason-Ludwig consistency conditions (1) [local_thesis_truncated_projs]:

$$ a_n(\theta) = \int_{-\infty}^{+\infty} p(\theta, s)\, s^n \,\mathrm{d}s \qquad (1) $$

where $a_n(\theta)$ is a homogeneous polynomial of degree $n$ in $\cos\theta$ and $\sin\theta$, for all $n \geq 0$. An alternative formulation is given by Equation (2):

$$ \int_0^{2\pi}\!\!\int_{-\infty}^{+\infty} p(\theta, s)\, s^n\, e^{i k \theta} \,\mathrm{d}s\,\mathrm{d}\theta = 0 \qquad (2) $$

for $k > n \geq 0$ and $k - n$ even. In [local_ellipse], (2) is used as a quantitative measure of sinogram consistency, and is optimized as an objective function.
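The lowest-order conditions can be checked on a toy example. For $n = 0$, the moment $\int p(\theta, s)\,\mathrm{d}s$ must be independent of $\theta$ (it equals the object mass), and for $n = 1$ the first moment of a centered object vanishes. A small sketch with the analytic parallel projections of a centered unit disk, $p(\theta, s) = 2\sqrt{1 - s^2}$ (the grid and radius are arbitrary choices):

```python
import numpy as np

# Parallel projections of a centered disk of radius r with unit density:
# p(theta, s) = 2*sqrt(r^2 - s^2) for |s| <= r, independent of theta.
r = 1.0
s = np.linspace(-1.5, 1.5, 2001)
ds = s[1] - s[0]
p = 2.0 * np.sqrt(np.maximum(r**2 - s**2, 0.0))

# n = 0: the zeroth moment equals the object mass (disk area, pi*r^2),
# the same for every angle.
mass = p.sum() * ds
# n = 1: the first moment of a centered object is zero.
m1 = (p * s).sum() * ds
```

A truncated sinogram padded with an arbitrary extrapolating function generally violates these moment conditions, which is what consistency-based methods such as [local_ellipse] exploit.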
For many applications, constant extrapolation provides acceptable results [roifbp], although the cupping artifact makes segmentation challenging.
1.3 Prior knowledge based interior tomography
It was long believed that ROI tomography could not be solved exactly, because of the nature of Radon inversion through FBP: the reconstruction of each voxel requires the knowledge of all the (complete) lines passing through this voxel. However, in the last decade, it has been shown that multiple nonequivalent reconstruction formulas allow partial reconstruction from partial data in the 2D case [local_tomoreconst21]. As alternatives to Filtered Backprojection reconstruction, which requires complete data, Virtual Fan Beam (VFB) and Differentiated Backprojection (DBP) were developed based on the Hilbert projection equality [local_vfb_vs_dbp].
Moreover, uniqueness theorems based on analytical continuation of the Hilbert transform were stated and progressively refined in [local_1], [local_2], [local_2b], [local_3], [local_4], [local_truncated_ht], [local_apriori_tiny], [local_7]. They ensure an exact and stable reconstruction of the ROI under some assumptions. These assumptions can be of geometric nature, or take the form of prior knowledge.
Geometry-based prior knowledge is related to the acquisition geometry. For example, in DBP-based reconstruction, a point can be reconstructed if it lies on a line segment extending outside the object on both sides, and all lines crossing the segment are measured [local_tomoreconst21], as shown in Figure 4 (a). Similar results were obtained under less restrictive assumptions, for example with the segment extending outside the object on only one side [local_truncated_ht].
These geometry-based methods do not work, however, when the FOV does not extend beyond the object (Figure 4 (b)). In this case, it has been shown [local_apriori] that prior knowledge of the function inside the ROI enables exact and stable reconstruction of the ROI. This knowledge can be in the form of the function values inside a subregion of the ROI [local_apriori_tiny], or can be about the properties of the function to reconstruct, for example sparsity in some domain.
This latter kind of knowledge has led to compressive sensing based ROI tomography. In [local_highordertv], [local_cs_based], [local_generalized_tv], Total Variation methods are used to reconstruct the ROI. In [local_wavelet_sparsity_besov], the function is assumed to be sparse in the wavelet domain, and a multiresolution scheme reduces the number of unknowns by keeping only fine-scale wavelet coefficients inside the ROI. In [local_wavelets_weighted_sparsity], it is shown that piecewise constant functions are determined everywhere by their ROI data, the underlying hypothesis being formulated as sparsity in the Haar domain.
2 Low-frequency artifact correction with Gaussian blobs
Sinogram extrapolation usually copes well with the correction of discontinuities in the truncated sinogram, but does not correct the cupping effect in general. This cupping effect appears as a low-frequency bias in the reconstructed image. Sinogram extrapolation and other background correction techniques give no guarantee that the low-frequency bias will actually be removed without distorting the reconstruction.
In this section, we describe a new method using prior knowledge of a subregion of the reconstructed volume to eliminate the low-frequency cupping bias. The starting point of this method is an initial reconstruction, hereby denoted $f_0$, which can be obtained for example with the padded FBP method. This initial reconstruction is then refined with an additive correction term. This correction term uses the known subregion as a constraint, which should be sufficient, according to the uniqueness theorems stated in the references given in 1.3, to accurately reconstruct the region of interest. As $f_0$ bears the high-frequency features, the correction term is expressed as a linear combination of Gaussian functions to counterbalance the low-frequency artifacts. The Gaussian coefficients are optimized subject to the knowledge of the subregion, hereby denoted $Z$. To constrain all the Gaussian coefficients by the knowledge of the image values in $Z$, a reduced set of coefficients is first computed inside $Z$. Then, the Gaussian coefficients are iteratively optimized to fit the reconstruction error of the whole image, using the coefficients computed inside $Z$ as a constraint.
Let $f^\star$ denote the “true” object values in the known region $Z$. The proposed method can be summarized as follows:

1. The reconstruction error in $Z$, denoted $e_Z$, is expressed as a linear combination of two-dimensional Gaussians. The resulting Gaussian coefficients are denoted $x_Z$.

2. The error in the whole image is iteratively fitted with Gaussian coefficients $x$, subject to $x_{|M} = x_Z$ on the subset $M$ of coefficients located in $Z$, to build a consistent reconstruction error in the whole image.
Details of each step are described in the following parts.
2.1 Capturing the low frequencies of the error in the known zone
The key assumption of this method is that the known region $Z$ is large enough to bear sufficient information on the low-frequency artifacts (cupping effect) of a classical reconstruction. The reconstruction error in $Z$ is approximated as a linear combination of Gaussian functions. The Gaussian function is chosen for computational convenience; more details are given in 2.2.3.
Equation (3) gives the expression of the approximation. The estimate $\hat e$ of the reconstruction error $e_Z$ in $Z$ is a linear combination of translated Gaussians of weights $x_{ij}$, with a grid spacing $s$:

$$ \hat e(u, v) = \sum_{i, j} x_{ij} \exp\left( -\frac{(u - is)^2 + (v - js)^2}{2 \sigma^2} \right) \qquad (3) $$

For simplicity, all the Gaussians have the same standard deviation $\sigma$ and the same spacing $s$ between them. Their locations on the image grid are also fixed, so that the fit turns into a linear inverse problem:

$$ x_Z = \operatorname*{argmin}_x \; \| G x - e_Z \|_F^2 \qquad (4) $$

where $G$ is the operator taking as input the coefficients $x_{ij}$, stacked in a vector $x$, and producing an image tiled with Gaussians (here in region $Z$). The norm $\| \cdot \|_F^2$ is the squared Frobenius norm, that is, the sum of the squares of all components. Choosing the same standard deviation and the same spacing for all Gaussians enables $G$ to be implemented as a convolution. More precisely, starting from an image that is zero everywhere, the coefficients are placed every $s$ pixels and this image is convolved with a two-dimensional Gaussian kernel. The reconstruction error in $Z$ is thus estimated in the least-squares sense: $x_Z$ is the vector of Gaussian coefficients giving the best estimate of the reconstruction error in the $L_2$ sense. This vector will be used in the second part of the algorithm.
2.2 Correcting the reconstruction error in the image
2.2.1 Overview of the method
Outside the known region $Z$, the reconstruction error is not known. Like some of the methods described in 1.3, this algorithm aims at using the known region information to accurately reconstruct the whole ROI. However, this approach focuses on correcting an initial reconstruction: the reconstruction error in $Z$ is fitted as a linear combination of Gaussians, then the whole image is corrected in a coarse Gaussian basis whose coefficients are constrained in the known subregion.
The Filtered Backprojection (FBP) with sinogram extrapolation is widely used in local tomography because it is simple and gives satisfactory results in general [roifbp]. Theoretical investigations found that FBP provides a reconstructed function bearing the same discontinuities as the reference function [local_fbp_bilgot], although the cupping effect can make segmentation challenging. In this method, FBP with padding is used to obtain an initial estimate $f_0$ of the reconstruction; the aim is to correct the local tomography artifacts on this image using the prior knowledge. Equation (5) gives the expression of the estimate $\tilde f$, where $f_0$ is the initial reconstruction, $G$ is the operator described in 2.1 and $G x$ is a linear combination of Gaussian functions aiming at counterbalancing the low-frequency artifacts:

$$ \tilde f = f_0 + G x \qquad (5) $$
The vector $x$ is found by minimizing an objective function which is built as follows. A new image $B$, containing the initial reconstruction, is created: $B$ is an extension of the initial reconstruction $f_0$ to a bigger support. This new image is projected with a projector $P_B$ adapted to the bigger size. To compare with the acquired sinogram $d$, the computed sinogram is truncated by a cropping operator $T$. The data fidelity is then given by Equation (6):

$$ F(x) = \| T P_B (B + G x) - d \|_2^2 \qquad (6) $$
We emphasize that this approach differs from a full estimation of the ROI based on a subregion. The variables $x$ are in a coarse basis while the initial reconstruction is fixed, which notably reduces the degrees of freedom of the problem. The operation $G x$ has two goals: a coarse estimation of the exterior (outside the support) and a correction of the low-frequency error inside the support.
As the minimization is on $x$, the initial estimate of the ROI is constant, and the data fidelity term (6) can be rewritten as in Equation (7):

$$ F(x) = \| T P_B G x - d' \|_2^2 \qquad (7) $$

where $d' = d - T P_B B$ is the difference between the acquired sinogram and the (cropped) projection of the initial reconstruction $B$, and $P_B$ is the projector adapted to the size of $B$. The optimization problem is given by Equation (8).
$$ \operatorname*{argmin}_x \; \| T P_B G x - d' \|_2^2 \quad \text{subject to} \quad x_{|M} = x_Z \qquad (8) $$

Here $x_Z$ is the vector of Gaussian coefficients found in (4), such that $G x_Z$ approximates the error in the known zone. The set $M$ denotes the subset of the Gaussian basis corresponding to $Z$ in the pixel basis: if the index of a coefficient of $x$ lies in $M$ in the Gaussian basis, then the corresponding Gaussian is centered in $Z$ in the pixel basis. Equation (8) boils down to finding coefficients minimizing the reconstruction error in the whole image, under the constraint that the coefficients in $M$ should give the (known) reconstruction error in $Z$.
This local constraint is propagated to all the variables by the projection operator involved in the optimization process. Uniqueness theorems mentioned in 1.3 state that the knowledge of a subregion of the ROI is sufficient to yield an exact reconstruction. However, when using a pixel basis without spatial constraints, the number of degrees of freedom might be too high, leading to slow convergence. Using a coarse basis for correcting the low frequencies reduces this number of degrees of freedom.
2.2.2 Details on the involved operations
In this part, more details are given on the different steps of the algorithm. We start by computing a padded FBP reconstruction which gives an initial estimate $f_0$ of the ROI, of size $N \times N$ pixels. This image is extended to a bigger image $B$ of size $N_B \times N_B$ with $N_B > N$, and $f_0$ is placed in the center of the image.
At each iteration $k$, the image $G x^{(k)}$, where $x^{(k)}$ is the Gaussian coefficients vector at iteration $k$, is computed. The operator $G$ consists in placing the coefficients on a regular grid and convolving with a Gaussian kernel. Thus, the operator can be written $G = C\,U$, where $C$ is the convolution by the aforementioned Gaussian kernel of standard deviation $\sigma$, and $U$ is an operator upsampling an image by a factor of $s$. As both are linear operators, $G$ is a linear operator and $G^\top = U^\top C^\top$, where $U^\top$ is the downsampling operator and $C^\top$ is a convolution by the matched Gaussian kernel, which is the same kernel due to symmetry. In our implementation, the Gaussian kernel has a finite size, i.e. the Gaussian is truncated at a few standard deviations. The resulting image is given by Equation (9):

$$ (G x)(u, v) = \sum_{(p, q)} g(p, q)\, X(u - p, v - q) \qquad (9) $$

where $g$ is the discrete Gaussian kernel and $X$ is the image containing the Gaussian coefficients placed on the grid of size $N_B \times N_B$ after upsampling, that is, zeros almost everywhere except the coefficients every $s$ pixels. The summation in Equation (9) is done on the convolution kernel support. If the spacing $s$ is smaller than the kernel size, the supports of the Gaussian functions overlap once placed on the grid. In practice, these Gaussians should overlap to appropriately fit constant regions: for a spacing $s$ comparable to $\sigma$, the Gaussians almost yield a partition of unity [gaussians_partition_unity].
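The near-partition-of-unity behavior can be observed numerically in 1D: away from the borders, a sum of unit-weight Gaussians spaced every $s$ pixels is almost flat, close to the constant $\sqrt{2\pi}\,\sigma / s$. The values below are arbitrary example parameters:

```python
import numpy as np

sigma, spacing = 3.0, 3          # example values, spacing comparable to sigma
x = np.arange(0.0, 300.0)
centers = np.arange(0, 300, spacing)
# Sum of unit-weight Gaussians placed every `spacing` pixels
total = sum(np.exp(-(x - c)**2 / (2 * sigma**2)) for c in centers)
# Away from the borders, the sum is nearly constant, around sqrt(2*pi)*sigma/spacing
interior = total[50:250]
flatness = (interior.max() - interior.min()) / interior.mean()
```

The residual ripple decays very quickly as $\sigma / s$ grows, which is why a moderate overlap already suffices to represent constant (low-frequency) regions accurately.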
Once the coefficients are placed on the grid and convolved with the 2D Gaussian kernel, the image is projected. The projection operator adapted to the new geometry (the bigger image $B$) is denoted $P_B$; it is a standard Radon transform. This process is illustrated in Figure 5.
As the new image $B$ is bigger than the FOV, its sinogram and the acquired data cannot be directly compared. The computed sinogram is thus cropped to the region corresponding to the ROI. The cropping operator is denoted by $T$; it is also a linear operator, whose transpose consists in extending the sinogram by inserting zeros on both sides.
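A sketch of this cropping operator and its transpose, with hypothetical detector sizes; the adjoint identity $\langle T y, z \rangle = \langle y, T^\top z \rangle$ holds by construction:

```python
import numpy as np

n_full, n_roi = 96, 64            # detector bins of the big and truncated sinograms
left = (n_full - n_roi) // 2      # the ROI is assumed centered on the detector

def T(sino):
    """Crop a (n_angles, n_full) sinogram to the detector zone of the ROI."""
    return sino[:, left:left + n_roi]

def T_T(sino_roi):
    """Transpose: extend the sinogram by inserting zeros on both sides."""
    return np.pad(sino_roi, ((0, 0), (left, n_full - n_roi - left)))

rng = np.random.default_rng(1)
y = rng.standard_normal((10, n_full))
z = rng.standard_normal((10, n_roi))
lhs = np.sum(T(y) * z)
rhs = np.sum(y * T_T(z))
```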
The resulting sinogram aims at fitting the error between the acquired sinogram and the (cropped) projection of the object. As the object is unknown except inside $Z$, the reconstruction error is only known in $Z$. The Gaussian coefficients are constrained by those found by fitting the error inside $Z$ in 2.1. The projection operator involved in the process propagates the constraint to all the other coefficients.
The coefficients $x$ in Equation (8) are computed with an iterative solver. As the objective function is quadratic, efficient minimization algorithms like conjugate gradient can be used. The final image is obtained as $f_0 + G x$ and is cropped to the region of interest.
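A toy sketch of such a constrained quadratic minimization, with a small random matrix standing in for the forward operator (all sizes and index sets are hypothetical): the constrained coefficients are fixed and conjugate gradient is applied to the normal equations of the free ones.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(2)
m, n = 40, 12                      # toy problem sizes
A = rng.standard_normal((m, n))    # stand-in for the forward operator
b = rng.standard_normal(m)         # stand-in for the residual sinogram

M = np.array([0, 1, 2])            # indices constrained by the known zone
xZ = rng.standard_normal(3)        # coefficients fitted inside the known zone
F = np.setdiff1d(np.arange(n), M)  # free indices

# Minimize ||A x - b||^2 subject to x[M] = xZ: solve the normal equations
# of the free coefficients with conjugate gradient.
AF, AM = A[:, F], A[:, M]
rhs = AF.T @ (b - AM @ xZ)
op = LinearOperator((F.size, F.size), matvec=lambda v: AF.T @ (AF @ v))
xF, info = cg(op, rhs)             # info == 0 means CG converged

x = np.empty(n)
x[M], x[F] = xZ, xF                # assemble the full coefficient vector
```

In the actual method, the matrix-vector products are of course never formed explicitly: each iteration applies the upsampling/convolution, projection and cropping operators (and their adjoints).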
2.2.3 Computational aspects
Using Gaussians as functions to iteratively express the error has several computational advantages. Gradient-based algorithms for solving (8) involve the computation of the forward operator $T P_B G$ and its adjoint $G^\top P_B^\top T^\top$; these are usually the computationally expensive steps of iterative solvers. In this case, these operators can be computed in an efficient way.
The Gaussian kernel has an interesting property: it is the only (nonzero) kernel that is both rotationally invariant and separable [gaussians]. In our case, the convolution by a 2D Gaussian followed by a projection (forward Radon transform) is equivalent to projecting first and then convolving each line of the sinogram with the corresponding one-dimensional Gaussian, as illustrated in Figure 5.
The first advantage is of theoretical nature. In many implementations, the projector and backprojector are not adjoint of each other, for performance reasons. Although this gives satisfactory results in most practical applications, it raises theoretical issues on the convergence of algorithms that iteratively apply the forward and backward operators [unmatched]. By using a point-projector and a point-backprojector implementation, the pair can be made exactly adjoint, besides giving more accurate results.
The second advantage is on the computational side. As the forward operator consists in projecting 2D Gaussians placed on the image, it is equivalent to placing one-pixel coefficients (Dirac functions in the continuous case) on a grid in the image, projecting the image (with a point-projector) and convolving the sinogram lines with a one-dimensional Gaussian kernel. The same goes for the adjoint operator, which consists in retrieving the Gaussian coefficients from a sinogram. The standard way to compute this operator would be backprojecting the sinogram, convolving with the two-dimensional Gaussian (which is its own matched filter due to symmetry), and sampling the image on the grid to get the coefficients. Here, the convolution can first be performed in one dimension along the sinogram lines. The sinogram is then sampled at locations corresponding to the grid points in the image domain, and the resulting sampled sinogram is backprojected with a point-backprojector.
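This commutation can be verified numerically for a single parallel projection, modeled here simply as a column sum (the image content is arbitrary, and a zero margin avoids border effects): filtering the 2D image with a Gaussian and then projecting gives the same sinogram line as projecting first and filtering with the 1D Gaussian.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

sigma = 2.0
rng = np.random.default_rng(3)
img = np.zeros((64, 64))
img[16:48, 16:48] = rng.standard_normal((32, 32))   # zero margin wider than the kernel

# One parallel projection at angle 0 is a sum along an axis.
proj_then_conv = gaussian_filter1d(img.sum(axis=0), sigma, mode="constant")
conv_then_proj = gaussian_filter(img, sigma, mode="constant").sum(axis=0)
# Both orders give the same sinogram line (up to floating-point error)
```

This is a direct consequence of the separability of the Gaussian; combined with its rotational invariance, the same 1D kernel applies to the sinogram lines of every projection angle.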
2.3 Pseudocode of the proposed method
In this section, the different steps of the proposed method are summarized in two algorithms. The first performs the fitting of the error in the known zone as described in section 2.1, the second builds the resulting image as described in section 2.2.
A complete implementation of the proposed method is available at [gitlocaltomo]. It contains comments on the different steps and can be tuned for various setups. This implementation relies on the ASTRA toolbox [astragpu], [astra]; the point-projector scheme described in 2.2.3 is not implemented, for readability, but that approach would be better suited to a production reconstruction algorithm where performance is an issue.
For a circular zone, the location of the known zone can simply be implemented as a center pixel and a radius.
In practice, the final image is cropped to the region of interest. In Algorithm 2, the optimization (line 5) can be done with a gradient algorithm, as differentiating the quadratic error term requires only the operators and their adjoints.
3 Results and discussion
In this section, results and discussions on three test cases are presented. Synthetic sinograms are generated by projecting an object and truncating the sinogram to the radius of a given region of interest in the image.
The following notations are used: $\sigma$ is the standard deviation of the Gaussians of the basis, $s$ is the grid spacing, $N_B$ is the size (width or height in pixels) of the extended image and $r$ is the radius (in pixels) of the known region. In practice, the size of the “original image” (which corresponds to the size of an image that would contain the whole object) is unknown; hence $N_B$ is always chosen different from the width of the original test image.
In all cases, the input image is projected with a projector covering the entire object. The resulting sinogram is then truncated to the radius of the region of interest. The truncated sinogram is the input of the methods. The proposed method is compared to the padded FBP. As the padded FBP is used as an initial reconstruction by the proposed method, the benchmark is mainly about checking that the cupping effect is actually removed, and that the correction does not induce distortion to the final image.
The first test involves the standard Shepp-Logan phantom (Figure 6). The region of interest is embedded inside the “absorbing outer material” (the ellipse with the highest gray values) to simulate a local tomography acquisition. For an easier interpretation of the line profiles in the final reconstructed images, the values of the standard phantom are rescaled so that they all lie in the same fixed range. The width of the extended image is $N_B$.
Figure 7 shows the result of the reconstruction with padded FBP and with the proposed method. The Gaussian coefficients were computed with on a grid of spacing . The known region radius is pixels, and the extended image width is pixels.
By visual inspection, this method does not induce new artifacts in the reconstruction. Figure 8 shows a line profile of this reconstruction. The cupping effect is visible for the padded FBP, and it has been removed with the proposed method. More importantly, the average reconstructed values are distributed around the true interior values. This provides an illustration of the uniqueness theorem: knowing the values of a subregion of the ROI enables exact reconstruction of the ROI (up to numerical errors). The reconstruction with the proposed method bears the same high frequencies as the FBP with full data, which is a good indication that this method does not induce new artifacts. The fact that the reconstruction has the same mean values as the true interior could enable quantitative analysis of the reconstructed volume, which is not easily achievable in local tomography.
Figure 9 shows the difference between the reconstructions and the interior values. As expected, the cupping effect is visible for the padded FBP, while it is almost entirely suppressed in the reconstruction with the proposed method.
The second test involves the test image “Lena”, bearing both smooth regions and high-frequency textures. Figure 10 shows the test setup. The known region has been chosen as slowly varying as possible, since in real acquisitions the known region is likely to be air or coarse features. The width of the extended image is $N_B$.
Figure 11 shows the difference between the true interior and the reconstruction with the proposed method, for varying values of the radius of the known region. The same Gaussian basis parameters have been used for these reconstructions. As can be expected, the cupping effect removal is better when the known region is wider.
Figure 12 highlights the quality improvement for higher values of the known region radius: the reconstruction profile gets closer to the true interior and to the full-data FBP as the radius increases.
It is also interesting to visualize the reconstruction of the whole extended image. As can be seen in Figure 13, the Gaussian basis even yields an approximation of the exterior. This approximation is actually important for modeling the contribution of the external part in the acquired sinogram. The bias correction is thus closely related to the modeling of the external part.
The third test involves the image of a pencil resulting from a scan at the ESRF ID19 beamline, shown in Figure 14. The width of the extended image is $N_B$.
Figure 15 shows profiles of the difference between the reconstruction and the true interior. On this image, a greater known-region radius also improves the cupping removal. The profile of a line through the reconstructed image is depicted in Figure 16.
As a last remark, Figure 17 shows the result of this method without using the known zone constraint, that is, without applying the constraint in (8). As expected, there is a nonzero mean bias, although it is reduced with respect to padded FBP.
Besides visual inspection, reconstructions can be quantitatively compared to the true interior of the test image. Table 1 shows the comparison with two image metrics: the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). As these metrics are indicators of an average distance between two images, we believe they are well suited for evaluating how the low frequencies are corrected by the proposed method.
Image | Reconstruction method | Parameters | PSNR | SSIM
Shepp-Logan | Padded FBP | | 16.68 | 0.4578
Shepp-Logan | Proposed | , | 26.74 | 0.6045
Shepp-Logan | Proposed | , | 26.56 | 0.6067
Lena | Padded FBP | | 23.16 | 0.8605
Lena | Proposed | , | 33.97 | 0.9560
Lena | Proposed | , | 35.47 | 0.9589
Pencil | Padded FBP | | 26.41 | 0.8542
Pencil | Proposed | , | 31.15 | 0.9840
Pencil | Proposed | , , | 31.91 | 0.9901
Pencil | Proposed | , | 34.22 | 0.9906
These results suggest that the proposed method yields better overall reconstruction quality than padded FBP. In particular, it does not induce critical distortion in the reconstruction.
The following advantages of this method can be highlighted. Using a Gaussian basis allows for efficient computation: the correction can be implemented as a convolution with a simple kernel. This basis can also be extended to a multiresolution grid, where big Gaussian functions are placed outside the ROI and small Gaussian functions are placed inside the ROI, to reduce the degrees of freedom even further. This approach would be similar to the method proposed in [local_wavelet_sparsity_besov], although here only the correction is expressed in a multiresolution grid, not the image variables. What can also be noted is that no assumption is made on the shape or location of the ROI and the known region: the known region can, for example, consist of several regions of various shapes, corresponding to pores in a sample. Finally, by using the correction in the forward model (8), the Helgason-Ludwig conditions are naturally fulfilled.
The proposed method depends on a few parameters. The first is the size of the extended image, which should be big enough to model the contribution of the external part. The other parameters are the Gaussian standard deviation $\sigma$ and the spacing $s$ of the grid. Both are related in that the Gaussian functions should slightly overlap to approximate constant functions: if $\sigma$ is high, then $s$ should also be high, and conversely. These parameters essentially tune how coarse the Gaussian basis is: high values yield fast convergence but a coarse result, while small values yield slow convergence and a fine result.
Using a Gaussian basis does not yield an exact correction of the error, as the Gaussian functions defined in Equation (3) do not form a basis. For example, Gaussian functions do not yield a partition of unity, although a very close approximation of this property can be achieved [gaussians_partition_unity]. Thus, the final reconstruction cannot be exact due to the basis coarseness, but it can provide results quite close to FBP with full data, as seen in Figures 8, 12 and 16.
The fact that knowing only a zone of the ROI is enough to guarantee an almost exact reconstruction might be counterintuitive, especially in our case where this constraint is expressed in a coarse basis. In the Gaussian basis, local constraints are propagated to the global image by the projection and backprojection operators involved in the process. Using a coarse basis greatly reduces the degrees of freedom of problem (8). The classical tomographic reconstruction problem turned into a least-squares optimization is ill-posed, even for complete data. In a local tomography setup, the ill-posedness is even worse [local_tomoreconst21]. Iterative solvers dealing with problem (10)

$$ \operatorname*{argmin}_f \; \| P f - d \|_2^2 \qquad (10) $$

for $f$ in the pixel space, have very slow convergence in general due to the high number of degrees of freedom, even with spatial constraints. The importance of reducing the degrees of freedom of (10) is highlighted for example in [local_wavelet_sparsity_besov] and [local_tv_gouillart].
4 Conclusion
We presented a new technique of local tomography reconstruction based on the knowledge of a zone of the region of interest. This technique corrects the cupping effect in an initial reconstruction by expressing the error in a coarse basis of Gaussian functions. In accordance with local tomography uniqueness theorems, this method yields almost exact reconstructions, in spite of being only a correction in a coarse basis. Besides, practical considerations are given for an efficient implementation suitable for the reconstruction of real data. A commented implementation of this method can be found at [gitlocaltomo].