Tensor-based formulation and nuclear norm regularization for multi-energy computed tomography

07/19/2013 · Oguz Semerci, et al. · Schlumberger

The development of energy selective, photon counting X-ray detectors allows for a wide range of new possibilities in the area of computed tomographic image formation. Under the assumption of perfect energy resolution, we propose a tensor-based iterative algorithm that simultaneously reconstructs the X-ray attenuation distribution for each energy. We use a multi-linear image model rather than a more standard "stacked vector" representation in order to develop novel tensor-based regularizers. Specifically, we model the multi-spectral unknown as a 3-way tensor where the first two dimensions are space and the third dimension is energy. This approach allows for the design of tensor nuclear norm regularizers which, like their two-dimensional counterpart, are convex functions of the multi-spectral unknown. The solution to the resulting convex optimization problem is obtained using an alternating direction method of multipliers (ADMM) approach. Simulation results show that the generalized tensor nuclear norm can be used as a stand-alone regularization technique for the energy selective (spectral) computed tomography (CT) problem, and that when combined with total variation regularization it enhances the regularization capabilities, especially for low-energy images where the effects of noise are most prominent.


1 Introduction

A conventional computed tomography (CT) imaging system utilizes energy integrating detector technology [1] and provides a monochromatic reconstruction of the linear attenuation coefficient distribution of an object under investigation. The polychromatic nature of the X-ray spectra is either neglected [2, 3] or incorporated into the model in an iterative reconstruction method to achieve more accurate results [4, 5]. However, neglecting the polychromatic nature may cause the loss of significant energy dependent information [6, 4, 7]. A multi-energy CT system, on the other hand, distinguishes specific energy regions of the polychromatic spectra at the detector side. Energy selective detection is accomplished with the use of photon counting detectors (PCDs) [8], instead of the energy integrating detectors [1] used in conventional and dual energy CT.

PCDs, which are also referred to as energy discriminating detectors, have the ability to identify individual photons and classify them according to their energy. This property allows the recovery of the spectral properties of the object being imaged and opens the door to "color" CT technology with the simplicity of monochromatic reconstruction models [9]. Multi-energy CT promises improved diagnostic medical imaging [10, 11] as well as benefits in the security domain [12] due to its contrast enhancement and ability to characterize material composition.

Within an energy integrating detector, incoming photons are converted to electrical charge and accumulated on a detector, which is read out to determine the output signal. The latter step is the source of so-called detector read-out noise, which degrades image quality [5]. In a photon counting detector, on the other hand, an incoming photon is converted to an electrical pulse whose amplitude is determined by the energy of the photon, and the output signal is based on a counter that is incremented according to the charge of the electrical pulse [13]. This direct relationship between the counter and an incoming photon with a certain energy eliminates the main cause of detector read-out noise. Hence, in addition to their energy discriminating properties, PCDs offer better signal quality than energy integrating detectors [14].

The driving application of our work is security [15, 5, 16]. More specifically, the possibility of reconstructing the total attenuation distribution as a function of energy indicates the applicability of multi-energy CT to the luggage screening problem, as accurately reconstructed attenuation curves of nominal objects in luggage potentially lead to material identification. Nevertheless, the methods considered here are more broadly applicable both to the application of multi-energy CT for medical imaging as well as to other multi-linear inverse problems.

We propose an iterative reconstruction method for the multi-energy CT problem where we model the multi-spectral unknown as a low rank 3-way (third order) tensor. With the term tensor we refer to the multidimensional generalization of matrices, i.e., matrices are two-dimensional (2-way) tensors. Recently, there has been considerable work on recovering corrupted matrices or tensors based on low-rank and sparse decomposition [17] or solely on low-rank assumptions [18, 19, 20, 21]. These ideas were also applied to 4D cone beam CT [22] and spectral tomography [9], where the multi-linear unknown is modelled as a superposition of low rank and sparse matrices. In those efforts, the multi-linear unknown is represented as a matrix where each column is the lexicographically ordered collection of pixels at a given energy or time. The authors applied low rank plus sparse decomposition to this multi-linear unknown where the matrix nuclear norm penalty is applied to the low rank component.

We take a different approach and exploit the inherent tensorial nature of the multi-energy CT problem, allowing us to make use of a broader collection of tools for the analysis of these structures. To date, tensor decomposition tools such as the CANDECOMP/PARAFAC (CP) [23] and Tucker [24] decompositions have found application in data mining and analysis for chemistry, neuroscience, computer vision and communications [25, 26, 27]. The higher-order generalization of the singular value decomposition (SVD), which is also referred to as the multi-dimensional SVD [28], has been used for image processing applications such as facial recognition [29].

Despite being efficient tools for multidimensional data processing, these decompositions require the solution of difficult non-convex optimization problems with poor convergence properties. Moreover, for the CP and Tucker methods the number of components (unknowns) needs to be known a priori [25, 30, 31]. Thus, here we consider an alternative approach in which these tensor decomposition ideas form the basis for a generalization of the sparsity promoting nuclear norm concepts that have received so much attention recently [19, 32].

For the first approach, we are motivated by [20, 21, 33], where the idea of matrix completion via nuclear norm minimization is generalized to the tensor case using the matricization (unfolding) operation. The unfolding operation refers to rearranging the columns of a tensor along a certain mode or dimension into a matrix [30] (see Section 2 for a more detailed explanation). The multidimensional nuclear norm, or the generalized tensor nuclear norm, is obtained by summing the nuclear norms of the unfoldings in each mode. Successful results have been reported for tensor completion in multi-spectral imaging [34], color image inpainting [35, 36] and multi-linear classification and data analysis [36, 20] but, to the best of our knowledge, this norm has not previously been considered in a linear inverse problem context. This represents a first contribution of this paper.

We use this simple yet effective generalization in the multi-energy CT problem [37], where we assume the multi-spectral unknown is low rank in each of its unfoldings and construct a regularizer accordingly. The resulting tensor nuclear norm regularizer (TNN-1) allows fast processing and has satisfactory noise reduction capabilities. Applying the low rank prior to the multi-spectral matrix, which has the vectorized images of different energies in its columns, is a special case of our tensor model where only the unfolding in the energy dimension is considered [34]. Our approach provides a more powerful regularization method for the case where the number of energy bins is limited, since redundancy in the spatial dimensions can be exploited through the incorporation of unfoldings in the spatial dimensions [21, 34]. One of the contributions of this work is to demonstrate the benefits of low-rank assumptions on the unfoldings in the spatial dimensions for the design of regularizers.

The generalized tensor nuclear norm is based on the rank of each unfolding, which gives only a weak upper bound on the rank of a tensor (tensor rank is defined as the minimal number of 3-way outer products of vectors needed to express the tensor; we refer the reader to Section 3 of Kolda and Bader [30] for more details). Moreover, it does not exploit the correlations among all the dimensions simultaneously. With this motivation, as the second approach, we propose a new tensor nuclear norm based on the tensor singular value decomposition (t-SVD), which was introduced by Kilmer and Martin [38] and has proved useful for applications such as facial recognition and image deblurring [39]. The t-SVD is based on a new tensor multiplication scheme and has a structure similar to the matrix SVD, which allows an optimal low-rank representation (in terms of the Frobenius norm) of a tensor by a sum of outer products of matrices [38]. We devise a new tensor nuclear norm based on the t-SVD, which leads to our second regularizer (TNN-2). Similar to TNN-1, TNN-2 can be written in a matrix nuclear norm form. The introduction of this new tensor nuclear norm and its use for regularization is the second contribution of this paper.

In addition, we combine TNN-1 and TNN-2 with total variation (TV) regularization [40]. Edge enhancement and preservation are crucial for virtually all imaging applications, and TV is one of the most widely used edge preserving regularization techniques; it has been applied to CT as well [41, 42]. With the expectation that the spatial structure of the image at each energy is appropriately regularized using total variation, in this work we use the sum of the TV of the images at each energy as a regularizer.

Although the images at different energies are treated independently by TV, the low rank assumptions on the multi-dimensional unknown result in an implicit coupling of information across the energy dimension. Therefore, when we combine TV with TNN-1 or TNN-2, the accuracy of the reconstructions is enhanced, especially at low energies where the noise level is higher due to reduced photon counts. As materials are better distinguished at low energies, reliable recovery of the low energy images is a significant benefit of our approach.

The paper is organized as follows: Section 2 presents preliminaries on tensors and the notation used throughout the paper. Section 3 describes the measurement model and the multi-spectral phantom in the form of a 3-way tensor. Section 4, after a brief introduction to the rank minimization problem, provides the details of the tensor based modeling of the unknown and the mathematical details of our nuclear norm regularizers. Section 5 formulates the inverse problem, and Section 6 details the ADMM algorithm used for inversion. Section 7 presents simulation results and Section 8 gives concluding remarks and future directions.

2 Preliminaries on Tensors

In this section, we give the definitions and the notation that will be used throughout the paper. For a more comprehensive discussion, we refer the reader to the review by Kolda and Bader [30], and to Kilmer and Martin [38]. An $N$-way tensor is a multi-linear structure in $\mathbb{R}^{n_1 \times n_2 \times \cdots \times n_N}$. The unfolding (matricization) operation is defined as follows: for an $N$-way tensor $\mathcal{X}$, the mode-$n$ unfolding $X_{(n)}$ is a matrix whose columns are the mode-$n$ fibers, where the mode-$n$ fibers are the vectors obtained by varying the index in the $n$th dimension of the tensor and fixing all the others [30]. As shown in Fig. 1, for 3-way tensors, we can visualize the unfolding operations in terms of frontal, horizontal and lateral slices. Although we do not give formal definitions for horizontal and lateral slices, which are obvious from Fig. 1, we denote the $k$th frontal face of a 3-way tensor $\mathcal{A}$ by $A^{(k)}$, as this notation will be useful.

Figure 1: Mode-1, mode-2 and mode-3 unfoldings of a 3-way tensor $\mathcal{A}$. It is easy to visualize the unfolding operations in terms of frontal, horizontal and lateral slices: the unfolding operation corresponds to aligning the corresponding slices for each mode next to each other.

Folding and unfolding operators can be represented with permutations of lexicographically ordered vectors [20]. Specifically, the mode-$n$ unfolding maps the tensor element $(i_1, i_2, \ldots, i_N)$ to the matrix element $(i_n, j)$ according to [30]

$$j = 1 + \sum_{\substack{k=1 \\ k \neq n}}^{N} (i_k - 1) J_k, \qquad J_k = \prod_{\substack{m=1 \\ m \neq n}}^{k-1} n_m. \tag{1}$$

Let us denote the vectorized form of $\mathcal{X}$ by $x$ and that of $X_{(n)}$ by $x_n$. Then the relationship between $x$ and $x_n$ is

$$x_n = P_n x,$$

where $P_n$ is the permutation matrix that corresponds to the unfolding operation given by (1). Note that $x_1$ and $x$ are equal and $P_1$ is the identity matrix. This notation will be useful in Section 6.
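As an operational illustration (our own NumPy sketch, not code from the paper; the helper names `unfold` and `fold` are ours), the mode-$n$ unfolding of (1) and its inverse can be written in a few lines:

```python
import numpy as np

def unfold(X, mode):
    # Mode-n unfolding (matricization): move `mode` to the front, then
    # flatten the remaining axes in column-major (Fortran) order so that
    # the columns of the result are the mode-n fibers, as in (1).
    return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1), order='F')

def fold(M, mode, shape):
    # Inverse of `unfold`: rebuild the tensor of the given shape.
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(np.reshape(M, [shape[mode]] + rest, order='F'), 0, mode)

# Round-trip check on a small space x space x energy tensor
X = np.random.rand(4, 5, 3)
for n in range(3):
    assert np.allclose(fold(unfold(X, n), n, X.shape), X)
```

In this convention the mode-1 unfolding merely reinterprets the column-major vectorization, so `unfold(X, 0).ravel(order='F')` equals `X.ravel(order='F')`; this is the statement above that $P_1$ is the identity.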

To construct our second tensor nuclear norm regularization approach in Section 4.2 we need the t-SVD of [38], which in turn requires that we introduce three operators: fold, unfold and bcirc. While the mode-1 unfolding aligns frontal slices next to each other, the $\mathrm{unfold}$ operation aligns them on top of each other:

$$\mathrm{unfold}(\mathcal{A}) = \begin{bmatrix} A^{(1)} \\ A^{(2)} \\ \vdots \\ A^{(n_3)} \end{bmatrix},$$

and $\mathrm{fold}$ folds them back into tensor form: $\mathrm{fold}(\mathrm{unfold}(\mathcal{A})) = \mathcal{A}$. Using the $A^{(k)}$'s, one can form the block circulant matrix as follows:

$$\mathrm{bcirc}(\mathcal{A}) = \begin{bmatrix} A^{(1)} & A^{(n_3)} & \cdots & A^{(2)} \\ A^{(2)} & A^{(1)} & \cdots & A^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ A^{(n_3)} & A^{(n_3 - 1)} & \cdots & A^{(1)} \end{bmatrix}. \tag{2}$$
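The following sketch (ours; the names `tunfold`, `tfold` and `bcirc` are hypothetical, chosen to avoid clashing with the mode-$n$ `unfold` above) realizes the three operators directly from their definitions, continuing the NumPy snippet above:

```python
def tunfold(A):
    # Stack the frontal slices A[:, :, 0], ..., A[:, :, n3-1] on top of
    # each other, giving an (n3*n1) x n2 matrix.
    n1, n2, n3 = A.shape
    return A.transpose(2, 0, 1).reshape(n3 * n1, n2)

def tfold(M, shape):
    # Inverse of `tunfold`.
    n1, n2, n3 = shape
    return M.reshape(n3, n1, n2).transpose(1, 2, 0)

def bcirc(A):
    # Block circulant matrix of (2): block (i, j) is the frontal slice
    # with index (i - j) mod n3.
    n1, n2, n3 = A.shape
    B = np.zeros((n1 * n3, n2 * n3))
    for j in range(n3):
        for i in range(n3):
            B[i*n1:(i+1)*n1, j*n2:(j+1)*n2] = A[:, :, (i - j) % n3]
    return B
```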

The $n$-mode product of an $N$-way tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_N}$ with a matrix $U \in \mathbb{R}^{J \times n_n}$ produces a tensor in $\mathbb{R}^{n_1 \times \cdots \times n_{n-1} \times J \times n_{n+1} \times \cdots \times n_N}$ and is defined as

$$\big(\mathcal{X} \times_n U\big)_{(n)} = U X_{(n)},$$

where $\times_n$ is the $n$-mode product operation.
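Reusing the `unfold`/`fold` helpers above, a minimal sketch (ours) of the $n$-mode product follows directly from the defining identity $(\mathcal{X} \times_n U)_{(n)} = U X_{(n)}$:

```python
def mode_n_product(X, U, mode):
    # n-mode product: multiply U into the mode-n unfolding, then fold back.
    # U must have shape (J, X.shape[mode]).
    new_shape = list(X.shape)
    new_shape[mode] = U.shape[0]
    return fold(U @ unfold(X, mode), mode, tuple(new_shape))

# Quick numerical check of the defining identity
X = np.random.rand(4, 5, 3)
U = np.random.rand(7, 5)
assert np.allclose(unfold(mode_n_product(X, U, 1), 1), U @ unfold(X, 1))
```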

While the $n$-mode product defines an operation between a tensor and a matrix, multiplication of 3-way tensors can be performed using the t-product [38]. For $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times n_4 \times n_3}$ the t-product is given as

$$\mathcal{A} * \mathcal{B} = \mathrm{fold}\big(\mathrm{bcirc}(\mathcal{A})\, \mathrm{unfold}(\mathcal{B})\big). \tag{3}$$

Notice that $\mathcal{A} * \mathcal{B}$ is in $\mathbb{R}^{n_1 \times n_4 \times n_3}$. The t-product defined in (3) is the basis for the t-SVD (tensor SVD) and the regularizer, TNN-2, which is introduced in Section 4.2. Given $\mathcal{A}$, its transpose, $\mathcal{A}^T$, is obtained by applying the matrix transpose to each frontal face and then reversing the order of the transposed frontal slices 2 through $n_3$:

$$\mathcal{A}^T = \mathrm{fold}\left( \begin{bmatrix} \big(A^{(1)}\big)^T \\ \big(A^{(n_3)}\big)^T \\ \vdots \\ \big(A^{(2)}\big)^T \end{bmatrix} \right).$$

The tensor $\mathcal{Q} \in \mathbb{R}^{n \times n \times n_3}$ is orthogonal in the sense of the t-product if

$$\mathcal{Q}^T * \mathcal{Q} = \mathcal{Q} * \mathcal{Q}^T = \mathcal{I},$$

where $\mathcal{I}$ is the identity tensor whose first frontal face is the $n \times n$ identity matrix, and whose other frontal slices are all zeros.
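In practice the t-product is rarely formed via the block circulant matrix; by the block diagonalization property reviewed next, it reduces to independent matrix products in the Fourier domain. A sketch (ours), with a cross-check against the definition (3):

```python
def t_product(A, B):
    # A * B = tfold(bcirc(A) @ tunfold(B)); computed here slice-wise in the
    # Fourier domain, which is equivalent and much cheaper.
    assert A.shape[1] == B.shape[0] and A.shape[2] == B.shape[2]
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)   # per-slice matrix products
    return np.real(np.fft.ifft(Ch, axis=2))

def t_transpose(A):
    # Transpose each frontal slice, then reverse the order of slices 2..n3.
    At = A.transpose(1, 0, 2)
    return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

# Cross-check against the block circulant definition (3)
A, B = np.random.rand(3, 4, 5), np.random.rand(4, 2, 5)
assert np.allclose(tunfold(t_product(A, B)), bcirc(A) @ tunfold(B))
assert np.allclose(t_transpose(t_transpose(A)), A)
```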

We now review the block diagonalization property of block circulant matrices. For any block circulant matrix $\mathrm{bcirc}(\mathcal{A})$ we have

$$\big(F_{n_3} \otimes I_{n_1}\big)\, \mathrm{bcirc}(\mathcal{A})\, \big(F_{n_3}^{*} \otimes I_{n_2}\big) = \begin{bmatrix} \hat{A}^{(1)} & & \\ & \ddots & \\ & & \hat{A}^{(n_3)} \end{bmatrix}, \tag{4}$$

where $I_{n_1}$ and $I_{n_2}$ are the identity matrices in $\mathbb{R}^{n_1 \times n_1}$ and $\mathbb{R}^{n_2 \times n_2}$, respectively, $F_{n_3}$ is the $n_3 \times n_3$ normalized discrete Fourier transform matrix [43], and the $\hat{A}^{(k)}$'s are the frontal faces of the tensor $\hat{\mathcal{A}}$, which is obtained by applying the Fast Fourier Transform (FFT) to the mode-3 fibers of $\mathcal{A}$. We will use this notation, i.e., $\hat{\mathcal{A}}$ and $\hat{A}^{(k)}$ for the tensor and its $k$th frontal face in the Fourier domain, in Section 4.2.
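Property (4) is easy to verify numerically; the check below (our illustration) forms the normalized DFT matrix and confirms that the diagonal blocks of the transformed block circulant matrix are the FFT-domain frontal faces $\hat{A}^{(k)}$:

```python
n1, n2, n3 = 3, 4, 5
A = np.random.rand(n1, n2, n3)
F = np.fft.fft(np.eye(n3)) / np.sqrt(n3)       # normalized DFT matrix
L = np.kron(F, np.eye(n1)) @ bcirc(A) @ np.kron(F.conj().T, np.eye(n2))
Ahat = np.fft.fft(A, axis=2)                   # frontal faces of A-hat
for k in range(n3):
    assert np.allclose(L[k*n1:(k+1)*n1, k*n2:(k+1)*n2], Ahat[:, :, k])
```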

Finally, we note that $\mathrm{bcirc}$ is a linear operation, which can be written in terms of permutation matrices. Let $a$ denote the vectorized version of $\mathcal{A}$ as described before, and let $b$ denote the vectorized version of $\mathrm{bcirc}(\mathcal{A})$. Then, we have

$$b = \begin{bmatrix} \tilde{P}_1 \\ \tilde{P}_2 \\ \vdots \\ \tilde{P}_{n_3} \end{bmatrix} a, \tag{5}$$

where the $\tilde{P}_k$'s reorder the elements of $a$ according to the column blocks of (2).

3 The Measurement Model and The Multi-spectral Unknown as a Tensor

Figure 2: Parallel beam X-ray measurement geometry.

Polychromatic CT [44, 4, 7] is based on the projection model

$$m(\ell) = \int S(E)\, \exp\!\left( -\int_{L(\ell)} \mu(x, y, E)\, ds \right) dE, \tag{6}$$

where $\mu(x, y, E)$ is the energy dependent attenuation coefficient, $S(E)$ is the source spectrum and $\ell$ parametrizes the X-ray path $L(\ell)$. In this work, we used the parallel beam measurement geometry [45] depicted in Fig. 2. Under the assumption of infinitesimal detector bin width (i.e., perfect energy resolution) the polychromatic projection given in (6) simplifies to a monochromatic one, where $S(E) = \sum_{q=1}^{N_E} s_q\, \delta(E - E_q)$, resulting in the following model for the data corresponding to the $q$th bin:

$$m_q(\ell) = s_q \exp\!\left( -\int_{L(\ell)} \mu(x, y, E_q)\, ds \right), \tag{7}$$

where $N_E$ is the number of energy bins. We refer the reader to [46] for an example of a fully polychromatic energy-resolved CT model.

In order to obtain a discrete representation of (7), we discretize each $\mu(\cdot, \cdot, E_q)$ into an $N_x \times N_y$ image whose lexicographically ordered pixels form $f_q \in \mathbb{R}^{N}$, where $N = N_x N_y$ and $N_x$, $N_y$ refer to the number of pixels in the spatial dimensions $x$ and $y$. We also discretize the measurement space into $M$ source-detector pairs and introduce the system matrix $A \in \mathbb{R}^{M \times N}$, where $A_{ij}$ represents the length of the segment of the $i$th ray passing through the $j$th pixel. Incorporating the Poisson statistics of X-ray interactions, the multi-energy measurement model is written

$$m_{q,i} \sim \mathrm{Poisson}\Big\{ s_q \exp\big( -[A f_q]_i \big) \Big\}, \tag{8}$$

where $q = 1, \ldots, N_E$ and $i = 1, \ldots, M$ index energy bins and source-detector pairs, respectively. Note that electronic noise can be neglected for PCDs; this is different from conventional CT, where energy integrating detectors are used [13, 14].
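As a hedged illustration of (8), the sketch below (ours) draws per-bin Poisson counts from the Beer-Lambert mean. The mostly-zero random `A_sys`, the image stack `Fimg` and the blank-scan intensities `s` are hypothetical stand-ins for a real system matrix, phantom and source spectrum:

```python
rng = np.random.default_rng(0)
Nx, Ny, NE, M = 32, 32, 12, 2048
Npix = Nx * Ny
# Stand-in system matrix: mostly-zero nonnegative "ray lengths"
A_sys = rng.random((M, Npix)) * (rng.random((M, Npix)) < 0.05)
Fimg = 0.02 * rng.random((Npix, NE))        # one attenuation image per bin
s = np.full(NE, 1e5)                        # expected blank-scan counts per bin

mean_counts = s * np.exp(-A_sys @ Fimg)     # Beer-Lambert mean, shape (M, NE)
m = rng.poisson(mean_counts)                # measured photon counts m_{q,i}
p = -np.log(np.maximum(m, 1) / s)           # log-domain projections for PWLS
```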

Figure 3: Multi-spectral phantom and the attenuation curves for existing materials.

Our goal is to develop an image formation method that treats all of the $f_q$'s in a unified manner, as done in [9], rather than reconstructing each independently of the others [12]. Towards this aim, we utilize tensors, which are multi-linear generalizations of vectors and matrices. Specifically, we define the 3-way (third-order) tensor $\mathcal{F} \in \mathbb{R}^{N_x \times N_y \times N_E}$, whose first two dimensions are spatial and whose third dimension is energy, and whose frontal slices are the images whose lexicographical orderings are the $f_q$'s. A depiction of the multi-spectral phantom used in this study along with the corresponding attenuation curves is given in Fig. 3. Note that the multi-linear structure can be extended to higher dimensions for different classes of problems; for example, one can consider a 5-way dynamic problem with three spatial dimensions, energy and time. The goal of the multi-energy CT problem in this paper, however, is to reconstruct $\mathcal{F}$ given $m_{q,i}$ for $q = 1, \ldots, N_E$ and $i = 1, \ldots, M$.

4 Low-Rank Modeling and Regularization

As mentioned before, low-rank modeling is an important tool not only for compressing [47] and analyzing [29, 48] large data sets but also for regularization and the incorporation of prior information [49, 9]. Traditionally, low-rank modeling is applied to a matrix variable, which is assumed to be low order or of low complexity [50]. As the multi-linear generalizations, such as our multi-spectral unknown in the form of a 3-way tensor, are closely related to the matrix case, we briefly describe the rank minimization problem of matrices in this section.

Let the matrix $X \in \mathbb{R}^{m \times n}$ denote the unknown variable that is assumed to be low-rank. For instance, $X$ can be the system parameters of a low-order control system [50], a low dimensional representation of data [51], or the adjacency matrix of a network graph [52]. The problem of estimating $X$ with minimal rank from the output $b = \mathcal{A}(X)$ of a linear system $\mathcal{A}$ can be formulated as a minimization problem:

$$\min_X \; \mathrm{rank}(X) \quad \text{subject to} \quad \mathcal{A}(X) = b. \tag{9}$$

However, minimization of $\mathrm{rank}(X)$, which is a non-convex function of $X$, is an NP-hard problem [32]. Consequently, Fazel et al. [50] proposed the replacement of the rank function with the nuclear norm, which is defined as

$$\|X\|_* = \sum_i \sigma_i(X),$$

where the $\sigma_i(X)$'s are the singular values of the matrix $X$. This replacement results in the following optimization problem:

$$\min_X \; \|X\|_* \quad \text{subject to} \quad \mathcal{A}(X) = b. \tag{10}$$

The minimization problem (10) is motivated by the fact that the nuclear norm provides the tightest convex relaxation of the rank function for matrices [19]. The replacement of rank with the nuclear norm is analogous to the use of the $\ell_1$ norm as a proxy for the $\ell_0$ semi-norm to achieve sparse signal reconstructions [53]. This convex relaxation technique and the equivalence of (9) and (10) in the compressed sensing setting are analyzed in [32]. In the sequel we interpret the nuclear norm term as a regularizer and seek solutions to problems of the form

$$\min_X \; \frac{1}{2} \big\| \mathcal{A}(X) - b \big\|_2^2 + \lambda \|X\|_*, \tag{11}$$

where $\lambda > 0$ is the regularization parameter.

The low-rank assumptions and the nuclear norm heuristics have been generalized to the multi-linear case, i.e., to tensors, using the unfolding operations [21, 35]. Inspired by these works, we developed the tensor nuclear norm regularizer (TNN-1), which is introduced in Section 4.1. Additionally, we define a new tensor nuclear norm that exploits the t-SVD [38]. This new tensor nuclear norm and the regularizer (TNN-2) based on it are defined in Section 4.2. A common property of TNN-1 and TNN-2 is that both can be formulated in terms of matrix nuclear norms, as we shall explain next.

4.1 Tensor Rank and the Generalized Tensor Nuclear Norm Regularizer (TNN-1)

We start with the definition of the Tucker decomposition, as the generalized tensor nuclear norm is related to it. The Tucker model [24, 30] is a multi-linear extension of the SVD in which an $N$-way tensor $\mathcal{X}$ is decomposed into a core tensor $\mathcal{G}$, which controls the interactions between the modes, and $N$ matrices $U^{(n)}$, which multiply the core tensor in each mode:

$$\mathcal{X} = \mathcal{G} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)}.$$

Figure 4: Tucker decomposition of a 3-way tensor. The core tensor controls the interactions between the modes and the matrices that multiply the core tensor in each mode.

Here, the columns of $U^{(n)}$ can be considered as left singular vectors of the mode-$n$ unfolding, and $r_n = \mathrm{rank}(X_{(n)})$, for $n = 1, \ldots, N$, is referred to as the $n$-rank. (The CP decomposition can be seen as a special case of Tucker where the core tensor is super-diagonal; the tensor rank due to CP needs to be known a priori [30].) In the general Tucker model, the $U^{(n)}$'s need not be orthogonal. The special case of the Tucker model where the $U^{(n)}$'s are orthogonal matrices is referred to as the Higher-Order Singular Value Decomposition (HOSVD) [28]. The Tucker model can also be written in terms of unfoldings. For the three-way case we have

$$\begin{aligned} X_{(1)} &= U^{(1)} G_{(1)} \big( U^{(3)} \otimes U^{(2)} \big)^T, \\ X_{(2)} &= U^{(2)} G_{(2)} \big( U^{(3)} \otimes U^{(1)} \big)^T, \\ X_{(3)} &= U^{(3)} G_{(3)} \big( U^{(2)} \otimes U^{(1)} \big)^T, \end{aligned} \tag{12}$$

where $\otimes$ is the Kronecker product. Fig. 4 illustrates the Tucker decomposition for the 3-way case. The equations in (12) and the definition of the $n$-rank show that if a tensor is low rank in its $n$th mode, its unfolding in that mode is a low rank matrix. Due to this connection, the matrix nuclear norm has been generalized to tensors by utilizing the unfolding operation in each mode in order to estimate low-rank tensors via a convex minimization problem [35, 21, 33]. The generalized tensor nuclear norm of an $N$-way tensor $\mathcal{X}$ is given as [35, 21]

$$\sum_{n=1}^{N} \big\| X_{(n)} \big\|_*. \tag{13}$$

We refer the reader to [35] for a thorough discussion of the relation of (13) to the Tucker decomposition and to the Schatten 1-norm of matrices.

Figure 5: The unfoldings of $\mathcal{F}$ and their singular values in log scale.

The low-rank tensor concept is quite relevant to the multi-energy CT problem: X-ray attenuation values at neighboring energies are highly correlated. Therefore, for spectral CT, one expects the mode-3 unfolding $F_{(3)}$ to be low-rank [9]. However, structural redundancies can also be exploited by enforcing low-rank structure on the other two unfoldings. To this end, we use the more general form of the tensor nuclear norm given in (13), which was proposed by Tomioka et al. [20], as a regularizer:

$$\sum_{n=1}^{3} \lambda_n \big\| F_{(n)} \big\|_*, \tag{14}$$

where the $\lambda_n$'s can be regarded as regularization parameters that tune the importance of each unfolding. A different parameter is assigned to each unfolding in order to make the regularizer more flexible [20], so that the low-rank assumption on any mode can be discarded if desired; see Section 7 for examples where we set $\lambda_1$ and $\lambda_2$ to zero. Fig. 5 shows the unfoldings of the phantom with 12 energy levels given in Fig. 3 and their singular values. The rapid decay of the singular values provides an indication of the usefulness of (14) as a regularizer for multi-energy CT reconstruction.
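In code, (14) is a one-liner on top of the `unfold` helper of Section 2 (our sketch; `np.linalg.norm(..., 'nuc')` is NumPy's matrix nuclear norm, i.e., the sum of singular values):

```python
def tnn1(F, lams):
    # Weighted generalized tensor nuclear norm (14):
    # sum_n lam_n * || F_(n) ||_*
    return sum(lam * np.linalg.norm(unfold(F, n), 'nuc')
               for n, lam in enumerate(lams))

# Keeping only the energy unfolding, as in some Section 7 experiments:
# tnn1(F, [0.0, 0.0, 1.0])
```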

4.2 A t-SVD Based Tensor Nuclear Norm and Regularization (TNN-2)

In [38] it is shown that any tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ can be factored as

$$\mathcal{A} = \mathcal{U} * \mathcal{S} * \mathcal{V}^T,$$

where $\mathcal{U} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ and $\mathcal{V} \in \mathbb{R}^{n_2 \times n_2 \times n_3}$ are orthogonal, and $\mathcal{S} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is made up of diagonal frontal faces. It is easy to show that, as is the case with the common matrix SVD, the t-SVD allows the tensor to be written as a finite sum of outer products of matrices [39]:

$$\mathcal{A} = \sum_{i=1}^{\min(n_1, n_2)} \mathcal{U}(:, i, :) * \mathcal{S}(i, i, :) * \mathcal{V}(:, i, :)^T, \tag{15}$$

where $\mathcal{U}(:, i, :)$ and $\mathcal{V}(:, i, :)$ correspond to lateral faces of $\mathcal{U}$ and $\mathcal{V}$, respectively, and $\mathcal{S}(i, i, :)$ is a mode-3 fiber, following Matlab's indexing convention. Now, we have the following relationship in the Fourier domain [38]:

$$\hat{A}^{(k)} = \hat{U}^{(k)} \hat{S}^{(k)} \big( \hat{V}^{(k)} \big)^T, \qquad k = 1, \ldots, n_3, \tag{16}$$

where the left-hand side is the $k$th diagonal block of the block diagonalized version of $\mathcal{A}$ as given in (4), and $\hat{U}^{(k)} \hat{S}^{(k)} (\hat{V}^{(k)})^T$ is the SVD of that block. In the light of (15) and (16) we propose the following tensor nuclear norm:

$$\|\mathcal{A}\|_{\circledast} = \big\| \mathrm{bcirc}(\mathcal{A}) \big\|_*.$$

Notice that we use the circled asterisk $\circledast$ for this tensor nuclear norm definition. From (16) we have

$$\|\mathcal{A}\|_{\circledast} = \Big\| \mathrm{blockdiag}\big( \hat{A}^{(1)}, \ldots, \hat{A}^{(n_3)} \big) \Big\|_* = \sum_{k=1}^{n_3} \big\| \hat{A}^{(k)} \big\|_*, \tag{17}$$

where the first equality follows immediately from (4) together with the unitary invariance of the matrix nuclear norm, and the second from the block diagonal structure. A consequence of (17) is that $\|\cdot\|_{\circledast}$ is a valid norm, since

  • For any tensor $\mathcal{A}$, $\|\mathcal{A}\|_{\circledast} \geq 0$, and $\|\mathcal{A}\|_{\circledast} = 0$ if and only if $\mathcal{A} = 0$, by definition of the matrix nuclear norm.

  • Let $\alpha$ be a scalar; then $\|\alpha \mathcal{A}\|_{\circledast} = \|\mathrm{bcirc}(\alpha \mathcal{A})\|_* = |\alpha|\, \|\mathcal{A}\|_{\circledast}$.

  • Let $\mathcal{A}$ and $\mathcal{B}$ be two tensors of the same size; then, by the linearity of $\mathrm{bcirc}$ and the triangle inequality for the matrix nuclear norm, $\|\mathcal{A} + \mathcal{B}\|_{\circledast} \leq \|\mathcal{A}\|_{\circledast} + \|\mathcal{B}\|_{\circledast}$.

Figure 6: The block circulant matrix $\mathrm{bcirc}(\mathcal{F})$ and its singular values.

Similar to the TNN-1 case, we use the new tensor nuclear norm given in (17) as a regularizer and form TNN-2 as

$$\lambda\, \|\mathcal{F}\|_{\circledast}.$$

Fig. 6 shows $\mathrm{bcirc}(\mathcal{F})$ for the phantom given in Fig. 3 and its singular values, which decay rapidly.

It has been shown that a truncated t-SVD provides an optimal low-rank representation of a tensor in terms of the Frobenius norm, in the same way that a truncated matrix SVD gives an optimal low-rank approximation to a matrix [38]. Thus, in this sense, our newly defined tensor nuclear norm, which is based on the t-SVD, is more analogous to the matrix nuclear norm than the generalized tensor nuclear norm given in (13). Additionally, compared to TNN-1, TNN-2 has only one regularization parameter that needs to be determined.
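Computationally, (17) means the proposed norm never requires forming the large block circulant matrix explicitly: one FFT along the third mode followed by $n_3$ small SVDs suffices. A sketch (ours), with a brute-force check against the explicit `bcirc` form from Section 2:

```python
def tnn2(A):
    # t-SVD based tensor nuclear norm (17): ||bcirc(A)||_*, computed as the
    # sum of nuclear norms of the Fourier-domain frontal faces.
    Ahat = np.fft.fft(A, axis=2)
    return sum(np.linalg.norm(Ahat[:, :, k], 'nuc') for k in range(A.shape[2]))

A = np.random.rand(4, 4, 6)
assert np.isclose(tnn2(A), np.linalg.norm(bcirc(A), 'nuc'))
```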

5 Inverse Problem Formulation

The measurement model given in (8) leads to the penalized weighted least squares (PWLS) formulation [54], which gives a quadratic approximation to the Poisson log-likelihood function for energy bin $q$ as

$$\Phi_q(f_q) = \frac{1}{2} \big( p_q - A f_q \big)^T W_q \big( p_q - A f_q \big).$$

Here, $p_{q,i} = -\ln(m_{q,i} / s_q)$ and $W_q$ is a diagonal weighting matrix with $[W_q]_{ii} = m_{q,i}$. Adding the $\Phi_q$'s and the regularization function $R(\mathcal{F})$ we obtain a convex objective function:

$$\min_{\mathcal{F}} \; \sum_{q=1}^{N_E} \Phi_q(f_q) + R(\mathcal{F}), \tag{18}$$

where $R(\mathcal{F})$ is a combination of TNN-1 or TNN-2 with a total variation (TV) regularizer. We have considered two types of TV regularizers. The first, which we denote by $R_{TV}$, is the weighted superposition of an isotropic TV operator (2D TV) applied to the frontal slices of $\mathcal{F}$:

$$R_{TV}(\mathcal{F}) = \sum_{q=1}^{N_E} \beta_q\, \mathrm{TV}\big( F^{(q)} \big), \tag{19}$$

with

$$\mathrm{TV}(F) = \sum_{i,j} \sqrt{ \big( F_{i+1,j} - F_{i,j} \big)^2 + \big( F_{i,j+1} - F_{i,j} \big)^2 }.$$

Here, the $\beta_q$'s are the regularization parameters. The second, which we denote by $R_{3DTV}$, is the three dimensional TV (3D TV) operator defined as

$$R_{3DTV}(\mathcal{F}) = \beta\, \mathrm{TV}_{3D}(\mathcal{F}),$$

where $\mathrm{TV}_{3D}$ is the obvious extension of the 2D case to 3D. In the remainder of the paper we refer to the first approach as TV and to the second as 3D-TV regularization. TV regularization favors images with sparse gradients, hence it works well for piecewise constant image reconstruction. However, due to its stair-casing effect, TV tends to be problematic for texture recovery [55]. As we demonstrate empirically in Section 7, combining the tensor nuclear norm regularizers with TV reduces the amount of TV needed for reasonable noise suppression and helps with recovering texture.
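A minimal sketch (ours) of the slice-wise isotropic TV in (19), using forward differences with replicated boundary values; the paper's exact discretization of the TV operator may differ:

```python
def tv2d(U):
    # Isotropic TV: sum over pixels of the gradient magnitude, with forward
    # differences and a replicated last row/column (zero boundary gradient).
    dx = np.diff(U, axis=0, append=U[-1:, :])
    dy = np.diff(U, axis=1, append=U[:, -1:])
    return np.sum(np.sqrt(dx**2 + dy**2))

def r_tv(F, betas):
    # Weighted sum of 2D TV over the frontal (energy) slices, as in (19).
    return sum(b * tv2d(F[:, :, q]) for q, b in enumerate(betas))
```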

6 Solution Algorithm via the Alternating Direction Method of Multipliers (ADMM)

ADMM is a combination of dual decomposition and augmented Lagrangian methods [56, 57]. Although it results in a four-fold increase in the number of variables in the minimization procedure (see Section 6.1), ADMM provides a simple splitting scheme that breaks down a cost function that is hard to minimize into pieces that are more tractable and can be minimized efficiently. Splitting based methods have been used for several problems including iterative CT reconstruction [58, 59, 60], image recovery [61] and restoration [62], and tensor completion [21, 20]. We examine the solution algorithm according to the structure of $R(\mathcal{F})$.

6.1 TNN-1 and TV Regularization

The first case is when TNN-1 is combined with $R_{TV}$:

$$R(\mathcal{F}) = \sum_{n=1}^{3} \lambda_n \big\| F_{(n)} \big\|_* + R_{TV}(\mathcal{F}).$$

First, the optimization problem given in (18) for this choice of $R$ is reformulated by introducing auxiliary tensors $\mathcal{Z}_n$:

$$\min_{\mathcal{F},\, \mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3} \; \sum_{q=1}^{N_E} \Phi_q(f_q) + R_{TV}(\mathcal{F}) + \sum_{n=1}^{3} \lambda_n \big\| (Z_n)_{(n)} \big\|_* \tag{20}$$

subject to $\mathcal{Z}_n = \mathcal{F}$, $n = 1, 2, 3$.

To solve (20) we form the augmented Lagrangian

$$\mathcal{L}\big( \mathcal{F}, \{\mathcal{Z}_n\}, \{\mathcal{B}_n\} \big) = \sum_{q=1}^{N_E} \Phi_q(f_q) + R_{TV}(\mathcal{F}) + \sum_{n=1}^{3} \Big( \lambda_n \big\| (Z_n)_{(n)} \big\|_* + \big\langle \mathcal{B}_n,\, \mathcal{F} - \mathcal{Z}_n \big\rangle + \frac{\rho}{2} \big\| \mathcal{F} - \mathcal{Z}_n \big\|_F^2 \Big), \tag{21}$$

where the $\mathcal{B}_n$'s are dual variables, $\rho > 0$ is the penalty parameter and $\langle \cdot, \cdot \rangle$ is the inner product in the sense of the Frobenius norm, defined for $\mathcal{A}, \mathcal{B} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ as

$$\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i,j,k} \mathcal{A}_{ijk}\, \mathcal{B}_{ijk}.$$

ADMM minimizes (21) with respect to $\mathcal{F}$ and the $\mathcal{Z}_n$'s in an alternating manner and then updates the dual variables:

$$\begin{aligned} \mathcal{F}^{k+1} &= \arg\min_{\mathcal{F}} \; \mathcal{L}\big( \mathcal{F}, \{\mathcal{Z}_n^{k}\}, \{\mathcal{B}_n^{k}\} \big), \\ \mathcal{Z}_n^{k+1} &= \arg\min_{\mathcal{Z}_n} \; \mathcal{L}\big( \mathcal{F}^{k+1}, \{\mathcal{Z}_n\}, \{\mathcal{B}_n^{k}\} \big), \quad n = 1, 2, 3, \\ \mathcal{B}_n^{k+1} &= \mathcal{B}_n^{k} + \rho\, \big( \mathcal{F}^{k+1} - \mathcal{Z}_n^{k+1} \big). \end{aligned} \tag{22}$$

Using the permutation matrix notation given in Section 2, let $f$ and $z_n$ denote the vectorized forms of $\mathcal{F}$ and $\mathcal{Z}_n$, so that $P_n f$ and $P_n z_n$ are the vectorized mode-$n$ unfoldings. Since permutations preserve the Euclidean norm, we can write

$$\big\| \mathcal{F} - \mathcal{Z}_n \big\|_F^2 = \big\| f - z_n \big\|_2^2 = \sum_{q=1}^{N_E} \big\| [f - z_n]_{\mathcal{I}_q} \big\|_2^2,$$

where $\mathcal{I}_q$ refers to the index set of the corresponding energy (e.g., $\mathcal{I}_1 = \{1, \ldots, N_x N_y\}$ for $q = 1$). Hence, the $\mathcal{F}$ update in (22) can be decoupled and each $f_q$ can be treated separately; each decoupled subproblem is a total variation regularized quadratic problem that can be solved using various methods [63, 64]. We used FISTA [65] in this work.

With a straightforward reformulation one finds that the $\mathcal{Z}_n$ updates can be obtained via the proximity operator of the nuclear norm,

$$\mathcal{Z}_n^{k+1} = \arg\min_{\mathcal{Z}_n} \; \lambda_n \big\| (Z_n)_{(n)} \big\|_* + \frac{\rho}{2} \Big\| \mathcal{Z}_n - \big( \mathcal{F}^{k+1} + \mathcal{B}_n^{k} / \rho \big) \Big\|_F^2,$$

which has an analytical solution via the singular value shrinkage operator [18]. Specifically,

$$\mathcal{D}_\tau(X) = U\, \mathrm{shrink}_\tau(\Sigma)\, V^T, \qquad \mathrm{shrink}_\tau(\Sigma) = \mathrm{diag}\big( \max(\sigma_i - \tau,\, 0) \big), \tag{23}$$

where $X = U \Sigma V^T$ is the singular value decomposition of $X$ and $\mathrm{shrink}_\tau$ is the shrinkage operator with threshold $\tau$ applied to the singular values; here it is applied to the mode-$n$ unfolding with $\tau = \lambda_n / \rho$.
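The shrinkage step (23) is a few lines of NumPy (our sketch); in the TNN-1 algorithm it is applied to the mode-$n$ unfolding of the relevant ADMM iterate and the result is folded back:

```python
def svt(X, tau):
    # Singular value shrinkage of (23): the proximity operator of
    # tau * ||.||_* evaluated at the matrix X.
    U, sig, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(sig - tau, 0.0)) @ Vt

# e.g. a Z_n-style update with the Section 2 helpers (illustrative names):
# Z_n = fold(svt(unfold(F + B_n / rho, n), lam_n / rho), n, F.shape)
```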

6.2 TNN-2 and TV Regularization

Replacing TNN-1 with TNN-2 gives

$$R(\mathcal{F}) = \lambda\, \|\mathcal{F}\|_{\circledast} + R_{TV}(\mathcal{F}),$$

and the problem

$$\min_{\mathcal{F},\, \mathcal{Z}} \; \sum_{q=1}^{N_E} \Phi_q(f_q) + R_{TV}(\mathcal{F}) + \lambda \big\| \mathrm{bcirc}(\mathcal{Z}) \big\|_* \tag{24}$$

subject to $\mathcal{Z} = \mathcal{F}$

needs to be solved. The augmented Lagrangian for (24) is given as

$$\mathcal{L}(\mathcal{F}, \mathcal{Z}, \mathcal{B}) = \sum_{q=1}^{N_E} \Phi_q(f_q) + R_{TV}(\mathcal{F}) + \lambda \big\| \mathrm{bcirc}(\mathcal{Z}) \big\|_* + \langle \mathcal{B}, \mathcal{F} - \mathcal{Z} \rangle + \frac{\rho}{2} \big\| \mathcal{F} - \mathcal{Z} \big\|_F^2.$$

In order to update the $f_q$'s separately as in the TNN-1 case, we use the definition of the $\mathrm{bcirc}$ operation given in (5) to express the nuclear norm term through the permutation matrices $\tilde{P}_k$ applied to $z = \mathrm{vec}(\mathcal{Z})$, and we can write

$$\big\| \mathcal{F} - \mathcal{Z} \big\|_F^2 = \sum_{q=1}^{N_E} \big\| [f - z]_{\mathcal{I}_q} \big\|_2^2,$$

where $\mathcal{I}_q$ refers to the index set of energy $q$ (e.g., for $q = 1$ we have $\mathcal{I}_1 = \{1, \ldots, N_x N_y\}$). Given this notation, we note that the solution algorithm for (24) is identical to the TNN-1 case described in Section 6.1.

Figure 7: Ground truth for Phantom-1. Left: 25 keV. Right: 85 keV

In Section 7 we show examples where $R$ is TNN-1, TNN-2, $R_{TV}$, or $R_{3DTV}$ alone. Solutions for the first two cases are straightforward variations of the algorithms above, while the $R_{TV}$ case corresponds to reconstructing the image at each energy independently using TV regularization. The last case results in a 3D linear inverse problem with TV regularization, for which we used the UPN algorithm described in [66].

Figure 8: Phantom-1: Reconstruction results for 25 keV.