Non-rigid Registration Method between 3D CT Liver Data and 2D Ultrasonic Images based on Demons Model

12/31/2019
by   Shuo Huang, et al.

The non-rigid registration between CT data and ultrasonic images of the liver can facilitate diagnosis and treatment, and it has been widely studied in recent years. To improve the registration accuracy of the Demons model for the non-rigid registration between 3D CT liver data and 2D ultrasonic images, a novel boundary extraction and enhancement method based on radial directional local intuitionistic fuzzy entropy in polar coordinates is put forward, and a new registration workflow is provided. Experiments show that our method can acquire high-accuracy registration results, and that its accuracy is higher than that of the original Demons method and the Demons method using ultrasonic images simulated by Field II. The operation time of our registration workflow is about 30 seconds, so it can be used during surgery.


1 Introduction

Ultrasonography, as a real-time imaging technique with no radiation exposure to either patients or doctors, has been widely studied for use in computer-assisted surgeries [Ref1] [Ref2]. The registration and fusion between ultrasonic images (US images) and computed tomography (CT) data has received much attention in recent years, since it is hard for doctors to determine the exact anatomical location from the 2D ultrasonic image alone. Moreover, this technique also helps doctors acquire more information from data of other modalities such as CT or magnetic resonance imaging (MRI), which in turn helps to improve the diagnosis and the treatment plan. The aim of image registration is to obtain an optimal spatial transformation (deformation model) that maximizes the similarity measure (objective function) of the two images, as shown in Equation (1), where S and T are the similarity measure and the spatial transformation, and I_F and I_M denote the fixed and moving images, respectively.

T* = arg max_T S(I_F, T(I_M))    (1)

Since image registration can be regarded as a multi-parameter optimization problem over the transformation model parameters μ, as shown in Equation (2), optimization methods are employed in the iterative process to obtain the optimal values of the parameters of the spatial transformation T in Equation (1).

μ* = arg max_μ S(I_F, T_μ(I_M))    (2)

Image registration methods can be divided into two main categories: rigid and non-rigid registration. The rigid transform corrects the rotation and translation of images, and it is often used as the initialization of the non-rigid transform [Ref3]. The affine transform is a basic non-rigid registration model, which can further correct the scaling and shearing of images [Ref4].

For non-linear deformation, many deformable registration methods with different deformation models, objective functions and optimization methods have been put forward [Ref5] [Ref6] [Ref7]. The Demons model based on the mutual information (MI) force and the Quasi-Newton method has been widely employed in recent years for image registration and the correction of non-rigid deformation between images. The Demons model is a representative optical flow model for the non-rigid registration of images; it handles non-smooth deformation well and has good anti-noise performance. Mutual information is widely employed in multi-modal registration algorithms: it is general, efficient and reliable for multi-modal registration. Quasi-Newton methods are widely used optimization methods in registration; they accumulate information from previous iterations and take advantage of it to achieve better convergence.

However, the performance of the Demons model on the registration between 3D CT liver data and 2D ultrasonic images needs to be improved. Due to the low signal-to-noise ratio and contrast of ultrasonic images, the reflection at the boundaries of organs and tissues, and the shadowing of bones, the registration between ultrasonic images and CT data is challenging [Ref8]. For the Demons method based on the mutual information force in particular, these differences severely influence the mutual information between the images, which leads to wrong registration results. Moreover, it is also difficult to acquire the corresponding CT slice directly by rigid registration with the ultrasonic image, since the iterative process is easily trapped in a local optimum because the spatial structure information of the ultrasonic image is limited.

To overcome these challenges, many methods have been put forward in recent years [Ref8] [Ref9]. Typically, these approaches contain two steps. The first is matching the world coordinate systems of the CT volume data and the ultrasonic image before the registration, which is called “calibration”. Matching the world coordinate systems is an important and efficient pre-processing step, since setting a precise initial value helps to avoid local optima and to reduce the calculation time. The coordinates of the ultrasonic probe can be obtained by probe tracking devices [Ref10][Ref11][Ref12][Ref13][Ref14]. After the extraction of the probe coordinates, many methods are available to obtain the transformation between the world coordinate system and the coordinates of the CT volume data or the ultrasonic probe [Ref13][Ref14][Ref15].

The second is reducing the influence of the differences between the 3D CT or MRI volume data and the ultrasonic images caused by reflection and shadowing, and thereby improving the accuracy of registration. One research direction is to extract control points from the data and images, register these points, and then deform the data and images using the deformation field of these points [Ref16][Ref17][Ref18]. Another research direction is to register the ultrasonic data with ultrasonic data simulated from the CT or MRI volume data. Such methods can achieve better overall registration performance, since more data are involved in the registration, especially in regions that contain no control points.

In recent years, many efficient simulation methods have been put forward and employed to register ultrasonic data with CT or MRI data [Ref19][Ref20]. Field II is an efficient model-based simulation method, which describes wave propagation using spherical wave theory and incorporates the transducer effect by combining the impulse responses of points over the transducer surface [Ref21][Ref22][Ref23][Ref24]. The accuracy of this numerical model relies on its discretization precision [Ref21]. W. Wein et al. have put forward an ultrasonic image simulation method based on the simulation of large-scale reflection and transmission of sound waves and on intensity mapping of different tissues [Ref20][Ref25]. These simulation methods help to reduce the differences between the simulated and the real ultrasonic data, and can thus improve the registration accuracy between CT volume data and ultrasonic images. However, they all require a lot of calculation, and the calculation time needs to be reduced.

In this work, with a novel boundary extraction and enhancement method based on radial directional local intuitionistic fuzzy entropy in polar coordinates, the non-rigid Demons model based on the mutual information force and the Quasi-Newton method has been improved to increase the registration accuracy between 3D CT volume data and 2D ultrasonic images after the matching of the world coordinate systems. A new registration workflow has also been put forward. Experiments on 3 ultrasonic images and 1 CT volume show that our method can acquire high-accuracy registration results, and that its accuracy is higher than that of the original Demons method and the Demons method using the ultrasonic image simulated by Field II.

2 Methods

In order to obtain good registration results, a novel 2D US image to 3D CT volume registration method based on the non-rigid Demons method is put forward in this paper. To improve the performance of the Demons method, the Block Matching and 3D Filtering (BM3D) method [Ref26][Ref27][Ref28] and the Block Matching and 4D Filtering (BM4D) method [Ref29] are used to remove the additive random noise in the US and CT data, respectively, and a novel ultrasonic-image-like edge extraction and enhancement method for the CT slice, based on radial directional local intuitionistic fuzzy entropy in polar coordinates, has been put forward.

Figure 1: The flowchart of our method.

Our method contains three steps: the extraction of the corresponding CT slice, the edge enhancement of the CT slice and the non-rigid registration and resampling of the de-noised US image. The flowchart of our method is shown in Fig. 1.

2.1 Data denoising

The mutual information is employed as the similarity measure in both the rigid and the non-rigid registration in this work. According to the definition of mutual information in Appendix A, suppressing the random noise (mostly speckle and Gaussian noise) in the CT data and US images is an efficient way to improve the performance of the registration method, since such randomly distributed noise can severely influence the mutual information and thereby reduce the accuracy of registration.

Noise suppression techniques have been studied for a long time, and many efficient approaches have been put forward [Ref30][Ref31][Ref32][Ref33][Ref34]. The BM3D algorithm achieves a good de-noising effect with little influence on the details of images [Ref30] when removing additive noise such as Gaussian noise and speckle noise [Ref31]; the latter is multiplicative by definition but can be regarded as zero-mean additive noise according to J. S. Lee [Ref34]. Moreover, in comparison with denoising algorithms based on deep learning, the BM3D method is still competitive in denoising performance. Therefore, the BM3D and BM4D methods are employed to remove the random noise in the US images and the CT data, respectively. Appendix B briefly introduces the BM3D and BM4D methods.
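For illustration, this denoising step could be realized with the open-source Python bm3d and bm4d packages, as sketched below. This is only an assumed setup (the packages, the noise level sigma and the intensity scaling are assumptions), not the Matlab implementation used in this work.

import numpy as np
import bm3d   # open-source BM3D package (assumed available via pip)
import bm4d   # open-source BM4D package (assumed available via pip)

def denoise_us_image(us_image, sigma=0.05):
    """Remove additive random noise from a 2D US image with BM3D.

    us_image is assumed to be a float array scaled to [0, 1]; sigma is the
    assumed standard deviation of the (approximately additive) noise."""
    return bm3d.bm3d(us_image.astype(np.float64), sigma_psd=sigma)

def denoise_ct_volume(ct_volume, sigma=0.02):
    """Remove additive random noise from a 3D CT volume with BM4D."""
    return bm4d.bm4d(ct_volume.astype(np.float64), sigma_psd=sigma)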

2.2 The extraction of the corresponding CT slice

To extract the CT slice corresponding to the US image, the world coordinate systems of the CT data and the ultrasound image are first matched to align the data roughly. However, since the CT and US acquisitions are not simultaneous, slight movement of the patient is inevitable. To correct this movement and extract the corresponding CT slice, a rigid registration algorithm is employed. 31 slices of the CT volume are extracted and de-noised by the BM4D method, where the roughly aligned corresponding slice is the 16th slice of the extracted stack, and the US image is de-noised by the BM3D method. Then the rigid registration algorithm is applied to the de-noised CT and US data, and the corresponding CT slice is extracted.

In this section, the affine transformation is employed to correct the rigid deformation [Ref35][Ref36][Ref37]. The general affine model can describe the scale, rotation, shear and translation between two images. The transformation between a pixel with coordinates (x, y) in the fixed image and its corresponding position (x′, y′) in the moving image is shown in Equation (3).

[x′, y′, 1]^T = [a11, a12, t_x; a21, a22, t_y; 0, 0, 1] · [x, y, 1]^T    (3)

Since image registration can be regarded as a multi-parameter optimization problem over the transformation model parameters, as shown in Equation (2), the steepest descent method is employed in the iterative process to obtain the optimal values of the parameters in Equation (3).
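As an illustration of this slice-selection step, the sketch below uses SimpleITK's Mattes mutual information metric with a gradient-descent optimizer as a stand-in for the steepest-descent scheme described above; the parameter values are assumptions and this is not the authors' implementation.

import SimpleITK as sitk

def affine_register_slice(ct_slice, us_image):
    """Affinely register a de-noised CT slice (moving) to the de-noised US
    image (fixed) by maximizing mutual information, following the affine
    model of Equation (3). Parameter values here are illustrative."""
    fixed = sitk.GetImageFromArray(us_image.astype('float32'))
    moving = sitk.GetImageFromArray(ct_slice.astype('float32'))

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(
            fixed, moving, sitk.AffineTransform(2),
            sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)
    resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    return sitk.GetArrayFromImage(resampled), transform

In practice such a 2D-2D affine step would be run against each candidate slice of the extracted stack, and the slice with the best metric value would be kept as the corresponding CT slice.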

2.3 Edge extraction and enhancement

Another strategy to improve the performance of the 2D-2D Demons registration method is to enhance edges such as the boundaries of the liver and the blood vessels in the CT image, so as to improve the similarity between the CT and US images. As discussed in Section 1, this strategy has been widely used in the registration of CT and US data, and several efficient US image simulation approaches have been put forward. However, all these strategies need complicated calculation. Since the scattering spots have already been suppressed in the de-noising step and a high similarity in the smooth regions between the de-noised CT and US images has been reached, a simple edge enhancement method is enough for acquiring high registration accuracy at the edges. Based on the theory of intuitionistic fuzzy sets (IFS) [Ref38][Ref39][Ref40][Ref41], a novel edge extraction and enhancement method has been put forward.

An IFS A on a given domain D can be expressed as:

A = { (x, μ_A(x), ν_A(x)) | x ∈ D }    (4)

where μ_A(x) and ν_A(x) are the membership function and the non-membership function of the set, respectively. Moreover, for all x in set D, we have 0 ≤ μ_A(x) + ν_A(x) ≤ 1.

In order to better describe the membership degree information of the set, a hesitation function π_A(x) = 1 − μ_A(x) − ν_A(x) is employed, which combines both the membership and the non-membership functions. Also, for all x in set D, we have 0 ≤ π_A(x) ≤ 1. Based on the functions above, the intuitionistic fuzzy entropy of an IFS on a given domain is defined as:

(5)

where n is the number of elements in set D.

Based on the gradient information of the image, the local intuitionistic fuzzy entropy (LIFE) [Ref42] of a pixel reflects the possibility that this pixel is located in a smooth region. Therefore, the LIFE can be used to extract the edges in the image. The membership, non-membership and hesitation functions are defined as:

(6)
(7)
(8)

where the functions in Equations (6)–(8) are defined on the normalized gradient field of the image. The LIFE of a pixel within its local neighborhood can then be expressed as:

(9)

Usually, the parameters are chosen so as to maximize the LIFE.

One advantage of LIFE in edge detection is that, by simply changing the rule for calculating the gradient field, a directional LIFE under a directional gradient field can be acquired to better extract the edges that are enhanced in the US image, since edges parallel to the propagation direction of the ultrasound seldom reflect it and are therefore not enhanced in the US image.

In order to better extract the enhanced edges in the CT image, the radial directional gradient field under polar coordinates, g_r, is put forward. In this method, the gradient vector (g_x, g_y) of a given pixel under the rectangular coordinate system is calculated first. Then, the deflection angle θ is acquired using the coordinates of this pixel and the center of the sectorial view field of the US image, which is also the origin of the polar coordinate system, as shown in Fig. 2 and Equation (10).

θ = arctan((y − y_0) / (x − x_0))    (10)

where (x_0, y_0) are the coordinates, under the rectangular coordinate system, of the center of the circular-arc-shaped upper and lower boundaries of the ultrasonic image, and (x, y) are the coordinates of the pixel.

Figure 2: The 2D Cartesian coordinates and polar coordinates.
g_r = g_x cos θ + g_y sin θ    (11)

After that, the radial gradient g_r of each pixel can be obtained by Equation (11), and from the normalized radial gradient field the radial directional LIFE, denoted L_r, can be acquired. In our experiments, in order to reduce the calculation time, the size of the local neighborhood is set to 4. A soft threshold of 1.5 is then applied to suppress weak textures, as shown in Equation (12a), where sgn is the sign function defined in Equation (12b). This yields the extracted edges, which are finally added to the de-noised CT slice to enhance it.

E = sgn(L_r) · max(|L_r| − 1.5, 0)    (12a)
sgn(x) = 1 if x > 0; 0 if x = 0; −1 if x < 0    (12b)
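The radial-gradient and soft-threshold steps can be sketched in a few lines of numpy, following the reconstructed Equations (10)–(12). In this minimal illustration the local intuitionistic fuzzy entropy itself is replaced by the normalized radial gradient magnitude as a stand-in, and the response scaling is an assumption, so this is not the authors' full method.

import numpy as np

def radial_edge_map(ct_slice, center, threshold=1.5):
    """Extract edges roughly perpendicular to the ultrasound propagation
    direction, using the radial component of the image gradient.

    center = (x0, y0) is the apex of the sectorial US view field, i.e. the
    origin of the polar coordinate system."""
    gy, gx = np.gradient(ct_slice.astype(np.float64))      # Cartesian gradient
    ys, xs = np.mgrid[0:ct_slice.shape[0], 0:ct_slice.shape[1]]
    theta = np.arctan2(ys - center[1], xs - center[0])      # deflection angle, Eq. (10)
    g_r = gx * np.cos(theta) + gy * np.sin(theta)           # radial gradient, Eq. (11)

    # Stand-in for the radial directional LIFE: normalized radial gradient
    response = g_r / (np.abs(g_r).max() + 1e-12) * 10.0     # scaling is an assumption

    # Soft threshold, Eq. (12a)/(12b): keep only strong radial responses
    edges = np.sign(response) * np.maximum(np.abs(response) - threshold, 0.0)
    return edges

# Usage sketch: enhanced = denoised_ct_slice + radial_edge_map(denoised_ct_slice, (cx, cy))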

2.4 The Non-rigid registration and resampling of the de-noised US image

In order to reduce the influence of the heart, bones or other organs that do not exist in the US image on the non-rigid registration result, and to acquire better performance of the registration algorithm on the liver, the region of the liver is marked out using a mask on the enhanced CT slice. Then the liver in the US image is also extracted using the same mask.

After the extraction of the liver, the extracted regions in the CT slice and the US image are first aligned by a rigid translation. This is because, in the extraction of the corresponding slice, the movement of other organs may influence the accuracy of the registration on the liver, and the enhancement of the edges can also improve the accuracy of this rigid registration. After that, a 2D US to 2D CT non-rigid registration of the aligned regions is employed to acquire the deformation field. The Demons model based on the mutual information force is employed in this work to correct the non-rigid deformation of the images in the aligned regions, as described in Appendix A.

For the region outside the mask, a Gaussian blur filter is used to smooth the deformation field and avoid sudden changes at the boundaries of the mask.
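This smoothing step can be sketched as follows; it is a simple illustration in which the filter width and the hard inside/outside split are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_field_outside_mask(field_x, field_y, mask, sigma=10.0):
    """Blur the deformation field outside the liver mask to avoid sudden
    changes at the mask boundary; the field inside the mask is kept.

    field_x, field_y: per-pixel displacement components (2D arrays)
    mask: boolean array, True inside the liver region"""
    sm_x = gaussian_filter(field_x, sigma)
    sm_y = gaussian_filter(field_y, sigma)
    out_x = np.where(mask, field_x, sm_x)
    out_y = np.where(mask, field_y, sm_y)
    return out_x, out_y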

Finally, the de-noised US image is resampled by the deformation field, and the resampled image is the output of our method. Then we compare the output US image with the extracted corresponding CT slice to evaluate our method.

3 Results

In this work, tests on the registration between ultrasonic images and CT data of one patient from Beijing Tsinghua Changgung Hospital have been carried out to evaluate our algorithm. In order to get a clearer view of the liver and blood vessels and better registration performance, the window width and position of the CT data are adjusted before the registration.

Figure 3: The data before and after registration of test 1. (a) is the corresponding CT slice of the original ultrasonic image (b) after the matching of the world coordinate systems. (e) is the result of the rigid registration from the CT volume data to the ultrasonic image. (f) is the non-rigid registration result of the ultrasonic image (b). (c) and (g) are the fusion images of (a)(b) and (e)(f), respectively; (d) and (h) are the details of (c) and (g), respectively.

The data before and after registration are shown in Figs. 3-5, where subfigures 3(a), 4(a) and 5(a) are the CT slices corresponding to the original ultrasonic images in subfigures 3(b), 4(b) and 5(b) after the matching of the world coordinate systems. Subfigures 3(e), 4(e) and 5(e) are the CT slices after rigid registration to the original ultrasonic images, and subfigures 3(f), 4(f) and 5(f) are the non-rigid registration results of the original ultrasonic images to their corresponding rigidly registered CT slices. Fusion images of the CT slices and ultrasonic images before and after registration are shown in Figs. 3(c)–5(c) and 3(g)–5(g) to better evaluate our algorithm. According to these figures, in comparison with the fusion images before registration, the boundaries of both the liver and the vessels are better matched, and the positions and shapes of the vessels are corrected. The positions of the bones in Figs. 4(g) and 5(g) are also closer to their positions in the ultrasonic images, which can be inferred from their “shadows” in the ultrasonic images.

Figure 4: The data before and after registration of test 2. (a) is the corresponding CT slice of the original ultrasonic image (b) after the matching of the world coordinate systems. (e) is the result of the rigid registration from the CT volume data to the ultrasonic image. (f) is the non-rigid registration result of the ultrasonic image (b). (c) and (g) are the fusion images of (a)(b) and (e)(f), respectively; (d) and (h) are the details of (c) and (g), respectively.
Figure 5: The data before and after registration of test 3. (a) is the corresponding CT slice of the original ultrasonic image (b) after the matching of the world coordinate systems. (e) is the result of the rigid registration from the CT volume data to the ultrasonic image. (f) is the non-rigid registration result of the ultrasonic image (b). (c) and (g) are the fusion images of (a)(b) and (e)(f), respectively; (d) and (h) are the details of (c) and (g), respectively.

The details of the results in Figs. 3-5 are shown in subfigures (d) and (h), which also show that our registration algorithm can improve the matching accuracy between the CT data and the ultrasonic images, especially in the parts indicated by arrows in Figs. 3 and 4. However, although our algorithm has better registration performance than the Demons algorithm, there are still parts that can be improved, such as the place indicated by the blue arrow in Fig. 4(h): although the distance between points A2 and B2 is shorter than that between points A1 and B1, points A2 and B2 are still not matched.

Figure 6: The comparison of registration accuracy of different methods using data in Fig. 3. (a1) is the corresponding CT slice of the original ultrasonic image, (b1) is the fusion image of the CT and ultrasonic images after rigid registration, and (c1), (d1), (e1) are the fusion image of CT slice and non-rigid registration results of ultrasonic image using the original Demons method, Demons method with the simulated ultrasonic image of the CT slice by Field II and our method, respectively. (a2) – (e2) are the details of (a1) - (e1), respectively.

4 Discussion

The registration results of the Demons method, the Demons method using the ultrasonic image simulated by Field II, and our method on the data of Fig. 3 are compared in Fig. 6. Both the Field II method and our method improve the matching accuracy in comparison with the rigidly registered result, whereas the original Demons method fails to do so: it enlarges the distance between the corresponding vessels and the liver boundaries. In comparison with the result of the Demons method, the registration accuracy is improved with the help of the Field II method, and our method further improves the registration performance for both the blood vessels and the boundaries of the liver. The Demons method fails to register the liver and the vessels because the reflection at the boundaries of the liver and vessels changes the “structure” of the image, which influences the mutual information in the non-rigid registration and leads to a wrong result. Field II is efficient in improving the similarity between the real ultrasonic image and the simulated image; however, it is weak in the simulation of the boundaries.

To better compare the results, the details of the comparison in Fig. 6 are shown in subfigures 6(a2)–(e2), and the gray level values of pixels at the positions of the red, blue and green lines in Figs. 7(a) and (b) are shown in Figs. 7(c)–(e). Figures 6 and 7 further show that our method acquires better results, since the registered ultrasonic image better matches the reference CT image. According to the line charts, the curves of our method match the reference curve of the CT image best, both in the positions and lengths of the peaks and valleys and in the distances between the corresponding boundaries of the peaks and valleys.

Figure 7: The gray level value distribution curves about pixels on the red, green and blue lines of the registration results of different methods. (a) and (b) show the position of these lines on the rigid-registered CT slice and the registration result of ultrasonic image of our method, respectively. (c) – (e) are curves about pixels on the blue, green and red lines.
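The gray-level profiles used for this comparison can be extracted by sampling the images along a line segment, for example as sketched below (a small illustration, independent of the plotting code used for Fig. 7; the number of samples is an assumption).

import numpy as np
from scipy.ndimage import map_coordinates

def line_profile(image, p0, p1, n_samples=200):
    """Sample gray-level values along the segment from p0 = (x0, y0) to
    p1 = (x1, y1) using bilinear interpolation."""
    xs = np.linspace(p0[0], p1[0], n_samples)
    ys = np.linspace(p0[1], p1[1], n_samples)
    # map_coordinates expects (row, col) = (y, x) ordering
    return map_coordinates(image.astype(np.float64), [ys, xs], order=1)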

Moreover, our method is less time-consuming than the Field II method, since it requires less calculation. The boundary enhancement of one CT slice takes only about 2 seconds in Matlab R2014a on a computer with a 3.4 GHz Intel i5-7500U CPU, whereas the calculation time of the Field II method is more than 10 hours. For our method, the most time-consuming step, the BM4D denoising of the CT slices, can be done before the ultrasonic examination, and the total operation time to register one ultrasonic image is about 30 seconds. Therefore, our method can be used during surgery.

Reducing the calculation time and extending the application of this algorithm to ultrasonic images whose boundaries are not enhanced are our future research directions. With the development of deep learning, several ultrasonic image simulation algorithms based on deep learning have been put forward [Ref43], which may be useful for improving the registration accuracy and reducing the calculation time; research on these algorithms will also be part of our future work.

5 Conclusion

The Demons method based on the mutual information force and the Quasi-Newton method is an efficient method for the registration of multi-modal medical data. However, its performance needs to be improved for the registration between CT volume data and ultrasonic images. We put forward a novel boundary extraction and enhancement method based on radial directional local intuitionistic fuzzy entropy in polar coordinates to improve the similarity between the original ultrasonic images and the enhanced CT slices with less calculation than existing ultrasonic image simulation methods. We further designed a new workflow for the registration between CT volume data and ultrasonic images after the matching of the world coordinate systems. Experiments show that our method achieves better registration accuracy than the original Demons method and the Demons method based on images simulated by Field II. The operation time of our edge detection and enhancement is about 2 seconds, and that of the whole workflow is about 30 seconds for the registration of one CT volume and one ultrasonic image, so it can be used during surgery.

Future work mainly includes improving the performance on large deformation correction and reducing the operation time using parallel computation. Using deep learning methods to obtain simulated CT or ultrasonic data with higher similarity in a shorter time may also be useful for increasing the registration accuracy. Extending the application of this method to ultrasonic images whose boundaries are not enhanced is another future research direction. Moreover, more non-rigid registration algorithms will be studied in the future to improve the registration accuracy and reduce the calculation time, such as the iLogDemons method [Ref44], the hybrid feature-based diffeomorphic registration method [Ref45] and deep learning-based registration methods [Ref46].

Acknowledgements

The authors thank Beijing Tsinghua Changgung Hospital for providing the CT volume data and the ultrasonic images. The authors also thank Mr. Zan Liu, Ms. Wenjun Yu, Ms. Weijuan Li and other colleagues at Shanghai United Imaging Co., Ltd., who provided the registration software of the Demons method as well as useful suggestions and discussions.

Funding

This work was supported by the National Key R&D Program of China (2017YFC0112801) and the Key Project of Special Development Foundation of Shanghai Zhangjiang National Innovation Demonstration Zone (1701-JD-D1112-030).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Appendix

Appendix A  The Demons algorithm based on the mutual information force

In the registration of multi-modal images, mutual information is widely used as the similarity measure. Mutual information (MI) is a basic concept in information theory, usually used to describe the statistical correlation between two random variables, or the amount of information that one variable contains about the other. The MI registration criterion states that the MI of the image intensity values of corresponding voxel pairs is maximal when the images are geometrically aligned [Ref47]. The mutual information between two systems A and B, I(A, B), is defined using entropy, as shown in Equation (A.1).

I(A, B) = H(A) + H(B) − H(A, B)    (A.1)

where H(A) and H(B) are the entropies of the two systems, and H(A, B) is their joint entropy, as shown in Equations (A.2) and (A.3).

H(A) = − Σ_a p_A(a) log p_A(a)    (A.2)
H(A, B) = − Σ_{a,b} p_{AB}(a, b) log p_{AB}(a, b)    (A.3)

where p_A(a) and p_{AB}(a, b) are the marginal probability density function and the joint probability density function, respectively. Therefore, the mutual information I(A, B) between two images A and B can be calculated by Equation (A.4).

I(A, B) = Σ_{a,b} p_{AB}(a, b) log [ p_{AB}(a, b) / (p_A(a) p_B(b)) ]    (A.4)
NMI(A, B) = (H(A) + H(B)) / H(A, B)    (A.5)

Moreover, the normalized measure of mutual information, NMI, proposed by Studholme et al. [Ref48] is less sensitive to changes in overlap, as shown in Equation (A.5) [Ref49]. Therefore, it is employed as the similarity measure in the non-rigid registration in this work.
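A compact way to estimate the mutual information and the normalized measure of Equations (A.1)–(A.5) from a joint intensity histogram is sketched below (a numpy illustration; the number of histogram bins is an assumption).

import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Return (MI, NMI) of two images of equal shape, estimated from the
    joint intensity histogram (Equations (A.1)-(A.5))."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist / hist.sum()                 # joint probability p(a, b)
    p_a = p_ab.sum(axis=1)                   # marginal p(a)
    p_b = p_ab.sum(axis=0)                   # marginal p(b)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    h_a, h_b, h_ab = entropy(p_a), entropy(p_b), entropy(p_ab)
    mi = h_a + h_b - h_ab                    # Eq. (A.1)
    nmi = (h_a + h_b) / h_ab                 # Eq. (A.5), Studholme et al.
    return mi, nmi

With two roughly aligned slices, mutual_information(ct_slice, us_image)[1] gives the NMI value that the registration tries to increase.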

The Demons model is based on the principle of intensity conservation between image frames, and the non-rigid registration can be regarded as a diffusion process from the source image M to the target image S, as shown in Equation (A.6) [Ref50][Ref51][Ref52][Ref53][Ref54][Ref55]:

u = ((m − s) ∇s) / (|∇s|² + (m − s)²)    (A.6)

where u = (u_x, u_y) in the registration of 2D images denotes the estimated displacement, or velocity, needed for a point p with intensity m in image M to match the corresponding point with intensity s in image S, and ∇s is the gradient of the target image S. The term (m − s)² is used to stabilize the velocity equation.

(A.7)

In multi-modal image registration, the mutual information force is employed to calculate the velocity of a pixel at time t. Equation (A.8) shows the mutual information force in the x-direction [Ref56].

(A.8)

where the window considered is a square window in the target image S, and the pixels entering the force are pixels of the source image M in the square windows on the left and right sides of the window in M that corresponds to the window in S. The force also involves the joint intensity probabilities between the specified windows and the marginal intensity probabilities of the source and target images, respectively.

The component of the mutual information force in the y-direction follows similarly, with the probabilities associated with neighboring pixels in the y-direction replacing those in Equation (A.8).

The specific steps of the Demons registration algorithm are as follows:

Algorithm 1. The Demons registration algorithm
1. Input: two images S (target) and M (source), Gaussian filter parameter σ, relaxation factor α, maximum iteration number N_max
2. Initialize the deformation field u ← 0.
3. While the iteration number is below N_max and convergence is not reached do
4.  Calculate the velocity field from the mutual information force (Equation (A.8)).
5.  Regularize the velocity field with a Gaussian filter of parameter σ.   (A.9)
6.  Calculate the scaling factor from the relaxation factor α, and use it to ensure a displacement smaller than one pixel.
7.  Update the deformation field u.   (A.10)
8. End while
9. Output: deformation field u.
Table 1: The specific steps of Demons registration algorithm
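For reference, the classical single-modality Demons iteration summarized in Algorithm 1 can be sketched as below, using the intensity-difference force of Equation (A.6); the paper itself replaces this force with the mutual information force of Equation (A.8), which is not reproduced here, and the iteration count and filter width are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_register(static, moving, n_iter=100, sigma=2.0):
    """Classic Demons registration of `moving` onto `static` (2D arrays).

    Returns the accumulated displacement field (uy, ux) defined on the
    grid of the static image."""
    static = static.astype(np.float64)
    moving = moving.astype(np.float64)
    gy, gx = np.gradient(static)                     # gradient of the target image
    uy = np.zeros_like(static)
    ux = np.zeros_like(static)
    rows, cols = np.mgrid[0:static.shape[0], 0:static.shape[1]]

    for _ in range(n_iter):
        # Warp the moving image with the current displacement field
        warped = map_coordinates(moving, [rows + uy, cols + ux], order=1)
        diff = warped - static
        denom = gx**2 + gy**2 + diff**2              # denominator of Eq. (A.6)
        denom[denom == 0] = 1e-12
        vy = -diff * gy / denom                      # velocity, Eq. (A.6)
        vx = -diff * gx / denom
        uy = gaussian_filter(uy + vy, sigma)         # regularization, step 5
        ux = gaussian_filter(ux + vx, sigma)
    return uy, ux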

Appendix B  The BM3D and BM4D methods

The BM3D method contains two main steps: basic estimation (step 1 in Fig. B.1) and final estimation (step 2 in Fig. B.1) [Ref26][Ref27][Ref28]; Fig. B.1 shows the flowchart of the BM3D algorithm. The detailed operations are similar in each main step: grouping of 2D blocks (the images in each group form a 3D array), calculating local estimates by three-dimensional collaborative filtering of each 3D array in the spectral domain (3D hard-thresholding in step 1, and Wiener filtering in step 2), and aggregation of the weighted means of the local estimates. The preliminarily de-noised result of the basic estimation is employed in step 2 to improve the accuracy of the grouping and to acquire more accurate filtering results by using it as the pilot signal of the empirical Wiener filtering.

Figure B.1: The flowchart of BM3D method. Fig. B.1 is from Reference [Ref26].
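The grouping-and-collaborative-filtering idea of step 1 can be illustrated with a much-simplified toy sketch: a fixed block size, an exhaustive search in a small window, 3D DCT hard-thresholding and uniform aggregation. The real BM3D uses more elaborate transforms, adaptive weights and the Wiener second step, so the parameters and simplifications below are assumptions.

import numpy as np
from scipy.fft import dctn, idctn

def bm3d_hard_threshold_toy(noisy, block=8, search=16, n_similar=16, thr=0.1):
    """Toy illustration of BM3D step 1 on a 2D image in [0, 1] whose size is
    assumed to be a multiple of the block size: group similar blocks,
    hard-threshold their joint 3D DCT spectrum, and aggregate by averaging."""
    h, w = noisy.shape
    out = np.zeros_like(noisy)
    weight = np.zeros_like(noisy)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            ref = noisy[i:i + block, j:j + block]
            # Collect candidate blocks in a local search window
            cands = []
            for di in range(max(0, i - search), min(h - block, i + search) + 1, 4):
                for dj in range(max(0, j - search), min(w - block, j + search) + 1, 4):
                    blk = noisy[di:di + block, dj:dj + block]
                    cands.append((np.sum((blk - ref) ** 2), di, dj, blk))
            cands.sort(key=lambda c: c[0])                 # most similar blocks first
            group = np.stack([c[3] for c in cands[:n_similar]])
            # Collaborative filtering: 3D DCT, hard threshold, inverse 3D DCT
            spec = dctn(group, norm='ortho')
            spec[np.abs(spec) < thr] = 0.0
            filtered = idctn(spec, norm='ortho')
            # Aggregate the filtered blocks back at their original positions
            for (_, di, dj, _), est in zip(cands[:n_similar], filtered):
                out[di:di + block, dj:dj + block] += est
                weight[di:di + block, dj:dj + block] += 1.0
    return out / np.maximum(weight, 1.0)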

The BM4D algorithm is an extension of the BM3D algorithm to 3D volumetric data; the flowchart of the BM4D algorithm is shown in Fig. B.2. It has a similar principle and similar processing steps to the BM3D method, but uses cubes of voxels instead of blocks of pixels as the basic data patches to form the 4D groups [Ref29]. It has good de-noising performance, but it is more time-consuming. Therefore, the CT volume data is de-noised in advance, before the registration.

Figure B.2: The flowchart of BM4D method. Fig. B.2 is from Reference [Ref29].

References