
IGCN: Image-to-graph Convolutional Network for 2D/3D Deformable Registration

10/31/2021
by   Megumi Nakao, et al.

Organ shape reconstruction based on a single-projection image during treatment has wide clinical scope, e.g., in image-guided radiotherapy and surgical guidance. We propose an image-to-graph convolutional network that achieves deformable registration of a 3D organ mesh for a single-viewpoint 2D projection image. This framework enables simultaneous training of two types of transformation: from the 2D projection image to a displacement map, and from the sampled per-vertex feature to a 3D displacement that satisfies the geometrical constraint of the mesh structure. Assuming application to radiation therapy, the 2D/3D deformable registration performance is verified for multiple abdominal organs that have not been targeted to date, i.e., the liver, stomach, duodenum, and kidney, and for pancreatic cancer. The experimental results show that shape prediction considering relationships among multiple organs can be used to predict respiratory motion and deformation from digitally reconstructed radiographs with clinically acceptable accuracy.


I Introduction

Organ positions and shapes from 3D medical images constitute patient-specific morphological information that is essential to diagnosis and pre-treatment planning. However, organs may move or deform during surgical treatment or over several weeks of radiation therapy [1, 2]. Time-series shape changes in organs after imaging can hinder tumor localization and compromise treatment. Imaging devices available during treatment also have certain limitations: 2D images that allow real-time measurement (e.g., endoscopic and X-ray images) can be acquired, but 3D imaging is limited [3, 4, 5].

Organ shape reconstruction based on a single-projection image during treatment has wide clinical scope, including image-guided therapy/intervention. However, this problem is ill-posed without prior knowledge, as it requires transformation of 2D space points into points in a higher-dimensional space. One solution is 2D/3D registration, which uses the patient-specific organ shape extracted from dense 3D computed tomography (CT) or magnetic resonance imaging (MRI) images acquired prior to treatment as prior knowledge. This approach aims to align and deform the organ-shape models to 2D projection images in real time, and has undergone intensive research in the field of medical image analysis over the past decade [6, 7]. In particular, many studies have examined rigid-body 2D/3D image registration [8, 9] as an optimization problem for a parameter set that determines the position and orientation.

2D/3D deformable registration of soft organs requires local point-to-point correspondence between 2D images and 3D volumes. Unlike rigid-body registration, a large-scale parameter set, proportional to the number of sampling points, must be optimized. Deformable registration between 3D volumes poses a similar problem [10]. Diffeomorphic mapping-based regularization [11, 12] enables calculation of a displacement field that yields smooth correspondence between sampling points; however, pairwise optimization has a high computational cost for such a large-scale parameter set. Thus, recent studies have investigated 3D displacement field learning using convolutional neural networks (CNNs) [13, 14, 15, 16, 17, 18]. Notably, machine learning models trained via parallel computing with a graphics processing unit can provide accelerated registration.

Single-image-based 2D/3D deformable registration has fewer constraints than the above-mentioned registration between 3D volumes, making stable optimization difficult. Predictions based on input images alone involve high uncertainty, and the mapping between the organ shape model and 2D images, along with its learning method, is key. Some studies use bi-planar X-ray images to improve prediction accuracy [19, 20]. In the field of computer vision, reconstruction of human posture and various general objects has been investigated, with camera image databases paired with 3D shapes used as prior knowledge [21, 22]. Recent works have proposed integrating a CNN and a graph convolutional network (GCN) [23], or estimation frameworks that are robust against occlusion through self-attention networks [22, 24].

In medical imaging, collection of organ deformation data paired with 2D images is difficult, and few cases have been reported to date. A CNN-based learning method for a registration map that relates an area on a 2D projection image to a local area in a 3D volume has been proposed [25, 26, 27]. Additionally, for 2D/3D deformable registration of soft organs for surgical guidance, model-based optimization for endoscopic images has been attempted [28, 29, 30, 31, 32]. However, within the scope of our survey, no studies have provided a framework for deep learning-based 2D/3D deformable model registration of abdominal soft organs, and no empirical cases using actual patient data have been reported.

This study introduces an image-to-graph convolutional network (IGCN) that enables 3D organ-shape reconstruction and localization based on a single-viewpoint 2D projection image. The IGCN provides a new end-to-end framework that achieves real-time 2D/3D deformable registration through integration of an image-based generative network and GCN. The generative network learns the transformation from the 2D projection image to a displacement map based on pairwise 3D meshes obtained before and after deformation. The GCN samples the input features of each node from the generated displacement field and learns the transformation into a final 3D displacement vector that satisfies the geometrical constraints. Finally, the IGCN outputs a 3D mesh, the position and deformation of which are registered to the input 2D projection image.
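As a rough, end-to-end illustration of this pipeline, the following sketch composes the two networks in the order described above. It is a minimal NumPy sketch under stated assumptions, not the authors' implementation: the callables generator, gcn, and project_fn, and the nearest-pixel sampling, are hypothetical placeholders.

```python
import numpy as np

def igcn_forward(projection_img, semantic_label, template_vertices,
                 generator, gcn, project_fn):
    """Sketch of the IGCN forward pass (hypothetical interfaces).

    generator  : 2-channel image -> per-pixel 3D displacement map (H, W, 3)
    gcn        : per-vertex [coordinates, displacement] -> new coordinates
    project_fn : 3D vertices -> 2D pixel coordinates (N, 2)
    """
    # 1. Image translation network: projection image + semantic label
    x = np.stack([projection_img, semantic_label], axis=-1)
    disp_map = generator(x)

    # 2. Project the template vertices and sample one 3D displacement per vertex
    uv = np.round(project_fn(template_vertices)).astype(int)   # (N, 2)
    sampled = disp_map[uv[:, 1], uv[:, 0], :]                  # nearest pixel

    # 3. Vertex transformation network: concatenated features -> deformed mesh
    features = np.concatenate([template_vertices, sampled], axis=1)  # (N, 6)
    return gcn(features), disp_map
```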

Assuming application to radiation therapy, the shape reconstruction performance from a single 2D projection image targeting the abdominal organs of actual patients is verified experimentally. This is the first study to demonstrate 2D/3D deformable registration of the liver, duodenum, and kidney, and the gross tumor volume (GTV) of pancreatic cancer (Fig. 1). Many variations in organ shape and deformation exist between patients, and there are almost no visual clues (such as contours) in the low-contrast 2D projection images; thus, accurate prediction of the organ positions and shapes was previously considered difficult. We also show that respiratory dynamics and deformation can be predicted from digitally reconstructed radiograph (DRR) images via statistical data augmentation for 3D-CT and simultaneous prediction of multiple organ shapes.

The methods reported herein extend a preliminary framework [33] presented at the 2021 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI). The contributions of this study are as follows:

  • A new 2D/3D deformable model registration framework that integrates a displacement field generator and GCN;

  • Simultaneous shape reconstruction of five abdominal organs, the contours of which are not directly visible on a 2D projection image;

  • Application to localization of GTV and organ-at-risk (OAR) volumes assuming dynamic tumor-tracking radiotherapy with clinically acceptable estimation accuracy;

  • Augmentation of respiratory deformation data based on a statistical generative model.

Fig. 1: Problem definition for 2D/3D deformable model registration for abdominal organs. (a) Pre-treatment 3D-CT and surface meshes of liver, stomach, duodenum, kidneys, and pancreatic GTV. 2D projection images (b) of target states and (c) overlaid with projected vertices of pre-treatment CT meshes.

II Related Work

II-A Optimization-based Approaches

Optimization-based 2D/3D registration has been extensively researched over the past 20 years [6, 34, 35, 36, 37]. Rigid-body and deformable registration are formulated as a transformation matrix or a displacement vector field optimization problem, respectively [8, 38, 39]. Because of the high density and large scale of 3D volumes in particular, approaches that construct shape models based on anatomical labels and seek positions and deformation on 2D images have been very successful. Notably, mesh-based shape representation can express organ elastic properties [30, 40, 41] and statistical shape variations [1, 42, 43] with few variables and high computational efficiency. Deformable model registration of X-ray and endoscopic images has been attempted with the aim of surgical guidance [29, 30, 31, 32]. In radiation therapy, contour definitions and statistical atlases of GTVs and OARs [1, 7, 44, 45] can be directly applied in model-based registration. However, 2D/3D registration based on model parameter optimization is limited to the subspace of shape variations expressible by the physical and statistical shape models defined in advance. Additionally, the high degrees of freedom of the position and deformation in a 2D image make it difficult to set objective functions from which a stable solution can be derived, and each registration requires time for iterative optimization calculations.

Fig. 2: Full IGCN. The image translation network $f$ learns to generate a displacement map $D$. The vertex coordinates and corresponding 3D displacement vectors are concatenated and input to the GCN for mesh deformation learning.

II-B Learning-based Approaches

Recently, deep learning-based frameworks have attracted attention [46, 47, 48] because of the high uncertainty and computational costs inherent in model-based 2D/3D deformable registration. Pointnet [21], a CNN-based framework that generates a 3D point cloud from a single-viewpoint image, was applied to 2D virtual images of statistical pneumothorax lung models, and lobe shapes were reconstructed [47]. However, in point cloud representation, surface and topological information on the inter-vertex relationships, which is important for deformation field computation, is lost. Wang et al. proposed Pixel2Mesh (P2M) [22] to generate a 3D mesh from a 2D projection image. P2M uses latent image features to deform an ellipsoid template into the target shape. A recent work [48] was the first to apply P2M to respiratory deformation estimation from a DRR, with 3D lung shapes being artificially generated from multiple initial 3D templates with free-form deformation. We previously implemented 2D/3D deformable registration methods using 4D-CT data for real patients [33, 49] and reported preliminary liver shape reconstruction results. However, in the abdominal regions, the available 2D contours and visual cues are poor. We found limitations in learning dense deformation fields and capturing distant features from low-contrast projection images. The improved IGCN framework presented herein addresses these problems and exhibits good estimation performance for multiple abdominal organs.

III Methods

We consider 3D-CT/MRI volumes obtained for pre-treatment planning and unregistered 2D projection images obtained during image-guided therapy. We do not focus on automatic segmentation techniques, and we assume that the organ contours are segmented and organ shapes are modeled as triangular surface meshes as a preprocessing step.

Let $\mathcal{M}$ be the initial mesh generated from the 3D-CT planning images and $I$ be the 2D projection image obtained from the target state. X-ray or endoscopic images are candidates for $I$; however, in our experiments for quantitative evaluation of the proposed 2D/3D deformable registration method, DRR images were used. These DRR images were generated from 4D-CT images for performance analysis targeting the non-linear motion and shape variability of abdominal organs during respiratory motion.

The left and central images of Fig. 1 show two typical examples of $\mathcal{M}$ and $I$, respectively. In the projection images, the abdominal organs are invisible. Further, the anatomical variability in organ shape and location between patients is apparent. The right images are overlaid with the projected vertices of the mesh, indicating initial misalignment between $\mathcal{M}$ and $I$. The diaphragm shape visualized in the DRR does not match the projected initial shape because the two states differ in terms of patient condition (e.g., posture and respiratory phase). The deformation is non-linear and exhibits local rotation and sliding motion [45] in 3D, and a simple linear transformation is not sufficient to register the two states.

III-A IGCN Architecture

Fig. 2 is an outline of the IGCN architecture and deformable model registration process. The IGCN is a generalized, organ-independent framework that integrates an image translation network $f$ and a vertex transformation network $g$. Various architectures are acceptable for each network model; we employed a U-Net-based network [50] and a graph convolutional network [23] for $f$ and $g$, respectively.

$f$ takes a 2-channel image formed by $I$ and a semantic label $S$ generated from $\mathcal{M}$. In our experiments, a fixed input image size was used; however, the method is not limited to a particular size. $f$ learns the generation of a displacement map $D$, which represents a spatial mapping function in 2D space. Then, $g$ receives feature vectors derived from $\mathcal{M}$ and $D$. $\mathcal{M}$ is projected onto the 2D image space, and the pixel values of $D$ that correspond to each template vertex are sampled. The 3D vertex coordinates of $\mathcal{M}$ and the corresponding 3D displacement vectors are concatenated and input to $g$ for deformation learning. Finally, $g$ generates a deformed mesh registered to $I$.

The IGCN implements a new 2D/3D deformation learning scheme characterized by the displacement mapping function (Section III-B) and the vertex transformation function (Section III-C). Existing projected point sampling methods struggle to capture image features distant from the initial mesh [22, 33]. P2M [22] employs CNN-based feature encoding with hierarchical extension to fit an ellipsoid template to various 3D objects. However, it concentrates on mesh deformation and neglects the target object motion. Abdominal organs undergo both local deformation and global translation, and the projection images have no clear edges. The displacement map and the composite of these two functions address the non-linear organ motion and deformation. We describe the roles of the two functions in the next sections.
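For the per-vertex feature sampling mentioned above, a simple sampler might look as follows. This is a sketch only: the paper does not state the interpolation scheme, so the bilinear interpolation here is an assumption.

```python
import numpy as np

def sample_displacement(disp_map, uv):
    """Bilinearly sample an (H, W, 3) displacement map at continuous pixel
    coordinates uv (N, 2), given as (u, v) = (column, row)."""
    h, w, _ = disp_map.shape
    u = np.clip(uv[:, 0], 0.0, w - 1.001)
    v = np.clip(uv[:, 1], 0.0, h - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    top = (1 - du) * disp_map[v0, u0] + du * disp_map[v0, u0 + 1]
    bottom = (1 - du) * disp_map[v0 + 1, u0] + du * disp_map[v0 + 1, u0 + 1]
    return (1 - dv) * top + dv * bottom   # (N, 3): one 3D displacement per vertex
```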

III-B Displacement Mapping Function

CNN-based encoding schemes struggle to learn image features distant from the initial template. In our preliminary study [33], we mapped the projection point to a new position for which a higher probability of obtaining effective image features was expected. Here, this method is referred to as "IGCN Warp." This scheme improved the registration results; however, when the deep features transformed from the input image were referenced discretely at the sparse vertex level of the mesh, the deep feature creation in the CNN and the pairwise updating of the mesh vertex coordinates tended to become unstable.

Fig. 3: Learning scheme for the image translation network $f$ generating the spatial mapping represented by the displacement map $D$. (a) DMR between initial and target shapes, (b) 3D displacement vectors obtained from corresponding vertices, (c) forward displacement map and sampling, (d) displacement map in abdominal regions overlaid with the corresponding meshes, and (e) one-to-many correspondence between the displacement field and the mesh.

The method newly proposed in this study learns the transformation from $I$ to $D$ within a supervised learning framework via the image generative network. Fig. 3 illustrates the learning process for the liver. First, we obtain a registered mesh with point-to-point correspondence through deformable mesh registration (DMR) [45] between the initial and target meshes (Fig. 3(a)). The displacement vector of each vertex is obtained from the corresponding points before and after deformation (Fig. 3(b)). A 3-channel projection image (Fig. 3(c)) is obtained by transforming the displacement vectors from Euclidean to color space and then using them as the surface color of the initial mesh for rendering. This is a forward displacement map in which a 3D displacement vector is stored in each pixel, and it directly represents the spatial mapping function.

The 2D region of the patient-specific organ obtained by projecting the initial mesh $\mathcal{M}$ can be used as a semantic attention label $S$. Here, the proposed network $f$ defines the transformation

$$D = f(I, S), \tag{1}$$

where $S$ is used as additional attribute information, and both $I$ and $S$ are treated as the 2-channel input image for stable learning and convergence of the network parameters.

Fig. 3(d) shows the displacement map $D$ used for supervised learning, formed from the meshes of the five abdominal organs. Here, $D$ is a projection of the volumetric displacement field that expresses the 3D mesh deformation; thus, a single projection point in the map may be referenced by multiple vertices of the mesh (Fig. 3(e)). In this case, identical displacement vectors would be assigned to all vertices mapped to the same point. However, these vertices form parts of different organs and different parts of the same organ (e.g., the anterior and posterior); thus, they must each be able to express different displacements. This problem is resolved in the GCN described below through embedded learning using the 3D vectors obtained from the displacement map as well as the local shape and topology at each mesh vertex.
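To make the construction of the forward displacement map concrete, the sketch below converts per-vertex displacement vectors to a color code and splats them onto the projected vertex positions. It is a simplification for illustration only: the paper renders the colored mesh surface, whereas this sketch only splats vertex values, and the normalization bound d_max is an assumed parameter.

```python
import numpy as np

def forward_displacement_map(vertices, displacements, project_fn, img_shape,
                             d_max=30.0):
    """Crude forward displacement map: per-vertex 3D displacement vectors (mm)
    are mapped to a [0, 1] color space and written to projected pixel positions."""
    h, w = img_shape
    disp_map = np.zeros((h, w, 3), dtype=np.float32)
    colors = np.clip(displacements / (2.0 * d_max) + 0.5, 0.0, 1.0)
    uv = np.round(project_fn(vertices)).astype(int)            # (N, 2) pixels
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    disp_map[uv[ok, 1], uv[ok, 0]] = colors[ok]
    return disp_map
```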

III-C Vertex Transformation Function

The vertex transformation function $g$ updates each vertex in the mesh using the generated displacement map $D$ and the template mesh structure. Thus, $g$ is responsible for the spatial transformation of each vertex in the mesh based on the GCN, where

$$\hat{v}_i = g_{\theta}([\,v_i, d_i\,]). \tag{2}$$

Here, $v_i$ is the vertex coordinate after normalization; $d_i$ is the 3D displacement vector obtained from the corresponding projection point in the displacement map; $g_{\theta}$ is composed of the GCN and a learnable parameter set $\theta$, where the input is the vector obtained by concatenating $v_i$ and $d_i$, and the output $\hat{v}_i$ is the predicted value of the vertex coordinates. Deformation of the entire mesh is calculated by transforming all vertices composing $\mathcal{M}$ using the trained function $g_{\theta}$.

For the GCN layers, graph convolution is applied to obtain hierarchical topological features in non-Euclidean space [23]. The mesh is a type of graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is the set of vertices and $\mathcal{E}$ is the set of edges. The per-vertex features are shared with the neighboring vertices. The GCN employed in this study consisted of eight sequential graph convolutional layers, each of which is defined in Eq. (3):

$$F' = \sigma\left( \hat{D}^{-\frac{1}{2}} \hat{A} \hat{D}^{-\frac{1}{2}} F W \right), \tag{3}$$

where $F$ and $F'$ denote the feature matrix before and after convolution, respectively. In our experiments, $F$ was the concatenation of the vertex coordinates $v_i$ and displacement vectors $d_i$, and $W$ was the learnable parameter matrix. $\hat{A}$ was the adjacency matrix, i.e., a symmetric matrix with binary values, in which element $(i, j)$ was 1 if there was an edge between vertices $i$ and $j$, and 0 if the two vertices were not connected. $\hat{D}$ was the degree matrix, i.e., a diagonal matrix in which each diagonal element represented the number of edges connected to vertex $i$, and $\sigma$ was the activation function. The template mesh was deformed by updating $F$.
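A single layer of the propagation in Eq. (3) can be sketched as below. The sketch follows the adjacency and degree matrices as described in the text (without self-loops); the ReLU activation is an assumption, as the paper does not name the activation function.

```python
import numpy as np

def mesh_adjacency(num_vertices, edges):
    """Binary symmetric adjacency matrix built from mesh edges (i, j)."""
    a = np.zeros((num_vertices, num_vertices), dtype=np.float32)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    return a

def gcn_layer(features, adjacency, weight):
    """One graph convolution following Eq. (3):
    F' = sigma(D^{-1/2} A D^{-1/2} F W), with D the degree matrix of A."""
    deg = adjacency.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-8)))
    norm_adj = d_inv_sqrt @ adjacency @ d_inv_sqrt       # symmetric normalization
    return np.maximum(norm_adj @ features @ weight, 0.0)  # assumed ReLU activation
```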

III-D Loss Functions

The parameters of the overall network are simultaneously updated and optimized by minimizing an objective function. In this section, we introduce three loss functions to achieve 2D/3D deformable mesh registration under the constraint of smooth deformation.

The ground-truth vertex coordinates of the target meshes are obtained from the deformable registration process. To strictly evaluate the point-to-point correspondence, we define the mean distance loss of the vertex positions between the estimated shape and the ground truth as

$$\mathcal{L}_{dist} = \frac{1}{|\mathcal{V}|} \sum_{i \in \mathcal{V}} \left\lVert \hat{v}_i - \bar{v}_i \right\rVert_2, \tag{4}$$

where $\bar{v}_i$ is the target 3D position and $\hat{v}_i$ is the predicted position. This loss function induces the convergence of each estimated vertex to its correct position.

In our problem setting, the organ deformation is spatially non-linear and heterogeneous but is expected to remain within a limited range. To preserve the curvature and smoothness of the initial surface, we use a regularization loss that evaluates a discrete Laplacian of the mesh:

$$\mathcal{L}_{lap} = \frac{1}{|\mathcal{V}|} \sum_{i \in \mathcal{V}} \left\lVert \delta(\hat{v}_i) - \delta(v_i) \right\rVert_2, \tag{5}$$

where $\delta$ is the Laplace-Beltrami operator and $\delta(v_i)$ is the discrete Laplacian of $v_i$, defined by $\delta(v_i) = v_i - \frac{1}{n_i} \sum_{j \in N(i)} v_j$. Here, $n_i$ is the number of adjacent vertices in the 1-ring connected to the vertex $v_i$, and $N(i)$ is the corresponding index set. This loss constrains the shape changes from the initial state and avoids generation of unexpected surface noise and low-quality meshes.
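As a sketch of this regularizer under the umbrella-operator form given above (the exact weighting in the authors' implementation may differ), the discrete Laplacian and the corresponding loss can be computed as:

```python
import numpy as np

def umbrella_laplacian(vertices, neighbors):
    """Discrete Laplacian: delta_i = v_i - mean of the 1-ring neighbor vertices.
    `neighbors` is a list with one array of neighbor indices per vertex."""
    lap = np.empty_like(vertices)
    for i, nbr in enumerate(neighbors):
        lap[i] = vertices[i] - vertices[nbr].mean(axis=0)
    return lap

def laplacian_loss(pred_vertices, init_vertices, neighbors):
    """Penalize changes of the discrete Laplacian relative to the initial mesh."""
    diff = umbrella_laplacian(pred_vertices, neighbors) \
         - umbrella_laplacian(init_vertices, neighbors)
    return float(np.mean(np.linalg.norm(diff, axis=1)))
```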

In addition to evaluating the mesh vertex coordinates, accurate prediction of $D$ improves the 2D/3D deformable registration results. Specifically, stable learning of $D$ is important when the target contains both translation and local deformation. Thus, we introduce the displacement map loss determined by the mean absolute error (MAE), such that

$$\mathcal{L}_{map} = \frac{1}{|\Omega|} \sum_{p \in \Omega} \left| \bar{D}(p) - D(p) \right|, \tag{6}$$

where $\Omega$ is the set of image pixels, $\bar{D}$ is the target displacement map, and $D$ is the predicted displacement map translated from $I$. Existing 2D/3D deformable registration frameworks [22, 48] do not use a displacement map, and this study is the first to investigate the performance of this newly designed loss function.

The full objective is defined as the weighted sum of the above three loss functions:

$$\mathcal{L} = \mathcal{L}_{dist} + \lambda_1 \mathcal{L}_{lap} + \lambda_2 \mathcal{L}_{map}. \tag{7}$$

The loss function values are normalized using the maximum values in each space. Here, to facilitate supervised learning, we used 1.0 and 0.1 for $\lambda_1$ and $\lambda_2$, respectively, after examination of several parameter sets. The optimized deformable registration model is obtained by solving

$$\hat{\theta} = \underset{\theta}{\arg\min}\ \mathcal{L}. \tag{8}$$

These loss functions are applied to the developed framework at each epoch to train the network parameters $\theta$.
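Under the notation introduced above, the full objective of Eq. (7) can be sketched as a plain function; the weight assignment (1.0 for the Laplacian term, 0.1 for the map term) follows the values quoted in the text under our notation, and laplacian_loss refers to the sketch in the previous section.

```python
import numpy as np

def total_loss(pred_vertices, gt_vertices, init_vertices, neighbors,
               pred_map, gt_map, lambda_lap=1.0, lambda_map=0.1):
    """Weighted sum of the three losses in Eq. (7)."""
    l_dist = np.mean(np.linalg.norm(pred_vertices - gt_vertices, axis=1))  # Eq. (4)
    l_lap = laplacian_loss(pred_vertices, init_vertices, neighbors)        # Eq. (5)
    l_map = np.mean(np.abs(pred_map - gt_map))                             # Eq. (6)
    return l_dist + lambda_lap * l_lap + lambda_map * l_map
```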

Fig. 4: Statistical models of abdominal organs with respiratory deformation: (a) mean (translucent) and patient-specific (mesh) shapes, and (b) first and (c) second principal components of vertex displacements. The colors represent 3D displacement vectors.

III-E Statistical Generative Models

In this section, we introduce a data augmentation method based on statistical generative models to overcome the limited volume of training data. Displacements that reflect the statistical properties of respiratory deformation, as obtained from 4D-CT data, are applied to meshes obtained from 3D-CT data. Specifically, for the meshes generated from the 4D-CT volumes, DMR [45] is conducted between all cases to obtain meshes with the same topology. Then, principal component analysis is conducted to obtain a statistical model of the shape and displacement.

Fig. 4(a) shows patient-specific shapes obtained via 4D-CT and the average shapes calculated from data for multiple patients. Figs. 4(b) and (c) show results obtained by transforming the first and second principal components, respectively, of the displacement to the RGB space and visualizing these data as color maps on the mesh surfaces. The z-component of the displacement is large because of the characteristics of respiratory motion; however, the local displacement distributions of each organ have different orientations and sizes. The statistical displacement $u_i$ at vertex $i$ is defined as

$$u_i = w_0\,\bar{u}_i + \sum_{k \ge 1} w_k\, e_{k,i}, \tag{9}$$

where $\bar{u}_i$ is the mean displacement at vertex $i$ and $e_{k,i}$ is the $k$-th principal component of the displacement. Further, $w_k$ is the weight parameter for each component, and can be changed to yield various values of $u_i$ and express the statistical deformation of the 4D-CT data.

Augmented data for supervised learning can be obtained by deforming the registered mesh obtained from the 3D-CT volume based on $u_i$. In other words, the vertex coordinates are updated for each vertex $v_i$ of the mesh as $v_i \leftarrow v_i + u_i$. The set of the deformed mesh and the projection image obtained from the 3D-CT volume is used as the input data. The pre-update mesh can be used as the ground-truth target shape corresponding to the projection image. Network training is implemented by randomly changing $w_k$ for each epoch and generating augmented data with various deformation variations online.
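A minimal sketch of this online augmentation, assuming that the weights are drawn uniformly up to per-mode maxima (the sampling distribution is not specified in the text and is an assumption here), is given below.

```python
import numpy as np

def augment_mesh(vertices, mean_disp, pc_disps, w_max=(2.0, 1.0, 0.0), rng=None):
    """Respiratory-deformation augmentation following Eq. (9).

    vertices  : (N, 3) registered mesh from a 3D-CT volume
    mean_disp : (N, 3) mean respiratory displacement per vertex
    pc_disps  : list of (N, 3) principal components of the displacement
    w_max     : assumed per-mode maximum weights (mean, 1st PC, 2nd PC)
    """
    rng = rng or np.random.default_rng()
    weights = rng.uniform(0.0, np.asarray(w_max))        # one weight per mode
    disp = weights[0] * mean_disp                         # weighted mean term
    for w, pc in zip(weights[1:], pc_disps):              # principal modes
        disp = disp + w * pc
    return vertices + disp                                # deformed template mesh
```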

IV Experiments

To verify the performance of the proposed method and its potential application in clinical settings, we conducted the following two experiments: a comparison with conventional methods of 2D/3D deformable registration, and deformation prediction for multiple organs assuming clinical application to moving-target tracking radiation therapy. We implemented our methods using Python 3.9 and TFlearn with a TensorFlow backend. We used a batch size of 1, a total of 300 training epochs, and the Adam optimizer with a fixed learning rate. Our code and demonstration movies are available online at https://github.com/meguminakao/IGCN.

IV-A Dataset

3D-CT volumes of 124 cases and 4D-CT volumes of 35 cases were acquired from patients who underwent intensity-modulated radiotherapy at Kyoto University Hospital. This study was performed in accordance with the Declaration of Helsinki and was approved by our institutional review board (approval number: R1446). Each 4D-CT volume consisted of 10 time phases ($t$ = 0, 10, ..., 90) for one respiratory cycle and was measured under respiratory synchronization, with $t$ = 0 and $t$ = 50 corresponding to the end-inhalation and end-exhalation phases, respectively. Thus, 474 3D-CT volumes were used in total.

Each 3D-CT volume consisted of 512 × 512 pixels and 88-152 slices (voxel resolution: 1.0 mm × 1.0 mm × 2.5 mm). During routine clinical procedures, the following regions were labeled by board-certified radiation oncologists: the entire body, stomach, liver, duodenum, left and right kidneys, and the clinical target volume (CTV). We generated surface meshes (400-500 vertices and 796-996 triangles per organ) from the region labels and obtained organ mesh models with point-to-point correspondence using DMR. The DMR algorithm and its registration performance for the abdominal organ shapes were reported previously [45]; template meshes registered to patient-specific organ shapes with a 0.2 mm mean distance (MD) error and a 1.1 mm Hausdorff distance (HD) error, on average, were confirmed. This registration error was sufficiently small for the registered meshes to be used as ground truth.

Fig. 5: Visual comparison of methods with respect to liver shape reconstruction: (a) registered shapes and latent image features for the average (Case 13) and maximum (Case 25) error cases, and (b) learning curves.

IV-B Method Comparison

The first experiment was designed with the aim of quantitatively and qualitatively comparing the registration results of the proposed and existing methods. The 4D-CT data were used as the test data, and the registration errors of the organ shape meshes calculated for the generated DRRs were compared. The liver was in contact with the diaphragm and the upper two-dimensional contour was detectable, but the contours on the lateral and lower regions could not be visually confirmed. The contours of the other abdominal organs (stomach, duodenum, kidney, and pancreatic cancer GTV) could not be visually confirmed on the 2D projection image, and 3D shape reconstruction was even more challenging. In this experiment, the liver was used as the estimation target to facilitate performance evaluation and error analysis.

IV-B1 Baseline and experimental conditions

Few existing methods can achieve 2D/3D deformable model registration from a single-viewpoint projection image for deformable organs. We selected P2M [22], which was proposed for general images, and IGCN Warp [33], which we developed in previous research, as baselines, and compared their performance with that of the proposed IGCN. Because the true value was available for each vertex, the Chamfer loss used in P2M was changed to the $\mathcal{L}_{dist}$ defined in Eq. (4), and the remaining P2M losses were used without alteration. Hierarchical learning was not applied, to match the prediction process across methods.

Verifications were conducted under the following two conditions with respect to the initial alignment of the liver mesh: the noise-free condition, which used the position at $t$ = 0 in the first phase, corresponding to the 4D-CT end-inhalation phase; and the noisy condition, for which the initial position was translated in 3D using random noise, with the maximum displacement being twice the average respiratory displacement. We considered this noisy condition because of the difference in the setup of input images between the experiment and the clinical situation. For both conditions, the liver shape and position were set as unknown for all phases, dynamic properties and hysteresis caused by temporal changes were neglected, and the problem was regarded as static reconstruction of the liver shape in each phase. For each method, training was performed through data augmentation using the statistical generative model mentioned above.

The number of 4D-CT cases was limited; therefore, we adopted a 3-fold cross-validation method, dividing the 35 patients into three groups of 12, 12, and 11. We calculated the mean and eigen displacements from the 4D-CT data of the 23 patients excluding the test data; these were then applied to the 3D-CT data of 124 patients for learning while continuously generating variations of the organ displacement associated with respiration. The weight parameters were determined after examination of the prediction performance with several parameter sets. Regarding the selection method and the effect of the data augmentation on the prediction performance, refer to the supplementary document.

We evaluated the 3D shape and position accuracies for the predicted organs using three error indices, mean distance (MD), Hausdorff distance (HD)[52] and mean absolute error (MAE) between surfaces, as well as the Dice similarity coefficient (DSC). We obtained a mesh with vertex correspondence by applying the DMR for each organ [45] and used it as the target shape with ground-truth coordinates. The MD and HD were the average and maximum values, respectively, of the bidirectional distance defined by the two nearest vertices between the predicted and true-value mesh; these values quantified the error between shapes. The MAE was the average Euclidean distance between the predicted and correct positions of the corresponding vertex and reflected the prediction error for each vertex. The DSC quantified the overlap between the 3D regions of two meshes; a higher value indicated better prediction performance.
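The surface-error indices can be computed directly from the corresponding vertex sets, for example as in the sketch below (symmetric averaging is assumed for the bidirectional MD; the DSC requires voxelized organ regions and is omitted here).

```python
import numpy as np
from scipy.spatial.distance import cdist

def mesh_errors(pred, gt):
    """MD, HD, and MAE (in mm) between predicted and ground-truth vertex sets
    (N, 3) with point-to-point correspondence."""
    d = cdist(pred, gt)                       # pairwise Euclidean distances
    nearest_pg = d.min(axis=1)                # predicted -> nearest true vertex
    nearest_gp = d.min(axis=0)                # true -> nearest predicted vertex
    md = 0.5 * (nearest_pg.mean() + nearest_gp.mean())    # bidirectional mean
    hd = max(nearest_pg.max(), nearest_gp.max())          # Hausdorff distance
    mae = np.linalg.norm(pred - gt, axis=1).mean()        # corresponding vertices
    return md, hd, mae
```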

TABLE I: Quantitative comparison of liver shape reconstruction without noise. Mean ± standard deviation of MD, HD, MAE, and DSC.

w/o noise  | Initial       | P2M           | IGCN Warp     | Proposed
MD [mm]    | 3.75 ± 2.31   | 3.71 ± 1.37   | 2.93 ± 1.43   | 2.10 ± 0.83
HD [mm]    | 14.38 ± 6.77  | 12.77 ± 3.88  | 12.27 ± 5.35  | 9.90 ± 4.05
MAE [mm]   | 8.93 ± 4.68   | 8.57 ± 2.57   | 7.47 ± 3.30   | 6.08 ± 2.40
DSC [%]    | 89.45 ± 6.67  | 89.68 ± 4.11  | 91.88 ± 4.10  | 94.53 ± 2.24

TABLE II: Liver shape reconstruction results for the case of random noise added to the initial alignment.

w/ noise   | Initial       | P2M           | IGCN Warp     | Proposed
MD [mm]    | 4.52 ± 2.68   | 4.25 ± 2.25   | 3.37 ± 1.77   | 2.29 ± 0.97
HD [mm]    | 16.12 ± 7.82  | 14.24 ± 5.46  | 13.36 ± 5.93  | 10.44 ± 4.33
MAE [mm]   | 10.26 ± 5.31  | 9.61 ± 3.98   | 8.25 ± 3.72   | 6.41 ± 2.52
DSC [%]    | 86.97 ± 8.03  | 88.13 ± 6.60  | 90.51 ± 5.23  | 93.93 ± 2.71

IV-B2 Comparison of results with baseline

Table I lists the average values and standard deviations of the evaluation indices obtained for the 350 test data points. Here, "Initial" refers to the magnitude of the deviation from the known 3D shape of the first phase ($t$ = 0) and corresponds to the stage before deformation prediction. The proposed method exhibited superior performance to P2M and IGCN Warp, and the 3D liver shape was reconstructed with shape error values of MD = 2.1 mm and HD = 9.9 mm, and a shape similarity of DSC = 94.5%. Significant differences (one-way analysis of variance, ANOVA; p < 0.05) from the conventional methods (P2M and IGCN Warp) were confirmed for all indices.

Table II lists the respective errors when noise was added to the initial template alignment. For the MD values, the errors increased by 14.6% (0.5 mm) and 15.0% (0.4 mm) for P2M and IGCN Warp, respectively, whereas the increase for the proposed method was suppressed to 9.0% (0.2 mm). Thus, stable prediction could be achieved even for differences in the initial conditions associated with the 3D shape arrangement. As under the noise-free condition, significant differences from the conventional methods were confirmed for all indices.

The smoothness of the predicted shape and the mesh quality could not be evaluated using the above error indices alone; therefore, we qualitatively confirmed the estimation results by visualizing the estimated shapes. Fig. 5(a) shows results obtained by superimposing the liver shape predicted from the DRR of the end-exhalation phase ($t$ = 50) for each method, for the case with the mean shape error (Case 13) and the case with the largest shape error (Case 25). Magenta indicates the true liver shape and position, and cyan shows the predicted liver shape. A heat map at the right-bottom of each panel shows the sum of the latent image features in the same feature encoding layer.

The proposed method successfully predicted deformation similar to the target 3D shape despite the fact that contours could not be visually confirmed for many liver areas and only extremely low-contrast textures were visible. Visual comparison of the latent image features and prediction results revealed that P2M responded strongly to the body contour edge, with large errors at locations having low correlation with the body contour movements, as indicated by the arrows in Fig. 5(a). Cases are shown in which the prediction may fail under large displacement, even if the edge around the diaphragm is relatively clear. For IGCN Warp, responses to the low-contrast edges and texture were apparent. However, the errors increased in the lower liver, where the edge could not be visually confirmed. The proposed method generated features for the liver area and its surroundings, which yielded favorable predictions for the lower area of the liver. Fig. 5(b) depicts the learning curves of the three methods in terms of the MAE on the test datasets. Each method converged before 150 epochs, with IGCN converging fastest and the other two methods showing unstable curves.

Fig. 6: Motion dynamics and shape reconstruction errors for three abdominal organs and pancreatic cancer GTV. The means and standard deviations of the corresponding vertices are plotted in the graphs.

IV-C Multiple-Organ Deformation Prediction

In the final experiment, we aimed to verify the organ deformation and displacement prediction performance assuming clinical application to moving-target tracking radiation therapy. We generated 10-frame sequential DRRs from the 4D-CT data and conducted an experiment to predict the 3D shapes of the liver, stomach, duodenum, and pancreatic cancer GTV. Two approaches can be employed to estimate the shapes of multiple organs, referred to here as "single reconstruction (SR)" and "multiple reconstruction (MR)": SR learns each organ individually, and MR simultaneously learns multiple organs as a tetrahedral mesh by generating connectivity between organs. MR increases both the number of vertices in the mesh to be estimated and the shape expression complexity, but it may effectively learn the positional relationships between organs and the deformation interactions. The estimation performance was compared for both methods. We verified whether the final performance achieved the 3D organ area identification accuracy required for adaptive radiation therapy.

As superior shape reconstruction performance can be expected when 4D-CT data are also used for training, we adopted the 3-fold cross-validation method for this training as well. Thus, the 35 patients were divided into three groups of 12, 12, and 11, as in the previous section. These data were combined with the statistical generative model obtained by deforming the 3D-CT data of 124 different patients. We then conducted learning using a total of 354 volumes, including the 4D-CT data (230 3D-CT volumes) of the 23 patients in the remaining two groups that were not incorporated in the test data. For SR, we calculated the error by predicting each 3D shape from one DRR for the liver, stomach, duodenum, and pancreatic cancer GTV based on the trained network. For MR, we calculated the shape error for each organ after simultaneously predicting the 3D shapes of all four organs from one DRR.

TABLE III: Comparison of deformable registration performance for single and multiple abdominal organs. Mean ± standard deviation.

           | Single reconstruction (SR)                    | Multiple reconstruction (MR)
           | MD [mm]      | HD [mm]       | DSC [%]        | MD [mm]      | HD [mm]      | DSC [%]
Liver      | 1.83 ± 0.89  | 8.49 ± 4.57   | 95.32 ± 2.31   | 1.86 ± 0.89  | 8.35 ± 4.52  | 95.13 ± 2.31
Stomach    | 3.59 ± 1.92  | 11.59 ± 9.24  | 80.14 ± 9.77   | 1.77 ± 0.92  | 6.49 ± 3.93  | 90.89 ± 5.40
Duodenum   | 2.97 ± 1.38  | 11.33 ± 4.72  | 75.83 ± 13.19  | 1.64 ± 0.82  | 7.34 ± 4.47  | 86.75 ± 7.82
GTV        | 2.08 ± 1.43  | 6.42 ± 3.00   | 82.16 ± 13.25  | 1.10 ± 0.74  | 3.86 ± 2.36  | 89.90 ± 6.56

IV-C1 Performance analysis and motion dynamics

Fig. 6 shows the liver, stomach, duodenum, and GTV displacements for each phase, as well as the shape reconstruction errors obtained with SR and MR. The mean displacement over all corresponding vertices is visualized as the centerline, and the standard deviation is depicted as a colored band. Table III lists the errors for each organ for both SR and MR. For the liver, no significant differences between the two approaches were apparent, but significant differences (ANOVA; p < 0.05) were confirmed between the two methods for the stomach, duodenum, and GTV. Shape error improvements of 50.7%, 44.8%, and 47.2%, respectively, were obtained with MR. In the American Association of Physicists in Medicine guideline for image registration and fusion [51], the quantitative metric tolerance is MD = 2-3 mm and DSC = 80-90%. The obtained results show that shape reconstruction can be achieved with accuracy equal to or exceeding this level and, thus, the proposed method is clinically applicable.

Fig. 7: Shape reconstruction examples and method comparison for the average and maximum error cases. The graph convolutions embed per-vertex displacement vectors across organs, which results in better estimation performance for regions with no visual cues.

IV-C2 Abdominal-organ shape reconstruction

Cases 13 and 25 are shown as typical shape reconstruction examples in Fig. 7; these cases show the average and maximum shape errors, respectively, for $t$ = 50, which had the largest displacement. Fig. 7(a) shows the MR results; the central image (predicted) shows the vertices of the 3D organ meshes obtained for the input DRR image, with each organ colored and superimposed. The image on the right was obtained by projecting the true (magenta) and predicted (cyan) shapes onto the projection image, so that the shape errors of each organ can be confirmed locally. Unlike the liver, the contours of the stomach, duodenum, and GTV could not be visually confirmed on the DRR images; nevertheless, shape reconstruction with minimal deviation from the ground-truth shape was achieved. The supplemental movie (available online at https://github.com/meguminakao/IGCN) demonstrates the results for 10-frame sequential images.

Fig. 7(b) shows the results obtained by visualizing the 3D organ meshes of the liver, stomach, and duodenum using SR and MR from two different directions. For SR, large deviations in the stomach and duodenum positions were noted. Thus, shape reconstruction using only the image features obtained from the 2D area of the DRR image corresponding to each organ was difficult. For MR, good matching was found between the true and predicted shapes. Note that stomach shapes varied considerably between patients because of the stomach contents, and some deviations were observed.

V Discussion

This study presents a new framework that integrates an image generative network and GCN, which achieves model-based deformable registration for 2D projected images. Unlike image-based 2D/3D registration, a mesh that explicitly defines the organ areas to be estimated can be output. A wide range of clinical applications are possible, such as localization of GTVs and OAR volumes in radiation therapy, and tumor position identification for endoscopic camera images during surgery.

In conventional CNN-based feature encoding, image features distant from the template-mesh projection points are difficult to reference; this problem was resolved through displacement map generation by the image generative network. Additionally, significantly improved estimation accuracy for the stomach, duodenum, and pancreatic cancer GTV was confirmed through simultaneous reconstruction of multiple organs, even when there were no visual cues in the projected images. This outcome is thought to be due to successful localization through convolution of the features of adjacent vertices in the GCN, while referring to the image features and positional relationships between different organs.

Our experiments had certain limitations. For example, performance evaluation was performed using only DRRs as projected images. Thus, accuracy evaluations for X-ray images measured during treatment are required. However, multiple studies have reported that DRR-based learning is effective for prediction from measured X-ray images [19, 20]. The organ shape contained in the 3D-CT data used to generate the DRR can be taken as the true value, yielding a quantitative and highly reliable performance comparison, although the estimation error would increase because of the differences between the X-ray and DRR images. However, as indicated by the experiment in which random noise was added to the initial placement of the template mesh, this increase is expected to be limited, and robust estimates can be expected under the different imaging conditions encountered in clinical settings.

VI Conclusion

In this study, we proposed an image-to-graph convolutional network (IGCN) that achieves deformable model registration of a 3D organ model for a single-viewpoint 2D projection image. We verified that clinically acceptable registration accuracy was achieved with a mean distance of less than 2 mm. The proposed technique could be directly applied for localization of radiation targets and organ-at-risk volumes in radiation therapy, and it could also be applied to a wide range of image-guided interventions.

References

  • [1] B. Rigaud, A. Simon, M. Gobeli, J. Leseur, L. Duverge, D. Williaume et al., Statistical shape model to generate a planning library for cervical adaptive radiotherapy, IEEE Trans Med Imaging, vol. 38, no. 2, pp. 406-416, 2019.
  • [2] J. Tokuno, T. F. Chen-Yoshikawa, M. Nakao, T. Matsuda, and H. Date, Resection process map: A novel dynamic simulation system for pulmonary resection, J Thorac Cardiovasc Surg, vol. 159, no. 3, pp. 1130-1138, 2020.
  • [3] H. Teske, P. Mercea, M. Schwarz, N. H. Nicolay, F. Sterzing, and R. Bendl, Real-time markerless lung tumor tracking in fluoroscopic video: Handling overlapping of projected structures, Med Phys, vol. 42, no. 5, pp. 2540-9, 2015.
  • [4] W. Zhao, L. Shen, B. Han, Y. Yang, K. Cheng et al., Markerless pancreatic tumor target localization enabled by deep learning, Int J Radiat Oncol Biol Phys, vol. 105, no. 2, pp. 432-439, 2019.
  • [5] W. Takahashi, S. Oshikawa, and S. Mori, Real-time markerless tumour tracking with patient-specific deep learning using a personalized data generation strategy: Proof of concept by phantom study, The British journal of radiology, p. 20190420, 2020.
  • [6] P. Markelj, D. Tomaževič, B. Likar, and F. Pernuš, A review of 3D/2D registration methods for image-guided interventions, Medical image analysis, vol. 16, no. 3, pp. 642-661, 2012.
  • [7] C. J. F. Reyneke, M. Lüthi, V. Burdin, T. Douglas, T. Vetter, and T. Mutsvangwa, Review of 2-D/3-D reconstruction using statistical shape and intensity models and x-ray image synthesis: Toward a unified framework, IEEE Reviews in Biomed Eng, vol. 12, pp. 269-286, 2019.
  • [8] J. Wang, R. Schaffert, A. Borsdorf, B. Heigl, X. Huang, J. Hornegger et al., Dynamic 2-D/3-D rigid registration framework using point-to-plane correspondence model, IEEE Trans Med Imaging, vol. 36, no. 9, pp. 1939-1954, 2017.
  • [9] H. Liao, W. Lin, J. Zhang, J. Zhang, J. Luo, and S. K. Zhou, Multiview 2D/3D rigid registration via a point-of-interest network for tracking and triangulation, Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12630-12639, 2019.
  • [10] A. Sotiras, C. Davatzikos, and N. Paragios, Deformable medical image registration: A survey, IEEE Trans Med Imaging, vol. 32, no. 7, pp. 1153-90, 2013.
  • [11] M. F. Beg, M. I. Miller, A. Trouvé, and L. Younes, Computing large deformation metric mappings via geodesic flows of diffeomorphisms, International Journal of Computer Vision, vol. 61, no. 2, pp. 139-157, 2005.
  • [12] J. Rühaak, T. Polzin, S. Heldmann, I. J. A. Simpson, H. Handels, J. Modersitzki et al., Estimation of large motion in lung CT by integrating regularized keypoint correspondences into dense deformable registration, IEEE Trans Med Imaging, vol. 36, no. 8, pp. 1746-1757, 2017.
  • [13] G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, and A. V. Dalca, Voxelmorph: A learning framework for deformable medical image registration, IEEE Trans Med Imaging, vol. 38, no. 8, pp. 1788-1800, 2019.
  • [14] B. D. de Vos, F. F. Berendsen, M. A. Viergever, H. Sokooti, M. Staring, and I. Išgum, A deep learning framework for unsupervised affine and deformable image registration, Medical image analysis, vol. 52, pp. 128-143, 2019.
  • [15] J. Krebs, H. Delingette, B. Mailhé, N. Ayache, and T. Mansi, Learning a probabilistic model for diffeomorphic registration, IEEE Trans Med Imaging, vol. 38, no. 9, pp. 2165-2176, 2019.
  • [16] S. Zhao, T. Lau, J. Luo, E. I. C. Chang, and Y. Xu, Unsupervised 3D end-to-end medical image registration with volume tweening network, IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 5, pp. 1394-1404, 2020.
  • [17] K. Tang, Z. Li, L. Tian, L. Wang, and Y. Zhu, Admir–affine and deformable medical image registration for drug-addicted brain images, IEEE Access, vol. 8, pp. 70960-70968, 2020.
  • [18] Y. Lei, Y. Fu, T. Wang, Y. Liu, P. Patel, W. J. Curran et al., 4D-CT deformable image registration using multiscale unsupervised deep learning, Phys Med Biol, vol. 65, no. 8, p. 085003, 2020.
  • [19] X. Ying, H. Guo, K. Ma, J. Wu, Z. Weng, and Y. Zheng, X2CT-GAN: Reconstructing CT from biplanar X-rays with generative adversarial networks, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10611-10620, 2019.
  • [20] Y. Kasten, D. Doktofsky, and I. Kovler, End-to-end convolutional neural network for 3D reconstruction of knee bones from bi-planar X-ray images, Machine Learning for Medical Image Reconstruction, pp. 123-133, 2020.
  • [21] H. Fan, H. Su, and L. Guibas, A point set generation network for 3D object reconstruction from a single image, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2463-2471, 2017.
  • [22] N. Wang, Y. Zhang, Z. Li, Y. Fu, H. Yu, W. Liu et al., Pixel2mesh: 3D mesh model generation via image guided deformation, IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-1, 2020.
  • [23] T. N. Kipf and M. Welling, Semi-supervised classification with graph convolutional networks, presented at the Proceedings of the 5th International Conference on Learning Representations, 2017.
  • [24] K. Lin, L. Wang, and Z. Liu, End-to-end human pose and mesh reconstruction with transformers, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.1954-1963, 2021.
  • [25] S. Miao, Z. J. Wang, and R. Liao, A CNN regression approach for real-time 2D/3D registration, IEEE Trans Med Imaging, vol. 35, no. 5, pp. 1352-1363, 2016.
  • [26] S. Miao, S. Piat, P. Fischer, A. Tuysuzoglu, P. Mewes, T. Mansi et al., Dilated fcn for multi-agent 2D/3D medical image registration, Proc. AAAI 2018.
  • [27] R. Schaffert, J. Wang, P. Fischer, A. Borsdorf, and A. Maier, Learning an attention model for robust 2-D/3-D registration using point-to-plane correspondences, IEEE Trans Med Imaging, vol. 39, no. 10, pp. 3159-3174, 2020.
  • [28] M. Nakao, Y. Oda, K. Taura, and K. Minato, Direct volume manipulation for visualizing intraoperative liver resection process, Comput Methods Programs Biomed, vol. 113, no. 3, pp. 725-735, 2014.
  • [29] A. Saito, M. Nakao, Y. Uranishi, and T. Matsuda, Deformation estimation of elastic bodies using multiple silhouette images for endoscopic image augmentation, IEEE International Symposium on Mixed and Augmented Reality, pp. 170-171, 2015.
  • [30] B. Koo, E. Ozgur, B. L. Roy, E. Buc, and A. Bartoli, Deformable registration of a preoperative 3D liver volume to a laparoscopy image using contour and shading cues, Proc. Medical Image Computing and Computer Assisted Intervention (MICCAI) 2017.
  • [31] M. D. Ketcha, T. De Silva, A. Uneri, M. W. Jacobson, J. Goerres, G. Kleinszig et al., Multi-stage 3D-2D registration for correction of anatomical deformation in image-guided spine surgery, Phys Med Biol, vol. 62, no. 11, pp. 4604-4622, 2017.
  • [32] R. Modrzejewski, T. Collins, A. Bartoli, A. Hostettler, and J. Marescaux, Soft-body registration of pre-operative 3D models to intra-operative RGBD partial body scans, Proc. Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 39-46, 2018.
  • [33] M. Nakao, M. Nakamura, and T. Matsuda, Image-to-Graph convolutional network for deformable shape reconstruction from a single projection image, Proc. Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 259-268, 2021.
  • [34] K. M. Brock, J. M. Balter, L. A. Dawson, M. L. Kessler, and C. R. Meyer, Automated generation of a four-dimensional model of the liver using warping and mutual information, Med Phys, vol. 30, no. 6, pp. 1128-33, 2003.
  • [35] C.-R. Chou and S. Pizer, Real-time 2D/3D deformable registration using metric learning, Proc. Medical Computer Vision. Recognition Techniques and Applications in Medical Imaging, pp. 1-10, 2013.
  • [36] U. Mitrovic, Ž. Špiclin, B. Likar, and F. Pernuš, 3D-2D registration of cerebral angiograms: A method and evaluation on clinical images, IEEE Trans Med Imaging, vol. 32, no. 8, pp. 1550-63, 2013.
  • [37] T. De Silva, A. Uneri, M. D. Ketcha, S. Reaungamornrat, G. Kleinszig et al., 3D-2D image registration for target localization in spine surgery: Investigation of similarity metrics providing robustness to content mismatch, Phys. Med. Biol., vol. 61, no. 8, pp. 3009-3025, 2016.
  • [38] U. Mitrovic, B. Likar, F. Pernus, and Z. Spiclin, 3D 2D registration in endovascular image-guided surgery: Evaluation of state-of-the-art methods on cerebral angiograms, International Journal of Computer Assisted Radiology and Surgery, vol. 13, pp. 193-202, 2017.
  • [39] A. Lange and S. Heldmann, Multilevel 2D-3D intensity-based image registration, Proc. Biomedical Image Registration, pp. 57-66, 2020.
  • [40] M. Nakao and K. Minato, Physics-based interactive volume manipulation for sharing surgical process, IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 3, pp. 809-816, 2010.
  • [41] S. Suwelack, S. Röhl, S. Bodenstedt, D. Reichard, R. Dillmann, T. dos Santos et al., Physics-based shape matching for intraoperative image guidance, Med Phys, vol. 41, no. 11, p. 111901, 2014.
  • [42] J. Ehrhardt, R. Werner, A. Schmidt-Richberg, and H. Handels, Statistical modeling of 4D respiratory lung motion using diffeomorphic image registration, IEEE Trans Med Imaging, vol. 30, no. 2, pp. 251-265, 2011.
  • [43] M. Nakao, J. Tokuno, T. Chen-Yoshikawa, H. Date, and T. Matsuda, Surface deformation analysis of collapsed lungs using model-based shape matching, International Journal of Computer Assisted Radiology and Surgery, vol. 14, no. 10, pp. 1763-1774, 2019.
  • [44] M. Nakamura, M. Nakao, N. Mukumoto, R. Ashida, H. Hirashima, M. Yoshimura et al., Statistical shape model-based planning organ-at-risk volume: Application to pancreatic cancer patients, Phys Med Biol, 2020.
  • [45] M. Nakao, M. Nakamura, T. Mizowaki, and T. Matsuda, Statistical deformation reconstruction using multi-organ shape features for pancreatic cancer localization, Medical image analysis, vol. 67, p. 101829, 2021.
  • [46] D. Toth, S. Miao, T. Kurzendorfer, C. A. Rinaldi, R. Liao, T. Mansi et al., 3D/2D model-to-image registration by imitation learning for cardiac procedures, International Journal of Computer Assisted Radiology and Surgery, vol. 13, no. 8, pp. 1141-1149, 2018.
  • [47] S. Wu, M. Nakao, J. Tokuno, T. Chen-Yoshikawa, and T. Matsuda, Reconstructing 3D lung shape from a single 2D image during the deaeration deformation process using model-based data augmentation, IEEE International Conference on Biomedical and Health Informatics (BHI), pp. 1-4, 2019.
  • [48] Y. Wang, Z. Zhong, and J. Hua, Deeporgannet: On-the-fly reconstruction and visualization of 3D / 4D lung models from single-view projections by deep deformation network, IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, pp. 960-970, 2020.
  • [49] F. Tong, M. Nakao, S. Wu, M. Nakamura, and T. Matsuda, X-ray2shape: Reconstruction of 3D liver shape from a single 2D projection image, Annu Int Conf IEEE Eng Med Biol Soc, vol. 2020, pp. 1608-1611, 2020.
  • [50] O. Ronneberger, P. Fischer, and T. Brox, U-net: Convolutional networks for biomedical image segmentation, Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 234-241, 2015.
  • [51] K. K. Brock, S. Mutic, T. R. McNutt, H. Li, and M. L. Kessler, Use of image registration and fusion algorithms and techniques in radiotherapy: Report of the AAPM Radiation Therapy Committee Task Group No. 132, Med Phys, vol. 44, no. 7, pp. e43-e76, 2017.
  • [52] D. P. Huttenlocher, G. A. Klanderman, and W. A. Rucklidge, Comparing images using the Hausdorff distance, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 9, pp. 850-863, 1993.

Supplementary document

A. Method Comparison

Fig. 8: Box and jitter plots of shape reconstruction results with respect to 350 liver shapes when noise was added to the initial template alignment: (a) MAEs, (b) MDs, (c) HDs and (d) DSCs.

B. Effectiveness of Statistical Generative Model

We determined the effects of different data augmentation methods on 3D organ shape reconstruction, considering respiratory displacement variation. First, to determine the weight parameters of the statistical generative model, training was conducted using eight parameter sets for the 3D-CT models of 124 cases while generating variations in respiratory motion. The test data and subject organs were the same as in Section IV-B. We predicted the shape of the liver included in the 4D-CT data and calculated the MAE.

Table IV presents the investigated parameter sets and the obtained MAEs. When only the weight $w_0$ on the mean displacement was changed, relatively good performance was obtained for $w_0$ = 2, which corresponds to double the mean respiratory displacement. When the weights on the principal components were also changed, improved performance was obtained when the first principal component was considered, whereas poorer performance was obtained when the second component was added. Thus, we adopted ($w_0$, $w_1$, $w_2$) = (2, 1, 0), for which the best performance was found.

TABLE IV: Statistical augmentation results for different weight parameter sets.

$w_0$ | $w_1$ | $w_2$ | MAE [mm]
1     | 0     | 0     | 4.13 ± 0.14
2     | 0     | 0     | 3.99 ± 0.36
3     | 0     | 0     | 4.06 ± 0.41
1     | 1     | 0     | 4.01 ± 0.26
2     | 1     | 0     | 3.85 ± 0.22
2     | 2     | 0     | 4.49 ± 0.46
1     | 1     | 1     | 4.57 ± 0.31
2     | 1     | 1     | 4.05 ± 0.20

Next, we compared the following cases: no data augmentation ("No augmentation"); application of random translation, which has traditionally been used as a standard data augmentation ("Random"); and data augmentation using the proposed statistical generative model ("Statistical"). For "Random," twice the mean respiratory displacement was randomly applied in 3D regardless of direction; for "Statistical," learning was conducted with direction-dependent displacements based on the mean respiratory displacement vector and the first principal component, using the weights obtained as described above. Table V lists the liver shape reconstruction errors for each data augmentation method. For every index, the best performance was achieved by learning with the statistical generative model, and the MAE decreased by 12.4% and 6.1% compared with "No augmentation" and "Random," respectively.

TABLE V: Liver shape reconstruction results of the augmentation methods. Mean ± standard deviation.

          | No augmentation | Random        | Statistical (proposed)
MD [mm]   | 2.51 ± 1.24     | 2.32 ± 1.00   | 2.10 ± 0.83
HD [mm]   | 11.34 ± 4.70    | 10.66 ± 4.18  | 9.90 ± 4.05
MAE [mm]  | 6.94 ± 3.15     | 6.47 ± 2.48   | 6.08 ± 2.40
DSC [%]   | 93.41 ± 3.24    | 93.91 ± 2.62  | 94.53 ± 2.24

C. Multi-organ shape reconstruction results

Fig. 9: Shape reconstruction performance of multiple abdominal organs for all patient data. Each box plot represents MAEs for 10-frame images.
Fig. 10: Multi-organ shape reconstruction results with large errors confirmed in the liver and stomach. Although only very low-contrast textures are present and the appearance of the projection images differs between patients, the shapes can be stably reconstructed with acceptable errors.