I Introduction
Statistical formulation of respiratory motion, including the deformation of multiple organs and tumors, is of increasing interest in external-beam radiotherapy. Specifically, in image-guided radiotherapy (IGRT), it is important to reduce exposure of normal tissues while accurately targeting lesions that move with respiration. However, large anatomical variations in the shape and motion of abdominal organs can occur when treatment lasts several weeks, while time-series three-dimensional (3D) computed tomography (CT) images can only be obtained for initial radiation planning [1][2]. Recent technical advances enable treatment plans to be modified based on daily X-ray cone-beam CT (CBCT); this process is called adaptive radiotherapy (ART) [3][4]. Abdominal organs such as the stomach, duodenum, and pancreas neighbor each other, but the pancreas cannot be clearly detected, even on CBCT images. Because missing pixel values or artifacts often appear in CBCT images (see Fig. 1), ART for abdominal regions is technically more difficult and remains a challenging area of research [2][3]. To address these issues, this paper focuses on the potential of model-based tumor localization in pancreatic cancer treatment using statistical modeling of patient-specific multi-organ motion and deformation.
Statistical shape modeling (SSM) has been widely investigated for the modeling of organ shapes based on prior knowledge [5], and interest has grown in its application in machine learning with intra- or inter-patient datasets with point-to-point correspondence [1][2][6][7]. For instance, inter-fractional shape variations in the prostate and rectum have been statistically modeled for radiation therapy planning [6][8][9]. A statistical deformation model (SDM) [10][11] based on four-dimensional (4D) CT images has also been reported. Unlike physics-based modeling [12][13][14], statistical modeling is a data-driven approach that does not explicitly describe the elasticity and physical conditions of organs. Image-based [15][16], point-based [8][17], or mesh-based [1][2][6][18] deformable registration techniques have been used to obtain point-to-point correspondence between two datasets. Specifically, respiratory motion [11][12][16][19] has mainly been investigated in the field of image-based lung modeling.
Our aim in this paper is to statistically model inter- and intra-patient deformation along with the motion of multiple abdominal organs. The relationships among organs, especially those between pancreatic cancer and its surrounding organs, are also interesting and worth investigating [20][21]. Image-based registration assumes that the 3D image or the regions of interest of organs form a continuous space. It also assumes that the displacement is spatially smooth for regularization [15][22][23][24]. However, the mechanism of multi-organ deformation is complex and not fully understood. Respiratory motion widely affects abdominal regions, in which several organs are adjacent to or connected with each other. Therefore, to analyze individual motion and deformation accurately, a model capable of capturing sliding motion or rotation near the organ boundary is needed [11]. Hence, interest in deformable mesh registration (DMR) has recently resurged. The inter- and intra-patient variations in cervix–uterus anatomy were recently modeled for radiation therapy planning [1][7]. The correlation of liver and pancreas tumor motion [25] and daily changes in the stomach, duodenum, and pancreas [2] have been analyzed through DMR.
To the best of our knowledge, there has been no report describing SDM-based deformation reconstruction or clarifying the relationship between the inter- and intra-patient deformations of multiple organs and tumors, even though the clinical need for such modeling in radiotherapy and motion recognition is high [26][27][28][29].
In this paper, we introduce a localized deformation reconstruction framework based on the shape features of multiple organs. To model the relationships of the spatial deformation fields between the abdominal organs and the tumor, a statistical multi-organ deformation model (SMDM) is constructed by shape-matching organ meshes generated from 4DCT datasets (250 volumes). We formulate the SMDM as a reproducing kernel, with the motion and deformation of the pancreatic cancer region as the estimation target. Five abdominal organs (the liver, stomach, duodenum, and right and left kidneys) located around the pancreas are used as multidimensional shape features. In such a procedure, a low number of cancer patients in the dataset can decrease the estimation performance, which is a common problem in clinical machine learning. This paper proposes the concept of localized deformation reconstruction, which models nonlinear motion for a small local region of the estimation target rather than on a per-patient basis. Here, to design the reproducing kernel framework that maps motion with deformation of pancreatic cancer and its surrounding organs, we address the following fundamental issues:

We investigate the level of complexity of the motion dynamics of multiple abdominal organs, their deformation, and their relationships.

We determine the design of DMR to build the SMDM in the abdominal region.

We investigate which organ sets are good estimators for predicting the motion/deformation of the gross tumor volume (GTV) and determine an appropriate number of dimensions for the feature space.

We evaluate the final estimation performance in the inter- and intra-patient validation of the GTV and address whether the proposed concept is clinically acceptable.
We note that the purpose of shape/deformation reconstruction is different from that of image segmentation, for which a variety of deep learning methods that directly use image features in the regions of interest have been proposed [30]. In the proposed kernel-based formulation of the SMDM, as shown in Fig. 1(b), even if the pancreatic cancer is totally "invisible" because of missing pixels or artifacts in CBCT images, its motion and deformation can be indirectly reconstructed from multi-organ features. This estimation helps optimize radiation treatment in that the radiation dose can be locally delivered to the moving tumor while keeping a safe margin around organs at risk.

II Methods
II-A Statistical Multi-organ Deformation Model
In this study, 4DCT images of 25 pancreatic cancer patients who underwent intensity-modulated radiotherapy (IMRT) at Kyoto University Hospital were used for statistical modeling and deformation learning. This study was performed in accordance with the Declaration of Helsinki and was approved by our institutional review board (approval number: R1446). The 4DCT images for each patient consisted of 3DCT images (88–152 slices per volume) at 10 time phases over one respiratory cycle. All images were acquired under respiratory synchronization, with defined end-inhalation and end-exhalation phases. The region labels of the entire body, stomach, liver, duodenum, left and right kidneys, and the GTV of the pancreatic cancer were predefined by board-certified radiation oncologists during radiotherapy planning, as shown in Fig. 2(a). We note that accurate automatic extraction is difficult for the stomach, duodenum, and pancreas because of their unclear boundaries, low contrast, and air content. Their shapes can be substantially deformed during the radiotherapy period. Most anatomical segmentation techniques assume that there are no image defects or wide-ranging artifacts in the CBCT images. In this study, the 3D contours manually defined as the organs at risk (OARs) for dose calculation in radiation planning were used as the multi-organ shape database, and their tetrahedral meshes were generated. Figure 2(a) shows an example of tetrahedral meshes and their volumetric point distributions for the five organs and the GTV of the pancreatic cancer.
Figure 2(b) shows the flow of the developed framework. The meshes differ in the number of vertices and in mesh structure because they were independently generated from different CT images. As shown in Fig. 3, corresponding models (with the same vertex count and mesh structure) that precisely approximate the target surfaces were computed by DMR using a template and the target mesh. Because the registered models achieve point-to-point correspondence, the spatial deformation can be represented by calculating the displacement vector of each corresponding vertex.
To stably compute a 3D deformation field of multiple organs, accurate shape matching is required. The shape variation and deformation of abdominal organs such as the stomach and duodenum include considerable volume changes as well as rotations and sliding boundaries. Figure 4 shows examples of complex inter-patient shape changes and the interaction between multiple abdominal organs. Parts of the liver and stomach deform in different directions near their boundaries. The duodenum and right kidney do not contact each other in the template model, but are in contact in the patient models. Image-based registration generally treats multiple organs as one continuous image space, so an individual organ's deformation or the interactions among specific organs cannot be discriminated. In addition, the registration error tends to increase in areas with large curvature, such as the tips of organs and the boundaries of neighboring organs [11][18]. Specifically, our focus in this paper is to model multi-organ motion/deformation and its interactions, and to examine the efficacy of the proposed feature-preserving registration method for abdominal organs with large shape variations, such as the stomach and duodenum. To capture rotational components or possible sliding motions caused by the interaction of multiple organs, the registration process is applied to each organ's mesh independently. In addition to intra-patient registration, our approach enables the construction of an SDM, making inter-patient deformation analysis possible. The details of the applied DMR method, LDSM, are described in Section II-C.
For template mesh generation, one case was first randomly selected as the initial mesh, and its surface was resampled to 400 vertices and 796 triangles. Next, the corresponding mesh models were obtained by registering this initial mesh to the individual surfaces in the inflated state. Because the resulting mesh models have point-to-point correspondence, the average shape can be obtained by averaging each vertex coordinate. We used this average shape as the final template. This process was performed for the stomach, liver, left and right kidneys, and duodenum. By keeping the template close to the data in advance, we aim to reduce the influence of the adopted data selection while preventing an increase in matching error.
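The averaging step above reduces to taking the mean of corresponding vertex coordinates across the registered meshes. A minimal sketch (function name and toy shapes are illustrative; it assumes the meshes already share point-to-point correspondence):

```python
import numpy as np

def mean_template(registered_meshes):
    """Average corresponding vertices of registered meshes.

    registered_meshes: list of (V, 3) arrays with point-to-point
    correspondence (same vertex count and ordering).
    Returns the mean shape as a (V, 3) array.
    """
    stacked = np.stack(registered_meshes, axis=0)  # (n_cases, V, 3)
    return stacked.mean(axis=0)

# Toy example: three registered 4-vertex shapes.
shapes = [np.zeros((4, 3)), np.ones((4, 3)), 2.0 * np.ones((4, 3))]
template = mean_template(shapes)
print(template[0])  # [1. 1. 1.]
```

Because the vertices correspond, the same index in every array refers to the same anatomical location, which is what makes the coordinate-wise mean a meaningful average shape.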
II-B Localized deformation learning model
In the prediction stage, the time-varying position of the pancreatic cancer is estimated using reproducing kernels of the multi-organ features. Here, as the second technical challenge, we introduce localized multi-organ features for modeling inter- and intra-patient variations of deformation from a dataset containing a limited number of patients. To model spatial deformation, per-patient-based learning, in which one patient's data are used as one set of training data, has generally been employed as a straightforward approach. The displacement of a region is computed by synthesizing that of the corresponding regions of the training dataset with similar shape features. In this case, the number of training data equals the number of patients in the dataset.
Instead of reconstruction on a per-patient basis, the aim of this paper is to formulate a method for local deformation reconstruction that models nonlinear motion for a small region of the estimation target. In the proposed learning model, a small local region of the target organ is used as one set of training data. This approach is based on the hypothesis that local regions with similar shape features show similar displacements, which is commonly assumed within each continuous organ space. In this approach, the displacement of a region is calculated by combining the training datasets of all regions, including other parts of the target organs. When the dataset contains N patients and M small regions of the target organ are considered, the number of training data is N × M. The details of the deformation reconstruction framework are described in Section II-D.
II-C Feature-preserving deformable mesh registration
In this study, to achieve both globally stable and locally strict DMR, we aim to address the trade-off between feature-preserving shape matching and spatially smooth deformation. To this end, the concept of progressive, feature-preserving shape updates [18][31] is introduced into the large deformation diffeomorphic metric mapping (LDDMM) scheme [22][32]. The objective function of the proposed LDSM is described as follows:
(1)  E(φ) = D(φ(S), T) + λ‖Δu‖²

where S is the source (template) mesh, T is the target surface, and D(·, ·) is the distance function between the two surfaces. In addition, φ is a continuous and differentiable transformation that maps S to the deformed mesh, Δ is the Laplace–Beltrami operator, and Δu is the discrete Laplacian of the displacement field u.
The first term evaluates the difference between the deformed template and the target surface. The second is a regularization term that makes the deformation field smooth. In the context of image-based LDDMM, the first term evaluates the difference in voxel intensities between the deformed and target images. Minimizing the nearest point-to-point distance was originally employed as a basic strategy in DMR; however, it does not consider 3D geometry, and maintaining mesh topology is difficult with this approach. We focus on the importance of feature preservation to avoid incorrectly matching local structures [18]. In mapping distant structures, 3D geometric information of the template mesh is used to preserve the local features of organ shapes. In the mapping function φ, a Laplacian-based shape matching (LSM) scheme [31] is introduced for progressive shape updates while preserving features as much as possible. The mean value of the nearest bidirectional point-to-surface distance (called the mean distance in this paper) is used for D. The discrete Laplacian was first formulated for geometry modeling [33] and has recently been applied to the non-rigid shape registration of anatomical structures [18][31]. In [31], LSM registers curved surfaces with shape variations better than LDDMM.
Let M = (V, E) denote a tetrahedral mesh with vertices V and edges E. The deformation map, that is, the deformation field in the LSM, is obtained by iteratively updating the vertex positions while minimizing the following objective function:

(2)  E(v′) = Σ_i ‖δ_i − L(v′_i)‖² + w² Σ_{i∈C} ‖v′_i − c_i‖²

where v′_i is the vertex position to be solved, δ_i = L(v_i) is the Laplacian coordinate of the template at vertex i, c_i is the positional constraint for vertex i ∈ C, and w is a weight parameter configured according to the problem. For the definition of the positional constraints between the template and target shapes, refer to [18]. L(v_i) is the discrete Laplacian at vertex v_i, defined by

(3)  L(v_i) = v_i − (1 / Σ_j w_ij) Σ_{j∈N(i)} w_ij v_j

Here, w_ij are the edge weights and N(i) is the one-ring of vertices connected to vertex v_i by an edge. The discrete Laplacian is used as a shape descriptor and approximates the mean curvature normal of the triangular mesh. Although there are several variations of the weights, the common choice is cotangent discretization based on the per-edge Voronoi areas. The first term in Eq. (2) penalizes shape changes of the mesh, and the second term increases if a constrained vertex is distant from the nearest surface of the target mesh. By computing the vertex positions v′ that minimize the objective function, the template model is updated while preserving its shape as much as possible.
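The discrete Laplacian shape descriptor can be illustrated with the simplest (uniform, "umbrella") edge weights; the cotangent weights mentioned above are the common choice in practice, but the structure is the same. A minimal sketch with an illustrative toy mesh:

```python
import numpy as np

def uniform_laplacian(vertices, neighbors):
    """Discrete (umbrella) Laplacian: vertex minus the mean of its one-ring.

    vertices:  (V, 3) array of positions.
    neighbors: list of index lists; neighbors[i] is the one-ring of vertex i.
    With uniform weights this is a simple shape descriptor that
    approximates the mean curvature normal.
    """
    delta = np.empty_like(vertices)
    for i, ring in enumerate(neighbors):
        delta[i] = vertices[i] - vertices[ring].mean(axis=0)
    return delta

# Unit square traversed as a cycle: each vertex neighbors the two adjacent corners.
V = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
N = [[1, 3], [0, 2], [1, 3], [0, 2]]
delta = uniform_laplacian(V, N)
print(delta[0])  # vertex 0: [-0.5, -0.5, 0.0], pointing away from the square's center
```

The descriptor is translation-invariant (a shifted mesh has the same Laplacian coordinates), which is why it captures shape rather than position.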
In the proposed LDSM, in addition to LSM, the regularization term in Eq. (1) is used to achieve a spatially smooth and diffeomorphic deformation. This can be implemented simply by adding the term to Eq. (2) as follows:
(4)  E(v′) = Σ_i ‖δ_i − L(v′_i)‖² + w² Σ_{i∈C} ‖v′_i − c_i‖² + λ Σ_i ‖L(u_i)‖²

where ‖L(u_i)‖ is the magnitude of the discrete Laplacian of the deformation field, defined by

(5)  ‖L(u_i)‖² ,  L(u_i) = u_i − (1 / Σ_j w_ij) Σ_{j∈N(i)} w_ij u_j ,  u_i = v′_i − v_i

Because Eq. (5) is a quadratic form in the vertex positions v′_i, the minimization problem can be computed efficiently. The step-by-step update avoids local mismatches at an early stage if there is a considerable distance between the two surfaces. Once the template is updated, the local displacement is obtained. We note that the original LDDMM does not preserve the shape of the template, and some studies report unstable or irregular matching, especially for large deformations with rotation [18][31]. In the proposed framework, in addition to the regularization term, Laplacian-based shape preservation is introduced into the iterative deformation process.
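The shape-preserving matching step described above can be viewed as a linear least-squares problem: keep the template's Laplacian coordinates while pulling constrained vertices toward target positions. A minimal dense sketch on a toy 4-vertex cycle with a uniform Laplacian (real implementations use sparse solvers and cotangent weights; all names here are illustrative):

```python
import numpy as np

def lsm_step(V, L, constrained, targets, w=1.0):
    """One shape-matching update as linear least squares.

    Minimizes ||L V' - L V||^2 + w^2 * sum_i ||V'_i - c_i||^2:
    the first term preserves the template's Laplacian (shape) coordinates,
    the second pulls constrained vertices toward target positions.
    """
    n = V.shape[0]
    delta = L @ V                               # Laplacian coordinates of the template
    C = np.zeros((len(constrained), n))
    for row, i in enumerate(constrained):
        C[row, i] = w
    A = np.vstack([L, C])
    b = np.vstack([delta, w * np.asarray(targets)])
    V_new, *_ = np.linalg.lstsq(A, b, rcond=None)
    return V_new

# Uniform Laplacian of a 4-cycle (unit square); constrain all vertices to a
# translated copy: the solver recovers the rigidly shifted square exactly,
# because translation leaves the Laplacian coordinates unchanged.
L = np.array([[1, -0.5, 0, -0.5], [-0.5, 1, -0.5, 0],
              [0, -0.5, 1, -0.5], [-0.5, 0, -0.5, 1]])
V = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
V_new = lsm_step(V, L, [0, 1, 2, 3], V + np.array([0.0, 0, 1]))
print(np.allclose(V_new, V + np.array([0.0, 0, 1])))  # True
```

The translated-square check illustrates the intended behavior: when the constraints are consistent with a shape-preserving motion, the update moves the mesh without distorting it.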
II-D Reproducing kernels for deformation reconstruction
This section explains the per-region-based localized deformation learning proposed in this paper by comparing it with the per-patient-based deformation learning generally used in population-based modeling [1][6]. Figure 5 briefly illustrates the concept. Because the aim of this paper is to reconstruct the target deformation from multi-organ features, the displacement vector is mapped from multiple points sampled from the surrounding organs, as shown in Fig. 5(a). The local displacement of the GTV can be calculated from the feature vectors of the sampled points, each composed of the relative position and displacement of a sampled point.
Here, the problem is to learn this mapping from the registered multi-organ models generated through DMR. Per-patient-based learning is a straightforward approach, in which the mesh model obtained from one patient's data is used as one set of training data. It is based on the idea that the displacement of a local region of an organ is similar to that of the corresponding regions of training data with similar shape features. Specifically, the displacement at a vertex is learned from the displacement of the corresponding vertex in other cases. In per-region-based localized deformation learning, by contrast, a small region of the mesh model is used as one set of training data. This approach is based on the idea that local regions with similar shape features show similar displacements, which is commonly assumed if a curved manifold is used to represent an organ surface. For instance, in Fig. 5(b), the displacement vectors of adjacent vertices should be similar. In other words, the displacement at a vertex can be learned from the displacements of all vertices in other cases.
We formulate the per-region-based deformation learning model using reproducing kernels. Based on the spatial mapping in Fig. 5(a), the local displacement u(x) of the GTV is computed from the multi-organ shape features x using Eq. (6):
(6)  u(x) = Σ_{i=1}^{N} k(x, x_i) w_i

where k(·, ·) is the reproducing kernel, N is the number of training datasets, and w_i is the weight vector. A Gaussian function, k(x, x′) = exp(−‖x − x′‖² / σ²), is used for the kernel. For a given training set, the weights are calculated by minimizing the cost function E(W), which is expressed as
(7)  E(W) = Σ_{i=1}^{N} ‖u_i − Σ_{j=1}^{N} k(x_i, x_j) w_j‖² + λ‖W‖²

where K is the kernel matrix whose elements are defined by K_ij = k(x_i, x_j), and λ is the regularization parameter, which penalizes deviations of the weights W. The optimized weights are given by W = (K + λI)⁻¹U (I: identity matrix, U: the stacked training displacements).
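The kernel machinery described above (Gaussian kernel, regularized weights of the form W = (K + λI)⁻¹U) can be sketched in a few lines; the synthetic 6-D features and 3-D displacement targets below are illustrative stand-ins for the organ features:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    """k(x, x') = exp(-||x - x'||^2 / sigma^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def fit_weights(X, U, sigma, lam):
    """Solve (K + lam * I) W = U for the kernel weights."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), U)

def predict(X_train, W, X_query, sigma):
    """u(x) = sum_i k(x, x_i) w_i."""
    return gaussian_kernel(X_query, X_train, sigma) @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))   # toy features (relative position + displacement)
U = 0.5 * X[:, :3]             # synthetic 3-D displacement targets
W = fit_weights(X, U, sigma=3.0, lam=1e-3)
U_hat = predict(X, W, X, sigma=3.0)
# The regularized fit strictly shrinks the training residual below ||U||.
print(np.linalg.norm(U_hat - U) < np.linalg.norm(U))  # True
```

The regularization parameter λ trades training fidelity against smoothness of the learned mapping, exactly as in Eq. (7).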
The feature vector of a local region for per-region-based learning is constructed from the relative positions and displacement vectors of the points sampled from the surrounding organs as follows:

(8)  x = [p_1 − p_t, u_1, …, p_n − p_t, u_n]

where p_t is the position of the target vertex and p_k, u_k are the position and displacement vector of the k-th sampled vertex.
In this study, the sampled vertices of the SMDM obtained from 24 patients were used in leave-one-out cross-validation to construct the kernel matrix, and all vertices of the five organs (liver, left and right kidneys, stomach, and duodenum) were considered as candidates for the shape features:

(9)  x = [x_ST, x_DU, x_LI, x_LK, x_RK]

where ST, DU, LI, LK, and RK denote the stomach, duodenum, liver, left kidney, and right kidney, respectively. For instance, when 50 vertices are sampled from each organ model, the dimension of the feature vector is 1,500. Because the shape features are evaluated in a high-dimensional feature space, in per-patient-based deformation reconstruction the estimation error may increase if there are no corresponding vertices with similar characteristics among the 24 cases.
Alternatively, in the proposed per-region-based deformation learning, the displacement is learned locally per vertex; in other words, it can be reconstructed from the deformation of different regions. Therefore, when 200 vertices are used to represent the target pancreatic cancer (for example), the training dataset is substantially increased to 24 × 200 = 4,800 samples, whereas the dimension of the feature space remains 1,500, the same as in the per-patient-based approach. This means that the number of neighboring data points in the generated feature space increases, and the displacement of the target vertex can be reconstructed using more data. Therefore, the per-region-based kernel formulation improves the estimation performance, and more stable results can be expected.
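The difference between the two training-set constructions can be made concrete with array shapes (24 patients, 200 target regions; all names are illustrative):

```python
import numpy as np

def per_patient_dataset(features, displacements, vertex_id):
    """One training sample per patient: the feature/target at one vertex."""
    return features[:, vertex_id, :], displacements[:, vertex_id, :]

def per_region_dataset(features, displacements):
    """One training sample per (patient, vertex): pool all local regions."""
    n, m, d = features.shape
    return features.reshape(n * m, d), displacements.reshape(n * m, 3)

n_patients, n_regions, dim = 24, 200, 1500
feats = np.zeros((n_patients, n_regions, dim))
disps = np.zeros((n_patients, n_regions, 3))
Xp, _ = per_patient_dataset(feats, disps, vertex_id=0)
Xr, _ = per_region_dataset(feats, disps)
print(Xp.shape[0], Xr.shape[0])  # 24 4800
```

The feature dimensionality is identical in both cases; only the number of training samples available to the kernel regression changes.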
As described in the introduction, we are interested in the following questions: which organ sets are good estimators for deformation reconstruction, and what number of feature-space dimensions is appropriate? Figure 6 shows two examples in which three vertices are mapped to the target vertex; the problem is to obtain the target displacement based on features sampled from different organ sets, from the liver or from the stomach and duodenum. The liver has no clear anatomical connectivity to the pancreas, and its shape feature is relatively stable with small deformation. The stomach and duodenum are connected to the pancreas, but their deformation is large [2] and unstable. Because the shape features of all organs cannot always be obtained from CBCT images, the prediction performance when using specific organ sets is worth investigating for clinical application. In the experiments, we examine the efficacy of the proposed concept and investigate possible organ sets and the proper number of feature-space dimensions for localizing pancreatic cancer.
III Experiments
In the experiments, statistical multi-organ deformation models were first generated by inter- and intra-patient shape matching. The registration performance for each organ was confirmed by comparing it with the results from three existing registration methods. Then, the efficacy of the multi-organ shape features in predicting the motion and deformation of pancreatic cancer was analyzed. The performance of the proposed per-region-based deformation learning was evaluated by comparing it with conventional per-patient-based learning. The value of the weight parameter for the kernel function was determined from the results of numerical experiments.
III-A Shape matching performance
In this study, the mean distance [1][31], the Hausdorff distance [34], and the Laplacian of the displacement [18] were used as shape similarity criteria. The Hausdorff distance measures the longest distance between two surfaces, whereas the mean distance is the mean value of the nearest bidirectional point-to-surface distance. Unlike segmentation or recognition problems, statistical modeling requires point-to-point local correspondence between two shapes. For example, because the Dice coefficient only measures volume overlap, it is not suitable for evaluating per-vertex correspondence, nor for measuring the quality of local matching. The Laplacian of the displacement is the magnitude of the second derivatives of the displacement field, and it evaluates the smoothness of the deformation. If local shape matching is achieved while preserving vertex density, this value decreases.
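The two distance criteria can be sketched with nearest-neighbor distances between point sets; here point-to-point nearest neighbors stand in for the point-to-surface distance used in the paper, and the toy point sets are illustrative:

```python
import numpy as np

def bidirectional_distances(P, Q):
    """Mean and Hausdorff distances between two point sets P, Q of shape (n, 3)."""
    D = np.sqrt(((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1))
    d_pq = D.min(axis=1)          # nearest distance from each P point to Q
    d_qp = D.min(axis=0)          # nearest distance from each Q point to P
    mean_d = 0.5 * (d_pq.mean() + d_qp.mean())
    hausdorff = max(d_pq.max(), d_qp.max())
    return mean_d, hausdorff

P = np.array([[0.0, 0, 0], [1, 0, 0]])
Q = np.array([[0.0, 0, 0], [1, 0, 0], [1, 3, 0]])
md, hd = bidirectional_distances(P, Q)
print(md, hd)  # 0.5 3.0
```

The example shows why both are reported: the outlier point (1, 3, 0) dominates the Hausdorff distance while barely moving the mean distance.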
The proposed shape matching method (LDSM) was compared with three existing shape matching approaches: PWA, large deformation diffeomorphic metric mapping (LDDMM) [32], and LSM [31]. For all algorithms, an affine transformation was applied in advance to globally match the posture and volume of the overall shape.
Table I shows the quantitative comparison results of the DMR algorithms for each of the five organs. LDSM and LSM achieved significantly smaller mean and Hausdorff distances, with errors of less than 1 mm, outperforming PWA and LDDMM in matching volumetric regions. Regarding the Laplacian of the displacement field, LDSM obtained smaller values than LSM, indicating that LDSM performs smooth deformation that reduces unstable surface matching. In DMR, there is a trade-off between the accuracy of shape representation and smooth deformation from the template. Using the proposed LDSM method, the Hausdorff distance is less than 1 mm, and the Laplacian of the displacement is better than that of LSM. Based on these results, we selected LDSM for constructing the SMDM of the five organs.
Table I. Quantitative comparison of the DMR algorithms for each organ. MD: mean distance; HD: Hausdorff distance; LD: Laplacian of the displacement; DSC: Dice similarity coefficient.

Liver
Metric | PWA | LDDMM | LSM | LDSM
MD [mm] | 1.6 (1.1–2.4) | 0.5 (0.3–0.7) | 0.2 (0.1–0.3) | 0.2 (0.1–0.3)
HD [mm] | 10.0 (4.7–28.7) | 3.6 (1.3–14.8) | 0.9 (0.4–1.9) | 1.1 (0.5–2.2)
LD (mean) [mm] | 2.3 (1.0–4.6) | 1.1 (0.8–1.6) | 1.5 (1.0–2.2) | 1.2 (0.8–1.7)
LD (max) [mm] | 12.0 (4.6–26.9) | 6.8 (3.6–13.3) | 8.8 (4.4–20.6) | 7.0 (3.7–15.2)
DSC [%] | 95.7 (92.4–97.2) | 98.0 (96.1–98.6) | 98.1 (96.1–98.7) | 98.1 (96.1–98.7)

Stomach
Metric | PWA | LDDMM | LSM | LDSM
MD [mm] | 1.4 (0.9–2.1) | 0.5 (0.2–1.1) | 0.2 (0.1–0.3) | 0.2 (0.1–0.4)
HD [mm] | 7.3 (3.7–14.9) | 3.1 (1.3–10.4) | 0.8 (0.3–1.9) | 0.9 (0.4–1.8)
LD (mean) [mm] | 2.1 (0.8–5.0) | 1.1 (0.7–2.0) | 1.3 (0.8–2.5) | 1.1 (0.6–2.0)
LD (max) [mm] | 8.7 (3.6–43.2) | 6.3 (3.0–10.8) | 7.7 (3.9–15.7) | 6.6 (2.9–12.2)
DSC [%] | 92.6 (89.4–95.6) | 96.5 (95.3–97.5) | 97.2 (96.0–98.1) | 97.0 (95.7–98.2)

Duodenum
Metric | PWA | LDDMM | LSM | LDSM
MD [mm] | 1.2 (0.7–2.9) | 0.5 (0.3–1.4) | 0.2 (0.1–0.5) | 0.2 (0.1–0.5)
HD [mm] | 6.7 (2.9–23.0) | 3.6 (0.8–13.1) | 0.8 (0.3–4.2) | 0.9 (0.4–5.0)
LD (mean) [mm] | 1.5 (0.6–2.6) | 1.0 (0.6–2.1) | 1.2 (0.6–2.9) | 1.0 (0.6–2.3)
LD (max) [mm] | 6.6 (2.2–16.3) | 6.9 (3.0–26.8) | 7.8 (2.9–28.6) | 7.2 (2.9–22.5)
DSC [%] | 86.9 (54.1–92.6) | 93.1 (70.6–96.4) | 94.2 (69.0–97.4) | 94.2 (72.2–97.2)

Left kidney
Metric | PWA | LDDMM | LSM | LDSM
MD [mm] | 0.8 (0.6–1.2) | 0.3 (0.2–0.9) | 0.1 (0.1–0.2) | 0.1 (0.1–0.3)
HD [mm] | 4.8 (2.7–9.4) | 2.8 (0.7–16.4) | 0.5 (0.2–1.8) | 0.7 (0.2–2.1)
LD (mean) [mm] | 1.7 (0.7–4.1) | 0.9 (0.5–2.2) | 1.2 (0.6–2.9) | 1.0 (0.5–2.3)
LD (max) [mm] | 7.5 (2.6–21.7) | 6.9 (3.1–15.4) | 7.8 (3.0–24.6) | 7.2 (2.6–21.7)
DSC [%] | 95.9 (90.0–96.9) | 97.2 (95.9–98.4) | 97.4 (96.0–98.4) | 97.4 (95.9–98.5)

Right kidney
Metric | PWA | LDDMM | LSM | LDSM
MD [mm] | 0.8 (0.6–1.0) | 0.3 (0.2–1.0) | 0.1 (0.1–0.2) | 0.1 (0.1–0.3)
HD [mm] | 4.3 (2.4–9.1) | 2.9 (0.5–13.1) | 0.4 (0.2–0.9) | 0.6 (0.3–2.0)
LD (mean) [mm] | 1.6 (0.7–3.1) | 0.8 (0.4–1.6) | 1.0 (0.5–2.0) | 0.8 (0.5–1.6)
LD (max) [mm] | 8.6 (2.7–20.9) | 6.9 (2.7–14.5) | 8.1 (2.9–23.1) | 7.0 (2.7–21.6)
DSC [%] | 96.2 (94.8–97.3) | 97.4 (96.3–97.9) | 97.7 (96.6–98.6) | 97.6 (96.4–98.4)
III-B Multi-organ deformation analysis
To date, no study has investigated the impact of inter- and intra-subject variation on abdominal multi-organ deformation. Our DMR framework directly provides a statistical representation of the registered organ models, from which the mean and variation of deformation between subjects can be generated.
III-B1 Statistical motion dynamics
Figure 7 shows the statistical motion dynamics with deformation computed from the registered organ models for the ten time phases of one respiratory cycle. The mean displacement over all corresponding vertices is visualized as the centerline, and the standard deviation is depicted as a colored band. The graph shows the mean and standard deviation of the displacement at the end-expiration phase for the liver, stomach, duodenum, left and right kidneys, and the GTV of the pancreatic cancer. The standard deviation is relatively large compared with the magnitude of the displacement, which indicates that there are large individual variations in respiratory motion and organ deformation. Hence, estimating the 3D tumor region with only a mean deformation model would be difficult.
III-B2 Statistical deformation model
Figure 8 shows the deformation modes corresponding to the first two eigenvalues of the registered organ models. The eigenvalues and eigenvectors were computed from the set of displacement vectors of all vertices using singular value decomposition. The central figures show the mean shape and mean displacement; the translucent shape is the end-inspiration phase, and the opaque shape is the end-expiration phase. The left/right images were generated by changing the mode weights to plus/minus twice the square root of the eigenvalues. The supplemental movies visualize the sequential motion by interpolating between the end-inspiration and end-expiration phases. The color map shows the signed distance from the mean deformation.
The variety of motion and deformation can be characterized according to morphological properties as follows:

The first eigenvector mainly encompasses variation in the scale of deformation, which indicates that individual differences are large during the respiratory cycle.

The second eigenvector is associated with the directions and rotations of deformation. Interestingly, the rotation axis and direction of rotation differ for each organ.
We also confirmed that the subspace representation using two eigenvectors explains 96.1% of the total deformation variation.
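The mode computation in this subsection (eigen-decomposition of the stacked displacement vectors via singular value decomposition) can be sketched on synthetic data; the 25 × 30 matrix and the two planted latent modes are illustrative, not the study's data:

```python
import numpy as np

def deformation_modes(displacements):
    """PCA of displacement fields via SVD.

    displacements: (n_cases, 3 * n_vertices), one row per subject.
    Returns the mean field, the modes (eigenvectors), and the fraction
    of total deformation variation explained by each mode.
    """
    mean = displacements.mean(axis=0)
    _, s, modes = np.linalg.svd(displacements - mean, full_matrices=False)
    var = s ** 2
    return mean, modes, var / var.sum()

rng = np.random.default_rng(1)
basis = rng.normal(size=(2, 30))            # two latent deformation modes
coeff = rng.normal(size=(25, 2))            # per-subject mode weights
D = coeff @ basis + 0.01 * rng.normal(size=(25, 30))
mean, modes, ratio = deformation_modes(D)
print(ratio[:2].sum() > 0.95)  # True: two modes dominate the variation
```

Mode shapes for visualization are then obtained by adding a multiple of a mode (e.g., plus/minus twice the square root of its eigenvalue) to the mean field.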
III-C Deformation reconstruction performance
The aim of the next experiment was to investigate the prediction performance and characteristics of the per-patient-based and per-region-based learning models. For the experimental setup, 100 corresponding vertices were randomly sampled from each organ model, and a total of 500 vertices were used as multi-organ shape features, meaning that the dimension of the feature vector was 3,000. The displacement vectors of the GTV were calculated using Eq. (6) through leave-one-out cross-validation. The per-patient-based and per-vertex-based learning models were evaluated using the mean distance, Hausdorff distance, and Dice similarity coefficient between the estimated and ground-truth regions of the pancreatic cancer. Figure 9 shows box plots of the three error metrics for different weight parameter values. Significant differences among the weight parameter settings were found for all error metrics by one-way analysis of variance (ANOVA; p < 0.05 significance level). Per-vertex-based learning achieved the minimal estimation error, improving the estimation performance by 28.4% over per-patient-based learning.

The characteristics of the multidimensional feature sets depend on the complexity of the tumor localization problem and the number of datasets. Both are important factors affecting the estimation performance and the calculation cost. Therefore, we investigated the relationship between the estimation error of tumor localization and the number of sampling points. The mean estimation error was calculated for ten trials of random sampling from the five organs while increasing the number of sampling points from 1 to 400. The weight parameter of the reproducing kernel was set to the value that yielded good estimation performance in the previous experiment.
Figure 9(d) shows the transition of the prediction performance of the two models. The per-region-based learning model shows consistently better prediction performance regardless of the number of sampling points. The estimation error decreased as the number of sampling points increased, and the error tends to converge at around 300 points. The proposed per-vertex-based learning model achieved an average estimation error of 4.8 mm at around 300 sampling points, a performance similar to that of the previous setting, which used 500 sampling points.
III-D Multi-organ shape features
The investigation of effective feature sets in respiratory motion analysis is an important topic not only for reproducing kernels but also for deep learning applications. The next experiment confirms the relationship between 31 feature sets (all combinations of the five organs) and estimation accuracy in the prediction of pancreatic cancer deformation. This experiment also compares the performance of the proposed multidimensional features with that of other regression approaches that use a single organ or low-dimensional features. The number of sampling points was fixed at 300 based on the results of the previous experiments.
Figure 10 shows the median of the Hausdorff distance sorted in ascending order for the 31 feature sets. These results suggest the following:

1) Features from smaller organ sets show better estimation performance than those sampled from all five organs.

2) Estimations using only one organ tend to increase the estimation error. Specifically, the right kidney leads to poor estimation performance.

3) The stomach, duodenum, and left kidney are the best motion descriptors for estimating the deformable region of pancreatic cancer.
These findings support the use of features from multiple neighboring organs rather than features sampled from the entire abdominal area. Shape features from the liver perform worse despite this organ being relatively close to the pancreas, but they are good candidates for feature descriptors when the contours of the stomach and duodenum are not available.
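The 31 feature sets compared above are simply every non-empty subset of the five organ contours. A sketch of the enumeration (the organ names and the `median_hausdorff` evaluator are illustrative assumptions):

```python
from itertools import combinations

ORGANS = ("stomach", "liver", "left_kidney", "right_kidney", "duodenum")

def organ_feature_sets(organs=ORGANS):
    """Enumerate all 2^5 - 1 = 31 non-empty organ combinations."""
    return [subset
            for r in range(1, len(organs) + 1)
            for subset in combinations(organs, r)]

feature_sets = organ_feature_sets()

# Rank the feature sets by a (hypothetical) evaluator that returns the
# median Hausdorff distance for each combination, e.g.:
#   ranked = sorted(feature_sets, key=median_hausdorff)
```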
III-E Estimation performance on motion dynamics
The goal of the motion/deformation analysis in this paper was to investigate the estimation performance for pancreatic cancer using multi-organ shape features. We analyzed the estimation error of the GTV for the ten time phases of the 4D CT dataset, as shown in Fig. 11. Based on the analysis of the multi-organ feature sets, the stomach, duodenum, and left kidney were used for this prediction. The number of sampling points and the parameters of the reproducing kernels were the same as in the previous experiments. Figure 11(a) shows the inter-patient variation of the GTV’s motion with local deformation. The mean, maximum, and minimum paths in one respiratory cycle are plotted as three lines, and each colored band is the standard deviation over the 25-patient data. Because the displacement is calculated per vertex of the pancreas model, the difference between the maximum and minimum paths represents a local deformation exceeding 5 mm on average. The overall path also includes considerable variation, with over 10 mm differences in individual respiratory motion. This can also be confirmed in the full bandwidth of Fig. 7.
The transition of the estimation error over the ten time phases using the designed reproducing kernel is plotted in Fig. 11(b). Despite the inter-patient variation of motion and deformation observed in Fig. 11(a), the mean ± standard deviation of the mean distance was 1.2 ± 0.7 mm and that of the Hausdorff distance was 4.2 ± 2.3 mm. Both errors remained small throughout respiration. These results suggest that the GTV can be adequately localized by the shape features of the surrounding organs, specifically the stomach, duodenum, and left kidney, even if the pancreas is not directly detected.
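The two surface-error metrics reported throughout these experiments can be computed directly on corresponding vertex sets. A minimal brute-force nearest-neighbor sketch (an illustration, not the paper's implementation):

```python
import numpy as np

def directed_nn(A, B):
    """For each point in A, the distance to its nearest neighbor in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1)

def mean_distance(A, B):
    """Symmetric mean surface distance between vertex sets A and B."""
    return 0.5 * (directed_nn(A, B).mean() + directed_nn(B, A).mean())

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance: the largest nearest-neighbor distance."""
    return max(directed_nn(A, B).max(), directed_nn(B, A).max())
```

For large meshes, the pairwise distance matrix would normally be replaced by a k-d tree query, but the definitions are identical.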
IV Discussion
To our knowledge, this study is the first to build an SMDM, a statistical multi-organ deformation library of five abdominal organs that includes inter- and intra-patient shape variations. Image-based deformable registration [15][23] is a popular approach for deformed bodies; however, matching multi-organ regions tends to result in large registration errors, especially around the organ boundaries. The potential of DMR has recently been rediscovered [1][2][6], and the proposed LDSM for each organ addresses problems with matching organs that have rotational components and sliding boundaries. Moreover, it achieved stable registration with a Hausdorff distance error of less than 1 mm. We note that this result outperforms the registration error of 3.1–3.3 mm reported for the SSM of the CTV of the cervix–uterus and bladder in recent work [1].
To clarify the focus of this research, the spatial displacement of the internal structures of the mesh was considered outside the scope of this paper. This is because the assumed application of the SMDM is radiotherapy planning, in which the 3D contours (i.e., surfaces) of the GTV and OARs must be estimated. Because the developed library uses a tetrahedral mesh in the DMR and can express internal deformation, further modeling and evaluation of internal structures would be possible in future work.
In this experiment, the kernel-based modeling approach was used as a nonlinear deformation regression for a limited number of 4D CT data. Despite this study’s limited data size, the results showed that stable estimation could be achieved throughout the respiratory cycle. These findings and the developed database will be useful for determining the planning target volume of pancreatic cancer from the available features of the surrounding organs. Some recent studies report pixel-to-shape techniques, i.e., 3D shape/deformation reconstruction from a single 2D image [38][39][40]. Combining such 3D shape reconstruction techniques, the SMDM, and the extracted multi-organ feature sets would be an interesting approach for real-time tumor localization from X-ray images. Our future work includes the application of the developed multi-organ library to markerless tumor-tracking radiotherapy.
V Conclusion
In this paper, we introduced a multi-organ deformation library and its application to pancreatic cancer localization based on the shape features of multiple organs. A statistical multi-organ motion/deformation library of the stomach, liver, left and right kidneys, and duodenum was generated by DMRs of organ meshes generated from 4D CT images (250 volumes). The proposed LDSM method achieved stable registration with a Hausdorff distance error of less than 1 mm. Per-region-based deformation learning using a reproducing kernel was also proposed to predict the displacement of pancreatic cancer for ART. The experimental results show that the proposed concept achieves a clinically acceptable estimation error, with a mean distance of 1.2 ± 0.7 mm and a Hausdorff distance of 4.2 ± 2.3 mm throughout the respiratory motion.
References
 [1] B. Rigaud, A. Simon, M. Gobeli, J. Leseur, L. Duverge, D. Williaume et al., “Statistical shape model to generate a planning library for cervical adaptive radiotherapy,” IEEE Trans. Med. Imag., vol. 38, no. 2, pp. 406–416, 2019.
 [2] A. Magallon-Baro, M. Loi, M. T. W. Milder, P. V. Granton, A. G. Zolnay, J. J. Nuyttens et al., “Modeling daily changes in organ-at-risk anatomy in a cohort of pancreatic cancer patients,” Radiother. Oncol., vol. 134, pp. 127–134, 2019.
 [3] M. Posiewnik and T. Piotrowski, “A review of cone-beam CT applications for adaptive radiotherapy of prostate cancer,” Phys. Med., vol. 59, pp. 13–21, 2019.
 [4] C. A. Hvid, U. V. Elstrøm, K. Jensen, and C. Grau, “Cone-beam computed tomography (CBCT) for adaptive image guided head and neck radiation therapy,” Acta Oncol., vol. 57, no. 4, pp. 552–556, 2018.
 [5] T. Heimann and H. P. Meinzer, “Statistical shape models for 3D medical image segmentation: A review,” Med. Image. Anal., vol. 13, no. 4, pp. 543–563, 2009.
 [6] M. Nakamura, M. Nakao, H. Hirashima, H. Iramina, and T. Mizowaki, “Performance evaluation of a newly developed three-dimensional model-based global-to-local registration in prostate cancer,” J. Radiat. Res., vol. 60, no. 5, pp. 595–602, 2019.
 [7] D. Tilly, A. J. Schoot, E. Grusell, A. Bel, and A. Ahnesjö, “Dose coverage calculation using a statistical shape model applied to cervical cancer radiotherapy,” Phys. Med. Biol., vol. 62, no. 10, pp. 4140–4159, 2017.
 [8] M. Haekal, H. Arimura, T. A. Hirose, Y. Shibayama, S. Ohga, J. Fukunaga et al., “Computational analysis of interfractional anisotropic shape variations of the rectum in prostate cancer radiation therapy,” Phys. Med. Biol., vol. 46, pp. 168–179, 2018.
 [9] L. Bondar, M. Intven, J. P. Burbach, E. Budiarto, J. P. Kleijnen, M. Philippens et al., “Statistical modeling of CTV motion and deformation for IMRT of early-stage rectal cancer,” Int. J. Radiat. Oncol. Biol. Phys., vol. 90, no. 3, pp. 664–672, 2014.
 [10] J. Ehrhardt, R. Werner, A. SchmidtRichberg, and H. Handels, “Statistical modeling of 4D respiratory lung motion using diffeomorphic image registration,” IEEE Trans. Med. Imag., vol. 30, no. 2, pp. 251–265, 2011.
 [11] C. Jud, A. Giger, R. Sandkühler, and P. C. Cattin, “A localized statistical motion model as a reproducing kernel for non-rigid image registration,” Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 261–269, 2017.
 [12] B. Fuerst, T. Mansi, F. Carnis, M. Sälzle, J. Zhang, J. Declerck et al., “Patient-specific biomechanical model for the prediction of lung motion from 4D CT images,” IEEE Trans. Med. Imag., vol. 34, no. 2, pp. 599–607, 2015.
 [13] M. Nakao and K. Minato, “Physics-based interactive volume manipulation for sharing surgical process,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 3, pp. 809–816, 2010.
 [14] M. Nakao, A. Kawashima, M. Kokubo, and K. Minato, “Simulating lung tumor motion for dynamic tumor-tracking irradiation,” Proc. IEEE NSS-MIC, vol. 6, pp. 4549–4551, 2007.
 [15] A. Sotiras, C. Davatzikos, and N. Paragios, “Deformable medical image registration: A survey,” IEEE Trans. Med. Imag., vol. 32, no. 7, pp. 1153–1190, 2013.
 [16] J. Rühaak, T. Polzin, S. Heldmann, I. J. A. Simpson, H. Handels, J. Modersitzki et al., “Estimation of large motion in lung CT by integrating regularized keypoint correspondences into dense deformable registration,” IEEE Trans. Med. Imag., vol. 36, no. 8, pp. 1746–1757, 2017.
 [17] Y. Shibayama, H. Arimura, T. A. Hirose, T. Nakamoto, T. Sasaki, S. Ohga et al., “Investigation of interfractional shape variations based on statistical point distribution model for prostate cancer radiation therapy,” Med. Phys., vol. 44, no. 5, pp. 1837–1845, 2017.
 [18] M. Nakao, J. Tokuno, T. Chen-Yoshikawa, H. Date, and T. Matsuda, “Surface deformation analysis of collapsed lungs using model-based shape matching,” Int. J. Comput. Assist. Radiol. Surg., vol. 14, no. 10, pp. 1763–1774, 2019.
 [19] M. Wilms, I. Y. Ha, H. Handels, and M. P. Heinrich, “Model-based regularisation for respiratory motion estimation with sparse features in image-guided interventions,” Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 89–97, 2016.
 [20] A. Yu, S. M. Woo, J. Joo, H. R. Yang, W. J. Lee, S. J. Park et al., “Development and validation of a prediction model to estimate individual risk of pancreatic cancer,” PLoS ONE, vol. 11, no. 1, e0146473, 2016.
 [21] G. Fontana, M. Riboldi, C. Gianoli, C. I. Chirvase, G. Villa, C. Paganelli et al., “MRI quantification of pancreas motion as a function of patient setup for particle therapy: a preliminary study,” J. Appl. Clin. Med. Phys., vol. 17, no. 5, pp. 60–75, 2016.
 [22] D. Zhang and K. Chen, “A novel diffeomorphic model for image registration and its algorithm,” J. Math. Imaging Vision, vol. 60, no. 8, pp. 1261–1283, 2018.
 [23] S. Oh and S. Kim, “Deformable image registration in radiation therapy,” Radiat. Oncol. J., vol. 35, no. 2, pp. 101–111, 2017.
 [24] S. Klein, M. Staring, K. Murphy, M. A. Viergever, and J. P. Pluim, “Elastix: A toolbox for intensitybased medical image registration,” IEEE Trans. Med. Imag., vol. 29, no. 1, pp. 196–205, 2010.
 [25] K. Robert, J. P. Anthony, S. Reith, T. Josephine, E. F. Katherine, P. Neil et al., “Correlation of liver and pancreas tumor motion with normal anatomical structures determined with deformable image registration,” Biomed. Phys. Eng. Exp., vol. 3, no. 1, e017001, 2017.
 [26] R. Jadon, C. A. Pembroke, C. L. Hanna, N. Palaniappan, M. Evans, A. E. Cleves et al., “A systematic review of organ motion and imageguided strategies in external beam radiotherapy for cervical cancer,” Clin. Oncol., vol. 26, no. 4, pp. 185–96, 2014.
 [27] T. Iwai, M. Nakao, M. Nakamura, and T. Matsuda, “A tracking method of invisible tumors using surrounding features on time series X-ray images,” Int. J. Comput. Assist. Radiol. Surg., vol. 12, no. 1, S206–207, 2017.
 [28] H. Teske, P. Mercea, M. Schwarz, N. H. Nicolay, F. Sterzing, and R. Bendl, “Real-time markerless lung tumor tracking in fluoroscopic video: Handling overlapping of projected structures,” Med. Phys., vol. 42, no. 5, pp. 2540–2549, 2015.
 [29] G. Whitfield, P. Jain, M. Green, G. Watkins, A. Henry, J. Stratford et al., “Quantifying motion for pancreatic radiotherapy margin calculation,” Radiother. Oncol., vol. 103, no. 3, pp. 360–366, 2012.
 [30] X. Xu, F. Zhou, B. Liu, D. Fu, and X. Bai, “Efficient multiple organ localization in CT image using 3D region proposal network,” IEEE Trans. Med. Imag., published online, 2019.
 [31] J. Kim, M. C. Valdes-Hernandez, N. A. Royle, and J. Park, “Hippocampal shape modeling based on a progressive template surface deformation and its verification,” IEEE Trans. Med. Imag., vol. 34, no. 6, pp. 1242–1261, 2015.

 [32] M. F. Beg, M. I. Miller, A. Trouvé, and L. Younes, “Computing large deformation metric mappings via geodesic flows of diffeomorphisms,” Int. J. Comput. Vision, vol. 61, no. 2, pp. 139–157, 2005.
 [33] A. Nealen, T. Igarashi, O. Sorkine, and M. Alexa, “Laplacian mesh optimization,” Proc. of 4th Int. Conf. on Computer Graphics and Interactive Techniques, pp. 381–389, 2006.
 [34] D. P. Huttenlocher, G. A. Klanderman, and W. A. Rucklidge, “Comparing images using the Hausdorff distance,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 9, pp. 850–863, 1993.
 [35] A. Pitiot, E. Bardinet, P. M. Thompson, and G. Malandain, “Piecewise affine registration of biological images for volume reconstruction,” Med. Image. Anal., vol. 10, no. 3, pp. 465–483, 2006.
 [36] J. Zhou, S. Kim, S. Jabbour, S. Goyal, B. Haffty, T. Chen et al., “A 3D globaltolocal deformable mesh model based registration and anatomyconstrained segmentation method for image guided prostate radiotherapy,” Med. Phys., vol. 37, no. 3, pp. 1298–1308, 2010.
 [37] A. Saito, M. Nakao, Y. Uranishi, and T. Matsuda, “Deformation estimation of elastic bodies using multiple silhouette images for endoscopic image augmentation,” IEEE Int. Symp. on Mixed and Augmented Reality, pp. 170–171, 2015.
 [38] Y. Wang, Z. Zhong, and J. Hua, “DeepOrganNet: Onthefly reconstruction and visualization of 3D/4D lung models from singleview projections by deep deformation network,” arXiv preprint, 2019.
 [39] S. Wu, M. Nakao, J. Tokuno, T. Chen-Yoshikawa, and T. Matsuda, “Reconstructing 3D lung shape from a single 2D image during the deaeration deformation process using model-based data augmentation,” IEEE Int. Conf. on Biomedical and Health Informatics (BHI), pp. 1–4, 2019.
 [40] M. Nakao, A. Saito, and T. Matsuda, “A simulation study on deformation estimation of elastic materials using monocular images,” Int. J. Comput. Assist. Radiol. Surg., vol. 12, no. 1, S257–258, 2017.