1 Introduction
Cardiovascular disease is one of the leading causes of death worldwide [21]. Cardiac CT angiography (CCTA) is widely adopted for the diagnosis of cardiovascular disease because of its noninvasiveness and high sensitivity [15]. In the clinic, doctors need a series of manual steps to obtain the diagnostic report, which is time-consuming. If a computer-aided diagnosis (CAD) system could generate the diagnostic report automatically, a huge amount of time would be saved. Automated anatomical labeling of the coronary artery tree extracted from the CCTA image is a prerequisite step in such an automated CAD system.
The coronary artery tree consists of two components, i.e., the left domain (LD) and the right domain (RD), both of which originate from the aorta. According to [23], the main coronary arteries of interest are the left main (LM), left anterior descending artery (LAD), left circumflex artery (LCX), ramus intermedius (RI), obtuse marginal (OM), diagonal artery (D), septal artery (S), right coronary artery (RCA), right posterior lateral branches (RPLB), right posterior descending artery (RPDA), and right acute marginal artery (AM) (shown in Figure 1). From prior knowledge, we know that the LAD, LCX, and RI stem from the LM, and that the RPLB, RPDA, and AM stem from the RCA. Likewise, the S and D branches stem from the LAD, while the OM branches stem from the LCX. This makes the whole coronary artery tree structured data. Normally, the RCA, LM, LAD, and LCX are treated as the main branches; the other branches are treated as side branches.
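The parent-child relations above can be written down directly as a small lookup structure. The sketch below is illustrative (the dictionary name and helper are ours, not from the paper); it encodes the prior-knowledge hierarchy just described and derives the side-branch set from it.

```python
# Hypothetical sketch: the prior-knowledge branch hierarchy described
# above, encoded as a parent -> children mapping. "AORTA" is used here
# only as an artificial root for the two domains.
CORONARY_HIERARCHY = {
    "AORTA": ["LM", "RCA"],          # both domains originate from the aorta
    "LM":    ["LAD", "LCX", "RI"],   # left domain
    "LAD":   ["D", "S"],
    "LCX":   ["OM"],
    "RCA":   ["RPLB", "RPDA", "AM"], # right domain
}

MAIN_BRANCHES = {"LM", "LAD", "LCX", "RCA"}

def side_branches(tree):
    """All branch labels that are not treated as main branches."""
    labels = {child for children in tree.values() for child in children}
    return labels - MAIN_BRANCHES
```

With this encoding, `side_branches(CORONARY_HIERARCHY)` yields exactly the seven side-branch classes listed in the text.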
Several anatomical labeling techniques have been developed for coronary arteries [23, 3, 22], brain arteries [1], abdominal arteries [18, 25], and airways [10]. However, as shown in Figure 1, the coronary arteries vary greatly among subjects, which is the main challenge for a labeling system: the number of branches, the length and size of each branch, and the direction each branch spans all vary from person to person. Both [23] and [3] rely on registration algorithms and prior knowledge. They first identified the four main branches (i.e., LM, LAD, LCX, and RCA) and then labeled the side branches (e.g., AM, RPDA, D, etc.). Finally, the results were refined by logical rules translated from clinical experience. But these conventional methods are not data-driven, i.e., they cannot leverage the advantage of big data. Recently, in [22], the authors proposed a novel deep neural network (TreeLab-Net), which learns from position features extracted from the coronary artery centerlines to label the segments. Although this model is data-driven, it only utilizes position information from the vessel centerlines and leaves the full information in the CCTA images aside. In addition, the input for the TreeLab-Net is built from the topological structure, so missing main branches (e.g., LM, RCA, etc.) can strongly affect the labeling of the side branches. Therefore, a robust and self-adaptive model is needed for this structured data.
In the deep learning field, the study of Euclidean data, where elements are treated equally, is quite mature. For structured data, however, there are normally two viewpoints. In [5, 4], the authors viewed structured data from a manifold-valued aspect. In [14, 12, 26], where structured data is viewed as a graph, the authors introduced graph models with nodes and edges, which extract information from the relationships between nodes. It is natural to treat the coronary arteries as a tree because of the paths the branches follow and the connections among them.
In this paper, we propose a conditional partial-residual graph convolutional network (CPR-GCN), which makes full use of both the position information and the 3D image information in the CCTA volume. The partial-residual block is applied to the position domain features to enhance them. We also use a 3D convolutional neural network (CNN) together with a bidirectional long short-term memory network (BiLSTM) to extract features along each branch as the conditions for the graph model. These two parts compose the CPR-GCN, which can be trained end-to-end.
In summary, our main contributions are as follows:

We propose the CPR-GCN, a conditional partial-residual graph convolutional network, which labels the coronary artery tree end-to-end.

To the best of our knowledge, this is the first work to take 3D image features into consideration in the coronary artery labeling field.

Our CPR-GCN and the hybrid model (i.e., 3D CNNs followed by BiLSTMs) can be jointly trained. We evaluate the CPR-GCN on a large privately collected dataset, where our approach outperforms the state-of-the-art result.
2 Related Work
Most related work can be divided into two categories: traditional methods and deep learning based methods. Traditional methods are based on prior knowledge and the topology of the coronary arteries; they normally require two steps, registration and correction. With the development of deep learning, methods on graph-structured data have also emerged. These methods extract features as nodes and train a model on the acquired data, so they depend heavily on the size and quality of the dataset.
2.1 Traditional methods
Most traditional methods are based on registration. In [23], the authors presented a two-step method: in the registration step, the main branches (LM, LAD, LCX, and RCA) are identified, and the remaining branches are matched afterward. In [3], they built 3D models for both right-dominant and left-dominant circulation. The 3D coronary trees from subjects are aligned with these models to obtain the label of each segment. They also applied logical rules to incorporate clinical experience.
However, traditional methods rely heavily on the main branches. If the main branches are missed by the automated segmentation system, the performance deteriorates dramatically. They also require prior knowledge about how the coronary tree spans, so missing sub-branches affect the topology and the performance. Finally, all these traditional methods involve human interpretation, meaning that if the topology is too complicated, the automatic system reports that it is not capable of determining the labels.
2.2 Deep learning based methods
With the power of deep learning, the TreeLab-Net was developed in [22]. The authors combined a multi-layer perceptron encoder network and a bidirectional tree-structural LSTM to construct the TreeLab-Net. They used several selected position features and stacked Child-Sum Tree LSTMs as the components. The left and right coronary arteries are trained independently.
In this method, missing branches cause a massive problem, since they change the layer index of the nodes inside the tree. With the tree-structured model, messages only pass between adjacent layers, so the closer a branch is to the root node, the higher its impact. The left and right trees are also separated with rules and thresholds. Moreover, this method makes the strong assumption that branches only bifurcate, so that each node in the tree structure has at most two children. But in the coronary arteries, it is quite common for several sub-branches to bifurcate from points near each other. The node for the parent branch then turns out to have more than two children, which is beyond the capacity of the TreeLab-Net.
In a broad sense, the TreeLab-Net is a simplified version of a graph model. In [14], the authors proposed Graph Convolutional Networks (GCN), which operate directly on graphs: by projecting the graphs into the Fourier domain, they defined the convolution operator and filter kernels using Chebyshev polynomials. In [17], the authors modified Graph Neural Networks (GNN) [19] to use gated recurrent units.
Even though these graph models have been successful in molecular fingerprints [7] and protein interface prediction [8], they have not been used for labeling coronary arteries. Graph models also suffer from the shallow-structure problem, since stacking multiple GCN layers results in over-smoothing [16].
3 Our Approach
In this section, we detail the CPR-GCN, which makes full use of both the CCTA images and the positions of the coronary artery centerlines. As shown in Figure 2, our method considers both the CCTA images and the position features mentioned in [22]. The centerlines are extracted using the automated coronary artery tracking system [24]. Our CPR-GCN then extracts features from the centerlines within the SCT block. In the image domain, we use the subsampled control points on the centerlines to obtain moving cubes with a fixed radius along each branch as the image domain data. The conditions for our CPR-GCN model are obtained with the 3D CNN and the BiLSTM. The detailed architecture of the CPR-GCN model is shown in Table 1.
Block            Details
SCT              first, middle, and last points
                 tangent direction and first-last direction
3D CNN           kernel size = 3, in channels = 1, out channels = 16
                 max-pooling size = 2
                 kernel size = 3, in channels = 16, out channels = 32
                 max-pooling size = 2
                 kernel size = 3, in channels = 32, out channels = 64
                 max-pooling size = 2
BiLSTM           layers = 4, hidden size = 128
CPR-GCN          out channels = 256
                 out channels = 256
                 out channels = 256
Fully connected  out channels = 128
                 out channels = # of classes

Table 1: Detailed architecture of the CPR-GCN.
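The 3D CNN column of Table 1 determines the feature size fed to the BiLSTM. The following sketch walks a cube side length through the three conv/max-pool stages; the assumption of "same" convolutions (padding 1 for kernel 3) is ours, since Table 1 does not state the padding.

```python
# Shape walk-through of the 3D CNN in Table 1: three stages of
# (conv, kernel 3) followed by (max-pool, size 2), channels 1->16->32->64.
# Padding = 1 ("same" convolution) is an assumption, not from the paper.
def cnn_output_shape(cube_side, stages=((1, 16), (16, 32), (32, 64))):
    s = cube_side
    for _in_ch, out_ch in stages:
        s = s + 2 * 1 - 3 + 1   # conv with kernel 3, padding 1: size unchanged
        s = s // 2              # max-pool with size 2 halves the side
    return out_ch, s            # (channels, spatial side) of the feature map

channels, side = cnn_output_shape(16)
# a hypothetical 16^3 input cube leaves a 64-channel feature map of side 2
```

The per-cube feature vector (here 64 x side^3) is what gets flattened and passed, cube by cube, into the 4-layer BiLSTM with hidden size 128.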
3.1 Position domain features
In [22], the authors introduced a 2D spherical coordinate transform (SCT2D), which transforms 3D positions into the azimuth and elevation angles (φ, θ). They argued that this normalizes the variance of centerlines in the default Cartesian coordinate system. However, it is noticeable that φ and θ are periodic. With the range of the angle limited to [0, 2π), a small amount of jitter due to noise near the angle 0 turns into a difference of nearly 2π.
A similar idea of using a spherical coordinate transform is applied in our approach. Since each branch is processed separately, to obtain the azimuth and elevation angles we need to define an origin and axes for each branch. For each branch, the first control point is chosen as the origin. The direction from the first point to the second point defines one coordinate axis, and the vector from the first point to the last point of the centerline fixes a coordinate plane containing that axis.
To overcome the instability due to the periodicity, we use the manifold S² to represent (φ, θ). The manifold S² is the sphere with unit radius in ℝ³. A trivial method is to use the vector [sin θ cos φ, sin θ sin φ, cos θ]^T. Because of the periodicity of φ and θ, this representation is stable on the whole manifold S². This kind of spherical coordinate transformation is called SCT in the rest of this paper. The transformation between the Cartesian coordinates and the manifold representation is given in Eq. 1.
[sin θ cos φ, sin θ sin φ, cos θ]^T = (1/r) [x, y, z]^T,  where r = √(x² + y² + z²)    (1)
We use features similar to those in [22]: (1) the S² projection and the normalized 3D positions of the first, center, and last points; (2) the directional vector between the first and last points and the tangential direction at the start point, both in 3D and on S².
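The SCT above can be sketched in a few lines: a point is moved into a branch-local frame and normalized onto S², so two nearly identical directions stay close even where the raw azimuth angle would wrap around. The function and argument names are illustrative, not from the paper's code.

```python
import numpy as np

# Sketch of the SCT: represent a point by its unit vector on S^2 in a
# branch-local frame, instead of by periodic azimuth/elevation angles.
def sct(point, origin, basis):
    """Map a 3D point to a unit vector on S^2.

    `basis` is a 3x3 orthonormal matrix whose rows are the branch-local
    axes (built from the first control points of the branch, see text).
    """
    local = basis @ (np.asarray(point, dtype=float) - np.asarray(origin, dtype=float))
    r = np.linalg.norm(local)
    return local / r if r > 0 else local
```

As a sanity check, two directions straddling azimuth 0 differ by almost 2π as angles in [0, 2π) but are nearly identical as S² vectors, which is exactly the instability the manifold representation removes.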
3.2 Image domain conditions
Most medical images, including magnetic resonance imaging (MRI) and the CCTA we use, are 3D. Unlike other images, the branches have a sequential dependency. Thus we use a 3D CNN to extract the spatial features and a BiLSTM afterward to summarize the tubular sequential features. An example of processing the RPLB branch is shown in Figure 3. The z dimension of a CCTA image indexes the slices and might have a different spacing from the x and y dimensions, so we resample all the CCTA images to have the same spacing along all three dimensions.
Using the automatic segmentation method, we obtain the centerlines of all branches, which together build a coronary artery tree. We separate the branches where the centerlines end or bifurcate. If the starting points of child branches are close to each other, we consider all these child branches to stem from the same point on the parent branch. The control points of the centerlines are smoothed using the Catmull-Rom spline [20]. Finally, the control points are subsampled with the same spacing.
The image domain data consists of cubes with a fixed radius around each subsampled control point of a branch. Three layers of 3D CNN and 3D max-pooling are used to extract the features of these cubes. The weights of the CNN are shared among the segments. In order to train the model in a mini-batch manner, these feature vectors are padded to the same length as the input of the multi-layer bidirectional LSTM [9]. The last hidden state is treated as the final condition C, which represents the image information of the branch. We treat this as conditional information, since the images are of less importance than the position domain features.
3.3 Partial-residual block of GCN
The layer-wise propagation rule of the traditional GCN is given in Eq. 2. In a multi-layer GCN, the node feature matrix X ∈ ℝ^(N×D) is the input for the first layer, H^(0) = X. Here, N is the number of nodes and D is the dimension of the features of each node. A is the adjacency matrix of the graph, and W^(l) is the layer-wise trainable weight matrix. σ(·) is the activation function; in this paper, we choose ReLU as our activation function to include nonlinearity.
H^(l+1) = σ(D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l))    (2)
Here Ã = A + I_N is the adjacency matrix with added self-loops, and D̃ is its diagonal degree matrix. The input for the first layer is the node feature matrix.
Our conditional partial-residual block requires both the position features P and the CCTA image domain conditions C. The combination of the features and conditions from the two domains is used to represent the nodes in the graph model. The edges are defined by the parent-child relationship. As mentioned above, the topology of the whole coronary tree is collected from the centerlines. Whenever a branch bifurcates, we treat the parent and child branches as three (or more) nodes, with edges from the parent branch to the child branches. This builds the graph of the subject with adjacency matrix A. As shown in Figure 4, the RCA first gives off the AM and then bifurcates into the RPLB and RPDA, so the extracted graph has 4 edges and 5 nodes.
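The Figure 4 example and one step of the Eq. 2 propagation can be sketched as follows. The segment names ("RCA-1", "RCA-2") are hypothetical labels for the two RCA segments created by the first bifurcation; the layer itself is a plain numpy rendering of the normalized propagation rule.

```python
import numpy as np

# The RCA example as a directed graph (5 nodes, 4 edges); node order
# and the segment names "RCA-1"/"RCA-2" are illustrative.
nodes = ["RCA-1", "AM", "RCA-2", "RPLB", "RPDA"]
edges = [("RCA-1", "AM"), ("RCA-1", "RCA-2"),
         ("RCA-2", "RPLB"), ("RCA-2", "RPDA")]

idx = {n: i for i, n in enumerate(nodes)}
A = np.zeros((5, 5))
for u, v in edges:
    A[idx[u], idx[v]] = 1.0

A_tilde = A + np.eye(5)                       # add self-loops
D_inv_sqrt = np.diag(A_tilde.sum(1) ** -0.5)  # degree normalization

def gcn_layer(H, W):
    # One propagation step: H' = ReLU(D~^-1/2 A~ D~^-1/2 H W)
    return np.maximum(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W, 0.0)
```

The self-loops guarantee every node has degree at least 1, so the normalization is always well defined.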
In [11], the authors argued that the deep residual learning framework (Figure 5 (a)) helps performance. Instead of directly learning the mapping H(x), the neural network learns the residual part F(x) = H(x) − x, assuming the input and the output have the same dimension. When the output dimension changes, a straightforward method is to add a linear projection W_s on the shortcut connection:
y = F(x, {W_i}) + W_s x    (3)
In [2], the authors extended the idea of the residual connection to the Residual Gated Graph ConvNets. The updated propagation rule is:
H^(l+1) = σ(D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l)) + H^(l)    (4)
If we view the residual connection in Eq. 4 as a discretization of a continuous function of the layer index t and add enough layers, then in the limit, [6] parameterizes the continuous dynamics of the hidden units using an ordinary differential equation (ODE):
dh(t)/dt = f(h(t), t, θ)    (5)
In our setup, we have two kinds of input: the position domain features P and the CCTA image domain conditions C. If we treat t as the layer index and H(P, C) as the function of the layer in Eq. 5, we have the partial differential equation (PDE) on H:
dH(P, C)/dt = (∂H/∂P)(dP/dt) + (∂H/∂C)(dC/dt)    (6)
(∂H/∂C)(dC/dt) = 0    (7)
dH(P, C)/dt = (∂H/∂P)(dP/dt)    (8)
Eq. 7 is based on the fact that we treat C as conditions that do not change across layers. We use a trainable graph convolution f to approximate the partial differential (∂H/∂P)(dP/dt). If we take a unit step Δt = 1 and approximate dH/dt ≈ H^(l+1) − H^(l), we have
H^(l+1) = H^(l) + f(H^(l), C)    (9)
as the discrete numerical estimate. In our case, as shown in Figure 5 (b), the discrete partial-residual block of the CPR-GCN takes the weighted P on the shortcut because of the change of channel sizes. If we push our PDE a bit further, we can have:
H(P, C) = P + ∫₀^L f(H, C, t) dt    (10)
Here, the P should also be weighted for flexibility in the number of channels.
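A minimal numpy sketch of one discrete partial-residual block, under our reading of Eq. 9 and Figure 5 (b): the graph convolution acts on the concatenated features and conditions, while only the (linearly projected) position features are carried over the shortcut. All dimensions and the weight initialization are placeholders.

```python
import numpy as np

# Sketch of one discrete partial-residual block: the GCN part sees the
# concatenation [P || C]; the shortcut carries only P, projected by W_s
# to match the output channel size (cf. Eq. 3 and Eq. 9).
def partial_residual_block(P, C, A_hat, W, W_s):
    """H = ReLU(A_hat [P || C] W) + P W_s, with A_hat the normalized
    adjacency (D~^-1/2 A~ D~^-1/2) precomputed by the caller."""
    H = np.concatenate([P, C], axis=1)
    return np.maximum(A_hat @ H @ W, 0.0) + P @ W_s
```

Stacking three such blocks (out channels = 256 each, per Table 1) gives the CPR-GCN column of the architecture.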
3.4 Data Flow
The algorithm of our CPR-GCN is shown in Alg. 1. The CCTA image, the centerlines, and the reference labels compose a training sample of the model. We first build the graph via prior knowledge from the centerlines. Then the position domain features P and the image domain conditions C are extracted via SCT and the hybrid network (i.e., 3D CNN followed by BiLSTMs), respectively. We concatenate P and C as the input of the GCN layers, with P as the shortcut in the residual connection. Finally, a fully connected layer predicts the labeling, and our objective is to minimize the cross-entropy between the predicted and reference label distributions.
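The training objective at the end of this pipeline is a standard cross-entropy over the per-segment class logits. A numerically stable sketch (the log-sum-exp trick is our implementation choice, not prescribed by the paper):

```python
import numpy as np

# Cross-entropy between predicted logits (one row per segment) and the
# reference labels, as minimized at the end of Alg. 1.
def cross_entropy(logits, labels):
    z = logits - logits.max(axis=1, keepdims=True)      # for stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

When the logit of the correct class dominates, the loss approaches zero, which is the behavior the end-to-end training drives toward.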
4 Experimental Results
          Number  Avg. (std)    Max  Min
Branches  4929    9.65 (2.13)   15   3
Segments  6760    13.23 (3.55)  22   3
Edges     5714    11.18 (3.56)  20   2

Table 2: Statistics of the branches, segments, and edges in our dataset.
Method            Metric     RCA    RPDA   RPLB   AM     LM     LAD    LCX    RI     D      OM     S      Avg (std)
Conventional [3]  Recall     0.918  0.850  0.852  0.893  0.984  0.911  0.832  0.848  0.799  0.720  0.835  0.859 (0.066)
                  Precision  0.925  0.835  0.860  0.871  0.991  0.929  0.810  0.803  0.781  0.739  0.865  0.855 (0.069)
                  F1         0.922  0.842  0.856  0.882  0.987  0.920  0.821  0.825  0.789  0.730  0.850  0.857 (0.067)
TreeLab-Net [22]  Recall     0.950  0.858  0.818  0.871  0.996  0.948  0.913  0.770  0.816  0.805  0.862  0.873 (0.067)
                  Precision  0.948  0.823  0.842  0.871  0.970  0.937  0.936  0.714  0.841  0.807  0.859  0.868 (0.072)
                  F1         0.949  0.840  0.830  0.871  0.983  0.942  0.924  0.741  0.829  0.807  0.860  0.871 (0.069)
Our CPR-GCN       Recall     0.994  0.930  0.944  0.991  0.994  0.990  0.982  0.921  0.936  0.896  0.954  0.958 (0.033)
                  Precision  0.987  0.946  0.947  0.983  0.984  0.986  0.971  0.896  0.883  0.933  0.974  0.954 (0.035)
                  F1         0.990  0.938  0.945  0.987  0.989  0.988  0.976  0.909  0.909  0.914  0.964  0.955 (0.032)

Table 3: Per-branch labeling performance on our dataset.
4.1 Dataset and Evaluation Metrics
To the best of our knowledge, no public dataset with CCTA images and coronary artery labeling annotations is available to date. Previous works [23, 3, 22] all collected private experimental datasets from the clinic. For instance, the conventional methods [23, 3] only used 58 and 83 subjects respectively, while the deep learning based method [22] used a larger dataset with 436 subjects. In this study, we collected the largest relevant dataset from the clinic. All vessel centerlines are first extracted using [24]. The dataset contains 511 subjects, all annotated by two experts in a two-phase process: each expert labels every branch independently in the first round; then the annotation results are merged, and the experts discuss the inconsistent ones to reach a final label. These 511 subjects and the corresponding annotations compose our experimental dataset. The average number of branches per subject is 9.65, with a standard deviation (std) of 2.13. The largest number of branches is 15, and the smallest is 3. After bifurcation and separation, the average number of segments is 13.23. Details are given in Table 2. The edges in the table represent the relationships in the graph we build for each subject; the average number of edges per graph is 11.18.
The evaluation is performed on all branch segments using the predicted labels and the ground truth labels. The recall rate for each class c is Recall_c = TP_c / (TP_c + FN_c), the precision is Precision_c = TP_c / (TP_c + FP_c), and the F1 score is F1_c = 2 · Precision_c · Recall_c / (Precision_c + Recall_c). Since the numbers of segments in the classes are imbalanced, we also use the mean metrics over all classes, e.g., meanRecall = (1/K) Σ_c Recall_c for K classes, and similarly for the other metrics.
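The metric definitions above can be stated as a short reference implementation; the function names are ours.

```python
# Per-class recall/precision/F1 and their class-averaged ("mean")
# versions, matching the definitions in the text.
def per_class_metrics(y_true, y_pred, classes):
    out = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        rec = tp / (tp + fn) if tp + fn else 0.0
        pre = tp / (tp + fp) if tp + fp else 0.0
        f1 = 2 * pre * rec / (pre + rec) if pre + rec else 0.0
        out[c] = (rec, pre, f1)
    return out

def mean_metrics(metrics):
    # Average over classes, so rare side branches count as much as LM/RCA.
    k = len(metrics)
    return tuple(sum(m[i] for m in metrics.values()) / k for i in range(3))
```

Averaging over classes rather than over segments is what keeps the imbalanced side branches visible in the reported numbers.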
4.2 Implementation Details
Hyperparameter selection. The CCTA images are rescaled to a fixed voxel spacing. The radius of a branch usually spans only a few voxels at this spacing, so the radius of the cube is chosen to be somewhat larger in order to keep the angle, size, and texture information. The subsampling rate for the centerline positions is chosen so that neighboring cubes overlap while the sequential information along the branches is kept.
Training. The dataset is randomly and equally divided into five subsets. In the training stage, we use a leave-one-subset-out (i.e., five-fold) cross-validation strategy to evaluate all subjects in the dataset. The proposed CPR-GCN model has two trainable components, i.e., the 3D CNN followed by LSTMs (3D CNN module) and the GCNs followed by an FC layer (GCN module). A series of 3D image cubes extracted along the vessel centerlines from the 3D CCTA image is the input of the 3D CNN module. The position domain features extracted from the vessel tree via SCT are concatenated with the output of the 3D CNN module. The GCN module takes these combined features as input and predicts the label for every segment. The reference labels are needed to compute the cross-entropy loss with the predicted labels. The 3D image cubes, the position domain features, and the reference labels compose the training samples. Our CPR-GCN model is trained in an end-to-end manner.
The algorithm is implemented using PyTorch and runs on an NVIDIA Tesla P100 GPU. We use the Adam optimizer [13] with an initial learning rate of 0.001. Each mini-batch contains 8 coronary artery trees. For each training period, we train the CPR-GCN model for up to 200 epochs, which takes 2.7 hours. So the total training time for the five-fold cross-validation is 13.5 hours.
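The cross-validation split described above can be sketched as follows; the seed and the interleaved split are our choices, the text only specifies a random, equal division into five subsets.

```python
import random

# Five-fold split of the 511 subjects: shuffle, then divide into five
# nearly equal subsets; each fold is held out in turn while the other
# four are used for training (Adam, lr 0.001, minibatches of 8 trees,
# up to 200 epochs per fold, per the text).
def five_fold_split(subject_ids, seed=0):
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)
    return [ids[i::5] for i in range(5)]
```

Every subject lands in exactly one fold, so evaluating each held-out fold covers the whole dataset once.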
Testing. We first choose the best model in each fold according to the overall testing precision, ignoring classes. Then we use each model to evaluate the corresponding testing data. During inference, the average time spent by the CPR-GCN model is 0.045 s per case, which is of great importance for clinical utilization.
Results. Since there is no public dataset in this field and conventional methods only evaluated their performance on small private datasets, we reproduced both the conventional method and the deep learning based TreeLab-Net. Considering that the conventional methods [23, 3] mainly rely on registration and prior knowledge, we only reproduce [3], which is an improvement on [23]. Table 3 reports the detailed performance on our dataset. Our CPR-GCN achieves the highest mean recall of 0.958, mean precision of 0.954, and mean F1 of 0.955, outperforming the other methods by a large margin.
All the models perform well on the main branches. But the side branches are also a crucially important part of automated anatomical labeling in a CAD system. In our approach, we treat the main branches and side branches equally, so compared with the other two-step methods, most of the side branches, such as OM and RPDA, perform better in our approach. Since the number of segments of main branches like LM and RCA is relatively large, the performance on these main branches is still better. For the side branches, especially D and OM, the number of samples in the dataset is relatively small, so the performance is slightly worse than on the main branches.
4.3 Ablation Study
To verify that all the components of the CPR-GCN contribute, we design ablation experiments.
Image domain conditions. One of the major differences between our approach and the other methods is that we use the extra information from the CCTA image domain. As shown in the second column of Table 5, if we remove only the image domain conditions, the metrics, i.e., mean recall, mean precision, and mean F1, drop by over 2%. For instance, the mean F1 score is 0.934, compared with 0.955 for the full CPR-GCN.
Residual GCN connection. Our approach also introduces the partial-residual connection in the graph model. In this part, we remove only the residual connection in the GCN block and keep the other settings the same as in the CPR-GCN. The third column in Table 5 shows that the mean F1 score drops to 0.947. This suggests that, with the help of the residual connection, our model can absorb the features from both the position and image domains while keeping the original position domain features.
Undirected graph. In this part, we build an undirected graph, i.e., we add the opposite edges to the original graph. Detailed results are given in the fourth column of Table 5. Although the average metrics over classes (i.e., mean recall, mean precision, and mean F1) are slightly worse than our best result, several classes (e.g., LM, LAD, RPDA) have higher precision with the undirected graph than with the directed graph. It is worth noting that we select the directed graph to achieve higher average metrics; in clinical practice, the choice can be made according to the requirements of the doctors.
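Building the undirected variant amounts to symmetrizing the directed adjacency matrix, which can be sketched in one line:

```python
import numpy as np

# Undirected-graph ablation: add the reverse of every directed edge by
# symmetrizing the adjacency matrix.
def to_undirected(A):
    return np.maximum(A, A.T)
```

Self-loops and normalization are then applied to the symmetrized matrix exactly as in Eq. 2.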
Repeated GCN blocks. [16] reports that stacking multiple GCN layers results in over-smoothing. We conduct an experiment to answer how the number of GCN blocks influences the performance. Considering the connection complexity of the coronary tree graph, we evaluate our CPR-GCN with 1, 2, 3, and 4 GCN layers, respectively. Figure 6 shows that stacking GCN blocks improves the performance, especially from 1 GCN block to 2. However, the model with 4 GCN blocks shows no improvement over 3, so we use 3 GCN blocks in our CPR-GCN. The depth of the graph for a coronary artery tree is mostly less than 4, which might explain this result.
Method            Dataset    meanRecall  meanPrecision  meanF1
TreeLab-Net [22]  Original   0.873       0.868          0.871
                  Synthetic  0.806       0.799          0.802
Our CPR-GCN       Original   0.958       0.954          0.955
                  Synthetic  0.931       0.928          0.929

Table 4: Performance on the original and the synthetic dataset.
       Without image domain conditions  Without residual connection      Undirected graph                 Our CPR-GCN
       Precision  Recall  F1 Score      Precision  Recall  F1 Score      Precision  Recall  F1 Score      Precision  Recall  F1 Score
LM     0.990      0.994   0.992         0.959      0.984   0.971         0.996      0.998   0.997         0.984      0.994   0.989
LAD    0.977      0.983   0.980         0.975      0.978   0.976         0.987      0.993   0.990         0.986      0.990   0.988
LCX    0.938      0.964   0.951         0.946      0.963   0.955         0.963      0.980   0.971         0.971      0.982   0.977
RI     0.880      0.904   0.892         0.917      0.871   0.893         0.875      0.904   0.890         0.896      0.921   0.909
RCA    0.982      0.989   0.986         0.980      0.981   0.980         0.989      0.994   0.992         0.987      0.994   0.991
D      0.842      0.877   0.859         0.883      0.934   0.908         0.889      0.924   0.906         0.883      0.936   0.909
S      0.954      0.954   0.954         0.979      0.948   0.963         0.967      0.937   0.952         0.974      0.954   0.964
OM     0.830      0.814   0.822         0.915      0.913   0.914         0.912      0.904   0.908         0.933      0.896   0.914
RPDA   0.938      0.927   0.933         0.952      0.950   0.951         0.960      0.939   0.949         0.947      0.944   0.945
RPLB   0.925      0.925   0.925         0.951      0.930   0.941         0.947      0.953   0.950         0.946      0.930   0.938
AM     0.977      0.977   0.977         0.968      0.971   0.969         0.988      0.997   0.993         0.983      0.994   0.987
Avg.   0.930      0.937   0.934         0.948      0.947   0.947         0.952      0.957   0.954         0.954      0.958   0.955
Std.   0.054      0.053   0.053         0.037      0.036   0.035         0.038      0.036   0.036         0.035      0.033   0.032

Table 5: Ablation study results.
4.4 Synthetic "Data Attack"
We argue that our CPR-GCN is more robust when a main branch in the vessel tree is missing. To test this, we build a synthetic dataset from ours by randomly removing 20% of the LM and RCA branches. Most of the other side branches (e.g., LCX, LAD, RI, AM, etc.) directly originate from those two branches. In this new synthetic dataset, 295 RCA and LM branches are removed, and 1123 of the 6760 vessel segments directly connect with these 295 missing branches. Since the conventional methods [23, 3] strictly rely on the main branches, we only evaluate the trained CPR-GCN and TreeLab-Net [22] on this synthetic dataset. As shown in Table 4, our CPR-GCN drops almost 2.6% while the TreeLab-Net drops almost 6.7% in the three average metrics. This demonstrates that our method is more robust.
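The attack itself is a simple per-branch coin flip over the LM and RCA labels; a minimal sketch (the data layout, one list of labeled branches per subject, is our assumption):

```python
import random

# Sketch of the synthetic "data attack": drop each LM/RCA branch with
# probability 20% before evaluation; all other branches are kept.
def remove_main_branches(trees, frac=0.2, seed=0):
    rng = random.Random(seed)
    attacked = []
    for tree in trees:  # tree: list of (label, branch) pairs per subject
        kept = [(lab, br) for lab, br in tree
                if lab not in ("LM", "RCA") or rng.random() >= frac]
        attacked.append(kept)
    return attacked
```

On the real dataset this procedure removed 295 LM/RCA branches, which is what Table 4 evaluates against.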
5 Discussion
As shown above, our approach achieves the state-of-the-art result. The CPR-GCN takes the image domain information as conditions. To stress the importance of the features from the position domain, we introduced the partial-residual block on the position domain features.
Image domain information
As far as we know, we are the first to include image domain information as conditions. The results above suggest that even though the position domain features are important for automated anatomical labeling of coronary arteries, the CCTA images contribute more than just semantic segmentation: for example, the vessel sizes and shrinking points are not visible in the centerline-based position features but may be extracted from the images. So even though the positions themselves carry relatively rich information for this problem, all the other methods lack the ability to absorb information from the original CCTA data.
Partialresidual block
One of the main contributions of our approach is the partial-residual block. In a traditional residual block, all dimensions of the input are treated equally. But in the coronary artery labeling problem, the positions have proven to play an important role. To stress this importance while still using the extra information inside the CCTA images, we use the partial-residual block to treat the image domain information as extra conditions of the model. In the ablation study, we observe that this structure improves the mean F1 score from 0.947 to 0.955 and makes the model more stable.
Robustness
Our CPR-GCN is purely data-driven, so it can make use of an enormous amount of data. Without prior knowledge and hard-coded rules, the CPR-GCN is more robust to noise. All the nodes in our graph, which represent different segments, carry the same weight in the CPR-GCN, so a wrong classification of the LM or another major branch has less chance to spread to all the other branches. To support this claim, we conducted the synthetic "data attack" experiment; the results show that a missing main branch has less impact on our CPR-GCN than on the other deep learning method.
Disadvantages and future work
There is still future work for this problem. Since we treat every branch equally, the imbalance among branches is an issue: the main branches perform better than the side branches because the numbers of samples differ. Some tiny branches will also be missing after segmentation, which increases the imbalance. In addition, this model uses a discrete approximation of the PDE; in the future, the model could be pushed toward the continuous formulation.
6 Conclusion
In this paper, we propose the end-to-end conditional partial-residual graph convolutional network (CPR-GCN) model for automated anatomical labeling of coronary arteries, a task with few available alternatives. Compared with the traditional methods and the recent deep learning based methods, our approach achieves the state-of-the-art result. We show that, with the conditional partial-residual block, information from both the position domain and the CCTA image domain can be taken into consideration. On the experimental side, we show that our CPR-GCN is more robust and flexible than the alternatives. The results also show that the CCTA image domain matters in coronary artery labeling. Importantly, our algorithmic contributions facilitate the CAD system.
References

[1] (2013) Anatomical labeling of the circle of Willis using maximum a posteriori probability estimation. IEEE Transactions on Medical Imaging 32(9), pp. 1587–1599.
[2] (2017) Residual gated graph convnets. arXiv preprint arXiv:1711.07553.
[3] (2017) Automatic identification of coronary tree anatomy in coronary computed tomography angiography. The International Journal of Cardiovascular Imaging 33(11), pp. 1809–1819.
[4] (2019) A deep neural network for manifold-valued data with applications to neuroimaging. In International Conference on Information Processing in Medical Imaging, pp. 112–124.
[5] (2018) A statistical recurrent model on the manifold of symmetric positive definite matrices. In Advances in Neural Information Processing Systems, pp. 8883–8894.
[6] (2018) Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pp. 6571–6583.
[7] (2015) Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pp. 2224–2232.
[8] (2017) Protein interface prediction using graph convolutional networks. In Advances in Neural Information Processing Systems, pp. 6530–6539.
[9] (2005) Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks 18(5-6), pp. 602–610.
[10] (2012) Automated lobe-based airway labeling. Journal of Biomedical Imaging 2012, pp. 1.
[11] (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
[12] (2015) Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163.
[13] (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[14] (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
[15] (2014) SCCT guidelines for the interpretation and reporting of coronary CT angiography: a report of the Society of Cardiovascular Computed Tomography Guidelines Committee. Journal of Cardiovascular Computed Tomography 8(5), pp. 342–358.
[16] (2018) Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI Conference on Artificial Intelligence.
[17] (2015) Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493.
[18] (2015) Automated anatomical labeling of abdominal arteries and hepatic portal system extracted from abdominal CT volumes. Medical Image Analysis 20(1), pp. 152–161.
[19] (2008) The graph neural network model. IEEE Transactions on Neural Networks 20(1), pp. 61–80.
[20] (2003) Catmull-Rom splines. Computer 41(6), pp. 4–6.
[21] (2008) The top ten causes of death – fact sheet N310.
[22] (2019) Automated anatomical labeling of coronary arteries via bidirectional tree LSTMs. International Journal of Computer Assisted Radiology and Surgery 14(2), pp. 271–280.
[23] (2011) Automatic coronary artery tree labeling in coronary computed tomographic angiography datasets. In 2011 Computing in Cardiology, pp. 109–112.
[24] (2019) Discriminative coronary artery tracking via 3D CNN in cardiac CT angiography. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 468–476.
[25] (2013) Automatic anatomical labeling of abdominal arteries for small bowel evaluation on 3D CT scans. In 2013 IEEE 10th International Symposium on Biomedical Imaging, pp. 210–213.
[26] (2018) Graph neural networks: a review of methods and applications. arXiv preprint arXiv:1812.08434.