Coronary CT angiography (CCTA) images provide valuable information to determine the anatomical or functional severity of coronary artery stenosis. Methods for stenosis detection and blood flow simulation typically require highly personalized coronary lumen surface meshes with sub-voxel accuracy. Because manual segmentation of the full coronary artery tree in a CCTA image would hardly be feasible, such meshes are typically extracted using automatic or semi-automatic methods [8, 10, 2]. Deep learning-based segmentation could further improve such methods, but widely adopted voxel-based segmentation methods do not meet the requirements of downstream applications, i.e. sub-voxel accuracy and segmentation of the lumen as a contiguous structure.
An alternative to voxel segmentation is to incorporate a shape prior, i.e. to exploit the fact that an individual vessel or vessel segment has a roughly tubular shape. The wall of this tube can be deformed to match the visible lumen in the CCTA image and obtain an accurate segmentation. In one variant of this approach, a tubular surface mesh along the coronary artery centerline is considered and the spatial location of each mesh vertex is predicted so that the surface closely follows the artery wall. Lugauer et al. used probabilistic boosting trees to predict these locations based on steerable image texture features extracted from 2D cross-sectional images. As predictions were made based on 2D cross-sectional information, additional smoothing of the surface mesh was required through graph-cut optimization. Freiman et al. used a similar approach, but enforced smoothness of the obtained surface by penalizing jumps between neighboring vertices.
In this work, we propose to directly optimize the location of the tubular surface mesh vertices using graph convolutional networks (GCNs) [4, 3]. GCNs are a recent development in deep learning-based medical image analysis, with high potential for graph-based applications such as airway extraction in chest CT and cortical segmentation in brain MR. We consider the vertices on the coronary lumen surface mesh as graph nodes and solve a regression problem for each of these graph nodes. Predictions for vertices depend on local features as well as on internal representations of adjacent vertices on the mesh. Our experiments show that GCNs are well-suited to accurately predict vertex locations for this natural graph representation, and that direct operation on the tubular mesh reduces the need for additional post-processing.
We included coronary CT angiography (CCTA) images from the training set of the Coronary Artery Stenoses Detection and Quantification Evaluation Framework. This set contains 18 CCTA images acquired on Philips, Siemens, and Toshiba CT scanners. Images had an in-plane resolution of 0.29–0.43 mm and a slice spacing of 0.25–0.45 mm. Within the challenge, reference surface meshes were annotated by three observers in a selection of segments in each data set, for a total of 78 coronary artery segments. We automatically extracted centerlines in all volumes using our previously proposed deep learning-based method.
We propose to use GCNs to obtain a coronary artery surface mesh based on a CCTA image and an (automatically extracted) coronary artery centerline. Fig. 1 provides an overview of the proposed method.
3.1 Surface mesh
The mesh delineating the vessel wall surface is represented by a graph G = (V, E), with vertices V and edges E. We assume that the vessel wall is a deformable tube in 3D Euclidean space with known centerline. We constrain the exact spatial location of each vertex to be dependent on only one parameter. For each centerline point c, we determine a 2D cross-sectional plane orthogonal to the centerline direction. Within this cross-sectional plane, equiangularly spaced vertices define a cross-section of the surface mesh. Hence, within the 2D plane, the location of each vertex is defined in polar coordinates (r, α), where the angle α is fixed. The parameter r is defined as the distance to the centerline point c, and it is this value that we predict using regression.
The combination of all vertices in all cross-sectional planes forms the polygonal lumen surface mesh. Edges are added between neighboring vertices in cross-sectional planes, and between vertices at the same angle in adjacent cross-sectional planes. Four such edges define a quadrilateral mesh face, and each quadrilateral face is further split into two triangular faces. The structure of G is fixed for a vessel or vessel segment, and a GCN is trained to predict the value r for each vertex based on information from the image encoded in an input vector. In all our experiments, for a vertex this input vector contains the image values along a ray cast from the center of the cross-sectional plane at the angle α.
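The placement of one ring of mesh vertices can be sketched as follows; the function name and the use of two precomputed orthonormal in-plane axes u and v are illustrative assumptions, not part of the method:

```python
import numpy as np

def cross_section_vertices(center, u, v, radii):
    """Place one ring of surface-mesh vertices in a cross-sectional plane.

    center : (3,) centerline point
    u, v   : (3,) orthonormal axes spanning the cross-sectional plane
    radii  : (K,) predicted distances r for K equiangularly spaced vertices
    """
    k = len(radii)
    angles = 2 * np.pi * np.arange(k) / k          # fixed, equiangular angles
    offsets = np.cos(angles)[:, None] * u + np.sin(angles)[:, None] * v
    return center + radii[:, None] * offsets       # (K, 3) vertex positions
```

Stacking such rings for consecutive centerline points, and connecting them as described, yields the tubular mesh.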
3.2 GCN architecture
We use a graph convolutional network to predict – for each node in the graph – the value of the parameter r given the input vector. The GCN consists of layers that aggregate information from neighboring nodes (Fig. 2). By concatenating several such layers, information from a growing neighborhood of nodes in the graph is combined. We use the element-wise mean aggregator as proposed in GraphSAGE. In the l-th GCN layer, we aggregate features of vertices in the neighborhood N(v) of vertex v, including v itself, to obtain a feature vector f_v^{l+1} for each node. We define f_v^0 as the input vector derived from the image, i.e. the input to the first GCN layer. The aggregation rule is

f_v^{l+1} = σ( W^l · mean({ f_u^l : u ∈ N(v) ∪ {v} }) ),    (1)

where W^l contains trainable weights that determine how features are combined between layers. Different than in GraphSAGE, we do not sample neighbors but use the full available neighborhood. The neighborhood is defined by the edges in E, which are fixed for a given vessel or vessel segment.
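The mean aggregation over the full neighborhood can be illustrated with a minimal NumPy sketch; the tanh nonlinearity and all names are assumptions for illustration:

```python
import numpy as np

def gcn_layer(features, neighbors, W, activation=np.tanh):
    """One mean-aggregation layer (GraphSAGE-style, full neighborhood).

    features  : (N, F_in) node features at layer l
    neighbors : list of index lists; neighbors[i] includes i itself
    W         : (F_in, F_out) trainable weights for this layer
    """
    agg = np.stack([features[nb].mean(axis=0) for nb in neighbors])
    return activation(agg @ W)
```

Each output row depends only on a node and its direct neighbors; stacking layers widens this receptive field by one hop per layer.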
We use a network with five GCN layers, so that predictions for individual vertices are based on vertices that are at most five steps away on the mesh. The first layer has 32 input features, the hidden GCN layers each have 64 nodes, and the output layer has a single node to predict r. Dropout is applied during training. Given that we train the GCN using individual samples and not with mini-batches, the GCN does not contain batch normalization layers. In total, the GCN contains 14,567 trainable parameters.
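A toy forward pass mirroring the described depth and widths (five mean-aggregation layers; 32 input features, 64 hidden features, one output per vertex) might look as follows; the ring graph, random weights, and tanh nonlinearity are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(features, neighbors, W):
    """Mean aggregation over the full neighborhood, then a linear map
    and tanh (the nonlinearity is an assumption)."""
    agg = np.stack([features[nb].mean(axis=0) for nb in neighbors])
    return np.tanh(agg @ W)

# Toy ring graph: each of 6 vertices is connected to itself and its two
# ring neighbors. Layer widths follow the description in the text.
n = 6
neighbors = [[(i - 1) % n, i, (i + 1) % n] for i in range(n)]
widths = [32, 64, 64, 64, 64, 1]                 # five layers in total
weights = [rng.normal(size=(a, b)) for a, b in zip(widths[:-1], widths[1:])]

x = rng.normal(size=(n, 32))  # one ray-profile input vector per vertex
for W in weights:
    x = gcn_layer(x, neighbors, W)
# x now has shape (6, 1): one scalar prediction per vertex
```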
The GCN is trained using a training set consisting of coronary artery segments, each of which is represented by a graph G, image information, and reference values r_v for all vertices in G. In each iteration, one segment is randomly selected and the following loss is computed:

L = (1/|V|) Σ_{v ∈ V} ( r̂_v³ − r_v³ )²,    (2)

where V is the set of vertices in G and r̂_v is the distance to the centerline predicted by the GCN. Distance values are cubed in the loss function to better guarantee correspondence between automatically segmented and reference vessel volumes. We use no additional regularization on the values of r̂_v. After training, the trained GCN can be applied to a new input graph and image data.
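Under the assumption that the cubed distances are compared with a squared difference and averaged over the vertices, the loss could be sketched as:

```python
import numpy as np

def lumen_loss(r_pred, r_ref):
    """Mean squared difference of cubed radii; cubing emphasises
    volume correspondence (the squared form is an assumption)."""
    return np.mean((r_pred**3 - r_ref**3) ** 2)
```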
Table 1. Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD) per method; values are listed separately for healthy and diseased segments.

| Method                            | DSC         | MSD (mm)    | HD (mm)     |
|-----------------------------------|-------------|-------------|-------------|
| Lugauer et al.                    | 0.77 / 0.75 | 0.32 / 0.27 | 2.79 / 1.96 |
| Freiman et al.                    | 0.69 / 0.74 | 0.49 / 0.28 | 1.69 / 1.22 |
| Mohr et al.                       | 0.75 / 0.73 | 0.45 / 0.29 | 3.73 / 1.87 |
| Graph convolutional network (GCN) | 0.75 / 0.73 | 0.25 / 0.28 | 1.53 / 1.86 |
| Multi-layer perceptron (MLP)      |             |             |             |
4 Experiments and Results
The method was implemented in Python using PyTorch and the Deep Graph Library (DGL, https://www.dgl.ai/). We performed leave-one-patient-out cross-validation experiments on the training set of the Coronary Artery Stenoses Detection and Quantification Evaluation Framework. The networks were trained using annotations by all three observers. Coronary centerlines were automatically extracted and resampled to 0.5 mm resolution. In all our experiments, we defined 24 discrete values for the vertex angles, i.e. 24 equiangularly spaced rays per cross-section (Fig. 1), and the resolution of the input signals was 0.1 mm. Each ray consisted of 32 input features for a circular field of view of 3.2 mm. Image values were clipped at 0 and 1000 HU to normalize for surrounding low-density tissue, air, and hyperdense calcifications. We observed that this circumvents the need for explicit calcium removal steps.
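The ray sampling and HU clipping described above can be sketched as follows; `image_at` is a hypothetical stand-in for (e.g. trilinear) interpolation into the CCTA volume:

```python
import numpy as np

def ray_features(image_at, center, direction, n=32, spacing=0.1):
    """Sample image values along a ray from the cross-section center.

    image_at  : callable mapping a 3D position (mm) to an HU value
    center    : (3,) center of the cross-sectional plane
    direction : (3,) unit vector at the vertex angle, in the plane
    n=32 samples at 0.1 mm spacing give the 3.2 mm field of view.
    """
    positions = center + np.arange(n)[:, None] * spacing * direction
    values = np.array([image_at(p) for p in positions])
    return np.clip(values, 0.0, 1000.0)  # clip HU range as in the experiments
```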
Networks were trained end-to-end and parameters were optimized using the Adam optimizer with a learning rate of 0.001. We trained each network for 50,000 iterations. In each iteration, one training sample was randomly selected and the loss in Eq. 2 was evaluated. To improve training stability, gradients were accumulated for 10 iterations, after which the loss was back-propagated and parameters were updated. Experiments were performed using an NVIDIA Titan X GPU with 12GB of memory. Training of one GCN took around 20 minutes, while segmentation of all coronary arteries in a CCTA image took around 45 seconds, including fully automatic coronary centerline extraction .
We evaluated the quality of the obtained segmentations using the online platform of the Coronary Artery Stenoses Detection and Quantification Evaluation Framework. Dice similarity coefficients (DSC) were determined based on 2D cross-sectional segmentations obtained along the centerline. In addition, the mean surface distance (MSD) and Hausdorff distance (HD) were computed between automatic and reference meshes. Table 1 lists results for the three expert observers, state-of-the-art methods evaluated on the same 78 segments in 18 CCTA images, and the GCN. Results are shown separately for healthy and diseased segments. The results indicate that the GCN obtains results that are comparable to those obtained by other methods in terms of all three metrics, and that, like other automatic methods, the GCN outperforms human experts in terms of Hausdorff distance. Fig. 3 shows an example segmentation in an artery with coronary artery disease. The segmentation follows the inner vessel wall and excludes coronary artery calcification.
In addition, we performed experiments to determine the value of GCN layers over traditional fully-connected layers. We removed the ability to propagate values between neighboring nodes on the surface mesh from the layer in Eq. 1, so that the neighborhood of each vertex contains only the vertex itself. The result is a standard fully-connected layer. Like in Eq. 1, this layer only has trainable parameters in the weight matrix. We built a network which contained only fully-connected layers, i.e. a multi-layer perceptron (MLP), and compared this network to the GCN. Both networks contained 14,567 trainable parameters and both networks were trained in the same manner. Fig. 6 shows an example of a left coronary artery tree segmented using the MLP or the GCN. By combining information from neighboring vertices on the surface, the GCN intrinsically leads to smoother segmentations than the MLP. Quantitative results in Table 1 show that the inclusion of GCN layers in the network architecture leads to substantially higher overlap (DSC) and better accuracy (MSD).
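This ablation amounts to shrinking each vertex's neighborhood to the vertex itself; a minimal sketch (with an assumed tanh nonlinearity and toy values) shows how the full neighborhood smooths predictions while the self-only variant does not:

```python
import numpy as np

def layer(features, neighbors, W):
    """Mean aggregation over a given neighborhood, then linear + tanh."""
    agg = np.stack([features[nb].mean(axis=0) for nb in neighbors])
    return np.tanh(agg @ W)

features = np.array([[1.0], [3.0]])  # two adjacent vertices, 1 feature each
W = np.eye(1)
gcn_out = layer(features, [[0, 1], [0, 1]], W)  # full neighborhood
mlp_out = layer(features, [[0], [1]], W)        # neighborhood = {v}: plain MLP
```

The GCN variant produces identical outputs for the two neighboring vertices, whereas the MLP variant leaves their difference intact.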
5 Discussion and Conclusions
We have proposed a method based on graph convolutional networks for coronary artery segmentation in cardiac CT angiography. Experiments using a publicly available framework showed competitive performance on segmentation of individual artery segments. To the best of our knowledge, this work is among the first applications of GCNs in medical image analysis, following recent works on airway extraction in chest CT and cortical segmentation in brain MR. The method obtained accurate results in both healthy and diseased vessels. Given that the segmentations meet the requirements of downstream applications, i.e. contiguity and sub-voxel accuracy, future work could include validation of the obtained segmentations for anatomical and functional stenosis detection.
We found that when using the same experimental settings on the same data, the inclusion of GCN-layers substantially improved segmentation performance over a baseline network using only fully-connected layers. This indicates that the GCN successfully uses information from a local neighborhood on the surface mesh to predict the location of an individual mesh vertex. Moreover, the method does not require post-processing steps such as conditional random fields, or additional regularization terms to smooth the obtained surface meshes, but directly generates a smooth surface mesh.
A limitation of the mean aggregator used in this work is its invariance to the spatial relation between a vertex and its neighbors. In future work, we will investigate the use of edge features in addition to the node features used in this work. Alternatively, information from neighboring vertices could be combined in different ways, such as convolution in cross-sectional planes. An additional limitation that this work shares with other methods for coronary lumen segmentation [10, 2, 6] is the dependence on a coronary artery centerline. In preliminary experiments, we found that the exact location of this coronary artery centerline can lead to noticeable differences in segmentation accuracy. We will further investigate this in future work.
In conclusion, GCN-based segmentation of coronary arteries in CCTA is feasible and accurate.
This work is part of project P15-26, Project 2, of the Dutch Technology Foundation, with participation of Philips Healthcare.
- Cucurull, G., Wagstyl, K., Casanova, A., Veličković, P., Jakobsen, E., Drozdzal, M., Romero, A., Evans, A., Bengio, Y.: Convolutional neural networks for mesh-based parcellation of the cerebral cortex. In: Medical Imaging with Deep Learning (MIDL) (2018)
- Freiman, M., Nickisch, H., Prevrhal, S., Schmitt, H., Vembar, M., Maurovich-Horvat, P., Donnelly, P., Goshen, L.: Improving CCTA-based lesions' hemodynamic significance assessment by accounting for partial volume modeling in automatic coronary lumen segmentation. Med. Phys. 44(3), 1040–1049 (2017)
- Hamilton, W., Ying, Z., Leskovec, J.: Inductive representation learning on large graphs. In: Adv Neural Inf Process Syst (NIPS). pp. 1024–1034 (2017)
- Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: Int Conf Learn Represent (ICLR) (2017)
- Kirişli, H., Schaap, M., Metz, C., Dharampal, A., Meijboom, W., Papadopoulou, S., Dedic, A., Nieman, K., De Graaf, M., Meijs, M., et al.: Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography. Med Image Anal 17(8), 859–876 (2013)
- Lee, M.C.H., Petersen, K., Pawlowski, N., Glocker, B., Schaap, M.: Tetris: Template transformer networks for image segmentation with shape priors. IEEE Trans Med Imag (2019)
- Leipsic, J., Abbara, S., Achenbach, S., Cury, R., Earls, J.P., Mancini, G.J., Nieman, K., Pontone, G., Raff, G.L.: SCCT guidelines for the interpretation and reporting of coronary CT angiography: a report of the society of cardiovascular computed tomography guidelines committee. J Cardiovasc Comput Tomogr 8(5), 342–358 (2014)
- Lesage, D., Angelini, E.D., Bloch, I., Funka-Lea, G.: A review of 3D vessel lumen segmentation techniques: models, features and extraction schemes. Med Image Anal 13(6), 819–845 (2009)
- Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., Van Der Laak, J.A., Van Ginneken, B., Sánchez, C.I.: A survey on deep learning in medical image analysis. Med Image Anal 42, 60–88 (2017)
- Lugauer, F., Zheng, Y., Hornegger, J., Kelm, B.M.: Precise lumen segmentation in coronary computed tomography angiography. In: International MICCAI Workshop on Medical Computer Vision. pp. 137–147. Springer (2014)
- Selvan, R., Kipf, T., Welling, M., Pedersen, J.H., Petersen, J., de Bruijne, M.: Extraction of airways using graph neural networks. In: Medical Imaging with Deep Learning (MIDL) (2018)
- Taylor, C.A., Fonte, T.A., Min, J.K.: Computational fluid dynamics applied to cardiac computed tomography for noninvasive quantification of fractional flow reserve: scientific basis. J Am Coll Cardiol 61(22), 2233–2241 (2013)
- Wolterink, J.M., van Hamersvelt, R.W., Viergever, M.A., Leiner, T., Išgum, I.: Coronary artery centerline extraction in cardiac CT angiography using a CNN-based orientation classifier. Med Image Anal 51, 46–60 (2019)