In patients suffering from cardiovascular disease, parameters that quantify arterial blood flow could complement anatomical measurements such as vessel diameters and stenoses. For example, the magnitude and direction of wall shear stress (WSS) have been found to correlate with atherosclerotic plaque development and arterial remodeling [15, 7]. Imaging techniques with which WSS could be measured non-invasively, such as particle image velocimetry or 4D flow MRI, are not widely available and may be less accurate in smaller arteries. Therefore, WSS is often estimated using computational fluid dynamics (CFD). This requires the extraction of an artery model from, e.g., CT images, meshing of this model, and iterative solution of the Navier-Stokes equations within the mesh. However, high-quality CFD solutions require fine meshes, leading to high computational complexity. Therefore, there is a demand for surrogate models that trade accuracy for speed.
Deep neural networks are an attractive class of surrogate models. They require generating training data (e.g. by CFD), but once they are trained, inference in new geometries only requires a single forward pass through the network, which can be extremely fast. Previous applications of deep learning to coronary CFD surrogate modelling have studied the use of multilayer perceptrons to estimate fractional flow reserve based on hand-crafted features, as well as the use of convolutional neural networks to estimate WSS based on 1D parametrisations of single arteries and 2D views of bifurcated geometries. Furthermore, convolutional neural networks have been applied to stress and hemodynamics prediction in the aorta [10, 11].
All of these approaches have in common that they rely on a global or local parametrisation of the artery model, which is either limited to idealised vessels or does not generalise to more complex vessel trees. This limits their value as a plug-in replacement for simulation in existing CFD workflows. In contrast, in this work, we propose to let a deep neural network learn a (latent) parametrisation of the vessel geometry directly from a surface mesh representing the vessel lumen wall. We describe input meshes as graphs and use graph convolutional networks (GCNs) and their extension, mesh convolutional networks, to predict WSS vectors on the mesh vertices (Fig. 1). This offers a plug-in replacement for CFD simulation operating on a mesh that can be acquired through well-established meshing procedures.
GCNs have been previously studied for general mesh-based simulations. Moreover, several recent works have investigated the use of GCNs in the cardiovascular domain, e.g. for coronary artery segmentation, coronary tree labeling, or as a surrogate for cardiac electrophysiology models. In a very recent study, Morales et al. showed that GCNs outperform parametrisation-based methods for the prediction of WSS-derived scalar potentials in the left atrial appendage. Here, we compare isotropic and anisotropic mesh convolution for the prediction of 3D vector quantities on the mesh vertices and show how the use of gauge-equivariant mesh convolution results in a neural network that is invariant to translation and equivariant to rotation of the input mesh, two desirable properties for improved data efficiency. The latter is especially relevant in medical applications, where access to large datasets is uncommon. Our method is flexible and avoids the loss of information that may occur due to ad-hoc parametrisation of the artery model. We demonstrate the effectiveness of this approach on two sets of synthetic coronary artery geometries.
We model the artery lumen wall as a 2D manifold embedded in 3D Euclidean space. Its surface mesh is fully described by a tuple $(\mathcal{V}, \mathcal{F})$ of vertices $\mathcal{V}$ and polygons $\mathcal{F}$, whose undirected edges $\mathcal{E}$ induce the graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$. We propose to use GCNs to predict WSS vectors on the vertices of this graph. WSS is defined as the tangential force per unit area exerted on the lumen wall by the blood flow. GCNs are trainable operators that map an input signal on a graph to an output signal on the same graph. Like many convolutional neural network architectures, they can be composed of convolutional and pooling layers, which are here defined as follows.
We implement convolutional layers by spatial message passing in a neighbourhood $\mathcal{N}(v_i)$ around vertex $v_i$, expressed in a general form as

$$f'(v_i) = W\, f(v_i) + \sum_{v_j \in \mathcal{N}(v_i)} W_{ij}\, f(v_j), \qquad (1)$$

where $W$ and $W_{ij}$ are trainable kernel matrices. These matrices weigh the features of $v_i$ itself and those of its neighbours, respectively. We let $f(v_i)$ denote the feature vector corresponding to $v_i$. Equation (1) describes three graph convolutional layers, depending on the choice of $W$ and $W_{ij}$, as follows. First, we consider GraphSAGE convolution with full neighbourhood sampling, which corresponds to choosing an isotropic kernel $W_{ij} = \frac{1}{|\mathcal{N}(v_i)|} W_1$ and $W = W_0$. Isotropic kernels accommodate neighbourhoods of a varying number of vertices without canonical ordering, but sacrifice expressiveness. Second, we consider FeaSt convolution, which implements an anisotropic filter through an attention mechanism with trainable weights, $W_{ij} = \sum_{m=1}^{M} q_m(f(v_i), f(v_j))\, W_m$, where the attention scores $q_m$ are normalised by the "softmax" function over the $M$ attention heads. The kernel $W$ is again chosen isotropic.
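The general message-passing form above can be sketched in a few lines. The following is a minimal NumPy illustration of the isotropic (GraphSAGE-style) case with full neighbourhood sampling; it is a reading aid under assumed names, not the paper's actual PyTorch Geometric implementation.

```python
import numpy as np

def sage_conv(features, neighbours, W_self, W_neigh):
    """Isotropic message passing (GraphSAGE-style with full neighbourhood
    sampling): each vertex combines its own features, weighted by W_self,
    with the mean of its neighbours' features, weighted by W_neigh."""
    out = np.empty((features.shape[0], W_self.shape[0]))
    for i, nbrs in enumerate(neighbours):
        out[i] = W_self @ features[i] + W_neigh @ features[nbrs].mean(axis=0)
    return out

# Tiny triangle graph: every vertex neighbours the other two.
f = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
nbrs = [[1, 2], [0, 2], [0, 1]]
out = sage_conv(f, nbrs, np.eye(2), 0.5 * np.eye(2))
```

Because the neighbour features are averaged before being multiplied by a single shared matrix, the layer is insensitive to neighbourhood size and ordering, which is exactly the isotropy trade-off described above.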
Third, we employ gauge-equivariant mesh convolution. Here, anisotropic kernels are learned as functions $W_{ij} = W(\theta_{ij})$ of the neighbour orientation, where the orientation angles $\theta_{ij}$ of all neighbours are measured from one arbitrary reference neighbour projected onto the tangent plane at $v_i$. This arbitrary choice defines a local coordinate system ("gauge"). Linearly combining vertex features requires expressing them in the same gauge, which is done by the parallel transporter $\rho(g_{j \to i})$, where $g_{j \to i}$ is the angle between the previously chosen reference neighbours. The representation $\rho$ is determined by the irreducible representations ("irreps") of the planar rotation group $SO(2)$, which compose into the features $f(v_i)$. In order to ensure that the convolution does not depend on the arbitrary choice of local coordinate systems, a linear constraint is placed on the kernel, such that the linearly independent solutions to this kernel constraint span the space of possible trainable parameters.
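Following the gauge-equivariant mesh CNN formulation of de Haan et al. (reproduced here as a reading aid; the exact notation is an assumption), the kernel constraint can be written as

```latex
K(\theta - g) \;=\; \rho_{\mathrm{out}}(-g)\, K(\theta)\, \rho_{\mathrm{in}}(g)
\qquad \text{for all } g \in [0, 2\pi),
```

i.e. rotating the local reference frame by $g$ shifts the kernel's angular argument and conjugates it by the input and output representations, so that the layer's output transforms predictably under any change of gauge. The trainable parameters are then the coefficients of a basis spanning all solutions $K$ of this constraint.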
Graph pooling consists of a clustering and a reduction component. We define clusters in the mesh based on a hierarchy of vertex subsets $\mathcal{V} = \mathcal{V}_0 \supset \mathcal{V}_1 \supset \cdots$ and assign each vertex in $\mathcal{V}_k$ to a vertex in $\mathcal{V}_{k+1}$ based on geodesic distances. In the gauge-equivariant framework, we average features after parallel transporting them to the same gauge. For unpooling, features are distributed over the clusters using parallel transport. There is no notion of parallel transport for general $n$-dimensional features in GraphSAGE or FeaSt convolution. Hence, for these networks we simply average over the $n$-dimensional feature space during pooling and distribute static features over the clusters during unpooling.
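The clustering-and-reduction pooling for the non-equivariant networks can be sketched as follows. Euclidean nearest-centre assignment stands in for the geodesic assignment described above, and the plain feature mean stands in for the reduction; both simplifications, and all names, are illustrative.

```python
import numpy as np

def cluster_pool(features, pos, cluster_pos):
    """Assign each vertex to its nearest cluster centre (Euclidean distance
    as a stand-in for geodesic distance) and average features per cluster."""
    dists = np.linalg.norm(pos[:, None] - cluster_pos[None], axis=2)
    assign = np.argmin(dists, axis=1)          # cluster index per vertex
    pooled = np.zeros((len(cluster_pos), features.shape[1]))
    counts = np.zeros(len(cluster_pos))
    for i, c in enumerate(assign):
        pooled[c] += features[i]
        counts[c] += 1
    return pooled / np.maximum(counts, 1)[:, None], assign

def cluster_unpool(pooled, assign):
    """Distribute each cluster's (static) feature back over its vertices."""
    return pooled[assign]
```

A usage example: two well-separated pairs of vertices collapse to two cluster centres, and unpooling broadcasts the averages back.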
2.3 Network architecture
Because WSS depends on the global shape of the blood vessel, we use encoder-decoder networks, akin to the well-known U-Net. These networks are formed by three pooling scales where each scale consists of residual blocks with two convolutional layers and a skip connection. We copy and concatenate signals between corresponding layers in the contracting and expanding pathway. ReLU activation is used throughout the architecture.
We construct input features that are invariant to translation and equivariant to rotation of the surface mesh. This is done by, for each vertex $v_i$, considering a ball of fixed radius around $v_i$ and averaging weighted differences to the vertices within, as well as the vertex normals. Subsequently, we take the outer product of the averaged differences with themselves, the averaged normals with themselves, and the differences with the normals. For the GraphSAGE and FeaSt networks, we flatten and concatenate the resulting matrices and use the resulting feature vectors as inputs for each vertex. For the gauge-equivariant network, we instead decompose the matrices into irreps. A schematic representation of the employed architecture is shown in Fig. 2. Gauge-equivariant convolutional layers are by construction equivariant under transformation of the local coordinate systems. Representing our network's input features in these gauges leads to end-to-end equivariance under transformation of the global coordinate system. This means that the gauge-equivariant network is a globally rotation-equivariant operator. Each network is additionally provided with a scalar input feature that contains, for each vertex, the (invariant) shortest geodesic distance to the vessel inlet, giving a rotation-invariant indication of the flow direction.
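The input feature construction can be sketched as below. The ball radius and the Gaussian distance weighting are assumptions chosen for illustration, not the paper's exact choices; the point of the sketch is that the resulting outer-product matrices are invariant to translation and transform as $R M R^\top$ under a rotation $R$ of the mesh.

```python
import numpy as np

def geometric_features(pos, normals, centre_idx, radius=1.0):
    """Sketch of rotation-equivariant input features for one vertex:
    average (distance-weighted) position differences and vertex normals
    within a ball around the vertex, then form the three outer products."""
    diff = pos - pos[centre_idx]                  # differences to the centre vertex
    dist = np.linalg.norm(diff, axis=1)
    mask = dist <= radius                         # ball neighbourhood (assumed radius)
    w = np.exp(-dist[mask] ** 2)                  # assumed Gaussian distance weighting
    w /= w.sum()
    d = (w[:, None] * diff[mask]).sum(axis=0)     # averaged weighted differences
    n = (w[:, None] * normals[mask]).sum(axis=0)  # averaged normals
    return np.outer(d, d), np.outer(n, n), np.outer(d, n)
```

Since only differences to the centre vertex enter, adding a constant offset to all positions leaves the features unchanged, while a global rotation conjugates each matrix by the rotation.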
We synthesise two large datasets of coronary artery geometries and corresponding CFD simulations to train and validate our neural networks. The entire shape synthesis and CFD simulation pipeline is implemented and automated with the SimVascular Python shell.
3.1 Single arteries
We emulate the dataset by Su et al. and synthesise 2000 idealised 3D coronary artery models. Each model has a single inlet and single outlet, with a centerline defined by control points placed at a fixed increment along the x-axis and a uniformly random increment along the y-axis. The cross-sectional lumen contour has a random radius [mm]. Each artery contains up to two randomly placed, asymmetric stenoses of up to 50 %. Each model is lofted and meshed using a global edge length of 0.4 [mm], local edge length refinement proportional to the vessel radius, and five tetrahedral boundary layers. Fig. 3 shows an example synthetic geometry. We simulate blood flow by solving the three-dimensional, incompressible Navier-Stokes equations. We assume fixed dynamic viscosity and blood density, rigid walls, and a no-slip boundary condition. The inlet velocity profile is constant and uniform. A pressure of 100 [mmHg] (13.332 [kPa]) is weakly applied at the outlet. The resulting Reynolds number implies that the fluid flow is laminar. Blood flow is simulated until stationary; each simulation takes between 10 and 15 [min] on 16 CPU cores (Intel Xeon Gold 5218).
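As a sanity check on the laminar-flow claim, the Reynolds number can be computed from the fluid parameters. The values below are typical coronary magnitudes chosen purely for illustration, since the exact simulation parameters are elided here.

```python
def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * v * D / mu; pipe flow is laminar for Re well below ~2300."""
    return density * velocity * diameter / viscosity

# Assumed illustrative values (not the paper's exact parameters):
rho = 1060.0   # blood density [kg/m^3]
mu = 3.5e-3    # dynamic viscosity [Pa s]
v = 0.15       # inlet velocity [m/s]
D = 4e-3       # lumen diameter [m]
re = reynolds_number(rho, v, D, mu)  # ~182, well within the laminar regime
```

At these magnitudes the Reynolds number is two orders below the laminar-turbulent transition, consistent with the steady laminar simulations described above.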
3.2 Bifurcating arteries
We synthesise 2000 3D left main coronary bifurcation models, randomly constructed based on an atlas of coronary measurement statistics. Each model consists of a proximal main vessel (PMV), distal main vessel (DMV), and side branch (SB). Centerlines are defined by three angles sampled from the atlas: first, the angle between DMV and SB; second, the angle between the bisecting line of the bifurcation and SB, describing the skew of the bifurcation towards the child branch; third, the angle at which the PMV centerline enters the bifurcation plane. We model the vessel lumen with ellipses that are arbitrarily oriented in the plane normal to the centerline curve tangent. The lumen radii of PMV, DMV, and SB are drawn from the coronary atlas [mm]. We sample lumen radii such that diameters follow a power-law bifurcation relation and decrease approximately linearly with vessel length. Each model is lofted and meshed using a global edge length of 0.2 [mm] and five tetrahedral boundary layers. Blood flow simulation follows that of the single artery models, with the following differences. We assume fixed dynamic viscosity and blood density, and prescribe a parabolic influx profile corresponding to a uniform inflow velocity, so as to arrive at WSS values that agree with a healthy range of 1.0 to 2.5 [Pa]. We weakly apply a pressure at the union of the outflow surfaces. The resulting Reynolds number implies that the flow is laminar. The simulation runs take between 8 and 12 [min] on 32 CPU cores (Intel Xeon Gold 5218).
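Bifurcation laws of the kind referenced above typically take the power-law form $d_{\mathrm{PMV}}^{\gamma} = d_{\mathrm{DMV}}^{\gamma} + d_{\mathrm{SB}}^{\gamma}$; the sketch below solves this form for the side branch. The exponent is an assumption for illustration ($\gamma = 3$ corresponds to Murray's law), as the paper's exact relation is not reproduced here.

```python
def side_branch_diameter(d_parent, d_daughter, gamma=3.0):
    """Solve d_parent^gamma = d_daughter^gamma + d_sb^gamma for the side
    branch diameter (gamma = 3 assumed, following Murray's law)."""
    return (d_parent**gamma - d_daughter**gamma) ** (1.0 / gamma)

# E.g. a 4.0 mm PMV with a 3.5 mm DMV leaves a side branch of ~2.76 mm.
d_sb = side_branch_diameter(4.0, 3.5)
```

Sampling radii that satisfy such a relation keeps the synthetic bifurcations within physiologically plausible diameter ratios.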
We implement three versions of the encoder-decoder network based on GraphSAGE, FeaSt, and gauge-equivariant mesh convolution and refer to these as SAGE-CNN, FeaSt-CNN, and GEM-CNN. Hidden layers are set up so that each network has just under 800,000 trainable parameters. Both datasets are split 80:10:10 into training, validation, and test sets. The GCNs are trained to predict a 3D WSS vector at each vertex of the surface mesh. Since GEM-CNN naturally expresses its signal in the tangent plane at each vertex, its WSS output vector is here restricted to those tangent planes. However, outputting an additional normal component is also naturally supported in the form of an invariant irrep; in our experiments, this only influenced performance marginally. SAGE-CNN and FeaSt-CNN operate without the notion of tangent planes, and thus predict an unconstrained 3D vector, which avoids overloading the methods and saves memory. The input meshes are the same surface meshes we use to obtain the ground truth via CFD, so that the CFD solution is directly used for network optimisation. All models are trained with a mean-squared error loss using the Adam optimiser. SAGE-CNN and FeaSt-CNN are trained on an NVIDIA GeForce RTX 3080 10 GB for 400 epochs, taking 3:15 [h] and 12:30 [h], respectively, for the single artery models and twice as long for the bifurcating artery models. GEM-CNN is trained on two NVIDIA Quadro RTX 6000 24 GB for 160 epochs (ca. 29:00 [h]) for the single artery models and 100 epochs (ca. 55:00 [h]) for the bifurcating artery models. Inference in a new and unseen data sample takes less than 5 [s], including pre-processing. We open-source our PyTorch Geometric implementation of the GraphSAGE and FeaSt networks (https://github.com/sukjulian/coronary-mesh-convolution).
We report the mean absolute error normalised by the maximum WSS magnitude across the test set (NMAE) and define the approximation error $\varepsilon = \lVert \delta \rVert_2 / \lVert y \rVert_2$, where the elements of $\delta$ are the vertex-wise normed differences $\delta_i = \lVert \hat{y}_i - y_i \rVert_2$ between the network output $\hat{y}$ and the ground truth $y$, so that $\varepsilon = 0$ for $\hat{y} = y$ and $\varepsilon = 1$ for $\hat{y} \equiv 0$. Additionally, we report the maximum and mean vertex-wise normed difference, i.e. $\max_i \delta_i$ and $\operatorname{mean}_i \delta_i$.
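The reported error metrics can be computed as below. This is a per-sample sketch (the paper normalises the NMAE by the maximum WSS magnitude across the whole test set, and the exact normalisation of the approximation error is an assumption here).

```python
import numpy as np

def wss_errors(pred, gt):
    """Error statistics over vertex-wise WSS vectors (pred, gt: [V, 3]).
    Returns NMAE, approximation error, and max/mean vertex-wise difference."""
    delta = np.linalg.norm(pred - gt, axis=1)   # vertex-wise normed differences
    gt_mag = np.linalg.norm(gt, axis=1)         # ground-truth WSS magnitudes
    nmae = delta.mean() / gt_mag.max()          # normalised mean absolute error
    eps = np.linalg.norm(delta) / np.linalg.norm(gt_mag)  # approximation error
    return nmae, eps, delta.max(), delta.mean()
```

Note that predicting zero everywhere yields an approximation error of exactly 1, matching the normalisation described above.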
4.1 Prediction accuracy
Fig. 3 shows an example of the reference and predicted WSS in a single and a bifurcating artery model, illustrating that our method predicts not only the WSS magnitude but also the directional 3D vector. Table 1 lists quantitative results of the prediction accuracy over both test sets. Direct comparison of SAGE-CNN, FeaSt-CNN, and GEM-CNN on the single artery models suggests strictly better performance when using anisotropic convolution kernels. GEM-CNN's good performance can be attributed to its anisotropic convolution kernels as well as the availability of parallel transport pooling. The results in Table 1 are comparable to previously published results on a similar dataset. To demonstrate our method's flexibility, we train SAGE-CNN, FeaSt-CNN, and GEM-CNN on the bifurcating artery dataset without further adaptation of the architecture. We find that all networks perform well, even without task-specific hyperparameter tuning. The performance differences between networks are less pronounced on the bifurcating arteries than on the single arteries, which may be due to more homogeneous WSS values across this dataset.
Due to its signal representation, GEM-CNN is a globally rotation-equivariant operator, i.e. the network output rotates with the input mesh orientation. SAGE-CNN and FeaSt-CNN are not equivariant under rotation: their signals are expressed in global Euclidean coordinates, which makes the networks implicitly learn a reference orientation. To illustrate this, we compare the prediction accuracy on randomly rotated samples of the test sets (see Tab. 1). GEM-CNN's prediction accuracy is unchanged, with only small numerical deviations due to discretisation of the activation functions. In contrast, the performance of SAGE-CNN and FeaSt-CNN drops dramatically, suggesting that they fail to predict useful WSS vectors on rotated samples.
5 Discussion and conclusion
We have demonstrated the feasibility of GCNs and mesh convolutional networks as surrogates for CFD blood flow simulation in arterial models. We found that GCNs effortlessly operate on the same meshes used in CFD simulations. Thus, shape parametrisation steps as proposed in other works [16, 4, 8] may be omitted and GCNs can be seamlessly integrated into existing CFD workflows. We noticed a significant speedup when using our method: Inference in a new and unseen 3D geometry took less than 5 [s] including preprocessing, while CFD simulation took up to 15 [min]. The use of the same GCN architecture on two distinctly different datasets shows that our method does not require extensive task-specific fine-tuning and highlights its flexibility.
Our prediction results are based on two datasets which were synthesised with carefully chosen parameters to achieve a high level of realism. In future work, we aim to enrich our training and validation datasets with patient-specific artery geometries. This raises challenges regarding generalisation and uncertainty quantification, also with respect to the sensitivity of the proposed method to different meshing of the same lumen wall. One limitation of our experiments is that we consider steady blood flow, whereas in real applications one might be interested in time-averaged WSS over a cardiac cycle with pulsatile blood flow. This may require the incorporation of patient-specific boundary conditions. In the current experiments, boundary conditions were fixed across samples of both the single artery and bifurcating artery models and thus inherently encoded in the datasets. In future work, we will investigate to what extent the networks can be conditioned on user-supplied boundary conditions.
In conclusion, GCNs and mesh convolutional networks are a feasible approach to CFD surrogate modelling with potential applications in personalised hemodynamic estimation.
This work is funded in part by the 4TU Precision Medicine programme supported by High Tech for a Sustainable Future, a framework commissioned by the four Universities of Technology of the Netherlands.
-  (2021) Gauge equivariant mesh CNNs: anisotropic convolutions on geometric graphs. In ICLR.
-  (2021) Deep learning framework for real-time estimation of in-silico thrombotic risk indices in the left atrial appendage. Front. Physiol. 12.
-  (2019) Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds.
-  (2020) Deep learning for time averaged wall shear stress prediction in left main coronary bifurcations. In IEEE International Symposium on Biomedical Imaging (ISBI), Vol. 17.
-  (2017) Inductive representation learning on large graphs. In NeurIPS, Vol. 30.
-  (2019) Machine learning for assessment of coronary artery disease in cardiac CT: a survey. Front. Cardiovasc. Med. 6.
-  (2019) Multidirectional wall shear stress promotes advanced coronary plaque development: comparing five shear stress metrics. Cardiovasc. Res. 116.
-  (2016) A machine learning approach for computation of fractional flow reserve from coronary computed tomography. J. Appl. Physiol. 121.
-  (2018) A re-engineered software interface and workflow for the open-source SimVascular cardiovascular modeling package. J. Biomech. Eng. 140 (2).
-  (2018) A deep learning approach to estimate stress distribution: a fast and accurate surrogate of finite-element analysis. J. R. Soc. Interface 15.
-  (2020) A feasibility study of deep learning for predicting hemodynamics of human thoracic aorta. J. Biomech. 99.
-  (2017) A study of coronary bifurcation shape in a normal population. J. Cardiovasc. Transl. Res. 10.
-  (2021) Graph convolutional regression of cardiac depolarization from sparse endocardial maps. In Statistical Atlases and Computational Models of the Heart. M&Ms and EMIDEC Challenges.
-  (2021) Learning mesh-based simulation with graph networks. In ICML.
-  (2011) Coronary artery wall shear stress is associated with progression and transformation of atherosclerotic plaque and arterial remodeling in patients with coronary artery disease. Circulation 124.
-  (2020) Generating wall shear stress for coronary artery in real-time using neural networks: feasibility and initial results based on idealized models. Comput. Biol. Med. 126.
-  (2013) Computational fluid dynamics applied to cardiac computed tomography for noninvasive quantification of fractional flow reserve. J. Am. Coll. Cardiol. 61 (22).
-  (2018) FeaStNet: feature-steered graph convolutions for 3D shape analysis. In CVPR.
-  (2019) Graph convolutional networks for coronary artery segmentation in cardiac CT angiography. In Graph Learning in Medical Imaging.