1 Introduction
Numerical simulations of cardiac electromechanics are gaining momentum in the context of cardiovascular research and computational medicine [smith2004multiscale, vigmond2008effect, trayanova2011whole, fink2011cardiac, washio2015ventricular, quarteroni2019cardiovascular, dede2021mathematical]. Cardiac in silico models are based on an anatomically detailed and biophysically accurate representation of the human heart, consisting of multiscale mathematical models of several physical processes, from the cellular to the organ scale, described by systems of differential equations. However, the clinical exploitation of cardiac numerical simulations is seriously hampered by their overwhelming computational cost. As a matter of fact, the simulation of a single heartbeat for an anatomically accurate patient-specific model may require several hours of computational time even on a supercomputer platform.
A promising approach to address this issue is to replace the computationally expensive cardiac electromechanical model, say the full-order model (FOM), with a reduced version of it, called reduced-order model (ROM), to be invoked whenever new parameter values are considered. In the so-called offline phase, the ROM is built from a database of numerical simulations that are previously obtained by solving the FOM itself. Then, in the online phase, the ROM is used as a surrogate of the FOM, at a dramatically reduced computational cost, for any new instance of the parameters. Clearly, this procedure pays off if the computational gain of the online phase outweighs the cost of the offline phase, or whenever the online phase requires real-time execution, which is often the case in clinical practice, where timeliness is a key factor.
Recently, this framework has been applied in the context of cardiac modeling, primarily by using machine learning algorithms, including Gaussian Process emulators (GPEs), Artificial Neural Networks (ANNs), and decision tree algorithms such as eXtreme Gradient Boosting (XGBoost) and K-Nearest Neighbor (KNN)
[di2018gaussian, dabiri2019prediction, Longobardi2020, Cai2021, coveney2021bayesian]. These emulators are trained to fit the map that links the model parameters with a set of scalar outputs of interest, known as quantities of interest (QoIs), which represent clinically meaningful biomarkers. This map is learned from a collection of numerical simulations, obtained by sampling the parameter space, and can then be used as a FOM surrogate.

Replacing a FOM with a ROM is particularly advantageous in the so-called many-query settings, i.e. when the forward model has to be solved many times for different parameter values. This is the case, for example, of parameter calibration [brooks1998markov, hoffman2013stochastic] and sensitivity analysis [plischke2013global, sobol1990sensitivity, xu2008uncertainty, kucherenko2012estimation], fundamental tools for a reliable use of computational models in clinical practice. In fact, only a few of the many parameters feeding cardiac electromechanical models can be directly measured, while most of them must be estimated through indirect (possibly non-invasive) measurements, by solving suitable (typically ill-posed) inverse problems [aster2018parameter]. It is therefore important to quantify how much each parameter affects the outputs of the model and, vice versa, how uncertainty on measured data propagates to parameter uncertainty. These requirements are at the basis of the verification, validation and uncertainty quantification (VVUQ) paradigm [oberkampf2004verification, hu20162014] and are often necessary to comply with standards issued by health agencies and policy makers, such as the ASME V&V40 standard [asme2018assessing] recognized by the FDA [fda].
In this paper, we propose a machine learning method to build ROMs of cardiac electromechanical models. Our approach relies on the ANN-based method that we proposed in [regazzoni2019modellearning], which can learn a time-dependent differential equation from a collection of input-output pairs. With respect to existing approaches, we only surrogate the time-dependent pressure-volume relationship of a cardiac chamber, while we do not reduce the model describing the external circulation. The latter is indeed either a low-dimensional 0D Windkessel model or a closed-loop circulation model comprising a few state variables (up to two dozen), which does not require further reduction. In other terms, we derive an ANN-based ROM for the computationally demanding 3D components, while we keep the lightweight ones in their full-order version. Unlike emulators, for which the online phase consists of evaluations of the map linking model parameters to QoIs, with our approach the online phase consists instead of numerical simulations, in which the ANN-based ROM of the electromechanical model is coupled with the circulation model, at a very low computational cost. As a matter of fact, these numerical simulations can be performed in real time on a standard laptop.
We consider two different test cases to prove the efficacy of our mathematical approach. We perform variance-based global sensitivity analysis on both electromechanical and hemodynamic parameters, and Bayesian estimation by means of the Markov Chain Monte Carlo (MCMC) method to infer two parameters from noisy measurements of two scalar outputs.
This paper is organized as follows. In Sec. 2 we present the proposed method and the models used to produce the numerical results, which are presented and commented on in Sec. 3. Then, in Sec. 4, we critically discuss the obtained results and the pros and cons of the proposed method, with respect to other methods available in the literature. Finally, we draw our conclusions and final remarks in Sec. 5.
This manuscript is accompanied by https://github.com/FrancescoRegazzoni/cardioEMlearning, a public repository containing the codes and the datasets necessary to reproduce the presented results.
2 Methods
We introduce our method in an abstract formulation for cardiac electromechanics. As a matter of fact, our approach, thanks to its non-intrusive nature, can be applied to the different electromechanical models that are available in the literature.
2.1 The fullorder model (FOM)
Let us consider a generic model of cardiac electromechanics, that is, a set of differential equations describing the physical processes involved in the heart function. We introduce the state vector $\mathbf{y}(t)$, collecting the state variables associated with this multiphysics system. These may include the transmembrane potential, gating variables, ionic concentrations, protein states, tissue displacement, or simply phenomenological variables. In this paper, we focus on the single-chamber case of the human heart (e.g. the left ventricle, that we call from now on LV), as the generalization to multiple chambers is straightforward. By introducing a nonlinear differential operator $\mathcal{F}_{\text{EM}}$ that encodes the differential equations and boundary conditions associated with the electromechanical model, the latter reads

(1)  $\dot{\mathbf{y}}(t) = \mathcal{F}_{\text{EM}}(\mathbf{y}(t), p(t), t; \boldsymbol{\theta})$ for $t > 0$, with $\mathbf{y}(0) = \mathbf{y}_0$,

where $p(t)$ denotes the LV endocardial pressure (here seen as an input), $\boldsymbol{\theta}$ are the model parameters (possibly including, e.g., electrical conductivities, cell membrane conductances, protein binding affinities, contractility, passive tissue properties) and $\mathbf{y}_0$ is the initial state. We denote by $\Theta \subset \mathbb{R}^{n_\theta}$ the space of parameters, such that $\boldsymbol{\theta} \in \Theta$, $n_\theta$ being the number of parameters. We notice that the right-hand side of (1) explicitly depends on $t$, as heartbeats are paced by externally applied stimuli that we assume to have a period of duration $T_{\text{HB}}$, which is fixed a priori.
The 3D cardiac electromechanical model (1), henceforth denoted by EM, must be coupled with a closure relationship assigning the pressure $p(t)$. One possible option is to couple the model with a 0D model C for the external circulation (see e.g. [hirschvogel2017monolithic, regazzoni2021em_circulation_pt1, augustin2020physiologically]), thus obtaining a system in the form

(2)  $\dot{\mathbf{y}}(t) = \mathcal{F}_{\text{EM}}(\mathbf{y}(t), p(t), t; \boldsymbol{\theta})$, $\quad \dot{\mathbf{c}}(t) = \mathcal{F}_{\text{C}}(\mathbf{c}(t), p(t), t; \boldsymbol{\theta}_{\text{C}})$, $\quad V_{\text{EM}}(\mathbf{y}(t)) = V_{\text{C}}(\mathbf{c}(t))$,

where $\mathbf{c}(t)$ are the state variables of the circulation model (pressures, volumes and fluxes in the circulatory network) and $\boldsymbol{\theta}_{\text{C}}$ is a vector of parameters (e.g. vascular resistances and conductances). The mechanical and hemodynamic models are coupled through the geometric consistency relationship $V_{\text{EM}}(\mathbf{y}(t)) = V_{\text{C}}(\mathbf{c}(t))$, where the left-hand and right-hand sides represent the LV volume predicted by the EM and by the C models, respectively. The LV pressure $p(t)$ is determined as a Lagrange multiplier that enforces the consistency relationship. An alternative approach is to adopt different closure relationships in the different phases of the heartbeat [levrero2020sensitivity, Gerbi2018monolithic] with suitable preload and afterload models, such as Windkessel models [westerhof2009arterial]. These relationships link the changes in LV pressure with its volume, obtained as $V_{\text{EM}}(\mathbf{y}(t))$. In both cases, whether the EM model is coupled to a closed-loop circulation model or to an afterload-preload relationship, we denote by EM-C the resulting coupled model. In Fig. 1 we show an EM-C model in which C consists of the lumped-parameter closed-loop model of [regazzoni2021em_circulation_pt1], which is employed to produce the numerical results of Sec. 3.
We remark that the closure relationships never directly involve the state $\mathbf{y}(t)$ of the EM model, but only the LV volume $V_{\text{EM}}(\mathbf{y}(t))$. This is a key observation, since it suggests that a ROM that is able to surrogate the relationship between $p(t)$ and $V_{\text{EM}}(t)$, albeit agnostic of the state $\mathbf{y}(t)$, can replace the EM model in its coupling to the C model. In what follows, we present our strategy to build an ANN-based ROM of the EM model, denoted by ANN, which can be coupled with the C model, resulting in the coupled ANN-C model that will surrogate the EM-C model.
2.2 The reducedorder model (ROM)
To set up a ROM surrogating the EM model of Eq. (1), we employ the machine learning method that we proposed in [regazzoni2019modellearning]. This method is designed to learn a differential equation from time-dependent input-output pairs, by training an Ordinary Differential Equation (ODE) model, whose right-hand side is represented by an ANN. In this work, we define our ANN-based ROM as

(3)  $\dot{\mathbf{z}}(t) = \mathcal{F}_{\text{ANN}}\big(\mathbf{z}(t), p(t), \cos(2\pi t / T_{\text{HB}}), \sin(2\pi t / T_{\text{HB}}), \boldsymbol{\theta}; \mathbf{w}\big)$ for $t > 0$, with $\mathbf{z}(0) = \mathbf{z}_0$,

where $\mathbf{z}(t) \in \mathbb{R}^{N_z}$ is the reduced state and $\mathcal{F}_{\text{ANN}}$ is a fully connected ANN. The ANN input consists indeed of the $N_z$ state variables, the $n_\theta$ scalar parameters, the pressure $p(t)$, and the two periodic inputs $\cos(2\pi t / T_{\text{HB}})$ and $\sin(2\pi t / T_{\text{HB}})$ (whose role will be clarified later), for a total of $N_z + n_\theta + 3$ input neurons. The vector $\mathbf{w}$ encodes the weights and biases of the ANN, which need to be suitably trained. We remark that, among the arguments of the ANN, we have not introduced the time variable $t$, but rather $\cos(2\pi t / T_{\text{HB}})$ and $\sin(2\pi t / T_{\text{HB}})$, that are the coordinates of a point cyclically moving along a circumference with period $T_{\text{HB}}$. In this way, the ROM encodes by construction the periodicity associated with the heartbeat pacing. This expedient allows for the use of the ROM also for time spans longer than those seen during the training phase. Moreover, by introducing the parametric dependence (i.e. on $\boldsymbol{\theta}$) within the ANN, the latter is not specific to a particular parameter setting.

In this work, following [regazzoni2019modellearning], we adopt an output-inside-the-state approach, that is, we train the model so that the LV volume coincides with the first state variable. More precisely, the LV volume predicted by the ANN model is by definition $V_{\text{ANN}}(t) = \mathbf{e}_1 \cdot \mathbf{z}(t)$, where $\mathbf{e}_1$ is the first element of the canonical basis of $\mathbb{R}^{N_z}$. Therefore, the first ROM state variable has a clear physical interpretation (it coincides with the LV volume), while the other ROM states are latent variables with no immediate physical interpretation, providing however a compact representation of the full-order state $\mathbf{y}(t)$. Coherently with this choice, we define the first entry of the initial state $\mathbf{z}_0$ as the initial LV volume. The remaining initial states are set, without loss of generality, to zero (this choice does not reduce the space of candidate models, as proved in [regazzoni2019modellearning]).
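The effect of the two periodic inputs can be sketched in a few lines. The following snippet (illustrative only: the function name and the NumPy array layout are our own; the actual ROM is implemented in the modellearning MATLAB library) assembles the ANN input from the reduced state, the parameters, the pressure and the phase coordinates:

```python
import numpy as np

def ann_input(z, p, theta, t, T_HB):
    """Assemble the input vector of the ANN right-hand side.

    Instead of the raw time t, the two periodic coordinates
    cos(2*pi*t/T_HB) and sin(2*pi*t/T_HB) are fed to the network,
    so the learned dynamics are T_HB-periodic by construction.
    z: reduced state (N_z,), p: LV pressure (scalar),
    theta: model parameters (n_theta,), T_HB: heartbeat period.
    """
    phase = 2.0 * np.pi * t / T_HB
    return np.concatenate([z, theta, [p, np.cos(phase), np.sin(phase)]])
```

By construction the input vector is identical at times $t$ and $t + T_{\text{HB}}$, which is what allows the trained ROM to be run over time spans longer than the training simulations.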
Here we need to determine the optimal value of the weights $\mathbf{w}$, such that the ANN model reproduces the outputs of the EM model as accurately as possible. With this goal, we generate a training set by sampling the parameter space $\Theta$ with $N_s$ sample points. For each sample $\boldsymbol{\theta}_j$, we perform a simulation with the full-order model until time $T_j$, and we record the LV pressure and volume transients $p_j(t)$ and $V_j(t)$. The training set is thus given by the collection $\{(\boldsymbol{\theta}_j, p_j(t), V_j(t)),\ t \in [0, T_j]\}_{j=1}^{N_s}$. We remark that, due to the non-intrusive nature of our method, there is no need to retain the FOM states $\mathbf{y}(t)$. Finally, we train the ANN weights by minimizing the discrepancy between the training data and the ANN model outputs, that is, by considering the following constrained optimization problem
(4)  $\displaystyle \min_{\mathbf{w}} \ \sum_{j=1}^{N_s} \int_0^{T_j} \big| \mathbf{e}_1 \cdot \mathbf{z}_j(t) - V_j(t) \big|^2 \, \mathrm{d}t \ + \ \beta \, \| \mathbf{w} \|^2$, subject to Eq. (3) with input $p_j(t)$ and parameters $\boldsymbol{\theta}_j$ for each $j$,
where $\beta \geq 0$ is a regularization hyperparameter. We remark that the optimization problem (4) is not a standard machine learning problem of data fitting. Indeed, the ANN appears on the right-hand side of a differential equation that acts as a constraint under which the loss function is minimized. To train this model, we use the algorithm proposed in
[regazzoni2019modellearning], which envisages approximating the differential equation by the Forward Euler method and the loss function by the trapezoidal rule, and then computing the gradients by solving the adjoint equations. The weights are then optimized by means of the Levenberg-Marquardt method [nocedal2006numopt]. The training pipeline is summarized in Fig. 2.

Once the ANN has been trained (that is, the optimal weights have been determined), the ANN model can be used as a surrogate of the EM model, also for combinations of the parameters different from those contained in the training set. Moreover, it can be coupled with the closure relationship C, thus obtaining the ANN-C model. For example, by considering the case of a closed-loop circulation model, as in (2), its reduced counterpart reads
(5)  $\dot{\mathbf{z}}(t) = \mathcal{F}_{\text{ANN}}\big(\mathbf{z}(t), p(t), \cos(2\pi t / T_{\text{HB}}), \sin(2\pi t / T_{\text{HB}}), \boldsymbol{\theta}; \mathbf{w}\big)$, $\quad \dot{\mathbf{c}}(t) = \mathcal{F}_{\text{C}}(\mathbf{c}(t), p(t), t; \boldsymbol{\theta}_{\text{C}})$, $\quad \mathbf{e}_1 \cdot \mathbf{z}(t) = V_{\text{C}}(\mathbf{c}(t))$.
As a matter of fact, the ANN-C model has the same input-output structure as the EM-C model being surrogated, that is, the map from the parameters $(\boldsymbol{\theta}, \boldsymbol{\theta}_{\text{C}})$ to the QoIs. Hence, the former can be employed in place of the latter in approximating the outputs associated with a given set of parameters, as shown in Fig. 3.
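Conceptually, the online phase amounts to time-stepping two small ODE systems that exchange pressure and volume. The sketch below is a hypothetical explicit staggered scheme (the function names and the simple pressure-exchange coupling are our own simplifications, not the paper's actual implementation, which enforces the volume consistency as a constraint):

```python
import numpy as np

def simulate_coupled(f_ann, f_circ, pressure_of, z0, c0, dt, n_steps):
    """Explicit time-stepping of a coupled ROM-circulation system.

    f_ann(z, p)    : ANN-based reduced dynamics of the LV
    f_circ(c, V)   : right-hand side of the 0D circulation model,
                     driven by the LV volume V = z[0]
    pressure_of(c) : LV pressure read from the circulation state
    At each step the current pressure feeds the ROM and the current
    LV volume feeds the circulation model.
    """
    z, c = np.array(z0, float), np.array(c0, float)
    hist = []
    for _ in range(n_steps):
        p = pressure_of(c)
        hist.append((z[0], p))       # record LV volume and pressure
        z = z + dt * f_ann(z, p)     # advance the reduced LV model
        c = c + dt * f_circ(c, z[0]) # advance the circulation model
    return np.array(hist)
```

Since both systems are low dimensional, each heartbeat costs only a few thousand cheap right-hand-side evaluations, consistently with the near real-time execution reported in Sec. 3.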
2.3 Hyperparameter tuning
Like any machine learning algorithm, our method depends on a set of hyperparameters, that is, variables that are not trained (they are not part of the vector $\mathbf{w}$), but are used to control the training process. These include the ANN architecture hyperparameters (namely the number of layers and the number of neurons per layer), the regularization factor $\beta$ and the number of reduced states $N_z$. To tune the hyperparameters, we rely on a k-fold cross-validation procedure. Specifically, after splitting the training set into k non-overlapping subsets, we cyclically train the model by excluding one subset, which is used as validation set. Finally, we evaluate the average validation error and we select the hyperparameter setting that attains the lowest validation error and the best generalization properties (see Appendix A.4 for more details).
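The fold logic of this procedure can be sketched as follows (a generic k-fold loop; `train_and_score` is a placeholder for training the ROM on one split and returning its validation error):

```python
import numpy as np

def kfold_validation_error(train_and_score, samples, k):
    """Plain k-fold cross-validation over the training simulations.

    train_and_score(train_idx, val_idx) trains the model on the samples
    in train_idx and returns the validation error on val_idx; here we
    only sketch the fold bookkeeping. Returns the validation error
    averaged over the k folds.
    """
    idx = np.arange(len(samples))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        errors.append(train_and_score(train_idx, val_idx))
    return float(np.mean(errors))
```

The hyperparameter setting that minimizes this averaged error is then retained for the final training on the full training set.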
2.4 Global sensitivity analysis
To assess how much each parameter contributes to the determination of a QoI, e.g. a biomarker of clinical interest, we perform a global sensitivity analysis. This is typically done by sampling the parameter space and by computing suitable indicators, such as Borgonovo indices [plischke2013global], Sobol indices [sobol1990sensitivity, homma1996importance], Morris elementary effects [morris1991factorial], ANCOVA indices [xu2008uncertainty] and Kucherenko indices [kucherenko2012estimation]. In this work, we perform a variance-based sensitivity analysis, which relies on a probabilistic approach. We compute Sobol indices to quantify the sensitivity of a QoI (say $q_j$) with respect to a parameter (say $\theta_i$). Specifically, the so-called first-order Sobol index (denoted by $S_i^j$) indicates the impact on the j-th QoI (i.e. $q_j$) of the i-th parameter (i.e. $\theta_i$) when the latter varies alone, according to the definition

$S_i^j = \dfrac{\mathrm{Var}_{\theta_i}\big[ \mathbb{E}_{\boldsymbol{\theta}_{\sim i}}(q_j \mid \theta_i) \big]}{\mathrm{Var}(q_j)},$

where $\boldsymbol{\theta}_{\sim i}$ indicates the set of all parameters excluding the i-th one. The first-order Sobol index $S_i^j$, however, only accounts for the variations of the parameter $\theta_i$ alone, averaged over variations in the other parameters, and thus does not account for the interactions among parameters. Conversely, it is possible to assess the importance of a parameter in determining a QoI, also accounting for the interactions among parameters, by resorting to the so-called total-effect Sobol index $S_{T_i}^j$, defined as

$S_{T_i}^j = \dfrac{\mathbb{E}_{\boldsymbol{\theta}_{\sim i}}\big[ \mathrm{Var}_{\theta_i}(q_j \mid \boldsymbol{\theta}_{\sim i}) \big]}{\mathrm{Var}(q_j)}.$
The latter index quantifies the impact of a given parameter when it varies alone or together with other parameters [sobol1990sensitivity].
To compute an estimate of the Sobol indices, we use the Saltelli method [homma1996importance, saltelli2002making], which makes use of Sobol quasi-random sequences to approximate the integrals that need to be computed. In practice, this method requires evaluating the model for a large number of parameter values, and then processing the obtained QoIs to provide an estimate of the Sobol indices.
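As an illustration of the pick-freeze idea underlying the Saltelli method, the following sketch estimates first-order Sobol indices with plain Monte Carlo sampling; the paper relies on SALib with Sobol quasi-random sequences, so this NumPy version, with parameters uniform on the unit hypercube, is only a simplified stand-in:

```python
import numpy as np

def sobol_first_order(model, n, d, rng):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices.

    model : callable mapping an (n, d) array of parameter samples to
            n scalar QoI values.
    A and B are two independent sample matrices; AB_i replaces the
    i-th column of A with that of B (Saltelli's scheme).
    """
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S1 = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]
        # Saltelli-type estimator of Var(E[q | theta_i]) / Var(q)
        S1[i] = np.mean(fB * (model(AB) - fA)) / var
    return S1
```

For a QoI depending only on the first parameter, the estimator returns values close to $[1, 0, \dots]$, consistently with the definition of $S_i^j$.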
In this paper we perform a variance-based global sensitivity analysis simultaneously with respect to the circulation model parameters $\boldsymbol{\theta}_{\text{C}}$ and the electromechanical model parameters $\boldsymbol{\theta}$. To this end, we use the ANN-C model as a surrogate for the EM-C model to perform the evaluations required by the Saltelli method. As we will see in Sec. 3, using the former instead of the latter entails a huge computational gain. In addition, for each parameter setting we simulate a certain number of heartbeats to achieve a limit cycle (i.e. a periodic solution), and we calculate the QoIs on the last cycle, which is the most significant one (as it removes the effects of incorrect initializations of the state variables). We remark that the need to simulate a certain number of heartbeats to get past the transient phase (which typically lasts from 5 to 15 cycles, see Appendix A.2) makes the use of a reduced cardiac electromechanical model even more necessary.
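The limit-cycle logic can be sketched as a fixed-point iteration over heartbeats (illustrative only: `simulate_beat` is a placeholder for advancing the coupled reduced model by one period, and the relative-norm stopping criterion is our own choice):

```python
import numpy as np

def run_to_limit_cycle(simulate_beat, state0, tol=1e-3, max_beats=30):
    """Simulate heartbeats until a limit cycle (periodic solution) is reached.

    simulate_beat(state) advances the coupled model by one heartbeat and
    returns the state at the end of the beat; convergence is declared when
    the state change over one beat drops below tol in relative norm.
    The QoIs are then computed on the last simulated beat.
    """
    state = np.asarray(state0, float)
    for beat in range(1, max_beats + 1):
        new_state = simulate_beat(state)
        if np.linalg.norm(new_state - state) <= tol * np.linalg.norm(new_state):
            return new_state, beat
        state = new_state
    return state, max_beats
```

With a transient of 5 to 15 cycles per parameter sample, the cost of a full Saltelli campaign with the FOM would be prohibitive, which is precisely the point of the surrogate.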
2.5 Parameter estimation under uncertainty
The patient-specific personalization of a cardiac electromechanical model requires, besides the usage of a geometry derived from imaging data, the estimation of the parameters associated with the mathematical model (or at least the most important ones), starting from clinical measurements. Very often, only a few scalar quantities are available for this purpose; moreover, the resolution of this inverse problem (i.e. estimating the parameters from the measured QoIs) should account for the noise that unavoidably affects the measurements and that translates into uncertainty on the estimated parameters.
Bayesian methods, such as MCMC [brooks1998markov] and Variational Inference [hoffman2013stochastic], make it possible to address these issues within a rigorous statistical framework, by providing a probability distribution of the parameter values, given the observed QoIs (denoted by $\hat{\mathbf{q}}$). Unlike methods that provide point estimates, either based on gradient-descent [nocedal2006numopt] or genetic [holland1992genetic] optimization algorithms, Bayesian methods are able to provide a probability distribution on the parameter space, encoding the credibility of each parameter combination. The credibility computation accounts for the uncertainty on measurements (associated with measurement noise and encoded in the noise covariance matrix $\Sigma_{\text{exp}}$), as well as a prior distribution on the parameters (denoted by $\pi_{\text{prior}}(\boldsymbol{\theta})$), that is, previous knowledge or belief about the parameters. By denoting the parameters-to-QoIs map by $\mathbf{G}$, we have $\hat{\mathbf{q}} = \mathbf{G}(\boldsymbol{\theta}) + \boldsymbol{\varepsilon}_{\text{exp}}$, where $\boldsymbol{\varepsilon}_{\text{exp}}$ denotes the measurement error (that we assume for simplicity to be distributed as a Gaussian random variable). Bayes' theorem states that the
posterior distribution of the parameters, that is, the degree of belief in their value after having observed $\hat{\mathbf{q}}$, is given by

$\pi_{\text{post}}(\boldsymbol{\theta} \mid \hat{\mathbf{q}}) = \dfrac{\pi(\hat{\mathbf{q}} \mid \boldsymbol{\theta}) \, \pi_{\text{prior}}(\boldsymbol{\theta})}{Z},$

where the normalization constant is defined as

$Z = \displaystyle\int_{\Theta} \pi(\hat{\mathbf{q}} \mid \boldsymbol{\theta}) \, \pi_{\text{prior}}(\boldsymbol{\theta}) \, \mathrm{d}\boldsymbol{\theta}.$
In practice, the computation of $\pi_{\text{post}}$ may be challenging from the computational viewpoint, because of the need to approximate the integral that defines $Z$. The MCMC method makes it possible to approximate the posterior distribution with a relatively small computational effort. Similarly to the Saltelli method that we use for sensitivity analysis, the MCMC method also requires a large number of model evaluations, for different parameter values. Moreover, this method is non-intrusive, that is, it only requires evaluations of the map $\mathbf{G}$. Therefore, we can employ for this purpose the ANN-C model as a surrogate for the EM-C model, which drastically reduces the necessary computational time.
Moreover, we remark that the Bayesian framework makes it possible to rigorously account for the approximation error introduced by replacing the high-fidelity model with its surrogate. Indeed, if we denote by $\tilde{\mathbf{G}}$ the approximate parameters-to-QoIs map represented by the surrogate model, we have $\tilde{\mathbf{G}}(\boldsymbol{\theta}) = \mathbf{G}(\boldsymbol{\theta}) + \boldsymbol{\varepsilon}_{\text{ROM}}$, where $\boldsymbol{\varepsilon}_{\text{ROM}}$ is the ROM approximation error. It follows that $\hat{\mathbf{q}} = \tilde{\mathbf{G}}(\boldsymbol{\theta}) + \boldsymbol{\varepsilon}$, where $\boldsymbol{\varepsilon} = \boldsymbol{\varepsilon}_{\text{exp}} - \boldsymbol{\varepsilon}_{\text{ROM}}$ collects the experimental measurement error and the ROM error. Since the two sources of error can be assumed independent, the covariance of the total error satisfies $\Sigma = \Sigma_{\text{ROM}} + \Sigma_{\text{exp}}$, where $\Sigma_{\text{ROM}}$ is the ROM approximation error covariance (which can be estimated by evaluating the ROM on a testing set) and $\Sigma_{\text{exp}}$ is the experimental error covariance (which depends on the measurement protocol at hand). This makes it possible to take into account in the estimation process the error introduced by the surrogate model.
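As a sketch of how the total covariance enters the estimation, the snippet below evaluates a Gaussian log-posterior built on the surrogate map and samples it with a minimal random-walk Metropolis algorithm (a simplified stand-in for the UQpy samplers used in the paper; all function names are our own):

```python
import numpy as np

def log_posterior(theta, q_obs, G_rom, Sigma_rom, Sigma_exp, log_prior):
    """Gaussian log-posterior accounting for the ROM approximation error.

    The total error covariance is Sigma = Sigma_rom + Sigma_exp, since
    the surrogate error and the measurement error are assumed
    independent. G_rom maps parameters to QoIs through the surrogate.
    """
    Sigma = Sigma_rom + Sigma_exp
    r = q_obs - G_rom(theta)
    return log_prior(theta) - 0.5 * r @ np.linalg.solve(Sigma, r)

def metropolis(log_post, theta0, step, n_iter, rng):
    """Random-walk Metropolis sampler of the posterior distribution."""
    theta = np.array(theta0, float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject step
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```

Each posterior evaluation requires one surrogate simulation only, which is what makes chains of tens of thousands of samples affordable.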
To assess the capability of our ROM to accelerate the estimation of parameters for multiscale cardiac electromechanical models, we perform the following test. First, we perform a simulation with the EM-C model, from which we derive a set of QoIs $\hat{\mathbf{q}}$. Then, by employing the ANN-C model as a surrogate of the EM-C model, we obtain a Bayesian estimate of the parameters, which we validate against the values used to generate $\hat{\mathbf{q}}$.
2.6 The cardiac electromechanical model
As mentioned above, due to its non-intrusive nature, our method is not limited to a specific cardiac electromechanical model, but can be applied to virtually any electromechanical model, as long as pressure and volume transients are available for the training procedure. In this section, we briefly introduce the specific model used to produce the numerical results of this paper.
We consider an LV geometry processed from the Zygote 3D heart model [zygote], endowed with a fiber architecture generated by means of the Bayer-Blake-Plank-Trayanova algorithm [bayer2012novel, piersanti2020modeling]. Before starting the simulations, we recover the reference unstressed configuration through the algorithm that we proposed in [regazzoni2021em_circulation_pt1]. To model the action potential propagation, we employ the monodomain equation [collifranzone2014book], coupled with the ten Tusscher-Panfilov ionic model [ten2006alternans]. We model the microscale generation of active force through the biophysically detailed RDQ20-MF model [regazzoni2020biophysically], which is coupled, within an active stress approach, with the elastodynamics equations describing tissue mechanics. On the other hand, the passive behavior of the tissue is modeled through a quasi-incompressible exponential constitutive law [usyk2002computational]. We model the interaction with the pericardium by means of spring-damper boundary conditions at the epicardium, while we adopt energy-consistent boundary conditions [regazzoni2019morsarcomeres] to model the interaction with the part of the myocardium beyond the artificial ventricular base. To model blood circulation, that is the C model, we rely on the 0D closed-loop model presented in [regazzoni2021em_circulation_pt1], consisting of a compartmental description of the cardiac chambers and of the systemic and pulmonary, arterial and venous circulatory networks, based on an electrical analogy. The different compartments are modeled as RLC (resistance, inductance, capacitance) circuits, while cardiac valves are described as diodes.
Among the parameters associated with the EM model, in this work we focus on the four parameters reported in Tab. 1. Similarly, we report in Tab. 2 the parameters associated with the C model.
Parameter  Baseline  Unit  Description 

160.0  Cardiomyocytes contractility  
76.43  Electrical conductivity along fibers  
60.0  degrees  Fibers angle rotation  
0.88  Passive stiffness 
Parameter  Baseline  Unit  Description 

400.15  Initial blood pool volume of the heart  
, ,  0.07, 0.06, 0.55  LA/RA/RV active elastance  
, ,  0.18, 0.07, 0.05  LA/RA/RV passive elastance  
, ,  0.14, 0.14, 0.20  LA/RA/RV contraction time  
, ,  0.14, 0.14, 0.32  LA/RA/RV relaxation time  
, ,  4.0, 4.0, 16.0  LA/RA/RV reference volume  
,  0.16, 0.16  Left/Right atrioventricular delay  
,  0.0075, 75006.2  Valve minimum/maximum resistance  
,  0.64, 0.32  Systemic arterial/venous resistance  
,  0.032, 0.036  Pulmonary arterial/venous resistance  
,  1.2, 60.0  Systemic arterial/venous capacitance  
,  10.0, 16.0  Pulmonary arterial/venous capacitance  
,  0.005, 0.0005  Systemic arterial/venous inductance  
,  0.0005, 0.0005  Pulmonary arterial/venous inductance 
We report in Tab. 3 the list of all the QoIs, computed from the solution of the coupled model, and the outputs of interest used throughout this paper. The last column of the table indicates which variables are used for cross-validation during the training phase (see Sec. 2.3), which are used for sensitivity analysis purposes (see Sec. 2.4) and which are used to run Bayesian parameter estimations (see Sec. 2.5).
Parameter  Unit  Description  Usage 

Left Atrium  
Endsystolic and enddiastolic volume  GSA  
Minimum and maximum pressure  GSA  
Left ventricle  
Volume transient  Xvalidation  
Pressure transient  Xvalidation  
Endsystolic and enddiastolic volume  Xvalidation, GSA  
Minimum and maximum pressure  Xvalidation, GSA  
Stroke volume ()  Xvalidation, GSA  
Right Atrium  
Endsystolic and enddiastolic volume  GSA  
Minimum and maximum pressure  GSA  
Right ventricle  
Endsystolic and enddiastolic volume  GSA  
Minimum and maximum pressure  GSA  
Stroke volume ()  GSA  
Systemic arterial circulation  
Minimum and maximum pressure  GSA, MCMC 
To numerically approximate this multiphysics and multiscale model, we adopt the segregated approach proposed in [regazzoni2021em_circulation_pt2], by which the subproblems are solved sequentially. For space discretization, we rely on bilinear Finite Elements defined on hexahedral meshes, adopting a different spatial resolution for the electrophysiological and the mechanical variables. For time discretization, we employ a staggered scheme, where different time steps are used according to the specific subproblem. Moreover, to avoid the numerical oscillations arising from the mechanical feedback on force generation (which are commonly cured by resorting to a monolithic scheme [levrero2020sensitivity, pathmanathan2010cardiac]), we use the stabilized staggered scheme that we proposed in [regazzoni2020oscillation]. This numerical model requires, on a 32-core computer platform, nearly 4 hours of computational time to simulate a heartbeat. More details on the numerical discretization and the computational platform employed to generate the training data used in this paper are available in Appendix A.1.
2.7 Software libraries
The cardiac electromechanics simulations considered in this paper are performed by means of the life^x library (https://lifex.gitlab.io), a high-performance C++ platform developed within the iHEART project (iHEART - An Integrated Heart Model for the simulation of the cardiac function, European Research Council (ERC) grant agreement No 740132, P.I. Prof. A. Quarteroni). To train the ANN-based models, we employ the open source MATLAB library modellearning (https://modellearning.readthedocs.io/), which implements the machine learning method proposed in [regazzoni2019modellearning] and used in this paper. Sensitivity analysis is carried out through the open source Python library SALib (https://salib.readthedocs.io/) [Herman2017]. Finally, for the MCMC-based Bayesian parameter estimation we rely on the open source Python library UQpy (https://uqpyproject.readthedocs.io/) [UQpy2020]. To employ the ANN-based models trained with the MATLAB library modellearning within the Python environment of SALib and UQpy, we exploit pyModelLearning, a Python wrapper for the modellearning library, which is available in its GitHub repository (https://github.com/FrancescoRegazzoni/modellearning).

Both the MATLAB codes that are used to train the ANN-based ROMs and the datasets that permit to reproduce the numerical results are publicly available in the online repository accompanying this manuscript (https://github.com/FrancescoRegazzoni/cardioEMlearning).
3 Results
3.1 Trained models
To generate the pressure and volume transients needed to train an ANN-based ROM, we sample the parameter space with a Monte Carlo approach, even if more sophisticated sampling strategies – such as Latin Hypercube Design – can be considered as well. For simplicity, in this stage we only consider a subset of the parameters, selected as the most significant ones on the basis of a preliminary variance-based global sensitivity analysis, obtained with a version of the closed-loop model in which the LV is also represented by a 0D circuit element (see Appendix A.3).
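The Monte Carlo sampling of the parameter space can be sketched as follows (the function name and the bounds in the example are illustrative, not the ranges actually used in the paper):

```python
import numpy as np

def sample_parameters(bounds, n, rng):
    """Uniform Monte Carlo sampling of the parameter space.

    bounds is a list of (low, high) pairs, one per parameter; each row
    of the returned (n, d) array is a parameter vector used to generate
    one FOM training simulation.
    """
    lo, hi = np.array(bounds, float).T
    return lo + (hi - lo) * rng.random((n, len(bounds)))
```

Each sampled row is then fed to the full-order coupled model to produce one pressure-volume transient of the training set.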
We consider two scenarios. First, we study the variability with respect to a single parameter, namely the active contractility. Under this setting, we generate 30 numerical simulations through the full-order coupled model to train a first, one-parameter ROM. For this ROM, the remaining parameters of Tab. 1 are kept constant (more precisely, equal to their baseline values). Then, we consider the full parametric variability (that is, all four parameters of Tab. 1), we generate 40 training samples and we train a second, four-parameter ROM. All the numerical simulations included in the training set are 5 heartbeats long. The optimal sets of hyperparameters, tuned through the algorithm of Sec. 2.3, are reported in Tab. 4. In both the considered cases, training an ANN-based model takes approximately 18 hours on a single core of a standard laptop.
Trained model  Parameters  Training set size  Hyperparameters  

30  2  1  8  0  
40  1  1  12 
Once trained, the two ROMs (one-parameter and four-parameter) can be coupled with the circulation model, thus obtaining two reduced coupled models, according to Eq. (5). These two models represent two surrogates of the full-order coupled model, capable of approximating its output at a dramatically reduced computational cost. As a matter of fact, numerical simulations with either reduced coupled model take nearly one second of computational time per heartbeat on a single core of a standard laptop.
To test the accuracy of the two reduced coupled models with respect to the full-order coupled model, we consider a testing dataset, obtained by taking unobserved samples in the parameter space. For both models, we consider 15 testing simulations of the same duration as the ones included in the training set. In addition, to test the reliability of the ROMs over a longer time horizon than the one considered in the training set, we include in the testing set 5 simulations of double length (i.e. 10 heartbeats). Then, we compare the simulations obtained with the two ROMs with the ones obtained with the full-order coupled model for the same parameters.
The accuracy of the two ROMs is summarized in Tabs. 5 and 6. Specifically, in Tab. 5 we report metrics regarding the ability of the ROMs to correctly predict the function of the LV, that is, the chamber surrogated by the ANN-based model. In particular, we report the relative errors in the L2 norm associated with the pressure and volume transients, and the relative errors on some biomarkers of clinical interest (maximum and minimum pressures and volumes). For the latter, we also report the coefficient of determination R^2. We notice that both the one-parameter and the four-parameter reduced models are able to surrogate the full-order coupled model with remarkably good accuracy. The four-parameter model features slightly larger errors than the one-parameter one; this is not surprising, since it explores a much larger parametric space (four parameters instead of one). Interestingly, the errors obtained over a long time horizon are very similar to those obtained for simulations of the same duration as those used to train the ANNs. This demonstrates the reliability of the ROMs in long-term simulations. Similarly, in Tab. 6 we report the errors and R^2 coefficients associated with the RV outputs (for simplicity, we here consider only simulations 5 heartbeats long). The accuracy obtained in reproducing the RV function is even better than that for the LV, coherently with the fact that the RV is included in the circulation model and thus it is not surrogated by the ANN-based model.
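For reference, the error metrics reported in the tables can be computed as follows (a sketch: the relative L2 error over a transient and the coefficient of determination in percent, assuming the biomarkers of the testing simulations are collected in NumPy arrays):

```python
import numpy as np

def relative_l2_error(x_rom, x_fom):
    """Relative error in the L2 norm between ROM and FOM transients."""
    return np.linalg.norm(x_rom - x_fom) / np.linalg.norm(x_fom)

def r_squared(q_rom, q_fom):
    """Coefficient of determination R^2 for a scalar biomarker,
    computed over the testing simulations (in percent, as in the tables)."""
    ss_res = np.sum((q_fom - q_rom) ** 2)
    ss_tot = np.sum((q_fom - np.mean(q_fom)) ** 2)
    return 100.0 * (1.0 - ss_res / ss_tot)
```

A perfect surrogate yields a zero relative error and an R^2 of 100%, matching the best entries of Tabs. 5 and 6.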
5 heartbeats

vs  relative error  0.0336  0.0090  0.0097  0.0046  0.0139  0.0035
R^2 [%]  99.691  99.864  99.896  99.948
vs  relative error  0.0620  0.0285  0.0517  0.0272  0.0471  0.0127
R^2 [%]  94.370  95.302  95.942  97.061
10 heartbeats
vs  relative error  0.0293  0.0071  0.0113  0.0037  0.0096  0.0031
R^2 [%]  99.924  99.980  99.851  99.944
vs  relative error  0.0631  0.0265  0.0442  0.0147  0.0382  0.0122
R^2 [%]  92.227  99.957  99.229  99.063
5 heartbeats

vs  relative error  0.0048  0.0035  0.0015  0.0027  0.0028  0.0004
R^2 [%]  100.000  99.993  99.998  100.000
vs  relative error  0.0069  0.0040  0.0029  0.0079  0.0072  0.0069
R^2 [%]  99.994  99.807  99.819  99.997
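For reference, the metrics reported in the tables above (relative errors on the transients and coefficient of determination on the scalar biomarkers) can be computed as in this minimal sketch; the function names are ours, not from the paper:

```python
import math

def relative_l2_error(fom, rom):
    """Relative error in the discrete L2 norm between a FOM transient
    and the corresponding ROM transient (sequences of equal length)."""
    num = math.sqrt(sum((f - r) ** 2 for f, r in zip(fom, rom)))
    den = math.sqrt(sum(f ** 2 for f in fom))
    return num / den

def r_squared(fom_biomarkers, rom_biomarkers):
    """Coefficient of determination R^2 for scalar biomarkers
    (e.g. max/min pressures and volumes over the testing set)."""
    mean_f = sum(fom_biomarkers) / len(fom_biomarkers)
    ss_res = sum((f - r) ** 2 for f, r in zip(fom_biomarkers, rom_biomarkers))
    ss_tot = sum((f - mean_f) ** 2 for f in fom_biomarkers)
    return 1.0 - ss_res / ss_tot
```

Multiplying the R^2 value by 100 gives the percentage form used in the tables.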
In Figs. 4 and 5 we compare 10-heartbeat-long transients obtained with the and models, respectively, to those obtained with the model. None of these transients were used to train the ANN-based models (that is, they all belong to the testing set). In Fig. 6 we show the LV biomarkers predicted by the and models versus those predicted by the model.
3.2 Coupling the electromechanical reduced-order model with different circulation models
To generate the data used to train the model, we employ the coupled model (that is, we couple the model with the circulation model ). However, the ANN-based ROM surrogates the model regardless of its coupling with a specific circulation model. In fact, thanks to Eq. (5), the trained model can be coupled with circulation models that are different from the one used to generate the training data. We recall that predictions are reliable only if pressure and volume values lie inside the ranges explored during the training phase.
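As an illustration of this modularity, the sketch below couples a surrogate chamber model with an arbitrary pressure closure supplied as a function. The right-hand side here is a placeholder standing in for the trained ANN (its inputs, the parameter vector and all numerical values are assumptions of ours); periodicity is enforced by feeding the heartbeat phase through cosine and sine, in the spirit of Eq. (3):

```python
import math

def ann_rhs(z, p_lv, params, t, T_hb):
    """Placeholder for the trained ANN right-hand side of the reduced
    model: maps (state, pressure, parameters, heartbeat phase) to dz/dt.
    The real network is learned from FOM transients; this toy version
    merely has the same interface."""
    phase = 2.0 * math.pi * t / T_hb
    return [-z[0] + p_lv + params[0] * math.cos(phase),
            -z[1] + params[0] * math.sin(phase)]

def simulate(circulation_pressure, params, T_hb=0.8, n_beats=10, dt=1e-3):
    """Couple the surrogate chamber with *any* circulation closure
    p_lv(t, V_lv) passed as a function; explicit Euler in time."""
    z = [0.0, 0.0]              # reduced state; z[0] plays the role of V_lv
    volumes = []
    n_steps = round(n_beats * T_hb / dt)
    for k in range(n_steps):
        t = k * dt
        p_lv = circulation_pressure(t, z[0])   # closure may differ from training
        dz = ann_rhs(z, p_lv, params, t, T_hb)
        z = [zi + dt * dzi for zi, dzi in zip(z, dz)]
        volumes.append(z[0])
    return volumes

# e.g. a simple linear closure instead of the training-phase closed-loop model:
vols = simulate(lambda t, v: 10.0 - 0.5 * v, params=[1.0])
```

The key point is that `circulation_pressure` is an argument, not baked into the surrogate, so the same trained right-hand side can be driven by different circulation models.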
We demonstrate the flexibility of our approach by coupling the model with a pressure-volume closure relationship that is different from the closed-loop circulation model used during the training phase (see Sect. 2.6). Specifically, we consider a circulation model with a windkessel-type relationship during the ejection phase and a linear pressure ramp during the filling phase [regazzoni2019morsarcomeres]. In spite of the different nature of the two circulation models (the one used during training and the one used during testing), the ANN-based ROM trained with the model proves to be reliable also when it is coupled with the model. Indeed, as shown in Tab. 7, the results obtained by the coupled model approximate those of the coupled model with excellent accuracy. These errors are indeed comparable to the ones obtained by surrogating the model with the model (see Tab. 5).
vs  relative error  0.0459  0.0518  0.0065  0.0004  0.0080  0.0009 

vs  relative error  0.0510  0.0219  0.0116  0.0027  0.0110  0.0063 
3.3 Global sensitivity analysis
Once we have verified that models and are able to reproduce the outputs of model with high accuracy, we use them to perform a global sensitivity analysis. The aim is to determine which of the parameters of the circulation model () and the electromechanical model () contribute the most to the determination of a list of outputs of interest, the so-called QoIs (see Tab. 3). For this purpose, we compute the Sobol indices and , as described in Sec. 2.4. When replacing model with a ROM, we can only estimate sensitivity indices with respect to the parameters that were considered during the training phase. On the other hand, we can compute sensitivity indices with respect to all parameters of the circulation model, even those that were not varied during training. It would even be possible – at least in principle – to consider a circulation model different from the one used to generate the training data. In fact, as pointed out in Sec. 2.2, the and models surrogate the model independently of the circulation model to which it is coupled.
We only report the results obtained by means of the most complete ROM, that is . The results are shown in Figs. 7 and 8, respectively. First, we notice that the and indices differ only slightly from each other. This means that the interaction among the different parameters is less significant, in the determination of the QoIs, than the variation of the individual parameters. Furthermore, we note that, as expected, the QoIs associated with a given chamber or compartment are mostly determined by the parameters associated with the same region of the cardiovascular network. However, there are some important exceptions. Indeed, the venous resistance of the systemic circulation has a strong impact on the right heart (RA and RV), i.e. the part of the network located immediately upstream. In addition, the systemic arterial resistance significantly impacts the maximum LV pressure. In fact, this parameter is known to contribute to the determination of the so-called afterload [tortora2008]. We also note that the total circulating blood volume has a major impact on almost all biomarkers. Conversely, the parameters associated with the pulmonary circulation network have a minimal impact. In addition, the parameters describing the resistance of open and closed valves also have very little impact. This is an interesting result, since it shows that these parameters, set to a very low and a very high value respectively (since, for reasons of numerical stability, they cannot be set equal to zero and infinity), have virtually no impact (at least within the ranges considered here) on the output quantities of biomechanical interest.
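For reference, first-order and total-effect Sobol indices of the kind discussed above can be estimated with a Saltelli-type Monte Carlo scheme. The sketch below is a toy stand-in for the ROM-based computation (independent uniform inputs on [0, 1], standard pick-and-freeze estimators); the test model and all names are ours:

```python
import random

def sobol_indices(model, n_params, N=20000, seed=0):
    """Monte Carlo estimators of first-order (S1) and total-effect (ST)
    Sobol indices for a scalar model with independent uniform(0,1)
    inputs, using the classic A/B/AB_i pick-and-freeze construction."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(N)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(N)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA) / N
    var = sum((y - mean) ** 2 for y in yA) / N
    S1, ST = [], []
    for i in range(n_params):
        # AB_i: matrix A with column i taken from B
        AB = [A[k][:i] + [B[k][i]] + A[k][i + 1:] for k in range(N)]
        yAB = [model(x) for x in AB]
        S1.append(sum(yB[k] * (yAB[k] - yA[k]) for k in range(N)) / N / var)
        ST.append(0.5 * sum((yA[k] - yAB[k]) ** 2 for k in range(N)) / N / var)
    return S1, ST

# Linear toy model Y = 4*X1 + X2: analytically S1 = ST = [16/17, 1/17].
S1, ST = sobol_indices(lambda x: 4.0 * x[0] + x[1], n_params=2)
```

Because the toy model has no parameter interactions, S1 and ST coincide, mirroring the observation above that small S1/ST gaps indicate weak interactions.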
Regarding individual compartments, we note that the variability can be explained by only a few parameters. Specifically, the atria are mainly influenced by passive stiffness and atrioventricular delay, whereas for the RV the most relevant parameters are active and passive stiffness. As expected, the biomarkers associated with the LV – the only chamber included in the model – are mainly influenced by the parameters . Among these, the most influential one is the active contractility , followed by the fibers orientation and, to a lesser extent, the passive stiffness . Finally, the electrical conductivity has a minimal impact on the biomarkers under consideration.
It is important to note that Sobol indices are affected by the amplitude of the ranges in which the parameters are varied. In particular, the wider the range associated with a parameter, the greater the associated Sobol indices will be, as the parameter in question potentially generates greater variability in the QoI. Therefore, the results shown here are valid for the specific ranges we used, which are reported in Appendix A.5.
3.4 Bayesian parameter estimation
In this section we present a further practical use of the cardiac electromechanics ROM presented in this paper. In particular, we show that the ROM can be used to enable Bayesian parameter estimation for the model, which, due to the prohibitive computational cost, would not be affordable without the use of a ROM.
First, we consider a prescribed value for the parameter vector (specifically, we employ the values reported in Tabs. 1 and 2) and we run a simulation using the model. Based on the output of this simulation, we consider a pair of QoIs, consisting of the minimum and maximum arterial pressure (i.e. we set ). The choice of these two QoIs is motivated by the fact that they can be measured noninvasively and are, in fact, often monitored in clinical routine. We reconstruct the value of a pair of parameters, namely the active contractility and the systemic arterial resistance , assuming known and keeping fixed the values of the remaining parameters. Indeed, we aim at demonstrating that the ROM we propose is suitable for estimating parameters of both the model (such as ) and of the model (such as ). Specifically, in this section we rely on the model.
To mimic the presence of measurement errors, we artificially add to the exact values of the QoIs and a synthetic noise, with increasing magnitude. Specifically, we add an artificial noise by sampling from independent Gaussian variables with zero mean and with variance . We consider three cases: (i.e. no noise), and .
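The procedure of corrupting the exact QoIs with Gaussian noise and then sampling the posterior can be sketched with a plain random-walk Metropolis sampler. The QoI map below is a hypothetical linear stand-in for the ROM-based (contractility, resistance) to (minimum, maximum arterial pressure) map; all names, coefficients and noise levels are ours, not the paper's:

```python
import math
import random

def metropolis(qoi_map, data, sigma, bounds, n_iter=20000, step=0.05, seed=0):
    """Random-walk Metropolis sampler: uniform prior on `bounds`,
    independent Gaussian likelihood with standard deviation `sigma`
    (which, as in Sec. 2.5, should lump together measurement and
    ROM-approximation errors)."""
    rng = random.Random(seed)

    def log_post(theta):
        if any(not (lo <= th <= hi) for th, (lo, hi) in zip(theta, bounds)):
            return -math.inf          # outside the uniform prior support
        q = qoi_map(theta)
        return -0.5 * sum((qi - di) ** 2 for qi, di in zip(q, data)) / sigma ** 2

    theta = [0.5 * (lo + hi) for lo, hi in bounds]   # start at prior midpoint
    lp, chain = log_post(theta), []
    for _ in range(n_iter):
        prop = [th + step * rng.gauss(0, 1) for th in theta]
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:    # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# Hypothetical stand-in for the surrogate QoI map, plus synthetic noise:
true_theta = [0.5, 0.5]
qoi = lambda th: (50.0 + 40.0 * th[0] + 30.0 * th[1],
                  80.0 + 60.0 * th[0] + 50.0 * th[1])
noise = random.Random(1)
data = [q + noise.gauss(0, 0.5) for q in qoi(true_theta)]
chain = metropolis(qoi, data, sigma=0.5, bounds=[(0, 1), (0, 1)])
```

Credibility regions and parameter correlations (such as the oblique regions discussed below) can then be read off the empirical distribution of `chain` after discarding a burn-in.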
As described in Sec. 2.5, we use the MCMC method to derive a Bayesian estimate of the parameters' values from (possibly noisy) measurements of the minimum and maximum arterial pressure. For both unknown parameters we employ a noninformative prior, that is, a uniform distribution on the ranges used to train the ROM ( and ). According to Sec. 2.5, we set , where the experimental measurement error covariance is given by (being the 2-by-2 identity matrix) and where the ROM approximation error covariance is estimated from its statistical distribution on the validation set as . More details on the MCMC setup are available in Appendix A.6.
In Fig. 9 we show the posterior distribution of the parameter pair (, ) obtained for the three noise levels considered. We identify with a red line the 90% credibility region, that is, the region in the parameter space with largest posterior probability such that it covers 90% of . We notice that, for each noise level, the credibility region contains the exact value of the parameters (namely and ), represented by a red star. As expected, for larger values of noise, the size of the credibility region increases (that is, the estimate is more uncertain). As a matter of fact, an advantage of Bayesian parameter estimation methods, compared to deterministic methods, is their ability to quantify the uncertainty associated with the parameter estimates. A further feature of Bayesian methods lies in capturing correlations among the estimated parameters. Indeed, this aspect emerges clearly from the oblique shape of the credibility regions. This is due to the fact that an increase in or a decrease in leads to similar changes in terms of the measured QoIs ( and ), thus making their posterior distributions highly correlated.
4 Discussion
In this paper, we develop a machine learning method to build ANN-based ROMs of 3D cardiac electromechanical models. Thanks to the reduced-order differential equations learned through our algorithm, it is possible to approximate the cardiac dynamics in terms of pressure and volume transients with great fidelity and a huge computational saving. As a matter of fact, once trained, the ANN-based ROM makes it possible to simulate a heartbeat virtually in real time (about one second of numerical simulation for a heartbeat on a standard laptop), whereas the original electromechanical model requires about four hours per heartbeat on a supercomputer with 32 cores. By taking into account the number of cores, our ANN-based ROM yields an impressive speedup of 460'000x (see Fig. 10). Clearly, a fair evaluation of the computational saving should also consider the time required to generate the training dataset and to train the model. To this aim, we consider two test cases, corresponding to the examples of global sensitivity analysis and Bayesian parameter estimation presented in Secs. 2.4 and 2.5, respectively. In both cases, we assume that 5 computer nodes with 32 cores each are available, and we compare the total computational times using either the or the model with these computational resources.
The global sensitivity analysis presented in Sec. 2.4 requires the numerical simulation of 74'000 parameter cases. On average, the limit cycle is reached in 10 heartbeats, for a total of 740'000 heartbeats to be simulated. Running this huge amount of numerical simulations with the FOM would not be feasible, as it would require about 68 years of uninterrupted use of the 5 nodes endowed with 32 cores. On the contrary, by virtue of our machine learning method, we are able to perform a global sensitivity analysis, albeit with a small approximation in the numerical results (see Tabs. 5 and 6), in 7.5 days (6.7 days to generate the training dataset, 18 hours to train the model and 1 hour and 17 minutes to perform the sensitivity analysis using the model). Therefore, taking into account the time required to build the ROM, our approach yields a 3'300x speedup (see Fig. 11). Bayesian parameter estimation, on the other hand, requires the numerical simulation of approximately 960'000 heartbeats. In this case, despite using only 20 cores to run the MCMC, we are able to obtain a result in just 6 days and 8 hours (including the generation of the training dataset), whereas with the FOM it would take more than 87 years. We thus obtain an overall speedup of 5'000x (see Fig. 12). Moreover, we notice that, in case we need to execute a Bayesian calibration for different data, it is not necessary to repeat the training of the ANN-based model; it is sufficient to rerun only the MCMC algorithm, which takes – thanks to our ROM – only 13 hours and 20 minutes. Finally, we remark that the cost required to train the ANN or to perform numerical simulations with the model does not depend on the specific model at hand, as it is only based on the generated pressure and volume transients. In particular, it is not expected to increase if the biophysical detail or the number of degrees of freedom of the computational mesh increases.
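The speedup bookkeeping above can be checked with a back-of-the-envelope calculation; the figures are taken from the text and slightly rounded:

```python
# Per-heartbeat comparison (Fig. 10): ~4 h on 32 cores for the FOM,
# ~1 s on a laptop for the ROM, counting core-time.
fom_core_seconds_per_beat = 4 * 3600 * 32
rom_seconds_per_beat = 1
per_beat_speedup = fom_core_seconds_per_beat / rom_seconds_per_beat  # ~460'000x

# End-to-end sensitivity analysis (Fig. 11): ~68 years of FOM time on the
# same resources vs 6.7 days (training dataset) + 18 h (ANN training)
# + ~1.3 h (online phase) for the ROM pipeline.
fom_days = 68 * 365.25
rom_days = 6.7 + 18 / 24 + 1.3 / 24
overall_speedup = fom_days / rom_days  # ~3'300x
```

The end-to-end figure is dominated by the 6.7 days of dataset generation, which is why keeping the training set small matters so much.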
The ROM proposed in this paper allows, as shown in Sec. 3, to tackle many-query problems, which would otherwise be unaffordable due to their computational cost. Indeed, sensitivity analyses of cardiac models have, at present, mainly addressed single cell models [sher2013local, tondel2015quantifying, pathmanathan2019comprehensive]. Sensitivity studies on 3D models of cardiac electromechanics have been mainly limited to simplified geometries [Hurtado2017], or to a few parameters, varied one at a time (that is, local sensitivity analysis) [land2012analysis, Marx2020]. To the best of our knowledge, the only global sensitivity study on a 3D electromechanical model was performed only recently in [levrero2020sensitivity]; however, the huge computational cost associated with the numerical approximation of the forward model dictated the use of an idealized ellipsoidal geometry and of single-heartbeat simulations, thus without letting the system reach a periodic limit cycle. This leads to results with limited accuracy. Furthermore, parameter estimation under uncertainty using 3D models of cardiac electromechanics is hard to attain, due to the computational cost that would be required by Bayesian methods such as MCMC. Indeed, parameters are typically estimated sequentially by using minimization methods [land2012analysis, wang2009modelling, finsberg2018estimating] or ad hoc heuristics [Marx2020, costa2013automatic]. An alternative family of methods, able to provide an estimate of the uncertainty, is that of sequential methods of the Kalman filter type (see e.g. [xi2011myocardial, Marchesseau2013]), which, however, would require an invasive implementation in the electromechanical model.
Recently, a number of surrogate models of cardiac electromechanics have been built by using machine learning algorithms; they are often called emulators [di2018gaussian, dabiri2019prediction, Longobardi2020, Cai2021]. These emulators are based on a collection of precomputed numerical simulations obtained by sampling the parameter space, similarly to what has been done in this work. However, the approach behind these emulators is very different from the one we follow. These emulators are in fact maps that fit the parameters-to-QoIs map () in a static manner. On the contrary, with our approach, the ROM makes it possible to perform a real numerical simulation of the cardiac function, since the circulation model is kept in its full-order form, while only the computationally demanding part, i.e. the 3D electromechanical model , is surrogated. This has a number of advantages:

The ROM is independent of the circulation model to which it is coupled, and it can also be coupled to models different from those used during training, or with different values of their parameters.

Unlike the emulators in [di2018gaussian, dabiri2019prediction, Longobardi2020, Cai2021], our ROM allows for time extrapolation, i.e. reliable predictions even over longer time spans than those used during training, as demonstrated by our numerical results. This observation is very important: while for emulators that fit the parameters-to-QoIs map the simulations contained in the training set must have reached a limit cycle (otherwise the associated QoIs would be meaningless), with our approach it is possible to defer the reaching of the limit cycle to the ROM, during the online phase. This makes it possible to lighten the computational cost associated with the creation of the training set, which typically represents the main component of the total cost (see the use cases cited above).

The output of the simulations obtained with our ROM is not limited to a list of QoIs. Indeed, it contains the pressure and volume transients of the surrogated heart chamber and of the compartments of the circulation model. We notice that the latter is not hidden as a black box in the parameters-to-QoIs map, as for emulators, but is rather represented explicitly.

With our approach, the ROM only needs to learn the variability with respect to , since the circulation model (involving the parameters ) remains explicitly represented. Conversely, emulators based on a parameters-to-QoIs map must capture the dependence on both and , and thus the training set must be large enough to accurately represent their statistical variability. Indeed, we obtain accurate results with only 30–40 samples, a very low number compared to the ones typically required to construct emulators (e.g. 825 samples in [Longobardi2020], 9000 samples in [Cai2021]).
A limitation of our ANN-based ROM is that the online phase may be slower than that of emulators, which do not need to approximate a differential equation but only to evaluate a function. However, our ANN-based ROM enables real-time simulations and can be readily applied in a number of use cases, such as global sensitivity analysis, parameter estimation and uncertainty quantification. Furthermore, as highlighted above, most of the computational cost is not due to the evaluation of the ROM, but rather to the construction of the training set, which our approach allows to keep very small (30–40 samples in our test cases). Therefore, we conclude that keeping the circulation model in its full-order form and surrogating only the computationally intensive electromechanical model allows for a very favorable tradeoff between what is reduced (the variability that must be explored during the offline phase) and what is not reduced (the variability that must be accounted for in the online phase).
A different type of emulator is the one we proposed in [regazzoni2021cardioemulator], which permits – similarly to what is done in this paper – to couple a reduced version of the 3D electromechanical model to a circulation model, thus enabling real-time numerical simulations. However, the emulator of [regazzoni2021cardioemulator] is built for a fixed value of the parameters , or (through a linear interpolation) for the variability of a single parameter at most. On the other hand, its construction only requires one or two numerical simulations performed through the FOM. Therefore, the emulator of [regazzoni2021cardioemulator] is advantageous when one needs to surrogate the model for a given parametrization (e.g. to quickly converge to a limit cycle, or to perform sensitivity analysis on only); conversely, when one needs to explore the parametric dependence of an electromechanical model, the approach proposed in this paper turns out to be favorable.
The ROM proposed in this paper features several differences with respect to projection-based ROMs, which have been used in the cardiovascular field as well (see e.g. [QMN16, bonomi2017matrix, pagani2017reduced, pagani2021enabling]), or deep-learning-based approximations of the parameters-to-solution map (see e.g. [fresca2020deep, fresca2021]). The latter families of ROMs indeed provide a richer output than our ROM, as they allow for the approximation of spatially varying outputs of the 3D model, such as the transmembrane potential and the tissue displacement. However, the online phase of projection-based ROMs is typically more computationally demanding than that of our ROM [bonomi2017matrix, pagani2021enabling]. Moreover, unlike our proposed method, projection-based ROMs are intrusive with respect to the equations to be reduced, and suitable hyper-reduction techniques should be employed to handle the nonlinearities of cardiac models. The deep-learning-based method of [fresca2020deep, fresca2021] features a training phase that is more computationally demanding than that of the method proposed in this paper, due to the larger size of the outputs. Conversely, in applications in which the knowledge of the pressures, volumes and blood fluxes associated with the cardiac chambers is sufficient, our ROM permits to accurately approximate the outputs of the FOM at a very reduced computational cost and in a nonintrusive manner (only pressure and volume recordings are required from the FOM).
We remark that the proposed method is limited to electromechanical models that feature a periodic behavior. The ROM is indeed periodic by construction, due to the presence of the sine and cosine terms, as highlighted in Eq. (3). Therefore, the ROM is not suitable, e.g., to simulate an irregular electrophysiological behavior, such as ventricular arrhythmias and other electrical dysfunctions.
5 Conclusions
We presented a machine learning method to build ANN-based ROMs of cardiac electromechanical models. Indeed, on the basis of pressure and volume transients generated with the FOM, we are capable of learning a system of differential equations that approximates the dynamics of the cardiac chamber to be surrogated. This system of differential equations, linking the pressure and volume of a cardiac chamber, is coupled with lumped-parameter models of cardiac hemodynamics, thus allowing for the simulation of the cardiac function at a dramatically reduced computational cost with respect to the original FOM. As a matter of fact, our ROM makes it possible to perform numerical simulations virtually in real time. Moreover, thanks to its nonintrusive nature, the proposed algorithm can be easily applied to other electromechanical models besides the one considered in this work.
We presented two test cases in which we employ the ANN-based ROM. We carried out a global sensitivity analysis to assess the influence of the parameters of the electromechanical and hemodynamic models on a list of outputs of clinical interest. Then, we performed a Bayesian estimation of a pair of parameters (belonging to the electromechanical and hemodynamic models, respectively), starting from the noisy measurement of a pair of scalar quantities (namely the maximum and minimum arterial pressure). In both cases, performing the large number of required numerical simulations through the FOM would not have been feasible, due to their high computational cost (it would in fact have taken tens of years on a supercomputer platform). Replacing the FOM with its ANN-based surrogate allowed us to obtain an approximate solution in a few hours of computation. Taking into account that the generation of the numerical simulations contained in the training set required less than 7 days on the same computational platform, our ANN-based ROM allowed us to reduce the total computational time by a factor of more than 3'000.
Acknowledgements
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 740132, iHEART  An Integrated Heart Model for the simulation of the cardiac function, P.I. Prof. A. Quarteroni).
Appendix A Numerical settings
In this appendix we report the main numerical approaches that we employ to develop and test the ROM of cardiac electromechanics presented in this paper.
a.1 Electromechanical model
We use the same choices in terms of space and time discretization for all the numerical simulations of this paper. Specifically, we employ a finer mesh for cardiac electrophysiology, which consists of 258'415 DOFs and 240'864 elements (with an average mesh size of ), and a coarser one for cardiac mechanics, which is made of 35'725 DOFs and 30'108 elements ( ). To advance the electrophysiological variables we use a timestep length of , and a five times larger timestep to advance the mechanical variables. The numerical simulations were run using one cluster node endowed with 32 cores (four Intel Xeon E5-4610 v2, 2.3 GHz) available at MOX, Dipartimento di Matematica.
a.2 Convergence to the limit cycle
To determine when a simulation has reached a limit cycle (i.e. a periodic solution), we employ a criterion based on the increment between successive cycles. Specifically, we consider the limit cycle to be reached when the maximum difference between two consecutive cycles in the pressures and volumes of all four chambers is less than and , respectively. This criterion is typically satisfied in 5 to 15 cycles. In any case, we always perform a minimum of 5 cycles.
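The increment criterion can be sketched as follows; the data layout, function names and tolerance values are placeholders of ours (the paper's actual thresholds are not reproduced here):

```python
def reached_limit_cycle(prev_cycle, curr_cycle, tol_p, tol_v):
    """Limit-cycle test: the maximum pointwise difference between two
    consecutive cycles, over the sampled pressures and volumes, must
    fall below the given tolerances. Each cycle is a dict with keys
    'p' and 'v': lists of chamber pressures/volumes over one heartbeat."""
    dp = max(abs(a - b) for a, b in zip(prev_cycle['p'], curr_cycle['p']))
    dv = max(abs(a - b) for a, b in zip(prev_cycle['v'], curr_cycle['v']))
    return dp < tol_p and dv < tol_v

def run_until_periodic(simulate_cycle, state0, tol_p, tol_v,
                       min_cycles=5, max_cycles=50):
    """Advance one heartbeat at a time until the increment criterion is
    met, enforcing at least `min_cycles` cycles as in the paper.
    `simulate_cycle` maps a state to (new_state, cycle_record)."""
    state, prev = state0, None
    for n in range(1, max_cycles + 1):
        state, cycle = simulate_cycle(state)
        if prev is not None and n >= min_cycles and \
                reached_limit_cycle(prev, cycle, tol_p, tol_v):
            return n
        prev = cycle
    return max_cycles
```

With a geometrically decaying transient, for instance, the loop stops at the first cycle (past the minimum) whose increment drops below the tolerance.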