A machine learning method for real-time numerical simulations of cardiac electromechanics

October 25, 2021 · Francesco Regazzoni et al. · Politecnico di Milano

We propose a machine learning-based method to build a system of differential equations that approximates the dynamics of 3D electromechanical models of the human heart, accounting for the dependence on a set of parameters. Specifically, our method builds a reduced-order model (ROM), written as a system of Ordinary Differential Equations (ODEs) whose forcing term, given by the right-hand side, consists of an Artificial Neural Network (ANN) that possibly depends on a set of parameters associated with the electromechanical model to be surrogated. The method is non-intrusive, as it only requires a collection of pressure and volume transients obtained from the full-order model (FOM) of cardiac electromechanics. Once trained, the ANN-based ROM can be coupled with hemodynamic models of the blood circulation external to the heart, in the same manner as the original electromechanical model, but at a dramatically lower computational cost. Indeed, our method allows for real-time numerical simulations of the cardiac function. We demonstrate its effectiveness in two relevant contexts of cardiac modeling. First, we employ the ANN-based ROM to perform a global sensitivity analysis of both the electromechanical and hemodynamic models. Second, we perform a Bayesian estimation of two parameters starting from noisy measurements of two scalar outputs. In both cases, replacing the FOM of cardiac electromechanics with the ANN-based ROM makes it possible to perform, in a few hours of computational time, numerical simulations that would otherwise be unaffordable because of their overwhelming computational cost. As a matter of fact, our ANN-based ROM speeds up the numerical simulations by more than three orders of magnitude.


1 Introduction

Numerical simulations of cardiac electromechanics are gaining momentum in the context of cardiovascular research and computational medicine [smith2004multiscale, vigmond2008effect, trayanova2011whole, fink2011cardiac, washio2015ventricular, quarteroni2019cardiovascular, dede2021mathematical]. Cardiac in silico models are based on an anatomically detailed and biophysically accurate representation of the human heart, consisting of multiscale mathematical models of several physical processes, from the cellular to the organ scale, described by systems of differential equations. However, the clinical exploitation of cardiac numerical simulations is seriously hampered by their overwhelming computational cost. As a matter of fact, the simulation of a single heartbeat for an anatomically accurate patient-specific model may require several hours of computational time even on a supercomputer platform.

A promising approach to address this issue is to replace the computationally expensive cardiac electromechanical model, say the full-order model (FOM), with a reduced version of it, called reduced-order model (ROM), to be invoked whenever new parameter values are considered. In the so-called offline phase, the ROM is built from a database of numerical simulations previously obtained by solving the FOM itself. Then, in the online phase, the ROM is used as a surrogate of the FOM, at a dramatically reduced computational cost, for any new instance of the parameters. Clearly, this procedure pays off if the computational gain of the online phase outweighs the cost of the offline phase, or whenever the online phase requires real-time execution, which is often the case in clinical practice, where timeliness is a key factor.

Recently, this framework has been applied in the context of cardiac modeling, primarily by using machine learning algorithms, including Gaussian Process emulators (GPEs), Artificial Neural Networks (ANNs), and decision tree algorithms such as eXtreme Gradient Boosting (XGBoost) and K-Nearest Neighbors (KNN) [di2018gaussian, dabiri2019prediction, Longobardi2020, Cai2021, coveney2021bayesian]. These emulators are trained to fit the map that links the model parameters to a set of scalar outputs of interest, known as quantities of interest (QoIs), which represent clinically meaningful biomarkers. This map is learned from a collection of numerical simulations obtained by sampling the parameter space, and can then be used as a surrogate of the FOM.

Replacing a FOM with a ROM is particularly advantageous in so-called many-query settings, i.e. when the forward model has to be solved many times for different parameter values. This is the case, for example, of parameter calibration [brooks1998markov, hoffman2013stochastic] and sensitivity analysis [plischke2013global, sobol1990sensitivity, xu2008uncertainty, kucherenko2012estimation], fundamental tools for a reliable use of computational models in clinical practice. In fact, only a few of the many parameters feeding cardiac electromechanical models can be directly measured, while most of them must be estimated through indirect (possibly non-invasive) measurements, by solving suitable (typically ill-posed) inverse problems [aster2018parameter]. It is therefore important to quantify how much each parameter affects the outputs of the model and, vice versa, how uncertainty in the measured data propagates to the parameters. These requirements are at the basis of the verification and validation under uncertainty (VVUQ) paradigm [oberkampf2004verification, hu20162014] and are often necessary to comply with standards issued by health agencies and policy makers, such as the ASME V&V40 standard [asme2018assessing] recognized by the FDA [fda].

In this paper, we propose a machine learning method to build ROMs of cardiac electromechanical models. Our approach relies on the ANN-based method that we proposed in [regazzoni2019modellearning], which can learn a time-dependent differential equation from a collection of input-output pairs. With respect to existing approaches, we only surrogate the time-dependent pressure-volume relationship of a cardiac chamber, while we do not reduce the model describing the external circulation. The latter is indeed either a low-dimensional 0D windkessel model or a closed-loop circulation model comprising a few state variables (up to two dozen), which does not require further reduction. In other terms, we derive an ANN-based ROM for the computationally demanding 3D components, while the lightweight components are kept in their full-order version. Unlike emulators, for which the online phase consists of evaluations of the map linking model parameters to QoIs, with our approach the online phase consists of numerical simulations, in which the ANN-based ROM of the electromechanical model is coupled with the circulation model, at a very low computational cost. As a matter of fact, these numerical simulations can be performed in real time on a standard laptop.

We consider two test cases to prove the efficacy of our mathematical approach: a variance-based global sensitivity analysis on both electromechanical and hemodynamic parameters, and a Bayesian estimation, by means of the Markov Chain Monte Carlo (MCMC) method, of two parameters from noisy measurements of two scalar outputs.

This paper is organized as follows. In Sec. 2 we present the proposed method and the models used to produce the numerical results, which are presented and discussed in Sec. 3. Then, in Sec. 4, we critically discuss the obtained results and the pros and cons of the proposed method with respect to other methods available in the literature. Finally, we draw our conclusions in Sec. 5.

This manuscript is accompanied by https://github.com/FrancescoRegazzoni/cardioEM-learning, a public repository containing the codes and the datasets necessary to reproduce the presented results.

2 Methods

We introduce our method in an abstract formulation of cardiac electromechanics. Thanks to its non-intrusive nature, our approach can be applied to the different electromechanical models available in the literature.

2.1 The full-order model (FOM)

Let us consider a generic model of cardiac electromechanics, that is a set of differential equations describing the physical processes involved in the heart function. We introduce the state vector $\mathbf{y}(t)$, collecting the state variables associated with this multiphysics system. These may include the transmembrane potential, gating variables, ionic concentrations, protein states, tissue displacement, or simply phenomenological variables. In this paper, we focus on the single-chamber case of the human heart (e.g. the left ventricle, henceforth LV), as the generalization to multiple chambers is straightforward. By introducing a nonlinear differential operator $\mathcal{F}$ that encodes the differential equations and boundary conditions associated with the electromechanical model, the latter reads

(1)   $\dot{\mathbf{y}}(t) = \mathcal{F}(\mathbf{y}(t), p(t), t; \boldsymbol{\theta}_{\mathcal{M}})$ for $t \in (0, T]$,   $\mathbf{y}(0) = \mathbf{y}_0$,

where $p(t)$ denotes the LV endocardial pressure (here seen as an input), $\boldsymbol{\theta}_{\mathcal{M}}$ are the model parameters (possibly including, e.g., electrical conductivities, cell membrane conductances, protein binding affinities, contractility, passive tissue properties) and $\mathbf{y}_0$ is the initial state. We denote by $\Theta_{\mathcal{M}} \subset \mathbb{R}^{n_{\mathcal{M}}}$ the space of parameters, such that $\boldsymbol{\theta}_{\mathcal{M}} \in \Theta_{\mathcal{M}}$, $n_{\mathcal{M}}$ being the number of parameters. We notice that the right-hand side of (1) explicitly depends on time, as heartbeats are paced by externally applied stimuli that we assume to have a period of duration $T_{\mathrm{HB}}$, which is fixed a priori.

The 3D cardiac electromechanical model (1), henceforth denoted by $\mathcal{M}$, must be coupled with a closure relationship assigning the pressure $p(t)$. One possible option is to couple the model with a 0D model of the external circulation (see e.g. [hirschvogel2017monolithic, regazzoni2021em_circulation_pt1, augustin2020physiologically]), thus obtaining a system in the form

(2)   $\dot{\mathbf{y}}(t) = \mathcal{F}(\mathbf{y}(t), p(t), t; \boldsymbol{\theta}_{\mathcal{M}})$,   $\dot{\mathbf{c}}(t) = \mathcal{D}(\mathbf{c}(t), p(t), t; \boldsymbol{\theta}_{\mathcal{C}})$,   $V_{\mathrm{LV}}^{\mathcal{M}}(\mathbf{y}(t)) = V_{\mathrm{LV}}^{\mathcal{C}}(\mathbf{c}(t))$,   for $t \in (0, T]$,

where $\mathbf{c}(t)$ are the state variables of the circulation model (pressures, volumes and fluxes in the circulatory network) and $\boldsymbol{\theta}_{\mathcal{C}}$ is a vector of parameters (e.g. vascular resistances and conductances). The mechanical and hemodynamic models are coupled through the geometric consistency relationship $V_{\mathrm{LV}}^{\mathcal{M}}(\mathbf{y}(t)) = V_{\mathrm{LV}}^{\mathcal{C}}(\mathbf{c}(t))$, where the left-hand and right-hand sides represent the LV volume predicted by the $\mathcal{M}$ and $\mathcal{C}$ models, respectively. The LV pressure $p(t)$ is determined as a Lagrange multiplier that enforces this consistency relationship. An alternative approach is to adopt different closure relationships in the different phases of the heartbeat [levrero2020sensitivity, Gerbi2018monolithic] with suitable preload and afterload models, such as windkessel models [westerhof2009arterial]. These relationships link the changes in LV pressure to the LV volume, obtained as $V_{\mathrm{LV}}(t) = V_{\mathrm{LV}}^{\mathcal{M}}(\mathbf{y}(t))$. In both cases, whether $\mathcal{M}$ is coupled to a closed-loop circulation model or to an afterload-preload relationship $\mathcal{C}$, we denote by $\mathcal{M}$-$\mathcal{C}$ the resulting coupled model. In Fig. 1 we show an $\mathcal{M}$-$\mathcal{C}$ model in which $\mathcal{C}$ consists of the lumped-parameter closed-loop model of [regazzoni2021em_circulation_pt1], which is employed to produce the numerical results of Sec. 3.
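To make the role of a pressure-volume closure relationship concrete, the following sketch advances a two-element windkessel afterload driven by a prescribed LV volume rate during ejection. It is a minimal illustration under assumed units and parameter values, not the closed-loop model of [regazzoni2021em_circulation_pt1] used in this paper; the function and variable names (windkessel_step, R_art, C_art) are illustrative.

```python
import numpy as np

def windkessel_step(p, dV_LV_dt, dt, R_art=0.8, C_art=1.2):
    """One explicit Euler step of a two-element windkessel afterload.

    During ejection the flow entering the arterial compartment equals the
    rate of LV volume decrease, Q = -dV_LV/dt, and the arterial pressure
    obeys C dp/dt = Q - p / R (illustrative units: mmHg, mL, s).
    """
    Q = -dV_LV_dt                      # aortic flow [mL/s]
    dp_dt = (Q - p / R_art) / C_art    # pressure rate [mmHg/s]
    return p + dt * dp_dt

# toy usage: a half-sinusoidal ejection profile lasting 0.3 s
dt, p = 1e-3, 80.0
for k in range(300):
    t = k * dt
    dV_LV_dt = -200.0 * np.sin(np.pi * t / 0.3)  # LV volume decreases
    p = windkessel_step(p, dV_LV_dt, dt)
print(f"arterial pressure at end of ejection: {p:.1f} mmHg")
```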

Figure 1: The $\mathcal{M}$-$\mathcal{C}$ model. The parameters $\boldsymbol{\theta}_{\mathcal{M}}$ and $\boldsymbol{\theta}_{\mathcal{C}}$ (left) are associated with the $\mathcal{M}$ and $\mathcal{C}$ models, respectively. The two mathematical models are coupled via the LV pressure and volume. Their union constitutes the $\mathcal{M}$-$\mathcal{C}$ model. The $\mathcal{C}$ model that we consider to produce the numerical results of this paper is shown in the center of the figure. For more details on the $\mathcal{C}$ model and for the definition of the parameters $\boldsymbol{\theta}_{\mathcal{M}}$ and $\boldsymbol{\theta}_{\mathcal{C}}$, see Sec. 2.6. The 3D Finite Element model requires a computational mesh, obtained from the patient's heart geometry (top right). The output of the numerical simulations (bottom right) consists of spatially distributed fields (such as activation time, active tension and tissue displacement) and of 0D transients (such as pressures and volumes).

We remark that the closure relationships never directly involve the state $\mathbf{y}(t)$ of the $\mathcal{M}$ model, but only the LV volume $V_{\mathrm{LV}}(t)$. This is a key observation, since it suggests that a ROM able to surrogate the relationship between $p(t)$ and $V_{\mathrm{LV}}(t)$, albeit agnostic of the state $\mathbf{y}(t)$, can replace the $\mathcal{M}$ model in its coupling to the $\mathcal{C}$ model. In what follows, we present our strategy to build an ANN-based ROM of the $\mathcal{M}$ model, denoted by $\mathcal{M}_{\mathrm{ANN}}$, which can be coupled with the $\mathcal{C}$ model, resulting in the coupled $\mathcal{M}_{\mathrm{ANN}}$-$\mathcal{C}$ model that will surrogate the $\mathcal{M}$-$\mathcal{C}$ model.

2.2 The reduced-order model (ROM)

To set up a ROM surrogating the $\mathcal{M}$ model of Eq. (1), we employ the machine learning method that we proposed in [regazzoni2019modellearning]. This method is designed to learn a differential equation from time-dependent input-output pairs, by training an Ordinary Differential Equation (ODE) model whose right-hand side is represented by an ANN. In this work, we define our ANN-based ROM as

(3)   $\dot{\mathbf{z}}(t) = \mathcal{NN}\left(\mathbf{z}(t), p(t), \cos(2\pi t / T_{\mathrm{HB}}), \sin(2\pi t / T_{\mathrm{HB}}), \boldsymbol{\theta}_{\mathcal{M}}; \mathbf{w}\right)$ for $t \in (0, T]$,   $\mathbf{z}(0) = \mathbf{z}_0$,

where $\mathbf{z}(t) \in \mathbb{R}^{n_z}$ is the reduced state and $\mathcal{NN}$ is a fully connected ANN. The ANN input consists of the $n_z$ state variables, the $n_{\mathcal{M}}$ scalar parameters, the pressure $p(t)$, and the two periodic inputs $\cos(2\pi t / T_{\mathrm{HB}})$ and $\sin(2\pi t / T_{\mathrm{HB}})$ (whose role will be clarified later), for a total of $n_z + n_{\mathcal{M}} + 3$ input neurons. The vector $\mathbf{w}$ encodes the weights and biases of the ANN, which need to be suitably trained. We remark that, among the arguments of the ANN, we have not introduced the time variable $t$ itself, but rather $\cos(2\pi t / T_{\mathrm{HB}})$ and $\sin(2\pi t / T_{\mathrm{HB}})$, that is the coordinates of a point cyclically moving along a circumference with period $T_{\mathrm{HB}}$. In this way, the ROM encodes by construction the periodicity associated with the heartbeat pacing. This expedient allows for the use of the ROM also for time spans longer than those seen during the training phase. Moreover, by introducing the parametric dependence (i.e. on $\boldsymbol{\theta}_{\mathcal{M}}$) within the ANN, the latter is not specific to a particular parameter setting.

In this work, following [regazzoni2019modellearning], we adopt an output-inside-the-state approach, that is we train the model so that the LV volume coincides with the first state variable. More precisely, the LV volume predicted by the $\mathcal{M}_{\mathrm{ANN}}$ model is by definition $V_{\mathrm{LV}}(t) = \mathbf{e}_1 \cdot \mathbf{z}(t)$, where $\mathbf{e}_1$ is the first element of the canonical basis of $\mathbb{R}^{n_z}$. Therefore, the first ROM state variable has a clear physical interpretation (i.e. it coincides with the LV volume), while the other ROM states are latent variables with no immediate physical interpretation, providing however a compact representation of the full-order state $\mathbf{y}(t)$. Coherently with this choice, we define the first entry of the initial state as the initial LV volume, $z_{0,1} = V_{\mathrm{LV}}(0)$. The remaining initial states are set, without loss of generality, to zero (this choice does not reduce the space of candidate models, as proved in [regazzoni2019modellearning]).
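As an illustration of Eq. (3) and of the output-inside-the-state convention, the following sketch evaluates the right-hand side of the ROM for a small fully connected network. The layer sizes, weight shapes, numerical values and function names are illustrative placeholders, not the architecture actually trained in this work.

```python
import numpy as np

T_HB = 0.8  # heartbeat period [s], assumed fixed a priori

def ann_rhs(z, p, theta, t, weights):
    """Right-hand side of the ANN-based ROM, cf. Eq. (3).

    z       : reduced state (z[0] is the LV volume by construction)
    p       : LV endocardial pressure at time t (input)
    theta   : electromechanical parameters the ROM is trained on
    t       : time; only enters through cos/sin to enforce periodicity
    weights : list of (W, b) pairs of the fully connected ANN
    """
    x = np.concatenate([z, [p, np.cos(2 * np.pi * t / T_HB),
                            np.sin(2 * np.pi * t / T_HB)], theta])
    for W, b in weights[:-1]:
        x = np.tanh(W @ x + b)        # hidden layers
    W, b = weights[-1]
    return W @ x + b                  # dz/dt

# toy weights for a ROM with 2 latent states and 1 parameter
rng = np.random.default_rng(0)
sizes = [2 + 3 + 1, 8, 2]             # input, hidden, output neurons
weights = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
           for n, m in zip(sizes[:-1], sizes[1:])]

z = np.array([120.0, 0.0])            # z[0]: initial LV volume [mL]
dzdt = ann_rhs(z, p=10.0, theta=np.array([160.0]), t=0.0, weights=weights)
print("dz/dt =", dzdt)
```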

We then need to determine the optimal value of the weights $\mathbf{w}$, such that the $\mathcal{M}_{\mathrm{ANN}}$ model reproduces the outputs of the $\mathcal{M}$ model as accurately as possible. With this goal, we generate a training set by sampling the parameter space $\Theta_{\mathcal{M}}$ with $N_s$ sample points. For each sample, we perform a simulation with the $\mathcal{M}$ model until time $T$, and we record the LV pressure and volume transients. The training set is thus given by

$\left\{ \left( p_i(t), V_{\mathrm{LV},i}(t), \boldsymbol{\theta}_{\mathcal{M},i} \right),\ t \in [0, T],\ i = 1, \dots, N_s \right\}.$

We remark that, due to the non-intrusive nature of our method, there is no need to retain the FOM states $\mathbf{y}(t)$. Finally, we train the ANN weights by minimizing the discrepancy between the training data and the $\mathcal{M}_{\mathrm{ANN}}$ model outputs, that is by considering the following constrained optimization problem

(4)   $\hat{\mathbf{w}} = \arg\min_{\mathbf{w}} \left[ \frac{1}{N_s} \sum_{i=1}^{N_s} \frac{1}{T} \int_0^T \left| V_{\mathrm{LV},i}(t) - \mathbf{e}_1 \cdot \mathbf{z}_i(t) \right|^2 \mathrm{d}t + \alpha \, \|\mathbf{w}\|^2 \right],$
      subject to $\dot{\mathbf{z}}_i(t) = \mathcal{NN}\left(\mathbf{z}_i(t), p_i(t), \cos(2\pi t / T_{\mathrm{HB}}), \sin(2\pi t / T_{\mathrm{HB}}), \boldsymbol{\theta}_{\mathcal{M},i}; \mathbf{w}\right)$, $\mathbf{z}_i(0) = \mathbf{z}_{0,i}$, for $i = 1, \dots, N_s$,

where $\alpha$ is a regularization hyperparameter. We remark that the optimization problem (4) is not a standard machine learning problem of data fitting. Indeed, the ANN appears on the right-hand side of a differential equation that acts as a constraint under which the loss function is minimized. To train this model, we use the algorithm proposed in [regazzoni2019modellearning], which envisages approximating the differential equation by the Forward Euler method and the loss function by the trapezoidal rule, and then computing the gradients by solving the adjoint equations. The trainable parameters are then optimized by means of the Levenberg-Marquardt method [nocedal2006numopt]. The training pipeline is summarized in Fig. 2.
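For concreteness, the following sketch evaluates the discretized loss of problem (4) by rolling out the ROM with the Forward Euler method and integrating the volume mismatch with the trapezoidal rule. The ROM right-hand side is passed in as a callable (e.g. the illustrative ann_rhs sketch above), the number of reduced states and the data layout are assumptions, and the adjoint-based gradients and Levenberg-Marquardt updates used by the actual training algorithm of [regazzoni2019modellearning] are omitted.

```python
import numpy as np

def rollout_loss(rhs, weights, samples, dt, n_states=2, alpha=0.0):
    """Forward-Euler rollout of the ROM and trapezoidal loss, cf. Eq. (4).

    rhs      : callable (z, p, theta, t, weights) -> dz/dt
    samples  : list of dicts with keys 'p' (pressure transient), 'V'
               (volume transient) and 'theta' (parameters), one per
               FOM training simulation
    n_states : number of reduced states (illustrative default)
    """
    loss = 0.0
    for s in samples:
        n_steps = len(s["p"]) - 1
        z = np.zeros(n_states)
        z[0] = s["V"][0]              # output-inside-the-state initialization
        err2 = np.zeros(n_steps + 1)  # squared volume mismatch over time
        for k in range(n_steps):
            z = z + dt * rhs(z, s["p"][k], s["theta"], k * dt, weights)
            err2[k + 1] = (s["V"][k + 1] - z[0]) ** 2
        # trapezoidal rule in time, normalized by the simulated time span
        integral = 0.5 * dt * (err2[0] + 2.0 * err2[1:-1].sum() + err2[-1])
        loss += integral / (n_steps * dt)
    reg = sum(np.sum(W ** 2) + np.sum(b ** 2) for W, b in weights)
    return loss / len(samples) + alpha * reg
```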

Figure 2: Training pipeline of the $\mathcal{M}_{\mathrm{ANN}}$ model. First, we sample the parameter space (top left) and, for each parameter instance $\boldsymbol{\theta}_{\mathcal{M},i}$, we simulate some heartbeats through the $\mathcal{M}$ model (center). Finally, from the training set obtained by collecting the resulting pressure and volume transients (top right), we train the ANN-based $\mathcal{M}_{\mathrm{ANN}}$ model (bottom right), according to Eq. (4).

Once the ANN has been trained (that is, the optimal weights $\hat{\mathbf{w}}$ have been determined), the $\mathcal{M}_{\mathrm{ANN}}$ model can be used as a surrogate of the $\mathcal{M}$ model, also for combinations of the parameters different from those contained in the training set. Moreover, it can be coupled with the closure relationship $\mathcal{C}$, thus obtaining the $\mathcal{M}_{\mathrm{ANN}}$-$\mathcal{C}$ model. For example, by considering the case of a closed-loop circulation model, as in (2), its reduced counterpart reads

(5)   $\dot{\mathbf{z}}(t) = \mathcal{NN}\left(\mathbf{z}(t), p(t), \cos(2\pi t / T_{\mathrm{HB}}), \sin(2\pi t / T_{\mathrm{HB}}), \boldsymbol{\theta}_{\mathcal{M}}; \hat{\mathbf{w}}\right)$,   $\dot{\mathbf{c}}(t) = \mathcal{D}(\mathbf{c}(t), p(t), t; \boldsymbol{\theta}_{\mathcal{C}})$,   $\mathbf{e}_1 \cdot \mathbf{z}(t) = V_{\mathrm{LV}}^{\mathcal{C}}(\mathbf{c}(t))$,   for $t \in (0, T]$.

As a matter of fact, the $\mathcal{M}_{\mathrm{ANN}}$-$\mathcal{C}$ model has the same input-output structure as the $\mathcal{M}$-$\mathcal{C}$ model being surrogated, that is it maps the parameters $(\boldsymbol{\theta}_{\mathcal{M}}, \boldsymbol{\theta}_{\mathcal{C}})$ to the QoIs. Hence, the former can be employed in place of the latter to approximate the outputs associated with a given set of parameters, as shown in Fig. 3.

Figure 3: Parameters-to-QoIs computation using either the $\mathcal{M}$-$\mathcal{C}$ or the $\mathcal{M}_{\mathrm{ANN}}$-$\mathcal{C}$ model. Given a parameter instance $(\boldsymbol{\theta}_{\mathcal{M}}, \boldsymbol{\theta}_{\mathcal{C}})$, either the $\mathcal{M}$ or the $\mathcal{M}_{\mathrm{ANN}}$ model can be coupled with the $\mathcal{C}$ model to obtain pressure and volume transients, from which a set of QoIs is extracted. See Tabs. 1, 2 and 3 for the definition of $\boldsymbol{\theta}_{\mathcal{M}}$, $\boldsymbol{\theta}_{\mathcal{C}}$ and the QoIs, respectively.

2.3 Hyperparameters tuning

Like any machine learning algorithm, our method depends on a set of hyperparameters, that is variables that are not trained (they are not part of the vector $\mathbf{w}$), but are used to control the training process. These include the ANN architecture hyperparameters (namely the number of layers and the number of neurons per layer), the regularization factor $\alpha$ and the number of reduced states $n_z$. To tune the hyperparameters, we rely on a $k$-fold cross-validation procedure. Specifically, after splitting the training set into $k$ non-overlapping subsets, we cyclically train the model by excluding one subset, which is used as validation set. Finally, we evaluate the average validation errors and we select the hyperparameter setting that attains the lowest validation error and the best generalization properties (see Appendix A.4 for more details).
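A minimal sketch of this selection loop is reported below; the candidate hyperparameter grid, the train_rom and validation_error routines, and the value k = 5 are illustrative assumptions, not the settings actually used in this work (see Appendix A.4).

```python
import itertools
import numpy as np

def k_fold_select(samples, candidates, train_rom, validation_error, k=5):
    """Pick the hyperparameter combination with the lowest average
    k-fold cross-validation error.

    samples          : list of FOM pressure/volume transients (training set)
    candidates       : dict mapping hyperparameter name -> list of values
    train_rom        : callable(train_samples, hp) -> trained ROM
    validation_error : callable(rom, val_samples) -> scalar error
    """
    folds = np.array_split(np.arange(len(samples)), k)
    best_hp, best_err = None, np.inf
    names = list(candidates)
    for values in itertools.product(*(candidates[n] for n in names)):
        hp = dict(zip(names, values))
        errs = []
        for i in range(k):
            val_idx = set(folds[i])
            train = [s for j, s in enumerate(samples) if j not in val_idx]
            val = [s for j, s in enumerate(samples) if j in val_idx]
            errs.append(validation_error(train_rom(train, hp), val))
        if np.mean(errs) < best_err:
            best_hp, best_err = hp, np.mean(errs)
    return best_hp, best_err

# example grid: layers, neurons per layer, latent states, regularization
grid = {"layers": [1, 2], "neurons": [8, 12], "n_z": [1, 2], "alpha": [0.0, 1e-4]}
```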

2.4 Global sensitivity analysis

To assess how much each parameter contributes to the determination of a QoI, e.g. a biomarker of clinical interest, we perform a global sensitivity analysis. This is typically done by sampling the parameter space and by computing suitable indicators, such as Borgonovo indices [plischke2013global], Sobol indices [sobol1990sensitivity, homma1996importance], Morris elementary effects [morris1991factorial], ANCOVA indices [xu2008uncertainty] and Kucherenko indices [kucherenko2012estimation]. In this work, we perform a variance-based sensitivity analysis, which relies on a probabilistic approach. We compute Sobol indices to quantify the sensitivity of a QoI (say $y_j$) with respect to a parameter (say $\theta_i$). Specifically, the so-called first-order Sobol index $S_i^j$ indicates the impact on the $j$-th QoI of the $i$-th parameter when the latter varies alone, according to the definition

$S_i^j = \frac{\mathrm{Var}_{\theta_i}\left[ \mathbb{E}_{\boldsymbol{\theta}_{\sim i}}\left( y_j \mid \theta_i \right) \right]}{\mathrm{Var}\left( y_j \right)},$

where $\boldsymbol{\theta}_{\sim i}$ indicates the set of all parameters excluding the $i$-th one. The first-order Sobol index $S_i^j$, however, only accounts for variations of the parameter $\theta_i$ alone, averaged over variations in the other parameters, and thus does not account for interactions among parameters. Conversely, it is possible to assess the importance of a parameter in determining a QoI, also accounting for interactions among parameters, by resorting to the so-called total-effect Sobol index $S_{T,i}^j$, defined as

$S_{T,i}^j = \frac{\mathbb{E}_{\boldsymbol{\theta}_{\sim i}}\left[ \mathrm{Var}_{\theta_i}\left( y_j \mid \boldsymbol{\theta}_{\sim i} \right) \right]}{\mathrm{Var}\left( y_j \right)} = 1 - \frac{\mathrm{Var}_{\boldsymbol{\theta}_{\sim i}}\left[ \mathbb{E}_{\theta_i}\left( y_j \mid \boldsymbol{\theta}_{\sim i} \right) \right]}{\mathrm{Var}\left( y_j \right)}.$

The latter index quantifies the impact of a given parameter when it varies alone or together with other parameters [sobol1990sensitivity].

To compute an estimate of the Sobol indices, we use the Saltelli method [homma1996importance, saltelli2002making], which makes use of Sobol quasi-random sequences to approximate the integrals involved. In practice, this method requires evaluating the model for a large number of parameter values, and then processing the obtained QoIs to provide an estimate of the Sobol indices.
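The sketch below illustrates this workflow with the SALib library used in this work (see Sec. 2.7); the two-parameter problem definition, the bounds and the evaluate_rom function are illustrative placeholders, not the actual setup of Sec. 3.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# illustrative problem: two parameters with hypothetical bounds
problem = {
    "num_vars": 2,
    "names": ["contractility", "arterial_resistance"],
    "bounds": [[120.0, 200.0], [0.4, 0.9]],
}

def evaluate_rom(theta):
    """Placeholder for a ROM-circulation simulation returning one QoI
    (e.g. maximum arterial pressure) for the parameter vector theta."""
    return 80.0 + 0.2 * theta[0] + 30.0 * theta[1]  # dummy response

# Saltelli sampling: N * (2 * num_vars + 2) model evaluations
param_values = saltelli.sample(problem, 1024)
Y = np.array([evaluate_rom(theta) for theta in param_values])

Si = sobol.analyze(problem, Y)
print("first-order indices S1:", Si["S1"])
print("total-effect indices ST:", Si["ST"])
```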

In this paper we perform a variance-based global sensitivity analysis simultaneously with respect to the circulation model parameters $\boldsymbol{\theta}_{\mathcal{C}}$ and the electromechanical model parameters $\boldsymbol{\theta}_{\mathcal{M}}$ (that is, we consider the full parameter vector $(\boldsymbol{\theta}_{\mathcal{M}}, \boldsymbol{\theta}_{\mathcal{C}})$). To this end, we use the $\mathcal{M}_{\mathrm{ANN}}$-$\mathcal{C}$ model as a surrogate for the $\mathcal{M}$-$\mathcal{C}$ model to perform the evaluations required by the Saltelli method. As we will see in Sec. 3, using the former instead of the latter entails a huge computational gain. In addition, for each parameter setting we simulate a certain number of heartbeats to reach a limit cycle (i.e. a periodic solution), and we compute the QoIs on the last cycle, which is the most significant one (as it removes the effect of an incorrect initialization of the state variables). We remark that the need to simulate a certain number of heartbeats to overcome the initial transient phase (which typically lasts from 5 to 15 cycles, see Appendix A.2) makes the use of a reduced cardiac electromechanical model even more necessary.
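A simple way to implement this limit-cycle check, sketched below under illustrative assumptions, is to simulate one heartbeat at a time with the reduced coupled model and stop when two consecutive beats differ by less than a prescribed tolerance; simulate_heartbeat is a hypothetical routine returning the pressure and volume transients of one beat together with the final state.

```python
import numpy as np

def run_to_limit_cycle(simulate_heartbeat, state0, tol=1e-3, max_beats=30):
    """Advance the ROM-circulation model beat by beat until the volume
    transient of two consecutive heartbeats matches within `tol`
    (relative L2 norm), then return the last beat.

    simulate_heartbeat : callable(state) -> (V, p, new_state), where V and p
                         are the LV volume and pressure sampled over one beat
    """
    state = state0
    V_prev = None
    for beat in range(max_beats):
        V, p, state = simulate_heartbeat(state)
        if V_prev is not None:
            change = np.linalg.norm(V - V_prev) / np.linalg.norm(V_prev)
            if change < tol:
                return V, p, beat + 1     # limit cycle reached
        V_prev = V
    return V, p, max_beats                # fallback: tolerance not reached
```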

2.5 Parameter estimation under uncertainty

The patient-specific personalization of a cardiac electromechanical model requires, besides the usage of a geometry derived from imaging data, the estimation of the parameters of the mathematical model (or at least the most important ones) starting from clinical measurements. Very often, only a few scalar quantities are available for this purpose; moreover, the resolution of this inverse problem (i.e. estimating the parameters from the measured QoIs) should account for the noise that unavoidably affects the measurements and that translates into uncertainty on the estimated parameters.

Bayesian methods, such as MCMC [brooks1998markov] and Variational Inference [hoffman2013stochastic], permit to address these issues within a rigorous statistical framework, by providing the likelihood $\pi(\hat{\mathbf{y}} \mid \boldsymbol{\theta})$, expressed as a probability distribution of the parameter values $\boldsymbol{\theta}$, given the observed QoIs (denoted by $\hat{\mathbf{y}}$). Unlike methods that provide point estimates, based either on gradient-descent [nocedal2006numopt] or on genetic [holland1992genetic] optimization algorithms, Bayesian methods are able to provide a probability distribution on the parameter space, encoding the credibility of each parameter combination. The credibility computation accounts for the uncertainty on measurements (associated with measurement noise and encoded in the noise covariance matrix $\boldsymbol{\Sigma}$), as well as for a prior distribution on the parameters (denoted by $\pi_{\mathrm{prior}}(\boldsymbol{\theta})$), that is previous knowledge or belief about the parameters. By denoting the parameters-to-QoIs map by $\mathbf{g}$, we have $\hat{\mathbf{y}} = \mathbf{g}(\boldsymbol{\theta}) + \boldsymbol{\varepsilon}$, where $\boldsymbol{\varepsilon}$ denotes the measurement error (that we assume, for simplicity, to be distributed as a Gaussian random variable). Bayes' theorem states that the posterior distribution of the parameters, that is the degree of belief of their value after having observed $\hat{\mathbf{y}}$, is given by

$\pi_{\mathrm{post}}(\boldsymbol{\theta} \mid \hat{\mathbf{y}}) = \frac{\pi(\hat{\mathbf{y}} \mid \boldsymbol{\theta}) \, \pi_{\mathrm{prior}}(\boldsymbol{\theta})}{Z},$

where the normalization constant $Z$ is defined as

$Z = \int_{\Theta} \pi(\hat{\mathbf{y}} \mid \boldsymbol{\theta}) \, \pi_{\mathrm{prior}}(\boldsymbol{\theta}) \, \mathrm{d}\boldsymbol{\theta}.$

In practice, the computation of $\pi_{\mathrm{post}}$ may be challenging from the computational viewpoint, because of the need to approximate the integral that defines $Z$. The MCMC method permits to approximate the distribution $\pi_{\mathrm{post}}$ with a relatively small computational effort. Similarly to the Saltelli method that we use for sensitivity analysis, the MCMC method also requires a large number of model evaluations, for different parameter values. Moreover, this method is non-intrusive, that is it only requires evaluations of the map $\mathbf{g}$. Therefore, we can employ for this purpose the $\mathcal{M}_{\mathrm{ANN}}$-$\mathcal{C}$ model as a surrogate for the $\mathcal{M}$-$\mathcal{C}$ model, which drastically reduces the necessary computational time.

Moreover, we remark that the Bayesian framework permits to rigorously account for the approximation error introduced by replacing the high-fidelity model with its surrogate. Indeed, if we denote by $\tilde{\mathbf{g}}$ the approximate parameters-to-QoIs map represented by the surrogate model, we have $\mathbf{g}(\boldsymbol{\theta}) = \tilde{\mathbf{g}}(\boldsymbol{\theta}) + \boldsymbol{\varepsilon}_{\mathrm{ROM}}$, where $\boldsymbol{\varepsilon}_{\mathrm{ROM}}$ is the ROM approximation error. It follows that $\hat{\mathbf{y}} = \tilde{\mathbf{g}}(\boldsymbol{\theta}) + \boldsymbol{\varepsilon}_{\mathrm{ROM}} + \boldsymbol{\varepsilon}_{\mathrm{exp}}$, where $\boldsymbol{\varepsilon}_{\mathrm{exp}}$ is the experimental measurement error. Since the two sources of error can be assumed independent, the covariance of the total error satisfies $\boldsymbol{\Sigma} = \boldsymbol{\Sigma}_{\mathrm{ROM}} + \boldsymbol{\Sigma}_{\mathrm{exp}}$, where $\boldsymbol{\Sigma}_{\mathrm{ROM}}$ is the ROM approximation error covariance (which can be estimated by evaluating the ROM on a testing set) and $\boldsymbol{\Sigma}_{\mathrm{exp}}$ is the experimental error covariance (which depends on the measurement protocol at hand). This permits to take into account, in the estimation process, the error introduced by the surrogate model.
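For illustration, the following random-walk Metropolis sampler (a basic MCMC variant, not the specific UQpy sampler used in Sec. 3.4) targets the posterior above with a Gaussian likelihood whose covariance combines the ROM and experimental contributions; the surrogate map g_rom, the parameter bounds and all numerical values are hypothetical.

```python
import numpy as np

def metropolis(g_rom, y_obs, Sigma_rom, Sigma_exp, bounds, n_iter=20000,
               step=0.05, seed=0):
    """Random-walk Metropolis sampling of the posterior of theta given y_obs.

    The likelihood is Gaussian with covariance Sigma = Sigma_rom + Sigma_exp,
    and the prior is uniform on the box `bounds` (one (lo, hi) pair per
    parameter), mirroring the setup of Sec. 2.5.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    Sigma_inv = np.linalg.inv(Sigma_rom + Sigma_exp)

    def log_post(theta):
        if np.any(theta < bounds[:, 0]) or np.any(theta > bounds[:, 1]):
            return -np.inf                     # outside the uniform prior
        r = y_obs - g_rom(theta)
        return -0.5 * r @ Sigma_inv @ r        # Gaussian log-likelihood

    theta = bounds.mean(axis=1)                # start at the center of the box
    lp = log_post(theta)
    chain = np.empty((n_iter, len(theta)))
    for k in range(n_iter):
        prop = theta + step * (bounds[:, 1] - bounds[:, 0]) \
                     * rng.standard_normal(len(theta))
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop          # accept the proposal
        chain[k] = theta
    return chain

# hypothetical usage: estimate two parameters from two noisy QoIs
g_rom = lambda th: np.array([60.0 + 0.3 * th[0], 90.0 + 40.0 * th[1]])
chain = metropolis(g_rom, y_obs=np.array([105.0, 118.0]),
                   Sigma_rom=0.5 * np.eye(2), Sigma_exp=4.0 * np.eye(2),
                   bounds=[(100.0, 200.0), (0.4, 0.9)])
print("posterior mean:", chain[len(chain) // 2:].mean(axis=0))
```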

To assess the capability of our ROM to accelerate the estimation of parameters for multiscale cardiac electromechanical models, we perform the following test. First, we run a simulation with the $\mathcal{M}$-$\mathcal{C}$ model, from which we derive a set of QoIs $\hat{\mathbf{y}}$. Then, by employing the $\mathcal{M}_{\mathrm{ANN}}$-$\mathcal{C}$ model as a surrogate of the $\mathcal{M}$-$\mathcal{C}$ model, we obtain a Bayesian estimate of the parameters, which we validate against the values used to generate $\hat{\mathbf{y}}$.

2.6 The cardiac electromechanical model

As mentioned above, due to its non-intrusive nature, our method is not limited to a specific cardiac electromechanical model, but can be applied to virtually any electromechanical model as long as pressure and volume transients are available for the training procedure. In this section, we briefly introduce the specific model used to produce the numerical results of this paper.

We consider a LV geometry processed from the Zygote 3D heart model [zygote], endowed with a fiber architecture generated by means of the Bayer-Blake-Plank-Trayanova algorithm [bayer2012novel, piersanti2020modeling]. Before starting the simulations, we recover the reference unstressed configuration through the algorithm that we proposed in [regazzoni2021em_circulation_pt1]. To model the action potential propagation, we employ the monodomain equation [collifranzone2014book], coupled with the ten Tusscher-Panfilov ionic model [ten2006alternans]. We model the microscale generation of active force through the biophysically detailed RDQ20-MF model [regazzoni2020biophysically], which is coupled, within an active stress approach, with the elastodynamics equations describing tissue mechanics. The passive behavior of the tissue is modeled through a quasi-incompressible exponential constitutive law [usyk2002computational]. We model the interaction with the pericardium by means of spring-damper boundary conditions at the epicardium, while we adopt energy-consistent boundary conditions [regazzoni2019mor-sarcomeres] to model the interaction with the part of the myocardium beyond the artificial ventricular base. To model the blood circulation (that is, the $\mathcal{C}$ model), we rely on the 0D closed-loop model presented in [regazzoni2021em_circulation_pt1], consisting of a compartmental description of the cardiac chambers and of the systemic and pulmonary, arterial and venous circulatory networks, based on an electrical analogy. The different compartments are modeled as RLC (resistance, inductance, capacitance) circuits, while cardiac valves are described as diodes.

Among the parameters $\boldsymbol{\theta}_{\mathcal{M}}$ associated with the electromechanical model, in this work we focus on the four parameters reported in Tab. 1. Similarly, we report in Tab. 2 the parameters $\boldsymbol{\theta}_{\mathcal{C}}$ associated with the circulation model.

Parameter Baseline Unit Description
160.0 Cardiomyocytes contractility
76.43 Electrical conductivity along fibers
60.0 degrees Fibers angle rotation
0.88 Passive stiffness
Table 1: Parameters of the model considered in this work and associated baseline values.
Parameter Baseline Unit Description
400.15 Initial blood pool volume of the heart
, , 0.07, 0.06, 0.55 LA/RA/RV active elastance
, , 0.18, 0.07, 0.05 LA/RA/RV passive elastance
, , 0.14, 0.14, 0.20 LA/RA/RV contraction time
, , 0.14, 0.14, 0.32 LA/RA/RV relaxation time
, , 4.0, 4.0, 16.0 LA/RA/RV reference volume
, 0.16, 0.16 Left/Right atrioventricular delay
, 0.0075, 75006.2 Valve minimum/maximum resistance
, 0.64, 0.32 Systemic arterial/venous resistance
, 0.032, 0.036 Pulmonary arterial/venous resistance
, 1.2, 60.0 Systemic arterial/venous capacitance
, 10.0, 16.0 Pulmonary arterial/venous capacitance
, 0.005, 0.0005 Systemic arterial/venous inductance
, 0.0005, 0.0005 Pulmonary arterial/venous inductance
Table 2: Parameters of the model considered in this work and associated baseline values (LA = left atrium; RA = right atrium; RV = right ventricle).

We report in Tab. 3 the list of all the QoIs and outputs of interest, computed from the solution of the coupled model, that are used throughout this paper. The last column of the table indicates which variables are used for cross-validation during the training phase (see Sec. 2.3), which are used for sensitivity analysis (see Sec. 2.4) and which are used for the Bayesian parameter estimation (see Sec. 2.5).

QoI Unit Description Usage
Left Atrium
End-systolic and end-diastolic volume GSA
Minimum and maximum pressure GSA
Left ventricle
Volume transient X-validation
Pressure transient X-validation
End-systolic and end-diastolic volume X-validation, GSA
Minimum and maximum pressure X-validation, GSA
Stroke volume () X-validation, GSA
Right Atrium
End-systolic and end-diastolic volume GSA
Minimum and maximum pressure GSA
Right ventricle
End-systolic and end-diastolic volume GSA
Minimum and maximum pressure GSA
Stroke volume () GSA
Systemic arterial circulation
Minimum and maximum pressure GSA, MCMC
Table 3: List of QoIs used throughout this paper, either for cross-validation (X-validation), global sensitivity analysis (GSA) or MCMC-based Bayesian parameter estimation (MCMC).

To numerically approximate this multiphysics and multiscale model, we adopt the segregated approach proposed in [regazzoni2021em_circulation_pt2], by which the subproblems are solved sequentially. For space discretization, we rely on bilinear Finite Elements defined on hexahedral meshes, adopting different spatial resolutions for the electrophysiological and the mechanical variables. For time discretization, we employ a staggered scheme, where different time steps are used according to the specific subproblem. Moreover, to avoid the numerical oscillations arising from the mechanical feedback on force generation (which are commonly cured by resorting to a monolithic scheme [levrero2020sensitivity, pathmanathan2010cardiac]), we use the stabilized staggered scheme that we proposed in [regazzoni2020oscillation]. This numerical model requires, on a 32-core computing platform, nearly 4 hours of computational time to simulate a heartbeat. More details on the numerical discretization and the computational platform employed to generate the training data used in this paper are available in Appendix A.1.

2.7 Software libraries

The cardiac electromechanics simulations considered in this paper are performed by means of the lifex library (https://lifex.gitlab.io), a high-performance C++ platform developed within the iHEART project (iHEART - An Integrated Heart Model for the simulation of the cardiac function, European Research Council (ERC) grant agreement No 740132, P.I. Prof. A. Quarteroni). To train the ANN-based models, we employ the open source MATLAB library model-learning (https://model-learning.readthedocs.io/), which implements the machine learning method proposed in [regazzoni2019modellearning] and used in this paper. Sensitivity analysis is carried out through the open source Python library SALib (https://salib.readthedocs.io/) [Herman2017]. Finally, for the MCMC-based Bayesian parameter estimation we rely on the open source Python library UQpy (https://uqpyproject.readthedocs.io/) [UQpy2020]. To employ the ANN-based models trained with the MATLAB library model-learning within the Python environment of SALib and UQpy, we exploit pyModelLearning, a Python wrapper for the model-learning library, which is available in its GitHub repository (https://github.com/FrancescoRegazzoni/model-learning).

Both the MATLAB codes used to train the ANN-based ROMs and the datasets that permit to reproduce the numerical results are publicly available in the online repository accompanying this manuscript (https://github.com/FrancescoRegazzoni/cardioEM-learning).

3 Results

3.1 Trained models

To generate the pressure and volume transients needed to train an ANN-based ROM, we sample the parameter space with a Monte Carlo approach, even though more sophisticated sampling strategies – such as Latin Hypercube Design – could be adopted as well. For simplicity, at this stage we only consider a subset of the parameters, selected as the most significant ones on the basis of a preliminary variance-based global sensitivity analysis, obtained with a version of the closed-loop model in which the LV is also represented by a 0D circuit element (see Appendix A.3).

We consider two scenarios. First, we study the variability with respect to a single parameter, namely the active contractility, keeping the remaining parameters of Tab. 1 fixed to their baseline values. Under this setting, we generate 30 numerical simulations with the FOM to train a first ROM, henceforth referred to as the single-parameter ROM. Then, we consider the full parametric variability with respect to all four parameters of Tab. 1, we generate 40 training samples and we train a second ROM, referred to as the four-parameter ROM. All the numerical simulations included in the training set are 5 heartbeats long. The optimal sets of hyperparameters, tuned through the algorithm of Sec. 2.3, are reported in Tab. 4. In both cases, training an ANN-based model takes approximately 18 hours on a single core of a standard laptop.

Trained model Parameters Training set size Hyperparameters
Single-parameter ROM (contractility only) 30 2 1 8 0
Four-parameter ROM (all parameters of Tab. 1) 40 1 1 12
Table 4: Optimal sets of hyperparameters for the two trained ROMs.

Once trained, the two ROMs can be coupled with the circulation model, according to Eq. (5), thus obtaining two reduced coupled models. These represent two surrogates of the FOM, capable of approximating its outputs at a dramatically reduced computational cost. As a matter of fact, numerical simulations with either reduced coupled model take nearly one second of computational time per heartbeat on a single core of a standard laptop.

To test the accuracy of the two ROMs with respect to the FOM, we consider a testing dataset, obtained by taking unobserved samples in the parameter space. For both ROMs, we consider 15 testing simulations of the same duration as the ones included in the training set. In addition, to test the reliability of the ROMs over a time horizon longer than the one considered in the training set, we include in the testing set 5 simulations of double length (i.e. 10 heartbeats). Then, we compare the simulations obtained with the two ROMs with those obtained with the FOM for the same parameter values.

The accuracy of the two ROMs is summarized in Tabs. 5 and 6. Specifically, in Tab. 5 we report metrics regarding the ability of the ROMs to correctly predict the function of the LV, that is the chamber surrogated by the ANN-based model. We report the relative errors in L2 norm (i.e. in the mean-square sense) associated with the pressure and volume transients, and the relative errors on some biomarkers of clinical interest (maximum and minimum pressures and volumes). For the latter, we also report the coefficient of determination R2. We notice that both the single-parameter and the four-parameter reduced models are able to surrogate the FOM with remarkably good accuracy. The four-parameter model features slightly larger errors than the single-parameter one; this is not surprising, since it explores a much larger parametric space (four parameters instead of one). Interestingly, the errors obtained over the long time horizon are very similar to those obtained for simulations of the same duration as those used to train the ANNs. This demonstrates the reliability of the ROMs in long-term simulations. Similarly, in Tab. 6 we report the errors and R2 coefficients associated with the RV outputs (for simplicity, we here consider only the 5 heartbeats long simulations). The accuracy obtained in reproducing the RV function is even better than that of the LV, coherently with the fact that the RV is included in the circulation model and thus it is not surrogated by the ANN.
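The two metrics can be computed as sketched below (a small helper with illustrative names, not taken from the accompanying repository).

```python
import numpy as np

def relative_l2_error(x_rom, x_fom):
    """Relative error between a ROM transient and the FOM reference."""
    return np.linalg.norm(x_rom - x_fom) / np.linalg.norm(x_fom)

def r2_coefficient(q_rom, q_fom):
    """Coefficient of determination between ROM and FOM scalar biomarkers
    (e.g. maximum LV pressure) over the testing set."""
    q_rom, q_fom = np.asarray(q_rom), np.asarray(q_fom)
    ss_res = np.sum((q_fom - q_rom) ** 2)
    ss_tot = np.sum((q_fom - q_fom.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```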

5 heartbeats
Single-parameter ROM vs FOM: relative error 0.0336 0.0090 0.0097 0.0046 0.0139 0.0035
R2 [%] 99.691 99.864 99.896 99.948
Four-parameter ROM vs FOM: relative error 0.0620 0.0285 0.0517 0.0272 0.0471 0.0127
R2 [%] 94.370 95.302 95.942 97.061
10 heartbeats
Single-parameter ROM vs FOM: relative error 0.0293 0.0071 0.0113 0.0037 0.0096 0.0031
R2 [%] 99.924 99.980 99.851 99.944
Four-parameter ROM vs FOM: relative error 0.0631 0.0265 0.0442 0.0147 0.0382 0.0122
R2 [%] 92.227 99.957 99.229 99.063
Table 5: Testing errors and R2 coefficients on the LV outputs obtained with the two reduced models, for 5 heartbeats long (top) and 10 heartbeats long (bottom) simulations.
5 heartbeats
Single-parameter ROM vs FOM: relative error 0.0048 0.0035 0.0015 0.0027 0.0028 0.0004
R2 [%] 100.000 99.993 99.998 100.000
Four-parameter ROM vs FOM: relative error 0.0069 0.0040 0.0029 0.0079 0.0072 0.0069
R2 [%] 99.994 99.807 99.819 99.997
Table 6: Testing errors and R2 coefficients on the RV outputs obtained with the two reduced models, for 5 heartbeats long simulations.

In Figs. 4 and 5 we compare 10 heartbeats long transients obtained with the single-parameter and four-parameter reduced models, respectively, to those obtained with the FOM. None of these transients were used to train the ANN-based models (that is, they belong to the testing set). In Fig. 6 we show the LV biomarkers predicted by the two reduced models versus those predicted by the FOM.

Figure 4: Pressure and volume transients obtained with the single-parameter reduced model (dashed lines), compared to those obtained with the FOM (solid lines). The different colors correspond to different samples of the testing set. For the sake of clarity, only three samples are shown in the first row.
Figure 5: Pressure and volume transients obtained with the four-parameter reduced model (dashed lines), compared to those obtained with the FOM (solid lines). The different colors correspond to different samples of the testing set. For the sake of clarity, only three samples are shown in the first row.

Figure 6: LV biomarkers obtained with the single-parameter and four-parameter reduced models versus those obtained with the FOM on the testing set (one panel per reduced model). The different marker colors are associated with 5 heartbeats and 10 heartbeats long simulations, respectively.

3.2 Coupling the electromechanical reduced-order model with different circulation models

To generate the data used to train the ROM, we employ the coupled FOM (that is, we couple the 3D electromechanical model with the closed-loop circulation model). However, the ANN-based ROM surrogates the electromechanical model regardless of its coupling with a specific circulation model. In fact, thanks to Eq. (5), the trained ROM can be coupled with circulation models different from the one used to generate the training data. We remark that predictions are reliable only if the pressure and volume values remain inside the ranges explored during the training phase.

We demonstrate the flexibility of our approach by coupling the ROM with a pressure-volume closure relationship different from the closed-loop circulation model used during the training phase (see Sec. 2.6). Specifically, we consider a circulation model with a windkessel-type relationship during the ejection phase and a linear pressure ramp during the filling phase [regazzoni2019mor-sarcomeres]. In spite of the different nature of the two circulation models (the one used during training and the one used during testing), the ANN-based ROM trained with the closed-loop model proves to be reliable also when it is coupled with the windkessel-type model. Indeed, as shown in Tab. 7, the results obtained by coupling the ROM with the windkessel-type model closely approximate those obtained by coupling the FOM with the same model. These errors are indeed comparable to the ones obtained by surrogating the FOM with the reduced coupled model (see Tab. 5).

Single-parameter ROM vs FOM: relative error 0.0459 0.0518 0.0065 0.0004 0.0080 0.0009
Four-parameter ROM vs FOM: relative error 0.0510 0.0219 0.0116 0.0027 0.0110 0.0063
Table 7: Testing errors on the LV outputs obtained with the two reduced models and with the FOM, both coupled with the windkessel-type circulation model, for 5 heartbeats long simulations.

3.3 Global sensitivity analysis

Once we have verified that the two reduced models are able to reproduce the outputs of the FOM with high accuracy, we use them to perform a global sensitivity analysis. The aim is to determine which of the parameters of the circulation model and of the electromechanical model contribute the most to a list of outputs of interest, the so-called QoIs (see Tab. 3). For this purpose, we compute the first-order and total-effect Sobol indices, as described in Sec. 2.4. When replacing the 3D electromechanical model with a ROM, we can only estimate sensitivity indices with respect to the parameters considered during the training phase. On the other hand, we can compute sensitivity indices with respect to all parameters of the circulation model, even those that were not varied during training. It would even be possible – at least in principle – to consider a circulation model different from the one used to generate the training data. In fact, as pointed out in Sec. 2.2, the ROMs surrogate the electromechanical model independently of the circulation model to which it is coupled.

We only report the results obtained by means of the most complete ROM, that is the four-parameter one. The first-order and total-effect Sobol indices are shown in Figs. 7 and 8, respectively. First, we notice that the first-order and total-effect indices exhibit only small differences from each other. This means that the interaction among the different parameters is less significant, in the determination of the QoIs, than the variation of the individual parameters. Furthermore, we note that, as expected, the QoIs associated with a given chamber or compartment are mostly determined by the parameters associated with the same region of the cardiovascular network. There are, however, some notable exceptions. Indeed, the venous resistance of the systemic circulation has a strong impact on the right heart (RA and RV), i.e. the part of the network located immediately upstream. In addition, the systemic arterial resistance significantly impacts the maximum LV pressure. In fact, this parameter is known to contribute to the determination of the so-called afterload [tortora2008]. We also note that the total circulating blood volume has a major impact on almost all biomarkers. Conversely, the parameters associated with the pulmonary circulation network have a minimal impact. Finally, the parameters describing the resistance of open and closed valves also have very little impact. This is an interesting result, since it shows that these parameters, set to a very low and a very high value respectively (since for reasons of numerical stability they cannot be set equal to zero and infinity), have virtually no impact (at least within the ranges considered here) on the output quantities of biomechanical interest.

Figure 7: First-order Sobol indices computed by exploiting the four-parameter reduced model. Each row corresponds to a parameter of either the electromechanical model (see Tab. 1) or the circulation model (see Tab. 2). Each column corresponds to a QoI (see Tab. 3). Both parameters and QoIs are split into groups, separated by a black solid line. Specifically, from left to right, we list the QoIs referred to LA, LV, RA, RV and systemic circulation. Similarly, from top to bottom, we list the parameters associated with LA, LV, RA, RV, systemic circulation, pulmonary circulation, valves and total blood volume.
Figure 8: Total-effect Sobol indices computed by exploiting the four-parameter reduced model. For a description of the figure see the caption of Fig. 7.

Regarding the individual compartments, we note that their variability can be explained by only a few parameters. Specifically, the atria are mainly influenced by their passive stiffness and by the atrioventricular delay, whereas for the RV the most relevant parameters are the active and passive stiffness. As expected, the biomarkers associated with the LV – the only chamber included in the 3D electromechanical model – are mainly influenced by the electromechanical parameters of Tab. 1. Among these, the most influential one is the active contractility, followed by the fiber orientation and, to a lesser extent, by the passive stiffness. Finally, the electrical conductivity has a minimal impact on the biomarkers under consideration.

It is important to note that Sobol indices are affected by the amplitude of the ranges in which the parameters are varied. In particular, the wider the range associated with a parameter, the greater the associated Sobol indices will be, as the parameter in question potentially generates greater variability in the QoI. Therefore, the results shown here are valid for the specific ranges we used, which are reported in Appendix A.5.

3.4 Bayesian parameter estimation

In this section we present a further practical use of the cardiac electromechanics ROM presented in this paper. In particular, we show that the ROM can be used to enable Bayesian parameter estimation for the model, which, due to the prohibitive computational cost, would not be affordable without the use of a ROM.

First, we consider a prescribed value for the parameter vector (specifically, we employ the values reported in Tabs. 1 and 2) and we run a simulation using the FOM. Based on the output of this simulation, we consider a couple of QoIs, namely the minimum and maximum arterial pressures. The choice of these two QoIs is motivated by the fact that they can be measured non-invasively, and in fact they are routinely monitored in clinical practice. We then reconstruct the value of a couple of parameters, namely the active contractility and the systemic arterial resistance, keeping the values of the remaining parameters fixed and known. Indeed, we aim at demonstrating that the proposed ROM is suitable for estimating parameters of both the electromechanical model (such as the contractility) and of the circulation model (such as the arterial resistance). Specifically, in this section we rely on the reduced coupled model.

To mimic the presence of measurement errors, we artificially perturb the exact values of the two QoIs with synthetic noise of increasing magnitude. Specifically, we add noise sampled from independent Gaussian variables with zero mean and variance σ². We consider a noise-free case (σ = 0) and two cases with increasing noise levels.

As described in Sec. 2.5, we use the MCMC method to derive a Bayesian estimate of the parameter values from (possibly noisy) measurements of the minimum and maximum arterial pressure. For both unknown parameters we employ a non-informative prior, that is a uniform distribution on the ranges used to train the ROM. According to Sec. 2.5, we set $\boldsymbol{\Sigma} = \boldsymbol{\Sigma}_{\mathrm{ROM}} + \boldsymbol{\Sigma}_{\mathrm{exp}}$, where the experimental measurement error covariance is given by $\boldsymbol{\Sigma}_{\mathrm{exp}} = \sigma^2 \mathbf{I}$ ($\mathbf{I}$ being the 2-by-2 identity matrix) and where the ROM approximation error covariance $\boldsymbol{\Sigma}_{\mathrm{ROM}}$ is estimated from its statistical distribution on the validation set. More details on the MCMC setup are available in Appendix A.6.

In Fig. 9 we show the posterior distribution of the parameter pair (active contractility, systemic arterial resistance) obtained for the three noise levels considered. We identify with a red line the 90% credibility region, that is the region in the parameter space with the largest posterior probability such that it covers 90% of the posterior mass. We notice that, for each noise level, the credibility region contains the exact value of the parameters, represented by a red star. As expected, for larger values of the noise, the size of the credibility region increases (that is, the estimate is more uncertain). As a matter of fact, an advantage of Bayesian parameter estimation methods, compared to deterministic methods, is their ability to quantify the uncertainty associated with the parameter estimates. A further feature of Bayesian methods is their ability to capture correlations among the estimated parameters. Indeed, this aspect emerges clearly from the oblique shape of the credibility regions. This is due to the fact that an increase in one parameter or a decrease in the other leads to similar changes in the measured QoIs (the minimum and maximum arterial pressures), thus making their posterior distributions highly correlated.

Figure 9: Output of the Bayesian estimation presented in Sec. 3.4. We depict the posterior distribution of the two parameters, estimated by means of the MCMC method, for the three noise levels (left to right, increasing noise). The red lines show the 90% credibility regions, while the red stars represent the exact values of the unknown parameters (active contractility and systemic arterial resistance).

4 Discussion

In this paper, we develop a machine learning method to build ANN-based ROMs of 3D cardiac electromechanical models. Thanks to the reduced-order differential equations learned through our algorithm, it is possible to approximate the cardiac dynamics, in terms of pressure and volume transients, with high fidelity and with a huge computational saving. As a matter of fact, once trained, the ANN-based ROM permits to simulate a heartbeat virtually in real time (about one second of computation per heartbeat on a standard laptop), whereas the original electromechanical model requires about four hours per heartbeat on a supercomputer with 32 cores. Taking into account the number of cores, our ANN-based ROM brings an impressive speedup of 460'000x (see Fig. 10). Clearly, a fair evaluation of the computational saving should also consider the time required to generate the training dataset and to train the model. To this aim, we consider two test cases, corresponding to the examples of global sensitivity analysis and Bayesian parameter estimation presented in Secs. 2.4 and 2.5, respectively. In both cases, we assume that 5 computer nodes with 32 cores each are available, and we compare the total computational times obtained by using either the FOM or the ANN-based ROM with these computational resources.

Figure 10: Summary of the computational times associated with FOM and ROM numerical simulations and with training of the ANN-based ROM.

The global sensitivity analysis presented in Sec. 2.4 requires the numerical simulation of 74'000 parameter instances. On average, the limit cycle is reached in 10 heartbeats, for a total of 740'000 heartbeats to be simulated. Running this huge number of numerical simulations with the FOM would not be feasible, as it would require about 68 years of uninterrupted use of the 5 nodes endowed with 32 cores each. On the contrary, by virtue of our machine learning method, we are able to perform a global sensitivity analysis, albeit with a small approximation in the numerical results (see Tabs. 5 and 6), in 7.5 days (6.7 days to generate the training dataset, 18 hours to train the ANN and 1 hour and 17 minutes to perform the sensitivity analysis using the ROM). Therefore, taking into account the time required to build the ROM, our approach yields a 3'300x speedup (see Fig. 11). Bayesian parameter estimation, on the other hand, requires the numerical simulation of approximately 960'000 heartbeats. In this case, despite using only 20 cores to run the MCMC, we are able to obtain a result in just 6 days and 8 hours (including the generation of the training dataset), whereas with the FOM it would take more than 87 years. We thus obtain an overall speedup of 5'000x (see Fig. 12). Moreover, we notice that, should a Bayesian calibration be needed for different data, it is not necessary to repeat the training of the ANN-based model: it is sufficient to re-run only the MCMC algorithm, which takes - thanks to our ROM - only 13 hours and 20 minutes. Finally, we remark that the cost required to train the ANN or to perform numerical simulations with the ROM does not depend on the specific electromechanical model at hand, as it is only based on the generated pressure and volume transients. In particular, it is not expected to grow if the biophysical detail or the number of degrees of freedom of the computational mesh increases.

Figure 11: Summary of the computational times associated with global sensitivity analysis.
Figure 12: Summary of the computational times associated with Bayesian parameter estimation.

The ROM proposed in this paper allows, as shown in Sec. 3, to tackle many-query problems that would otherwise be unaffordable due to their computational cost. Indeed, sensitivity analyses on cardiac models have so far mainly addressed single-cell models [sher2013local, tondel2015quantifying, pathmanathan2019comprehensive]. Sensitivity studies on 3D models of cardiac electromechanics have been mostly limited to simplified geometries [Hurtado2017], or to a few parameters varied one at a time (that is, local sensitivity analysis) [land2012analysis, Marx2020]. To the best of our knowledge, the only global sensitivity study on a 3D electromechanical model was performed only recently in [levrero2020sensitivity]; however, the huge computational cost associated with the numerical approximation of the forward model dictated the use of an idealized ellipsoidal geometry and of single-heartbeat simulations, thus without letting the system reach a periodic limit cycle, which limits the accuracy of the results. Furthermore, parameter estimation under uncertainty using 3D models of cardiac electromechanics is hard to attain, due to the computational cost that would be required by Bayesian methods such as MCMC. Indeed, parameters are typically estimated sequentially by using minimization methods [land2012analysis, wang2009modelling, finsberg2018estimating] or ad hoc heuristics [Marx2020, costa2013automatic]. An alternative family of methods, able to provide an estimate of uncertainty, is that of sequential methods of the Kalman filter type (see e.g. [xi2011myocardial, Marchesseau2013]), which, however, would require an intrusive implementation within the electromechanical model.

Recently, a number of surrogate models of cardiac electromechanics, often called emulators, have been built by using machine learning algorithms [di2018gaussian, dabiri2019prediction, Longobardi2020, Cai2021]. These emulators are based on a collection of pre-computed numerical simulations obtained by sampling the parameter space, similarly to what has been done in this work. However, the approach behind these emulators is very different from the one we follow. These emulators are maps that fit the parameters-to-QoIs relationship in a static manner. On the contrary, with our approach, the ROM makes it possible to perform an actual numerical simulation of the cardiac function, since the circulation model is kept in its full-order form, while only the computationally demanding part, i.e. the 3D electromechanical model, is surrogated. This has a number of advantages:

  1. The ROM is independent of the circulation model to which it is coupled, and it can also be coupled to models different from those used during training, or with different values of their parameters.

  2. Unlike the emulators in [di2018gaussian, dabiri2019prediction, Longobardi2020, Cai2021], our ROM allows for time extrapolation, i.e. reliable predictions over time spans longer than those used during training, as demonstrated by our numerical results. This observation is very important: while for emulators that fit the parameters-to-QoIs map the simulations contained in the training set must have reached a limit cycle (otherwise the associated QoIs would be meaningless), with our approach it is possible to defer the reaching of the limit cycle to the ROM, during the online phase. This permits to lighten the computational cost associated with the creation of the training set, which typically represents the main component of the total cost (see the use cases discussed above).

  3. The output of the simulations obtained with our ROM is not limited to a list of QoIs. Indeed, it contains the transients of pressures and volumes of the surrogated heart chamber and of the compartments of the circulation model. We notice that the latter is not hidden as a black box in the parameters-to-QoIs map, as for emulators, but is rather represented explicitly.

  4. With our approach, the ROM only needs to learn the variability with respect to the electromechanical parameters, since the circulation model (involving the hemodynamic parameters) remains explicitly represented. Conversely, emulators based on a parameters-to-QoIs map must capture the dependence on both sets of parameters, and thus the training set must be large enough to accurately represent their joint statistical variability. Indeed, we obtain accurate results with only 30-40 samples, a very low number compared to those typically required to construct emulators (e.g. 825 samples in [Longobardi2020], 9000 samples in [Cai2021]).

A limitation of our ANN-based ROM is that the online phase may be slower than that of emulators, which do not require the approximation of a differential equation but only the evaluation of a function. However, our ANN-based ROM still enables real-time simulations and can be readily applied in a number of use cases, such as global sensitivity analysis, parameter estimation and uncertainty quantification. Furthermore, as highlighted above, most of the computational cost is not due to the evaluation of the ROM, but rather to the construction of the training set, which our approach allows to keep very small (30-40 samples in our test cases). Therefore, we conclude that keeping the circulation model in its full-order form and surrogating only the computationally intensive electromechanical model allows for a very favorable trade-off between what is reduced (the variability that must be explored during the offline phase) and what is not reduced (the variability that must be accounted for during the online phase).

A different type of emulator is the one we proposed in [regazzoni2021cardioemulator], which, similarly to what is done in this paper, permits coupling a reduced version of the 3D electromechanical model to a circulation model, thus enabling real-time numerical simulations. However, the emulator of [regazzoni2021cardioemulator] is built for a fixed value of the electromechanical parameters or, through a linear interpolation, for the variability of a single parameter at most. On the other hand, its construction only requires one or two numerical simulations performed with the FOM. Therefore, the emulator of [regazzoni2021cardioemulator] is advantageous when one needs to surrogate the model for a given parametrization (e.g. to quickly reach a limit cycle, or to perform sensitivity analysis limited to the parameters of the circulation model); conversely, when one needs to explore the parametric dependence of the electromechanical model itself, the approach proposed in this paper is preferable.

The ROM proposed in this paper also differs in several respects from projection-based ROMs, which have been used in the cardiovascular field as well (see e.g. [QMN16, bonomi2017matrix, pagani2017reduced, pagani2021enabling]), and from deep learning based approximations of the parameters-to-solution map (see e.g. [fresca2020deep, fresca2021]). These families of ROMs provide a richer output than ours, as they approximate spatially varying outputs of the 3D model, such as the transmembrane potential and the tissue displacement. However, the online phase of projection-based ROMs is typically more computationally demanding than that of our ROM [bonomi2017matrix, pagani2021enabling]. Moreover, unlike our method, projection-based ROMs are intrusive with respect to the equations to be reduced, and suitable hyper-reduction techniques are needed to handle the nonlinearities of cardiac models. The deep learning based method of [fresca2020deep, fresca2021] features a training phase that is more computationally demanding than that of the method proposed in this paper, due to the larger size of the outputs. Conversely, in applications in which the knowledge of pressures, volumes and blood fluxes associated with the cardiac chambers is sufficient, our ROM accurately approximates the outputs of the FOM at a very reduced computational cost and in a non-intrusive manner (only pressure and volume recordings are required from the FOM).

We remark that the proposed method is limited to electromechanical models featuring a periodic behavior. The ROM is indeed periodic by construction, due to the presence of the sine and cosine terms highlighted in Eq. (3). Therefore, the ROM is not suitable, e.g., for simulating irregular electrophysiological behaviors, such as ventricular arrhythmias and other electrical dysfunctions.
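Schematically, and with illustrative notation that does not reproduce the exact form of Eq. (3), the periodicity follows because time enters the ANN right-hand side only through a pair of periodic inputs,
\[
  \dot{\mathbf{z}}(t) = \mathcal{NN}\!\left( \mathbf{z}(t),\, \cos\!\left(\tfrac{2\pi t}{T_{\mathrm{HB}}}\right),\, \sin\!\left(\tfrac{2\pi t}{T_{\mathrm{HB}}}\right);\, \mathbf{p} \right),
\]
so that the forcing term is, by construction, periodic with the heartbeat period \(T_{\mathrm{HB}}\).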

5 Conclusions

We presented a machine learning method to build ANN-based ROMs of cardiac electromechanical models. On the basis of pressure and volume transients generated with the FOM, the method learns a system of differential equations that approximates the dynamics of the cardiac chamber to be surrogated. This system of differential equations, linking the pressure and volume of a cardiac chamber, is then coupled with lumped-parameter models of cardiac hemodynamics, thus allowing for the simulation of the cardiac function at a dramatically reduced computational cost with respect to the original FOM. As a matter of fact, our ROM enables numerical simulations virtually in real time. Moreover, thanks to its non-intrusive nature, the proposed algorithm can easily be applied to electromechanical models other than the one considered in this work.

We presented two test cases in which we employ the ANN-based ROM. First, we carried out a global sensitivity analysis to assess the influence of the parameters of the electromechanical and hemodynamic models on a list of outputs of clinical interest. Then, we performed a Bayesian estimation of two parameters (belonging to the electromechanical and hemodynamic models, respectively), starting from noisy measurements of two scalar quantities (namely the maximum and minimum arterial pressure). In both cases, performing the required large number of numerical simulations with the FOM would not have been feasible, due to their high computational cost (it would have taken tens of years on a supercomputer platform). Replacing the FOM with its ANN-based surrogate allowed us to obtain an approximate solution in a few hours of computation. Taking into account that the generation of the numerical simulations contained in the training set required less than 7 days on the same computational platform, our ANN-based ROM reduced the total computational time by a factor of more than 3'000.

Acknowledgements

This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 740132, iHEART - An Integrated Heart Model for the simulation of the cardiac function, P.I. Prof. A. Quarteroni).


Appendix A Numerical settings

In this appendix we report the main numerical approaches that we employ to develop and test the ROM of cardiac electromechanics presented in this paper.

a.1 Electromechanical model

We use the same space and time discretization choices for all the numerical simulations of this paper. Specifically, we employ a finer mesh for cardiac electrophysiology, consisting of 258'415 DOFs and 240'864 elements (with an average mesh size of ), and a coarser one for cardiac mechanics, made of 35'725 DOFs and 30'108 elements ( ). To advance the electrophysiological variables we use a timestep length of , and a five times larger timestep to advance the mechanical variables. The numerical simulations were run on one cluster node endowed with 32 cores (four Intel Xeon E5-4610 v2, 2.3 GHz), available at MOX, Dipartimento di Matematica.
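The staggered advancement of the two physics can be sketched as follows in Python; the timestep value, the solver placeholders and the function names are assumptions for illustration only (the actual timestep values are not reported in this excerpt), while the 1:5 ratio between the electrophysiology and mechanics timesteps is the one stated above.

    # Placeholder timestep: the actual value used in the paper is not
    # reported in this excerpt.
    DT_EP = 5.0e-5            # electrophysiology timestep [s] (assumption)
    DT_MECH = 5 * DT_EP       # mechanics timestep, five times larger

    def advance_electrophysiology(state, dt):
        # placeholder for one step of the electrophysiology solver
        return state

    def advance_mechanics(state, dt):
        # placeholder for one step of the mechanics solver
        return state

    def run_heartbeat(T_HB=0.8):
        ep_state, mech_state = {}, {}
        n_steps = int(round(T_HB / DT_EP))
        for k in range(n_steps):
            ep_state = advance_electrophysiology(ep_state, DT_EP)
            if (k + 1) % 5 == 0:
                # mechanics advanced once every five electrophysiology steps
                mech_state = advance_mechanics(mech_state, DT_MECH)
        return ep_state, mech_state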

a.2 Convergence to the limit cycle

To determine whether a simulation has reached a limit cycle (i.e. a periodic solution), we employ a criterion based on the increment between successive cycles. Specifically, we consider the limit cycle to be reached when the maximum difference between two consecutive cycles in the pressures and in the volumes of all four chambers is less than and , respectively. This criterion is typically satisfied within 5 to 15 cycles. In any case, we always perform a minimum of 5 cycles.
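A minimal Python sketch of this stopping criterion is reported below; the tolerance values are illustrative assumptions, since the actual thresholds are not recoverable from this excerpt.

    import numpy as np

    # Placeholder tolerances: the actual thresholds used in the paper are not
    # reported in this excerpt, so these values are illustrative only.
    TOL_PRESSURE = 0.1   # e.g. mmHg
    TOL_VOLUME = 0.1     # e.g. mL
    MIN_CYCLES = 5

    def limit_cycle_reached(pressures, volumes):
        # pressures, volumes: arrays of shape (n_cycles, n_chambers, n_samples)
        # holding the traces of the four chambers, resampled on a common grid
        # within each heartbeat.
        n_cycles = pressures.shape[0]
        if n_cycles < MIN_CYCLES:
            return False
        dp = np.max(np.abs(pressures[-1] - pressures[-2]))
        dv = np.max(np.abs(volumes[-1] - volumes[-2]))
        return bool(dp < TOL_PRESSURE and dv < TOL_VOLUME)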

a.3 Training algorithm

To generate the training datasets, we employ a Monte Carlo based sampling of the parameter space. As mentioned, we consider a subset of the parameters, made of

To define the sampling spaces, we consider 50% to 200% ranges of the baseline values reported in Tabs. 1 and 2 for