## 1 Introduction

During recent decades, interest in the numerical simulation of blood flow has grown continuously. The main reasons for this development are the increase in computational power, the design of efficient numerical algorithms and the improvement of imaging techniques, combined with elaborate reconstruction techniques yielding data on the geometry of the objects of interest as well as important modelling parameters like the density of a fluid or tissue [6, 23, 55]. The motivation for putting more and more effort into these developments stems from the fact that computational tools enable clinical doctors and physiologists to obtain insight into cardiovascular diseases in a non-invasive way. By this, the risk of infections and other dangers can be remarkably reduced. Using numerical models, it is, e.g., possible to make predictions on how a stenosis affects the blood supply of organs; in particular, one wants to find out to what extent a vessel can be occluded without reducing the blood flow significantly. In this context, it is also of great interest whether blood vessel systems like the Circle of Willis [3, 57] or small interarterial connections [60, 63] can help to restore the reduced blood flow. Furthermore, the simulation of fundamental regulation mechanisms like vasodilation, arteriogenesis and angiogenesis helps to understand how the impact of an occlusion on blood flow can be reduced [2, 16, 34]. Further important application areas are the stability analysis of an implanted stent or an aneurysm. Thereby, it is crucial to compute realistic pressure values, mass fluxes and wall shear stresses in an efficient way, such that these data can be evaluated as fast as possible.

However, the simulation of flow through a cardiovascular system is a very complex matter. Since it is composed of a huge number of different vessels that are connected in a very intricate way, it is usually not possible to resolve large parts of the cardiovascular system, due to the enormous demand for computational power and data volume. In addition, there are many different kinds of vessels covering a large range of radii, wall thicknesses and lengths [23, Chap. 1, Tab. 1.1, Tab. 1.2]. The vessel walls of arteries are, e.g., much thicker than those of veins, due to the fact that they have to transport blood at high pressure. As the heart acts like a periodic pump, blood flow in the larger arteries and veins, which have elastic vessel walls, is pulsatile and exhibits high Reynolds numbers [23, Chap. 1, Tab. 1.7]. This requires the usage of FSI algorithms and discretisation techniques for convection dominated flows [8, 11, 12, 71]. Contrary to that, flows in vessels that are small with respect to diameter or length [23, Chap. 1, Tab. 1.7] exhibit small Reynolds numbers. Moreover, the walls of such vessels are not significantly deformed and therefore they can be modelled as quasi-rigid tubes. Vessels having these properties can typically be classified as arterioles, venoles or capillaries. Taking all these facts into account, it becomes obvious that totally different models and methods have to be applied to this part of the cardiovascular system [13, Chap. 6.2], [45, 56].

Due to the variety of different vessel geometries and types of flows, it is unavoidable to consider a coupling of different kinds of models, such that realistic blood flow simulations within the entire cardiovascular system, or within a part of it, can be performed. In order to reduce the complexity of the numerical model, mixed-dimension models have been introduced [4, 5, 21, 39, 41, 42, 43, 48]. Thereby, subnetworks of larger vessels are modelled by three-dimensional (3-D) or one-dimensional (1-D) PDEs in space. At the inlets and outlets of these networks, the corresponding models are coupled with 1-D PDEs or zero-dimensional (0-D) models (systems of ODEs) incorporating, e.g., the windkessel effect of the omitted vessels and the pumping of the heart [9, 31, 46]. An alternative to these open-loop models for arterial or venous subnetworks of the systemic circulation are closed-loop models linking the inlets and the outlets of a subnetwork by a sequence of 0-D models for the organs, the pulmonary circulation and the heart [37, 38, 44].

Besides the usage of dimensionally reduced or multiscale models, a further method for reducing the complexity of blood flow simulations has been established in recent years [40, 52, 53]. This approach is called the Reduced Basis method (RB method) and is based on the idea of solving parameterised PDEs for a few parameters in an offline phase (see e.g. [26]). These so-called solution snapshots are then used as the basis of a low-dimensional space, and projecting the PDE onto this space results in a low-dimensional problem. This reduced system can be solved in a so-called online phase, in which solutions for multiple parameters can be computed efficiently. Within relevant biomedical application areas, these parameters usually determine the shape of a bifurcation, a stenosis, a bypass or an inflow profile [51, Chap. 8], [52, 53].

In this paper, we want to investigate the performance of a different type of surrogate model obtained via machine learning techniques, in particular kernel methods [19, 25, 27, 62, 75]. Contrary to the RB method, the surrogate model is in this case represented by a linear combination of kernel functions like the Gaussian or the Wendland kernel [74, 78]. The coefficients in the linear combination and the parameters of the kernel functions are obtained from a training and validation process, which is performed in an offline or training phase [28]. These methods have the advantage of constructing nonlinear, data-dependent surrogates that can reach significant degrees of accuracy, while not needing an excessively large amount of data, as is instead the case for other machine learning techniques.

Using this method, we want to simulate the impact of a stenosis on blood flow; in particular, we consider a peripheral arterial stenosis. This type of stenosis is of high interest for physiologists, since peripheral arteries supply organs that are located far away from the heart. It is obvious that peripheral organs and tissues are affected by a potential risk of undersupply, and therefore an occlusion of peripheral arteries is extremely critical. For our simulations, we place a stenosis in the right tibial artery located in the lower right leg (see Figure 1) and study the pressure and flow rate curves, i.e., the evolution of the two quantities at different points over a complete heart-beat, in the vicinity of this stenosis.

For the blood flow simulations, we use a 1D-0D coupled model, where we assign a 1-D flow model to the main arteries of the systemic circulation [2, 4, 13, 35]. At the outlets of this network, 0-D models are attached to the 1-D models to incorporate the influence of the omitted vessels. The stenosis is modelled by a 0-D model consisting of an ODE [43, 64, 79]. Using the dimensionally reduced model, we can produce realistic pressure and flow rate curves in a fast way.

Although this model is already both accurate and relatively fast, it is still too slow and hence not suitable for real-time simulation or parameter estimation. To overcome this problem, we train a kernel-based surrogate model which predicts, depending on the degree of stenosis, a pressure or flow-rate curve. The surrogate is trained in a data-dependent way by computing pressure and flow rate curves for different degrees of stenosis, which are used as training data for the kernel method to construct an accurate surrogate model. The intention of this modelling approach is that a combination of dimensionally reduced models and kernel methods allows us to simulate the impact of a stenosis for an arbitrary degree of narrowing in a very short time. Simulation techniques of this kind might support clinical doctors and researchers with some important information after a relatively short time, such that their diagnostic process can be optimised.

This combination of techniques is relatively new, and the results presented in this paper demonstrate its effectiveness. Moreover, although we concentrate here on the prediction of pressure and flow rate curves, the same technique can be easily adapted to construct surrogates of other relevant quantities of the blood flow simulation.

We remark that the present approach has potentially different advantages over the RB method. Indeed, the RB method typically requires the computation of several time snapshots in the offline phase to simulate the time evolution, and also requires a time integration, mostly with the same timestep as the full model, during the online phase. On the other hand, kernel based surrogates only require the time evolution of the quantities of interest in the desired time interval as training data, and in the online phase they can directly predict it for a new parameter, without the need of any time integration. Moreover, it is well known that the RB method may perform poorly when applied to transport problems, especially in the presence of moving structures or evolving discontinuities. This behaviour is reflected in a slowly decreasing Kolmogorov width, as discussed e.g. in [26, Example 3.4].

The remainder of our work is structured as follows: In Section 2 we outline all the details of the 1D-0D coupled model and the model for the stenosis. In addition, some comments on the numerical methods are presented. The first subsection of the following Section 3 contains some information on the fundamentals of kernel methods. This first part presents results that are mainly already discussed in the cited literature. Nevertheless, since we aim to address researchers from both the blood flow and the machine learning communities, we include it to provide a clearer explanation for the interested reader, who may be acquainted with only one of the two fields. The second and third subsections describe how kernel methods can be used to compute flow variables in dependence of the degree of stenosis. By means of the models and methods from Sections 2 and 3, we perform in Section 4 some numerical tests illustrating the accuracy of the surrogate kernel model. Moreover, it is shown how the surrogate kernel model can be used to solve a state estimation problem. The paper is concluded by Section 5, in which we summarise the main results and make some comments on possible future work.

## 2 Simulation of arterial blood flow by dimensionally reduced models

Simulating blood flow from the heart to the arms and legs, we consider the arterial network presented in [65, 73, 76]. This network consists of the main arteries including the aorta, carotid arteries, subclavian arteries and the tibial arteries (see Figure 1). Our modelling approach for simulating blood flow through this network is based on the idea of decomposing, in a first step, the network into its single vessels. In a next step, a simplified 1-D flow model is assigned to each vessel. Finally, the single models are coupled at the different interfaces, in order to obtain global solutions for the flow variables.

The following subsections present the basic principles of the 1-D model and the coupling conditions at bifurcations as well as at the stenosis. Furthermore, we make some comments on the numerical methods that are used to compute a suitable solution. At the inlet of the aorta (see Figure 1, left), we emulate the heart beats by a suitable boundary condition. In order to account for the windkessel effect of the omitted vessels, the 1-D flow models associated with the terminal vessels are coupled with ODE systems (0-D models), which are derived from electrical circuit analogies [4]. Usually, the term windkessel effect refers to the ability of large deformable vessels to store a certain amount of blood volume, such that a continuous supply of organs and tissue can be ensured. However, many of the vessels that are not depicted in Figure 1 also exhibit this feature to some extent. Furthermore, the arterioles located beyond the outlets of this network impose some resistance on the blood flow [2, 34]. These features have to be integrated into the outflow models in order to be able to simulate realistic pressure and flow rate curves.

### 2.1 Modelling of blood flow through a single vessel

Let us suppose that the Navier-Stokes equations are defined on cylindrical domains of length $\ell$ and that their main axes are aligned with the $x$-coordinate. Modelling the viscous forces, we assume that blood in large and medium sized arteries can be treated as an incompressible Newtonian fluid, since blood viscosity is almost constant within large and middle sized vessels [49]. The boundaries of the computational domain change in time due to the elasticity of the arterial vessel walls and the pulsatile flow. Thereby, it is assumed that the vessel displaces only in the radial direction and that the flow is symmetric with respect to the main axis of the vessel [21, 50]. In addition, we postulate that the $x$-component of the velocity field is dominant with respect to the other components. Taking all these assumptions into account and integrating the Navier-Stokes equations across a section area perpendicular to the $x$-axis at position $x$ and time $t$, one obtains the following system of PDEs [7, 9, 30]:

$$\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0, \tag{1}$$

$$\frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\left(\frac{Q^2}{A}\right) + \frac{A}{\rho}\frac{\partial p}{\partial x} = -K_r \frac{Q}{A}. \tag{2}$$

The unknowns of this system, $A(x,t)$, $Q(x,t)$ and $p(x,t)$, denote the section area of the vessel, the flow rate and the averaged pressure within this vessel. Mathematically, these quantities are defined by the following integrals:

$$A(x,t) = \int_{S(x,t)} d\sigma, \qquad Q(x,t) = \int_{S(x,t)} u_x \, d\sigma, \qquad p(x,t) = \frac{1}{A(x,t)} \int_{S(x,t)} P \, d\sigma,$$

where $S(x,t)$ is the section perpendicular to the $x$-axis, $u_x$ is the axial velocity component and $P$ is the 3-D pressure field. Please note that (1) is the 1-D version of the mass conservation equation, while (2) represents the 1-D version of the momentum equations. $\rho$ stands for the density of blood and $K_r$ is a resistance parameter containing the dynamic viscosity of blood [67]. The PDE system (1) and (2) is closed by means of an algebraic equation, which can be derived from the Young-Laplace equation [46]:

$$p(x,t) = p_{ext} + \beta \left( \sqrt{A(x,t)} - \sqrt{A_0} \right), \qquad \beta = \frac{\sqrt{\pi}\, h_0 E}{(1 - \nu^2) A_0}, \tag{3}$$

where $E$ is the Young modulus, $A_0$ stands for the section area at rest, $h_0$ is the vessel thickness, $\nu$ is the Poisson ratio and $p_{ext}$ denotes the constant external pressure. Due to the fact that biological tissue is practically incompressible, $\nu$ is chosen as follows: $\nu = 0.5$. Equation (3) assumes that the vessel wall is instantaneously in equilibrium with the forces acting on it. Effects like wall inertia and viscoelasticity could be incorporated using a differential pressure law [15, 44, 72]. However, neglecting the viscoelasticity maintains the strict hyperbolicity of the above PDE system [1]. Therefore, the interaction between the blood flow and the elastic vessel walls is accounted for by using (3). Assuming that $A_0$ and $\beta$ are constant, the PDE system (1)-(3) can be represented in a compact form:

$$\frac{\partial U}{\partial t} + \frac{\partial}{\partial x} F(U) = S(U). \tag{4}$$

For $U = (A, Q)^T$, the flux function $F$ and the source function $S$ are given by:

$$F(U) = \begin{pmatrix} Q \\ \dfrac{Q^2}{A} + \dfrac{\beta}{3 \rho} A^{3/2} \end{pmatrix}, \qquad S(U) = \begin{pmatrix} 0 \\ -K_r \dfrac{Q}{A} \end{pmatrix}.$$

This system may be written in a quasilinear form:

$$\frac{\partial U}{\partial t} + H(U) \frac{\partial U}{\partial x} = S(U),$$

where $H(U) = \partial F / \partial U$ is the Jacobian matrix of the flux function, having the eigenvalues $\lambda_1$ and $\lambda_2$. Denoting by $u = Q/A$ the fluid velocity and by $c = \sqrt{\beta / (2\rho)}\, A^{1/4}$ the characteristic wave velocity of the vessel, it can be shown that $\lambda_1 = u - c$ and $\lambda_2 = u + c$. Under physiological conditions, it can be observed that [22]:

$$|u| < c. \tag{5}$$

Therefore, it holds for the eigenvalues that $\lambda_1 < 0$ and $\lambda_2 > 0$, and the above PDE system is hyperbolic. Exploiting the fact that the Jacobian matrix $H(U)$ is diagonalisable, there is an invertible matrix $T$ such that it can be decomposed as follows: $H = T \Lambda T^{-1}$, where $\Lambda$ is a diagonal matrix that has the eigenvalues $\lambda_1$ and $\lambda_2$ on its diagonal. By this, the PDE system (4) can be written in its characteristic variables $W = (W_1, W_2)^T$:

$$\frac{\partial W}{\partial t} + \Lambda \frac{\partial W}{\partial x} = \tilde{S}(W), \tag{6}$$

where $\tilde{S}$ denotes the transformed source term.
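As a sanity check, the eigenvalue relations above can be evaluated numerically. The following sketch uses physiologically plausible but purely illustrative parameter values (they are not taken from the paper):

```python
import math

# Illustrative, physiologically plausible parameters (not paper-specific):
rho = 1060.0                      # blood density [kg/m^3]
E   = 4.0e5                       # Young modulus [Pa]
h0  = 1.5e-3                      # wall thickness [m]
nu  = 0.5                         # Poisson ratio (incompressible tissue)
A0  = math.pi * 0.01**2           # section area at rest, radius 1 cm [m^2]
beta = math.sqrt(math.pi) * h0 * E / ((1.0 - nu**2) * A0)

def eigenvalues(A, Q):
    """Eigenvalues lambda_{1,2} = u -/+ c of the flux Jacobian."""
    u = Q / A                                    # fluid velocity
    c = math.sqrt(beta / (2.0 * rho)) * A**0.25  # characteristic wave speed
    return u - c, u + c, u, c

lam1, lam2, u, c = eigenvalues(A0, 4.0e-4)       # peak aortic flow ~400 ml/s
print(lam1 < 0.0 < lam2, abs(u) < c)             # subcritical flow: hyperbolic regime
```

For these values the wave speed is roughly 6 m/s, which is in the physiological range for the aorta, so condition (5) holds comfortably.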

The characteristic variables $W_1$ and $W_2$ are related to the primary variables by the following equation:

$$\frac{\partial W_{1,2}}{\partial U} = l_{1,2}(U), \tag{7}$$

where $l_1$ and $l_2$ denote the left eigenvectors of $H(U)$. An integration of (7) yields that the characteristic variables $W_1$ and $W_2$ can be expressed by the primary variables $A$ and $Q$ as follows:

$$W_1 = \frac{Q}{A} - 4 \sqrt{\frac{\beta}{2\rho}}\, A^{1/4}, \tag{8}$$

$$W_2 = \frac{Q}{A} + 4 \sqrt{\frac{\beta}{2\rho}}\, A^{1/4}. \tag{9}$$

Based on condition (5) and the signs of $\lambda_1$ and $\lambda_2$, one can prove that $W_1$ is a backward and $W_2$ is a forward travelling wave [9], [13, Chap. 2]. Furthermore, it can be shown that $W_1$ and $W_2$ are moving on characteristic curves $x_1(t)$ and $x_2(t)$ defined by two ODEs:

$$\frac{d x_{1,2}}{d t} = \lambda_{1,2}. \tag{10}$$

These insights are crucial for a consistent coupling of the submodels at the different interfaces, since they reveal that at each inlet and outlet of a vessel exactly one coupling or boundary condition has to be imposed. The other condition is obtained from the outgoing characteristic variable. At the inlet $x = 0$, the variable $W_1$ is leaving the computational domain, whereas at $x = \ell$ the variable $W_2$ is the outgoing characteristic variable.

### 2.2 Numerical solution techniques

According to standard literature [54, Chap. 13], the main difficulties that arise in terms of numerical treatment of hyperbolic PDEs are to minimise dissipation and dispersion errors, in order to avoid an excessive loss of mass and a phase shift for the travelling waves. A standard remedy for these problems is to apply higher order discretisation methods in both space and time [65] such that the numerical solution is as accurate as possible. However, higher order discretisation methods tend to create oscillations in the vicinity of steep gradients or sharp corners, which can be removed by some additional postprocessing [24, 35, 36]. Moreover, time stepping methods of higher order require small time steps to be able to resolve the dynamics of a fast and convection dominated flow and to fulfill a CFL-condition, if they are explicit.

Considering all these features, we use in this work the numerical method of characteristics (NMC), which is explicit and of low approximation order (first order in space and time [1, Theorem 1]), leading to large dissipation and dispersion errors. This drawback can be circumvented by using a fine grid in space and sufficiently small time steps. Since we deal in this work with 1-D problems, a fine grid in space is affordable with respect to computational effort. On the other hand, a fine grid might force an explicit method to use very small time steps. However, for the NMC it can be proven that its time step size is not restricted by a condition of CFL type [1, Prop. 2]. This means that the NMC can use a fine grid in space and time step sizes that are small enough to capture the convection dominated blood flow and large enough to have an acceptable number of time steps.

Let us suppose now that the interval $[0, \ell]$ for a vessel is discretised by a grid having a mesh size $\Delta x$ and grid nodes $x_0, \ldots, x_N$. Here, $N$ is the index of the last grid node. In a time step $t_n \to t_{n+1}$, the NMC iterates over all the grid nodes. At each grid node $x_j$ there are two characteristic curves $x_1(t)$ and $x_2(t)$ for $W_1$ and $W_2$ (see Figure 2). Both curves are linearised in $(x_j, t_{n+1})$. In a next step, the resulting tangents are traced back to the previous time point $t_n$, where the corresponding intersection points are denoted by $x_j^1$ and $x_j^2$, respectively (see Figure 2).

This procedure is equivalent to solving for every grid node $x_j$ the final value problem [1, Equation (22)], which can be derived from (10). Setting $\Delta t = t_{n+1} - t_n$, we have by a first order approximation:

$$x_j^{1,2} = x_j - \lambda_{1,2}\, \Delta t.$$

Restricting the PDE system (6) to the characteristic curves $x_{1,2}(t)$, we have to solve the following ODEs in order to determine approximations for $W_1$ and $W_2$:

$$\frac{d W_{1,2}}{d t} = \tilde{S}_{1,2}. \tag{11}$$

An explicit first order discretisation of (11) yields the following extrapolation formula for $W_{1,2}$ at $(x_j, t_{n+1})$:

$$W_{1,2}(x_j, t_{n+1}) = W_{1,2}(x_j^{1,2}, t_n) + \Delta t\, \tilde{S}_{1,2}\big( W_{1,2}(x_j^{1,2}, t_n) \big). \tag{12}$$

At the old time step $t_n$, the values $W_{1,2}(x_j^{1,2}, t_n)$ can be interpolated using the precomputed values at the grid nodes $x_0, \ldots, x_N$. For large time steps, it may happen that $x_j^2 < x_0$ or $x_j^1 > x_N$ (see Figure 2). In these cases, the values can not be interpolated from the spatial values. Therefore, we require a temporal interpolation at a time point at which the linearised characteristic curves leave the computational domain. At the boundaries $x = 0$ and $x = \ell$, these time points can be computed by:

$$t_j^{*,2} = t_{n+1} - \frac{x_j - x_0}{\lambda_2}, \qquad t_j^{*,1} = t_{n+1} - \frac{x_j - x_N}{\lambda_1}.$$

After computing the ingoing characteristic variables $W_2(x_0, t_{n+1})$ and $W_1(x_N, t_{n+1})$ for the new time step by means of an external model (see Subsections 2.3-2.6), we use a linear interpolation to provide a surrogate value for the missing characteristic variable, e.g. at $x = 0$:

$$W_2(x_0, t_j^{*,2}) \approx W_2(x_0, t_n) + \frac{t_j^{*,2} - t_n}{\Delta t} \left( W_2(x_0, t_{n+1}) - W_2(x_0, t_n) \right). \tag{13}$$
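The interior update of the NMC can be sketched in a few lines. This is a deliberately simplified stand-in: the characteristic speeds are frozen scalars and the source terms are omitted, whereas the full method linearises $\lambda_{1,2}$ at every grid node and adds the source contribution (12):

```python
import numpy as np

def nmc_step(x, W1, W2, lam1, lam2, dt):
    """One interior step of the linearised method of characteristics:
    trace the tangents of the characteristic curves back to the old time
    level and interpolate the characteristic variables at the foot points.
    lam1 < 0 < lam2 are frozen here for brevity; sources are omitted."""
    W1_new = np.interp(x - lam1 * dt, x, W1)  # backward wave, foot point right of x_j
    W2_new = np.interp(x - lam2 * dt, x, W2)  # forward wave, foot point left of x_j
    return W1_new, W2_new

# a linear profile is advected exactly by linear interpolation
x = np.linspace(0.0, 1.0, 101)
W1, W2 = 2.0 - x, 1.0 + x
W1n, W2n = nmc_step(x, W1, W2, lam1=-4.0, lam2=6.0, dt=1e-3)
```

Note that `np.interp` clamps foot points outside the grid to the boundary values; in the full scheme these nodes are instead handled by the temporal boundary interpolation described above.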

### 2.3 Modelling of heart beats

At the inlet of the aorta, we couple the corresponding 1-D model with a lumped parameter model (0-D model) for the left ventricle of the heart. By means of this model and the outgoing characteristic variable $W_1$, the missing ingoing variable $W_2$ can be determined. In order to compute the pressure $p_{lv}$ in the left ventricle, we consider the following elastance model [21, 38]:

$$p_{lv}(t) = E(t) \left( V(t) - V_0 \right) - \gamma Q(t), \qquad \frac{dV}{dt} = -Q(t), \tag{14}$$

where $V$ is the volume of the left ventricle and $Q$ is the flow rate from the left ventricle into the aorta. $V_0$ is the dead volume of the left ventricle and $\gamma$ denotes the viscoelasticity coefficient of the cardiac wall. For simplicity, we assume that the viscoelastic pressure contribution depends linearly on $Q$. The time dependent elasticity parameter $E(t)$ is given by [38]:

$$E(t) = \begin{cases} E_{min} + \dfrac{E_{max} - E_{min}}{2} \left( 1 - \cos\left( \dfrac{\pi t}{T_c} \right) \right), & 0 \le t \le T_c, \\[2mm] E_{min} + \dfrac{E_{max} - E_{min}}{2} \left( 1 + \cos\left( \dfrac{\pi (t - T_c)}{T_r} \right) \right), & T_c < t \le T_c + T_r, \\[2mm] E_{min}, & T_c + T_r < t \le T. \end{cases}$$

$T$ represents the length of the heart cycle. $E_{max}$ and $E_{min}$ are the maximal and minimal elasticity parameters, while $T_c$ and $T_r$ refer to the durations of the ventricular contraction and relaxation. The flow rate through the aortic valve is governed by the Bernoulli law [69] incorporating the viscous resistance and inertia of blood:
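The piecewise-cosine activation of the elastance can be sketched as follows; the numerical values of $E_{min}$, $E_{max}$, $T_c$, $T_r$ and $T$ are illustrative stand-ins, not the calibrated values of the paper:

```python
import math

def elastance(t, Emin=0.08, Emax=2.5, Tc=0.3, Tr=0.15, T=0.8):
    """Piecewise-cosine elastance E(t) [mmHg/ml]; Tc and Tr are the durations
    of ventricular contraction and relaxation, T the heart period.
    All parameter values here are illustrative."""
    t = t % T                        # periodic heart cycle
    if t <= Tc:                      # contraction: E rises from Emin to Emax
        return Emin + 0.5 * (Emax - Emin) * (1.0 - math.cos(math.pi * t / Tc))
    if t <= Tc + Tr:                 # relaxation: E falls back to Emin
        return Emin + 0.5 * (Emax - Emin) * (1.0 + math.cos(math.pi * (t - Tc) / Tr))
    return Emin                      # diastole: constant baseline
```

By construction, $E$ equals $E_{min}$ at the start of the cycle, peaks at $E_{max}$ at the end of the contraction and stays at $E_{min}$ during diastole.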

$$\Delta p = R_v Q + B Q |Q| + L \frac{dQ}{dt}. \tag{15}$$

The parameters $R_v$, $B$ and $L$ quantify the viscous effects, flow separation and inertial effects, respectively. Finally, the pressure drop is computed as follows: $\Delta p = p_{lv} - p_a$, where $p_a$ is the pressure at the root of the aorta.

During the systolic phase of the heart cycle, it holds that $p_{lv} > p_a$, and we use (14)-(15) to compute $Q$. This value serves as a Dirichlet boundary value at $x = 0$ for the 1-D model of the aorta. Based on $Q$, Equations (8) and (9) and an approximation of the outgoing variable $W_1$ by (12), the ingoing variable $W_2$ can be determined.

Within the diastolic phase of the heart cycle, $p_a$ begins to exceed the pressure $p_{lv}$ in the left ventricle. As a result, the aortic valve closes and there is little to no flux between the left ventricle and the aorta; therefore we set $Q = 0$. Since we simulate only the left ventricle, without taking into account the filling process by the left atrium, we reactivate the model at the beginning of every heart cycle [21]. Thereby, at the end of each heart cycle, the volume of the left ventricle is set to its maximal value $V_{max}$.

### 2.4 Modelling of bifurcations

In order to decrease the flow velocity and to cover the whole body with blood vessels, the arterial system exhibits several levels of branchings. Therefore, it is very important to simulate blood flow through a bifurcation as exactly as possible. Bifurcations and their mathematical modelling have been the subject of many publications [10, 20, 29, 35, 66]. Coupling conditions for systems linked at a bifurcation can be derived from the principles of mass conservation and continuity of the total pressure. The total pressure for a vessel is defined by:

$$p_t = p + \frac{\rho}{2} \left( \frac{Q}{A} \right)^2.$$

Indexing the parent vessel at a bifurcation by $1$ and the two daughter vessels by $2$ and $3$, we obtain the following three coupling conditions:

$$Q_1 = Q_2 + Q_3, \qquad p_{t,1} = p_{t,2}, \qquad p_{t,1} = p_{t,3}. \tag{16}$$

The remaining equations are obtained from the characteristics entering the bifurcation (see Figure 3). According to Subsection 2.1, we have at each bifurcation three characteristics moving from the vessels into the bifurcation. The outgoing characteristic variables can be determined by tracing back the corresponding characteristic curves (see Subsection 2.2, Equation (12)). Using the characteristic variables and inserting (8) and (9) into (16), we obtain a non-linear system of equations for the three unknown ingoing characteristic variables $W_1^{(1)}$, $W_2^{(2)}$ and $W_2^{(3)}$.
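The coupling conditions (16) can be encoded as a residual that is handed to a nonlinear solver. A minimal sketch, with the section areas treated as given (in the full model they are additional unknowns closed by the three outgoing characteristics):

```python
import numpy as np

rho = 1060.0  # blood density [kg/m^3] (typical value, not paper-specific)

def total_pressure(p, Q, A):
    # p_t = p + (rho/2) * (Q/A)^2
    return p + 0.5 * rho * (Q / A) ** 2

def bifurcation_residual(Q, A, p_of_A):
    """Residual of the three coupling conditions (16): mass conservation and
    continuity of total pressure. Q = (Q1, Q2, Q3) flow rates,
    A = (A1, A2, A3) section areas; p_of_A is the tube law A -> p."""
    pt = [total_pressure(p_of_A(a), q, a) for q, a in zip(Q, A)]
    return np.array([Q[0] - Q[1] - Q[2], pt[0] - pt[1], pt[0] - pt[2]])

# a symmetric split with equal velocities satisfies all three conditions
r = bifurcation_residual((4.0e-4, 2.0e-4, 2.0e-4),
                         (3.0e-4, 1.5e-4, 1.5e-4),
                         lambda a: 10000.0)
```

Here the daughter vessels carry half the flow through half the area, so the velocities, and hence the total pressures, coincide and the residual vanishes.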

### 2.5 Modelling of the peripheral circulation

At the outlet of a terminal vessel, the reflections of the pulse waves at the subsequent vessels have to be incorporated to simulate a realistic pressure decay. For this purpose, we assign to each terminal vessel two resistance parameters: $R_1$, the resistance parameter of the terminal vessel itself, and $R_2$, the equivalent resistance parameter for all the vessels which are connected to the terminal vessel but not contained in the 1-D network. A third parameter $C$ quantifies the compliance of the omitted vessels and, therefore, it is a measure for the ability of these vessels to store a certain blood volume. These parameters form a triple $(R_1, R_2, C)$ that is referred to as a *three-element Windkessel* model in the literature [4], [23, Chap. 10]. In order to describe the dynamics of a Windkessel model, the following ODE has been derived using averaging techniques and an analogy from electrical circuit theory [3, 4, 41]:

$$\left( 1 + \frac{R_1}{R_2} \right) Q + C R_1 \frac{\partial Q}{\partial t} = \frac{p - p_v}{R_2} + C \frac{\partial p}{\partial t}. \tag{17}$$

$p$ and $Q$ denote the pressure and flow rate at the outlet of a terminal vessel, respectively. $p_v$ is an averaged pressure in the venous system. Combining (17) with (3), (8) and (9) yields an equation depending on the characteristic variables. By means of this equation and the given outgoing characteristic variable $W_2$, the missing ingoing characteristic variable $W_1$ can be computed by solving a non-linear equation for each time point of interest. Having $W_1$ and $W_2$ at $x = \ell$ for a time point at hand, the boundary values $A$ and $Q$ can be computed using (8) and (9). Further information on the derivation of lumped parameter models for the peripheral circulation can be found in [47].
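The dynamics of a three-element Windkessel of this form can be illustrated by a simple explicit-Euler integration; all parameter values below are illustrative, not those of the paper:

```python
# Explicit-Euler integration of a three-element Windkessel ODE of the
# form (17), solved for dp/dt. R1, R2 [Pa s/m^3], C [m^3/Pa], pressures [Pa]
# and the prescribed inflow Q(t) are illustrative values.
def windkessel(Q, p0, R1, R2, C, p_v, dt, n_steps):
    p, ps = p0, []
    Qs = [Q(k * dt) for k in range(n_steps + 1)]
    for k in range(n_steps):
        dQdt = (Qs[k + 1] - Qs[k]) / dt
        dpdt = ((1.0 + R1 / R2) * Qs[k] + C * R1 * dQdt - (p - p_v) / R2) / C
        p += dt * dpdt
        ps.append(p)
    return ps

# zero inflow: the pressure relaxes towards the venous pressure p_v
# with the characteristic time constant R2 * C (here 1 s)
ps = windkessel(lambda t: 0.0, p0=13000.0, R1=1e7, R2=1e8, C=1e-8,
                p_v=600.0, dt=1e-3, n_steps=2000)
```

With the inflow switched off, the stored volume drains through the peripheral resistance and the pressure decays exponentially towards $p_v$, which is exactly the windkessel-driven diastolic pressure decay mentioned above.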

### 2.6 Modelling the influence of a stenosis on blood flow

A blood vessel containing a stenosis is split into three parts: a proximal part $\Omega_p$, the stenosis itself and a distal part $\Omega_d$. In a next step, we assign to $\Omega_p$ and $\Omega_d$ the 1-D blood flow model from Subsection 2.1, while the part of the vessel that is covered by the stenosis is lumped into a node (see Figure 4).

The degree of stenosis is represented by a parameter $s \in [0, 1]$, where $s = 0$ corresponds to the healthy state and $s = 1$ stands for the case of a completely occluded blood vessel. For convenience, it is assumed that both vessel parts have the same section area as well as the same elasticity parameters. The lengths of $\Omega_p$ and $\Omega_d$ are denoted by $\ell_p$ and $\ell_d$. At the boundaries of $\Omega_p$ and $\Omega_d$ that are adjacent to the stenosis, two characteristic variables $W_2$ and $W_1$ are moving towards the stenosis (see Figure 4 and Subsection 2.1). Using these characteristic variables and an appropriate 0-D model, we have enough equations to compute the boundary conditions for the two 1-D models. Modelling the stenosis, we consider an ODE containing several parameters of physical relevance to couple both parts of the affected vessel. According to [64, 68, 79], the flow rate $Q$ through a stenosis and the pressure drop $\Delta p$ across a stenosis are related to each other by the following ODE:

$$\Delta p = \frac{K_v \mu}{A_0 D_0} Q + \frac{K_t \rho}{2 A_0^2} \left( \frac{A_0}{A_s} - 1 \right)^2 Q |Q| + \frac{K_u \rho L_s}{A_0} \frac{dQ}{dt}. \tag{18}$$

$A_0$ and $A_s$ refer to the section areas of the normal and stenotic segments, while $D_0$ and $D_s$ denote the corresponding diameters [38]. $L_s$ is the length of the stenosis and $\mu$ is the dynamic viscosity of blood. The remaining parameters $K_v$, $K_t$ and $K_u$ are empirical coefficients, which are given in [38].

Solving (18), we use the solution value $Q$ for each time point as a boundary condition at the outlet of $\Omega_p$ and the inlet of $\Omega_d$. Together with the extrapolated characteristic variables $W_2$ and $W_1$ as well as (8) and (9), we have a system of equations for the boundary conditions adjacent to the stenosis.

In the case of a full occlusion, i.e. $s = 1$, we multiply (18) by $A_s^2$. Considering the limit $A_s \to 0$, we obtain an algebraic equation that gives $Q = 0$. This is equivalent to a full reflection of the ingoing characteristics $W_2$ and $W_1$.
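A quasi-static evaluation of a stenosis pressure-drop law of this form can be sketched as follows. The coefficients `Kv`, `Kt`, `Ku` are generic placeholders (typical textbook magnitudes), not the calibrated values of [38]:

```python
rho, mu = 1060.0, 4.0e-3          # blood density and dynamic viscosity (typical values)

def stenosis_dp(Q, s, A0, D0, Ls, Kv=32.0, Kt=1.52, Ku=1.2, dQdt=0.0):
    """Pressure drop of a stenosis model of the form (18); s in [0, 1) is the
    degree of stenosis, A_s = (1 - s) * A0 the stenotic section area.
    Kv, Kt, Ku are illustrative stand-ins for the empirical coefficients."""
    As = (1.0 - s) * A0
    visc  = Kv * mu / (A0 * D0) * Q                               # viscous term
    turb  = Kt * rho / (2.0 * A0**2) * (A0 / As - 1.0)**2 * Q * abs(Q)  # flow separation
    inert = Ku * rho * Ls / A0 * dQdt                             # inertial term
    return visc + turb + inert

# tibial-artery-like geometry (illustrative): the drop grows sharply with s
dp_mild   = stenosis_dp(1.0e-5, 0.3, A0=3.0e-5, D0=6.0e-3, Ls=0.02)
dp_severe = stenosis_dp(1.0e-5, 0.8, A0=3.0e-5, D0=6.0e-3, Ls=0.02)
```

The quadratic dependence on $A_0 / A_s - 1$ is what makes the pressure drop, and hence the distal undersupply, grow so rapidly for high degrees of stenosis.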

## 3 Kernel-based surrogate models

In this section, we introduce kernel methods for surrogate modelling. First, we present the general ideas of kernel methods applied to the approximation of an arbitrary continuous function $f: \Omega \to \mathbb{R}^q$, where $\Omega \subset \mathbb{R}^d$ is a given input parameter domain and $d, q \in \mathbb{N}$ are the input and output dimensions, which can be potentially large (e.g., the present setting will lead to a large output dimension $q$). Then we concentrate on the present field of application and discuss how this general method can be used to produce cheap surrogates of the full model described in the previous sections.

Our goal is to construct a surrogate function $s: \Omega \to \mathbb{R}^q$ such that $s \approx f$ on $\Omega$, while the evaluation of $s$ for any input value is considerably cheaper than evaluating $f$ for the same input. This approximation is produced in a data-dependent fashion, i.e., a finite set of snapshots, obtained with the full simulation, is used to train the model to provide a good prediction of the exact result for any possible input in $\Omega$. The computationally demanding construction of the snapshots is performed only once, in an offline phase, while the online computation of the prediction for a new input parameter uses the cheap surrogate model.

### 3.1 Basic concepts of kernel methods

We introduce here only the basic tools needed for our analysis. For an extensive treatment of kernel-based approximation methods we refer to the monographs [17, 19, 75], while a detailed discussion of kernel-based sparse surrogate models can be found in [28]. Nevertheless, we recall that this technique has several advantages over other approximation methods, namely it allows for large input and output dimensions, it works with scattered data, it allows fast and sparse solutions through greedy methods and has a notable flexibility related to the choice of the particular kernel.

As recalled before, we aim at the reconstruction of a function $f: \Omega \to \mathbb{R}^q$, $\Omega \subset \mathbb{R}^d$. We assume to have a dataset given by $N$ pairwise distinct inputs $X_N := \{x_1, \ldots, x_N\} \subset \Omega$ (the data points) and corresponding function evaluations $\{f(x_1), \ldots, f(x_N)\} \subset \mathbb{R}^q$ (the data values).

The construction of $s$ makes use of a strictly positive definite kernel on $\Omega$. We recall that a function $k: \Omega \times \Omega \to \mathbb{R}$ is a strictly positive definite kernel on $\Omega$ if it is symmetric and, for any $N \in \mathbb{N}$ and any set of pairwise distinct points $\{x_1, \ldots, x_N\} \subset \Omega$, the kernel matrix $A := \left( k(x_i, x_j) \right)_{i,j=1}^N$ is positive definite. Many strictly positive definite kernels are known in explicit form; notable examples are, e.g., the Gaussian $k(x, y) := \exp(-\varepsilon^2 \|x - y\|^2)$ (where $\varepsilon > 0$ is a tunable parameter) and the Wendland kernels [74], which are radial and compactly supported kernels of piecewise polynomial type and of finite smoothness. Given a kernel $k$, the surrogate model is constructed via the ansatz

$$s(x) := \sum_{j=1}^{N} \alpha_j k(x, x_j), \tag{19}$$

with unknown coefficient vectors $\alpha_j \in \mathbb{R}^q$, $1 \le j \le N$. The coefficients are obtained by the vectorial interpolation conditions

$$s(x_i) = f(x_i), \quad 1 \le i \le N, \tag{20}$$

i.e., the surrogate model is required to predict the same value as the full model when computed on each of the data points contained in the dataset. Putting together the ansatz (19) and the interpolation conditions (20), one obtains the set of equations

$$\sum_{j=1}^{N} \alpha_j k(x_i, x_j) = f(x_i), \quad 1 \le i \le N,$$

which can be formulated as a linear system

$$A \alpha = b, \tag{21}$$

with the kernel matrix $A \in \mathbb{R}^{N \times N}$ and the matrices $\alpha, b \in \mathbb{R}^{N \times q}$ whose rows are the vectors $\alpha_j^T$ and $f(x_j)^T$, respectively.

Since the kernel is chosen to be strictly positive definite, the matrix $A$ is positive definite for any $N$ and any choice of pairwise distinct points; thus the above linear system possesses a unique solution $\alpha$. In other terms, the model (19) satisfying the interpolation conditions (20) is uniquely defined for arbitrary pairwise distinct data points and arbitrary data values.
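The construction (19)-(21) can be condensed into a few lines; in this tiny example the Gaussian shape parameter is chosen only to keep the kernel matrix well conditioned:

```python
import numpy as np

def gaussian_kernel(X, Y, eps=8.0):
    """Gaussian kernel matrix k(x, y) = exp(-eps^2 ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps**2 * d2)

def fit(X, F, eps=8.0):
    # solve the symmetric positive definite system (21): A alpha = b
    return np.linalg.solve(gaussian_kernel(X, X, eps), F)

def predict(Xnew, X, alpha, eps=8.0):
    # evaluate the ansatz (19): s(x) = sum_j alpha_j k(x, x_j)
    return gaussian_kernel(Xnew, X, eps) @ alpha

# 1-D input, 2-D output: interpolate f(x) = (sin x, cos x) at 10 points
X = np.linspace(0.0, 1.0, 10)[:, None]
F = np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 0])])
alpha = fit(X, F)
```

Note that the coefficient matrix `alpha` has one row per data point and one column per output component, i.e., all $q$ components share the same kernel matrix and the same set of centers, as discussed below for the vector valued case.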

This interpolation scheme can be generalised by introducing a regularisation term, which reduces possible oscillations in the surrogate at the price of a non-exact interpolation of the data. We remark that in principle this does not reduce the accuracy, since a parameter can be used to tune the influence of the regularisation term, and a zero value can be used when no regularisation is needed.

To explain this in detail, we first recall that, associated with a strictly positive definite kernel $k$, there is a uniquely defined Hilbert space $\mathcal{H}_k$ of functions from $\Omega$ to $\mathbb{R}$. For the sake of simplicity, we discuss the case $q = 1$, i.e., scalar valued functions, while the generalisation to vectorial functions will be sketched at the end of this section. The space $\mathcal{H}_k$ contains in particular all the functions of the form (19), and their squared norm can be computed as $\|s\|_{\mathcal{H}_k}^2 = \sum_{i,j=1}^{N} \alpha_i \alpha_j k(x_i, x_j) = \alpha^T A \alpha$. This means that a surrogate with small $\mathcal{H}_k$-norm is defined by coefficients of small magnitude.

With these tools, and again for $q = 1$ and a regularisation parameter $\lambda \ge 0$, a different surrogate can be defined as the solution of the optimisation problem

$$s := \arg\min_{g \in \mathcal{H}_k} \sum_{i=1}^{N} \left( g(x_i) - f(x_i) \right)^2 + \lambda \|g\|_{\mathcal{H}_k}^2, \tag{22}$$

which is a regularised version of the interpolation conditions (20), where exact interpolation is replaced by squared error minimisation, and the surrogate is requested to have a small norm. When a strictly positive definite kernel is used, the Representer Theorem [61] guarantees that the problem (22) has a unique solution, that this solution is of the form (19), and that the coefficients are defined as the solution of the linear system

$$(A + \lambda I) \alpha = b,$$

where $I \in \mathbb{R}^{N \times N}$ is the identity matrix, and where now $\alpha$ and $b$ are column vectors. It is now clear that pure interpolation can be obtained by letting $\lambda \to 0$, although a positive $\lambda$ improves the conditioning of the linear system, reducing possible oscillations in the solution.

These Hilbert spaces, the corresponding error analysis, and the formulation of the regularised interpolant can be extended to the case of vector valued functions simply by applying the same theory to each of the $q$ components. The fundamental point here, to have an effective method to be used in surrogate modelling, is to avoid having $q$ different surrogates, one for each component. This would result in $q$ independent sets of centers, hence many kernel evaluations to compute a point value $s(x)$. To reduce the overall number of centers, one can make the further assumption that a common set of centers is used for all components. From the point of view of the actual computation of the interpolant, this is precisely equivalent to the solution of the linear system (21), where in the regularised case also the term $\lambda I$ is included. We remark that more sophisticated approaches are possible to treat vector valued functions, but the approach presented in this work already yields satisfactory results.

### 3.2 Sparse approximation

So far, we have shown that kernel interpolation is well defined for arbitrary data, and that the corresponding interpolant has certain approximation properties. Although the method can deal with an arbitrary number of pairwise distinct inputs , the resulting surrogate model is required to be fast to evaluate. From formula (19), it is clear that the computational cost of evaluating at a new input parameter is essentially proportional to the number of terms in the sum. It is thus desirable to find a sparse expansion of the form (19), i.e., one where most of the coefficient vectors are zero. This sparsity structure can be obtained by selecting a small subset of the data points and computing the corresponding surrogate.

An optimal selection of these points is a combinatorial problem, which is computationally too expensive. Instead, we employ greedy methods (see [70] for a general treatment, and [14, 59] for the case of kernel approximation). Such methods select a sequence of data points starting with the empty set , and, at iteration , they update the set as by adding a suitably selected point . The selection of is done here with the -Vectorial Kernel Orthogonal Greedy Algorithm (-VKOGA, [77]), which works as follows. At each iteration, a partial surrogate can be constructed as

where , are the matrix and vector of (21) restricted to the points . To evaluate the quality of the partial surrogate, one can check the residual vector

for all . The -VKOGA takes precisely , i.e., it includes in the model the data point where the error is currently largest. By monitoring the size of the residual, one can stop the iteration with , while potentially , i.e., the new surrogate model is much cheaper to evaluate but retains the same accuracy as . More precisely, it has been proven that, under smoothness assumptions on the target function , the VKOGA algorithm with the - or similar selection strategies can attain algebraic or even spectral convergence rates [58, 77].
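A naive version of this selection loop, which re-solves the small system from scratch at every step rather than using the efficient incremental update of the actual VKOGA implementation, could look as follows (the Gaussian kernel and all names are our assumptions):

```python
import numpy as np

def f_greedy(x, y, n_max, tol=1e-8, lam=1e-12, eps=4.0):
    # Greedily pick the data point with the largest current residual,
    # then refit the partial surrogate on the selected centers only.
    k = lambda a, b: np.exp(-(eps * (a[:, None] - b[None, :])) ** 2)
    selected, residual = [], y.copy()
    alpha = np.zeros(0)
    while len(selected) < n_max and np.max(np.abs(residual)) > tol:
        selected.append(int(np.argmax(np.abs(residual))))
        xs = x[selected]
        A = k(xs, xs) + lam * np.eye(len(selected))
        alpha = np.linalg.solve(A, y[selected])
        residual = y - k(x, xs) @ alpha  # residual on all data points
    return np.array(selected), alpha
```

The loop terminates either when the surrogate uses the allowed maximum number of centers or as soon as the largest residual drops below the tolerance.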

Finally, we remark that the partial surrogates can be efficiently updated when adding a new point, i.e., can be obtained from by computing only one new coefficient in the expansion, while the already computed ones are not modified. We refer to [28] for a more in-depth explanation of this efficient computational procedure.

Observe that this greedy method results in the selection of a small subspace , and that is computed as the projection, and thus the best approximation, of into . The selection of the points via the -greedy strategy makes use of the values of at all the points . In this sense, the procedure is similar to a least-squares approximation, where a small set of points is used to generate an accordingly small approximation space. In the least-squares setting, however, it is not clear how these few points should be selected, whereas the present approach allows an incremental selection of points and an efficient update of the approximant, which can be stopped as soon as a tolerance criterion is met. Moreover, by solving equation (22) we are indeed constructing an approximant that minimises a least-squares data-fit term combined with a regularisation term.

### 3.3 Simulating blood flow in the vicinity of a peripheral stenosis by means of kernel methods

Coming back to the blood flow simulation, we now define in detail the target functions that will be approximated by the kernel method. These functions represent the maps from an input stenosis degree to the resulting pressure or flow-rate curve in different vessels, as computed by the full simulation of Section 2.2. The definition of is described in the following.

Since the numerical simulation is expected to have a transient phase before reaching an almost-periodic behaviour, the system is first simulated with the method of Section 2.2 in the time interval for the healthy state . The state reached at time is then used as the initial value for the subsequent simulations. At time the stenosis is activated with a degree , and the system is simulated until for various values of .

From this set of simulations for different values of , we keep the pressure and flow-rate curves of the last heart beat, i.e., of the time interval . This means that for each point of the spatial grid we have the time evolution of the pressure and the flow rate, each represented as a -dimensional vector, where depends on the actual time discretisation step.

In order to study the effect of the stenosis, we concentrate on the vessels number , which are the ones surrounding the stenosis (see Figure 1), and for each of them we select a reference space point located in the middle of the vessel. For each of these four reference points we thus have the -dimensional time discretisation of the pressure and flow-rate curves in the time interval . The maps from an input stenosis degree to these vectors define the functions (for the pressure) and (for the flow rate).

Figures 5 and 6 show examples of pressure and flow-rate curves, both for the healthy state and for . It can be observed that in the case the flow rate in Vessels and is markedly reduced, while the flow rate in Vessel is slightly increased. The pressures in Vessels , and are increased, which may lead to the formation of an aneurysm if the vessel walls are weakened in this region. Concerning the healthy state , one has to note that the pressure values are within a physiologically reasonable range, i.e., for the diastolic pressure and for the systolic pressure. Similar pressure curves have also been published in other works [4, 38].

With respect to the general setting introduced in the previous section, we have here , , . We can then train a kernel model for each of the eight functions (corresponding to pressure and flow-rate for each of the four vessels) using snapshot-based datasets. In particular, the data points are a set of pairwise distinct stenosis degrees, and is the set of snapshots obtained by the full model run on the input parameters .

These models can then be used to predict the output of the simulation for an input stenosis degree not present in the dataset. For example, for a given value , the evaluation is a -dimensional vector which approximates the pressure curve in the time interval in vessel with stenosis degree . We remark that this setting can easily be modified to approximate other aspects of the full simulation, although the present choice already yields interesting insights into the behaviour of the system.

Before we present the numerical tests, two remarks on the data are in order. First, the current time step produces samples per second, which means that we have , i.e., we are approximating functions . Second, the data obtained with are removed from the datasets and replaced with those obtained with , since the ODE model (18) is meaningful only for a strictly positive value of the stenosis degree: the quadratic term vanishes for , leading to a different model. This restriction is not relevant from an application point of view, since a value can be considered to effectively represent the healthy state.

In the training of each model, the parameters and are chosen within a range of possible values by -fold cross validation. This means that the training data are randomly permuted and divided into disjoint subsets of approximately the same size and, for each pair of candidate parameters, a model is trained on the union of subsets and tested on the remaining one. This operation is repeated times, changing in all possible ways the sets used for training. The average of the errors obtained in these tests is assigned as the error score of the parameter pair, and the best pair is chosen as the one yielding the smallest error score. The actual model is then trained on the whole training set using these parameters. In more detail, we use -fold cross validation and test logarithmically equally spaced values and logarithmically equally spaced values . The error measure used to rank the parameter pairs is the maximum absolute error.
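The described procedure is an ordinary k-fold cross-validation grid search. A generic sketch under our own assumptions (the fit/predict interface and all names are illustrative, not the authors' implementation):

```python
import numpy as np

def cv_score(x, y, fit, predict, params, k=5, seed=0):
    # Average maximum absolute validation error of one parameter pair
    # over k folds of a random permutation of the training data.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(np.arange(len(x)), fold)
        model = fit(x[train], y[train], *params)
        errs.append(np.max(np.abs(predict(x[fold], model) - y[fold])))
    return float(np.mean(errs))

def grid_search(x, y, fit, predict, lams, epss, k=5):
    # Return the (lam, eps) pair with the smallest cross-validation score.
    grid = [(l, e) for l in lams for e in epss]
    return min(grid, key=lambda p: cv_score(x, y, fit, predict, p, k))
```

The winning pair is then used to retrain the model on the full training set, exactly as described above.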

## 4 Numerical tests

This section is concerned with the training of the surrogate models and analyses them from different perspectives. We train different surrogates using training sets of increasing size in order to determine the number of full model runs needed to obtain an accurate surrogate. Each training set consists of equally spaced stenosis degrees in , with .

A further dataset of equally spaced stenosis degrees is used as a test set, i.e., the surrogates computed with the various training datasets are evaluated on this set of input stenosis degrees, and the results are compared with the full model computations. We remark that the values of this test set are not contained in the training sets (except for and ), so the results are reliable assessments of the models’ accuracy. For every value in the test set we consider the absolute and relative errors