Reconstructing Haemodynamics Quantities of Interest from Doppler Ultrasound Imaging

by Felipe Galarce, et al.

The present contribution deals with the estimation of haemodynamic Quantities of Interest (QoI) by exploiting Doppler ultrasound measurements. A fast method is proposed, based on the Parametrized Background Data-Weak (PBDW) method. Several methodological contributions are described: a sub-manifold partitioning is introduced to improve the reduced-order approximation, two different ways to estimate the pressure drop are compared, and an error estimate is derived. A test case on a realistic common carotid geometry is presented, showing that the proposed approach is promising in view of realistic applications.




1 Introduction

In biomedical engineering, most realistic applications have to deal with data assimilation. The problem to be solved consists in providing predictions on Quantities of Interest (QoI) given observations of the system, which are often partial and noisy. The present work is a contribution to this topic and focuses on the reconstruction of haemodynamic QoI by exploiting Doppler ultrasound imaging. The proposed methodology is, however, general and can easily be extended to a broad set of other applications. Doppler ultrasound, in its different modes, is one of the most widely used, clinically available technologies to monitor blood flow in the heart cavities and in several segments of the vascular tree. Its main advantages are that it is fast, non-invasive, and cheap. Its main drawback lies in its spatial resolution: the observed quantity (defined more precisely in Section 2.2) amounts to the noisy average, in some voxels, of one or two components of the velocity field.

In several applications related to the cardiovascular system, the QoI to be predicted are:

  1. The complete 3D velocity field and some quantities associated with it, such as, for instance, the maximal velocity ([1, 2]).

  2. Pressure and pressure drop ([3, 4, 5, 6, 7]): this is particularly relevant, since the pressure drop is one of the main indicators of the severity of stenoses and eventual arterial blockages. A direct measurement of the pressure (or even of a pressure drop) requires implanting a catheter, hence it is rather invasive.

  3. The vorticity ([8, 9, 10, 11]): this quantity is monitored especially in the heart cavities and around cardiac valves. An excessively large vorticity can induce, for instance, haemolysis.

  4. The wall shear stress ([12, 13, 14]): this is related to the mechanical stress that the blood exerts on the endothelial cells, of paramount importance in aneurysms and plaque formation.

The QoI listed above have to be reliably estimated in vivo, with the additional constraint of being estimated fast, ideally in real time. For this, two main approaches are available. The first consists in a purely data-driven strategy where learning techniques are used to build an approximation of the observable-to-QoI map given a sufficiently large dataset. The second consists in using an a priori description of the physics involved by means of a mathematical model, often given in the form of a Partial Differential Equation (PDE), and then solving an inverse problem. On the one hand, since we are dealing with the estimation of spatial fields, the purely data-driven learning approach will in general need an exceedingly large dataset to meet the accuracy constraints of the application. On the other hand, discretising the system of Partial Differential Equations and solving the inverse problem will in general result in a prohibitive computational cost, thus leading to unacceptable computing times. These facts motivate the use of mixed approaches combining the a priori knowledge coming from an available, potentially inexact physical model of the system, and the a posteriori knowledge coming from the data. One recent example in this direction is [15], where a physics-informed machine learning approach to estimate pressure in blood vessels from MRI was proposed. In this work, we use a different methodology based on reduced-order modelling of parametrized PDEs.

Our contribution is to propose a systematic methodology to estimate the above five QoI in close to real time using reduced modelling techniques, and to assess its feasibility in nontrivial numerical examples involving the carotid artery. However, due to our lack of real ultrasound images, our experiments present certain limitations: we have worked with synthetically generated images and have used an admittedly simple Gaussian model of the ultrasound noise (Doppler ultrasound images present a very involved space-time structure, which is not the main topic of our work; we refer to [16, 17, 18] for further details on this matter). The PDE model considered to describe the haemodynamics is the system of incompressible Navier-Stokes equations, which is generally acknowledged to be accurate for large vessels such as the carotid artery. We therefore assume that there is no model error and that the true system is governed by these equations. Note in addition that this assumption also stems from the fact that it is not possible to study the impact of model error without real measurements.

The structure of the paper is as follows. In Section 2 we describe the reconstruction method that we use. The method is very general and its main mathematical foundations have been established in previous works (see [19, 20, 21, 22, 23, 24, 25]). We make a presentation that alternates between a summary of the general mathematical theory and its particular application to our problem of interest. One relevant point to remark is that, so far, the methodology has mainly focused on reconstructing spatial fields from observable quantities. In our case, this concerns the reconstruction of the 3D velocity field. One relevant novelty with respect to previous contributions is that we show that it is possible to reconstruct unobserved quantities such as, in our problem, the pressure field or the pressure drop. This is possible by making a joint reconstruction of observable and unobservable fields, which are velocity and pressure in our case. We explain this idea in Section 3.1. The reconstruction of the wall shear stress and the vorticity is discussed in Sections 3.2.2 and 3.2.3, respectively. The numerical experiments on a carotid bifurcation are given in Section 4 for the case of noiseless images. We examine the effect of noise in Section 5.

2 Reconstruction methods

In this section we present the reconstruction methods that we use in this work. We alternate between abstract mathematical statements and their translation to our specific problem of interest in order to make the presentation as pedagogical as possible. To simplify the discussion, the presentation is done for noiseless measurements. At the end of the paper, we will discuss how to take noise into account.

2.1 State estimation and recovery algorithms: abstract setting

Our problem enters into the following setting, for which solid mathematical foundations have been developed in recent years. The relevant references will be cited throughout the discussion.

Let Ω be a domain of ℝ^d for a given dimension d ≥ 1. We work on a Hilbert space V defined over Ω which is relevant for the problem under consideration. As we will see in the following, we may change V depending on our needs. However, once the space is fixed, the subsequent developments have to remain consistent with this choice. The space V is endowed with an inner product ⟨·,·⟩ and induced norm ‖·‖.

Our goal is to recover an unknown function u ∈ V from m measurement observations

  yᵢ = ℓᵢ(u),  i = 1, …, m,

where the ℓᵢ are linearly independent linear forms over V. In many applications, each ℓᵢ models a sensor device which is used to collect the measurement data yᵢ. In our particular application, the observations come in the form of an image, and each ℓᵢ models the response of the system in a given pixel, as Figure 2 illustrates. We denote by ωᵢ the Riesz representers of the ℓᵢ. They are defined via the variational equation

  ⟨ωᵢ, v⟩ = ℓᵢ(v),  ∀ v ∈ V,  i = 1, …, m.

Since the ℓᵢ are linearly independent, so are the ωᵢ in V, and they span an m-dimensional space

  W_m := span{ω₁, …, ω_m} ⊂ V.

The observations are thus equivalent to knowing the orthogonal projection

  w = P_{W_m} u.

In this setting, the task of recovering u from the measurement observation w can be viewed as building a recovery algorithm

  A : W_m → V

such that A(P_{W_m} u) is a good approximation of u in the sense that ‖u − A(P_{W_m} u)‖ is small.
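In a discretised setting, the Riesz representers and the projection P_{W_m} reduce to plain linear algebra with the Gram matrix of the inner product. The following sketch (random matrices stand in for an actual discretised Hilbert space; all names are illustrative) verifies the two defining properties: M ωᵢ = cᵢ for a linear form ℓᵢ(v) = cᵢᵀv, and ℓᵢ(P_{W_m} u) = ℓᵢ(u).

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 50, 5                              # discrete space dimension, sensors

# Symmetric positive-definite Gram matrix encoding the inner product of V
A = rng.standard_normal((N, N))
M = A @ A.T + N * np.eye(N)

# Each row of L holds the coefficient vector c_i of one linear form ell_i
L = rng.standard_normal((m, N))

# Riesz representers: <omega_i, v> = omega_i^T M v = c_i^T v  =>  M omega_i = c_i
Omega = np.linalg.solve(M, L.T)           # columns span W_m

def project_Wm(u):
    """M-orthogonal projection of u onto W_m = span{omega_1, ..., omega_m}."""
    G = Omega.T @ M @ Omega               # Gram matrix of the representers
    coef = np.linalg.solve(G, Omega.T @ (M @ u))
    return Omega @ coef

u = rng.standard_normal(N)
Pu = project_Wm(u)
# Observing u is equivalent to observing its projection: ell_i(Pu) = ell_i(u)
print(np.allclose(L @ Pu, L @ u))         # True
```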

Recovering u from the measurements w is a very ill-posed problem since V is generally a space of very high or infinite dimension, so, in general, there are infinitely many v ∈ V such that P_{W_m} v = w. It is thus necessary to add some a priori information on u in order to recover the state up to a guaranteed accuracy. In the following, we work in the setting where u is a solution to some parameter-dependent PDE of the general form

  𝒫(u; θ) = 0,

where 𝒫 is a differential operator and θ is a vector of parameters that describes some physical property and lives in a given set Θ. Therefore, our prior on u is that it belongs to the set

  𝓜 := {u(θ) : θ ∈ Θ},  (2.3)

which is sometimes referred to as the solution manifold. The performance of a recovery mapping A is usually quantified in two ways:

  • If the sole prior information is that u belongs to the manifold 𝓜, the performance is usually measured by the worst-case reconstruction error

      E_wc(A, 𝓜) := sup_{u ∈ 𝓜} ‖u − A(P_{W_m} u)‖.  (2.4)

  • In some cases u is described by a probability distribution π on V supported on 𝓜. This distribution is itself induced by a probability distribution on Θ that is assumed to be known. When no information about the distribution is available, the uniform distribution is usually taken. In this Bayesian-type setting, the performance is usually measured in an average sense through the mean-square error

      E_ms(A, 𝓜) := ( 𝔼_π ‖u − A(P_{W_m} u)‖² )^{1/2},  (2.5)

    and it naturally follows that E_ms(A, 𝓜) ≤ E_wc(A, 𝓜).

2.2 Instantiation to the application of interest

In our case, Ω is a portion of a human carotid artery, as given in Figure 1. The boundary ∂Ω is the union of the inlet part Γ_in where the blood enters the domain, the outlets Γ_out,1 and Γ_out,2 where the blood exits the domain after a bifurcation, and the walls Γ_wall.

Figure 1: Domain used in the simulations. Note the small stenosis in the upper part of the bifurcation.

Our goal is to reconstruct, for all times t in an interval [0, T] (with T > 0), the full 3D velocity field and the pressure in the whole carotid Ω. Additionally, we also want to reconstruct related quantities like the wall shear stress, the vorticity and the pressure drop between the inlet and the outlets. The Doppler ultrasound device gives images at a certain time frequency. Each image contains partial information on the blood velocity on a subdomain of the carotid. Depending on the ultrasound technology, we are given the projection of the velocity either along the direction of the ultrasound probe (CFI mode), or onto a plane (VFI mode). Figure 2 illustrates both imaging techniques.

(a) Color flow image (CFI)
(b) Vector flow image (VFI)
Figure 2: Velocity image of the common carotid bifurcation.

Due to our lack of real images, our experiments are fully synthetic, and we work with an idealized version of CFI images to generate the measurements. For each time t, a given image is a local average in space of the velocity projected onto the direction in which the ultrasound probe is steered. More specifically, we consider a partition of the imaged region into m disjoint subdomains (voxels) {Ωᵢ}ᵢ₌₁^m. Then, from each CFI image we collect

  ℓᵢ(u) = (1/|Ωᵢ|) ∫_{Ωᵢ} u · b dx,  i = 1, …, m,

where b is a unitary vector giving the direction of the ultrasound beam. According to what has been exposed in Section 2.1, the ℓᵢ are linear functionals over a certain Hilbert space V which, in our case, is yet to be defined.
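On a synthetic velocity field sampled on a regular grid, this voxel-averaging measurement model is straightforward to emulate. In the sketch below, the grid size, voxel size and beam direction are illustrative choices, not the paper's actual settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 3D velocity field sampled on a regular grid: shape (nx, ny, nz, 3)
nx = ny = nz = 8
u = rng.standard_normal((nx, ny, nz, 3))

b = np.array([0.0, 0.0, 1.0])             # unit vector along the beam direction

def cfi_measurements(u, b, vox=4):
    """Average of the beam-projected velocity over disjoint cubic voxels."""
    proj = u @ b                          # pointwise b . u(x)
    nx, ny, nz = proj.shape
    blocks = proj.reshape(nx // vox, vox, ny // vox, vox, nz // vox, vox)
    return blocks.mean(axis=(1, 3, 5)).ravel()   # one scalar per voxel

y = cfi_measurements(u, b)
print(y.shape)                            # (8,): (8 / 4)**3 voxels
```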

Note that, in this setting, the pressure is an unobserved, invisible quantity, so it cannot be recovered directly from the measurements. In other words, we cannot build a reconstruction mapping from the velocity observations to the pressure and provide good recovery guarantees in terms of the recovery errors (2.4) or (2.5). However, we show in the following that it is actually possible to build a mapping that reconstructs the couple (velocity, pressure) jointly, with good recovery guarantees.

As seems natural, the joint reconstruction strategy necessarily requires some physical modelling to help reduce the ill-posedness, especially for the pressure. This important ingredient comes, in our case, from the incompressible Navier-Stokes equations posed on Ω. For a fluid with density ρ and dynamic viscosity μ, we search, for all t ∈ [0, T], for the couple of velocity and pressure (u, p) such that

  ρ ∂u/∂t + ρ (u · ∇)u − μ Δu + ∇p = 0  in Ω,
  ∇ · u = 0  in Ω.  (2.7)

The equations are closed by prescribing an initial condition and boundary conditions. We defer their detailed description to Section 4.1 for the sake of brevity in the current discussion. At this point, the essential information is that a weak formulation of this system makes the problem well posed when we seek the couple (u, p) in

  V = [H¹(Ω)]³ × L²(Ω),

which is the space in which we work and measure errors later on.

Our Navier-Stokes model involves a vector of parameters θ, including, e.g., the heart rate. An important detail is that we consider time as a parameter, so t will be one of the coordinates of θ. A solution manifold 𝓜 is then generated by the variations of the parameters θ ∈ Θ.

2.3 Optimal reconstruction algorithms

In general, one would like to use an algorithm A that is optimal in the sense of minimizing E_wc(A, 𝓜) or E_ms(A, 𝓜). However, as discussed in [26], optimal algorithms are difficult to compute, and even to characterize, for general sets 𝓜. In this respect, the following is known:

  • The problem of finding an algorithm that minimizes E_wc is called optimal recovery. It has been extensively studied for convex sets that are balls of smoothness classes. This is however not the case in the current setting, since the solution manifold 𝓜 introduced in (2.3) usually has a complex geometry. We know that as soon as 𝓜 is bounded there is a simple mathematical description of an optimal algorithm in terms of Chebyshev centers of certain sets. However, this algorithm cannot be easily computed due to the geometry and high dimensionality of the manifold.

  • The problem of finding an algorithm that minimizes E_ms falls into the scope of Bayesian or learning problems. As explained in [26], if the probability distribution π on the manifold 𝓜 is Gaussian, the optimal algorithm can easily be characterized and computed. However, the assumption of a Gaussian distribution is very strong and will not hold in general, so finding a computable optimal algorithm in the mean-square sense is also an open problem.

These theoretical difficulties motivate the search for suboptimal yet fast and accurate recovery algorithms. One vehicle for this has been to build linear recovery algorithms (see [20, 21, 26]). However, since in general it is not clear that linear algorithms are optimal, we use in this work piecewise linear reconstruction algorithms as a trade-off between optimality, computational feasibility and speed.

2.4 Piecewise linear reconstruction algorithms using reduced modeling

Reduced models are a family of methods that each produce a hierarchy of nested spaces (V_n)_{n ≥ 1}, with dim V_n = n, that approximate the solution manifold well in the sense that

  ε_wc(n) := sup_{u ∈ 𝓜} dist(u, V_n)  or  ε_ms(n) := ( 𝔼_π dist(u, V_n)² )^{1/2}

decays rapidly as n grows for certain classes of PDEs. The term dist(u, V_n) denotes the distance from u to the space V_n, which is given by its projection error onto V_n,

  dist(u, V_n) = ‖u − P_{V_n} u‖ = min_{v ∈ V_n} ‖u − v‖.

Several methods exist to build spaces V_n such that ε_wc(n) or ε_ms(n) decay fast. Some families are the reduced basis method (see [27]), the (Generalized) Empirical Interpolation Method (see [28, 19, 22]), Proper Orthogonal Decomposition (POD, [29, 30]) and low-rank methods (see [31, 32]).
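As an illustration of how such a hierarchy can be computed in practice, here is a minimal POD sketch: the SVD of a matrix of snapshots yields nested spaces V_n whose projection error is controlled by the discarded singular values. For simplicity it uses the Euclidean inner product and synthetic low-rank data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Snapshot matrix: K solutions of the parametric problem, each of dimension N.
# The synthetic data is essentially rank 5, mimicking a low-dimensional manifold.
N, K = 200, 40
S = rng.standard_normal((N, 5)) @ rng.standard_normal((5, K))
S += 1e-8 * rng.standard_normal((N, K))

# POD: the leading left singular vectors give a hierarchy of nested spaces V_n
U, sv, _ = np.linalg.svd(S, full_matrices=False)

def proj_error(n):
    """Relative projection error of the snapshots onto V_n = span(U[:, :n])."""
    Vn = U[:, :n]
    R = S - Vn @ (Vn.T @ S)
    return np.linalg.norm(R) / np.linalg.norm(S)

print(proj_error(5) < 1e-6)               # True: rank-5 data is captured at n = 5
```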

Linear reconstruction algorithms that make use of reduced spaces are the Generalized Empirical Interpolation Method (GEIM) introduced in [19] and further analyzed in [20, 22] and the Parametrized Background Data-Weak Approach (PBDW) introduced in [21] and further analyzed in [23]. Some extensions have been proposed to address measurement noise (see, e.g., [24, 25]) and other recovery algorithms involving reduced modelling have also been recently proposed (see [33]).

2.4.1 PBDW, a linear recovery algorithm

In this work, we take PBDW as a starting point for our recovery algorithm. Given a measurement space W_m and a reduced model V_n with n ≤ m, the PBDW algorithm A_n : W_m → V gives, for any w ∈ W_m, a solution of

  A_n(w) ∈ argmin_{v ∈ V : P_{W_m} v = w} dist(v, V_n).  (2.9)

This optimization problem has a unique minimizer

  u*(w) := A_n(w)

as soon as n ≤ m and β(V_n, W_m) > 0, which is an assumption to which we adhere in the following. For any pair of closed subspaces (E, F) of V, β(E, F) is defined as

  β(E, F) := inf_{e ∈ E} sup_{f ∈ F} ⟨e, f⟩ / (‖e‖ ‖f‖) = inf_{e ∈ E} ‖P_F e‖ / ‖e‖.  (2.10)

As proven in Appendix A, an explicit expression of A_n is

  A_n(w) = v*(w) + (w − P_{W_m} v*(w)),  with  v*(w) := argmin_{v ∈ V_n} ‖w − P_{W_m} v‖,  (2.11)

where, for any pair of closed subspaces (E, F) of V, P_{E, F} denotes the orthogonal projection into E restricted to F. The invertibility of the finite-dimensional operator involved in the computation of v* is guaranteed under the above conditions.

Formula (2.11) shows that A_n is a bounded linear map from W_m to V. For any u ∈ 𝓜, the reconstruction error is bounded by

  ‖u − A_n(P_{W_m} u)‖ ≤ β⁻¹(V_n, W_m) dist(u, V_n ⊕ (W_m ∩ V_n^⊥)) ≤ β⁻¹(V_n, W_m) dist(u, V_n).  (2.13)

Depending on whether V_n is built to address the worst-case or mean-square error, the reconstruction performance over the whole manifold is bounded by

  E_wc(A_n, 𝓜) ≤ β⁻¹(V_n, W_m) ε_wc(n),  (2.14)
  E_ms(A_n, 𝓜) ≤ β⁻¹(V_n, W_m) ε_ms(n).  (2.15)

Note that β(V_n, W_m) ∈ [0, 1] can be understood as a stability constant. It can also be interpreted as the cosine of the angle between V_n and W_m. The error bounds involve the distance of u to the space V_n ⊕ (W_m ∩ V_n^⊥), which provides slightly more accuracy than the reduced model alone. This term is the reason why it is sometimes said that the method can correct model error to some extent. In the following, to ease the reading, we will write errors only with the second type of bounds, which do not involve the correction part on W_m ∩ V_n^⊥.
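With the Euclidean inner product and orthonormal bases for V_n and W_m, the reconstruction u* = v* + (w − P_{W_m} v*) amounts to a small least-squares solve. The sketch below uses randomly generated spaces and a synthetic ground truth (all names illustrative) and checks that the reconstruction is consistent with the observations:

```python
import numpy as np

rng = np.random.default_rng(3)
N, n, m = 100, 4, 12                      # ambient, reduced, measurement dims

Vn = np.linalg.qr(rng.standard_normal((N, n)))[0]   # reduced basis (orthonormal)
Wm = np.linalg.qr(rng.standard_normal((N, m)))[0]   # measurement space basis

def pbdw(w):
    """Linear PBDW: from w = P_W u, return v* + (w - P_W v*)."""
    # v* minimises ||P_W v - w|| over v in V_n (least squares in W-coordinates)
    C = Wm.T @ Vn                                    # m x n cross-Gramian
    c, *_ = np.linalg.lstsq(C, Wm.T @ w, rcond=None)
    v = Vn @ c
    return v + Wm @ (Wm.T @ (w - v))                 # measurement-space update

# Ground truth: an element close to V_n, plus a small perturbation
u = Vn @ rng.standard_normal(n) + 1e-2 * rng.standard_normal(N)
w = Wm @ (Wm.T @ u)                                  # noiseless observation P_W u
u_star = pbdw(w)
print(np.linalg.norm(u - u_star) / np.linalg.norm(u))   # relative error
```

Note that, by construction, the reconstruction reproduces the observations exactly: P_{W_m} u* = w.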

An important observation is that, for a fixed measurement space W_m (which is the setting in our numerical tests), the error functions

  n ↦ E_wc(A_n, 𝓜)  and  n ↦ E_ms(A_n, 𝓜)

reach a minimal value for a certain dimension n* as the dimension n varies from 1 to m. This behavior is due to the trade-off between:

  • the improvement of the approximation properties of V_n as n grows (ε_wc(n) and ε_ms(n) decrease as n grows);

  • the degradation of the stability of the algorithm, given here by the decrease of β(V_n, W_m) to 0 as n approaches m. When n > m, β(V_n, W_m) = 0.

As a result, the best reconstruction performance with PBDW is given by

  n* = argmin_{1 ≤ n ≤ m} E_wc(A_n, 𝓜)  (respectively E_ms(A_n, 𝓜)).
We finish this section with two remarks:

  1. As already brought up, we do not consider model error in this work. However, the PBDW algorithm can correct the model misfit to some extent (through the correction term in W_m ∩ V_n^⊥, see Appendix A).

  2. Note that in the present setting the measurement space W_m is fixed, and we will adhere to this assumption in the rest of the paper. In our application, this is reasonable since the nature and location of the sensors are fixed by the technology of the device and by the position of the probe which the medical doctor considers best. A different, yet related, problem would be to optimize the choice of the measurement space W_m. Two works on this topic involving greedy algorithms are [22, 34]. They were carried out in the same reduced-modelling setting as the one presented in this work. More generally, the problem of optimal sensor placement has been extensively studied since the 1970s in control and systems theory (see, e.g., [35, 36, 37]). One common feature with [22, 34] is that the criterion to be minimized by the optimal location is nonconvex, which leads to potential difficulties when the number of sensors is large.

2.4.2 Piecewise linear reconstruction algorithm

In our application, we can build an improved reconstruction algorithm by exploiting the fact that at the time of reconstruction we are not only given a Doppler image, but we also know in real time the value of some parameters, like time and the heart rate of the patient. In other words, the vector of parameters θ can be decomposed into a vector of observed parameters θ_obs ranging in Θ_obs and a vector of unobserved parameters θ_un ranging in Θ_un, with Θ = Θ_obs × Θ_un.

We can exploit this extra knowledge by building a partition of the parameter domain as follows: we first find an appropriate partition of the observed parameters into Q disjoint subdomains,

  Θ_obs = ∪_{q=1}^Q Θ_obs,q.

The strategy followed to find such a partition in our case is explained in Section 4.2. This yields a partition of the whole parameter domain,

  Θ = ∪_{q=1}^Q Θ_q,  Θ_q := Θ_obs,q × Θ_un.

This partition induces a decomposition of the manifold into Q disjoint subsets such that

  𝓜 = ∪_{q=1}^Q 𝓜_q,  𝓜_q := {u(θ) : θ ∈ Θ_q}.
With this type of partition, we know in which subset 𝓜_q we are at the time of reconstruction. We can thus build reduced models (V_n^{(q)})_{n ≥ 1} for each subset 𝓜_q and then reconstruct with the linear or affine PBDW. Proceeding similarly as in the previous section, the reconstruction performance on subset 𝓜_q is, for a fixed n,

  E_wc(A_n^{(q)}, 𝓜_q) ≤ β⁻¹(V_n^{(q)}, W_m) ε_wc^{(q)}(n).

The advantage of this piecewise approach is that the approximation errors ε_wc^{(q)}(n) or ε_ms^{(q)}(n) in each subdomain may decay faster than in the whole manifold (sometimes even significantly faster if each partition deals with very different physical regimes).

The best reconstruction performance for 𝓜_q is thus

  n_q* = argmin_{1 ≤ n ≤ m} E_wc(A_n^{(q)}, 𝓜_q).

It follows that the performance in 𝓜 is

  E_wc(A, 𝓜) = max_{1 ≤ q ≤ Q} E_wc(A_{n_q*}^{(q)}, 𝓜_q),

where A denotes the resulting piecewise linear reconstruction algorithm.
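At reconstruction time, selecting the sub-manifold 𝓜_q then reduces to locating the observed parameter inside the partition. A minimal sketch, with time as the observed parameter and illustrative window edges:

```python
import numpy as np

# Partition of the observed-parameter range (here: time within one heartbeat)
# into Q = 4 disjoint windows; each window gets its own reduced model.
edges = np.linspace(0.0, 0.8, 5)          # illustrative heartbeat of 0.8 s

def subset_index(t):
    """Index q of the sub-manifold M_q whose window contains time t."""
    q = int(np.searchsorted(edges, t, side="right")) - 1
    return min(max(q, 0), len(edges) - 2)  # clamp to the valid range

print(subset_index(0.05), subset_index(0.79))   # 0 3
```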

3 Reconstruction of non-observable Quantities of Interest in fluid flows

Our task is to use Doppler velocity measurements taken from a fluid flow and to reconstruct:

  • Partially observable quantities: the full 3D velocity field in Ω and related quantities such as the wall shear stress and the vorticity.

  • Non-observable quantities: the full 3D pressure field in Ω and the pressure drop.

Our strategy to address this task consists essentially in two steps:

  • We apply the piecewise reconstruction algorithm of Section 2.4.2, where the key is to perform a joint reconstruction of the 3D velocity and pressure.

  • We then derive the related quantities of interest (wall shear stress and vorticity) as a simple by-product.

3.1 Joint reconstruction of velocity and pressure

For the reasons explained in Section 2.2, the couple of velocity and pressure (u, p) belongs to the Cartesian product

  V = [H¹(Ω)]³ × L²(Ω).

It is assumed to be the solution to the parameter-dependent Navier-Stokes equations (2.7) for some parameter θ ∈ Θ. Some elements of θ are observed but others are not, so we cannot directly solve (2.7) with the parameters set to θ. We therefore use the piecewise linear reconstruction of Section 2.4.2. For this, it is necessary to endow V with the external direct sum and product structure to build a Hilbert space. That is, for any two elements (u₁, p₁) and (u₂, p₂) of V and any scalar λ,

  (u₁, p₁) + (u₂, p₂) = (u₁ + u₂, p₁ + p₂),  λ (u₁, p₁) = (λ u₁, λ p₁).

The inner product is defined as the sum of component-wise inner products,

  ⟨(u₁, p₁), (u₂, p₂)⟩_V = ⟨u₁, u₂⟩_{[H¹(Ω)]³} + ⟨p₁, p₂⟩_{L²(Ω)},

and it induces a norm on V,

  ‖(u, p)‖_V = ( ‖u‖²_{[H¹(Ω)]³} + ‖p‖²_{L²(Ω)} )^{1/2}.

When we are given partial information on (u, p) from Doppler velocity measures, we are given the projection

  w = P_{W_m} (u, p),

where W_m is the observation space

  W_m = span{ω₁, …, ω_m} ⊂ V,

and the ωᵢ are the Riesz representers in V of the linear forms ℓᵢ associated with each voxel Ωᵢ,

  ⟨ωᵢ, (v, q)⟩_V = ℓᵢ(v),  ∀ (v, q) ∈ V.
We are now in a position to apply directly the reconstruction algorithms from Section 2 to the joint reconstruction of (u, p), with the current particular choice of Hilbert space V and observation space W_m. We briefly instantiate here the main steps. Let us assume that we have a reduced model V_n ⊂ V of dimension n that approximates 𝓜 with accuracy

  ε_wc(n) = sup_{(u, p) ∈ 𝓜} dist((u, p), V_n),

and which is such that β(V_n, W_m) > 0. Then, we can reconstruct with the linear PBDW method (see equation (2.9)) which, in the present case, reads

  (u*, p*)(w) = A_n(w).

The worst-case and average reconstruction errors are bounded as in estimates (2.14) and (2.15), that is,

  E_wc(A_n, 𝓜) ≤ β⁻¹(V_n, W_m) ε_wc(n),
  E_ms(A_n, 𝓜) ≤ β⁻¹(V_n, W_m) ε_ms(n).
If we build a partition of the manifold based on the observed parameters, we can reconstruct with the piecewise linear algorithm of Section 2.4.2.
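Discretely, this product-space inner product is just the sum of the velocity and pressure inner products, each realised by its own symmetric positive-definite matrix. A small sketch, with random SPD matrices standing in for the discrete H¹ and L² Gram matrices:

```python
import numpy as np

rng = np.random.default_rng(6)
Nu, Np = 30, 10                            # velocity and pressure dimensions

# Random SPD matrices standing in for the discrete H1 and L2 inner products
Au = rng.standard_normal((Nu, Nu)); Mu = Au @ Au.T + Nu * np.eye(Nu)
Ap = rng.standard_normal((Np, Np)); Mp = Ap @ Ap.T + Np * np.eye(Np)

def inner(up1, up2):
    """<(u1, p1), (u2, p2)>_V = <u1, u2>_Mu + <p1, p2>_Mp."""
    (u1, p1), (u2, p2) = up1, up2
    return u1 @ Mu @ u2 + p1 @ Mp @ p2

def norm(up):
    return float(np.sqrt(inner(up, up)))

u, p = rng.standard_normal(Nu), rng.standard_normal(Np)
q = (rng.standard_normal(Nu), rng.standard_normal(Np))
print(abs(inner((u, p), q) - inner(q, (u, p))) < 1e-9)   # symmetric: True
```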

3.2 Reconstruction of related quantities

3.2.1 Pressure drop

The pressure drop is a quantity that has traditionally been of high interest to the medical community, since it serves to assess, for instance, the severity of stenoses in large vessels due to the accumulation of fat in the walls. Decomposing the domain boundary of a generic arterial bifurcation into the inlet, the wall and the outlet parts,

  ∂Ω = Γ_in ∪ Γ_wall ∪ (∪ᵢ Γ_out,i),

the quantities to retrieve are

  δpᵢ(t) := (1/|Γ_in|) ∫_{Γ_in} p dΓ − (1/|Γ_out,i|) ∫_{Γ_out,i} p dΓ

for the outlet labels i.

Method 1 – From the joint reconstruction (u*, p*):

If we reconstruct (u*, p*), we can easily approximate the pressure drop by

  δpᵢ*(t) := (1/|Γ_in|) ∫_{Γ_in} p* dΓ − (1/|Γ_out,i|) ∫_{Γ_out,i} p* dΓ

for each outlet i.

As we will see in our numerical results, the pressure drop is approximated with very high accuracy by this approach. We next provide a theoretical justification.

For this, we remark that we can view the pressure drop as a bounded linear mapping δpᵢ from V to ℝ, defined as

  δpᵢ : (v, q) ↦ (1/|Γ_in|) ∫_{Γ_in} q dΓ − (1/|Γ_out,i|) ∫_{Γ_out,i} q dΓ.

Thus the reconstruction error is given by |δpᵢ((u, p)) − δpᵢ((u*, p*))|. Exploiting the linearity of δpᵢ, one can derive the simple bound

  |δpᵢ((u, p)) − δpᵢ((u*, p*))| ≤ ‖δpᵢ‖ ‖(u, p) − (u*, p*)‖_V ≤ ‖δpᵢ‖ β⁻¹(V_n, W_m) ε_wc(n),

where we have used (2.14) between the second and the third term, and where ‖δpᵢ‖ denotes the norm of δpᵢ as a linear form on V.
As we will see below, this estimate is too coarse to account for the high reconstruction accuracy which is observed, because the pressure-drop values are close to zero while the product ‖δpᵢ‖ β⁻¹(V_n, W_m) ε_wc(n) is only moderately small. It is necessary to find a sharper estimate that involves finer constants in front of the approximation error to account for the good reconstruction results. This can be done by exploiting the construction of the reconstruction (u*, p*), which yields a new estimate with a sharper constant. As we illustrate in our numerical tests, the value of this constant is moderate and significantly smaller than the factor of the previous estimate. As a result, the product is small, and we can guarantee a reconstruction of the pressure drop with good accuracy.
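Once a pressure reconstruction p* is available at the mesh nodes, Method 1 is essentially one line: average p* over the inlet and over each outlet and subtract. The node index sets and values below are purely illustrative (on a real mesh one would use a quadrature-weighted surface average):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical reconstructed pressure at the mesh nodes, plus the index sets
# of the nodes lying on the inlet and on each of the two outlets
p_star = 80.0 + rng.standard_normal(500)
inlet_nodes = np.arange(0, 40)
outlet_nodes = [np.arange(40, 70), np.arange(70, 95)]

def pressure_drops(p, inlet, outlets):
    """Mean-pressure drop between the inlet and each outlet."""
    p_in = p[inlet].mean()
    return np.array([p_in - p[out].mean() for out in outlets])

dp = pressure_drops(p_star, inlet_nodes, outlet_nodes)
print(dp.shape)                           # (2,): one drop per outlet
```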

Method 2 – From the reconstruction of the velocity and the virtual works principle:

As an alternative to the joint reconstruction strategy, we can use a method introduced in [38] called the Integral Momentum Relative Pressure estimator. As a starting point, it requires a reconstruction of the velocity which, in our work, is given by the PBDW method applied only to the velocity field, without the pressure. We then estimate the pressure drop using the Navier-Stokes equations as follows. Assuming that the reconstructed velocity satisfies the momentum conservation in (2.7) perfectly, we test it against a virtual, divergence-free velocity field,


Using Green’s identities, we can write


The current strategy requires assuming that the pressure field is constant over the inlet and the outlets. Notice that, since the test field is divergence free, the following identity holds,


where the first term involves the average pressure over the inlet and the second the average pressure over the i-th outlet. For each outlet, we consider a test function satisfying appropriate flux conditions on the boundary and the divergence-free constraint in Ω. Mass conservation for incompressible regimes implies


for each outlet. As a result, it is possible to recover the mean pressure drop for each outlet by solving a linear system of equations whose matrix has entries




A convenient choice for the test functions is to ask each one of them to solve the problem: find the test function and an auxiliary function such that

  (3.19)

holds in Ω, together with appropriate boundary conditions; this choice decouples the problem by making the matrix in (3.17) diagonal.

In order to ensure good stability when performing the time integration of (3.13), we use a Crank-Nicolson scheme.

We may note that this method requires knowledge of the flow viscosity and density, and it assumes a constant pressure over the inlets and outlets. This is in contrast to the joint reconstruction approach, which does not need these assumptions.

3.2.2 Wall shear stress

The wall shear stress (WSS) has been proposed as an index of damage in vascular endothelial cells and of atherosclerosis, a disease in which fatty plaque builds up close to the vessel walls. The works [39], [40] or [41] can serve as references.

The WSS is a mapping defined on the vessel wall that returns the tangential component of the force that the blood applies on the wall


In order to quantify the error in the WSS estimation, we need to harmonically extend the difference between the ground truth and the reconstruction. This allows us to access the trace of an auxiliary vector field that satisfies


The Lax-Milgram-Lions conditions are violated if we do not add the equilibrium constraint included in the Neumann boundary condition, together with the mean of the field on Ω (see, for instance, the Fredholm alternative in [42], Chapter 6). A coherent way to evaluate the reconstruction quality of the WSS at a given time is to compute:


3.2.3 Vorticity

The vorticity is defined as ω = ∇ × u. It provides clinical information about the shear-layer thickness, which has been correlated with thrombus formation and haemolysis [43]. In general, vorticity is connected to the assessment of cardiovascular function, and there have been efforts to reconstruct it from magnetic resonance images (see, for instance, [44]).

The relative error in time for the vorticity reconstruction is given by


Since ∇ · u = 0 (incompressible flow) and since we have the identity (see, e.g., [45])

  ‖∇ × v‖²_{L²(Ω)} + ‖∇ · v‖²_{L²(Ω)} = |v|²_{[H¹(Ω)]³}  (for suitable boundary conditions),

it follows by (2.13) that the vorticity reconstruction error is controlled by the [H¹(Ω)]³ error of the velocity reconstruction, and hence enjoys the same type of error bound.
4 Noise-free numerical test in a carotid geometry

In what follows, the numerical experiments shown were computed using two software packages: one under continuous development and maintained by the COMMEDIA team at INRIA, the Finite Elements for Life Sciences and Engineering library (FeLiScE), and another one implemented especially for this work, the Multi-physics and Data assimilation for Medical Applications library (MDMA). In addition, tetrahedral meshing and mesh optimization are done using Mmg (see [46]).

4.1 Sampling with the Navier-Stokes equations

We consider the incompressible Navier-Stokes equations (NSE) as given in (2.7). These equations are closed by adding a zero initial condition and the following boundary conditions:

  • No-slip for the vessel wall, that is, on .

  • The inlet boundary Γ_in lies in a plane and we apply a Dirichlet velocity condition on it. Its component b is the product of a temporal and a spatial factor, where:

    • The temporal factor includes a scaling factor and is built by interpolating experimental flow data in the common carotid area taken from [47]. Its behavior is given in Figure 3.

    • The spatial factor is a 2D logit-normal distribution, in which a parameter controls the axial symmetry of the inlet flow.

    Figure 3: The function for the inlet boundary condition.
  • For the outlet boundaries Γ_out,1 and Γ_out,2, we use a Windkessel model (see [48] for a survey on 0-D models in haemodynamics), which gives the average pressure over each Γ_out,i,

    where the so-called distal pressure is the solution to the ordinary differential equation: