1 Introduction
This review paper focuses on the Ensemble Kalman Filter applied to general inverse problems; in this context, parts of the literature also use the term Ensemble Kalman Inversion (EKI). The method belongs to the class of particle methods and is an iterative method for solving inverse problems. It was originally introduced in [29] for unconstrained minimization problems and has recently been extended to the presence of different types of constraints [11, 28, 2]. The original EnKF was introduced more than ten years ago [6, 14, 19, 22]
as a discrete time method to estimate state variables and parameters of stochastic dynamical systems. The EKI method has become popular recently because it does not require derivatives of the underlying model for optimization, while at the same time enjoying provable convergence results. Applications have appeared, in particular, in oceanography
[21], reservoir modeling [1], weather forecasting [31], milling processes [42], process control [43], geophysical applications [32, 37, 44], physics [35], and also machine learning
[25, 33, 45]. The literature on Kalman filtering is very rich and we cannot review it in detail here; we refer to the cited references for further details. Our focus is on the reformulation of the EnKF for solving inverse problems as outlined below, in Section 1.2.
1.1 Formulation of the ensemble Kalman inversion
In order to present the mathematical formulation of the EKI method, we denote by $G \colon \mathbb{R}^d \to \mathbb{R}^K$ the given (nonlinear) forward operator between the finite dimensional Hilbert spaces $\mathbb{R}^d$, $d \in \mathbb{N}$, and $\mathbb{R}^K$, $K \in \mathbb{N}$. Consider the inverse problem or parameter identification problem of the type
(1) $y = G(u) + \eta.$
Throughout the paper $u \in \mathbb{R}^d$ is referred to as the (unknown) control, whereas $y \in \mathbb{R}^K$ represents the data measurements (that are perturbed by noise). The perturbation due to errors in the observations is modeled by the random variable $\eta \in \mathbb{R}^K$,
whose distribution is explicitly known. We assume that the noise is normally distributed with given covariance matrix
$\Gamma \in \mathbb{R}^{K \times K}$, namely we write $\eta \sim \mathcal{N}(0, \Gamma)$. In order to solve the inverse problem (1), the EKI considers a number of particles or ensemble members whose states are determined by an iterative update. The ensemble members are modeled as realizations of the control $u$, in the following combined in $\mathbf{u} = (u^{(1)}, \dots, u^{(J)})$, with $u^{(j)} \in \mathbb{R}^d$, $j = 1, \dots, J$. The iteration index is denoted by $n$ and the collection of the ensemble members at iteration $n$ by $\mathbf{u}_n = (u_n^{(j)})_{j=1}^{J}$.
Then, at iteration $n$ the EKI update is given by
(2) $u_{n+1}^{(j)} = u_n^{(j)} + C^{up}(\mathbf{u}_n)\left(C^{pp}(\mathbf{u}_n) + \tfrac{1}{\Delta t}\,\Gamma\right)^{-1}\left(y - G(u_n^{(j)})\right)$
for each $j = 1, \dots, J$, where $\Delta t > 0$ is a parameter and where the ensemble update (2) depends on covariance matrices:
(3) $C^{up}(\mathbf{u}_n) = \frac{1}{J}\sum_{j=1}^{J}\left(u_n^{(j)} - \bar{u}_n\right)\left(G(u_n^{(j)}) - \bar{G}_n\right)^{\top}, \qquad C^{pp}(\mathbf{u}_n) = \frac{1}{J}\sum_{j=1}^{J}\left(G(u_n^{(j)}) - \bar{G}_n\right)\left(G(u_n^{(j)}) - \bar{G}_n\right)^{\top},$
where we have denoted with $\bar{u}_n$ and $\bar{G}_n$ the mean of the $u_n^{(j)}$ and of the $G(u_n^{(j)})$, respectively, namely
$\bar{u}_n = \frac{1}{J}\sum_{j=1}^{J} u_n^{(j)}, \qquad \bar{G}_n = \frac{1}{J}\sum_{j=1}^{J} G(u_n^{(j)}).$
Then, it can be proven [29] that each update (2) admits the variational characterization
(4) $u_{n+1}^{(j)} = \operatorname*{arg\,min}_{u}\left(\frac{1}{2}\left\|y - G(u)\right\|_{\Gamma}^{2} + \frac{1}{2\Delta t}\left\|u - u_n^{(j)}\right\|_{C^{uu}(\mathbf{u}_n)}^{2}\right),$
where $C^{uu}(\mathbf{u}_n)$ denotes the empirical covariance of the ensemble and $\|\cdot\|_{A} = \|A^{-1/2}\,\cdot\,\|$.
It is worth mentioning that in the original formulation each observation or measurement is perturbed by additional additive noise at each iteration. The EKI satisfies the subspace property [29], i.e., the ensemble iterates stay in the subspace spanned by the initial ensemble. As a consequence, the natural estimator for the solution of the inverse problem is provided by the mean of the ensemble.
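For illustration, the discrete update (2)–(3) can be sketched in a few lines of code. The following is a minimal sketch assuming a linear forward model given by a matrix, Gaussian noise, step parameter $\Delta t = 1$ and fully synthetic data; all names and dimensions are illustrative choices, not taken from the cited references.

```python
import numpy as np

def eki_step(U, G, y, Gamma):
    """One EKI iteration (2)-(3). U has shape (J, d): J ensemble members in R^d."""
    J = U.shape[0]
    P = U @ G.T                                 # forward evaluations G u^(j)
    u_bar, p_bar = U.mean(axis=0), P.mean(axis=0)
    Cup = (U - u_bar).T @ (P - p_bar) / J       # cross covariance, cf. (3)
    Cpp = (P - p_bar).T @ (P - p_bar) / J       # output covariance, cf. (3)
    # Kalman-type gain; linear solve instead of an explicit inverse
    return U + (y - P) @ np.linalg.solve(Cpp + Gamma, Cup.T)

rng = np.random.default_rng(0)
d, K, J = 5, 10, 50                             # unknowns, observations, members
G = rng.normal(size=(K, d))                     # synthetic linear forward model
u_true = rng.normal(size=d)
Gamma = 1e-2 * np.eye(K)
y = G @ u_true + rng.multivariate_normal(np.zeros(K), Gamma)

U = rng.normal(size=(J, d))                     # initial ensemble (prior draws)
misfit0 = np.linalg.norm(G @ U.mean(axis=0) - y)
for _ in range(100):
    U = eki_step(U, G, y, Gamma)
u_est = U.mean(axis=0)                          # natural estimator: ensemble mean
misfit = np.linalg.norm(G @ u_est - y)
```

In agreement with the subspace property mentioned above, all iterates remain in the span of the initial ensemble, and no derivative of the forward model is ever evaluated.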
In recent years, the EKI has also been studied as a technique to solve inverse problems in a Bayesian framework; see for instance the works [20, 23] and the references therein. The method is proven to have an accuracy comparable with traditional least–squares approaches to inverse problems [29]. It approximates a specific Bayes linear estimator and is able to provide an approximation of the posterior measure; for a detailed discussion we refer to [4, 34]. In this work, we keep the attention on the classical approach, which aims to solve the inverse problem from an optimization point–of–view, see (4). Additional properties of the EKI method, namely continuous–time limits [8, 7, 12, 40, 41], i.e. $\Delta t \to 0$, and mean–field limits in the number of ensemble members [10, 17, 23, 27], i.e. $J \to \infty$, have recently been developed and will be reviewed in more detail below.
1.2 Structure of the paper
The remainder of this paper is organized as follows. In Section 2
we review the continuous formulations of the EKI method, which lead to a preconditioned gradient descent system and to a Vlasov–type partial differential equation. In Section
3 and in Section 4, instead, we present two new formulations of the EKI method, for multi–objective inverse problems and for global asymptotic convergence to the target solution, respectively. Finally, we draw conclusions and perspectives in Section 5.
2 Continuous limits of the ensemble Kalman inversion
The continuous in time limit reduces the discrete update to a coupled system of ordinary differential equations. This limit has been performed in several recent publications, starting from
[40] up to more recent formulations, e.g. see [13] for the hierarchical EKI. In particular, in [40] it has been shown that the continuous in time limit results, in the case of a linear forward model $G(u) = Gu$, in a gradient flow structure. This gradient flow provides a solution to the inverse problem (1) by minimizing the least–squares functional
(5) $\Phi(u; y) = \frac{1}{2}\left\|\Gamma^{-1/2}\left(y - Gu\right)\right\|^{2}.$
Observe, however, that in the continuous limit [40] the regularization term originally present in (4) vanishes for certain scalings. Although the analysis of the continuous in time EKI typically focuses on linear forward models, there are recent results on EKI formulations in nonlinear settings [16].
2.1 Continuous–time limit
The continuous–time limit was first proposed in [40]: consider the parameter $\Delta t$ as an artificial time step for the discrete iteration, i.e. $\Delta t = 1/N$ with $N$ being the maximum number of iterations, and define $u^{(j)}(n\Delta t) = u_n^{(j)}$ for $n = 0, \dots, N$. Computing the limit $\Delta t \to 0$ one obtains
(6) $\frac{\mathrm{d}}{\mathrm{d}t}\, u^{(j)} = C^{up}(\mathbf{u})\,\Gamma^{-1}\left(y - G(u^{(j)})\right), \qquad j = 1, \dots, J,$
with initial condition $u^{(j)}(0) = u_0^{(j)}$. Note that within this limit the covariance of the noise is scaled with $1/\Delta t$, which allows for the continuous time limit. Further, the term $C^{pp}$ vanishes, leading to possibly unstable dynamics [27, 5].
However, in the case of a linear forward model, i.e. $G(u) = Gu$ with $G \in \mathbb{R}^{K \times d}$, equation (6) can be reformulated in terms of the gradient of the least–squares functional (5) as a gradient flow:
(7) $\frac{\mathrm{d}}{\mathrm{d}t}\, u^{(j)} = -C(\mathbf{u})\,\nabla_u \Phi(u^{(j)}; y), \qquad j = 1, \dots, J,$
where $C(\mathbf{u})$ denotes the empirical covariance of the ensemble. Since $C(\mathbf{u})$ is positive semi–definite we obtain
(8) $\frac{\mathrm{d}}{\mathrm{d}t}\,\Phi(u^{(j)}; y) = -\nabla_u \Phi(u^{(j)}; y)^{\top}\, C(\mathbf{u})\,\nabla_u \Phi(u^{(j)}; y) \le 0.$
Although the forward operator is assumed to be linear, the gradient flow is nonlinear. For further details and properties of the gradient descent equation (7) we refer to [40, 41]. In particular, we emphasize that the subspace property of the EKI also holds for the continuous dynamics, together with an important result on the velocity of the collapse of the ensemble towards its mean in the large time limit: the deviations $e^{(j)}(t) = u^{(j)}(t) - \bar{u}(t)$ vanish only algebraically as $t \to \infty$, cf. Theorem 3 in [40, 7].
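As a concrete illustration of the flow (7) and of the decay property (8), the following sketch integrates the preconditioned gradient system with an explicit Euler scheme. The linear model, the noise–free data and all parameters are synthetic, illustrative choices and are not taken from the cited experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, J = 3, 6, 20
G = rng.normal(size=(K, d))                  # synthetic linear model
u_true = rng.normal(size=d)
y = G @ u_true                               # noise-free data, Gamma = Id

def misfit(U):
    """Least-squares functional Phi averaged over the ensemble."""
    R = U @ G.T - y
    return 0.5 * np.sum(R * R) / U.shape[0]

def spread(U):
    """Distance of the ensemble from its mean (collapse indicator)."""
    return np.linalg.norm(U - U.mean(axis=0))

U = rng.normal(size=(J, d))
phi0, s0 = misfit(U), spread(U)
dt = 0.01
for _ in range(4000):                        # explicit Euler for the flow (7)
    u_bar = U.mean(axis=0)
    C = (U - u_bar).T @ (U - u_bar) / J      # empirical covariance C(u)
    grad = (U @ G.T - y) @ G                 # row j: grad Phi(u^(j); y)
    U = U - dt * grad @ C                    # precondition by C (symmetric)
phi1, s1 = misfit(U), spread(U)
```

Both the ensemble misfit and the spread around the mean decrease monotonically, while the collapse is only algebraic in time, consistent with the result quoted above.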
2.2 Mean–field limit
By definition, the EKI method considers a finite ensemble size $J < \infty$. The behavior of the method in the limit of infinitely many ensemble members can be studied via a mean–field limit, in analogy with the classical mean–field derivation of multi–agent systems [9, 24, 30, 39]. In the case of a linear forward model, the limit $J \to \infty$ leads to the Vlasov–type gradient flow PDE
(9) $\partial_t f + \nabla_u \cdot \left( \mathcal{C}(f)\, G^{\top}\Gamma^{-1}\left(y - Gu\right) f \right) = 0$
for a compactly supported probability density of the control $u$ at time $t$, denoted by
(10) $f = f(u, t) \colon \mathbb{R}^{d} \times \mathbb{R}^{+} \to \mathbb{R}^{+}.$
The initial probability density distribution is denoted by $f_0(u) = f(u, 0)$. The operator
$\mathcal{C}(f)$ is the mean–field limit of the covariance of the ensemble and can be written in terms of moments of $f$
as
(11) $\mathcal{C}(f) = \mathbf{E}(f) - \mathbf{m}(f)\,\mathbf{m}(f)^{\top},$
where $\mathbf{m}(f)$ and $\mathbf{E}(f)$ are defined, respectively, as
(12) $\mathbf{m}(f) = \int_{\mathbb{R}^d} u\, f(u, t)\, \mathrm{d}u, \qquad \mathbf{E}(f) = \int_{\mathbb{R}^d} u\, u^{\top} f(u, t)\, \mathrm{d}u.$
For the rigorous mean–field derivation and analysis of the EKI we refer to [10, 17]. Equation (9) is a nonlinear transport equation arising from nonlinear gradient–flow interactions, and in [10, 5] it is observed that the counterpart of (8) holds at the kinetic level. In fact, for the expected misfit
$\mathcal{W}(t) = \int_{\mathbb{R}^d} \Phi(u; y)\, f(u, t)\, \mathrm{d}u$
we obtain
$\frac{\mathrm{d}}{\mathrm{d}t}\,\mathcal{W}(t) = -\int_{\mathbb{R}^d} \nabla_u \Phi(u; y)^{\top}\, \mathcal{C}(f)\, \nabla_u \Phi(u; y)\, f(u, t)\, \mathrm{d}u \le 0,$
since $\mathcal{C}(f)$ is positive semidefinite. In particular, $\mathcal{W}$ is strictly decreasing unless $f$ is a Dirac measure. Also, the Dirac distribution concentrated at the minimizer of $\Phi(\cdot\,; y)$ provides a steady solution of the continuous–limit formulation, but the converse is not necessarily true. In fact, all Dirac distributions satisfy $\mathcal{C}(f) = 0$ and hence provide steady solutions of (9). Convergence to the distribution concentrated at the minimizer has been proven to be linear in [10].
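In the scalar linear case the moments of (9) close, which makes the slow collapse quantitative: with $G = g$ and $\Gamma = 1$, the mean $m$ and variance $c$ of $f$ solve $\dot m = -c\,g\,(g m - y)$ and $\dot c = -2 g^2 c^2$, so that $c(t) = c(0)/(1 + 2 g^2 c(0)\, t)$ decays only algebraically and $m - y/g$ is proportional to $\sqrt{c}$. The following quick numerical check of these closed-form formulas uses synthetic numbers.

```python
# Moment closure of (9) for a scalar linear model G = g with Gamma = 1:
#   dm/dt = -c g (g m - y),   dc/dt = -2 g^2 c^2,
# with closed-form variance c(t) = c0 / (1 + 2 g^2 c0 t) (synthetic numbers).
g, y = 2.0, 3.0                       # model and datum; minimizer u* = y / g
m, c = 0.0, 1.0                       # initial mean and variance
dt, n = 1.0e-4, 100_000               # explicit Euler up to T = n * dt = 10
for _ in range(n):
    m, c = (m + dt * (-c * g * (g * m - y)),
            c + dt * (-2.0 * g**2 * c**2))
T = n * dt
c_exact = 1.0 / (1.0 + 2.0 * g**2 * T)        # algebraic, not exponential
m_exact = y / g - (y / g) * c_exact**0.5      # since m - u* scales as sqrt(c)
```

The variance thus never reaches zero in finite time, which is the scalar counterpart of the slow ensemble collapse discussed above.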
The mean–field interpretation of the EKI has allowed the design of computationally efficient methods based on the mean–field formulation [3, 27]. In particular, it is possible to use a large number of particles, which guarantees significantly better reconstructions of the unknown control, cf. Section 5 in [27].
3 Multi–objective ensemble Kalman inversion
The EKI can also be extended to treat multi–objective optimization problems within a weighted function approach. Here, a vector of controls $u \in \mathbb{R}^d$ has to be determined for $M$ competitive models
$G_m$, $m = 1, \dots, M$, and given observational data:
(13) $y_m = G_m(u) + \eta_m, \qquad m = 1, \dots, M,$
for models $G_m$ and observations $y_m$, where $\eta_m$ is observational noise. A solution to (13) can be obtained e.g. using a multi–objective optimization [18, 36, 38]:
(14) $\min_{u \in \mathbb{R}^d}\left(\Phi_1(u; y_1), \dots, \Phi_M(u; y_M)\right),$
where $\Phi_m(u; y_m)$ denotes the least–squares functional of the $m$–th model. The notion of solution in this framework is related to Pareto optimality [38], which defines a concept of minimum for the vector–valued optimization problem (14).
Definition 3.1.
A point $u^* \in \mathbb{R}^d$ is called Pareto optimal if and only if there exists no point $u \in \mathbb{R}^d$ such that $\Phi_m(u; y_m) \le \Phi_m(u^*; y_m)$ for all $m = 1, \dots, M$ and $\Phi_m(u; y_m) < \Phi_m(u^*; y_m)$ for at least one $m$.
The set $\mathcal{U}^*$ of all $u^*$ fulfilling Definition 3.1 is called Pareto set, while its representation in the space of objectives is called Pareto front. An approximation of the Pareto set can be recovered following an approach based on the weighted function method [36]. Let $\alpha = (\alpha_1, \dots, \alpha_M)$ be a fixed vector in the set
(15) $\mathcal{A} = \Big\{\alpha \in \mathbb{R}^{M} : \alpha_m \ge 0, \ \sum_{m=1}^{M} \alpha_m = 1\Big\}.$
Define the weighted objective function and the weighted observations as
(16) $G_\alpha(u) = \big(\sqrt{\alpha_1}\,G_1(u), \dots, \sqrt{\alpha_M}\,G_M(u)\big), \qquad y_\alpha = \big(\sqrt{\alpha_1}\,y_1, \dots, \sqrt{\alpha_M}\,y_M\big).$
An approximation to the Pareto front is then obtained by the values $u^*(\alpha)$, where for each $\alpha \in \mathcal{A}$
(17) $u^*(\alpha) = \operatorname*{arg\,min}_{u \in \mathbb{R}^d}\, \frac{1}{2}\left\|y_\alpha - G_\alpha(u)\right\|^{2}.$
In case of a convex problem this procedure recovers the full Pareto set, see [36, Theorem 3.1.4]. In theory, the previous problem (17) needs to be solved for all $\alpha \in \mathcal{A}$.
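For linear least–squares objectives the weighted problems (16)–(17) have closed-form minimizers, so the direct approach on an equidistant grid of weights can be sketched explicitly. The two-objective example below is fully synthetic; in the general nonlinear case each weighted problem would instead be solved by the EKI.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 2
G1, G2 = rng.normal(size=(3, d)), rng.normal(size=(3, d))   # synthetic models
y1, y2 = rng.normal(size=3), rng.normal(size=3)             # synthetic data

def phi(Gm, ym, u):
    """Least-squares functional Phi_m(u; y_m) = |y_m - G_m u|^2 / 2."""
    return 0.5 * np.sum((Gm @ u - ym) ** 2)

front = []
for a in np.linspace(0.01, 0.99, 25):       # equidistant grid on the weights
    # weighted normal equations: minimizer of a*Phi_1 + (1 - a)*Phi_2
    A = a * G1.T @ G1 + (1.0 - a) * G2.T @ G2
    rhs = a * G1.T @ y1 + (1.0 - a) * G2.T @ y2
    u_a = np.linalg.solve(A, rhs)
    front.append((phi(G1, y1, u_a), phi(G2, y2, u_a)))
```

Sweeping the weight from one extreme to the other trades the first objective against the second, tracing a discrete approximation of the Pareto front.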
Using a mean–field approach as in the previous section allows for an analysis of the dependence of the solution on $\alpha$, which in turn is used to construct an adaptive grid on $\mathcal{A}$. The evolution of the formal sensitivity $\partial_\alpha f$ of the mean–field description of the particle distribution with respect to $\alpha$ is given by
(18)  
for zero initial data. The set of equations (18) for all $\alpha \in \mathcal{A}$ is defined on the extended phase space and is therefore computationally infeasible. However, the Pareto set is given as a moment of $f$, where the first moment depends additionally on $\alpha$:
(19) $\mathbf{m}(\alpha, t) = \int_{\mathbb{R}^d} u\, f(u, t; \alpha)\, \mathrm{d}u.$
Similarly, moments of (18) can be defined, leading to a set of ordinary differential equations for the sensitivity $\partial_\alpha \mathbf{m}$ [26, Lemma 2.3]. This in turn allows to define an adaptive grid on $\mathcal{A}$: let, for a fixed $\alpha_k$, the corresponding optimal parameter be approximated by $\mathbf{m}(\alpha_k, T)$ for some fixed and sufficiently large $T$. Then, consider the following Taylor expansion
(20) $\mathbf{m}(\alpha_{k+1}, T) \approx \mathbf{m}(\alpha_k, T) + \partial_\alpha \mathbf{m}(\alpha_k, T)\left(\alpha_{k+1} - \alpha_k\right).$
Reformulating (20) allows to obtain the next grid point $\alpha_{k+1}$ adaptively, based on the current approximation of the Pareto set. It also yields an estimate on the norm of the update of the approximation of the Pareto set for a given tolerance.
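The adaptive grid construction based on (20) can be illustrated on a toy example in which the Pareto point and its sensitivity are known in closed form (two scalar quadratic objectives); this closed form replaces the moment system of the mean–field equation, so the sketch is only a caricature of the strategy in [26].

```python
# Toy version of the adaptive grid based on (20): two scalar quadratic
# objectives (u - a)^2 / 2 and (u - b)^2 / 2. The weighted minimizer is
# m(alpha) = alpha * a + (1 - alpha) * b with sensitivity dm/dalpha = a - b,
# both known in closed form (this replaces the mean-field moment system).
a, b, tol = 2.0, -1.0, 0.3
alphas = [0.0]
while alphas[-1] < 1.0:
    dm = a - b                          # sensitivity of the Pareto point
    step = tol / abs(dm)                # move until the Pareto point moves ~tol
    alphas.append(min(1.0, alphas[-1] + step))
pareto = [al * a + (1.0 - al) * b for al in alphas]
gaps = [abs(q - p) for p, q in zip(pareto, pareto[1:])]
```

By construction, consecutive points of the resulting Pareto-set approximation are at most the prescribed tolerance apart, which is exactly the purpose of the adaptive choice of the weights.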
3.1 Numerical experiment
In the numerical experiment we show that the adaptive strategy leads to results that approximate the Pareto front very well with only a few discretization points. We set $M = 2$, so that $\alpha \in \mathcal{A}$ is parameterized by a single parameter $\alpha_1 \in [0, 1]$, i.e. $\alpha = (\alpha_1, 1 - \alpha_1)$. Then, we consider two non–convex objective functions as in [15]
(21)
for the scalar control. As further parameters we use particles sampled from a uniform distribution and a fixed tolerance for the adaptive strategy. Even though the theoretical results have been proven in the linear case [26, Sec. 3], they are applied here in a nonlinear framework. We compare a naive choice for the discretization of $\mathcal{A}$ using an equidistant grid (direct approach) with the outlined adaptive strategy.
We observe that the solution obtained with the adaptive approach covers a larger part of the Pareto front and additionally shows a sharper resolution compared with the direct approach, see Figure 1.
Moreover, the approximation of the Pareto set in Figure 2 shows the expected behavior: the adaptive strategy yields a cloud of points closer to the (analytically known) Pareto set than the direct approach.
4 Stabilized continuous limit of the ensemble Kalman inversion
In the continuous–time limit the term $C^{pp}$ present in the discrete formulation vanishes due to the scaling. This consideration inspired [5], where a stability analysis of the moment system of the time–continuous EKI (7) is performed. Therein, it has been established that the system has infinitely many non–hyperbolic Bogdanov–Takens equilibria, leading to several undesirable consequences. These equilibria are structurally unstable, i.e., sensitive to small perturbations. Since they lie on the set where the preconditioner collapses to zero, only a low order of convergence in time holds true. Further, numerical approximations may push the trajectory into the unfeasible region of the phase space or may get the method stuck in a wrong equilibrium.
These considerations led to a modified formulation of the method that is globally asymptotically stable, obtained by introducing a regularization term in the dynamics. More precisely, given a symmetric and full–rank, in particular positive definite, matrix $B \in \mathbb{R}^{d \times d}$, in [5] it is proposed to consider the following general discrete dynamics for each ensemble member in the case of a linear model:
(22)  
with suitable parameters; for specific choices of the parameters one recovers the continuous–time limit (7) of the original EKI. The modified dynamics (22) differs from (7) in the formulation of the preconditioner and in the presence of an additive term. The new preconditioner is related to an inflation of the covariance. This modification allows to stabilize the phase space of the moments. The additive term, instead, has been shown to act as an acceleration term for the convergence towards equilibrium. The modified dynamical system (22) also has a mean–field interpretation:
(23) 
where the empirical covariance is replaced by its mean–field counterpart $\mathcal{C}(f)$.
The stability analysis of the moment equations is performed in the simplified setting where $G$, $\Gamma$ and $B$ are identity matrices. The dynamical system of the moments of (23) can then be written in closed form, and its linearization at the target equilibrium shows that, for positive parameter choices and positive definite $B$, the target equilibrium is hyperbolic. This formal presentation of the role of the parameters is made mathematically rigorous in [5], where exponentially fast convergence to the target equilibrium is also proven.
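The stabilizing effect of the covariance inflation can be seen already in the simplified setting above ($G$, $\Gamma$ and $B$ identity matrices). The sketch below is not the full scheme (22): it only inflates the preconditioner of the flow (7) by a multiple of the identity and omits the additive acceleration term, and all numbers are synthetic. It nevertheless exhibits the transition from slow, algebraic convergence of the ensemble mean to fast, exponential convergence.

```python
import numpy as np

rng = np.random.default_rng(3)
d, J = 2, 30
u_true = np.array([1.0, -0.5])
y = u_true.copy()                   # G = Id, so data = control (noise-free)

def run(alpha, n=4000, dt=0.01):
    """Explicit Euler for du/dt = -(C(u) + alpha*Id)(u - y): the flow (7)
    with G = Gamma = Id and an inflated preconditioner (illustrative only)."""
    U = rng.normal(size=(J, d))
    for _ in range(n):
        u_bar = U.mean(axis=0)
        C = (U - u_bar).T @ (U - u_bar) / J
        P = C + alpha * np.eye(d)   # alpha = 0: plain EKI preconditioner
        U = U - dt * (U - y) @ P    # P is symmetric
    return np.linalg.norm(U.mean(axis=0) - u_true)

err_plain = run(0.0)                # preconditioner collapses: algebraic rate
err_inflated = run(1.0)             # hyperbolic equilibrium: exponential rate
```

With inflation the preconditioner stays uniformly positive definite, so the target equilibrium is hyperbolic and the error of the ensemble mean decays to machine-precision level, while the plain flow stagnates as the covariance collapses.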
4.1 Numerical experiments
We consider the inverse problem of identifying the force function of a linear elliptic equation in one spatial dimension, assuming that noisy observations of the solution to the problem are available, see e.g. [29, 40, 27].
The problem is prescribed by the following one–dimensional elliptic PDE
(24)
subject to suitable boundary conditions. To obtain measurement data we use a prescribed continuous control $u^{\dagger}$. The problem is discretized using a uniform mesh with $K$ equidistant points on the computational interval. Denote by $\mathbf{u}^{\dagger} \in \mathbb{R}^{K}$ the evaluations of the control function on the mesh. Noisy observations are then obtained by
$\mathbf{y} = A^{-1}\mathbf{u}^{\dagger} + \eta,$
where $A$ is a first–order finite difference discretization of the PDE (24). For simplicity we assume that
$\eta$ is Gaussian white noise, i.e.,
$\eta \sim \mathcal{N}(0, \gamma^2\, \mathrm{Id})$ with $\gamma > 0$ and $\mathrm{Id}$ the identity matrix. We are interested in recovering an approximation to the discrete control $\mathbf{u}^{\dagger}$
from the noisy observations $\mathbf{y}$. In Figure 3 we show the solution to this problem provided by the time–continuous limit of the original EKI and by the stabilized formulation proposed in [5]. Both methods use the same number of ensemble members and the same noise level. We observe that the stable EKI produces a qualitatively improved reconstruction of the control and of the observation compared to the classical EKI. Moreover, as expected from the analysis, we observe that the stable EKI converges faster than the classical method, see Figure 4.
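The pipeline of this experiment can be sketched as follows. Since the concrete PDE, boundary conditions, true control and noise level of Figures 3–4 are not reproduced here, the sketch assumes the standard benchmark of the EKI literature ($-p'' + p = u$ on $(0, \pi)$ with homogeneous Dirichlet conditions), a smooth synthetic control and the basic discrete EKI update.

```python
import numpy as np

# Assumed benchmark: -p'' + p = u on (0, pi), p(0) = p(pi) = 0 (a hypothetical
# stand-in for (24); the experiment in the text may use a different setup).
n = 50
x = np.linspace(0.0, np.pi, n + 2)[1:-1]        # interior mesh points
h = x[1] - x[0]
A = ((np.diag(2.0 * np.ones(n))
      - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h**2 + np.eye(n))
Ainv = np.linalg.inv(A)                          # linear forward model u -> p

rng = np.random.default_rng(4)
u_dag = np.sin(x)                                # synthetic smooth true control
gamma = 1.0e-2
y = Ainv @ u_dag + gamma * rng.normal(size=n)    # noisy observations

J = 100
U = rng.normal(size=(J, n))                      # prior ensemble
Gamma = gamma**2 * np.eye(n)
misfit0 = np.linalg.norm(Ainv @ U.mean(axis=0) - y)
for _ in range(50):                              # discrete EKI iterations (2)
    P = U @ Ainv.T
    u_bar, p_bar = U.mean(axis=0), P.mean(axis=0)
    Cup = (U - u_bar).T @ (P - p_bar) / J
    Cpp = (P - p_bar).T @ (P - p_bar) / J
    U = U + (y - P) @ np.linalg.solve(Cpp + Gamma, Cup.T)
misfit1 = np.linalg.norm(Ainv @ U.mean(axis=0) - y)
```

The data misfit of the ensemble mean drops quickly; the stabilized variant of [5] would additionally avoid the late-time stagnation caused by the collapse of the preconditioner.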
5 Conclusions
An overview of the EKI and its current developments has been provided. The analytical properties have been investigated and, in particular, the mean–field equation and its corresponding moment system have been presented. Two recent extensions of the EKI have been shown and discussed, towards coupled inverse problems and towards numerically stable formulations. Further developments may involve a combination of the two novelties presented and, since many physical problems are subject to additional parametric uncertainty, a suitable treatment of the then stochastic EKI might be of further interest. In the case of large parameter spaces, computational issues need to be addressed as well, since e.g. the size of the empirical covariance grows quadratically in the dimension of the control space. Furthermore, the outlined approach of time–continuous and mean–field limits is applicable to a wider range of particle methods and might serve as a starting point for future investigations into nonlinear filtering from a mathematical perspective.
Acknowledgments
The authors thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for the financial support through 20021702/GRK2326, 333849990/IRTG2379, HE5386/15-1, 18-1, 19-1, 22-1, 23-1 and under Germany’s Excellence Strategy EXC-2023 Internet of Production 390621612. The funding through HIDSS-004 is acknowledged.
G.V. is a member of the “National Group for Scientific Computation (GNCS-INDAM)” and acknowledges support by MUR (Ministry of University and Research) through the PRIN 2017 project number 2017KKJP4X.
References
 [1] (2009) The ensemble Kalman filter in reservoir engineering–a review. SPE J. 14 (3), pp. 393–412. External Links: Document Cited by: §1.
 [2] (2019) Ensemble Kalman methods with constraints. Inverse Probl. 35 (9), pp. 095007. External Links: Document Cited by: §1.
 [3] (2013) Binary interaction algorithms for the simulation of flocking and swarming dynamics. Multiscale Model. Simul. 11 (1), pp. 1–29. Cited by: §2.2.
 [4] (2007) Sampling the posterior: an approach to non-Gaussian data assimilation. Phys. D 230, pp. 50–64. Cited by: §1.1.
 [5] (2022) A stabilization of a continuous limit of the ensemble Kalman inversion. Note: Preprint arXiv:2006.15390 Cited by: §2.1, §2.2, Figure 4, §4.1, §4.
 [6] (2012) An ensemble Kalman–Bucy filter for continuous data assimilation. Meteorologische Zeitschrift 21 (3), pp. 213–219. External Links: Document Cited by: §1.
 [7] (2019) Well Posedness and Convergence Analysis of the Ensemble Kalman Inversion. Inverse Probl. 35 (8). External Links: Document Cited by: §1.1, §2.1.
 [8] (2018) A strongly convergent numerical scheme from ensemble Kalman inversion. SIAM J. Numer. Anal. 56 (4), pp. 2537–2562. External Links: Document Cited by: §1.1.
 [9] (2010) Mathematical modeling of collective behavior in socioeconomic and life sciences. Modeling and Simulation in Science, Engineering and Technology, pp. 297–336. External Links: Document Cited by: §2.2.
 [10] (2021) Wasserstein stability estimates for covariancepreconditioned FokkerPlanck equations. Nonlinearity 34 (4), pp. 2275. External Links: Document Cited by: §1.1, §2.2.

 [11] (2019) On the incorporation of box-constraints for ensemble Kalman inversion. Foundations of Data Science 1 (4), pp. 433. External Links: Document Cited by: §1.
 [12] (2020) Tikhonov regularization within ensemble Kalman inversion. SIAM J. Numer. Anal. 58 (2), pp. 1263–1294. External Links: Document Cited by: §1.1.
 [13] (2020) Limit analysis of hierarchical ensemble Kalman inversion. J. Inverse Ill-Posed Probl. Note: In press Cited by: §2.
 [14] (2010) Parameterization techniques to improve mass conservation and data assimilation for ensemble Kalman filter. SPE Western Regional Meeting. External Links: Document Cited by: §1.

 [15] (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6 (2), pp. 182–197. Cited by: §3.1.
 [16] (2021) Ensemble Kalman inversion for nonlinear problems: weights, consistency, and variance bounds. Found. Data Sci. 3 (3), pp. 371–411. External Links: Document Cited by: §2.
 [17] (2021) Ensemble Kalman Inversion: mean-field limit and convergence analysis. Stat. Comput. 31, pp. 9. External Links: Document Cited by: §1.1, §2.2.
 [18] (2005) Multicriteria optimization. Vol. 491, Springer Science & Business Media. Cited by: §3.
 [19] (2013) Ensemble smoother with multiple data assimilation. Computers and Geosciences 55, pp. 3–15. External Links: Document Cited by: §1.
 [20] (2015) Analysis of the ensemble and polynomial chaos Kalman filters in Bayesian inverse problems. SIAM/ASA J. Uncertain. Quantif. 3 (1), pp. 823–851. Cited by: §1.1.
 [21] (1996) Assimilation of Geosat altimeter data for the Agulhas Current using the ensemble Kalman filter with a quasigeostrophic model. Monthly Weather Review 128, pp. 85–96. External Links: Document Cited by: §1.
 [22] (1994) Sequential data assimilation with a nonlinear quasigeostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res 99, pp. 10143–10162. External Links: Document Cited by: §1.
 [23] (2020) Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler. SIAM J. Appl. Dyn. Syst. 19 (1), pp. 412–441. External Links: Document Cited by: §1.1.
 [24] (2016) On the dynamics of large particle systems in the mean field limit. In Macroscopic and large scale phenomena: coarse graining, mean field limits and ergodicity, pp. 1–144. Cited by: §2.2.

 [25] (2018) Never look back – A modified EnKF method and its application to the training of neural networks without back propagation. Note: Preprint arXiv:1805.08034 Cited by: §1.
 [26] (2022) Filtering methods for coupled inverse problems. Note: Preprint arXiv:2203.09841 Cited by: §3.1, §3.
 [27] (2019) Kinetic methods for inverse problems. Kinet. Relat. Models 12 (5), pp. 1109–1130. External Links: Document Cited by: §1.1, §2.1, §2.2, §4.1.
 [28] (2020) Continuous limits for constrained ensemble Kalman filter. Inverse Probl. External Links: Document Cited by: §1.
 [29] (2013) Ensemble Kalman methods for inverse problems. Inverse Probl. 29 (4), pp. 045001. External Links: Document Cited by: §1.1, §1.1, §1, §4.1.
 [30] (2014) A review of the mean field limits for vlasov equations. Kinetic & Related Models 7 (4), pp. 661–711. Cited by: §2.2.
 [31] (2014) Conservation of mass and preservation of positivity with ensemble-type Kalman filter algorithms. Monthly Weather Review 142 (2), pp. 755–773. External Links: Document Cited by: §1.
 [32] (2021) Investigating the pilot point ensemble Kalman filter for geostatistical inversion and data assimilation. Adv. Water Resour. 155. Cited by: §1.
 [33] (2019) Ensemble Kalman inversion: a derivativefree technique for machine learning tasks. Inverse Probl. 35 (9), pp. 095005. External Links: Document Cited by: §1.
 [34] (2009) Large sample asymptotics for the ensemble Kalman filter. Research Report Technical Report RR7014, INRIA. Cited by: §1.1.
 [35] (2020) An iterative ensemble Kalman method for an inverse scattering problem in acoustics. Modern Physics Letters B 34 (28), pp. 2050312. Cited by: §1.
 [36] (2012) Nonlinear multiobjective optimization. Vol. 12, Springer Science & Business Media. Cited by: §3, §3.
 [37] (2019) Geometric and level set tomography using ensemble Kalman inversion. Geophysical Journal International 220 (2), pp. 967–980. Cited by: §1.
 [38] (2017) Nonconvex multiobjective optimization. Springer. Cited by: §3, §3.
 [39] (2013) Interacting Multiagent Systems. Kinetic equations and Monte Carlo methods. Oxford University Press. Cited by: §2.2.
 [40] (2017) Analysis of the Ensemble Kalman Filter for Inverse Problems. SIAM J. Numer. Anal. 55 (3), pp. 1264–1290. External Links: Document Cited by: §1.1, §2.1, §2, §4.1.
 [41] (2018) Convergence analysis of ensemble Kalman inversion: the linear, noisy case. Appl. Anal. 97 (1), pp. 107–123. External Links: Document Cited by: §1.1, §2.1.
 [42] (2020) Identifying trending coefficients with an ensemble Kalman filter. IFAC-PapersOnLine 53 (2), pp. 2292–2298. External Links: Document Cited by: §1.
 [43] (2010) On unscented Kalman filtering with state interval constraints. J. Process Contr. 20 (1), pp. 45–57. External Links: Document Cited by: §1.
 [44] (2021) Efficient multiscale imaging of subsurface resistivity with uncertainty quantification using ensemble Kalman inversion. Geophysical Journal International 225 (2), pp. 887–905. Cited by: §1.
 [45] (2020) Ensemble Kalman filter optimizing deep neural networks. In Conference on Machine Learning, Optimization and Data Science, Vol. 12514. Cited by: §1.