1 Introduction
Dynamic data-driven application systems (DDDAS) [3] integrate computational simulations and physical measurements in symbiotic and dynamic feedback control systems. Within the DDDAS paradigm, data assimilation (DA) defines a class of inverse problems that fuses information from an imperfect computational model based on differential equations (which encapsulates our knowledge of the physical laws that govern the evolution of the real system), from noisy observations (sparse snapshots of reality), and from an uncertain prior (which encapsulates our current knowledge of reality). Data assimilation integrates these three sources of information and the associated uncertainties in a Bayesian framework to provide the posterior, i.e., the probability distribution conditioned on the uncertainties in the model and observations.
Two approaches to data assimilation have gained widespread popularity: ensemble-based estimation and variational methods. The ensemble-based methods are rooted in statistical theory, whereas the variational approach is derived from optimal control theory. The variational approach formulates data assimilation as a nonlinear optimization problem constrained by a numerical model. The initial conditions (as well as boundary conditions, forcing, or model parameters) are adjusted to minimize the discrepancy between the model trajectory and a set of time-distributed observations. In real-time operational settings the data assimilation process is performed in cycles: observations within an assimilation window are used to obtain an optimal trajectory, which provides the initial condition for the next time window, and the process is repeated in the subsequent cycles.
Large errors in some observations can adversely impact the overall solution to the data assimilation system, e.g., they can lead to spurious features in the analysis [15]. Various factors contribute to uncertainties in observations. Faulty and malfunctioning sensors are a major source of large observation uncertainties, and data quality control (QC) is therefore an important component of any weather forecasting DA system [17]. The goal of QC is to ensure that only correct observations are used in the DA system. The departure of an observation from the background forecast is usually used as the criterion for accepting or rejecting it: observations with large deviations from the background forecast are rejected in the quality control step [12]. However, this process has important drawbacks. Tavolato and Isaksen present several case studies [30] demonstrating that rejecting observations on the basis of background departure statistics leads to an inability to capture small scales in the analysis.
As an alternative to rejecting the observations that fail the QC tests, Tavolato and Isaksen [30] propose the use of the Huber norm in the analysis. Their approach adjusts the error covariances of observations based on their background departure statistics. No observations are discarded, but smaller weights are given to low-quality observations (those that show large departures from the background). The Huber norm has also been used in the context of robust methods for seismic inversion [11] and ensemble Kalman filtering [24]. The issue of outliers has also been tackled by regularization techniques using $\ell_1$ norms [9]. To the best of our knowledge, no prior work has fully addressed the construction of robust data assimilation methods using these norms. For example, [30] adjusts the observation error covariances but then applies traditional assimilation algorithms. This paper develops a systematic mathematical framework for robust data assimilation. Different data assimilation algorithms are reformulated as optimization problems using $\ell_1$ and Huber norms, and are solved via the alternating direction method of multipliers (ADMM) described by Boyd et al. [4], and via the half-quadratic minimization method presented in [21].
The remainder of the paper is organized as follows: Section 2 reviews the data assimilation problem and the traditional solution algorithms. Section 3 presents the new robust algorithms for 3DVar, Section 4 for 4DVar, and Section 5 develops new robust algorithms for ensemble-based approaches. Numerical results are presented in Section 6. Concluding remarks and future directions are discussed in Section 7.
2 Data assimilation
Data assimilation (DA) is the process of fusing information from priors, imperfect model predictions, and noisy data, to obtain a consistent description of the true state of a physical system [8, 14, 26, 27]. The resulting best estimate is called the analysis.
The prior information encapsulates our current knowledge of the system. It typically consists of a background estimate of the state $x^{\rm b}$ and the corresponding background error covariance matrix $B$.
The model captures our knowledge about the physical laws that govern the evolution of the system. The model evolves an initial state $x_0$ at the initial time $t_0$ to states $x_i$ at future times $t_i$. A general model equation is:

$$x_i = \mathcal{M}_{t_0 \to t_i}(x_0), \qquad i = 1, \dots, N. \qquad (1)$$
Observations are noisy snapshots of reality available at discrete time instances. Specifically, measurements $y_i$ of the physical state are taken at times $t_i$:

$$y_i = \mathcal{H}_i(x_i) + \varepsilon_i, \qquad i = 1, \dots, N, \qquad (2)$$

where the observation operator $\mathcal{H}_i$ maps the model state space onto the observation space. The random observation errors $\varepsilon_i \sim \mathcal{N}(0, R_i)$ are assumed to have normal distributions.
Variational methods solve the DA problem in an optimal control framework. Specifically, one adjusts a control variable (e.g., model parameters) in order to minimize the discrepancy between model forecasts and observations. Ensemble-based methods are sequential in nature. Generally, the error statistics of the model estimate are represented by an ensemble of model states, and the ensembles are propagated to predict the error statistics forward in time. When the observations are assimilated, the analysis is obtained by operating directly on the model states. It is important to note that all the approaches discussed in the following subsections aim to minimize the $\ell_2$ norm of the discrepancy between observations and model predictions.
Ensemble-based methods employ multiple model realizations in order to approximate the background error distribution of any model state. The empirical moments of the ensemble are used in order to estimate the background state and the background error covariance matrix. Since the dimension of the model space is much larger than the ensemble size, localization methods are commonly used in order to mitigate the impact of sampling errors (spurious correlations). For example, the local ensemble transform Kalman filter [13] performs the assimilation of observations locally in order to avoid the impact of analysis innovations coming from distant components. We next review these traditional approaches.
2.1 Three-dimensional variational data assimilation
The three-dimensional variational (3DVar) DA approach processes the observations successively at times $t_1, t_2, \dots, t_N$. The background state (i.e., the prior state estimate at time $t_i$) is given by the model forecast started from the previous analysis $x_{i-1}^{\rm a}$ (i.e., the best state estimate at $t_{i-1}$):

$$x_i^{\rm b} = \mathcal{M}_{t_{i-1} \to t_i}\left(x_{i-1}^{\rm a}\right). \qquad (3)$$
The discrepancy between the model state $x$ and observations at time $t_i$, together with the departure of the state from the model forecast $x_i^{\rm b}$, are measured by the 3DVar cost function

$$\mathcal{J}(x) = \frac{1}{2}\,(x - x_i^{\rm b})^T\, B^{-1}\, (x - x_i^{\rm b}) + \frac{1}{2}\,\left(\mathcal{H}(x) - y_i\right)^T\, R_i^{-1}\, \left(\mathcal{H}(x) - y_i\right). \qquad (4a)$$

When both the background and observation errors are Gaussian, the function (4a) equals the negative logarithm of the posterior probability density function given by Bayes' theorem. While in principle a different background covariance matrix should be used at each assimilation time, in practice the same matrix is reused throughout the assimilation window, $B_i \equiv B$. The 3DVar analysis is the maximum a posteriori (MAP) estimator, and is computed as the state that minimizes (4a):

$$x_i^{\rm a} = \arg\min_{x}\; \mathcal{J}(x). \qquad (4b)$$
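For concreteness, the minimization (4b) can be carried out with a generic gradient-based optimizer. The following is a minimal sketch assuming a linear observation operator $\mathcal{H}(x) = Hx$; all function and variable names are illustrative, not prescribed by the paper:

```python
import numpy as np
from scipy.optimize import minimize

def threedvar_analysis(xb, B, y, R, H):
    """Sketch of the 3DVar analysis (4b) for a linear observation
    operator H (a matrix). All names are illustrative."""
    Binv = np.linalg.inv(B)
    Rinv = np.linalg.inv(R)

    def cost_and_grad(x):
        dx = x - xb
        dy = H @ x - y
        J = 0.5 * dx @ Binv @ dx + 0.5 * dy @ Rinv @ dy   # cost (4a)
        g = Binv @ dx + H.T @ Rinv @ dy                   # gradient of (4a)
        return J, g

    return minimize(cost_and_grad, xb, jac=True, method="L-BFGS-B").x
```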
2.2 Four-dimensional variational data assimilation
Strong-constraint four-dimensional variational (4DVar) DA processes simultaneously all observations at all times within the assimilation window. The control parameters are typically the initial conditions $x_0$, which uniquely determine the state of the system at all future times under the assumption that the model (1) perfectly represents reality. The background state $x_0^{\rm b}$ is the prior best estimate of the initial conditions, and has an associated initial background error covariance matrix $B_0$. The 4DVar problem provides the estimate $x_0^{\rm a}$ of the true initial conditions as the solution of the following optimization problem:

$$x_0^{\rm a} = \arg\min_{x_0}\; \mathcal{J}(x_0), \qquad (5a)$$

$$\mathcal{J}(x_0) = \frac{1}{2}\,(x_0 - x_0^{\rm b})^T\, B_0^{-1}\, (x_0 - x_0^{\rm b}) + \frac{1}{2} \sum_{i=1}^{N} \left(\mathcal{H}_i(x_i) - y_i\right)^T\, R_i^{-1}\, \left(\mathcal{H}_i(x_i) - y_i\right). \qquad (5b)$$
The first term of the sum (5b) quantifies the departure of the solution $x_0$ from the background state $x_0^{\rm b}$ at the initial time $t_0$. The second term measures the mismatch between the forecast trajectory (model solutions $x_i$) and observations $y_i$ at all times $t_i$ in the assimilation window. The covariance matrices $B_0$ and $R_i$ need to be predefined, and their quality influences the accuracy of the resulting analysis. Weak-constraint 4DVar [26] removes the perfect model assumption by allowing model errors.
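As an illustration, the 4DVar cost function (5b) can be evaluated by running the model forward over the window. The sketch below assumes a linear model $x_i = M x_{i-1}$ and a linear observation operator; the finite-difference gradient stands in for the adjoint-based gradient discussed next, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def fourdvar_analysis(x0b, B0, ys, Rs, M, H):
    """Sketch of strong-constraint 4DVar (5a)-(5b) for a linear model
    matrix M and linear observation operator H. ys and Rs hold the
    observations and error covariances at times t_1, ..., t_N."""
    B0inv = np.linalg.inv(B0)

    def cost(x0):
        J = 0.5 * (x0 - x0b) @ B0inv @ (x0 - x0b)   # background term
        x = x0
        for y, R in zip(ys, Rs):
            x = M @ x                                # model step, as in (1)
            d = H @ x - y                            # innovation at t_i
            J += 0.5 * d @ np.linalg.solve(R, d)     # observation term
        return J

    # finite-difference gradient for brevity; an operational system
    # would use a first-order adjoint model instead [5]
    return minimize(cost, x0b, method="L-BFGS-B").x
```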
In this paper we focus on the strong-constraint 4DVar formulation (5a). The minimizer of (5a) is computed iteratively using gradient-based numerical optimization methods. First-order adjoint models provide the gradient of the cost function [5], while second-order adjoint models provide the Hessian-vector product (e.g., for Newton-type methods) [28]. The methodology for building and using various adjoint models for optimization, sensitivity analysis, and uncertainty quantification is discussed in [27, 6]. Various strategies to improve the 4DVar data assimilation system are described in [7]. The procedure to estimate the impact of observation and model errors is developed in [23, 22]. A framework to perform derivative-free variational data assimilation using the trust-region framework is given in [25].

2.3 The ensemble Kalman filter
The uncertain background state at time $t_i$ is assumed to be normally distributed, $x_i^{\rm b} \sim \mathcal{N}\left(\bar{x}_i^{\rm b}, B_i\right)$. Given the new observations $y_i$ at time $t_i$, the Kalman filter computes the Kalman gain matrix, the analysis state (i.e., the posterior mean), and the analysis covariance matrix (i.e., the posterior covariance matrix) using the following formulas, respectively:

$$K_i = B_i\, H_i^T \left(H_i\, B_i\, H_i^T + R_i\right)^{-1}, \qquad (6a)$$

$$x_i^{\rm a} = x_i^{\rm b} + K_i \left(y_i - \mathcal{H}_i(x_i^{\rm b})\right), \qquad (6b)$$

$$A_i = (I - K_i\, H_i)\, B_i, \qquad (6c)$$

where $H_i$ denotes the linearization of the observation operator $\mathcal{H}_i$.
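A direct transcription of the update (6a)-(6c), assuming a linear observation operator given as a matrix (a minimal sketch; names are illustrative):

```python
import numpy as np

def kalman_update(xb, B, y, H, R):
    """Kalman analysis step, equations (6a)-(6c), for a linear
    observation operator H. Names are illustrative."""
    S = H @ B @ H.T + R                     # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)          # Kalman gain (6a)
    xa = xb + K @ (y - H @ xb)              # analysis mean (6b)
    A = (np.eye(len(xb)) - K @ H) @ B       # analysis covariance (6c)
    return xa, A
```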
The ensemble Kalman filter describes the background probability density at time $t_i$ by an ensemble of $N_{\rm ens}$ forecast states $x_i^{\rm b}(e)$, $e = 1, \dots, N_{\rm ens}$. The background mean and covariance are estimated from the ensemble:

$$\bar{x}_i^{\rm b} = \frac{1}{N_{\rm ens}} \sum_{e=1}^{N_{\rm ens}} x_i^{\rm b}(e), \qquad B_i \approx \frac{1}{N_{\rm ens} - 1}\, X_i^{\rm b} \left(X_i^{\rm b}\right)^T, \qquad (7)$$

where the matrix of state deviations from the mean is

$$X_i^{\rm b} = \left[\, x_i^{\rm b}(1) - \bar{x}_i^{\rm b}, \;\dots,\; x_i^{\rm b}(N_{\rm ens}) - \bar{x}_i^{\rm b} \,\right].$$

The background observations and their deviations from the mean are defined as:

$$\bar{y}_i^{\rm b} = \frac{1}{N_{\rm ens}} \sum_{e=1}^{N_{\rm ens}} \mathcal{H}_i\left(x_i^{\rm b}(e)\right), \qquad Y_i^{\rm b} = \left[\, \mathcal{H}_i\left(x_i^{\rm b}(1)\right) - \bar{y}_i^{\rm b}, \;\dots,\; \mathcal{H}_i\left(x_i^{\rm b}(N_{\rm ens})\right) - \bar{y}_i^{\rm b} \,\right].$$

Any vector in the subspace spanned by the ensemble members can be represented as $x = \bar{x}_i^{\rm b} + X_i^{\rm b}\, w$ for some vector of weights $w \in \mathbb{R}^{N_{\rm ens}}$. The ensemble Kalman filter makes the approximations (7) and computes the Kalman gain matrix (6a) as follows:

$$K_i = \frac{1}{N_{\rm ens} - 1}\, X_i^{\rm b} \left(Y_i^{\rm b}\right)^T S_i^{-1}, \qquad (8a)$$

$$S_i = \frac{1}{N_{\rm ens} - 1}\, Y_i^{\rm b} \left(Y_i^{\rm b}\right)^T + R_i. \qquad (8b)$$

The mean analysis (6b) is:

$$\bar{x}_i^{\rm a} = \bar{x}_i^{\rm b} + K_i \left(y_i - \bar{y}_i^{\rm b}\right). \qquad (9)$$

Using $\bar{x}_i^{\rm a} = \bar{x}_i^{\rm b} + X_i^{\rm b}\, \bar{w}$ we have the mean analysis in ensemble space:

$$\bar{w} = \frac{1}{N_{\rm ens} - 1} \left(Y_i^{\rm b}\right)^T S_i^{-1} \left(y_i - \bar{y}_i^{\rm b}\right). \qquad (10)$$

The analysis error covariance (6c) becomes:

$$A_i \approx \frac{1}{N_{\rm ens} - 1}\, X_i^{\rm b} \left( I - \frac{1}{N_{\rm ens} - 1} \left(Y_i^{\rm b}\right)^T S_i^{-1}\, Y_i^{\rm b} \right) \left(X_i^{\rm b}\right)^T. \qquad (11)$$
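The ensemble statistics (7)-(8b) are straightforward to compute; a minimal sketch for a linear observation operator (names are illustrative):

```python
import numpy as np

def ensemble_statistics(Xb_ens, H):
    """Ensemble mean, deviations, and sample covariance, equations
    (7) and the ingredients of (8a)-(8b). Columns of Xb_ens are the
    ensemble members; H is a linear observation operator."""
    n_ens = Xb_ens.shape[1]
    xb_mean = Xb_ens.mean(axis=1, keepdims=True)
    X = Xb_ens - xb_mean                   # state deviations X_i^b
    Y = H @ X                              # observation deviations Y_i^b
    B_ens = X @ X.T / (n_ens - 1)          # sample covariance (7)
    return xb_mean.ravel(), X, Y, B_ens
```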
Ensemble square root filter (EnSRF)
Traditional ensemble Kalman filter (EnKF)
In the traditional version of EnKF the filter (9) is applied to each ensemble member:

$$x_i^{\rm a}(e) = x_i^{\rm b}(e) + K_i \left( y_i + \varepsilon_i(e) - \mathcal{H}_i\left(x_i^{\rm b}(e)\right) \right), \qquad e = 1, \dots, N_{\rm ens},$$

where $y_i + \varepsilon_i(e)$, with $\varepsilon_i(e) \sim \mathcal{N}(0, R_i)$, are perturbed observations. This leads to the analysis weights

$$x_i^{\rm a}(e) = \bar{x}_i^{\rm b} + X_i^{\rm b}\, w(e), \qquad w(e) \approx \frac{1}{N_{\rm ens} - 1} \left(Y_i^{\rm b}\right)^T S_i^{-1} \left( y_i + \varepsilon_i(e) - \bar{y}_i^{\rm b} \right), \qquad (15)$$

where the second relation comes from linearizing the observation operator about the background mean state. A comparison of (15) with (10) reveals that the classical EnKF solution (15) obtains the weights of the $e$-th ensemble member by minimizing the cost function:

$$\mathcal{J}(w) = \frac{N_{\rm ens} - 1}{2}\, \|w\|_2^2 + \frac{1}{2} \left( y_i + \varepsilon_i(e) - \bar{y}_i^{\rm b} - Y_i^{\rm b}\, w \right)^T R_i^{-1} \left( y_i + \varepsilon_i(e) - \bar{y}_i^{\rm b} - Y_i^{\rm b}\, w \right). \qquad (16)$$
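A compact sketch of this perturbed-observations update, assuming a linear observation operator and using the sample statistics (8a)-(8b); names are illustrative:

```python
import numpy as np

def enkf_analysis(Xb_ens, y, H, R, rng):
    """Classical (perturbed-observations) EnKF analysis: the update
    (9) is applied to each ensemble member. Columns of Xb_ens are
    the members; H is a linear observation operator."""
    n_ens = Xb_ens.shape[1]
    X = Xb_ens - Xb_ens.mean(axis=1, keepdims=True)
    Y = H @ X
    S = Y @ Y.T / (n_ens - 1) + R                    # (8b)
    K = X @ Y.T @ np.linalg.inv(S) / (n_ens - 1)     # (8a)
    Xa_ens = Xb_ens.copy()
    for e in range(n_ens):
        y_e = y + rng.multivariate_normal(np.zeros(len(y)), R)
        Xa_ens[:, e] += K @ (y_e - H @ Xb_ens[:, e])  # member update
    return Xa_ens

# usage: Xa = enkf_analysis(Xb, y, H, R, np.random.default_rng(0))
```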
The paper next develops a systematic framework to perform robust DA using $\ell_1$ and Huber norms. This ensures that important information coming from outliers is not rejected during quality control, but is instead used in the assimilation.
3 Robust 3DVar data assimilation
3.1 $\ell_1$-norm 3DVar
Consider the 3DVar cost function in equation (4a) that penalizes the discrepancy between the model state and observations at time $t_i$, together with the departure of the state from the model forecast $x_i^{\rm b}$. Function (4a) represents the negative logarithm of the posterior probability density under the assumption that both the background and the observation errors are Gaussian. In particular, the scaled innovation vector

$$z = R_i^{-1/2} \left( \mathcal{H}(x) - y_i \right), \qquad (17)$$

which represents the scaled observation errors, is assumed to have a normal distribution (each component of $z$ is independent and has a standard normal distribution). We are interested in the case where some observation errors have large deviations. To model this we assume that the scaled observation errors (17) are independent, and each of them has a univariate Laplace distribution:

$$\mathrm{pdf}(z_\ell) = \frac{1}{2\beta}\, \exp\left( -\frac{|z_\ell - \mu|}{\beta} \right).$$

The univariate Laplace distribution models an exponential decay on both sides away from the mean $\mu$. Assuming that the observation errors are unbiased ($\mu = 0$) and independent, the probability distribution of the scaled innovation vector is

$$\mathrm{pdf}(z) \propto \exp\left( -\frac{\|z\|_1}{\beta} \right). \qquad (18)$$
Under these assumptions the negative logarithm of the posterior probability density measures the discrepancy of the model state with observations in the $\ell_1$ norm. This leads to the following revised cost function:

$$\mathcal{J}(x) = \frac{1}{2}\,(x - x_i^{\rm b})^T\, B^{-1}\, (x - x_i^{\rm b}) + \frac{1}{\beta}\, \left\| R_i^{-1/2} \left( \mathcal{H}(x) - y_i \right) \right\|_1. \qquad (19)$$
In this paper we choose $\beta = 2$, which leads to a cost function (19) that is similar to (4a) (the $\ell_1$ term carries the same factor $1/2$). This translates into an assumed variance of each scaled observation error component equal to eight, since the Laplace variance is $2\beta^2$. This is no restriction of generality: the solution process discussed here can be applied to any value of $\beta$. For example, one can choose $\beta = 1/\sqrt{2}$ for an error component variance of one. Following the alternating direction method of multipliers (ADMM) [4], we use the scaled innovation vector (17) to restate the $\ell_1$-norm 3DVar problem as:

$$\min_{x,\, z}\; \frac{1}{2}\,(x - x_i^{\rm b})^T\, B^{-1}\, (x - x_i^{\rm b}) + \frac{1}{2}\,\|z\|_1 \quad \text{subject to} \quad R_i^{-1/2} \left( \mathcal{H}(x) - y_i \right) = z. \qquad (20)$$
The augmented Lagrangian for (20) is given by

$$L_\rho(x, z, \lambda) = \frac{1}{2}\,(x - x_i^{\rm b})^T B^{-1} (x - x_i^{\rm b}) + \frac{1}{2}\,\|z\|_1 + \lambda^T \left( R_i^{-1/2}\left(\mathcal{H}(x) - y_i\right) - z \right) + \frac{\rho}{2}\, \left\| R_i^{-1/2}\left(\mathcal{H}(x) - y_i\right) - z \right\|_2^2, \qquad (21)$$

where $\lambda$ is the Lagrange multiplier and $\rho$ is the penalty parameter (a positive number). Using the identity

$$\lambda^T r + \frac{\rho}{2}\,\|r\|_2^2 = \frac{\rho}{2}\, \left\| r + \frac{\lambda}{\rho} \right\|_2^2 - \frac{1}{2\rho}\,\|\lambda\|_2^2, \qquad (22)$$

the augmented Lagrangian (21) can be written as

$$L_\rho(x, z, \lambda) = \frac{1}{2}\,(x - x_i^{\rm b})^T B^{-1} (x - x_i^{\rm b}) + \frac{1}{2}\,\|z\|_1 + \frac{\rho}{2}\, \left\| R_i^{-1/2}\left(\mathcal{H}(x) - y_i\right) - z + \frac{\lambda}{\rho} \right\|_2^2 - \frac{1}{2\rho}\,\|\lambda\|_2^2. \qquad (23)$$
The cost function (23) can be iteratively minimized as follows:

1. Initialize $x^{(0)} = x_i^{\rm b}$, $z^{(0)} = R_i^{-1/2}\left(\mathcal{H}(x^{(0)}) - y_i\right)$, $\lambda^{(0)} = 0$, and $\rho > 0$.

2. Start with iteration $k = 0$.

3. Fix $z^{(k)}$, $\lambda^{(k)}$, and $\rho$, and solve

$$x^{(k+1)} = \arg\min_x\; \frac{1}{2}\,(x - x_i^{\rm b})^T B^{-1} (x - x_i^{\rm b}) + \frac{\rho}{2}\, \left\| R_i^{-1/2}\left(\mathcal{H}(x) - y_i\right) - z^{(k)} + \frac{\lambda^{(k)}}{\rho} \right\|_2^2. \qquad (24a)$$

This is a regular 3DVar problem (4a) with the modified observation vector

$$\widetilde{y}_i = y_i + R_i^{1/2} \left( z^{(k)} - \frac{\lambda^{(k)}}{\rho} \right) \qquad (24b)$$

and the modified observation error covariance

$$\widetilde{R}_i = \rho^{-1}\, R_i. \qquad (24c)$$

4. Fix $x^{(k+1)}$, $\lambda^{(k)}$, and $\rho$, and solve

$$z^{(k+1)} = \arg\min_z\; \frac{1}{2}\,\|z\|_1 + \frac{\rho}{2}\, \left\| z - v^{(k)} \right\|_2^2, \qquad (25a)$$

where

$$v^{(k)} = R_i^{-1/2}\left(\mathcal{H}(x^{(k+1)}) - y_i\right) + \frac{\lambda^{(k)}}{\rho}. \qquad (25b)$$

The above minimization subproblem can be solved by using the shrinkage procedure defined in Algorithm 1:

$$z^{(k+1)} = \mathrm{shrink}\left( v^{(k)},\, \frac{1}{2\rho} \right) = \mathrm{sign}\left(v^{(k)}\right) \cdot \max\left( \left|v^{(k)}\right| - \frac{1}{2\rho},\, 0 \right). \qquad (25c)$$

5. Update $\lambda$:

$$\lambda^{(k+1)} = \lambda^{(k)} + \rho \left( R_i^{-1/2}\left(\mathcal{H}(x^{(k+1)}) - y_i\right) - z^{(k+1)} \right).$$

6. Update $k \leftarrow k + 1$ and repeat steps 3-5 until convergence.
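A minimal end-to-end sketch of this iteration for a linear observation operator, with the $x$-update solved as a linear system rather than by a generic 3DVar solver; all names are illustrative:

```python
import numpy as np

def shrink(v, kappa):
    """Soft-thresholding operator (25c)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def l1_3dvar_admm(xb, B, y, R, H, rho=1.0, n_iter=50):
    """ADMM sketch for the l1-norm 3DVar problem (20), assuming a
    linear observation operator H. Names are illustrative."""
    Rq = np.linalg.cholesky(R)             # Cholesky factor of R
    Rqi = np.linalg.inv(Rq)                # acts as R^{-1/2} (whitening)
    Binv = np.linalg.inv(B)
    G = Rqi @ H                            # scaled observation operator
    x = xb.copy()
    z = G @ x - Rqi @ y
    lam = np.zeros_like(z)
    # the x-update (24a) reduces to one linear solve per iteration
    A = Binv + rho * G.T @ G
    for _ in range(n_iter):
        rhs = Binv @ xb + rho * G.T @ (Rqi @ y + z - lam / rho)
        x = np.linalg.solve(A, rhs)        # x-update (24a)
        v = G @ x - Rqi @ y + lam / rho    # (25b)
        z = shrink(v, 1.0 / (2.0 * rho))   # z-update (25c)
        lam = lam + rho * (G @ x - Rqi @ y - z)   # multiplier update
    return x
```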
3.2 Huber-norm 3DVar
Using the $\ell_1$ norm throughout spoils the smoothness properties of the cost function near the mean. The smoothness of the $\ell_2$ norm near the mean is highly desirable, and can be retained by using the Huber norm. The Huber norm treats the errors using the $\ell_2$ norm in the vicinity of the mean, and using the $\ell_1$ norm far from the mean [16]. From a statistical perspective, small scaled observation errors (17) are assumed to have independent standard normal distributions, while large scaled observation errors are assumed to have independent Laplace distributions (18). The good properties of the traditional 3DVar are preserved when the data is reasonable, but outliers are treated with the more robust $\ell_1$ norm.
The 3DVar problem in the Huber norm reads:

$$\min_x\; \frac{1}{2}\,(x - x_i^{\rm b})^T\, B^{-1}\, (x - x_i^{\rm b}) + \sum_{\ell} \phi\left(z_\ell\right), \qquad (26)$$

where

$$\phi(z_\ell) = \begin{cases} \dfrac{1}{2}\, z_\ell^2, & |z_\ell| \le \tau, \\[4pt] \tau\, |z_\ell| - \dfrac{1}{2}\,\tau^2, & |z_\ell| > \tau. \end{cases} \qquad (27)$$

Here $z_\ell$ is the $\ell$-th entry of the scaled innovation vector $z$ (17). The threshold $\tau$ represents the number of standard deviations after which we switch from the $\ell_2$ to the $\ell_1$ norm.
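The switch function (27) is elementwise and easy to evaluate; a small sketch (names illustrative):

```python
import numpy as np

def huber_phi(z, tau):
    """Huber function (27), applied elementwise to the scaled
    innovation vector z; tau is the l2-to-l1 switch threshold."""
    a = np.abs(z)
    return np.where(a <= tau, 0.5 * z**2, tau * a - 0.5 * tau**2)
```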
3.2.1 ADMM solution for Huber-3DVar
3.2.2 Half-quadratic optimization solution for Huber-3DVar
An alternative approach to carry out the minimization (26) is to reformulate it as a multiplicative half-quadratic optimization problem [21]. We construct an augmented cost function which involves the auxiliary variable $\sigma$:

$$\mathcal{J}(x, \sigma) = \frac{1}{2}\,(x - x_i^{\rm b})^T B^{-1} (x - x_i^{\rm b}) + \sum_\ell \left( \frac{1}{2}\, \sigma_\ell\, z_\ell^2 + \psi(\sigma_\ell) \right), \qquad (29a)$$

where $\psi$ is a dual potential function determined by using the theory of convex conjugacy such that

$$\phi(z_\ell) = \min_{\sigma_\ell}\; \left( \frac{1}{2}\, \sigma_\ell\, z_\ell^2 + \psi(\sigma_\ell) \right). \qquad (29b)$$
The minimizer of $\mathcal{J}(x, \sigma)$ is calculated by alternate minimization. Let the solution at the end of iteration $k$ be $\left(x^{(k)}, \sigma^{(k)}\right)$. At iteration $k+1$ we proceed as follows.

1. First calculate $\sigma^{(k+1)} = \arg\min_\sigma\, \mathcal{J}\left(x^{(k)}, \sigma\right)$, where $x^{(k)}$ is fixed. This amounts to finding

$$\sigma^{(k+1)} = \omega\left(z^{(k)}\right), \qquad z^{(k)} = R_i^{-1/2}\left(\mathcal{H}(x^{(k)}) - y_i\right) \qquad (30a)$$

(where the function $\omega$ is applied to each element of the argument vector). For Huber regularization

$$\omega(z_\ell) = \begin{cases} 1, & |z_\ell| \le \tau, \\[2pt] \tau / |z_\ell|, & |z_\ell| > \tau. \end{cases} \qquad (30b)$$

2. Next, calculate $x^{(k+1)} = \arg\min_x\, \mathcal{J}\left(x, \sigma^{(k+1)}\right)$, where $\sigma^{(k+1)}$ is fixed. This is achieved by solving the following minimization problem:

$$x^{(k+1)} = \arg\min_x\; \frac{1}{2}\,(x - x_i^{\rm b})^T B^{-1} (x - x_i^{\rm b}) + \frac{1}{2}\, z^T\, \mathrm{diag}\left(\sigma^{(k+1)}\right) z, \qquad z = R_i^{-1/2}\left(\mathcal{H}(x) - y_i\right).$$

This minimization problem is equivalent to solving a regular 3DVar problem

$$x^{(k+1)} = \arg\min_x\; \frac{1}{2}\,(x - x_i^{\rm b})^T B^{-1} (x - x_i^{\rm b}) + \frac{1}{2}\,\left(\mathcal{H}(x) - y_i\right)^T \widetilde{R}_i^{-1} \left(\mathcal{H}(x) - y_i\right) \qquad (31)$$

with a modified observation error covariance

$$\widetilde{R}_i = R_i^{1/2}\, \mathrm{diag}\left(\sigma^{(k+1)}\right)^{-1} R_i^{1/2}. \qquad (32)$$

For the case where $R_i$ is diagonal, equation (32) divides each observation error variance by $\sigma_\ell^{(k+1)}$. The smaller the entry $\sigma_\ell^{(k+1)}$ is, the lower the weight given to observation $\ell$ in the 3DVar data assimilation procedure becomes.
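The full half-quadratic iteration then alternates the weight update (30a)-(30b) with the reweighted 3DVar solve (31)-(32). A minimal sketch for a linear observation operator and diagonal $R_i$; the value of tau and all names are illustrative:

```python
import numpy as np

def huber_weights(z, tau):
    """Half-quadratic weights omega(z) from (30b)."""
    w = np.ones_like(z)
    big = np.abs(z) > tau
    w[big] = tau / np.abs(z[big])
    return w

def huber_3dvar_half_quadratic(xb, B, y, R, H, tau=1.5, n_iter=20):
    """Half-quadratic iteration for Huber 3DVar, assuming a linear
    observation operator H and diagonal R."""
    Binv = np.linalg.inv(B)
    r = np.diag(R)                        # observation error variances
    x = xb.copy()
    for _ in range(n_iter):
        z = (H @ x - y) / np.sqrt(r)      # scaled innovations (17)
        sigma = huber_weights(z, tau)     # sigma-update (30a)-(30b)
        Rtilde_inv = np.diag(sigma / r)   # inverse of (32), diagonal case
        # x-update: solve the regular 3DVar problem (31)
        A = Binv + H.T @ Rtilde_inv @ H
        rhs = Binv @ xb + H.T @ Rtilde_inv @ y
        x = np.linalg.solve(A, rhs)
    return x
```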
4 Robust 4DVar
4.1 $\ell_1$-norm 4DVar
Given the background value of the initial state $x_0^{\rm b}$, the covariance of the initial background errors $B_0$, the observations $y_i$ at times $t_i$, and the corresponding observation error covariances $R_i$ ($i = 1, \dots, N$), the $\ell_1$-norm 4DVar problem looks for the MAP estimate of the true initial conditions by solving the following constrained optimization problem:

$$\min_{x_0,\, z_i}\; \frac{1}{2}\,(x_0 - x_0^{\rm b})^T B_0^{-1} (x_0 - x_0^{\rm b}) + \frac{1}{2} \sum_{i=1}^{N} \|z_i\|_1 \quad \text{subject to} \quad R_i^{-1/2}\left(\mathcal{H}_i(x_i) - y_i\right) = z_i, \quad x_i = \mathcal{M}_{t_0 \to t_i}(x_0). \qquad (33)$$
The augmented Lagrangian associated with (33) reads:

$$L_\rho = \frac{1}{2}\,(x_0 - x_0^{\rm b})^T B_0^{-1} (x_0 - x_0^{\rm b}) + \sum_{i=1}^{N} \left[ \frac{1}{2}\,\|z_i\|_1 + \lambda_i^T D_i \left( R_i^{-1/2}\left(\mathcal{H}_i(x_i) - y_i\right) - z_i \right) + \frac{\rho}{2}\, \left\| D_i \left( R_i^{-1/2}\left(\mathcal{H}_i(x_i) - y_i\right) - z_i \right) \right\|_2^2 \right],$$

where the $D_i$'s are model error scaling matrices.
The problem (33) can be solved by alternating minimization of the Lagrangian above, as follows:

1. Initialize $x_0^{(0)} = x_0^{\rm b}$.

2. Run the forward model starting from $x_0^{(0)}$ to obtain $x_i^{(0)}$, $i = 1, \dots, N$.

3. Set $z_i^{(0)} = R_i^{-1/2}\left(\mathcal{H}_i(x_i^{(0)}) - y_i\right)$, $\lambda_i^{(0)} = 0$ for $i = 1, \dots, N$, and $k = 0$.

4. Fix $z_i^{(k)}$ and $\lambda_i^{(k)}$, and minimize the Lagrangian to solve for $x_0^{(k+1)}$:

$$x_0^{(k+1)} = \arg\min_{x_0}\; \frac{1}{2}\,(x_0 - x_0^{\rm b})^T B_0^{-1} (x_0 - x_0^{\rm b}) + \sum_{i=1}^{N} \frac{\rho}{2}\, \left\| D_i \left( R_i^{-1/2}\left(\mathcal{H}_i(x_i) - y_i\right) - z_i^{(k)} + \frac{\lambda_i^{(k)}}{\rho} \right) \right\|_2^2.$$

This is equivalent to solving the traditional 4DVar problem

$$x_0^{(k+1)} = \arg\min_{x_0}\; \frac{1}{2}\,(x_0 - x_0^{\rm b})^T B_0^{-1} (x_0 - x_0^{\rm b}) + \frac{1}{2} \sum_{i=1}^{N} \left(\mathcal{H}_i(x_i) - \widetilde{y}_i\right)^T \widetilde{R}_i^{-1} \left(\mathcal{H}_i(x_i) - \widetilde{y}_i\right)$$

with the modified observation vectors and observation error covariances (24c).

5. Run the forward model starting from $x_0^{(k+1)}$ to obtain $x_i^{(k+1)}$, $i = 1, \dots, N$.

6. Fix $x_0^{(k+1)}$ and $\lambda_i^{(k)}$, and find $z_i^{(k+1)}$, $i = 1, \dots, N$, by the shrinkage procedure (25c):

$$z_i^{(k+1)} = \mathrm{shrink}\left( R_i^{-1/2}\left(\mathcal{H}_i(x_i^{(k+1)}) - y_i\right) + \frac{\lambda_i^{(k)}}{\rho},\; \frac{1}{2\rho} \right).$$

7. Update the multipliers, by analogy with the 3DVar case,

$$\lambda_i^{(k+1)} = \lambda_i^{(k)} + \rho \left( R_i^{-1/2}\left(\mathcal{H}_i(x_i^{(k+1)}) - y_i\right) - z_i^{(k+1)} \right), \qquad i = 1, \dots, N,$$

then set $k \leftarrow k + 1$ and repeat from step 4 until convergence.
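Putting the pieces together, a compact sketch of this alternating minimization for a linear model matrix $M$ and linear observation operator, with the $D_i$ taken as identity and the $x_0$-update done by a generic quadratic solver; all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def forward(x0, M, N):
    """Propagate the linear model N steps; returns [x_1, ..., x_N]."""
    xs, x = [], x0
    for _ in range(N):
        x = M @ x
        xs.append(x)
    return xs

def l1_4dvar_admm(x0b, B0, ys, Rs, M, H, rho=1.0, n_outer=10):
    """ADMM sketch for the l1-norm 4DVar problem (33) with a linear
    model M and linear observation operator H."""
    B0inv = np.linalg.inv(B0)
    N = len(ys)
    Rqis = [np.linalg.inv(np.linalg.cholesky(R)) for R in Rs]  # ~R^{-1/2}
    x0 = x0b.copy()
    traj = forward(x0, M, N)
    zs = [Rqis[i] @ (H @ traj[i] - ys[i]) for i in range(N)]
    lams = [np.zeros_like(z) for z in zs]
    for _ in range(n_outer):
        # x0-update: a quadratic 4DVar solve with shifted, scaled residuals
        def cost(x0v):
            J = 0.5 * (x0v - x0b) @ B0inv @ (x0v - x0b)
            x = x0v
            for i in range(N):
                x = M @ x
                r = Rqis[i] @ (H @ x - ys[i]) - zs[i] + lams[i] / rho
                J += 0.5 * rho * r @ r
            return J
        x0 = minimize(cost, x0, method="L-BFGS-B").x
        traj = forward(x0, M, N)
        for i in range(N):
            v = Rqis[i] @ (H @ traj[i] - ys[i]) + lams[i] / rho
            zs[i] = np.sign(v) * np.maximum(np.abs(v) - 1 / (2 * rho), 0)
            lams[i] += rho * (Rqis[i] @ (H @ traj[i] - ys[i]) - zs[i])
    return x0
```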