Robust data assimilation using L_1 and Huber norms

11/05/2015, by Vishwas Rao et al., Hong Kong Baptist University and Virginia Polytechnic Institute and State University

Data assimilation is the process of fusing information from priors, observations of nature, and numerical models, in order to obtain best estimates of the parameters or state of a physical system of interest. The presence of large errors in some observational data, e.g., data collected from a faulty instrument, negatively affects the quality of the overall assimilation results. This work develops a systematic framework for robust data assimilation. The new algorithms continue to produce good analyses in the presence of observation outliers. The approach is based on replacing the traditional L_2 norm formulation of data assimilation problems with formulations based on L_1 and Huber norms. Numerical experiments using the Lorenz-96 and the shallow water on the sphere models illustrate how the new algorithms outperform traditional data assimilation approaches in the presence of data outliers.


1 Introduction

Dynamic data-driven application systems (DDDAS [3]) integrate computational simulations and physical measurements in symbiotic and dynamic feedback control systems. Within the DDDAS paradigm, data assimilation (DA) defines a class of inverse problems that fuses information from an imperfect computational model based on differential equations (which encapsulates our knowledge of the physical laws that govern the evolution of the real system), from noisy observations (sparse snapshots of reality), and from an uncertain prior (which encapsulates our current knowledge of reality). Data assimilation integrates these three sources of information and the associated uncertainties in a Bayesian framework to provide the posterior, i.e., the probability distribution conditioned on the uncertainties in the model and observations.

Two approaches to data assimilation have gained widespread popularity: ensemble-based estimation and variational methods. The ensemble-based methods are rooted in statistical theory, whereas the variational approach is derived from optimal control theory. The variational approach formulates data assimilation as a nonlinear optimization problem constrained by a numerical model. The initial conditions (as well as boundary conditions, forcing, or model parameters) are adjusted to minimize the discrepancy between the model trajectory and a set of time-distributed observations. In real-time operational settings the data assimilation process is performed in cycles: observations within an assimilation window are used to obtain an optimal trajectory, which provides the initial condition for the next time window, and the process is repeated in the subsequent cycles.

Large errors in some observations can adversely impact the overall solution to the data assimilation system, e.g., they can lead to spurious features in the analysis [15]. Various factors contribute to uncertainties in observations. Faulty and malfunctioning sensors are a major source of large uncertainties in observations, and data quality control (QC) is an important component of any weather forecasting DA system [17]. The goal of QC is to ensure that only correct observations are used in the DA system. Erroneous observations can lead to spurious features in the resulting solution [15]. The departure of an observation from the background forecast is usually used as a measure for accepting or rejecting observations: observations with large deviations from the background forecast are rejected in the quality control step [12]. However, this process has important drawbacks. Tavolato and Isaksen present several case studies [30] that demonstrate that rejecting observations on the basis of background departure statistics leads to the inability to capture small scales in the analysis.

As an alternative to rejecting the observations that fail the QC tests, Tavolato and Isaksen [30] propose the use of the Huber norm in the analysis. Their Huber norm approach adjusts the error covariances of observations based on their background departure statistics. No observations are discarded, but smaller weights are given to low-quality observations (those that show large departures from the background). The Huber norm has been used in the context of robust methods for seismic inversion [11] and ensemble Kalman filtering [24]. The issue of outliers has also been tackled by regularization techniques using L_1 norms [9]. To the best of our knowledge, no prior work has fully addressed the construction of robust data assimilation methods using these norms. For example, [30] adjusts the observation error covariances but then applies traditional assimilation algorithms.

This paper develops a systematic mathematical framework for robust data assimilation. Different data assimilation algorithms are reformulated as optimization problems using L_1 and Huber norms, and are solved via the alternating direction method of multipliers (ADMM) developed by Boyd [4], and via the half-quadratic minimization method presented in [21].

The remainder of the paper is organized as follows: Section 2 reviews the data assimilation problem and the traditional solution algorithms. Section 3 presents the new robust algorithms for 3D-Var, Section 4 for 4D-Var, and Section 5 develops new robust algorithms for ensemble based approaches. Numerical results are presented in Section 6. Concluding remarks and future directions are discussed in Section 7.

2 Data assimilation

Data assimilation (DA) is the process of fusing information from priors, imperfect model predictions, and noisy data, to obtain a consistent description of the true state of a physical system [8, 14, 26, 27]. The resulting best estimate is called the analysis.

The prior information encapsulates our current knowledge of the system. Prior information typically consists of a background estimate of the state and the corresponding background error covariance matrix.

The model captures our knowledge about the physical laws that govern the evolution of the system. The model evolves an initial state at the initial time to states at future times. A general model equation is:

(1)

Observations are noisy snapshots of reality available at discrete time instances. Specifically, measurements of the physical state are taken at discrete times:

(2)

where the observation operator maps the model state space onto the observation space. The random observation errors are assumed to have normal distributions.
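In conventional DA notation, with a model solution operator M_{t_0 -> t_k}, observation operators H_k, and observation error covariances R_k (these symbols are the standard ones and may differ from the paper's own), the model equation (1) and the observation equation (2) take the familiar form

\[
  \mathbf{x}_k = \mathcal{M}_{t_0 \rightarrow t_k}(\mathbf{x}_0), \qquad
  \mathbf{y}_k = \mathcal{H}_k(\mathbf{x}_k) + \varepsilon_k, \qquad
  \varepsilon_k \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_k), \qquad k = 1, \ldots, N.
\]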

Variational methods solve the DA problem in an optimal control framework. Specifically, one adjusts a control variable (e.g., model parameters) in order to minimize the discrepancy between model forecasts and observations. Ensemble-based methods are sequential in nature. Generally, the error statistics of the model estimate are represented by an ensemble of model states, and the ensembles are propagated forward in time to predict the error statistics. When observations are assimilated, the analysis is obtained by operating directly on the model states. It is important to note that all the approaches discussed in the following subsections aim to minimize the L_2 norm of the discrepancy between observations and model predictions.

Ensemble-based methods employ multiple model realizations in order to approximate the background error distribution of any model state. The empirical moments of the ensemble are used in order to estimate the background state and the background error covariance matrix. Since the dimension of the model space is several times larger than the ensemble size, localization methods are commonly used in order to mitigate the impact of sampling errors (spurious correlations). For example, the local ensemble transform Kalman filter [13] performs the assimilation of observations locally in order to avoid the impact of analysis innovations coming from distant components.

We next review these traditional approaches.

2.1 Three-dimensional variational data assimilation

The three-dimensional variational (3D-Var) DA approach processes the observations successively at discrete times. The background state (i.e., the prior state estimate at the current time) is given by the model forecast started from the previous analysis (i.e., the best state estimate at the previous assimilation time):

(3)

The discrepancy between the model state and the observations at the current time, together with the departure of the state from the model forecast, is measured by the 3D-Var cost function

(4a)
When both the background and observation errors are Gaussian, the function (4a) equals the negative logarithm of the posterior probability density function given by Bayes' theorem. While in principle a different background covariance matrix should be used at each assimilation time, in practice the same matrix is re-used throughout the assimilation window. The 3D-Var analysis is the maximum a-posteriori (MAP) estimator, and is computed as the state that minimizes (4a):
(4b)
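For reference, in conventional notation (background state x^b with covariance B, observations y with covariance R, and observation operator H; the paper's own symbols may differ in detail), the quadratic 3D-Var cost function (4a) and the analysis (4b) take the standard form

\[
  \mathcal{J}(\mathbf{x}) = \frac{1}{2}(\mathbf{x}-\mathbf{x}^{\rm b})^{T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}^{\rm b})
  + \frac{1}{2}\bigl(\mathcal{H}(\mathbf{x})-\mathbf{y}\bigr)^{T}\mathbf{R}^{-1}\bigl(\mathcal{H}(\mathbf{x})-\mathbf{y}\bigr),
  \qquad
  \mathbf{x}^{\rm a} = \arg\min_{\mathbf{x}} \mathcal{J}(\mathbf{x}).
\]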

2.2 Four-dimensional variational data assimilation

Strong-constraint four-dimensional variational (4D-Var) DA processes simultaneously all observations at all times within the assimilation window. The control parameters are typically the initial conditions, which uniquely determine the state of the system at all future times under the assumption that the model (1) perfectly represents reality. The background state is the prior best estimate of the initial conditions, and has an associated initial background error covariance matrix. The 4D-Var problem provides the estimate of the true initial conditions as the solution of the following optimization problem

(5a)
(5b)

The first term of the sum (5b) quantifies the departure of the solution from the background state at the initial time. The second term measures the mismatch between the forecast trajectory (the model solutions) and the observations at all times in the assimilation window. The background and observation error covariance matrices need to be predefined, and their quality influences the accuracy of the resulting analysis. Weak-constraint 4D-Var [26] removes the perfect-model assumption by allowing a model error.
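In the same conventional notation, the strong-constraint 4D-Var problem (5) is typically written as

\[
  \mathbf{x}_0^{\rm a} = \arg\min_{\mathbf{x}_0}\;
  \frac{1}{2}(\mathbf{x}_0-\mathbf{x}_0^{\rm b})^{T}\mathbf{B}_0^{-1}(\mathbf{x}_0-\mathbf{x}_0^{\rm b})
  + \frac{1}{2}\sum_{k=1}^{N}\bigl(\mathcal{H}_k(\mathbf{x}_k)-\mathbf{y}_k\bigr)^{T}\mathbf{R}_k^{-1}\bigl(\mathcal{H}_k(\mathbf{x}_k)-\mathbf{y}_k\bigr),
  \qquad \text{subject to}\quad \mathbf{x}_k = \mathcal{M}_{t_0 \rightarrow t_k}(\mathbf{x}_0).
\]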

In this paper we focus on the strong-constraint 4D-Var formulation (5a). The minimizer of (5a) is computed iteratively using gradient-based numerical optimization methods. First-order adjoint models provide the gradient of the cost function [5], while second-order adjoint models provide Hessian-vector products (e.g., for Newton-type methods) [28]. The methodology for building and using various adjoint models for optimization, sensitivity analysis, and uncertainty quantification is discussed in [27, 6]. Various strategies to improve the 4D-Var data assimilation system are described in [7]. The procedure to estimate the impact of observation and model errors is developed in [23, 22]. A framework to perform derivative-free variational data assimilation using the trust-region approach is given in [25].

2.3 The ensemble Kalman filter

The uncertain background state at the current time is assumed to be normally distributed. Given the new observations at this time, the Kalman filter computes the Kalman gain matrix, the analysis state (i.e., the posterior mean), and the analysis covariance matrix (i.e., the posterior covariance matrix) using the following formulas, respectively:

(6a)
(6b)
(6c)
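In conventional notation, with a linearized observation operator H_k and gain matrix K_k (symbols may differ from the paper's), the Kalman filter update corresponding to (6a)-(6c) reads

\[
  \mathbf{K}_k = \mathbf{B}_k\mathbf{H}_k^{T}\bigl(\mathbf{H}_k\mathbf{B}_k\mathbf{H}_k^{T}+\mathbf{R}_k\bigr)^{-1},
  \qquad
  \mathbf{x}_k^{\rm a} = \mathbf{x}_k^{\rm b} + \mathbf{K}_k\bigl(\mathbf{y}_k-\mathcal{H}_k(\mathbf{x}_k^{\rm b})\bigr),
  \qquad
  \mathbf{A}_k = (\mathbf{I}-\mathbf{K}_k\mathbf{H}_k)\,\mathbf{B}_k.
\]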

The ensemble Kalman filter describes the background probability density at the current time by an ensemble of forecast states. The background mean and covariance are estimated from the ensemble:

(7)

where the matrix of state deviations from the mean is

The background observations and their deviations from the mean are defined as:

Any vector in the subspace spanned by the ensemble members can be represented as

The ensemble Kalman filter makes the approximations (7) and computes the Kalman gain matrix (6a) as follows:

(8a)
(8b)

The mean analysis (6b) is:

(9)

Using this representation we have the mean analysis in ensemble space:

(10)

The analysis error covariance (6c) becomes:

(11)
Ensemble square root filter (EnSRF)

The 3D-Var cost function (4a) is formulated with the background error covariance replaced by the forecast ensemble covariance matrix (7). It can be shown that the analysis mean weights (10) are the minimizer of the following cost function [13]:

(12)

The local ensemble transform Kalman filter [13] computes the symmetric square root of the matrix (8b)

(13)

and obtains the analysis ensemble weights by adding columns of the factors to the mean analysis weight:

(14)
Traditional ensemble Kalman filter (EnKF)

In the traditional version of EnKF the filter (9) is applied to each ensemble member:

where perturbed observations (observations with added random noise drawn from the observation error distribution) are used. This leads to the analysis weights

(15)

where the second relation comes from linearizing the observation operator about the background mean state. A comparison of (15) with (10) reveals that the classical EnKF solution (15) obtains the weights of each ensemble member by minimizing the cost function:

(16)
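As an illustration only (not code from the paper), the following minimal NumPy sketch carries out one perturbed-observation EnKF analysis step of the kind summarized above, assuming a linear observation operator and the ensemble covariance estimate (7); all names, dimensions, and values are placeholders.

import numpy as np

def enkf_analysis(Xb, y, H, R, rng):
    # Xb: (n, m) background ensemble (columns are members); y: (p,) observation vector
    # H: (p, n) linear observation operator; R: (p, p) observation error covariance
    m = Xb.shape[1]
    A = (Xb - Xb.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)   # scaled state deviations, B ~ A A^T
    S = H @ A                                                    # deviations in observation space
    K = A @ S.T @ np.linalg.inv(S @ S.T + R)                     # ensemble Kalman gain, cf. (8a)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, size=m).T  # perturbed observations
    return Xb + K @ (Y - H @ Xb)                                 # analysis ensemble, cf. (9), (15)

rng = np.random.default_rng(0)
Xb = rng.normal(size=(4, 10))                  # toy setup: 4 state components, 10 members
H, R, y = np.eye(2, 4), 0.25 * np.eye(2), np.array([1.0, -0.5])
Xa = enkf_analysis(Xb, y, H, R, rng)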

Next, the paper develops a systematic framework to perform robust DA using the L_1 and Huber norms. This ensures that important information coming from outliers is not rejected, but is used during the quality control.

3 Robust 3D-Var data assimilation

3.1 L_1-norm 3D-Var

Consider the 3D-Var cost function in equation (4a), which penalizes the discrepancy between the model state and the observations, together with the departure of the state from the model forecast. Function (4a) represents the negative log-likelihood of the posterior probability density under the assumption that both the background and the observation errors are Gaussian. In particular, the scaled innovation vector

(17)

which represents scaled observation errors, is assumed to have a normal distribution (each observation error is independent and has a standard normal distribution). We are interested in the case where some observation errors have large deviations. To model this we assume that the scaled observation errors (17) are independent, and each of them has a univariate Laplace distribution:

The univariate Laplace distribution models an exponential decay on both sides away from the mean. Assuming that the observation errors are unbiased (zero mean) and independent, the probability distribution of the scaled innovation vector is

(18)

Under these assumptions the negative log-likelihood posterior function (4a) measures the discrepancy of the model state with observations in the L_1 norm. This leads to the following revised cost function:

(19)

In this paper we choose a particular value of the Laplace scale parameter that leads to a cost function (19) similar to (4a). This translates into an assumed variance of each observation error component equal to eight. This is no restriction of generality: the solution process discussed here can be applied to any value of the scale parameter. For example, one can choose the value that corresponds to an error component variance of one.

Following the Alternating Direction Method of Multipliers (ADMM) [4] we use the scaled innovation vector (17) to obtain the L_1-3D-Var problem:

(20)

The augmented Lagrangian equation for (20) is given by

where a Lagrange multiplier and a positive penalty parameter have been introduced. Using the identity

(22)

the augmented Lagrangian (3.1) can be written as

(23)

The cost function (23) can be iteratively minimized as follows:

  1. Initialize the algorithm parameters.

  2. Start with initial values for the state, the auxiliary variable, and the Lagrange multiplier.

  3. Fix the auxiliary variable, the multiplier, and the penalty parameter, and solve for the state:

    To solve (24) we carry out a regular L_2-3D-Var minimization of the form (4)
    (24b)
    but with modified observations and a scaled covariance matrix:
    (24c)
  4. Fix the state, the multiplier, and the penalty parameter, and solve for the auxiliary variable:

    (25a)
    where
    (25b)
    The above minimization subproblem can be solved by using the shrinkage procedure defined in Algorithm 1:
    (25c)
  5. Update the Lagrange multiplier:

  6. Update the penalty parameter, and repeat from Step 3 until convergence.

1:procedure L1_Shrinkage
2:     Input: the vector to be shrunk and the shrinkage threshold
3:     Output: the shrunk vector
4:     Apply the componentwise soft-thresholding (shrinkage) operator
5:end procedure
Algorithm 1 L_1 Shrinkage
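As a concrete (and purely illustrative) sketch of the iteration above, the NumPy fragment below implements the standard componentwise soft-thresholding operator of Algorithm 1 and a generic scaled-ADMM loop; the two callbacks, the shrinkage threshold 1/mu, and all names are standard ADMM choices introduced here for illustration rather than details read off (24)-(25).

import numpy as np

def l1_shrinkage(v, kappa):
    # Componentwise soft-thresholding: sign(v) * max(|v| - kappa, 0)
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_l1_3dvar(x0, solve_l2_3dvar, scaled_innovation, mu, n_iter=50):
    # solve_l2_3dvar(z, lam) -> x : hypothetical solver for the quadratic subproblem (24)
    # scaled_innovation(x)   -> d : hypothetical evaluation of the scaled innovation (17)
    x = np.array(x0, dtype=float)
    d = scaled_innovation(x)
    z = d.copy()                                   # auxiliary variable
    lam = np.zeros_like(d)                         # scaled Lagrange multiplier
    for _ in range(n_iter):
        x = solve_l2_3dvar(z, lam)                 # step 3: state update (L_2 subproblem)
        d = scaled_innovation(x)
        z = l1_shrinkage(d + lam, 1.0 / mu)        # step 4: auxiliary update via shrinkage
        lam = lam + d - z                          # step 5: multiplier update
    return x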

3.2 Huber-norm 3D-Var

Using the L_1 norm throughout spoils the smoothness properties near the mean. The smoothness of the L_2 norm near the mean is highly desirable and can be retained by using the Huber norm. The Huber norm treats the errors using the L_2 norm in the vicinity of the mean, and using the L_1 norm far from the mean [16]. From a statistical perspective, small scaled observation errors (17) are assumed to have independent standard normal distributions, while large scaled observation errors are assumed to have independent Laplace distributions (18). The good properties of the traditional L_2 3D-Var are preserved when the data is reasonable, but outliers are treated with the more robust L_1 norm.

The 3D-Var problem in the Huber norm reads:

(26)

where

(27)

Here the penalty is applied to each entry of the scaled innovation vector individually. The threshold represents the number of standard deviations beyond which we switch from the L_2 norm to the L_1 norm.
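For reference, the standard Huber function with threshold tau, which is the form the componentwise penalty in (27) is expected to take up to scaling conventions, is

\[
  \varphi_{\tau}(d_i) =
  \begin{cases}
    \tfrac{1}{2}\, d_i^{2}, & |d_i| \le \tau, \\
    \tau\, |d_i| - \tfrac{1}{2}\,\tau^{2}, & |d_i| > \tau,
  \end{cases}
\]

i.e., quadratic (L_2-like) within tau standard deviations of the mean and linear (L_1-like) beyond.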

3.2.1 ADMM solution for Huber-3D-Var

The Huber norm minimization problem (26) can be solved using the ADMM framework in a manner similar to the one described in Section 3.1. The only difference is in the shrinkage step: after fixing the state, the multiplier, and the penalty parameter, we do not solve the L_1-norm problem (25a). Rather, we solve the Huber-norm problem

(28a)
where the argument vector is defined in (25b). The closed-form solution of (28a) is given by the Huber shrinkage procedure defined in Algorithm 2:
1:procedure Huber_Shrinkage
2:     Input: the vector to be shrunk, the Huber threshold, and the penalty parameter
3:     Output: the shrunk vector
4:     for each component of the input vector do
5:         if the component lies in the quadratic (L_2) region of the Huber function then
6:              apply the quadratic-region update
7:         else
8:              apply the linear-region (L_1) shrinkage update
9:         end if
10:     end for
11:end procedure
Algorithm 2 Huber_Shrinkage
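The following minimal NumPy sketch illustrates a Huber shrinkage (proximal) step of the kind Algorithm 2 performs, using the standard Huber function above with threshold tau and ADMM penalty mu; the paper's Algorithm 2 may use a slightly different scaling convention.

import numpy as np

def huber_shrinkage(v, tau, mu):
    # Componentwise proximal operator of the standard Huber function:
    # argmin_z huber_tau(z) + (mu/2) * (z - v)^2
    z = np.empty(v.shape)
    quad = np.abs(v) <= tau * (1.0 + 1.0 / mu)              # quadratic (L_2) regime near the mean
    z[quad] = v[quad] * mu / (mu + 1.0)
    z[~quad] = v[~quad] - np.sign(v[~quad]) * tau / mu      # linear (L_1) regime for outliers
    return z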

3.2.2 Half-quadratic optimization solution for Huber-3D-Var

An alternative approach to carry out the minimization (26) is to reformulate it as a multiplicative half-quadratic optimization problem [21]. We construct an augmented cost function which involves an auxiliary variable:

(29a)
where the dual potential function is determined by using the theory of convex conjugacy such that
(29b)

The minimizer of the augmented cost function is calculated using alternating minimization. Let the solution be given at the end of the previous iteration. At the current iteration we proceed as follows.

  1. First calculate the auxiliary variable, keeping the state fixed, such that

    This amounts to finding the auxiliary variable according to

    (30a)
    (where the function is applied to each element of the argument vector). For Huber regularization
    (30b)
  2. Next, calculate the state, keeping the auxiliary variable fixed, such that

    This is achieved by solving the following minimization problem:

    This minimization problem is equivalent to solving a regular L_2-3D-Var problem

    (31)

    with a modified observation error covariance

    (32)

    For the case where the observation error covariance is diagonal, equation (32) divides each variance by the corresponding auxiliary weight. The smaller this weight is, the lower the weight given to the corresponding observation in the 3D-Var data assimilation procedure becomes. A small sketch of this reweighting is given below.
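The following minimal NumPy sketch illustrates this half-quadratic reweighting, i.e., Huber-type weights in the spirit of (30b) and rescaled diagonal observation error variances in the spirit of (32); the names and the exact weight convention are assumptions consistent with the description above, not code from the paper.

import numpy as np

def huber_weights(d, tau):
    # Weight 1 inside the quadratic region, tau/|d_i| for outlying components
    w = np.ones(d.shape)
    out = np.abs(d) > tau
    w[out] = tau / np.abs(d[out])
    return w

def reweighted_obs_variances(r_diag, d, tau):
    # Divide each observation error variance by its weight: outlying observations
    # receive inflated variances and hence lower weight in the 3D-Var analysis
    return r_diag / huber_weights(d, tau)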

4 Robust 4D-Var

4.1 L_1-4D-Var

Given the background value of the initial state, the covariance of the initial background errors, the observations in the assimilation window, and the corresponding observation error covariances, the L_1-4D-Var problem looks for the MAP estimate of the true initial conditions by solving the following constrained optimization problem:

(33)

The augmented Lagrangian associated with (33) reads:

where the newly introduced matrices are model error scaling matrices.

Using the identity (22) the Lagrangian (4.1) becomes

The problem (33) can be solved by alternating minimization of the Lagrangian (4.1), as follows:

  1. Initialize the initial state.

  2. Run the forward model starting from the current initial state to obtain the state trajectory over the assimilation window.

  3. Initialize the auxiliary variables and the Lagrange multipliers at each observation time.

  4. Fix the auxiliary variables and the multipliers, and minimize the Lagrangian to solve for the initial state:

    This is equivalent to solving the traditional L_2-4D-Var problem

    with the modified observation vectors and observation error covariances (24c).

  5. Run the forward model starting from the updated initial state to obtain the new state trajectory.

  6. Fix the initial state and the multipliers, and find the auxiliary variables as