DATeS: A Highly-Extensible Data Assimilation Testing Suite v1.0

04/19/2017 ∙ by Ahmed Attia, et al. ∙ Argonne National Laboratory ∙ Virginia Polytechnic Institute and State University

A flexible and highly-extensible data assimilation testing suite, named DATeS, is described in this paper. DATeS aims to offer a unified testing environment that allows researchers to compare different data assimilation methodologies and understand their performance in various settings. The core of DATeS is implemented in Python and takes advantage of its object-oriented capabilities. The main components of the package (the numerical models, the data assimilation algorithms, the linear algebra solvers, and the time discretization routines) are independent of each other, which offers great flexibility to configure data assimilation applications. DATeS can interface easily with large third-party numerical models written in Fortran or in C, and with a plethora of external solvers.


1 Introduction

Data Assimilation (DA) refers to the fusion of information from different sources, including priors, predictions of a numerical model, and snapshots of reality, in order to produce an accurate description of the state of a physical system of interest daley1993atmospheric ; Kalnay_2002_book . DA research is of increasing interest for a wide range of fields including geoscience, numerical weather forecasting, atmospheric composition prediction, oil reservoir simulation, and hydrology. Two approaches have gained wide popularity for solving DA problems, namely the ensemble and variational approaches. The ensemble approach is rooted in statistical estimation theory and uses an ensemble of states to represent the underlying probability distributions. The variational approach, rooted in control theory, involves solving an optimization problem to obtain a single “analysis” as an estimate of the true state of the system of concern. The variational approach does not provide an inherent description of the uncertainty associated with the obtained analysis; however, it is less sensitive to physical imbalances prevalent in the ensemble approach. Hybrid methodologies designed to harness the best of the two worlds are an ongoing research topic.

Numerical experiments are an essential ingredient in the development of new DA algorithms. Implementation of numerical experiments for DA involves linear algebra routines, a numerical model along with time integration routines, and an assimilation algorithm. Currently available testing environments for DA applications are either very simplistic or very general, many are tied to specific models, and are usually completely written in a specific language. A researcher who wants to test a new algorithm with different numerical models written in different languages might have to re-implement his/her algorithm using the specific settings of each model. A unified testing environment for DA is important to enable researchers to explore different aspects of various filtering and smoothing algorithms with minimal coding effort.

The DA Research Section (DAReS) at the National Center for Atmospheric Research (NCAR) provides DART anderson2009DART as a community facility for ensemble filtering. The DART platform is currently the gold standard for ensemble-based Kalman filtering algorithm implementations. It is widely used in both research and operational settings, and interfaces to most important geophysical numerical models are available. DART employs a modular programming approach and adheres strictly to solid software engineering principles. DART has a long history and is continuously well maintained; new ensemble-based Kalman filtering algorithms that appear in the literature are routinely added to its library. Moreover, it gives access to practical, well-established parallel algorithms. DART is, by design, very general in order to support operational settings with many types of geophysical models. Using DART requires a non-trivial learning overhead. The fact that DART is mainly written in Fortran makes it a very efficient testing platform; however, this limits to some extent the ability to easily employ third-party implementations of various components.

Matlab programs are often used to test new algorithmic ideas due to the ease of implementation the language offers. A popular set of Matlab tools for ensemble-based DA algorithms is provided by the Nansen Environmental and Remote Sensing Center (NERSC), with the code available from NERSC_EnKF_Codel . A Matlab toolbox for uncertainty quantification (UQ) is UQLab marelli2014uqlab . Also, for newcomers to the DA field, a concise set of Matlab codes is provided through the pedagogical applied-mathematics reference law2015data . Matlab is generally a very useful environment for small-to-medium scale numerical experiments.

Python is a modern high-level programming language that gives the power of reusing existing pieces of code via inheritance, and thus its code is highly extensible. Moreover, it is a powerful scripting tool for scientific applications that can be used to glue legacy codes. This can be achieved by writing wrappers that act as interfaces. Building wrappers around existing C and Fortran code is a common practice in scientific research. Several automatic wrapper generation tools, such as SWIG beazley1996swig and F2PY peterson2009f2py , are available to create proper interfaces between Python and lower-level languages. While translating Matlab code to Python is a relatively easy task, one can also call Matlab functions directly from Python using the Matlab Engine API. Moreover, unlike Matlab, Python is freely available on virtually all Linux, MacOS, and Windows platforms, and therefore Python software is easily accessible and has excellent portability. When using Python instead of Fortran or C, one generally trades some computational performance for programming productivity. The performance penalty in scientific calculations is minimized by delegating computationally intensive tasks to compiled languages such as Fortran. This approach is followed by the scientific computing Python modules Numpy and Scipy, which enable writing computationally efficient scientific Python code. Moreover, Python is one of the easiest programming languages to learn, even without background knowledge about programming.
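As a small illustration of this delegation principle (not part of DATeS), the following sketch compares a pure-Python inner product with its Numpy counterpart; the function name `python_dot` is ours:

```python
import timeit

import numpy as np

def python_dot(a, b):
    """Inner product computed with a pure-Python loop."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# Both give the same answer; NumPy delegates the loop to compiled code.
assert abs(python_dot(a, b) - float(np.dot(a, b))) < 1e-5

t_py = timeit.timeit(lambda: python_dot(a, b), number=10)
t_np = timeit.timeit(lambda: np.dot(a, b), number=10)
print(f"pure Python: {t_py:.4f}s, NumPy: {t_np:.4f}s")
```

On typical hardware the compiled Numpy version is orders of magnitude faster, which is precisely why DATeS keeps heavy numerics in Numpy/Scipy or wrapped Fortran while the orchestration logic stays in Python.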

This paper presents a highly-extensible Python-based DA testing suite. The package is named DATeS, and is intended to be an open-source, extendable package positioned between the simple typical research-grade implementations and the professional implementation of DART, but with the capability to utilize large physical models. Researchers can use it as an experimental testing pad where they can focus on coding only their new ideas, without worrying much about the other pieces of the DA process. Moreover, DATeS can be effectively used for educational purposes, where students can use it as an interactive learning tool for DA applications. The code developed by a researcher in the DATeS framework should fit with all other pieces in the package with minimal-to-no effort, as long as the programmer follows the “flexible” rules of DATeS. As an initial illustration of its capabilities, DATeS has been used to implement and carry out the numerical experiments in attia2016clusterHMC ; moosavi2018machine ; attia_OED_Adaptive_2018 .

The paper is structured as follows. Section 2 reviews the DA problem and the most widely used approaches to solve it. Section 3 describes the architecture of the DATeS package. Section 4 takes a user-centric and example-based approach for explaining how to work with DATeS, and Section 5 demonstrates the main guidelines of contributing to DATeS. Conclusions and future development directions are discussed in Section 6.

2 Data Assimilation

This section gives a brief overview of the basic discrete-time formulations of both statistical and variational DA approaches. The formulation here is far from exhaustive, and is intended only as a quick review. For detailed discussions on the various DA mathematical formulations and algorithms, see for example asch2016data ; evensen2009data ; law2015data .

The main goal of a DA algorithm is to give an accurate representation of the “unknown” true state of a physical system, $\mathbf{x}^{\rm true}(t_k)$, at a specific time instant $t_k$. Assuming $\mathbf{x}_k$ is a discretized approximation of $\mathbf{x}^{\rm true}(t_k)$, the time-evolution of the physical system over the time interval $[t_k, t_{k+1}]$ is approximated by the discretized forward model:

$\mathbf{x}_{k+1} = \mathcal{M}_{t_k \to t_{k+1}}\left(\mathbf{x}_k\right). \qquad (1)$

The model-based simulations, represented by the model states, are inaccurate and must be corrected given noisy measurements of the physical system. Since the model state and observations are both contaminated with errors, a probabilistic formulation is generally followed. The prior distribution $P(\mathbf{x}_k)$ encapsulates the knowledge about the model state at time instant $t_k$ before additional information is incorporated. The likelihood function $P(\mathbf{y} \,|\, \mathbf{x}_k)$ quantifies the deviation of the prediction of model observations from the collected measurements. The corrected knowledge about the system is described by the posterior distribution, formulated by applying Bayes’ theorem:

$P(\mathbf{x}_k \,|\, \mathbf{y}) = \dfrac{P(\mathbf{y} \,|\, \mathbf{x}_k)\, P(\mathbf{x}_k)}{P(\mathbf{y})} \propto P(\mathbf{y} \,|\, \mathbf{x}_k)\, P(\mathbf{x}_k), \qquad (2)$

where $\mathbf{y}$ refers to the data (observations) to be assimilated. In the sequential filtering context $\mathbf{y}$ is a single observation, while in the smoothing context it generally stands for several observations to be assimilated simultaneously.

In the so-called “Gaussian framework”, the prior is assumed to be Gaussian, $\mathbf{x}_k \sim \mathcal{N}(\mathbf{x}_k^{\rm b}, \mathbf{B}_k)$, where $\mathbf{x}_k^{\rm b}$ is a prior (background) state, e.g. a model-based forecast, and $\mathbf{B}_k$ is the prior covariance matrix. Moreover, the observation errors are assumed to be Gaussian, $\mathcal{N}(\mathbf{0}, \mathbf{R}_k)$, with $\mathbf{R}_k$ being the observation error covariance matrix at time instant $t_k$, and observation errors are assumed to be uncorrelated from background errors. In practical applications, the dimension of the observation space is much smaller than the state space dimension, that is $m \ll n$, where $m$ and $n$ denote the observation and state space dimensions, respectively.

Consider assimilating information available about the system state at time instant $t_k$. The posterior distribution follows from (2) as:

$P(\mathbf{x}_k \,|\, \mathbf{y}_k) \propto \exp\left( -\tfrac{1}{2} \left\| \mathbf{x}_k - \mathbf{x}_k^{\rm b} \right\|^2_{\mathbf{B}_k^{-1}} - \tfrac{1}{2} \left\| \mathcal{H}_k(\mathbf{x}_k) - \mathbf{y}_k \right\|^2_{\mathbf{R}_k^{-1}} \right), \qquad (3)$

where the scaling factor $P(\mathbf{y}_k)$ is dropped. Here, $\mathcal{H}_k$ is an observation operator that maps a model state into the observation space.

Applying (2) or (3) in large-scale settings, even under the simplified Gaussian assumption, is not computationally feasible. In practice, a Monte-Carlo approach is usually followed. Specifically, ensemble-based sequential filtering methods such as the ensemble Kalman filter (EnKF) Tippett_2003_EnSRF ; Whitaker_2002a ; Burgers_1998_EnKF ; Houtekamer_1998a ; zupanski2008maximum ; sakov2012iterative ; Evensen_2003 ; Hamill_2001 ; Evensen_1994 ; Houtekamer_2001 ; smith2007cluster , and the maximum likelihood ensemble filter (MLEF) zupanski2005maximum use ensembles of states to represent the prior and the posterior distributions. A prior ensemble of model states, approximating the prior distribution, is obtained by propagating analysis states from a previous assimilation cycle by applying (1). Most of the ensemble-based DA methodologies work by transforming the prior ensemble into an ensemble of states collected from the posterior distribution, namely the analysis ensemble. The transformation in the EnKF framework is applied following the update equations of the well-known Kalman filter Kalman_1961 ; REKalman_1960a . An estimate of the true state of the system, i.e. the analysis, is obtained by averaging the analysis ensemble, while the posterior covariance is approximated by the covariance matrix of the analysis ensemble.
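As a sketch of the ensemble transformation just described, the following standalone snippet implements the analysis step of a stochastic (perturbed-observations) EnKF for a linear observation operator; all names are illustrative and do not correspond to the DATeS API:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(E, y, H, R):
    """Stochastic EnKF analysis step.

    E : (n, N) prior (forecast) ensemble, one member per column
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation error covariance
    Returns the (n, N) analysis ensemble.
    """
    n, N = E.shape
    x_mean = E.mean(axis=1, keepdims=True)
    A = E - x_mean                                 # ensemble anomalies
    B = (A @ A.T) / (N - 1)                        # sample covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman gain
    # Perturbed observations: one noisy copy of y per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return E + K @ (Y - H @ E)

# Tiny synthetic test: 3-dim state, observe the first two components
n, m, N = 3, 2, 50
H = np.eye(m, n)
R = 0.1 * np.eye(m)
E = rng.normal(size=(n, N)) + 5.0     # prior ensemble centered near 5
y = np.array([4.0, 6.0])
Ea = enkf_analysis(E, y, H, R)
print(Ea.mean(axis=1))  # analysis mean pulled toward the observation
```

The analysis mean (the average of `Ea`'s columns) plays the role of the analysis state, and the sample covariance of `Ea` approximates the posterior covariance, as in the paragraph above.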

The maximum a posteriori (MAP) estimate of the true state is the state that maximizes the posterior probability density function (PDF). Alternatively, the MAP estimate is the minimizer of the negative logarithm (negative-log) of the posterior PDF. The MAP estimate can be obtained by solving the following optimization problem:

$\min_{\mathbf{x}} \; \mathcal{J}(\mathbf{x}) = \tfrac{1}{2} \left\| \mathbf{x} - \mathbf{x}^{\rm b} \right\|^2_{\mathbf{B}^{-1}} + \tfrac{1}{2} \left\| \mathcal{H}(\mathbf{x}) - \mathbf{y} \right\|^2_{\mathbf{R}^{-1}}. \qquad (4)$

This formulates the three-dimensional variational (3D-Var) DA problem. Derivative-based optimization algorithms used to solve (4) require the derivative of the negative-log of the posterior PDF (4):

$\nabla_{\mathbf{x}} \mathcal{J}(\mathbf{x}) = \mathbf{B}^{-1} \left( \mathbf{x} - \mathbf{x}^{\rm b} \right) + \mathbf{H}^T \mathbf{R}^{-1} \left( \mathcal{H}(\mathbf{x}) - \mathbf{y} \right), \qquad (5)$

where $\mathbf{H}$ is the sensitivity (e.g. the Jacobian) of the observation operator $\mathcal{H}$ evaluated at $\mathbf{x}$. Unlike ensemble filtering algorithms, the optimal solution of (4) provides a single estimate of the true state, and does not provide a direct estimate of the associated uncertainty.
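A 3D-Var cost function and gradient of this form can be illustrated with a small self-contained sketch (independent of the DATeS implementation) that minimizes the cost for a linear observation operator and compares the result against the closed-form analysis; all matrices and values below are toy choices:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear setting: n = 3 state variables, m = 2 observations
B = np.eye(3)                      # background error covariance
R = np.diag([0.25, 0.25])          # observation error covariance
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])    # linear observation operator
x_b = np.array([1.0, 2.0, 3.0])    # background (prior) state
y = np.array([1.5, 1.0])           # observations

Bi, Ri = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    """3D-Var objective: background term plus observation term."""
    dx, dy = x - x_b, y - H @ x
    return 0.5 * dx @ Bi @ dx + 0.5 * dy @ Ri @ dy

def grad(x):
    """Gradient of the 3D-Var objective (H is its own Jacobian here)."""
    return Bi @ (x - x_b) - H.T @ Ri @ (y - H @ x)

res = minimize(cost, x_b, jac=grad, method="L-BFGS-B")

# For a linear H the analysis is available in closed form; compare:
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_exact = x_b + K @ (y - H @ x_b)
print(res.x, x_exact)
```

Because the problem is quadratic, the iterative minimizer and the closed-form Kalman-type analysis agree; for nonlinear operators only the iterative route remains available.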

Assimilating several observations simultaneously requires adding time as a fourth dimension to the DA problem. Let $P(\mathbf{x}_0)$ be the prior distribution of the system state at the beginning of a time window $[t_0, t_M]$ over which the observations $\{\mathbf{y}_1, \ldots, \mathbf{y}_M\}$ are distributed. Assuming the observations’ errors are temporally uncorrelated, the posterior distribution of the system state at the initial time of the assimilation window follows by applying Equation (2) as:

$P(\mathbf{x}_0 \,|\, \mathbf{y}_1, \ldots, \mathbf{y}_M) \propto P(\mathbf{x}_0) \prod_{k=1}^{M} P(\mathbf{y}_k \,|\, \mathbf{x}_k). \qquad (6)$

In the statistical approach, ensemble-based smoothers such as the ensemble Kalman smoother (EnKS) are used to approximate the posterior (6) based on an ensemble of states. Similar to the ensemble filters, the analysis ensemble generated by a smoothing algorithm can be used to provide an estimate of the posterior first-order moment. It can also be used to provide a flow-dependent ensemble covariance matrix to approximate the true posterior second-order moment.

The MAP estimate of the true state at the initial time of the assimilation window can be obtained by solving the following optimization problem:

$\min_{\mathbf{x}_0} \; \mathcal{J}(\mathbf{x}_0) = \tfrac{1}{2} \left\| \mathbf{x}_0 - \mathbf{x}_0^{\rm b} \right\|^2_{\mathbf{B}_0^{-1}} + \tfrac{1}{2} \sum_{k=1}^{M} \left\| \mathcal{H}_k(\mathbf{x}_k) - \mathbf{y}_k \right\|^2_{\mathbf{R}_k^{-1}}. \qquad (7)$

This is the standard formulation of the four-dimensional variational (4D-Var) DA problem. The solution of the 4D-Var problem is equivalent to the MAP of the smoothing posterior in the Gaussian framework. The Jacobian of (7) with respect to the model state at the initial time of the assimilation window reads:

$\nabla_{\mathbf{x}_0} \mathcal{J}(\mathbf{x}_0) = \mathbf{B}_0^{-1} \left( \mathbf{x}_0 - \mathbf{x}_0^{\rm b} \right) + \sum_{k=1}^{M} \mathbf{M}_{0,k}^T \mathbf{H}_k^T \mathbf{R}_k^{-1} \left( \mathcal{H}_k(\mathbf{x}_k) - \mathbf{y}_k \right), \qquad (8)$

where $\mathbf{M}_{0,k}^T$ is the adjoint of the tangent linear model operator, and $\mathbf{H}_k^T$ is the adjoint of the observation operator sensitivity. Similar to the 3D-Var case (4), the solution of Equation (7) provides a single best estimate (the analysis) of the system state, without providing a consistent description of the uncertainty associated with this estimate. The variational problem (7) is referred to as the strong-constraint formulation, where a perfect-model approach is considered. In the presence of model errors, an additional term is added, resulting in a weak-constraint formulation. A general practice is to assume that the model errors follow a Gaussian distribution $\mathcal{N}(\mathbf{0}, \mathbf{Q}_k)$, with $\mathbf{Q}_k$ being the model error covariance matrix at time instant $t_k$. In non-perfect-model settings, an additional term characterizing state deviations is added to the variational objectives (4) and (7). The model error term depends on the approach taken to solve the weak-constraint problem, and usually involves the model error probability distribution.
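The strong-constraint 4D-Var cost and its adjoint-based gradient can be sketched for a toy linear model, where the adjoint of the model is simply the matrix transpose; everything below (matrices, values, names) is illustrative and unrelated to the DATeS code base:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear forward model x_{k+1} = M x_k; its adjoint is just M.T
M = np.array([[0.9, 0.1], [-0.1, 0.9]])
H = np.eye(2)                       # observe the full state
B = np.eye(2)                       # background error covariance
R = 0.1 * np.eye(2)                 # observation error covariance
x_b = np.array([1.0, 0.0])          # background at the initial time
Bi, Ri = np.linalg.inv(B), np.linalg.inv(R)

# Synthetic observations at k = 1..3, generated from a "true" trajectory
x_true = np.array([1.2, -0.3])
obs, x = [], x_true.copy()
for _ in range(3):
    x = M @ x
    obs.append(H @ x)

def cost(x0):
    """Strong-constraint 4D-Var objective over the assimilation window."""
    J, x = 0.5 * (x0 - x_b) @ Bi @ (x0 - x_b), x0.copy()
    for y in obs:
        x = M @ x
        d = y - H @ x
        J += 0.5 * d @ Ri @ d
    return J

def grad(x0):
    """Gradient via a forward sweep then a backward (adjoint) sweep."""
    traj, x = [], x0.copy()
    for _ in obs:
        x = M @ x
        traj.append(x.copy())
    lam = np.zeros_like(x0)
    for y, x in zip(reversed(obs), reversed(traj)):
        lam = M.T @ (lam - H.T @ Ri @ (y - H @ x))
    return Bi @ (x0 - x_b) + lam

res = minimize(cost, x_b, jac=grad, method="L-BFGS-B")
print(res.x)  # analysis close to x_true (observations are noise-free)
```

The backward sweep accumulates the observation-misfit forcings through the transposed model, which is exactly the role played by the adjoint model in operational 4D-Var.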

In idealized settings, where the model is linear, the observation operator is linear, and the underlying probability distributions are Gaussian, the posterior is also Gaussian; however, this is rarely the case in real applications. In nonlinear or non-Gaussian settings, the ultimate objective of a DA algorithm is to sample all probability modes of the posterior distribution, rather than just producing a single estimate of the true state. Algorithms capable of accommodating non-Gaussianity are still limited, and have not been successfully tested in large-scale settings.

Particle filters (PF) doucet2001introduction ; gordon1993novel ; kitagawa1996monte ; van2009particle are an attractive family of nonlinear and non-Gaussian methods. This family of filters is known to suffer from filtering degeneracy, especially in large-scale systems. Despite the fact that PFs do not force restrictive assumptions on the shape of the underlying probability distribution functions, they are not generally considered to be efficient without expensive tuning. While particle filtering algorithms have not yet been used operationally, their potential applicability for high-dimensional problems is illustrated for example by rebeschini2015can ; poterjoy2016localized ; llopis2017particle ; beskos2017stable ; potthast2017localised ; ades2015equivalent ; vetra2018state . Another approach for non-Gaussian DA is to employ a Markov Chain Monte-Carlo (MCMC) algorithm to directly sample the probability modes of the posterior distribution. This, however, requires an accurate representation of the prior distribution, which is generally intractable in this context. Moreover, following a relaxed, e.g. Gaussian, prior assumption in nonlinear settings might be restrictive when a DA procedure is applied sequentially over more than one assimilation window. This is mainly due to the fact that the prior distribution is a nonlinear transformation of the posterior of a previous assimilation cycle. Recently, an MCMC family of fully non-Gaussian DA algorithms that works by sampling the posterior was developed in attia2015hmcfilter ; attia2015hmcsampling ; attia2015hmcsmoother ; attia2016reducedhmcsmoother ; attia2016clusterHMC ; attia2016PhD_advanced_sampling . This family follows a Hamiltonian Monte-Carlo (HMC) approach for sampling the posterior; however, the HMC sampling scheme can easily be replaced with other algorithms suitable for sampling complicated, and potentially multimodal, probability distributions in high-dimensional state spaces. Relaxing the Gaussian prior assumption is addressed in attia2016clusterHMC , where an accurate representation of the prior is constructed by fitting a Gaussian Mixture Model (GMM) to the forecast ensemble.

DATeS provides standard implementations of several flavors of the algorithms mentioned here. One can easily explore, test, or modify the provided implementations in DATeS, and add more methodologies. As discussed later, one can use existing components of DATeS, such as the implemented numerical models, or add new implementations to be used by other components of DATeS. However, it is worth mentioning that the initial version of DATeS (v1.0) is not meant to provide implementations of all state-of-the-art DA algorithms; see for example vetra2018state . Rather, DATeS provides an initial seed of example implementations that can be discussed and enhanced by the ever-growing community of DA researchers and experts. The next section provides a brief technical summary of the main components of DATeS v1.0.

3 DATeS Implementation

DATeS seeks to capture, in an abstract form, the common elements shared by most DA applications and solution methodologies. For example, the majority of the ensemble filtering methodologies share nearly all the steps of the forecast phase, and a considerable portion of the analysis step. Moreover, all the DA applications involve common essential components such as linear algebra routines, model discretization schemes, and analysis algorithms.

Existing DA solvers have been implemented in different languages. For example, high-performance languages such as Fortran and C have been (and are still being) extensively used to develop numerically efficient model implementations, and linear algebra routines. Both Fortran and C allow for efficient parallelization because these two languages are supported by common libraries designed for distributed memory systems such as MPI, and shared memory libraries such as Pthreads and OpenMP. To make use of these available resources and implementations, one has to either rewrite all the different pieces in the same programming language, or have proper interfaces between the different new and existing implementations.

The philosophy behind the design of DATeS is that “a unified DA testing suite has to be open-source, easy to learn, and able to reuse and extend available code with minimal effort”. Such a suite should allow for easy interfacing with external third-party code written in various languages, e.g., linear algebra routines written in Fortran, analysis routines written in Matlab, or “forecast” models written in C. This should help the researchers to focus their energy on implementing and testing their own analysis algorithms. The next section details several key aspects of the DATeS implementation.

3.1 DATeS architecture

The DATeS architecture abstracts the four generic components of any DA system and provides a set of modules for each. These components are the linear algebra routines, a forecast computer model that includes the discretization of the physical processes, the error models, and the analysis methodologies. In what follows, we discuss each of these building blocks in more detail, in the context of DATeS. We start with an abstract discussion of each of these components, followed by technical descriptions.

Linear algebra routines

The linear algebra routines are responsible for handling the data structures representing essential entities such as model state vectors, observation vectors, and covariance matrices. This includes manipulating an instance of the corresponding data. For example, a model state vector should provide methods for accessing/slicing and updating entries of the state vector, a method for adding two state vector instances, and methods for applying specific scalar operations on all entries of the state vector such as evaluating the square root or the logarithm.

Forecast model

The forecast computer model simulates a physical phenomenon of interest, such as the atmosphere, ocean dynamics, or volcanic activity. This typically involves approximating the physical phenomenon using a gridded computer model. The implementation should provide methods for creating and manipulating state vectors and state-size matrices. The computer model should also provide methods for creating and manipulating observation vectors and observation-size matrices. The observation operator responsible for mapping state-size vectors into observation-size vectors should be part of the model implementation as well. Moreover, simulating the evolution of the computer model in time is carried out using numerical time integration schemes. The time integration scheme can be model-specific, and is usually written in a high-performance language for efficiency.

Error models

It is common in DA applications to assume a perfect forecast model, a case where the model is deterministic rather than stochastic. However, the background and observation errors need to be treated explicitly, as they are essential in the formulation of nearly all DA methodologies. We refer to the DATeS entity responsible for creating and managing random vectors, sampled from a specific probability distribution function, as the “error model”. For example, a Gaussian error model would be completely set up by providing the first- and second-order moments of the probability distribution it represents.

Analysis algorithms

Analysis algorithms manipulate model states and observations by applying widely used mathematical operations to perform inference operations. The popular DA algorithms can be classified into filtering and smoothing categories. An assimilation algorithm, a filter or a smoother, is implemented to carry out a single DA cycle. For example, in the filtering framework, an assimilation cycle refers to assimilating data at a single observation time by applying a forecast and an analysis step. On the other hand, in the smoothing context, several observations available at discrete time instances within an assimilation window are processed simultaneously in order to update the model state at a given time over that window; a smoother is designed to carry out the assimilation procedure over a single assimilation window. For example, EnKF and 3D-Var fall in the former category, while EnKS and 4D-Var fall in the latter.

Assimilation experiments

In typical numerical experiments, a DA solver is applied for several consecutive cycles to assess its long-term performance. We refer to the procedure of applying the solver to several assimilation cycles as the “assimilation process”. The assimilation process involves carrying out the forecast and analysis cycles repeatedly, creating synthetic observations or retrieving real observations, updating the reference solution when available, and saving experimental results between consecutive assimilation cycles.
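Such an assimilation-process loop can be sketched as follows; `ToyFilter`, `assimilation_process`, and the configuration keys are hypothetical stand-ins, not the DATeS classes:

```python
import numpy as np

class ToyFilter:
    """Stand-in for a DATeS-style filtering object (hypothetical API)."""

    def __init__(self, model, config):
        self.model, self.config = model, config
        self.state = config["initial_state"]

    def forecast(self):
        self.state = self.model(self.state)

    def analysis(self, observation):
        # Naive nudging toward the observation; placeholder for a real update
        w = self.config["nudging_weight"]
        self.state = (1 - w) * self.state + w * observation

def assimilation_process(filter_obj, observations):
    """Repeat forecast/analysis cycles and collect the results."""
    results = []
    for y in observations:
        filter_obj.forecast()
        filter_obj.analysis(y)
        results.append(filter_obj.state.copy())
    return results

model = lambda x: 0.5 * x                      # trivial "forecast model"
f = ToyFilter(model, {"initial_state": np.array([10.0]),
                      "nudging_weight": 0.8})
out = assimilation_process(f, [np.array([1.0])] * 5)
print(out[-1])  # state driven toward the repeated observation
```

In DATeS the equivalent driver object additionally handles synthetic-observation generation, reference-solution updates, and output between cycles, as described above.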

DATeS layout

The design of DATeS takes into account the distinction between these components, and separates them in the design following an Object-Oriented Programming (OOP) approach. A general description of the DATeS architecture is given in Figure 1.

Figure 1: Diagram of the DATeS architecture.

The enumeration in Figure 1 (numbers from 1 to 4 in circles) indicates the order in which essential DATeS objects should be created. Specifically, one starts with an instance of a model. Once a model object is created, an assimilation object is instantiated, and the model object is passed to it. An assimilation process object is then instantiated, with a reference to the assimilation object passed to it. The assimilation process object iterates over the consecutive assimilation cycles and saves and/or outputs the results, which can optionally be analyzed later using visualization modules.

All DATeS components are independent, so as to maximize flexibility in experimental design. However, each newly added component must comply with DATeS rules in order to guarantee interoperability with the other pieces in the package. DATeS provides base classes with definitions of the necessary methods. A new class added to DATeS, for example to implement a specific new model, has to inherit the appropriate model base class, and provide implementations of the methods inherited from that base class.

In order to maximize both flexibility and generalizability, we opted to handle configurations, inputs, and outputs of DATeS objects using “configuration dictionaries”. Parameters used to instantiate an object are passed to the class constructor in the form of key-value pairs in these dictionaries. See Section 4 for examples of how to properly configure and instantiate DATeS objects.
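A minimal sketch of the configuration-dictionary pattern is given below; the class and its keys are purely illustrative, not the actual DATeS options:

```python
class Model:
    """Toy object configured via a DATeS-style configuration dictionary.

    The keys used here are illustrative, not the actual DATeS options.
    """

    _def_configs = {"num_prognostic_variables": 40,
                    "model_name": "toy-model",
                    "create_background_errors_correlations": False}

    def __init__(self, model_configs=None):
        configs = dict(self._def_configs)     # start from the defaults
        configs.update(model_configs or {})   # user keys override defaults
        self.model_configs = configs

m = Model(model_configs={"num_prognostic_variables": 3})
print(m.model_configs["num_prognostic_variables"])  # → 3
print(m.model_configs["model_name"])                # → toy-model
```

The advantage of this pattern is that new options can be added to a class without breaking existing calls: unspecified keys simply fall back to the defaults.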

3.2 Linear algebra classes

The main linear algebra data structures essential for almost all DA aspects are a) model state-size and observation-size vectors (also named state and observation vectors, respectively), and b) state-size and observation-size matrices (also named state and observation matrices, respectively). A state matrix is a square matrix of order equal to the model state space dimension. Similarly, an observation matrix is a square matrix of order equal to the model observation space dimension. DATeS makes a distinction between state and observation linear algebra data structures. It is important to recall here that in large-scale applications, full state covariance matrices cannot be explicitly constructed in memory. Full state matrices should only be considered for relatively small problems, and for experimental purposes. In large-scale settings, where building state matrices is infeasible, low-rank approximations, or sparse representations of the covariance matrices, could be incorporated. DATeS provides simple classes to construct sparse state and observation matrices for guidance.

Third-party linear algebra routines can have widely different interfaces and underlying data structures. For reusability, DATeS provides unified interfaces for accessing and manipulating these data structures using Python classes. The linear algebra classes are implemented in Python. The functionalities of the associated methods can be written either in Python, or in lower level languages using proper wrappers. A class for a linear algebra data structure enables updating, slicing, and manipulating an instance of the corresponding data structures. For example, a model state vector class provides methods that enable accessing/slicing and updating entries of the state vector, a method for adding two state vector instances, and methods for applying specific scalar operations on all entries of the state vector such as evaluating the square root or the logarithm. Once an instance of a linear algebra data structure is created, all its associated methods are accessible via the standard Python dot operator. The linear algebra base classes provided in DATeS are summarized in Table 1.

Linear algebra base class DATeS Implementation
state vector objects with access to all related vector operations state_vector_base.StateVectorBase
observation vector objects with related vector operations observation_vector_base.ObservationVectorBase
state matrix objects with methods implementing necessary matrix operations state_matrix_base.StateMatrixBase
observation matrix objects providing methods for related matrix operations observation_matrix_base.ObservationMatrixBase
Table 1: Linear algebra base classes provided by the initial version of DATeS (v1.0)

Python special methods are provided in a linear algebra class to enable iterating over the entries of a linear algebra data structure. Examples of these special methods include __getitem__( ), __setitem__( ), __getslice__( ), and __setslice__( ). These operators make it feasible to standardize working with linear algebra data structures implemented in different languages or saved in memory in different forms.

DATeS provides linear algebra data structures represented as Numpy nd-arrays, and a set of Numpy-based classes to manipulate them. Moreover, a Scipy-based implementation of sparse matrices is provided, and can be used efficiently in conjunction with both sparse and non-sparse data structures. These classes, shown in Figure 2, provide templates for designing more sophisticated extensions of the linear algebra classes.

Figure 2: Python implementation of state vector, observation vector, state matrix, and observation matrix data structures. Both dense and sparse state and observation matrices are provided.
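To illustrate how such special methods standardize access to a Numpy-backed data structure, consider the following sketch; the class is a toy stand-in, not one of the DATeS linear algebra classes:

```python
import numpy as np

class NumpyStateVector:
    """Illustrative Numpy-backed state vector (names are hypothetical)."""

    def __init__(self, data):
        self._raw = np.asarray(data, dtype=float)

    # Special methods standardize entry access regardless of the backend
    def __getitem__(self, idx):
        return self._raw[idx]

    def __setitem__(self, idx, value):
        self._raw[idx] = value

    def __len__(self):
        return self._raw.size

    def add(self, other):
        """Return a new state vector holding the entrywise sum."""
        return NumpyStateVector(self._raw + other._raw)

    def sqrt(self):
        """Apply a scalar operation to every entry."""
        return NumpyStateVector(np.sqrt(self._raw))

v = NumpyStateVector([1.0, 4.0, 9.0])
w = v.add(v)
print(w[1], v.sqrt()[2])  # 8.0 3.0
```

Code written against this interface (indexing, `add`, `sqrt`) would not need to change if the internal `_raw` array were replaced by, say, a wrapped Fortran array, which is the point of the abstraction.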

3.3 Forecast model classes

Each numerical model needs an associated class providing methods to access its functionality. The unified forecast model class design in DATeS provides the essential tasks that can be carried out by the model implementation. Each model class in DATeS has to inherit the model base class: models_base.ModelBase or a class derived from it. A numerical model class is required to provide access to the underlying linear algebra data structures and time integration routines. For example, each model class has to provide the method state_vector( ) that creates an instance of a state vector class, and the method integrate_state( ) that takes a state vector instance and time integration settings, and returns a trajectory (list of states) evaluated at specified future times. The base class provided in DATeS contains definitions of all the methods that need to be supported by a numerical model class. The package DATeS v1.0 includes implementations of several popular test models summarized in Table 2.

While some linear algebra and time integration routines are model-specific, DATeS also implements general-purpose linear algebra classes and time integration routines that can be reused by newly created models. For example, the general integration class FatODE_ERK_FWD is based on FATODE FATODE_2014 explicit Runge-Kutta (ERK) forward propagation schemes.

Forecast model DATeS Implementation
The 3-variable Lorenz model lorenz1963deterministic lorenz_models.Lorenz3
Lorenz96 model lorenz1996predictability lorenz_models.Lorenz96
Cartesian shallow-water equations model Gus1971 ; navon1986gustaf cartesian_swe_model.CartesianSWE
Quasi-geostrophic (QG) model with double-gyre wind forcing and bi-harmonic friction sakov2008deterministic , written in Fortran with an F2PY wrapper qg_1p5_model.QG1p5
Table 2: Forecast models provided by the initial version of DATeS (v1.0)
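The model-class contract described in this section can be sketched with a toy three-variable Lorenz model; the class below borrows the method names from the text (state_vector, integrate_state) but is illustrative only, not the DATeS lorenz_models implementation:

```python
import numpy as np

class ToyLorenz63:
    """Sketch of a DATeS-style model class (illustrative, not DATeS code)."""

    def __init__(self, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        self.sigma, self.rho, self.beta = sigma, rho, beta

    def state_vector(self):
        """Create an instance of the model's state data structure."""
        return np.zeros(3)

    def _rhs(self, x):
        """Right-hand side of the Lorenz-63 ODE system."""
        return np.array([self.sigma * (x[1] - x[0]),
                         x[0] * (self.rho - x[2]) - x[1],
                         x[0] * x[1] - self.beta * x[2]])

    def integrate_state(self, x0, tspan, dt=1e-3):
        """Return the trajectory at the requested times (fixed-step RK4)."""
        traj, x, t = [x0.copy()], x0.copy(), tspan[0]
        for t_next in tspan[1:]:
            while t < t_next - 1e-12:
                h = min(dt, t_next - t)
                k1 = self._rhs(x)
                k2 = self._rhs(x + 0.5 * h * k1)
                k3 = self._rhs(x + 0.5 * h * k2)
                k4 = self._rhs(x + h * k3)
                x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
                t += h
            traj.append(x.copy())
        return traj

model = ToyLorenz63()
x0 = model.state_vector()
x0[:] = [1.0, 1.0, 1.0]
traj = model.integrate_state(x0, tspan=[0.0, 0.1, 0.2])
print(len(traj))  # 3 states: the initial time plus the two requested times
```

Any assimilation object that only calls state_vector( ) and integrate_state( ) would work unchanged with this toy model or with a wrapped Fortran model exposing the same two methods.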

3.4 Error models classes

In many DA applications the errors are additive, and are modeled by normally distributed random variables with zero mean and a given (or unknown) covariance matrix. DATeS implements Numpy-based functionality for background, observation, and model errors, which serves as a guideline for more sophisticated problem-dependent error models. The Numpy-based error models in DATeS are implemented in the module error_models_numpy. These classes are derived from the base class ErrorsModelBase and provide methods to sample the underlying probability distribution, evaluate the value of the density function, and generate statistics of the error variables based on model trajectories and the settings of the error model. Note that, while DATeS provides implementations for Gaussian error models, the Gaussian assumption itself is not restrictive: following the same structure, or by inheritance, one can easily create non-Gaussian error models with minimal effort. Moreover, the Gaussian error models provided by DATeS support both correlated and uncorrelated errors, and construct the covariance matrices accordingly. The covariance matrices are stored in appropriate sparse formats, unless a dense matrix is explicitly requested. Since these covariance matrices are either state or observation matrices, they provide access to all proper linear algebra routines. This means that code written with access to an observation error model and its components works for both correlated and uncorrelated observations.
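As a rough illustration of the functionality just described (sampling the distribution and evaluating the density), consider the following minimal Gaussian error model. The class and method names are illustrative, not the DATeS API; the real classes derive from ErrorsModelBase.

```python
import numpy as np

class GaussianErrorModel:
    """A toy additive Gaussian error model e ~ N(0, R)."""

    def __init__(self, covariance):
        self.covariance = np.atleast_2d(np.asarray(covariance, dtype=float))
        # Cholesky factor of R, used for sampling
        self._chol = np.linalg.cholesky(self.covariance)
        self._size = self.covariance.shape[0]

    def generate_noise_vec(self, rng=None):
        # Sample e = L z with z ~ N(0, I), so that e ~ N(0, R)
        rng = np.random.default_rng(0) if rng is None else rng
        return self._chol @ rng.standard_normal(self._size)

    def log_density(self, e):
        # log N(e; 0, R) up to the normalization constant
        return -0.5 * (e @ np.linalg.solve(self.covariance, e))

R = np.diag([4.0, 4.0])                 # uncorrelated observation errors
obs_err = GaussianErrorModel(R)
noise = obs_err.generate_noise_vec()
```

Swapping the diagonal R for a full matrix changes nothing in the calling code, which is the point of the uniform error-model interface.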

3.5 Assimilation classes

Assimilation classes are responsible for carrying out a single assimilation cycle (i.e., over one assimilation window) and optionally printing or writing the results to files. For example, an EnKF object should be designed to carry out one cycle consisting of the “forecast” and the “analysis” steps. The basic assimilation objects in DATeS are a filtering object, a smoothing object, and a hybrid object. DATeS provides the common functionalities for filtering objects in the base class filters_base.FiltersBase; all derived filtering classes should have it as a super class. Similarly, smoothing objects are to be derived from the base class smoothers_base.SmoothersBase. A hybrid object can inherit methods from both filtering and smoothing base classes.

A model object is passed to the assimilation object constructor via configuration dictionaries to give the assimilation object access to the model-based data structures and functionalities. The settings of the assimilation object, such as the observation time, the assimilation time, the observation vector, and the forecast state or ensemble, are also passed to the constructor upon instantiation, and can be updated during runtime.

Table 3 summarizes the filters implemented in the initial version of the package, DATeS v1.0. Each of these filtering classes can be instantiated and run with any of the DATeS model objects. Moreover, DATeS provides simplified implementations of both the 3D-Var and 4D-Var assimilation schemes. The objective function, e.g. the negative log-posterior, and the associated gradient are implemented inside the smoother class, and require the tangent linear model to be implemented in the passed forecast model class. The adjoint is evaluated using FATODE following a checkpointing approach, and the optimization step is carried out using Scipy optimization functions. The settings of the optimizer can be fine-tuned via the configuration dictionaries. The 3D- and 4D-Var implementations provided by DATeS are experimental, and are provided as a proof of concept. The variational aspects of DATeS are being continuously developed and will be made available in future releases of the package.

Filtering Algorithm | DATeS Implementation
standard Kalman filter equations Kalman_1961 ; REKalman_1960a | KF.KalmanFilter
perturbed-observation (stochastic) EnKF Burgers_1998_EnKF ; Houtekamer_1998a | EnKF.EnKF
deterministic EnKF sakov2008deterministic | EnKF.DEnKF
ensemble transform Kalman filter (ETKF) Bishop_2001_ETKF | EnKF.ETKF
local least squares EnKF Anderson_2003_least-squares | EnKF.LLSEnKF
Hybrid Monte-Carlo (HMC) sampling filter attia2015hmcfilter | hmc_filter.HMCFilter
family of cluster sampling filters attia2016clusterHMC | multi_chain_mcmc_filter.MultiChainMCMC
a vanilla implementation of the particle filter gordon1993novel | PF.PF
Table 3: DA filtering routines provided by the initial version of DATeS (v1.0)

Covariance inflation and localization are ubiquitously used in ensemble-based assimilation systems to counteract the effects of using ensembles of finite size. Specifically, covariance inflation counteracts the loss of variance incurred in the analysis step by magnifying the spread of the ensemble members around their mean by a predefined inflation factor. The inflation factor can be a space-time-independent scalar, or it can vary over space and/or time. Localization, on the other hand, mitigates the accumulation of long-range spurious correlations. Distance-based covariance localization is widely used in geoscientific applications, where correlations are damped out with increasing distance between grid points. The performance of the assimilation algorithm depends critically on the tuning of the parameters of these techniques. DATeS provides basic utility functions for carrying out inflation and localization (see Section 3.7) that can be used in different forms, depending on the specific implementation of the assimilation algorithms. The work in attia_OED_Adaptive_2018 reviews inflation and localization and presents a framework for adaptive tuning of the parameters of these techniques, with all implementations and numerical experiments carried out entirely in DATeS.

3.6 Assimilation process classes

A common practice in sequential DA experimental settings is to repeat an assimilation cycle over a given timespan, with similar or different settings at each assimilation window. For example, one may repeat a DA cycle on several time intervals with different output settings, e.g. to save and print results only every fixed number of iterations. Alternatively, the DA process can be repeated over the same time interval with different assimilation settings to test and compare results. We refer to this procedure as an “assimilation process”. Examples of numerical comparisons carried out using DATeS can be found in attia2016clusterHMC ; attia_OED_Adaptive_2018 , and in Section 4.6.

assimilation_process_base.AssimilationProcess is the base class from which all assimilation process objects are derived. When instantiating an assimilation process object, the assimilation object, the observations, and the assimilation time instances are passed to the constructor through configuration dictionaries. As a result, the assimilation process object has access to the model and its associated data structures and functionalities through the assimilation object.

The assimilation process object either retrieves real observations, or creates synthetic observations at the specified time instances of the experiment. Figure 3 summarizes DATeS assimilation process functionality.

Figure 3: The assimilation process in DATeS.

3.7 Utility modules

Utility modules provide additional functionality; for example, the _utility_configs module provides functions for reading, writing, validating, and aggregating configuration dictionaries. In DA, an ensemble is a collection of state or observation vectors. Ensembles are represented in DATeS as lists of either state or observation vector objects. The utility modules include functions responsible for iterating over ensembles to evaluate ensemble-related quantities of interest, such as the ensemble mean, the ensemble variance/covariance, and the covariance trace. Covariance inflation and localization are critically important for nearly all ensemble-based assimilation algorithms. DATeS abstracts tools and functions common to assimilation methods, such as inflation and localization, so that they can be easily imported and reused by newly developed assimilation routines. The utility modules in DATeS provide methods to carry out these procedures in various modes, including state-space and observation-space localization. Moreover, DATeS supports space-dependent covariance localization, i.e. it allows the localization radii and inflation factors to vary over both space and time.

Ensemble-based assimilation algorithms often require a matrix representation of an ensemble of model states. In DATeS, ensembles are represented as lists of states, rather than as full matrices of size state dimension by ensemble size. Nevertheless, DATeS provides utility functions capable of efficiently calculating ensemble statistics, including ensemble variances and the covariance trace. Moreover, DATeS provides matrix-free implementations of the operations that require ensembles of states, such as a matrix-vector product in which the matrix involved is a representation of an ensemble of states.
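A matrix-free treatment of an ensemble stored as a list of states might look like the following sketch (function names are illustrative, not the DATeS API). The covariance matrix B is never formed; its trace and its action on a vector are assembled from the per-member anomalies.

```python
import numpy as np

def ensemble_mean(ensemble):
    return sum(ensemble) / len(ensemble)

def covariance_trace(ensemble):
    """Trace of the sample covariance, from per-member anomalies only."""
    mean = ensemble_mean(ensemble)
    return sum(np.sum((member - mean)**2) for member in ensemble) / (len(ensemble) - 1)

def covariance_matvec(ensemble, v):
    """Compute B @ v with B = (1/(m-1)) * sum_k a_k a_k^T, a_k the anomalies."""
    mean = ensemble_mean(ensemble)
    result = np.zeros_like(v, dtype=float)
    for member in ensemble:
        anomaly = member - mean
        result += anomaly * (anomaly @ v)   # a_k (a_k . v): no matrix is formed
    return result / (len(ensemble) - 1)

ens = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 1.0])]
```

For a state dimension n and ensemble size m, the matvec above costs O(n m) memory and time, versus O(n^2) for an explicit covariance matrix.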

The module dates_utility provides access to all utility functions in DATeS. In fact, this module wraps the functionality provided by several other specialized utility routines, including the sample given in Table 4. The utility module also provides other general functions, such as those handling file downloading and file I/O. For a list of all functions in the utility module, see the User’s Manual DATeS_manual_Attia .

Module name | Functionality provided
_utility_configs | handles configuration dictionaries, including aggregating, reading, and writing configuration dictionaries
_utility_stat | evaluates statistical quantities such as moments of an ensemble (e.g. a list of model state or observation objects)
_utility_machine_learning | carries out machine learning algorithms such as fitting a Gaussian mixture model to an ensemble
_utility_data_assimilation | carries out general DA tasks such as ensemble inflation, covariance localization, and evaluating performance metrics including root-mean-squared errors (RMSE) and rank histogram uniformity measures
Table 4: A sample of the modules wrapped by the main utility module dates_utility.

4 Using DATeS

The sequence of steps needed to run a DA experiment in DATeS is summarized in Figure 4. This section is devoted to explaining these steps in the context of a working example that uses the QG-1.5 model sakov2008deterministic and carries out DA using a standard EnKF formulation.

Figure 4: The sequence of essential steps required in order to run a DA experiment in DATeS.

4.1 Step 1: initialize DATeS

Initializing a DATeS run involves defining the root directory of DATeS as an environment variable, and adding the paths of the DATeS source modules to the system path. This can be done by executing the code Snippet in Figure 5 in the DATeS root directory.

Figure 5: Initialize the DATeS run.
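The initialization boils down to two operations, sketched below. The environment-variable name and directory layout in this sketch are illustrative, not necessarily the exact ones used by DATeS.

```python
import os
import sys

# illustrative location of a DATeS checkout
dates_root = os.path.expanduser("~/dates")

# 1) expose the root directory of DATeS as an environment variable
os.environ["DATES_ROOT_PATH"] = dates_root

# 2) add the paths of the DATeS source modules to the system path,
#    so that model, filter, and utility modules can be imported directly
for subdir in ("src", "src/models", "src/filters", "src/utility"):
    path = os.path.join(dates_root, subdir)
    if path not in sys.path:
        sys.path.insert(0, path)
```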

4.2 Step 2: create a model object

QG-1.5 is a nonlinear 1.5-layer reduced-gravity QG model with double-gyre wind forcing and bi-harmonic friction sakov2008deterministic .

Quasi-geostrophic model

This model is a numerical approximation of the equations:

    q_t = ψ_x − ε J(ψ, q) − A Δ³ψ + 2π sin(2πy),
    q = Δψ − Fψ,
    J(ψ, q) ≡ ψ_x q_y − ψ_y q_x,      (9)

where ψ is the surface elevation and q is the potential vorticity. The values of the model coefficients in (9) are obtained from sakov2008deterministic , and are described as follows: F = 1600, ε = 10⁻⁵, and A = 2×10⁻¹². The domain of the model is a 1×1 [space units] square, with 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, and is discretized by a grid of size 129×129 (including boundaries). The boundary conditions used are ψ = Δψ = Δ²ψ = 0. The dimension of the model state vector is 16641. This is a synthetic model where the scales are not relevant, and we use generic space, time, and solution amplitude units. The time integration scheme used is the fourth-order Runge-Kutta scheme with a time step of 1.25 [time units]. The model forward propagation core is implemented in Fortran. The QG-1.5 model is run over a large number of model time steps, with observations made available at fixed intervals of model steps.

Observations and observation operators

We use a standard linear operator to observe components of ψ, with a fixed observation error variance [units squared]. The observed components are uniformly distributed over the state vector length, with an offset that is randomized at each filtering cycle. Synthetic observations are obtained by adding white noise to measurements of the sea surface height (SSH) extracted from a reference model run with lower viscosity. To create a QG model object with these specifications, one executes the code Snippet in Figure 6.

Figure 6: Create the QG model object.

4.3 Step 3: create an assimilation object

One now proceeds to create an assimilation object. We consider a deterministic implementation of EnKF (DEnKF), with the ensemble size and parameters tuned optimally as suggested in sakov2008deterministic . Covariance localization is applied via a Hadamard product Houtekamer_2001 . The localization function is Gaspari-Cohn Gaspari_1999_correlation with a localization radius measured in grid cells. The localization is carried out in the observation space by decorrelating both HB and HBHᵀ, where B is the ensemble covariance matrix and H is the linearized observation operator. In the present setup, the observation operator is linear, and thus coincides with its linearization H.

Ensemble inflation is applied to the analysis ensemble of anomalies at the end of each assimilation cycle of DEnKF, with a fixed inflation factor. The code Snippet in Figure 7 creates a DEnKF filtering object with these settings.

Figure 7: Create a DEnKF filtering object.

Most of the methods associated with the DEnKF object will raise exceptions if invoked immediately at this point. This is because several keys in the filter configuration dictionary, such as the observation, the forecast time, the analysis time, and the assimilation time, are not yet appropriately assigned. DATeS allows creating assimilation objects without these options to maximize flexibility. A convenient approach is to create an assimilation process object that, among other tasks, can properly update the filter configurations between consecutive assimilation cycles.

4.4 Step 4: create an assimilation process

We now test DEnKF with the QG model by repeating the assimilation cycle over a given timespan, with a fixed offset (in time units) between each two consecutive observation/assimilation times. An initial ensemble is created by the numerical model object. An experimental timespan is set for observations and assimilation. Here, the assimilation time instances da_checkpoints are the same as the observation time instances obs_checkpoints, but they can in general be different, leading to either synchronous or asynchronous assimilation settings. This is implemented in the code Snippet in Figure 8.

Figure 8: Create a filter process object to carry out DEnKF filtering using the QG model.

Here, experiment is responsible for creating synthetic observations at all time instances defined by obs_checkpoints (except the initial time). To create synthetic observations, the truth at the initial time is obtained from the model and is passed to the filtering process object experiment, which in turn propagates it forward in time to the assimilation time points.

Finally, the assimilation experiment is executed by running code Snippet in Figure 9.

Figure 9: Run the filtering experiment.

4.5 Experiment results

The filtering results are printed to screen and are saved to files at the end of each assimilation cycle as instructed by the output_configs dictionary of the object experiment. The output directory structure is controlled via the options in the output configurations dictionary output_configs of the FilteringProcess object, i.e. experiment. All results are saved in appropriate sub-directories under a main folder named Results in the root directory of DATeS. We will refer to this directory henceforth as DATeS results directory.

The default behavior of a FilteringProcess object is to create a folder named Filtering_Results in the DATeS results directory, and to instruct the filter object to save/output the results every file_output_iter cycles whenever the flag file_output is turned on. Specifically, the DEnKF object creates three directories, named Filter_Statistics, Model_States_Repository, and Observations_Repository, respectively. The root mean squared (RMS) forecast and analysis errors are evaluated at each assimilation cycle, and are written to a file under the Filter_Statistics directory. The output configurations of the filter object of the DEnKF class, i.e. denkf_filter, instruct the filter to save all ensemble members (both forecast and analysis) to files by setting the value of the option file_output_moment_only to False. The true solution (reference state), the analysis ensemble, and the forecast ensembles are all saved under the directory Model_States_Repository, while the observations are saved under the directory Observations_Repository. We note that while here we illustrate the default behavior, the output directories are fully configurable.

Figure 10 shows the reference initial state of the QG model, an example of the observational grid used, and an initial forecast state. The initial forecast state in Figure 10 is the average of an initial ensemble collected from a long run of the QG model.

Figure 10: The QG-1.5 model. The truth (reference state) at the initial time of the assimilation experiment is shown in the left panel. The red dots indicate the locations of observations for one of the test cases employed. The initial forecast state, taken as the average of the initial ensemble, is shown in the right panel.

The true field, the forecast errors, and the DEnKF analysis errors at different time instances are shown in Figure 11.

Figure 11: Data assimilation results. The reference field, the forecast errors, and the analysis errors at several time instances [time units]. Here the forecast error is defined as the reference field minus the average of the forecast ensemble, and the analysis error is the reference field minus the average of the analysis ensemble.

Typical solution quality metrics in the ensemble-based DA literature include RMSE plots and Rank (Talagrand) histograms anderson1996method ; candille2005evaluation .

Upon termination of a DATeS run, executable files can be cleaned up by calling the function clean_executable_files( ) available in the utility module (see code Snippet in Figure 12).

Figure 12: Cleanup DATeS executable files.

4.6 DATeS for benchmarking

Performance metrics

In the linear setting, the performance of an ensemble-based DA filter can be judged based on two factors: first, convergence, explained by its ability to track the truth; and second, the quality of the flow-dependent covariance matrix generated from the analysis ensemble.

The convergence of the filter is monitored by inspecting the root mean squared error (RMSE), which represents an ensemble-based standard deviation of the difference between reality, or truth, and the model-based prediction. In synthetic experiments, the RMSE reads

    RMSE = sqrt( (1/Nstate) Σ_{i=1}^{Nstate} (x_i − x_i^true)² ),      (10)

where x^true is the true state of the system, x is the model-based prediction (e.g. the mean of an ensemble of model states), and Nstate is the dimension of the state vector. For real applications, the states are replaced with observations. The rank (Talagrand) histogram anderson1996method ; candille2005evaluation can be used to assess the spread of the ensemble, and its coverage of the truth. Generally speaking, the rank histogram plots the rank of the truth (or observations) compared to the ensemble members (or equivalent observations), ordered increasingly in magnitude. A nearly-uniform rank histogram is desirable, and suggests that the truth is indistinguishable from the ensemble members. A mound-shaped rank histogram indicates an overdispersed ensemble, while a U-shaped histogram indicates underdispersion. However, mound rank histograms are rarely seen in practice, especially for large-scale problems.
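Both diagnostics are straightforward to compute. The sketch below uses illustrative function names; the prediction is taken as the ensemble mean, and the rank of the truth is the count of members below it, per state variable.

```python
import numpy as np

def rmse(truth, ensemble):
    """Root mean squared error of the ensemble mean w.r.t. the truth."""
    prediction = np.mean(ensemble, axis=0)
    return np.sqrt(np.mean((prediction - truth) ** 2))

def truth_ranks(truth, ensemble):
    """For each state variable, where the truth falls among the sorted members.

    The counts of these ranks over many cycles form the Talagrand histogram.
    """
    members = np.sort(np.stack(ensemble), axis=0)   # shape (m, n_state)
    return np.sum(members < truth, axis=0)          # ranks in 0..m

truth = np.array([0.5, 2.5])
ens = [np.array([0.0, 1.0]), np.array([1.0, 2.0]), np.array([2.0, 3.0])]
```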

Figure 13 (a) shows an RMSE plot of the results of the experiment presented in Section 4.5. The histogram of the rank statistics of the truth, compared to the analysis ensemble, is shown in Figure 13 (b).

Figure 13: Data assimilation results. In panel (a), “no assimilation” refers to the RMSE of the initial forecast (the average of the initial forecast ensemble) propagated forward in time over the cycles without assimilating observations into it. The rank histogram of where the truth ranks among the analysis ensemble members is shown in panel (b). The ranks are evaluated for every variable in the state vector (past the correlation bound) over the assimilation cycles.

For benchmarking, one needs to generate scalar representations of the RMSE and of the uniformity of the rank histogram of a numerical experiment. The average RMSE can be used to compare the accuracy of a group of filters. To generate a scalar representation of the uniformity of a rank histogram, we fit a Beta distribution to the rank histogram, scaled to the interval [0, 1], and evaluate the Kullback-Leibler (KL) divergence kullback1951information between the fitted distribution and a uniform distribution [1]. We consider a small KL distance, e.g. one close to 0, to be an indication of a nearly-uniform rank histogram, and consequently an indication of a well-dispersed ensemble. An alternative measure of rank histogram uniformity is to average the absolute distances of the bins’ heights from a uniformly distributed rank histogram Bessac_A2018 . DATeS provides several utility functions to calculate such metrics for a numerical experiment.

[1] The KL divergence between two Beta distributions Beta(α, β) and Beta(α′, β′) is
    D_KL( Beta(α, β) ‖ Beta(α′, β′) ) = ln( B(α′, β′)/B(α, β) ) + (α − α′) ψ(α) + (β − β′) ψ(β) + (α′ − α + β′ − β) ψ(α + β),
where ψ is the digamma function, i.e. the logarithmic derivative of the Gamma function, and B(·, ·) is the Beta function. Here, we set Beta(α′, β′) to a uniform distribution by setting α′ = β′ = 1.
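This uniformity measure can be sketched as follows, assuming Scipy is available (function names are illustrative, not the DATeS API). Setting α′ = β′ = 1 in the footnoted formula reduces the KL divergence to the expression coded below.

```python
import numpy as np
from scipy.special import betaln, psi
from scipy.stats import beta

def kl_beta_to_uniform(a, b):
    """KL divergence D( Beta(a, b) || Beta(1, 1) ), psi being the digamma."""
    return (-betaln(a, b) + (a - 1) * psi(a) + (b - 1) * psi(b)
            + (2 - a - b) * psi(a + b))

def histogram_uniformity(ranks, n_members):
    """Scalar uniformity score of a rank histogram; 0 means exactly uniform."""
    # map integer ranks 0..n_members into the open interval (0, 1)
    scaled = (np.asarray(ranks) + 0.5) / (n_members + 1)
    # fit a Beta distribution with the support fixed to [0, 1]
    a, b, _, _ = beta.fit(scaled, floc=0, fscale=1)
    return kl_beta_to_uniform(a, b)
```

A sanity check: the KL distance of the uniform Beta(1, 1) to itself is zero, and the score is nonnegative for any fitted (a, b).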

Figure 14 shows several rank histograms, along with the uniform distribution and the fitted Beta distributions. The KL-divergence measure is indicated under each panel.

Figure 14: Rank histograms, with fitted Beta distributions. The KL-divergence measure is indicated under each panel.

The results in Figure 14 suggest that the fitted Beta distribution parameters give, in most cases, a good scalar description of the shape of the histogram. Moreover, one can infer the shape of the rank histogram from the parameters of the fitted Beta distribution. For example, if α > 1 and β > 1, the histogram has a mound shape, and it is U-shaped if α < 1 and β < 1. The histogram is closer to uniformity as both parameters approach 1. Table 5 shows both the KL distance between the fitted Beta distribution and a uniform distribution, and the average distances between the histogram bins and a uniform histogram.

Panel | 1 | 2 | 3 | 4 | 5 | 6
KL distance to uniform | 0.198 | 0.231 | 0.022 | 0.018 | 0.065 | 0.272
Average distance to uniform | 0.085 | 0.038 | 0.005 | 0.011 | 0.008 | 0.010
Table 5: Measures of uniformity of the rank histograms shown in Figure 14.

Benchmarking

The architecture of DATeS makes it easy to generate benchmarks for a new experiment. For example, one can write short scripts to iterate over a combination of settings of a filter to find the best possible results. As an example, consider the standard 40-variable Lorenz-96 model lorenz1996predictability described by the equations

    dx_i/dt = (x_{i+1} − x_{i−2}) x_{i−1} − x_i + F,   i = 1, 2, …, 40,      (11)

where x = (x_1, …, x_40)ᵀ is the state vector, with periodic boundaries, i.e. x_0 = x_40, x_{−1} = x_39, x_41 = x_1, and the forcing parameter is set to F = 8. These settings make the system chaotic lorenz1998optimal and are widely used in synthetic settings for geoscientific applications. Adjusting the inflation factor and the localization radius for an EnKF filter is crucial. Consider the case where one is testing an adaptive inflation scheme, and would like to decide on the ensemble size and the benchmark inflation factor to be used. As an example of benchmarking, we run the following experiment over a fixed time interval [in time units], where (11) is integrated forward in time using a fourth-order Runge-Kutta scheme with a fixed model step size. Synthetic observations are generated every few model steps, where every other entry of the model state is observed. We test the DEnKF algorithm, with the fifth-order piecewise-rational function of Gaspari and Cohn Gaspari_1999_correlation for covariance localization. The localization radius is held constant, while the inflation factor is varied for each experiment. The experiments are repeated for a range of ensemble sizes. To avoid spinup artifacts, we report the results over the second two-thirds of the experiment timespan; this interval, consisting of the last two-thirds of the assimilation cycles, will be referred to as the “testing timespan”. Any experiment that results in an average RMSE above a preset threshold over the testing timespan is discarded, and the filter used is considered to have diverged. The numerical results are summarized in Figures 15 and 16.
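The benchmark model itself is only a few lines of code. The sketch below integrates (the standard 40-variable Lorenz-96 system with F = 8) using the classical fourth-order Runge-Kutta scheme; the function names and the step size are illustrative.

```python
import numpy as np

def lorenz96(x, forcing=8.0):
    """Right-hand side dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F.

    np.roll implements the periodic boundary conditions.
    """
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt, forcing=8.0):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz96(x, forcing)
    k2 = lorenz96(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96(x + dt * k3, forcing)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# start near the unstable equilibrium x_i = F and perturb one component
x = 8.0 * np.ones(40)
x[0] += 0.01
for _ in range(100):
    x = rk4_step(x, dt=0.005)
```

Note that x_i = F is an exact equilibrium of (11), so the small perturbation is what triggers the chaotic dynamics.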

Figure 15: Data assimilation results with DEnKF applied to the Lorenz-96 system. RMSE results, on a log scale, are shown in the first panel. The KL distances between the analysis rank histogram and a uniform rank histogram are shown in the second panel. The localization radius is fixed.

Figure 15 shows the average RMSE results, and the KL distances between a Beta distribution fitted to the analysis rank histogram of each experiment and a uniform distribution. These plots give a preliminary idea of the plausible regimes of both the ensemble size and the inflation factor that should be used to achieve the best performance of the filter under the current experimental settings. For example, for a given ensemble size, one can read off the approximate inflation factor that gives both a small RMSE and an analysis rank histogram close to uniform.

Concluding the best inflation factor for a given ensemble size based on Figure 15, however, can be tricky. Figure 16 shows the inflation factors resulting in the minimum average RMSE and the minimum KL distance to uniformity. Specifically, for each ensemble size, a red triangle refers to the experiment that resulted in the minimum average RMSE over the testing timespan, out of all benchmarking experiments carried out with this ensemble size. Similarly, the experiment that yielded the minimum KL divergence to a uniform rank histogram is indicated by a blue tripod.

Figure 16: Data assimilation results with DEnKF applied to the Lorenz-96 system. The minimum average RMSE over the testing timespan is indicated by red triangles. The minimum average KL distance between the analysis rank histogram and a uniformly distributed rank histogram is indicated by blue tripods. We show the results for every choice of the ensemble size. The localization radius is fixed.

To answer the question about the ensemble size, given the current experimental setup we pick the smallest ensemble size that yields both a small RMSE and a well-dispersed ensemble, as explained by Figure 16. As for the benchmark inflation factor, the results in Figure 16 indicate the best choice of the inflation factor for that ensemble size, for Gaspari-Cohn localization with a fixed radius.

Although generating a single benchmark is a relatively easy process, generating a set of benchmarks for all possible combinations of numerical experiments is time-consuming, and is better carried out by the DA community. Some example scripts for generating and plotting benchmarking results are included in the package for guidance.

Note that when the Gaussian assumption is severely violated, standard benchmarking tools, such as RMSE and rank histograms, should be replaced with, or at least supplemented by, tools capable of assessing ensemble coverage of the posterior distribution. In such cases, MCMC methods, including those implemented in DATeS attia2015hmcfilter ; attia2016clusterHMC ; attia2016PhD_advanced_sampling , could be used as a benchmarking tool law2012evaluating .

5 Extending DATeS

DATeS aims at being a collaborative environment, and is designed such that adding DA components to the package is as easy and flexible as possible. This section describes how new implementations of components such as numerical models and assimilation methodologies can be added to DATeS.

The most direct approach is to write the new implementation completely in Python. This, however, may sacrifice efficiency, or may not be feasible when existing code in other languages needs to be reused. One of the main characteristics of DATeS is the possibility of incorporating code written in low-level languages. There are several strategies that can be followed to interface existing C or Fortran code with DATeS. Amongst the most popular tools are SWIG and F2Py, for interfacing Python code with existing implementations written in C and Fortran, respectively.

Whether the new contribution is written in Python, C, or Fortran, an appropriate Python class that inherits the corresponding base class, or a class derived from it, has to be created. The goal is to design new classes that conform to the existing structure of DATeS and can interact appropriately with both new and existing components.

5.1 Adding a numerical model class

A new model class has to be created as a subclass of ModelsBase, or a class derived from it. The base class ModelsBase, similar to all base classes in DATeS, contains headers of all the functions that need to be provided by a model class to guarantee that it interacts properly with other components in DATeS.

The first step is to grant the model object access to linear algebra data structures, and to error models. Appropriate classes should be imported in a numerical model class:

  • Linear algebra: state vector, state matrix, observation vector, and observation matrix, and

  • Error models: background, model, and observation error models.

This gives the model object access to model-based data structures, and error entities necessary for DA applications. Figure 17 illustrates a class of a numerical model named “MyModel”, along with all the essential classes imported by it.

Figure 17: Illustration of a numerical model class named MyModel, and relations to the linear algebra and error models classes. A dashed arrow refers to an “import” relation, and a solid arrow represents an “inherit” relation.

The next step is to create Python-based implementations for the model functionalities. As shown in Figure 17, the corresponding methods have descriptive names in order to ease the use of DATeS functionality. For example, the method state_vector( ) creates (or initializes) a state vector data structure. Details of each of the methods in Figure 17 are given in the DATeS User Manual DATeS_manual_Attia .

As an example, suppose we want to create a model class named MyModel using Numpy and Scipy (for sparse matrices) linear algebra data structures. The code Snippet in Figure 18 shows the leading lines of the implementation of such a class.

Figure 18: The leading lines of an implementation of a class for the model MyModel derived from the models base class ModelsBase. Linear algebra objects are derived from Numpy-based (or Scipy-based) objects.
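In text form, such leading lines might look like the following sketch. Only the base-class name ModelsBase, the method name state_vector( ), and the configuration-dictionary pattern come from the text; the stub base class and the StateVectorNumpy wrapper stand in for the real DATeS modules and are purely illustrative.

```python
import numpy as np

class ModelsBase(object):
    """Stand-in for the DATeS model base class (models_base.ModelsBase)."""
    def state_vector(self):
        raise NotImplementedError

class StateVectorNumpy(np.ndarray):
    """Stand-in for a Numpy-based DATeS state-vector class."""
    def __new__(cls, size):
        return np.zeros(size).view(cls)

class MyModel(ModelsBase):
    _model_name = "MyModel"

    def __init__(self, model_configs=None):
        # configuration dictionaries drive object construction in DATeS
        self.model_configs = dict(model_configs or {})
        self.state_size = self.model_configs.get("state_size", 3)

    def state_vector(self):
        # create an (initially zero) model state vector
        return StateVectorNumpy(self.state_size)

model = MyModel({"state_size": 5})
state = model.state_vector()
```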

Note that, in order to guarantee extensibility of the package, the naming of the methods associated with the linear algebra classes is fixed, and the Python-based linear algebra methods must be implemented even if only binary files are provided. If the model functionality is fully written in Python, the implementation of the methods associated with a model class is straightforward, as illustrated in DATeS_manual_Attia . On the other hand, if a low-level implementation of a numerical model is given, these methods wrap the corresponding low-level implementation.

5.2 Adding an assimilation class

The process of adding a new class for an assimilation methodology is similar to creating a class for a numerical model; however, it is expected to require less effort. For example, a class implementation of a filtering algorithm uses components and tools provided by the passed model, and by the encapsulated linear algebra data structures and methods. Moreover, filtering algorithms belonging to the same family, such as different flavors of the well-known EnKF, are expected to share a considerable amount of infrastructure. Python inheritance enables the reuse of methods and variables from parent classes.

To create a new class for DA filtering one derives it from the base class FiltersBase, imports appropriate routines, and defines the necessary functionalities. Note that each assimilation object has access to a model object, and consequently to the proper linear algebra data structures and associated functionalities through that model.

Unlike the base class for numerical models (ModelsBase), the filtering base class FiltersBase includes actual implementations of several widely used solvers. For example, an implementation of the method FiltersBase.filtering_cycle( ) is provided to carry out a single filtering cycle by applying a forecast phase followed by an analysis phase (or vice-versa, depending on stated configurations).

Figure 19 illustrates a filtering class named MyFilter that works by carrying out analysis and forecast steps in the ensemble-based statistical framework.

Figure 19: Illustration of a DA filtering class MyFilter, and its relation to the filtering base class. A solid arrow represents an “inherit” relation.

The code snippet in Figure 20 shows the leading lines of an implementation of the MyFilter class.

Figure 20: The leading lines of an implementation of a DA filter; the MyFilter class is derived from the filters base class FiltersBase.

6 Discussion and Concluding Remarks

This work describes DATeS, a flexible and highly-extensible package for solving data assimilation problems. DATeS seeks to provide a unified testing suite for data assimilation applications that allows researchers to easily compare different methodologies in different settings with minimal coding effort. The core of DATeS is written in Python. The main functionalities, such as model propagation, filtering, and smoothing, can, however, be written in high-performance languages such as C or Fortran to attain high levels of computational efficiency.

While we introduced several assimilation schemes in this paper, the current version, DATeS v1.0, emphasizes statistical assimilation methods. DATeS provides the essential infrastructure required to combine elements of a variational assimilation algorithm with other parts of the package. The variational aspects of DATeS, however, require additional work, including efficient evaluation of the adjoint model, checkpointing, and the handling of weak constraints. A new version of the package, currently under development, will carefully address these issues and will provide implementations of several variational schemes. The variational implementations will be derived from the 3D- and 4D-Var classes implemented in the current version (DATeS v1.0).

The current version of the package presented in this work, DATeS v1.0, can be situated between professional data assimilation packages such as DART and simplistic research-grade implementations. DATeS is well-suited for educational purposes, as a learning tool for students and newcomers to the data assimilation research field. It can also help data assimilation researchers develop specific components of the data assimilation process and easily combine them with the existing elements of the package. For example, one can develop a new filter and interface it with an existing physical model and error models without needing to understand how these components are implemented. This requires unified interfaces between the different components of the data assimilation process, which is an essential feature of DATeS. These features allow for effective collaboration between teams working on different aspects of a data assimilation system.

To contribute to DATeS by adding new implementations, one must comply with the naming conventions given in the base classes. This requires building proper Python interfaces for the implementations intended to be incorporated into the package. Interfacing operational models, such as the Weather Research and Forecasting (WRF) model skamarock2005description , in the current version, DATeS v1.0, is expected to require substantial work. Moreover, DATeS does not yet support parallelization, which limits its applicability in operational settings.

The authors plan to continue developing DATeS with the long-term goal of making it a complete data assimilation testing suite that includes support for variational methods, as well as interfaces with complex models such as quasi-geostrophic global circulation models. Parallelization of DATeS, and interfacing large-scale models such as the Weather Research and Forecasting (WRF) model skamarock2005description , will also be considered in the future.

Software Availability

The code of DATeS-v1.0 is available from https://doi.org/10.5281/zenodo.1247464. The online documentation, and alternative download links are available at https://sibiu.cs.vt.edu/dates/index.html, or http://people.cs.vt.edu/~attia/DATeS/index.html.

Acknowledgments

The authors would like to thank Paul Tranquilli, Ross Glandon, Mahesh Narayanamurthi, and Arash Sarshar, from the Computational Science Laboratory (CSL) at Virginia Tech, for their contributions to an initial version of DATeS. This work has been supported in part by awards NSF CCF-1613905, NSF ACI–1709727, AFOSR DDDAS 15RT1037, and by the Computational Science Laboratory at Virginia Tech.

References

  • [1] Melanie Ades and Peter Jan van Leeuwen. The equivalent-weights particle filter in a high-dimensional system. Quarterly Journal of the Royal Meteorological Society, 141(687):484–503, 2015.
  • [2] Jeffrey L Anderson. A method for producing and evaluating probabilistic forecasts from ensemble model integrations. Journal of Climate, 9(7):1518–1530, 1996.
  • [3] Jeffrey L Anderson. A local least squares framework for ensemble filtering. Monthly Weather Review, 131(4):634–642, 2003.
  • [4] Jeffrey L Anderson, Tim Hoar, Kevin Raeder, Hui Liu, Nancy Collins, Ryan Torn, and Avelino Arellano. The data assimilation research testbed: A community facility. Bulletin of the American Meteorological Society, 90(9):1283, 2009.
  • [5] Mark Asch, Marc Bocquet, and Maëlle Nodet. Data assimilation: methods, algorithms, and applications, volume 11. SIAM, 2016.
  • [6] Ahmed Attia. Advanced Sampling Methods for Solving Large-Scale Inverse Problems. PhD thesis, Virginia Tech, 2016.
  • [7] Ahmed Attia and Emil Constantinescu. An optimal experimental design framework for adaptive inflation and covariance localization for ensemble filters. arXiv preprint arXiv:1806.10655, 2018.
  • [8] Ahmed Attia, Ross Glandon, Paul Tranquilli, Mahesh Narayanamurthi, Arash Sarshar, and Adrian. Sandu. DATeS: A highly-extensible data assimilation testing suite. people.cs.vt.edu/~attia/DATeS, 2016. [Online; accessed September 2, 2019].
  • [9] Ahmed Attia, Azam Moosavi, and Adrian Sandu. Cluster sampling filters for non-Gaussian data assimilation. arXiv preprint arXiv:1607.03592, 2016.
  • [10] Ahmed Attia, Vishwas Rao, and Adrian Sandu. A sampling approach for four dimensional data assimilation. In Dynamic Data-Driven Environmental Systems Science, pages 215–226. Springer, 2015.
  • [11] Ahmed Attia, Vishwas Rao, and Adrian Sandu. A Hybrid Monte Carlo sampling smoother for four dimensional data assimilation. International Journal for Numerical Methods in Fluids, 2016. fld.4259.
  • [12] Ahmed Attia and Adrian Sandu. A Hybrid Monte Carlo sampling filter for non-Gaussian data assimilation. AIMS Geosciences, 1(geosci-01-00041):41–78, 2015.
  • [13] Ahmed Attia, Razvan Stefanescu, and Adrian Sandu. The reduced-order Hybrid Monte Carlo sampling smoother. International Journal for Numerical Methods in Fluids, 2016. fld.4255.
  • [14] David M Beazley et al. SWIG: An Easy to Use Tool for Integrating Scripting Languages with C and C++. In Proc. 4th USENIX Tcl/Tk Workshop, pages 129–139, 1996.
  • [15] Alexandros Beskos, Dan Crisan, Ajay Jasra, Kengo Kamatani, and Yan Zhou. A stable particle filter for a class of high-dimensional state-space models. Advances in Applied Probability, 49(1):24–48, 2017.
  • [16] Julie Bessac, Emil Constantinescu, Mihai Anitescu, et al. Stochastic simulation of predictive space–time scenarios of wind speed using observations and physical model outputs. The Annals of Applied Statistics, 12(1):432–458, 2018.
  • [17] Craig H Bishop, Brian J Etherton, and Sharanya J Majumdar. Adaptive sampling with the ensemble transform Kalman filter. part I: Theoretical aspects. Monthly weather review, 129(3):420–436, 2001.
  • [18] Gerrit Burgers, Peter J van Leeuwen, and Geir Evensen. Analysis scheme in the Ensemble Kalman Filter. Monthly Weather Review, 126:1719–1724, 1998.
  • [19] Guillem Candille and Olivier Talagrand. Evaluation of probabilistic prediction systems for a scalar variable. Quarterly Journal of the Royal Meteorological Society, 131(609):2131–2150, 2005.
  • [20] Roger Daley. Atmospheric data analysis. Cambridge University Press, 1991.
  • [21] Arnaud Doucet, Nando De Freitas, and Neil Gordon. An introduction to sequential Monte Carlo methods. In Sequential Monte Carlo methods in practice, pages 3–14. Springer, 2001.
  • [22] Geir Evensen. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research, 99(C5):10143–10162, 1994.
  • [23] Geir Evensen. The ensemble Kalman filter: theoretical formulation and practical implementation. Ocean Dynamics, 53, 2003.
  • [24] Geir Evensen. Data assimilation: the ensemble Kalman filter. Springer Science & Business Media, 2009.
  • [25] Geir Evensen and Pavel Sakov. Data assimilation, the ensemble Kalman filter; EnKF-Matlab code. http://enkf.nersc.no/Code, 2009. [Online; accessed September 2, 2019].
  • [26] Gregory Gaspari and Stephen E Cohn. Construction of correlation functions in two and three dimensions. Quarterly Journal of the Royal Meteorological Society, 125:723–757, 1999.
  • [27] Neil J Gordon, David J Salmond, and Adrian FM Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In IEE Proceedings F-Radar and Signal Processing, volume 140, pages 107–113. IET, 1993.
  • [28] Bertil Gustafsson. An alternating direction implicit method for solving the shallow water equations. Journal of Computational Physics, 7:239–254, 1971.
  • [29] Thomas M Hamill and Jeffrey S Whitaker. Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. Monthly Weather Review, 129:2776–2790, 2001.
  • [30] Pieter L Houtekamer and Herschel L Mitchell. Data assimilation using an ensemble Kalman filter technique. Monthly Weather Review, 126:796–811, 1998.
  • [31] Pieter L Houtekamer and Herschel L Mitchell. A sequential ensemble Kalman filter for atmospheric data assimilation. Monthly Weather Review, 129:123–137, 2001.
  • [32] Rudolf E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering, 82(Series D):35–45, 1960.
  • [33] Rudolf E Kalman and Richard S Bucy. New results in linear filtering and prediction theory. Journal of basic engineering, 83(1):95–108, 1961.
  • [34] Eugenia Kalnay. Atmospheric modeling, data assimilation and predictability. Cambridge University Press, 2002.
  • [35] Genshiro Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of computational and graphical statistics, 5(1):1–25, 1996.
  • [36] Solomon Kullback and Richard A Leibler. On information and sufficiency. The annals of mathematical statistics, 22(1):79–86, 1951.
  • [37] Kody Law, Andrew Stuart, and Konstantinos Zygalakis. Data assimilation: a mathematical introduction, volume 62. Springer, 2015.
  • [38] Kody JH Law and Andrew M Stuart. Evaluating data assimilation algorithms. Monthly Weather Review, 140(11):3757–3782, 2012.
  • [39] Francesc Pons Llopis, Nikolas Kantas, Alexandros Beskos, and Ajay Jasra. Particle filtering for stochastic Navier-Stokes signal observed with linear additive noise. arXiv preprint arXiv:1710.04586, 2017.
  • [40] Edward N Lorenz. Deterministic nonperiodic flow. Journal of the atmospheric sciences, 20(2):130–141, 1963.
  • [41] Edward N Lorenz. Predictability: A problem partly solved. In Proc. Seminar on predictability, volume 1, 1996.
  • [42] Edward N Lorenz and Kerry A Emanuel. Optimal sites for supplementary weather observations: Simulation with a small model. Journal of the Atmospheric Sciences, 55(3):399–414, 1998.
  • [43] Stefano Marelli and Bruno Sudret. UQLab: a framework for uncertainty quantification in MATLAB. In Vulnerability, Uncertainty, and Risk: Quantification, Mitigation, and Management, pages 2554–2563. 2014.
  • [44] Azam Moosavi, Ahmed Attia, and Adrian Sandu. A machine learning approach to adaptive covariance localization. arXiv preprint arXiv:1801.00548, 2018.
  • [45] Ionel M Navon and R De-Villiers. GUSTAF: A Quasi-Newton nonlinear ADI FORTRAN IV program for solving the shallow-water equations with augmented lagrangians. Computers & Geosciences, 12(2):151–173, 1986.
  • [46] Pearu Peterson. F2PY: a tool for connecting Fortran and Python programs. International Journal of Computational Science and Engineering, 4(4):296–305, 2009.
  • [47] Jonathan Poterjoy. A localized particle filter for high-dimensional nonlinear systems. Monthly Weather Review, 144(1):59–76, 2016.
  • [48] Roland Potthast, Anne Sophie Walter, Andreas Rhodin, and Data Assimilation Unit. A localised adaptive particle filter within an operational NWP framework. 2017.
  • [49] Patrick Rebeschini, Ramon Van Handel, et al. Can local particle filters beat the curse of dimensionality? The Annals of Applied Probability, 25(5):2809–2866, 2015.
  • [50] Pavel Sakov and Peter R Oke. A deterministic formulation of the ensemble Kalman filter: an alternative to ensemble square root filters. Tellus A, 60(2):361–371, 2008.
  • [51] Pavel Sakov, Dean S Oliver, and Laurent Bertino. An iterative EnKF for strongly nonlinear systems. Monthly Weather Review, 140(6):1988–2004, 2012.
  • [52] William C Skamarock, Joseph B Klemp, Jimy Dudhia, David O Gill, Dale M Barker, Wei Wang, and Jordan G Powers. A description of the Advanced Research WRF version 2. Technical report, National Center for Atmospheric Research, Boulder, CO, Mesoscale and Microscale Meteorology Division, 2005.
  • [53] Keston W Smith. Cluster ensemble Kalman filter. Tellus A, 59(5):749–757, 2007.
  • [54] Michael K. Tippett, Jeffrey L. Anderson, and Craig H. Bishop. Ensemble square root filters. Monthly Weather Review, 131:1485–1490, 2003.
  • [55] Peter J Van Leeuwen. Particle filtering in geophysical systems. Monthly Weather Review, 137(12):4089–4114, 2009.
  • [56] Sanita Vetra-Carvalho, Peter Jan Van Leeuwen, Lars Nerger, Alexander Barth, M Umer Altaf, Pierre Brasseur, Paul Kirchgessner, and Jean-Marie Beckers. State-of-the-art stochastic data assimilation methods for high-dimensional non-Gaussian problems. Tellus A: Dynamic Meteorology and Oceanography, 70(1):1445364, 2018.
  • [57] Jeffrey S Whitaker and Thomas M Hamill. Ensemble data assimilation without perturbed observations. Monthly Weather Review, 130:1913–1924, 2002.
  • [58] Hong Zhang and Adrian Sandu. Fatode: A library for forward, adjoint, and tangent linear integration of odes. SIAM Journal on Scientific Computing, 36(5):C504–C523, 2014.
  • [59] Milija Zupanski. Maximum likelihood ensemble filter: Theoretical aspects. Monthly Weather Review, 133(6):1710–1726, 2005.
  • [60] Milija Zupanski, Ionel M Navon, and Dusanka Zupanski. The maximum likelihood ensemble filter as a non-differentiable minimization algorithm. Quarterly Journal of the Royal Meteorological Society, 134(633):1039–1050, 2008.