Ioannis G. Kevrekidis



  • Linking Gaussian Process regression with data-driven manifold embeddings for nonlinear data fusion

    In statistical modeling with Gaussian Process regression, it has been shown that combining (few) high-fidelity data with (many) low-fidelity data can enhance prediction accuracy, compared to prediction based on the few high-fidelity data only. Such information fusion techniques for multifidelity data commonly model the high-fidelity output f_h(t) as a function of two variables (t, y), and then use f_l(t) as the y data. More generally, the high-fidelity model can be written as a function of several variables (t, y_1, y_2, ...); the low-fidelity model f_l and, say, some of its derivatives can then be substituted for these variables. In this paper, we explore mathematical algorithms for multifidelity information fusion that use such an approach to improve the representation of the high-fidelity function with only a few training data points. Given that f_h may not be a simple function -- and sometimes not even a function -- of f_l, we demonstrate that using additional functions of t, such as derivatives or shifts of f_l, can drastically improve the approximation of f_h through Gaussian Processes. We also point out a connection with "embedology" techniques from topology and dynamical systems. (A minimal sketch of the input-augmentation idea follows this entry.)

    12/16/2018 ∙ by Seungjoon Lee, et al.

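    As a minimal sketch of the input-augmentation idea, one can feed f_l(t) and a delayed copy of it to an off-the-shelf Gaussian Process regressor. The particular f_l, f_h, and delay tau below are illustrative assumptions, not the paper's examples; scikit-learn supplies the GP.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Illustrative models (not from the paper): f_h is not a simple function of
# f_l alone, but becomes one once a shifted copy of f_l is also supplied.
f_l = lambda t: np.sin(8.0 * t)
f_h = lambda t: np.sin(8.0 * t + 0.7)
tau = 0.1  # assumed delay used to augment the input

def features(t):
    # Augment t with the low-fidelity value and a delayed value
    # (an "embedology"-style delay coordinate).
    return np.column_stack([t, f_l(t), f_l(t - tau)])

# Only a few high-fidelity training points.
t_train = np.linspace(0.0, 1.0, 8)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(features(t_train), f_h(t_train))

t_test = np.linspace(0.0, 1.0, 200)
y_pred, y_std = gp.predict(features(t_test), return_std=True)
print("max abs error:", np.abs(y_pred - f_h(t_test)).max())
```

    Here the delay pair (f_l(t), f_l(t - tau)) determines the phase of the signal, so f_h becomes a genuine function of the augmented inputs even though it is not a function of f_l(t) alone.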

  • Optimal Transport on the Manifold of SPD Matrices for Domain Adaptation

    The problem of domain adaptation has become central in many applications from a broad range of fields. Recently, it was proposed to use Optimal Transport (OT) to solve it. In this paper, we model the difference between the two domains by a diffeomorphism and use the polar factorization theorem to claim that OT is indeed optimal for domain adaptation in a well-defined sense, up to a volume-preserving map. We then focus on the manifold of Symmetric Positive-Definite (SPD) matrices, whose geometric structure has proved useful in recent applications. We demonstrate the polar factorization theorem on this manifold. Due to the uniqueness of the weighted Riemannian mean, and by exploiting existing regularized OT algorithms, we formulate a simple algorithm that maps the source domain to the target domain. We test our algorithm on two Brain-Computer Interface (BCI) data sets and observe state-of-the-art performance. (A toy sketch of the mapping step appears below.)

    06/03/2019 ∙ by Or Yair, et al.

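    As a toy sketch of the mapping step, assume the POT library (its ot.sinkhorn solver) for the regularized OT plan, and, as a simplifying assumption, a log-Euclidean surrogate in place of the weighted Riemannian mean used in the paper:

```python
import numpy as np
import ot  # the POT optimal-transport library

def spd_log(A):
    # Matrix logarithm of an SPD matrix via its eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def spd_exp(L):
    # Matrix exponential of a symmetric matrix.
    w, V = np.linalg.eigh(L)
    return (V * np.exp(w)) @ V.T

def map_source_to_target(S, T, reg=0.1):
    # Entropy-regularized OT plan between the two SPD clouds, followed by a
    # barycentric map taken in log-coordinates (a simplifying assumption).
    Ls = np.array([spd_log(A) for A in S])
    Lt = np.array([spd_log(B) for B in T])
    M = ((Ls[:, None] - Lt[None, :]) ** 2).sum(axis=(-2, -1))  # pairwise costs
    a = np.full(len(S), 1.0 / len(S))
    b = np.full(len(T), 1.0 / len(T))
    plan = ot.sinkhorn(a, b, M, reg)
    w = plan / plan.sum(axis=1, keepdims=True)
    return [spd_exp(np.tensordot(w[i], Lt, axes=1)) for i in range(len(S))]

# Tiny random example: 2x2 SPD matrices as source and target "domains".
rng = np.random.default_rng(0)
spd = lambda: (lambda A: A @ A.T + np.eye(2))(rng.normal(size=(2, 2)))
S, T = [spd() for _ in range(5)], [spd() for _ in range(6)]
print(map_source_to_target(S, T)[0])
```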

  • Transport map accelerated adaptive importance sampling, and application to inverse problems arising from multiscale stochastic reaction networks

    In many applications, Bayesian inverse problems give rise to probability distributions whose complexity stems from the Hessian varying greatly across parameter space. This complexity often manifests itself as lower-dimensional manifolds on which the likelihood function is invariant, or varies very little. This can be due to trying to infer unobservable parameters, or due to sloppiness in the model being used to describe the data. In such a situation, standard sampling methods for characterising the posterior distribution, which do not incorporate information about this structure, will be highly inefficient. In this paper, we develop an approach to tackle this problem when using adaptive importance sampling methods, by using optimal transport maps to simplify posterior distributions which are concentrated on lower-dimensional manifolds. This approach is applicable to a whole range of problems for which Markov chain Monte Carlo (MCMC) methods mix slowly. We demonstrate the approach by considering inverse problems arising from partially observed stochastic reaction networks. In particular, we consider systems which exhibit multiscale behaviour, but for which only the slow variables are observable. We demonstrate that certain multiscale approximations lead to more consistent approximations of the posterior than others. The use of optimal transport maps stabilises the ensemble transform adaptive importance sampling (ETAIS) method, and allows for efficient sampling with smaller ensemble sizes. (A minimal illustration of the transport-map idea follows below.)

    01/31/2019 ∙ by Simon L. Cotter, et al.

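    As a minimal illustration of why a transport map helps importance sampling, assume a banana-shaped target (a stand-in for a posterior concentrated near a low-dimensional manifold; not one of the paper's reaction-network examples) and a hand-built map that straightens it:

```python
import numpy as np

# Illustrative target, concentrated near the parabola x2 = x1**2.
def log_target(x):
    return -0.5 * (x[..., 0] ** 2 + ((x[..., 1] - x[..., 0] ** 2) / 0.05) ** 2)

# A hand-built transport map straightens the banana; its inverse is
# triangular and volume-preserving, so |det Jacobian| = 1.
T_inv = lambda z: np.stack([z[..., 0], z[..., 1] + z[..., 0] ** 2], axis=-1)

# Importance sampling with a Gaussian proposal in the *mapped* space.
rng = np.random.default_rng(1)
scale = np.array([1.0, 0.05])
z = rng.normal(size=(10_000, 2)) * scale
log_q = -0.5 * ((z / scale) ** 2).sum(-1)  # proposal log-density, up to a constant
log_w = log_target(T_inv(z)) - log_q       # unnormalized log-weights
w = np.exp(log_w - log_w.max()); w /= w.sum()
print("effective sample size:", 1.0 / (w ** 2).sum(), "of", len(w))
```

    Because the map flattens the target, the proposal matches it almost exactly in the mapped space and the effective sample size stays near the full sample count; sampling the banana directly with the same proposal would waste most of the ensemble.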

  • A geometric approach to the transport of discontinuous densities

    Different observations of a relation between inputs ("sources") and outputs ("targets") are often reported in terms of histograms (discretizations of the source and target densities). Transporting these densities to each other provides insight into the underlying relation. In (forward) uncertainty quantification, one typically studies how the distribution of inputs to a system affects the distribution of the system responses. Here, we focus on identifying the system (the transport map) itself, once the input and output distributions are determined, and suggest a modification of current practice by including data from what we call "an observation process". We hypothesize that there exists a smooth manifold underlying the relation; the sources and the targets are then partial observations (possibly projections) of this manifold. Knowledge of such a manifold implies knowledge of the relation, and thus of "the right" transport between source and target observations. When the source-target observations are not bijective (when the manifold is not the graph of a function over both observation spaces, either because folds over them give rise to density singularities, or because it marginalizes over several observables), recovery of the manifold is obscured. Using ideas from attractor reconstruction in dynamical systems, we demonstrate how additional information in the form of short histories of an observation process can help us recover the underlying manifold. The types of additional information employed, and the relation to optimal transport based solely on density observations, are illustrated and discussed, along with limitations in the recovery of the true underlying relation. (A toy delay-embedding sketch follows below.)

    07/18/2019 ∙ by Caroline Moosmüller, et al.

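    As a toy sketch of the delay-embedding idea, assume the underlying manifold is a circle observed only through its x-coordinate: a projection with folds, so the observation is not injective. A short history of the observation process restores injectivity (this setup is illustrative, not the paper's example):

```python
import numpy as np

# Underlying manifold: a circle; the scalar observation cos(theta) folds it,
# since cos(theta) = cos(-theta).
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
obs = np.cos

# Observation process: the angle advances by dt between samples; keep a
# short history (a 2-delay embedding, as in attractor reconstruction).
dt = 0.3
history = np.column_stack([obs(theta), obs(theta + dt)])

# The pair (cos(theta), cos(theta + dt)) is injective on the circle, so the
# angle -- and hence the manifold -- can be recovered:
sin_th = (history[:, 0] * np.cos(dt) - history[:, 1]) / np.sin(dt)
th_rec = np.arctan2(sin_th, history[:, 0])
err = np.abs(np.angle(np.exp(1j * (th_rec - theta))))
print("max reconstruction error:", err.max())
```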

  • On the Koopman operator of algorithms

    A systematic mathematical framework for the study of numerical algorithms would allow comparisons, facilitate conjugacy arguments, and enable the discovery of improved, accelerated, data-driven algorithms. Over the course of the last century, the Koopman operator has provided a mathematical framework for the study of dynamical systems, one which facilitates conjugacy arguments and can provide efficient reduced descriptions. More recently, numerical approximations of the operator have made it possible to analyze dynamical systems in a completely data-driven, essentially equation-free pipeline. Discrete- or continuous-time numerical algorithms (integrators, nonlinear equation solvers, optimization algorithms) are themselves dynamical systems. In this paper, we use the Koopman operator framework in the data-driven study of such algorithms and discuss benefits for analysis and acceleration of numerical computation. For algorithms acting on high-dimensional spaces by quickly contracting them towards low-dimensional manifolds, we demonstrate how basis functions adapted to the data help to construct efficient reduced representations of the operator. Our illustrative examples include the gradient descent and Nesterov optimization algorithms, as well as the Newton-Raphson algorithm. (A minimal EDMD sketch follows below.)

    07/25/2019 ∙ by Felix Dietrich, et al.

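    As a minimal sketch of the data-driven pipeline, assume gradient descent on a toy quadratic and a small polynomial dictionary of observables; EDMD (one standard numerical approximation of the Koopman operator, here a plain least-squares fit rather than the paper's adapted bases) then yields a finite-dimensional estimate whose eigenvalues recover the iteration's contraction factors:

```python
import numpy as np

# Gradient descent on an illustrative quadratic is itself a dynamical system.
A = np.diag([1.0, 10.0])             # Hessian of f(x) = 0.5 * x^T A x
step = lambda x: x - 0.05 * (x @ A)  # one gradient-descent iteration

# Snapshot pairs (x_k, x_{k+1}) sampled over the state space.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
Y = step(X)

# EDMD dictionary: monomials up to degree two.
def psi(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# Least-squares estimate of the Koopman operator restricted to span(psi).
K = np.linalg.lstsq(psi(X), psi(Y), rcond=None)[0]
print(np.sort(np.abs(np.linalg.eigvals(K)))[::-1])
# Leading eigenvalues match the iteration's contraction factors
# 1 - 0.05 * diag(A) = (0.95, 0.5), together with their products.
```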