
Gaussian Process Latent Variable Alignment Learning

by Ieva Kazlauskaite et al., University of Bath

We present a model that automatically learns alignments between high-dimensional data in an unsupervised manner. Learning alignments is an ill-constrained problem, as there are many different ways of defining a good alignment. Our proposed method casts alignment learning in a framework where both the alignment and the data are modelled simultaneously. We derive a probabilistic model built on non-parametric priors that allows for flexible warps while at the same time providing a means to specify interpretable constraints. We show results on several datasets, including different motion capture sequences, and demonstrate that the proposed model outperforms classical algorithmic approaches to the alignment task.
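To give a concrete feel for the alignment task the abstract describes, here is a minimal toy sketch, not the paper's model: the paper places non-parametric (GP) priors over both the warps and the data, whereas this sketch restricts the monotonic warp to a hypothetical one-parameter family w(t) = t**p and fits it by minimizing the discrepancy to a reference sequence. The names `latent`, `alignment_error`, and the warp family itself are illustrative assumptions, not from the paper.

```python
import numpy as np

# Two sequences that are warped versions of a shared latent signal.
t = np.linspace(0.0, 1.0, 200)
latent = lambda s: np.sin(2 * np.pi * s)   # shared latent function (toy choice)
y_ref = latent(t)                          # reference sequence
y_warp = latent(t ** 2.0)                  # observed sequence; true warp exponent is 2

def alignment_error(p):
    # Apply the candidate monotonic warp w(t) = t**p to the time axis and
    # resample the observed sequence there; compare against the reference.
    aligned = np.interp(t ** p, t, y_warp)
    return np.mean((aligned - y_ref) ** 2)

# Brute-force search over the warp parameter (a stand-in for the paper's
# principled probabilistic inference over flexible warps).
grid = np.linspace(0.2, 1.5, 131)
best_p = grid[np.argmin([alignment_error(p) for p in grid])]
# Undoing t -> t**2 requires p = 0.5, so the search should recover ~0.5.
print(best_p, alignment_error(best_p))
```

The sketch shows why alignment is ill-constrained: many warps reduce the error somewhat, and extra structure (here, monotonicity of `t ** p`; in the paper, interpretable constraints via non-parametric priors) is what singles out a good alignment.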

