Measuring dissimilarity with diffeomorphism invariance

02/11/2022
by Théophile Cantelobre, et al.

Measures of similarity (or dissimilarity) are a key ingredient in many machine learning algorithms. We introduce DID, a pairwise dissimilarity measure applicable to a wide range of data spaces, which leverages the data's internal structure to be invariant to diffeomorphisms. We prove that DID enjoys properties that make it relevant for theoretical study and practical use. By representing each datum as a function, DID is defined as the solution to an optimization problem in a Reproducing Kernel Hilbert Space and can be expressed in closed form. In practice, it can be efficiently approximated via Nyström sampling. Empirical experiments support the merits of DID.
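The abstract does not give implementation details, but the Nyström step it mentions is a standard low-rank kernel approximation. The sketch below (Python/NumPy) illustrates that generic technique on a Gaussian kernel; the kernel choice, the landmark count m, and the synthetic data are assumptions made here for illustration and are not taken from the paper.

    # Minimal sketch of the generic Nystrom low-rank kernel approximation.
    # The Gaussian kernel, landmark count m, and synthetic data are
    # illustrative assumptions, not DID's actual kernel or sampling scheme.
    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        """Pairwise Gaussian (RBF) kernel between rows of X and Y."""
        sq_dists = (np.sum(X**2, axis=1)[:, None]
                    + np.sum(Y**2, axis=1)[None, :]
                    - 2 * X @ Y.T)
        return np.exp(-sq_dists / (2 * sigma**2))

    def nystrom_approximation(X, m, sigma=1.0, seed=None):
        """Approximate the n x n kernel matrix K(X, X) with rank <= m
        using m randomly sampled landmark points: K ~= C W^+ C^T."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        landmarks = rng.choice(n, size=m, replace=False)
        C = gaussian_kernel(X, X[landmarks], sigma)   # n x m cross-kernel
        W = C[landmarks]                              # m x m landmark kernel
        return C @ np.linalg.pinv(W) @ C.T            # rank-m approximation

    # Usage: compare the approximation against the full kernel matrix.
    X = np.random.default_rng(0).normal(size=(500, 3))
    K_full = gaussian_kernel(X, X)
    K_approx = nystrom_approximation(X, m=50, seed=0)
    print("relative error:",
          np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full))

In practice one would keep the factored form C W^+ C^T rather than materializing the full n x n matrix (as the demo does only to check the error); with m much smaller than n, that is what makes Nyström-style approximations attractive.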


