Generalization bound for kernel similarity learning

10/12/2016
by Michael Rabadi, et al.

Similarity learning has received considerable interest and is an important tool for many scientific and industrial applications. In this framework, we wish to infer the distance (similarity) between points with respect to an arbitrary distance function d. Here, we formulate the problem as a regression from a feature space X to an arbitrary vector space Y in which Euclidean distance is proportional to d. We then give Rademacher complexity bounds on the generalization error. We find that, with high probability, the complexity is bounded by the maximum of the radius of X and the radius of Y.
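The formulation above lends itself to a simple two-step construction. The sketch below is a toy illustration only, not the paper's algorithm or proof: it embeds training points into a Euclidean space Y via classical MDS so that pairwise distances approximate a target metric d (here, hypothetically, the L1 distance), then fits a kernel ridge regression from X to Y. The kernel, dimensions, and data are assumptions chosen for illustration.

```python
# Illustrative sketch (not the paper's method): learn a map h: X -> Y
# whose Euclidean distances approximate a target metric d.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # feature space X

# Target metric d: a toy choice, the L1 distance between points.
D = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)

# Step 1, classical MDS: double-center the squared distance matrix
# and take top eigenvectors as coordinates in Y, so ||y_i - y_j||
# approximates d(x_i, x_j) (exactness depends on how well d embeds
# in low-dimensional Euclidean space).
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:3]                  # 3-dimensional Y
Y = V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# Step 2: kernel ridge regression from X to Y (multi-output).
model = KernelRidge(kernel="rbf", gamma=0.5, alpha=1e-2).fit(X, Y)

# Distances between predicted embeddings approximate d.
Y_hat = model.predict(X)
D_hat = np.linalg.norm(Y_hat[:, None, :] - Y_hat[None, :, :], axis=2)
print("mean abs error:", np.abs(D_hat - D).mean())
```

In this toy setup, the radii of X and Y (the quantities appearing in the bound) are controlled by the scale of the data and of the MDS eigenvalues, respectively.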

Related research

06/19/2018  Deterministic O(1)-Approximation Algorithms to 1-Center Clustering with Outliers
The 1-center clustering with outliers problem asks about identifying a p...

01/29/2019  On the distance α-spectral radius of a connected graph
For a connected graph G and α ∈ [0,1), the distance α-spectral radius of ...

09/23/2020  Random points are optimal for the approximation of Sobolev functions
We show that independent and uniformly distributed sampling points are a...

10/27/2016  Local Similarity-Aware Deep Feature Embedding
Existing deep embedding methods in vision tasks are capable of learning ...

05/05/2021  A Theoretical-Empirical Approach to Estimating Sample Complexity of DNNs
This paper focuses on understanding how the generalization error scales ...

10/16/2012  Fast SVM-based Feature Elimination Utilizing Data Radius, Hard-Margin, Soft-Margin
Margin maximization in the hard-margin sense, proposed as feature elimin...

06/11/2019  On the Vector Space in Photoplethysmography Imaging
We study the vector space of visible wavelength intensities from face vi...
