Laplacian Eigenmaps from Sparse, Noisy Similarity Measurements

03/12/2016
by Keith Levin, et al.

Manifold learning and dimensionality reduction techniques are ubiquitous in science and engineering, but they can be computationally expensive when applied to large data sets or when similarities are expensive to compute. To date, little work has investigated the tradeoff between computational resources and the quality of the learned representations. We present both theoretical and experimental explorations of this question. In particular, we consider Laplacian eigenmaps embeddings based on a kernel matrix, and we study how these embeddings behave when the kernel matrix is corrupted by occlusion and noise. Our main theoretical result shows that, under modest noise and occlusion assumptions, we can with high probability recover a good approximation to the Laplacian eigenmaps embedding based on the uncorrupted kernel matrix. Our results also show how regularization can aid this approximation. Experimentally, we explore the effects of noise and occlusion on Laplacian eigenmaps embeddings of two real-world data sets, one from speech processing and one from neuroscience, as well as a synthetic data set.
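To make the setting concrete, the following is a minimal NumPy sketch, not the paper's implementation: it computes a Laplacian eigenmaps embedding from a Gaussian kernel matrix, corrupts that matrix with symmetric occlusion (zeroed entries) and additive noise as in the setup described above, and applies a simple additive regularization before re-embedding. The kernel bandwidth, occlusion rate, noise level, and regularization constant are all illustrative choices, not values from the paper.

```python
import numpy as np

def laplacian_eigenmaps(K, d=2):
    """Embed into R^d from a symmetric nonnegative kernel matrix K,
    using the bottom nontrivial eigenvectors of the normalized Laplacian."""
    deg = K.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    # Normalized Laplacian: L = I - D^{-1/2} K D^{-1/2}
    L = np.eye(len(K)) - d_inv_sqrt[:, None] * K * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    # Skip the trivial eigenvector (eigenvalue ~ 0); keep the next d
    return vecs[:, 1:d + 1]

rng = np.random.default_rng(0)

# Synthetic data: noisy circle, with a Gaussian (heat) kernel
n = 200
theta = rng.uniform(0, 2 * np.pi, n)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(n, 2))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / 0.1)

# Corrupt the kernel: symmetric occlusion of ~30% of off-diagonal
# entries, plus small additive noise (illustrative corruption model)
occluded = np.triu(rng.random((n, n)) < 0.3, k=1)
occluded = occluded | occluded.T
K_corrupt = K.copy()
K_corrupt[occluded] = 0.0
noise = 0.01 * rng.normal(size=(n, n))
K_corrupt += np.abs((noise + noise.T) / 2)  # keep symmetric, nonnegative

# Simple regularization: add a small constant to every entry, which
# keeps the corrupted similarity graph connected before embedding
K_reg = K_corrupt + 0.05

Y_clean = laplacian_eigenmaps(K)
Y_corrupt = laplacian_eigenmaps(K_reg)
print(Y_clean.shape, Y_corrupt.shape)
```

Comparing `Y_clean` and `Y_corrupt` (up to rotation and sign, since eigenvectors are only defined up to orthogonal transformation) gives an empirical handle on the embedding-recovery question the abstract raises.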

