For Manifold Learning, Deep Neural Networks can be Locality Sensitive Hash Functions

03/11/2021
by Nishanth Dikkala, et al.

It is well established that training deep neural networks yields useful representations that capture essential features of the inputs. However, these representations are poorly understood in theory and practice. In the context of supervised learning, an important question is whether these representations capture features informative for classification while filtering out non-informative, noisy ones. We explore a formalization of this question by considering a generative process in which each class is associated with a high-dimensional manifold, and different classes define different manifolds. Under this model, each input is produced from two latent vectors: (i) a "manifold identifier" γ and (ii) a "transformation parameter" θ that shifts examples along the surface of a manifold. For example, γ might represent a canonical image of a dog, and θ might stand for variations in pose, background, or lighting. We provide theoretical and empirical evidence that neural representations can be viewed as locality-sensitive hash (LSH)-like functions that map each input to an embedding that is a function solely of the informative γ and invariant to θ, effectively recovering the manifold identifier γ. An important consequence of this behavior is one-shot learning of unseen classes.
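The idea in the abstract can be illustrated with a toy sketch: inputs are generated from a class-level γ and a within-class θ, an idealized embedding recovers γ while discarding θ, and one-shot classification of unseen classes reduces to nearest-neighbor lookup over embeddings. This is a minimal illustration, not the paper's construction; the linear generative map, the dimensions, and the helper names below are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

d_gamma, d_theta, d_x = 4, 3, 16
# Hypothetical linear generative map: x = M @ [gamma; theta].
mixing = rng.standard_normal((d_x, d_gamma + d_theta))
unmixing = np.linalg.pinv(mixing)  # left inverse (full column rank w.h.p.)

def generate_example(gamma, theta):
    """Produce an input from a manifold identifier gamma and a
    transformation parameter theta (e.g. pose/lighting variation)."""
    return mixing @ np.concatenate([gamma, theta])

def ideal_embedding(x):
    """Idealized LSH-like representation: invert the generative map and
    keep only the gamma coordinates, so the output is invariant to theta."""
    latent = unmixing @ x
    return latent[:d_gamma]

# One-shot learning: a single labeled example per unseen class.
gammas = {c: rng.standard_normal(d_gamma) for c in ["cat", "dog"]}
support = {c: ideal_embedding(generate_example(g, rng.standard_normal(d_theta)))
           for c, g in gammas.items()}

# A query from class "dog" with a fresh, unseen theta.
query = generate_example(gammas["dog"], rng.standard_normal(d_theta))
pred = min(support, key=lambda c: np.linalg.norm(ideal_embedding(query) - support[c]))
print(pred)  # "dog": theta varies, but the embedding depends only on gamma
```

Because the embedding is exactly γ in this linear toy model, examples from the same class collide (as in locality-sensitive hashing) regardless of θ, which is what makes the single support example per class sufficient.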


Related research

12/29/2022 - Effects of Data Geometry in Early Deep Learning
Deep neural networks can approximate functions on different types of dat...

12/02/2018 - On variation of gradients of deep neural networks
We provide a theoretical explanation of the role of the number of nodes ...

11/27/2017 - Structure propagation for zero-shot learning
The key of zero-shot learning (ZSL) is how to find the information trans...

12/23/2021 - Manifold Learning Benefits GANs
In this paper, we improve Generative Adversarial Networks by incorporati...

09/07/2019 - On Need for Topology-Aware Generative Models for Manifold-Based Defenses
ML algorithms or models, especially deep neural networks (DNNs), have sh...

08/23/2023 - On-Manifold Projected Gradient Descent
This work provides a computable, direct, and mathematically rigorous app...

04/14/2022 - Learning Invariances with Generalised Input-Convex Neural Networks
Considering smooth mappings from input vectors to continuous targets, ou...
