For Manifold Learning, Deep Neural Networks can be Locality Sensitive Hash Functions
It is well established that training deep neural networks gives useful representations that capture essential features of the inputs. However, these representations are poorly understood in theory and practice. In the context of supervised learning, an important question is whether these representations capture features informative for classification, while filtering out non-informative noisy ones. We explore a formalization of this question by considering a generative process where each class is associated with a high-dimensional manifold and different classes define different manifolds. Under this model, each input is produced using two latent vectors: (i) a "manifold identifier" γ; and (ii) a "transformation parameter" θ that shifts examples along the surface of a manifold. For example, γ might represent a canonical image of a dog, and θ might stand for variations in pose, background, or lighting. We provide theoretical and empirical evidence that neural representations can be viewed as LSH-like functions that map each input to an embedding that is a function solely of the informative γ and invariant to θ, effectively recovering the manifold identifier γ. An important consequence of this behavior is one-shot learning: because embeddings depend only on the class-identifying γ, a single labeled example from an unseen class suffices to classify new inputs from that class.
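To make the setup concrete, below is a minimal toy sketch (ours, not the paper's model or experiments) in which each manifold is an affine subspace gamma + U·theta and the "ideal" encoder is a projection that annihilates the θ-directions. Within a class, embeddings collide into one LSH-style bucket; across classes, they stay far apart. All names (`generate`, `encode`, `U`, `P`) are hypothetical illustrations.

```python
# Toy illustration of the (gamma, theta) generative model and the
# LSH-like invariance property. Assumption (not from the paper):
# manifolds are affine subspaces gamma + U @ theta, and the encoder
# projects onto the orthogonal complement of the theta-directions.
import numpy as np

rng = np.random.default_rng(0)
d, k, n_classes, n_samples = 64, 4, 3, 5  # ambient dim, manifold dim

# Shared transformation directions: columns of U span the theta-directions.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))

def generate(gamma, theta):
    """Input = manifold identifier gamma, shifted along the manifold by theta."""
    return gamma + U @ theta

# Idealized theta-invariant encoder: project out span(U), so that
# encode(generate(gamma, theta)) = P @ gamma for every theta.
P = np.eye(d) - U @ U.T

def encode(x):
    return P @ x

gammas = [rng.standard_normal(d) for _ in range(n_classes)]
for c, gamma in enumerate(gammas):
    embs = np.stack([encode(generate(gamma, rng.standard_normal(k)))
                     for _ in range(n_samples)])
    # Within a class, embeddings collide into one bucket: spread ~ 0.
    spread = np.max(np.linalg.norm(embs - embs.mean(axis=0), axis=1))
    print(f"class {c}: within-class embedding spread = {spread:.2e}")

# Across classes, embeddings remain far apart, so one labeled example
# per class suffices to classify new inputs -- one-shot learning.
e0 = encode(generate(gammas[0], np.zeros(k)))
e1 = encode(generate(gammas[1], np.zeros(k)))
print(f"between-class embedding distance = {np.linalg.norm(e0 - e1):.2f}")
```

The key design point the sketch isolates is that invariance to θ is what enables one-shot transfer: nothing about the encoder depends on which γ values (classes) were seen during training.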