Grassmannian diffusion maps based dimension reduction and classification for high-dimensional data
Diffusion Maps is a nonlinear dimensionality reduction technique used to embed high-dimensional data in a low-dimensional Euclidean space where the notion of distance is defined by the transition probability of a random walk over the dataset. However, the conventional approach is not capable of revealing the underlying subspace structure hidden in the same dataset. To circumvent this limitation, a novel nonlinear dimensionality reduction technique, referred to as Grassmannian Diffusion Maps, is developed herein relying on the affinity between subspaces represented by points on the Grassmann manifold. To this aim, the elements of a dataset are projected onto a low-dimensional Grassmann manifold where the subspace structure of each data point is revealed. Next, given a graph on the Grassmann manifold, a kernel matrix encoding the affinity between connected subspaces is obtained to define the transition probability of the random walk over the dataset. In this paper, three examples are used to evaluate the performance of both conventional and Grassmannian Diffusion Maps. First, a "toy" example shows that the Grassmannian Diffusion Maps can identify a well-defined parametrization of points on the unit sphere, representing a Grassmann manifold. The second example shows that the Grassmannian Diffusion Maps outperforms the conventional Diffusion Maps in classifying a random field. In the last example, a novel classification/recognition technique is developed based on the theory of sparse representation. This method exploits the ability of the Grassmannian Diffusion Maps to reveal the underlying subspace structure of a dataset, creating an overcomplete dictionary with atoms given by the diffusion coordinates. Indicatively, a face recognition technique presented high recognition rates (i.e., 95%) using a fraction of the data required by conventional methods.
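The pipeline described above can be sketched as follows. This is a minimal, illustrative NumPy implementation, not the authors' code: each data matrix is projected onto the Grassmann manifold via a thin SVD, pairwise subspace affinities are computed with the projection kernel (one common choice of Grassmannian kernel, assumed here), the kernel is row-normalized into a Markov transition matrix, and the leading nontrivial eigenvectors give the diffusion coordinates. The function name and the choice of kernel are assumptions for illustration.

```python
import numpy as np

def grassmannian_diffusion_maps(X, p=2, n_coords=3):
    """Sketch of Grassmannian Diffusion Maps (illustrative, not the paper's code).

    X        : array of shape (n_samples, rows, cols), one matrix per data point
    p        : dimension of the subspace spanned by each data point
    n_coords : number of diffusion coordinates to return
    """
    # Project each data point onto the Grassmann manifold G(p, rows):
    # the left singular vectors give an orthonormal basis of its column space.
    bases = []
    for x in X:
        u, _, _ = np.linalg.svd(x, full_matrices=False)
        bases.append(u[:, :p])

    n = len(bases)
    # Projection kernel (an assumed choice): k(U, V) = ||U^T V||_F^2,
    # measuring the affinity between the two subspaces.
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.linalg.norm(bases[i].T @ bases[j], "fro") ** 2

    # Row-normalize the kernel into the transition matrix of a random walk
    # over the dataset (standard diffusion maps construction).
    d = K.sum(axis=1)
    P = K / d[:, None]

    # Diffusion coordinates: eigenvectors of P scaled by their eigenvalues,
    # skipping the trivial constant eigenvector (eigenvalue 1).
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    evals, evecs = evals.real[order], evecs.real[:, order]
    return evals[1 : n_coords + 1] * evecs[:, 1 : n_coords + 1]
```

The row normalization makes `P` a stochastic matrix, so its leading eigenvalue is 1 with a constant eigenvector; the subsequent eigenvectors provide the low-dimensional Euclidean embedding on which classification (e.g., the sparse-representation dictionary described above) can operate.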