
Learning Inward Scaled Hypersphere Embedding: Exploring Projections in Higher Dimensions

by   Muhammad Kamran Janjua, et al.

The majority of current dimensionality reduction and retrieval techniques rely on embedding learned feature representations into a computable metric space, where a distance metric then bridges the gap between similar instances. Because these methods do not exploit scaled projection, producing discriminative embeddings on a hyperspace remains a challenge. In this paper, we propose to inwardly scale feature representations in proportion to their projection onto a hypersphere manifold for discriminative analysis. We further propose a novel yet simple convolutional neural network based architecture and extensively evaluate the proposed methodology on classification and retrieval tasks, obtaining results comparable to state-of-the-art techniques.
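The core idea of the abstract can be illustrated with a minimal sketch. The snippet below shows one common way to place feature vectors on a hypersphere manifold: scale each vector inward by its L2 norm so that all embeddings lie at a fixed radius, after which an angular or cosine distance can compare instances. The function name `hypersphere_embed` and the fixed-radius formulation are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def hypersphere_embed(features, radius=1.0):
    """Scale each row of `features` inward onto a hypersphere.

    Every embedding ends up with L2 norm equal to `radius`, so a
    distance metric (e.g. cosine/angular) compares directions only.
    NOTE: a hypothetical sketch, not the paper's exact formulation.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    norms = np.maximum(norms, 1e-12)  # guard against zero vectors
    return radius * features / norms

# Example: project a batch of 4 random 128-d feature vectors.
x = np.random.randn(4, 128)
z = hypersphere_embed(x)
print(np.allclose(np.linalg.norm(z, axis=1), 1.0))  # all on the unit sphere
```

In practice such a projection would sit after the final feature layer of the CNN, with the distance metric applied to the normalized embeddings during retrieval.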


