Improvising the Learning of Neural Networks on Hyperspherical Manifold

09/29/2021
by Lalith Bharadwaj Baru, et al.

The impact of convolutional neural networks (CNNs) in supervised settings has brought a tremendous increase in performance. Representations learned by CNNs operating on the hyperspherical manifold have led to insightful outcomes in face recognition, face identification, and other supervised tasks. A broad range of activation functions has been developed with hyperspherical intuition, performing better than softmax in Euclidean space. The main motive of this research is to provide two insights. First, stereographic projection is applied to transform data from Euclidean space (ℝ^n) to the hyperspherical manifold (𝕊^n) in order to analyze the performance of angular margin losses. Second, it is shown both theoretically and empirically that decision boundaries constructed on the hypersphere using stereographic projection aid the learning of neural networks. Experiments show that applying stereographic projection to existing state-of-the-art angular margin objective functions improves performance on standard image classification data sets (CIFAR-10 and CIFAR-100). The code is publicly available at: https://github.com/barulalithb/stereo-angular-margin.
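As a rough illustration of the idea described above (a minimal sketch, not the authors' implementation; the function names, the margin and scale values, and the ArcFace-style margin are assumptions), the snippet below maps feature vectors from ℝ^n onto the unit hypersphere 𝕊^n via inverse stereographic projection and then computes additive angular margin logits on the projected features:

```python
import torch
import torch.nn.functional as F

def inverse_stereographic_projection(x: torch.Tensor) -> torch.Tensor:
    """Map features x in R^n onto the unit hypersphere S^n in R^(n+1).

    Standard inverse stereographic projection (from the north pole):
        sigma^{-1}(x) = ( 2x, ||x||^2 - 1 ) / ( ||x||^2 + 1 )
    """
    sq_norm = (x * x).sum(dim=-1, keepdim=True)          # ||x||^2
    denom = sq_norm + 1.0
    sphere_part = 2.0 * x / denom                        # first n coordinates
    last_coord = (sq_norm - 1.0) / denom                 # (n+1)-th coordinate
    return torch.cat([sphere_part, last_coord], dim=-1)  # rows have unit norm


def angular_margin_logits(points_on_sphere, class_weights, labels,
                          margin=0.5, scale=30.0):
    """ArcFace-style additive angular margin logits (hypothetical hyperparameters)."""
    w = F.normalize(class_weights, dim=-1)               # class centers on the sphere
    cos_theta = points_on_sphere @ w.t()                 # cosine similarity to each class
    theta = torch.acos(cos_theta.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, num_classes=w.size(0)).bool()
    # Add the angular margin only to the ground-truth class, then rescale.
    cos_margin = torch.where(target, torch.cos(theta + margin), cos_theta)
    return scale * cos_margin


# Toy usage: random features, 10 classes (CIFAR-10 sized output).
feats = torch.randn(4, 64)
weights = torch.randn(10, 65)                            # n+1 dimensions after projection
labels = torch.randint(0, 10, (4,))

on_sphere = inverse_stereographic_projection(feats)
print(on_sphere.norm(dim=-1))                            # ~1.0 for every sample
loss = F.cross_entropy(angular_margin_logits(on_sphere, weights, labels), labels)
print(loss.item())
```

In this sketch the margin logits simply feed a standard cross-entropy loss; the projection step is the only change relative to applying the angular margin objective directly to normalized Euclidean features.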


