A Riemannian Network for SPD Matrix Learning

08/15/2016
by Zhiwu Huang et al.

Symmetric Positive Definite (SPD) matrix learning methods have become popular in many image and video processing tasks, thanks to their ability to learn appropriate statistical representations while respecting the Riemannian geometry of the underlying SPD manifolds. In this paper, we build a Riemannian network architecture that opens up a new direction of non-linear SPD matrix learning in a deep model. In particular, we devise bilinear mapping layers to transform input SPD matrices into more desirable SPD matrices, exploit eigenvalue rectification layers to apply a non-linear activation function to the new SPD matrices, and design an eigenvalue logarithm layer to perform Riemannian computing on the resulting SPD matrices so that regular output layers can follow. To train the proposed deep network, we exploit a new backpropagation with a variant of stochastic gradient descent on Stiefel manifolds to update the structured connection weights and the involved SPD matrix data. We show through experiments that the proposed SPD matrix network can be trained simply and outperforms existing SPD matrix learning methods as well as state-of-the-art approaches on three typical visual classification tasks.
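As a rough illustration of the three layer types described in the abstract, here is a minimal NumPy sketch. It is not the authors' code: the function names (bimap, reeig, logeig), the rectification threshold eps, and the toy dimensions are assumptions made for this sketch.

```python
import numpy as np

def bimap(X, W):
    # BiMap layer: W^T X W with a column-orthonormal weight W (a point
    # on a Stiefel manifold); maps an SPD matrix to a smaller SPD matrix.
    return W.T @ X @ W

def reeig(X, eps=1e-4):
    # ReEig layer: rectify small eigenvalues, a ReLU-like non-linearity
    # that keeps the output symmetric positive definite.
    # (eps is an assumed threshold for this sketch.)
    vals, vecs = np.linalg.eigh(X)
    return vecs @ np.diag(np.maximum(vals, eps)) @ vecs.T

def logeig(X):
    # LogEig layer: matrix logarithm via eigendecomposition, flattening
    # the SPD manifold so ordinary Euclidean output layers can follow.
    vals, vecs = np.linalg.eigh(X)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

# Toy forward pass: a 20x20 SPD input reduced to a 10x10 SPD feature map.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
X = A @ A.T + 1e-3 * np.eye(20)                      # SPD input matrix
W, _ = np.linalg.qr(rng.standard_normal((20, 10)))   # semi-orthogonal weight
features = logeig(reeig(bimap(X, W))).ravel()        # input to an FC/softmax layer
```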
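The training procedure updates the bilinear weights while keeping them on Stiefel manifolds (matrices with orthonormal columns), which is why a retraction step follows each gradient step. Below is a minimal sketch of one such update, using a standard tangent-space projection and a QR-based retraction; the exact gradient variant in the paper may differ, and stiefel_sgd_step is a name invented for this sketch.

```python
import numpy as np

def stiefel_sgd_step(W, G, lr=0.01):
    # One SGD step on the Stiefel manifold {W : W^T W = I}: project the
    # Euclidean gradient G onto the tangent space at W, take a step, then
    # retract back onto the manifold with a QR decomposition.
    sym = 0.5 * (W.T @ G + G.T @ W)
    G_tan = G - W @ sym                     # tangent-space projection
    Q, R = np.linalg.qr(W - lr * G_tan)     # QR retraction
    return Q * np.sign(np.diag(R))          # sign fix for a unique retraction

# Check that orthonormality of the columns is preserved by the update.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((20, 10)))
G = rng.standard_normal((20, 10))           # stand-in Euclidean gradient
W_next = stiefel_sgd_step(W, G)
assert np.allclose(W_next.T @ W_next, np.eye(10), atol=1e-8)
```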


Related research

11/17/2016  Building Deep Networks on Grassmann Manifolds
Learning representations on Grassmann manifolds is popular in quite a fe...

12/03/2021  Efficient Continuous Manifold Learning for Time Series Modeling
Modeling non-Euclidean data is drawing attention along with the unpreced...

11/17/2016  Generalized BackPropagation, Étude De Cas: Orthogonality
This paper introduces an extension of the backpropagation algorithm that...

11/17/2017  Learning a Robust Representation via a Deep Network on Symmetric Positive Definite Manifolds
Recent studies have shown that aggregating convolutional features of a p...

10/12/2019  Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers
We study the convergence of gradient flows related to learning deep line...

04/13/2021  Learning Log-Determinant Divergences for Positive Definite Matrices
Representations in the form of Symmetric Positive Definite (SPD) matrice...

09/25/2015  Training Deep Networks with Structured Layers by Matrix Backpropagation
Deep neural network architectures have recently produced excellent resul...
