Training Deep Networks with Structured Layers by Matrix Backpropagation

09/25/2015
by Catalin Ionescu, et al.

Deep neural network architectures have recently produced excellent results in a variety of areas in artificial intelligence and visual recognition, well surpassing traditional shallow architectures trained using hand-designed features. The power of deep networks stems both from their ability to perform local computations followed by pointwise non-linearities over increasingly larger receptive fields, and from the simplicity and scalability of the gradient-descent training procedure based on backpropagation. An open problem is the inclusion of layers that perform global, structured matrix computations like segmentation (e.g. normalized cuts) or higher-order pooling (e.g. log-tangent space metrics defined over the manifold of symmetric positive definite matrices) while preserving the validity and efficiency of an end-to-end deep training framework. In this paper we propose a sound mathematical apparatus to formally integrate global structured computation into deep computation architectures. At the heart of our methodology is the development of the theory and practice of backpropagation that generalizes to the calculus of adjoint matrix variations. The proposed matrix backpropagation methodology applies broadly to a variety of problems in machine learning or computational perception. Here we illustrate it by performing visual segmentation experiments using the BSDS and MSCOCO benchmarks, where we show that deep networks relying on second-order pooling and normalized cuts layers, trained end-to-end using matrix backpropagation, outperform counterparts that do not take advantage of such global layers.
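
The abstract's "log-tangent space metrics defined over the manifold of symmetric positive definite matrices" corresponds to layers such as a matrix logarithm applied to a second-order (covariance) pooling matrix. As a rough illustration of what backpropagating through such a spectral layer involves, the NumPy sketch below computes Y = log(X) via an eigendecomposition and maps a gradient with respect to Y back to X using the standard divided-difference (Daleckii-Krein) form of the chain rule for spectral functions. This is a minimal sketch under those assumptions; the function names, the eps guard, and the tolerances are illustrative choices, not the authors' implementation.

```python
import numpy as np

def spd_log_forward(X, eps=1e-6):
    """Matrix-logarithm layer on an SPD matrix X (e.g. a second-order
    pooling / covariance matrix). Returns Y = log(X) and a cache for
    the backward pass."""
    lam, U = np.linalg.eigh(X)            # X = U diag(lam) U^T
    lam = np.maximum(lam, eps)            # guard against tiny/negative eigenvalues
    Y = (U * np.log(lam)) @ U.T           # U diag(log lam) U^T
    return Y, (U, lam)

def spd_log_backward(dL_dY, cache):
    """Map the gradient w.r.t. Y = log(X) back to the input X using
    dL/dX = U (G * (U^T dL/dY U)) U^T, where G holds the divided
    differences of log over the eigenvalues."""
    U, lam = cache
    log_lam = np.log(lam)
    diff = lam[:, None] - lam[None, :]
    same = np.abs(diff) < 1e-12
    safe_diff = np.where(same, 1.0, diff)
    # Off-diagonal: (log lam_i - log lam_j) / (lam_i - lam_j);
    # (near-)equal eigenvalues: the derivative 1 / lam_i.
    G = np.where(same, 1.0 / lam[:, None],
                 (log_lam[:, None] - log_lam[None, :]) / safe_diff)
    S = 0.5 * (dL_dY + dL_dY.T)           # Y is symmetric, so symmetrize the gradient
    return U @ (G * (U.T @ S @ U)) @ U.T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    X = A @ A.T + 5 * np.eye(5)           # a random SPD "pooled" matrix
    Y, cache = spd_log_forward(X)
    dL_dY = rng.standard_normal((5, 5))   # pretend gradient from the layers above
    dL_dX = spd_log_backward(dL_dY, cache)
    print(dL_dX.shape)                    # (5, 5): gradient w.r.t. the pooled matrix
```

In a full network this backward step would be chained with the gradient of the pooling operation itself (and, for the normalized-cuts layer, with the corresponding eigenvector-based derivatives), which is the calculus of adjoint matrix variations the paper develops.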

Related research:

05/03/2015 · Highway Networks
There is plenty of theoretical and empirical evidence that depth of neur...

05/02/2016 · Simple2Complex: Global Optimization by Gradient Descent
A method named simple2complex for modeling and training deep neural netw...

01/09/2021 · Training Deep Architectures Without End-to-End Backpropagation: A Brief Survey
This tutorial paper surveys training alternatives to end-to-end backprop...

02/08/2017 · Backpropagation Training for Fisher Vectors within Neural Networks
Fisher-Vectors (FV) encode higher-order statistics of a set of multiple ...

08/15/2016 · A Riemannian Network for SPD Matrix Learning
Symmetric Positive Definite (SPD) matrix learning methods have become po...

11/17/2016 · Generalized BackPropagation, Étude De Cas: Orthogonality
This paper introduces an extension of the backpropagation algorithm that...

11/17/2016 · Building Deep Networks on Grassmann Manifolds
Learning representations on Grassmann manifolds is popular in quite a fe...
