A Neural Network model with Bidirectional Whitening

04/24/2017
by Yuki Fujimoto, et al.

We present a new model and algorithm that perform efficient natural gradient descent for multilayer perceptrons. Natural gradient descent was originally proposed from the viewpoint of information geometry, and it performs the steepest-descent updates on a manifold in Riemannian space. In particular, we extend the approach taken by the "whitened neural networks" model: we apply the whitening process not only in the feed-forward direction, as in the original model, but also in the back-propagation phase. The efficacy of this "bidirectional whitened neural networks" model is demonstrated on handwritten character recognition data (the MNIST dataset).
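As context: natural gradient descent preconditions the ordinary gradient with the inverse Fisher information matrix, updating parameters as θ ← θ − η F(θ)⁻¹ ∇θ L(θ), and whitening the quantities that enter each layer's gradient is a cheap way to push the Fisher matrix toward the identity. The NumPy sketch below only illustrates this whitening idea in both directions; it is not the authors' implementation, and every name in it (zca_whitening_matrix, U_fwd, U_bwd, and the random stand-ins for activations and deltas) is hypothetical.

```python
import numpy as np

def zca_whitening_matrix(X, eps=1e-5):
    """Return (U, mu) such that (X - mu) @ U has ~identity covariance.

    X holds one sample per row; eps regularizes small eigenvalues.
    """
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    U = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return U, mu

rng = np.random.default_rng(0)
A = rng.normal(size=(256, 50))  # stand-in for a layer's inputs (batch x features)
G = rng.normal(size=(256, 50))  # stand-in for back-propagated deltas at that layer

# "Bidirectional" whitening: one transform for the feed-forward
# direction and one for the back-propagation direction.
U_fwd, mu_fwd = zca_whitening_matrix(A)
U_bwd, mu_bwd = zca_whitening_matrix(G)

A_white = (A - mu_fwd) @ U_fwd
G_white = (G - mu_bwd) @ U_bwd

# Both whitened covariances are close to the identity, so an SGD step
# computed from A_white and G_white approximates a natural gradient step.
print(np.allclose(np.cov(A_white, rowvar=False), np.eye(50), atol=1e-1))
print(np.allclose(np.cov(G_white, rowvar=False), np.eye(50), atol=1e-1))
```

Whitening both the forward activations and the backward deltas corresponds to normalizing the two Kronecker-style factors of a layer's Fisher block, which is why plain gradient steps in the whitened coordinates can behave like natural gradient steps.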


