A Common Framework for Natural Gradient and Taylor based Optimisation using Manifold Theory

03/26/2018
by   Adnan Haider, et al.

This technical report constructs a theoretical framework that relates standard Taylor-approximation-based optimisation methods to Natural Gradient (NG), a method that is Fisher efficient for probabilistic models. The framework is also shown to provide mathematical justification for combining higher-order methods with NG.
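For orientation, the two families of updates that the report relates can be written in their standard textbook form (these are the usual definitions, not notation taken from the report itself):

Natural gradient:  \theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1} \nabla_\theta \mathcal{L}(\theta_t),  where  F(\theta) = \mathbb{E}_{x \sim p_\theta}\big[\nabla_\theta \log p_\theta(x)\, \nabla_\theta \log p_\theta(x)^\top\big]  is the Fisher information matrix.

Second-order Taylor (Newton) step:  \theta_{t+1} = \theta_t - \eta\, H(\theta_t)^{-1} \nabla_\theta \mathcal{L}(\theta_t),  where  H(\theta) = \nabla_\theta^2 \mathcal{L}(\theta)  is the Hessian of the loss.

Both updates precondition the gradient with a curvature matrix; the difference lies in whether that curvature is measured on the statistical manifold (Fisher information) or taken from a local Taylor expansion of the loss (Hessian).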


Related research

06/05/2021 · Tensor Normal Training for Deep Learning Models
Despite the predominant use of first-order methods for training deep lea...

05/13/2023 · Translating SUMO-K to Higher-Order Set Theory
We describe a translation from a fragment of SUMO (SUMO-K) into higher-o...

01/09/2023 · Fast and Correct Gradient-Based Optimisation for Probabilistic Programming via Smoothing
We study the foundations of variational inference, which frames posterio...

04/11/2019 · High dimensional optimal design using stochastic gradient optimisation and Fisher information gain
Finding high dimensional designs is increasingly important in applicatio...

04/11/2019 · Bayesian optimal design using stochastic gradient optimisation and Fisher information gain
Finding high dimensional designs is increasingly important in applicatio...

06/23/2020 · A Constructive, Type-Theoretic Approach to Regression via Global Optimisation
We examine the connections between deterministic, complete, and general ...

10/03/2018 · Combining Natural Gradient with Hessian Free Methods for Sequence Training
This paper presents a new optimisation approach to train Deep Neural Net...
