New optimization algorithms for neural network training using operator splitting techniques

04/29/2019
by Cristian Daniel Alecsa, et al.

In this paper we present a new type of optimization algorithm adapted for neural network training. These algorithms are based on a sequential operator splitting technique applied to associated dynamical systems. Furthermore, we investigate through numerical simulations the empirical rate of convergence of these iterative schemes toward a local minimum of the loss function, for suitable choices of the underlying hyper-parameters. We validate the convergence of these optimizers using the accuracy and loss-function results on the MNIST, Fashion-MNIST and CIFAR-10 classification datasets.
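To illustrate the general idea of building an optimizer from a sequential splitting of a dynamical system (this is only a minimal sketch under assumed choices, not the authors' specific scheme), the example below applies a Lie-Trotter splitting to a damped heavy-ball system x'' + gamma*x' + grad f(x) = 0: the friction part is solved exactly, and the force/position part is then advanced by an explicit Euler step.

```python
# Illustrative sketch only: a sequential (Lie-Trotter) splitting of a damped
# heavy-ball system used as an iterative optimizer. The particular system,
# splitting, and hyper-parameters (h, gamma) are assumptions for demonstration.
import numpy as np

def split_step(x, v, grad_f, h=0.1, gamma=2.0):
    """One sequential-splitting step: friction flow first, then force/position flow."""
    # Operator A: friction v' = -gamma * v, solved exactly over a step of size h.
    v = np.exp(-gamma * h) * v
    # Operator B: v' = -grad f(x), x' = v, advanced by an explicit Euler step.
    v = v - h * grad_f(x)
    x = x + h * v
    return x, v

# Toy usage on a quadratic loss f(x) = 0.5 * ||A x - b||^2 (stand-in for a network loss).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)

x, v = np.zeros(5), np.zeros(5)
for _ in range(200):
    x, v = split_step(x, v, grad_f)
print("final loss:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

Composing the exact friction flow with an Euler step for the gradient term yields a momentum-like update; other splittings of the same dynamics lead to different iterative schemes of this family.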

Related research

10/12/2021 - On Convergence of Training Loss Without Reaching Stationary Points
It is a well-known fact that nonconvex optimization is computationally i...

07/06/2017 - Convergence Analysis of Optimization Algorithms
The regret bound of an optimization algorithm is one of the basic crite...

07/25/2019 - On the Koopman operator of algorithms
A systematic mathematical framework for the study of numerical algorithm...

09/07/2021 - Revisiting Recursive Least Squares for Training Deep Neural Networks
Recursive least squares (RLS) algorithms were once widely used for train...

06/23/2023 - Minibatch training of neural network ensembles via trajectory sampling
Most iterative neural network training methods use estimates of the loss...

11/20/2019 - Perceptual Loss Function for Neural Modelling of Audio Systems
This work investigates alternate pre-emphasis filters used as part of th...

06/03/2020 - Optimizing Neural Networks via Koopman Operator Theory
Koopman operator theory, a powerful framework for discovering the underl...
