Sequential training algorithm for neural networks

05/17/2019
by Jongrae Kim, et al.

A sequential training method for large-scale feedforward neural networks is presented. Each layer of the neural network is decoupled and trained separately. After the training is completed for each layer, the layers are combined into the full network. The resulting network is sub-optimal compared with training the full network, assuming the full-network optimum could actually be reached. In practice, however, reaching the optimal solution for the full network is infeasible or requires long computing times. The proposed sequential approach reduces the required computational resources significantly and tends to converge better, since only a single layer is optimised at each optimisation step. The modifications to existing algorithms needed to implement the sequential training are minimal. The performance is verified with a simple example.
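
The abstract does not spell out how each decoupled layer is fitted, so the following is only a minimal sketch of one plausible layer-by-layer scheme, written in PyTorch; the temporary linear readout head used to fit each hidden layer is an assumption introduced here, not something stated in the abstract. Each layer is trained separately, frozen, and its outputs are passed on as the inputs of the next layer before all trained layers are combined into the full feedforward network.

```python
# Minimal sketch of sequential, layer-by-layer training
# (an illustrative interpretation, not necessarily the authors' exact decoupling scheme).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic regression data standing in for a real training set.
X = torch.randn(512, 10)
y = torch.sin(X.sum(dim=1, keepdim=True))

layer_sizes = [10, 32, 32, 1]
trained_layers = []
inputs = X  # features seen by the layer currently being trained

for i in range(len(layer_sizes) - 1):
    last = i == len(layer_sizes) - 2
    layer = (nn.Linear(layer_sizes[i], layer_sizes[i + 1]) if last
             else nn.Sequential(nn.Linear(layer_sizes[i], layer_sizes[i + 1]), nn.Tanh()))
    # Hidden layers get a temporary linear readout (assumed here) so each
    # decoupled layer can be fitted against the targets on its own.
    head = nn.Identity() if last else nn.Linear(layer_sizes[i + 1], 1)
    params = list(layer.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        loss = nn.functional.mse_loss(head(layer(inputs)), y)
        loss.backward()
        opt.step()
    trained_layers.append(layer)
    # Freeze the trained layer and feed its outputs to the next optimisation step.
    inputs = layer(inputs).detach()

# Combine the separately trained layers into the full feedforward network.
full_net = nn.Sequential(*trained_layers)
print("combined-network MSE:", nn.functional.mse_loss(full_net(X), y).item())
```

Because each optimisation step touches only one layer's parameters, the per-step problem is far smaller than full end-to-end back-propagation, which is the reduction in computing resources the abstract refers to.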
