A SOM-based Gradient-Free Deep Learning Method with Convergence Analysis

01/12/2021
by Shaosheng Xu, et al.

Because the gradient descent method used in deep learning raises a series of problems, this paper proposes a novel gradient-free deep learning structure. By adding a new module to the traditional Self-Organizing Map and introducing residual connections into the map, a Deep Valued Self-Organizing Map network is constructed. The paper also proves a convergence result for this deep Valued Self-Organizing Map network, giving an inequality that relates the designed parameters to the input dimension and the prediction loss.
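The abstract does not specify the Deep Valued SOM's update rule, but the classical Self-Organizing Map it builds on is itself gradient-free: nodes compete for each input, and the winner and its grid neighbors are pulled toward the sample rather than updated by backpropagated gradients. A minimal sketch of that standard SOM training loop (a background illustration, not the paper's method; all names and parameters here are generic assumptions):

```python
import numpy as np

def train_som(data, grid_h=4, grid_w=4, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Classical SOM training loop (illustrative, not the paper's Deep Valued SOM).

    Each grid node holds a weight vector; for every input sample, the
    best-matching unit (BMU) and its grid neighbors move toward the sample.
    No gradients of a loss are computed anywhere.
    """
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.standard_normal((grid_h, grid_w, dim))
    # Grid coordinates of each node, used for neighborhood distances.
    coords = np.stack(
        np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"),
        axis=-1,
    )  # shape (grid_h, grid_w, 2)
    for epoch in range(epochs):
        # Linearly decay the learning rate and neighborhood radius.
        frac = epoch / max(epochs - 1, 1)
        lr = lr0 * (1.0 - frac) + 0.01 * frac
        sigma = sigma0 * (1.0 - frac) + 0.5 * frac
        for x in data:
            # BMU: the node whose weight vector is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood on the grid, centered at the BMU.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2.0 * sigma ** 2))[..., None]
            # Competitive update: pull weights toward x, strongest at the BMU.
            weights += lr * h * (x - weights)
    return weights
```

This competitive, neighborhood-weighted update is what makes the approach "gradient-free"; stacking such maps with added modules and residual connections is the paper's contribution, which the convergence inequality then analyzes.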

Related research

09/21/2021
A Novel Structured Natural Gradient Descent for Deep Learning
Natural gradient descent (NGD) provided deep insights and powerful tools...

02/21/2023
Deep Learning via Neural Energy Descent
This paper proposes the Neural Energy Descent (NED) via neural network e...

05/25/2022
An Experimental Comparison Between Temporal Difference and Residual Gradient with Neural Network Approximation
Gradient descent or its variants are popular in training neural networks...

07/07/2020
Gradient Descent Converges to Ridgelet Spectrum
Deep learning achieves a high generalization performance in practice, de...

07/07/2020
Structure Probing Neural Network Deflation
Deep learning is a powerful tool for solving nonlinear differential equa...

01/04/2021
The Gaussian entropy map in valued fields
We exhibit the analog of the entropy map for multivariate Gaussian distr...

01/11/2023
Padding Module: Learning the Padding in Deep Neural Networks
During the last decades, many studies have been dedicated to improving t...
