AutoRWN: automatic construction and training of random weight networks using competitive swarm of agents

11/14/2020
by Ali Asghar Heidari, et al.

Random Weight Networks have been used extensively over the last decade because they offer strong features such as fast learning and good generalization performance. Most traditional training techniques for Random Weight Networks select the connection weights and hidden biases at random and therefore suffer from local optima stagnation and degraded convergence. The literature shows that stochastic population-based optimization techniques are a well-regarded and reliable alternative for Random Weight Network optimization because of their high local optima avoidance and flexibility. In addition, many practitioners and non-expert users find it difficult to set the remaining parameters of the network, such as the number of hidden neurons, the activation function, and the regularization factor. In this paper, an approach for training Random Weight Networks is proposed based on a recent variant of particle swarm optimization called competitive swarm optimization. Unlike most Random Weight Network training techniques, which optimize only the input weights and hidden biases, the proposed approach automatically tunes the weights, biases, number of hidden neurons, and regularization factor, as well as the activation function embedded in the network, simultaneously. The goal is to help users effectively identify a proper structure and suitable hyperparameter values for their applications while obtaining reasonable prediction results. Twenty benchmark classification datasets are used to compare the proposed approach with different types of basic and hybrid Random Weight Network-based models. The experimental results on the benchmark datasets show that reasonable classification results can be obtained by automatically tuning the hyperparameters with the proposed approach. For more information, please refer to http://aliasgharheidari.com
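The following is a minimal, illustrative sketch (not the authors' released code) of the idea described above: a competitive swarm optimizer evolves a real-valued vector that encodes an RWN's input weights, hidden biases, number of hidden neurons, regularization factor, and activation function, while the output weights are obtained in closed form by regularized least squares. The dataset, search bounds, and swarm settings are assumptions chosen only for demonstration, and NumPy/scikit-learn are assumed to be available.

```python
# Minimal sketch of CSO-trained RWN (illustrative assumptions throughout).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=0.3, random_state=0)
Ytr = np.eye(2)[ytr]                                # one-hot targets for least squares

MAX_H = 50                                          # assumed upper bound on hidden neurons
D_IN = Xtr.shape[1]
ACTS = [np.tanh,
        lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -50, 50))),   # sigmoid
        lambda z: np.maximum(z, 0.0)]                            # ReLU
DIM = 3 + MAX_H * (D_IN + 1)                        # [n_hidden, log10(C), act] + weights/biases

def decode(p):
    """Split one particle into RWN hyperparameters, input weights, and biases."""
    n_h = int(np.clip(round(p[0]), 1, MAX_H))
    C = 10.0 ** np.clip(p[1], -5, 5)                # regularization factor
    act = ACTS[int(np.clip(round(p[2]), 0, len(ACTS) - 1))]
    wb = p[3:3 + n_h * (D_IN + 1)].reshape(n_h, D_IN + 1)
    return n_h, C, act, wb[:, :D_IN].T, wb[:, D_IN]

def fitness(p):
    """Validation error of the RWN encoded by particle p (lower is better)."""
    n_h, C, act, W, b = decode(p)
    H = act(Xtr @ W + b)                             # hidden-layer output matrix
    beta = np.linalg.solve(H.T @ H + np.eye(n_h) / C, H.T @ Ytr)   # output weights
    pred = np.argmax(act(Xva @ W + b) @ beta, axis=1)
    return np.mean(pred != yva)

# Competitive swarm optimizer: particles are paired at random each generation,
# and only the loser of each pair is updated toward the winner and the swarm mean.
N, ITERS, PHI = 40, 100, 0.1
swarm = rng.uniform(-1, 1, (N, DIM))
swarm[:, 0] = rng.uniform(1, MAX_H, N)               # n_hidden gene
swarm[:, 1] = rng.uniform(-3, 3, N)                  # log10(C) gene
swarm[:, 2] = rng.uniform(0, len(ACTS) - 1, N)       # activation gene
vel = np.zeros_like(swarm)

for _ in range(ITERS):
    fit = np.array([fitness(p) for p in swarm])
    order = rng.permutation(N)
    mean_pos = swarm.mean(axis=0)
    for i, j in zip(order[::2], order[1::2]):
        w, l = (i, j) if fit[i] < fit[j] else (j, i)  # winner keeps its position
        r1, r2, r3 = rng.random((3, DIM))
        vel[l] = r1 * vel[l] + r2 * (swarm[w] - swarm[l]) + PHI * r3 * (mean_pos - swarm[l])
        swarm[l] = swarm[l] + vel[l]

best = swarm[np.argmin([fitness(p) for p in swarm])]
n_h, C, _, _, _ = decode(best)
print(f"validation error={fitness(best):.3f}, hidden neurons={n_h}, C={C:.3g}")
```

Only the loser of each random pairing is updated; the winner is carried over unchanged, which is the main difference between competitive swarm optimization and canonical particle swarm optimization and is what keeps the search stable in high-dimensional encodings like this one.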

Related research

10/13/2017: A Method of Generating Random Weights and Biases in Feedforward Neural Networks with Random Hidden Nodes
Neural networks with random hidden nodes have gained increasing interest...

01/07/2019: On the effect of the activation function on the distribution of hidden nodes in a deep network
We analyze the joint probability distribution on the lengths of the vect...

11/22/2018: Conditioning Optimization of Extreme Learning Machine by Multitask Beetle Antennae Swarm Algorithm
Extreme learning machine (ELM) as a simple and rapid neural network has ...

01/16/2021: Slot Machines: Discovering Winning Combinations of Random Weights in Neural Networks
In contrast to traditional weight optimization in a continuous space, we...

06/29/2023: Weight Compander: A Simple Weight Reparameterization for Regularization
Regularization is a set of techniques that are used to improve the gener...

02/13/2014: Zero-bias autoencoders and the benefits of co-adapting features
Regularized training of an autoencoder typically results in hidden unit ...

09/04/2019: A Constructive Approach for Data-Driven Randomized Learning of Feedforward Neural Networks
Feedforward neural networks with random hidden nodes suffer from a probl...
