AutoRWN: automatic construction and training of random weight networks using competitive swarm of agents

by Ali Asghar Heidari, et al.

Random Weight Networks have been used extensively in many applications over the last decade because they offer strong features such as fast learning and good generalization performance. Most traditional training techniques for Random Weight Networks randomly select the connection weights and hidden biases, and thus suffer from local optima stagnation and degraded convergence. The literature shows that stochastic population-based optimization techniques are a well-regarded and reliable alternative for Random Weight Network optimization because of their high local optima avoidance and flexibility. In addition, many practitioners and non-expert users find it difficult to set the other parameters of the network, such as the number of hidden neurons, the activation function, and the regularization factor. In this paper, an approach for training Random Weight Networks is proposed based on a recent variant of particle swarm optimization called competitive swarm optimization. Unlike most Random Weight Network training techniques, which optimize only the input weights and hidden biases, the proposed approach automatically and simultaneously tunes the weights, the biases, the number of hidden neurons, the regularization factor, and the activation function embedded in the network. The goal is to help users effectively identify a proper structure and hyperparameter values for their applications while obtaining reasonable prediction results. Twenty benchmark classification datasets are used to compare the proposed approach with different types of basic and hybrid Random Weight Network-based models. The experimental results on the benchmark datasets show that reasonable classification results can be obtained by automatically tuning the hyperparameters with the proposed approach.
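For context, the model being tuned can be sketched as a standard random weight network: the input weights and hidden biases are drawn at random, and only the output weights are solved in closed form with ridge regularization. The sketch below is illustrative, not the paper's implementation — the function names, defaults, and random distributions are assumptions. In AutoRWN, the competitive swarm optimizer would additionally search over the hidden size, the regularization factor, and the activation function, rather than leaving them fixed as here:

```python
import numpy as np

def fit_rwn(X, y, n_hidden=50, reg=1e-2, activation=np.tanh, seed=0):
    """Fit a random weight network (hypothetical helper, not from the paper).

    Input weights W and biases b are random; only the output weights beta
    are trained, via regularized least squares.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = activation(X @ W + b)                        # hidden-layer outputs
    # Closed-form ridge solution: beta = (H^T H + reg * I)^-1 H^T y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict_rwn(X, W, b, beta, activation=np.tanh):
    return activation(X @ W + b) @ beta
```

A swarm-based trainer like the one proposed would encode `W`, `b`, `n_hidden`, `reg`, and a choice of `activation` in each particle, and score candidates by validation accuracy instead of using the fixed defaults shown above.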





A Method of Generating Random Weights and Biases in Feedforward Neural Networks with Random Hidden Nodes

Neural networks with random hidden nodes have gained increasing interest...

On the effect of the activation function on the distribution of hidden nodes in a deep network

We analyze the joint probability distribution on the lengths of the vect...

Conditioning Optimization of Extreme Learning Machine by Multitask Beetle Antennae Swarm Algorithm

Extreme learning machine (ELM) as a simple and rapid neural network has ...

Slot Machines: Discovering Winning Combinations of Random Weights in Neural Networks

In contrast to traditional weight optimization in a continuous space, we...

Zero-bias autoencoders and the benefits of co-adapting features

Regularized training of an autoencoder typically results in hidden unit ...

A Constructive Approach for Data-Driven Randomized Learning of Feedforward Neural Networks

Feedforward neural networks with random hidden nodes suffer from a probl...

Orthogonal Stochastic Configuration Networks with Adaptive Construction Parameter for Data Analytics

As a randomized learner model, SCNs are remarkable that the random weigh...