Use of cooperative GSOs with weight decay for neural network optimization

07/05/2021
by Danielle Silva, et al.

Training Artificial Neural Networks is a complex task of great importance in supervised learning problems. Evolutionary Algorithms are widely used as global optimization techniques, and they have been applied to train Artificial Neural Networks for various tasks. The Group Search Optimizer (GSO) is an optimization algorithm inspired by the search behaviour of animals. In this article we present two new hybrid approaches: CGSO-Hk-WD and CGSO-Sk-WD. Cooperative GSOs are based on the divide-and-conquer paradigm, employing cooperative behaviour between GSO groups to improve the performance of the standard GSO. We also apply the weight decay (WD) strategy to improve the generalization ability of the networks. The results show that the cooperative GSOs achieve better performance than the traditional GSO for classification problems on benchmark datasets such as Cancer, Diabetes, Ecoli, and Glass.
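The abstract does not include code, but the two ingredients it names can be sketched briefly. The Python sketch below assumes a single-hidden-layer MLP whose weights are encoded as one flat vector (a common encoding in evolutionary training), a mean-squared-error training objective, and an L2 penalty as the weight-decay term; the function names, the vector layout, and the decay parameter are illustrative assumptions, not the paper's actual formulation. The cooperative evaluation follows the divide-and-conquer idea: each GSO group optimizes only its own slice of the weight vector, and the remaining positions are filled from a shared context vector assembled from the other groups' best members.

    import numpy as np

    def unpack(w, n_in, n_hid, n_out):
        # Split a flat candidate vector into MLP parameters.
        # Assumed layout: [W1 | b1 | W2 | b2].
        i = 0
        W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
        b1 = w[i:i + n_hid];                              i += n_hid
        W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
        b2 = w[i:i + n_out]
        return W1, b1, W2, b2

    def fitness(w, X, y, n_in, n_hid, n_out, decay=1e-4):
        # Training error plus the weight-decay (WD) penalty on all weights.
        W1, b1, W2, b2 = unpack(w, n_in, n_hid, n_out)
        hidden = np.tanh(X @ W1 + b1)
        out = hidden @ W2 + b2
        error = np.mean((out - y) ** 2)
        return error + decay * np.sum(w ** 2)

    def cooperative_fitness(subvector, group_slice, context,
                            X, y, n_in, n_hid, n_out, decay=1e-4):
        # Divide-and-conquer evaluation: a group proposes values only for
        # its own slice of the full weight vector; the remaining positions
        # come from a shared context vector built from the other groups.
        w = context.copy()
        w[group_slice] = subvector
        return fitness(w, X, y, n_in, n_hid, n_out, decay)

The slice-based partition above is only one possible way of dividing the weight vector among groups; the paper's CGSO-Hk-WD and CGSO-Sk-WD variants define their own decompositions.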


Related research:

06/12/2014 · A swarm optimization algorithm inspired in the behavior of the social-spider
Swarm intelligence is a research field that models the collective behavi...

07/04/2013 · Clustering of Complex Networks and Community Detection Using Group Search Optimization
Group Search Optimizer (GSO) is one of the best algorithms, is very new i...

12/31/2012 · Training a Functional Link Neural Network Using an Artificial Bee Colony for Solving a Classification Problems
Artificial Neural Networks have emerged as an important tool for classif...

04/11/2022 · Position-wise optimizer: A nature-inspired optimization algorithm
The human nervous system utilizes synaptic plasticity to solve optimizat...

10/28/2020 · Harris Hawks Optimization: Algorithm and Applications
In this paper, a novel population-based, nature-inspired optimization pa...

09/15/2016 · A Tutorial about Random Neural Networks in Supervised Learning
Random Neural Networks (RNNs) are a class of Neural Networks (NNs) that ...

02/17/2012 · Extended Mixture of MLP Experts by Hybrid of Conjugate Gradient Method and Modified Cuckoo Search
This paper investigates a new method for improving the learning algorith...
