Extended Mixture of MLP Experts by Hybrid of Conjugate Gradient Method and Modified Cuckoo Search

02/17/2012
by Hamid Salimi, et al.

This paper investigates a new method for improving the learning algorithm of the Mixture of Experts (ME) model using a hybrid of Modified Cuckoo Search (MCS) and Conjugate Gradient (CG) as a second-order optimization technique. The CG technique is combined with the Back-Propagation (BP) algorithm to yield a much more efficient learning algorithm for the ME structure. In addition, the expert and gating networks in the enhanced model are replaced by CG-based Multi-Layer Perceptrons (MLPs) to provide faster and more accurate learning. Because CG depends considerably on the initial connection weights of the Artificial Neural Network (ANN), a metaheuristic algorithm, the so-called Modified Cuckoo Search, is applied to select optimal initial weights. The performance of the proposed method is compared with Gradient Descent based ME (GDME) and Conjugate Gradient based ME (CGME) on classification and regression problems. The experimental results show that the hybrid MCS and CG based ME (MCS-CGME) achieves faster convergence and better performance on the benchmark data sets used.
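The two-stage idea described above, a heavy-tailed global search for good initial weights followed by conjugate-gradient fine-tuning, can be sketched roughly as follows. This is a simplified stand-in, not the paper's method: a tiny tanh MLP on a toy regression task, a crude Cauchy-step random search in place of the full MCS (which uses Lévy flights and nest abandonment), and Fletcher-Reeves nonlinear CG with backtracking line search in place of the paper's CG-based BP. All names, sizes, and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: fit y = sin(x) on [-2, 2].
X = np.linspace(-2.0, 2.0, 40).reshape(-1, 1)
Y = np.sin(X)

H = 6                 # hidden units (illustrative choice)
dim = 3 * H + 1       # total parameter count: W1, b1, W2, b2

def unpack(w):
    """Split a flat parameter vector into MLP weight matrices."""
    W1 = w[:H].reshape(1, H)
    b1 = w[H:2 * H]
    W2 = w[2 * H:3 * H].reshape(H, 1)
    b2 = w[3 * H:]
    return W1, b1, W2, b2

def loss_grad(w):
    """Half-MSE loss and its analytic gradient (standard backprop)."""
    W1, b1, W2, b2 = unpack(w)
    Z = np.tanh(X @ W1 + b1)            # hidden activations
    P = Z @ W2 + b2                     # network output
    E = P - Y
    n = len(X)
    loss = 0.5 * np.mean(E ** 2)
    dP = E / n                          # d(loss)/d(output)
    dZ = (dP @ W2.T) * (1.0 - Z ** 2)   # back through tanh
    g = np.concatenate([(X.T @ dZ).ravel(), dZ.sum(0),
                        (Z.T @ dP).ravel(), dP.sum(0)])
    return loss, g

# Stage 1: metaheuristic-style global search for initial weights
# (a crude stand-in for MCS: random nests, then heavy-tailed local steps).
nests = rng.normal(0.0, 1.0, (15, dim))
best = min(nests, key=lambda w: loss_grad(w)[0])
for _ in range(50):
    cand = best + 0.1 * rng.standard_cauchy(dim)  # heavy-tailed step
    if loss_grad(cand)[0] < loss_grad(best)[0]:
        best = cand

# Stage 2: Fletcher-Reeves nonlinear CG fine-tuning from that start.
w = best.copy()
loss, g = loss_grad(w)
d = -g
for _ in range(200):
    t = 1.0
    new_loss, new_g = loss_grad(w + t * d)
    while new_loss >= loss and t > 1e-8:  # backtracking line search
        t *= 0.5
        new_loss, new_g = loss_grad(w + t * d)
    if new_loss >= loss:
        break                             # no descent direction found
    w = w + t * d
    beta = (new_g @ new_g) / (g @ g)      # Fletcher-Reeves update
    d = -new_g + beta * d
    loss, g = new_loss, new_g

print(f"final training loss: {loss:.4f}")
```

Because the line search only accepts strictly decreasing steps, the CG stage can never end with a worse loss than the metaheuristic stage's best nest, which mirrors the paper's motivation: the global search supplies a good basin, and CG converges quickly inside it.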

