Empowering swarm-based optimizers with multi-scale search to enhance Gradient Descent initialization performance
Swarm-based optimizers such as Particle Swarm Optimization (PSO) and the Imperialist Competitive Algorithm (ICA), which operate through cooperation or competition among groups of candidate solutions, cannot search at multiple scales of locality and globality and lack nested localities. Consequently, when used as initializers for Gradient Descent (GD) in hybrid optimizers, they may give unsatisfactory results on multimodal problems such as nonlinear subspace learning and neural network training, whose nonlinear, multi-layer models induce hierarchies of convex regions. To enable homogeneous search across multiple scales, a framework is proposed that equips PSO and ICA with multi-scale search capability. The resulting optimizers are then evaluated both standalone and hybridized with GD. In the hybrid setting, each optimizer serves as a GD initializer for a nonlinear subspace filtering objective over EEG data, and optimization loss and validation accuracy are compared against other GD-based hybrids. In the standalone setting, the proposed optimizers are compared, with respect to solution error, against PSO, ICA, CLPSO, and CICA, which are the variants most often used in hybrid learning-based approaches. The results show that the proposed optimizers outperform algorithms of related context in both standalone and GD-hybrid modes.
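To make the hybrid pattern concrete, the following is a minimal sketch (not the paper's implementation) of swarm-initialized gradient descent: a plain PSO, standing in for the proposed multi-scale variants, explores the multimodal landscape, and its best particle seeds a GD refinement. The Rastrigin objective, the hyperparameters, and all function names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def objective(x):
    # Rastrigin: a standard multimodal test function, used here as a
    # stand-in for the paper's nonlinear subspace filtering objective.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def gradient(x):
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

def pso(n_particles=30, dim=2, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Vanilla global-best PSO; the paper's multi-scale framework would
    # replace this with a search over nested levels of locality.
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

def gd(x0, lr=1e-3, steps=500):
    # Local gradient refinement inside the basin chosen by the swarm.
    x = x0.copy()
    for _ in range(steps):
        x -= lr * gradient(x)
    return x

x0 = pso()          # global exploration picks a promising basin
x_star = gd(x0)     # GD polishes the solution within that basin
print(objective(x0), objective(x_star))
```

The design point the abstract argues is that the quality of x0 dominates GD's final loss on objectives with hierarchies of convex regions, which is why the swarm stage's ability to search at multiple scales matters.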