PSO-Convolutional Neural Networks with Heterogeneous Learning Rate

by Nguyen Huu Phong, et al.

Convolutional Neural Networks (ConvNets or CNNs) have been widely deployed in computer vision and related fields. Nevertheless, the training dynamics of these networks remain elusive: training them is hard and computationally expensive. A myriad of architectures and training strategies have been proposed to overcome this challenge and to address several problems in pattern recognition such as speech, image, and action recognition, as well as object detection. In this article, we propose a novel Particle Swarm Optimization (PSO) based training for ConvNets. In this framework, the vector of weights of each ConvNet is cast as the position of a particle in phase space, whereby PSO collaborative dynamics intertwine with Stochastic Gradient Descent (SGD) to boost training performance and generalization. Our approach proceeds as follows: i) [regular phase] each ConvNet is trained independently via SGD; ii) [collaborative phase] the ConvNets share among themselves their current vectors of weights (particle positions) along with their gradient estimates of the loss function. Distinct step sizes are adopted by distinct ConvNets. By properly blending ConvNets with large (possibly random) step sizes and more conservative ones, we obtain an algorithm with competitive performance with respect to other PSO-based approaches on Cifar-10 (accuracy of 98.31%) with only four ConvNets; such results are expected to scale with the number of collaborating ConvNets. We make our source code available for download.
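The two-phase scheme described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: a toy quadratic loss stands in for a ConvNet, and the particular PSO velocity rule and constants (inertia 0.7, attraction 1.4, step 0.1) are assumptions chosen for the sketch. It shows the alternation of an independent SGD phase with heterogeneous learning rates and a collaborative PSO phase that pulls particles toward personal and global bests.

```python
# Hedged sketch of the regular/collaborative alternation described in the
# abstract. The loss, the PSO constants, and the velocity rule are
# illustrative assumptions, not the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)

def loss(w):   # toy stand-in for a ConvNet's loss surface
    return np.sum((w - 3.0) ** 2)

def grad(w):   # its exact gradient (SGD would use a stochastic estimate)
    return 2.0 * (w - 3.0)

n_particles, dim = 4, 5
positions = rng.normal(size=(n_particles, dim))   # each row: one ConvNet's weights
velocities = np.zeros_like(positions)
pbest = positions.copy()                          # personal best positions
pbest_val = np.array([loss(p) for p in positions])

# heterogeneous learning rates: conservative through aggressive step sizes
lrs = np.array([0.01, 0.05, 0.1, 0.3])

for it in range(100):
    # i) regular phase: each "ConvNet" takes an independent SGD step
    for i in range(n_particles):
        positions[i] -= lrs[i] * grad(positions[i])

    # ii) collaborative phase: share positions, update bests, PSO-style pull
    vals = np.array([loss(p) for p in positions])
    improved = vals < pbest_val
    pbest[improved] = positions[improved]
    pbest_val[improved] = vals[improved]
    g = pbest[np.argmin(pbest_val)]               # global best position
    r1, r2 = rng.random(2)
    velocities = (0.7 * velocities
                  + 1.4 * r1 * (pbest - positions)
                  + 1.4 * r2 * (g - positions))
    positions += 0.1 * velocities

best = pbest[np.argmin(pbest_val)]
```

In this sketch the globally best particle feels no attraction term (it sits at both its personal and the global best), so it keeps refining by pure SGD while the others are pulled toward it, which is the intended division of labor between aggressive and conservative step sizes.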




Video Action Recognition Collaborative Learning with Dynamics via PSO-ConvNet Transformer

Human Action Recognition (HAR) involves the task of categorizing actions...

SGD with large step sizes learns sparse features

We showcase important features of the dynamics of the Stochastic Gradien...

Exploiting Adam-like Optimization Algorithms to Improve the Performance of Convolutional Neural Networks

Stochastic gradient descent (SGD) is the main approach for training deep...

AdaNorm: Adaptive Gradient Norm Correction based Optimizer for CNNs

The stochastic gradient descent (SGD) optimizers are generally used to t...

Tom: Leveraging trend of the observed gradients for faster convergence

The success of deep learning can be attributed to various factors such a...

Deep Frank-Wolfe For Neural Network Optimization

Learning a deep neural network requires solving a challenging optimizati...

On the distance between two neural networks and the stability of learning

How far apart are two neural networks? This is a foundational question i...
