
PSO-Convolutional Neural Networks with Heterogeneous Learning Rate

05/20/2022
by   Nguyen Huu Phong, et al.
University of Coimbra

Convolutional Neural Networks (ConvNets or CNNs) have been widely deployed in computer vision and related fields. Nevertheless, the training dynamics of these neural networks remain elusive: they are hard and computationally expensive to train. A myriad of architectures and training strategies have been proposed to overcome this challenge and to address problems such as speech, image and action recognition as well as object detection. In this article, we propose a novel Particle Swarm Optimization (PSO) based training for ConvNets. In this framework, the vector of weights of each ConvNet is cast as the position of a particle in phase space, whereby PSO collaborative dynamics are intertwined with Stochastic Gradient Descent (SGD) in order to boost training performance and generalization. Our approach goes as follows: i) [regular phase] each ConvNet is trained independently via SGD; ii) [collaborative phase] ConvNets share among themselves their current weight vectors (particle positions) along with their gradient estimates of the loss function. Distinct ConvNets are assigned distinct step sizes. By properly blending ConvNets with large (possibly random) step sizes and more conservative ones, we propose an algorithm with competitive performance with respect to other PSO-based approaches on CIFAR-10 (accuracy of 98.31% with only four ConvNets); such results are expected to scale with the number of collaborating ConvNets. We make our source code available for download at https://github.com/leonlha/PSO-ConvNet-Dynamics.
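
The two-phase scheme can be sketched in a few lines of Python. The example below is a minimal illustration, not the authors' implementation (see the linked repository for that): a noisy quadratic stands in for the ConvNet loss, each particle's weight vector plays the role of its position, an independent SGD phase alternates with a PSO-style collaborative update, and four particles mix conservative step sizes with a larger random one. All names, hyperparameters, and the surrogate loss are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact dynamics): hybrid SGD + PSO-style
# collaboration over a toy quadratic loss standing in for a ConvNet loss.
# Particle = weight vector; each particle has its own (heterogeneous) step size.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10                       # stand-in for the flattened ConvNet weight vector
TARGET = rng.normal(size=DIM)  # toy optimum of the surrogate loss


def loss(w):
    return 0.5 * np.sum((w - TARGET) ** 2)


def grad(w):
    # Noisy gradient to mimic stochastic (mini-batch) gradient estimates.
    return (w - TARGET) + 0.05 * rng.normal(size=DIM)


class Particle:
    def __init__(self, step_size):
        self.w = rng.normal(size=DIM)   # "position": the weight vector
        self.v = np.zeros(DIM)          # PSO velocity
        self.step_size = step_size      # heterogeneous learning rate
        self.best_w = self.w.copy()
        self.best_loss = loss(self.w)

    def sgd_phase(self, n_steps=20):
        # Regular phase: independent SGD with this particle's own step size.
        for _ in range(n_steps):
            self.w -= self.step_size * grad(self.w)
        self._update_best()

    def collaborate(self, global_best, inertia=0.7, c1=1.5, c2=1.5):
        # Collaborative phase: PSO velocity update blended with the local
        # gradient estimate shared alongside the position.
        r1, r2 = rng.random(DIM), rng.random(DIM)
        self.v = (inertia * self.v
                  + c1 * r1 * (self.best_w - self.w)
                  + c2 * r2 * (global_best - self.w)
                  - self.step_size * grad(self.w))
        self.w += self.v
        self._update_best()

    def _update_best(self):
        current = loss(self.w)
        if current < self.best_loss:
            self.best_loss, self.best_w = current, self.w.copy()


# Four collaborating "ConvNets": conservative step sizes mixed with a larger,
# randomly drawn one.
swarm = [Particle(s) for s in (1e-3, 1e-2, 1e-1, rng.uniform(0.1, 0.5))]

for _ in range(30):
    for p in swarm:
        p.sgd_phase()
    global_best = min(swarm, key=lambda p: p.best_loss).best_w
    for p in swarm:
        p.collaborate(global_best)

print("best loss:", min(p.best_loss for p in swarm))
```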

