Accelerated Training via Incrementally Growing Neural Networks using Variance Transfer and Learning Rate Adaptation

06/22/2023
by Xin Yuan, et al.

We develop an approach to efficiently grow neural networks, in which parameterization and optimization strategies are designed with their effects on training dynamics in mind. Unlike existing growing methods, which follow simple replication heuristics or rely on auxiliary gradient-based local optimization, we craft a parameterization scheme that dynamically stabilizes weight, activation, and gradient scaling as the architecture evolves while preserving the network's inference functionality. To address the optimization difficulty caused by the unbalanced training effort received by subnetworks introduced at different growth phases, we propose a learning rate adaptation mechanism that rebalances the gradient contributions of these separate subcomponents. Experimental results show that our method achieves comparable or better accuracy than training large fixed-size models while using substantially less of the original training computation budget. We demonstrate that these savings translate into real wall-clock training speedups.
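To make the two ingredients in the abstract concrete, below is a minimal PyTorch-style sketch of (a) function-preserving width growth with a variance-aware initialization of the newly added units and (b) a per-growth-phase learning rate schedule that warms freshly grown parameters up toward the base rate. The functions `grow_linear` and `rebalanced_lr`, the 1/fan_in variance rule, and the warmup length are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch: function-preserving width growth + per-phase LR rebalancing.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


def grow_linear(layer: nn.Linear, next_layer: nn.Linear, new_width: int):
    """Widen `layer` to `new_width` outputs without changing the network's function.

    Old weights are copied; new output rows are drawn with variance ~ 1/fan_in so
    activation and gradient scales stay stable; the matching new input columns of
    `next_layer` are zeroed so the composed output is unchanged at the moment of growth.
    """
    old_width = layer.out_features
    wide = nn.Linear(layer.in_features, new_width, bias=layer.bias is not None)
    nxt = nn.Linear(new_width, next_layer.out_features, bias=next_layer.bias is not None)
    with torch.no_grad():
        wide.weight[:old_width].copy_(layer.weight)
        wide.weight[old_width:].normal_(0.0, (1.0 / layer.in_features) ** 0.5)
        if layer.bias is not None:
            wide.bias[:old_width].copy_(layer.bias)
            wide.bias[old_width:].zero_()
        nxt.weight[:, :old_width].copy_(next_layer.weight)
        nxt.weight[:, old_width:].zero_()  # new units contribute nothing at birth
        if next_layer.bias is not None:
            nxt.bias.copy_(next_layer.bias)
    return wide, nxt


def rebalanced_lr(base_lr: float, step: int, birth_step: int, warmup: int = 500) -> float:
    """Learning rate for a parameter group added at `birth_step`.

    Newly grown parameters have received fewer updates than the original ones,
    so they warm up from a small rate toward `base_lr` (an assumed schedule).
    """
    age = max(step - birth_step, 0)
    return base_lr * min(1.0, (age + 1) / warmup)


# Toy usage: one optimizer param group per growth phase, lr updated every step.
layer, nxt = nn.Linear(64, 128), nn.Linear(128, 10)
layer, nxt = grow_linear(layer, nxt, new_width=192)
groups = [{"params": list(layer.parameters()) + list(nxt.parameters()), "lr": 0.1}]
birth_steps = [0]  # bookkeeping: training step at which each group was added
optimizer = torch.optim.SGD(groups, lr=0.1, momentum=0.9)
for step in range(3):
    for group, birth in zip(optimizer.param_groups, birth_steps):
        group["lr"] = rebalanced_lr(0.1, step, birth)
    optimizer.zero_grad()
    loss = nxt(torch.relu(layer(torch.randn(8, 64)))).pow(2).mean()
    loss.backward()
    optimizer.step()
```

In a real growing schedule, each call to `grow_linear` would create a new parameter group with its own `birth_step`, so that the rebalanced learning rate applies only to the subnetwork added in that phase.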

Related research

03/05/2021 · Unintended Effects on Adaptive Learning Rate for Training Neural Network with Output Scale Change
A multiplicative constant scaling factor is often applied to the model o...

11/22/2021 · Towards a Principled Learning Rate Adaptation for Natural Evolution Strategies
Natural Evolution Strategies (NES) is a promising framework for black-bo...

04/23/2023 · The Disharmony Between BN and ReLU Causes Gradient Explosion, but is Offset by the Correlation Between Activations
Deep neural networks based on batch normalization and ReLU-like activati...

06/24/2020 · Accelerated Large Batch Optimization of BERT Pretraining in 54 minutes
BERT has recently attracted a lot of attention in natural language under...

04/20/2023 · Angle based dynamic learning rate for gradient descent
In our work, we propose a novel yet simple approach to obtain an adaptiv...

01/13/2022 · GradMax: Growing Neural Networks using Gradient Information
The architecture and the parameters of neural networks are often optimiz...

08/08/2020 · Why to "grow" and "harvest" deep learning models?
Current expectations from training deep learning models with gradient-ba...
