Reducing the Training Time of Neural Networks by Partitioning

11/10/2015
by Conrado S. Miranda, et al.

This paper presents a new method for pre-training neural networks that can decrease the total training time while maintaining the final performance, which motivates its use for deep neural networks. By partitioning the training task into multiple subtasks, each handled by a smaller sub-model that can be trained independently and in parallel, it is shown that the size of each sub-model shrinks almost quadratically with the number of subtasks, quickly scaling down the models used for pre-training. The sub-models are then merged to provide a pre-trained initial set of weights for the original model. The proposed method is independent of the other aspects of training, such as the network architecture, training method, and objective, making it compatible with a wide range of existing approaches. The speedup without loss of performance is validated experimentally on the MNIST and CIFAR10 data sets, which also show that even performing the subtasks sequentially can decrease the training time. Moreover, we show that larger models may exhibit higher speedups, and we conjecture about the benefits of the method in distributed learning systems.
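The partition-then-merge idea can be illustrated with a small sketch. The following Python/NumPy example assumes, purely as an illustration and not as the paper's exact procedure, that a single-hidden-layer network is split by dividing its hidden and output units evenly among k sub-models; the helper names (init_submodel, merge_submodels) are hypothetical. Because each sub-model keeps only 1/k of the hidden units and 1/k of the outputs, its hidden-to-output weight matrix holds roughly 1/k^2 of the parameters, consistent with the almost-quadratic reduction described above.

```python
# Minimal sketch of partitioning a network into sub-models and merging them
# back into an initial weight set for the full model. Illustrative only.
import numpy as np

def init_submodel(n_in, n_hidden, n_out, rng):
    """One sub-model: a single-hidden-layer net covering a slice of the task."""
    return {
        "W1": rng.standard_normal((n_in, n_hidden)) * 0.01,   # input -> hidden slice
        "W2": rng.standard_normal((n_hidden, n_out)) * 0.01,  # hidden slice -> output slice
    }

def merge_submodels(subs):
    """Merge trained sub-models into initial weights for the full model:
    hidden weights are concatenated column-wise, and hidden-to-output weights
    are placed block-diagonally so each hidden slice feeds its own outputs."""
    W1 = np.concatenate([s["W1"] for s in subs], axis=1)
    n_hidden = sum(s["W2"].shape[0] for s in subs)
    n_out = sum(s["W2"].shape[1] for s in subs)
    W2 = np.zeros((n_hidden, n_out))
    r = c = 0
    for s in subs:
        h, o = s["W2"].shape
        W2[r:r + h, c:c + o] = s["W2"]
        r += h
        c += o
    return {"W1": W1, "W2": W2}

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, k = 784, 512, 10, 2  # k = number of subtasks
subs = [init_submodel(n_in, n_hidden // k, n_out // k, rng) for _ in range(k)]
# ... each sub-model would be trained independently (and in parallel) here ...
full_init = merge_submodels(subs)            # pre-trained initial weights
print(full_init["W1"].shape, full_init["W2"].shape)  # (784, 512) (512, 10)
```

In this sketch the merged hidden-to-output matrix is block-diagonal, so each hidden slice initially drives only its own output slice; subsequent training of the full model from this initialization can then learn the cross-block weights.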
