Distributed learning of CNNs on heterogeneous CPU/GPU architectures

12/07/2017
by Jose Marques, et al.

Convolutional Neural Networks (CNNs) have been shown to be powerful classification tools in tasks ranging from check reading to medical diagnosis, reaching close to human perception and in some cases surpassing it. However, the problems being solved are becoming larger and more complex, which translates into larger CNNs and longer training times that not even the adoption of Graphics Processing Units (GPUs) has been able to keep up with. This problem is partially addressed by using more processing units and the distributed training methods offered by several frameworks dedicated to neural network training. However, these techniques do not take full advantage of the parallelization possibilities offered by CNNs, nor of the cooperative use of heterogeneous devices with different processing capabilities, clock speeds, and memory sizes, among other characteristics. This paper presents a new method for the parallel training of CNNs that can be considered a particular instantiation of model parallelism, in which only the convolutional layer is distributed. In fact, the convolutions processed during training (forward and backward propagation included) account for 60-90% of global processing time. The paper analyzes the influence of network size, bandwidth, batch size, number of devices and their processing capabilities, and other parameters. Results show that this technique reduces training time without affecting classification performance, for both CPUs and GPUs. For the CIFAR-10 dataset, using a CNN with two convolutional layers of 500 and 1500 kernels, respectively, the best speedups reach 3.28× with four CPUs and 2.45× with three GPUs. Modern imaging datasets, larger and more complex than CIFAR-10, will certainly devote an even larger share of processing time to convolutions, and speedups will tend to increase accordingly.
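To illustrate this form of model parallelism, the sketch below splits the kernels (output channels) of a single convolutional layer across a list of devices, computes the partial feature maps on each, and concatenates them channel-wise. This is a minimal PyTorch-style example under assumed names and sizes (the ShardedConv2d class, the even kernel split, and the device list are illustrative), not the authors' implementation, which also covers backward propagation and heterogeneous device speeds.

```python
# Minimal sketch of kernel-wise model parallelism for one convolutional layer.
# Device names, layer sizes, and the split strategy are illustrative assumptions,
# not the implementation described in the paper.
import torch
import torch.nn as nn

class ShardedConv2d(nn.Module):
    """Splits the kernels (output channels) of a conv layer across devices."""

    def __init__(self, in_channels, out_channels, kernel_size, devices):
        super().__init__()
        self.devices = devices
        # Divide the kernels as evenly as possible among the available devices.
        per_dev = [out_channels // len(devices)] * len(devices)
        for i in range(out_channels % len(devices)):
            per_dev[i] += 1
        self.shards = nn.ModuleList(
            nn.Conv2d(in_channels, n, kernel_size).to(dev)
            for n, dev in zip(per_dev, devices)
        )

    def forward(self, x):
        # Each device convolves the same input with its subset of kernels;
        # the partial feature maps are gathered and concatenated channel-wise.
        parts = [shard(x.to(dev)) for shard, dev in zip(self.shards, self.devices)]
        return torch.cat([p.to(self.devices[0]) for p in parts], dim=1)

# Example: 500 kernels in the first convolutional layer, as in the CIFAR-10
# experiment, split across whatever devices are available (here CPU only).
devices = ["cpu"]  # e.g. ["cuda:0", "cuda:1", "cpu"] on a heterogeneous node
layer = ShardedConv2d(3, 500, kernel_size=5, devices=devices)
out = layer(torch.randn(8, 3, 32, 32))
print(out.shape)  # torch.Size([8, 500, 28, 28])
```

Here the kernels are divided evenly; in a heterogeneous setting such as the one studied in the paper, each shard would instead be sized in proportion to the measured throughput of its device.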

