Improving Strong-Scaling of CNN Training by Exploiting Finer-Grained Parallelism

03/15/2019
by Nikoli Dryden, et al.

Scaling CNN training is necessary to keep up with growing datasets and to reduce training time. We also see an emerging need to handle datasets with very large samples, where the memory required for training is substantial. Existing training frameworks use a data-parallel approach that partitions samples within a mini-batch, but limits on scaling the mini-batch size and on memory consumption make this untenable for large samples. We describe and implement new approaches to convolution that parallelize using spatial decomposition or a combination of sample and spatial decomposition. This introduces many performance knobs for a network, so we develop a performance model for CNNs and present a method for using it to automatically determine efficient parallelization strategies. We evaluate our algorithms with microbenchmarks and with image classification on ResNet-50. Our algorithms also allow us to prototype a model for a mesh-tangling dataset, where sample sizes are very large. We show that our parallelization achieves excellent strong and weak scaling and enables training for previously unreachable datasets.
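To make the spatial-decomposition idea concrete, the sketch below splits the rows of a single convolution input across a few hypothetical partitions, gives each partition a halo of kernel-radius rows from its neighbors, convolves each slab, and keeps only that partition's own output rows. This is a minimal single-process illustration, not the paper's distributed implementation: the names conv_spatially_decomposed and n_parts are invented for this example, SciPy's correlate2d stands in for a real convolution kernel, and an actual distributed version would perform the halo exchange with point-to-point communication between devices.

```python
# Illustrative sketch (assumed names, not the paper's code): spatial decomposition
# of a 2D convolution, with a halo exchange of kernel-radius rows between parts.
import numpy as np
from scipy.signal import correlate2d  # cross-correlation, as used in CNNs


def conv_full(x, k):
    # Reference: undecomposed convolution with zero padding ("same" output size).
    return correlate2d(x, k, mode="same", boundary="fill")


def conv_spatially_decomposed(x, k, n_parts=2):
    r = k.shape[0] // 2  # halo width = kernel radius
    row_groups = np.array_split(np.arange(x.shape[0]), n_parts)
    outputs = []
    for idx in row_groups:
        lo, hi = idx[0], idx[-1] + 1
        # "Halo exchange": each partition also needs r rows from each neighbor.
        lo_h, hi_h = max(lo - r, 0), min(hi + r, x.shape[0])
        local = x[lo_h:hi_h]
        out = correlate2d(local, k, mode="same", boundary="fill")
        # Discard halo rows so each partition keeps only its own output rows.
        outputs.append(out[lo - lo_h : (lo - lo_h) + (hi - lo)])
    return np.vstack(outputs)


x = np.random.rand(16, 16)
k = np.random.rand(3, 3)
# The decomposed result matches the single-device convolution exactly.
assert np.allclose(conv_full(x, k), conv_spatially_decomposed(x, k))
```

The same decomposition applies per output channel and per sample, which is why it composes with sample (data) parallelism into the hybrid schemes the abstract describes; choosing how many ways to split along each dimension is exactly the kind of knob the paper's performance model is meant to set automatically.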

Related research

07/25/2020
The Case for Strong Scaling in Deep Learning: Training Large 3D CNNs with Hybrid Parallelism
We present scalable hybrid-parallel algorithms for training large-scale ...

11/20/2017
MegDet: A Large Mini-Batch Object Detector
The improvements in recent CNN-based object detection works, from R-CNN ...

05/01/2018
Accurate, Fast and Scalable Kernel Ridge Regression on Parallel and Distributed Systems
We propose two new methods to address the weak scaling problems of KRR: ...

12/11/2020
A fine-grained parallelization of the immersed boundary method
We present new algorithms for the parallelization of Eulerian-Lagrangian...

10/03/2019
Training Multiscale-CNN for Large Microscopy Image Classification in One Hour
Existing approaches to train neural networks that use large images requi...

12/19/2021
Efficient Strong Scaling Through Burst Parallel Training
As emerging deep neural network (DNN) models continue to grow in size, u...

11/28/2022
Distributed Parallelization of xPU Stencil Computations in Julia
We present a straightforward approach for distributed parallelization of...
