ZNN - A Fast and Scalable Algorithm for Training 3D Convolutional Networks on Multi-Core and Many-Core Shared Memory Machines

10/22/2015
by Aleksandar Zlateski, et al.

Convolutional networks (ConvNets) have become a popular approach to computer vision. It is important to accelerate ConvNet training, which is computationally costly. We propose a novel parallel algorithm based on decomposition into a set of tasks, most of which are convolutions or FFTs. Applying Brent's theorem to the task dependency graph implies that linear speedup with the number of processors is attainable within the PRAM model of parallel computation, for wide network architectures. To attain such performance on real shared-memory machines, our algorithm computes convolutions converging on the same node of the network with temporal locality to reduce cache misses, and sums the convergent convolution outputs via an almost wait-free concurrent method to reduce time spent in critical sections. We implement the algorithm with a publicly available software package called ZNN. Benchmarking with multi-core CPUs shows that ZNN can attain speedup roughly equal to the number of physical cores. We also show that ZNN can attain over 90x speedup on a many-core CPU (Xeon Phi Knights Corner). These speedups are achieved for network architectures with widths that are in common use. The task parallelism of the ZNN algorithm is suited to CPUs, while the SIMD parallelism of previous algorithms is compatible with GPUs. Through examples, we show that ZNN can be either faster or slower than certain GPU implementations depending on specifics of the network architecture, kernel sizes, and density and size of the output patch. ZNN may be less costly to develop and maintain, due to the relative ease of general-purpose CPU programming.
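The speedup claim rests on Brent's theorem: a computation with total work T_1 and critical-path length T_inf runs on p processors in time T_p ≤ T_1/p + T_inf. For wide network architectures the task dependency graph satisfies T_1/T_inf ≫ p, so the T_1/p term dominates the bound and speedup is roughly linear in p.

The "almost wait-free" summation of convolution outputs converging on one node can be illustrated with a pair-merging accumulator: a worker either parks its partial sum behind a single atomic pointer or grabs a parked partner, adds it into its own buffer, and retries. The sketch below is a minimal illustration under assumed semantics; the class name, interface, and mechanism are hypothetical and not ZNN's actual internals.

    // Sketch of an almost wait-free accumulation of convolution outputs
    // converging on one network node. Hypothetical illustration; buffer
    // ownership and reuse are elided for brevity.
    #include <atomic>
    #include <cstddef>
    #include <vector>

    using buffer = std::vector<float>;

    class concurrent_accumulator {
        std::atomic<buffer*>     pending_{nullptr}; // partial sum awaiting a partner
        std::atomic<std::size_t> merges_done_{0};
        const std::size_t        merges_needed_;    // n inputs require n-1 merges

    public:
        explicit concurrent_accumulator(std::size_t n_inputs)
            : merges_needed_(n_inputs - 1) {}

        // Each worker calls add() once with its convolution output. The worker
        // that completes the final merge gets the full sum back; every other
        // worker gets nullptr. No locks are taken; a worker retries only when
        // it loses a race on the atomic pointer.
        buffer* add(buffer* b) {
            if (merges_needed_ == 0)
                return b;                        // single input: nothing to merge
            for (;;) {
                buffer* expected = nullptr;
                if (pending_.compare_exchange_strong(expected, b))
                    return nullptr;              // parked b for another worker

                buffer* other = pending_.exchange(nullptr);
                if (other == nullptr)
                    continue;                    // lost the race; try again

                for (std::size_t i = 0; i < b->size(); ++i)
                    (*b)[i] += (*other)[i];      // merge partner into our buffer

                if (merges_done_.fetch_add(1) + 1 == merges_needed_)
                    return b;                    // b now holds the complete sum
            }
        }
    };

Parking and grabbing are each a single atomic pointer operation, so contention costs only a retry rather than a blocked wait, which matches the abstract's "almost wait-free" characterization of time spent in critical sections.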


