BPPSA: Scaling Back-propagation by Parallel Scan Algorithm

07/23/2019
by   Shang Wang, et al.

In an era when the performance of a single compute device plateaus, software must be designed to scale on massively parallel systems for better runtime performance. However, in the context of training deep learning models, the commonly used back-propagation (BP) algorithm imposes a strong sequential dependency in the process of gradient computation. Under model parallelism, BP has a theoretical step complexity of Θ(n), where n is the number of compute devices into which a model is partitioned, which hinders its scalability in a parallel computing environment. Scan is a primitive operation that performs an in-order aggregation on a sequence of values and returns the partial result at each step. Parallel algorithms (e.g., the Blelloch scan) have been developed to scale the scan operation on massively parallel systems. In this work, to improve the scalability of BP, we reformulate BP into a scan operation, which we then scale with our modified version of the Blelloch scan algorithm, achieving a theoretical step complexity of Θ(log n). We evaluate our approach on training a vanilla Recurrent Neural Network (RNN) with synthetic datasets, and demonstrate up to a 2.75× speedup in overall training time and an 8.8× speedup on the backward pass alone.
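To make the reformulation concrete, below is a minimal NumPy sketch, not the paper's implementation: a generic Blelloch scan, demonstrated first on ordinary prefix sums and then on the transposed-Jacobian products that BP chains together. The layer count, hidden size, and random Jacobians are illustrative assumptions, and the paper's modified Blelloch scan includes further changes not attempted here.

```python
import numpy as np

def blelloch_scan(x, op, identity):
    """Exclusive Blelloch scan: up-sweep (reduce), then down-sweep.

    Written sequentially here, but every iteration of each inner loop
    is independent, so with enough workers each tree level is one
    parallel step -- 2*log2(n) levels in total, which is where the
    Theta(log n) step complexity comes from.
    Assumes len(x) is a power of two and `op` is associative.
    """
    a = list(x)
    n = len(a)
    # Up-sweep: build a reduction tree of partial aggregates.
    d = 1
    while d < n:
        for i in range(0, n, 2 * d):  # independent across i
            a[i + 2 * d - 1] = op(a[i + d - 1], a[i + 2 * d - 1])
        d *= 2
    # Down-sweep: push exclusive prefixes back down the tree.
    a[n - 1] = identity
    d = n // 2
    while d >= 1:
        for i in range(0, n, 2 * d):  # independent across i
            t = a[i + d - 1]                 # left subtree's reduction
            a[i + d - 1] = a[i + 2 * d - 1]  # parent's prefix goes left
            # Parent's prefix (still in a[i + 2*d - 1]) combined with
            # the left reduction; this operand order keeps the scan
            # correct for non-commutative operators.
            a[i + 2 * d - 1] = op(a[i + 2 * d - 1], t)
        d //= 2
    return a

# Familiar special case: exclusive prefix sums.
print(blelloch_scan([3, 1, 7, 0, 4, 1, 6, 3], lambda u, v: u + v, 0))
# [0, 3, 4, 11, 11, 15, 16, 22]

# BP as a scan: for a chain of n layers with Jacobians J_t, BP computes
# g_{t-1} = J_t^T @ g_t one layer at a time (Theta(n) steps). The same
# partial results are the prefixes of a scan over the transposed
# Jacobians, with matrix multiplication (associative, non-commutative)
# as the operator. Sizes and Jacobians below are illustrative.
rng = np.random.default_rng(0)
n_layers, h = 8, 4
J = [rng.standard_normal((h, h)) for _ in range(n_layers)]
g_last = rng.standard_normal(h)  # dL/dh_n at the last layer

g_seq = [g_last]
for Jt in reversed(J):           # the sequential BP recurrence
    g_seq.append(Jt.T @ g_seq[-1])

# op multiplies the newer element on the left, matching BP's
# right-to-left accumulation of transposed Jacobians.
prefixes = blelloch_scan([Jt.T for Jt in reversed(J)],
                         lambda A, B: B @ A, np.eye(h))
g_scan = [P @ g_last for P in prefixes]
assert all(np.allclose(a, b) for a, b in zip(g_seq, g_scan))
```

Because each level of the up-sweep and down-sweep trees can execute concurrently across devices, the scan formulation trades BP's Θ(n) sequential chain for the Θ(log n) step complexity quoted in the abstract, at the cost of extra work per step.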
