Leapfrogging for parallelism in deep neural networks

01/15/2018
by Yatin Saraiya et al.

We present a technique, which we term leapfrogging, for parallelizing backpropagation in deep neural networks. We show that this technique saves a fraction 1 - 1/k of a dominant term in the backpropagation computation, where k is the number of threads (or GPUs).
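The abstract does not spell out the leapfrogging scheme itself, so the sketch below is only a generic illustration of where a 1 - 1/k saving comes from when a dominant backpropagation term is split across k workers; it is not the paper's algorithm. The choice of the per-layer weight-gradient product as the "dominant term", the function name weight_grad_parallel, and k = 4 are all assumptions made for the example.

```python
# Hypothetical sketch, not the paper's leapfrogging method: split the
# weight-gradient product dW = delta.T @ acts over k batch shards. Each
# worker then does only 1/k of that term, i.e. 1 - 1/k of it is removed
# from any single worker's critical path.
import numpy as np
from concurrent.futures import ThreadPoolExecutor


def weight_grad_parallel(delta, acts, k=4):
    """Compute dW = delta.T @ acts by splitting the batch across k threads."""
    delta_shards = np.array_split(delta, k, axis=0)
    act_shards = np.array_split(acts, k, axis=0)
    with ThreadPoolExecutor(max_workers=k) as pool:
        partials = pool.map(lambda pair: pair[0].T @ pair[1],
                            zip(delta_shards, act_shards))
    # Partial gradients simply add, because the batch loss gradient is a
    # sum over examples.
    return sum(partials)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    delta = rng.standard_normal((1024, 256))  # upstream gradients: batch x out
    acts = rng.standard_normal((1024, 512))   # layer inputs: batch x in
    print(np.allclose(weight_grad_parallel(delta, acts, k=4),
                      delta.T @ acts))        # True
```

For instance, with k = 4 workers each shard carries 1/4 of the dominant term, so 1 - 1/4 = 75% of it is taken off each worker's share, which is the shape of the savings figure quoted in the abstract.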
