
Source-to-Source Automatic Differentiation of OpenMP Parallel Loops

by Jan Hückelheim et al.

This paper presents our work toward correct and efficient automatic differentiation of OpenMP parallel worksharing loops in forward and reverse mode. Automatic differentiation is a technique for computing gradients of numerical programs, which are crucial in optimization, uncertainty quantification, and machine learning; the cost of computing these gradients is a common bottleneck in practice. For applications parallelized for multicore CPUs or GPUs using OpenMP, one therefore also wants to compute the gradients in parallel. We propose a framework for reasoning about the correctness of generated derivative code, and use it to justify our OpenMP extension to the differentiation model. We implement this model in the automatic differentiation tool Tapenade and present test cases differentiated with the extended procedure. The generated derivative programs outperform their sequential counterparts in both forward and reverse mode, although our reverse-mode derivatives often scale less well than the input programs.
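To illustrate the problem the paper addresses, here is a minimal hand-written sketch in C (the function names f, f_d, g, and g_b are hypothetical for illustration, not actual Tapenade output). In forward mode, each iteration of the tangent loop only touches the derivatives of the data that iteration already owns, so the primal's worksharing pragma can typically be kept as-is:

    #include <omp.h>

    /* Primal: y[i] = x[i]^2, an OpenMP worksharing loop. */
    void f(int n, const double *x, double *y) {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            y[i] = x[i] * x[i];
    }

    /* Forward-mode tangent: propagate the input direction xd alongside
     * the primal values. Iterations touch disjoint elements of y and yd,
     * so the tangent loop inherits the primal's parallelism unchanged. */
    void f_d(int n, const double *x, const double *xd,
             double *y, double *yd) {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i) {
            yd[i] = 2.0 * x[i] * xd[i];   /* d/dx (x*x) = 2x */
            y[i]  = x[i] * x[i];          /* primal value, as before */
        }
    }

Reverse mode is harder: a concurrent read of shared data in the primal becomes a concurrent increment of that data's adjoint, which is a race unless it is protected. In the gather sketch below, distinct iterations may read the same a[idx[i]], so the adjoint update needs synchronization, here an atomic; such synchronization is one plausible reason reverse-mode code scales less well than the input program:

    /* Primal gather: concurrent reads of a[] are race-free. */
    void g(int n, const int *idx, const double *a,
           const double *x, double *y) {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            y[i] = a[idx[i]] * x[i];
    }

    /* Reverse-mode adjoint: each primal read of a[idx[i]] becomes an
     * increment of its adjoint ab[idx[i]], and idx may repeat across
     * threads, so that update must be atomic. */
    void g_b(int n, const int *idx, const double *a, const double *x,
             const double *yb, double *ab, double *xb) {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i) {
            xb[i] += a[idx[i]] * yb[i];   /* private to iteration i: safe */
            #pragma omp atomic
            ab[idx[i]] += x[i] * yb[i];   /* shared: protect the increment */
        }
    }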


