Parallelized Computation and Backpropagation Under Angle-Parametrized Orthogonal Matrices

05/30/2021
by Firas Hamze et al.

We present a methodology for parallel acceleration of learning in the presence of matrix orthogonality and unitarity constraints of interest in several branches of machine learning. We show how an apparently sequential elementary rotation parametrization can be restructured into blocks of commutative operations using a well-known tool for coloring the edges of complete graphs, in turn widely applied to schedule round-robin (all-against-all) sports tournaments. The resulting decomposition admits an algorithm to compute a fully parametrized orthogonal matrix from its rotation parameters in O(n) sequential steps and one to compute the gradient of a training loss with respect to its parameters in O(n log n) steps. We discuss parametric restrictions of interest to generative modeling and present promising performance results with a prototype GPU implementation.
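The scheduling idea in the abstract can be illustrated with a small sketch. The classic "circle method" for round-robin tournaments partitions the n(n-1)/2 pairs of indices into n-1 rounds of n/2 disjoint pairs; Givens rotations acting on disjoint index pairs commute, so each round is a block that could be applied in parallel. The code below is a minimal illustration of that structure, not the paper's implementation; the function names and the dense NumPy application of rotations are assumptions for demonstration.

```python
import numpy as np

def round_robin_rounds(n):
    """Circle-method schedule: n-1 rounds, each containing n/2 disjoint
    index pairs, covering all n(n-1)/2 pairs exactly once (n assumed even)."""
    players = list(range(n))
    rounds = []
    for _ in range(n - 1):
        rounds.append([tuple(sorted((players[i], players[n - 1 - i])))
                       for i in range(n // 2)])
        # Fix players[0]; rotate the remaining indices one position.
        players = [players[0]] + [players[-1]] + players[1:-1]
    return rounds

def orthogonal_from_angles(n, angles):
    """Build an n x n orthogonal matrix as a product of Givens rotations.
    Rotations within a round touch disjoint rows, so they commute and could
    run in parallel; here each round is applied sequentially for clarity.
    `angles` maps each index pair (i, j), i < j, to its rotation angle."""
    Q = np.eye(n)
    for rnd in round_robin_rounds(n):
        for (i, j) in rnd:
            c, s = np.cos(angles[(i, j)]), np.sin(angles[(i, j)])
            row_i, row_j = Q[i].copy(), Q[j].copy()
            Q[i] = c * row_i - s * row_j
            Q[j] = s * row_i + c * row_j
    return Q
```

With n even, the outer loop over rounds is the O(n) sequential depth mentioned in the abstract: each of the n-1 rounds is a block of mutually commuting rotations that a GPU could apply simultaneously.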

