On Simplifying Dependent Polyhedral Reductions
Reductions combine collections of input values with an associative (and usually also commutative) operator to produce either a single output or a collection of outputs. They are ubiquitous in computing, especially in big data and deep learning. When the same input value contributes to multiple output values, there is a tremendous opportunity for reducing (pun intended) the computational effort. This is called simplification. Polyhedral reductions are reductions where the input and output data collections are (dense) multidimensional arrays (i.e., tensors), accessed with linear/affine functions of the indices (e.g., tensor contractions). Gautam and Rajopadhye <cit.> showed how polyhedral reductions could be simplified automatically (through compile-time analysis) and optimally (the resulting program has minimum asymptotic complexity). Yang, Atkinson, and Carbin <cit.> extended this to the case when (some) input values depend on (some) outputs. Specifically, they showed how the optimal simplification problem could be formulated as a bilinear programming problem, and, for the case when the reduction operator admits an inverse, they gave a heuristic solution that retained optimality. In this note, we show that simplification of dependent reductions can be formulated as a simple extension of the Gautam-Rajopadhye backtracking search algorithm.
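For intuition, here is a minimal illustrative sketch in Python (our own example, not code or notation from the paper; it assumes a nonempty input list). The first pair of functions shows ordinary simplification: every X[0..i-1] contributes to both Y[i-1] and Y[i], so reusing Y[i-1] cuts the work from O(N^2) to O(N). The second pair sketches a dependent reduction, where each reduction body reads earlier outputs; the same reuse idea still applies, here via a running maximum (note that max, unlike +, admits no inverse).

def prefix_sums_naive(x):
    # Y[i] = X[0] + ... + X[i], each computed from scratch: O(N^2) work.
    return [sum(x[: i + 1]) for i in range(len(x))]

def prefix_sums_simplified(x):
    # Reuse the previous output: Y[i] = Y[i-1] + X[i], O(N) work.
    y, acc = [], 0
    for v in x:
        acc += v
        y.append(acc)
    return y

def dependent_naive(x):
    # Dependent reduction: Y[0] = X[0], Y[i] = X[i] + max_{j<i} Y[j].
    # The reduced "inputs" are earlier outputs Y[j]: O(N^2) work.
    y = [x[0]]
    for i in range(1, len(x)):
        y.append(x[i] + max(y[:i]))
    return y

def dependent_simplified(x):
    # Carry the running maximum of previous outputs instead: O(N) work.
    y, running_max = [x[0]], x[0]
    for v in x[1:]:
        y.append(v + running_max)
        running_max = max(running_max, y[-1])
    return y

x = [3, 1, 4, 1, 5]
assert prefix_sums_naive(x) == prefix_sums_simplified(x) == [3, 4, 8, 9, 14]
assert dependent_naive(x) == dependent_simplified(x) == [3, 4, 8, 9, 14]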