AutoHOOT: Automatic High-Order Optimization for Tensors

05/10/2020
by Linjian Ma et al.

High-order optimization methods, including Newton's method and its variants as well as alternating minimization methods, dominate the optimization algorithms for tensor decompositions and tensor networks. These tensor methods are used for data analysis and simulation of quantum systems. In this work, we introduce AutoHOOT, the first automatic differentiation (AD) framework targeting high-order optimization for tensor computations. AutoHOOT takes input tensor computation expressions and generates optimized derivative expressions. In particular, AutoHOOT contains a new explicit Jacobian/Hessian expression generation kernel whose outputs maintain the input tensors' granularity and are easy to optimize. The expressions are then optimized by both traditional compiler optimization techniques and specific tensor algebra transformations. Experimental results show that AutoHOOT achieves competitive CPU and GPU performance for both tensor decomposition and tensor network applications compared to existing AD software and other tensor computation libraries with manually written kernels. The tensor methods generated by AutoHOOT also parallelize well, and we demonstrate good scalability on a distributed-memory supercomputer.
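To make the abstract's idea concrete, here is a minimal sketch, assuming nothing about AutoHOOT's actual API: it hand-writes the kind of explicit derivative expression the paper says AutoHOOT generates automatically, namely the gradient of a CP decomposition loss with respect to one factor matrix, stated as a single contraction that keeps the input tensors' granularity. All names (`loss`, `grad_A`) and dimensions are illustrative, not taken from the paper or the library.

```python
# Hypothetical sketch (not AutoHOOT's API): the explicit derivative
# expression for a CP decomposition loss, written as one einsum.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
T = rng.standard_normal((I, J, K))  # target tensor

def loss(A):
    # CP reconstruction X_ijk = sum_r A_ir B_jr C_kr
    X = np.einsum('ir,jr,kr->ijk', A, B, C)
    return 0.5 * np.sum((T - X) ** 2)

def grad_A(A):
    # Analytic gradient dL/dA_ir = -sum_jk R_ijk B_jr C_kr,
    # expressed as a single contraction over the input tensors
    resid = T - np.einsum('ir,jr,kr->ijk', A, B, C)
    return -np.einsum('ijk,jr,kr->ir', resid, B, C)

# Finite-difference check of the analytic derivative expression
eps = 1e-6
E = np.zeros_like(A)
E[0, 0] = eps
fd = (loss(A + E) - loss(A - E)) / (2 * eps)
assert abs(fd - grad_A(A)[0, 0]) < 1e-5
```

In AutoHOOT, as the abstract describes, an expression like `grad_A` would be produced by the framework's Jacobian/Hessian generation kernel and then optimized by compiler techniques and tensor algebra transformations, rather than derived and written by hand as above.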

Related research

06/22/2022
tntorch: Tensor Network Learning with PyTorch
We present tntorch, a tensor learning framework that supports multiple d...

02/28/2018
Automatic Generation of Sparse Tensor Kernels with Workspaces
Recent advances in compiler theory describe how to compile sparse tensor...

02/14/2020
Tensor train construction from tensor actions, with application to compression of large high order derivative tensors
We present a method for converting tensors into tensor train format base...

09/13/2023
Autotuning Apache TVM-based Scientific Applications Using Bayesian Optimization
Apache TVM (Tensor Virtual Machine), an open source machine learning com...

10/19/2021
The CoRa Tensor Compiler: Compilation for Ragged Tensors with Minimal Padding
There is often variation in the shape and size of input data used for de...

11/03/2017
Automatic Differentiation for Tensor Algebras
Kjolstad et al. proposed a tensor algebra compiler. It takes expression...

08/02/2022
OLLIE: Derivation-based Tensor Program Optimizer
Boosting the runtime performance of deep neural networks (DNNs) is criti...
