Named Tensor Notation

02/25/2021
by David Chiang et al.

We propose a notation for tensors with named axes, which relieves the author, reader, and future implementers of the burden of keeping track of the order of axes and the purpose of each. It also makes it easy to extend operations on low-order tensors to higher-order ones (e.g., to extend an operation on images to minibatches of images, or to extend the attention mechanism to multiple attention heads). After a brief overview of our notation, we illustrate it through several examples from modern machine learning, from building blocks like attention and convolution to full models like Transformers and LeNet. Finally, we give formal definitions and describe some extensions. Our proposals build on ideas from many previous papers and software libraries. We hope that this document will encourage more authors to use named tensors, resulting in clearer papers and less bug-prone implementations. The source code for this document can be found at https://github.com/namedtensor/notation/. We invite anyone to make comments on this proposal by submitting issues or pull requests on this repository.
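To make the idea concrete, here is a minimal sketch (not the paper's formal notation) of what naming axes buys you: a tensor is carried around with a tuple of axis names, and a contraction is specified by name rather than by position, so reordering axes cannot silently change the result. The helper names (`named`, `contract`) and the axis names `seq`, `kseq`, `d` are illustrative assumptions, not part of the paper.

```python
import numpy as np

def named(array, *names):
    # Illustrative helper: pair an array with one name per axis.
    array = np.asarray(array)
    assert array.ndim == len(names), "one name per axis"
    return {"data": array, "names": names}

def contract(x, y, axis):
    # Sum-product over the shared named axis; all other axes survive,
    # identified by name rather than by position.
    ix = x["names"].index(axis)
    iy = y["names"].index(axis)
    data = np.tensordot(x["data"], y["data"], axes=(ix, iy))
    names = tuple(n for n in x["names"] if n != axis) + \
            tuple(n for n in y["names"] if n != axis)
    return {"data": data, "names": names}

# Attention-style scores: queries and keys share a feature axis "d".
Q = named(np.random.rand(4, 8), "seq", "d")
K = named(np.random.rand(6, 8), "kseq", "d")
scores = contract(Q, K, "d")
print(scores["names"], scores["data"].shape)  # ('seq', 'kseq') (4, 6)
```

Because the contraction is keyed on the name `"d"`, the same call works unchanged if `Q` or `K` gains an extra axis (e.g., a `heads` or `batch` axis), which is the extension-to-higher-order behavior the abstract describes.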

Related research

- Tensor-Tensor Product Toolbox (06/17/2018): Tensors are higher-order extensions of matrices. In recent work [Kilmer ...
- TorchAudio: Building Blocks for Audio and Speech Processing (10/28/2021): This document describes version 0.10 of torchaudio: building blocks for ...
- HOTTBOX: Higher Order Tensor ToolBOX (11/30/2021): HOTTBOX is a Python library for exploratory analysis and visualisation o...
- Modeling of languages for tensor manipulation (01/26/2018): Numerical applications and, more recently, machine learning applications...
- TensorTrace: an application to contract tensor networks (11/06/2019): Tensor network methods are a conceptually elegant framework for encoding...
- Symbolically integrating tensor networks over various random tensors – the second version of Python RTNI (09/03/2023): We are upgrading the Python-version of RTNI, which symbolically integrat...
- Molding CNNs for text: non-linear, non-consecutive convolutions (08/17/2015): The success of deep learning often derives from well-chosen operational ...
