
What if Neural Networks had SVDs?
Various Neural Networks employ time-consuming matrix operations like matrix inversion. Many such matrix operations are faster to compute given the Singular Value Decomposition (SVD). Previous work allows using the SVD in Neural Networks without computing it. In theory, these techniques can speed up matrix operations; in practice, however, they are not fast enough. We present an algorithm that is fast enough to speed up several matrix operations. The algorithm increases the degree of parallelism of an underlying matrix multiplication H · X, where H is an orthogonal matrix represented by a product of Householder matrices. Code is available at www.github.com/AlexanderMath/fasth.
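For context, the operation the abstract refers to is applying a product of Householder reflections H = H_1 H_2 ... H_k to a matrix X. The following NumPy sketch shows the standard sequential baseline (each reflection is a rank-1 update applied one after another), which is the computation the paper's parallel algorithm accelerates; the function name and setup here are illustrative, not the paper's FastH implementation:

```python
import numpy as np

def householder_product_matmul(vs, X):
    """Apply H = H_1 H_2 ... H_k to X, where H_i = I - 2 v_i v_i^T / (v_i^T v_i).

    Sequential baseline: k dependent rank-1 updates, each costing O(d*n),
    for O(k*d*n) total work but little parallelism across the k steps.
    """
    Y = X.copy()
    for v in reversed(vs):  # the rightmost factor H_k acts on X first
        Y -= 2.0 * np.outer(v, v @ Y) / (v @ v)
    return Y

# Example: random Householder vectors define an orthogonal H.
rng = np.random.default_rng(0)
d, k, n = 8, 4, 3
vs = [rng.standard_normal(d) for _ in range(k)]
X = rng.standard_normal((d, n))
Y = householder_product_matmul(vs, X)

# Since H is orthogonal, multiplication preserves column norms.
assert np.allclose(np.linalg.norm(Y, axis=0), np.linalg.norm(X, axis=0))
```

Because each reflection depends on the previous one's output, the k updates cannot run concurrently in this form; reducing that sequential dependency is what the parallelism claim in the abstract is about.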