Impossibility Results for Grammar-Compressed Linear Algebra

10/27/2020
by Amir Abboud, et al.

To handle vast amounts of data, it is natural and popular to compress vectors and matrices. When we compress a vector from size N down to size n ≪ N, it certainly becomes easier to store and transmit, but does it also become easier to process? In this paper we consider lossless compression schemes, and ask whether we can run our computations on the compressed data as efficiently as if the original data were that small. That is, if an operation has time complexity T(input size), can we perform it on the compressed representation in time T(n) rather than T(N)? We consider the most basic linear algebra operations: inner product, matrix-vector multiplication, and matrix multiplication. In particular, given two compressed vectors, can we compute their inner product in time O(n)? Or must we decompress first and then multiply, spending Ω(N) time? The answer depends on the compression scheme. While for simple ones such as Run-Length Encoding (RLE) the inner product can be computed in O(n) time, we prove that this is impossible for compressions from a richer class: essentially n^2 or even larger runtimes are needed in the worst case (under complexity assumptions). This is the class of grammar compressions, which contains most popular methods such as the Lempel-Ziv family. These schemes compress better than the simple RLE, but alas, we prove that performing computations on them is much harder.
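To make the easy case concrete, here is a minimal Python sketch (not taken from the paper) of the standard merge-scan that computes the inner product of two run-length-encoded vectors in time linear in the number of runs; the representation as (value, run_length) pairs and the function name are illustrative assumptions.

```python
# A sketch of the O(n) inner product on run-length-encoded vectors.
# A vector of length N is stored as a list of (value, run_length) pairs whose
# run lengths sum to N; both inputs are assumed to encode vectors of the same
# uncompressed length.

def rle_inner_product(a, b):
    i = j = 0            # index of the current run in a and in b
    rem_a = rem_b = 0    # elements still unconsumed in the current runs
    val_a = val_b = 0
    total = 0
    while i < len(a) or j < len(b):
        if rem_a == 0:                    # advance to the next run of a
            val_a, rem_a = a[i]
            i += 1
        if rem_b == 0:                    # advance to the next run of b
            val_b, rem_b = b[j]
            j += 1
        overlap = min(rem_a, rem_b)       # stretch on which both runs are constant
        total += val_a * val_b * overlap  # its contribution to the inner product
        rem_a -= overlap
        rem_b -= overlap
    return total

# Example: a = [3,3,3,3,1,1], b = [2,2,2,5,5,5]  ->  3*2*3 + 3*5*1 + 1*5*2 = 43
print(rle_inner_product([(3, 4), (1, 2)], [(2, 3), (5, 3)]))
```

Each loop iteration exhausts at least one run, so the scan performs at most len(a) + len(b) iterations. The paper's negative results say that no analogously fast procedure exists for grammar-compressed inputs (under standard complexity assumptions).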
