Sketching Transformed Matrices with Applications to Natural Language Processing

02/23/2020
by Yingyu Liang, et al.

Suppose we are given a large matrix A = (a_{i,j}) that cannot be stored in memory but resides on disk or is presented in a data stream. However, we need to compute a matrix decomposition of the entrywise-transformed matrix f(A) := (f(a_{i,j})) for some function f. Is it possible to do so in a space-efficient way? Many machine learning applications indeed need to deal with such large transformed matrices; for example, word embedding methods in NLP work with the pointwise mutual information (PMI) matrix, whose entrywise transformation makes it difficult to apply known linear-algebraic tools. Existing approaches to this problem either store the whole matrix and perform the entrywise transformation afterwards, which is space-consuming or infeasible, or redesign the learning method, which is application-specific and requires substantial remodeling. In this paper, we first propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix. It works for a general family of transformations with provably small error bounds and thus can be used as a primitive in downstream learning tasks. We then apply this primitive to a concrete application: low-rank approximation. We show that our approach achieves small error and is efficient in both space and time. We complement our theoretical results with experiments on synthetic and real data.
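As a concrete illustration of the primitive, the following is a minimal sketch in Python/NumPy under a simplifying assumption: A arrives as a stream of rows. The identity S f(A) = sum_i S[:, i] f(a_i)^T then lets the product be accumulated one row at a time, using only O(kd) working space beyond the current row. The function name, the Gaussian sketch S, and the stand-in transformation f(x) = log(1 + x) are illustrative choices, not the paper's algorithm, which covers a general family of transformations and comes with provable error bounds.

import numpy as np

def sketch_transformed_product(row_stream, S, f):
    # Accumulate S @ f(A) from rows a_1, ..., a_n of A without ever
    # materializing f(A).  S is a (k, n) sketch matrix with k << n,
    # f is an entrywise transformation (e.g. a PMI-style log).
    k = S.shape[0]
    result = None
    for i, row in enumerate(row_stream):
        fa = f(np.asarray(row, dtype=float))      # f applied to one row only
        if result is None:
            result = np.zeros((k, fa.shape[0]))
        result += np.outer(S[:, i], fa)           # add S[:, i] * f(a_i)^T
    return result

# Toy usage with a Gaussian sketch and f(x) = log(1 + x).
rng = np.random.default_rng(0)
n, d, k = 1000, 200, 20
A = rng.random((n, d))
S = rng.normal(size=(k, n)) / np.sqrt(k)

SfA = sketch_transformed_product(iter(A), S, np.log1p)
assert np.allclose(SfA, S @ np.log1p(A))          # matches the exact product

# Downstream use for low-rank approximation: the right singular vectors of
# the small (k, d) sketch give an approximate basis for the row space of
# f(A); the approximation error depends on the sketch (see the paper).
_, _, Vt = np.linalg.svd(SfA, full_matrices=False)
rank_r_basis = Vt[:5]                             # r = 5 basis vectors

In a genuine out-of-core setting the rows would be read from disk or a stream rather than from an in-memory array; only the (k, d) accumulator and one row need to fit in memory at a time.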

Related research

- Single Pass Entrywise-Transformed Low Rank Approximation (07/16/2021)
- Distilled Embedding: Non-Linear Embedding Factorization Using Knowledge Distillation (10/02/2019)
- Sparse Factorization of Large Square Matrices (09/16/2021)
- Faster Linear Algebra for Distance Matrices (10/27/2022)
- Simultaneous Decorrelation of Matrix Time Series (03/17/2021)
- Low-Rank Features Based Double Transformation Matrices Learning for Image Classification (01/28/2022)
- Refinement of Low Rank Approximation of a Matrix at Sub-linear Cost (06/10/2019)
